Suppose we are making a base-10 logarithm table and calculating the entry for 1.41. Seems easy: just calculate log10(1.41) = 0.149219, right?
However, the log table only knows that the user wants the log of some input between 1.405 and 1.415, the range of inputs that round to 1.41. To minimize the worst-case error (minimax), it should actually give the mean of the logs of those two endpoints, so the entry for 1.41 should be 0.149216.
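Here is a minimal sketch in Python (my own illustration, not from the post; the variable names are mine) comparing the "easy" entry with the minimax entry:

```python
# Compare the "easy" table entry with the minimax entry for the row labeled 1.41.
import math

lo, hi = 1.405, 1.415  # the inputs that round to 1.41

easy = math.log10(1.41)                          # log of the rounded input
minimax = (math.log10(lo) + math.log10(hi)) / 2  # mean of the endpoint logs

print(f"easy:    {easy:.6f}")     # 0.149219
print(f"minimax: {minimax:.6f}")  # 0.149216
```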
This is a tiny bit smaller than the "easy" value calculated above because the logarithm function is concave down. The difference only matters in a 6-place log table, which is absurdly high precision for an input given to just 3 significant digits.
It's a bit surprising that we only need the mean of the two endpoint logs and never have to calculate an integral or expected value. The reason is that the minimax criterion cares only about the worst case: since the logarithm is monotone, the largest error occurs at one of the two endpoints, and the midpoint of the two endpoint values balances those errors, so how the inputs are distributed inside the interval never enters. An integral would only appear if we wanted to minimize an average error instead.
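For contrast, here is a sketch (again my own illustration) of what the expected-value answer would look like. It assumes things the post does not specify: that the true input is uniformly distributed over the interval and that we want to minimize mean squared error, whose optimum is the conditional mean E[log10(X)], which does require an integral.

```python
# Contrast the minimax entry with an expected-value entry.
# Assumptions not in the post: the true input X is uniform on [1.405, 1.415]
# and we minimize mean squared error, whose optimum is E[log10(X)].
import math

lo, hi = 1.405, 1.415

# Minimax entry: just the mean of the two endpoint logs, no integral needed.
minimax = (math.log10(lo) + math.log10(hi)) / 2


def F(x):
    """Antiderivative of log10(x): (x*ln(x) - x) / ln(10)."""
    return (x * math.log(x) - x) / math.log(10)


# Expected-value entry: (1 / (hi - lo)) * integral of log10(x) dx over [lo, hi].
expected = (F(hi) - F(lo)) / (hi - lo)

print(f"minimax entry:        {minimax:.7f}")   # ~0.1492164
print(f"expected-value entry: {expected:.7f}")  # ~0.1492182
```

The two answers differ only in the seventh decimal place here, but they are genuinely different quantities: one balances the two extreme errors, the other averages over the interval.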