Thursday, July 07, 2022

[hxhsisdf] maximum error of ten common logarithms

we consider using the following small base-10 log table for multiplication.  the numbers between the rows are rounding boundaries; a trailing + or - marks whether the exact boundary lies slightly above or below the value shown.  (improving on previous.)  a python sketch of the lookup follows the table.

0 = log1
1.12+
0.1 = log1.25
1.4+
0.2 = log1.6
1.8-
0.3 = log2
2.25-
0.4 = log2.5
2.8+
0.5 = log3.2
3.6-
0.6 = log4
4.5+
0.7 = log5
5.6+
0.8 = log6.3
7+
0.9 = log8
9-
1 = log10
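
here is a minimal python sketch of using the table.  the names are mine, and it assumes values exactly on a boundary round down, ignoring the + and - nudges:

    import bisect

    # table entries and their table logs (0.0, 0.1, ..., 1.0)
    ENTRIES = [1, 1.25, 1.6, 2, 2.5, 3.2, 4, 5, 6.3, 8, 10]
    LOGS = [i / 10 for i in range(11)]

    # rounding boundaries between consecutive entries, as listed above
    # (assumption: exact boundary values round down; +/- nudges ignored)
    BOUNDS = [1.12, 1.4, 1.8, 2.25, 2.8, 3.6, 4.5, 5.6, 7, 9]

    def table_log(x):
        """table log of a mantissa in [1, 10)."""
        return LOGS[bisect.bisect_left(BOUNDS, x)]

    def table_antilog(ell):
        """read a log in [0, 1] back off the table."""
        return ENTRIES[round(ell * 10)]

    def approx_multiply(a, b):
        """multiply two mantissas in [1, 10) via the table."""
        ell = table_log(a) + table_log(b)
        if ell >= 1:                  # product carried past 10
            return 10 * table_antilog(ell - 1)
        return table_antilog(ell)

    print(approx_multiply(3.59, 3.59))   # 10, versus the true 12.8881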

the worst-case error happens at the boundaries.  exhaustive enumeration shows that one of the worst-case products occurs when computing (3.6 - epsilon) * (3.6 - epsilon):

each factor lies just below the boundary, so it rounds down to 3.2, whose table log is 0.5:

(3.6 - epsilon)^2 -> (10^(log 3.2))^2 = (10^0.5)^2 = 10^1 = 10

the correct answer is 3.6^2 = 12.96, minus O(epsilon).  the relative error is 12.96/10 = 1.296, i.e., 29.6%.

the same relative error also occurs for products among 1.8 - epsilon, 4.5 - epsilon, and 9 - epsilon.
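
a sketch of the exhaustive check, reusing the code above: probe each boundary from just below and just above, plus the table entries themselves, and track the worst relative error.

    EPS = 1e-9
    probes = ([e for e in ENTRIES if e < 10]
              + [b - EPS for b in BOUNDS]
              + [b + EPS for b in BOUNDS])

    worst_err, worst_pair = 1.0, None
    for a in probes:
        for b in probes:
            got = approx_multiply(a, b)
            err = max(a * b / got, got / (a * b))
            if err > worst_err:
                worst_err, worst_pair = err, (a, b)

    print(worst_err, worst_pair)   # worst error ~1.296, the 29.6% above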

the goal was to create a log table whose entries and boundaries are simple, and whose maximum error stays below 30%.

future work: "simple" can be quantified.  3 bits of entropy for each digit left of the rightmost.  if the rightmost digit is 5, then 1 more bit.  if 2468, then 2 more bits.  if 1379, then 3 more bits.  then, somehow, optimize allocation of bits to minimize error.

if the logarithms were exactly spaced 0.1 apart, then worst-case rounding would pick up a factor of 10^0.05 error per term, so a worst-case product of two terms would have an error of 10^0.1 = 1.259, i.e., 25.9%.  we see more than that because the logarithms and boundaries are not evenly spaced.

to keep errors under 10%, the logarithms would have to be spaced less than log(1.1) = 0.0414 apart, requiring a log table with at least 1/0.0414, rounded up to 25, rows (50 including boundaries).  (previously, 20.)
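
a quick numeric check of the arithmetic in the last two paragraphs:

    from math import ceil, log10

    per_term = 10 ** 0.05      # worst one-term error at 0.1 spacing
    print(per_term ** 2)       # 1.2589..., the 25.9% two-term error

    spacing = log10(1.1)       # largest spacing with two-term error < 10%
    print(spacing)             # 0.0413...
    print(ceil(1 / spacing))   # 25 rows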

future work: if you already know the 10x10 multiplication table, how much can the additional memorization of logs be decreased while keeping errors below some threshold?
