Tuesday, March 06, 2012

[jubtbdxj] Scientific calculator computer precision

Suppose you wished to simulate a scientific calculator on a modern computer, as many applications do, but, unlike most of them, to always compute results with arbitrary-precision arithmetic.  Usually the many extra digits would never be seen; they exist only in the internal representation.  We calculate them "because we can".
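
As a concrete illustration (a minimal sketch, not any particular application's design), Python's decimal module makes the split between internal precision and displayed digits easy to see; the 100-digit context and 12-digit display width below are arbitrary choices:

    from decimal import Decimal, getcontext

    # Compute internally at high precision; show the user only a few digits.
    getcontext().prec = 100            # internal precision, in significant digits
    DISPLAY_DIGITS = 12                # what the "calculator screen" shows

    x = Decimal(2).sqrt()              # carries 100 digits internally
    print(format(x, f".{DISPLAY_DIGITS}g"))   # displays 1.41421356237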

How many digits can a computer calculate if it wants to guarantee a result within 1/30 of a second, fast enough to seem instantaneous, just as a double-precision calculation does?  Are there any standard functions that are difficult to calculate to arbitrary precision?  (I'm guessing arctan, judging by how slowly the series for arctan 1 converges.)  What about not-so-standard functions?  (I like log Gamma, erf, erfc, erfcx, and their inverses.)  Assume you can't use pre-calculated polynomial approximations (e.g., Chebyshev).
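
For a sense of the arctan issue: the series for arctan 1 (the Leibniz series for pi/4) needs on the order of 10^d terms for d correct digits, so it is hopeless beyond a handful of digits, whereas a Machin-like identity such as pi/4 = 4 arctan(1/5) - arctan(1/239) converges geometrically.  A rough sketch with plain integer fixed-point arithmetic (the function names and the ten guard digits are my own choices, not a recommended implementation):

    def arctan_inv(x, digits):
        """arctan(1/x) as a fixed-point integer scaled by 10**(digits + 10)."""
        scale = 10 ** (digits + 10)    # ten guard digits absorb truncation error
        term = scale // x
        total, n, sign = term, 1, 1
        while term:
            term //= x * x             # next odd power of 1/x
            n += 2
            sign = -sign
            total += sign * (term // n)
        return total

    def pi_machin(digits):
        """pi via Machin's identity, as the integer 314159... with `digits` decimals."""
        scaled = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
        return scaled // 10 ** 10      # drop the guard digits

    print(pi_machin(50))   # 3 followed by (roughly) the first 50 decimals of pi

Even this naive approach produces 50 digits essentially instantly; real arbitrary-precision libraries go further still.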

Even better would be a spigot-style algorithm that displays the most significant digits first, then keeps computing more and more internal digits in the background while waiting for further user input.
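
Spigot algorithms are only known for particular constants (pi, e, a few others), but pi gives the flavour of exactly this behaviour: Gibbons' unbounded spigot produces decimal digits one at a time, most significant first, for as long as you keep pulling on it.  A sketch as a Python generator (a direct transcription of the published algorithm, not tuned for speed):

    def pi_digits():
        """Yield the decimal digits of pi one at a time, most significant first."""
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n                     # the next digit is now certain
                q, r, n = (10 * q,
                           10 * (r - n * t),
                           10 * (3 * q + r) // t - 10 * n)
            else:                           # consume another term of the series
                q, r, t, k, n, l = (q * k,
                                    (2 * q + r) * l,
                                    t * l,
                                    k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l),
                                    l + 2)

    digits = pi_digits()
    print([next(digits) for _ in range(10)])   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

A calculator built along these lines could show the first dozen digits immediately and quietly keep the generator running between keystrokes.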
