Wednesday, April 04, 2018

[njogcgwc] In praise of calculating pi

Calculating pi to a record number of digits stress-tests CPU, memory, and disk: there must not be even a 1-bit error in what is typically months of calculation.

Better -- a less complicated calculation that accomplishes the same task -- would be calculating sqrt(2)/2 in binary.  It can more easily be checked afterwards: just square it, or compute 0.5/x.  But does calculating pi (using a Ramanujan-style formula with binary splitting, as recent PC-based (not supercomputer) records have) test anything that sqrt(2) would not?  Sqrt(2) would presumably be calculated by Newton's method, an algorithm more similar to Brent-Salamin, which is not used in recent record-setting pi calculations.
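To make this concrete, here is a rough Python sketch of the Newton's-method route, using plain integers as binary fixed point.  The precision N and all names are made up for illustration; a serious attempt would manage working precision and memory far more carefully.

    # Sketch: floor(sqrt(2) * 2**N) by Newton's method on Python integers
    # (fixed point with N fractional bits), then a right shift for sqrt(2)/2.

    N = 1000  # fractional bits; illustrative, nowhere near record scale

    def fixed_sqrt2(n):
        """floor(sqrt(2) * 2**n) via x <- (x + 2/x) / 2 in fixed point."""
        a = 2 << (2 * n)        # the radicand 2, scaled by 2**(2n)
        x = 3 << (n - 1)        # initial guess 1.5 * 2**n, above the true root
        while True:
            nxt = (x + a // x) >> 1
            if nxt >= x:        # the sequence decreases until it hits the floor
                return x
            x = nxt

    root2 = fixed_sqrt2(N)
    half = root2 >> 1           # sqrt(2)/2: dividing by 2 is a 1-bit right shift

    # The cheap after-the-fact check: squaring must give back ~2 * 2**(2N).
    assert 0 <= (2 << (2 * N)) - root2 * root2 <= 2 * root2

(A real implementation would also raise the working precision as the iteration converges instead of running every step at full precision; the sketch just runs everything at N bits.)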

We specify sqrt(2)/2 and not sqrt(2) to eliminate ambiguity about whether calculating N bits includes the bits to the left of the radix point for numbers greater than 1.  There is something elegant about the square root of X in base X: dividing by X is simply a right shift.  However, one could do radix conversion to base 10 at the end.
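That radix conversion is itself just one big multiply and a shift on the fixed-point integer.  A sketch, with math.isqrt standing in for the full Newton computation:

    import math

    N = 1000                                      # fractional bits, as above
    half = math.isqrt(2 << (2 * N)) >> 1          # floor(sqrt(2) * 2**N) >> 1

    D = N * 30103 // 100000                       # ~ N * log10(2) digits are determined
    dec = half * 10**D >> N                       # rescale from 2**N to 10**D
    print("0." + str(dec).zfill(D)[:20] + "...")  # 0.70710678118654752440...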

In contrast to pi, sqrt(2)/2 has the nice feature that a subsequent calculation to more bits can start from a previously published result and then do more Newton iterations.  Slightly better would be if the intermediate fraction, before the final division, were published.
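A sketch of that warm start, again with made-up names and math.isqrt standing in for a previously published n-bit value: rescale the old result to the new precision, and a couple of further Newton iterations finish the job, since each iteration roughly doubles the number of correct bits.

    import math

    n, m = 1000, 2000                          # old and new precision in bits
    published = math.isqrt(2 << (2 * n))       # stand-in for a published result

    a = 2 << (2 * m)                           # the radicand 2 at the new scale
    x = (published + 1) << (m - n)             # +1 ulp keeps the guess above the root
    while True:
        nxt = (x + a // x) >> 1                # Newton: x <- (x + 2/x) / 2
        if nxt >= x:
            break
        x = nxt

    assert x == math.isqrt(2 << (2 * m))       # agrees with a from-scratch run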

Parallelizing over many computers would also stress-test the network, though it might be tricky to keep the network from becoming a bottleneck.  Maybe file storage on disk arrays interconnected by a network.

Are there better ways to test these things?  Probably something involving cryptographic hashes like Argon2, though it's not so glamorous.  Also cache-oblivious matrix transpose, which exercises the whole memory hierarchy.
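For the transpose, a toy cache-oblivious version in Python (illustrative only; a real stress test would want a flat array in a compiled language).  Recursively halving the longer dimension keeps every level of the cache hierarchy busy without knowing any cache sizes:

    import random

    def transpose(src, dst, r0, r1, c0, c1):
        """Transpose rows r0:r1, cols c0:c1 of src into dst."""
        if (r1 - r0) * (c1 - c0) <= 16:        # small block: transpose directly
            for i in range(r0, r1):
                for j in range(c0, c1):
                    dst[j][i] = src[i][j]
        elif r1 - r0 >= c1 - c0:               # split the longer dimension in half
            mid = (r0 + r1) // 2
            transpose(src, dst, r0, mid, c0, c1)
            transpose(src, dst, mid, r1, c0, c1)
        else:
            mid = (c0 + c1) // 2
            transpose(src, dst, r0, r1, c0, mid)
            transpose(src, dst, r0, r1, mid, c1)

    R, C = 37, 53                              # deliberately awkward sizes
    A = [[random.random() for _ in range(C)] for _ in range(R)]
    B = [[0.0] * R for _ in range(C)]
    transpose(A, B, 0, R, 0, C)
    assert all(B[j][i] == A[i][j] for i in range(R) for j in range(C))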
