A common but bad algorithm for finding a random large prime (e.g., for RSA) is to generate a large random number x and then test x, x+1, x+2, ... in order until a prime is found. This is bad because the result is not uniform: it preferentially selects primes preceded by large prime gaps, since every starting point inside a gap maps to the prime at its end. We do not yet know whether this bias causes a significant cryptographic weakness. GnuPG uses this algorithm, as of 1.4.21.
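For concreteness, here is a minimal sketch of that incremental search, assuming sympy.isprime as a stand-in primality test and a 1024-bit candidate size purely for illustration:

```python
import secrets
from sympy import isprime

def naive_prime(bits=1024):
    # Start from a random odd number with the top bit set.
    x = secrets.randbits(bits) | (1 << (bits - 1)) | 1
    # Step upward until a prime is hit: primes that end large gaps
    # are over-represented, since every x inside the gap lands on them.
    while not isprime(x):
        x += 2
    return x
```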
The right way to fix this is to generate a completely fresh random number for each candidate tested for primality. However, this consumes a lot of entropy.
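A sketch of that unbiased but entropy-hungry version, under the same illustrative assumptions as above:

```python
import secrets
from sympy import isprime

def fresh_prime(bits=1024):
    # Draw an entirely new random odd candidate every iteration,
    # so every prime of the right size is equally likely.
    while True:
        x = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if isprime(x):
            return x
```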
We can partially repair the problem as follows: test numbers in order starting from x, as before, but before testing a candidate for primality, reject it outright with high probability, say 0.999. Then only 0.1% of candidates are actually tested for primality, so the search skips over many actual primes, which prevents many different starting values x from mapping to the same found prime x+i.
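A sketch of this repair, where keep_prob = 0.001 matches the 0.1% figure above and Python's random.random() stands in for the quick, not-necessarily-strong sampling PRNG discussed next:

```python
import random
import secrets
from sympy import isprime

def repaired_prime(bits=1024, keep_prob=0.001):
    # Incremental search, but each candidate survives only with small
    # probability before the (expensive) primality test, so starting
    # points no longer funnel into the prime that ends the next gap.
    x = secrets.randbits(bits) | (1 << (bits - 1)) | 1
    while True:
        if random.random() < keep_prob and isprime(x):
            return x
        x += 2
```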
The pseudorandom number generator used to sample that 0.1% probably does not have to be very good, so it can be quick, and very little computing time will be spent rejecting candidates; most of the computing time will still go to primality testing.
How good is this repair? How strong does the sampling PRNG need to be? Does it need to be cryptographically strong?
Another way to do it might be to make a large random-sized jump to each new number to be tested, as in the sketch below.
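A sketch of that variant; the max_jump bound is an arbitrary illustrative parameter, not a recommendation:

```python
import secrets
from sympy import isprime

def jump_prime(bits=1024, max_jump=1 << 20):
    # Instead of stepping by 2, take a large random even jump between
    # candidates, which breaks the one-to-one link between a prime gap
    # and the prime that ends it.
    x = secrets.randbits(bits) | (1 << (bits - 1)) | 1
    while not isprime(x):
        x += 2 * (1 + secrets.randbelow(max_jump))  # even jump keeps x odd
    return x
```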