Fermat's primality test vs. Miller-Rabin

Anton Stiglic astiglic at okiok.com
Mon Nov 21 22:39:17 EST 2005



-----Original Message-----
From: owner-cryptography at metzdowd.com
[mailto:owner-cryptography at metzdowd.com] On Behalf Of Joseph Ashwood
Sent: November 18, 2005 3:18 AM
To: cryptography at metzdowd.com
Subject: Re: Fermat's primality test vs. Miller-Rabin

>> Look at table 4.3 of the Handbook of Applied Cryptography: for t = 1
>> (one iteration) and for a 500-bit candidate, we have probability
>> P(X | Y_1) <= 2^-56, which is better than what you concluded.  (X
>> represents the event that the candidate n is composite, Y_t the event
>> that Miller-Rabin(n, t) declares n to be prime.)
>>
>> The results in tables 4.3 and 4.4 of HAC are for randomly (uniformly)
>> chosen candidates, and I think you need to do a basic sieving first
>> (I don't remember if that is necessary, but I think it is).  The result
>> comes from the fact that under these conditions, the strong pseudoprime
>> test does in fact do much better than a 1/4 probability of error (the
>> value of P(Y_t | X) is very low).  This result is due to Damgard,
>> Landrock and Pomerance, based on earlier work of Erdos and Pomerance.

>I think much of the problem is the way the number is being applied. Given
>a stream of random numbers that have passed a single round of MR, you will
>find that very close to 50% of them are not prime; this does not mean that
>MR passes 50% of the numbers (the 2^-80 probability given above is of this
>type).

Do you do an initial sieving to get rid of the more obvious composites?  I'm
guessing you don't, since you seem to have a result contradicting what has
been proven by Damgard, Landrock and Pomerance.  If you look at table 4.3 of
HAC (which comes from Damgard et al.'s paper), it says that if your
candidates come from a uniform random distribution, then for a 500-bit
candidate, the probability that a candidate n is composite when one round of
Miller-Rabin said it was prime is <= (1/2)^56.  You are finding that the
probability is about 1/2, which seems very wrong (unless you are not doing
the sieving, which is very important).  Am I misunderstanding something?
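
To make sure we are talking about the same procedure, here is a minimal
sketch in Python (my own illustrative code with made-up names like
random_probable_prime; it is not from HAC or from Damgard et al., and the
small-prime list is just an example sieve bound): uniform odd candidates,
a trial-division sieve, then t rounds of Miller-Rabin.

    import random

    SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

    def sieve_ok(n):
        # Trial-divide by small primes; discards the obvious composites.
        return all(n % p != 0 for p in SMALL_PRIMES)

    def miller_rabin_round(n, a):
        # One round of Miller-Rabin on odd n > 3 with base a.  Returns True
        # if n passes (is declared "probably prime") for this base.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            return True
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                return True
        return False

    def random_probable_prime(bits, t):
        # Uniform odd candidates of the right size, sieved first, then
        # subjected to t Miller-Rabin rounds with independent random bases.
        while True:
            n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
            if not sieve_ok(n):
                continue
            if all(miller_rabin_round(n, random.randrange(2, n - 1))
                   for _ in range(t)):
                return n

Note the sieve never passes a composite on its own; it only removes
candidates with a small factor before the more expensive Miller-Rabin
rounds are run.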


>In fact it appears that integers fall on a continuum of difficulty 
>for MR, where some numbers will always fail (easy composites), and other 
>numbers will always pass (primes). The problem comes when trying to denote 
>which type of probability you are discussing. 

Well, I think I explained it pretty clearly.  I can try to reiterate.  Let X
represent the event that a candidate n is composite, and let Y_t denote the
event that Miller-Rabin(n,t) declares n to be prime, where Miller-Rabin(n,t)
means you apply t iterations of Miller-Rabin on n.
Now the basic theorem that we all know is that P(Y_t | X) <= (1/4)^t (this
is a problem in one of Koblitz's basic textbooks on cryptography, for
example).  But this is not the probability that we are interested in; we are
(at least I am) more interested in P(X | Y_t).  In other words, what is the
probability that n is in fact composite when Miller-Rabin(n, t) declared n
to be prime?  Do we agree that this is the probability we are interested in?
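
To spell out the relation between the two (this step is not in the thread,
it is just Bayes' theorem applied to the events defined above, writing
Comp X for the event that n is prime):

    P(X | Y_t) = P(Y_t | X) P(X) / P(Y_t)
               = P(Y_t | X) P(X) /
                     [ P(Y_t | X) P(X) + P(Y_t | Comp X) P(Comp X) ]

Since Miller-Rabin always declares a prime to be prime, P(Y_t | Comp X) = 1,
so the denominator is at least P(Comp X) and

    P(X | Y_t) <= P(Y_t | X) P(X) / P(Comp X).

In particular, P(X | Y_t) depends on P(X), i.e. on the distribution the
candidates are drawn from; that is exactly why the uniform-candidate
assumption (and the sieving) matters, and why P(X | Y_t) can be far smaller
than the worst-case (1/4)^t bound on P(Y_t | X).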


>What are the odds that a
>random 512-bit composite will be detected as composite by MR in one round?
>I don't think anyone has dependably answered that question, but the answer
>is very different from 1 - (probability that MR says it's prime)^k. Any
>discussion needs to be more accurately phrased.

You are looking for P(Comp Y_t | X), where Comp Z is the complementary event
of Z.  In our case, Comp Y_t is the event that Miller-Rabin(n,t) proves n to
be composite.  Is that what you are looking for?
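
That quantity can at least be estimated empirically.  Here is a
hypothetical sketch (again my own code, reusing sieve_ok and
miller_rabin_round from the sketch earlier in this message; trials and
ref_rounds are arbitrary choices, and it is slow at 512 bits) that samples
odd candidates, keeps the ones proven composite by a batch of reference
rounds, and then checks how often one fresh round catches them:

    def estimate_detection_rate(bits=512, trials=1000, sieve=True,
                                ref_rounds=64):
        # Estimate P(Comp Y_1 | X): how often a single Miller-Rabin round
        # declares a uniformly chosen odd composite of this size composite.
        detected = found = 0
        while found < trials:
            n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
            if sieve and not sieve_ok(n):
                continue
            # A failed round proves n composite; a composite surviving all
            # ref_rounds rounds is astronomically unlikely, so survivors
            # are treated as prime and skipped (not part of the event X).
            if all(miller_rabin_round(n, random.randrange(2, n - 1))
                   for _ in range(ref_rounds)):
                continue
            found += 1
            # One independent fresh round against this known composite:
            if not miller_rabin_round(n, random.randrange(2, n - 1)):
                detected += 1
        return detected / trials

For uniform candidates the estimate should come out extremely close to 1:
the (1/4)^t bound on P(Y_t | X) is a worst case over composites, and a
random composite is almost never anywhere near that worst case.  That is
the heart of the Damgard, Landrock and Pomerance result.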


>For example, my phrasing is that in the tests that I performed, 50% (+/-
>experimental noise) of those numbers that passed a single round of MR also
>passed 128 rounds, leading me to conclude that 50% of the numbers that
>passed a single round of MR are in fact prime. Since each number that
>passed a single round was subjected to 127 additional rounds, a number of
>additional statistics can be drawn, in particular that of those that failed
>at least one round, none failed fewer than 40 rounds, and few passed fewer
>than 40 rounds. Because this was only iterated 65536 times, there is still
>substantial experimental error. These pieces of information combined
>indicate that for 512 bits it is necessary to have 80 rounds of MR to
>verify a prime.
 
I don't understand what you are trying to point out.  If you choose your
candidates uniformly at random and do the sieving before applying the
Miller-Rabin tests, then for a 512-bit number it is sufficient to apply 5
rounds to get a probability of error lower than (1/2)^80.
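
In the notation of the sketch earlier in this message, that corresponds to
random_probable_prime(512, 5): by the Damgard et al. bounds cited above,
five rounds on sieved uniform candidates already push P(X | Y_5) below
(1/2)^80.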

You should take a look at Damgard et al.'s paper; they did a very good
analysis.

--Anton
                  


