Extracting uniform randomness from a noisy source

David Wagner daw at mozart.cs.berkeley.edu
Wed Aug 7 19:09:20 EDT 2002


John Kelsey wrote:
>a.  If my input samples have enough entropy to make my outputs random, then
>I need to resist computationally unbounded attackers.  (Otherwise, why
>bother with distilling entropy; just use a PRNG.)
>
>b.  If my input samples are within some much more forgiving bound, then I
>need to resist computationally-bounded attackers.  
>
>c.  Attackers with control over some part of my inputs mustn't be able to
>cancel out or block entropy in the other parts of the inputs.  

I agree these would be great properties to have.  Sadly, I don't know
of any construction that plausibly achieves all three, in theory or
in practice.

If we have to give up one, the one I'd be most willing to give up is
property a.  After all, we almost never have enough entropy, and we almost
always take the output of the PRNG and just use it in some cryptosystem
that is insecure against computationally unbounded attacks anyway.

Think of property a as asking for your encryption scheme to be
information-theoretically secure.  Sure, if you can afford such an encryption scheme,
that's great.  But in practice the one-time pad is too expensive, so we
gladly settle for mere security against computationally bounded attacks.
I think PRNGs are similar.

Fortunately, SHA1 in practice seems to achieve b+c (the two properties
we'd most like to have), even if we can't prove or rigorously justify
this belief.  For these reasons, I still think SHA1 is the best
solution available in practice.  At the same time, we probably could use
further research on schemes that can be justified in a rigorous and
principled way.
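
To make the intended use concrete, here is a minimal sketch of this kind of
SHA1-based conditioning, assuming the entropy samples arrive as byte strings.
The function name and the length-prefixing detail are illustrative choices of
mine, not something prescribed above.  The point is only that every sample is
absorbed into the hash, so an attacker who controls some of the inputs cannot
cancel out the entropy contributed by the rest (property c), and the
uniformity of the output rests on computational assumptions about SHA1 rather
than information-theoretic ones (property b, not a).

    import hashlib

    def condition(samples):
        """Condense raw entropy samples (byte strings) into a 160-bit seed.

        Every sample is absorbed into one SHA1 computation; an attacker
        who chooses some of the samples cannot cancel the entropy in the
        others without inverting or colliding the hash.
        """
        h = hashlib.sha1()
        for s in samples:
            # Length-prefix each sample so that different sequences of
            # samples cannot collapse to the same hash input.
            h.update(len(s).to_bytes(4, "big"))
            h.update(s)
        return h.digest()

    # Example: mix timing jitter, disk-latency measurements, and an
    # attacker-influenced network sample into one seed for a PRNG.
    seed = condition([b"\x13\x37" * 8, b"\x02\xab\xcd", b"attacker-chosen bytes"])

A real design would keep feeding samples into a long-lived pool and re-derive
output keys as needed; this sketch only shows the one-shot distillation step.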
