entropy depletion

John Denker jsd at av8n.com
Thu Jan 6 04:48:24 EST 2005


I wrote:

 >>  Taking bits out of the PRNG *does* reduce its entropy.

Enzo Michelangeli wrote:
> 
> By how much exactly?

By one bit per bit.

> I'd say, _under the hypothesis that the one-way
> function can't be broken and other attacks fail_, exactly zero; in the
> real world, maybe a little more. 

If you said that, you'd be wrong.

This is getting repetitious.  As I said before, this is an abuse of the
terminology.  If you want to quantify the goodness of your PRNG, go right
ahead, but please don't apply the word "entropy" to it.  The word is already
taken.  It means something very specific.

> But in
> /usr/src/linux/drivers/char/random.c I see that the extract_entropy()
> function, directly called by the exported kernel interface
> get_random_bytes(), states:
> 
>         if (r->entropy_count / 8 >= nbytes)
>                 r->entropy_count -= nbytes*8;
>         else
>                 r->entropy_count = 0;
> 
> ...which appears to assume that the pool's entropy (the upper bound of
> which is POOLBITS, defined equal to 4096) drops by a figure equal to the
> number of bits that are extracted (nbytes*8). This would only make sense
> if those bits weren't passed through one-way hashing. 

The linux /dev/random driver has lots of problems, but that's not one of them.
That makes perfect sense.  Anything else would not make sense.  That's what
entropy is.  If you're not interested in entropy, then go measure whatever
you are interested in, but don't call it entropy.
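To make the accounting concrete, here is a sketch (in Python rather than the driver's C; the function name `extract` is mine) of what the quoted bookkeeping does: debit the entropy estimate by exactly one bit per bit handed out, floored at zero.

```python
POOLBITS = 4096  # upper bound on the pool's entropy estimate, as in the driver

def extract(entropy_count: int, nbytes: int) -> int:
    """Debit the entropy estimate by the number of bits extracted.

    Mirrors the quoted extract_entropy() arithmetic: if the pool
    claims at least nbytes*8 bits, subtract that many; otherwise
    the estimate drops to zero.
    """
    if entropy_count // 8 >= nbytes:
        return entropy_count - nbytes * 8
    return 0

# A full pool debited by 16 bytes: 4096 - 128 = 3968 bits remain.
remaining = extract(POOLBITS, 16)
```

Note that the debit is independent of the one-way hashing applied on output; that is the point of the accounting.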

> Perhaps, a great
> deal of blockage problems when using /dev/random would go away with a more
> realistic estimate.

100% of the blocking problems go away if you use /dev/urandom to the exclusion
of /dev/random.
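For instance (a trivial sketch): reading through the kernel's non-blocking interface, e.g. Python's os.urandom(), which draws from /dev/urandom on Linux, returns immediately regardless of the pool's entropy estimate.

```python
import os

# os.urandom() reads the kernel CSPRNG (/dev/urandom on Linux).
# It never blocks waiting for the entropy estimate to rise,
# unlike a read from /dev/random on kernels of this era.
key = os.urandom(32)
```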

For the Nth time:
   a) Most of modern cryptography is based on notions of computational intractability.
I'm not saying that's good or bad;  it is what it is.
   b) My point is that there is an entirely different set of notions, including the
notion of entropy and the related notion of unicity distance, which have got *nothing*
to do with computational intractability.

You can study (a) if you like, or (b) if you like, or both.  Maybe (a) is best
suited to your application, or maybe (b).  But whatever you do, please don't
mistake one for the other.

Lots of things have large amounts of usable randomness, with little or no
entropy.  Please don't mistake one for the other.
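A toy illustration of that distinction (Python; the counter-mode SHA-256 construction and the name `prng_stream` are mine, not anything from the driver): stretch a 16-bit secret seed into as much output as you like. The output looks statistically fine and is computationally hard to distinguish from random, yet the entire stream carries at most 16 bits of entropy, since an attacker can enumerate all 65536 possible seeds.

```python
import hashlib

def prng_stream(seed: int, nblocks: int) -> bytes:
    """Counter-mode SHA-256 generator: unbounded output,
    but total entropy is capped by the 16-bit seed."""
    out = bytearray()
    for ctr in range(nblocks):
        block = hashlib.sha256(
            seed.to_bytes(2, "big") + ctr.to_bytes(8, "big")
        ).digest()
        out += block
    return bytes(out)

# 128 bytes of "usable randomness"...
stream = prng_stream(seed=12345, nblocks=4)
# ...but only 2**16 possible streams exist: at most 16 bits of entropy.
```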


---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com


