Entropy and PRNGs

David Wagner daw at cs.berkeley.edu
Sun Jan 9 18:53:00 EST 2005


John Denker writes:
>Ben Laurie wrote:
>> http://www.apache-ssl.org/randomness.pdf
>
>I just took a look at the first couple of pages.
>IMHO it has much room for improvement.

I guess I have to take exception.  I disagree.  I think Ben Laurie's
paper is quite good.  I thought your criticisms missed some of the points
he was trying to make (these points are subtle, so this is completely
understandable).  Presumably his paper could be criticized as not clear
enough, since it seems it did not convey those points adequately, but
I don't think his paper is inaccurate.  I'll respond point-by-point.

>*) For instance, on page 2 it says
>
>>> I can have a great source of entropy, but if an attacker knows
>>> all about that source, then it gives me no unpredictability at
>>> all.
>
>That's absurd.  If it has no unpredictability, it has no entropy,
>according to any reasonable definition of entropy, including the
>somewhat loose definition on page 1.

Actually, I think Ben got it right.  Entropy depends on context.
The attacker might have extra context that allows him to narrow down
the possible values of the randomness samples.

For instance, imagine if we use packet inter-arrival times (measured down
to the nanosecond) as our randomness source.  From the point of view of
an outsider, there might be a lot of entropy in these times, perhaps tens
of bits.  However, from the point of view of an attacker who can eavesdrop
on our local area network, there might be very little or no entropy.

This is the difference between unconditional and conditional entropy that
Ben was trying to introduce.  In information-theoretic notation, H(X)
vs H(X|Y).  Let X = packet inter-arrival time, and Y = everything seen by
a local eavesdropper, and you will see that H(X|Y) can be much smaller
than H(X).  Indeed, we can have H(X|Y) = 0 even if H(X) is very large.
This is Ben's point, and it is a good one.
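
To make the distinction concrete, here is a toy computation in Python
(my own illustration; the numbers are invented, not from Ben's paper or
any real network).  Model X as an inter-arrival time uniform over 2**16
distinct nanosecond values, so H(X) = 16 bits to an outsider, and let
the eavesdropper's view Y be X measured at 1-microsecond resolution,
which pins X down to roughly 1000 candidates:

    import math
    from collections import defaultdict

    def entropy(dist):
        # Shannon entropy, in bits, of a {value: probability} map.
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # X: packet inter-arrival time, uniform over 2**16 nanosecond values.
    n = 2 ** 16
    px = {x: 1.0 / n for x in range(n)}
    print("H(X)   = %5.2f bits" % entropy(px))   # 16.00

    # Y: the eavesdropper's view of X, at 1-microsecond resolution.
    groups = defaultdict(list)
    for x in px:
        groups[x // 1000].append(x)

    # H(X|Y) = sum over y of p(y) * H(X given Y=y).
    h = 0.0
    for xs in groups.values():
        py = sum(px[x] for x in xs)
        h += py * entropy({x: px[x] / py for x in xs})
    print("H(X|Y) = %5.2f bits" % h)             # ~9.96

And if the eavesdropper's clock resolved nanoseconds too, every group
would collapse to a single value, giving H(X|Y) = 0 no matter how large
H(X) is.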

>*) Later on page 2 it says:
>
>>> Cryptographers want conditional entropy, but for UUIDs, and
>>> other applications, what is needed is unconditional entropy.
>
>First of all, you can't throw around the term "conditional
>entropy" without saying what it's conditioned on.

Conditioned on everything known to the attacker, of course.

>Also importantly, for UUIDs no entropy is required at all.
>A globally-accessible counter has no entropy whatsoever, and
>suffices to solve the UUID problem.

A counter is fine as long as there is only one machine in the universe
that will ever assign UUIDs.  However, if you want to do distributed
generation of UUIDs, then counters are insufficient because there is no
way to prevent overlap of two machines' counter spaces.

Perhaps what Ben should have said is:
* Unconditional entropy is sufficient for UUIDs;
  conditional entropy is not needed.
* For centrally-assigned UUIDs, even unconditional entropy is unnecessary;
  a centrally-managed counter is fine.
* For distributed, unsynchronized assignment of UUIDs, unconditional
  entropy appears to be necessary and sufficient.  (A sketch of the last
  two cases follows below.)
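
Here is a minimal sketch of those last two cases in Python (my code,
nothing from Ben's paper; uuid4() draws its bits from os.urandom):

    import itertools
    import uuid

    # Centrally-assigned UUIDs: one issuer, no entropy needed at all.
    _counter = itertools.count()

    def central_uuid():
        return next(_counter)    # unique so long as this is the sole issuer

    # Distributed, unsynchronized assignment: pick the identifier at random.
    # uuid4() uses 122 random bits, so identifiers chosen independently on
    # different machines collide only with negligible probability.
    def distributed_uuid():
        return uuid.uuid4()

    print(central_uuid(), central_uuid())   # 0 1
    print(distributed_uuid())               # a fresh random UUID each call

Note that only collision-avoidance matters here; whether an attacker can
predict the next UUID (the conditional-entropy question) is beside the
point for the UUID application.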

>*) At the bottom of page 2 it says:
>
>>> Well, for many purposes, the system time has plenty of
>>> unconditional entropy, because it is usually different when
>>> used to generate different UUIDs.
>
>No, the system time has no entropy whatsoever, not by any
>reasonable definition of entropy.

Ok, this seems like a fair criticism.

>*) On page 4, I find the name "trueprng" to be an oxymoron.
>The P in PRNG stands for Pseudo, which for thousands of
>years has meant "false", i.e. the opposite of true.

Another reasonable point.  Perhaps truerng would be a better name, then?
