[Cryptography] randomness +- entropy

Nico Williams nico at cryptonector.com
Fri Nov 8 16:31:03 EST 2013


On Fri, Nov 08, 2013 at 12:23:57PM -0700, John Denker wrote:
> > I was only arguing that consuming n bits of PRNG output != lowering the
> > PRNG's "entropy" by n bits.
> 
> That inequality is true and useful and well said.

Good.  We could argue about how slowly entropy gets consumed as outputs
of a properly seeded PRNG/SRNG get produced, which I think should be...

...incredibly slowly: by at most 2^-n bits for every unit of output,
where n is the number of bits of entropy estimated in the RNG's state.
So if you have 256 bits of entropy to start with you'll never in a
million years (figure of speech) end up with fewer than 180 or so bits
of entropy left.  Add to that periodic TRNG inputs to provide fast
recovery from state compromises and what more could we want?
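To make "incredibly slowly" concrete, here is a back-of-the-envelope
bound under my toy model above (an assumption for illustration, not the
kernel's actual accounting): if each output leaks at most 2^-e bits when
the state holds e bits of entropy, then while e stays above 180 the
per-output loss is at most 2^-180, and even 2^64 outputs can't add up to
one whole bit:

```python
from fractions import Fraction

# Toy model (assumption): each unit of output leaks at most 2^-e bits
# of state entropy, where e is the current entropy estimate in bits.
# Bound the total loss over 2^64 outputs using the worst per-output
# rate that can occur anywhere above 180 remaining bits, i.e. 2^-180.
outputs = 2**64
worst_rate = Fraction(1, 2**180)   # bits lost per output at e = 180

total_loss = outputs * worst_rate  # exactly 2^-116 bits
print(total_loss < 1)              # nowhere near one whole bit
```

Fractions are used because 2^-180 underflows to zero as a float, which
would hide the (already negligible) loss entirely.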

> In a rational world that would be all there was to
> say about it.  However, alas, there is more to the
> story, especially when we look at the context, which
> has to do with the Linux /dev/random and /dev/urandom.
> 
> It turns out that:
>  1a) /dev/random is "supposed" to be a TRNG.
>  1b) However, it can operate as a PRNG in exceptional 
>   circumstances, when it is recovering from a compromise.

It only needs to have high-entropy, unpredictable state.  Given a
mixer/extractor design that does not cause entropy to drop appreciably
just because output bits are extracted, it doesn't matter whether
/dev/random is a PRNG that is frequently reseeded with a TRNG's outputs
(an SRNG, basically) or just a TRNG.  With proper crypto no harm could
come from that, nor could anyone distinguish one from the other.
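A minimal sketch of such a mixer/extractor (my simplified hash-based
construction for illustration, not the Linux design; all names are made
up): reseed() folds fresh TRNG bytes into the state, and extract()
derives outputs through a one-way function so outputs never expose the
state:

```python
import hashlib

class SRNG:
    """Toy hash-based mixer/extractor (illustration only)."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(b"init" + seed).digest()
        self.counter = 0

    def reseed(self, trng_bytes: bytes) -> None:
        # Mix: the new state depends on the old state AND fresh input,
        # so extraction alone barely touches the entropy estimate.
        self.state = hashlib.sha256(b"mix" + self.state + trng_bytes).digest()

    def extract(self) -> bytes:
        # Extract: output is a one-way function of state + counter, so
        # observing outputs doesn't let anyone walk back to the state.
        self.counter += 1
        out = hashlib.sha256(b"out" + self.state +
                             self.counter.to_bytes(8, "big")).digest()
        # Ratchet the state forward so a later state compromise does
        # not reveal past outputs (backtracking resistance).
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out
```

Two instances seeded identically produce identical streams until one is
reseeded, which is exactly the "PRNG frequently reseeded by a TRNG"
behavior described above.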

>  2a) /dev/urandom is some sort of hermaphroditic chimera.

Given RNGs whose states' entropy is difficult to consume, there should
be no real difference between /dev/random and /dev/urandom once they've
been properly initialized, especially if both are designed to recover
quickly from one-time state compromises.  Or, if there should be a
difference: a) what is it, b) why should I care, and c) how could I
tell?

If indeed there is no need for a difference between the two [once
properly initialized], then why do we have two interfaces?  And (2a)
would follow naturally.

The key here is "once properly initialized".

Proposal:

 - /dev/random should block whenever it hasn't had fresh entropy mixed
   in in the past N seconds or if it has not yet been properly seeded.

   I.e., guarantee strong and robust outputs.

 - /dev/urandom should block if it has not yet been properly seeded,
   then it should never block again.

   I.e., guarantee strong outputs.

 - Both should get fresh entropy mixed in frequently (at least as
   frequently as outputs are demanded).

 - Nothing in the boot sequence should need entropy before /dev/urandom
   has been seeded properly.  Anything that does should be modified not
   to.

   For example, ASLR should not require strong RNG outputs until
   /dev/urandom has been seeded, and then init(1M) and any other
   long-running processes should restart.  (Solaris' init(1M) can be
   restarted, so this is not farfetched.)  Seeding of the RNG should be
   done as early as possible in the boot sequence.  Sources of entropy
   should include:

    - RDRAND or similar
    - jitter (requires hi-res CPU cycle counters and timers)
    - async event timing (interrupts)
    - a seed in the RAMfs boot image
    - a seed in /var saved at last boot
    - a seed in /var saved at last shutdown
    - datetime, CPUID, ...
    - network entropy servers

   in roughly that order.  Only jitter entropy should be trusted for
   estimating initial RNG state entropy, but some of each of the others
   must be required.
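The two blocking contracts proposed above can be sketched as follows
(hypothetical API and names; the real devices block inside the kernel,
which this model stands in for by raising WouldBlock, and N is an
arbitrary illustrative default):

```python
class WouldBlock(Exception):
    """Stands in for a blocking read() in this sketch."""

class EntropyPool:
    # Toy model of the proposal's two contracts (names made up).
    def __init__(self, reseed_interval_s: float = 60.0):
        self.N = reseed_interval_s
        self.seeded = False
        self.last_mix = 0.0

    def mix_fresh_entropy(self, now: float) -> None:
        self.seeded = True
        self.last_mix = now

    def read_random(self, now: float) -> bytes:
        # /dev/random: strong AND robust -- block if never properly
        # seeded, or if no fresh entropy was mixed in within N seconds.
        if not self.seeded or now - self.last_mix > self.N:
            raise WouldBlock
        return b"\x00" * 32  # placeholder for real SRNG output

    def read_urandom(self, now: float) -> bytes:
        # /dev/urandom: strong -- block only until first proper
        # seeding, then never block again.
        if not self.seeded:
            raise WouldBlock
        return b"\x00" * 32  # placeholder for real SRNG output
```

Note that the only difference between the two reads is the staleness
check, which matches the claim that, once seeded and frequently
reseeded, the two devices converge.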

>   That is to say, I don't know what it is.  Under a
>   wide range of "typical" conditions it functions as
>   a PRNG.
>  2b) However, it also tries to approximate a TRNG if 
>   it can.

Once seeded, and if frequently reseeded, it's the same as /dev/random.
Alternatively, /dev/urandom might have an interface contract where each
open file descriptor represents an instance of a seeded-one-time-only
PRNG, but I don't think anyone needs this; in any case, most apps open,
read, and close /dev/urandom rather than keep it open.

> There are some heavy tradeoffs involved here, trading

Not really.  The two pseudo-devices have to be reasonably
high-bandwidth and, once properly seeded, secure under fairly broad
threat models.  Starting from that premise, the goal is a
fast-but-secure mixer/extractor with TRNG inputs fed to the mixer as
often as possible (though not on a timer, for power management reasons,
so much as on demand as read()s are done).
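The "on demand as read()s are done" point can be sketched as a read
path that opportunistically pulls whatever TRNG bytes are available
before extracting (an assumed design for illustration, not Linux's;
trng_poll is a hypothetical stand-in for a hardware source):

```python
import hashlib

def read_bytes(state: bytes, nbytes: int, trng_poll) -> tuple[bytes, bytes]:
    """Sketch of reseed-on-demand: each read() first tries to mix in
    available TRNG bytes -- no timer, so an idle machine does no
    mixing work and wakes no CPU for power-management reasons."""
    fresh = trng_poll()           # returns b"" if nothing is available
    if fresh:
        state = hashlib.sha256(b"mix" + state + fresh).digest()
    out = b""
    counter = 0
    while len(out) < nbytes:
        counter += 1
        out += hashlib.sha256(b"out" + state +
                              counter.to_bytes(8, "big")).digest()
    # Ratchet so this read's outputs can't be recomputed from a
    # later state compromise.
    state = hashlib.sha256(b"next" + state).digest()
    return out[:nbytes], state
```

When the poll returns fresh bytes, the output stream diverges from the
unmixed one, which is the fast-recovery property argued for above.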

> possibly-better performance under exceptional conditions 
> against waste of CPU cycles and waste of entropy under 
> normal conditions.  This leaves us with more questions
> [...]

If entropy isn't consumed linearly with demand (but way sub-linearly),
then I don't see this trade-off.  You just agreed that this is a
desirable property.

> Because /dev/urandom is a hermaphroditic chimera, when
> somebody says they are reading thousands of bits with
> only 23 bits of entropy, that is not quite as insane
> as you might think.  It's not a normal TRNG and it's
> not a normal PRNG, but it is what it is.  There are
> millions upon millions of machines in the field that
> depend on it.

No, we should demand 128 bits of entropy before anything useful can be
done with /dev/urandom's output.  The only question is: whence the
entropy?  We always end up back at this very first square.

> [...]
>
> So the name "extract_entropy()" is quite misleading.

Right!  We're not extracting entropy.  We're extracting outputs from an
SRNG.

Nico
-- 

