[Cryptography] [RNG] on RNGs, VM state, rollback, etc.

John Kelsey crypto.jmk at gmail.com
Sat Oct 19 15:19:20 EDT 2013


On Oct 19, 2013, at 12:36 PM, Adam Back <adam at cypherspace.org> wrote:

> On Sat, Oct 19, 2013 at 10:33:34AM -0400, Theodore Ts'o wrote:
...
>> One of them was to do precisely this --- /dev/urandom now mixes in
>> salting information (ethernet MAC addresses, etc, via the new
>> interface add_device_randomness).  Zero entropy is indeed assessed,
>> and the main goal is to avoid the trivially easy case of shared primes
>> in the case where we fail to gather enough entropy.
> 
> I know it's obvious and you mentioned the risks, but this is in principle a
> band-aid or worse; it gives the illusion of entropy in the face of actually
> no entropy to an attacker who can readily obtain the serial numbers in
> question (e.g. because the MAC is broadcast on the LAN), or simply brute-force
> them, because the GUID, while large, is highly structured and sparse.

You should think of this like salting a password hash, not like adding entropy.  The attacker can probably know most or all of this data, but he won't be able to run his entropy-pool state-guessing attack once and then exploit it for everyone everywhere.
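To make the salting analogy concrete, here is a minimal Python sketch of folding attacker-knowable, per-device data into a pool state. All names are illustrative (this is not the kernel's actual add_device_randomness code): the point is only that zero-entropy but device-specific data forces the attacker to redo any state-guessing work per device.

```python
import hashlib

def mix_into_pool(pool: bytes, data: bytes) -> bytes:
    """Fold device-specific data (e.g. a MAC address) into the pool.

    Like a password salt, the data may be fully known to the attacker;
    it adds no assessed entropy, but it makes the pool state
    device-specific, so one precomputed state-guessing attack no longer
    applies to every host at once.
    """
    return hashlib.sha256(pool + data).digest()

# Two hosts that boot with the same (weak) initial pool state...
weak_seed = b"\x00" * 32
pool_a = mix_into_pool(weak_seed, bytes.fromhex("3c5ab4010203"))  # host A's MAC
pool_b = mix_into_pool(weak_seed, bytes.fromhex("3c5ab4040506"))  # host B's MAC

# ...still end up with distinct pool states, so they won't generate
# identical keys (the shared-primes failure mode), even with zero
# gathered entropy.
assert pool_a != pool_b
```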

> It would seem safer to fail/stop and demand user action.  I know that's not a
> popular decision in a distro/package/boot sequence, but churning out
> zero-entropy keys disguised as having entropy, being E_0(mac) and similar
> analogs, is a bad outcome and won't be observable via identical-P,Q key
> searches.

I think the problem we have now is built into the assumptions of /dev/random and /dev/urandom.  It looks like /dev/urandom is typically expected to both never block, and to always give cryptographically secure random bits.  Right now, when those two requirements are not compatible, it fails to give secure random bits.  Fixing that makes it block, which will presumably break some programs, maybe causing a big impact.  

I think the problem is that random and urandom split the random number generation problem in the wrong place.  What we probably need is something like a best-try non-blocking random number generator, suitable for non-crypto things where you want really unpredictable values if they're available but you can live with less unpredictability if you have to--stuff like address space randomization might want this.  And then, we want a crypto random number generator that blocks only at the beginning when it doesn't have enough entropy, and otherwise manages its reseeds intelligently.  (Implicit in this:  While I get why people might like to have a full entropy source in some situations, I'm extremely skeptical that it adds much from a real security perspective.)

What would break if /dev/random became something that only provided cryptographic-strength random bits instead of full-entropy bits, but never blocked except at startup?  Would it be possible to convince developers to then only use /dev/urandom for non-cryptographic applications, and to use /dev/random when they needed cryptographic random bits?  I'm sure there is a ton of code out there that uses /dev/urandom the wrong way now, though, and a change like this wouldn't affect that at all.  For that, better entropy collection and maybe some external seeding of distributions seem like the only easy fixes, assuming you can't make /dev/urandom block.

> Adam

--John
