[Cryptography] randomness +- entropy

Theodore Ts'o tytso at mit.edu
Fri Nov 8 16:12:54 EST 2013


On Fri, Nov 08, 2013 at 12:23:57PM -0700, John Denker wrote:
> It turns out that:
>  1a) /dev/random is "supposed" to be a TRNG.
>  1b) However, it can operate as a PRNG in exceptional 
>   circumstances, when it is recovering from a compromise.

(1b) is true for all RNGs that have any kind of batching in their
design, and that tends to be all RNGs.  Even a design that uses a
noise diode and generates bits on demand still has to batch bits
until it can send back a byte, or a set of bytes, to the requester.
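To make the batching point concrete, here is a toy sketch (my own
illustration, not kernel code) of a bit-at-a-time noise source: even
this "pure TRNG" design has to buffer bits into bytes before it can
return anything, so there is always some batched internal state.

```python
import secrets

def noisy_bit():
    # Stand-in for a hardware noise source (e.g. a noise diode);
    # returns one unpredictable bit per call.
    return secrets.randbits(1)

def read_random_bytes(nbytes, bit_source=noisy_bit):
    # Batch single bits from the source into whole bytes before
    # returning anything to the caller -- the buffered bits are
    # exactly the kind of internal state a compromise could expose.
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            byte = (byte << 1) | bit_source()
        out.append(byte)
    return bytes(out)
```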

Also note that you may not know when you've suffered from a
compromise; so part of a good design is to have ways to limit damage
after a state compromise.
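A toy sketch (again my own illustration, not the kernel's algorithm)
of why mixing fresh entropy into the pool limits the damage: once a
reseed happens, an attacker who captured the old internal state can
no longer predict future output.

```python
import hashlib

class ToyGenerator:
    # Toy hash-chain generator demonstrating recovery from state
    # compromise via reseeding.  Not a real CSPRNG design.
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def read(self) -> bytes:
        # Derive output and step the state forward.
        out = hashlib.sha256(b"out" + self.state).digest()
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out

    def reseed(self, fresh_entropy: bytes) -> None:
        # Mix new entropy into the state; after this, knowledge of
        # the old state no longer predicts future reads.
        self.state = hashlib.sha256(self.state + fresh_entropy).digest()
```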

>  2a) /dev/urandom is some sort of hermaphroditic chimera.
>   That is to say, I don't know what it is.  Under a
>   wide range of "typical" conditions it functions as
>   a PRNG.
>  2b) However, it also tries to approximate a TRNG if 
>   it can.

2b) was historically true, but after the 3.13 kernel, it will be
moving much more towards the PRNG.  The main reason for this change
is to keep more entropy available in the input pool for use by
/dev/random when processes such as the Chrome browser are using
/dev/urandom very aggressively.  What prompted it was a user
complaining to me that generating a new 2048-bit GPG key was taking
over ten minutes on his desktop, which led me to take a closer look
at how the entropy was getting used.

> Also note that this printk warning is in one person's
> "development" branch and has not been incorporated into
> any released version of the kernel.
>   http://git.kernel.org/cgit/linux/kernel/git/tytso/random.git/log/?h=dev

It's my (the /dev/random maintainer's) development tree, so it's
already in test integration builds in the linux-next tree, and it's
scheduled to go into mainline Linux when the merge window opens next
week.  Linus and I will both be in Korea speaking at the LinuxCon
Korea conference (my talk is going to be about security requirements
for Linux, and it's going to include discussion of /dev/random), but
I expect to send the git PULL request to Linus while I am in Seoul,
and I expect that he'll pull it in, do his test build, and push it to
the official mainline tree while he is in Korea.

> > One of the reasons why we don't attempt to extract "true random bits"
> > and save them across a reboot is that even if we had such bits that were
> > secure even if the underlying crypto primitives were compromised to a
> > fare-thee-well, once you write them to the file on the hard drive and
> > the OS gets shut down, there's no guarantee that an adversary might
> > not be able to read the bits while the OS is shut down.  Even if you
> > don't do something truly stupid (such as leaving your laptop
> > unattended in a hotel room while visiting China), the risk of having
> > your "true random bits" stolen is probably higher than the
> > cryptographic primitives getting compromised.
> 
> That reason doesn't make much sense.  There is a better reason,
> as discussed below, but first we should observe that at present
> Linux stores only a pseudo-random seed and then relies entirely 
> on the PRNG!  This is in no way more trustworthy than storing
> some real entropy.  Any attacker who could steal the hypothetical
> random-seed file can also steal the urandom-seed file.

That's what we are doing *today*.  I'm explaining why it doesn't
really make sense to extract however many bits of entropy we have in
/dev/random, save them across a shutdown, and then push them back
into the input pool at boot.  This is something that is within my
power to change, but as I've described above, I don't really think
it's that great an idea.
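For reference, the save-a-seed-across-reboot scheme under discussion
looks roughly like this in a distro init script (a hedged sketch; the
paths and seed size are illustrative, not what any particular distro
uses).  Note that writing the seed back into /dev/urandom mixes it
into the pool but credits no entropy; crediting requires the
root-only RNDADDENTROPY ioctl.

```python
import os

SEED_SIZE = 512  # bytes; illustrative, not a mandated size

def save_seed(path: str) -> None:
    # At shutdown: draw bytes from the kernel PRNG and stash them.
    # This is PRNG output, not raw "true random bits".
    seed = os.urandom(SEED_SIZE)
    with open(path, "wb") as f:
        f.write(seed)
    os.chmod(path, 0o600)  # the seed file must not be world-readable

def load_seed(path: str, rng_device: str = "/dev/urandom") -> None:
    # At boot: write the saved seed back into the pool.  The kernel
    # mixes the bytes in but does not credit any entropy for them.
    with open(path, "rb") as f:
        seed = f.read()
    with open(rng_device, "wb") as f:
        f.write(seed)
```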

> Here's a better reason why at present it would make no sense to
> take the obvious approach to storing real entropy across reboots:
> The /dev/urandom quasi-PRNG would immediately waste the entropy.
> The device apparently assumes that a steady supply of new raw 
> entropy will always be available.  In situations where entropy 
> is scarce -- e.g. when there is a finite stored supply -- normal 
> operation of /dev/urandom is tantamount to a denial-of-service 
> attack on the entropy supply.

With the changes that are in the random.git tree, which are in the
linux-next tree and will probably be in Linus's tree by the end of
next week, the amount of entropy consumed out of the input pool by
heavy users of /dev/urandom has been significantly reduced.

There are some further changes that could be made, and which I am
thinking about.  Part of this includes using AES for /dev/urandom,
since we now have CPUs with AES acceleration, and we no longer need
to worry as much about export control laws (the current design was
implemented in 1994, back when crypto export was a real issue).  One
of the things holding me back is that currently the Crypto layer in
Linux is optional, and can be compiled as a module, and I've always
wanted to make sure /dev/random was something user programs could
always count on being there.  So there are some negotiations I need
to have with the maintainers of the Crypto subsystem about how to
make this all work, since it would require changes in how the Crypto
layer is configured.
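The general shape of such a design is a counter-mode generator: run a
keyed primitive over an incrementing counter to produce the output
stream.  A minimal sketch, using SHA-256 as a stand-in only because
the Python stdlib ships no AES (a kernel design would use AES-CTR
with hardware acceleration such as AES-NI):

```python
import hashlib

def ctr_mode_stream(key: bytes, nbytes: int) -> bytes:
    # Counter-mode generator sketch: hash (key || counter) to
    # produce successive keystream blocks, then truncate to the
    # requested length.  Illustrative only; not a vetted DRBG.
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        block = hashlib.sha256(key + counter.to_bytes(16, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:nbytes])
```

The output is a deterministic function of the key, so the security of
such a scheme rests entirely on keeping the key secret and reseeding
it from the input pool.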

It is true that part of the current design relies on the fact that we
can sample interrupt timings and storage device timings, so we do get
a continuous stream of incoming entropy.  We want to make sure that
this entropy is used wisely, but not using it at all is, at the end
of the day, just as wasteful as using it too profligately.  One of
the ways that we do use these bits is to limit the damage in the
unlikely case of internal state compromise.

Regards,

					- Ted
