chip-level randomness?

Theodore Tso tytso at MIT.EDU
Wed Sep 19 17:17:18 EDT 2001


On Wed, Sep 19, 2001 at 01:50:53PM -0700, John Gilmore wrote:
> The real-RNG in the Intel chip generates something like 75 kbits/sec
> of processed random bits.  These are merely wasted if nobody reads them
> before it generates 75kbits more in the next second.
> 
> I suggest that if application programs don't read all of these bits
> out of /dev/intel-rng (or whatever it's called), and the kernel
> /dev/random pool isn't fully charged with entropy, then the real-RNG
> driver should feed some of the excess random bits into the /dev/random
> pool periodically.  When and how it siphons off bits from the RNG is a
> separate issue; but can we agree that feeding otherwise-wasted bits
> into a depleted /dev/random would be a good idea?

It's definitely the case that feeding extra bits of randomness into
the /dev/random entropy pool is a good idea.  Whether or not to give
any entropy credit for this is another question.

One of the things which I've always been worried about with the 810
hardware random number generators in general is how to protect against
their failing silently.  My original design intention here was that
this be done in a user-mode process that could run FFT's, and do other
kinds of analysis on the output of the hardware random number
generator, and then if it passed, it could use an already-existing
interface to atomically add the random bytes to the entropy pool and
give credit to the entropy counter.

It turns out that with the Intel 810 RNG, it's even worse because
there's no way to bypass the hardware "whitening" which the 810 chip
uses.  Hence, if the 810 random number generator fails, and starts
sending something that's close to a pure 60 HZ sine wave to the
whitening circuitry, it may be very difficult to detect that this has
happened.

In addition, depending on how paranoid you are, your threat model
might encompass the scenario where an NSA "black bag" job punches
the 810 rng into a mode where its output is the sequence 1, 2,
3, 4, ... encrypted under some secret SkipJack key.  And there isn't
much that could be done to detect, much less protect against,
something like this.  (Yes, this scenario requires the cooperation
of Intel with the NSA when doing the chip design --- I *said* it was
a highly paranoid scenario....)

On the other hand, for most people, on balance it's probably better
for the kernel to just blindly trust the 810 random number generator
to be free from faults (either deliberate or accidentally induced),
since the alternative (an incompletely seeded RNG) is probably worse
for most folks.  (Unless, of course, your threat model has a very
heavy bias towards national security and law enforcement agencies.  :-)

So what probably makes sense is to make this configurable, and
configurable at run-time, via two /proc control parameters.  One
controls whether or not excess entropy is fed from the 810 RNG to the
/dev/random pool at all, and the other controls (from 0 to 100
percent) how many bits of "entropy" are credited for each bit read
out from the 810 RNG.

I expect for the common case, the /dev/random pool will just blindly
trust the 810 RNG, and so entropy will be siphoned over at 75 to 100%.
Hopefully, even if the 810 RNG is completely compromised, there will
be enough other sources of randomness being drawn from the general
system operation that it will make the job of the attacker somewhat
more difficult.

However, I *do* want to preserve the original design goal of allowing
the transfer of entropy from hardware random number generators to
/dev/random to be controlled by a user-mode process, which can be
arbitrarily paranoid about trying to do quality checks on the output
of the hardware random number generator before feeding it to
/dev/random.  Sure, this won't protect against a deliberately
compromised RNG (since an encrypted stream will be indistinguishable
from noise unless you know the crypto key), but it does protect against
random hardware failures.

Does this seem reasonable?

> Also, the PRNG in /dev/random and /dev/urandom may someday be broken
> by analytical techniques.  The more diverse sources of true or
> apparent randomness that we can feed into it, the less likely it is
> that a successful theoretical attack on the PRNG will be practically
> successful.  If even a single entropy source of sufficiently high
> speed is feeding it, even a compromised PRNG may well be unbreakable.

If the PRNG in /dev/random can be broken by analytical techniques,
this means that someone was able to find potential inputs (or at least
significant information about potential inputs) to a SHA-1 hash given
the SHA-1 hash.  I'm not going to say that this is impossible, but if
it turns out to be possible, I suspect the analytical breakthrough(s)
necessary to pull off such a feat are going to make life interesting
for a rather large number of people.....  :-)

But yes, your general point stands; as long as we pull no more entropy
out of /dev/random than is pushed into it, even if SHA-1 is completely
compromised, the resulting random stream should be secure; that's why
/dev/random keeps track of an estimate of how much entropy is in the
pool, and limits how much randomness it will emit based on that
entropy estimation.  

However, given the use of /dev/urandom, being able to feed more
possible randomness into the entropy pool, even if we don't bump the
entropy estimator, can only be a good thing.

						- Ted

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at wasabisystems.com