[Cryptography] /dev/random is not robust

Jerry Leichter leichter at lrw.com
Thu Oct 17 13:53:11 EDT 2013


On Oct 17, 2013, at 1:05 PM, Kent Borg <kentborg at borg.org> wrote:
> But is this something that /dev/urandom might do better?  Should blocking be added to /dev/urandom immediately after boot until some reasonable threshold has been reached at least once?  Or on first boot are common distributions restoring a bad seed file and /dev/random can't tell?  Arrgh, I am starting to think that the RNG is the wrong place to fix it.
> 
> Should RNGs attempt to detect uninitialized states and refuse to run?
One answer to this question appears in the FIPS standards for RNGs.  At times, they've required a continuous on-line test of the numbers being generated, with automatic shutdown if the tests fail.  These requirements almost certainly came from the hardware background of the FIPS standards.  For hardware, certain failure modes - stuck at 0/stuck at 1 are the most obvious; short cycles due to some internal oscillation are another - are extremely common, and worth checking for.  For software-based deterministic PRNGs, such tests are mainly irrelevant - code doesn't develop such failures in the field.  As the FIPS standards were adjusted for a more software-based world, the requirement for on-line testing was dropped.
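
As a concrete illustration, here is a minimal sketch of that kind of continuous on-line test, modeled on the FIPS 140-2 conditional test that compares each newly generated block with the previous one and shuts the source down on an exact repeat.  The 16-byte block size, the 1000-block loop, and the use of /dev/urandom as a stand-in raw source are illustrative assumptions, not anything the standard specifies beyond the compare-and-shutdown idea:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_LEN 16                        /* illustrative block size */

    /* Stand-in raw source for the sketch; a real driver would read the
     * hardware generator's output here. */
    static void read_block(FILE *src, unsigned char buf[BLOCK_LEN])
    {
        if (fread(buf, 1, BLOCK_LEN, src) != BLOCK_LEN) {
            perror("fread");
            exit(1);
        }
    }

    int main(void)
    {
        unsigned char prev[BLOCK_LEN], cur[BLOCK_LEN];
        FILE *src = fopen("/dev/urandom", "rb");
        if (!src) { perror("fopen"); return 1; }

        read_block(src, prev);                  /* first block only primes the test */
        for (int i = 0; i < 1000; i++) {
            read_block(src, cur);
            if (memcmp(prev, cur, BLOCK_LEN) == 0) {
                fprintf(stderr, "continuous test failed: repeated block\n");
                return 1;                       /* automatic shutdown on failure */
            }
            memcpy(prev, cur, BLOCK_LEN);
            /* cur would now be released to the consumer */
        }
        puts("1000 blocks passed the continuous test");
        fclose(src);
        return 0;
    }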

Looking through some old messages on the subject here on the Cryptography list, I found one from Francois Grieu back in July of 2010:

> The Smart Card industry uses True RNG a lot. There, a common line of
> thought is to use:
> - a hardware RNG, whose raw output (perhaps biased) is directly
> accessible for testing purposes (only), so that the software can check
> it in depth at startup and from time to time to ascertain that it is at
> least generating a fair amount of entropy
> - followed by appropriate post-processing in hardware (so as to gather
> entropy at all times), acting as a mixer/debiaser; e.g. something LFSR-based
> - followed by a crude software test (e.g. no bit stuck)
> - optionally followed by software postprocessing (the subject is
> debated; this software has to be proven to not include weakness, and the
> hardware + crude software test is certified to eliminate such weakness,
> so why bother, some say)
> 
> There is a standard, known as AIS31, on evaluating True RNG, which
> de-facto enforces the first three steps
> <https://www.bsi.bund.de/cae/servlet/contentblob/478130/publicationFile/30270/ais31e_pdf.pdf>
> which references
> <https://www.bsi.bund.de/cae/servlet/contentblob/478152/publicationFile/30275/ais20e_pdf.pdf>
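
To make the middle two steps of that pipeline concrete, here is a rough C sketch: raw bits are clocked into a 32-bit Galois LFSR acting as the mixer/debiaser, with a crude stuck-bit check applied to the raw stream first.  The rand()-based raw_bit() stand-in, the tap mask, and the run-length threshold of 64 are all arbitrary choices for illustration; a real design would use the actual noise source and a verified primitive polynomial:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the raw (possibly biased) hardware bit source. */
    static int raw_bit(void)
    {
        return rand() & 1;
    }

    int main(void)
    {
        uint32_t lfsr = 0xACE1u;                 /* any non-zero seed */
        const uint32_t taps = 0x80200003u;       /* illustrative tap mask only */
        int last = -1, run = 0;

        for (int i = 0; i < 1024; i++) {
            int b = raw_bit();

            /* crude software test on the raw stream: no bit stuck */
            if (b == last) {
                if (++run > 64) {
                    fprintf(stderr, "raw source looks stuck\n");
                    return 1;
                }
            } else {
                last = b;
                run = 0;
            }

            /* Galois LFSR step, then fold the raw bit into the state */
            lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & taps);
            lfsr ^= (uint32_t)b;
        }
        printf("mixed state after 1024 raw bits: %08x\n", (unsigned)lfsr);
        return 0;
    }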

More recently, David Johnston, who I gather was involved in the design of the Intel on-chip RNG, commented in a response to a question about malfunctions going undetected:

> That's what BIST is for. It's a FIPS and SP800-90 requirement.
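
(BIST here is built-in self-test.)  As an illustration of the kind of continuous health test SP 800-90B asks for, below is a rough C sketch of its repetition count test.  The claimed min-entropy of 2 bits per 8-bit sample, the rand()-based stand-in source, and the sample count are assumptions made for the sketch; the cutoff of 1 + ceil(20/H) targets the standard's 2^-20 false-alarm rate:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define H_CLAIM 2.0      /* assumed min-entropy per 8-bit sample */

    /* Stand-in for the raw noise source being monitored. */
    static uint8_t get_sample(void)
    {
        return (uint8_t)(rand() & 0xff);
    }

    int main(void)
    {
        /* cutoff C = 1 + ceil(20/H) targets a 2^-20 false-alarm rate */
        int cutoff = 1 + (int)ceil(20.0 / H_CLAIM);
        uint8_t last = get_sample();
        int count = 1;

        for (long i = 0; i < 100000; i++) {
            uint8_t s = get_sample();
            if (s == last) {
                if (++count >= cutoff) {
                    fprintf(stderr, "repetition count test failed\n");
                    return 1;
                }
            } else {
                last = s;
                count = 1;
            }
        }
        puts("repetition count test passed");
        return 0;
    }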


Of course, with generators like the Linux /dev/random, we're in an intermediate state, with hardware components, which can fail, feeding data into software components.

My own view on this is that there's no point in testing the output of a deterministic PRNG, but the moment you start getting information from the outside world, you should be validating it.  You can never prove that a data stream is random, but you can cheaply spot some common kinds of deviation from randomness - and if you're in a position to "pay" more (in computation/memory) you can spot many others.  You have no hope of spotting a sophisticated *attack*, and even spotting code bugs that destroy randomness can be hard, but it's hard to come up with an example of an actual real-world hardware failure that would slip through.  So you might as well do the testing.
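
For a sense of how cheap such checks can be, here is a small C sketch of a monobit (frequency) test over 20,000 bits, using the acceptance bounds from the old FIPS 140-1 statistical tests (9,654 to 10,346 ones).  Reading from /dev/urandom is only a stand-in for whatever outside-world source you would actually be validating:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char buf[2500];                 /* 20000 bits */
        FILE *f = fopen("/dev/urandom", "rb");   /* stand-in input stream */
        if (!f) { perror("fopen"); return 1; }
        if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
            fprintf(stderr, "short read\n");
            return 1;
        }
        fclose(f);

        int ones = 0;
        for (size_t i = 0; i < sizeof buf; i++)
            for (int b = 0; b < 8; b++)
                ones += (buf[i] >> b) & 1;

        /* FIPS 140-1 monobit bounds: 9654 < ones < 10346 */
        if (ones <= 9654 || ones >= 10346) {
            fprintf(stderr, "monobit test failed: %d ones\n", ones);
            return 1;
        }
        printf("monobit test passed: %d ones in 20000 bits\n", ones);
        return 0;
    }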

                                                        -- Jerry
