[Cryptography] RNG design principles

John Denker jsd at av8n.com
Tue Nov 29 15:19:18 EST 2016

On 11/28/2016 11:35 PM, Bill Cox wrote:

> I think I see some potential for consensus here: Read from /dev/urandom,
> but only once it is properly seeded.

There is no but.  It must be properly seeded, always.

> Predictable bits suck for crypto.

> We should lobby for /dev/urandom to
> stop feeding us predictable bits before proper seeding.

Yes, that would be a step in the right direction, but it's
not the whole story.  We face a fundamental problem that
cannot be solved without coordination between the kernel
team, the grub team, the hardware team, and others.

Seriously, von Neumann was right:  It is absolutely not
possible to produce a random distribution using software
alone.  No amount of lobbying is going to change this.

>  It is the OS's job to properly seed
> /dev/urandom and to make it block until this has happened.

No, it must never block.

Here is crypto design principle #137:

137) The system must provide a unified, good RNG device,
 namely one that has 100% availability along with quality
 high enough for any earthly purpose.

 Rationale and other observations:

 a) If /dev/urandom or /dev/random blocks, applications (and
  libraries such as openssl) will not rely on it.  Instead
  they will roll their own PRNG, with predictably terrible
  results.

 b) If /dev/random or /dev/urandom returns insufficiently
  unpredictable bits, applications (and libraries) will use
  it to produce insecure products.

 Other ways of expressing this principle include:
  -- Neither /dev/random nor /dev/urandom should be permitted
   to block.
  -- Neither /dev/random nor /dev/urandom should be permitted
   to emit untrustworthy bits.
  -- When designing the "main" RNG that is offered to users,
   if the designer ever needs to make the tradeoff between
   blocking and producing untrustworthy bits, the game is
   already lost.

 c) If this requires storing some randomness so that it is
  available super-early during the boot process, so be it.
  If this requires improvements to the kernel, grub, userland
  tools, and (!) hardware, so be it.

 d) Minor additional point:  It might be nice to assign a proper
  name to the unified trustworthy device, perhaps /dev/jrandom
  (as in J. Random Hacker) or /dev/vrandom (as in Very random,
  also the lexical successor to urandom).

  That serves as a way of advertising its features to applications
  (and libraries).  Then the advice is simple:  If the good device
  exists, use it.  On systems where it exists, it can be aliased
  as /dev/random and /dev/urandom et cetera, so that legacy
  applications continue to work, and indeed work better than
  before.

  On systems where the good device does not exist, applications
  are stuck with unanswerable questions about whether to use
  /dev/random or /dev/urandom, neither of which reliably solves
  the problem.
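One existing mechanism that approximates this principle is the Linux
getrandom(2) syscall (Linux 3.17+): it blocks at most once, very early,
until the kernel pool is initialized, and thereafter never blocks and
never emits unseeded bits.  A minimal sketch, assuming Linux and
Python 3.6+:

```python
# Sketch: obtain random bytes that are never unseeded and, after
# early boot, never block.  Assumes Linux 3.17+ / Python 3.6+.
import os

def get_random_bytes(n: int) -> bytes:
    """Return n bytes from the kernel RNG, never unseeded."""
    # flags=0 (the default): wait once at early boot for proper
    # seeding, then behave like a never-blocking /dev/urandom.
    return os.getrandom(n)

key = get_random_bytes(32)   # e.g. a 256-bit key
```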

Also:  Please let's not imagine that the right answer can be
expressed as a list of 3 or 4 pithy axioms.  It's very much
more complicated than that.

While we're in the neighborhood, here's a recommendation:

138) It is best to avoid the word "entropy" in this context.
 It almost never expresses the right idea.  If you don't
 need to be quantitative, you can use words such as
 "randomness" or "unpredictability" or "unguessability".
 At the next level of detail, you might choose to distinguish
 "pseudo randomness" from "hard randomness" (such as might
 come from a hardware RNG).

 If you wish to be quantitative, entropy is still almost
 never the right idea.  Depending somewhat on your threat
 model, you might want to quantify the /adamance/, which is
 a name for the Rényi functional H∞[P], although there are
 innumerable other functionals that may be of interest.
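 To make the distinction concrete, here is a small worked example (not
 from the original post) comparing the Shannon entropy H[P] with the
 Rényi H∞[P], i.e. the min-entropy, for a skewed distribution.  The
 two can disagree badly, which is why "entropy" unqualified misleads:

```python
# For a biased source, Shannon entropy overstates unguessability;
# H-infinity (min-entropy) measures the attacker's best single guess.
import math

P = [0.5, 0.125, 0.125, 0.125, 0.125]     # a biased 5-outcome source

shannon = -sum(p * math.log2(p) for p in P)
min_entropy = -math.log2(max(P))          # H∞[P]

print(shannon)      # 2.0 bits -- looks like two fair coin flips
print(min_entropy)  # 1.0 bit  -- but the best guess succeeds half the time
```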


139) I'd like to re-clarify a previous point about combining
sources of randomness.  We must distinguish ≥ (better than
or equal to) versus » (much better than):

 4 trustworthy sources
  ≥ 3 trustworthy sources
    ≥ 2 trustworthy sources
     ≥ 1 trustworthy source
      »  »  »  »  »  »  »  »  »  »  »  »  »  »
        »  »  »  »  »  »  »  »  »  »  »  »  »  »
          »  »  »  »  »  »  »  »  »  »  »  »  »  » 4 lousy sources
                                                  ≥ 3 lousy sources
                                                   ≥ 2 lousy sources
                                                    ≥ 1 lousy source.

Sure, combining sources is better than not combining sources.
I don't have a problem with that.  The problem is that it is
*not enough better* to make up for the difference between a
good source and a lousy source.
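The inequality chain can be read operationally.  One common combining
method (the post does not prescribe one) is XOR of independent equal-
length outputs: the result is at least as unpredictable as the best
single input, but combining cannot manufacture unpredictability the
inputs lack, which is exactly the » gap above.  A sketch:

```python
# XOR-combine independent sources.  If even one input is uniform and
# independent of the rest, the output is uniform; if all inputs are
# lousy, the output is still lousy.
def xor_combine(*sources: bytes) -> bytes:
    """XOR equal-length byte strings from independent sources."""
    n = len(sources[0])
    assert all(len(s) == n for s in sources)
    out = bytearray(n)
    for s in sources:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

good = bytes([0x3c, 0xa1, 0x55, 0x0f])     # stand-in for a good source
lousy = bytes([0x00, 0x00, 0x00, 0x01])    # stand-in for a lousy source
print(xor_combine(good, lousy).hex())      # -> 3ca1550e
```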

On 11/28/2016 10:12 PM, Bill Frantz wrote:
> I am very nervous about trusting only one source

I am not recommending a single source.  My point is that *any*
number of good sources is better than *any* number of lousy
sources.
The problem isn't the combining.  The problem is the reliance
(with or without combining) on sources that have no demonstrable
good properties.  Arguing that such-and-such source is "better
than nothing" is not acceptable, because lots of sources are
*not enough better* ... in contrast to actual well-engineered
good sources, which are available at modest cost.
