[Cryptography] Is Ron right on randomness

ianG iang at iang.org
Mon Nov 28 12:38:01 EST 2016


On 28/11/2016 11:46, Bill Cox wrote:
> On Sun, Nov 27, 2016 at 11:02 AM, ianG <iang at iang.org> wrote:
>
>     On 26/11/2016 09:38, Salz, Rich wrote:
>
>             Absolutely right.  Only TRNGs that make raw data available
>             should be trusted.  Further, the source should have a simple
>             physical model which is proven out by measurements,
>             preferably continuously.
>
>
>         Meanwhile, back in the real world...  What should OpenSSL,
>         given the wide range of platforms and the huge uninformed
>         community that depends on it, do?
>
>
>     It should read from /dev/urandom [1]
>
>
> Ian, would you agree that something on the platform needs to first
> ensure that /dev/random is well seeded before OpenSSL reads from
> /dev/urandom?

Yes absolutely!  That is a platform responsibility - see the thread with 
John Denker where he says:

    By way of example, here is something that might go into such
    a specification:  There should be *one device* ... or if for
    back-compatibility there are two, they should behave the same.
    The device should guarantee 100% availability and 100% high
    quality, high enough for all practical purposes.

    Let's be clear:  a proper RNG device should never block, *and*
    there should never be any temptation -- or even possibility --
    of using the device (or the corresponding intra-kernel function
    call) before the RNG is well and truly initialized.

This is the only interface or promise that makes sense to the 
general-purpose app, library, or developer.
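
For what it's worth, the getrandom(2) syscall that went into Linux 
3.17 is about as close as Linux gets to that promise: it blocks once, 
until the pool is first initialised, and never again afterwards.  A 
minimal sketch, assuming a kernel new enough to have the syscall (the 
raw syscall is used, since a libc wrapper may not be available):

    /* Sketch only: fill buf with len random bytes via getrandom(2).
     * Assumes Linux >= 3.17; uses the raw syscall because the libc
     * wrapper may be absent.  With flags == 0 it blocks only until
     * the kernel pool is initialised once, never afterwards. */
    #include <errno.h>
    #include <stddef.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int fill_random(void *buf, size_t len)
    {
        unsigned char *p = buf;
        while (len > 0) {
            long n = syscall(SYS_getrandom, p, len, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted while blocking: retry */
                return -1;      /* e.g. ENOSYS on pre-3.17 kernels */
            }
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }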

> I suggested perhaps OpenSSL should read 1024 bits from
> /dev/random, and all later bits from /dev/urandom, but then every app
> that needs cryptographically unpredictable numbers would
> independently reseed the entropy pool.

So you're throwing the responsibility back onto OpenSSL, on the 
assumption that the platform hasn't seeded itself well.  That's what 
I'd advise against, because (insert John's list) OpenSSL has only the 
vaguest understanding of what Linux is up to at any one point in time 
/ release / etc.  That's not to say the devs don't know more, but what 
goes into the code benefits from being simple and standardised.
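
Concretely, the simple, standardised thing is just a plain read of the 
device, with nothing cleverer than short-read handling.  A minimal 
sketch, assuming the platform keeps the promise above (urandom already 
well seeded):

    /* Sketch only: read len bytes from /dev/urandom.  Assumes the
     * platform has already seeded the pool; no /dev/random
     * fallback, no entropy accounting, nothing to get wrong. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stddef.h>
    #include <unistd.h>

    static int urandom_bytes(unsigned char *buf, size_t len)
    {
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return -1;
        size_t got = 0;
        while (got < len) {
            ssize_t n = read(fd, buf + got, len - got);
            if (n < 0 && errno == EINTR)
                continue;       /* interrupted: retry */
            if (n <= 0) {
                close(fd);
                return -1;      /* genuine I/O failure */
            }
            got += (size_t)n;
        }
        close(fd);
        return 0;
    }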

Note also that random and urandom are the same on *BSD, which I guess 
goes for all of Mac OS X and all of Android.  That is, a far larger 
slice of the world than Linux, albeit on the client side rather than 
the server side.
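
On those platforms the idiomatic interface isn't even the device file: 
arc4random_buf(3) is seeded by the kernel, never blocks, and needs no 
file descriptor.  A sketch:

    /* Sketch only: BSD / OS X style.  arc4random_buf(3) cannot
     * fail, never blocks, and is seeded by the kernel. */
    #include <stdlib.h>

    static void key_bytes(unsigned char *buf, size_t len)
    {
        arc4random_buf(buf, len);
    }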


> Maybe Linux could provide a way to read total entropy generated since
> boot?  That could be used to compute how much data to read from
> /dev/random, and in most cases it would be 0.


Not in my opinion.  Linux should provide good random numbers 
(unpredictable to the adversary) from urandom.  End of story.  If it 
doesn't, the user is screwed, and Linux is broken.
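
For completeness: the nearest thing Linux exposes is 
/proc/sys/kernel/random/entropy_avail, which is an instantaneous 
estimate of the bits in the input pool, not a total since boot, and 
whose meaning shifts between kernel versions.  A sketch of reading it, 
as an illustration of the second-guessing involved rather than a 
recommendation:

    /* Sketch only: read the kernel's instantaneous entropy
     * estimate.  This is a point-in-time guess in bits, not
     * cumulative entropy since boot, and its semantics vary by
     * kernel version. */
    #include <stdio.h>

    static int entropy_estimate(void)
    {
        FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
        int bits = -1;
        if (!f)
            return -1;
        if (fscanf(f, "%d", &bits) != 1)
            bits = -1;
        fclose(f);
        return bits;
    }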

It's not efficient for any general app or general library to 
second-guess this problem.  Only the paranoid can afford the luxury of 
solving the RNG problem themselves, and OpenSSL is a general-purpose 
crypto library delivering to general-purpose applications.

iang

PS: taking up Rich's challenge to reduce N+1 to N ;-)

