[Cryptography] OpenSSL and random
waywardgeek at gmail.com
Mon Nov 28 11:30:12 EST 2016
On Mon, Nov 28, 2016 at 5:06 AM, Salz, Rich <rsalz at akamai.com> wrote:
> Ian: Use /dev/urandom
> Bill: Use /dev/random
> So I have a new basic theory about randomness: ask N crypto folks and get
> at least N+1 opinions.
> I look forward to the day when the community can come to consensus. Until
> then, OpenSSL will proceed as best as it can and get slammed for it at some
I hate to suggest using /dev/random because it blocks when it does not need
to, and it is susceptible to simple denial-of-service attacks
(cat /dev/random > /dev/null). On the other hand, /dev/urandom does not
block when it should. Here is a quote from the urandom man page on my
Ubuntu 14.04 system:
"A read from the /dev/urandom device will not block waiting for more
entropy. As a result, if there is not sufficient entropy in the entropy
pool, the returned values are theoretically vulnerable to a cryptographic
attack on the algorithms used by the driver."
So, my simple answer is, "Don't change OpenSSL". If the OpenSSL folks are
willing to do a bit more Linux-specific work, then I would instead suggest:
Read N bits (1024, say) from /dev/random when OpenSSL first needs
random data, and throw them away. Thereafter, read only from /dev/urandom.
Regardless, bugs like the one in rngd that cause RdRand to provide 100% of
the entropy are currently causing /dev/random on many Linux systems to
initially return random numbers that depend on only one unauditable
source. That's not OpenSSL's fault, but it needs to be addressed, IMO. I
believe that no unauditable random source should be incrementing the
internal entropy estimate of /dev/random. Those that do should be
carefully audited, and preferably their health should be monitored over
time.