[Cryptography] Proper Entropy Source

John Denker jsd at av8n.com
Wed Jan 22 06:53:53 EST 2020


On 1/22/20 12:09 AM, Ryan Carboni wrote:

> No, the McNamara fallacy is a valid point. Reality is not merely a
> series of numbers to be inventoried by other people or machines,
> reality is far more complex and overlooking the complexity will
> directly result in failure. The scrypt paper estimates the value of 64
> bits of security to be in the millions for password derivation
> functions, Schneier's paper on Minimal Key Lengths estimates the value
> of 60 bits of security to be $300k in FPGA, COPACOBANA costs
> $10,000...
> 
> Factually at $1 million, there are more efficient ways to do things. (
> https://www.theregister.co.uk/2019/08/06/att_unlock_fraud_hack_charges/
> https://en.wikipedia.org/wiki/Greek_wiretapping_case_2004%E2%80%9305 )
> 
> Regardless, if you have an adversary capable of benchmarking
> production lots of processors for intrinsic properties, be glad that
> they are breaking the RNG that way as opposed to other ways...

Alas, Mr. Carboni is falling prey to several of the same 
fallacies that Mr. McNamara did, namely:

a) Just because you can throw a bunch of numbers together
and come up with a quantitative answer (e.g. "64"), that
does not mean it is the right calculation, much less the
right answer.

b) OTOH just because some quantitative analyses are wrong
does not prove that a hand-wavy analysis is better.  The
latter is likely to be just as wrong, or worse.

c) Just because you want it to be true, and think it might
be true, that doesn't mean it is true.

d) It is easy to underestimate the complexity of a proper RNG.


In particular, in the present case, the calculation quoted
above tells us how random the proposed scheme (involving
memory timing) "needs" to be.  The question of how random
it actually *is* remains open and unaddressed.  We have
not seen proof, or even evidence.

By way of analogy, NASA engineers calculated that the
Challenger needed reliable O-rings.  Needing them is not
the same as having them.

Here's a point that may be of some help in understanding
the complexity of the issue.  This was mentioned a few
years ago but bears repeating:

It helps to think in terms of a trichotomy:
 -- some processes are reliably deterministic
 -- some processes are reliably random
 -- some processes are what I call /squish/,
  meaning they are neither reliably random
  nor reliably deterministic.

Let's be clear:  Just because it is not reliably deterministic
does *not* mean it is reliably random.

The poster child is the time-of-day at which the program
was started.  I cannot reliably predict a_priori the time
at which some user will start the program I've written ...
but that does *not* make it random.  It's squish.  An
adversary may well be able to closely estimate the time
a_posteriori, and may even be able to manipulate it
to some extent.

Seeding your PRNG with the time of day is not good enough.
Really not.
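
To make that concrete, here is a minimal sketch (Python; the
program, the one-day search window, and the 128-bit token are
all hypothetical) of an adversary recovering a time-of-day
seed a_posteriori by brute force:

  import random
  import time

  def make_token(seed):
      # hypothetical program: seed a PRNG with startup time,
      # then emit a "secret" 128-bit token
      return random.Random(seed).getrandbits(128)

  startup = int(time.time())     # the "unpredictable" seed
  token = make_token(startup)    # observed by the adversary

  # Adversary: bound the startup time to within a day, then
  # enumerate.  86400 guesses -- about a second of CPU time.
  guess = int(time.time())
  for s in range(guess - 86400, guess + 1):
      if make_token(s) == token:
          print("seed recovered:", s)
          break

Once the seed is recovered, every output the PRNG ever
produced falls with it.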

Somewhat more generally, a squishy signal may be /correlated/
with or /conditional/ upon something else.  That means it
is random or nonrandom, depending on the conditions.  An
encrypted counter is a good example; much depends on whether
the encryption key is known.
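
A minimal sketch of that conditional behavior (Python, standard
library only; an HMAC-SHA256 PRF of the counter stands in for
the encryption, and the key is of course made up):

  import hmac, hashlib

  key = b"sixteen byte key"   # known to the adversary, or not

  def block(counter):
      msg = counter.to_bytes(8, "big")
      return hmac.new(key, msg, hashlib.sha256).digest()

  stream = b"".join(block(i) for i in range(4))
  print(stream.hex())

Without the key, the stream passes every statistical test you
care to run.  With the key, it contains exactly zero entropy.
Same bits either way; the randomness is conditional on what
the adversary knows.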

Again:  Just because you can't easily predict it does
not mean it is random enough for any serious purpose
(cryptologic or otherwise).

Also keep in mind the fundamental equation:
	squish + squish = squish.

In other words: Combining multiple squishy sources does 
*not* produce randomness.  It might make it harder for
you to predict, but again, that is not the criterion.
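
For instance (a minimal sketch in Python; the two sources --
startup time and process ID -- and the search window are
hypothetical), hashing two squishy inputs together merely
gives the adversary a joint space to enumerate, and the
product of two small spaces is still small:

  import hashlib, os, time

  def derive(t, pid):
      # two squishy inputs, "mixed" with a hash
      return hashlib.sha256(b"%d:%d" % (t, pid)).digest()

  secret = derive(int(time.time()), os.getpid())

  def attack(observed, t_guess, window=60):
      # 60 seconds x 2^15 pids = ~2 million guesses, a few
      # seconds of CPU.  Even a *full day* of timestamps is
      # only ~2^31.  (Assumes the default pid_max of 32768.)
      for t in range(t_guess - window, t_guess + 1):
          for pid in range(1, 2**15):
              if derive(t, pid) == observed:
                  return t, pid
      return None

  print(attack(secret, int(time.time())))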

I trust thermodynamics.  The second law can be used to
establish a reliable lower bound on the entropy density
of certain sources.  For anything else(*), I'm from
Missouri:  show me the proof.

Cheap, provably-good sources exist.  Given the choice
between that and a not-provably-good source, it is
hard to imagine any sane reason for choosing the latter.

(*) Quantum noise is not fundamentally different from
thermal noise.  The theoretical analysis is the same
either way.  Quantum fluctuations are not in any sense
"better" or more random than thermal fluctuations.  In
practice they're usually smaller and harder to deal with.
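
For scale, here is the standard Johnson-Nyquist calculation
for a thermal source (a back-of-the-envelope sketch; the
component values -- 10 kOhm, room temperature, 1 MHz of
bandwidth -- are merely illustrative):

  from math import sqrt

  k = 1.380649e-23   # Boltzmann constant, J/K
  T = 300.0          # temperature, kelvin
  R = 10e3           # resistance, ohms
  B = 1e6            # analog bandwidth, hertz

  # open-circuit thermal noise voltage: V_rms = sqrt(4 k T R B)
  v_rms = sqrt(4 * k * T * R * B)
  print("V_rms = %.1f microvolts" % (v_rms * 1e6))   # ~12.9 uV

The physics pins down the noise power, which is what makes a
lower bound on the entropy density possible; turning it into
an actual bound also requires characterizing the digitizer.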

There is nothing simple about a proper RNG.

A PRNG depends on computational complexity, and also
depends on a seed.  We still need a good RNG to produce
the seed.

I have never seen a hardware process that produces 100%
entropy density.  I suspect no such thing exists.  There
is always "some" squish.

Not all squish is equally troublesome.  Therefore any
decent RNG depends on hardware *and* on computational
complexity.  The former is used to create a signal that
has almost 100% entropy density, and the latter is used
to hide the remaining squish so it cannot feasibly be
exploited.
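
Schematically (a Python sketch; read_raw_samples() is a
hypothetical stand-in for the hardware interface, and the 0.5
bits-per-byte figure is an assumed lower bound that would have
to be established from the physics of the actual source):

  import hashlib

  ENTROPY_PER_BYTE = 0.5   # assumed lower bound, bits per byte

  def read_raw_samples(nbytes):
      # placeholder: digitized noise from the hardware source
      raise NotImplementedError

  def get_random_bytes(n=32):
      need_bits = 8 * n
      # read enough raw input that, by the lower bound, it
      # holds comfortably more entropy than we will put out ...
      nraw = 2 * int(need_bits / ENTROPY_PER_BYTE)
      raw = read_raw_samples(nraw)
      # ... then let the hash concentrate it and hide the squish
      return hashlib.sha256(raw).digest()[:n]

The hardware supplies the entropy; the hash concentrates it
toward 100% density and makes whatever squish remains
infeasible to exploit.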

On Wed, Jan 22, 2020 at 1:20 AM Theodore Y. Ts'o
<tytso at mit.edu> wrote in part:

> I generally consider statistical testing for randomness to be worse
> than useless, since it causes people to get a false sense of 
> security.

We agree that such tests are very, very widely misused.

Specifically:  An RNG that fails a statistical test is
surely broken, but one that passes such a test is not
guaranteed to be any good.  h/t Dijkstra.
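
An easy way to see the asymmetry (Python sketch; the monobit
count is the crudest of the standard tests, chosen here only
for brevity):

  import hashlib

  # SHA-256 of 0, 1, 2, ...: perfectly deterministic, zero
  # entropy, yet statistically unremarkable.
  stream = b"".join(hashlib.sha256(i.to_bytes(8, "big")).digest()
                    for i in range(4096))

  ones = sum(bin(byte).count("1") for byte in stream)
  print("fraction of ones: %.4f" % (ones / (8 * len(stream))))
  # prints something very close to 0.5000 -- the test is happy

The test can tell you the stream *looks* random; it cannot
tell you there is a single bit of entropy in it.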

