# [Cryptography] A TRNG Review Per Day: TrueRNG

Bill Cox waywardgeek at gmail.com
Wed Nov 12 18:11:34 EST 2014

```
On Wed, Nov 12, 2014 at 2:21 PM, Jerry Leichter <leichter at lrw.com> wrote:

> On Nov 12, 2014, at 12:45 PM, Bill Cox <waywardgeek at gmail.com> wrote:
>
> ...However, a small correlation is OK.  We can just subtract any
> measurable correlation from the estimated entropy per source.
>
> It's not so simple.  Correlation and entropy in an active attack
> environment don't play well.
>
> Suppose you had source R1.  I use R1 delayed by one bit as a second source
> R2.  Any correlation between R1 and R2 is an autocorrelation at length 1 of
> R1 with itself - one of the things we would presumably check for to begin
> with, and it's something even fairly simple RNG's would be expected to
> attain.  And yet if someone learns either R1 or R2, there is no randomness
> left.  Also, the combined entropy had better be exactly that of R1 (or R2)
> or we could bootstrap arbitrarily high entropy out of any source by simply
> combining itself with itself at different lags.  (Note that what I'm
> suggesting outputs bits at the same rate as R1.  If you actually used two
> successive bits of R1 to generate one bit of output, you would have gained
> entropy per output bit.)
>
> This is one of those situations (common in tests for RNG's) where if we
> *find* something (correlation), we know the RNG is bad; but if we fail to
> find it, we haven't really learned anything.
>                                                         -- Jerry
>
This certainly is true for CPRNGs, where finding any correlation is deadly
to the algorithm.  However, as far as I know, there is no such thing as a
hardware RNG that generates data that is undetectably non-random.  There
are always correlations.  They all require "whitening" before the data can
be used for crypto.  Some of the less impressive TRNGs whiten internally,
never giving you access to the raw data from the entropy source.  That
destroys any chance of verifiability.

There are many ways to do whitening.  For example, I used to use the parity
of 80 sequential bits from a Zener-noise-based RNG as the output, and that
passed the old Diehard tests.  However, I wasted most of the entropy in the
whitening.  A better solution is to get a good estimate of the entropy and
compress the data just enough, through a cryptographic sponge, to get it
where it needs to be.

Some TRNGs, like Turbid, have provable lower bounds on their average
entropy output.  That's helpful in whitening.  The Turbid paper showed that
you don't need to compress the entropy much through the sponge to reach
undetectable levels of non-randomness.  It's pretty cool.

Bill
```
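Jerry's lag-1 point can be illustrated with a short sketch (my own toy code, not from the thread): a lag-1 autocorrelation check of the kind he mentions, followed by a demonstration that combining R1 with its own one-bit-delayed copy R2 yields a stream fully determined by R1 alone, so no entropy is gained.

```python
import secrets

# Generate a random bit stream to play the role of R1.
r1 = [secrets.randbits(1) for _ in range(100_000)]

# R2 is R1 delayed by one bit: knowing either stream reveals the other.
r2 = r1[1:] + [0]

def lag1_autocorrelation(bits):
    """Fraction of adjacent bit pairs that agree, minus the 0.5 baseline."""
    agree = sum(a == b for a, b in zip(bits, bits[1:]))
    return agree / (len(bits) - 1) - 0.5

# A good source shows lag-1 autocorrelation near zero -- the very check
# that would also report R1 and R2 as "uncorrelated" sources.
print("lag-1 autocorrelation:", lag1_autocorrelation(r1))

# XOR-combining R1 with its delayed copy gives a stream of the same rate,
# but one that anyone who learns R1 can reconstruct exactly.
combined = [a ^ b for a, b in zip(r1, r2)]
```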
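Bill's parity trick can be sketched as follows (a toy reconstruction, not his actual code; the 0.6 bias is an invented figure for the simulated source): fold each block of 80 raw bits down to one output bit via parity. Even with noticeably biased input bits, the parity of 80 of them is extremely close to unbiased, but 79 of every 80 bits of potential entropy are discarded.

```python
import random

def parity_whiten(raw_bits, block=80):
    """Reduce each block of raw bits to its parity (XOR of the block)."""
    out = []
    for i in range(0, len(raw_bits) - block + 1, block):
        p = 0
        for b in raw_bits[i:i + block]:
            p ^= b
        out.append(p)
    return out

# Simulate a biased noise source: P(bit = 1) = 0.6 instead of 0.5.
rng = random.Random(1234)
raw = [1 if rng.random() < 0.6 else 0 for _ in range(80 * 10_000)]

whitened = parity_whiten(raw)
bias = sum(whitened) / len(whitened) - 0.5
print(f"{len(raw)} raw bits -> {len(whitened)} output bits, residual bias {bias:+.4f}")
```

For a source with per-bit bias, the residual bias of the parity of n bits falls off as (1 - 2p)^n / 2, which for p = 0.6 and n = 80 is astronomically small -- hence passing Diehard while wasting most of the input entropy.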
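The "compress just enough through a sponge" idea can be sketched with SHAKE-256 standing in for the sponge (a minimal illustration, not Turbid's actual construction; the entropy estimate and 2:1 safety margin are assumptions for the example): size the output so it is conditioned on comfortably more estimated input entropy than its own length.

```python
import hashlib

def sponge_extract(raw: bytes, entropy_per_byte: float, margin: float = 2.0) -> bytes:
    """Condition raw samples through SHAKE-256 (a sponge construction),
    sizing the output at the estimated input entropy divided by `margin`."""
    est_entropy_bits = len(raw) * entropy_per_byte
    out_bytes = int(est_entropy_bits / margin) // 8
    return hashlib.shake_256(raw).digest(out_bytes)

# Example: 1024 raw bytes estimated at 4 bits of entropy per byte (a made-up
# figure; real use would feed actual noise samples and a measured estimate).
raw = bytes(range(256)) * 4
out = sponge_extract(raw, entropy_per_byte=4.0)
print(len(raw), "raw bytes ->", len(out), "conditioned bytes")
```

The design point is the ratio: with a sound lower bound on input entropy, even a modest compression factor leaves the residual non-uniformity of the output far below anything detectable.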