[Cryptography] Uncorrelated sequence length, was: A TRNG review per day
waywardgeek at gmail.com
Sat Oct 25 05:51:03 EDT 2014
On Fri, Oct 24, 2014 at 7:32 PM, David Leon Gil <coruus at gmail.com> wrote:
> On Fri, Oct 24, 2014 at 5:01 PM, Bill Cox <waywardgeek at gmail.com> wrote:
> > On Fri, Oct 24, 2014 at 1:56 PM, Bear <bear at sonic.net> wrote:
> >> On Fri, 2014-10-24 at 05:31 -0400, Bill Cox wrote:
> >> > So, why do we need true random data at high speed so badly that Intel
> >> > decided to build in a device requiring large capacitors and its own
> >> > power regulator? The truth is, we don't need high speed. As many
> >> > people have argued here, all any single system requires is 256 bits of
> >> > true random data. That's all they *ever* need, so long as it remains
> >> > secret (which is hard), and so long as a cryptographically secure PRNG
> >> > (CPRNG) is used to generate all future cryptographically pseudo-random
> >> > data (which is comparatively easy).
> The current provable-security bounds on recovering from state
> compromise require anywhere from 2 KiB to 20 KiB of input entropy.
> See section 5.3 of
> So, perhaps some applications would want a fairly large amount of entropy.
Interesting paper. Here's how I would recover much faster.
I write 512 bits containing over 400 bits of entropy in one ioctl call, at
a minimum. I would have to look at the kernel code to see exactly how it
works, but assume:
1) The kernel sucks in all 512 bits at once, blocking all other users of
/dev/random and /dev/urandom, and then performs a secure cryptographic
one-way hash over its entire entropy pool.
2) The cryptographic hash is ideal in the sense that its output cannot be
distinguished from true random data, and it cannot be reversed short of
brute-force guessing all 2^n input possibilities.
3) No attacker can guess the state of the pool when no single state has
probability higher than 1/2^256.
Under these assumptions, the pool recovers from a state compromise in one
call. The pool is not full, but no state has a probability higher than
about 1/2^400 so it does not matter.
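Under those assumptions, recovery is literally one hash over the old state plus the fresh input. A minimal sketch (SHA-512 standing in for the ideal hash; the function name is mine, not anything in the kernel):

```python
import hashlib
import os

def reseed(pool_state: bytes, fresh: bytes) -> bytes:
    """One-shot reseed: hash the old state together with the fresh input.

    If the hash behaves like an ideal one-way function and `fresh`
    carries enough min-entropy (here, >400 bits in 512), the new state
    is unpredictable even to an attacker who knew `pool_state` fully.
    """
    return hashlib.sha512(pool_state + fresh).digest()

# The attacker knows the old state completely...
compromised = b"\x00" * 64
# ...but the next write carries 512 bits (64 bytes) of fresh entropy.
fresh_entropy = os.urandom(64)
recovered = reseed(compromised, fresh_entropy)
```

The point is that a single call suffices: knowledge of `compromised` tells the attacker nothing about `recovered` without guessing the fresh input.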
However, I just went and looked a bit at random.c. I would have to look a
*lot* harder to feel confident I am reading it right, but at first glance,
its mixing function, _mix_pool_bytes, does not satisfy my assumptions
above. It does not appear to be a cryptographically secure hash function.
It simply stirs data into the pool weakly, counting on a large amount of
input entropy to make that OK.
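To illustrate the concern with a toy example (this is a generic linear stir I made up, not the actual _mix_pool_bytes code): a mixer built only from rotations and XORs is invertible, so an attacker who learns the pool state can run the mixing backwards exactly, something a one-way hash is supposed to rule out.

```python
def weak_stir(state: int, word: int) -> int:
    """Toy linear mixer: rotate the 32-bit state left by 7, XOR in a word.
    Every step is linear and invertible, hence not one-way."""
    rotated = ((state << 7) | (state >> (32 - 7))) & 0xFFFFFFFF
    return rotated ^ word

def unstir(mixed: int, word: int) -> int:
    """Recover the previous state exactly -- no brute force needed."""
    rotated = mixed ^ word
    return ((rotated >> 7) | (rotated << (32 - 7))) & 0xFFFFFFFF

s = 0x12345678
m = weak_stir(s, 0xDEADBEEF)
assert unstir(m, 0xDEADBEEF) == s
```

A weak stir can still be fine for *accumulating* entropy, but it cannot by itself make a compromised state unpredictable again.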
This seems insecure to me, but I suppose there are probably reasons for the
Linux kernel to weakly mix the input entropy rather than performing a
secure hash. If I were writing that code, I'd turn Blake2b into a sponge
(similar to Lyra2), and would only mix in entropy once I'd collected at
least 256 bits' worth. That way, the state becomes secure again on every
mix-in.
Can I use a simple hack to ensure the Linux entropy pool is secure after
every write to /dev/random? I am thinking of force-feeding it 4096 bits
from the Keccak sponge, rather than just 512 bits as I do now. Is my
reading of random.c's mixing accurate? Will this hack ensure that the
entropy pool is secure after each write?
Given that they write the entropy pool to disk on shutdown, instant
recovery from state compromise seems like an important goal.
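Squeezing 4096 bits out of a Keccak sponge is easy with SHAKE-256 (a Keccak-based extendable-output function); the seed source below is just for the sketch, and the resulting bytes would then be fed to the kernel pool (e.g. via the RNDADDENTROPY ioctl, which requires root):

```python
import hashlib
import os

# Absorb a seed into the Keccak sponge, then squeeze out 4096 bits
# (512 bytes) to force-feed the kernel's entropy pool.
seed = os.urandom(64)
payload = hashlib.shake_256(seed).digest(512)  # 4096 bits
```

Whether 4096 bits is enough to swamp a weak mixer depends entirely on how _mix_pool_bytes spreads input across the pool, which is exactly the part of random.c I am unsure about.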