/dev/random is probably not

Charles M. Hannum root at ihack.net
Sun Jul 3 07:42:21 EDT 2005


On Sunday 03 July 2005 05:21, Don Davis wrote:
> > From: "Charles M. Hannum" <root at ihack.net>
> > Date: Fri, 1 Jul 2005 17:08:50 +0000
> >
> > While I have found no fault with the original analysis,
> > ...I have found three major problems with the way it
> > is implemented in current systems.
>
> hi, mr. hannum -
>
> i'm sorry, but none of your three "problems" is substantial.
>
> > a) Most modern IDE drives... ship with write-behind
> >    caching enabled.
>
> i've addressed this caching question quite a bit
> over the years.  for an early mention of the issue,
> please see:
>   http://www.cs.berkeley.edu/~daw/rnd/disk-randomness
> anyway, to deal with caching controllers, any disk rng
> needs to discard sub-millisecond access-times, or at
> least needs not to count such fast accesses as contributing
> any entropy to the RNG's entropy-pool.  otherwise, the
> rng will tend to overestimate how much entropy is in
> the entropy pool, and /dev/random will tend to become
> no more secure than /dev/urandom.
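The filtering rule described above can be sketched in a few lines. This is an illustrative sketch only -- the 1 ms threshold and the one-bit credit are assumptions chosen for the example, not any kernel's actual policy:

```python
# Illustrative sketch, not kernel code: credit entropy for a disk access
# only when its completion time suggests a real mechanical seek rather
# than a hit in the drive's write-behind cache.

CACHE_THRESHOLD_US = 1000  # assumption: sub-millisecond completions are cache hits

def entropy_credit(access_time_us: int) -> int:
    """Entropy bits to credit for one access time, in microseconds."""
    if access_time_us < CACHE_THRESHOLD_US:
        return 0   # too fast to be a seek; contribute nothing to the pool
    return 1       # conservative single-bit credit for a genuine seek
```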

Remember that I specifically stated that I'm talking about problems with 
real-world implementations, not your original analysis.  Unfortunately, a few 
implementations (FreeBSD's implementation of "Yarrow" and NetBSD's "rnd" come 
to mind immediately) do not appear to implement the behavior you describe -- 
they simply always count disk I/O as contributing some entropy (using the 
minimum of the first-, second-, and third-order differentials, which is likely 
to be non-zero, but small and predictable, due to other timing variance).
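The differential scheme in question can be modeled as follows. The class and names are illustrative -- a simplified sketch of a min-of-differentials estimator, not the actual FreeBSD or NetBSD source:

```python
class DeltaEstimator:
    """Simplified model of a min-of-differentials entropy estimator."""

    def __init__(self):
        self.last_time = 0
        self.last_delta = 0
        self.last_delta2 = 0

    def credit(self, t: int) -> int:
        """Entropy bits credited for an event at timestamp t."""
        delta = abs(t - self.last_time)           # first-order differential
        delta2 = abs(delta - self.last_delta)     # second-order differential
        delta3 = abs(delta2 - self.last_delta2)   # third-order differential
        self.last_time = t
        self.last_delta = delta
        self.last_delta2 = delta2
        d = min(delta, delta2, delta3)
        # Credit roughly log2 of the smallest differential.  Perfectly
        # regular cache-hit timings with a few microseconds of jitter still
        # produce a small non-zero credit on every event -- the overestimate
        # described above.
        return d.bit_length() - 1 if d > 0 else 0
```

Feeding it evenly spaced timestamps with small jitter (e.g. 1000, 2010, 3005, 4020) yields a small positive credit on every event after the first, even though the timings carry almost no real entropy.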

> > b) At least one implementation uses *all* "disk" type
> >    devices...
>
> yes, that would be broken, though it's not a total
> security loss, as long as the machine has at least one
> hard drive.  this memory-disk question too was raised
> and answered, long ago.

Again, this problem exists in real-world implementations.

> > By timing how long this higher-level operation (read(),
> > or possibly even a remote request via HTTP, SMTP, etc.)
> > takes, we can apply an adjustment factor and determine
> > with a reasonable probability how long the actual disk
> > I/O took.
>
> this remote-timing approach won't work in any useful way.
>
> you'd need to get the same timing accuracy as the
> /dev/random driver gets;

No, you just need to be able to estimate it with a high probability.  I don't 
see any reason this is not possible, given that response times are directly 
proportional to the interrupt timing.  This may be especially bad in 
implementations such as OpenBSD and NetBSD which limit the precision of the 
time samples to 1 microsecond.
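The arithmetic is easy to illustrate (the error bound here is assumed for the example, not measured): if the driver samples time at 1 microsecond granularity and a remote observer can bound that sample to within +/- err microseconds of their own estimate, only about log2(2*err + 1) bits of the sample remain unknown to them.

```python
import math

def residual_bits(err_us: int) -> float:
    """Bits of a 1-us-granularity time sample still unknown to an
    observer who has bounded it to within +/- err_us of their estimate."""
    return math.log2(2 * err_us + 1)

# Example: pinning an interrupt timestamp to within +/- 50 us leaves only
# about 6.7 bits of that sample unknown, regardless of how many bits the
# driver credited for it.
```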

Also, I don't buy for a picosecond that you have to gather "all" timings in 
order to predict the output.  As we know from countless other attacks, 
anything that gives you some bits will reduce the search space and therefore 
weaken the system, even if it does not directly give you the result.
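The search-space point is plain arithmetic: learning k bits of an n-bit unknown divides the brute-force space by 2**k, even though it does not hand you the value outright. A trivial sketch:

```python
def remaining_search_space(n_bits: int, leaked_bits: int) -> int:
    """Candidates left after leaked_bits of an n_bits unknown are learned."""
    return 2 ** (n_bits - leaked_bits)

# A 128-bit pool state with 20 bits leaked through timing still has
# 2**108 candidates -- infeasible to search today, but measurably weaker,
# and repeated partial leaks compound.
```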

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com


