entropy depletion (was: SSL/TLS passive sniffing)

Ian G iang at systemics.com
Sun Jan 9 06:32:19 EST 2005


William Allen Simpson wrote:

> There are already other worthy comments in the thread(s).


This is a great post.  One can't stress enough
that programmers need programming guidance,
not arcane information-theoretic concepts.

> We are using
> computational devices, and therefore computational infeasibility is the
> standard that we must meet.  We _NEED_ "unpredictability" rather than
> "pure entropy".


By this, do you mean that /dev/*random should deliver
unpredictability, and /dev/entropy should deliver ...
pure entropy?

> So, here are my handy practical guidelines:
>
> (1) As Metzger so wisely points out, the implementations of /dev/random,
> /dev/urandom, etc. require careful auditing.  Folks have a tendency to
> "improve" things over time, without a firm understanding of the
> underlying requirements.


Right, but in the big picture, this is one of those
frequently omitted steps.  Why?  Coders don't have
time to acquire the knowledge or to fold all the
theory of RNGs into their work, and as so much of
today's software is based on open source, it is
becoming the baseline that no theoretical foundation
is required in order to do that work.  Whereas before,
companies could (or would) make a pretence at such
a foundation, today it is acceptable to say that
you've read the Yarrow paper and are therefore
qualified.

I don't think this is a bad thing; I'd rather have a
crappy /dev/random than none at all.  But if we
are to improve the auditing, etc., what we would
need is information on just _what that means_.

E.g., a sort of "webtrust-CA" list of steps to take
in checking that the implementation meets the
desiderata.

> (2) The non-blocking nature of /dev/urandom is misunderstood.  In fact,
> /dev/urandom should block while it doesn't have enough entropy to reach
> its secure state.  Once it reaches that state, there is no future need
> to block.


If that's the definition we like, then we should
nail that definition down, get it written in stone, and
start clubbing people with it (*).
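
To help nail it down, here's a minimal sketch, assuming
the definition above, of how an application could
approximate it in userspace today.  The wrapper and its
names (wait_until_seeded, get_bytes) are mine, not any
existing API: it blocks exactly once, by pulling an
initial read through the blocking device, and never
blocks again afterwards.

    /* Sketch only: approximate definition (2) in userspace by
     * blocking once on /dev/random (which only returns bytes the
     * kernel believes are backed by collected entropy), then
     * serving all later requests from non-blocking /dev/urandom. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int seeded = 0;

    static void wait_until_seeded(void)
    {
        unsigned char seed[16];
        ssize_t got = 0;
        int fd = open("/dev/random", O_RDONLY);
        if (fd < 0) { perror("/dev/random"); exit(1); }
        while (got < (ssize_t) sizeof seed) {   /* may block here */
            ssize_t n = read(fd, seed + got, sizeof seed - got);
            if (n <= 0) { perror("read"); exit(1); }
            got += n;
        }
        close(fd);
        /* the bytes themselves are discarded; the read is only a
         * liveness test showing the pool has produced real output */
        seeded = 1;
    }

    /* Post-good-state: by definition (2), never blocks again. */
    ssize_t get_bytes(unsigned char *buf, size_t len)
    {
        if (!seeded)
            wait_until_seeded();
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t n = read(fd, buf, len);
        close(fd);
        return n;
    }

One possible block at startup, none thereafter: that's
the whole contract from the caller's side.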

> (2A) Of course, periodically refreshing the secure state is a good
> thing, to overcome any possible deficiencies or cycles in the PRNG.


As long as this doesn't affect definition (2), it
matters not, at the level of the definition at least.
This note belongs in the "implementation notes",
as do (2B) and (2C).
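
For those implementation notes, here is a hedged sketch
of what (2A) could look like inside the generator.  It
is loosely Yarrow-flavoured but much simplified; the
function names and the reseed interval are invented for
illustration only.

    /* Sketch of (2A): a hash-based PRNG that periodically folds
     * fresh entropy into its state without ever blocking or
     * emptying that state.  Link with -lcrypto for SHA256(). */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <string.h>

    #define RESEED_INTERVAL 1024  /* outputs between reseeds; arbitrary */

    static unsigned char state[SHA256_DIGEST_LENGTH];
    static uint64_t counter;

    /* Reseed: state = H(state || fresh entropy). */
    void prng_reseed(const unsigned char *entropy, size_t len)
    {
        unsigned char buf[SHA256_DIGEST_LENGTH + 256];
        size_t n = len > 256 ? 256 : len;
        memcpy(buf, state, sizeof state);
        memcpy(buf + sizeof state, entropy, n);
        SHA256(buf, sizeof state + n, state);
        counter = 0;
    }

    /* Output: out = H(state || counter); never blocks. */
    void prng_output(unsigned char out[SHA256_DIGEST_LENGTH])
    {
        unsigned char buf[SHA256_DIGEST_LENGTH + sizeof counter];
        memcpy(buf, state, sizeof state);
        memcpy(buf + sizeof state, &counter, sizeof counter);
        SHA256(buf, sizeof buf, out);
        counter++;
        /* the driver calls prng_reseed() every RESEED_INTERVAL
         * outputs; skipping one degrades nothing in (2), it only
         * forgoes the defence-in-depth of (2A) */
    }

The point being that the reseed only folds new material
in; the state is never emptied, so definition (2)
stands untouched.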

> (2B) I like Yarrow.  I was lucky enough to be there when it was first
> presented.  I'm biased, as I'd come to many of the same conclusions,
> and the strong rationale confirmed my own earlier ad hoc designs.

> (2C) Unfortunately, Ted Ts'o basically announced to this list and
> others that he didn't like Yarrow (Sun, 15 Aug 1999 23:46:19 -0400).  Of
> course, since Ted was also a proponent of 40-bit DES keying, that depth
> of analysis leads me to distrust anything else he does.  I don't know
> whether the Linux implementation of /dev/{u}random was ever fixed.


( LOL... being a proponent of 40-bit myself, I wouldn't
be so distrusting.  I'd hope he was just pointing out
that 40 bits is way stronger than the vast majority
of traffic out there; the sort of attack we talk about
here is buried in the noise when it comes to real
effects on security, simply because it's so rare. )

> (3) User programs (and virtually all system programs) should use
> /dev/urandom, or its various equivalents.
>
> (4) Communications programs should NEVER access /dev/random.  Leaking
> known bits from /dev/random might compromise other internal state.
>
> Indeed, /dev/random should probably have been named /dev/entropy in the
> first place, and never used other than by entropy analysis programs in
> a research context.


I certainly agree that overloading the term 'random'
has caused a lot of confusion.  And I think it's an
excellent idea to abandon hope in that area and
concentrate on terms that are useful.

If we can define an entropy device and present
that definition, then there is a chance that the
implementors of devices in Unixen will follow that
lead.  But entropy needs to be strongly defined in
practical programming terms, along with random
and potentially urandom, taking care to eliminate
such crypto-academic notions as information-theoretic
arguments and entropy reduction.
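
As a strawman for what "practical programming terms"
might mean at the application boundary, here is a
sketch with two invented helper names, one per device,
so every call site states which contract it wants and
an auditor can grep for it.

    /* Sketch only: the helper names and the contracts in the
     * comments are the proposal under discussion, not settled
     * fact about any existing system. */
    #include <fcntl.h>
    #include <unistd.h>

    /* Entropy device (/dev/random, better named /dev/entropy):
     * scarce and blocking; reserved for long-lived key material. */
    ssize_t read_entropy(unsigned char *buf, size_t len)
    {
        int fd = open("/dev/random", O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t n = read(fd, buf, len);
        close(fd);
        return n;
    }

    /* Unpredictability device (/dev/urandom): computationally
     * strong, never blocks post-good-state; the right source for
     * everything else, communications programs included. */
    ssize_t read_unpredictable(unsigned char *buf, size_t len)
    {
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t n = read(fd, buf, len);
        close(fd);
        return n;
    }

An installer would call read_entropy() once for the
primary server keys (see (**) below); everything else
calls read_unpredictable().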


> (4A) Programs must be audited to ensure that they do not use
> /dev/random improperly.
>
> (4B) Accesses to /dev/random should be logged.
>

I'm confused by this aggressive containment of the
entropy/random device.  I'm assuming here that
/dev/random is the entropy device (better renamed
/dev/entropy) and that urandom is the real good PRNG
which doesn't block post-good-state.

If I take out 1000 bits from the *entropy* device, what
difference does it make to the state?  It has no state,
other than a collection of unused entropy bits, and
those aren't really state, because by definition there
is no relationship from one bit to any other.  They get
depleted, and more get collected, and those too, by
definition, are unrelated.

Why then restrict it to non-communications usages?
What does it matter if an SSH daemon leaks bits used
in its *own* key generation if those bits can never be
used for any other purpose?

(Other than to SSH, that is...) (**)


Great post!

iang

(*) Last night I discovered that the new /tmp cleanup
setting for FreeBSD also means X won't run after boot,
because it expects its special dir in /tmp to be safe
... which means that after 30 years, the definition of
/tmp is still being fought over in the patch wars.

(**) Ideally, I'd like secure machines to generate
primary server keys at install time from the entropy
device, with all other uses going to the urandom device.

-- 
News and views on what matters in finance+crypto:
        http://financialcryptography.com/

