[Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

Nico Williams nico at cryptonector.com
Tue Oct 15 20:35:07 EDT 2013


On Tue, Oct 15, 2013 at 09:56:45AM +0300, ianG wrote:
> I see two ways of looking at this.  If the choice is between
> 
>     (a) top-bottom layers with complementary strengths, and
>     (b) inserting ciphersuites or protocol versions as time goes on
> 
> I would definitely go for (a) because it is bounded and we can at
> least control the effects into the future.

Performance matters in the real world.  Adding layers is generally nice
from a security p.o.v., but not from an economics p.o.v.

If you want secure protocols deployed then they have to perform well
enough.

Now, what kind of layers are we talking about: multiple layers of crypto
in the same protocol, or multiple secure protocol layers (e.g., TLS over
IPsec)?  I suspect the former.  In either case AES-NI is the game to
beat.
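
To make the economics point concrete, here is a rough sketch (assuming
the Python "cryptography" package, whose AES-GCM uses AES-NI where the
CPU has it) of what stacking a second symmetric layer costs: roughly
double the bulk crypto.

    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    payload = os.urandom(1 << 20)                  # 1 MiB of app data
    inner = AESGCM(AESGCM.generate_key(256))
    outer = AESGCM(AESGCM.generate_key(256))
    nonce = os.urandom(12)                         # fixed only for timing

    def bench(label, fn, reps=50):
        t0 = time.perf_counter()
        for _ in range(reps):
            fn()
        ms = (time.perf_counter() - t0) / reps * 1e3
        print("%s: %.2f ms/op" % (label, ms))

    bench("one layer ", lambda: inner.encrypt(nonce, payload, None))
    bench("two layers",
          lambda: outer.encrypt(nonce,
                                inner.encrypt(nonce, payload, None),
                                None))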

> Alternatively, it is making things more complex.  How do we
> differentiate between that and the simpler case of a single layer?
> 
>     (c) single simple layer, done best we can.

Yes, that.

> I'm unconvinced by the argument that if we can't make things work
> with one layer, we should use two layers.  It seems to have an

ikr.

> element of hope in it, which I don't like.  If in some sense it is
> more than hope, if it is logical and scientific, why can't we figure
> that out in one layer?

Because legacy and deployment issues get in the way.  Make one mistake
and getting over it will take at least half a decade.  Make N>3 mistakes
and we don't ever settle on a stable *and* secure state.  That's TLS.
We can't even start over.  The real world imposes constraints that make
fixing things hard.  Bill Sommerfeld's analogy (I think it was his) was
that what we do is like rebuilding a 747, full of passengers,
mid-flight, at 30,000ft.

That's not doing it justice though -- it's much, much worse than that.
If you get the APIs wrong, then fixing things takes much more time and
effort than if you merely get some protocol detail wrong -- protocol
details can be negotiated as long as changes to them don't bleed through
into the APIs.  When that happens it's like switching between unit
systems while rebuilding the 747 mid-flight.

The TLS renegotiation vulnerability involved API design mistakes.  It
arose from not even thinking of TLS as having an API.  (Few people agree
with the proposition that we need to think about abstract APIs when
designing protocols.)
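
To illustrate what "TLS has an API" means here -- a purely hypothetical
sketch, not any real library's interface -- if the channel abstraction
exposes renegotiation as an epoch attached to delivered data, the
application can't silently splice pre- and post-renegotiation bytes
together, which is the confusion that attack exploited.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Epoch:
        number: int          # bumped on every (re)negotiation
        peer: str            # authenticated peer identity for this epoch

    @dataclass
    class Record:
        epoch: Epoch         # which negotiation protected this data
        data: bytes

    class SecureChannel:
        def __init__(self, initial: Epoch):
            self._epoch = initial

        def renegotiated(self, new: Epoch):
            self._epoch = new

        def deliver(self, plaintext: bytes) -> Record:
            # the caller sees the epoch and can refuse to concatenate
            # data across epochs with different peer identities
            return Record(self._epoch, plaintext)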

Starting over is like moving the passengers from a dilapidated plane to
a newer one -- mid-flight, and possibly with the new plane mid-
construction.  This is why we don't start over.  Plus the situation
w.r.t. firewalls and proxies is such that the new plane has to look just
like the old one, just not dilapidated.

The complexity of the crypto pales in comparison to the complexity of
dealing with legacy and getting everyone to move on to the new thing.
And yet we're only now beginning to be confident about the crypto
(constant-time block ciphers + AEAD and also some non-AEAD modes used
carefully + constant-time sponge hash functions and KDFs + safe,
constant-time ECC curves + good RNGs).
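
A sketch of those pieces wired together (assuming the Python
"cryptography" package): a safe curve for key agreement, a KDF, an
AEAD, and the OS RNG.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey)
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def session_key(own_priv, peer_pub):
        shared = own_priv.exchange(peer_pub)        # X25519 ECDH
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"example session").derive(shared)

    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    k_a = session_key(alice, bob.public_key())
    k_b = session_key(bob, alice.public_key())
    assert k_a == k_b

    nonce = os.urandom(12)                     # good RNG, fresh nonce
    ct = AESGCM(k_a).encrypt(nonce, b"hello", b"header")
    assert AESGCM(k_b).decrypt(nonce, ct, b"header") == b"hello"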

> At a reduction level, we often hope that we can improve a cipher by
> xoring another cipher with it.  But this amounts to "inventing" a
> new cipher, so it seems to reduce to a sort of reductio ad absurdum
> that an xoring programmer can outdo a cryptographer.

If you've gotten here you've missed the big picture, because you're
probably doubling the cost of the crypto even though crypto weaknesses
are not the biggest problem -- not for symmetric-key encryption anyway.
It's everything else, particularly the cipher modes, the infrastructure,
and so on.
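
For what it's worth, here is roughly what the "xor two ciphers"
construction amounts to (a sketch, assuming the Python "cryptography"
package): XOR-combining two independently keyed keystreams.  With
independent keys the cascade is no weaker than the stronger of the two,
but you pay for both -- which is exactly the cost point above.

    import os
    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    def cascade_xor(msg, k1, n1, k2, n2):
        # keystreams from encrypting zeros with two independent ciphers
        ks1 = Cipher(algorithms.ChaCha20(k1, n1),
                     mode=None).encryptor().update(bytes(len(msg)))
        ks2 = Cipher(algorithms.AES(k2),
                     modes.CTR(n2)).encryptor().update(bytes(len(msg)))
        return bytes(m ^ a ^ b for m, a, b in zip(msg, ks1, ks2))

    k1, n1 = os.urandom(32), os.urandom(16)   # ChaCha20 key, nonce+counter
    k2, n2 = os.urandom(32), os.urandom(16)   # AES-256-CTR key, nonce
    ct = cascade_xor(b"attack at dawn", k1, n1, k2, n2)
    assert cascade_xor(ct, k1, n1, k2, n2) == b"attack at dawn"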

Also, it's not clear to me that more crypto is the answer to all
problems.  Consider compression-related vulnerabilities...  We need
full-stack security *and* economic analysis and design.

> I'd also go further.  I would say that the problems that have caused
> us not to have confidence in protocols have been born out of
> complexity, and not our ability to do things right the first time,
> one time.  SSL, etc is a trap of complexity, and the result is a
> failure in upgrade path.

No, it's legacy -> complexity.  Any brand new TLS-replacement protocol
will soon look like TLS unless we manage to get all the details just so.
Assuming we're near a stable understanding of how to build secure
protocols (see above), we could get all the details just so *once*.  But
how likely do you think that will be?

> Empirically, where's the beef?  The one case I can think of where we
> have clear claims of damages from a protocol layer attack is
> Bitcoin's Java RNG embarrassment.  Some number of coins were stolen.
> How many?  I don't recall, but I doubt the value was in excess of
> the time-value spent here in the group discussing how to do it
> better...  Adam reports that Disneyland Tokyo suffered loss of
> business, probably far more damaging than the

Well, PKI RSA-MD5 certs were involved in the Flame kit, no?  At least
some good guys got hit (since it spread outside its suspected original
target).  And the bad guys (who probably think they are the good guys)
were hit.  Strictly speaking, crypto vulns have resulted in real-world
attacks.  Still, most real-world vulns are not in crypto or crypto
protocols, so it's a fair argument that there's not much to worry about
in the crypto.

And anyways, absence of evidence is not evidence of absence.

> >>Right now we've got a TCP startup, and a TLS startup.  It's pretty messy.  Adding another startup inside isn't likely to gain popularity.

Well, I think DJB has this right.  The best way to minimize latency
would be a combination of DNS[SEC], ECC, and UDP with ECDH-based key
exchange and authentication, plus TCP Fast Open-like fast reconnect
functionality.  UDP because TCP hasn't aged so well, because -- unlike
any new transport -- it traverses middle-boxes, because [...].
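
A loose sketch of the shape of that handshake (not CurveCP, MinimaLT,
or QUIC themselves; assuming the Python "cryptography" package): if the
client already has the server's static ECC key -- say, published via
DNSSEC -- it can send encrypted data in its very first UDP datagram.

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import (
        ChaCha20Poly1305)

    def derive(shared):
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"0rtt sketch").derive(shared)

    server_static = X25519PrivateKey.generate()  # public half in DNS[SEC]

    # Client's first datagram: ephemeral pubkey || nonce || AEAD(request)
    eph = X25519PrivateKey.generate()
    k_c = derive(eph.exchange(server_static.public_key()))
    nonce = os.urandom(12)
    eph_raw = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    datagram = eph_raw + nonce + ChaCha20Poly1305(k_c).encrypt(
        nonce, b"GET / HTTP/1.1", None)

    # Server recovers the key from the client's ephemeral public key.
    # (Note: this first flight has no forward secrecy w.r.t. the server's
    # static key -- the usual 0-RTT trade-off.)
    peer = X25519PublicKey.from_public_bytes(datagram[:32])
    k_s = derive(server_static.exchange(peer))
    assert ChaCha20Poly1305(k_s).decrypt(
        datagram[32:44], datagram[44:], None) == b"GET / HTTP/1.1"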

Layers can cooperate to reduce round trips.  For example, GSS-API has a
channel binding input and "PROT_READY", which allow for interesting
round-trip optimizations.  A transport with TCP Fast Open-like
functionality can likewise lead to the common case being zero round
trips to set up a new connection, the next most common case being a
single round trip, and the worst case being two.  This is not something
we can make happen for TCP and TLS concurrently: ETOOHARD, too many
moving parts.
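
A sketch of what the channel binding input buys (names below are
illustrative, not the GSS-API calls themselves): the inner
authentication is keyed over a value derived from the outer channel, so
proving the inner credential also proves both ends see the same outer
channel -- no extra round trips spent cross-checking the layers.

    import hashlib, hmac

    def channel_binding(outer_endpoint_cert_der: bytes) -> bytes:
        # e.g. a hash of the outer channel's end-point certificate,
        # in the spirit of tls-server-end-point bindings
        return hashlib.sha256(outer_endpoint_cert_der).digest()

    def inner_auth_token(user_key: bytes, cb: bytes, nonce: bytes) -> bytes:
        # The client proves knowledge of user_key *and* that it sees the
        # same outer channel the server does; a MITM terminating the
        # outer channel with a different certificate makes this fail.
        return hmac.new(user_key, b"auth|" + cb + b"|" + nonce,
                        hashlib.sha256).digest()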

> People would use it if it were the only one mode, and it were secure.

"Always secure" is not really that easy.  Ignoring the economic costs,
there's the introduction problem, and the fact that sloppy CAs and
registrars (no matter how few) make PKI lame.  At best you can do
opportunistic anonymous key exchange and encryption -- better than
nothing, as they say[0], but not enough either since a) there's UI
issues (see the whole browser status bar lock icon saga), b) as long as
unauthenticated usage prevails the middle boxes that sprout up after you
can force you to stick to that usage.
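
To see why opportunistic anonymous key exchange is better than nothing
but not enough, here's a sketch of point b) (assuming the Python
"cryptography" package): with nothing authenticated, an active middle
box just runs two exchanges and both ends complete happily.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey)
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def kdf(shared):
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"opportunistic").derive(shared)

    client, server, middlebox = (X25519PrivateKey.generate()
                                 for _ in range(3))

    # The middle box answers the client with its own key and separately
    # talks to the server; nothing is authenticated, so nobody notices.
    k_client_side = kdf(client.exchange(middlebox.public_key()))
    k_server_side = kdf(server.exchange(middlebox.public_key()))
    # Passive eavesdroppers are defeated, but this active attacker can
    # read and rewrite everything between the two keys it holds.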

[0] Ahem, see RFC5386.

Nico
-- 

