[Cryptography] Crypto Standards vs. Engineering habits - Was: NIST about to weaken SHA3?

ianG iang at iang.org
Tue Oct 15 02:56:45 EDT 2013


On 11/10/13 17:41, John Kelsey wrote:
> On Oct 11, 2013, at 1:48 AM, ianG <iang at iang.org> wrote:
>
> ...
>> What's your goal?  I would say you could do this if the goal was ultimate security.  But for most purposes this is overkill (and I'd include online banking, etc, in that).
>
> We were talking about how hard it is to solve crypto protocol problems by getting the protocol right the first time, so we don't end up with fielded stuff that's weak but can't practically be fixed.

Hmm, ok, that is what you said the first time :)

No reply needed, it is unfair to pound the table when you're shut down!


> One approach I can see to this is to have multiple layers of crypto protocols that are as independent as possible in security terms.  The hope is that flaws in one protocol will usually not get through the other layer, and so they won't lead to practical security flaws.


I see two ways of looking at this.  If the choice is between

     (a) top-bottom layers with complementary strengths, and
     (b) inserting ciphersuites or protocol versions as time goes on

I would definitely go for (a) because it is bounded and we can at least 
control the effects into the future.
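For concreteness, option (a) might be sketched as two independently keyed
encrypt-then-MAC layers, so a break of the outer layer's cipher still leaves
the attacker facing the inner one.  This is only an illustrative sketch, not
anyone's concrete proposal; the SHAKE-based keystream merely stands in for
two unrelated real ciphers (AES-CTR, ChaCha20, whatever):

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stream cipher: SHAKE-256(key || nonce) as a keystream.
    Stands in for any real cipher in either layer."""
    return hashlib.shake_256(key + nonce).digest(length)

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    """One encrypt-then-MAC layer: nonce || ciphertext || HMAC tag."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    """Verify the tag, then decrypt one layer."""
    nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    if not hmac.compare_digest(
            tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

# Two layers, four independent keys.
keys = [os.urandom(32) for _ in range(4)]
msg = b"attack at dawn"
wrapped = seal(keys[0], keys[1], seal(keys[2], keys[3], msg))
assert open_(keys[2], keys[3], open_(keys[0], keys[1], wrapped)) == msg
```

Note that the layers only buy independence if the keys are independent; key
one layer from the other and a single flaw takes out both.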

On the other hand, layering makes things more complex.  How do we weigh 
that against the simpler case of a single layer?

     (c) single simple layer, done best we can.

I'm unconvinced by the argument that if we can't make things work with 
one layer, we should use two layers.  It seems to have an element of 
hope in it, which I don't like.  If in some sense it is more than hope, 
if it is logical and scientific, why can't we figure that out in one layer?

At the reduction level, we often hope that we can improve a cipher by 
xoring another cipher with it.  But this amounts to "inventing" a new 
cipher, so it seems to collapse into a sort of reductio ad absurdum: 
that an xoring programmer can outdo a cryptographer.
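To make the xoring concrete: the classic combiner XORs the keystreams of two
independent stream ciphers into the plaintext.  With independent keys the
result is at least as strong as the stronger component, but the composition
itself is a new, unanalysed cipher (which is the point above).  A toy sketch,
with SHAKE keystreams standing in for two real, unrelated ciphers:

```python
import hashlib
import os

def stream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Each "cipher" is modeled as a SHAKE-256 keystream; a real combiner
    # would use two structurally different ciphers here.
    return hashlib.shake_256(key + nonce).digest(n)

def xor_combine(data: bytes, k1: bytes, k2: bytes, nonce: bytes) -> bytes:
    # XOR the data with BOTH keystreams -- this is the "new cipher" the
    # xoring programmer has just invented.
    s1 = stream(b"cipher-A" + k1, nonce, len(data))
    s2 = stream(b"cipher-B" + k2, nonce, len(data))
    return bytes(d ^ a ^ b for d, a, b in zip(data, s1, s2))

k1, k2, nonce = os.urandom(32), os.urandom(32), os.urandom(16)
ct = xor_combine(b"secret", k1, k2, nonce)
assert xor_combine(ct, k1, k2, nonce) == b"secret"  # XOR is its own inverse
```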

I'd also go further.  I would say that the problems that have caused us 
not to have confidence in protocols have been born of complexity, not of 
any inability to do things right the first time, one time.  SSL, etc., 
is a trap of complexity, and the result is a failed upgrade path.

It also seems somewhat hubristic of us -- we continually pretend we can 
predict future failures.  If we could really do that, we'd fix them now. 
Instead, we need to recognise that knowledge advances -- attacks always 
get better -- and we need a more accepting model for a future where our 
world has been turned upside down.


> Actually getting the outer protocol right the first time would be better, but we haven't had great success with that so far.


Well, is this a glass half full or half empty?

I don't think the record is so bad.  Few attackers have really put a 
dent in SSL v3 and beyond.  I personally don't see the recent attacks 
(in their singular form) as being more than annoyances and 
embarrassments.  To quote Adam Langley [al] "On the whole, the crypto is 
doing great in comparison to everything else!"  SSH survived nicely. 
Skype did rather well, and had to be bought out from eBay before it 
could be perverted.

Empirically, where's the beef?  The one case I can think of where we 
have clear claims of damages from a protocol-layer attack is Bitcoin's 
Java RNG embarrassment.  Some number of coins were stolen.  How many?  I 
don't recall, but I doubt the value was in excess of the time-value 
spent here in the group discussing how to do it better...  Adam reports 
that Disneyland Tokyo suffered loss of business, probably far more 
damaging than the Bitcoin thefts.
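For the record, the attack behind that theft was nonce reuse in ECDSA, and
it is pure algebra -- no curve arithmetic is needed to demonstrate it.  A
sketch with made-up numbers (only the group order is real): a signature is
s = k^-1 * (z + r*d) mod n, with d the private key, k the per-signature
nonce, z the message hash, and r determined by k.  Reuse k across two
messages and first k, then d, fall out:

```python
# secp256k1 group order; every other value below is a toy stand-in.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = 0x1234567890ABCDEF   # "victim" private key
k = 0xDEADBEEF           # repeated nonce (the RNG bug)
r = 0xCAFEBABE           # r depends only on k, so it repeats too
z1, z2 = 0x1111, 0x2222  # hashes of two different messages

# The two signatures the victim publishes:
s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n

# The attacker's recovery, using only public values:
# s1 - s2 = k^-1 * (z1 - z2), so k = (z1 - z2) / (s1 - s2) mod n,
# and then d = (s1*k - z1) / r mod n.
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
assert k_rec == k and d_rec == d
```

(Three-argument pow with exponent -1 computes a modular inverse; Python 3.8+.)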

(OTOH, I can imagine the NSA arguing that we're exaggerating the Dual_EC 
case as no damages have been shown ;-)

If we look at where the problems are occurring, they are (a) outside the 
crypto (which reminds me: the best comment I've seen so far is 
tcpcrypt's thoughtful approach to authentication), and (b) in the 
failure of an upgrade model once problems are identified.


>> Right now we've got a TCP startup, and a TLS startup.  It's pretty messy.  Adding another startup inside isn't likely to gain popularity.
>
> Maybe not, though I think a very lightweight version of the inner protocol adds only a few bits to the traffic used and a few AES encryptions to the workload.  I suspect most applications would never notice the difference.  (Even the version with the ECDH key agreement step would probably not add noticable overhead for most applications.)  On the other hand, I have no idea if anyone would use this.  I'm still at the level of thinking "what could be done to address this problem," not "how would you sell this?"


People would use it if it were the only mode, and it were secure.

If there were a choice, then I think it is very difficult to predict how 
the usage equation would fall out, which means the protection would be 
equally impossible to predict, and easy to manipulate.

Cf. the current nonsense with Android and Java SSL suites [android]. 
Unbelievable!  More evidence that defaults & choice are the friend of 
your enemy.

iang


[al] https://www.imperialviolet.org/2013/01/13/rwc03.html

[android]  http://op-co.de/blog/posts/android_ssl_downgrade/

