[Cryptography] Crypto Standards vs. Engineering habits - Was: NIST about to weaken SHA3?

ianG iang at iang.org
Thu Oct 3 05:17:45 EDT 2013


On 2/10/13 17:46, John Kelsey wrote:
> Has anyone tried to systematically look at what has led to previous crypto failures?

This has been a favourite topic of mine, ever since I discovered that 
the entire foundation of SSL was built on theory, never confirmed in 
practice.  But my views are informal, never published nor systematic. 
Here's a history I started for risk management of CAs, informally:

http://wiki.cacert.org/Risk/History

But I don't know of any general history of internet protocol breaches.



> That would inform us about where we need to be adding armor plate.  My impression (this may be the availability heuristic at work) is that:

> a.  Most attacks come from protocol or mode failures, not so much crypto primitive failures.  That is, there's a reaction attack on the way CBC encryption and message padding play with your application, and it doesn't matter whether you're using AES or FEAL-8 for your block cipher.


Most attacks go around the crypto rather than through it, as Adi Shamir 
so eloquently put it.  Then, of the rest, most go against the outer 
software engineering layers.  Attacks become less and less frequent as 
we peel the onion down to the crypto core.  However, it would be good to 
see an empirical survey of these failures, in order to know whether this 
picture is accurate.


> b.  Overemphasis on performance (because it's measurable and security usually isn't) plays really badly with having stuff be impossible to get out of the field when it's in use.  Think of RC4 and DES and MD5 as examples.


Yes.  Software engineers are especially biased by this issue.  Although 
it rarely causes a breach directly, it more often distracts attention 
from what really matters.

> c.  The ways I can see to avoid problems with crypto primitives are:
>
> (1)  Overdesign against cryptanalysis (have lots of rounds)


Frankly, I see this as a waste.  The problem with rounds and analysis of 
same is that it isn't just one algorithm, it's many.  Which means you 
are overdesigning for many algorithms, which means ... what?

It is far better to select a target such as 128 bit security, and then 
design each component to meet this target.  If you want "overdesign" 
then up the target to 160 bits, etc.  And make all the components 
achieve this.

The papers and numbers shown on keylength.com provide the basis for 
this.  It's also been frequently commented that the NSA's design of 
Skipjack was balanced this way, and that's how they like it.
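By way of illustration, the comparable-strength tables that keylength.com aggregates (NIST SP 800-57 being one source) pair every component to one target; the function and table names below are hypothetical, a sketch of the balanced approach rather than a recommendation:

```python
# Illustrative comparable-strength choices, per the NIST SP 800-57
# tables aggregated by keylength.com.  Names here are hypothetical.
BALANCED = {
    128: {"cipher": "AES-128", "hash": "SHA-256",
          "rsa_modulus_bits": 3072, "ecc_field_bits": 256},
    192: {"cipher": "AES-192", "hash": "SHA-384",
          "rsa_modulus_bits": 7680, "ecc_field_bits": 384},
}

def components_for(target_bits: int) -> dict:
    """Pick every component to meet one security target, rather than
    overdesigning a single primitive in isolation."""
    return BALANCED[target_bits]

print(components_for(128))
```

The point of the table form is that "overdesign" then means moving the whole row, not padding one cell.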

Also note that the black-box effect in crypto protocols is very 
important.  Once we have the black box (achieved par excellence by block 
ciphers and message digests), we can then concentrate on the protocol 
using those boxes.  Which is to say that, because crypto can be 
black-boxed, security protocols are far more of a software engineering 
problem than they are a crypto problem.  (As you know, the race is now 
on to develop an AE stream black box.)
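A minimal sketch of that black-boxing, assuming a hypothetical seal/open interface: the protocol layer only ever sees these two calls, so the primitive behind them is swappable.  The HMAC-derived keystream is a toy stand-in for a real AE cipher, for illustration only, and is not secure:

```python
# Toy seal/open pair: the protocol above this interface never sees
# the primitive inside.  NOT secure -- the keystream is a stand-in.
import hmac, hashlib

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(msg, _keystream(key, nonce, len(msg))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return ct + tag                       # ciphertext || 32-byte tag

def open_box(key: bytes, nonce: bytes, sealed: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    want = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):
        raise ValueError("forged or corrupted box")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

sealed = seal(b"k" * 16, b"n" * 8, b"pay 10 to bob")
assert open_box(b"k" * 16, b"n" * 8, sealed) == b"pay 10 to bob"
```

The design point is that swapping the internals (say, for a standard AEAD) changes nothing above the seal/open line.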

Typically, then, we model the failure of an entire black box as if it 
becomes totally transparent, rather than merely weak.  For example, in 
my payments work, I say "what happens if my AES128 fails?  Well, because 
all payments are signed by RSA-2048, the attacker can simply read the 
payments, but cannot make or inject payments."  And the converse.
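That failure model can be sketched as two independent layers with independent keys, so one layer "becoming transparent" does not break the other.  Here HMAC stands in for the RSA signature purely to keep the example self-contained; the key and function names are hypothetical:

```python
# Sketch of the failure model: authenticity comes from a signing layer
# with its own key, independent of the (here omitted) encryption layer.
# HMAC is a stand-in for RSA signing, for illustration only.
import hmac, hashlib

SIGN_KEY = b"signing-key"   # hypothetical; would be an RSA private key

def sign_payment(payment: bytes) -> bytes:
    return payment + hmac.new(SIGN_KEY, payment, hashlib.sha256).digest()

def verify_payment(blob: bytes) -> bytes:
    payment, sig = blob[:-32], blob[-32:]
    want = hmac.new(SIGN_KEY, payment, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, want):
        raise ValueError("not a valid payment")
    return payment

# Even if the cipher wrapping this blob fails and becomes transparent,
# an attacker who can now read it still cannot mint a valid payment.
blob = sign_payment(b"pay 10 to bob")
assert verify_payment(blob) == b"pay 10 to bob"
```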

This software engineering approach dominates questions such as whether 
AES should sit at the 128-bit level or the 96-bit level, as it covers 
more of the attack surface than the bit-strength question does.


> (2)  Overdesign in security parameters (support only high security levels, use bigger than required RSA keys, etc.)


As above.  Perhaps the reason I like a balanced approach is that, by 
the time some of the components have started to show their age (and 
overdesign is starting to look attractive in hindsight), we have moved 
on *for everything*.

Which is to say, it's time to replace the whole darn lot, and no 
overdesign would have saved us.  E.g., look at SSL's failures.  All 
(most?) of them were design flaws arising from complexity; none of them 
could have been saved by overdesign in terms of rounds or parameters.

So, overdesign can be seen as a sort of end-of-lifecycle bias of hindsight.


> (3)  Don't accept anything without a proof reducing the security of the whole thing down to something overdesigned in the sense of (1) or (2).


Proofs are ... good for cryptographers :)  As I'm not, I can't comment 
further (nor do I design to them).



iang

