[Cryptography] On the Impending Crypto Monoculture

ianG iang at iang.org
Sun Mar 27 12:13:54 EDT 2016


On 27/03/2016 11:23 am, Jerry Leichter wrote:
>> Is monoculture therefore golden?  No.  By no means - it's a risk decision.  We all know we're putting our one egg in one basket and feeding the world.  It's just that over time, this risk is lower than any other we know of - like Churchill's democracy, monoculture in algorithms is a really bad idea, but it's the least bad of the rest of the ideas.
> I'm not sure you're assessing the *nature* of the risks correctly.

As you elegantly show, it's not really tractable to assess the risks 
mathematically, because we're into the infinities.


> If there's an AES monoculture, and AES is broken, every message ever encrypted is broken, and until an alternative can be established and fielded, every new message is also broken.  Extremely low probability event, immensely high cost.

Well, I know cryptographers try to push the meme that a couple of bits 
off the strength of a cipher means it is "broken", but we don't need to 
take that as gospel :)  If AES is broken, that means many messages can be 
broken.  It doesn't automatically mean all are broken - although it 
possibly means that all are vulnerable.


> If there's an AES monoculture and a fallback, and AES is broken, every existing message is broken, but future messages are safe once you can get everything switched to the alternative (which we'll assume is fairly quickly).  Same extremely low probability event, but a "half infinity" cost.  If, alternatively, it's the fallback that's broken (while AES survives), there's a much lower (but still significant) cost of find and fielding a new alternative.  (Or you could just continue in the "AES monoculture" world.)

Two other factors to take into account:

(a) as above, AES being broken doesn't mean the messages are read.  We 
know this because even cleartext messages aren't being attacked/read, as 
seen in recent European news.
(b) even if the cipher is not broken, the messages are far more likely to 
be "broken" by other means.  The most common attack on cryptographic 
systems isn't the hacker or the MITM; it's your own people.


> If we choose k equally standard algorithms, and use different algorithms in different situations, and any one of them is broken, 1/k of previous messages ever encrypted is broken.  Once you can effectively blacklist the broken algorithm, future messages are secure.  Since attacker now have k algorithms to attack, perhaps their chances of breaking one are better - but it's still an extremely low probability event, thought costs are now considerably lower.

I saw an estimate somewhere that intelligence analysts only need to see 
about 2% of the traffic to divine the general gist.

Which is to suggest that if any messages are broken, that's sufficient 
to incur a very high cost.  Someone estimated that the cost of the 
Heartbleed re-work was in the vicinity of $500 million - and that's 
without any particular evidence of breaches.  So a partial break, or a 
full break of 1/k, might incur the full costs of "infinity" regardless.
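
To put that argument as a toy calculation - every number below is made 
up purely for illustration, none of them are estimates - the point is 
that the rework cost of any break is roughly fixed, so a 1/k break costs 
almost as much as a full break:

# Toy expected-cost comparison with made-up numbers (nothing here is an
# estimate): a break of any one of k algorithms still triggers the full
# rework cost; only the exposure term shrinks with k.
P_BREAK       = 1e-6    # hypothetical chance one given algorithm breaks
REWORK_COST   = 500e6   # a Heartbleed-style rework figure, per incident
EXPOSURE_COST = 1e9     # hypothetical cost of traffic exposed by a full break
K             = 4       # size of the k-alg set

monoculture = P_BREAK * (REWORK_COST + EXPOSURE_COST)
k_alg_set   = (K * P_BREAK) * (REWORK_COST + EXPOSURE_COST / K)

print(monoculture, k_alg_set)   # the rework term doesn't shrink with k

With numbers like these the k-alg set is actually the more expensive 
gamble, because the rework cost is paid in full on every break no matter 
how small the exposure.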


> These are simplified scenarios, and without being able to plug in some values, it's hard to say anything meaningful.  And the problem is exactly that we have no way to guess at the values.  What's the probability of AES being broken in the next 10 years?  20 years?  50 years?  We have insufficient information to make any reasonable guess.  I think most people would say "vanishingly small" for 10 and even 20 years, but trying to predict *anything* out 50 years is very problematic.


Typically, what people do when the probability is vanishingly small is to 
accept the risk.  This is indeed what everyone does with crypto anyway - 
it is very complicated to use, and often goes wrong for inane reasons 
(try teaching your grandma to do online banking securely).  How we 
handle this is to simply accept the risk that it will go wrong and she 
might be hacked, phished, whatever.

Now, if that's how we handle the application, the same standard should 
be applied to the crypto.  Get the probability so small as to be below 
the noise, then accept any residual risk.  This process is already 
familiar from, for example, the prime search in RSA key generation.
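
As an illustration of that approach - a minimal sketch in Python, not 
taken from any RSA library - a Miller-Rabin primality test with t rounds 
lets a composite slip through with probability at most 4^-t, so we pick 
t large enough that the residual risk vanishes below the noise and then 
simply accept it:

# Sketch of "push the residual risk below the noise, then accept it",
# as done in RSA prime search: t Miller-Rabin rounds declare a composite
# "prime" with probability at most 4**-t.
import secrets

def is_probably_prime(n: int, rounds: int = 64) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):               # error probability <= 4**-rounds
        a = secrets.randbelow(n - 3) + 2  # random base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # definitely composite
    return True                           # prime, up to the accepted residual risk

With 64 rounds the residual error is below 2^-128 - far smaller than the 
chance of anything else in the system going wrong first.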

This approximately says:  AES has zero probability of being broken 
before the next cycle of major upgrades.  Full stop.


> And on the cost side, things are only a bit better.  Sure, "half infinity" is "less than" "infinity" (since "infinity" is just a codeword here for "so I high I can't even imagine") but is that really a meaningful comparison?
>
> What about the additional costs of maintaining a fallback, or choosing amoung k alternatives?  That's probably not *that* high, but how do you compare it to the "infinitesimal * infinite" expected costs of a break if you don't do it?


Well, I think the cost of maintaining a fallback is probably below the 
cost of maintaining k alternatives in good order.  Also, the key 
difference might not be the direct cost, but the fact that once the k 
are out there, the maintainer can wash their hands of any additional costs 
and say "your problem" to the users.  Whereas a fallback still has to be 
deployed - maintainer costs.


> BTW, the "k alternatives" can be seen in different ways.  At one extreme, each protocol might choose a single one of the alternatives.  At the other, you could randomly choose an alternative for each connection, in effect using a single "meta-algorithm" in which part of the key determines the algorithm.  (If you're doing PFS, the algorithm might switch per message.)  Should you assume that the probability of a break of the "meta-algorithm" is different from the cost of a break of one of the constituent algorithms?  Given that it's all just guesswork anyway....


As a sort of sideswipe to this alternative:  with k alternatives, it is 
easy to survive a break of any k-1 of them - simply XOR all k together.

(Some systems even do that, although I've only seen systems that do that 
with two algorithms.)
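
To make the XOR idea concrete, here's a minimal sketch (Python, assuming 
the pyca/cryptography package; not something from the original 
discussion) that chains two stream ciphers - which, for stream ciphers, 
is the same as XORing both keystreams into the plaintext - so 
confidentiality rests on the stronger of the two:

# XOR-combining two algorithms: AES-256-CTR and ChaCha20 are both stream
# ciphers, so applying one after the other XORs both keystreams into the
# plaintext.  Confidentiality then holds as long as either cipher holds up.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def combined_encrypt(plaintext: bytes,
                     aes_key: bytes, aes_nonce: bytes,
                     chacha_key: bytes, chacha_nonce: bytes) -> bytes:
    aes = Cipher(algorithms.AES(aes_key), modes.CTR(aes_nonce)).encryptor()
    chacha = Cipher(algorithms.ChaCha20(chacha_key, chacha_nonce),
                    mode=None).encryptor()
    # plaintext XOR aes_keystream XOR chacha_keystream
    return chacha.update(aes.update(plaintext) + aes.finalize()) + chacha.finalize()

# Because the result is just the plaintext XORed with both keystreams,
# running the same function again with the same keys and nonces decrypts:
keys = dict(aes_key=os.urandom(32), aes_nonce=os.urandom(16),
            chacha_key=os.urandom(32), chacha_nonce=os.urandom(16))
ct = combined_encrypt(b"attack at dawn", **keys)
assert combined_encrypt(ct, **keys) == b"attack at dawn"

Note the sketch only covers confidentiality; a real combiner would also 
have to combine authentication, which is part of why it isn't so trivial 
in practice.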

Why wouldn't we do that?  There are performance arguments, which are 
interesting because in userland this isn't really a blocker; it's only 
an issue for the mass-connection servers.  But even there it's not 
really so much of an issue any more, so if we *really cared* about 
algorithm breaks we'd carry the cost.

The philosophical argument however is also compelling.  If XOR were an 
effective way to do things, the cryptographers would do it inside their 
black boxes.  They don't.  So it's obviously not appealing to 
cryptographers, and not only not appealing, it goes against the general 
dictum of "don't write your own algorithms", which can be better read as 
"the cryptographers did know what they were doing, don't second-guess them."

Which is all to say that it is important to evaluate the risks of a 
k-algorithm solution.  But I don't think the case for it exists just 
because we think one of the k might fail.  There is other stuff going on...


> The net result:  The choice to be made here is simply not informed by enough data to be more than philosophy and gut feel.


IMHO we have enough data - we've now got about 25 years of mass Internet 
crypto.  But what we lack is a full reading of the experiences.  There 
are a few factors that need to be added into the mix above.

1.  Almost all breaks are protocol breaks, or higher.

2.  Because the (higher) breaks happen all the time, well-used systems 
already include a fast upgrade method.

3.  The fast upgrade method is typically well understood by the users 
(albeit often turned off).

4.  When there is an algorithm break, the most likely fix is to use the 
fast upgrade method to turn off the broken algorithm in a k-alg set (a 
sketch of this follows the list).  Indeed, most users won't even know 
that this break is "different."

5.  Notwithstanding the incur-infinite-cost argument, most users are 
unaffected by any break, and the cost to them is the rework cost.

6.  The same methods can be used to switch to the fallback.

7.  All large scale security systems work to the greater good, not the 
perfect protection for every user.  There is already sacrifice of 
security built in because users are using the same set of features.
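
As promised in point 4, here's a toy sketch (Python; the suite names and 
the negotiation shape are hypothetical, not drawn from any particular 
protocol) of how a broken algorithm in a k-alg set gets turned off 
through the fast upgrade path:

# Hypothetical sketch of point 4: a preference-ordered k-alg set plus a
# blacklist fed by the fast upgrade channel.  A break is handled by one
# more blacklist entry, not by a redesign.
PREFERRED_SUITES = ["AES-256-GCM", "CHACHA20-POLY1305", "AES-128-GCM"]
BLACKLIST = set()   # filled in by the fast upgrade / revocation channel

def negotiate(peer_offer):
    """Pick the first mutually supported, non-blacklisted suite."""
    for suite in PREFERRED_SUITES:
        if suite in peer_offer and suite not in BLACKLIST:
            return suite
    raise RuntimeError("no acceptable suite - time for the fallback")

# An algorithm break arrives as one more blacklist entry:
BLACKLIST.add("AES-128-GCM")
print(negotiate(["AES-128-GCM", "CHACHA20-POLY1305"]))  # -> CHACHA20-POLY1305

The break shows up to the user as just another routine upgrade, which is 
why most users never notice that it was "different."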


This suggests that the protocol be locally monocultural.  That is, 
protocol version N be strictly monocultural (if that's the term of art 
that Peter is presenting) and version N+1 be countercultural - the fallback.

But version N+1 can be waiting in the wings; it doesn't need to be 
deployed, it can just sit there, ready for the call from fast upgrade.


> I'm sympathetic to the arguments against complexity - but sometimes you need complexity in your system to deal with real complexity in the world.  Codebook mode is the simplest mode, but it's not quite good enough.


Right - but what is the complexity in the real world telling us? 
There have been almost no algorithm breaks.  Lots of protocol and higher 
breaks.  I like Figure 2 here to make that point:

https://tahoe-lafs.org/~zooko/preimage-attacks-color.html

with apologies to Zooko for spying on his works in progress.

I think there's room to lean a bit more on the algorithms and deploy a 
bit more resource up the stack.



iang

