[Cryptography] open questions in secure protocol design?

ianG iang at iang.org
Sun May 31 10:16:38 EDT 2015


On 31/05/2015 03:21 am, Jerry Leichter wrote:
> On May 28, 2015, at 2:31 PM, dj at deadhat.com wrote:
>> Algorithm migration in this context means:
>>
>> 1) We start with algorithm version 1. It is a suite of algorithms chosen
>> wisely to be considered good for several years. Where 'several' is long
>> compared with the typical lifespan of the devices.
>>
>> 2) New algorithms are adopted by the standards body in sequence (1,2,3..)
>> when there is a reason to (for example, SHA-1 looked shaky years
>> before it failed). The algorithm list is in time order. There's no
>> branching, and no menu to choose from or negotiate.


(Hmmm.. thinking about this, there are two SPOFs here - the cipher 
suite, and the "body" that designs the next one.  Whereas arguably the 
1000 flowers method and other points in between are less vulnerable to 
these SPOFs.  Point to note in discussions.)


>> 3) New devices implement the current algorithm version and the next
>> algorithm version if it exists.
>>
>> 4) When the new algorithm version is widely deployed, policy is updated to
>> deprecate the old algorithm version.
>>
>> So algorithm migration is a slow migration from one cipher-suite to the
>> next, when supported by deployed hardware. Not a run-time negotiation
>> between many cipher-suites. The cadence may be 1 decade.
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ !
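
To check I'm reading the scheme right, here is a minimal sketch in 
Python of what the selection amounts to - suite names invented - pick 
the newest version both ends implement, out of a single time-ordered 
list, with no menu and nothing else to negotiate:

    # Sketch only: one global, append-only, time-ordered list of versions.
    SUITES = ["v1-aes128-sha256", "v2-aes256-sha3"]   # names invented

    def select_suite(mine, theirs):
        """Pick the newest version both sides implement; no menu, no branching."""
        common = set(mine) & set(theirs)
        if not common:
            raise RuntimeError("no common suite version")
        return max(common, key=SUITES.index)   # newest = latest in the one list

    # Per (3), a new device carries the current version plus the next one
    # if it has been published; an older peer carries only the current.
    device = ["v1-aes128-sha256", "v2-aes256-sha3"]
    peer   = ["v1-aes128-sha256"]
    assert select_suite(device, peer) == "v1-aes128-sha256"

During the overlap there is still a trivial pick between at most two 
versions (old versus new), which is where the rollback question below 
comes in.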


> This is an interesting approach, but it's missing a careful threat analysis.  You've baked in consideration of the "death by 1000 cuts" failure, where there's plenty of time to anticipate that the current algorithm is heading for trouble.  But what if the current algorithm fails suddenly?


Threat analysis should really be a subset of risk analysis.  In that 
art, we should be allocating likelihoods to all our threats.  As it 
happens, the likelihood of a sudden failure is below the noise level, 
assuming we chose wisely - the canonical counter-example to wisdom 
being WEP.

If a threat is so unlikely we can't measure it, the typical response 
from risk analysis is to accept the risk.  This is the right choice 
because of our psychological bias to over-react to scary risks and 
under-react to non-scary risks; to counteract that fear, we need to 
show data, and if the data approximates zero, we call it zero.
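
In expected-loss terms (figures invented purely to show the 
arithmetic), a threat whose likelihood sits at the noise floor 
contributes roughly nothing to the total, however scary its impact 
reads:

    # Sketch only: expected loss = likelihood x impact, invented numbers.
    threats = {
        "gradual weakening, SHA-1 style": (0.05,   1_000_000),
        "sudden catastrophic break":      (0.0001, 10_000_000),
    }
    for name, (likelihood, impact) in threats.items():
        print(f"{name}: expected loss ~ {likelihood * impact:,.0f}")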


> You have no mechanism to move quickly to something else.


This is a general problem.  It isn't solved by any of the systems we 
talk about, because we don't have a way to send out a "quick" 
instruction like that [0].

(a) If we look at the canonical case here of certificate revocation as a 
quick instruction, it has become pretty clear in the 2010s that it 
doesn't work for a CA breach, so the super-CA has to revoke the CA with 
a browser software upgrade.

(b) If we look at the recent dozen TLS breaches, there was no quick 
response.  It started at 3.5 years, and went down [1].  I'm not sure 
what the figures are for the last one, but I'd be surprised (and 
encouraged) if it was down to less than a year.

(c) His system involves upgrades in *new devices only*, and all old 
devices are left stuck on old protocols.  As we're talking IoT, these 
things might have a 20-year lifetime ...


>  Alternatively, what if it turns out, half way through your distribution of a new algorithm, that the *new* one fails, while the old one seems to still be strong?  Can you somehow move back to it, without enabling rollback attacks?


My understanding of his method is that the software handles two suites. 
So he's covered in the case where N-1 fails, N is still good, and N+1 
goes bad in delivery.  N+2 carries great incentives tho :)

If, however, he gets a catastrophic fail in N and moves to N+1, which 
was broken, then he's screwed.
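
To make concrete what that two-suite window gives him (a sketch only, 
in Python, with invented names): the device accepts N and N+1, and 
deprecation only ever moves forward, which is also why reaching back 
past the window is not available as an escape hatch:

    # Sketch only: a two-suite window with a forward-only policy floor.
    class SuitePolicy:
        def __init__(self, current, nxt=None):
            self.window = [current] + ([nxt] if nxt else [])
            self.floor = 0    # index of the oldest version still accepted

        def deprecate_oldest(self):
            # Once N+1 is widely deployed, stop accepting N.  One way only.
            if len(self.window) > 1:
                self.floor = 1

        def accepts(self, suite):
            return suite in self.window[self.floor:]

    p = SuitePolicy("suite-N", "suite-N+1")
    assert p.accepts("suite-N") and p.accepts("suite-N+1")
    p.deprecate_oldest()
    assert not p.accepts("suite-N")      # no rollback: the floor never moves down
    assert not p.accepts("suite-N-1")    # N-1 was never in the window at all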

However, this is the old joke about carrying a bomb on a plane - the 
chances of there being another bomb on the plane at the same time are 
next to zero, so everyone should do it [2] !

Also, a system that employs a cryptosystem will typically be plagued by 
higher-layer threats, and if it is properly designed, it will model a 
catastrophic failure of the crypto and build in defence in depth.  E.g., 
in payments systems, there are typically controls on the cash-out agents 
so that they can't easily deliver a million bucks because someone broke 
the card crypto.
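
As a toy illustration only (invented names and figure), that cash-out 
control sits in a layer above the crypto and applies regardless of 
whether the signature check was honest or fooled:

    # Sketch only: a business-layer limit that doesn't rely on the crypto alone.
    DAILY_LIMIT_PER_AGENT = 10_000    # invented figure

    def authorise_cashout(agent_total_today, amount, signature_ok):
        if not signature_ok:
            return False
        # Even a valid-looking signature can't pull more than the limit, so a
        # broken card cipher buys the attacker the limit, not the vault.
        return agent_total_today + amount <= DAILY_LIMIT_PER_AGENT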



> Negotiation of the protocol to use makes sense in situations where endpoints can actually make a rational choice among alternatives.  You might be able to construct such a situation for communication within a group of experts - e.g., well-trained spies.  I find it very hard to come up with more common situations in which it makes sense.


So much of security design in the 1990s was predicated on "we know what 
we're doing so we'll teach the users to do what we do..."


> Picking a single cipher is like engineering with a known single point of failure.  That's not necessarily a bad thing!  Your car's steering system has multiple single points of failure, because there's really not much you can do to make it redundant without increasing other risks due to complexity.  On the other hand, your car's brakes are dual-redundant because that's a better assignment of risks for them.


Right!  If your steering fails, you can clamp on the brakes and limit 
the damage.  If your brakes fail, you're screwed.  And so is anyone in 
front of you...

Having redundant brakes covers a lot of the other risks as well.

Which works for cars, road trains, etc.  But it doesn't work for planes. 
Planes have redundant steering and engines, but crappy brakes.


>  Whether a single-cipher system makes more sense than one with N ciphers depends on your failure scenarios.  Do you include rollback attacks?  How about "directed failure" attacks, where an opponent is able to break one of your ciphers and then force connections to use that one?  Against such attacks, adding more ciphers *decreases* your security!


I think that's pretty easy to answer for the general Internet security 
case.  If someone is capable of *attacking your cryptography* then (a) 
they are easily capable of hacking your platforms, and (b) we are 
talking about a very clueful attacker going after a very high-value 
target, so we should be demanding more money for the product.

For everyone/everything else:  take on some small risks.

The security high-water mark is reached when the general attacker sees 
50:50 odds between a cryptographic attack and a platform attack.  We 
then build the crypto a bit stronger, a bit higher, up to the 100-year 
flood level.


...
> We go back and forth on 1TCS versus algorithm agility versus algorithm evolution without stepping back and admitting that *this choice is itself a protocol design problem*, and needs to be approached as such.  A threat analysis is the first step....



That.  This is *all about protocol design informed by threat analysis*.



iang




[0] In the 1990s there was a system called Mondex that had been told by 
their regulating central bank to make sure they were safe.  So they 
designed in a second ciphersuite and developed the techniques to switch 
over in case of algorithm meltdown.  They never had to use it; indeed, 
their system failed in general at the market level.  The alternate 
systems that I knew of (it's all proprietary/secret) used 
defence-in-depth mechanisms to deal with algorithmic failure, and stuck 
more to 1TCS.

[1] Arbitrarily, I'm taking 80% deployment as the benchmark for the 
renegotiation breach.  This works because all I'm doing here is 
measuring the diameter of the OODA loop.
http://financialcryptography.com/mt/archives/001210.html

[2] Eat this email before going through security.

