[Cryptography] open questions in secure protocol design?

Jerry Leichter leichter at lrw.com
Sat May 30 22:21:43 EDT 2015


On May 28, 2015, at 2:31 PM, dj at deadhat.com wrote:
> Algorithm migration in this context means:
> 
> 1) We start with algorithm version 1. It is a suite of algorithms chosen
> wisely to be considered good for several years, where 'several' is long
> compared with the typical lifespan of the devices.
> 
> 2) New algorithms are adopted by the standards body in sequence (1, 2, 3...)
> when there is a reason to (for example, SHA-1 looked shaky years before it
> failed). The algorithm list is in time order. There is no branching and no
> menu to choose from or negotiate.
> 
> 3) New devices implement the current algorithm version and the next
> algorithm version if it exists.
> 
> 4) When the new algorithm version is widely deployed, policy is updated to
> deprecate the old algorithm version.
> 
> So algorithm migration is a slow migration from one cipher suite to the
> next, when supported by deployed hardware. Not a run-time negotiation
> between many cipher suites. The cadence may be a decade.
This is an interesting approach, but it's missing a careful threat analysis.  You've baked in consideration of the "death by 1000 cuts" failure, where there's plenty of time to anticipate that the current algorithm is heading for trouble.  But what if the current algorithm fails suddenly?  You have no mechanism to move quickly to something else.  Alternatively, what if it turns out, halfway through your rollout of a new algorithm, that the *new* one fails, while the old one still seems strong?  Can you somehow move back to it without enabling rollback attacks?
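
To make that concrete, here is a minimal sketch of one way to leave those doors open - a signed, monotonically sequenced policy message.  This is my own illustration, not part of dj's proposal: the Python 'cryptography' package, the PolicyStore class, and the message fields are all hypothetical.  Because a device only accepts a policy whose sequence number exceeds the last one it saw, an emergency update can advance the algorithm list quickly, or even re-allow an old algorithm, without letting an attacker replay a stale policy:

    # Hypothetical sketch: monotonic, signed algorithm-version policy.
    # Assumes the third-party 'cryptography' package and a trusted
    # policy-signing public key distributed out of band.
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PublicKey,
    )

    class PolicyStore:
        def __init__(self, signer_public_key: Ed25519PublicKey):
            self.signer = signer_public_key
            self.seq = 0         # highest sequence number accepted so far
            self.allowed = {1}   # algorithm versions currently allowed

        def apply_update(self, blob: bytes, signature: bytes) -> None:
            # Reject forgeries first; verify() raises InvalidSignature.
            self.signer.verify(signature, blob)
            policy = json.loads(blob)
            # The monotonic sequence number defeats replay of an older
            # policy, so an emergency update can even re-allow a
            # previously deprecated version without opening a rollback.
            if policy["seq"] <= self.seq:
                raise ValueError("stale policy: possible rollback attempt")
            self.seq = policy["seq"]
            self.allowed = set(policy["allowed_versions"])

Of course this only relocates the problem: the policy-signing key is now the one component that must not fail suddenly.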

Negotiation of the protocol to use makes sense in situations where endpoints can actually make a rational choice among alternatives.  You might be able to construct such a situation for communication within a group of experts - e.g., well-trained spies.  I find it very hard to come up with more common situations in which it makes sense.

Picking a single cipher is like engineering with a known single point of failure.  That's not necessarily a bad thing!  Your car's steering system has multiple single points of failure, because there's really not much you can do to make it redundant without increasing other risks due to complexity.  On the other hand, your car's brakes are dual-redundant because that's a better assignment of risks for them.  Whether a single-cipher system makes more sense than one with N ciphers depends on your failure scenarios.  Do you include rollback attacks?  How about "directed failure" attacks, where an opponent is able to break one of your ciphers and then force connections to use that one?  Against such attacks, adding more ciphers *decreases* your security!
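
The usual countermeasure to a directed-failure downgrade is to authenticate the negotiation transcript, roughly in the spirit of TLS Finished messages.  A minimal standard-library sketch, assuming a session_key from a completed key exchange (the function names are hypothetical):

    # Hypothetical sketch: detecting a "directed failure" downgrade by
    # MACing the negotiation transcript.  Standard library only.
    import hmac
    import hashlib

    def transcript_tag(session_key: bytes, offered: list[str], chosen: str) -> bytes:
        # Each end MACs what it *thinks* was offered and chosen.  If a
        # man-in-the-middle deleted the strong cipher from the offer,
        # the two transcripts differ and the tags will not match.
        transcript = ("|".join(offered) + "->" + chosen).encode()
        return hmac.new(session_key, transcript, hashlib.sha256).digest()

    def transcripts_agree(local_tag: bytes, remote_tag: bytes) -> bool:
        # Constant-time comparison of the exchanged tags.
        return hmac.compare_digest(local_tag, remote_tag)

Note the circularity, which is exactly the objection above: the tag is only as strong as the keys that protect it, so an opponent who has already broken the suite carrying the handshake can forge the very check meant to catch the downgrade.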

Consider a situation where you have two ciphers, and you think your opponent might be able to break one, *but you don't know which*.  Then you're into game theory, and the optimal strategy is a mixed one:  Choose one cipher or the other by tossing a coin each time.  (Of course, you could also encrypt with both.  That reduces you to a single-cipher approach - but the cipher is a more complicated one resulting from combining the other two, a combination that may be less well studied than either of your original choices.)
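
Both options are short to write down.  A minimal sketch, again assuming the Python 'cryptography' package; AES-GCM and ChaCha20-Poly1305 here are stand-ins of my choosing, with independent 32-byte keys:

    # Hypothetical sketch: the mixed strategy (coin toss per message)
    # and the cascade (encrypt under both ciphers).
    import os
    import secrets
    from cryptography.hazmat.primitives.ciphers.aead import (
        AESGCM,
        ChaCha20Poly1305,
    )

    def encrypt_mixed(key_a: bytes, key_b: bytes, plaintext: bytes):
        # Mixed strategy: pick one cipher uniformly at random per
        # message, using a cryptographic coin so the choice cannot be
        # predicted or biased by the opponent.
        nonce = os.urandom(12)
        if secrets.randbits(1):
            return ("aes", nonce, AESGCM(key_a).encrypt(nonce, plaintext, None))
        return ("chacha", nonce, ChaCha20Poly1305(key_b).encrypt(nonce, plaintext, None))

    def encrypt_cascade(key_a: bytes, key_b: bytes, plaintext: bytes):
        # Cascade: the inner ciphertext becomes the outer plaintext,
        # yielding the single, more complicated combined cipher
        # described above.  Keys must be independent.
        n1, n2 = os.urandom(12), os.urandom(12)
        inner = AESGCM(key_a).encrypt(n1, plaintext, None)
        outer = ChaCha20Poly1305(key_b).encrypt(n2, inner, None)
        return (n1, n2, outer)

The coin matters as much as the ciphers: if the per-message choice is predictable, the opponent can simply wait for the messages sent under the cipher he can break.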

We go back and forth on 1TCS (the "one true cipher suite") versus algorithm agility versus algorithm evolution without stepping back and admitting that *this choice is itself a protocol design problem*, and needs to be approached as such.  A threat analysis is the first step....
                                                        -- Jerry


