[Cryptography] IETF discussion on new ECC curves.

Watson Ladd watsonbladd at gmail.com
Sun Aug 10 13:55:45 EDT 2014


On Fri, Aug 8, 2014 at 4:34 PM, Stephen Farrell
<stephen.farrell at cs.tcd.ie> wrote:
>
> Two wee comments...
>
> On 08/08/14 17:28, ianG wrote:
>
>> Why do we pander to these organisations?  People quote Russian and
>> Chinese ciphers, but I don't see why we should inflict the choice on
>> the rest of the net just because some organisation thinks they'd like
>> to push an agenda.
>>
>> It seems to be a logical absurdity.  NIST has a standards suite that
>> people think highly of.  So we have to accept NIST.
>>
>> So, if we accept NIST, we now must let the Russians GOSTs in.  And
>> the Chinese.  ... We're back then at the same place of vanity
>> ciphers, 'cept on a national level.  Absurd.
>
> Well yes and no, but mostly yes. Purely personally I figure that's
> because NIST did such a good job with AES that we figured that
> suite B would be fine. The "no" part is that unfortunately once
> we'd gone there, and NIST do turn out to be a .gov, that did make
> it impractical to easily say no to other .gov.ccTLD types. Somewhat
> absurd though, no question.

Actually, letting in RC4 after it was cryptanalyzed was a mistake. How
did this happen? Stupidity: no one took the time to reevaluate the
ciphers in TLS 1.0 when moving to TLS 1.1. How did Triple Handshake
and the Renegotiation Bug happen? Extra complexity and misfeatures
that made academic analysis far harder than it needed to be.

By contrast, IKEv2 was designed by Hugo Krawczyk and adopted wholesale.
The result is far better (the rest of IPsec is still pretty bad: lots
of red buttons waiting to be pressed).
>
> ...
>
>> If say IETF committees
>
> Sigh. Ian - you have participated on a few IETF mailing lists
> recently. Are you a part of this cabal of committees now? I'm
> guessing you would reasonably say no. And the reason is not that
> you're special, but rather that there is no cabal of committees.
> That's just not how it works and continuing to describe the IETF
> thusly seems to me to lack... style. Well accuracy at least.

Have you ever sat down, examined the output of WGs, and tried to go
back from that to the process that produced them? It's pretty clear
that IETF WGs lead in a lot of cases to very poor designs, for a
number of sociological reasons similar to those inherent in design by
committee. The joke about compilers having n-1 passes, where n is the
number of people working on them, is pretty spot on. Now apply it to a
process with no structure.

(That (some) IETF standards are bad is not up for debate: DNSSEC, TLS,
probably dozens that haven't attracted attention).

There are a number of reasons I see for this: the first is that the
sort of expertise that makes Rogaway believable and Joe Someone else
not is really hard to evaluate. This is especially true when the
argument for the correct position is "Consider a multitape Turing
machine A limited to n oracle queries with advantage \epsilon in game
\mathcal{G}..." and the argument for the wrong position is "the MAC
might leak information", or when the arguments for the wrong position
have flaws too subtle to spot without training.

The second is that it's very easy to think that your particular
feature or desire won't massively increase complexity. But if everyone
thinks this way bad things happen. At least in aerospace the thing
needs to fly with all the features, limiting the impact. But see the
F-35 Lightning II.

The third is that it's very hard to make a complaint like "I don't
have confidence in this design" seem meaningful to people who don't
understand cryptography and the ease with which good-seeming designs
can fail. People also become far too invested in the outcome of the
product: there was an eight-page report, not made public sadly, about
why TLS 1.0 should be sent back to the WG. But it was decided that
this wouldn't work, so it was better to push what we had. Sentences
like "not believed to be exploitable" should be red flags: all too
often the belief turns out to be wrong.

Implementor feedback is often completely ignored. How someone thinks
it is practical to dynamically validate that a prime is safe enough
for Diffie-Hellman is beyond me: only someone who has never
implemented it, or seen how long it takes, could believe this is
feasible.
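To make the cost concrete, here is a minimal sketch of what "validating
a safe prime at runtime" actually entails: a Miller-Rabin primality
test run on both p and (p-1)/2. The function names are mine, not from
any spec; the point is that for a 2048-bit group prime this means many
rounds of modular exponentiation on two ~2000-bit numbers, per
validation, which is exactly the expense implementors complain about.

```python
import random

def probably_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    # Trial division by small primes handles easy composites cheaply.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # the expensive step: a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness that n is composite
    return True

def is_safe_prime(p):
    """A 'safe' DH prime: p is prime and (p - 1) / 2 is also prime."""
    return probably_prime(p) and probably_prime((p - 1) // 2)
```

Even this sketch makes the asymmetry obvious: the server that *chose*
the group did this work once, offline; forcing every peer to redo it
per-connection is what doesn't survive contact with implementation.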

It's to the point we have people openly speculating that some primes
are more "proven" than others in Diffie-Hellman. Newsflash: that's not
what cryptanalysts do all day. Network engineers should leave crypto
to the cryptographer, singular, who designs a solution to the problem.

Sincerely,
Watson Ladd