[Cryptography] a question on consensus over algorithmic agility

ianG iang at iang.org
Thu Jun 26 06:33:53 EDT 2014


Just to separate the meta-notion of consensus from the precise argument
at hand, here is a response under separate cover.  Shooting from the
rough, take cover!



On 25/06/2014 23:53, Stephen Farrell wrote:

> Please do note though that:
> 
> - there's the backup algs argument that the protocol has to have
> codepoints defined and interop working now for the next one we
> may need to deploy in some years e.g. it's taking some work to get
> the fine detail of how to do 25519 (and ed25519) key representation
> in TLS now, and that has to happen well before being needed for a
> backup alg; you just cannot do that and get interop in the time it
> takes to upgrade a single implementation and assuming all future
> alg breaks will be slow burners is not considered good engineering
> by many


There is the backup algs argument, and then there is the slow
implementation issue.  I'll take them separately.

- firstly, the backup algs argument is a postulation that an alg can break.

Debunk 1.  This has never really happened, within reasonable bounds
(pick a proper algorithm to start with, and migrate as it is getting
weaker).

Debunk 2.  In risk analysis, if something has never happened, that means
we shouldn't mitigate it, unless it is free to mitigate.  But the
mitigation is indeed more costly, as we now know that the presence of
multiple negotiation options is causing pain and loss of security.  In
short, we're making security worse to deal with a mythical break.

Debunk 3.  Indeed, we have so much confidence in our algorithms that,
as the following shows, we've never taken the 'backup alg' argument
seriously:

Take your favourite two block ciphers.  Run them with independent
keys and XOR the outputs.  This provides the strength of both,
together.  In agile terms, it totally dominates the agility argument
because it employs both algorithms, fully, at the same time, and it
eliminates the negotiation morass.  (The same thing can be done with
other algs, like HMACs.)
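
For concreteness, here is a minimal sketch in Python of that XOR
combiner, done over two keystreams.  It assumes the pyca/cryptography
package; the AES+ChaCha20 pairing, the key sizes and all the names
are mine, purely illustrative, not a vetted construction:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def double_encrypt(plaintext: bytes, k1: bytes, k2: bytes,
                       nonce: bytes) -> bytes:
        # Keystream 1: AES-256 in CTR mode under key k1 (32 bytes).
        ks1 = Cipher(algorithms.AES(k1), modes.CTR(nonce)) \
                  .encryptor().update(b"\x00" * len(plaintext))
        # Keystream 2: ChaCha20 under the independent key k2 (32 bytes).
        # pyca's ChaCha20 wants a 16-byte nonce (counter || nonce).
        ks2 = Cipher(algorithms.ChaCha20(k2, nonce), mode=None) \
                  .encryptor().update(b"\x00" * len(plaintext))
        # XOR both keystreams into the plaintext: recovering the message
        # needs a break of *both* ciphers, and nothing is negotiated.
        return bytes(p ^ a ^ b for p, a, b in zip(plaintext, ks1, ks2))

    k1, k2, nonce = os.urandom(32), os.urandom(32), os.urandom(16)
    ciphertext = double_encrypt(b"attack at dawn", k1, k2, nonce)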

Yet, we rarely do that.  We rarely use the superior engineering; we
always fall back to the negotiation approach.  Where I've seen it
used is in a Dutch cipherphone, and I believe some blockchain
technologies combine two hash functions.  Rare!

Why?  (a) Because cryptographers teach us that doing this is futile,
because if it were a good idea *they'd already have done it*.  (b)
Many would say that using two algorithms would be a performance hit
(never mind that they have created a performance uncertainty...),
which shows that the 'backup alg' argument is secondary to
performance.  Which kind of puts it in the shade as far as security
is concerned.

In security and software engineering, doubling the cipher totally
dominates the backup alg argument ... conundrum?  Or debunk.

Debunk 4.  Although we've never had an alg break, we have had protocol
breaks.  So why don't we have a backup protocol?  There are plenty of
good protocol ideas out there.  Why doesn't the IETF entertain both a
TLS over TCP and a TLS over UDP, so that if there are any problems,
we switch?

(No, I'm not talking about DTLS; more like a QUIC within TLS as a
selectable protocol element.)

The reason is that we're assuming alg breaks *because we feel we can
fix them*, whereas we're not assuming protocol breaks because we
don't feel we can fix them.  This is deriving our requirements from
our capabilities, which is to ignore the user public's needs and
concentrate on our own need for gratification.

I'm not sure what this argumentation is called, but it ain't
engineering.




- then secondly, it takes time to bed a new alg in.  Well, true.  But
it takes time to bed anything new in.  This rests on the above
argument that breaks happen in algs and we need to mitigate; but it
is also a false separation.  In practice, breaks never happen in
algs, but they do happen in protocols, and fixing protocols takes time.

So, yes, this is actually the opportunity.  The new protocol should be
in the works and well advanced.  And that's the perfect time to change
ciphers.



> - there are national algs in the world, and it's not easy to say
> no to all of those in all cases for all implementers and deployments
> (I wish it were)


DUAL_EC, Snowden and AES.

The DUAL_EC story is a case study of how a national algorithm can be
used to force an insecurity into a cryptosystem.

Therefore, the case for national algs can be seen as an open door for
insecurity.  I'm not sure how much clearer it gets, as Snowden has
given us all the answers here.

Finally, AES.  The AES algorithm wasn't a national algorithm.  It was
a worldwide competition, and the winner came from some uni somewhere.
As all the cryptographers involved were competitors, as they all
scrutinised the winner and gave the final five a clean bill of
health, and as they participated in and scrutinised the process,
we've got a fairly good answer to the algorithm-selection question --
at least against bowing to national prejudices.


> - sometimes you have platform issues, e.g. lots of layer 2 h/w does
> AES CCM, whereas we generally prefer AES GCM in many cases. Both
> are reasonable decisions made in different places (IEEE and IETF
> for example) so protocols on the boundaries (e.g. CoAP) will need
> to support one or both of those in different deployments;


They're in the past.  We're planning for the future.  At some stage we
have to cut away the legacy of the past.

The hardware argument is just a variation of the vanity argument, as
expressed by the big corps that send their salesmen into the committees
to press their arguments for more sales.  Remember, for each of those
big corp salesmen, there are groups out there in the world who can't
press their case because the entrance price to IETF is too high, and
they get no security because those same big corp salesmen stuffed up
previous protocols.

And if they really care that much, what they can do is install feature
changes.  If the protocol is a better protocol and wins the day, and
if the single cipher suite is better and it wins, then they'll adjust
their hardware.

Software leads, hardware follows.

Indeed, if the protocol wins, it will win in the software domain.  It
won't get any help from hardware because nobody will create the extra
linkage until it is worth wringing the speed out.  First deployment.
Then speed.

Also, note that inside the open software world, outside the IETF,
there is widespread suspicion of hardware solutions, e.g., hardware
RNGs and hardware AES.  So much so that if a WG voted for, say,
Salsa20 *because it was not in hardware*, you'd make a lot of friends
in the software world.  And it is the software devs who will decide
the deployment win, not the hardware devs...  And of those, it is
primarily the open software crowd who are most suspicious of hardware.


> the same
> thing can happen for timing reasons e.g. if a highly popular dev
> environment just doesn't support the perfect ciphersuite but has
> one nearly as good and no sign of an update coming


Same story.  Actually, I have a lot of sympathy for this argument, as
some libraries are slow to respond.  My pet peeve is Java's
institutionalised crypto (aka the JCE), which is so limiting, kludgy
and ancient that if you were planning for that, it sometimes feels as
though you'd be better off doing plaintext.

But this is not an argument for the protocol.  It's a strawman.  The
algorithms in use in a typical tight suite can be coded or cribbed in
less time than it takes to manage the negotiation suite.  You just
need to make sure that the protocol specifies some nice algs within a
tightly designed suite, all with properly documented test numbers.
Do a reference implementation in some easy language, and find
yourself a cryptoplumber per other language.
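
The 'properly documented test numbers' part is cheap to honour.
E.g., a one-line reference check against the published FIPS 180 test
vector for SHA-256 (Python's stdlib hashlib stands in here for
whatever alg the tight suite actually fixes; a hand-rolled
implementation gets verified against the same numbers):

    import hashlib

    # Published FIPS 180 test vector: SHA-256("abc").
    KNOWN = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
    assert hashlib.sha256(b"abc").hexdigest() == KNOWN, "implementation broken"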

It's security coding.  You get better results by *not using a
library*.  (OK, I'm the only one who believes that...)


> I do however agree that once you start down that road you quickly
> end up in 300+ ciphersuite hell, so yes alg agility is a pain, but
> maybe an unavoidable one.


The apocryphal story of the Dutch boy and the dyke ... once you let one
choice in, well ... and we all know how that story ends.


> I could however see an argument that only 1 bit be allocated for
> algorithm/ciphersuite identifiers though that doesn't work so well
> for the platform issue, it might be good enough for the other two.


I suppose if one were in the process of fixing TLS <cough /> then one
could postulate a sort of A/B process.  E.g., in each new version
release, the older of the pair is retired, the newer of the previous
two takes pole position, and a new-generation suite comes in as
backup.  If it were a disciplined approach, such that for example the
protocol said the primary suite in use was A for odd versions and B
for evens, then it might stick enough for people to handle it.
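
To make that concrete, a toy model of the rotation (entirely
hypothetical, down to the suite names; this is nothing any protocol
actually defines):

    # Toy A/B rotation: at version v, generation v is primary and
    # generation v+1 is backup; all older generations are retired.
    GENERATIONS = ["suite-1", "suite-2", "suite-3", "suite-4", "suite-5"]

    def suites_at_version(v: int) -> dict:
        primary, backup = GENERATIONS[v - 1], GENERATIONS[v]
        # Parity fixes which codepoint holds the primary:
        # A on odd versions, B on even ones.
        if v % 2 == 1:
            return {"A": primary, "B": backup}
        return {"B": primary, "A": backup}

    # v=1 -> {'A': 'suite-1', 'B': 'suite-2'}
    # v=2 -> {'B': 'suite-2', 'A': 'suite-3'}   (suite-1 now retired)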

(But such an argument makes no sense in the context of TCPcrypt.)


iang

