[Cryptography] ratcheting DH strengths over time

ianG iang at iang.org
Sat Nov 21 09:49:01 EST 2015


On 16/11/2015 15:57 pm, Perry E. Metzger wrote:
> On Mon, 16 Nov 2015 00:10:09 +0000 ianG <iang at iang.org> wrote:
>> [Suggests that software should up key sizes, DH group sizes,
>> etc. automatically with time, taking the decision to do so out of
>> user hands.]
>
> There's an old adage that software lives longer than hardware, and I
> think this does make sense if only because it takes certain kinds of
> decisions away not only from end users but also from release
> engineers, system integrators, and others who likely aren't going to
> spend a lot of time thinking about things like the DH moduli their
> systems ship with.


I see two benefits to this approach.  One is that it "works" in and of 
itself, to some degree of approximation - we can reasonably map out a 
curve of expected strength, and that is a better prediction than, say, 
aiming at a single hypothetical point in the future.
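To make that "curve of expected strength" concrete, here's a minimal sketch of a date-keyed schedule - the years and modulus sizes below are illustrative assumptions, not a vetted recommendation:

```python
# Hypothetical ratchet schedule: the minimum DH modulus size is a function
# of the calendar year, not a constant frozen at release time.
# The (year, bits) entries are illustrative, not a recommendation.
SCHEDULE = [
    (2015, 2048),  # (first year in effect, minimum modulus bits)
    (2025, 3072),
    (2035, 4096),
]

def min_dh_bits(year: int) -> int:
    """Return the scheduled minimum DH modulus size for a given year."""
    bits = SCHEDULE[0][1]
    for start, size in SCHEDULE:
        if year >= start:
            bits = size
    return bits
```

Software built this way needs no operator decision: it consults the clock and the table, and deprecation happens on schedule.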

The second is realpolitik: it might well force the designer to think 
about the bigger problem - the future.  The big flaw that has emerged in 
the last 5 years, as evidenced by all (?) the crypto fails we've seen, 
without exception (?), is "old stuff" that we knew should have been 
deprecated and removed ages ago.

With apologies to Jon, that's the world we live in.

So, what can we do to encourage protocol designers to start planning for 
end-of-life?  That's a very general question of course, because we're 
hacking at what amounts to a 'best practice' of handwaving the future 
away, and best practices take decades to change.  But if we turn around 
and say "use an ECDH schedule rather than a free choice or a single 
too-big number," we're actually forcing them to pick that schedule.  
Which then gets them thinking about the end-of-life problem.


> That said, I do worry about the effect of such schedules in the one
> place that hardware lives ridiculously long, which is embedded
> hardware. Often, small embedded controllers remain in place for insane
> periods of time, sometimes vastly longer than anyone had originally
> envisioned, and an underpowered controller that communicates very
> nicely with a key of size N might not handle a key of size 3N nearly
> so well. Furthermore, engineers are unlikely to test their equipment with
> the date set twenty years into the future near the End of Life.


As a partial answer: if I were an engineer of one of these devices, I'd 
be pulling the crypto from some well-known stack that would therefore 
have the schedule built in already.  I'd expect the testing to be 
provided as well.

Also, if one reads the other thread "Long-term security" it seems that

1. the IoTarget has to keep going, elsewise the owner freezes;
2. automatic updates are not the answer;
3. expiry is the least worst of the bad options.

In contrast, this does suggest a 4th option:  the device gets slower and 
slower.  E.g., it appears to be wearing out ;)

But, yes, I agree that the IoT device has to keep working, so the notion 
of arbitrary increases out to infinity doesn't seem like a good idea. 
Something like 1x -> 2x -> 3x and stop.
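A capped ratchet like that might look as follows - the step length and the cap are illustrative assumptions:

```python
def capped_work_factor(years_in_service: int,
                       step_years: int = 10,
                       cap: int = 3) -> int:
    """Ratchet the key-size multiplier up one notch per step
    (1x -> 2x -> 3x), then stop at the cap so a constrained
    controller never falls over late in life."""
    return min(1 + years_in_service // step_years, cap)
```

An underpowered controller thus gets slower for a while, then stabilises, rather than chasing arbitrary increases out to infinity.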

Also, there is nothing about the approach that means everyone has to 
follow suit.  If the IoT manufacturer just hacks his cryptostack to turn 
off scheduled increases, and he does so on the controllers as well, he's 
lost nothing except "compliance with an IETF WG" which he's happy to 
ignore anyway.


> Ideally, one should set the constants on such controllers to the ones
> that will be viable at the end of life (say picking likely end of
> manufacturer sale plus thirty years), thus forcing the designers to
> confront not just today's computing needs but possibly tomorrows --
> but it also seems unlikely that we can get engineers to do that.


Yes - this is the challenge, to get the protocol engineers to think of 
the future EoL scenario at a crypto level.  But I think, to use Jerry's 
phrase, we will only succeed if we keep banging the drum.


> So, I suppose my overall comment is: this is an interesting idea, but I
> suspect it will not be entirely easy to make work, especially in the
> places where it is most needed (that is, hardware being kept around
> way past the point where it should be retired.)
>
> Regardless, let me remind almost everyone that the main problem in the
> recent Logjam attacks was a combination of a downgrade attack and the
> fact that engineers had forgotten that DH primes really should not be
> shared by everyone on earth. One could easily mis-engineer future
> systems so that they upped the sizes of various keys and groups and
> yet remained vulnerable to things like downgrade attacks etc. Once
> again, people tend not to attack cryptography head on, but to find
> weak points and go after *those*.


Yes - but even that is solved by the notion that we should be replacing 
things within a reasonable timeframe.  Although the danger of sharing DH 
primes was known once upon a time, it was forgotten -- and we only 
remembered when the attacks arrived.
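Perry's downgrade point is usually countered by binding the whole negotiation into the handshake.  Here's a minimal sketch of that transcript-MAC idea, with illustrative names and wire format (not any specific protocol's):

```python
import hashlib
import hmac

def transcript_mac(shared_key: bytes, offered: list, chosen: str) -> bytes:
    """MAC the full negotiation transcript - everything offered plus what
    was chosen - so stripping the strong options in transit changes the
    MAC and the handshake fails."""
    transcript = ("|".join(offered) + "->" + chosen).encode()
    return hmac.new(shared_key, transcript, hashlib.sha256).digest()
```

If an attacker deletes the strong group from the offered list, the two ends compute different MACs over their differing transcripts, and the downgrade is detected even though both sides still support the weak group.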

So, again, we are in a situation where our knowledge (typically) gets 
better in the long run.  Attacks also get better in the long run.  So we 
are again forced to conclude that what we did 10 years ago, no matter 
how carefully, will need a refresh in some sense or other.

If the Internet of Targets world refuses to think about these things, 
that is their engineering decision.  But we on the Internet of software 
shouldn't hide behind that excuse.  We should be better than that.



iang


