[Cryptography] ratcheting DH strengths over time

ianG iang at iang.org
Sat Nov 21 09:47:44 EST 2015


On 16/11/2015 17:59, Roland C. Dowdeswell wrote:
> On Mon, Nov 16, 2015 at 12:10:09AM +0000, ianG wrote:
>>
>
>> How could we do this in a DH protocol?  I would suggest a schedule over
>> time.  Most or all of our implementations have a timebase available.
>> Something like this:
>>
>> 2015 - 1024
>> 2016 - 1280
>> 2017 - 1536
>> 2018 - 1792
>
> A strategy that I have used in the past for the similar problem of
> choosing a PKCS#5 PBKDF2 iteration count was to benchmark the
> current machine and use that to calibrate the cost on the current
> hardware.  This approach more or less naturally increases the key
> size as hardware improves.  Of course, it doesn't address people
> running old machines in one sense (security), but it does in
> another (cost of computation).
>
> Maybe a similar strategy could be used for these key sizes?  That
> is:  use the largest key size which can be reasonably computed on
> the current machine as measured by the software with a specified
> cost.  The specified cost should be set low enough that if you
> calibrate on the most modern cutting edge machine, the cost on a
> five year old box is unlikely to be onerous.
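
For concreteness, here is roughly how I read that - time a small probe
run on the local box, scale it up to a cost budget, and floor it so an
old machine still gets some minimum.  The names and the 100ms budget
below are my own choices, not Roland's; a sketch only:

import hashlib
import os
import time

def calibrate_pbkdf2_iterations(budget_seconds=0.1, floor=100000):
    # Time a short probe run of PBKDF2-HMAC-SHA256 on this machine.
    probe = 10000
    salt = os.urandom(16)
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"calibration-only", salt, probe)
    elapsed = time.perf_counter() - start
    # Scale the probe up to the time budget, but never drop below the
    # floor, so an old or heavily loaded box still gets a minimum count.
    return max(floor, int(probe * (budget_seconds / elapsed)))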


OK, you can do that - but the problem here is that your machine isn't a
reliable predictor of the attacker's machine; it isn't even vaguely
aligned with it.  So what you're really doing is setting the number to
what the user can tolerate, which may or may not deliver the security
desired.  I think that's fine if you're not protecting much, but if
you're actually protecting something important, then not so fine.
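
By contrast, a schedule pins the target to the calendar rather than to
whatever box the user happens to own.  As a sketch of what I meant
above (the table is the one from my earlier mail; the function name and
the extrapolation past 2018 are only illustrative):

import datetime

# Minimum DH modulus size (bits) by calendar year, per the schedule above.
DH_BITS_BY_YEAR = {2015: 1024, 2016: 1280, 2017: 1536, 2018: 1792}

def minimum_dh_bits(today=None):
    year = (today or datetime.date.today()).year
    first, last = min(DH_BITS_BY_YEAR), max(DH_BITS_BY_YEAR)
    if year <= first:
        return DH_BITS_BY_YEAR[first]
    if year in DH_BITS_BY_YEAR:
        return DH_BITS_BY_YEAR[year]
    # Past the published table, keep ratcheting at the same 256 bits per
    # year rather than freezing at the last entry.
    return DH_BITS_BY_YEAR[last] + 256 * (year - last)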


iang


