[Cryptography] Why is a short HMAC key zero-padded instead of hashed?

Ray Dillinger bear at sonic.net
Sun Feb 5 15:05:23 EST 2017



On 02/05/2017 03:31 AM, Jerry Leichter wrote:

>>  So what's left will presumably be oddball DIY stuff ...

> That may well be, but we're talking ... about two *standards*, one from the IETF, one from NIST, recommending a procedure ... for no known reason.  Does anyone know where the "hash it if too long" mechanism came from, as it's not in the base research paper?

In the case of longer keys, I can argue in favor of hashing, but only
on the basis of oddball protocols where there is key derivation from
multiple sources with different origins or security properties, and
that is properly a matter of protocol design rather than primitive
design.

In the case of shorter keys, zero-padding and hashing have equivalent
security properties as far as I can see; if hashing functionality is
already provided to cope with longer keys, then hashing should be
preferred on the basis of code simplicity. Secure code minimizes the
number of choice points and divergent execution paths.
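
As a minimal sketch of that always-hash path (C here, assuming a
SHA-256-style hash with a 64-byte block; the one-shot hash() is
hypothetical, not any particular library's API):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define BLOCK_LEN 64   /* block size of a SHA-256-style hash */
    #define HASH_LEN  32   /* its digest size */

    /* Hypothetical one-shot hash, standing in for SHA-256. */
    void hash(const uint8_t *msg, size_t len, uint8_t out[HASH_LEN]);

    /* Always-hash normalization: one code path for every key
     * length.  The digest fills the first HASH_LEN bytes and the
     * rest of the block stays zero. */
    void normalize_key(const uint8_t *key, size_t keylen,
                       uint8_t out[BLOCK_LEN])
    {
        memset(out, 0, BLOCK_LEN);
        hash(key, keylen, out);
    }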

On the other hand, if a fixed-size buffer is zeroed before the key is
written into it, then truncation of longer keys and zero-padding of
shorter keys are also the natural consequence of the simplest code
path.
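
A sketch of that variant, with the same assumed constants as above;
the length clamp is the only conditional, and both truncation and
zero-padding fall out of it:

    /* Zeroed fixed buffer: copying min(keylen, BLOCK_LEN) bytes
     * truncates long keys and zero-pads short ones. */
    void load_key(const uint8_t *key, size_t keylen,
                  uint8_t out[BLOCK_LEN])
    {
        size_t n = keylen < BLOCK_LEN ? keylen : BLOCK_LEN;
        memset(out, 0, BLOCK_LEN);
        memcpy(out, key, n);
    }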

One might argue in favor of hashing longer keys on security grounds and
zero-padding shorter keys on efficiency grounds, but I don't buy
complicating the execution path as a security choice.  This would IMO be
poorer practice than either of the cases above.

But IMO, all three of these are the wrong thing to do, on consistency
grounds. It's inconsistent because it means that the same operation -
adding a bit to the key size - doesn't mean the same thing at
different key sizes.  If a bunch of different longer-than-standard
keys fall into equivalent-key groups (distinct keys that behave
identically), then a primitive simply oughtn't accept keys of that
size.  And if a short input results in a long key that has a short
work factor, then hashing has disguised the problem without fixing it.
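
To make the second case concrete: if the "key" is a 32-byte digest of
a 16-bit input, the attacker's search is still only 2^16 candidates,
however long the digest is. A sketch, reusing the hypothetical hash()
and HASH_LEN above:

    /* Brute-force the short input behind a hashed-out long key. */
    int recover_input(const uint8_t target[HASH_LEN])
    {
        uint8_t guess[2], cand[HASH_LEN];
        for (uint32_t i = 0; i < 0x10000u; i++) {  /* all 2^16 inputs */
            guess[0] = (uint8_t)(i >> 8);
            guess[1] = (uint8_t)(i & 0xff);
            hash(guess, sizeof guess, cand);
            if (memcmp(cand, target, HASH_LEN) == 0)
                return (int)i;  /* input recovered */
        }
        return -1;
    }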

That is, if adding the 128th bit to a key increases the attacker's
work by a factor of two, then so should adding the 65536th bit, and if
it doesn't then the primitive ought not to be accepting 65536-bit
keys.  When we start talking about hashing or truncating to get a
particular key length, there can be good reasons to do so, but any
such reason we invoke has to be considered in the context of a
particular usage, and therefore moves the question from primitive
design to protocol design.

				Bear
