[Cryptography] Why is a short HMAC key zero-padded instead of hashed?

Jerry Leichter leichter at lrw.com
Sun Feb 5 06:31:52 EST 2017


>> My suggestion is that HMAC should be like AES:  Defined for a random key
>> whose length equals the input block size of the hash function on which it's
>> based.
> 
> That's already how it's used in the major protocols that use HMAC: SSH, SSL,
> CMS, and so on.  So what's left will presumably be oddball DIY stuff, which
> probably does all sorts of other odd things in any case, so the HMAC keying
> will be the least of your worries.
That may well be, but we're not talking about the actual usage of the algorithm; we're talking about two *standards*, one from the IETF, one from NIST, recommending a procedure ... for no known reason.  Does anyone know where the "hash it if too long" mechanism came from, as it's not in the base research paper?  *Someone* must have proposed it.  Or was it in some pre-existing implementation that got standardized without anyone thinking about it?  Note that unlike the zero extension - which *could* have appeared as a side effect of careful code, since the reference implementation in the RFC does it simply by pre-clearing a fixed-size buffer - pre-hashing requires a deliberate effort.  (Not that that's where the zero extension came from; it was in the base research paper.)
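
For concreteness, here is a minimal Python sketch of the RFC 2104 key handling under discussion, assuming SHA-256 as the underlying hash (64-byte input block); the function and variable names are mine, not from the RFC:

import hashlib
import hmac  # stdlib implementation, used only as a cross-check below

BLOCK_SIZE = 64  # SHA-256 input block size in bytes

def hmac_sha256(key: bytes, message: bytes) -> bytes:
    # The deliberate step in question: a key longer than the block size
    # is first hashed down to the digest size.
    if len(key) > BLOCK_SIZE:
        key = hashlib.sha256(key).digest()
    # The zero extension from the original paper: a shorter key is
    # padded with zero bytes up to the block size.  In the RFC's C
    # reference code this falls out of pre-clearing a fixed-size buffer.
    key = key.ljust(BLOCK_SIZE, b"\x00")
    inner = hashlib.sha256(bytes(b ^ 0x36 for b in key) + message).digest()
    return hashlib.sha256(bytes(b ^ 0x5C for b in key) + inner).digest()

# One consequence of the zero extension: a short key and the same key
# explicitly padded with zero bytes yield the identical MAC.
k, msg = b"short key", b"message"
assert hmac_sha256(k, msg) == hmac_sha256(k + b"\x00" * 4, msg)
assert hmac_sha256(k, msg) == hmac.new(k, msg, hashlib.sha256).digest()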

This is *probably* not a big deal:  No one does it, and it *probably* doesn't introduce a vulnerability even if they do.  But what does it say about our standards processes that unnecessary complexity, solving no real problem, and *perhaps* introducing one, somehow gets slipped into them?  Haven't we seen this story before?  It didn't work out so well for us - the "big" us - in the case of Dual_EC_DRBG....

                                                        -- Jerry


