[Cryptography] Why is a short HMAC key zero-padded instead of hashed?

Jerry Leichter leichter at lrw.com
Sun Feb 5 08:06:38 EST 2017


>> That may well be, but we're talking not about the actual usage of the
>> algorithm but about two *standards*, one from the IETF, one from NIST,
>> recommending a procedure ... for no known reason.  Does anyone know where the
>> "hash it if too long" mechanism came from, as it's not in the base research
>> paper?  *Someone* must have proposed it.
> 
> It came from the people who gave us IKE.
Well, that's interesting, isn't it?
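
For reference, the mechanism in question, as RFC 2104 and FIPS 198-1 describe it.  A minimal Python sketch - the 64-byte block size is SHA-256's, and the function names are mine:

    import hashlib

    BLOCK_SIZE = 64  # input block size, in bytes, of SHA-256 (also SHA-1, MD5)

    def prep_key(key, digest=hashlib.sha256):
        # The step under discussion: a key longer than one block is
        # hashed down first - a step the original paper never mentions.
        if len(key) > BLOCK_SIZE:
            key = digest(key).digest()
        # Every key is then zero-padded out to the block size.
        return key + b"\x00" * (BLOCK_SIZE - len(key))

    def hmac(key, msg, digest=hashlib.sha256):
        k = prep_key(key, digest)
        ipad = bytes(b ^ 0x36 for b in k)
        opad = bytes(b ^ 0x5c for b in k)
        return digest(opad + digest(ipad + msg).digest()).digest()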

>> But what does it say about our standards processes that unnecessary
>> complexity, solving no real problem, and *perhaps* introducing one, somehow
>> gets slipped into them?
> 
> Standards are designed by the original authors, and then there's a lot of
> horsetrading to get them adopted.  Sometimes you have to do weird stuff to
> get them through: vendor X wants feature Y in order to endorse it, so it
> goes into the standard.  Or, sometimes, vendor A invents new feature B,
> badly, so the standard gets changed to include a fixed-up version of B
> before other people copy the broken one.
My questions were somewhat rhetorical; I understand how standards come to be.  In the case of cryptographic standards, however, we've learned how dangerous this process is - how easy it is to subvert it into producing results that look good but hide subtle problems, cryptographic or merely procedural, that hand breaks to those in the know.

Others have, of course, exploited standards processes in other ways before.  Microsoft was historically a master of getting standards to include tons of options that everyone would have to implement "for compatibility", while MS had decided well ahead of time which options *they* would actually use.  And there are plenty of even older examples:  As I heard it told, the original Ethernet standards, as developed by DEC/Intel/Xerox, deliberately left out some details (on the timing or shaping of pulses, something like that) that made it extremely difficult for anyone but those three to build reliable Ethernet repeaters.

Still ... cryptographic standards *are* different, and we now know that if they're to be useful, they need to be *treated* differently.  At the least, everything in a cryptographic standard should be traceable to an independent, published analysis.  That's not the case here.  The only reference in either the NIST standard or the RFC is to the original paper - which doesn't mention long keys.

Ironically, the listed authors of the RFC are the same as the authors of the original paper!  Were they actually involved at the point the final algorithm was decided, or was it a late editorial decision by some committee somewhere?

                                                        -- Jerry


