[Cryptography] Just in case it isn't obvious...

Ron Garret ron at flownet.com
Mon Feb 27 11:14:54 EST 2017


On Feb 27, 2017, at 4:47 AM, Bill Cox <waywardgeek at gmail.com> wrote:

> On Fri, Feb 24, 2017 at 5:27 PM, Ron Garret <ron at flownet.com> wrote:
> 
> There is an easy short-term mitigation for this: before computing the hash of any object longer than 319 bytes, compute the hash of the first 320 bytes and check if it is f92d74e3874587aaf443d1db961d4e26dde13e9c .  If it is, throw an error.  But of course that will only work until the next SHA1 collision is found.
> 
> I found another simple fix for git.  I thought it would be really hard, because "SHA1" is a hard-coded call in ~1,000 places.  Instead, just define a new function called sha1.  I've added a BLAKE2b wrapper locally.  It was a tiny change, makes it more secure, and is faster than SHA1.
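
For concreteness, the stopgap I described above amounts to something like the following.  This is a rough, untested sketch; the function names are mine, and the digest constant is the published SHA1 of the 320-byte SHAttered prefix:

    import hashlib

    # SHA1 of the shared 320-byte prefix of the two published SHAttered PDFs.
    SHATTERED_PREFIX_SHA1 = "f92d74e3874587aaf443d1db961d4e26dde13e9c"

    def looks_like_shattered(data):
        # Only objects of at least 320 bytes can carry the known colliding prefix.
        if len(data) < 320:
            return False
        return hashlib.sha1(data[:320]).hexdigest() == SHATTERED_PREFIX_SHA1

    def safe_sha1(data):
        if looks_like_shattered(data):
            raise ValueError("input begins with a known SHA1 collision prefix")
        return hashlib.sha1(data).hexdigest()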

Swapping SHA1 out for BLAKE2b like that would only work for new repos.  Your patched git would break on every existing repo.
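
The wrapper itself is indeed the easy part.  I assume you truncate the output to 160 bits so that none of git's fixed-size fields change; under that assumption the whole thing is a sketch like this:

    import hashlib

    def sha1(data):
        # Stand-in for git's internal sha1(): BLAKE2b truncated to 20 bytes
        # so object IDs keep the same length as before.
        return hashlib.blake2b(data, digest_size=20).digest()

But every object ID already recorded in an existing repository was computed with real SHA1, so nothing in that repository would verify under the new function.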

The fundamental problem with fixing git is not that it commits to SHA1 as the One True Hash; it is that it assumes there is a One True Hash to begin with.  That assumption is woven deep into the structure of git, all the way down to its data representations.  Git repos have no place to store information about which hash is being used, even at the repository level, let alone for individual blobs.
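
To make that concrete: an object's name in git is just the hash of a short type-and-length header followed by the content.  Nothing in the object format records which hash function produced the name.  Roughly (the function name here is mine):

    import hashlib

    def git_object_id(obj_type, content):
        # This is how git names an object: SHA1 over "<type> <size>\0<content>".
        # Nothing here (or anywhere else in the repo) says *which* hash was
        # used; the algorithm is implicit in the name itself.
        header = ("%s %d\0" % (obj_type, len(content))).encode()
        return hashlib.sha1(header + content).hexdigest()

    # Example: git_object_id("blob", b"hello\n") matches what `git hash-object`
    # prints for a file containing "hello\n".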

(Just on general principle, it’s a pretty good bet that if there were a simple solution that actually worked, Linus would have adopted it long ago.  It’s not like the weakness of SHA1 is a surprise.)

rg
