PlayStation 3 predicts next US president

William Allen Simpson william.allen.simpson at gmail.com
Thu Dec 6 09:30:21 EST 2007


I had intended to stop posting, but after the request for history and a few
misguided postings, please indulge me.  I'll try not to be too long.

We've been talking about "collision" and "preimage" resistance at least
since 1989.  We didn't use those terms (I'm not fond of "preimage").  We
merely had a clever bit of design by Rivest, and were eager to use the
tools in our network protocols, but had the usual qualms about new things.
If anything, being security minded, we were somewhat paranoid!

Remember, at the time we had 80286s in deployment, and MD2 was considered
too slow!  So, MD4 came along (1990), and its little-endian design was
fast enough to use on a server with 24 incoming connections.  Not only was
it "computationally infeasible" to find a collision, it was barely
feasible to use the functions at (56 Kbps) line speed!

There was some concern that MD4 would not be sufficiently resistant in the
future.  To be "safe", PPP CHAP was originally specified with either MD2
or MD4.  Later, by 1994, we had settled upon MD5 as the compromise.

To answer Benne's question, I was introduced to chosen-prefix collisions
(again, we didn't use that term) by Dave Balenson after the Common
Authentication Technology (CAT) Working Group meeting in Santa Fe circa
mid-1991, and changed the PPP CHAP hash field order to protect against the
possibility.  The original drafts hashed the packet in field order.
Splitting the Identifier from Challenge (with the secret between) protects
against prefix calculations (and provides a modicum of replay protection).
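
The surviving field order in RFC 1994 makes this concrete; a minimal
sketch (Python used throughout for illustration):

    import hashlib

    def chap_response(identifier: int, secret: bytes,
                      challenge: bytes) -> bytes:
        # RFC 1994: Response = MD5(Identifier ++ secret ++ Challenge).
        # With the secret BETWEEN the attacker-visible fields, neither
        # a chosen prefix nor a chosen suffix can be precomputed.
        return hashlib.md5(bytes([identifier]) + secret
                           + challenge).digest()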

Within a couple of years, I would have included the Length as well, and
additional MD-strengthening after the secret.  The current CHAP design
depends upon the final MD-strengthening to protect against appending
attacks on the secret.  Actually, PPP CHAP is probably not vulnerable, as
the Challenge is short and of fixed size for a particular connection.
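
Concretely, "MD-strengthening" is MD5's own final padding block: a
0x80 byte, zero fill, and the 64-bit bit count of the message.  A
minimal helper (used again below):

    def md_padding(length: int) -> bytes:
        # MD5's MD-strengthening for a message of `length` bytes:
        # one 0x80 byte, zero fill to 56 mod 64, then the 64-bit
        # little-endian bit count of the message.
        zeros = (55 - length) % 64
        return (b"\x80" + b"\x00" * zeros
                + (length * 8).to_bytes(8, "little"))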

We added length for IPv6 in 1993, in what I retrospectively call the L-MAC
algorithm.  That evolved in 1994 to the "envelope" method (secret before
and after the data), and again in 1995 after the "On the Security of Two
MAC Algorithms" paper presented at the rump session of Crypto '95.  Within
days, we adopted the recommended additional MD-strengthening between the
data and the trailing secret, presented at the next IETF (and later
published) as the "interleaved padding" IP-MAC algorithm.
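
In simplified form, IP-MAC looks something like this (a sketch using
the md_padding helper above; the actual specification also inserts
fill between the leading secret and the data, elided here):

    import hashlib

    def ip_mac(secret: bytes, data: bytes) -> bytes:
        # Envelope MAC: secret before AND after the data, with
        # explicit MD-strengthening interleaved between the data and
        # the trailing secret, so neither the data nor the secret can
        # be extended by appending.
        inner = secret + data
        return hashlib.md5(inner + md_padding(len(inner))
                           + secret).digest()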

Sadly, the slower and weaker H-MAC (even though developed after IP-MAC) is
the currently used specification.  Sometimes, folks just assume that
something published later is better.  A reminder that the pressure of
commerce, lobbying, and politics often trumps our best efforts.
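
For comparison, H-MAC (RFC 2104) nests two complete hash
computations, which is where the speed penalty comes from.  A minimal
sketch, assuming a secret no longer than one 64-byte block:

    import hashlib

    def h_mac_md5(secret: bytes, data: bytes) -> bytes:
        # RFC 2104: MD5((K ^ opad) ++ MD5((K ^ ipad) ++ data)),
        # with K the secret zero-padded to the 64-byte block size.
        key = secret.ljust(64, b"\x00")
        inner = hashlib.md5(bytes(b ^ 0x36 for b in key) + data).digest()
        return hashlib.md5(bytes(b ^ 0x5C for b in key) + inner).digest()

(Python's standard hmac module, hmac.new(secret, data, hashlib.md5),
computes the same value for such keys.)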

To reiterate, when designing security systems, there is a continuum of
possibilities, and we choose those that fit the threat model.  Even today,
with laptop processors that out-perform the supercomputers of that era,
PPP CHAP with MD5 is still resistant to such limited time-frame attacks
over a serial link.

As our understanding improves, it is perfectly reasonable to evolve our
designs.  That's why it is so important to specify flexible protocols from
the very beginning!


Dirk-Willem van Gulik wrote:
> .... And on unsealing
> day, which for tax reasons can be months later,  the govt. entity just 
> checks the MD5's versus the RFC3161 attest. ...
> 
> An in-house Mallory (at the bidder) may well want to tweak things a bit 
> and make several doctored copies with different bid levels; and send in 
> the one joint MD5 through the RFC3161 service.
> 
Reminding you that this is something like a notary function, *not* a code
distribution or signing function.  And the attack requires (as I've shown
repeatedly) the original document generator to be compromised.

Folks on this list seem to forget that the legal purpose of a notary is
not to protect the bidder/contractor/deed/etc.  It is to protect the
*public* against *fraudulent* bidders/contractors/deeds/etc.  As we all
agree (except for one increasingly phlegmatic person), the only purpose of
a notary is to bind a person to a document at a particular time.

The notary would never sign a hash generated by somebody else.  Instead,
the notary generates its own document (from its own tuples), and signs its
own document, documenting that some other document was submitted by some
person before some particular time.
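
In other words, the notary's record is built from its own tuple of
(submitter, digest, time); a minimal sketch, with hypothetical names
and the signature step elided:

    import hashlib, time

    def notary_record(submitter: str, submitted: bytes) -> bytes:
        # The notary computes the digest itself, and chooses its own
        # record layout and timestamp, so a chosen-prefix collision in
        # the submitted document cannot control what the notary signs.
        digest = hashlib.md5(submitted).hexdigest()
        return ("%s|%s|%d" % (submitter, digest,
                              int(time.time()))).encode()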

As in code distribution, every certifier *MUST* *ALWAYS* generate their
own document.  Then, there is no chosen-prefix collision, and the attack
does not apply.

In your hypothetical, the certifier virtually guarantees that Mallory
will be caught!  When Mallory tries to assert the claim, the bids can
easily be compared.  When the verifier compares the original documents
and hashes with the later hashes, it will become readily apparent that two
documents have the same hash.  And then the trial judge will appoint a
special master (such as me) to review the documents.

It would be no problem to show that the documents re-typed by a
certified court reporter do not generate the same hash.  It would also be
no problem to show that the documents with the common hash also have an
"invisible" block of data.  Even a jury will easily understand that
apparently random garbage that serves no purpose other than to obfuscate
the document is proof beyond a reasonable doubt of conspiracy and fraud.

And because the document hash is certifiably bound to a person, there's
no escaping the conviction....

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com


