Broken SSL domain name trust model

leichter_jerrold at emc.com leichter_jerrold at emc.com
Thu Dec 1 10:05:37 EST 2005


| ...basically, there was supposed to be a binding between the URL the user
| typed in, the domain name in the URL, the domain name in the digital
| certificate, the public key in the digital certificate and something
| that certification authorities do. this has gotten terribly obfuscated
| and loses much of its security value because users rarely deal directly
| in actual URLs anymore (so the whole rest of the trust chain becomes
| significantly deprecated)....
One can look at this in more general terms.  For validation to mean
anything, what's validated has to be the semantically meaningful data -
not some incidental aspect of the transaction.  The SSL model was based
on the assumption that the URL was semantically meaningful, and further
that any other semantically meaningful data was irreversibly bound to it,
so that if the URL were valid, anything you read using that URL could
also be assumed to be equally valid.

This fails today in (at least) two different ways.  First, as you point out,
URLs are simply not semantically meaningful any more.  They are way too
complex, and they're used in ways nothing like what was envisioned when SSL
was designed.  In another dimension, things like cache poisoning attacks
lead to a situation in which, even if the URL is valid, the information
you actually get when you try to use it may not be the information that was
thought to be irreversibly bound to it.

Perhaps the right thing to do is to go back to basics.  First off, there's
your observation that for payment systems, certificates have become a
solution in search of a problem:  If you can assume you have on-line access
- and today you can - then a certificate adds nothing but overhead.

The SSL certificate model is, I contend, getting to pretty much the same
state.  Who cares if you can validate a signature using entirely off-line
data?  You have to be on-line to have any need to do such a validation, and
you form so many connections to so many sites that another one to do a
validation would be lost in the noise anyway.

Imagine an entirely different model.  First off, we separate encryption
from authentication.  Many pages have absolutely no need for encryption
anyway.  Deliver them in the clear.  To validate them, do a secure hash,
and look up the secure hash in an on-line registry which returns to you
the "registered owner" of that page.  Consider the page valid if the
registered owner is who it ought to be.  What's a registered owner?  It
could be the URL (which you never have to see - the software will take
care of that).  It could be a company name, which you *do* see:  Use a
Trustbar-like mechanism in which the company name appears as metadata
which can be (a) checked against the registry; (b) displayed in some non-
alterable form.
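
The lookup step described above can be sketched in a few lines.  This is
only an illustration: the registry is a local dict standing in for the
proposed on-line service, SHA-256 is an assumed choice of secure hash, and
all names (register_page, validate_page, etc.) are hypothetical.

```python
import hashlib

# Toy stand-in for the hypothetical on-line hash registry.  In the model
# sketched above, this lookup would be a network query to one of a small
# number of dedicated registry services.
REGISTRY = {}

def register_page(content: bytes, owner: str) -> None:
    """Registry side: bind the page's secure hash to its registered owner."""
    REGISTRY[hashlib.sha256(content).hexdigest()] = owner

def registered_owner(content: bytes):
    """Client side: hash the page exactly as received and look it up."""
    return REGISTRY.get(hashlib.sha256(content).hexdigest())

def validate_page(content: bytes, expected_owner: str) -> bool:
    """Consider the page valid if the registered owner is who it ought to be."""
    return registered_owner(content) == expected_owner

page = b"<html>Example Bank home page</html>"
register_page(page, "Example Bank, Inc.")
```

Note that any alteration of the page in transit changes its hash, so the
lookup simply finds no owner at all - the page fails validation without any
certificate chain being involved.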

The registry can also provide the public key of the registered owner, for
use if you need to establish an encrypted session.  Also, for dynamically
created pages - which can't be checked in the registry - you can use the
public key to send a signed hash value along with a page.
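
The dynamic-page case can be sketched similarly.  A real scheme would use
a public-key signature (e.g. RSA) verified against the key obtained from
the registry; to keep this sketch self-contained with only the standard
library, an HMAC over the page's hash stands in for that signature, and
the key and function names are illustrative.

```python
import hashlib
import hmac

# Stand-in for the owner's signing key.  In the real model the server
# would sign with its private key and clients would verify with the
# public key fetched from the registry.
OWNER_KEY = b"key material obtained via the registry (stand-in)"

def sign_page(content: bytes) -> str:
    """Server side: compute a signed hash to send along with the page."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(OWNER_KEY, digest, hashlib.sha256).hexdigest()

def verify_page(content: bytes, signed_hash: str) -> bool:
    """Client side: recompute the signed hash and compare in constant time."""
    return hmac.compare_digest(sign_page(content), signed_hash)

page = b"<html>account balance as of today</html>"
tag = sign_page(page)
```

Since the page is generated fresh, its hash can't be pre-registered; the
signature ties it to the registered owner's key instead.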

Notice that a phisher can exactly duplicate a page on his own site, and it
may well end up being considered valid - but he can't change the links, and
he can't change the public key.  So all he's done is provide another way to
get to the legitimate site.

The hash registries now obviously play a central role.  However, there are
a relatively small number of them, and this is all they do.  So the SSL
model should work well for them:  They can be *designed* to match the
original model.
							-- Jerry

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com