Announcing httpsy://, a YURL scheme

Ian Grigg iang at systemics.com
Wed Jul 16 18:02:17 EDT 2003


"Perry E. Metzger" wrote:

> I'm talking about replacing keys.

We were indeed talking about different things.

I'll address that here then.

> Almost every protocol out there lets
> you replace your keys at periodic intervals. Proper key hygiene
> dictates that you change your keys often enough that the security harm
> caused by disclosures or cracking is mitigated. Using this system,
> they're basically frozen forever because everyone on earth expects
> your HURL, er, YURL, to remain constant.


That's true as it is basically stated, but one
needs to consider that URLs without any crypto
suffer from exactly the same problem.  That is,
bookmarked or distributed or page-published
URLs all lose their relevance over time.

This is from different causes, but the effect
is the same: people need to do a little bit of
maintenance to keep links fresh.

I grant you that a user could be stuck with a
URL that has had a key change underneath it,
but I can't see that this is too traumatic,
as the website can simply indicate that the
key has changed.  A bit like a 404 for keys,
except that the original page might still be
displayed?
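
To make that "404 for keys" idea concrete, here is
a rough sketch in Python of how a client might check
the fingerprint embedded in an httpsy:// URL against
the key the server actually presents, and flag a
change rather than fail hard.  The URL layout and
the handling here are my own assumptions for
illustration, not anything specified by YURL:

import hashlib
from urllib.parse import urlparse

def expected_fingerprint(yurl):
    """Extract the key hash carried in the URL itself."""
    parsed = urlparse(yurl)
    if parsed.scheme != "httpsy" or not parsed.username:
        raise ValueError("not a fingerprinted httpsy:// URL")
    return parsed.username.lower()

def check_key(yurl, server_key):
    """Return 'ok' on a match, 'key-changed' on a mismatch -
    the '404 for keys' case: the page might still render,
    but the user is told the key behind the URL has moved."""
    actual = hashlib.sha1(server_key).hexdigest()
    if actual == expected_fingerprint(yurl):
        return "ok"
    return "key-changed"   # show the page, but flag the change

# Example: a URL whose embedded hash no longer matches the key.
url = "httpsy://2fd4e1c67a2d28fced849ee1bb76e7391b93eb12@example.com/"
print(check_key(url, b"a freshly reinstalled server key"))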


> > the key management issue here is pushed back to
> > the user & client.  It relies on browser assistance
> > in caching, and correlation between many introducers.
> 
> Our evidence in the long run is that users are extremely poor at
> handling security decisions like this. They don't understand the
> security implications of their actions.


That is indeed a huge issue.  Recall that the
paper mentioned yesterday did in fact research
this:  they built a browser according to a
properly constructed security model, and they
claimed good results:


X>
X> > The question at hand is this:  if secure browsing
X> > is meant to be secure, but the security is so easy
X> > to bypass, why are we bothering to secure it?
X> >
X> > Or, if we should bother to secure it, shouldn't
X> > we mandate the security model as applying to the
X> > browser as well?
X> 
X> Exactly.
X> 
X> That was the whole point of our Usenix paper last year
X> 
X> E. Ye, S.W. Smith.
X> ``Trusted Paths for Browsers.''
X> 11th Usenix Security Symposium. August 2002
X> http://www.cs.dartmouth.edu/~sws/papers/usenix02.pdf
X>


Right now, this is the only reference I know
of, and it does at least indicate that users
*can* participate in the security process with
some success.

It may be that these guys were lucky and had
some smarter users - they talk about that -
or it may be that any work done on the current
model of browser security was unlucky for some
reason.

But, whichever, user participation in the security
model should not be ruled out, if that paper is
anything to go by.  Also, see below.


> > In comparison to the CA regime,
> 
> I think that's a poor point of comparison. It is like saying "I don't
> like bad system A, so note how bad system B is so much better".


I would rephrase that as "I don't like bad system A,
and let's look at other systems, however bad, so that
we can think about how to come up with a good system."

Like you, I think the YURL system is so incomplete
that it won't move onto the tarmac, let alone fly.
But, that doesn't stop me picking over the bones
looking for some good ideas.


> There are fine ideas out there on how to handle this sort of thing
> that don't involve bad ideas at all -- I don't see why I should pick
> a bad way of doing things at all.


Well, only Tyler is 100% behind this idea; the rest
of us are either damning it or treating it with as
much hope as it can justify.

I'm sure we'd all definitely like to hear more ideas.
(That's why I took the trouble to skim those links
from this morning and summarise by example each of
their methods.)


> ... You start by looking at document A. You click on it and end up
> at document B at another site. Then you click on it and end up at
> document C at yet another site. Before long, you're trusting documents
> that are very, very far from the original HURL, er, YURL you started
> with, and you have no idea what your trust relationship with them is
> at all. It is a recipe for serious trouble. "Hmm, this claims to be
> www6.amazon.com and I somehow got there by an unknown sequence of
> clicks -- guess I'll give it my credit card number."


I still don't understand that - who said that I would
treat the second URL with as much trust as the first
one?  If that's what Tyler Close is proposing, then I
can't see it either.

But I certainly didn't pick up that impression (mind
you, we are back to the documentation situation there).


> SFS has the same problem -- by the time you've cd'ed into a few
> directories on a few file systems, you no longer have any idea what
> your trust path is at all.

OK.  That's correct.  If I send you a trusted link that
we both agree has been vetted and certified and orange-
booked and all, and you then go to that place, you have
a good trust path.

If you then cd/click/refer around a bit, well, the trust
is watered down.  I can't help you with that, all I did
was send you the first link.
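
One crude way to picture that dilution - my own
illustration, nothing from the YURL design - is to
say each referral hop retains only a fraction of
the trust of the step before it:

def trust_after_hops(initial, per_hop, hops):
    """Confidence left in the final document after following
    `hops` links, if each hop retains only `per_hop` of the
    trust of the step before it."""
    return initial * (per_hop ** hops)

print(trust_after_hops(1.0, 0.8, 0))   # the vetted link itself: 1.0
print(trust_after_hops(1.0, 0.8, 3))   # three clicks later: ~0.51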


> > > 3) It is impossible for people to determine that a "YURL" actually is
> > >    what it claims it is, given that most people can't actually
> > >    remember one hash, let alone large numbers of them, etc.
> >
> >
> > Right.  I don't think the YURL really meant for
> > people to read the things.  It could be better
> > explained, the browser has to record and correlate
> > the hashes.
> 
> So in that sense, we end up with the worst part of the SSH model --
> you get a message that a key has changed and you have no idea why or
> if it is legitimate so users ignore it. "Not an improvement."


Ahhh...  We are talking about user choice to
participate in the security process here.

SSH improves life because in the 99% of cases
where the key hasn't changed, it goes through
easily.  So we have the benefit of a widely
distributed tool that is available to all.

In the 1% of cases where the key changes, there
is a horrible warning displayed [1].

Then, the user has a choice:  ignore it, and face
the consequences.  Or find out what the hell
happened.
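
As a sketch of the mechanism being described - my
own illustration in Python, not SSH's actual code -
the client keeps a local cache of host keys: first
contact records the key, a matching key passes
silently, and a changed key triggers the horrible
warning and the choice above:

known_hosts = {}   # host -> cached key, like ~/.ssh/known_hosts

def connect(host, presented_key):
    cached = known_hosts.get(host)
    if cached is None:
        # First contact: trust on first use, remember the key.
        known_hosts[host] = presented_key
        return "first contact: key recorded"
    if cached == presented_key:
        # The ~99% case: nothing changed, no user involvement.
        return "ok"
    # The rare case: the horrible warning, and the user's choice.
    return ("WARNING: host key for %s has changed! "
            "Ignore at your own risk, or find out why." % host)

print(connect("goldserver.example", "key-A"))   # first contact
print(connect("goldserver.example", "key-A"))   # silent pass
print(connect("goldserver.example", "key-B"))   # the warning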

(When it happens to me, I generally yell at a
sysadm and he yells back that such and such a
server was reinstalled.  Good enough for me.
Translate this to the web.  This happened recently
in the gold community.  Messages went out on the
mailgroups asking what had happened to a particular
group of SSL keys that were mucking up.  The sysadm
came back within the hour and said they'd changed.
Good enough.)

Many users will ignore it, perhaps because they
aren't lucky enough to be able to yell at the sysadm.

So we are left with an economic decision:  those
that can't easily figure it out have a choice of
ignoring it or not carrying on.  They need to
balance the value of their work against the risk
of an attack.

To cut a long story short, the risk of an attack
is infinitesimal.  Not zero, as emails after that
rant of mine pointed out [2].  But still infinitesimal.

So, most users choose to ignore it.  And they
thus benefit.  Because they get their work done.

Choice is a wonderful thing.  I wouldn't want to
take choice away from users of the Internet, as
that would be to poison the very lifeblood of
the Internet.


[1] I don't know whether it is 1%, that's just
    a number that feels ok.

[2] http://www.iang.org/ssl/mallory_wolf.html



-- 
iang
