[Cryptography] "Trust in digital certificate ecosystem eroding"

Jerry Leichter leichter at lrw.com
Sun May 3 16:02:05 EDT 2015


On May 3, 2015, at 11:45 AM, John Levine <johnl at iecc.com> wrote:
>> It would take _considerable_ (re-)training of users to actually take
>> security warnings seriously, and to reduce the number of false
>> warnings.
> 
> All the studies I've seen say that no amount of training will make
> users take security warnings seriously.  Partly it's the number of
> false alarms, partly it's a not totally irrational tradeoff between
> the risk of what might happen and the desire to get their work done.
I agree in general, but even though I don't think you meant it that way at all, the general engineer's mindset of blaming the user slipped in there.  "If only users would take the training to heart!"  No.  If users need training to make reasonable choices, the first thing to look at is whether the choices really *are* reasonable.

I'll give you an example.  One of my mail services comes from a small ISP. Periodically, my Mac reports that their certificate is invalid because it comes from an unknown authority.  The first 10 times or so this happened, I'd look at the certificate.  It would come from some GoDaddy variant, and would otherwise appear to be correct.  How would I check further?  With hundreds of trusted CA's on the list - including a bunch of GoDaddy variants - the effort to determine just why this one is being flagged is too difficult to be practical.  So in the end, after staring at the thing a bit, I end up clicking Trust (temporarily).  Eventually, I found myself skipping over the "staring" stage - which always has the same outcome anyway - and just hitting Trust immediately.  I know it's bad practice - but do I have any realistic alternatives?  The entire system is set up so that I don't, if I want my mail to continue to flow.  *And I know a hell of a lot more about this stuff than most users!*
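The scale of that "hundreds of trusted CA's" problem is easy to see for yourself.  Here's a minimal sketch using Python's standard ssl module, which loads the platform's default trust store much the way a mail client or browser would (the exact count and names depend entirely on your machine):

```python
import ssl

# Load the platform's default trust store, as a TLS client would.
ctx = ssl.create_default_context()

# How many root CAs does this machine trust out of the box?
stats = ctx.cert_store_stats()
print(f"Trusted root certificates installed: {stats['x509_ca']}")

# Show a few of the trusted roots' subjects.  Note how many
# near-identical vendor variants a user would have to sift through
# to work out why one particular chain is being flagged.
for ca in ctx.get_ca_certs()[:5]:
    subject = dict(x[0] for x in ca["subject"])
    print(subject.get("organizationName"), "-", subject.get("commonName"))
```

On a typical desktop this prints well over a hundred roots, several from the same vendor families, which is exactly why "just look at the certificate" is not a realistic instruction.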

Forget the blame game.  Forget training.  We decided long ago that telling people to drive carefully didn't save lives - adding seat belts and air bags to cars saved lives.

> If this stuff is going to work at all, it has to work automatically.
Yes ... but.  "Automatically" is another of those words beloved by engineers.  Translated, it means "without humans in the loop to screw things up."  But we're talking about *trust* here, which is a *human* concept, not a machine concept.

What we've come down to, in reality, is this:  Humans place their trust in their browser makers, who are effectively super-CA's.  (Businesses, in theory, could build their own internal lists of trusted CA's, but I've never seen any that do.  Rather, they just add their own internal CA to the list they get from the browser makers.)

Browser makers never really agreed to take on this job, they aren't compensated for it, and they aren't even really punished when they get it wrong.  It should be no surprise that they are not deserving of the trust we are forced to give them.

I've previously argued here that we could do pretty well entirely without CA's - and with certificate pinning we're heading there (in an inefficient, roundabout fashion) anyway.  But as things are today, that wouldn't change the "super-CA" role of the browser makers anyway.
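The pinning idea is simple to make concrete: trust the exact certificate you saw before, rather than whatever chains to the vendor's CA list.  A rough Python sketch follows - the host, port, and stored pin are placeholders, and real pinning schemes (e.g. HPKP-style) usually hash just the subject public key info rather than the whole certificate:

```python
import base64
import hashlib
import socket
import ssl


def certificate_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of the full DER certificate (leaf pinning)."""
    return base64.b64encode(hashlib.sha256(der_cert).digest()).decode()


def fetch_and_check(host: str, port: int, expected_pin: str) -> bool:
    """Connect, grab the leaf certificate, and compare it to a stored pin.

    Pinning sidesteps the CA list entirely: we trust the certificate we
    recorded earlier, not whoever a browser maker put in the trust store.
    """
    ctx = ssl.create_default_context()
    # Pinning replaces chain validation, so skip the CA check here.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return certificate_fingerprint(der) == expected_pin
```

The hard part, of course, isn't this code - it's how the pin gets established and rotated in the first place, which is where the trust question reappears.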

You can argue that you have to trust your browser maker anyway, since he has control of all the code.  But while I trust, say, Google to write pretty secure browser code, that's very different from saying that I trust Google to check that the CA's they list are actually "good guys", for pretty much any sense of "good guy."  It's not a job they ever really offered to do, and they aren't well placed - because they have to do business all over the world - to say "no" except in really egregious cases.

What we need is what CA's were *supposed* to be to begin with:  Organizations that built up trust, whose value came from the trust they built up, and whose usage was based on that trust.  There's no hope of that now.  There *might* be hope in separating the roles of "browser maker" and "trusted CA database supplier".  If browser makers made it really easy to replace the databases of CA's - so simple that anyone could do it - you might see a market for such databases emerge.  Some of it might be for profit; some of it might be governmental; some might be charitable.  (Perhaps there'd be an EFF CA database, and an ACLU CA database.)  Some of these databases might accrue enough value and users that CA's would be forced to make a case to them that they deserved to be listed.
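The plumbing for a swappable CA database already exists in most TLS stacks; what's missing is the market and the UI.  A sketch of "bring your own CA database" using Python's stdlib - the bundle path is hypothetical, standing in for whatever an EFF or corporate database would ship as:

```python
import ssl


def context_from_ca_database(bundle_path: str) -> ssl.SSLContext:
    """Build a TLS context that trusts ONLY the CAs in the given PEM bundle.

    Whatever the browser/OS vendor shipped is ignored; swapping
    bundle_path is the "replaceable CA database" idea in miniature.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=bundle_path)
    return ctx
```

Technically this is a one-liner; the open problem is who curates the bundle, how it's updated, and why users would trust that curator more than their browser maker.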

Humans can make trust decisions about other humans, and about organizations.  They may be wrong - this world offers few absolutes - but such decisions are likely a hell of a lot more reliable than trust decisions based on some random prompt from a program.

                                                        -- Jerry