"SSL stops credit card sniffing" is a correlation/causality myth

Perry E. Metzger perry at piermont.com
Tue May 31 18:43:56 EDT 2005


Ian G <iang at systemics.com> writes:
>> Perhaps you are unaware of it because no one has chosen to make you
>> aware of it. However, sniffing is used quite frequently in cases where
>> information is not properly protected. I've personally dealt with
>> several such situations.
>
> This leads to a big issue.  If there are no reliable reports,
> what are we to believe in?  Are we to believe that the
> problem doesn't exist because there is no scientific data,
> or are we to believe those that say "I assure you it is a
> big problem?"
[...]
> The only way we can overcome this issue is data.

You aren't going to get it. The companies that get victimized have a
very strong incentive not to share incident information very
widely. However, those of us who actually make our living in the field
generally have a pretty strong sense of what is going wrong out there.

> It can't be the latter;  not because I don't believe you in
> particular, but because the industry as a whole has not
> the credibility to make such a statement.  Everyone who
> makes such a statement is likely to be selling some
> service designed to benefit from that statement, which
> makes it very difficult to simply believe on the face of it.

Those who work as consultants to large organizations, or as internal
security personnel at them, tend to be fairly independent of particular
vendors. I don't have any financial reason to recommend particular
firms over others, and customers generally are in a position to judge
for themselves whether what gets recommended is a good idea or not.

> If you have seen such situations, document them and report them - on
> forums like these.  Anonymise them suitably if you have to.

Many of us actually take our contract obligations not to talk about
our customers quite seriously, and in any case, anonymous anecdotal
reports about unnamed organizations aren't really "data" in the
traditional sense. You worry about vendors spreading FUD -- well, why
do you assume you can trust anonymous comments not to be FUD from
vendors?

You don't really need to hear much from me or others on this sort of
thing, though. Plain common sense and reasoning will tell you things
like "the bad guys attack the weak points." Experience says that if
you leave a vulnerability, it will be exploited eventually, so you
try not to leave any.

All the data in the world isn't going to help you anyway. We're not
talking about what percentage of patients with melanoma respond
positively to what drug. Melanomas aren't intelligent and don't change
strategy based on what other melanomas are doing. Attack strategies
change. Attackers actively alter their behavior to match conditions.

The way real security professionals have to work is analysis and
conservatism. We assume we're dumb, we assume we'll make mistakes, we
try to put in as many checks as possible to prevent single points of
failure from causing trouble. We assume machines will be broken into
and try to minimize the impact of that. We assume some employees will
turn bad at some point and try to have things work anyway in spite of
that.
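
To make "as many checks as possible" concrete, here is a minimal
sketch, in Python, of one classic control: a dual-control rule, where
a sensitive action needs sign-off from two distinct employees, so a
single person going bad cannot act alone. The names and the threshold
are made up for illustration; this is not from any particular product.

    # Hedged sketch: dual control over a sensitive action.
    # Class name, method names and threshold are illustrative only.
    class DualControl:
        def __init__(self, required=2):
            self.required = required    # distinct approvers needed
            self.approvers = set()

        def approve(self, employee_id):
            # A set ignores duplicates, so one person approving twice
            # still counts as a single approver.
            self.approvers.add(employee_id)

        def authorized(self):
            return len(self.approvers) >= self.required

    transfer = DualControl()
    transfer.approve("alice")
    transfer.approve("alice")       # same person again: no effect
    assert not transfer.authorized()
    transfer.approve("bob")
    assert transfer.authorized()    # two distinct people signed off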

> Another way of looking at this is to look at Choicepoint.
> For years, we all suspected that the real problem was
> the insider / node problem.  The company was where
> the leaks occurred, traditionally.
>
> But nobody had any data.  Until Choicepoint.  Now we
> have data.

No you don't.

1) You have one anecdote. You really have no idea how
   frequently this happens, etc. 
2) It doesn't matter how frequently it happens, because no two
   companies are identical. You can't run 100 choicepoints and see
   what percentage have problems.
3) If you're deciding how to set up your firm's security, you can't
   say "95% of the time no one attacks you, so we won't bother", for
   the same reason that you can't say "if I drive my car while
   slightly drunk, 95% of the time I'll arrive safe": the 95% of the
   time that nothing happens doesn't matter if the cost of the other
   5% is so painful (like, say, death) that you can't recover from
   it. (A back-of-envelope sketch follows this list.) In particular,
   you don't want to be the person on whose watch a major breach
   happens. Your career is over even if it never happens to anyone
   else in the industry.
4) Most of what you have to worry about is obvious anyway. There's
   nothing really new here. We understood that people were the main
   problem in security systems long before computer security existed.
   Ever wonder why accounting controls are set up the way they are?
   How long have people been separating the various roles in an
   accounting system to prevent internal collusion? That practice
   goes back long before computers.
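
To put rough numbers on point 3 (every figure below is an assumption
chosen for illustration, not data from anywhere):

    # Illustrative arithmetic only; all figures are assumptions.
    p_breach = 0.05              # suppose attacks succeed 5% of the time
    safeguard_cost = 2_000       # e.g. just encrypting the link
    breach_cost = 10_000_000     # direct cost of one major breach

    print(p_breach * breach_cost)    # 500000.0 -- dwarfs the safeguard

    # And even that understates it: if the 5% case is unrecoverable
    # (your career, the firm), averaging over the 95% is meaningless.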

> So we need to see a "Choicepoint" for listening and sniffing and so
> forth.

No, we really don't.

> And we need that before we can consider the listening threat to be
> economically validated.

Spoken like someone who hasn't actually worked inside the field.

Statistics and the sort of economic analysis you speak of depend on
assumptions like statistical independence and the ability to do
calculations. If you have no basis for calculation, and statistical
independence doesn't hold because your actors are not random
processes but intelligent adversaries, the method is worthless.
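
A toy simulation shows how badly the independence assumption misleads
once the attacker adapts. The failure rates are made up, and "check A"
and "check B" are hypothetical defenses, nothing more:

    import random

    # Made-up assumption: two defenses, each failing 1% of the time.
    P_FAIL_A = P_FAIL_B = 0.01
    TRIALS = 100_000

    def random_process():
        # What the statistics assume: independent trials against both.
        return random.random() < P_FAIL_A and random.random() < P_FAIL_B

    def intelligent_actor():
        # What actually happens: the attacker probes, learns check A
        # can be bypassed for his traffic, and sends every attack
        # through that hole -- check B never comes into play.
        return random.random() < P_FAIL_A

    print(sum(random_process() for _ in range(TRIALS)) / TRIALS)
    # ~0.0001, the rate the multiplication predicts
    print(sum(intelligent_actor() for _ in range(TRIALS)) / TRIALS)
    # ~0.01, a hundred times worse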

In most cases, by the way, merely attempting a cost-benefit analysis
will cost far more than just implementing a safeguard. A couple of
thousand dollars for encrypting a link or buying an SSL accelerator
card is a lot cheaper than the consulting hours, and the output of
those hours would be an utterly worthless analysis anyway.
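
Back-of-envelope, with assumed figures, on why the analysis itself is
the expensive part:

    # Assumed, illustrative figures; the conclusion survives wide
    # variation in any of them.
    consultant_rate = 250        # dollars per hour
    analysis_hours = 80          # two weeks of cost-benefit modelling
    safeguard_cost = 2_000       # encrypt the link / buy the SSL card

    print(consultant_rate * analysis_hours)   # 20000: ten times the
                                              # safeguard, spent deciding
                                              # whether to buy it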

>> Bluntly, it is obvious that SSL has been very successful in thwarting
>> certain kinds of interception attacks. I would expect that without it,
>> we'd see mass harvesting of credit card numbers at particularly
>> vulnerable parts of the network, such as in front of important
>> merchants. The fact that phishing and other attacks designed to force
>> people to disgorge authentication information have become popular is a
>> tribute to the fact that sniffing is not practical.
>
> And I'd expect to see massive email scanning by
> now of say lawyer's email at ISPs.  But, no, very
> little has occurred.

You don't understand the problem then. You also don't understand the
threat model most law firms face.

>> The bogus PKI infrastructure that SSL generally plugs in to is, of
>> course, a serious problem. Phishing attacks, pharming attacks and
>> other such stuff would be much harder if SSL weren't mostly used with
>> an unworkable fake PKI. (Indeed, I'd argue that PKI as envisioned is
>> unworkable.)  However, that doesn't make SSL any sort of failure -- it
>> has been an amazing success.
>
> In this we agree.  Indeed, my thrust all along in
> "attacking PKI" has been to get people to realise
> that the PKI doesn't do nearly as much as people
> think, and therefore it is OK to consider improving
> it.  Especially, where it is weak and where attackers
> are attacking.
>
> Unfortunately, PKI and SSL are considered to be
> sacrosanct and perfect by the community.

Bull. You haven't been listening. I think a lot of us have been saying
bad things about PKI going back for many, many years now. Many of us
have given numerous talks about this, written papers, and even run
IETF working groups devoted to the proposition.

>> >  * We know that from our experiences
>> > of the wireless 802.11 crypto - even though we've
>> > got repeated breaks and the FBI even demonstrating
>> > how to break it, and the majority of people don't even
>> > bother to turn on the crypto, there remains practically
>> > zero evidence that anyone is listening.
>>
>> Where do you get that idea? Break-ins to firms over their unprotected
>> 802.11 networks are not infrequent occurrences. Perhaps you're unaware
>> of whether anyone is listening in to your home network, but I suspect
>> there is very little that is interesting to listen in to on your home
>> network, so there is little incentive for anyone to break it.
>
> Can you distinguish between break-ins and sniffing
> and listening attacks?  Break-ins, sure, I've seen a
> few cases of that.  In each case the hackers tried to
> break into an unprotected site that was accessible
> over an unprotected 802.11.
>
> My point though is that this attack is not listening.
> It's an access attack.  So one must be careful not
> to use this as evidence that we need to protect
> data from being listened to.

How much does it cost an end user to use SSL? Zero, for practical
purposes. How much does it cost a company to SSL protect its
transactions with its customers? Nearly nothing compared to other
costs, even if they need hardware acceleration. If you're doing enough
business to need an accelerator, you won't notice the price. How much
does a web server that does SSL cost? Zero -- you can't buy one that
doesn't have it, and the free ones all have it already.
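
As a concrete illustration of how little it takes, here is a sketch
using stock Python tooling; the certificate paths are placeholders,
and nothing here is specific to any vendor's product:

    import socket, ssl

    # Wrapping a listening socket in TLS takes a handful of lines;
    # the only real prerequisite is a certificate.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls:
            conn, addr = tls.accept()   # TLS handshake happens here
            data = conn.recv(1024)      # a wire sniffer now sees only
            conn.close()                # ciphertext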

So, to save practically no money, you're willing to tell your
customers that they shouldn't bother, even when people in the field
and raw a priori reasoning both say there is good reason to bother?
What sort of advice is that?

>> >> As for DNS hijacking -- that's what's behind "pharming" attacks.  In
>> >> other words, it's a real threat, too.
>> >
>> > Yes, that's being tried now too.  This is I suspect the
>> > one area where the SSL model correctly predicted
>> > a minor threat.  But from what I can tell, server-based
>> > DNS hijacking isn't that successful for the obvious
>> > reasons
>>
>> You are wrong there again.
>>
>> Where are you getting your information from? Whomever your informant
>> is, they're not giving you accurate information.
>
> I've seen a few reports of DNS hijacking for phishing over
> the last year.  In each case that I saw, the eventual conclusion
> was that it wasn't a sensible attack, it was under control,
> and the attacker did himself mischief by potentially leading
> the ISPs back to him.

Your information is less than perfect, it would seem.

> If it is anything other than that, let us know.  We need
> more data.  Without the data it's just more FUD.  Schechter
> and Smith's FC03 paper went further and suggests that lack
> of data is part of the problem of security.

The day to day problem of security at real financial institutions is
the fact that humans are very poor at managing complexity, and that
human error is extremely pervasive. I've yet to sit in a conference
room and think "oh, if I only had more statistical data", but I've
frequently been frustrated by gross incompetence.


-- 
Perry E. Metzger		perry at piermont.com
