[Cryptography] the TOFU lie - or why I want my meat...

ianG iang at iang.org
Tue Apr 14 19:24:37 EDT 2015


On 13/04/2015 20:35, Christoph Anton Mitterer wrote:

> What have we had in the recent years when it came about
> cryptographically secured message exchange (and message doesn't mean
> "mail", it means "any data") that was actually used on a broader basis?
>
> a) OpenPGP and similar schemes, where peers are typically more or less
>     directly authenticated (e.g. by personal meeting and fingerprint
>     exchange)[0].
>     This btw. also includes things like SSH, at least when one
>     directly/securely exchanges SSH keys.
>
>     These PKIs put the whole control in the user's hands. He can decide
>     whom he signs/trusts, or how many indirections (in the case of the
>     WoT) he'll trust, and so on.


All agreed except "typically".  I've done email exchanges with hundreds 
of people over PGP in various guises, and I probably checked FPs in 
about 5% of cases.  A lot of people I know also don't bother to check.

I don't know what the ratio is, but it is certainly not clear that 
"OpenPGP people typically do direct auth"; rather, I'd suggest it is the 
other way around.


> b) X.509, and similar schemes, where trust in another one's identity is
>     not directly authenticated, but rather one trusts one (or hundreds)
>     of central points (the CAs) to do the right thing.


Well, except for the "trust in CAs" part, sure.


>     This includes basically all SSL/TLS, because this is typically only
>     used with X.509, and yes I know there is a RFC for OpenPGP + TLS, but
>     is there even a client who implements that?
>
>     Here, the control is effectively fully out of the user's hands. The CA
>     alone decides, can forge (accidentally or on purpose) identities, and
>     so on.

Also, the browser typically places most if not quite all of the checking 
under the covers.  It is the browser that makes the choices, leading to 
weird moments when the browser says "you do not trust this website" when 
in fact you do.


>     Theoretically the user can decide which CAs he trusts, but in practise
>     this won't work either, since you have no control over which CAs your
>     peer uses.
>     Each CA can typically also assert the whole namespace (i.e. *all*
>     domainnames,... or *all* personal names - and not just the ones from
>     e.g. Lithuania)
>
>
> (a) Is typically only used by people who want stronger security (i.e.
> those who don't trust that fragile strict hierarchical and CA based
> model of X.509). Or in cases where it needs to be sure that a 3rd party
> cannot forge anything (e.g. when distributing packages of a Linux
> distro).
>
>
> (b) Is - whether intended as such or not - typically used for the
> masses. Everything in the web (i.e. https) that is secured.
> (btw: (b) is in concept quite similar to TOFU,... no-one ever actually
> verifies the CA's root certs... it's also trust on first use of what your
> (e.g. browser) vendor ships)


Yes, we sometimes say "leap of faith" to deal with these unclear parts 
of the user's trust.

> That the X.509/strict hierarchical system is inherently broken, was
> clear to everyone for many years (not only since Snowden or the growing
> frequency of cases where CAs did something evil).
> The masses didn't really complain, neither did any of the bigger players
> (banks, Google, Mozilla, MS). The system worked at least to the extent
> that people felt safe and not enough damage was done by cybercriminals
> to make a change worthwhile.


To the extent that people understood (or didn't) who was to blame for 
what crime there was, then yes: people didn't perceive that the right 
parties should make an effort.


> Thanks to incompetent and/or corrupt CAs (does really anyone believe the
> story that Turktrust or CNNIC's sub CA just made that forged CAs by
> accident?), thanks to greedy companies like Google/Mozilla/MS/Apple who
> only care about money and or market share, the system was kept alive and
> thanks to them it was weakened more and more by the introduction of more
> and more CAs (IIRC Mozilla ships around 150 these days, not counting
> intermediate CAs).
> You have the money? You'll be a CA and can do what ever you like!


It's not that cheap, but money certainly helps.

> And even if abuse gets public, Mozilla and friends likely won't ban you
> (again see e.g. Turktrust, CNNIC).




> Now since the whole NSA/GCHQ scandal and since the CA system showed more
> and more to be what it is - broken - people started to actually
> recognise that problem.
>
>
> So the same people/player who knowingly kept the broken system alive,
> are now looking for ways to fix it (which however isn't really possible
> by nature).
> The most prominent "solution" is probably TOFU, or key pinning, or
> whatever you call it.
>
> It seems like a bad joke that those players, who are all too often
> against open standards and who are well known to happily cooperate with
> or even advise government agencies, are now the ones trying to push TOFU
> as "solution".
> Honi soit qui mal y pense!
>
>
>
>
> So... TOFU.
>
> Trust on first use.
> That's basically like what we've had in the good old days with anonymous
> SSL/TLS modes, where it was clear to everyone, that this doesn't really
> provide security. Or similar to just blindly accepting a SSH server host
> key without checking whether it's actually the right one.
> Well it's that anonymous authentication + pinning of the respective
> credentials (key, cert, or however you call it).
>
> One can use TOFU "alone", e.g. just trusting any credential (like a
> self-signed cert) on the first use. Or hybrid with e.g. the strict
> hierarchical model from X.509.
>
>
> The idea how TOFU should "solve" or at least improve things is, that
> you'd recognise if subsequent connections go to the same destination
>     (because you have its credentials/keys pinned/trusted - on their first
> use).
>
> The first bad assumption here is that one would have gained "trust" at
> any stage. This is simply not true.
> One cannot know, whether the peer on the first connection/communication
> was actually the desired one (and has thus deserved "trust") or whether
> it was my neighbour's son, some cyber criminal or the BND (yes, even the
> German intelligence service isn't as harmless as people often think).
>
> TOFU doesn't prevent MitM at the first connection at all, and once that
> would have happened, an attacker could simply MitM every further
> connection.


Simply ... is probably an overstatement, but ok.
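
For readers who haven't watched it work: the pinning mechanic under 
discussion is essentially what SSH does with known_hosts.  A minimal 
sketch, where the function names and return values are illustrative and 
not any real client's API:

```python
import hashlib

def fingerprint(credential: bytes) -> str:
    """SHA-256 fingerprint of the peer's raw credential (cert or public key)."""
    return hashlib.sha256(credential).hexdigest()

def check_tofu(pins: dict, host: str, credential: bytes) -> str:
    """Trust On First Use: pin on first contact, compare on every later one.

    `pins` stands in for a persistent store such as SSH's known_hosts file.
    """
    fp = fingerprint(credential)
    if host not in pins:
        pins[host] = fp          # first use: nothing is verified, we just trust
        return "pinned"
    if pins[host] == fp:
        return "match"           # same credential as before
    return "mismatch"            # credential changed: possible MITM (or rotation)
```

The weakness Christoph describes is exactly the first branch: nothing 
authenticates the credential at pin time.  What it buys you is the third 
branch, where a MITM inserted after first contact produces a visible 
mismatch.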


> So TOFU makes some further assumptions:
> 1a) In practise one would have simply good enough chances that the first
>      connection (where trust is given) is not attacked.


Which is probably a good assumption.  Your attacker doesn't typically 
predict what you're going to do next, especially with a new site, so he 
has to pretty much attack everything.  Which leaves him open to 
detection if you have even a few certs cached from somewhere.


> 1b) (see below)
> 2) And even if it was attacked (and all further communication relayed
>     via Mallory), one would sooner or later notice it.
>
> I really wonder how one can just dare to make any of these two
> assumptions and sell it to people... o.O


Well, people don't "sell" it.  Rather, it's what you get for free. 
Pretty good privacy, perhaps we should suggest it?

For a tiny little bit more work, users can do things like ADH and 
compare the results out of band.  This is done by some phone products, 
in-band, assuming that it is pretty hard to forge the other person's 
voice in real time.  Again, pretty good privacy, could be on to 
something here ;)
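
The ADH-plus-voice trick can be sketched in a few lines.  Toy 
parameters only: the prime below is far too small for real use, and the 
4-character code stands in for what ZRTP calls a short authentication 
string.

```python
import hashlib
import secrets

# Toy anonymous Diffie-Hellman parameters. NOT secure; a real system would
# use a standardised group (e.g. RFC 3526) or an elliptic curve.
P = 2**127 - 1   # a Mersenne prime, small enough to keep the sketch readable
G = 3

def dh_keypair():
    """Generate an ephemeral DH keypair; nothing here is authenticated."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def short_auth_string(shared_secret: int) -> str:
    """A short code both parties read aloud over the voice channel."""
    return hashlib.sha256(str(shared_secret).encode()).hexdigest()[:4]

# Alice and Bob exchange public values over the untrusted network...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# ...and each derives the shared secret locally.
alice_secret = pow(b_pub, a_priv, P)
bob_secret = pow(a_pub, b_priv, P)

# Out of band (by voice), they compare the short codes.
assert short_auth_string(alice_secret) == short_auth_string(bob_secret)
```

A MITM who runs two separate exchanges ends up holding two different 
shared secrets, one per leg, so the spoken codes disagree.  That is the 
in-band voice check the phone products rely on.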


> As for (1):
> We already know that NSA/etc. sit literally at all the central network
> places, the internet exchanges, the transatlantic cables, quite surely
> in satellites and so on.
> They either cooperate with the big content providers (Facebook, Google,
> etc.) and the big Tier-1s (Level, Akamai and that like), they force them
> to cooperate by law (national security letters, gag orders) or they
> simply hack them.
> Quite likely most of the commercial companies (i.e. those who file
> lawsuits against the NSA and protest loudly) just happily cooperate in
> reality.
> They (NSA/etc.) also even hack the network hardware before it's
> delivered to customers.
> We know that they have extremely large powers, even already when
> operating under law (cause in the US and others, when it comes to
> surveillance or economical espionage law doesn't really matter),... and
> if law should be in their way, well then they simply ignore it.
>
> So again, how on earth could one believe that one would be safe from
> MitM attacks in the "OFU" stage of TOFU?
> Quite contrary, one must very well assume that they actually are
> listening and will sneak in as soon as a target becomes interesting.


Listen --> SNEAK is what we want: force them from passive listening into 
active sneaking.  Right now they can listen to everything for free, 
literally because we didn't paper the globe in crypto, and that in turn 
because we got tricked into doing the whole CA thing.

However the key here is that when we use TOFU, it's still an option.  We 
can always do the fingerprint thing.  And we will, when the NSA sneaks 
in and does enough MITMs.


> And even if you don't look at NSA/Co. - the same principle just applies
> to the big players AND to cyber criminals.


Bring it on!

> They likely don't have access to as large a part of the cake as e.g. the
> NSA, but how can one just assume that the simple cyber criminal who
> attacks you for ransom money isn't capable of getting in line for a
> MitM?
> We see that basically daily with highly sophisticated attacks on two
> factor authentication systems like smsTAN in mobile banking and lots
> more.


Ahhhhhhh... so you're aware of persistent attacks on the banking system. 
So now we have to ask: why are there persistent attacks on the banking 
accounts of users, but no persistent attacks on email?

In the past it was because there was no efficient way to monetarise it 
as a crim.  Now however, it turns out that any ISP worth its salt is 
MITMing the customers to the point where we can spot the difference on 
pure performance.  Which is to say they figured out how to sell your data.

So the answer is:  money, or absence of it.


> So the argument (which is 1b) that typically comes next:
> "Well they may be able to MitM most connections, but it would be too
> expensive for them to do this on a broad scale


Not so much too expensive as too dangerous.  Even if they do 0.1% we're 
going to spot it, and then the flag goes up.  It's already the case that 
people are getting annoyed at cafe wireless networks that MITM the SSL 
connections, and are starting to prefer local-password WiFi, which does 
no MITMing, over the big service providers' WiFi.


> ... and _therefore_ it
> prevents or at least helps against mass surveillance.
> Again, how can one just make such a blind and naive assumption.
> We already know the extreme things they're capable of, like storing vast
> amounts of data for later use. We know the extremely big computing
> centres they have (and these are just the ones publicly known - I don't
> need to wear my alu hat to believe that their real capacities are many
> times higher; history has proven that).


No, so ok, there is something that might not be clear to the world. 
When dealing with the spooks, there is one and only one rule over 
everything:  don't get caught.  This is their whole constitution.

The same will apply to cyberhacking.  Recall they actually promised 
POTUS that the Iranian virus stuff would never get out.

They will only MITM when there is a very high near-certainty of not 
being caught.

> IMHO the arguments (1a) "in practice one will be lucky and the 'first
> use', i.e. when the key is pinned/trusted, will be the right one" as
> well as (1b) "well even if not, it at least prevents mass
> surveillance"... are at best completely unproven, but likely simply
> plain wrong, naive and - with all due respect - stupid[1].


Bring it on!


> Then there's argument (2). The idea behind is:
> If one makes an anonymous key exchange, then even when it's anonymous
> (but trusted) in the first place, one would/could sooner or later
> notice, if one was attacked (in the single case), respectively whether
> mass surveillance continues.
>
> Let's look at the single case:
> When I notice "sooner or later" that I was MitM attacked, then it's
> likely already too late. My precious data is likely already stolen or
> e.g. evil code may have been already introduced in my system or e.g. my
> bank account is already empty.
> A cybercriminal who's on to me, or an intelligence agent who has really
> targeted me, simply wouldn't care.
> The former only wants money and if his attack is noticed, well he didn't
> send me his address in advance so he'll just move on to the next victim.
> And the latter... either they already have what they wanted, or it's at
> least better for them to have a bit than nothing.
> In both cases, the argument "that one may sooner or later notice it" is
> simply moot.


No, you're assuming that because the attack succeeded, then the system 
has failed.  It has only failed in your case, but in the aggregate it 
has succeeded (assumed) as it has defended more people more times than 
the alternate.

The way to think of this is risk analysis.  You take a risk in all 
things.  What matters is not whether you get hit by one particular risk 
but whether the expected value over all of (damage, risks) is kept to a 
reasonably low level.  It's about aggregation of all your risks.
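
Putting toy numbers on that aggregation argument (every figure below is 
invented, chosen only to show the shape of the expected-value 
calculation):

```python
# Hypothetical comparison of aggregate expected loss. "No crypto" means
# passive eavesdropping reaches essentially everyone; "TOFU" means the
# attacker must actively MITM and only dares target a small fraction.
users = 1_000_000
loss_per_compromise = 100        # arbitrary damage units per victim

hits_no_crypto = users           # passive listening scales to everyone
hits_tofu = users // 1000        # active MITM: say 0.1% targeted (invented)

expected_loss_no_crypto = hits_no_crypto * loss_per_compromise
expected_loss_tofu = hits_tofu * loss_per_compromise

# The one user who *is* MITMed loses just as much either way, but the
# aggregate expected loss falls a thousandfold.
assert expected_loss_tofu * 1000 == expected_loss_no_crypto
```

The individual victim's damage is unchanged; only the population-wide 
expected value moves, which is the whole point of the risk framing.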

Now, in the above, you got attacked, successfully.  But in using say 
TOFU PGP you would also benefit from not being mass surveilled.  Only a 
successful MITM would hit you then.

And, if you believe that you're at risk of that -- again risk analysis 
-- you have the option of doing the fingerprint thing.

In the alternate, so what is your alternative?  It is likely nothing, 
because the "perfect counterparty auth" system never worked well enough 
to spread to the masses.  It is simply fallacious to compare your 
situation to a system that never got fielded, and decide your situation 
is a failure.


> And let's look at the mass surveillance case, the idea is basically:
> *If* the masses would use opportunistic encryption with TOFU, then
> they'd be secure unless the agencies already MitM most or all of them at
> the "OFU" stage of TOFU.


Well, not really.  The assumption is that they will do a few.  Then 
we'll notice.  Then we'll run around like headless chooks and add in the 
rest that we need.

It's just a darn sight easier to upgrade everyone's TOFU infra than to 
roll out the CA thing to everyone.  Seemingly.


> But, since one can find out later (e.g. by really comparing the
> credentials when meeting the actual peer) people would notice that mass
> surveillance is still in place... and then...
> then..
> then...
> then what?
> A big outcry? Governments changing the system and stopping mass
> surveillance? People start switching to really secure (i.e. mutually
> authenticated communication)?


Then we get the resources to put more infra in place.  As a slice of 
personal history, I ran a crypto project in the 1990s.  When Clinton 
signed the exec order that apparently opened things up, the sex appeal 
disappeared.  My project could no longer attract crypto hackers... they 
all went off and did money-making stuff.

So, we NEED the NSA to hack us so we can stop them.  Perverse, isn't it?!

This is why Snowden was so dangerous to them.


> Forgot it already? We've had these things already! A big scandal. A big
> outcry.
> What happened? Nothing (at least on "their" side).
> Actually quite the contrary - what paranoid people just assumed to be
> the case (i.e. the mass surveillance before) is now publicly confirmed
> and justified by NSA/Co.
>
> So, to all the proponents of TOFU/key pinning and that like:
> How can you dare to make assumption (2)?


As above.


> How can you dare to believe that this would prevent NSA/Co. from
> attacking (in the form of surveillance) people?


We don't dare.  We know they will attack.  But we are about making it no 
longer costless, about forcing a cost on them.  We have to do this 
because there is no other cost on them right now, and they are in danger 
of turning their rampage across society into a police state.

They've breached the costs we put on them through legislation.  Now we 
have to put the costs back on them through opportunistic crypto.


> We already know that they do much worse things (like actively breaking
> into computer systems, computer sabotage, and so on), they're basically
> the same as cyber criminals just that they do it for the "good"[2] and
> that they don't need to fear any consequences (in contrast to cyber
> criminals; remember that people who illegally copy a video get worse
> punishment than rapists or murderers).


Yes.  But those things all carry costs.  Make them spend, make them 
choose.  Making them choose within a budget means they spend their 
effort on the real bad guys, not on everyone.


> IMHO, TOFU won't help you at all against mass surveillance:
> - As said above, it won't keep the high level attackers (NSA level,
> which are the typical bodies for mass surveillance) from doing their
> business.
> If everyone would do opportunistic encryption in conjunction with TOFU,
> they would simply adapt and MitM every connection they can. It would be
> publicly known, just as  their mass surveillance is known now.


Now re-analyse.



> - The next lower level of attackers who do mass surveillance are
> actually the big companies which now try to sell security to people
> (Google, Facebook and that like).
> For them it would actually get harder to do surveillance (because they
> cannot easily operate outside the law). But their form of surveillance
> is anyway completely different.
> People voluntarily (actually just happily) give them all their data
> (look at Facebook).
> So they don't care about encryption as an enemy at all


Yes and no.  What people don't know doesn't upset them.  That's 
different to "we just handed over our data".


> - Last but not least, cyber criminals.
> They typically don't care so much about mass surveillance, and even if
> they would: They already operate outside the law, so as soon as they can
> MitM people - they would.
>
>
>
> Last but not least, some motivational analysis and my personal opinion
> about how TOFU-like ideas affects security of single people as well as
> the masses.
>
> TOFU is IMHO clearly intended for the masses, i.e. those who don't know
> too much about crypto, and simply want to use the web. Why? Well simply
> because it doesn't give any real strong additional security.


No, simply because it is costless.  It gives real additional strong 
security - it forces an attacker to actually attack, not just eavesdrop. 
Remember, attacking a computer is likely against the law!  Which is a 
cost, even to a criminal.


> So all the
> paranoid people, or experts and that like, they likely simply would want
> to continue with their safe mutually performed authentication (be it for
> OpenPGP, or accessing an SSH server).


See above.  Risk analysis & upgrade.


> So the argument by proponents is often:
> 3) "that we (i.e. software developers, standard makers, other experts)
> need to secure the masses".


This is what security projects do, right.


> Remember the beginning of my lengthy mail (sorry for that btw.)? Where I
> basically wrote that no one (not even the banks who lose money) cares
> about the flaws of the X.509 model?


Actually a bunch of us do.  But the broken X.509 model is owned by the 
CAs and the browsers.  It's hard to change; it isn't our property, it's 
theirs.


> That's just it. No one cares. At least not until people would suffer
> more severe consequences.


Oh, people care.  They could just never breach the firewall of 
CABForum/CA/Browser/IETFWG/audit/validation/CPS/..........


> Mass surveillance? Well, all people complain, but apart from a few, none
> of them *really* cares (because if they did, they would look for ways to
> protect themselves).
> A hacked email account or the knowledge that all unencrypted
> emails/WhatsApp/etc. can be read either by anyone or at least by some
> others? Do you really think the masses(!) would care?
> A few cases of hacked bank accounts? Well that's perhaps when people
> start to get annoyed, but as long as the banks pay for the damage...
> this ain't a big deal either.


You're missing one key fact.  Even in security circles, we all knew the 
NSA and friends could probably hack most of us with some effort.

However there was one remaining rule:  what happens in Las Vegas stays 
in Las Vegas.  Which is to say, when the NSA hacks or sucks up 
something, it stays inside the NSA.  This meant that as long as we 
weren't bona fide enemies of the state -- by which I mean real enemies 
-- they wouldn't care.

However, over the last decade the NSA breached the last firewall.  They 
started handing over data to 19 civil agencies in the USA.  We don't 
know who they are, we don't know why, how, when.  What we do know is 
that, not only is it *not under court supervision*, they actually lie to 
courts about it.

Which crosses the line.  They are now conspirators to pervert justice, 
and it's there in black and white in their manuals.

So, yeah.  In effect they declared war against ... the people and the 
courts.  Tough words, but how else do you put a spin on documentary 
evidence that says "lie to the court"?


> So when it comes to (3) I really wonder:
> Why would "we" need to secure the masses, when they typically don't care
> anyway? At least not above the level of saying "yeah it's a shame that
> XYZ happens.." + secretly thinking .oO(but I don't really care either).


Yeah.  Now, this is a *good question*.  I don't have a good answer to 
why we should secure the masses.

I know why I do it -- it is because the analysis of my money systems 
leads me to lock out inside attackers.  Which by consequence also 
secures users against all enemies, domestic and foreign.

But why say OpenPGP does it ... when the masses won't pay our salaries? 
*good question* ...


> Don't get me wrong, I don't say that we should remove security/crypto
> from the masses.
> But I don't see why we should be obliged to introduce TOFU (for which
> it's IMHO questionable whether it increases security at all) when we
> already have a system which works for the masses:
> <hate-to-say-it>X.509</hate-to-say-it>


We have no system that works for the masses except TOFU.

X.509 does not work for the masses; it simply provides fairly simple 
protection in some small areas of the attack model.  The less said about 
that the better, because this argument has been trolled to a thousand 
deaths for 20 years now.


> Apart from allowing forgeries and surveillance, and apart from single
> cases where this was even done by cybercriminals (remember when they
> registered the domain www.pаypal.com[3] or something like that and got a
> certificate for it).


Phishing.  Don't forget...
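
The pаypal forgery quoted above (a Cyrillic а standing in for a Latin a) 
is mechanical to construct and, in the raw string at least, mechanical 
to detect.  A naive sketch; real clients apply the full IDNA/punycode 
rules rather than a bare ASCII check:

```python
import unicodedata

def suspicious_chars(domain: str):
    """Flag non-ASCII characters in a domain, with their Unicode names,
    as candidate homographs."""
    return [(ch, unicodedata.name(ch)) for ch in domain if ord(ch) > 127]

# The two strings below render near-identically in most fonts:
real = "www.paypal.com"          # all Latin
fake = "www.p\u0430ypal.com"     # U+0430 CYRILLIC SMALL LETTER A
assert suspicious_chars(real) == []
assert suspicious_chars(fake) == [("\u0430", "CYRILLIC SMALL LETTER A")]
```

The point is only that the forgery costs the phisher nothing, while the 
check is equally cheap if anyone bothers to run it.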


> So much for the question, why would the masses want TOFU.
> And I'm not going to analyse now which motivation some bigger (e.g. US)
> players may actually have to advertise key pinning and that like now as
> a big step forward so that people feel secure again.
> Honi soit qui mal y pense!
>
>
> Long story short, my analysis of the TOFU principle and key pinning
> methods is the following:
>
> a) at best, it would give people a short-lived improvement in their
> security, *if* (and only *if*, and as long as) attackers don't decide to
> start attacking[4] them already at the "OFU" stage.


I hope you can now see that this *iff* is actually not realistic. In 
short, bring it on!


> b) at worst - people could assume - it wouldn't harm either
> but I think that's wrong:
> more realistically:
>
> c) the massive campaign in favour of TOFU that we see at all different
> levels: standards making (yeah, HTTP/2 has now nearly-mandatory
> encryption - so it's secure, isn't it?), development and the communities
> has IMHO actually quite a number of dangers:
> - The masses will actually believe that they'd be now at least more
>    secure than before.
>    Thus they will care less about their effective security and even less
>    about the political dimension of the whole topic.
>    In the light of new crypto wars being probably just started - quite
>    bad.[5]
> - Developers, standard makers and experts actually start believing the
>    illusion of TOFU and care less about implementing stronger rock solid
>    mutually authenticated crypto systems.
> - Which in turn will (in the long term) also affect those people (like
>    most/many OpenPGP users) who really wanted strong security, and who
>    put their efforts into it.
>    Simply because less software/standards may provide that strong
>    security.


Well, I'm not going to unravel the perceived-security myths tonight.


> In the end, the minority who really wants (and or needs) security, would
> likely start to suffer for a majority who even doesn't care about it.
>
>
> Best wishes,
> Chris.
>
>
>
>
> [0] And just to prevent Werner from the usual comment: yes I know,
> OpenPGP doesn't mandate this or the WoT,... but I guess one can easily
> say that it's mostly used in that way.
> [1] And yes I use such harsh words, because people already believed in
> the past the intelligence services, cybercriminals weren't capable of
> this and that (or at least not doing it)... and they acted as if it was a
> complete surprise when Snowden revealed his stuff. And now, when they
> should know much better, they do it again.
> [2] And no I'm not questioning here whether this is the case or not,
> actually I don't believe that NSA/BND/Co. are evil per se.


They aren't evil.  They just suffer from a disconnect from the agreed 
contract with civil society, in most places.


> [3] I assume everyone noticed the first a being actually not an a but a
> U+0430 CYRILLIC SMALL LETTER A.
> [4] And we must generally assume that an attacker has no reason not to,
> since he'll always attack the weakest link in the target. So if he
> finds something weaker, e.g. flash installed ;-) he'll take that, but if
> there's no better alternative, why should he not do MitM respectively
> mass MitMs?
> [5] And probably quite bad for Snowden, cause the first day Putin sees
> no PR use in him anymore and the US offers something in exchange, he'll
> probably disappear forever in some US supermax.
> [6] http://lists.gnupg.org/pipermail/gnupg-devel/2015-April/029638.html



iang

ps;

