[Cryptography] upgrade mechanisms and policies
iang at iang.org
Fri Apr 17 14:15:17 EDT 2015
> On 16 Apr 2015 22:59 +0100, from iang at iang.org (ianG):
>> For most traffic on the net, I'd say auth is highly dependent. For
>> some things we want auth. But for other things we want the opposite
>> of auth, call it anti-auth or unauth. This is the notion of
>> sexchat, snapchat, OTR, etc in principle, not in implementation.
> Confidentiality is meaningless if you don't know that you are
> communicating with the entity that you believe you are communicating
> with, and not someone passing traffic along.
See, this is such an overstatement that all your models fall to dust.
It is literally not true, and it is so fallacious as to be dangerous.
Confidentiality can be modelled between two people Alice and Bob. If
Mallory interjects and passes the traffic along, we know that Alice, Bob
and Mallory know the secrets.
That's 3 and we all know that 3 people can keep secrets if 2 are dead...
But the other viewpoint is that it is 3 people amongst 6 billion. Now,
if the normal threats that face Alice are amongst the other 6 billion
then she still has the benefit of confidentiality. Even if the
threats are evenly spread across Mallory, Bob and 98 others, she's still
batting at 98%.
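To make that arithmetic concrete: the numbers below are the assumed ones from the argument above (100 evenly-weighted threats, of whom only Mallory and Bob actually hold the plaintext), not measurements of anything.

```python
# Hypothetical threat arithmetic from the argument above: Alice's
# plaintext is held by Alice, Bob, and an interposed Mallory -- 3
# people out of ~6 billion. If her realistic threats are 100 parties
# (Mallory, Bob, and 98 others, evenly weighted), only 2 of those 100
# hold the plaintext, so confidentiality still covers 98% of threats.
threats = 100               # assumed size of Alice's threat population
holders_among_threats = 2   # Mallory and Bob
coverage = 1 - holders_among_threats / threats
print(f"{coverage:.0%}")    # prints 98%
```

So even with a live MITM, the residual confidentiality is nonzero; it only drops to "meaningless" if Mallory is the *only* threat that matters.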
Ergo, confidentiality still exists and is valuable even if there is an
MITM. What is lacking in meaningful value is your security model,
because you stripped out the value of confidentiality for ... other
reasons as we'll show.
To put it more in context, consider the model in greater depth. There
is no security system that goes Alice <--> wire <--> Bob, it just
doesn't exist. The hallowed Internet model is simplistically
unrealistic because Alice can't do RSA in her head.
In practice, there is at least Alice and Alice's computer - and the
latter has such a combination of agents inside it that we can't even
catalogue them, let alone secure them. So even if you believe that you
can't permit Mallory, you can't actually reach that target in any
meaningful sense. Your model is not real world.
So why is there an old adage that confidentiality is 'meaningless'
without protection against unauthorised listeners? It really reduces
to a marketing statement: you must put authentication ahead of other
considerations, and by the way, we happen to be selling a mighty fine
authentication product.
Now back to the real world of real protection for real users...
> Suppose Alice and Bob want to communicate in such a way that Eve and
> Mallory cannot know _what_ is being communicated. (For simplicity's
> sake, let's say that Alice and Bob are fine with Eve and Mallory
> knowing _that_ they are communicating with each other; they want
> message confidentiality, not communications secrecy
*hold onto that thought*
> .) By having an
> authenticated, encrypted channel to transport the data, this is easy,
> but Alice and Bob somehow need to authenticate each other initially.
> If this authentication is persistent at the endpoints and is tied to
> something that only each of Alice and Bob knows (such as their
> respective private keys), then they can be confident that after they
> have verified that the other endpoint is the intended one, as long as
> that value (say, a key fingerprint) remains the same, everything is
> very likely fine; Eve the passive attacker can see that they are
> communicating (which was okay in their threat model
/still holding onto that thought/
> ), and Mallory the
> active attacker could in theory insert himself in the middle but that
> would invalidate the previous endpoint authentication between Alice
> and Bob, alerting both.
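/aside: the persistence being described here is key continuity, a.k.a. trust-on-first-use: pin the peer's key fingerprint on first contact and alarm if it ever changes, which is exactly what an interposing Mallory causes. A minimal sketch, with illustrative names not drawn from any particular library:/

```python
import hashlib

class FingerprintPinner:
    """Trust-on-first-use: remember each peer's key fingerprint and
    flag any later change -- the signature of an interposed Mallory."""

    def __init__(self):
        self.pins = {}  # peer name -> hex fingerprint

    def check(self, peer, public_key_bytes):
        fp = hashlib.sha256(public_key_bytes).hexdigest()
        if peer not in self.pins:
            self.pins[peer] = fp       # first contact: pin it
            return True
        return self.pins[peer] == fp   # later contact: must match

pinner = FingerprintPinner()
assert pinner.check("bob", b"bob-key")      # first use: pinned
assert pinner.check("bob", b"bob-key")      # same key again: fine
assert not pinner.check("bob", b"mallory")  # key changed: alert
```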
> If Mallory can insert himself in the middle, to Alice _appearing_ as
> Bob and to Bob _appearing_ as Alice, then you have no real
> confidentiality, even if the link is encrypted.
As covered above, this conclusion is only theoretically plausible if you
write the assumptions to be so far away from reality as to be meaningless.
> That's the situation
> you get with encryption without authentication. Incidentally, it's
> also what you have with e-mail opportunistic transport-level
> encryption without certificate validation; it protects against passive
> eavesdropping, which is a step up from everything being in plain text,
> but it does not offer protection against active attackers.
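/aside: that opportunistic behaviour - encrypt if you can, skip certificate validation - corresponds in Python's stdlib to a TLS context with verification switched off. A sketch of just the context configuration (no network traffic here):/

```python
import ssl

# Opportunistic-style context: encrypt the link but accept any
# certificate -- defeats Eve (passive) but not Mallory (active).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# By contrast, the authenticated default validates chain and hostname:
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
```

/with smtplib such a context would be passed to `starttls(context=ctx)`; many MTAs ship essentially this no-validation behaviour as their default./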
Everything you've said is the classical argument for PKI/CA/TLS/ITM. It
is a reconstructed argument starting from the position that we have a
hammer (x.509) and we need to go out and find some nails.
Now, if we went back to actual privacy considerations -- not your
constructed but well learnt theory -- and asked what Alice and Bob
wanted to do privately:
1. do you want your messages to be secret?
2. do you want your contacts to be secret?
3. do you want your activity to be untracked?
The answer to the above is typically YES, YES, YES. My business is
But of course everything-secure is a hard problem, really challenging.
Let's assume we can't answer that right now.
What can we do? We know how to authenticate people using this telco
design. We know how to use that to bootstrap a secure point to point
connection. So why don't we do that? Hey presto, the auth-pyramided
system we now know so well and love/hate.
Unfortunately, these systems are diabolically bad at certain things.
X.509 directly makes tracking and tracing not only easy but
*authenticated*, so points 2 and 3 above are blown out of the water. What's
the solution? Education. We have to go out and tell people "your
threat is mallory" and we have to also tell them that "your threat is
not Eve."
Go back to those thoughts: "let's say that Alice and Bob are fine with
Eve and Mallory knowing _that_ they are communicating with each other."
That's not true. That's you telling Alice and Bob what they are
allowed to do in order to benefit from your system.
Indeed, it is so not true that it's illegal under (e.g.) the European
data protection directive. It is literally not allowed to use and
release numbers relating to people that permit cross-correlation,
without showing some seriously hard reasons. In other words,
certificates are actually against the data protection directive (the
reason nobody much cares about this is that certs only work for
companies, which aren't "protected" under the directive).
Indeed, it's so not true that most police forces and most intel agencies
and most other such people say directly that if they can pen-trace
everything, then the rest is ... not worth much more than they already
have. And most cheating spouses and business partners and whathaveyou
have exactly the same problem.
It is NOT OK in their threat model to accept tracking.
So what's going on here? Only by telling users to ignore certain
threats, and telling them and telling them over and over again, can you
educate people to accept and love your security model. But you haven't
delivered security -- what you've delivered is sales.
Security only delivers if it starts out from real user needs -- not from
reversed seller needs. Reverse the education you've learnt. Assume
that Alice and Bob want secrecy and don't care about confidentiality.
Build it. Or something. Until you've unravelled and expunged the
CIA/ITM security model, you can't actually have (another) security model.
iang, unravelling false ITM/CIAs since 2003  ;)
 This is the insight that historically allowed the entire open source
crypto community to wave off ("accept") the threat of the NSA. Until
they started shipping data to the FBI, IRS, egg board and taxi medallion
printer, the NSA was benign. It therefore didn't matter to Alice that
NSA could read her traffic. It didn't matter to practically anyone here
- or our customers. We accepted that risk of a clear MITM danger, back
then. Of course, now, we're screwed. Now we have to go back and re-do
it all.
 Typically, of course we argue that point, but this'll do for now.