[Cryptography] asymmetric attacks on crypto-protocols - the rough consensus attack

Stephen Farrell stephen.farrell at cs.tcd.ie
Sun Aug 2 16:58:15 EDT 2015


Hiya,

On 02/08/15 20:55, Watson Ladd wrote:
> On Sun, Aug 2, 2015 at 4:33 AM, Stephen Farrell
> <stephen.farrell at cs.tcd.ie> wrote:
>>
>> On 02/08/15 05:27, ianG wrote:
>>> It turns out that there is a really nice attack.
>>
>> Also trying to keep away from specifics of any one protocol.
> 
> That way, no one can actually argue about what's going on, 

If you're implying Ian or I wanted to obfuscate something, that's
nonsense. He chose to try to generalise, and I'm fine with that.
The alternative would be to repeat arguments that are currently
already being (pointlessly IMO) repeated on the relevant IETF list.

> as they
> have no idea what sources you are examining and how you are drawing
> the conclusions you are drawing. Unless we talk about specifics, we
> can't actually come to grips with what is, as opposed to what we think
> is.
> 
>>
>> In general you assume that the attacker (who I agree exists) is active
>> as part of the process. There's no way to know the probability of
>> that. I do know that people have the ability and propensity to disagree
>> with one another for all sorts of reasons that are nothing to do with
>> the posited attacker. Perhaps especially the kind of people who
>> currently dominate discussions about new Internet protocols. And even
>> more especially in fully open environments where anyone can try to
>> participate. And since the new work represents change, and for some
>> folks, significant change, it's entirely likely that genuine
>> differences of opinion will exist even without any action from the
>> attacker.
> 
> Yes, it's true that some people will not consider the costs of any
> change to deploy. But that's not the situation we're talking about.
> Rather it's when you have two proposals, one with running code and
> another with no running code, both with very similar properties, and
> yet we can't pick the one that works. Not reacting to known defects
> until it is too late is a distinct failure mode.
> 
>>
>> There is also the fact that any rough consensus process has to be
>> run by fallible humans. Not everyone is good at herding cats so that
>> the cats agree they have arrived at rough consensus. So in addition
>> to genuine technical disagreement one also has to take into account
>> the chances of accidental mis-management. IMO, that probability is
>> also quite high - not every engineer ends up being good at cat
>> herding sadly;-)
>>
>> Lastly, given there are a whole bunch of proposals and bits of work
>> being done in parallel, it's entirely to be expected that at least
>> one of those gets stuck because of some process-stupidity.
>>
>> All of the above argues that we need to be very realistic and quite
>> well informed to arrive at a realistic evaluation of whether or not
>> there might be an active attack being attempted against a specific
>> proposal.
> 
> We know that the NSA spent millions of dollars on influencing
> standards. 

S/spent/wasted/ but yes. (And I don't mean wasted in terms of
did/didn't get what they want, I mean in terms of it being a
really really stupid way to mis-use money.)

> We know some of these activities involved NIST and ISO. Why
> wouldn't they also target IETF? 

I'd be surprised if some of that money wasn't mis-spent on trying
to muck up IETF work. And I ack'd that already.

My point is that Ian's supposed defence is surrender. I am not
trying to deny that there may be an attack. The point is though
that we will never know if any specific action is part of such
an attack and we therefore have to react via our normal processes
that aim to counter that and other kinds of gaming. We do have to
be more alert/vigilant for some aspects of what is proposed but
mostly we just need to run the processes well. (There are I'm sure
some improvements that can be suggested too, but that's again
not at all the same as Ian's surrender proposal.)

> We also know that the TLS WG
> repeatedly ignored email messages concerning holes in TLS that were
> later exploited, as well as papers and documents outlining these
> problems for years. The process needs to stand up to subversion.

I don't agree with the above characterisation.

While some of the history of TLS hasn't been great, I doubt that's
down to this kind of attack.

>> To move to slightly more specifics, you mention rough consensus so
>> I assume you're talking mainly about the IETF, since the IETF is
>> afaik the only set of folks that use "rough consensus" as a term of
>> art.
>>
>> In the case of the many bits of good work that are being done to improve
>> security and privacy in the IETF, I do think it's quite likely
>> that some but not all people working for signals intelligence agencies,
>> and/or companies who work with them, do disagree with some of that
>> work. Some of that disagreement is openly expressed I'm sure and
>> that's just fine - we can handle openly expressed technical
>> disagreement fairly easily, if not perfectly.
>>
>> Since there are only a tiny number of direct employees of signals
>> intelligence agencies who participate in the IETF, and those folks
>> are generally not trying to game the system in obvious ways, (I
>> think I would notice if they were, 'cause yes I look out for it:-),
>> I think we can ignore them here. There are however a lot of esp.
>> large companies who work with/for signals intelligence agencies
>> and who do participate in the IETF, so I'll focus on those since
>> any sensible attack would be done via a player like that.
> 
> Are you capable of determining backdoors in protocols yourself? No.

I've no idea what you mean (or how that could be relevant).

> Does the IETF process catch crypto vulnerabilities in protocols? No.

Bad question and a wrong answer anyway. The question is bad because
it's people (or maybe programs written by people) who discover
vulnerabilities. Whether they choose to feed that information into the
IETF process in a usefully timely manner is a different question. As
is how well or badly the IETF process handles such input. And even
though it's a bad question, I think the example of DKG figuring
out issues with 0-RTT in the TLS1.3 proposals is a case that comes
close to providing a "yes, stuff works sometimes" answer to your bad
question. (Not that I'm yet very happy with the results so far on
that score:-)

> So why are you confident that you can find disruption of the process
> by intelligence agencies? 

If you think I said I was, you mis-read what I wrote or I wrote
badly.

My main point is that we ought to treat this as another kind of gaming
the system and ensure that we handle it well, just as we have to
with other more purely commercially motivated attempts to game the
system.

> I agree it might seem more visible, but
> consider that the complexity of X.509 led to holes, and X.509 was
> pushed by governments heavily over simpler options. Was this part of
> the thinking? (The NSA also shapes how grants are paid out in the US
> to discourage some kinds of research: this is openly discussed on
> their website)

I doubt it. X.509 was part of X.500, all of which was similarly baroque,
as was X.400 at the time. And that all started back in the mid-1980s,
when using strong crypto was hard to impossible in most
applications. Seems pretty unlikely to me that X.509-complexity was
part of any such attack.

>> In any such case, my experience is that perceived commercial advantage
>> (which may be long term) is what causes such participants to try
>> to game the system. And indeed working with signals intelligence
>> agencies is presumably profitable, so there is the potential for
>> this attack. (One can argue that individuals within such enterprises
>> may be used in an attack by leveraging their inflated egos etc,
>> and that's true, but is indistinguishable from other personality
>> related reasons to disagree so is covered above I think.)
> 
> John Kelsey had no reason to believe the NSA was pulling anything over
> on him. But ultimately he ended up defending the inclusion of
> Dual_EC_DRBG, despite having questioned it internally. Consider that
> as the prototypical example of an attack.

The dual-ec fiasco isn't a good model for a similar attack on a piece
of IETF work IMO. The setup there was much more vulnerable to capture
by just a few parties for many reasons. That problem affects the IETF
much less - it's still an issue but far less of an issue so long as
we have enough capable folks participating. I do agree that other
standards development organisations can be very vulnerable to that kind
of capture though. As are industry consortia and small-team projects.
The scale of the IETF is a PITA in many ways, but for this aspect it
helps.
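
(For list members who haven't dug into the dual-ec details: what makes
it a backdoor rather than merely a weak design is the trapdoor relation
between the two standardised points. Each output block is, roughly, the
current state applied to Q, while the next state is the same state
applied to P; whoever knows the secret d relating P and Q can turn one
observed output block into the next internal state and from there
predict all future output. Below is a toy sketch of that structure -
using a multiplicative group mod a small prime with made-up numbers
rather than the real NIST curve, and ignoring the 16 truncated bits the
real attack has to brute-force - not a description of the actual
standard.

  # Toy Dual_EC-style generator over Z_p^*: NOT the real curve or
  # parameters, purely to illustrate the trapdoor structure.
  p = 2_147_483_647      # a small prime; the group is Z_p^*
  Q = 5                  # public "point" Q (toy value)
  d = 123_456_789        # designer's secret trapdoor: P = Q^d mod p
  P = pow(Q, d, p)       # public "point" P; its dlog w.r.t. Q is d

  def drbg_step(state):
      """One step: the next state comes from P, the output from Q."""
      new_state = pow(P, state, p)      # analogue of s_i = x(s_{i-1} * P)
      output = pow(Q, new_state, p)     # analogue of r_i = x(s_i * Q)
      return new_state, output

  # An honest user seeds the generator and exposes outputs (e.g. nonces).
  state = 31337
  state, out1 = drbg_step(state)
  state, out2 = drbg_step(state)

  # The attacker sees only out1 but knows d:
  #   out1^d = Q^(d*s1) = (Q^d)^s1 = P^s1 = s2, the next internal state.
  next_state = pow(out1, d, p)
  assert pow(Q, next_state, p) == out2  # future output is now predictable

None of that maps onto IETF process of course; the point is just that
getting such a construction standardised needed a degree of capture
that was feasible in that setting but that I don't think scales to the
IETF.)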

>> The remaining question then is whether or not people from commercial
>> enterprises are, in addition to openly participating as expected,
>> attempting to manipulate the open process to their own commercial
>> advantage. And the answer is yes, of course they are, as always. But
>> is that only because of the signals intelligence agencies? No it is
>> not. For any of the relevant players, which includes basically all
>> large companies, they have many more interests in play and it's not
>> possible to disentangle those from the outside. (Or even from inside
>> sometimes I bet:-)
>>
>> So it's impossible to tell what has motivated any particular bit of
>> process gaming, and it's mostly silly to bother asking. That's just a
>> part of operating in the big bad world; once you get beyond the playing-
>> with-friends stage of any project you can't worry about the motivations
>> of all participants. (You can decide if specific folks are worth
>> worrying more about and pay more attention to technically examining
>> their inputs, that's IMO fine, and I do that, but not based on current
>> employer, rather based on a pattern of contributions.)
>>
>> Basically, we can describe what we consider good behaviour but we need
>> to recognise that clever people will figure out ways to try to game any
>> system for reasons we can't know, so worrying about all motivations is
>> counter-productive; we need to examine visible actions and not worry
>> about the unknowable.
> 
> But we know that some IETF protocols have had better track records on
> security than others, and that many changes
> <chop />
>>
>>> I think this really puts a
>>> marker on the map - you simply can't do a security/crypto protocol under
>>> rough consensus in open committee, when there is an attacker out there
>>> willing to put in the resources to stop it.
>>>
>>> Thoughts?
>>
>> Your argument is ill-informed and incomplete and your conclusion is
>> erroneous. (That's my thought anyway:-)
> 
> Can you point to a correctly designed protocol done by rough
> consensus? 

My argument doesn't require one and I even acknowledged that starting
from the output of a small team is often a good way to end up with
better IETF output. The IETF isn't great at starting from a blank
sheet of paper, but is often good at improving various aspects of
small-team output.

> It's clear most successful protocols have actually been
> designed by small teams, and adopted through consensus. Asking a
> committee to design something is a proverbially bad idea.

Wrt "committee" see my earlier mail to the list.

S.

> 
>>
>> Cheers,
>> S.

