[Cryptography] Security clearances and FOSS encryption?

ianG iang at iang.org
Sun Jul 13 06:26:43 EDT 2014


On 12/07/2014 14:29 pm, John Kelsey wrote:
> 
>> On Jul 11, 2014, at 7:20 AM, ianG <iang at iang.org> wrote:
>>
>>> On 9/07/2014 17:18 pm, John Kelsey wrote:
>>> To the extent clearances do what they're supposed to do, they should indicate less risk of compromise to the project--less blackmail or bribery potential, for example.
>>
>>
>> Well, there are clearances that we do on our people, and the clearances
>> that our enemy does on his people.  We're talking about the latter, so
>> following your train of thought, we are dealing with (a) a signal of
>> something, and (b) people who are already compromised ... by the issuer
>> of the clearance, aka, the enemy.
> 
> There isn't *one* enemy sitting in Ft Meade (or Mordor).  There are hundreds of potential enemies. Blackmail and bribery are generic techniques that can be used to compromise people--they can be used by the US government, foreign governments, private criminals, activists, *anyone*.  If the security clearance investigation excludes the people who would have been most susceptible to those techniques, then having passed it adds some value.  How much?  That, I don't know.  


Right.  We're all arguing loudly in agreement:  dealing with a person
with a security clearance, who might have loyalty to an attacker,
should be treated as a variation of existing processes.


>>> but no one trying to infiltrate your project will tell you about those.  
>>
>> Sort of, maybe.  Actually, anyone infiltrating your project will set it
>> up so they don't need to tell you.
>>
>> Very different thing.  You simply have to respond by making it mandatory
>> for them to state such things.  It's a common thing to have a policy
>> requiring conflicts of interest to be disclosed, indeed it is even law
>> in some circumstances.
> 
> Maybe you should simply monitor packets coming from them to check if the evil bit is set?


Yep, we can work with that.  So, the tactic is this:  Monitor the
packets coming in, and ask them if they have their evil bit set.

If the answer is YES (I have a conflict of interest) then you have
established a framework for a shared ethical response to future
problems.  This protects both you and the person.

If the answer is NO (I lie, I'm an agent trying to infiltrate) then
they have committed a deception, which you can write down.  If your
processes are good, then you can impose future costs, meaning an agent
faces an ongoing risk of being exposed and run out of town backwards
on a donkey.
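As an aside, the "evil bit" is a real (April Fools') specification:  RFC
3514 defines it as the reserved high-order bit of the IPv4 flags field.
A tongue-in-cheek sketch of checking it, assuming you already have the
raw 20-byte header in hand:

```python
import struct

def has_evil_bit(ip_header: bytes) -> bool:
    # Bytes 6-7 of the IPv4 header hold the 3-bit flags field plus the
    # 13-bit fragment offset.  RFC 3514's "evil bit" is the reserved
    # top bit (mask 0x8000) of that 16-bit word.
    flags_frag = struct.unpack("!H", ip_header[6:8])[0]
    return bool(flags_frag & 0x8000)

# A minimal hypothetical IPv4 header with the reserved bit set:
hdr = bytearray(20)
hdr[0] = 0x45            # version 4, IHL 5
hdr[6] = 0x80            # reserved ("evil") bit set
print(has_evil_bit(bytes(hdr)))   # True
```

Of course, per the thread, the whole point is that no real attacker sets
the bit;  the value is in forcing them to answer the question on the
record.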

In CAcert [0] we collect transcripts of interviews that explore all
these issues.  We have processes to check identity.  We have it wrapped
in civil arbitration with the power to punish.  On the basis of the
original deception, the Arbitrator can blow open the case.

Obviously a spy can deceive and get in using a cover story.  But at the
end, we'll have a story to tell, and that spy will be blown.

Imagine his photo in the newspaper [1].  That spy's career is now over,
as a spy.  Think Valerie Plame, by whatever name.  If that spy has been
caught making a deception, then they are completely exposed.


> If someone is a covert employee of the FBI on assignment to infiltrate your organization,

Right, this is not about the FBI.  They are investigating crimes.  This
is about spooks, who are spying.  Very very different.


> ...  Similarly, if someone is under the thumb of the Chinese government thanks to those really revealing blackmail photos of their vacation in 
> Thailand, they just aren't going to tell you who they are ultimately working for, because they *really* want to keep the guys holding those photos happy with them.  


No, this is an error.  You're assuming that because some attackers
will lie to you, all attackers will lie to you.  This is not the case.
Most spying is done not by lying, but by getting the victim into a
state of self-deception.  There are reasons for this.

So the tactic is to first make sure you don't self-deceive.  Block
their favourite approach.  Which means you have to address the
possibilities head on.  Require disclosure of conflicts of interest.

If you develop this process, it also makes the other attack (outright
deception) harder, because you are keeping records.  That is an
increased cost to them, and therefore a benefit to you;  they won't
attack unless they really, really need to.


> Federal employees have to disclose conflicts of interest--there is a yearly declaration involving your investments and arrangements and planned jobs and such.  I guess many companies do the same thing.  And this is worthwhile for what it gives you--it probably helps keep people from getting into situations where their personal interests and their job is in conflict.


Right.  And any attacker may themselves be being deceived by the spook
agency.  So creating a framework for disclosure makes it harder for
the spooks to trick honest people, at all levels.  What the agencies
do in defence, FLOSS projects can do in defence.

E.g., hypothetically:  you are in a project to write (say) NIST crypto
code, and there is a standard to which you have agreed that says "we
operate only to the benefit of our users".

Then the spooks have to get you to break that commitment.  This
reduces their attack surface, because most people are honest, and
will be intensely conflicted if asked to slip a backdoor in.  Even if
you agree (the Chinese spies have compromising photos...) your chance of
making a mistake goes up immensely, and your time to being caught goes down.


>  But it doesn't keep them from having some covert interest which they have decided not to disclose.  


Right, we can't stop them lying.  But we can make lying cost-ineffective.



iang



[0] http://wiki.cacert.org/Risks/SecretCells
[1] we don't take photos as yet.  We should...

