[Cryptography] Security clearances and FOSS encryption?

John Denker jsd at av8n.com
Thu Jul 10 18:45:30 EDT 2014


On 07/09/2014 09:18 AM, John Kelsey began by saying:

> To the extent clearances do what they're supposed to do [....]

Lost me already.

The problem is, the clearance system has never worked 
very well AFAICT, and I've never seen a plausible
proposal for fixing it.

For starters, the concept of "need to know" is a central
pillar of the system, but if you take it literally, bad 
things happen.  For instance, politicians complain that 
in the run-up to 9/11, the law enforcement and intelligence 
communities failed to "connect the dots".  The dots could
not be shared, because there was not a demonstrated "need
to know".

In the usual bureaucratic way, after 9/11 the pendulum 
swung in the other direction.  People were encouraged 
to share more.  As a result, people such as Pvt. Manning
and Edward Snowden got access to huge amounts of stuff.

This is relevant to crypto, insofar as the better 
your cryptography is, the more obvious it becomes 
that human weakness (not code-breaking) is the 
dominant threat to your encrypted secrets.

There are fundamental reasons why there is no happy
medium, no way to make "need to know" actually work.
Consider the whole idea of /research/ on classified
topics.  Good research depends on knowing what other
people are up to.  It always has, at least since the
1600s.  Even Newton needed to "stand on the shoulders 
of giants."  The problem is, you don't know until
afterwards what it was that you needed to know.

There are fundamental dilemmas even in non-research
settings.  Consider the lowly airport security screeners.
On the one hand, you would like them to know exactly 
what to look for.  On the other hand, it would be a 
disaster if the bad guys found out exactly what the 
screeners were looking for.  There is a huge number of 
such screeners.  Do you trust them all?  In particular,
what about the screeners who serve United Airlines
flight 802, departing Dubai direct to Dulles?  Logic says
those guys /especially/ need to know.  OTOH, how much 
do you trust them?

There are also long-standing and never-ending problems 
with how clearances are granted.  Once upon a time, 
clearances were given to proper gentlemen with the 
correct "old school tie" ... which led to the Cambridge 
Five and other fiascos.  More recently, as we know
from news stories, corners were cut on the background 
checks for people in fairly responsible positions.
Now extrapolate:  Imagine how many corners get cut 
in connection with low-level positions, such as
airport screeners.

(A) Publicly available evidence suggests that almost 
everybody with a clearance fudges the rules, even while
exercising good judgment and complete loyalty.  Without
fudging, nothing would ever get done.  On the other 
hand, it opens up endless opportunities for blackmail 
... from the inside.  Suppose you are an engineer doing 
fluid dynamics at a top-secret government lab.  If 
some coworker doesn't like you, he can have you 
arrested and threatened with life imprisonment under 
the 1917 Espionage Act.  After a year or so in solitary 
confinement without bond, the charges might (if you're 
lucky) get bargained down to misdemeanor mishandling 
of classified information ... which you probably /are/ 
guilty of, along with everybody else.  The fact that
you exercised good judgment and that everybody else is 
doing the same thing is not a defense under the law.

(B) This is not the only law that works this way.  I 
reckon everybody on K Street routinely violates the
bribery statute, 18 USC 201, not to mention the Hobbs 
Act, 18 USC 1951, but they almost never get prosecuted
for it.  Jack Abramoff understands that he broke the
law, but he doesn't understand why he got prosecuted
while so many others continue with monkey business 
as usual.

There is a partial analogy here, but *not* a complete 
equivalence.  I'm unhappy with bad laws that go mostly
unenforced, and also with good laws that go mostly 
unenforced, but the cure is different in the two cases.
I have sympathy for the routine law-breakers in group 
(A) but not group (B).  IMHO many of the former are 
just trying to do the right thing, while the latter are 
selfish bastards who pose a grave threat to democracy.

> We have a kind of instinctive security notion of wanting to build a
> nice big wall with bad guys outside and good guys inside, but that
> doesn't really work too well. 

That's bureaucratic approach #1.  We agree, it doesn't
work too well.  There's nothing wrong with trying to 
identify bad guys and keep them outside, but it doesn't 
solve the whole problem.

The whole idea that there exist good guys as distinct
from bad guys reflects a Manichaean world-view that
has little connection to reality.  In the real world,
there do exist adversaries and enemies who *must* be
kept outside ... but that does not make everybody else
a reliably good guy.

> Instead, we need processes that don't
> rely overmuch on any one person's integrity or competence.  (That
> protects against errors as well as malfeasance.)

That's bureaucratic approach #2.  There's nothing wrong 
with it as far as it goes, and indeed sometimes it solves 
a great many small sub-problems, but it does not solve 
the big overall problem, not by a long shot.

The word "instead" does not belong here.  The two approaches
are not mutually exclusive.  Indeed, they work better 
together than separately.

Alas, crucially, even the combination of the two doesn't 
work very well.  

In particular, if the amount of bad behavior on the inside
exceeds critical mass, exceeds the percolation threshold,
then the system fails spectacularly.  Well-known examples
include:
  -- LAPD Rampart scandal
  -- peer review for Chinese scientific journals
  -- various emperors overthrown and/or assassinated
     by the Praetorian Guard
  -- politicians in thrall to the aforementioned
     selfish bastards
  -- state secrets privilege used to conceal government
     wrongdoing (e.g. Reynolds, among many others)
  -- et cetera
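
To make the percolation point concrete, here is a toy sketch.
It is purely illustrative, not a model of any real organization:
treat insiders as nodes in a random graph G(n, p), where an edge
means two insiders will cover for each other.  Classic percolation
says that below roughly p = 1/n the largest colluding cluster stays
tiny, while just above it a giant cluster abruptly spans much of
the organization.  The particular numbers (n = 400, the sweep of
the mean degree) are arbitrary choices for the sketch.

  # Toy percolation sketch -- illustrative only.
  # Insiders are nodes in a random graph G(n, p); an edge means two
  # insiders will cover for each other.  We measure the size of the
  # largest "colluding cluster" as p sweeps through the percolation
  # threshold p = 1/n.
  import random

  def largest_cluster_fraction(n, p, rng):
      """Build G(n, p) and return (size of largest component) / n."""
      parent = list(range(n))                  # union-find forest

      def find(x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]    # path halving
              x = parent[x]
          return x

      for i in range(n):
          for j in range(i + 1, n):
              if rng.random() < p:             # independently add each edge
                  ri, rj = find(i), find(j)
                  if ri != rj:
                      parent[ri] = rj          # merge the two clusters

      sizes = {}
      for i in range(n):
          r = find(i)
          sizes[r] = sizes.get(r, 0) + 1
      return max(sizes.values()) / n

  if __name__ == "__main__":
      rng = random.Random(0)
      n = 400                                  # arbitrary toy size
      for c in (0.5, 0.8, 1.0, 1.5, 2.0, 3.0): # mean degree, i.e. p = c/n
          frac = largest_cluster_fraction(n, c / n, rng)
          print("mean degree %.1f : largest cluster = %4.1f%% of insiders"
                % (c, 100 * frac))

The point of the sketch is only that the failure mode is abrupt:
nothing much happens as the amount of internal collusion creeps
up, until suddenly it does.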

As a separate matter, even if everybody on the inside
is a good guy, the claim that "given enough eyeballs, 
all bugs are shallow" still doesn't work all that well.
Maybe a lot of bugs, but not "all" bugs.

The fundamental issues of trust, trustworthiness, integrity,
patriotism, loyalty, good judgment, et cetera are beyond 
what bureaucracy can understand, much less regulate.


