[Cryptography] lessons learned -- or not learned -- from Enigma et cetera

John Denker jsd at av8n.com
Thu Jan 8 02:47:14 EST 2015


On 01/07/2015 08:42 PM, Phillip Hallam-Baker wrote:

> Later on, what really destroyed the NAZI war machine was the paranoia that
> the ULTRA decrypts introduced. Hitler believed everyone was stabbing him in
> the back. Rupert Murdoch's wiretapping victims had the same experience,
> they trusted their phones and the only explanation they could see for the
> stories was that one of their friends had betrayed them.

It's not that simple.  Some people were non-metaphorically
trying to stab Hitler in the back and/or blow him to 
smithereens, for reasons having nothing to do with ULTRA 
or Enigma.  It's not a binary choice.  Weak crypto can 
coexist with subversion and treason.

Furthermore, when you have a little bit of each, the
combined effect can be strongly nonlinear, strongly
synergistic.  There is such a thing as multi-factor 
causation.  Competent crypto designers MUST take this
into account.

For example, over an 18-year period (1967--1985), the USSR
was able to break many millions of highly classified US
messages.  Broadly speaking, there were two causal factors:
 a) Chief Warrant Officer John Walker was a spy.
 b) The system was so badly designed that it was
  defenseless against a low-level spy.

For details on the many, many things that were done wrong:
  http://www.fas.org/irp/eprint/heath.pdf

I hardly need remind this group that one of the selling
points for public key crypto is that it greatly simplifies
key distribution and key security.  This is just one of 
the ways in which a well-designed system can demagnify
the effects of a local breach.
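
For concreteness, here is a minimal sketch of that
contrast, in Python using the PyNaCl library (an assumed
dependency; the message and key names are purely
illustrative).  The sender never holds any secret, so
capturing the sender's material exposes no traffic --
unlike a Walker-style keylist, where one stolen copy
unlocks everybody's messages.

  # Sketch only.  Requires PyNaCl (pip install pynacl).
  from nacl.public import PrivateKey, SealedBox

  # Recipient generates a keypair; only the PUBLIC half
  # is ever distributed.
  recipient_key = PrivateKey.generate()
  public_half = recipient_key.public_key

  # Any sender can encrypt using nothing secret at all.
  ciphertext = SealedBox(public_half).encrypt(
      b"illustrative message")

  # Only the holder of the private half can read it.
  plaintext = SealedBox(recipient_key).decrypt(ciphertext)
  assert plaintext == b"illustrative message"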

  By way of contrast, having more than 150 trusted root
  CAs, any of which can sign any domain whatsoever, is
  an example of a bad system.  Any breach is a fiasco.

  This needs to get fixed, yesterday if not sooner.
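
For a quick sanity check on the "more than 150" figure,
a few lines of Python (standard library only; the count
varies by OS and by how the trust store is exposed) will
show what the local machine accepts.  The point is that
every entry on that list is equally trusted for every
domain name on the internet.

  # Sketch only: count the root CAs this machine trusts
  # by default.  Results vary by platform.
  import ssl

  ctx = ssl.create_default_context()  # loads the system default roots
  roots = ctx.get_ca_certs()
  print(len(roots), "trusted roots, each able to vouch for any domain")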

Note that not every subversive is a traitor.  Before and
during WWII, it was certainly possible for Canaris to be
pro-Germany and anti-Hitler.  Similarly Snowden can be a
patriot and opposed to NSA abuses.  Furthermore, not every
screw-up is subversion or treason.  The root-CA situation
is an example.  I would call it a plain old lousy design,
except that I don't think it was designed at all.  It was
hatched.

The Walker fiasco was extensively analyzed in the 1980s
and again by Heath (2005).  However, the Manning leak 
(2010) showed that the NSA had forgotten whatever lessons 
they had learned.  Furthermore, the Snowden leak (2013) 
showed that they STILL hadn't gotten their act together.

Obviously there is no such thing as perfect security,
but you can rig it up so that you don't bleed to death
from a shaving cut.

==================

As an almost-separate issue, here's another lesson that
should have been learned over and over again, yet people
seem to keep forgetting:  These things need to be tested!

This is something you were supposed to learn early in
grade school:  CHECK THE WORK.

The Abwehr should have been able to figure out that the
Allies had broken Enigma.  The US should have been able
to figure out that the USSR was reading virtually all of
the signals sent to US ships at sea.

For starters, you can send a steady trickle of false but
tempting information via encrypted channels.  Sooner or
later the opposition is going to take one of the baits. 
If they check to see whether the information is true or 
not, the act of checking gives away the game.
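
Here is a crude sketch of the bookkeeping, using nothing
beyond the Python standard library; the channel names and
bait details are purely illustrative.  Give each suspect
channel its own unique, plausible-but-false detail, record
which detail went where, and any observed reaction then
pins down which channel was being read.

  # Sketch only: per-channel bait, standard library only.
  import secrets

  channels = ["fleet-broadcast", "attache-wire", "embassy-pouch"]

  def make_bait() -> str:
      """Invent a unique, checkable-but-false detail."""
      grid = f"{secrets.randbelow(90):02d}-{secrets.randbelow(90):02d}"
      return f"convoy assembling at grid {grid}"

  ledger = {ch: make_bait() for ch in channels}

  # Later: if the opposition reacts to one of these details
  # (reconnaissance, redeployment, a probing question), the
  # ledger says which channel leaked.
  for channel, bait in ledger.items():
      print(channel, "seeded with:", bait)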

  It will always be a chore to untangle cryptanalysis
  from direction-finding, traffic analysis, overhead
  surveillance, plain old espionage, et cetera ... but
  you still have to do it.

On an even more basic level, you need to CHECK THE WORK
of the code clerks.  Check it in the field, under less
than ideal conditions.  You didn't really just send a
test message consisting of all Xs -- again -- did you?
You didn't really just use your girlfriend's name as
the passphrase, did you?

This is partly a discipline issue, but also partly a
system design issue, directly relevant to this group:
Make it easy to use the system properly, and hard to
use it improperly.  Don't just guess; OBSERVE how non-
experts use it in the field, and then revise the design
accordingly.  (A toy example of refusing the obvious
misuses appears after the aside below.)

  One good clue is the size of the instruction manual:
  The more instructions there are, the more ways of
  screwing up there are.
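
Here is the toy example promised above, assuming nothing
about any particular cipher or protocol; the blocklist
and thresholds are purely illustrative.  The point is
that the SYSTEM refuses the classic mistakes, instead of
trusting the clerk to remember page 73 of the manual.

  # Sketch only: a send() wrapper that refuses the classic
  # clerk mistakes mentioned above.
  GUESSABLE = {"password", "secret", "berlin"}  # stand-in blocklist

  def send_message(plaintext: str, passphrase: str) -> None:
      # Refuse the all-Xs (or any single-repeated-letter) test message.
      letters = set(plaintext.upper().replace(" ", ""))
      if len(letters) <= 1:
          raise ValueError("refusing a degenerate test message")
      # Refuse passphrases a bored operator would pick.
      if passphrase.lower() in GUESSABLE or len(passphrase) < 12:
          raise ValueError("refusing a guessable passphrase")
      # ... real encryption and transmission would go here ...
      print("message accepted for encryption")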

======================

One more thing about Enigma, and Fialka as well:  The
rotors were hand-wired and hand-soldered.  This strikes
me as weird, given that printed-circuit-board technology
was available even before WWII.  Millions of boards were
made during the war.  They were used in proximity fuses
for anti-aircraft artillery shells, primarily because 
they were extremely robust, not to mention lightweight 
and cheap.

If you miniaturize the rotors you can have more of them,
dramatically increasing the security; a rough count is
sketched after the aside below.  Maybe I'm missing
something, but it is hard to imagine any good reason for
using hand-wired rotors, even in the earliest days, and
certainly not later on.  Reportedly Fialka remained in
use into the 1990s.

  Of course if you have toooo many rotors you get into
  reliability problems, but still, six rotors straight
  through is in many ways better and in no ways worse
  than three rotors and a reflector.
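
Here is the rough count promised above, in Python; it
counts starting positions only, and deliberately ignores
wiring choices, ring settings, and the plugboard.

  # Sketch only: rotor starting positions, nothing else.
  three_rotor_starts = 26 ** 3  # 17,576 (Enigma-style, plus reflector)
  six_rotor_starts = 26 ** 6    # 308,915,776 ("six straight through")

  print(f"3 rotors: {three_rotor_starts:,} starting positions")
  print(f"6 rotors: {six_rotor_starts:,} starting positions")

  # The reflector also guarantees that no letter ever encrypts
  # to itself, a regularity Bletchley Park exploited; a
  # straight-through design does not give that shortcut away.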

Cryptology is always interesting because it lives at the
intersection of super-abstract mathematics and super-
down-to-earth engineering.  Some of these rotor machines
suffered from dubious engineering as well as dubious math.


