Columbia crypto box

bear bear at
Sun Feb 9 12:48:34 EST 2003

On Sat, 8 Feb 2003, Lucky Green wrote:

>In July of 1997, only days after the Mars Pathfinder mission and its
>Sojourner Rover successfully landed on Mars, I innocently inquired on
>the Cypherpunks mailing list if any subscribers happened to know if and
>how NASA authenticates the command uplink to what at the time was
>arguably the coolest RC toy in the solar system.


>Apparently, my original inquiry had been copied and forwarded several
>times. By the time my inquiry had reached the office of the President,
>just as in a children's game of telephone, my question of "are they
>using any decent crypto" had turned into "hackers ready to take over
>Mars Rover".


>Needless to say and regardless of anyone's intent, such concern
>would be entirely unfounded if the uplink were securely authenticated.
>Which I believe represents an answer to my initial question as to
>whether the uplink is securely authenticated.

Actually, I don't think it does.  It's been my experience that the
decision-makers never even *KNOW* whether their systems are secure.
They've been sold snake-oil claims of security so many times, and,
inevitably, seen those systems compromised, that even when responsible
and knowledgeable engineers say a system is secure, they have to
regard it as just another claim of the same type that's been proven
false before.

So I can easily imagine them just not knowing whether the link was
secure, thinking that the NASA engineer's job of "securing" uplinks
might be no better than Microsoft's job of "securing" communications
or operating systems, because they've had it demonstrated time and
again that even when they hear words like "secure", the system can be
compromised anyway.

The fact is that the NASA engineer has a huge advantage; s/he's not
working for a marketing department that will toss security for
convenience, s/he's not working on something whose code has to be
copied a million times and distributed to people with debuggers all
over the world, s/he's not trying to "hide" information from people on
their own computer systems, and s/he's not complying with deals made
with various people that require backdoors and "transparency to law
enforcement" in every box.

So the NASA engineer's actually got a chance of making something
secure, where the Microsoft engineer didn't.  Microsoft has to claim
their junk is secure, but in their case it's just marketing gas.  But
all this is below the notice of the decision makers; they *LIVE* in a
world where marketing gas is indistinguishable from reality, because
they don't have the engineer's knowledge of the issues.

So having the decision makers get real nervous was likely to happen,
whether the link is secure or not.  There's no information there
except that the decision makers have finally realized they don't
really *know* whether the link is secure.  That's progress, of a sort.
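For what it's worth, securely authenticating a command uplink is a well-understood problem. A minimal sketch, purely illustrative (nothing here reflects NASA's actual scheme; the key, frame layout, and function names are assumptions): each command frame carries a sequence number and a MAC under a pre-shared key, so the receiver rejects both forged and replayed frames.

```python
import hmac
import hashlib
import struct

SECRET = b"shared-uplink-key"  # hypothetical pre-shared key

def sign_command(seq: int, command: bytes, key: bytes = SECRET) -> bytes:
    """Frame = 4-byte big-endian sequence number + command + 32-byte HMAC-SHA256 tag."""
    header = struct.pack(">I", seq)
    tag = hmac.new(key, header + command, hashlib.sha256).digest()
    return header + command + tag

def verify_command(frame: bytes, last_seq: int, key: bytes = SECRET):
    """Return (seq, command) if the tag checks and seq is fresh, else None."""
    header, command, tag = frame[:4], frame[4:-32], frame[-32:]
    expected = hmac.new(key, header + command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted frame
    seq = struct.unpack(">I", header)[0]
    if seq <= last_seq:
        return None  # replayed (or out-of-order) frame
    return seq, command
```

The point isn't the particulars; it's that a competent engineer with a clean slate and no marketing constraints can get something like this right with off-the-shelf primitives.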

>[Remind me to some time recount the tale of my discussing key management
>with the chief-cryptographer for a battlefield communication system
>considerably younger than the shuttle fleet. Appalling does not begin to
>describe it].

Battlefield systems have been that way forever.  Battlefield
information only has to remain secure for a few seconds to a few
hours, and they exploit that to the max in making the systems flexible
and fast enough for actual use.  You want appalling?  In the civil
war, they used monoalphabetic substitution as a "trench code" -- on
both sides.


The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at
