[Cryptography] Trust & randomness in computer systems

Henry Baker hbaker1 at pipeline.com
Wed Mar 16 12:07:59 EDT 2016


Even though I'm a formalist by nature & training,
I can see that formal methods are not going to be
sufficient to solve most of the problems in computer
security today.

Much of the reason is that we're trying to
replace the engine & wings on
a plane that's already flying with billions of folks
aboard.  For example, we jumped into e-commerce
before we even knew how to build safe & secure
crypto systems.  We still don't, but we're a lot
better than we used to be; unfortunately, we're
still putting out crypto fires that started 25
years ago.

I've come around to Dan Geer's way of thinking:
look to biological systems.  They've been dealing
with "security" problems for perhaps 2 billion
years, so there's some chance that they have
some tricks up their microscopic sleeves.

For example, it would seem that cell "suicide"
(apoptosis) is a lot more common than previously
thought.  If a cell determines that it has been
overwhelmed by forces it cannot control, and
that the same threat could overwhelm other
cells as well, it will commit suicide in an
attempt to stop a pathogen from spreading.
Ditto for individual
plants and animals; the survival of the species
is more important than the survival of the
individual.

As IoT computers become cheaper than the postage
it costs to mail them, it is no longer necessary
to "save" the computer or even "reprogram" it.
Throw it away -- or better yet, grind it to dust.
(Note to E.E.'s: we need cheap chips which can
self-destruct rather than disclose priceless
information.)
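
Here's a minimal software sketch of that
"suicide" reflex, with an assumed, illustrative
tamper_detected() stub standing in for real
tamper sensors (mesh, voltage monitor, clock
watchdog): on tamper, wipe the key material
before it can be read out.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Stub tamper signal -- assumed for illustration;
       a real part would wire this to tamper sensors. */
    static int tamper_detected(void) { return 1; }

    static uint8_t secret_key[32] = { 0xde, 0xad, 0xbe, 0xef };

    /* Overwrite key material so a captured device
       discloses nothing.  The volatile pointer keeps
       the compiler from optimizing the wipe away. */
    static void zeroize(volatile uint8_t *p, size_t n)
    {
        while (n--) *p++ = 0;
    }

    int main(void)
    {
        if (tamper_detected()) {
            zeroize(secret_key, sizeof secret_key);
            puts("key zeroized; refusing to operate");
            /* a real device would also halt or
               physically self-disable here */
        }
        return 0;
    }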

Since it's "turtles all the way down", and since
turtles can't be trusted, we need to *build
distrust* into all of our systems.  We can no
longer take a NAND gate at face value & trust
that it computes correctly.  Yes, the vast
majority of faulty NAND gates will be due to
the usual manufacturing defects, but some will
be due to *faulty design*, and some will be
due to *malicious behavior* on the part of
some criminal or state (but I might be
repeating myself).
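
One concrete way to build that distrust in is
classical triple modular redundancy: run the
same computation on three independently sourced
units and take a majority vote, so a single
faulty or malicious unit gets outvoted.  A toy
sketch, where the three "implementations" stand
in for independently fabricated parts and the
third is deliberately wrong:

    #include <stdio.h>

    /* Three independently sourced NAND gates.  In
       practice: separate dies from separate fabs.
       The third is faulty (it computes AND). */
    static int nand_a(int x, int y) { return !(x && y); }
    static int nand_b(int x, int y) { return !(x && y); }
    static int nand_c(int x, int y) { return  (x && y); }

    /* Majority vote: 1 iff at least two of three agree. */
    static int majority3(int a, int b, int c)
    {
        return (a & b) | (a & c) | (b & c);
    }

    int main(void)
    {
        for (int x = 0; x <= 1; x++)
            for (int y = 0; y <= 1; y++)
                printf("NAND(%d,%d) = %d\n", x, y,
                       majority3(nand_a(x, y),
                                 nand_b(x, y),
                                 nand_c(x, y)));
        return 0;
    }

The vote masks the bad gate on every input; the
cost is 3x the hardware plus the voter, which is
why TMR has historically been reserved for
high-assurance systems like avionics.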

We now build *distributed* power supplies
into all of our electronic components,
because it's far more robust than attempting
to guarantee a sufficiently smooth source of
power from the higher-level subsystem.  We
didn't do this out of distrust of power
supplies, but perhaps we should have, since
power supplies can be maliciously manipulated
to cause glitches that can be exploited as
fault-injection attacks.

We now build *error correcting codes* into
nearly every subsystem, because 1) it's
relatively cheap; and 2) the cost of attempting
to debug every single type of signal-propagation
error is prohibitive.  We may not have
considered trust when incorporating ECC, but
nowadays we might seriously consider using
SHA256 instead of (or in addition to)
traditional ECC: an error-correcting code
handles random bit flips, while a cryptographic
hash detects deliberate tampering.
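
As a sketch of that idea: carry a SHA-256 digest
alongside a stored block and recompute it on
read.  This assumes OpenSSL's libcrypto (link
with -lcrypto); the ECC would sit below this
layer, correcting random bit flips, while the
hash flags deliberate changes.

    #include <stdio.h>
    #include <openssl/sha.h>
    #include <openssl/crypto.h>   /* CRYPTO_memcmp */

    int main(void)
    {
        unsigned char block[] = "sensor reading: 42";
        unsigned char stored[SHA256_DIGEST_LENGTH];
        unsigned char check[SHA256_DIGEST_LENGTH];

        /* On write: store the digest with the block. */
        SHA256(block, sizeof block, stored);

        /* ... an attacker (or a fault) flips a byte ... */
        block[16] ^= 0x01;

        /* On read: recompute, compare in constant time. */
        SHA256(block, sizeof block, check);
        if (CRYPTO_memcmp(check, stored, sizeof check))
            puts("integrity failure: block was modified");
        else
            puts("block intact");
        return 0;
    }

One caveat: an attacker who can also rewrite the
stored digest defeats a bare hash; against that
threat you'd want a keyed MAC such as
HMAC-SHA256.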

For all of these reasons, we need to build
distributed *distrust* into every component.

Another inspiration from biology: embrace
randomness.  We've made every conceivable
effort to eliminate randomness from our
electronic systems, yet every IoT device
*requires* randomness in order to properly
generate the random crypto *keys* it will
need in order to communicate with other
components *securely*.
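
The software side of that requirement is tiny;
the hard part is having a real entropy source
underneath it.  A minimal sketch, assuming a
POSIX-style system with /dev/urandom (bare-metal
IoT parts would read a hardware TRNG register
instead):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char key[32];
        FILE *f = fopen("/dev/urandom", "rb");

        if (!f || fread(key, 1, sizeof key, f) != sizeof key) {
            /* better no key at all than a guessable one */
            fputs("no entropy source; refusing to make a key\n",
                  stderr);
            return EXIT_FAILURE;
        }
        fclose(f);

        for (size_t i = 0; i < sizeof key; i++)
            printf("%02x", key[i]);
        putchar('\n');
        return 0;
    }

The failure branch matters: devices that
generated keys from an empty entropy pool at
first boot are how large numbers of fielded
boxes ended up with shared or factorable RSA
keys.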

Furthermore, this exquisite *cleanliness*
of component power supplies and signals
means that it is almost trivial to snoop
on these subsystems to determine when they
are computing with crypto keys and then to
extract those keys; this is precisely what
power-analysis attacks exploit.

There has got to be a new type of computer
design in which the randomness is not only
not extinguished, but embraced, so that
computations are inherently far more
random (and hence can't be easily snooped),
and randomness for crypto keys is trivially
available.
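
One existing technique that points in this
direction is *masking*, a standard side-channel
countermeasure: split the secret into random
shares so that no intermediate value correlates
with the key.  A toy sketch of first-order
Boolean masking of an XOR step (rand() stands in
for a real TRNG and is *not* cryptographically
secure):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        srand((unsigned)time(NULL));

        uint8_t key  = 0x5A;          /* the secret */
        uint8_t data = 0x3C;

        /* Split the key into two random shares,
           key = s0 ^ s1.  Each share alone is
           uniformly random, so a probe that sees
           either one learns nothing. */
        uint8_t s0 = (uint8_t)rand();
        uint8_t s1 = key ^ s0;

        /* Compute data ^ key without the key ever
           appearing: XOR is linear, so fold in one
           share at a time. */
        uint8_t t      = data ^ s0;
        uint8_t result = t ^ s1;

        printf("masked result = 0x%02X (expect 0x%02X)\n",
               result, data ^ key);
        return 0;
    }

Fresh shares are drawn for every computation, so
even the power trace of the "same" operation
looks different each time.  Nonlinear steps
(e.g., AES S-boxes) need more elaborate masked
implementations, but the principle is the same.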

I don't have the solutions, but I'm afraid
that we've only been looking near the
lampposts where the light is the brightest.
We need to move away from the lampposts &
look further afield.


