[Cryptography] Trust & randomness in computer systems

Natanael natanael.l at gmail.com
Wed Mar 16 18:29:19 EDT 2016


Den 16 mar 2016 18:16 skrev "Henry Baker" <hbaker1 at pipeline.com>:
>
> Even though I'm a formalist by nature & training,
> I can see that formal methods are not going to be
> sufficient to solve most of the problems in computer
> security today.

[...]

> I've come around to Dan Geer's way of thinking:
> look to biological systems.  They've been dealing
> with "security" problems for perhaps 2 billion
> years, so there's some chance that they have
> some tricks up their microscopic sleeves.
>
> For example, it would seem that cell "suicide"
> is a lot more common than previously thought.
> If a cell determines that it has been overwhelmed
> by forces that it cannot control, and this is
> a threat that can overwhelm other cells, as well,
> it will commit suicide in an attempt to stop a
> pathogen from spreading.  Ditto for individual
> plants and animals; the survival of the species
> is more important than the survival of the
> individual.

Expirations and death signals have been discussed in the past on the
various crypto lists. There's no broad consensus yet, but I favor predefined
expiration dates in each and every software component, which can only be
extended by a signed software update. And to reduce the risk from broken
signature algorithms, after a second expiration date (?) the component
can't be updated remotely, only in person.
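A minimal sketch of the two-stage expiration idea (the class, dates, and signature check are all hypothetical, just to make the rules concrete):

```python
from datetime import date

class Component:
    """Two-stage expiration: after soft_expiry the component stops working
    until updated; after hard_expiry the remote update path closes entirely
    and only an in-person update can revive it."""

    def __init__(self, soft_expiry, hard_expiry):
        self.soft_expiry = soft_expiry   # extendable via signed remote update
        self.hard_expiry = hard_expiry   # fixed cutoff for remote updates

    def is_expired(self, today):
        return today > self.soft_expiry

    def accept_remote_update(self, today, signature_valid, new_soft_expiry):
        # Honor remote updates only before the hard expiry, and only when
        # the update's signature verifies.
        if today > self.hard_expiry or not signature_valid:
            return False
        self.soft_expiry = new_soft_expiry
        return True
```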

Also, I advocate graceful degradation where possible. An expiration should,
where possible, only take out networking and other untrusted interfaces.
Where the functionality needs networking to be meaningful or safe, that too
would have to be disabled. A smart fridge becomes just a fridge; a traffic
light becomes a brick.
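The degradation rule is easy to state as code (everything here is a hypothetical illustration, not a real device model):

```python
def surviving_functions(functions, expired):
    """Graceful degradation sketch. 'functions' maps a function name to
    whether it needs networking to be meaningful or safe. On expiration,
    everything depending on networking is disabled; the rest keeps running."""
    if not expired:
        return set(functions)
    return {name for name, needs_net in functions.items() if not needs_net}

# A smart fridge keeps cooling but loses its cloud features; a traffic
# light, whose only function needs networked commands, becomes a brick.
fridge = {"cooling": False, "shopping_list_sync": True}
traffic_light = {"signal_control": True}
```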

Components that rely on different security assumptions don't all need to
go offline when only a few have their assumptions broken. Traffic lights,
for example, only need authenticated commands (hash-based signatures should
last), not secrecy (perhaps the particular block cipher mode in use breaks),
so only the traffic camera on that same pole goes offline.
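The traffic-pole example as a sketch, where each component declares the assumptions it depends on (component and assumption names are made up):

```python
def still_online(components, broken_assumptions):
    """Only components whose security assumptions intersect the broken
    set go offline; the rest keep running."""
    return [c["name"] for c in components
            if not (c["assumes"] & broken_assumptions)]

# Hypothetical pole: the light needs only authenticity (hash-based
# signatures); the camera additionally needs secrecy (a cipher mode).
pole = [
    {"name": "traffic_light",  "assumes": {"hash_signatures"}},
    {"name": "traffic_camera", "assumes": {"hash_signatures", "cipher_mode"}},
]
```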

> We now build *distributed* power supplies
> into all of our electronic components,
> because it's far more robust than attempting
> to guarantee a sufficiently smooth source of
> power from the higher-level subsystem.  We
> didn't do this out of a lack of trust in
> power supplies, but perhaps we should, as
> power supplies can be maliciously manipulated
> to cause glitches which can be exploited.

[...]

> Another inspiration from biology: embrace
> randomness.  We've gone to every conceivable
> effort to eliminate randomness from our
> electronic systems, yet every IoT device
> *requires* randomness in order to properly
> generate the random crypto *keys* it will
> need in order to communicate with other
> components *securely*.

My currently favored IoT architecture is an electrically segmented design,
based on the earlier ideas about trusted electrical paths and made
necessary by all the security bugs in the more advanced CPUs.

First you create a trusted (trustworthy!) controller segment with the
minimal amount of code and circuitry needed to handle the I/O for all of
a device's local sensitive inputs and outputs. Sensitive here means
anything that can cause harm, by accident or by malice: microphones,
cameras, radios, engines, safety mechanisms, etc. This controller provides
an optical I/O gate to the computing segment (no electrical glitching),
together with a minimal, carefully defined API.

This controller segment can have multiple groups of electrical paths for
I/O. Whatever it is directly responsible for is connected straight to the
controller ICs (like an engine). However, for non-controlling,
intelligence-supporting connections (such as sensors that feed the
computing segment with data) there could be direct low-latency lines to the
computing segment, with switches that allow the controller segment to
electrically disconnect them.

Then there's the computing segment. It has all the intelligence, but it
needs permission from the controller segment to do anything (it is an
electrical island, and the controller is its only bridge). Even for
networking, as you might have noticed - the controller segment acts as a
proxy for it and can electrically cut the connection. If the controller
segment says its expiration date has passed, then it goes offline - but up
until then the computing segment can download and pass on signed firmware
updates with extended expiration dates, if any are available.

For as long as the controller accepts commands, the computing segment is
free to send any command the API offers, and the controller will happily
comply (subject to any safety limitations programmed in).
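A controller-side command handler along these lines might look like the following sketch (the API table, command names, and limits are all hypothetical):

```python
# The computing segment may send anything over the link, but the
# controller only executes commands that are in its API and that pass
# its programmed safety limits.
API = {
    "set_motor_speed": lambda rpm: isinstance(rpm, int) and 0 <= rpm <= 3000,
    "read_temperature": lambda arg: arg is None,
}

def handle_command(name, arg=None):
    check = API.get(name)
    if check is None or not check(arg):
        return "rejected"   # unknown command or safety limit exceeded
    return "executed"
```

The key design choice is that the whitelist and limits live on the trusted side of the optical gate, so a compromised computing segment can ask, but never force.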

Crypto operations should only happen on circuitry and with code which you
can assure, with reasonable certainty, won't have side channels - either
with dedicated crypto acceleration chips or on well-understood
microcontrollers running safe crypto code adapted for the chip.
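One small, concrete example of side-channel-aware code on the software side, using Python's standard library:

```python
import hmac

def verify_tag(expected: bytes, received: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # first differ, closing the classic timing side channel in MAC tag
    # comparison (a naive == comparison can leak the matching prefix
    # length one byte at a time).
    return hmac.compare_digest(expected, received)
```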

Also, for death signals to be effective, the devices issuing them to their
local networks should know exactly what is running there (for efficiency,
so you don't need to mirror every such signal ever issued, daily).

To do that while preserving privacy, it would help to have a setup like
PHB's Mesh concept, where each user has their own master keys and there's
a hierarchy among a user's devices. You could, for example, have a secure
home server (which I'm advocating) that all your devices report in to and
declare their status to (including running software, configurations, etc.).
The home server would benefit from the segmented design too, with one
powerful computing part and one part dedicated to security functions.

This home server would then be responsible for polling for security flags
and issuing death signals when necessary to devices under that user's
control. It would also check for software updates, and may very well
download and cache them while the devices they're for are powered off or
otherwise busy.
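The inventory-based targeting could be sketched like this (class, component names, and device IDs are invented for illustration):

```python
class HomeServer:
    """Devices declare what software they run, so the server only
    forwards a death signal to devices it actually applies to, instead
    of mirroring every signal ever issued to everything, daily."""

    def __init__(self):
        self.inventory = {}  # device_id -> set of component names

    def report(self, device_id, components):
        self.inventory[device_id] = set(components)

    def death_signal_targets(self, flagged_component):
        # Only devices running the flagged component get the signal.
        return {dev for dev, comps in self.inventory.items()
                if flagged_component in comps}
```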

If the IoT devices at least have a secure connection to the home server,
there could be an option to bypass the expiration date by making the home
server an inspecting proxy / DPI firewall on the device's external API:
the server is told exactly how to detect and filter malicious commands, so
that only benign commands reach the device. This way we can also use
software to rewrite commands (when possible) for the IoT device when the
old API version gets deprecated and a new one is put in use but no
corresponding firmware update is issued.
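The filter-and-rewrite proxy could be as simple as this sketch (the pattern matching here is deliberately naive, and all the command strings are made up):

```python
def proxy_filter(command, deny_substrings, rewrites):
    """Inspecting-proxy sketch: drop commands matching known-malicious
    patterns, and rewrite deprecated API calls on behalf of devices
    that never got a firmware update for the new API version."""
    for bad in deny_substrings:
        if bad in command:
            return None  # blocked; never reaches the device
    for old, new in rewrites.items():
        command = command.replace(old, new)
    return command
```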

Any questions, suggestions, criticism?