[Cryptography] The Crypto Bone's Threat Model

Ray Dillinger bear at sonic.net
Fri Feb 27 12:42:12 EST 2015



On 02/26/2015 11:23 PM, Ralf Senderek wrote:

> I don't have any solution for a complete system reset at the
> moment, because I don't want to place a button in the web
> interface that removes the symbolic link.  I'm open to
> suggestions on how to do this securely, other than inserting a
> virgin, verified new SD image and starting from scratch.

---

> I'm really interested to find out how verification can be
> implemented apart from signing the mSD card image file.  When the
> system is in use, a tripwire-like IDS may help, but the user won't
> be able to make sense of the scan results.  What are your
> suggestions?

Here is how I would try to do it if I were designing hardware.

FWIW, this is exactly what I'd like in a desktop or laptop
computer too.  But then I'm one of those curmudgeons who
misses diskettes with write-protect tabs and wishes hard
disks (and solid-state drives) came with write-protect toggle
switches.

Essentially this amounts to separating all firmware updating
from all normal operation at as low a level as possible, while
providing a "hardware reset" capability to restore a known
firmware configuration.

1)  The device has in ROM (i.e., not rewritable AT ALL) the
    required code to disable the device's external connections,
    then "flash" all the rewritable firmware in the device
    back to a known configuration.  Any updates to firmware
    beyond the default configuration must be applied again
    every time this code is run.  Absolutely no on-device
    memory image anywhere, save only the non-rewritable ROM
    itself, survives the autoflash.

2)  The "autoflash" action described above happens when and only
    when a physical switch on/in the device is triggered by a
    human.  It cannot be triggered nor prevented in any way
    from software, including firmware.

    I'd suggest a switch with a good mollyguard; maybe a
    tiny button located behind a 12mm-deep 1mm-wide hole, such
    that someone would have to stick a wire into the socket
    to reset it or something. Or maybe a DIP switch mounted
    behind a snap-out panel.  Up to you, just don't leave it
    hanging out where someone could hit it accidentally.

3)  When in its "autoflashed" mode it can copy new firmware
    images from the mSD card into its own RAM, but it does not
    actually install any of them, on any device, until....

4)  When the operator flips the switch back to "normal operation"
    mode, a different chunk of the ROM code runs.  It flashes
    the new images to all on-device firmware, disables all
    firmware updating, and THEN re-enables network communications.
    At no time until it has finished running does it call any of
    the firmware it is installing or has installed.  (A rough
    sketch of the whole sequence follows below.)
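
To make the intent concrete, here is a very rough sketch in C of
what the mask-ROM logic might look like.  Everything in it is
hypothetical: the hardware hooks (read_update_switch,
flash_restore_factory, read_image_from_msd and so on) are made-up
names standing in for whatever registers a real part would expose,
and the three-device layout is only an example.

    /* Rough sketch of the mask-ROM logic described in steps 1-4.
     * Every hardware hook below is a made-up name standing in for
     * whatever registers/boot-ROM services a real part would have. */

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* --- assumed hardware hooks (hypothetical, not a real API) --- */
    bool   read_update_switch(void);           /* physical switch position       */
    void   disable_external_interfaces(void);  /* network, USB, radios off       */
    void   enable_external_interfaces(void);
    void   lock_firmware_flash(void);          /* hardware write-protect latch   */
    size_t read_image_from_msd(const char *name, uint8_t *buf, size_t max);
    void   flash_write(int device, const uint8_t *img, size_t len);
    void   flash_restore_factory(int device);  /* copy baked-in image out of ROM */
    void   run_normal_firmware(void);          /* jump into the installed image  */

    #define N_DEVICES 3                        /* e.g. SoC, NIC, storage ctrl    */
    #define IMG_MAX   (1u << 20)

    static uint8_t staging[N_DEVICES][IMG_MAX]; /* step 3: images held in RAM only */
    static size_t  staged_len[N_DEVICES];
    static const char *image_name[N_DEVICES] = { "soc.img", "nic.img", "stor.img" };

    void rom_entry(void)
    {
        if (read_update_switch()) {
            /* Step 1: switch is in the "autoflash" position.  With all
             * external interfaces dead, put every piece of rewritable
             * firmware back to the known image held in this ROM. */
            disable_external_interfaces();
            for (int d = 0; d < N_DEVICES; d++)
                flash_restore_factory(d);

            /* Step 3: optionally stage replacement images from the mSD
             * card into RAM.  Nothing is installed yet. */
            for (int d = 0; d < N_DEVICES; d++)
                staged_len[d] = read_image_from_msd(image_name[d],
                                                    staging[d], IMG_MAX);

            /* Step 2/4: wait for the human to flip the switch back. */
            while (read_update_switch())
                ;
        }

        /* Step 4: install whatever was staged, disable further flashing,
         * and only then bring the outside world back up. */
        for (int d = 0; d < N_DEVICES; d++)
            if (staged_len[d] > 0)
                flash_write(d, staging[d], staged_len[d]);

        lock_firmware_flash();
        enable_external_interfaces();
        run_normal_firmware();   /* first and only call into the new firmware */
    }

The property that matters is visible in the control flow: images
staged from the mSD card are never installed or executed while the
external interfaces are live, and the flash write-protect latch is
set before the network comes back up.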

Any such device that is physically unmodified can be fully
configured (including being restored to REAL factory settings)
by anyone who controls the contents of the mSD card.

At the same time there's no "signature checking" phase that
would mean a key has to be built into hardware and that
users have to trust a single potentially-subvertible source
for updates.

This preserves the principle that trust is subjective;
i.e., a human who subjectively experiences trust can check
signatures or whatever, and satisfy himself or herself that the
information comes from a source s/he personally, subjectively,
trusts.  But a device, which has no subjective experience within
which concepts like "trust" have a meaning, cannot trust others
on behalf of its owner.
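
For instance, a user could run a check like the one below on a
trusted host before copying an image onto the mSD card.  This is
not part of the Crypto Bone; it's a sketch under a couple of
assumptions: that the image publisher ships a detached Ed25519
signature alongside the image, and that the user has already
fetched the publisher's public key over some channel s/he
personally trusts.  File names are made up and it uses libsodium.

    /* Hypothetical host-side check before copying an image onto the
     * mSD card: verify a detached Ed25519 signature with a public key
     * the user obtained from a source s/he personally trusts.
     * Uses libsodium; all file names are illustrative. */

    #include <sodium.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Read an entire file into a malloc'd buffer; NULL on failure. */
    static unsigned char *slurp(const char *path, size_t *len)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return NULL;
        fseek(f, 0, SEEK_END);
        long sz = ftell(f);
        rewind(f);
        if (sz <= 0) { fclose(f); return NULL; }
        unsigned char *buf = malloc((size_t)sz);
        if (buf && fread(buf, 1, (size_t)sz, f) != (size_t)sz) {
            free(buf);
            buf = NULL;
        }
        fclose(f);
        *len = (size_t)sz;
        return buf;
    }

    int main(void)
    {
        if (sodium_init() < 0) return 1;

        size_t img_len = 0, sig_len = 0, pk_len = 0;
        unsigned char *img = slurp("firmware.img",     &img_len);
        unsigned char *sig = slurp("firmware.img.sig", &sig_len);
        unsigned char *pk  = slurp("publisher.pub",    &pk_len);

        if (!img || !sig || !pk ||
            sig_len != crypto_sign_BYTES ||
            pk_len  != crypto_sign_PUBLICKEYBYTES) {
            fprintf(stderr, "missing or malformed input files\n");
            return 1;
        }
        if (crypto_sign_verify_detached(sig, img, img_len, pk) != 0) {
            fprintf(stderr, "BAD signature -- don't copy this image\n");
            return 1;
        }
        puts("signature OK -- image matches the key you chose to trust");
        return 0;
    }

Build with something like "cc verify_image.c -lsodium".  The point
is only that the key lives with the user, not inside the device, so
there is nothing for a manufacturer or interdictor to pre-install.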

Of course, anyone can make a device which *looks* like it
works exactly that way, and *claim* that it works exactly that
way, even if it doesn't.  That includes the original
manufacturer, as well as anyone who interdicts the device in
shipment or gets access to it via a black bag job.

So we're now talking about trusting trust, which is an
infinite regress.  But, because trust is purely subjective,
the regress breaks at the level of the user.  If he
subjectively experiences trust, then for better or worse
he trusts the device.

Electronic devices *can* be made as trustworthy as hammers
and wrenches. The problem at the bottom level is that almost
no one can check that they truly have one of the trustworthy
ones.

But that's also true of mechanical devices.  Someone can
sabotage a hammer or a wrench, too -- or a nail gun, an air
wrench, a car, a bridge, a high-rise building, an airliner,
or a hydroelectric dam.  As with electronic devices, the larger
and more complicated a mechanical device is, the easier it is to
hide sabotage.  The more powerful it is and the more people who
depend on it, the worse the consequences of that sabotage.

Bear
