[Cryptography] What is a safe CPU? (was Re: ORWL - The First Open Source, Physically Secure Computer)

Perry E. Metzger perry at piermont.com
Mon Aug 29 15:11:48 EDT 2016


On Mon, 29 Aug 2016 17:33:35 -0000 dj at deadhat.com wrote:
> The practicality of building a secure CPU in an FPGA is of course
> dependent on first understanding what is meant by secure.

Ignoring the FPGA question and concentrating entirely on what is
meant by "secure" here, there are of course several levels, two of
the more obvious ones being the design and the fabricated
realization of that design.

Going from the bottom up:

1) One wishes to ensure that the processor returned to you by the
fabrication facility follows the precise instructions for fabrication
you gave to that facility. That is, you wish to know that the masks
you specified were the ones that were actually used, that no extra
dopant was injected into a key transistor (see Becker et al.'s
dopant-level trojan attack), that no extra circuits have been added,
that none have been disabled, etc.

Knowing that this is truly the case is Very Very Difficult, but there
is at least a known physical process for doing it. You have to take a
statistical sample of what is returned by the fab and literally tear
the things apart in a very expensive lab, removing layer after layer
of the chip and verifying that what is there is precisely what was
supposed to be there.
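
To get a feel for the cost, here is a back-of-the-envelope sketch
(my numbers are purely illustrative, and it assumes the trojan shows
up in a random fraction p of the parts -- a trojan in one targeted
part evades sampling entirely). The number of teardowns n needed to
catch one with confidence c follows from (1-p)^n <= 1-c:

    /* Sample-size sketch: how many parts must be torn down to find,
     * with confidence c, a trojan present in a fraction p of parts.
     * Solves (1-p)^n <= 1-c for n.  Illustrative assumption only:
     * real trojans need not be randomly distributed. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p = 0.01;   /* assumed fraction of trojaned parts */
        double c = 0.99;   /* desired confidence of catching one */
        double n = ceil(log(1.0 - c) / log(1.0 - p));
        printf("tear down %.0f parts for %.0f%% confidence\n", n, 100 * c);
        return 0;
    }

With these numbers you are tearing down several hundred parts, which
gives some sense of why the lab work is so expensive.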

2) One wishes to ensure that the specified realization of the gate-
level design (that is, the implementation of the gates in terms of
actual transistors and wires placed in real-world locations on the
chip) has no unknown/unwanted behaviors and admits no side channel
attacks.

This can be very challenging indeed given that bad placement of long
wires or transistors could produce all sorts of weird behaviors.

For a normal microprocessor things are already quite difficult in
this regard -- it might be possible, for example, that analog-level
electrical glitches that alter externally observable behavior could
be triggered by executing instructions in particular patterns. This
has considerable precedent in the literature -- such things have
actually happened. I do not know that we currently have reliable
tools, even at the theoretical level, for handling this problem,
though design rules and the like help a bit.
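
The best-known instance of this is probably the Rowhammer family of
attacks (strictly a DRAM failure rather than a CPU one, but exactly
this flavor: analog misbehavior triggered by an instruction pattern).
The core of the attack is nothing more than a tight loop; a sketch,
x86-specific and illustrative only:

    /* The classic Rowhammer access pattern: repeatedly activate two
     * DRAM rows so that electrical disturbance flips bits in a
     * neighboring row.  clflush evicts the cache lines so that every
     * read actually reaches DRAM.  The two addresses must map to
     * different rows of the same bank for the attack to work. */
    #include <emmintrin.h>   /* _mm_clflush */
    #include <stdint.h>

    void hammer(volatile uint8_t *row_a, volatile uint8_t *row_b, long iters)
    {
        while (iters--) {
            (void)*row_a;                      /* activate row A */
            (void)*row_b;                      /* activate row B */
            _mm_clflush((const void *)row_a);  /* force next read to DRAM */
            _mm_clflush((const void *)row_b);
        }
    }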

For a secure enclave, smart card, etc. that might operate in a
hostile environment, the problems are far worse. The attacker might
inject noise into the processor's power supply lines or certain data
lines to alter its behavior. (Power glitching, for example, is a
well-known attack, able to bypass secure execution of code by
preventing the processor from reliably executing particular
instructions in its firmware.) The attacker might also
employ side channel attacks to try to extract securely held
information and the like, and there are a lot of side channels
available to the attacker when they have complete physical control
over a "secure" piece of hardware.
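
To make the glitching point concrete, here is a sketch of why a
single well-timed voltage glitch can defeat a boot-time signature
check, along with the usual style of countermeasure. The firmware
and all the function names are hypothetical, not any real product's
API:

    #include <stdbool.h>

    /* Hypothetical primitives -- stand-ins, not a real API. */
    extern bool signature_ok(const unsigned char *image);
    extern void hang(void);
    extern void jump_to(const unsigned char *image);
    extern void random_delay(void);

    /* Glitchable: skipping or corrupting the single conditional
     * branch below bypasses the check entirely. */
    void boot_naive(const unsigned char *image)
    {
        if (!signature_ok(image))
            hang();
        jump_to(image);
    }

    /* Typical hardening: redundant checks separated by a random
     * delay, so that one well-timed glitch cannot defeat both. */
    void boot_hardened(const unsigned char *image)
    {
        volatile bool ok1 = signature_ok(image);
        random_delay();
        volatile bool ok2 = signature_ok(image);
        if (!ok1 || !ok2)
            hang();
        jump_to(image);
    }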

3) Going up the stack again, one would wish to ensure that the
network of gates in the gate-level version of the design, presuming
that the gates operate in an idealized manner, faithfully implements
the abstract specification of the processor.

Here we at last reach the level where things are sort of possible,
just really hard. One can model the behavior of the processor in a
formal language (say, Coq or what have you), and then, if you produce
a mathematical theorem showing the equivalence of the gate-level
design to the specification, you're done.

Note that doing this isn't easy at all, I'm just noting that the
problem is at least finally well defined, unlike problem (2).
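
For intuition only, here is the degenerate brute-force version of
that equivalence check on a toy circuit (a real design needs a SAT
solver or a proof assistant, not enumeration): a gate-level full
adder checked against its arithmetic spec on all eight inputs.

    /* Gate-level vs. spec equivalence, brute-forced: a full adder
     * built from XOR, AND and OR gates must agree with binary
     * addition on every possible input. */
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int cin = 0; cin <= 1; cin++) {
                    /* the gate-level "design" */
                    int sum  = a ^ b ^ cin;
                    int cout = (a & b) | ((a ^ b) & cin);
                    /* the abstract "spec": addition */
                    assert(((cout << 1) | sum) == a + b + cin);
                }
        printf("gate-level adder matches spec on all inputs\n");
        return 0;
    }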

4) At a higher level still, one presumably (see (3)) is in possession
of a description in a formal language of the exact behavior of the
processor. One may then ask for various theorems about the security of
the specified design.

For example, one might ask "is it the case that no sequence of
instructions other than some specified system call gate or trap
instructions will allow the user to enter supervisor mode" or "is it
the case that no mechanism exists by which unprivileged code may
directly alter memory pages that are not mapped into that user's
address space" and the like.
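
As a toy version of what such a theorem looks like, take a made-up
machine whose only security-relevant state is a mode bit, and check
the first property exhaustively:

    /* A made-up four-instruction machine.  The "theorem": the only
     * instruction that takes user mode to supervisor mode is TRAP. */
    #include <assert.h>
    #include <stdio.h>

    enum mode { USER, SUPERVISOR };
    enum insn { NOP, ADD, TRAP, RETI };

    /* the mode-transition part of the (toy) spec */
    static enum mode step(enum mode m, enum insn i)
    {
        if (i == TRAP)                     return SUPERVISOR;
        if (i == RETI && m == SUPERVISOR)  return USER;
        return m;
    }

    int main(void)
    {
        for (int i = NOP; i <= RETI; i++) {
            enum mode m = step(USER, (enum insn)i);
            assert(m == USER || i == TRAP);  /* only TRAP escalates */
        }
        printf("property holds for every instruction\n");
        return 0;
    }

On a real ISA the state space is far too large for a loop like this,
which is why such properties get stated and proved over the formal
model from (3) instead.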

5) And now we come to a general principle finally.

Benjamin Pierce (in "Types and Programming Languages" I believe) notes
that a good operational definition of a "safe" programming language is
one whose behavior is completely specified by its manual or spec.

For example, in C, the effects of running off the end of an array or
using a random integer as a pointer and writing to it are not things
you can know a priori from the C spec, so C is not a safe language.
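
Concretely, nothing in the C standard constrains what the following
program does -- it may print garbage, crash, or appear to work:

    /* Both marked lines are undefined behavior: the C spec places
     * no requirements whatsoever on what happens next. */
    #include <stdio.h>

    int main(void)
    {
        int a[4] = {1, 2, 3, 4};
        printf("%d\n", a[4]);        /* read one past the end */
        *(int *)0xdeadbeef = 42;     /* random integer used as a pointer */
        return 0;
    }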

Although you can certainly create perversely bad programming
language designs that are very difficult for ordinary people to work
in, this "everything you need to know is in the manual; it works just
as specified" criterion seems like a good first-order approximation
of what we mean by a "safe programming language".

So similarly, perhaps what we mean intuitively by a "safe
microprocessor" is one in which the well-specified high-level
description of the processor (a formalization of the manual) is
faithfully carried out by the low-level slab of etched silicon that
you've been handed to actually use. This is only an approximation of
what we mean by "safe", since we still want the sort of properties in
(4) to hold, but I think it's a good first-order approximation.

Perry
-- 
Perry E. Metzger		perry at piermont.com

