"Designing and implementing malicious hardware"

Leichter, Jerry leichter_jerrold at emc.com
Mon Apr 28 15:47:54 EDT 2008


On Mon, 28 Apr 2008, Ed Gerck wrote:
| Leichter, Jerry wrote:
| > I suspect the only heavy-weight defense is the same one we use against
| > the "Trusting Trust" hook-in-the-compiler attack:  Cross-compile on
| > as many compilers from as many sources as you can, on the assumption
| > that not all compilers contain the same "hook". ...
| > Of course, you'd end up with a machine no faster than your slowest
| > chip, and you'd have to worry about the correctness of the glue
| > circuitry that compares the results. 
| 
| Each chip does not have to be 100% independent, and does not have to
| be used 100% of the time.
| 
| Assuming a random selection of both outputs and chips for testing, and
| a finite set of possible outputs, it is possible to calculate what
| sampling ratio would provide an adequate confidence level -- a good
| guess is 5% sampling.
I'm not sure how you would construct a probability distribution that's
useful for this purpose.  Consider the form of one attack demonstrated
in the paper:  If a particular 64-bit value appears in a network packet,
the code will jump to the immediately succeeding byte in the packet.
Let's assume, for the sake of argument, that you will never, by chance,
see this 64-bit value across all chip instances across the life of the
chip.  (If you don't think 64 bits is enough to ensure that, use 128
or 256 or whatever.)  Absent an attack, you'll never see any deviation
from the theoretical behavior.  Once, during the lifetime of the system,
an attack is mounted which, say, grabs a single AES key from memory and
inserts it into the next outgoing network packet.  That should take no
more than a few tens of instructions.  What's the probability of your
catching that with any kind of sampling?

| This should not create a significant impact on average speed, as 95%
| of the time the untested samples would not have to wait for
| verification (from the slower chips). 
I don't follow this.  Suppose the system has been running for 1 second,
and you decide to compare states.  The slower system has completed
only a tenth of the instructions the faster one has.  You now have to
wait 0.9 seconds for the slower one to catch up before you have
anything to compare.

If you could quickly load the entire state of the faster system just
before the instruction whose results you want to compare into the
slower one, you would only have to wait one of the slower system's
instruction times - but how would you do that?  Even assuming a
simple mapping between the full states of disparate systems, the
state is *huge* - all of memory, all the registers, hidden information
(cache entries, branch prediction buffers).  Yes, only a small amount
of it is "relevant" to the next instruction - but (a) how can you
find it; (b) how can you find it *given that the actual execution of
the next instruction may be arbitrarily different from what the
system model claims*?
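
Even granting, for the sake of argument, that such a state transfer
were instantaneous and correct, a toy timing model -- my numbers,
purely illustrative -- shows where the 95%-no-wait intuition breaks:
every sampled checkpoint stalls the fast chip for the slow chip's
full execution time over that interval.

    # Toy checkpoint-and-compare timing model, hypothetical parameters.
    speedup  = 10.0   # fast chip runs 10x faster than the reference
    p_sample = 0.05   # fraction of checkpoint intervals compared

    # Per interval of fast-chip work: an unsampled interval costs 1
    # time unit; a sampled one costs `speedup` units, since the fast
    # chip idles until the slow reference reaches the same checkpoint.
    slowdown = (1 - p_sample) * 1.0 + p_sample * speedup
    print(f"effective slowdown: {slowdown:.2f}x")  # 1.45x here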

|					One could also trust-certify
| each chip based on its positive, long-term performance -- which could
| allow that chip to run with much less sampling, or none at all.
Long-term performance against a targeted attack means nothing.

| In general, this approach is based on the properties of trust when
| viewed in terms of Shannon's IT method, as explained in [*]. Trust is
| seen not as a subjective property, but as something that can be
| communicated and measured.  One of the resulting rules is that trust
| cannot be communicated by self-assertions (i.e., asking the same chip)
| [**]. Trust can be positive (what we call trust), negative (distrust),
| and zero (atrust -- there is no trust value associated with the
| information, neither trust nor distrust). More in [*].
The papers look interesting and I'll have a look at them, but if you
want to measure trust, you have to have something to start with.  What
we are dealing with here is the difference between a random fault and
a targeted attack.  It's quite true that long experience with a chip
entitles you to trust that, given random data, it will most likely
produce the right results.  But no amount of testing can possibly
lead to proper trust that there isn't a special value that will induce
different behavior.
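
To quantify "no amount of testing" -- my arithmetic, assuming a single
uniformly chosen trigger value and a generous black-box test rate:

    # Fraction of a hidden-trigger space covered by brute-force
    # testing: one uniformly chosen trigger, 10^9 distinct probes
    # per second (generous), for a full year.
    tests = 10 ** 9 * 365 * 24 * 3600
    for bits in (64, 128):
        print(f"{bits}-bit trigger: one year covers "
              f"{tests / 2 ** bits:.1e} of the space")

At 64 bits, a year of testing covers about 0.2% of the space; at 128
bits, about 1e-22 of it.
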
							-- Jerry

| Cheers,
| Ed Gerck
| 
|  References:
| [*] www.nma.com/papers/it-trust-part1.pdf
| www.mcwg.org/mcg-mirror/trustdef.htm
| 
| [**] Ken's paper title (op. cit.) is, thus, identified to be part of
| the very con game described in the paper.

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com