Tamperproof devices and backdoors

Eugene.Leitl at lrz.uni-muenchen.de
Fri May 25 04:56:27 EDT 2001


Enzo Michelangeli wrote:
> 
> On another mailing list, someone posted an interesting question: how to
> ascertain that a tamperproof device (e.g., a smartcard) contains no hidden
> backdoors? By definition, anything open to inspection is not tamperproof. Of

If the tamperproof functionality is trapdoor-activatable, you ship
the device with tamperproofing off, inspect it at leisure, then switch it
on. If it's passive, you inspect the die itself (use a deliberately large
fabbing process so you can use light microscopy, and a simple architecture).
It could overwrite itself with zeroes when tampered with, activatable by a
trapdoor-switched bit. You could put a lead azide crystal over a die's critical
component, plus a largish capacitor, and let it actively blow itself up (this
is not as scary as it sounds) when it detects a tampering condition (power
out of bounds, pressure change, change in photonic flux from an internal light
source illuminating a mirrored cavity). Etc.
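The zeroize-on-tamper idea above can be sketched in a few lines. This is only an illustration, not firmware for any real device; the sensor names, thresholds, and units are all hypothetical.

```python
# Minimal sketch of tamper-response logic: if any monitored quantity
# leaves its allowed window, the key material is overwritten with zeroes.
# All thresholds below are invented for illustration.

POWER_RANGE = (4.5, 5.5)        # supply voltage, volts
PRESSURE_RANGE = (90.0, 110.0)  # sealed-cavity pressure, kPa
FLUX_RANGE = (0.8, 1.2)         # normalized internal photonic flux

def tampered(power, pressure, flux):
    """Return True if any sensor reading is out of bounds."""
    return not (POWER_RANGE[0] <= power <= POWER_RANGE[1]
                and PRESSURE_RANGE[0] <= pressure <= PRESSURE_RANGE[1]
                and FLUX_RANGE[0] <= flux <= FLUX_RANGE[1])

def zeroize(secret: bytearray) -> None:
    """Overwrite the key material in place with zeroes."""
    for i in range(len(secret)):
        secret[i] = 0

key = bytearray(b"\xde\xad\xbe\xef" * 4)
if tampered(power=7.2, pressure=100.0, flux=1.0):  # overvoltage attack
    zeroize(key)
print(key == bytearray(len(key)))  # key has been wiped
```

A real device would of course do this in hardware, before an attacker can halt the logic; the point is only that the decision itself is trivially simple and auditable.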

Btw, it is perfectly possible to power a device with a semiconductor
laser and do I/O with photonic pulses, so that people have
more trouble partially frying something via the pins or I/O lines, or
making it leak bits. The photodiode at the other end would act
as a fuse if the flux goes too high.

> course, one can ask the manufacturer to disclose the design, but there is no

Of course, it's best if you're the manufacturer. If I'm remembering
correctly, current plastic transistors can switch at up to 4 MHz. Inkjet
circuit printing seems to be making good progress, and
a minimal CPU can be implemented in <10 kTransistors (a Forth stack
CPU is one screenful of VHDL). No, I'm not making this up; you
can verify it yourself.
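To make the "one screenful" claim concrete, here is a toy Forth-style stack machine as a Python interpreter rather than VHDL. The instruction set is an invented minimal example, but it shows how little state such a CPU core needs: essentially just a data stack and a dispatch table.

```python
# Toy Forth-style stack machine. A real stack CPU would add a return
# stack and memory, but the core execution model is this small.

def run(program):
    """Execute a list of tokens; literals push, words operate on the stack."""
    stack = []
    ops = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "drop": lambda: stack.pop(),
        "swap": lambda: stack.extend([stack.pop(), stack.pop()]),
    }
    for tok in program:
        if tok in ops:
            ops[tok]()
        else:
            stack.append(int(tok))
    return stack

# (3 + 4) squared, Forth-style: 3 4 + dup *
print(run("3 4 + dup *".split()))  # → [49]
```

Because the whole semantics fits in a dispatch table like this, a hardware realization is small enough to audit gate by gate.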

The CPU is too simple to hide anything in if you inspect the
layout (inkjet-printed structures are pretty large by semiconductor
photolitho standards), you designed the CPU yourself as well as the
program that drives the inkjet, and you watch how it is deposited.
A pretty useful Forth OS will fit into 2 kByte of ROM, so you would
probably just print the ROM with the program, and maybe a bit of
your crypto code.

Less lunatic-fringe and more down to earth: you buy an FPGA from a random
manufacturer and download your Forth CPU into the FPGA, including the
crypto code. FPGA architecture is very orthogonal; judging from the number
of cells on the die surface you can gauge whether there is some hidden state
sitting on the die (it helps if you need a shoehorn to fit the design into
the given space, and you can let the code compute a cryptohash of itself, so
that you know that all of it is there).
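The self-hashing step can be sketched as follows. SHA-256 stands in for whatever hash the real toolchain would use, and the "firmware image" is a placeholder byte string; the point is that the loaded image is compared against a digest you computed offline from the design you audited.

```python
# Sketch of image self-verification: hash the loaded image and compare
# against a digest recorded at build time from the audited design.
import hashlib

def self_check(image: bytes, expected_hex: str) -> bool:
    """Return True if the image hashes to the expected digest."""
    return hashlib.sha256(image).hexdigest() == expected_hex

firmware = b"FORTH-CPU-BITSTREAM-v1"          # placeholder image
good = hashlib.sha256(firmware).hexdigest()   # recorded offline

print(self_check(firmware, good))             # → True
print(self_check(firmware + b"\x00", good))   # → False: any extra bits change it
```

This doesn't defend against a malicious loader that lies about what it hashed, but combined with a die too full to hide extra state, it raises the bar considerably.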

If you keep it simple, and go back a couple of generations of tech, you
can be 99% sure that there are no easter eggs in there. Well, none but
those you put in yourself.

> way of verifying that the actual device really implements the design that
> was disclosed, because the act of inspecting its innards could remove the
> backdoor, and also the code that implements the removal itself.
> 
> Any idea, besides relying on the manufacturer's reputation?

For most practical applications one would seem to have no choice.
 
> Enzo
> 
> [In the general case, Goedel, Turing and Rice come to our "rescue" by
> telling us it is impossible. As you know, Rice's theorem (an easy
> extention of Goedel and Turing) tells us any non-trivial property of
> the recursively enumerable sets is undecidable.

While true, it is quite irrelevant for a very minimal embedded system. With
really careful coding techniques you can contain the few bugs which might
still remain in there. I'm not saying I could do it, but there
are people I trust who could, and other people I trust who
would take a look at the code and pick it apart.
 
> Now, in practice, you would think things are better, but I refer
> everyone to Ken Thompson's ACM Turing Award lecture "Reflections on
> Trusting Trust"...
> 
> On the Other Other Hand, I vaguely remember a neat paper by Matt Blaze
> some years ago that shows that certain classes of back doors, like
> "good" back doors in conventional crypto systems, are equivalent in
> difficulty to building a public key system. Anyone remember the name
> of the paper and the exact content?

If anyone is sufficiently smart, you lose. For most values of sufficiently
smart this is luckily irrelevant.
 
>                 --Perry, stepping way out of the usual moderator role.]



---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at wasabisystems.com



