Tamperproof devices and backdoors

Ian Farquhar ian.farquhar at aus.sun.com
Fri May 25 04:15:28 EDT 2001

It is nearly impossible to be absolutely sure.

Look at it from a couple of different angles:

The usual situation is that one must trust the reputation and
competence of the manufacturer.  This is suboptimal, as many
tamper-resistant devices, especially unpowered "tamper
resistant" devices, have been successfully reverse engineered
with minimal facilities.  Note: I am being deliberately
vague here about what "reverse engineered" means for the
purposes of this discussion.  It could mean anything from
recovering ROM code, to recovering keys, to making
modifications to the running code.  For the purposes of
this email, I don't think the distinction is important.

Let's imagine that you have access to reverse engineering
facilities, such as those found in a typical semiconductor
manufacturer's failure analysis lab.  You could probably,
with some effort, recover the masks.  Software would turn
this into a netlist, and schematic recovery could give you
a circuit diagram, and possibly even a module deconstruction.
You'd get the ROM code, and be able to disassemble that.
With enough analysis, you might even be able to extract
keys from EEPROM.
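As a toy illustration of that last step (illustrative Python I'm
adding here, not anyone's actual analysis tooling): one common trick
when staring at a recovered memory image is to scan it for
high-entropy regions, since key material and ciphertext look
statistically random while code and tables don't.  The dump below is
made up.

```python
import math
import os

def shannon_entropy(window: bytes) -> float:
    """Shannon entropy of a byte window, in bits per byte (0.0 .. 8.0)."""
    counts = [0] * 256
    for b in window:
        counts[b] += 1
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def high_entropy_regions(dump: bytes, window: int = 256, threshold: float = 6.5):
    """Slide a window over the dump and flag offsets whose contents look
    statistically random -- candidate key material or ciphertext."""
    hits = []
    for off in range(0, len(dump) - window + 1, window):
        if shannon_entropy(dump[off:off + window]) >= threshold:
            hits.append(off)
    return hits

# Fabricated dump: a repetitive code/table-like region, then a
# random-looking block (standing in for a key), then erased flash.
dump = bytes(range(16)) * 16 + os.urandom(256) + b"\x00" * 256
print(high_entropy_regions(dump))  # flags only the random block at offset 256
```

The thresholds are arbitrary; real key-hunting in EEPROM images is
much messier than this, but the principle is the same.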

But even then, it would be very difficult to be sure the device
contained no backdoors.  What about emissive or timing
attacks?  What about power analysis?  What about irradiating
the running die and observing the emitted radiation as it
runs (which is what NONSTOP is supposed to be)?  What about
other attacks which aren't yet publicly known, but which
certain organisations might have persuaded the manufacturer
to include?  After all, they'd have deniability - no one else
knew about the attack.  What about failures which could
be induced by marginal tolerances between two
connected components, which would be very difficult
to detect without detailed knowledge of the process
which fabbed the chip?  And this ignores the fact that
any decent tamper resistance mechanism should
substantially complicate this analysis.
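To make the timing-attack point above concrete (a toy Python model
I'm adding, not any real card's firmware): a naive early-exit
comparison leaks, through its running time, how many leading bytes of
a guess are correct, which lets an attacker recover a secret one byte
at a time.  The secret below is made up, and "time" is modelled as
loop iterations.

```python
def iterations(secret: bytes, guess: bytes) -> int:
    """Model of an early-exit comparison: the count of loop iterations
    stands in for running time, which a real attacker would measure."""
    n = 0
    for s, g in zip(secret, guess):
        n += 1
        if s != g:
            break
    return n

secret = b"\x4a\x17\x9c\x03"  # hypothetical key held by the device

# Attacker tries all 256 values for the first byte and keeps the one
# that ran longest -- i.e. the one that got past the first check.
best = max(range(256), key=lambda b: iterations(secret, bytes([b, 0, 0, 0])))
print(hex(best))  # -> 0x4a
```

Repeating this position by position recovers the whole secret; the
standard countermeasure is a constant-time comparison.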

OK, let's assume that the manufacturer shares the design
with you.  You get a CD-ROM full of VHDL or Verilog.
What about the tool suite which generates the masks?
What about the standard cells which are used?  Are there
any subtle interactions between them?  Are the masks
being used to manufacture actually those which came
out of the suite?  Are you sure of the standard cells?
What about those subtle tolerance failures I mentioned
above?  And so on, and so on.

Finally, let's assume that you design your own device.
It's not difficult to do, but even then, you need to trust
your tools, the fab house, and so forth.  You could build
some verification into this, by including diagnostics in
the device when it's fabbed, looking for suspicious
electrical deviations from the simulated design, as well
as by reverse engineering a random sampling of your own
devices to determine whether the fabbed device varies.
But do you really know all of the attacks?  And what
if backdooring just 1 in 1000 modules would yield enough
valuable data to make slipping that one in worthwhile?
Then sampling a single module gives you only a 0.1%
chance of finding it.
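The arithmetic behind that 0.1% is worth spelling out (a quick
back-of-the-envelope in Python): if a fraction p of modules carry the
implant and you destructively analyse n of them chosen at random, the
chance of catching at least one implanted part is 1 - (1 - p)^n.

```python
p = 1 / 1000  # fraction of fabbed modules carrying the implant

def detection_probability(n: int) -> float:
    """Chance that destructive analysis of n randomly chosen modules
    turns up at least one implanted part."""
    return 1 - (1 - p) ** n

for n in (1, 100, 1000, 3000):
    print(n, round(detection_probability(n), 3))
```

Even sampling 1000 modules - each of which you destroy in the
process - only gets you to about a 63% chance of detection.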

Don't even think that FPGAs make things better.

One of the NSA's techniques for analysing these situations has
been published.  Basically, you analyse all risk vectors
throughout the lifecycle of the module, and try
to counter them (i.e. threat trees).  But the reality is
that there are so many risk vectors in this situation that
it is nearly impossible to address them all, even for
the NSA.   They have their own fab house, and
design suite, although even they use off-the-shelf
VLSI design tools and FPGAs extensively.
They can't remove the risk, but they try to understand
it as comprehensively as possible.
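A threat tree of the kind mentioned above can be sketched in a few
lines of Python.  Goals are OR-nodes (the attacker picks the cheapest
route) or AND-nodes (the attacker must do everything); the leaves and
their costs below are hypothetical placeholders, not anyone's real
risk assessment.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "leaf"                 # "leaf", "or", or "and"
    cost: float = 0.0                  # attacker cost, leaves only
    children: List["Node"] = field(default_factory=list)

def cheapest_attack(node: Node) -> float:
    """Minimum attacker cost to achieve this node's goal:
    OR = cheapest child, AND = all children combined."""
    if node.kind == "leaf":
        return node.cost
    costs = [cheapest_attack(c) for c in node.children]
    return min(costs) if node.kind == "or" else sum(costs)

# Illustrative tree with made-up dollar figures.
tree = Node("extract key", "or", children=[
    Node("decap die and probe EEPROM", cost=50_000),
    Node("power analysis", "and", children=[
        Node("build DPA rig", cost=5_000),
        Node("collect traces", cost=1_000),
    ]),
    Node("suborn the fab", cost=1_000_000),
])
print(cheapest_attack(tree))  # -> 6000
```

The useful output isn't the number itself but the discipline: every
path through the module's lifecycle gets enumerated, and the cheapest
one tells you where to spend your countermeasure budget.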

In summary, you will always have risk.  But what you
need to do is understand it, model the effect of the
failure of that component, and build a mitigation for it.

This is why security in depth is very important.  If you
totally rely on the security of your tamper resistant
modules, especially for something as unprotected as
a smartcard, you probably need to sit down and see
if you can design a better protocol.  If you can't,
then what you need to do is to minimise the spread
of that failure, so that it affects only a small part
of your application.
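One standard way of containing that spread (my example, not something
from the original question) is per-card key diversification: each
card gets a key derived from a master key and the card's serial, so a
successful attack on one card's tamper resistance exposes that card
only, never the master or its siblings.  Sketch in Python using HMAC
as the derivation function; the master key and serials are made up.

```python
import hmac
import hashlib

# Hypothetical master key, held only in the issuer's HSM,
# never loaded into any card.
MASTER_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def card_key(card_serial: bytes) -> bytes:
    """Diversified per-card key.  Recovering one card's key from a
    tampered card reveals nothing about the master key or about
    any other card's key."""
    return hmac.new(MASTER_KEY, card_serial, hashlib.sha256).digest()[:16]

k1 = card_key(b"card-0001")
k2 = card_key(b"card-0002")
assert k1 != k2  # each card fails independently
```

The protocol then has to be designed so that nothing a single card
holds lets you impersonate the issuer or forge other cards.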

This isn't always easy.  But it's very important.
Smartcards and tamper resistant modules aren't
a solution, they're a technique.  People who think
they solve all problems aren't thinking about the
problem clearly enough.


Disclaimer: Scott speaks for Sun, I speak only for myself.

Enzo Michelangeli wrote:

> On another mailing list, someone posted an interesting question: how to
> ascertain that a tamperproof device (e.g., a smartcard) contains no hidden
> backdoors? By definition, anything open to inspection is not tamperproof. Of
> course, one can ask the manufacturer to disclose the design, but there is no
> way of verifying that the actual device really implements the design that
> was disclosed, because the act of inspecting its innards could remove the
> backdoor, and also the code that implements the removal itself.
> Any idea, besides relying on the manufacturer's reputation?
> Enzo

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at wasabisystems.com
