"Designing and implementing malicious hardware"

Leichter, Jerry leichter_jerrold at emc.com
Fri Apr 25 11:09:31 EDT 2008


On Thu, 24 Apr 2008, Jacob Appelbaum wrote:
| Perry E. Metzger wrote:
| > A pretty scary paper from the Usenix LEET conference:
| > 
| > http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/
| > 
| > The paper describes how, by adding a very small number of gates to a
| > microprocessor design (small enough that it would be hard to notice
| > them), you can create a machine that is almost impossible to defend
| > against an attacker who possesses a bit of secret knowledge. I
| > suggest reading it -- I won't do it justice with a small summary.
| > 
| > It is about the most frightening thing I've seen in years -- I have
| > no idea how one might defend against it.
| 
| "Silicon has no secrets."
| 
| I spent last weekend in Seattle and Bunnie (of XBox hacking
| fame/Chumby) gave a workshop with Karsten Nohl (who recently cracked
| MiFare).
| 
| In a matter of an hour, all of the students were able to take a
| selection of a chip (from an OK photograph) and walk through the
| transistor layout to describe the gate configuration. I was surprised
| (not being an EE person by training) at how easy it can be to
| understand production hardware. Debug pads, automated masking,
| etc. Karsten has written a set of MatLab extensions that he used to
| automatically describe the circuits of the mifare devices. Automation
| is key though, I think doing it by hand is the path of madness.
While analysis of the actual silicon will clearly have to be part of
any solution, it's going to be much harder than that:

	1.  Critical circuitry will likely be "tamper-resistant".
	    Tamper-resistance techniques make it hard to see what's
	    there, too.  So, paradoxically, the very mechanisms used
	    to protect circuitry against one attack make it more
	    vulnerable to another.  What this highlights, perhaps,
	    is the need for "transparent" tamper-resistance
	    techniques, which prevent tampering but don't
	    interfere with inspection.

	2.  An experienced designer can readily understand circuitry
	    that was designed "normally".  This is analogous to the
	    ability of an experienced C programmer to understand what
	    a "normal", decently-designed C program is doing.
	    Understanding what a poorly designed C program is doing
	    is a whole other story - just look at the history of the
	    Obfuscated C contests.  At least in that case, an
	    experienced analyst can raise the alarm that something
	    weird is going on.  But what about *deliberately
	    deceptive* C code?  Look up "Underhanded C Contest" on
	    Wikipedia.  The 2007 contest was to write a program that
	    implements a standard, reliable encryption algorithm but
	    which, some percentage of the time, makes the data easy
	    to decrypt (if you know how) - and which will look
	    innocent to an analyst.  There have been two earlier
	    contests.  I remember seeing another, similar contest in
	    which the goal was to produce a vote-counting program
	    that looked completely correct, but biased the results.
	    The winner was amazingly good - I consider myself pretty
	    good at analyzing code, but even knowing that this code
	    had a "hook" in it, I missed it completely.  Worse, none
	    of the code even set off my "why is it doing *that*"
	    detector.  (A small sketch of this style of deception
	    follows this list.)

	3.  This is another step in a long line of attacks that work
	    by moving to a lower level of abstraction and using it
	    to invalidate the assumptions that implementations at
	    higher levels of abstraction rely on.
	    There's a level below logic gates, the actual circuitry.
	    A paper dating back to 1999 - "Analysis of Unconventional
	    Evolved Electronics", CACM V42#4 (it doesn't seem to be
	    available on-line) - reported on experiments using genetic
	    algorithms to evolve an FPGA design to solve a simple
	    problem (something like "generate a -.5V output if you
	    see a 200Hz input, and a +1V output if you see a 2KHz
	    input").  The genetic algorithm ran at the design level,
	    but fitness testing was done on actual, synthesized
	    circuits - roughly the loop in the second sketch after
	    this list.

	    A human engineer given this problem would have used a
	    counter chain of some sort.  The evolved circuit had
	    nothing that looked remotely like a counter chain.  But
	    it worked ... and the experimenters couldn't figure out
	    exactly how.  Probing the FPGA generally caused it to
	    stop working.  The design included unconnected gates -
	    which, if removed, caused the circuit to stop working.
	    Presumably, the circuit was relying on the analogue
	    characteristics of the FPGA rather than its nominal
	    digital characteristics.

	    The paper at hand shows some very simple attacks, which
	    today would be very difficult and expensive to counter.
	    Attacks only get better over time - and even if we come
	    up with counters to all the digital-domain attacks, the
	    analogue layer underlying all this stuff is still out
	    there.
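
To make the underhanded flavor concrete, here's a minimal sketch in
that style - my own illustration, not one of the actual contest
entries.  The key-setup routine below reads as if it randomizes all
32 bytes of the key; in fact it randomizes only sizeof(pointer) of
them, and the rest stay zero:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical key setup - not from any contest entry.  The bug
     * hides in plain sight: `key` is a pointer parameter, so
     * sizeof(key) is the size of a pointer (8 on LP64), not 32.
     * Only the first 8 bytes get randomized; the other 24 stay
     * zero, and the effective key space collapses. */
    static void generate_key(unsigned char *key)
    {
        for (size_t i = 0; i < sizeof(key); i++)  /* reads as "whole key" */
            key[i] = (unsigned char)rand();
    }

    int main(void)
    {
        unsigned char key[32] = {0};
        srand(42);                                /* placeholder seed */
        generate_key(key);
        for (size_t i = 0; i < sizeof(key); i++)  /* 32 here - array, */
            printf("%02x", key[i]);               /* not pointer      */
        putchar('\n');
        return 0;
    }

Run it and the printed key ends in 48 hex zeros.  An analyst skimming
the loop sees a perfectly idiomatic sizeof bound; nothing trips the
"why is it doing *that*" detector.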
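And for those who haven't seen hardware-in-the-loop evolution, the
overall shape of the FPGA experiment was roughly the skeleton below.
This is entirely illustrative: the constants, the mutate-the-best
selection scheme, and above all the stubbed-out measurement function
are my placeholders, not the paper's methods.

    #include <stdlib.h>
    #include <string.h>

    #define POP  50      /* population size (illustrative) */
    #define BITS 1024    /* configuration bits per candidate */
    #define GENS 100     /* generations (illustrative) */

    /* Stand-in for the hardware step: in the real experiment the
     * candidate bits were loaded into an actual FPGA and the
     * analogue response to the test tones was measured.  Nothing
     * here simulates that - it just returns noise. */
    static double measure_on_fpga(const unsigned char *bits)
    {
        (void)bits;
        return (double)rand() / RAND_MAX;
    }

    static void mutate(unsigned char *bits)
    {
        size_t i = (size_t)rand() % BITS;
        bits[i / 8] ^= (unsigned char)(1u << (i % 8)); /* flip one bit */
    }

    int main(void)
    {
        static unsigned char pop[POP][BITS / 8];
        double fit[POP];
        srand(1);
        for (int i = 0; i < POP; i++)
            for (int j = 0; j < BITS / 8; j++)
                pop[i][j] = (unsigned char)rand();

        for (int g = 0; g < GENS; g++) {
            int best = 0;
            for (int i = 0; i < POP; i++) {
                fit[i] = measure_on_fpga(pop[i]); /* hardware in loop */
                if (fit[i] > fit[best])
                    best = i;
            }
            /* Crude selection: everyone becomes a mutated copy of
             * the best candidate.  The real work used crossover and
             * a less brutal selection scheme. */
            for (int i = 0; i < POP; i++) {
                if (i == best)
                    continue;
                memcpy(pop[i], pop[best], BITS / 8);
                mutate(pop[i]);
            }
        }
        return 0;
    }

The point of the structure is what point 3 describes: selection sees
only the measured behavior of real silicon, so nothing constrains
the winning design to respect the digital abstraction.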

| If we could convince (this is the hard part) companies to publish what
| they think their chips should look like, we'd have a starting point.
Why would you believe that what they publish doesn't already contain the
attack circuitry?  How far would you have them go?  Publish the VHDL
specs as well?  That's exactly the level at which the writers of this
paper added their code - around a hundred lines added to a total of
11,000 or so that describe a very simple chip.  Going further, suppose
someone has managed to "spike" the VHDL toolchain - recall Ken
Thompson's classic "Reflections on Trusting Trust" (reduced to a toy
below).  Given the funding potentially available to the kinds of
adversaries who might want to mount such attacks, the possible entry
points are many.
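
For anyone who hasn't read Thompson's lecture, the essence of the
attack, as a toy: a trojaned "compiler" that recognizes two
particular pieces of source text.  Everything here is hypothetical
and schematic - the real attack hides in the compiler binary and
reproduces itself through a quine-like construction.

    #include <stdio.h>
    #include <string.h>

    /* Toy "compiler" front end.  The function names and the strings
     * it matches are made up for the sketch; no real toolchain
     * works this way. */
    static void compile(const char *source)
    {
        if (strstr(source, "int check_password(")) {
            /* Compiling the login program: splice in a backdoor
             * that also accepts a fixed master password. */
            puts("[emit login + hidden master-password check]");
        } else if (strstr(source, "static void compile(")) {
            /* Compiling the compiler itself: re-emit both of these
             * recognizers, so the trojan survives recompilation of
             * a perfectly clean compiler source tree. */
            puts("[emit compiler + both of these recognizers]");
        } else {
            puts("[emit honest code]");
        }
    }

    int main(void)
    {
        compile("int check_password(const char *pw) { ... }");
        compile("static void compile(const char *source) { ... }");
        compile("int add(int a, int b) { return a + b; }");
        return 0;
    }

Inspecting the source of the login program, or even of the compiler,
shows nothing; the attack lives one level down.  A spiked VHDL
toolchain is no different in principle.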

This is a very tough problem.
							-- Jerry
