Tamperproof devices and backdoors

dmolnar dmolnar at hcs.harvard.edu
Fri May 25 05:17:45 EDT 2001



On Fri, 25 May 2001, Enzo Michelangeli wrote:

> [In the general case, Goedel, Turing and Rice come to our "rescue" by
> telling us it is impossible. As you know, Rice's theorem (an easy
> extension of Goedel and Turing) tells us any non-trivial property of
> the recursively enumerable sets is undecidable.

This is true - but I think we could require the device design to be
engineered in a way that makes proving compliance easier. After all,
we do sometimes prove programs (and floating-point hardware) correct,
even though the halting problem is unsolvable in general and Rice's
theorem obtains.

I think there may be several separate questions here. I might have ideas
for some of them and no idea about others. So let me try to restate the
problem in an overly verbose manner and see if I can make these
distinctions any clearer.

We have (at best):

	* a device design - specifying a function f() the box is
	"supposed to compute"
	* the tamperproof device - a black box for f()
	which really outputs some function BOX()
	* the ability to query the box and make a trace of the
	box's inputs and outputs: (x, BOX(x))


There are at least four questions involved with back doors:

	1) Does the design f() have a back door?
	2) Does BOX()'s output deviate from f() to add a back door?
	e.g. is BOX() defined as (see the toy sketch after this list)
		if x = "NOBODY CUMZ IN HERE SEKRIT" output true
		else f(x)
	3) Does BOX() implement f() correctly, but leak information
	somehow (perhaps by using a biased coin flip instead of a
	probability 1/2 coin flip after being triggered)?
	4) Does the tamperproof device have something awful hidden
	in it waiting to be triggered by a back door?
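
To make 2) concrete, here's a toy sketch in Python (f() is just a
stand-in specification I made up) of a box that agrees with f() on
every input except one hard-wired trigger - exactly the single-input
deviation that makes black-box testing hopeless:

	# Toy sketch, purely illustrative: BOX agrees with f on every
	# input except one hard-wired trigger value.
	def f(x):
	    # the published specification, e.g. a password check
	    return x == "correct password"

	TRIGGER = "NOBODY CUMZ IN HERE SEKRIT"

	def BOX(x):
	    if x == TRIGGER:    # the back door: one extra accepting input
	        return True
	    return f(x)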

For 1), Rice's theorem holds for arbitrary designs, but we can hold out
hope for open source or formal methods or code reviews or whatever. No
more and no less of a problem than the usual security issues. Perhaps in
this context we'd have to replace the design with "zero-knowledge proofs
that the design does not have undesirable properties", since the
manufacturer might not want to release the entire design - and only a
commitment to the design is known.
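
By "commitment" I have in mind something as simple as a hash
commitment. A minimal sketch, assuming a collision-resistant hash (the
helper name is mine):

	# Sketch of a hash commitment to the design: binding under
	# collision resistance of SHA-256, hiding if the nonce stays
	# secret. The manufacturer publishes the digest up front and
	# opens it (reveals design + nonce) only to auditors.
	import hashlib, os

	def commit_to_design(design_bytes):
	    nonce = os.urandom(32)
	    digest = hashlib.sha256(nonce + design_bytes).hexdigest()
	    return digest, nonce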

For 2), it seems there are at least two more questions:

	* Can we check or prove that BOX = f?
	* Can we check that BOX(x) = f(x) *faster than computing f(x)*?

	Checking that an arbitrary BOX = f seems hopeless. BOX might
	differ on only one input out of exponentially many. Plus without
	a bound on the input size, it's undecidable. I don't know what
	to do about that one.

	As for checking that BOX(x) = f(x), that might be easier.
	Especially if BOX outputs helpful information (like intermediate
	steps of its computation) as part of the output. Maybe some of
	the proof-carrying code work would come in handy here.
	Then the idea would be that no one ever accepts the output from
	your tamperproof device without checking the proof that the
	output is consistent with the specification of f(). This
	seems to reduce the problem to 1). Of course, it's *only*
	useful if checking the proof is easier than computing f()!!
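
To illustrate the "checking is cheaper than computing" condition (my
example, not one drawn from the proof-carrying code literature): if
f(n) means "find a nontrivial factor of n", the output is its own
proof, and checking it is a single division:

	# Sketch of a checkable output. The claimed factor certifies
	# itself: verifying one division is far cheaper than factoring.
	def check_output(n, claimed_factor):
	    return 1 < claimed_factor < n and n % claimed_factor == 0

Of course this only certifies BOX's answer on that one input - it says
nothing about leakage, which is question 3).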

For 3), maybe some of the work on "subliminal-free channels" would come
in handy. Another model I've been playing with is this: you have a box B
that implements a randomized algorithm A. The box B has an input tape for
random bits, which it is "supposed" to use for its random coin flips.
However, B might choose to flip its *own* coins and ignore your coins.
You tell B to compute algorithm A on input x. How do you tell whether B
used *your* coins or *its* coins or *both*? (Do you even care? A weaker
question: can you force B to use a particular distribution, though not
your exact coins?)

If you evaluate A yourself with your coins, you can catch B. But that's
slow, so you can't do it every time (or even most times). Is there a
general way to do better? I ask because of a comment by Adam Young that
subliminal channels occur in protocols where random choices occur.
The subliminal channel arises when the choice isn't actually "random" but
is instead conditioned on an event. So my goal would be a way to force a
black box to use a particular string or distribution of coins (or be
caught immediately), given only control over the coins tape and the input
tape, and assuming the box can flip its own coins.
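
For concreteness, here's the slow spot-check in code (Python; box and
run_A are invented stand-ins for the device and for our own trusted
evaluation of A):

	# Supply the coin tape ourselves; occasionally replay the
	# computation with the *same* coins and compare. In this model
	# A is deterministic given (x, coins), so a mismatch means the
	# box did not use the tape we supplied.
	import os, random

	def audited_call(box, run_A, x, check_prob=0.01):
	    coins = os.urandom(32)             # the coin tape we supply
	    answer = box(x, coins)
	    if random.random() < check_prob:   # rare, expensive replay
	        if answer != run_A(x, coins):
	            raise RuntimeError("box ignored the supplied coins")
	    return answer

Because the replay is rare, a box that deviates only occasionally - or
only after a trigger - will usually slip through, which is exactly the
problem.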

For 4), that seems to be an issue of physical design. Don't build
tamperproof devices big enough to hold a nuclear bomb. More than that,
I don't know.

There are also some audit issues that come to mind:

	* Can you identify particular inputs x on which BOX(x) != f(x),
	or just that such inputs exist? (e.g. some statistical test
	says BOX(x) is distributed differently from f(x), but doesn't
	identify any single input).

	* Can you identify the input x which triggered the back door?

	* Can you recreate the backdoor trigger sequence given the
	trace of the interaction with the box?

		This seems difficult, because the box could change an
	internal variable when it sees a special value x1 and then
	trigger the backdoor proper on x2. x1 will look just like any
	other x in the trace.

Oh and finally

	* On how many inputs does BOX(x) have to deviate from f(x) in
	order to do serious damage? Is just one enough? If it is, then
	we need to either check BOX(x) = f(x) every time or figure out
	a way for BOX to prove to us that its internal circuitry
	implements f() and no more...

-David



