Limitations of limitations on RE/tampering (was: Re: biometrics)

Seth David Schoen schoen at loyalty.org
Sun Jan 27 02:22:33 EST 2002


Carl Ellison writes:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> At 03:55 PM 1/26/2002 -0500, Perry E. Metzger wrote:
> >
> > [quoting a third poster]
> >> but all these absolutes in
> >> the comments are just too simplistic. Devices can be made as
> >> tamper-resistant as the threat- and value-model required.
> >
> >No, they can't. That's an engineering hope, not an engineering
> >reality. The hope you're expressing is that "well, maybe we can't
> >make it impossible to break this design, but we can make it cost
> >more to
> >break the system than breaking it will bring the bad guy, and we can
> >do that without said tamper-resistance costing us more than we can
> >afford."
> 
> I've heard rumor of an effort a while back to layer Thermite into a
> printed circuit board, so that a machine could self-destruct in case
> of tampering.  I doubt it ever got reviewed by OSHA, however. :)

I'm curious about the theoretical limits of tamper-resistance and
reverse-engineering resistance.  Clearly, at any given moment, it's
an arms race.  But who is destined to win it in the long run?

I was very interested in a result which Prof. Steven Rudich of CMU
told me about -- the non-existence of obfuscators.  There is a
research paper on this:

http://citeseer.nj.nec.com/barak01impossibility.html
http://www-2.cs.cmu.edu/~rudich/papers/obfuscators.ps

"[A]n obfuscator O [...] takes as input a program (or circuit) P and
produces a new program O(P) that has the same functionality as P yet is
'unintelligible' in some sense. [...]  Our main result is that, even
under very weak formalizations of the above intuition, obfuscation is
impossible."

Rudich said that his collaborators' impossibility proof hadn't stopped
commercial software vendors from continuing to develop obfuscation
techniques, but that's not surprising.  (I do enjoy mentioning this
impossibility proof whenever I hear about obfuscation, though.)

The result applies both to software obfuscation and to circuit
obfuscation.  (I need to think a bit more about its scope.  As I
understand it, there _do_ exist obfuscated programs -- which perform
a function but which can't be "understood" -- but there are just no
reliable algorithmic techniques for obfuscating an arbitrary piece of
code.)

Now, programs can attempt to tell whether they're being run under
debuggers, but, at least in open-source operating systems, there's no
ultimately reliable way to decide.  When you ask the operating system
"am I traced?", it can just say "no".  Simulators and debuggers are
becoming a lot more sophisticated, and there's no indication that
"software protection" is any more effective now than it was in the
1980s.  (The DMCA has made it more "effective" in a certain sense,
by creating, as Judge Kaplan said, "a moat filled with litigators
rather than alligators".)  There are also really cool things like
Subterfugue:

http://www.subterfugue.org/

But this obviously doesn't say anything about tamper-resistance at the
physical level, in hardware: a device can destroy itself, whether with
thermite or with some active tamper-detection circuit, when it
"believes" that some probing activity has exceeded a particular
threshold.  Software simply can't do that unless it can communicate
with some tamper-proof authority (a hardware dongle or a revocation
entity).
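The threshold logic such a tamper-detection circuit implements can be
sketched in a few lines (a hypothetical illustration -- the class,
sensor weights, and threshold are all invented):

```python
class TamperMonitor:
    """Toy model of an active tamper-detection circuit: accumulate
    evidence from sensors (light, voltage glitches, drilling
    vibration) and zeroize the secrets once a threshold is crossed."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold
        self.score = 0.0
        self.secrets = b"key material"

    def sensor_event(self, weight: float) -> None:
        self.score += weight
        if self.score >= self.threshold and self.secrets:
            self.zeroize()

    def zeroize(self) -> None:
        self.secrets = b""    # in hardware: erase, blow fuses, or thermite

m = TamperMonitor()
m.sensor_event(1.0)           # benign noise: below threshold
m.sensor_event(2.5)           # probing detected: crosses threshold
print(m.secrets == b"")       # -> True: secrets destroyed
```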

On the other side, probing and imaging techniques have been getting
more sophisticated all the time.  Medical technology has produced all
kinds of non-invasive scanners (CT, MRI, SPECT, PET, etc.) and
researchers have been using microscopes to look inside of many
"tamper-proof" smart cards.  A device which carries its own power
supply can _try_ to detect that it's been scanned (the equivalent of
software detecting that it's being traced or running on a virtual
machine), and certainly many of the medical imaging techniques use
some sort of active irradiation or otherwise provide a lot of energy
which a device could detect (assuming there's no way to disable the
device's power supply or otherwise destroy the tamper-detection logic).
So maybe devices could be made to detect that kind of active scan and
destroy themselves in response.

I understand that the state of the art in hardware favors the reverse
engineers in most cases, but a lot of people still have confidence in
the ability of hardware engineers to create genuinely tamper-resistant
devices.  And some people believe in particular contemporary designs
and products.

A couple of years ago, I heard about a technique called
interaction-free measurement, which uses quantum physics to measure or
photograph/image an object _without touching it or interacting with it
in any way_ (from the point of view of classical physics); this was
colloquially called "seeing in the dark" because no light or other
electromagnetic radiation need end up being incident on the target
object.

http://cornell.mirror.aps.org/abstract/PRA/v58/i1/p605_1
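For a sense of the numbers: in the quantum-Zeno variant of IFM (as in
the Kwiat et al. work), the object is interrogated over n weak cycles,
and the standard success probability is cos(pi/2n)^(2n), which tends
to 1 as n grows.  A quick numerical check (my own illustration, not
from the cited paper):

```python
import math

def ifm_success_prob(n: int) -> float:
    """Probability of a successful interaction-free measurement
    after n weak interrogation cycles: cos(pi/2n)^(2n).
    n=1 gives 0; the probability approaches 1 as n grows."""
    return math.cos(math.pi / (2 * n)) ** (2 * n)

for n in (1, 5, 50, 500):
    print(n, round(ifm_success_prob(n), 4))
```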

Does IFM justify the conclusion that tamper-resistance in hardware
will never be achieved?  (There could still be an arms race over
costs and benefits.)

-- 
Seth David Schoen <schoen at loyalty.org> | Reading is a right, not a feature!
     http://www.loyalty.org/~schoen/   |                 -- Kathryn Myronuk
     http://vitanuova.loyalty.org/     |



---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at wasabisystems.com



