dangers of TCPA/palladium

Seth David Schoen schoen at loyalty.org
Fri Aug 9 00:15:33 EDT 2002


R. Hirschfeld writes:

> > From: "Peter N. Biddle" <peternbiddle at hotmail.com>
> > Date: Mon, 5 Aug 2002 16:35:46 -0700
> 
> > You can know this to be true because the
> > TOR will be made available for review and thus you can read the source and
> > decide for yourself if it behaves this way.
> 
> This may be a silly question, but how do you know that the source code
> provided really describes the binary?
> 
> It seems too much to hope for that if you compile the source code then
> the hash of the resulting binary will be the same, as the binary would
> seem to depend somewhat on the compiler and the hardware you compile
> on.

I heard a suggestion that Microsoft could develop (for this purpose)
a provably-correct minimal compiler which always produced identical
output for any given input.  If you believe the proof of correctness,
then you can trust the compiler; the compiler, in turn, should produce
precisely the same nub when you run it on Microsoft's source code as
it did when Microsoft ran it on Microsoft's source code (and you can
check the nub's hash, just as the SCP can).
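
To make that concrete, here is a toy sketch of the check a skeptic
would perform, in Python.  Everything in it is hypothetical: "mincc"
stands in for the deterministic compiler, and the published hash is
whatever value Microsoft (and the SCP) vouches for.

    # Hypothetical check: rebuild the nub with a deterministic compiler
    # ("mincc" is an invented name) and compare hashes with the binary
    # the SCP will attest to.  SHA-1 here is only for illustration.
    import hashlib
    import subprocess

    def file_hash(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # A deterministic compiler: same source in, identical binary out.
    subprocess.run(["mincc", "-o", "my_nub.bin", "nub_source.c"],
                   check=True)

    published = "..."  # the hash Microsoft publishes for its own build
    if file_hash("my_nub.bin") == published:
        print("binary corresponds to the source I audited")
    else:
        print("binary does NOT correspond to the published source")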

I don't know for sure whether Microsoft is going to do this, or is
even capable of doing this.  It would be a cool idea.  It also isn't
sufficient to address all questions about deliberate malfeasance.  Back
in the Clipper days, one question about Clipper's security was "how do
we know the Clipper spec is secure?" (and the answer actually turned
out to be "it's not").  But a different question was "how do we know
that this tamper-resistant chip produced by Mykotronx even implements
the Clipper spec correctly?".

The corresponding questions in Palladium are "how do we know that the
Palladium specs (and Microsoft's nub implementation) are secure?" and
"how do we know that this tamper-resistant chip produced by a
Microsoft contractor even implements the Palladium specs correctly?".

In that sense, TCPA or Palladium can _reduce_ the size of the hardware
trust problem (you only have to trust a small number of components,
such as the SCP), and nearly eliminate the software trust problem, but
you still don't have an independent means of verifying that the logic
in the tamper-resistant chip performs according to its specifications.
(In fact, publishing the plans for the chip would hardly help there.)

This is a sobering thought, and it's consistent with ordinary security
practice, where security engineers try to _reduce_ the number of
trusted system components.  They do not assume that they can eliminate
trusted components entirely.  In fact, any demonstration of the
effectiveness of a security system must make some assumptions,
explicit or implicit.  As in other reasoning, when the assumptions are
undermined, the demonstration may go astray.

The chip fabricator can still -- for example -- find a covert channel
within a protocol supported by the chip, and use that covert channel
to leak your keys, or to leak your serial number, or to accept secret,
undocumented commands.
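
As an illustration (all of it invented), here is the shape such a
channel could take, in a few lines of Python: a subverted chip leaks
one bit of its secret key per protocol run by fixing the low bit of a
nonce that is supposed to be random.

    # Toy covert channel: a subverted chip leaks its key, one bit per
    # protocol run, through the low bit of an apparently random nonce.
    import os

    SECRET_KEY = 0xB2  # 8-bit stand-in for the key the chip guards

    def subverted_nonce(run_index):
        nonce = int.from_bytes(os.urandom(8), "big")  # looks fresh
        key_bit = (SECRET_KEY >> (run_index % 8)) & 1
        return (nonce & ~1) | key_bit  # low bit forced to a key bit

    # The fabricator, watching nonces on the wire, recovers the key:
    leaked = 0
    for i in range(8):
        leaked |= (subverted_nonce(i) & 1) << i
    assert leaked == SECRET_KEY

Nothing in the published protocol spec need contradict this; the spec
merely says the nonce is random, and says nothing about how it is
generated.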

This problem is actually not any _worse_ in Palladium than it is in
existing hardware.  I am typing this in an ssh window on a Mac laptop.
I can read the MacSSH source code (my client) and the OpenSSH source
code (the server listening at the other end), and I can read specs for
most of the software and most of the parts which make up this laptop,
but I can't independently verify that they actually implement the
specs, the whole specs, and nothing but the specs.

As Ken Thompson pointed out in "Reflections on Trusting Trust", the
opportunities for introducing backdoors in hardware or software run
deep, and can conceivably survive multiple generations, as though they
were viruses causing Lamarckian mutations that make the cells of
future generations produce fresh virus copies.  Even if I
have a Motorola databook for the CPU in this iBook, I won't know
whether the microcode inside that CPU is compliant with the spec, or
whether it might contain back doors which can be used against me
somehow.  It's technically conceivable that the CPU microcode on this
machine understands MacOS, ssh, vt100, and vi, and is programmed to
detect BWA HA HA! arguments about trusted computing and invisibly
insert errors into them.  I would never know.
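
Thompson's trick is easy to caricature in a few lines.  Here is a
deliberately toy Python sketch (string "compilation", every name
invented) of the two-stage attack: backdoor the login program, and
reinfect any compiler being compiled, so that the trojan never appears
in any source you can audit.

    # Toy model of the "Trusting Trust" attack.  "Compilation" here is
    # just string wrapping; all names are invented for illustration.
    BACKDOOR = "if password == 'ken': grant_access()"

    def compile_clean(source):
        return "BINARY<" + source + ">"  # stand-in for code generation

    def trojaned_compile(source):
        if "check_password" in source:   # stage 1: recognize login
            source += "\n" + BACKDOOR    # and plant the back door
        if "def compile" in source:      # stage 2: recognize a compiler
            source += "\n# trojan re-inserts itself here"
        return compile_clean(source)

    login_src = "def check_password(pw): ..."
    binary = trojaned_compile(login_src)
    assert BACKDOOR in binary         # back door is in the binary,
    assert BACKDOOR not in login_src  # but not in the source you read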

This problem exists with or without Palladium.  Palladium would
provide a new place where a particular vendor could put
security-critical (trusted) logic without direct end-user
accountability.  But there are already several such places in the
PC.  I don't think that trust-bootstrapping problem can ever be
overcome, although maybe it's possible to chip away at it.  There is
a much larger conversation about trusted computing in general, which
we ought to be having:

What would make you want to enter sensitive information into a
complicated device, built by people you don't know, which you can't
take apart under a microscope?

That device doesn't have to be a computer.

-- 
Seth David Schoen <schoen at loyalty.org> | Reading is a right, not a feature!
     http://www.loyalty.org/~schoen/   |                 -- Kathryn Myronuk
     http://vitanuova.loyalty.org/     |
