Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

Seth David Schoen schoen at loyalty.org
Sun Dec 21 18:03:23 EST 2003


Carl Ellison writes:

> I, meanwhile, never did buy the remote attestation argument for high price
> content.  It doesn't work.  So, I looked at this as an engineer.  "OK, I've
> got this hardware. If remote attestation is worthless, then I can and should
> block that (e.g., with a personal firewall).  Now, if I do that, do I have
> anything of value left?"  My answer was that I did - as long as I could
> attest about the state of the software to myself, the machine owner.
> 
> [...]
> 
> What we need is some agent of mine - a chip - that:
> 1) has access to the machine guts, so it can verify S/W state
> 2) has a cryptographic channel to me, so it can report that result to me
> and
> 3) has its own S/W in a place where no attacker could get to it, even if
> that attacker had complete control over the OS.
> 
> The TCPA/TPM can be used that way.  Meanwhile, the TPM has no channel to the
> outside world, so it is not capable of doing remote attestation by itself.
> You need to volunteer to allow such communications to go through. If you
> don't like them, then block them.  Problem solved.  This reminds me of the
> abortion debate bumper sticker.  "If you're against abortion, don't have
> one."

The only difficulty here is the economic effect if an attestation
capability is really ubiquitous, since people you interact with can
tell whether you chose to offer them attestations or not.  (Imagine
if there were a way to tell whether someone had had an abortion in
the past by looking at her.  That would have a major effect on the
decision to have an abortion, without directly affecting the
availability of abortion services at all.  It would all be a matter
of secondary effects.)

My two favorite examples currently are the polygraph machine and
genetic screening.  Of course, both of these are opt-in technologies;
nobody will come up to you on the street and force you to take a
polygraph, and nobody will come up to you and stab you to collect
blood for genetic screening.  (There are supposedly a few cases of
genetic testing being done surreptitiously, and that might become
more common in the future.)  On the other hand, people can conceivably
condition certain interactions or benefits on the results of a
polygraph test or a genetic test for some condition.  The most obvious
example is employment: someone can refuse to hire you unless you
submit to one or the other of these tests.  As a result of various
concerns about this, Congress has now regulated the use of both of
these technologies by employers in the U.S.  Whether or not people
agree with that decision by Congress, they should be able to see that
the very _existence_ of these opt-in technologies, and their potential
availability to employers, would make many prospective employees worse
off than they would have been if the technologies had not been
developed.

(Oh, and then there are the ways insurers would like to use genetic
tests.  I'm sure some insurers wouldn't have minded subjecting their
insureds to lie detector tests, either, when asking them whether they
ever smoke.)

On-line interactions are becoming terribly frequent, and it's common
for on-line service providers to wish to know about you (or to know
what software you use, or to try to get you to use particular other
software instead).  In the current environment you can use the personal
firewalls you mention, and a host of other techniques, to prevent
on-line services from learning much more about you than you would like
them to -- and in principle they can't determine whether or not you're
using the programs they would prefer.
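
To make that concrete: without hardware attestation, a service sees
only what your client chooses to report.  Here is a toy sketch in
Python (the URL and the claimed software name are made up) showing
that a self-reported identifier like the User-Agent header proves
nothing about what is actually running:

    import urllib.request

    # The client puts whatever it likes in the User-Agent header, so a
    # server that relies on it learns only what we choose to tell it.
    req = urllib.request.Request(
        "https://example.com/",
        headers={"User-Agent": "ApprovedPlayer/1.0"},  # an arbitrary claim
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # the server answers us all the same

Hardware attestation is precisely what would close that loophole,
which is where the leverage described below comes from.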

John Gilmore recently quoted in this thread the TCPA white paper
"Building a Foundation of Trust in the PC", which says

	[M]aking the platform secure requires a ubiquitous
	solution, supported by vendors throughout the industry.
	[... A]t some point, all PCs will have the ability to
	be trusted to some minimum level -- a level that is
	higher than possible today -- and to achieve this level
	of trust in the same way.

	[...] Every PC will have hardware-based trust, and
	every piece of software on the system will be able to
	use it.

If that happens, publishers and service providers can use their
leverage over software choices to gain a lot more power over computer
owners than they have right now.

"Building a Foundation of Trust in the PC" suggests this:

	[B]efore making content available to a subscriber, it is
	likely that a service provider will need to know that the
	remote platform is trustworthy.  [... T]he digital signature
	prevents tampering and allows the challenger to verify the
	signature.  If the signature is verified, the challenger
	can then determine whether the identity metrics are
	trustworthy.  If so, the challenger, in this case the
	service provider, can then deliver the content.
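
Here is a much-simplified sketch, in Python, of the flow that passage
describes.  All the names are mine; a real TPM signs a "quote" over
its platform configuration registers with an RSA attestation identity
key, for which I substitute an HMAC under a shared key just to keep
the sketch self-contained:

    import hmac, hashlib, os

    ATTESTATION_KEY = b"demo-key"   # stand-in for the platform's key
    TRUSTED_METRICS = {hashlib.sha1(b"approved-os-v1").hexdigest()}

    def platform_attest(challenge, metrics):
        # The platform signs (challenge, metrics) -- the "quote".
        return hmac.new(ATTESTATION_KEY, challenge + metrics.encode(),
                        hashlib.sha1).digest()

    def service_provider():
        challenge = os.urandom(20)  # fresh nonce, so quotes can't be replayed
        metrics = hashlib.sha1(b"approved-os-v1").hexdigest()
        quote = platform_attest(challenge, metrics)
        expected = hmac.new(ATTESTATION_KEY, challenge + metrics.encode(),
                            hashlib.sha1).digest()
        # Verify the signature, then judge whether the metrics are trustworthy.
        if hmac.compare_digest(quote, expected) and metrics in TRUSTED_METRICS:
            return "deliver content"
        return "refuse service"     # no quote, or the wrong one: no access

    print(service_provider())

Note the last branch: declining to attest lands you in the same place
as attesting to a disapproved environment.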

Some people may have read things like this and mistakenly thought that
this would not be an opt-in process.  (There is some language about
how the user's platform takes various actions and then "responds" to
challenges, and perhaps people reasoned that it was responding
autonomously, rather than under its user's direction.)

But it's clear from the context why the computer user is opting in:
because it's a condition of access to the service.  If you don't
attest at all, that's treated as giving an unacceptable answer.

There might seem to be a certain circularity here (you can only get
people to give attestations if you can deny them access to the service
if they refuse, and you can only deny them access to the service for
refusing if people are generally willing to give attestations).  But I
think it's mainly a question of network effects: once enough users are
able to attest, a service loses little by turning away the few who
refuse, and refusal itself starts to look like an unacceptable answer.

Your desire to "attest about the state of the software to myself, the
machine owner" could be met in various ways without increasing other
people's potential leverage over what software you use.  I have
suggested that you could have a TPM that allows you to deliberately
attest to a software environment that is different from your real software
environment.  There are other possibilities.  Ka-Ping Yee suggested
that, when you buy a device with a TPM, you should get a copy of all the
secret keys that were preloaded in your TPM; another alternative would be
not preloading any keys at all.  In these models, not only can you
get an attestation through some local UI, as you suggest, but you can
also give an attestation to a machine or service that you operate --
or that someone else operates -- whenever you have reason to believe
that the attestation will be used in a way that will benefit you.
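
As a sketch of what Ka-Ping Yee's suggestion buys you -- assuming the
owner really does get a copy of the TPM's secret key at purchase, and
again substituting an HMAC for the real signature scheme, with names
of my own invention:

    import hmac, hashlib, os

    owner_key = b"key-disclosed-to-owner-at-purchase"   # illustrative

    def owner_attest(challenge, claimed_environment):
        # With the key in hand, the owner can produce a quote
        # indistinguishable from one the chip itself would make, and
        # can describe whatever environment she chooses to report.
        return hmac.new(owner_key, challenge + claimed_environment,
                        hashlib.sha1).digest()

    # Attest to your own server, or to someone else's, exactly when you
    # judge that the attestation works in your favor.
    nonce = os.urandom(20)
    quote = owner_attest(nonce, b"environment-I-choose-to-report")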

-- 
Seth David Schoen <schoen at loyalty.org> | Very frankly, I am opposed to people
     http://www.loyalty.org/~schoen/   | being programmed by others.
     http://vitanuova.loyalty.org/     |     -- Fred Rogers (1928-2003),
                                       |        464 U.S. 417, 445 (1984)
