Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

Seth David Schoen schoen at loyalty.org
Wed Dec 31 04:20:12 EST 2003


David Wagner writes:

> So it seems that third-party-directed remote attestation is really where
> the controversy is.  Owner-directed remote attestation doesn't have these
> policy tradeoffs.
>
> Finally, I'll come back to the topic you raised by noting that your
> example application is one that could be supported with owner-directed
> remote attestation.  You don't need third-party-directed remote
> attestation to support your desired use of remote attestation.  So, TCPA
> or Palladium could easily fall back to only owner-directed attestation
> (not third-party-attestation), and you'd still be able to verify the
> software running on your own servers without incurring new risks of DRM,
> software lock-in, or whatever.
> 
> I should mention that Seth Schoen's paper on Trusted Computing anticipates
> many of these points and is well worth reading.  His notion of "owner
> override" basically converts third-party-directed attestation into
> owner-directed attestation, and thereby avoids the policy risks that so
> many have brought up.  If you haven't already read his paper, I highly
> recommend it.  http://www.eff.org/Infra/trusted_computing/20031001_tc.php

Thanks for the kind words.

Nikita Borisov has proposed an alternative to Owner Override which
Ka-Ping Yee has called "Owner Gets Key", and which is probably the
same as what you're discussing.

Most TC vendors have entered into this with some awareness of the
risks.  For example, the TCPA whitepaper that John Gilmore mentioned
here earlier specifically contemplates punishing people for using
disapproved software, without considering exactly why it is that
people would want to put themselves into a position where they could
be punished for doing that (given that they can't now!).  (In
deference to Unlimited Freedom's observations, it is not logically
impossible that people would ever want to put themselves into that
position; the TCPA whitepaper just didn't consider why they would.)

As a result, I have not had any TC vendor express much interest in
Owner Override or Owner Gets Key.  Some of them correctly pointed out
that there are interesting user interface problems associated with
making this usable yet resistant to social engineering attacks.  There
might be "paternalistic" reasons for not wanting to give end-users the
attestation keys, if you simply don't trust that they will use them
safely.  (But there's probably no technical way to have our cake and
eat it too: if a system supports paternalistic security, that power
can probably be abused; if it gives the owner total control, it can't
prevent the owner from falling victim to social engineering.)  Still,
the lack of a totally obvious secure UI hasn't stopped research from
taking place in related areas.  For example, Microsoft is reportedly
still trying to figure out how to make clear to people whether the
source of a particular UI element is the program they think it is, and
how to handle the installation of NGSCB trusted computing agents.
Secure UI is full of thorny problems.

I've recently been concerned about one problem with the Owner Override
and Owner Gets Key approaches: whether they are particularly
vulnerable to a man-in-the-middle attack.

Suppose that I own a computer with the TCG TPM "FOO" and you are a
server operator, and you and I trust each other and believe that we
have aligned interests.  (One example is the case where you are a bank
and we both want to be sure that I am using a pristine, unaltered
computing environment in order to access my account.  Neither of us
will benefit if I can be tricked into making bogus transactions.)

An attacker Mallory owns a computer with the TCG TPM "BAR".  We assume
that Mallory has already compromised my computer (because our ability
to detect when Mallory does that is the whole reason we're using
attestation in the first place).  Mallory replaces my web browser (or
financial software) with a web browser that he has modified to send
queries to him instead of to you, and to contain a root CA certificate
that makes it trust a root CA that Mallory controls.  (Alternatively,
he's just made the new web browser ignore the results of SSL
certificate validation entirely, though that might be easier to detect.)
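
To make that step concrete, here is a toy sketch of why the
substituted root CA defeats SSL.  Everything in it is invented for
illustration; real chain validation also verifies each signature in
the chain, hostnames, and validity periods, but the trust decision
still bottoms out in a root set that Mallory now controls on my
machine:

    # Hypothetical, grossly simplified chain check.  On my compromised
    # machine, Mallory has added his own CA to the browser's root set.
    TRUSTED_ROOTS = {"RealRootCA", "MalloryRootCA"}

    def chain_is_trusted(cert_chain):
        # Real validation checks signatures, names, and expiry; the
        # point here is only that the trust anchor is whatever sits
        # in TRUSTED_ROOTS.
        return cert_chain[-1] in TRUSTED_ROOTS

    # Mallory's proxy presents a certificate chaining to his own
    # root, and my doctored browser accepts it:
    assert chain_is_trusted(["bank.example", "MalloryRootCA"])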

Now when I go to your web site, my connection is redirected to Mallory's
computer, which proxies it and initiates a connection to you.  You ask
for an attestation as a condition of accessing your service.  Since I
have no particular reason to lie to you (I believe that your reason
for requesting the attestation is aligned with my interest), I direct
my computer to give you an attestation reflecting the actual state of
FOO's PCR values.  This attestation is a signature by FOO over a set
of PCR values that show the results of Mallory's tampering.  But
Mallory does _not_ pass this attestation along to you.  Instead,
Mallory uses Owner Override or Owner Gets Key to have BAR generate an
attestation falsely claiming that BAR presently has the PCR values
that FOO had _before Mallory tampered with my software environment_.
(BAR's actual PCR values reflect that Mallory is running some
custom-built MITM attack software, but the attestation Mallory
generates conceals that fact.)
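
To pin down the forgery step, here is a minimal Python sketch.  All
the names and the quote format are invented, and HMAC is only a
stand-in for the TPM's asymmetric attestation signature; the point is
just the shape of the operation:

    import hmac, hashlib

    def tpm_quote(tpm_key, pcr_values, nonce):
        """What an ordinary TPM does: sign its *current* PCR values
        together with the verifier's nonce.  (HMAC stands in for the
        TPM's asymmetric signature.)"""
        msg = f"{pcr_values}/{nonce}".encode()
        return hmac.new(tpm_key, msg, hashlib.sha256).digest()

    def owner_override_quote(tpm_key, claimed_pcrs, nonce):
        """With Owner Override, the owner chooses which PCR values
        the TPM signs, so the quote no longer binds to the machine's
        actual state."""
        return tpm_quote(tpm_key, claimed_pcrs, nonce)

    FOO_KEY, BAR_KEY = b"foo-secret", b"bar-secret"  # toy keys

    # My honest quote reveals Mallory's tampering ("EvilOS")...
    honest = tpm_quote(FOO_KEY, "EvilOS", "ahroowah")
    # ...so Mallory discards it and has BAR sign the pre-tampering
    # values instead:
    forged = owner_override_quote(BAR_KEY, "NormalOS", "ahroowah")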

Mallory then passes his false attestation along to you in place of my
real attestation.  On the basis of his attestation, you believe that
my computer has not been tampered with and you exchange a session key
with Mallory (believing that you are exchanging it with me).  Mallory
now exchanges a session key with me, and proxies the remainder of the
encrypted connection, which he can then observe or alter.  You falsely
believe that the session key is held by my original, unmodified
software environment, when it is really held by Mallory.

I think the lesson of this example is that believing attestations
requires some out-of-band mechanism to establish trust in TPM signing
keys.  That mechanism _could be_ vendor signatures in the present TCG
scheme, or it could be some completely different mechanism in an Owner
Override or Owner Gets Key system.  (For instance, in a corporate
environment, an IT manager can generate all the keys directly, and
then knows their values.  The IT manager does not need to rely on TPM
manufacturers to establish the legitimacy of TPM signing keys.)  The
MITM attack works because the TPM manufacturer's signature is no
longer a sufficient basis to establish trust in a TPM, if the TPM
might have an Owner Override feature.
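
In terms of the toy sketch above (reusing its invented tpm_quote,
keys, and quotes), the fix is for the verifier to check a quote
against the one TPM key it learned out of band, rather than against
"any key some manufacturer has certified":

    import hmac

    def verify_pinned(pinned_tpm_key, quote, claimed_pcrs, nonce):
        """Accept a quote only if it verifies under the single key we
        learned out of band (e.g., from the IT manager who generated
        it).  In this symmetric toy the verifier recomputes the
        quote; with real asymmetric keys it would verify a signature
        against a pinned public key."""
        expected = tpm_quote(pinned_tpm_key, claimed_pcrs, nonce)
        return hmac.compare_digest(expected, quote)

    # The bank pinned FOO's key, so Mallory's quote under BAR fails:
    assert not verify_pinned(FOO_KEY, forged, "NormalOS", "ahroowah")
    # An honest quote from FOO verifies -- and exposes the tampering,
    # because what it signs is "EvilOS":
    assert verify_pinned(FOO_KEY, honest, "EvilOS", "ahroowah")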

So in my example, I need to find an out-of-band way to tell you FOO's
key so that you know it and can trust it (and distinguish it from
BAR's key).  If Owner Override exists and I've never told you precisely
which TPM I'm using, all you can tell is that you got an attestation
from _some_ TPM, but that might be an attacker's TPM and not my TPM!
In the corporate environment, the IT manager knows which TPM is in
each machine and can therefore easily tell who really generated a
particular attestation.  (I'm oversimplifying in various ways, but I
think this point is right.)

This extra work is not necessarily a bad thing -- it can be seen as a
legitimate cost of making TPMs that can never be used against their
owners -- but I'm not sure it can be avoided, and it might make
attestation less useful for some applications.  In all the cases where
the application is "technically sophisticated people want to receive
attestations from computers they own and configure", it's probably
still fine, but there may be some challenges for other purposes.


Here's an ASCII art diagram of that attack (eliding many details of
key exchange protocols that don't seem relevant):

     me           Mallory          you
  ________       _________       ________
 | EvilOS | --- | MITM-OS | --- | BankOS |
  --------       ---------       --------
  TPM "FOO"      TPM "BAR"

Me (to Mallory): Hi, Bank, it's me!
Mallory (to you): Hi, Bank, it's me!
You (to Mallory): Hi, sign your PCRs with nonce "ahroowah" to prevent replay.
Mallory (to me): Hi, sign your PCRs with nonce "ahroowah" to prevent replay.
Me (to Mallory): Ok.  sign(FOO, "EvilOS/ahroowah")
Mallory (to you): Ok.  sign(BAR, "NormalOS/ahroowah")
You (to Mallory): Great, you're NormalOS.  session key: encrypt(BAR, sugheequ)
Mallory (to me): Great, you're NormalOS.  session key: encrypt(FOO, heepheig)

(Again, I can't exchange a session key with you that Mallory can't
intercept because I'm running EvilOS, which deliberately fails to
notice when Mallory's SSL certificate gets substituted for the bank's,
and the bank can't benefit from this attestation because it doesn't
know whether my TPM is FOO or BAR.  If I had previously had an
out-of-band way to tell the bank that my TPM was FOO, Mallory would not
be able to carry out this attack.  Mallory still can't sign things as
FOO, and without a sufficiently clever social engineering attack,
can't get me to sign things as FOO for him.)

-- 
Seth David Schoen <schoen at loyalty.org> | Very frankly, I am opposed to people
     http://www.loyalty.org/~schoen/   | being programmed by others.
     http://vitanuova.loyalty.org/     |     -- Fred Rogers (1928-2003),
                                       |        464 U.S. 417, 445 (1984)
