Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

Jerrold Leichter jerrold.leichter at smarts.com
Mon Dec 15 19:02:06 EST 2003


| > Which brings up the interesting question:  Just why are the reactions to
| > TCPA so strong?  Is it because MS - who no one wants to trust - is
| > involved?  Is it just the pervasiveness:  Not everyone has a smart card,
| > but if TCPA wins out, everyone will have this lump inside of their
| > machine.
|
| There are two differences between TCPA-hardware and a smart card.
|
| The first difference is obvious. You can plug in and later remove a smart
| card at your will, at the point of your choice. Thus, for homebanking with
| bank X, you may use a smart card, for homebanking with bank Y you
| disconnect the smart card for X and use another one, and before online
| gambling you make sure that none of your banking smart cards is connected
| to your PC. With TCPA, you have much less control over the kind of stuff
| you are using.
|
| This is quite an advantage of smart cards.
However, this advantage exists only because so few smart cards, and so few
smart-card-enabled applications, are around.

Really secure mail *should* use its own smart card.  When I do banking, do
I have to remove my mail smart card?  Encryption of files on my PC should
be based on a smart card.  Do I have to pull that one out?  Does that mean
I can't look at my own records while I'm talking to my bank?  If I can only
have one smart card in my PC at a time, does that mean I can *never* cut and
paste between my own records and my on-line bank statement?  To access my
files and my employer's email system, do I have to trust a single
smart card to hold both sets of secrets?

I just don't see this whole direction of evolution as being viable.  Oh,
we'll pass through that stage - and we'll see products that let you connect
multiple smart cards at once, each "guaranteed secure" from the others.  But
that kind of add-on is unlikely to really *be* secure.

Ultimately, to be useful a trusted kernel has to be multi-purpose, for exactly
the same reason we want a general-purpose PC, not a whole bunch of fixed-
function appliances.  Whether this multi-purpose kernel will be inside the PC,
or a separate unit I can unplug and take with me, is a separate issue.  Given
the current model for PCs, a separate "key" is probably a better approach.
However, there are already experiments with "PC in my pocket" designs:  A
small box with the CPU, memory, and disk, which can be connected to a small
screen to replace a palmtop, or to a unit with a big screen, a keyboard,
etc., to become my desktop.  Since that small box would have all my data, it
might make sense for it to have the trusted kernel.  (Of course, I probably
want *some* part to be separate to render the box useless if it is stolen.)

| The second point is perhaps less obvious, but may be more important.
| Usually, *your* PC hardware and software is supposed to protect *your*
| assets and satisfy *your* security requirements. The "trusted" hardware
| add-on in TCPA is supposed to protect an *outsider's* assets and satisfy
| the *outsider's* security needs -- from you.
|
| A TCPA-"enhanced" PC is thus the servant of two masters -- your servant
| and the outsider's. Since your hardware connects to the outsider directly,
| you can never be sure whether it works *against* you by giving the
| outsider more information about you than it should (from your point of
| view).
|
| There is nothing wrong with the idea of a trusted kernel, but "trusted"
| means that some entity is supposed to "trust" the kernel (what else?). If
| two entities, who do not completely trust each other, are supposed to both
| "trust" such a kernel, something very very fishy is going on.
Why?  If I'm going to use a time-shared machine, I have to trust that the
OS will keep me protected from other users of the machine.  All the other
users have the same demands.  The owner of the machine has similar demands.

The same goes for any shared resource.  A trusted kernel should provide some
isolation guarantees among "contexts".  These guarantees should be independent
of the detailed nature of the contexts.  I think we understand pretty well
what the *form* of these guarantees should be.  We do have problems actually
implementing such guarantees in a trustworthy fashion, however.
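
To make the *form* concrete: the interface need say nothing about what a
context contains, only that a blob sealed in one context is never released
into another.  A toy sketch in Python - the names (TrustedKernel, seal,
unseal) are my own illustration, not anything from the TCPA spec:

    # Toy sketch of isolation among "contexts": the kernel enforces that
    # a blob sealed by one context can only be unsealed by that same
    # context.  Names are illustrative, not from any TCPA specification.
    class TrustedKernel:
        def __init__(self):
            self._store = {}  # (context_id, name) -> data

        def seal(self, context_id: str, name: str, data: bytes) -> None:
            """Store data under the sealing context's identity."""
            self._store[(context_id, name)] = data

        def unseal(self, context_id: str, name: str) -> bytes:
            """Release data only to the context that sealed it."""
            if (context_id, name) not in self._store:
                raise PermissionError("no such blob in this context")
            return self._store[(context_id, name)]

    kernel = TrustedKernel()
    kernel.seal("banking", "account-key", b"secret-1")
    kernel.seal("mail", "smtp-password", b"secret-2")
    assert kernel.unseal("banking", "account-key") == b"secret-1"
    try:
        kernel.unseal("mail", "account-key")   # wrong context: refused
    except PermissionError:
        pass

The hard part, of course, is not this bookkeeping but making the enforcement
trustworthy against a hostile host.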

Part of the issue with TCPA is that the providers of the kernel that we are
all supposed to trust blindly are also going to be among those who will use it
heavily.  Given who those providers are, that level of trust is unjustifiable.

However, suppose that TCPA (or something like it) were implemented entirely by
independent third parties, using open techniques, and that they managed to
produce both a set of definitions of isolation, and an implementation, that
were widely seen to correctly specify, embody, and enforce strict protection.
How many of the criticisms of TCPA would that mute?  Some:  Given open
standards, a Linux TCPA-based computing platform could be produced.
Microsoft's access to the trusted kernel would be exactly the same as
everyone else's; there would be no basis for complaining that they had left a
back door into your system.  However, it would do absolutely nothing to block
the "nightmare scenarios" that get raised - like a Word version that stores
all data in encrypted files that can only be read by "the real Word".
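
The mechanism behind that scenario is simple to sketch: sealed storage
releases data only to software whose "measurement" - a hash of the binary -
matches the one recorded at sealing time.  A toy illustration in Python,
with simplified stand-ins rather than the real TCPA interfaces:

    # Toy sketch of measurement-bound sealing: data is tied to a hash of
    # the program that sealed it, so only a bit-identical program can
    # read it back.  Simplified stand-in, not the actual TCPA sealing API.
    import hashlib

    def measure(program: bytes) -> str:
        # A "measurement" here is just a hash of the program's bytes.
        return hashlib.sha256(program).hexdigest()

    class SealedStore:
        def __init__(self):
            self._blobs = {}  # name -> (measurement, data)

        def seal(self, program: bytes, name: str, data: bytes) -> None:
            self._blobs[name] = (measure(program), data)

        def unseal(self, program: bytes, name: str) -> bytes:
            expected, data = self._blobs[name]
            if measure(program) != expected:
                raise PermissionError("measurement mismatch")
            return data

    store = SealedStore()
    real_word = b"bytes of the 'real' word processor"
    clone     = b"bytes of a compatible clone"
    store.seal(real_word, "document.doc", b"file contents")
    assert store.unseal(real_word, "document.doc") == b"file contents"
    try:
        store.unseal(clone, "document.doc")   # the clone is refused
    except PermissionError:
        pass

No amount of openness in the kernel changes this: the lock-in comes from
*who* seals the data, not from how the sealing is implemented.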

| Can we do better?
|
| More than ten years ago, Chaum and Pedersen presented a great idea how to
| do such things without potentially compromising your security. Bringing
| their ideas into the context of TCPA, things should look like the
| following picture:
|
|    +---------------+         +---------+         +---------------+
|    | Outside World | <-----> | Your PC | <-----> | TCPA-Observer |
|    +---------------+         +---------+         +---------------+
|
| So you can trust "your PC" (possibly with a trusted kernel ... trusted by
| you). And an outsider can trust the observer.
|
| The point is, the outside world does not directly talk to the observer!
|
| Chaum and Pedersen (and some more recent authors) defined protocols to
| satisfy the outsider's security needs without giving the outsider any
| chance to learn more about you and the data stored in your PC than you
| want her to learn.
|
| TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and is
| thus open to abuse.
I remember looking at this paper when it first appeared, but the details
have long faded.  It's an alternative mechanism for creating trust:  Instead
of trusting an open, independently-produced, verified implementation, it
uses cryptography to construct walls around a proprietary, non-open
implementation that you have no reason to trust.  Kind of like deciding
there's no way you can ever have a mix you trust completely, so you chain
several together so that all of them have to be corrupted before any
information is leaked.
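
As best I can reconstruct the structure, the essential point is just that
every message between the outside world and the observer passes through -
and can be inspected or refused by - your PC.  A rough sketch of that
mediation pattern (the message contents are my own invention; the real
Chaum-Pedersen protocols use blind signatures and considerably more care):

    # Rough sketch of the wallet-with-observer structure: the outsider
    # never talks to the observer directly; your PC relays, and may
    # refuse, every message.  Invented for illustration only.
    import hashlib

    class Observer:
        """Tamper-resistant module that the *outsider* trusts."""
        def __init__(self, secret: bytes):
            self._secret = secret

        def attest(self, challenge: bytes) -> bytes:
            # Vouches for a transaction; sees nothing but the challenge.
            return hashlib.sha256(self._secret + challenge).digest()

    class YourPC:
        """You trust the PC; it mediates all traffic to the observer."""
        def __init__(self, observer: Observer):
            self._observer = observer

        def handle(self, challenge: bytes) -> bytes:
            # The PC can refuse anything that looks like an attempt to
            # smuggle identifying information out to the observer.
            if len(challenge) > 32:
                raise ValueError("refused: oversized challenge")
            return self._observer.attest(challenge)

    pc = YourPC(Observer(secret=b"device-key"))
    # The outside world only ever calls pc.handle(), never attest().
    response = pc.handle(b"transaction-nonce")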

That certainly may have its role, but it leaves all the "nightmare scenarios"
exactly as they were.  (BTW, I don't recall if it addresses the issue of
how I have multiple mutually-suspicious Observers all on line at the same
time.)
							-- Jerry

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com


