Difference between TCPA-Hardware and other forms of trust

Jerrold Leichter jerrold.leichter at smarts.com
Wed Dec 17 10:30:25 EST 2003


| > | means that some entity is supposed to "trust" the kernel (what
| > | else?). If two entities, who do not completely trust each other, are
| > | supposed to both "trust" such a kernel, something very very fishy is
| > | going on.
| >
| > Why?  If I'm going to use a time-shared machine, I have to trust that the
| > OS will keep me protected from other users of the machine.  All the other
| > users have the same demands.  The owner of the machine has similar
| > demands.
|
| I used to run a commercial time-sharing mainframe in the 1970's.
| Jerrold's wrong.  The owner of the machine has desires (what he calls
| "demands") different than those of the users.
You're confusing policy with mechanism.  Both sides have the same notion of
what a trusted mechanism would be (not that it's clear, even today, that we
could actually implement such a thing).  They do, indeed, have different
demands on the policy.

| The users, for example, want to be charged fairly; the owner may not.
| We charged every user for their CPU time, but only for the fraction that
| they actually used.  In a given second, we might charge eight users
| for different parts of that fraction.
|
| Suppose we charged those eight users amounts that added up to 1.3
| seconds?  How would they know?  We'd increase our prices by 30%, in
| effect, by charging for 1.3 seconds of CPU for every one second that
| was really expended.  Each user would just assume that they'd gotten a
| larger fraction of the CPU than they expected.  If we were tricky
| enough, we'd do this in a way that never charged a single user for
| more than one second per second.  Two users would then have to collude
| to notice that they together had been charged for more than a second
| per second.
The system owner's policy is that each user be charged for *at least as much
time* as he used.

The individual user's policy is that he be charged for *no more than* the
time he used.

A system trusted by both sides would satisfy both constraints (or report that
they were unsatisfiable).  In this case, a trusted system would charge each
user for exactly the time he used!
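
To make the arithmetic concrete, here is a tiny sketch (Python; the numbers
and names are invented for the example) of the two policies as checks.  The
user's bound caps each individual charge from above, the owner's bound caps
the total from below, so the only way to satisfy both is to charge each user
exactly what he used:

    # Illustrative only: used/billed map each user to CPU seconds.
    def owner_satisfied(used, billed):
        # Owner's policy: total billed covers at least the CPU actually consumed.
        return sum(billed.values()) >= sum(used.values())

    def users_satisfied(used, billed):
        # Each user's policy: he is billed for no more than he actually used.
        return all(billed[u] <= used[u] for u in used)

    used   = {"a": 0.4, "b": 0.6}    # what was really consumed in one second
    billed = {"a": 0.55, "b": 0.75}  # 1.3 seconds billed for 1.0 second used
    print(owner_satisfied(used, billed))   # True  - the owner has no complaint
    print(users_satisfied(used, billed))   # False - but only visible if the users
                                           # can compare notes against real usage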

| ...The users had to trust us to keep our accounting and pricing fair.
| System security mechanisms that kept one user's files from access by
| another could not do this.  It required actual trust, since the users
| didn't have access to the data required to check up on us (our entire
| billing logs, and our accounting software).
Exactly:  *You* had a system you trusted.  Your users had to rely on you.

The situation you describe is hardly new!  Auditing procedures have been
developed over hundreds of years in order to give someone like the user of
your system reason to believe that he is being treated correctly.  Like
everything else in the real world, they are imperfect.  But commerce couldn't
exist unless they worked "well enough".  (I once dealt with a vendor support
charge of around $50K.  It was sent with no documentation - just the bald
assertion that we owed that money.  It took weeks to get the vendor to send
their phone-conversation and back-room work logs.  They contained all kinds of
interesting things - like a charge for a full 8-hour day to send a 5-line
email message.  Yes, *send* it - the logs showed it had been researched and
written the day before.  We eventually agreed to pay about half the billed
amount.  In another case, someone I know audited the books - kept by a very
large reseller - on which royalties for a re-sold piece of software were
based.  The reseller claimed that no one ever bothered to check their books - it
was a waste of time.  Well... there were *tons* of "random" errors - which
just by chance all happened to favor the reseller - who ended up writing a
large check.)

| TCPA is being built specifically at the behest of Hollywood.... [various
| evidence].
I'm not defending TCPA.  I'm saying that many of the attacks against it
miss the point:  That the instant you have a truly trustable secure kernel,
no matter how it came about, you've opened the door for exactly the kinds of
usage scenarios that you object to in TCPA.

Let's think about what a trusted kernel must provide.  All it does is enforce
an access control matrix in a trusted fashion:  There are a set of objects
each of which has an associated set of operations (read, write, delete,
execute, whatever), a set of subjects, and a mapping from an object/subject
pair to a subset of the operations on the object.  "Subjects" are not just
people - a big lesson of viruses is that I don't necessarily want to grant
all my access rights to every program I run.  When I run a compiler, it
should be able to write object files; when I run the mail program it probably
should not.  (You can identify individual programs with individual "virtual
users" and implement things setuid-style, but that's just an implementation
technique - what you're doing is making the program the subject.)

Subjects have to be able to treat each other with suspicion.  However, we
assume that the trusted kernel is *above* suspicion:  It's been verified by
those who use it.  One aspect of this is that it can give trusted reports:
I - or anyone with appropriate authorization - can ask for a list of the
access rights for some object, and trust the answer.
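
As a concrete sketch of that mechanism - an access control matrix whose
subjects can be programs as well as people, plus a trusted-report facility -
something like the following (Python; the class and all the names are mine,
purely illustrative, not anything a real kernel or TCPA defines):

    class AccessMatrix:
        """Toy access control matrix: (subject, object) -> set of operations."""
        def __init__(self):
            self.rights = {}

        def grant(self, subject, obj, ops):
            self.rights.setdefault((subject, obj), set()).update(ops)

        def allowed(self, subject, obj, op):
            return op in self.rights.get((subject, obj), set())

        def report(self, obj):
            # The "trusted report": every subject's rights on one object.
            # Its value rests entirely on the kernel being above suspicion.
            return {s: set(ops) for (s, o), ops in self.rights.items() if o == obj}

    m = AccessMatrix()
    m.grant("compiler", "prog.o", {"read", "write"})  # a compiler may write object files
    m.grant("mailer",   "prog.o", {"read"})           # the mail program may not
    print(m.allowed("mailer", "prog.o", "write"))     # False
    print(m.report("prog.o"))                         # auditable by anyone authorized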

Given this setup, a music company will sell you a program that you must
install with a given set of access rights.  The program itself will check
(a) that it wasn't modified; (b) that a trusted report indicates that it
has been given exactly the rights specified.  Among the things it will check
in the report is that no one has the right to change the rights!  And, of
course, the program won't grant generic rights to any music file - it will
specifically control what you can do with the files.  Copying will, of course,
not be one of those things.
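
A sketch of the startup check such a program might run, building on the toy
AccessMatrix above (the hash constant, the rights names, and the helper
functions are inventions of mine for illustration, not anything TCPA
specifies):

    import hashlib

    EXPECTED_IMAGE_HASH = "<digest published by the vendor>"   # placeholder
    REQUIRED_RIGHTS = {"player": {"read"}}   # the only rights permitted on the file

    def image_unmodified(program_path):
        # (a) the program verifies that its own image has not been tampered with
        digest = hashlib.sha256(open(program_path, "rb").read()).hexdigest()
        return digest == EXPECTED_IMAGE_HASH

    def rights_exactly_as_specified(matrix, music_file):
        # (b) the trusted report must show exactly the rights demanded at install
        # time - in particular, nobody may hold the right to change the rights.
        report = matrix.report(music_file)
        return (report == REQUIRED_RIGHTS and
                all("change_rights" not in ops for ops in report.values()))

    def ok_to_play(matrix, music_file, program_path):
        return (image_unmodified(program_path) and
                rights_exactly_as_specified(matrix, music_file))

The check is against the *whole* report, so any extra grant to anyone - not
just a grant to you - makes it fail.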

Now, what you'll say is that you want a way to override the trusted system,
and grant yourself whatever rights you like.  Well, then you no longer have
a system *anyone else* can trust, because *you* have become the real security
kernel.  And in a trusted system, you could certainly create a subject that
would automatically be granted all access rights to everything.  Of course,
since the system is trusted, it would include information about the
override account in any reports.  The music company would refuse to do
business with you.

More to the point, many other businesses would refuse to do business with you.
If the system were really trusted, it could store things like your credit
balance:  A vendor would trust your system's word about the contents, because
even you would not be able to modify the value.  This is what smart cards
attempt to offer - and, again, it would be really nice if you didn't have to
have a whole bunch of them.  The bank records stored on your system could
be trusted:  By the bank, by you - and, perhaps quite useful to you, by a
court if you claimed that the bank's records had been altered.  (Today, you
can save and later produce a paper copy of a bank statement, and it will
generally be believed.  An on-line statement is worthless for this purpose.
Individual messages from the bank may be signed, but they are much less
convenient to use that way.)

Yes, you can construct a system that *you* can trust, but no one else has
any reason to trust.  However, the capability to do that can be easily
leveraged to produce a system that *others* can trust as well.  There are
so many potential applications for the latter type of system that, as soon
as systems of the former type are fielded, the pressure to convert them to
the latter type will be overwhelming.

Ultimately, TCPA or no, you will be faced with a stark choice:  Join the
broad "trust community", or "live in the woods".

That being the case, I don't find attacks on TCPA as such fruitful.  Take it
as a given.  Attack any parts of it that are kept secret.  When/if it's open,
attack any faults.  Attack the actual - and potential - abuses.

							-- Jerry

PS  All the above starts with the working assumption that one *can* actually
produce a trusted kernel.  TCPA itself starts from the same assertion.  I'm
skeptical about how far we can go - and in fact the general access control
mechanism I used in my illustration is known to give rise to fairly obvious
questions about who can ever gain access to what - questions that are
NP-complete in restricted forms and outright undecidable in general (the
Harrison-Ruzzo-Ullman safety result).  If it can't be done, the implications
of it *being* done are irrelevant.



