Difference between TCPA-Hardware and other forms of trust

John Gilmore gnu at toad.com
Tue Dec 16 16:53:24 EST 2003


> | means that some entity is supposed to "trust" the kernel (what else?). If
> | two entities, who do not completely trust each other, are supposed to both
> | "trust" such a kernel, something very very fishy is going on.
>
> Why?  If I'm going to use a time-shared machine, I have to trust that the
> OS will keep me protected from other users of the machine.  All the other
> users have the same demands.  The owner of the machine has similar demands.

I used to run a commercial time-sharing mainframe in the 1970s.
Jerrold's wrong.  The owner of the machine has desires (what he calls
"demands") different from those of the users.

The users, for example, want to be charged fairly; the owner may not.
We charged every user for their CPU time, but only for the fraction that
they actually used.  In a given second, we might charge eight users
for different parts of that fraction.

Suppose we had charged those eight users amounts that added up to 1.3
seconds.  How would they know?  We'd increase our prices by 30%, in
effect, by charging for 1.3 seconds of CPU for every one second that
was really expended.  Each user would just assume that they'd gotten a
larger fraction of the CPU than they expected.  If we were tricky
enough, we'd do this in a way that never charged a single user for
more than one second per second.  Two users would then have to collude
to notice that they together had been charged for more than a second
per second.
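The overbilling scheme above is easy to express in code.  This is a
hypothetical sketch (not the actual 1970s billing software, whose
details are long gone); the function name, the 1.3x factor, and the
eight equal users are all illustrative assumptions:

```python
# Hypothetical sketch of the billing trick: inflate every user's
# fraction of a CPU second, but cap each at 1.0 so that no single
# user is ever billed more than one second per second.  Only by
# colluding and summing their bills could users detect the fraud.

def inflate_charges(real_usage, factor=1.3):
    """real_usage maps user -> fraction of one real CPU second used
    (fractions sum to at most 1.0).  Returns inflated charges."""
    charges = {}
    for user, used in real_usage.items():
        # Cap at 1.0: an individual bill never exceeds 1 s per second.
        charges[user] = min(used * factor, 1.0)
    return charges

# Eight users each used 1/8 of one CPU second.
usage = {f"user{i}": 0.125 for i in range(8)}
bill = inflate_charges(usage)

# Each bill (0.1625 s) looks plausible on its own; the total is
# ~1.3 s billed against only 1.0 s of real CPU time.
print(sum(bill.values()))
```

Any one user sees only a slightly larger-than-expected share; the
discrepancy is visible only in the aggregate, which no single user
can compute.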

(Our CPU pricing was actually hard to manage as we shifted the load
among different mainframes that ran different applications at
different multiples of the speed of the previous mainframe.  E.g. our
Amdahl 470/V6 price for a CPU second might be 1.78x the price on an
IBM 370/158.  A user's bill might go up or down from running the same
calculation on the same data, based on whether their instruction
sequences ran more efficiently or less efficiently than average on the
new CPU.  And of course if our changed "average" price was slightly
different than the actual CPU performance, this provided a way to
cheat on our prices.

Our CPU accounting also changed when we improved the OS's timer
management, so it could record finer fractions of seconds.  On average,
this made the system fairer.  But your application might suffer, if its
pattern of context switches had been undercharged by the old algorithm.)

The users had to trust us to keep our accounting and pricing fair.
System security mechanisms that kept one user's files from access by
another could not do this.  It required actual trust, since the users
didn't have access to the data required to check up on us (our entire
billing logs, and our accounting software).

TCPA is being built specifically at the behest of Hollywood.  It is
built around protecting "content" from "subscribers" for the benefit
of a "service provider".  I know this because I read, and kept, all
the early public design documents, such as the white paper

  http://www.trustedcomputing.org/docs/TCPA_first_WP.pdf

(This is no longer available from the web site, but I have a copy.)
It says, on pages 7-8:

  The following usage scenarios briefly illustrate the benefits of TCPA
  compliance.

  Scenario I: Remote Attestation

  TCPA remote attestation allows an application (the "challenger") to
  trust a remote platform. This trust is built by obtaining integrity
  metrics for the remote platform, securely storing these metrics and
  then ensuring that the reporting of the metrics is secure.

  For example, before making content available to a subscriber, it is
  likely that a service provider will need to know that the remote
  platform is trustworthy. The service provider's platform (the
  "challenger") queries the remote platform. During system boot, the
  challenged platform creates a cryptographic hash of the system BIOS,
  using an algorithm to create a statistically unique identifier for the
  platform. The integrity metrics are then stored.

  When it receives the query from the challenger, the remote platform
  responds by digitally signing and then sending the integrity
  metrics. The digital signature prevents tampering and allows the
  challenger to verify the signature. If the signature is verified, the
  challenger can then determine whether the identity metrics are
  trustworthy. If so, the challenger, in this case the service provider,
  can then deliver the content. It is important to note that the TCPA
  process does not make judgments regarding the integrity metrics. It
  merely reports the metrics and lets the challenger make the final
  decision regarding the trustworthiness of the remote platform.
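The challenge-response flow the white paper describes can be sketched
in a few lines.  This is a toy model, not real TPM code: real TCPA
attestation uses asymmetric signatures with TPM-held identity keys,
while here a shared-key HMAC stands in for the signature, and all
function names are invented for illustration:

```python
# Toy model of the attestation flow quoted above: at boot the platform
# hashes its BIOS into an integrity metric; when challenged, it signs
# the metric; the challenger verifies the signature and then decides
# for itself whether the metric means "trustworthy".

import hashlib
import hmac
import os

SIGNING_KEY = os.urandom(32)   # stand-in for the TPM's attestation key

def measure_bios(bios_image: bytes) -> bytes:
    # The "statistically unique identifier" for the platform.
    return hashlib.sha256(bios_image).digest()

def attest(metric: bytes, nonce: bytes) -> bytes:
    # Platform signs metric + challenger nonce (HMAC as a toy signature;
    # the nonce prevents replay of an old attestation).
    return hmac.new(SIGNING_KEY, metric + nonce, hashlib.sha256).digest()

def verify(metric: bytes, nonce: bytes, sig: bytes) -> bool:
    expected = hmac.new(SIGNING_KEY, metric + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

# Challenger side: send a fresh nonce, receive (metric, signature).
nonce = os.urandom(16)
metric = measure_bios(b"example BIOS image")
sig = attest(metric, nonce)

# The signature check only proves the metric is unmodified; judging
# whether that metric is acceptable is left entirely to the challenger,
# exactly as the white paper says.
assert verify(metric, nonce, sig)
```

Note how the trust decision sits wholly with the challenger (the
"service provider"), not with the platform's owner -- which is the
asymmetry the rest of this post complains about.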
 
They eventually censored out all the sample application scenarios like
DRM'd online music, and ramped up the level of jargon significantly,
so that nobody reading it can tell what it's for any more.  Now all
the documents available at that site go on for pages and pages saying
things like "FIA_UAU.1 Timing of authentication. Hierarchical to: No
other components. FIA_UAU.1.1 The TSF shall allow access to data and
keys where entity owner has given the 'world' access based on the
value of TCPA_AUTH_DATA_USAGE; access to the following commands:
TPM_SelfTestFull, TPM_ContinueSelfTest, TPM_GetTestResult,
TPM_PcrRead, TPM_DirRead, and TPM_EvictKey on behalf of the user to be
performed before the user is authenticated."

But the historical record is clear that DRM was "Usage Scenario #1"
for TCPA.

Now, back to Hollywood.  If you have not read "This Business of Music"
(a thick book on how musicians can arm themselves with knowledge to
get slightly less screwed by the record industry -- including sample
contracts and explanations of the impact and history of each
provision), you won't know the long history of why Hollywood can be
trusted only to cheat everyone they deal with.

The music-industry equivalent of charging for 30% more seconds than
you deliver is the contract provision for "breakage".  No artist gets
paid for more than 90% of the albums that the record company sells,
because in the days of shellac records, about 10% of them would break
in shipping.  That problem largely went away with vinyl records, and
went even further away with CDs.  Today's actual breakage is way under
1%.  But record companies won't sign a contract that pays the artist
for more than 90% of the albums shipped on CD.  That 10% underpayment
of musicians goes straight back to the record company's profits.

Their DRM software will cheat users the same way -- or a different 
way -- or a hundred different ways.  And TCPA will make it un-auditable
by us.

	John



---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com
