Overcoming the potential downside of TCPA

John S. Denker jsd at monmouth.com
Wed Aug 14 13:43:50 EDT 2002


bear wrote:
> 
> The problem with this idea is that TCPA is useless.  

That's an overstatement;  better versions appear below.

> For all the *useful*
> things you are thinking of, you need TCPA plus an approved key.  The only
> way you are going to get an approved key is inside a tamper-resistant chunk
> of hardware.  

Yes!  Now we're talking.

BTW I would have said _supposedly_ the only way to get an approved key
is .....

> If you should manage to extract the key, then yes, you'll be
> able to create that CD.  

Right.

> The idea is that you shouldn't get anywhere without hardware hacking. 

OK, that's an important part of the idea.

> The
> people doing this have decided hardware hacks are acceptable risks because
> they only want to protect cheap data -- movies, songs, commercial software,
> whatever.  They are sticking to stuff that's not expensive enough to justify
> hardware hacks.

OK, that's part of the story, but not the whole story.

1) The conversation over the last many days seems like the
classic blind men + elephant story.  Different people have
emphasized different parts of a complex issue.

2) The conversation has been greatly impeded by extremism.
People have been talking in black-and-white terms when
shades of gray are needed.

For starters, we must stop talking about "trust" as a binary
quantity.  That's like asking whether a lock is unpickable
or a cipher is unbreakable -- it's usually the wrong question.
We need to talk about _how much_ something is trusted.  We
need to be clear about threat models.  The current TCPA stuff
must be rated somewhere between "amateurish" and "preliminary"
because it doesn't clearly articulate what threat(s) it is
meant to address.  (Not to mention lying about what threats it 
is meant to address. :-)


> But the idea is that you, the hardware owner, are
> not authorized to extract the information contained in your own hardware.
> I find the idea of "owning" something without having the legal right to
> open it up and look inside legally dubious at best, but I'm no lawyer....

Again, let's not get carried away.  A long time ago, in a galaxy
far away, I derived a bunch of income from the sale of software 
back when most people who tried that lost their shirt.  Part of 
the story was that the software was sold in cartridges that could 
not be duplicated without hardware hacking and/or substantial 
investment in non-standard equipment.  So there was no widespread 
grass-roots copying.  At the opposite end of the spectrum, any 
large-scale copying would have been detected and stopped by legal 
action.

In this case, as the creator of the intellectual property, I didn't 
much care if you "looked inside" the cartridge.  The main thing
that mattered to me was preventing rampant copyright infringement.

I don't want to start a holy war about copyrights _per se_.  Probably 
nobody thinks that current copyright laws are ideal from a public-policy 
point of view.  But I do think that there ought to be _some_ way to 
make sure authors and performers get paid.

Similar notions apply to patents and inventors.  In revolutionary
France, they did away with patents and copyrights ... but they
quickly discovered that was a really bad idea and reversed course.

Also, having and using something that isn't
100% yours isn't such a radical idea.  I have partial control over 
my credit-cards and badges, but they technically belong to the 
issuers, not to me, and the owners can demand that I return them.
Whether you want to use a computer (or anything else) that you don't 
fully own and don't fully control is a complex shades-of-gray 
question.  It depends on what it's good for, and what it costs
(including direct charges plus possibly very-indirect downsides).

> However, if this infrastructure does in fact become trusted and somebody
> tries to use it to protect more valuable data, God help them.  They'll get
> their asses handed to them on a platter.

Again, the question is _how much_ trust, including how much
tamper resistance.  The answer depends on how much you value
the data that is being protected.

==================

Important parts of the elephant include
  -- what threats are to be addressed (type and magnitude of threat)
  -- what degree of hardware tamper resistance
  -- what degree of security of keys, key infrastructure, etc.
  -- what range of policies is to be supported
  -- who gets to set policy
  -- who decides who trusts whom, and how much
  -- etc. etc.

These items are interlinked.  In particular, AFAICT to have a
useful system, somebody will have to jointly certify that 
such-and-such piece of hardware is tamper resistant _and_ holds 
a particular key.

(And of course many items are totally dependent on the threat
model.  For example, the required degree of hardware
tamper-resistance might be rather modest for a low-grade 
communications endpoint inside a well-guarded facility, and 
vastly higher for the Permissive Action Link on a weapon carried 
on a bomber, which could conceivably be captured and subjected 
to intense scrutiny.)
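To make the joint-certification point concrete, here is a minimal
sketch in Python.  It is purely illustrative: real TCPA endorsement
uses X.509 certificates and public-key signatures, whereas this
sketch uses an HMAC with a hypothetical certifier secret so it stays
self-contained.  All names (certify, accept, the tamper-level scale)
are invented for the example.  The point it demonstrates is that the
certifier vouches for the key and the tamper-resistance rating in
one statement, and the verifier grades that rating against its own
threat model rather than treating trust as a yes/no bit.

```python
import hashlib
import hmac
import json

# Hypothetical certifier signing secret, assumed shared with verifiers.
# (A real system would use public-key signatures instead.)
CERTIFIER_SECRET = b"certifier-signing-secret"

def certify(device_key_id: str, tamper_level: int) -> dict:
    """Certifier jointly binds a device key to a tamper-resistance
    rating in a single signed statement."""
    claim = {"key_id": device_key_id, "tamper_level": tamper_level}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(CERTIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def accept(cert: dict, required_level: int) -> bool:
    """Verifier checks the certifier's signature AND that the rated
    tamper resistance meets its own threat model -- trust is a matter
    of degree, not a binary."""
    payload = json.dumps(cert["claim"], sort_keys=True).encode()
    expected = hmac.new(CERTIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["tag"]):
        return False  # forged or altered claim
    return cert["claim"]["tamper_level"] >= required_level

cert = certify("endorsement-key-42", tamper_level=3)
print(accept(cert, required_level=2))  # modest threat model: accepted
print(accept(cert, required_level=5))  # demanding threat model: rejected
```

Note that the same certificate is good enough for the guarded-facility
endpoint and not good enough for the Permissive Action Link: the
verifier's threat model, not the certificate alone, decides.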

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at wasabisystems.com
