Palladium summary?

R. A. Hettinga rah at shipwright.com
Sun Jul 7 15:42:17 EDT 2002


http://vitanuova.loyalty.org/2002-07-05.html

Friday, July 5, 2002

Palladium summary?

Cory says I wrote some nice detailed notes about Palladium. Personally, I
thought I wrote some nice detailed notes about Descartes and some cursory
notes about Palladium. :-)

I'm reminded of when I wrote about a Richard Dawkins speech and Leonard
said I'd summarized it. So let me actually try to summarize most of our
Microsoft meeting.

The Microsoft meeting

(I am omitting some "sensitive" material, but none of what I omit is
material which I think would embarrass Microsoft or expose it to criticism.
No part of the meeting was under an NDA or confidentiality agreement.)

Please don't attribute anything below to Microsoft, e.g. in a news article;
instead, you should call them to confirm it. I'm just giving my impressions
and my understanding based on some fairly sparse notes.

    * Peter Biddle at Microsoft began thinking around 1997 about how to
protect his bits when they were on someone else's computer. (He was
Microsoft's representative at CPTWG and in the DVD-CCA, and was somewhat
skeptical of the technical efficacy of software-based DRM.)
    * His view, and the view of some of his colleagues, was that they
ultimately did not know how to enforce a policy for the use of information
when it was kept and used on somebody else's PC. The PC platform did not
seem to support this.
    * In thinking about this, he decided that "a blob is a blob". ["Blob"
is a database term for "binary large object", and roughly means "file",
"data structure", or "sequence of bits whose internal structure is
unanalyzed".] So, it was not appropriate to think about protecting some
bits more than others, or enforcing some kinds of policies but not others.
So the protection of privacy was the same technical problem as the
protection of copyright, because in each case bits owned by one party were
being entrusted to another party and there was an attempt to enforce a
policy. Technologically, this could not be done securely with software
alone.
    * It is hard to imagine how, in software alone, one part of the
functionality of a general-purpose operating system can be protected from
another part.
The existing PC architecture does not support this kind of
compartmentalization. Consequently, a virus could potentially access or
capture any kind of data (including very sensitive personally identifiable
information, financial and medical records, etc.), and redistribute it over
a network.
    * Similarly, an emulator or debugger could be used to extract
copyrighted material and redistribute it or use it contrary to policy.
    * The view of some people working on Palladium is that it's appropriate
to create technology which would permit each creator of any kind of digital
information to set and enforce any policy whatsoever for the use of that
information. (If you don't want to abide by the policy, you don't have to
accept the information.) There are various subtleties here and some debate
about public policy, but the basic assumption is that you have a right to
control, if you wish, how other people will use bits you create.
    * [Omit some discussion of business models, DRM, file sharing,
legislation, etc.]
    * Microsoft does not have the desire or the means to control any
information which enters a computer outside the scope of DRM or Palladium
(e.g. in unencrypted formats such as MP3), and intends to continue
supporting such formats.
    * Microsoft employees have a broad variety of opinions on legal and
technical issues related to copyright enforcement. The company's position
is that the use of DRM should be purely voluntary (in the sense in which
the industry uses that term; they do not have a public position that the
DMCA's anticircumvention provisions need to be modified).
    * Microsoft wants to compete with proprietary platforms which offer DRM
[to publishers], such as proprietary consumer electronics platforms.
Microsoft believes that, if it did not support DRM at all, it would be at a
competitive disadvantage relative to proprietary platforms which did.
    * The Palladium architecture has been under development since around
1997, and Microsoft holds or has filed for some patents which cover
portions of it. At least one of the inventors of the Digital Rights
Management Operating System patent is working on Palladium, although we did
not discuss whether the DRM OS patent is related to Palladium, whether
Microsoft is writing an operating system using the techniques disclosed in
that patent, or whether the DRM OS patent covers any parts of Palladium.
    * Palladium is distinct from TCPA and has technical differences from
TCPA. It has some architectural points in common with TCPA, including, most
significantly, the use of "trusted hardware" within a PC in order to
establish a root of trust. Both TCPA and Palladium require modifications to
existing PC hardware architecture in order to work. In addition, they both
require modifications to software in order to use trust features. Both are
intended to run existing "untrusted" software without any modifications.
    * Palladium would, inter alia, add a new opcode and a new operating
mode to the CPU. A portion of the enforcement resides within the CPU itself.
    * Microsoft assumed as a design criterion for Palladium that existing
versions of Windows should be able to run on a Palladium PC, as should
existing Windows applications, as should existing non-Windows operating
systems like Linux. There is no attempt to stop people from booting
whatever code they currently use or may write in the future. In addition,
the hardware trust features can potentially be used by specially-adapted
software, regardless of what operating system is running. It is possible to
imagine that a Palladium-hardware-aware version of Linux could be created
and could make full use of Palladium's hardware features in order to
achieve trust comparable to the Windows implementation. Microsoft is only
writing an implementation for Windows, but plans to publish all the
technical details. (Microsoft has not yet decided about patent policies or
stated whether an operating system which used Palladium hardware features
would necessarily infringe any of Microsoft's Palladium patents.)
    * Microsoft, like chemists, calls Palladium "Pd" for short.
    * I'm going to type ":abbr pd Palladium" in vi so that I can stop
typing "Palladium" all the time. Hooray for :abbr!
    * The initial version of Palladium will require changes to five parts
of the PC's hardware. Changes will be required to the CPU, the chipset (on
the motherboard), the input devices (e.g. keyboard), and the video output
devices (graphics processor). In addition, a new component must be added: a
tamper-resistant secure cryptographic co-processor, which Microsoft calls
SCP or SPP.
    * Although the SCP is tamper-resistant, it is likely that a skilled
attacker with physical access to the inside of a Palladium PC can still
compromise it or subvert its policies in some way. One possible attack is
one I discussed with Ross Anderson last week: you can replace the system
RAM with special RAM which allows its contents to be read or modified by an
external circuit.
    * So it is possible that an attacker with physical access can still
compromise the system, even though the SCP is meant to be tamper-resistant,
partly because other components (like RAM) are less robust against
modification. Palladium primarily defends effectively against two classes
of attacks: (1) remote network-mounted attacks (buffer overflows and other
programming flaws, malicious mobile code, etc.), because even if some
malicious code is installed in one part of the system, it still can't
effectively subvert the policy of another part of the system, and (2) local
software-based attacks, including things like using a debugger to try to
read a program's internal state while it's executing or to try to subvert
its policy. Thus, Palladium can probably guarantee that you can't write or
download any software (and nobody else can write or upload any software to
you) which would compromise the policy of locally-running software that
makes use of Palladium trust features.
    * Although hardware attacks can work, they are probably not portable
from one machine to another. This is especially interesting for users of
DRM -- even if one user mounts an expensive and successful attack, that
user can't publish an inexpensive software-based technique or HOWTO which
would let others reproduce the attack cheaply. (Cue reference
to Bunnie's X-BOX reverse engineering paper, where he suggests that his
inexpensive attack on the X-BOX can yield portable techniques which can be
used by others inexpensively, but that the X-BOX could have been designed
so that his attack was not readily portable to other machines.)
    * Palladium's changes to the CPU allow it to be placed into a new mode
where certain areas of memory are restricted via a technique called "code
curtaining" to an ultra-privileged piece of code called the "nub" or "TOR".
("Nub" is the Palladium team's term for this code, and "TOR", for "Trusted
Operating Root", is the official public term.) The nub is a kind of trusted
memory manager, which runs with more privilege than an operating system
kernel. The nub also manages access to the SCP.
    * The SCP is an 8-bit tamper-resistant cryptographic smart-card which
contains unique keys, including public keypairs (2048-bit RSA), and
symmetric keys for AES in CBC mode. These keys are unique per machine and
the SCP does not reveal them to anything outside the SCP's security
perimeter. It also contains a variety of other cryptographic functionality,
including SHA-1, RSA, AES, and other cipher implementations, a small amount
of memory, and a monotonic counter. The SCP can perform a number of cryptographic
protocols. It also contains a thing called a PCR. (I think that stands for
"platform configuration register".)
    * When you want to start a Palladium PC in trusted mode (note that it
doesn't have to start in trusted mode, and, from what Microsoft said, it
sounds like you could even imagine booting the same OS in either trusted or
untrusted mode, based on a user's choice at boot time), the system hardware
performs what's called an "authenticated boot", in which the system is
placed in a known state and a nub is loaded. A hash (I think it's SHA-1) is
taken of the nub which was just loaded, and the 160-bit hash is stored
unalterably in the PCR, and remains there for as long as the system
continues to operate in trusted mode. Then the operating system kernel can
boot, but the key to the trust in the system is the authentication of the
nub. As long as the system is up, the SCP knows exactly which nub is
currently running; because of the way the CPU works, it is not possible for
any other software to modify the nub or its memory or subvert the nub's
policies. The nub is in some sense in charge of the system at a low level,
but it doesn't usually do things which other software would notice unless
it's asked to.
    * Palladium's authenticated boot is simpler than TCPA's version,
because only a single hash (or "measurement", in TCPA language) is taken.
Palladium does not attempt to "measure" the hardware, BIOS, boot loader, OS
kernel, etc., or at least not within the SCP. In TCPA, several separate
hashes will be taken and stored in secure registers.
    * The nub interfaces with other software on the system by means of
programs (outside the nub) called trusted agents (or TAs). The TAs can
implement sophisticated policies and authentication methods, where the nub
(and SCP) just implement fairly simple primitives. A TA can also
communicate with user-space programs (at least, that will be a feature of
Microsoft's nub; other people can write their own nubs which can support
different kinds of TAs or even do without TAs entirely). The TAs are
protected by hardware from one another and from the rest of the system.
    * Even PCI DMA can't read or write memory which has been reserved to a
nub's or TA's use (including the nub's or TA's code). This memory is
otherwise completely inaccessible and can only be reached indirectly through
API calls. The chipset on the motherboard is modified to enforce this sort
of restriction.
    * The SCP provides a feature called "sealed storage" by means of two
API calls (called SEAL and UNSEAL). The Microsoft nub provides more
complicated wrappers around these calls; using the Microsoft wrappers, you
can have features like "migration strategy" or "migration policy" (allowing
at least three different policies for how encrypted data can be moved from
one machine to another). If a TA running on a system in trusted mode wants
to use sealed storage, it can call into the APIs implemented in the nub. (A
Python sketch of the authenticated boot and sealing scheme, as I understand
it, appears after this list.)
    * Sealed storage is implemented by means of encryption (sealing) or
decryption (unsealing) with a symmetric cipher (probably AES in CBC mode).
When the SCP is given data to seal, it's given two arguments: the data
itself and a 160-bit "nub identifier" (which is the SHA-1 hash of some nub
and so uniquely identifies that nub).
    * Sealing is performed by prepending the nub identifier to the data to
be sealed, and then encrypting the result with a private symmetric key -- I
want to call this the "platform-specific key", which varies from machine to
machine and is secret. (I don't remember whether "platform-specific key" is
Microsoft's term for this.) That key is kept within the SCP and is a unique
identifier for the machine which performed the sealing operation.
    * The SCP actually also prepends a random nonce to the data to be
sealed before encryption (and discards the nonce upon decryption). This is
a clever privacy feature which prevents someone from creating an
application which "cookies you" by recording the output of sealing an empty
string (and then using the result as a persistent unique identifier for
your machine). A program which tried to "cookie you" this way would find
that, because of the random nonce, the result of sealing a given string is
different every time, and no useful information about the identity of the
machine is revealed by the sealing operation.
    * After encryption, the SCP returns the encrypted result as the return
value of the SEAL operation.
    * When an SCP is given encrypted data to UNSEAL, it internally attempts
to decrypt the encrypted data using its platform-specific key. This means
that, if the encrypted data was originally sealed on a different machine,
the UNSEAL operation will simply fail. (You can't take a
sealed file and transfer it to another machine and unseal it there; because
the platform-specific key is used for encryption and decryption, and can't
be extracted from the SCP, you can only UNSEAL data on the same machine on
which it was originally SEALed.)
    * If the decryption is successful, the SCP performs a second check: it
examines the nub identifier which resides within the decrypted data. The
nub identifier was specified at the time the data was originally SEALed,
and indicates which nub is allowed to receive the decrypted data. If the
nub identifier for the decrypted data is identical to the nub identifier
which is currently stored in the PCR (which is the SHA-1 hash of the
currently-running nub on the machine at the moment UNSEAL was called), the
UNSEAL is successful and the decrypted data is returned to the calling nub.
However, if the nub identifier does not match the contents of the PCR, the
SCP concludes that the nub which is currently running is not entitled to
receive this data, and discards it.
    * Thus, sealing is specific to a physical machine and also specific to
a nub. Data sealed on one machine for a particular nub cannot be decrypted
on a different machine or under a different nub. An application which
trusts a particular nub (and is running under that nub) can seal important
secret data and then store the resulting sealed data safely on an untrusted
hard drive, or even send it over a network.
    * If you reboot the machine under a debugger-friendly nub, there is no
technical obstacle: you can debug the software which created the encrypted
file. However, since you aren't running the original (non-debugger-friendly)
nub, the debugger will work, but the UNSEAL call won't. The SCP will receive the
UNSEAL call, examine the PCR, and conclude that the currently-running nub
is not cleared (so to speak) to receive the sealed data. Your applications
can only decrypt sealed data if they are running under the same machine and
under the same software environment within which they originally sealed
that data!
    * This is remarkably clever. When you are running under a trusted nub,
your applications can use the SCP to decrypt and process data, but you
can't run software which subverts a TA's policy (because the nub will not
permit the policy to be subverted).
    * When you are not running under a trusted nub, you can run software
which subverts a TA's policy (because the nub isn't able to prevent it),
but your applications will no longer be able to decrypt any sealed data,
because the SCP won't be willing to perform the decryption.
    * There is a long discussion of how you can make a backup, or upgrade
your system, or migrate your software and data to a new system, etc. The
default with sealed storage is that any sealed data will be unusable when
migrated to a new system. (Thus Ross Anderson mentioned that you can't
easily leak a document to a reporter, because if the document is sealed for
use only on your PC, the reporter's PC will be unable to decrypt the
document.)
    * The Microsoft nub provides wrappers around the SCP's sealing features
which allow the software which performs the sealing operation to specify a
migration policy at the time the sealing operation is originally performed.
The migration policy can be (approximately) one of the following, at the
software's sole option: (1) Migration is prevented entirely, and the data
must die with the current PC where it was created. (2) Migration is
permitted upon some kind of authentication by a local user (e.g. a
password) which will decrypt or command the decryption of data temporarily
in order to permit it to be migrated. (3) Migration is permitted with the
assistance and consent of a 3rd party -- e.g. in DRM applications, the DRM
software might have to "phone home" to get consent and decryption keys
which will permit a file to be decrypted temporarily in order to permit it
to be migrated. This last option might be called a key escrow application,
although it's not technically parallel to something like the Clipper Chip,
because it doesn't facilitate wiretapping or threaten communications
privacy. (The second sketch after this list spells out these three options.)
    * Palladium's modifications to input and output hardware will prevent
software from doing certain kinds of monitoring and spoofing, as well as
"screen scraping". A program will be able to ask Palladium to display a
dialog box which can't be "obscured" or "observed" by other software, and
Palladium hardware can enforce these conditions. And there is a way to be
sure that input is coming from a physical input device and not spoofed by
another program. This is probably also comparable to the "physical presence
detection" in TCPA, which tries to ascertain whether a user is physically
present (which is a requirement in order for certain security-sensitive
things to happen).
    * The secure output features also permit, e.g., a DVD player program to
prevent other software from making screen captures. The initial version of
Palladium does not control audio output in this way, so you can still
record all sound output via something like TotalRecorder. (Microsoft also
has an initiative called Secure Audio Path which could potentially restrict
that, but SAP isn't part of Palladium proper. The Palladium secure output
features are currently totally video-specific.)
    * We didn't talk much about the details of how TAs communicate with
user-space programs, which is key to how a programmer would actually use
Palladium features. We also didn't talk about whether there is some kind of
authentication of a kernel or precisely which traditional kernel features
are taken over by the nub. Microsoft did say that most things which are
currently in the kernel will remain in the kernel.
    * In principle, nub and kernel are independent, so a non-Microsoft
kernel could run on a Microsoft nub, or vice versa. Patent and copyright
issues might prevent this from being done in practice, but it is apparently
technically possible within the design of Palladium.
    * Microsoft's nub, including its source code, will be published for
review by anyone who wants to examine it, in order to allow all of
Microsoft's claims about its security properties to be verified. There is
no part of Palladium's design or code which needs to be kept secret,
although each SCP will contain secret cryptographic keys loaded at the time
of its manufacture. Microsoft will encourage non-Microsoft people to read
and discuss its nub. You will also be able to create your own nub, except
that changing the nub will (as discussed above) prevent previously-sealed
data from being decrypted.
    * If you choose to allow people on the network to tell which nub you
are running, they can probably find out in a way you can't fake (using a
cryptographic protocol). You can refuse to tell them, but if you do choose
to tell them, you will not be able to lie about it (except maybe if you
know a way to tamper with the hardware). This is like TCPA; critics note
that many entities which use Palladium might assume by default that any
non-Microsoft nub is untrustworthy, which would make it very inconvenient
to change your nub from the Microsoft-supplied default.
    * Your nub's identifier is not a unique identifier for your machine,
because it is the same as that of everyone else running the same nub.
Palladium does not create
any remotely-visible unique identifier for your machine, and actually
contains some features to try to avoid inadvertently disclosing a unique
identifier. There is a concept of an "identity server" which is a separate
service which issues you some kind of identity credential which uses
Palladium and may or may not reveal particular personal information. (I
didn't get a lot of details on how identity servers would work or who would
run them.)
    * Microsoft suggests that Palladium is flexible enough that many
entities could use it to create their own policies, judgments, certification
services, etc. That part of the discussion reminded me in some ways of PICS
and P3P, although Palladium has a more robust technical enforcement
mechanism than either of those standards.
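
Here is the first sketch I promised above: a toy Python model of the
authenticated boot and sealed storage scheme as I understand it. All of the
names (ScpModel, authenticated_boot, etc.) are my own inventions for
illustration; Microsoft has not published an API, and details like key
sizes and the exact message layout are my assumptions.

    # Toy model of the SCP: a platform-specific key that never leaves the
    # chip, and a PCR latched once per boot with the SHA-1 hash of the nub.
    # Illustrative only -- names, layout, and key sizes are assumptions.
    import os
    import hashlib
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    class ScpModel:
        def __init__(self):
            # Unique per machine, set at manufacture, never revealed.
            self._platform_key = os.urandom(32)  # AES-256 stand-in
            self._pcr = None                     # empty until authenticated boot

        def authenticated_boot(self, nub_code: bytes):
            """Latch the hash of the loaded nub; it stays fixed all boot."""
            if self._pcr is not None:
                raise RuntimeError("PCR already latched for this boot")
            self._pcr = hashlib.sha1(nub_code).digest()  # 160-bit nub identifier

        def seal(self, data: bytes, nub_id: bytes) -> bytes:
            """SEAL: encrypt (nonce || nub_id || data) under the platform key."""
            nonce = os.urandom(16)  # makes two seals of the same data differ
            iv = os.urandom(16)
            padder = padding.PKCS7(128).padder()
            padded = padder.update(nonce + nub_id + data) + padder.finalize()
            enc = Cipher(algorithms.AES(self._platform_key),
                         modes.CBC(iv)).encryptor()
            return iv + enc.update(padded) + enc.finalize()

        def unseal(self, blob: bytes) -> bytes:
            """UNSEAL: decrypt under the platform key (fails for blobs sealed
            on another machine), then release the data only if the embedded
            nub identifier matches the PCR."""
            iv, ciphertext = blob[:16], blob[16:]
            dec = Cipher(algorithms.AES(self._platform_key),
                         modes.CBC(iv)).decryptor()
            padded = dec.update(ciphertext) + dec.finalize()
            unpadder = padding.PKCS7(128).unpadder()
            plaintext = unpadder.update(padded) + unpadder.finalize()
            nub_id, data = plaintext[16:36], plaintext[36:]  # nonce discarded
            if nub_id != self._pcr:
                raise PermissionError("sealed for a different nub")
            return data

    scp = ScpModel()
    nub = b"trusted nub image"
    scp.authenticated_boot(nub)
    nub_id = hashlib.sha1(nub).digest()
    blob = scp.seal(b"secret", nub_id)
    assert scp.unseal(blob) == b"secret"
    # The nonce defeats "cookie" tracking: sealing the same data twice
    # yields different blobs.
    assert scp.seal(b"", nub_id) != scp.seal(b"", nub_id)

Note how both checks described above fall out of this construction: a blob
sealed on another machine fails to decrypt at all (wrong platform key),
while a blob sealed for a different nub decrypts inside the SCP but is
withheld (wrong PCR).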
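And the second sketch: the three migration policies, modeled as a
hypothetical wrapper decision. Again, the names and the function are mine;
the real Microsoft nub API is unpublished.

    # Hypothetical sketch of the three migration policies described above.
    from enum import Enum

    class MigrationPolicy(Enum):
        NO_MIGRATION = 1   # data dies with the PC where it was created
        LOCAL_AUTH = 2     # a local user (e.g. a password) can authorize it
        THIRD_PARTY = 3    # a third party (e.g. a DRM server) must consent

    def may_migrate(policy: MigrationPolicy,
                    local_auth_ok: bool = False,
                    third_party_ok: bool = False) -> bool:
        """Decide whether sealed data may be temporarily decrypted so it
        can be re-sealed on another machine."""
        if policy is MigrationPolicy.NO_MIGRATION:
            return False
        if policy is MigrationPolicy.LOCAL_AUTH:
            return local_auth_ok
        return third_party_ok

    assert not may_migrate(MigrationPolicy.NO_MIGRATION, local_auth_ok=True)
    assert may_migrate(MigrationPolicy.LOCAL_AUTH, local_auth_ok=True)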

We talked about lots of other things, but that's all I have notes on.

I also wrote a message to the cryptography list in response to someone who
wondered whether Palladium would prevent you from writing your own programs
and scripts:

> * or are not able to use shell scripts (at least not in
>   trusted context). This means a
>   strict separation between certified software and data.

The latter is closest to what's intended in Palladium. Individual programs
using Palladium features are able to prevent one another from reading their
executing or stored state. You can write your own programs, but somebody
else can also write programs which process data in ways that your own
programs can't observe or interfere with.

The Palladium security model and features are different from Unix, but you
can imagine by rough analogy a Unix implementation on a system with
protected memory. Every process can have its own virtual memory space, read
and write files, interact with the user, etc. But normally a program can't
read another program's memory without the other program's permission.

The analogy starts to break down, though: in Unix a process running as the
superuser or code running in kernel mode may be able to ignore memory
protection and monitor or control an arbitrary process. In Palladium, if a
system is started in a trusted mode, not even the OS kernel will have
access to all system resources. That limitation doesn't stop you from
writing your own application software or scripts.

Interestingly, Palladium and TCPA both allow you to modify any part of the
software installed on your system (though not your hardware). The worst
thing which can happen to you as a result is that the system will know that
it is no longer "trusted", or will otherwise be able to recognize or take
account of the changes you made. In principle, there's nothing wrong with
running "untrusted"; particular applications or services which relied on a
trusted feature, including sealed storage (see below), may fail to operate.

Palladium and TCPA both allow an application to make use of hardware-based
encryption and decryption in a scheme called "sealed storage" which uses a
hash of the running system's software as part of the key. One result of
this is that, if you change relevant parts of the software, the hardware
will no longer be able to perform the decryption step. To oversimplify
slightly, you could imagine that the hardware uses the currently-running OS
kernel's hash as part of this key. Then, if you change the kernel in any
way (which you're permitted to do), applications running under it will find
that they're no longer able to decrypt "sealed" files which were created
under the original kernel. Rebooting with the original kernel will restore
the ability to decrypt, because the hash will again match the original
kernel's hash.
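
A minimal sketch of that oversimplification, assuming (my assumption, not
the spec's) that the sealing key is derived by hashing a platform secret
together with the kernel hash:

    # Illustrative key derivation only; the real TCPA/Palladium scheme is
    # more involved (the hash gates release of the key rather than
    # deriving it, as described above).
    import hashlib

    def sealing_key(platform_secret: bytes, kernel_image: bytes) -> bytes:
        kernel_hash = hashlib.sha1(kernel_image).digest()
        return hashlib.sha256(platform_secret + kernel_hash).digest()

    # Any change to the kernel changes the derived key, so old sealed files
    # no longer decrypt; rebooting the original kernel restores the key.
    assert sealing_key(b"s", b"kernel v1") != sealing_key(b"s", b"kernel v2")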

(I've been reading TCPA specs and recently met with some Microsoft
Palladium team members. But I'm still learning about both systems and may
well have made some mistakes in my description.)

I'm also ignoring, for the time being, my attacks on Peter Biddle's "a blob
is a blob" and "privacy = content protection" claims. That topic is
interesting to me, but not necessarily urgent for people who are looking
for more information on Palladium. But you should read his and Paul
England's presentations from WinHEC.

Google juice

I really like that the first Google hit for "Jack Valenti" is now Jack
Valenti's 1982 VCR testimony! That's the first time I've ever been
responsible for putting something ahead of a famous person's own home page
in the Google results for a search on that person's name.

This result is not inaccurate. If you want to know who Jack Valenti is, and
what he has done and what he has had to say, the 1982 testimony is a key
document.


-- 
-----------------
R. A. Hettinga <mailto: rah at ibuc.com>
The Internet Bearer Underwriting Corporation <http://www.ibuc.com/>
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at wasabisystems.com


