trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

Adam Back adam at cypherspace.org
Mon Aug 12 17:13:58 EDT 2002


At this point we largely agree: security is improved, but the limit
remains assuring the security of over-complex software.  To sum up:

The limit of what is securely buildable now becomes what is securely
auditable.  Before, without Palladium, the limit was the security of
the OS, so this makes a big difference.

Yes, some people may design over-complex trusted agents, with sloppy
APIs and so forth, but the nice thing about trusted agents is that
they are compartmentalized:

If the MPAA and Microsoft shoot themselves in the foot with a badly
designed, over-complex DRM trusted agent component for MS Media
Player, it has no bearing on my ability to implement a secure
file-sharing or secure e-cash system in a compartment with rigorously
analysed APIs and well-audited code.  The leaky, compromised DRM app
can't compromise the security policies of my app.
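
To make the compartment point concrete, here's a rough sketch in
Python (mine; Palladium's real TOR/SCP interfaces aren't public, so
the names and the derivation scheme below are assumptions) of why one
agent's sealed data is useless to another: sealed-storage keys are
bound to the measured identity of the agent's code.

# Hypothetical sketch of per-compartment sealed storage.  The names,
# the derivation scheme and PLATFORM_SECRET are assumptions for
# illustration only.  The point: storage keys are bound to the hash of
# an agent's code, so a compromised DRM agent can't derive -- and so
# can't unseal -- the e-cash agent's data.

import hashlib
import hmac

PLATFORM_SECRET = b"per-machine secret held inside the SCP"  # assumption

def seal_key_for(agent_code: bytes) -> bytes:
    """Derive a storage key bound to the hash (identity) of an agent's code."""
    agent_digest = hashlib.sha256(agent_code).digest()
    return hmac.new(PLATFORM_SECRET, agent_digest, hashlib.sha256).digest()

drm_key   = seal_key_for(b"...code of the DRM media player agent...")
ecash_key = seal_key_for(b"...code of the e-cash agent...")
assert drm_key != ecash_key   # different code identity, unrelated keys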

Also, it's unclear from the limited information available, but it may
be that trusted agents, like other ring-0 code (e.g. the OS itself),
can delegate tasks to user-mode code running in trusted agent space.
Such user-mode code can't examine other user-level space, nor the
space of the trusted agent which started it, and also can't be
examined by the OS.

In this way, for example, remote exploits could be better contained
by sub-dividing the trusted agent code.  E.g. the crypto could be
done by the trusted agent proper, and the MPEG decoding by a
user-mode component; compromise the MPEG decoder and you just get
plaintext, not keys.  Various divisions could be envisaged; a rough
sketch of this split follows.
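
Here's a minimal Python sketch of that split, using ordinary OS
processes and a toy XOR cipher as stand-ins for Palladium's
trusted-agent/user-mode separation and real content crypto (both are
illustrative assumptions, not the real mechanism): the agent process
holds the content key, and only plaintext frames ever cross the pipe
to the decoder process.

# Rough sketch: the "agent" process holds the content key and decrypts;
# the "decoder" process only ever receives plaintext frames over a pipe.
# OS processes and a toy XOR cipher are assumptions for illustration.

from multiprocessing import Pipe, Process

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'cipher' standing in for real content encryption/decryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def agent(conn, key, encrypted_frames):
    """Trusted side: decrypts frames, hands only plaintext to the decoder."""
    for frame in encrypted_frames:
        conn.send(toy_cipher(key, frame))
    conn.send(None)  # end-of-stream marker
    conn.close()

def decoder(conn):
    """Untrusted side: sees plaintext frames; the key never reaches it."""
    while True:
        frame = conn.recv()
        if frame is None:
            break
        print("decoding frame:", frame)

if __name__ == "__main__":
    key = bytes(range(16))                        # held by the agent only
    frames = [toy_cipher(key, b"frame %d" % i) for i in range(3)]
    agent_end, decoder_end = Pipe()
    worker = Process(target=decoder, args=(decoder_end,))
    worker.start()
    agent(agent_end, key, frames)
    worker.join()

Compromising the decoder process gets an attacker the frames it was
already going to display, but not the key material.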


Given that most current applications don't even get the simplest uses
of encryption right (storing the key and password in the encrypted
file, and checking whether the password is right by string comparison,
is surprisingly common), the prospects are not good for general
applications.  However, it becomes more feasible to build secure
applications in the environments where it matters, or where the
consumer cares sufficiently to pay for the difference in development
cost.
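
For contrast, here's that anti-pattern next to the obvious fix, in
Python (my illustration, not drawn from any particular product): store
only a random salt plus a KDF-derived verifier, never the password or
the key, and compare in constant time.

# The anti-pattern vs. a sane approach, sketched for illustration.

import os
import hashlib
import hmac

def make_header_broken(password: str) -> bytes:
    # Anti-pattern: the secret sits right there in the "encrypted" file.
    return b"PWD:" + password.encode()

def check_broken(header: bytes, password: str) -> bool:
    return header == b"PWD:" + password.encode()   # plain string comparison

def make_header_better(password: str) -> bytes:
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    verifier = hmac.new(key, b"verifier", hashlib.sha256).digest()
    return salt + verifier           # no password and no key stored

def check_better(header: bytes, password: str) -> bool:
    salt, verifier = header[:16], header[16:]
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    expected = hmac.new(key, b"verifier", hashlib.sha256).digest()
    return hmac.compare_digest(verifier, expected)  # constant-time compare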

Of course all this assumes Microsoft manages to securely implement a
TOR and SCP interface, and manages to successfully use trusted IO
paths to prevent the OS and applications from tricking the user into
bypassing intended trusted agent functionality (another interesting
sub-problem).  CC EAL3 on the SCP is a good start, but they are under
pressure to make the TOR and Trusted Agent APIs flexible, so we'll
see how that works out.

Adam
--
http://www.cypherspace.org/adam/

On Mon, Aug 12, 2002 at 04:32:05PM -0400, Tim Dierks wrote:
> At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
> >At some level there has to be a trade-off between what you put in
> >trusted agent space and what becomes application code.  If you put the
> >whole application in trusted agent space, then while all its
> >application logic is fully protected, the danger will be that you have
> >added too much code to reasonably audit, so people will be able to
> >gain access to that trusted agent via buffer overflow.
>
> I agree; I think the system as you describe it could work and would be
> secure, if correctly executed. However, I think it is infeasible to
> generally implement commercially viable software, especially in the
> consumer market, that will be secure under this model. Either the
> functionality will be too restricted to be accepted by the market, or there
> will be a set of software flaws that allow the system to be penetrated.
>
> The challenge is to put all of the functionality which has access to
> content inside of a secure perimeter, while keeping the perimeter secure
> from any data leakage or privilege escalation. [...]



