[Cryptography] defaults, black boxes, APIs, and other engineering thoughts

Jerry Leichter leichter at lrw.com
Wed Jan 8 07:15:14 EST 2014


On the general issue of Java "escapes from the sandbox":  When Java fielded the notion that you could let someone run arbitrary programs in a general-purpose language safely on your machine because the compiler and runtime system would ensure that they did nothing untoward, they were replaying a theme that has surfaced in the computer science world repeatedly over the years.  Burroughs actually built a series of machines, starting, I believe, in the late 1950s, which "had no assembly language":  It was Algol all the way down.  The hardware was designed to enforce Algol rules.  Only approved compilers could mark files as executable, and they were "correct by design" and could never emit code that violated the rules.

It was a great idea, completely secure - until it wasn't, when someone realized that you could take a backup tape written on one of these machines to a different kind of machine, muck with the contents, then restore it with unsafe code marked as "certified safe".

Knowing this history - and the history of other, similar failures - I was skeptical when I saw these claims.  Sun and the Java designers, to their credit, put a good security team together and designed a sane, reasonably small security model that one just might be able to get right and *keep* right.  But it was not to be.  Java has joined the ranks of many such "absolutely secure" environments that are "absolutely secure" - except when they aren't, and need yet another patch.

This is a dream that won't die.  In the Web world, we tell people to turn off Java - but Javascript is pervasive.  (I ran my browsers with Javascript disabled for years, but it eventually became untenable.)  Javascript has a reasonable track record for safety, though it's hard to disentangle how much of that comes from limitations on what it can do, how much comes at the cost of performance, and how much reflects a laxer view of what security properties Javascript is supposed to enforce in the first place.  Google's NaCl is an attempt to do what Java did with actual raw code.  We'll see how that works out.

We seem to be learning.  We've been trying for 60-odd years, after all!  There's some good theory now (e.g., proof-carrying code), though how much actual application that theory has had, I don't know - NaCl seems to have picked up some themes from PCC, but I think the actual implementation is entirely different.  Still, the long history of failures - often, as in the case of Java, after apparent initial success - should make one extremely cautious.
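To make the NaCl/PCC idea concrete, here's a toy sketch of static verification of untrusted code.  This is purely illustrative - the real Native Client validator works on actual x86/ARM machine code, and all the names and the "instruction set" below are invented for the example.  The core idea it shows: before running anything, statically check that every instruction is on a whitelist and that every jump lands on an aligned instruction boundary, so control flow can't escape into unchecked territory.

```python
# Toy static verifier, loosely in the spirit of NaCl-style checking.
# A "program" is a list of (opcode, operand) tuples, one per slot.

SAFE_OPS = {"add", "sub", "load", "store", "jmp", "halt"}
BUNDLE = 4  # toy alignment rule: jump targets must be multiples of this


def verify(program):
    """Return True only if every instruction is whitelisted and every
    jump target is in bounds and bundle-aligned."""
    for op, arg in program:
        if op not in SAFE_OPS:
            return False                  # forbidden instruction
        if op == "jmp":
            if not (isinstance(arg, int) and 0 <= arg < len(program)):
                return False              # jump out of bounds
            if arg % BUNDLE != 0:
                return False              # misaligned jump target
    return True
```

The point of the sketch is that safety is decided once, up front, by a small checker - the untrusted code never gets a chance to argue.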

BTW, I find the distinction between the continuing failure of software-based security and the apparent success of hardware-enforced security quite fascinating.  There were some very early failures in hardware-enforced privilege separation - MIT's old Project MAC had the classic examples - but I haven't heard of problems of this sort in many years.  The Project MAC failures were clearly the result of too much complexity - hardware support of complex segment-based memory with complicated security semantics - and for many years we kept things very simple:  a privileged and a non-privileged mode, with a simple division of resources into those accessible everywhere and those accessible only from privileged mode.  Virtual memory was more complex, but still fairly straightforward, with a central "reference monitor" in the page tables.  In general, we kept things fairly simple, and they worked.
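The "reference monitor in the page tables" idea can be sketched in a few lines.  This is a toy model, not any real MMU - the field names and the check are simplified assumptions for illustration - but it captures why the scheme held up: every memory access funnels through one small, uniform check against a per-page entry.

```python
# Toy page-table reference monitor: one choke point for every access.
from dataclasses import dataclass


@dataclass
class PageEntry:
    present: bool = True      # is the page mapped at all?
    writable: bool = False    # may it be written?
    user_ok: bool = False     # reachable from non-privileged mode?


def access_allowed(entry, privileged, write):
    """Single check applied to every access, privileged or not."""
    if not entry.present:
        return False          # would fault: page not mapped
    if write and not entry.writable:
        return False          # write to a read-only page
    if not privileged and not entry.user_ok:
        return False          # user mode touching a privileged page
    return True
```

The security argument is only as long as this function - which is precisely why the simple privileged/non-privileged split proved so much more durable than the elaborate segment machinery that preceded it.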

Looked at from this point of view, the explosion in complexity in modern chips - virtual machine support, System Management Mode, "secure enclaves", etc. - should be worrying.  System Management Mode is well known to have some very problematic aspects - though so far as I know, they are all in the way it gets used, rather than in the hardware implementations of the primitives.  But should we have any confidence that it will stay that way?  Yes, the hardware guys have gotten really good at making sure their hardware actually does what's specified - the Intel divide bug probably could not be repeated today. But all those techniques are aimed at errors "in natural conditions", not under active attack from an intelligent opponent.

                                                        -- Jerry
