[Cryptography] Speculation re Intel HW cockup; reqs. OS rewrites & slow execution

Jerry Leichter leichter at lrw.com
Fri Jan 5 06:34:13 EST 2018


> Wouldn't this be a good time to think about putting x86 & x86-64 out of everyone's misery?
> 
> Have there been any *clean sheet* architecture designs since the Snowden revelations?
Let's follow this thought through all the way.

For the last 75 years, a driving force in the computer design business has been:  Computer hardware is expensive; we can't afford to dedicate a machine entirely to one person/program, so let's find a way to share the expensive machine among multiple simultaneous users.  Once we did that, of course, we bought into the problem of keeping those users safely isolated from each other.  The solutions have gotten more and more complex, but the promised land of "effective sharing with complete isolation" hasn't moved - and we show no signs of having reached it.

Perhaps the solution is to avoid the root cause.  Imagine a system with a "physical hypervisor".  It has a large number of *completely isolated* cores:  Each with its own caches, branch predictors, and paths to memory.  The "hypervisor" assigns *physical* cores to processes, not logical/virtual ones.  Sharing of resources is serial, with a complete hardware reset between assignments of a core to different processes.  We're getting to the point where something like this would perhaps be practical for high-security domains.
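To make the idea concrete, here's a rough sketch of the core-allocation logic such a "physical hypervisor" might use.  Every name in it is invented for illustration - no real hardware exposes an interface like this:

    /* Hypothetical sketch of a "physical hypervisor" allocator.
     * All names (core_t, hw_full_reset, ...) are made up. */

    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_CORES 64

    typedef struct {
        int  id;
        bool busy;
    } core_t;

    static core_t cores[NUM_CORES];

    /* Assumed primitive: wipe *all* microarchitectural state on a
     * core - caches, branch predictors, TLBs, buffers - as if the
     * core had been power-cycled. */
    extern void hw_full_reset(int core_id);

    /* Assign an entire physical core to one process.  While
     * assigned, the core shares nothing with any other core. */
    int assign_core(int pid) {
        for (size_t i = 0; i < NUM_CORES; i++) {
            if (!cores[i].busy) {
                cores[i].busy = true;
                /* ...start pid running on cores[i].id... */
                return cores[i].id;
            }
        }
        return -1;  /* no free core: the process simply waits */
    }

    /* On release, reset the core *before* it can be reassigned, so
     * no state survives from one security domain to the next. */
    void release_core(int core_id) {
        hw_full_reset(core_id);
        cores[core_id].busy = false;
    }

The point is the ordering in release_core():  Isolation comes from never letting two processes see the same microarchitectural state, rather than from trying to referee their sharing of it.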

A sort-of-a data point:  Apple has a "secure enclave" in its ARM chips that's supposed to be inaccessible to any normal code.  Of course, it's really just a logically partitioned-off portion of the chip itself, sharing resources with the rest of the chip.  Attacks on these kinds of implementations have appeared in the past.  But consider the newest iMac Pro.  This has an Intel chip that runs all the normal user code, including the OS; and a completely separate ARM chip (the T2) running various support functions, including all kinds of security-related functions.  No user code gets in there, ever.  Obviously, this will have its own problems and vulnerabilities - but it may be a path to the future.  (Then again, it may point to the deep past:  As I discuss this, the image that comes back to me is of the CDC "super-computers" of the 1960's and 1970's:  User code in the "CPU"; OS/control code in completely separate "PPUs" - peripheral processing units.  Forward into the past?)

This doesn't deal with all problems, of course.  The current issues have been rendered much more serious because we also persist in a related illusion:  That we can allow someone potentially hostile to run code within our own security domain safely if we restrict the language he can use.  The same pattern shows up *twice* in these attacks:  The language starts out really restrictive, but as it appears "safe" it grows, and eventually it becomes powerful enough to allow big attacks.  One example is eBPF, allowing anyone to run "safe" code in kernel context - initially, in BPF, with very limited capabilities; now, in eBPF, with enough power to make use of these resource-sharing bugs to leak kernel data at high rates.  And, of course, there's JavaScript - the product of an evolution from simple display (HTML) to a full Turing-equivalent programming language which "we can safely isolate".  We've been burned on *that* front since at least the 1960's, when Burroughs built machines in which the only user-accessible programming interfaces were in "safe" Algol variants.  "Safe", until they weren't.  (The published attack on these systems was to take a tape with a verified executable on it to another system - one that allowed diddling directly with the binary - change the executable to "break the rules", and bring it back.  Side channels have always been there!)
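For contrast, here's roughly what the *original*, restrictive BPF looked like from user space.  The interface (SO_ATTACH_FILTER) is real, though this particular filter is just an illustration:  A filter is a handful of fixed-length instructions with no loops, no general memory writes, and nothing to decide but "how many bytes of this packet to accept".

    /* Classic BPF: attach a tiny packet filter - accept UDP, drop
     * everything else - to a packet socket.  Needs CAP_NET_RAW. */
    #include <arpa/inet.h>
    #include <linux/filter.h>
    #include <linux/if_ether.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void) {
        struct sock_filter code[] = {
            /* Load the IP protocol byte (offset 9 of the IP header). */
            BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 9),
            /* If it's UDP, fall through to "accept"; else skip to "drop". */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, IPPROTO_UDP, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, 0xFFFF),   /* accept packet */
            BPF_STMT(BPF_RET | BPF_K, 0),        /* drop packet   */
        };
        struct sock_fprog prog = {
            .len    = sizeof(code) / sizeof(code[0]),
            .filter = code,
        };

        int sock = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_IP));
        if (sock < 0) { perror("socket"); return 1; }
        if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER,
                       &prog, sizeof(prog)) < 0) {
            perror("setsockopt");
            return 1;
        }
        /* From here, read() on sock sees only UDP packets. */
        return 0;
    }

Compare that to today's eBPF - maps, tail calls, JIT compilation, attachment points all over the kernel - and the growth pattern is hard to miss.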

When you're at the bottom of a hole, the first thing to do is to stop digging.  If particular approaches have failed so consistently for so long ... it's time to start looking elsewhere.
                                                        -- Jerry


