[Cryptography] RISC-V branch predicting

Jerry Leichter leichter at lrw.com
Sat Feb 10 15:32:20 EST 2018


> Spectre is a completely different beast. It has nothing to do with using
> speculative accesses bypassing memory protections....
> 
> One of the original proof-of-concept attacks for Spectre was a
> Javascript applet that could read arbitrary memory in the browser. Both
> the applet and the Javascript runtime execute in a single process; there
> are no memory access controls between the two in the first place.
Yes ... and no.

The Javascript is *intended* to be in a separate security domain from the rest of the browser.  The browser tries to maintain the separation in software.  Spectre proves once again - this has happened repeatedly, for decades - that pure software enforcement of security boundaries is very problematic.

The nature of the breaks - this one is typical - is an indication of why this remains a fool's quest.  All the fancy proof techniques we've developed, all the language invariants - all of them *assume that the underlying hardware model they build on actually corresponds to the real hardware on which the resulting code runs*.  What Spectre illustrates, yet again, is that real hardware isn't the same as hardware models - and *any* variance has the potential to destroy your alleged proofs.
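(To make that concrete, here's a minimal sketch of the kind of gadget involved, following the published bounds-check-bypass pattern.  The array names and the 512-byte stride are chosen here purely for illustration, and the cache-timing step that actually recovers the leaked byte is omitted:)

    /* Spectre-v1-style gadget sketch.  Architecturally, the bounds check
     * makes the out-of-bounds read impossible - that's what a software
     * proof would establish.  Microarchitecturally, a mistrained branch
     * predictor can run the body speculatively with x out of range; the
     * load of probe[...] then pulls in a cache line chosen by the secret
     * byte, and that footprint survives the squash. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t probe[256 * 512];        /* one cache line per possible byte value */

    void victim(size_t x)
    {
        if (x < array1_size) {                            /* the "proof" lives here   */
            volatile uint8_t t = probe[array1[x] * 512];  /* speculated, leaky load   */
            (void)t;
        }
    }

Every guarantee the bounds check provides at the architectural level is intact; the leak lives entirely in state the model doesn't describe.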

Is something like memory access mode protection, enforced by the hardware, really different?  Well ... perhaps not in principle, as Meltdown demonstrates.  But at the hardware layer, the interfaces and assumptions are much simpler (well, I'm not sure you could describe anything about x86 *hardware* security at this point as "simple" - which has led to an entirely different set of problems), so there's a greater chance of getting it right.  Further, if you're analyzing and defining properties at the level of hardware, you have an easier time representing and getting a handle on hardware-level properties, like precise timings and cache interactions - something that's typically completely missing from the hardware models on which software security proofs are based.
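(Again, to make "precise timings and cache interactions" concrete: the recovery side of these attacks is a flush-and-reload style probe along the following lines - x86 with GCC/Clang intrinsics assumed, and the probe array and stride are the same illustrative ones as above:)

    /* Flush+reload style probe sketch (x86, GCC/Clang intrinsics assumed).
     * A cached line reloads in tens of cycles, an uncached one in hundreds -
     * a distinction that exists on real hardware but not in the models that
     * software security proofs are done against. */
    #include <stdint.h>
    #include <x86intrin.h>

    /* Evict one address from the cache hierarchy. */
    static inline void flush(const void *p)
    {
        _mm_clflush(p);
    }

    /* Time a single load of *p, in cycles. */
    static inline uint64_t time_load(const volatile uint8_t *p)
    {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;                        /* the load being timed */
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    /* Usage: flush(&probe[i * 512]) for each i before the victim runs, then
     * time_load(&probe[i * 512]) afterwards; the fast i is the leaked byte. */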

The whole notion that you can safely execute hostile code within a single hardware security domain is one we need to get away from.  You want to run someone else's Javascript?  Run it in a separate address space and process.  We have years of experience in protecting separate OS processes and address spaces from each other.  Yes, there are periodically new bugs - in fact, periodically, new *classes* of bugs, like Meltdown - but we've done much better at keeping those under control than we have at pure software isolation.

It turns out that the Unix approach - in which process creation is assumed to be very inexpensive - is probably better than the approach of other OSes, where processes are more expensive to create, thus longer-lived and more likely to be subdivided into software-enforced security domains.  If even Unix processes are too expensive - which will likely be the response of browser makers to the notion that each individual piece of Javascript should be segmented off into its own process - then perhaps we should look at hardware and software models of very cheap hardware isolation.
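(For what that looks like in its barest form - a POSIX sketch, with "js-runner" standing in for whatever untrusted interpreter you'd actually run; real sandboxes layer seccomp filters, namespaces, and the like on top of this skeleton:)

    /* Bare-bones separate-process isolation: the untrusted interpreter gets
     * its own address space, and the kernel - not application code -
     * enforces the boundary. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int run_untrusted(const char *script_path)
    {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return -1;
        }
        if (pid == 0) {
            /* Child: drop into the untrusted interpreter.  Nothing in the
             * parent's address space is reachable from here. */
            execlp("js-runner", "js-runner", script_path, (char *)NULL);
            perror("execlp");
            _exit(127);
        }
        int status;
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return -1;
        }
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

The point is who enforces the boundary: the kernel and the MMU, rather than the code that happens to share an address space with the thing being contained.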
                                                        -- Jerry


