[Cryptography] Speculation considered harmful?

Jerry Leichter leichter at lrw.com
Tue Jan 9 14:42:55 EST 2018


>>> Eh. In the context of Spectre, the CPU knows which cachelines it
>>> loaded in a speculative fetch. It should simply mark them invalid
>>> when unrolling the speculation.
>> 
>> John Levine already pointed out the root of the problem - and the
>> right solution:  Speculated code must run *in exactly the same way as
>> non-speculated code*.  In particular, a speculated path needs to stop
>> immediately if it attempts a forbidden memory access.  There's
>> absolutely no point in continuing down this path, as it can't
>> possibly be committed in any case:  It will terminate at this point
>> with a memory access exception.
> 
> AFAIUI that does not deal with all variants of the attack i.e. it
> solves the Meltdown problem but not the Spectre problems.  Meltdown is
> much easier to exploit and needs immediate attention, but the long-term 
> solutions should deal with Spectre as well.

I would say that Spectre is fundamentally not soluble within our current system designs.  Meltdown is a way for one security domain known to the hardware to access information from a distinct security domain that it should not be able to reach.  It's a failure of the hardware to fully maintain the separation properties it promises between security domains.
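
For concreteness, here's a minimal C sketch of the transient instruction sequence the Meltdown paper describes.  Illustrative only: the fault handling and the cache-timing pass that recovers the byte are omitted, and running it as-is would simply crash.

    #include <stdint.h>

    #define PAGE 4096
    static uint8_t probe[256 * PAGE];   /* one page per possible byte value */

    /* The load of *secret faults architecturally, but on affected CPUs it
     * executes transiently, and the dependent probe[] load leaves a cache
     * footprint at probe[value * PAGE] before the fault is delivered.  A
     * FLUSH+RELOAD timing pass over probe[] (omitted here) then recovers
     * the value - across a hardware-enforced domain boundary. */
    void transient_read(const volatile uint8_t *secret)
    {
        uint8_t value = *secret;                       /* forbidden access */
        (void)*(volatile uint8_t *)&probe[value * PAGE];  /* cache effect  */
    }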

Spectre involves code within one hardware security domain gaining access to information *within that same security domain*.  There *are* two security domains at play here - but they are enforced only by software.  No matter how many times we get hit over the head by attacks that get around software-only security enforcement, we keep convincing ourselves that *this time is different*.  Yes, Javascript has grown from a fairly simple scripting language into a Turing-complete monstrosity with access to all kinds of low-level resources.  But, we tell ourselves, we can stop Javascript code from doing nasty things any time we want.
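
Again for concreteness, here's the variant-1 gadget (bounds check bypass) from the Spectre paper, in C.  Note that this is perfectly ordinary-looking, "correct" code:

    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];

    /* Train the branch predictor with in-range values of x, then call
     * with an out-of-range x: the bounds check is predicted taken,
     * array1[x] is read speculatively, and the dependent array2 access
     * encodes the out-of-bounds byte in the cache - all within one
     * software-enforced boundary the hardware knows nothing about. */
    uint8_t victim(size_t x)
    {
        if (x < array1_size)
            return array2[array1[x] * 4096];
        return 0;
    }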

We could stop the Javascript attacks by aligning the software security domains with the hardware-enforced ones:  continue splitting up browser responsibilities so that each untrusted piece of Javascript code runs in its own process, carefully ensuring that there is no sensitive information anywhere within that process's address space.  The performance implications - not to mention the likely functionality implications - are hard to know without trying, but if you want safety from Javascript-based attacks, you need to *strongly* sandbox all untrusted JS code.  Let's stop pretending that there's any safe way to grant attackers such powerful access inside our own security domain.
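
A minimal Linux sketch of the shape this takes - run_untrusted_js() stands in for a hypothetical interpreter entry point, and a real browser would need a broader syscall filter than strict mode, plus an IPC channel to get results back out:

    #define _GNU_SOURCE
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    extern void run_untrusted_js(int script_fd);  /* hypothetical */

    /* Run the script in a child whose address space was built to hold
     * nothing sensitive; strict-mode seccomp then restricts it to
     * read(), write(), _exit() and sigreturn() on already-open fds. */
    void run_sandboxed(int script_fd)
    {
        if (fork() == 0) {
            prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);
            run_untrusted_js(script_fd);
            _exit(0);
        }
    }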

There are likely other places where untrusted code runs within the same security domain as trusted code.  In fact, the entire sandboxing effort arises from the recognition that the old days - when the code I ran was code I could trust, so there was no point in limiting what it could do - are long gone and not coming back.

Perhaps it would be useful to take the kinds of hardware mechanisms that OS's use to enforce the separation of OS-defined security domains and make them accessible to ordinary user code, so that it could enforce separation *within* a process.  The VAX had four privilege levels and sort of did this, but in the end the mechanism was very limited and not in any way accessible to ordinary user code.  Capability-based systems are the ultimate development along these lines, but they represent a radical departure from current system designs.  Perhaps we can get some of their power without giving up all compatibility with existing code.
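
One existing step in this direction is Intel's memory protection keys, which Linux exposes to ordinary user code (kernel 4.9, glibc 2.27).  A minimal sketch - this gives user code hardware-tagged domains within its own address space, though it is not by itself a Spectre fix:

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        void *secret = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        int pkey = pkey_alloc(0, 0);          /* fails without MPK hardware */
        if (pkey < 0) { perror("pkey_alloc"); return 1; }
        pkey_mprotect(secret, 4096, PROT_READ | PROT_WRITE, pkey);

        pkey_set(pkey, PKEY_DISABLE_ACCESS);  /* revoke before untrusted code */
        /* ... untrusted code now faults if it touches `secret` ... */
        pkey_set(pkey, 0);                    /* restore for trusted code */
        return 0;
    }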

                                                        -- Jerry



