[Cryptography] Buffer Overflows & Spectre

Jerry Leichter leichter at lrw.com
Thu Nov 22 07:27:32 EST 2018


>> "Don't allow malicious, attacker-controlled code to run on the same CPU/CPU
>> cluster as your precious secret-containing code" would be a good start.
> 
> Yeah, that’s pretty much it.
> 
> Arbitrary, hostile code can dump all of memory. The end.
> 
> ...That’s one issue; or it’s an issue that has different facets. If you’re running servers (etc.) then clients can spy on each other. If you’re running client software (oh, like a web browser running Javascript), then you have to be careful that the JS doesn’t manage to get a covert channel on other things.
I think it's a bit more subtle than that.

A number of assumptions combined to get us where we are:

1.  "Hardware is expensive; we can't afford to give everyone his own computer".  This is what led us to invent timesharing.
2.  Since I have to share the hardware, I need to protect different entities from each other.
3.  I can make the protection between entities good enough that I can share it between actively hostile entities.
4a.  Since I can make the protection good enough that I can share the hardware safely, I can build a business model in which I rent out "slices" of my hardware to arbitrary customers (initially time-shared hosts, these days Cloud machines).
OR
4b.  Since I can make the protection good enough, I can take a personal machine dedicated only to me and let arbitrary others run code on it - the Web/Java Applet/Javascript model.

In parallel - and these you can find explicitly written down in the early literature:
A.  There's no practical way to close down all covert channels.
B.  If we slow the covert channels down enough (perhaps a few bits/second), they don't matter because individual bits aren't that important - an attacker can't get enough of them to be a threat.
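To make A and B concrete, here's a deliberately crude sketch of a covert channel:  two Python threads that share no data, where the "sender" leaks a byte to the "receiver" purely through contention for the CPU (and, in CPython, the interpreter lock).  The slot length, message, and threshold below are arbitrary choices and the channel is noisy; real attacks use caches, branch predictors, and the like, but the bits-per-second framing is exactly the same.

# Toy covert channel: the sender either burns CPU or idles during each
# time slot; the receiver infers the bit from how much work it managed
# to get done in that slot.  No shared data, just a shared machine.
import threading, time

SLOT = 0.05           # seconds per transmitted bit -> roughly 20 bits/second
MESSAGE = 0b10110010  # one byte the sender wants to leak

def sender(bits):
    for b in bits:
        end = time.monotonic() + SLOT
        if b:                          # bit 1: hog the CPU for the whole slot
            while time.monotonic() < end:
                pass
        else:                          # bit 0: stay idle for the whole slot
            time.sleep(SLOT)

def receiver(nbits, out):
    for _ in range(nbits):
        count, end = 0, time.monotonic() + SLOT
        while time.monotonic() < end:
            count += 1                 # how much work could we get done?
        out.append(count)

bits = [(MESSAGE >> i) & 1 for i in range(7, -1, -1)]
samples = []
rx = threading.Thread(target=receiver, args=(len(bits), samples))
tx = threading.Thread(target=sender, args=(bits,))
rx.start(); tx.start(); tx.join(); rx.join()

# Low iteration counts mean the sender was hogging the CPU (bit = 1).
threshold = (max(samples) + min(samples)) / 2
decoded = [1 if s < threshold else 0 for s in samples]
print("sent:    ", bits)
print("decoded: ", decoded)            # noisy, but usually matches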

Finally:
X.  We can safely do cryptography on a general-purpose, shared computer.  Note that back when DES was first coming into use, the NSA refused to accept this assumption.

The problem today is that assumption 3 has turned out not to be quite true, because assumption X has rendered assumption B false:  Pretty much all the concern about side channels centers on leaking keys, which pack huge value into a very small number of bits.  Oh, I'm sure you can construct examples of other "highly valuable" bits, but they tend to be specialized, particular to individual programs, and often valuable only at particular times.  Stealing encryption keys, though, is the "killer app" of side-channel attacks.
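For scale, a back-of-the-envelope calculation - the channel rates and key sizes below are illustrative, not measurements:

# Why "a few bits per second" stopped being a safe threshold once the
# bits in question are key material.
KEYS = {"AES-128": 128, "AES-256": 256, "RSA-2048 private key": 2048}

for rate_bps in (1, 10):                 # assumed covert-channel bandwidth
    for name, bits in KEYS.items():
        minutes = bits / rate_bps / 60
        print(f"{name:>22} at {rate_bps:2d} bit/s: {minutes:5.1f} minutes")

# Even at 1 bit/s, an AES-128 key walks out the door in about two minutes;
# the same rate applied to a bulk database would be harmlessly slow.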

There are two root assumptions at the base of these chains:  Hardware is expensive, so it must be shared; and crypto can be safely done on a general-purpose, shared computer.  Neither is true any longer - the first because of Moore's law, the second ... by observation.  And we're just beginning to see the first hints of changes in approach by companies that at least "talk the talk" of security and privacy (we can argue forever about the degree to which they actually "walk the walk"):

.  Apple has moved some of the basic encryption operations off the main CPU into their T2 chips where nothing but their own code runs;
.  At the other extreme, Apple has eliminated most browser extensions in Safari, even though its extension architecture always granted much more limited access than other browsers'; and it has begun to choke off certain accesses that Javascript/HTML provide (e.g., Safari deliberately lies in response to certain capability queries to make machine fingerprinting harder).  This isn't much, but it's among the first mainstream moves that say:  there's a Javascript standard designed to make advertisers happy, and we don't see why our own customers should have to meet its requirements.
.  Oracle has an interesting little marketing line about their Cloud architecture:  Why should you run your vendor's software on your machine?  So they propose that the machine they rent you has none of their database code on it - the database runs off in a separate server (with none of *your* code in it), and the two communicate only over a link.  (There's some discussion of related stuff at https://www.datacenterknowledge.com/oracle/ellison-touts-hardware-barriers-and-robots-guarding-oracle-s-cloud).  Whether this is real or not isn't as important as that we're beginning to talk about it seriously.

Given the realities of today's hardware, one could imagine a "Cloud" in which you don't rent a VM:  You rent an actual piece of hardware (so-called "bare metal"), which runs your code and your code only.  (There are "public Kubernetes container services" that are approaching something along these lines.)  When you're ready to relinquish that hardware, you do a hardware reset back to a fixed state.  Making such a thing real requires rethinking the hardware organization:  Rather than individual "bigger, faster" chips that share hardware resources and then mostly get divided down into small VMs anyway, you want smaller, cheaper individual chips sharing as little as possible.  Maybe - it's a direction we haven't explored - though in some sense, that might be what "edge computing" (the latest buzzword) turns out to be about.  (Again, Apple is arguably already there:  Its bias is to put huge processing capability in each iPhone and do as much as possible on the phone itself, rather than in the Cloud - the exact opposite of Google, which plays to its own strength in the Cloud by providing tightly integrated off-loading of processing.)
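To spell out the contract such a service would have to honor - purely a hypothetical simulation; the class and the scrub procedure are inventions for illustration, not any provider's API:

# "Rent a whole box, then hardware-reset it to a fixed state" in miniature.
class BareMetalBox:
    GOLDEN_FIRMWARE = "v1.0-signed"        # the fixed state tenants attest to

    def __init__(self):
        self.firmware = self.GOLDEN_FIRMWARE
        self.ram = bytearray(1024)          # stand-in for DRAM contents
        self.tenant = None

    def assign(self, tenant):
        assert self.tenant is None, "whole-box tenancy: one tenant at a time"
        self.tenant = tenant

    def run(self, data):
        self.ram[:len(data)] = data         # tenant's secrets land in RAM

    def release(self):
        # The hardware reset: scrub volatile state and re-flash firmware,
        # so nothing survives into the next tenant's session.
        self.ram[:] = bytes(len(self.ram))
        self.firmware = self.GOLDEN_FIRMWARE
        self.tenant = None

box = BareMetalBox()
box.assign("customer-a")
box.run(b"customer A's key material")
box.release()
box.assign("customer-b")
assert all(b == 0 for b in box.ram)         # nothing of customer A survives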

The emerging hardware/software world of 10 years from now may look very different from what we've been accustomed to.  This level of change is difficult; there's a huge amount of inertia to overcome.  But something will have to give.
                                                        -- Jerry


