[Cryptography] Secure erasure

Jerry Leichter leichter at lrw.com
Tue Sep 13 15:21:26 EDT 2016


> [Many complaints about the complexity of existing designs]
> I would plea for such "clean sheet architectures", because the amount of money required to develop these will be far less than the amount of money required to deal with the insecurities of the current code base and architectures.
Gee.  We did this.  Not so long ago.  Remember the RISC revolution?  Pretty much everything got re-thought, from the ground up.  Tons of complexity was removed - or, more accurately, removed from the hardware and added to the compilers and operating systems.  Case in point:  Some RISCs got by with no TLB refill in hardware.  Let the OS worry about flushing old TLB entries and inserting new ones from the page tables it maintained any way it liked.  All that documentation in the hardware manual about the layout and care and feeding of page tables - gone.  That was a pretty radical clean-sheet design.
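
To make the software-refill idea concrete, here's a toy sketch in C of what the OS side of a TLB miss might look like.  The structures, the flat page table, and the random-replacement slot are made-up simplifications (MIPS-flavored in spirit), not any real chip's interface:

/* Toy model of a software-managed TLB refill, in the spirit of the
 * "no hardware page-table walker" RISCs mentioned above.  Everything
 * here is a simplified assumption for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT   12
#define TLB_ENTRIES  16
#define NUM_PAGES    256                /* size of the toy address space */

typedef struct {
    uint32_t vpn;                       /* virtual page number */
    uint32_t pfn;                       /* physical frame number */
    int      valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* The OS keeps the page table in whatever form it likes; the hardware
 * never reads it.  Here it's just a flat array indexed by VPN. */
static uint32_t page_table[NUM_PAGES];

/* What the OS trap handler does on a TLB miss: look up the translation
 * itself and write it into a (randomly chosen) TLB slot. */
static void tlb_refill_handler(uint32_t vaddr)
{
    uint32_t vpn  = vaddr >> PAGE_SHIFT;
    uint32_t slot = (uint32_t)rand() % TLB_ENTRIES;

    tlb[slot].vpn   = vpn;
    tlb[slot].pfn   = page_table[vpn];
    tlb[slot].valid = 1;
}

/* Translate an address, "trapping" into the software handler on a miss. */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    for (;;) {
        for (int i = 0; i < TLB_ENTRIES; i++)
            if (tlb[i].valid && tlb[i].vpn == vpn)
                return (tlb[i].pfn << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
        tlb_refill_handler(vaddr);      /* miss: the OS fills the TLB */
    }
}

int main(void)
{
    for (uint32_t i = 0; i < NUM_PAGES; i++)
        page_table[i] = i + 100;        /* arbitrary toy mapping */
    printf("0x%x -> 0x%x\n", 0x3abcu, (unsigned)translate(0x3abc));
    return 0;
}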

RISC lost - and kind of won.  Initially, RISC killed off what were considered the classic examples of CISC, the VAX and some imitators (e.g., 68K).  But then, for general-purpose computation, the x86 and x64 pretty much killed RISC.  Though if you looked deeper, the way x86/x64 survived was by translating their complex instructions into micro-operations, on the fly, for a somewhat RISC-like underlying - but invisible - architecture.  Meanwhile ... ARM is kind of RISC'ie - as, for that matter, is PowerPC, which is making a comeback, though PowerPC is also radically different in its own ways.  Did RISC win?  Lose?  Damned if I know.
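
To make the micro-op point concrete, here's a toy sketch of what "crack a complex memory-operand instruction into RISC-like micro-operations" means.  The instruction and the micro-op set are invented for illustration; a real x86 decoder is vastly more complicated:

/* Toy illustration of CISC -> micro-op cracking.  The "ISA" here is
 * invented; it only shows the shape of the idea. */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_t;

/* Crack "ADD [mem], reg" - a read-modify-write on memory - into three
 * simple micro-ops, each of which looks like a RISC instruction. */
static int crack_add_mem_reg(uop_t out[], int max)
{
    if (max < 3) return 0;
    out[0] = UOP_LOAD;   /* tmp   <- [mem]     */
    out[1] = UOP_ADD;    /* tmp   <- tmp + reg */
    out[2] = UOP_STORE;  /* [mem] <- tmp       */
    return 3;
}

int main(void)
{
    static const char *names[] = { "load", "add", "store" };
    uop_t uops[8];
    int n = crack_add_mem_reg(uops, 8);
    for (int i = 0; i < n; i++)
        printf("uop %d: %s\n", i, names[uops[i]]);
    return 0;
}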

We used to believe that worrying about micro-optimizations and a few cycles here and there didn't matter, because a year later there would be a new chip iteration that was significantly faster.  Good optimizing compilers gained you, after a big software investment, about what you gained by using a two-year-old compiler with this year's hardware.  If the advantages were multiplicative - so you got your 50% speedup from the compiler for code running on the new 2x-faster hardware - that was great, but it didn't always work out that way, as the compilers often needed to be tuned to the performance quirks of the new hardware.
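
Spelling out the multiplicative case with those same numbers:

\[
S_{\text{total}} = S_{\text{compiler}} \times S_{\text{hardware}} = 1.5 \times 2 = 3
\]

versus only the hardware's factor of 2 if the compiler's gains evaporate on the new chip.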

And then speedups fell off a cliff.  We could shrink the gates, allowing for faster clock speeds - but you couldn't cool the damn thing, so you had to put those gates to some other use.  (Now we're reaching other walls:  The wavelength of an electron under typical working conditions is around 5 nm, so if you manage to get features that small - not that far off - the question of whether the electron is or is not at the gate becomes rather fuzzy.)
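
To put a number behind that wavelength claim: it's roughly the de Broglie wavelength of an electron, and the only assumption is the electron's kinetic energy under "typical working conditions" - call it a few tens of meV:

\[
\lambda = \frac{h}{\sqrt{2 m_e E}} \approx \frac{1.23\ \mathrm{nm}}{\sqrt{E\,[\mathrm{eV}]}}
\]

which gives about 5.5 nm at E = 0.05 eV and about 7.8 nm at room-temperature thermal energy (0.025 eV) - so "around 5 nm" is the right ballpark.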

So ... the argument that we'll have cycles to spare just doesn't work any more.  So far, we've managed to find more computational work for our machines to do as fast as we've managed to generate more cycles to do the work.

Which brings me back to my argument all along:  Machines that do general computation will continue to be full of tons of speedups - hacks, if you want to define them that way.  They will not, in the foreseeable future, themselves be secure, because performance demands, not security, will drive the market for them.

So the alternative is to look elsewhere:  Security is a *system* property, just like reliability; so just as we build reliable systems from unreliable components, we need to build secure systems out of insecure components.  Though as far as we can tell, the secure core you need to bootstrap from has to be rather bigger than the reliable core you need.
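
The "reliable systems from unreliable components" half of that analogy has a textbook concrete form: replicate the computation and take a majority vote (triple modular redundancy).  A minimal sketch, with a made-up unreliable_read() standing in for the flaky component:

/* Classic "reliability from unreliable parts": run the same computation
 * on three independent, flaky components and take the majority vote.
 * unreliable_read() is a stand-in for whatever part might fail. */
#include <stdio.h>
#include <stdlib.h>

/* A component that returns the right answer most of the time. */
static int unreliable_read(int true_value)
{
    return (rand() % 10 == 0) ? rand() : true_value;   /* ~10% garbage */
}

/* Triple modular redundancy: any two agreeing copies win. */
static int majority_read(int true_value)
{
    int a = unreliable_read(true_value);
    int b = unreliable_read(true_value);
    int c = unreliable_read(true_value);
    if (a == b || a == c) return a;
    if (b == c)           return b;
    return a;   /* no quorum at all: error is at least detectable */
}

int main(void)
{
    int errors = 0;
    for (int i = 0; i < 100000; i++)
        if (majority_read(42) != 42)
            errors++;
    printf("wrong results after voting: %d / 100000\n", errors);
    return 0;
}

The voter is the small core you have to get right; the point above is that the security analogue seems to need rather more than a few lines of voting logic.
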
                                                        -- Jerry


