[Cryptography] Secure erasure

Henry Baker hbaker1 at pipeline.com
Wed Sep 14 13:00:50 EDT 2016


At 12:21 PM 9/13/2016, Jerry Leichter wrote:
>Gee.  We did this.  Not so long ago.  Remember the RISC revolution?  Pretty much everything got re-thought, from the ground up.  Tons of complexity was removed - or, more accurately, removed from the hardware and added to the compilers and operating systems.  Case in point:  Some RISC's got by with no TLB refresh in hardware.  Let the OS worry about flushing old TLB entries and inserting new ones from the page tables it maintained any way it liked.  All that documentation in the hardware manual about the layout and care and feeding of page tables - gone.  That was a pretty radical clean-sheet design.
>
>RISC lost - and kind of won.

Re: Remember the RISC revolution?

That was *so* long ago -- 40+ years!

But like pruning and "mowing the grass" (not the Israeli version!), a gardener's work is never done.

Regardless of the way RISC was pitched in its infancy, its primary goal was to simplify the architecture enough to fit onto a single chip.  In the process, of course, much unneeded complexity was thrown away.

In the meantime -- 40+ years later -- the garden has become completely overgrown again.

We can now fit billions of transistors on a chip; yuge flash memories are now extremely cheap.

"Fitting onto a chip" is no longer an issue, but security and power are yuge issues.

The multiprocessor revolution is just beginning to take off.  I can recall my first computer (an IBM 1401) with 4,000 (not 4096!) "characters" of RAM; we're nearly at the point where a consumer can purchase a computer with 4,000 *processors*.

Since fitting more processors is more important than making any one processor more powerful, it makes sense (once again) to pare processor design down to the bare minimum so that one can fit more processors onto the same chip.

Since instructions/joule has become more important than instructions/sec, it's time to seriously consider asynchronous systems again.  However, we need to modify the goals of asynchronous systems so that they don't try to compute their answers as *fast* as possible, but as *energy-efficiently* as possible.
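To make the instructions/joule point concrete, here's a back-of-the-envelope sketch in plain C -- the activity factor, capacitance, and voltages are made-up illustrative numbers, not measurements of any real chip.  Dynamic switching energy per instruction goes roughly as alpha * C * V^2, so a design that runs slower at a lower supply voltage trades speed for a roughly quadratic win in energy per instruction:

  /* Illustrative only: all constants below are assumed, not measured. */
  #include <stdio.h>

  int main(void) {
      const double alpha = 0.1;     /* activity factor: fraction of gates switching per instruction */
      const double c_eff = 1.0e-9;  /* effective switched capacitance per instruction, in farads */
      const double volts[] = { 1.2, 0.9, 0.6 };  /* supply voltages to compare */

      for (int i = 0; i < 3; i++) {
          double v = volts[i];
          double joules = alpha * c_eff * v * v;  /* ~ dynamic energy per instruction */
          printf("V = %.1f V  ->  %.2e J/insn  ->  %.2e insn/J\n",
                 v, joules, 1.0 / joules);
      }
      return 0;  /* halving V roughly quarters the energy per instruction */
  }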

The yuge number of transistors that can fit on a chip -- of which only a vanishingly small fraction can be switching at any moment, else the chip will burn up -- means that we can have a large amount of specialized circuitry that may operate only once a day, once a month, once a year, or once ever.

Hardware is now in the same boat as software: the marginal cost of adding yet another feature isn't very high.  This means HW bloat in addition to SW bloat.  (I.e., whatever floats your bloat!)

I recall the IBM System/360 Model 91, which was an extremely sophisticated "out-of-order execution" machine.  Yet its performance was soon surpassed by far simpler & cheaper machines which simply included caches.  Sometimes, conceptual progress obsoletes (or at least renders secondary) a lot of sophistication.

Now that nearly all compilers utilize *pure functional* internal representations of computations, and *variable assignments* have been shown to be frighteningly expensive -- especially in a multiprocessor world -- it's time to move to fully functional machine languages, and push the transactional complexity of shared-memory state out into the open.
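For anyone who hasn't looked inside a compiler lately, here's a minimal sketch of what that *pure functional* (SSA-style) view looks like -- a hypothetical example, not any particular compiler's IR.  Every assignment becomes a fresh, immutable binding, so the body is just a dataflow of pure expressions, and any remaining shared-memory writes stand out as explicit, expensive operations:

  /* Imperative source: the variable x is assigned twice. */
  int imperative(int a, int b) {
      int x = a + 1;
      x = x * b;       /* re-assignment: the old value of x is dead here */
      return x + a;
  }

  /* SSA-style rendering: each "assignment" gets a fresh, immutable name. */
  int ssa_style(int a, int b) {
      const int x1 = a + 1;
      const int x2 = x1 * b;
      const int x3 = x2 + a;
      return x3;
  }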

Paraphrasing Richard Feynman, "there's still plenty of room at the bottom" (i.e., in extremely simple CPU architectures).


