[Cryptography] Secure erasure

Jerry Leichter leichter at lrw.com
Sat Sep 10 04:46:08 EDT 2016


>>> The language is guaranteed to produce "the" right answer and leave
>>> it in "the" appropriate place.  It offers no guarantees as to how
>>> many extraneous copies are left lying around.
>>> 
> Indeed.  That is in fact the main problem.  All these systems
> are defined in terms of what they make visible or available,
> and security is defined in terms of what is destroyed or kept
> unavailable.  The result is that a lot of things are made
> available by copying or caching or whatever, when we wanted
> to assure that no copies are ever made.  But we don't have
> the ability to express that, because NOT making copies isn't
> defined in terms of making something visible or available....
Let's step back a moment here.  The entire stack, from hardware to OS to libraries, compilers, and languages, is designed to be (a) general purpose; (b) efficient; (c) usable.  Cryptographic code represents a vanishingly small percentage of either the bytes or the cycles used on any system.  The system is not and *will not* be designed around its needs.

Back in the early days of DES, the NSA was reportedly against the idea of doing crypto in software *at all*.  Their approach was always to provide a sealed hardware "black box".  They weren't all wrong....

There's an inherent tension between general-purpose hardware and cryptography.  It's not going away.  Diddling with compilers isn't going to help.  Changing OS's isn't going to help:  As fast as the OS declares "I won't cache that page", the hardware introduces some new kind of optimization, or another layer of software gets shoved in under the OS, and wham, the stuff gets copied into some cache no one ever heard of.

In fact, what we're seeing is the migration of crypto into hardware.  AES built into processors; secure enclaves to hold keys and do crypto that even the most privileged software can't observe, much less modify.  No, this doesn't make the hill of dirt under the rug go away - it just moves it elsewhere.  Now you worry that there's no way to know just what the hardware is doing.

It is worth pointing out - and this and many other discussions here make it clear - that "trusted, audited software" running on non-trustworthy hardware (which is pretty much all the hardware available today) is really no more secure than the same algorithms in dedicated hardware.  No matter what you do ... you have to trust the hardware.  Sure, we as software hackers lose the ability to play around with new algorithms and read all that fun code ... but realistically using the built-in AES instructions on an x64 chip *may well reduce* the attack surface relative to writing AES in software and running it on that same chip.

So:  If we admit that the cause of really secure crypto on general-purpose hardware is lost ... what do we get to?  I suspect it's something in the direction of verified crypto processors embedded in the hardware, with verified isolation from the rest of the chip.  What exactly verification means in this context, and how it might be accomplished, are open research questions.  (It is clear that "simple is better":  The Apple approach of a fairly fixed-function "secure enclave" is more secure than the already-attacked, more general "secure enclaves" that allow code (to implement DRM, for example) to be loaded into them.)
                                                        -- Jerry
