[Cryptography] Secure erasure

Kent Borg kentborg at borg.org
Sat Sep 10 12:00:44 EDT 2016


On 09/09/2016 08:28 PM, John Denker wrote:
> So the oversimplified argument reduces to this: Either you trust the
> hardware and the OS to not give away your information, or you don't.
>    -- If you don't, then what makes you think it is safe to do any
>     crypto at all on this platform?   Your information could be given
>     away shortly after it is acquired and before it is erased.
>    -- If you do, then what is the added value of your so-called
>     secure erase routine?
>
> The way forward is to stop oversimplifying.  Start by asking What's
> Your Threat Model.

Very good points. These questions are interesting to think through, but 
absent a threat model, how do you decide?

Well...how DO you decide? If one is building some crypto component, how 
can the final threat model possibly be known? Um, gotta make some guesses.

Okay, once you make those guesses (let's call them "assumptions"--sounds 
classier), shouldn't they be captured? Then, when someone gets to the 
point of applying your component as part of a larger system, where a 
threat model is closer to being knowable, the consequences of those 
assumptions can actually be worked through.

[Pretend I here repeated my recent rant about needing to define system 
boundaries.]

Once you make some assumption about the threat model, you are also baking 
in vulnerabilities: you remain vulnerable to the threats you can't, or 
choose not to, defend against. Shouldn't those vulnerabilities be 
documented, by defining what is inside your system and defended, and what 
is outside and therefore the responsibility of the larger system using 
it? And at the next level up, when someone builds that bigger system 
incorporating your component, shouldn't someone also document the system 
boundaries and threat model assumptions made at that level? Does this 
ever happen? I know, we say RTFM, and go read the millions of lines of 
source code too...
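To make the idea concrete, here is one minimal sketch (all names invented 
for illustration, not any real library) of what "capturing" a component's 
threat-model assumptions might look like: the component lists them in 
machine-readable form, and refuses to run until the integrator explicitly 
acknowledges each one.

```python
# Hypothetical sketch: a component states the threat-model assumptions
# it bakes in, and the larger system must acknowledge them by name.

ASSUMPTIONS = {
    "os_not_compromised": "We trust kernel and hardware not to leak keys.",
    "ram_not_swapped": "Caller must pin pages; we do not mlock() ourselves.",
    "side_channels_out_of_scope": "Timing/EM attacks are not defended here.",
}

def check_assumptions(accepted):
    """Raise unless the integrator has acknowledged every assumption
    this component makes, so nothing is silently inherited."""
    missing = set(ASSUMPTIONS) - set(accepted)
    if missing:
        raise RuntimeError(f"Unacknowledged assumptions: {sorted(missing)}")

# The larger system takes explicit responsibility for each assumption:
check_assumptions({"os_not_compromised", "ram_not_swapped",
                   "side_channels_out_of_scope"})
```

Nothing deep, but it moves the assumptions out of a manual nobody reads 
and into something the build of the larger system has to confront.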

At each subsystem level there will be configuration options that the 
larger system needs to get the right behavior; these are dangling bits 
that the subsystem necessarily can't control, yet it requires they *be* 
sensibly controlled. They should be handed off carefully, but I don't 
see that happening.

As we build systems it seems like there is a whole stack of security 
assumptions being made but not documented nor otherwise coordinated. How 
could we ever hope to build secure systems that way? Oh, wait, we don't 
come close to building secure systems.

This approach seems doomed to fail. That's why it does.

-kb
