[Cryptography] Photon beam splitters for "true" random number generation ?

Jerry Leichter leichter at lrw.com
Tue Dec 29 10:42:45 EST 2015


> ...Just doing secure delete on a few files means that you're leaking all of the data in your _unencrypted_ files, including your system log files, etc.
VMS, many years ago (and still, for the few still using it), had a feature that it amazes me no one has copied since:  You could mark a file "erase on delete".  This had no effect until the file was deleted - at which point it was subject to a secure erasure step before its blocks were freed.  (The erasure mechanism was pluggable; the default one used multiple overwrites, and could even be set to the famous "35 overwrites" - Gutmann's pattern - that everyone imitated with no understanding of what it was for.)
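For concreteness, here is a minimal user-space sketch of the idea - overwrite the file's contents some number of times, forcing each pass to the device, and only then unlink.  (This is an illustration, not a reconstruction of the VMS mechanism, and the name erase_on_delete is mine; as the docstring notes, overwriting through the file system promises little on SSDs or copy-on-write file systems.)

    import os

    def erase_on_delete(path, passes=3):
        """User-space illustration of VMS-style "erase on delete".

        A real file system would scrub the blocks before returning
        them to the free list; here we can only overwrite the file
        in place and hope the writes land on the same blocks - which
        SSDs and copy-on-write file systems do not guarantee.
        """
        fd = os.open(path, os.O_WRONLY)
        try:
            size = os.fstat(fd).st_size
            for _ in range(passes):
                os.lseek(fd, 0, os.SEEK_SET)
                remaining = size
                while remaining > 0:
                    chunk = min(remaining, 1 << 16)
                    os.write(fd, os.urandom(chunk))  # fresh pattern each pass
                    remaining -= chunk
                os.fsync(fd)  # force this pass out to the device
        finally:
            os.close(fd)
        os.unlink(path)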

Since this was implemented by the file system, it didn't matter how the file got deleted; it would be erased.  I'm pretty sure the "pending erasure" even survived a reboot - the blocks would not go onto the free list until they had been erased.

Once you build such a feature into the file system, it can be implemented correctly for that file system's particular design.  Even if the system uses data journaling, it can use this setting to ensure that replaced blocks are scrubbed immediately.  (There's not much it can do with current SSDs - another entry on the list of examples where the duplication of functionality between the file system and the disk-emulation layer has unexpected and nasty side effects - but given support in the chips - an extension of TRIM support would do - a file system could make sure that the relevant blocks were wiped.)
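As it happens, Linux's block layer already has one such extension: alongside the ordinary discard (TRIM) ioctl there is a "secure discard" variant, which asks the device to actually erase the range rather than merely mark it reusable.  A rough sketch, assuming a device and firmware that support it (the ioctl numbers below come from <linux/fs.h>; root privileges, and alignment to the device's discard granularity, are required):

    import fcntl
    import os
    import struct

    # From <linux/fs.h>: _IO(0x12, 119) and _IO(0x12, 125).
    BLKDISCARD    = 0x1277  # ordinary TRIM: blocks may just be marked free
    BLKSECDISCARD = 0x127D  # secure discard: device must erase the range

    def secure_discard(device, offset, length):
        """Ask the device to securely erase [offset, offset + length)."""
        fd = os.open(device, os.O_WRONLY)
        try:
            arg = struct.pack("QQ", offset, length)  # u64 start, u64 length
            fcntl.ioctl(fd, BLKSECDISCARD, arg)      # EOPNOTSUPP if unsupported
        finally:
            os.close(fd)

A file system with erase-on-delete support could issue exactly this against freed extents, instead of queueing multi-pass overwrites that the flash translation layer may quietly redirect.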

Of course, you can take this feature "to the limit" and consider every file to be "erase on delete".  There are situations in which this makes sense; there are many where it's expensive overkill.

Sometimes layering considerations lead you to make sub-optimal choices.  For example, we currently put encryption into one of two layers:  the user layer, above the file system, or the driver layer, underneath it.  But one could imagine encryption integrated into the file system itself.  The difficult UI issues of requiring the user to make the right choice every time go away; so do some of the annoying constraints of the driver layer - in particular, the fixed block size.  A per-file key, with extra space in the metadata for nonces and such, has all kinds of advantages.  And integrating encryption with "erase on delete" makes it possible to erase just the keys, not all the blocks of huge files.
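A sketch of that last point, using AES-GCM from the Python cryptography package (the class and its methods are mine, purely to show the shape of the design): each file gets its own key, stored in metadata the file system protects, and "erasing" the file means destroying a few dozen key bytes instead of scrubbing every data block.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class CryptoEraseFile:
        """Toy model of per-file encryption inside the file system.

        The per-file key would live in the file's metadata, itself
        protected (say, wrapped under a volume key).  Crypto-erase
        then reduces "erase on delete" to overwriting 32 bytes,
        no matter how large the file is.
        """

        def __init__(self):
            self.key = AESGCM.generate_key(bit_length=256)  # per-file key

        def write_block(self, plaintext):
            # Extra metadata room lets us keep a real nonce per block -
            # exactly what the driver layer's fixed block size forbids.
            nonce = os.urandom(12)
            return nonce, AESGCM(self.key).encrypt(nonce, plaintext, None)

        def read_block(self, nonce, ciphertext):
            return AESGCM(self.key).decrypt(nonce, ciphertext, None)

        def crypto_erase(self):
            # Destroying the key renders every block undecryptable.
            self.key = None

Here crypto_erase just drops a reference; a real file system would overwrite the key's metadata block - conveniently, a single block to which the erase-on-delete machinery above already applies.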

The hardware technologies available for persistent storage went through one revolution with flash memory, and are about to go through further revolutions that are even more significant.  SSDs were (are) the kind of hack that fits a new technology into old slots - like the huge electric motor running all the drive belts to individual tools in a factory, simply replacing the previous water wheel or steam engine.  We're only beginning to see flash used in "native" ways, rather than through a disk-emulation layer.  As we move forward with newer technologies, it would be good to build in appropriate security support - as John Denker has been proposing for SSDs.  This will require rethinking, and often discarding, some of the assumptions and architectural verities of the past.  It won't be easy:  note the Linux team's rejection of ZFS because it didn't fit the existing layering model.

                                                        -- Jerry


