[Cryptography] [Crypto-practicum] Justify the sequence of operations in CTR mode.

Jerry Leichter leichter at lrw.com
Tue Feb 23 16:17:58 EST 2016


>> Technology does advance, though.  When you were talking about
>> spinning rust, it was essential that any metadata associated with a
>> block be physically part of the block - putting something like an IV
>> off elsewhere in a metadata area would have destroyed performance....
> 
> The biggest problem with separated metadata is the atomic update
> problem.  What if you've updated the disk block, but not the
> authentication/integrity metadata?  Or vice versa?
This is an oversimplification.

1.  There's always *some* metadata that describes the current state of the file/object/whatever.  Sure, the actual data is in the block - but you have to have a pointer to it somewhere so you can find it.  Writing the data without the pointer makes it impossible to find; writing the pointer without the data means that anyone following the pointer will get garbage.

In practice, you may use transactional mechanisms to tie the two together; or you may do "careful writes", where you ensure the data is there before you write the pointer, so that the worst that can happen is a lost write, to be cleaned up later by an fsck-like file system cleaner.  In reality, at some level, the transactional mechanisms rely on "careful writes":  E.g., you have to write the commit log to stable storage before you can write the data.  (A sketch of this ordering appears at the end of this point.)

If you write the authentication/integrity metadata along with the pointer, your existing mechanisms handle the issue.  And if you're using transactional writes, you can generally handle adding another piece of data to each transaction with little problem.  So ... the problem is easier to solve than it appears (though of course it's *far* from free if you want to add it to an existing system).
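To make the careful-write ordering concrete, and to show the integrity tag riding along with the pointer, here's a minimal sketch (in Python, with everything invented for illustration - the file names, the 4K block size, the HMAC key, the record layout; a real file system does the same dance against its own journal and allocation structures):

    import hashlib
    import hmac
    import os
    import struct

    BLOCK_SIZE = 4096
    KEY = b"demo-key"   # stand-in for whatever key protects block integrity

    def careful_write(data_path, ptr_path, block_no, payload):
        """Write the data block first, then the pointer plus integrity tag."""
        payload = payload.ljust(BLOCK_SIZE, b"\0")

        # Step 1: the data block itself, forced to stable storage first.
        fd = os.open(data_path, os.O_RDWR | os.O_CREAT, 0o600)
        try:
            os.pwrite(fd, payload, block_no * BLOCK_SIZE)
            os.fsync(fd)        # data is durable before any pointer names it
        finally:
            os.close(fd)

        # Step 2: the pointer record, with the authentication tag riding
        # along in the same record, so both land together or not at all.
        tag = hmac.new(KEY, struct.pack(">Q", block_no) + payload,
                       hashlib.sha256).digest()
        record = struct.pack(">Q", block_no) + tag
        fd = os.open(ptr_path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
        try:
            os.write(fd, record)
            os.fsync(fd)
        finally:
            os.close(fd)

        # A crash between the two fsyncs loses only the pointer: the new
        # block is simply unreachable, and an fsck-style cleaner can
        # reclaim it later.

Note that nothing in step 2 is specific to the tag: it's just one more field in a record the existing mechanism was already committing as a unit.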

2.  We generally assume that a write to a disk block is atomic:  It either completes successfully (replacing all the old data with the new) or fails completely (leaving the old data unchanged).  Unfortunately, this isn't true.  I can't give a reference right now, but detailed studies of disk failure modes show that all kinds of bizarre failures can and do occur.  For example, partial overwrites can occur; usually the remainder of the block is zeroed.  Or data may be written correctly - to the wrong block.  These are very rare, but should come as no surprise:  There's more code running inside a typical disk these days than in some operating systems.  The disk drive makers haven't solved the problem of writing 100% reliable code any more than the rest of us have.  And that doesn't even consider hardware failures.  (A sketch of how reads typically catch such corruption appears at the end of this point.)

Given the size of modern disks and the amount of data they are called upon to read and write, these failures can't be ignored.  (The paper I have in mind had to do with whether existing file systems could recover from the actual bizarre errors that can be found in the field.  The authors did this by constructing models of the failures, of the file system algorithms, and of correctness conditions, and then running a theorem prover against them to try to find counter-examples.  It turned out that not one of the file systems they studied could recover from all of them.  As I recall, ZFS was one of the best - but it could be "led astray", too.)  So ... the problem is actually much harder than you expect *even if you keep all the relevant data together*.
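For what it's worth, the usual defense against the torn and misdirected writes described above is a per-block checksum that also covers the block's own address, verified on every read.  Here's a minimal sketch (again Python, with an invented layout - the last 32 bytes of each 4K block hold a SHA-256 over the block number plus the payload - not how any particular file system actually arranges things):

    import hashlib
    import struct

    BLOCK_SIZE = 4096
    PAYLOAD_SIZE = BLOCK_SIZE - 32      # trailing 32 bytes hold the checksum

    def seal(block_no, payload):
        """Build the on-disk image of a block: padded payload plus a
        checksum covering both the payload and the block's own number."""
        payload = payload.ljust(PAYLOAD_SIZE, b"\0")
        digest = hashlib.sha256(struct.pack(">Q", block_no) + payload).digest()
        return payload + digest

    def check(block_no, raw):
        """Verify a block read back from disk.  A torn write leaves payload
        and checksum inconsistent; a write that landed on the wrong block
        fails because its stored checksum binds a different block number."""
        payload, digest = raw[:PAYLOAD_SIZE], raw[PAYLOAD_SIZE:]
        expected = hashlib.sha256(struct.pack(">Q", block_no) + payload).digest()
        if digest != expected:
            raise IOError("block %d: torn or misdirected write detected" % block_no)
        return payload

Keeping the checksum next to the pointer in the parent block rather than inside the block itself - which is roughly what ZFS does - catches one more case: a write the drive silently dropped, leaving a stale block whose internal checksum is still perfectly self-consistent.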

                                                        -- Jerry


