OTP, was Re: data under one key, was Re: analysis and implementation of LRW

Travis H. travis+ml-cryptography at subspacefield.org
Mon Feb 5 07:39:35 EST 2007


On Sun, Feb 04, 2007 at 11:27:00PM -0500, Leichter, Jerry wrote:
> | 1) use a random key as large as the plaintext (one-time-pad)
> ...thus illustrating once again both the allure and the uselessness (in
> almost all situations) of one-time pads.

For long-term storage, you are correct: OTP at best gives you secret
splitting.  However, if people can get at your stored data, you have
an insider or poor security (network or OS).  Either way, this is not
necessarily a crypto problem.  The system should use conventional
crypto to deal with the data remanence problem, but others have
alleged this is bad or unnecessary or both; I haven't seen it proven
either way.  In any case, keeping the opponent off your systems is
less of a crypto problem than a simple access control problem.
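
To make the "at best secret splitting" point concrete: encrypting
stored data under a pad as long as the data just turns one secret
into two shares, each individually indistinguishable from noise.  A
minimal Python sketch (the function names are mine, and os.urandom
stands in for whatever pad source you actually trust):

    import os

    def split(secret):
        # Share 1 is a fresh random pad; share 2 is the XOR of the
        # secret with that pad.  Either share alone is uniformly
        # random and reveals nothing about the secret.
        share1 = os.urandom(len(secret))
        share2 = bytes(a ^ b for a, b in zip(secret, share1))
        return share1, share2

    def recombine(share1, share2):
        # XOR the shares back together to recover the secret.
        return bytes(a ^ b for a, b in zip(share1, share2))

    secret = b"long-term archival data"
    s1, s2 = split(secret)
    assert recombine(s1, s2) == secret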

My inference was that this data must be transmitted over some
not-very-secure channels, and so the link could be primed by
exchanging key material via registered mail, courier, or whatever
method the parties already trust for communicating paper documents
_now_, or whatever channel they would use for key material in any
other proposed system.  The advantage isn't magical so much as
practical: you don't have to transmit pad material every time you
wish to send a message.  You do have to store it securely (see
above).  You should compose it with a conventional system, for the
best of both worlds.

Of course any system can be used incorrectly; disclosing a key or
choosing a bad one breaks security in most systems.  So you already
have a requirement for unpredictability, secure storage, and
confidential transmission of key material (in the case of symmetric
crypto).  The OTP is the only "cipher" I know of against which there
has been no cryptanalytic success in over 70 years, and which offers
a proof. [1]

As an aside, it would be interesting to compare storage
capacity/density against networking speeds over time, to see whether
it is getting harder or easier to use OTP to secure a network link.
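
A crude version of that comparison, with figures that are
illustrative guesses rather than measurements:

    # How long does one couriered disk of pad material last on a
    # saturated link?  Both figures below are assumptions.
    pad_bytes = 500e9          # one ~500 GB disk of pad
    link_rate = 100e6 / 8      # 100 Mbit/s link, in bytes/second
    hours = pad_bytes / link_rate / 3600
    print("pad exhausted in %.1f hours at full link rate" % hours)
    # -> ~11 hours saturated; much longer at realistic utilization.
    # Whether this improves over time depends on whether disk
    # capacity or link speed is doubling faster.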

[1] Cipher meaning a discrete symbol-to-symbol encoding.  OTP's proof
does rely on a good RNG.  I am fully aware that unpredictability is
just as slippery a topic as resistance to cryptanalysis, both being
universal statements that a counterexample can disprove but that can
never be proved outright; that is an engineering or philosophical
problem, though.  By securely combining the pad with a CSPRNG you get
the least predictable of the pair.
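
A minimal sketch of that combination in Python; using SHAKE-256 as
the CSPRNG keystream is my choice for illustration, not anything
canonical:

    import os, hashlib

    def keystream(seed, n):
        # Expand a secret seed into n keystream bytes with an
        # extendable-output function (the CSPRNG of the sketch).
        return hashlib.shake_256(seed).digest(n)

    def encrypt(plaintext, pad, seed):
        # XOR with BOTH the one-time pad and the CSPRNG keystream;
        # the result is no more predictable than the better of the
        # two sources.  Never reuse pad bytes or the seed.
        ks = keystream(seed, len(plaintext))
        return bytes(p ^ a ^ b for p, a, b in zip(plaintext, pad, ks))

    msg  = b"attack at dawn"
    pad  = os.urandom(len(msg))   # pre-exchanged pad material
    seed = os.urandom(32)         # pre-exchanged CSPRNG seed
    ct   = encrypt(msg, pad, seed)
    assert encrypt(ct, pad, seed) == msg  # XOR is its own inverse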

Everyone in reliable computing understands that you don't want single
points of failure.  If someone proposed to deploy a system - any
system - that was supposed to stay up for 70 years, yet had no form
of backup or redundancy and no proof that it wouldn't wear out over
70 years (e.g. it has moving parts, transistors, etc.), they'd be
ridiculed.

And yet every time OTP comes up among cryptographers, the opposite
happens.

When it comes to analysis, absence of evidence is not evidence of
absence.

> Anyway ... while the question "how can we keep information secure for
> 70 years" has some theoretical interest, we have enough trouble knowing
> how to keep digital information *accessible* for even 20 years that it's
> hard to know where to reasonably start.

I think that any long-term data storage solution would have to accept
three things:

1) The shelf life is a complete unknown.  By the time we know it, we will
be using different media, so don't hold your breath.

2) The best way to ensure being able to read the data is to seal up a
separate instance of the hardware, and to use documented formats so
you know how to interpret the data.  Use some redundancy, too, with
tolerance for the kind of errors the media is expected to see.

3) Institutionalize a data refresh policy; have a procedure for
reading the old data off old media, correcting errors, and writing it
to new media (a sketch follows; see also the discussion below).
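
As a concrete (if oversimplified) illustration of item 3, a refresh
pass might look like the following; the manifest format and the
function names are my own invention:

    import hashlib, pathlib, shutil

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def refresh(old_root, new_root, manifest):
        # manifest: {relative path: expected sha256}, recorded when
        # the data was last written.  Verify each file before
        # migrating it, and flag corruption for recovery from
        # redundancy instead of silently copying bad bits forward.
        old, new = pathlib.Path(old_root), pathlib.Path(new_root)
        for rel, expected in manifest.items():
            if sha256(old / rel) != expected:
                print("CORRUPT, recover from redundancy:", rel)
                continue
            dst = new / rel
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(old / rel, dst)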

The trend seems to be that storage capacity is growing much faster
than I/O bandwidth, and there doesn't seem to be a fundamental
limitation looming in the near future, so the data is "cooling"
rapidly and will continue to do so (in storage jargon, temperature is
related to how often a given piece of data is read or written).
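
To put rough numbers on it (illustrative guesses for each era, not
measurements):

    # Time to read a whole drive end-to-end, two eras compared.
    for year, cap_gb, mb_per_s in [(1997, 4, 10), (2007, 500, 60)]:
        hours = cap_gb * 1024.0 / mb_per_s / 3600
        print("%d: ~%.1f hours to scan the full drive" % (year, hours))
    # -> 1997: ~0.1 h; 2007: ~2.4 h.  The capacity/bandwidth ratio
    #    keeps growing, which is exactly the "cooling" in question.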

Further, tape is virtually dead, and it looks like disk-to-disk is
the most pragmatic replacement.  That actually simplifies things; you
can migrate the data off disks before they near the end of their
service life, in an automated way (plug in new computer, transfer
data over a direct network connection, drink coffee).  Or even more
simply, stagger your primary and backup storage machines: halfway
through the MTTF of the primary's drives, bring up a new machine with
a new set of drives as the backup, do one backup, and swap roles.
Now your data refresh and your backups are handled by the same
mechanism.
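
In schedule form (the MTTF figure is an illustrative assumption):

    # Staggered rotation: every MTTF/2 years, copy the primary onto
    # a fresh machine, which takes over as primary; the old primary
    # becomes the backup, and the old backup (now at its MTTF) is
    # retired.  No drive serves past its MTTF.
    MTTF_YEARS = 6

    def rotation(horizon_years):
        primary, backup, n = "machine-1", "machine-0", 2
        for t in range(0, horizon_years, MTTF_YEARS // 2):
            print("year %2d: primary=%s  backup=%s" % (t, primary, backup))
            primary, backup = "machine-%d" % n, primary
            n += 1

    rotation(18)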

At least, that's what I'm doing.  YMMV.
-- 
The driving force behind innovation is sublimation.
-><- <URL:http://www.subspacefield.org/~travis/>
For a good time on my UBE blacklist, email john at subspacefield.org.