bounded storage model - why is R organized as 2-d array?

alex at alten.org
Thu Mar 9 02:10:58 EST 2006


> ----- Original Message -----
> From: "Travis H." <solinym at gmail.com>
> To: cryptography at metzdowd.com
> Subject: bounded storage model - why is R organized as 2-d array?
> Date: Thu, 2 Mar 2006 23:06:41 -0600
> 
> 
> Hey,
> 
> In Maurer's paper, which is the last link here on the following page,
> he proposes to use a public random "pad" to encrypt the plaintext
> based on bits selected by a key.  What I'm wondering is why he chose
> the strange construction for encryption; namely, that he uses an
> additive (mod 2) cipher where each plaintext bit is (apparently) XORed
> against K bits from the random pad.  He also uses a 2-d array
> structure, both of which appear kind of arbitrary to me.
> 
> http://www.crypto.ethz.ch/research/itc/samba/
> 

This is a subject that I'm painfully familiar with.  Dr. Maurer has 
slowly but surely over the last decade or so been putting together
a solid proof of a cipher very similar in construction to Dr. Atalla's 
proprietary Tristrata cipher (US patent #6088449).  The basic idea 
is that you do something like random1 XOR random2 = pad, then pad XOR
plaintext = ciphertext. If random1 and random2 are arrays of bits 
(n-bit random numbers) from a much larger amount of random material, 
then one can tune the probability of the pad repeating to be almost
zero.  Essentially you get an equation with 2 unknowns (pad and 
plaintext), which is pretty hard to solve if the pad is hard to 
determine.  The 2-D array simplifies getting the randomX's.  
Basically you treat the initial random material as a big array of N 
random bit strings.  Each randomX value is a strip taken at a random 
offset within one of the bit strings. Each strip is the same size (at TriStrata we 
typically used four 64-Kbyte strips, each from a 250 Kbyte bit 
string).  You XOR them together and get a "random" pad (ours was 64 
Kbytes long).  We were able to XOR about 32 Kbytes of plaintext into
ciphertext, before an inverse matrix attack became feasible.  Then 
a new pad had to be generated for the next 32 Kbytes.
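A minimal sketch of the strip-XOR construction just described, for concreteness. The sizes (four 64-Kbyte strips from 250-Kbyte strings, 32 Kbytes of plaintext per pad) come from the post; everything else (the names, the use of os.urandom as a stand-in for the shared public random material, and how offsets are chosen) is my own assumption, not the actual Tristrata design:

```python
import os

# Illustrative parameters taken from the post; the real Tristrata
# parameters beyond these are not public.
STRING_LEN = 250 * 1024    # each bit string: 250 Kbytes
STRIP_LEN = 64 * 1024      # each strip: 64 Kbytes
NUM_STRIPS = 4             # strips XORed together to form the pad
MAX_PLAINTEXT = 32 * 1024  # ~32 Kbytes per pad before attacks become feasible

def make_pad(bit_strings, offsets):
    """XOR one strip (chosen by its offset) from each bit string into a pad."""
    pad = bytearray(STRIP_LEN)
    for s, off in zip(bit_strings, offsets):
        strip = s[off:off + STRIP_LEN]
        for i in range(STRIP_LEN):
            pad[i] ^= strip[i]
    return bytes(pad)

def encrypt(plaintext, pad):
    """Vernam step: plaintext XOR pad = ciphertext (XOR is its own inverse)."""
    assert len(plaintext) <= MAX_PLAINTEXT
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# The "key" is just the set of strip offsets into the (public) random material.
strings = [os.urandom(STRING_LEN) for _ in range(NUM_STRIPS)]
offsets = [int.from_bytes(os.urandom(4), 'big') % (STRING_LEN - STRIP_LEN)
           for _ in range(NUM_STRIPS)]
pad = make_pad(strings, offsets)
msg = b"attack at dawn" * 100
ct = encrypt(msg, pad)
assert encrypt(ct, pad) == msg  # applying the same pad again decrypts
```

Note that a real implementation would XOR 32 or 64 bits at a time rather than byte-by-byte, which is exactly where the speed advantage discussed below comes from.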

The reason people like Dr. A or Dr. M are interested in doing this
is because you can get super-fast ciphers or get some sort of 
fundamental proof about its strength that is applicable to all types
of ciphers.  The speed, from a software engineering perspective, 
results from the classic tradeoff between memory lookup and execution 
loop complexity. If one can get down to a couple of XORs and memory
copies (per 32- or 64-bit word) using ordinary microprocessor
instructions, then it will outperform AES by a couple of orders
of magnitude. This is very useful for encrypting things like video 
streams without an expensive hardware cryptographic accelerator card.

Dr. M showed in an earlier paper that if one uses enough randomX 
values, you can get arbitrarily close to generating a truly random
pad each time you need one to encrypt some plaintext (but at the cost of N XORs!).
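One simple way to see the intuition behind that result is the standard piling-up lemma: if each source bit has bias eps (i.e. equals 1 with probability 0.5 + eps), the XOR of n independent such bits has bias 2^(n-1) * eps^n, which shrinks geometrically in n. This is only an illustrative aside of mine, not Maurer's actual bounded-storage argument:

```python
# Piling-up lemma: bias of the XOR of n independent bits, each with bias eps.
# Used here only to illustrate why more XORed strips means a more uniform pad.
def xor_bias(eps, n):
    """|Pr[XOR of n bits = 1] - 1/2|, each bit having bias eps."""
    return 2 ** (n - 1) * eps ** n

eps = 0.1  # each source bit slightly biased: Pr[1] = 0.6
biases = [xor_bias(eps, n) for n in range(1, 6)]
# Each extra XORed value shrinks the residual bias geometrically,
# so the combined pad approaches truly random as n grows.
for prev, cur in zip(biases, biases[1:]):
    assert cur < prev
```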
Anyway, the big negatives are that with this type of Vernam cipher
the entropy decays really fast (especially if the initial big source
of random bits is publicly known), the pad management is really hairy, 
and generating the large amount of initial random material is very 
slow. Huge numbers of pads have to be generated and distributed all 
the time, and a great deal of care has to be taken to mitigate the 
cipher's decaying strength and to avoid several types of attacks on 
the pad generation.

It is of great personal interest to me to watch Dr. M slowly prove 
that the underlying mathematics of Dr. A's Tristrata cipher may not 
have been snake oil after all.

- Alex


---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com


