The role of compression in cryptosystems

John Denker jsd at research.att.com
Tue May 29 08:40:30 EDT 2001


In a discussion of the role of compression in IPsec,
at 06:54 PM 5/28/01 -0400, Henry Spencer wrote:
 >
>The CPU load is quite substantial, and although I don't have recent
>timings, I wouldn't recommend compression unless you have a slow line.

Hmmmm.  My recommendation would be to shift the burden of proof:  I would 
leave compression turned on unless there is a good reason to turn it off.

This recommendation is based on the following general principle:

         Any cipher applied to well-compressed data is much
         harder to cryptanalyze than the same cipher applied to
         uncompressed plaintext.

As an illustration, a block of well-compressed text encrypted by single-DES 
would be secure against EFF's DES cracker, whereas a block of uncompressed 
text would not:  the cracker recognizes the correct key by spotting plausible 
plaintext, and well-compressed data gives it almost nothing to recognize.
http://www.eff.org/descracker/

This principle is based on information theory, specifically the idea of 
unicity distance:  the amount of ciphertext an adversary needs before the 
key is uniquely determined, which grows as the redundancy of the plaintext 
shrinks.  Note that these days the security of a typical crypto scheme is 
explained in terms of _complexity theory_ (the assertion that cryptanalysis 
is computationally infeasible), whereas information theory is a completely 
different kettle of fish.
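
To put rough numbers on it -- back-of-the-envelope figures, not 
measurements -- take Shannon's estimate of unicity distance, U = H(K)/D, 
where H(K) is the key entropy and D is the redundancy per character of the 
plaintext.  A few lines of Python make the comparison vivid (the redundancy 
figures below are conventional textbook guesses, not measured values):

    # Back-of-the-envelope unicity distance, U = H(K) / D.
    # The redundancy figures are illustrative assumptions.

    def unicity_distance(key_bits, redundancy_bits_per_char):
        """Ciphertext characters needed, on average, before the key
        is uniquely determined (Shannon's estimate)."""
        return key_bits / redundancy_bits_per_char

    DES_KEY_BITS = 56
    ENGLISH_REDUNDANCY = 3.2      # bits/char, a commonly quoted figure for English
    COMPRESSED_REDUNDANCY = 0.02  # assumed residual redundancy after good compression

    print(unicity_distance(DES_KEY_BITS, ENGLISH_REDUNDANCY))     # ~17 characters
    print(unicity_distance(DES_KEY_BITS, COMPRESSED_REDUNDANCY))  # ~2800 characters

With those figures, ordinary English under single-DES is pinned down after a 
couple dozen characters of ciphertext, while well-compressed data stretches 
that into the thousands; a perfect compressor (zero redundancy) would push it 
to infinity, but as noted below, no compressor is perfect.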

And while we're on the subject:  the module that compresses the data should 
add a goodly percentage of random cryptologic nulls (no, I don't mean ASCII 
NULs).  The decompressor of course removes them.
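
To make that concrete, here is a toy sketch in Python -- my own illustration 
of the idea, not a proposal for any particular wire format.  The compressor 
emits length-prefixed frames, each flagged either as real compressed data or 
as random filler, and the decompressor simply drops the filler frames:

    import os
    import random
    import struct
    import zlib

    CHUNK = 256           # bytes of compressed data per frame
    NULL_FRACTION = 0.25  # rough fraction of filler frames to interleave

    def pack_with_nulls(plaintext: bytes) -> bytes:
        """Compress, then interleave random 'cryptologic null' frames.
        Toy frame format: 1-byte flag (1=data, 0=null) + 2-byte length + payload."""
        out = bytearray()
        data = zlib.compress(plaintext)
        for i in range(0, len(data), CHUNK):
            if random.random() < NULL_FRACTION:
                filler = os.urandom(random.randint(16, CHUNK))
                out += struct.pack(">BH", 0, len(filler)) + filler
            piece = data[i:i + CHUNK]
            out += struct.pack(">BH", 1, len(piece)) + piece
        return bytes(out)

    def unpack_dropping_nulls(blob: bytes) -> bytes:
        """Strip the null frames and decompress what remains."""
        data = bytearray()
        i = 0
        while i < len(blob):
            flag, length = struct.unpack_from(">BH", blob, i)
            i += 3
            if flag == 1:
                data += blob[i:i + length]
            i += length
        return zlib.decompress(bytes(data))

In a real system the decision of when and how much filler to insert should 
itself come from a cryptographic RNG, and the framing overhead has to be 
weighed against the bandwidth the compression was supposed to save.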


To decide whether or not this principle is important, we can consider three 
cases:
  a) In the case where the cipher is really weak and the adversary has lots 
of ciphertext to work with, pre-encipherment precautions (such as 
compressing + adding nulls) won't save the day.  No compressor is perfect, 
so the unicity distance is never infinite.
  b) There is a vast middle ground where such precautions can increase the 
adversary's workload by a large factor (thousands) while increasing the 
good guys' workload only slightly.
  c) In the case where the cipher is really strong, further strengthening 
is just lily-gilding.

We all !hope! case (c) applies to us.  But the history files are bulging 
with stories of people who thought they were in situation (c) but weren't.

In general, the following seems like a good design discipline:
  1) Assume the cipher is imperfect.  Pretend you're using single-DES or 
some such.
  2) Design the rest of the cryptosystem to survive such imperfections.
  3) Then plug a strong cipher into that system.

Specifically:  My instinct says that any serious cryptosystem should 
compress the input, then add nulls, then encipher.
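
As a sketch of that ordering -- reusing the toy pack_with_nulls / 
unpack_dropping_nulls helpers above, and standing in AES-CTR from the Python 
"cryptography" package (a recent version) for whatever cipher the real 
system would use; the key is assumed to be 16, 24, or 32 bytes:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def seal(plaintext: bytes, key: bytes) -> bytes:
        """Compress first, pad with nulls, encipher last -- in that order."""
        padded = pack_with_nulls(plaintext)   # toy helper from above
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return nonce + enc.update(padded) + enc.finalize()

    def unseal(sealed: bytes, key: bytes) -> bytes:
        nonce, body = sealed[:16], sealed[16:]
        dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
        return unpack_dropping_nulls(dec.update(body) + dec.finalize())

Doing it in the other order is useless:  good ciphertext is incompressible, 
so a compressor run after the cipher accomplishes nothing.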

Another way of saying the same thing:  In the usual discussions, some 
systems are described 100% in terms of information theory (one-time pads), 
while others are described 100% in terms of complexity theory (RSA etc.).  I 
haven't seen a good practical discussion of how to combine the two viewpoints.

Comments, anyone?



