[Cryptography] Question re: Initialization Vector for AES Counter Mode…

Ray Dillinger bear at sonic.net
Sat Apr 29 05:25:18 EDT 2017



On 04/26/2017 12:42 PM, Andrew Donoho wrote:
> Gentlefolk,
> 
> 
> 
> I am writing an app that encrypts files.  Based upon private
> communications with members of this list and other crypto
> recommendation documents, the common advice is that I should use AES
> counter mode.  What should my IV be?  I’ve read one IETF standard
> that puts random bytes in the upper 64 bits and 1 in the lower 64
> bits.  My naïve view is that I should just choose the same number of
> random bytes, 16, for the IV as I do for CBC mode.
> 
> This is for a file whose length is limited, by the platform API, to
> an unsigned long long size (i.e. 64 bits).  My concern is unsigned
> overflow of the IV.  In practice, this is only ever a problem when
> the top 68 bits of the IV are all 1s.  I can easily test for this
> situation and just ask the random number system for a new 128 bits.
> Of course, this is an infinitesimal reduction in the number of values
> available for an IV (2^128 - 2^60 or thereabouts).
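
For concreteness, the rejection scheme described above might look like
this (a sketch only; the function name is mine, and os.urandom stands
in for whatever CSPRNG the platform provides):

    import os

    def pick_ctr_iv() -> bytes:
        # Random 128-bit initial counter block.  Reject values whose
        # top 68 bits are all ones, so the counter cannot wrap while
        # encrypting a file of at most 2**64 bytes (2**60 AES blocks).
        while True:
            iv = os.urandom(16)
            # Top 68 bits all ones <=> first 8 bytes are 0xFF and the
            # high nibble of the 9th byte is 0xF.
            if iv[:8] == b"\xff" * 8 and iv[8] >= 0xF0:
                continue  # counter could overflow; draw again
            return iv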

Mmmm.  It is my opinion that counter mode is a mistake (and stream
ciphers in general are).  We've discussed the limitations of stream
ciphers and counter modes before on this list.  Counter mode allows
opponents who know what is stored at a given location to alter it by
flipping ciphertext bits, in such a way that the decrypted message
will be their chosen text rather than the text that was originally
written there.
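
A toy illustration: in counter mode the ciphertext is just plaintext
XOR keystream, so an attacker who knows the plaintext at some offset
can turn it into any text of his choosing without ever touching the
key.  (Random bytes stand in for the AES-CTR keystream here; only the
XOR structure matters.)

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    keystream  = os.urandom(16)          # stands in for E_k(counter)
    plaintext  = b"PAY ALICE  $0100"
    ciphertext = xor(plaintext, keystream)

    # The attacker knows the plaintext at this location, not the key:
    desired  = b"PAY MALLORY$9999"
    tampered = xor(ciphertext, xor(plaintext, desired))

    assert xor(tampered, keystream) == desired  # decrypts as his text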

There are ways to protect against this (checksums, hashes, etc.), but
all of those ways add complexity to the implementation.  Protecting
yourself from bitflip attacks with hashing, in particular, destroys one
of the most valuable properties of counter modes, because with a hash,
rewriting any block requires the entire document (or disk sector) to be
hashed again.

Complexity is to be avoided where possible in security software:  it's
directly proportional to the number of opportunities you have to make a
mistake.

A reasonable protection from bitflipping, if you want something that
works like a counter mode, doubles the work of encryption/decryption:
separately encrypt the plaintext block and the counter, and let the
ciphertext be the XOR of the two results.  To decrypt, one encrypts
the counter, XORs the result with the ciphertext, then decrypts that
result to get the plaintext.
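
A minimal per-block sketch of that construction, assuming the
pyca/cryptography package is available for raw AES block operations;
the function names, and the explicit second key anticipating the next
paragraph, are mine:

    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    def _aes_block(key: bytes, block: bytes,
                   decrypt: bool = False) -> bytes:
        # One raw 16-byte AES block operation.  ECB is used strictly
        # as the single-block primitive, never over multi-block data.
        cipher = Cipher(algorithms.AES(key), modes.ECB())
        op = cipher.decryptor() if decrypt else cipher.encryptor()
        return op.update(block) + op.finalize()

    def encrypt_block(k_data: bytes, k_ctr: bytes,
                      counter: int, pt: bytes) -> bytes:
        # C_i = E_kdata(P_i) XOR E_kctr(counter_i), pt exactly 16
        # bytes.  Pass k_data == k_ctr for the single-key version.
        mask = _aes_block(k_ctr, counter.to_bytes(16, "big"))
        return bytes(a ^ b
                     for a, b in zip(_aes_block(k_data, pt), mask))

    def decrypt_block(k_data: bytes, k_ctr: bytes,
                      counter: int, ct: bytes) -> bytes:
        # Recover E_kdata(P_i) by re-creating the mask, then decrypt.
        mask = _aes_block(k_ctr, counter.to_bytes(16, "big"))
        return _aes_block(k_data,
                          bytes(a ^ b for a, b in zip(ct, mask)),
                          decrypt=True)

Note that flipping a ciphertext bit now garbles the whole decrypted
block instead of flipping the corresponding plaintext bit.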

Ideally one ought to encrypt plaintext and counter using uncorrelated
keys (effectively doubling the key length), but that ought only to
matter in cases where the cipher in use is somewhat broken anyway.
AES, and any other standard cipher that a knowledgeable person might
recommend in good faith, is not broken in that way, or at least not
yet.

Doubling the work of encryption/decryption is a compute cost that many
are unwilling to pay, but it achieves the desirable property that a
known plaintext cannot be easily manipulated into a chosen modified
plaintext, while retaining the desirable properties of counter modes:
they do not require reading and decrypting another block to make sense
of the current block, and they do not require a document to be
rehashed when a change is made to the content of a single block.
(Though you may need to keep document hashes anyway for other reasons,
and in that case you would still need to rehash...)
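
For instance, with the sketch above (hypothetical usage), rewriting
block 2 touches only block 2; its neighbors never need to be re-read,
re-encrypted, or re-hashed:

    import os

    k_data, k_ctr = os.urandom(16), os.urandom(16)
    blocks = [encrypt_block(k_data, k_ctr, i, os.urandom(16))
              for i in range(4)]

    # Rewrite block 2 alone; blocks 0, 1 and 3 are untouched.
    blocks[2] = encrypt_block(k_data, k_ctr, 2, b"new 16-byte data")
    assert decrypt_block(k_data, k_ctr, 2,
                         blocks[2]) == b"new 16-byte data"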

Otherwise secure modes don't really allow random access; they all
depend on the previous block, and rewriting any block usually means
it's necessary to rewrite the remainder of the file or re-hash the
entire file (or the disk sector, depending on what level the
encryption works at) to protect data integrity.

> 	The bigger question is, I think, that the above problematic IV has lower entropy than I think it should. Perhaps I should be putting my random IV bytes through some kind of entropy test before using them? (Picking the IV is a rare event and much faster than writing the data to the file. Hence, testing before using is totally practical.) This, by design, reduces the pool of values from which the IV can be chosen, albeit in a non-deterministic fashion. Any recommendations of which entropy test to use?

Entropy tests only measure *anticipated* departures from perfect
entropy, such as statistically correlated bits at given offsets, bits
correlated with anticipated sequences, etc.  Given something that
departs from perfect entropy in any unanticipated way, a given entropy
test is useless.  It is possibly worse than useless, since it can give
a false sense of security.

If the attacker has partial knowledge of the IV (say, a restriction to
the members of a 16-bit-enumerable set), a departure from perfect
entropy that no entropy test could detect in terms of statistical
correlations is simple for the attacker to discover by checking each
possibility.  This would give him 65536 possible values for the
unmasked ciphertext, meaning you wouldn't get very much of even the
limited benefit of having used a counter mode.  Any entropy test
written without that specific knowledge would never detect any
departure from perfect entropy in that sequence.
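
To make that concrete, here is a toy model of such partial knowledge:
suppose the 16-byte IVs secretly come from a fixed pool of 2^16
values.  A naive byte-frequency test sees nothing wrong, while an
attacker who knows the pool simply tries all 65536 members:

    import collections
    import os

    # Fixed pool of 2**16 candidate IVs, each individually random.
    iv_pool = [os.urandom(16) for _ in range(65536)]

    # 4096 IVs "chosen at random" -- but only ever from the pool.
    sample = b"".join(iv_pool[int.from_bytes(os.urandom(2), "big")]
                      for _ in range(4096))

    # Byte frequencies come out nearly uniform (~256 of each value),
    # so the statistical test raises no red flag even though only
    # 2**16 distinct IVs can ever occur.
    counts = collections.Counter(sample)
    print(min(counts.values()), max(counts.values()))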

				Bear

