[Cryptography] Question re: Initialization Vector for AES Counter Mode…

Jerry Leichter leichter at lrw.com
Wed Apr 26 19:49:45 EDT 2017


> 	I am writing an app that encrypts files. Based upon private communications with members of this list and other crypto recommendation documents, the common advice is that I should use AES counter mode. What should my IV be?
Counter mode is a stream cipher.  It relies on the assumption that the value of AES(K,n) is unpredictable given the values of AES(K,n-1), AES(K,n-2), and so on, down through any number m of previous values, back to AES(K,n-m).  (Of course, there has to be *some* bound on m - if you let it be 2^128, you've given out the value of AES(K,x) for every x!  In practice, m=2^32 is certainly safe; probably m=2^64 is too, but I don't know what the current recommendations are.)

Note that nothing here depends on the "first" value - the IV - which would be n-m.  It really makes no difference what you use.  Always starting at 0 is as good as anything else.
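
To make the mechanics concrete, here's a minimal sketch of counter mode built directly on the raw block cipher, using the pyca/cryptography package's AES in ECB mode as the primitive AES(K, .).  The function name and the start-at-0 counter are illustrative choices on my part, not anything from a standard:

    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr_encrypt(key: bytes, plaintext: bytes, start: int = 0) -> bytes:
        # Raw AES-ECB gives us the block primitive AES(K, .); CTR is
        # just XOR of the plaintext with AES(K, n) for successive n.
        aes = Cipher(algorithms.AES(key), modes.ECB(),
                     backend=default_backend()).encryptor()
        out = bytearray()
        for i in range(0, len(plaintext), 16):
            n = (start + i // 16) % (1 << 128)
            keystream = aes.update(n.to_bytes(16, "big"))
            out.extend(c ^ k for c, k in zip(plaintext[i:i + 16], keystream))
        return bytes(out)

    key = os.urandom(32)                     # a fresh, never-reused key
    ct = ctr_encrypt(key, b"attack at dawn")
    assert ctr_encrypt(key, ct) == b"attack at dawn"  # decryption = encryption

Decryption is the same operation - XORing the same keystream back out - which is another way of seeing that this is a pure stream cipher.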

> I’ve read one IETF standard that puts random bytes in the upper 64 bits and 1 in the lower 64 bits.
So why this?  Remember, counter mode is a stream cipher - so it also depends on another essential property:  key uniqueness.  If you ever encrypt two messages with the same key (and the same IV), you leak information about both messages.  That *should* never happen ... keys in this mode *must* be unique.  But guarding against it anyway is probably worthwhile protection against bugs and operational problems.  If you choose your IV in the recommended way, then even if a key gets reused, you're protected, as the sequences generated will be independent of each other.  In fact, if you always re-key after no more than 2^64 blocks, even using just a single key, you won't see a duplicate (K, n) pair.  (Well, of course, two runs could happen to generate the same top 64 bits - something that approaches a 50% chance after 2^32 tries.)
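
For the curious, that parenthetical is the usual birthday bound.  A quick sketch of the arithmetic (mine, not from the post): with k random 64-bit prefixes, P(collision) ~ 1 - exp(-k*(k-1)/2^65), which is about 0.39 at k = 2^32 and crosses 1/2 shortly after:

    import math

    # Birthday approximation for collisions among k random 64-bit prefixes.
    for log_k in (16, 32, 33):
        k = 2 ** log_k
        p = 1 - math.exp(-k * (k - 1) / 2 ** 65)
        print(f"k = 2^{log_k}: P(collision) ~ {p:.3f}")  # ~0.000, ~0.393, ~0.865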

*Something* has to go in the bottom 64 bits.  It might as well be a constant (so that you get a full "cycle" of 2^64 counter values before an increment could carry a value generated under one top-64-bit prefix into one belonging to a different prefix), but it makes no difference at all what you use.  1 is as good as anything else.
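
Concretely, building that initial counter block is a one-liner.  A sketch, with a helper name of my own invention:

    import os
    import struct

    def initial_counter_block() -> bytes:
        # Random top 64 bits, the constant 1 in the bottom 64 bits,
        # following the IETF-style layout described above.
        return os.urandom(8) + struct.pack(">Q", 1)

    iv = initial_counter_block()
    assert len(iv) == 16 and iv[8:] == (1).to_bytes(8, "big")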

> My naïve view is that I should just choose the same number of random bytes, 16, for the IV as I do for a CBC mode.
Assuming that K is properly chosen (so that it never repeats), this adds nothing to the security.  What it does for the repeated K case is complicated to compute (at least for me, right now).  But it's certainly no better than the recommended technique.

> This is for a file whose length is limited, by the platform API, to an unsigned long long size (i.e. 64 bits). My concern is unsigned overflow of the IV.
Why do you think that matters?  All values are equivalent: AES treats the counter block as an opaque 128-bit input, so its arithmetic value - wrapped or not - has no security significance.  In fact, one of the advantages of the recommended technique is that you can generate your 2^64 values using 64-bit arithmetic, as there will be no carries out into the top 64 bits.  When the value of the bottom 64 bits wraps back to 0, you've exhausted the counter space under this key and should re-key.
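
A sketch of that bookkeeping, assuming the same 64/64 split (the class and method names are mine): keep the 64-bit block counter separate from the fixed prefix, so a carry can never reach the top half, and treat a wrap back to 0 as the re-key signal:

    import os
    import struct

    class CtrCounter:
        def __init__(self) -> None:
            self.prefix = os.urandom(8)    # fixed top 64 bits for this key
            self.counter = 1               # bottom 64 bits, starting at 1

        def next_block(self) -> bytes:
            if self.counter == 0:
                # The bottom 64 bits wrapped: counter space under this
                # key is exhausted, so re-key before continuing.
                raise RuntimeError("64-bit counter wrapped: re-key")
            block = self.prefix + struct.pack(">Q", self.counter)
            # 64-bit arithmetic only - a carry can never touch the prefix.
            self.counter = (self.counter + 1) & 0xFFFFFFFFFFFFFFFF
            return block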

> In practice, this is only ever a problem when the top 68 bits of the IV are all 1s. I can easily test for this situation and just ask the random number system for a new 128 bits. Of course, this is an infinitesimal reduction in the numbers available for an IV (2^128 - 2^60 or thereabouts).
> 
> 	The bigger question is, I think, that the above problematic IV has lower entropy than I think it should.
The entropy of the IV is irrelevant if K is properly chosen, and not the right measure if K is *not* properly chosen.
                                                        -- Jerry



