[Cryptography] The cost of exhaustive search for crypto attacks

Ron Garret ron at flownet.com
Fri Jul 29 13:55:49 EDT 2016


On Jul 26, 2016, at 10:01 AM, Arnold Reinhold <agr at me.com> wrote:

> In a previous post ... I attempted to show that for a fixed size block cipher there exists a finite (tho entirely impractical) algorithm for finding all possible attacks and that therefore the question is not undecidable. 

On a lark I decided to actually do this calculation.  This is not intended to be a rigorous analysis, just a rough back-of-the-envelope exercise, mainly for fun.

Deciding what constitutes an effective attack is something of a judgement call.  In the literature, attacks get attention if they reduce the key space at all, even if it’s just by a few bits.  But what really matters is not the extent to which the search space is reduced, but rather the absolute size of the resulting search space.  An attack that reduces the search space of a 256-bit cipher by 100 bits is still less effective than an attack that reduces the search space of a 128-bit cipher by 1 bit.  So instead of focusing on the amount of reduction, let us simply pick a number k such that a search space of <2^k is insecure, and a search space of >=2^k is secure.
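
To make the threshold concrete, here is a toy version in Python (the predicate and the choice of k = 128 are mine, purely for illustration):

    # Toy security predicate: a cipher is "secure" iff the best known
    # attack still leaves more than 2^k candidate keys to search.
    def is_secure(remaining_keyspace_bits, k):
        return remaining_keyspace_bits > k

    k = 128  # arbitrary illustrative threshold
    print(is_secure(256 - 100, k))   # True:  2^156 keys left
    print(is_secure(128 - 1, k))     # False: 2^127 keys left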

Let us further assume that we can ascertain whether or not a particular algorithm reduces a cipher’s search space simply by running it for 2^k steps.  This is a questionable assumption, but it errs on the conservative side.  It essentially means that we evaluate a candidate attack by running it on a single ciphertext and examining the result.  In reality, evaluating a candidate attack is almost certainly going to be more expensive than that.
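
A minimal sketch of that evaluation model in Python; the candidate interface and helper names are placeholders of my own, not anything standardized:

    # Optimistically decide whether a candidate algorithm breaks the cipher:
    # drive it for at most 2**k steps on a single ciphertext and see whether
    # it ever produces the right key.  'candidate.step' and 'check_key' are
    # placeholder names, just to make the 2^k step budget explicit.
    def breaks_cipher(candidate, ciphertext, check_key, k):
        for _ in range(2 ** k):
            guess = candidate.step(ciphertext)   # one step; may return None
            if guess is not None and check_key(guess):
                return True
        return False   # step budget exhausted without recovering the key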

In order to prove that no attack exists by exhaustive search we therefore need to enumerate all algorithms and group them into equivalence classes according to their behavior in <=2^k steps.  The only way to do that is to enumerate all Turing Machines with <=2^k states.  The reason for this is that TM equivalence is undecidable (the halting problem reduces to it), so the only way to ensure that we haven’t missed an attack is to enumerate all of the possibilities.  We can stop at 2^k states because no TM can possibly visit more than that number of distinct states in <=2^k steps.

The number of machines with <=2^k states is at least 2^(2^k) (the exact count depends on the tape alphabet and the transition encoding, but 2^n for n = 2^k states is a comfortable lower bound, and a lower bound is all we need to show infeasibility).  Then we need to run each of those machines for 2^k steps, but that factor is lost in the noise and can safely be ignored.
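
A quick way to see why the per-machine factor of 2^k disappears is to work in log2; a back-of-the-envelope Python version:

    from math import log2

    # Work, in bits (log2), to run every TM with <= 2^k states for 2^k steps.
    def total_work_bits(k):
        machines_bits = 2 ** k   # log2 of 2^(2^k) machines
        steps_bits = k           # log2 of 2^k steps per machine
        return machines_bits + steps_bits

    for k in (6, 10, 20):
        print(k, total_work_bits(k))   # 6 -> 70, 10 -> 1034, 20 -> 1048596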

To calculate the value of k that we can handle in this universe we need to compute the computational capacity of the universe.  There are various ways to do this.  The most rigorous calculation I know of is this one:

http://arxiv.org/abs/quant-ph/9908043

But that’s a calculation of the upper bound on the capacity of a computer weighing one kilogram under all kinds of realistic physical constraints.  We want an upper bound on the computational capacity of the entire universe, so let’s do a conservative calculation of our own: the number of elementary particles in the universe is estimated to be about 10^80.  (Wikipedia gives the number as 10^86, but most of those are neutrinos.  It doesn’t matter.  You can use 10^100 if you like, it won’t significantly affect the result.)  Let’s suppose that each of those elementary particles can somehow be made into a computer, and that each of those computers can enumerate states at the Planck frequency, ~10^43 Hz.  The current age of the universe is 13 billion years, or about 10^18 seconds.  Estimates of the time until the heat death of the universe are a little harder to come by; it depends on what assumptions you make about what “heat death” actually means, but let’s just pick 10^100 seconds.  Multiply all those together (10^80 computers x 10^43 states per second x 10^100 seconds) and you get about 10^223, or roughly 2^741 total states that can be enumerated in our universe.  Let’s round up to 2^1024 to make the math easier (what’s a few orders of magnitude among friends?).
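
The same estimate in a few lines of Python, using the deliberately generous figures above:

    from math import log10, log2

    particles   = 1e80    # elementary particles, each turned into a computer
    planck_rate = 1e43    # state transitions per second, per particle
    lifetime    = 1e100   # seconds until (roughly) heat death

    total_states = particles * planck_rate * lifetime
    print(log10(total_states))   # ~223
    print(log2(total_states))    # ~741 bits; round up to 2^1024 to be generous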

So we need 2^(2^k) <= 2^1024, which means 2^k <= 1024, i.e. k <= log-base-2(1024) = 10.

And there you have it.  If you start right now, then by the time the last black hole evaporates due to Hawking radiation you might be able to prove by exhaustive search that a particular cipher does not have an attack that reduces its search space to less than 10 bits.  Anything more than that is hopeless.

It’s interesting to compare this to what one might realistically achieve.  A typical contemporary computer runs at ~10^9 Hz.  So a single computer running for (say) a few years (10^8 seconds) could enumerate 10^17, or about 2^56, states.  Since 2^(2^5) = 2^32 < 2^56 < 2^64 = 2^(2^6), doing an exhaustive search for attacks with k < 6 is potentially achievable.  It’s interesting that the gap between what is achievable (k < 6) and what is not (k >= 10) is quite narrow.  This is due to the hyper-exponential cost of enumerating algorithms that run in 2^k steps.
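
The same arithmetic for the single realistic machine:

    from math import log2

    total_steps = 1e9 * 1e8            # ~1 GHz for ~10^8 seconds
    print(log2(total_steps))           # ~56.5 bits of work available
    # Need 2^(2^k) <= 2^56, i.e. 2^k <= 56, i.e. k <= log2(56):
    print(log2(log2(total_steps)))     # ~5.8, so k < 6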

rg


