[Cryptography] State of sin (was Re: What to put in a new cryptography course)

Arnold Reinhold agr at me.com
Tue Jul 12 07:37:36 EDT 2016


> On Jul 11, 2016, at 6:38 PM, Thierry Moreau <thierry.moreau at connotech.com> wrote:
> 
> On 11/07/16 06:11 PM, Arnold Reinhold wrote:
>> 
>> The lack of mathematical proofs for the security of cryptographic primitives is a reality with which the cryptographic community is perhaps too comfortable. [...]
>> 
>> A recent example of this is the Logjam attack, for which there was no new mathematics developed, merely a realization that most of the work needed in the best attack on D-H in the group of integers modulo a prime p depended only on p, not on the group element whose logarithm was to be computed. Since many protocols use the same p for all D-H exchanges, attacks became more feasible than previously thought. The assertion "D-H is hard" omitted that bit of fine print. What other overlooked technicalities are out there behind statements that "X is assumed to be hard"?
>> 
> 
> Aha! You may need a different example, or a different explanation:
> 
> A prudent recommendation in Section 5.2 of the original Station-To-Station Protocol publication ([1]) anticipated the very root cause of the major Logjam vulnerability.
> 
> [1] Whitfield Diffie, Paul C. van Oorschot, and Michael J. Wiener, "Authentication and Authenticated Key Exchanges", Designs, Codes and Cryptography, 2, 107-125 (1992) (received by the editor Nov. 22, 1991; revised Mar. 6, 1992), available as Appendix B to US patent 5,724,425.
> 
> Hence the "realization" was not new.
> 
> Maybe the cryptographic community needs to listen more carefully to the theoretical contributions being relied upon in deployed crypto schemes.
> 

Thanks for the interesting reference. I've added it to the relevant Wikipedia articles. But I think this case still supports my argument. A valid theorem on the security of D-H would have to include a limitation on the reuse of p, along with other limitations, such as choosing p to avoid Pohlig-Hellman attacks. All the limitations would then be in one place instead of scattered throughout the literature.
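
To make that concrete, here is a minimal sketch in Python of the fine print such a theorem would force into one place. The function name and the idea of a registry of overused primes are my own illustration, not anyone's standard:

    from sympy import isprime

    # Sketch: parameter checks a hypothetical D-H security theorem would
    # make explicit. Names and thresholds are illustrative only.
    def dh_parameter_checks(p, g):
        # Pohlig-Hellman: p - 1 must have a large prime factor. The simplest
        # sufficient condition is that p is a safe prime: p = 2q + 1, q prime.
        q = (p - 1) // 2
        if not (isprime(p) and isprime(q)):
            return False
        # Logjam: the expensive part of the index-calculus attack depends
        # only on p, so one precomputation on a widely shared p breaks many
        # sessions. A real theorem would bound reuse of p; code can at least
        # refuse primes known to be baked into many implementations.
        WIDELY_SHARED_PRIMES = set()  # would list, e.g., common fixed 1024-bit primes
        if p in WIDELY_SHARED_PRIMES:
            return False
        return 1 < g < p - 1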

Even the mention of the prime-reuse vulnerability in Section 5.2 of the Station-To-Station Protocol publication seems to be an afterthought. Later in Section 5.2 the authors note the importance of including p and the group generator in public-key certificates, but the example they give to justify it is downright silly, when avoiding prime reuse would have been a far more convincing argument.

Perhaps we need a body of pseudo-theorems: clear statements of exactly what we are assuming about each of our cryptographic primitives, to be updated as new limitations are found.
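
For example, such a pseudo-theorem for D-H might read something like this (my wording, purely illustrative):

    Pseudo-theorem (D-H over the integers mod p, informal): computing g^xy
    from (p, g, g^x, g^y) is assumed infeasible provided that (a) p is a
    safe prime of at least 2048 bits, (b) g generates a large prime-order
    subgroup, (c) p is not shared so widely that a one-time precomputation
    on p amortizes over many targets, and (d) the exponents x and y are
    chosen uniformly at random and never reused.
    Revision history: clause (c) added after Logjam (2015).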

> On Jul 11, 2016, at 2:23 PM, Ron Garret <ron at flownet.com> wrote:
> 
>> 
>> The lack of mathematical proofs for the security of cryptographic primitives is a reality with which the cryptographic community is perhaps too comfortable.
> 
> Unless someone manages to prove that P != NP we have no other choice but to make our peace with this.

P vs. NP has nothing to do with cryptography. Nothing. I wrote a rant on this over two decades ago (http://theworld.com/~reinhold/p=np.txt). Give me an encryption algorithm whose cracking difficulty grows as (key size)^20 (i.e., polynomial time) and I can pick a reasonable key size that will not be crackable in the age of the universe. And for that matter, AES-256 is in P because it is solvable in constant time; it's just a very big constant.
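
A back-of-envelope calculation, sketched here in Python (the machine speed is my assumption), makes the point:

    n = 256                     # key size in bits
    work = n ** 20              # cracking cost, polynomial of degree 20
    ops_per_sec = 10 ** 18      # a generous exascale machine
    age_of_universe = 4.35e17   # seconds, ~13.8 billion years
    print(work / ops_per_sec / age_of_universe)
    # ~3.4e12: even this polynomial-time attack needs trillions of ages
    # of the universe at a 256-bit key size.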

If that isn't enough to convince you, consider this. The whole theory of NP-completeness is based on finding problems that are "NP-hard." NP-hard roughly means that there exist instances of the problem into which you can embed a Turing machine and its program. The canonical example is Boolean satisfiability: you can unwind any polynomial-time Turing machine computation into a giant Boolean expression in polynomial time. NP-completeness of such NP-hard problems follows because for a problem X to be in NP there must exist a Turing machine program, testX, that can check any proposed solution of X in polynomial time. If you can solve some NP-hard problem Y in polynomial time, then you can solve X in polynomial time by embedding its testX program in an instance of Y. All very pretty, and all dependent on the fact that polynomial time is such an elastic equivalence relation: the product of two polynomials is another polynomial, but the degrees keep getting bigger, and problems whose solution time is a high-degree polynomial are intractable in practice.
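
As a concrete illustration of the testX idea, here is the polynomial-time checker for Boolean satisfiability (a sketch in Python, using the common convention that integer k stands for variable k and -k for its negation):

    def check_sat_assignment(cnf, assignment):
        # A CNF formula is a list of clauses, each a list of nonzero ints.
        # Runs in time linear in the formula size: every clause must
        # contain at least one literal made true by the assignment.
        return all(
            any((lit > 0) == assignment[abs(lit)] for lit in clause)
            for clause in cnf
        )

    # (x1 OR not x2) AND (x2 OR x3)
    print(check_sat_assignment([[1, -2], [2, 3]],
                               {1: True, 2: False, 3: True}))  # True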

Note that this theory is all about existence proofs: we find some instance of the problem that can encode a Turing machine (or an arbitrary Boolean expression, which is usually much easier to show) and the problem gets the NP-hard stamp of approval. But that is no proof that all, or even most, instances of the problem are in any way difficult. And for cryptography we want assurance that our algorithms are strong in all cases, not just on some sparse subset.
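
To see how weak the NP-hard stamp is in practice, consider subset-sum, which is NP-complete yet whose small-target instances fall to textbook dynamic programming (a sketch; subset-sum stands in here for any NP-complete problem):

    def subset_sum(nums, target):
        # Pseudo-polynomial dynamic programming: O(len(nums) * target).
        # "NP-hard" only certifies that some instances are hard;
        # instances like this one are trivial.
        reachable = {0}
        for x in nums:
            reachable |= {s + x for s in reachable if s + x <= target}
        return target in reachable

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5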

We don't know whether integer factoring or the discrete log problem is NP-hard, because no one has found enough structure in those problems to embed Turing machines. Suppose someone did. The problem would then be officially NP-hard, but the newly discovered structure might nonetheless enable new attacks on practical-sized instances.

So the whole P vs. NP theory has no bearing on cryptography.

Arnold Reinhold
