[Cryptography] Many curves versus one curve

William Allen Simpson william.allen.simpson at gmail.com
Sun Aug 10 15:19:41 EDT 2014


On 8/9/14 10:26 PM, David Leon Gil wrote:
>     Cc: "cryptography at metzdowd.com <javascript:;>" <cryptography at metzdowd.com <javascript:;>>
>     Subject: Re: [Cryptography] IETF discussion on new ECC curves.
>     Message-ID: <683EFDEB-06A0-4248-8579-CDCF17CAEF34 at gmail.com <javascript:;>>
>
>     Does it make sense to have a small set of curves that everyone uses?  Or would it be better to have every application or even every user generate their own curve, using some process that would convince skeptics that the curves had been generated randomly?
>
>     --John
>
I've already opined on this topic a couple of weeks ago, but again:

   Nevertheless, all protocols should be designed so that one
   party (usually the responder/server) lists all supported
   algorithms, and the other party chooses from that list.
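
For concreteness, a minimal sketch of that list-and-choose
pattern in Python (the curve names are hypothetical, not taken
from any real protocol):

   SERVER_SUPPORTED = ["curve-2014a", "curve-2014b", "curve-2013c"]
   CLIENT_PREFERENCE = ["curve-2014b", "curve-2013c"]

   def choose_curve(offered, preferred):
       """Pick the chooser's most-preferred curve from the offered list."""
       for name in preferred:
           if name in offered:
               return name
       raise ValueError("no curve in common")

   # The responder lists; the initiator chooses.
   print(choose_curve(SERVER_SUPPORTED, CLIENT_PREFERENCE))  # curve-2014b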


> I think that Mike Hamburg has been working on a Sage(?) script to select a curve satisfying the SafeCurves criteria deterministically from a random seed. (He mentioned this on another list.)
>
Good.  However, I have never been of the opinion that everybody
should generate their own curves/moduli, nor that everybody should
check each others' curves/moduli at run time.

Rather, each manufacturer should generate a new set for each
release.  That gives us some confidence that they were made
with some deliberation from well-known processes, while still
giving the public an opportunity for verification.
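
For illustration, a per-release, seed-driven selection in the
spirit Hamburg describes might look like this minimal sketch
(the SHA-256 derivation, example prime, and placeholder check
are my assumptions; the real SafeCurves criteria are far more
expensive and would be run offline, e.g. in Sage):

   import hashlib

   P = 2**255 - 19  # example prime field, for illustration only

   def candidate(seed: bytes, counter: int) -> int:
       """Derive one curve-coefficient candidate from seed and counter."""
       h = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
       return int.from_bytes(h, "big") % P

   def passes_criteria(d: int) -> bool:
       """Placeholder: the real safety checks (rho cost, twist
       security, embedding degree, ...) go here."""
       return d not in (0, 1)

   def select_curve(seed: bytes) -> int:
       counter = 0
       while True:
           d = candidate(seed, counter)
           if passes_criteria(d):
               return d
           counter += 1

Anyone can re-run the search from the published seed and check
that the first passing candidate is the released parameter.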

Also, regular releases further broaden the base to be attacked,
hopefully to the extent that it no longer makes sense to attack
the individual curves/moduli!


> It makes most sense on a per-application basis -- the computational cost of verifying these conditions is fairly high.[*]
>
Agreed; I don't expect my light bulb to verify curves.

Who are we to decide that the application needs the "utmost"
security instead of speed?


> However. Some EC primitives lose some of their nice properties if users can select arbitrary curves (even over a single prime field). E.g., ECDSA is not subject to key-share attacks if all users use the same curve; it is, if arbitrary curves are permitted.
> (Koblitz and Menezes discuss this in their 'Another look' papers.)
>
Which one?

That's only true when there's some sort of guarantee that
they will never use the same parameters.  Known flaws in
random number generators undermine that assumption.
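
A toy illustration of the point: with only a few bits of seed
entropy (the 2008 Debian OpenSSL bug left roughly 15 bits, the
process ID), supposedly independent parameters collide quickly.

   import random

   def weak_parameter(pid: int) -> int:
       rng = random.Random(pid % 32768)  # entropy limited to a 15-bit pid
       return rng.getrandbits(256)

   # 100,000 "independent" users, at most 32768 distinct parameters:
   print(len({weak_parameter(pid) for pid in range(100000)}))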

How long ago did I raise this issue on this list?  1999?


> So current uses of EC might need reëxamination.
>
> -dlg
>
> PS, or, taking the other side: There is, by the way, a good counter-argument to the 'just increase the bit-length' argument djb uses for single curves:
>
> Suppose that an unknown fraction of elliptic curves has some undesirable property. By using a large number of curves, we decrease the variance of our risk in expectation. Under a minimax cost model, this is a big gain. (A certainty of small loss, rather
> than a small chance of catastrophe.)
>
That is the argument we've been making for over 20 years.
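
To make the variance point concrete: let B_i ~ Bernoulli(p)
indicate that curve i is bad, with the bad fraction p unknown,
and spread use evenly over n curves.  Then the compromised
fraction is

   X = \frac{1}{n} \sum_{i=1}^{n} B_i,
   \qquad E[X] = p,
   \qquad \mathrm{Var}[X] = \frac{p(1-p)}{n}.

The expected loss is the same as with one curve, but the
variance shrinks by 1/n: a near-certain small loss instead of a
small chance of total catastrophe (assuming independence, which
is itself an assumption about how curves go bad).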


> [*] For applications that are extremely length-constrained -- e.g., some embedded devices -- provisioning with a per-device curve is the most feasible way of increasing security. I, personally, would like a standardized process for this.
>
Embedded devices are the worst-case scenario, and they still need
variety in curves/moduli.  Why would picking one that turns out to
be bad ever be a good idea?

How do you "prove" that nothing will ever be learned that makes
one become bad in the future?

Better that 20 not-yet-bad ones are available to choose from.  In
that case, the security updater could reliably ignore the bad one,
then update the device with a better list omitting the known-bad one.
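
A minimal sketch of that update step (the list structure and
names are hypothetical):

   def revoke_and_update(device_list, known_bad, replacements):
       """Drop known-bad curves and append fresh ones."""
       updated = [c for c in device_list if c not in known_bad]
       updated.extend(replacements)
       return updated

   device = ["curve-01", "curve-02", "curve-03"]
   print(revoke_and_update(device, {"curve-02"}, ["curve-21"]))
   # ['curve-01', 'curve-03', 'curve-21']

The update itself can be negotiated over one of the remaining
good curves, which is exactly what a single bad curve forecloses.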

If all you have is the bad one, you'll never know whether you've
made a good connection, or whether your update itself has been
corrupted....

Again, we covered all of this logic over 20 years ago.


