[Cryptography] quantum computers & crypto

Rodney Van Meter rdv at sfc.wide.ad.jp
Fri Oct 29 02:20:38 EDT 2021

I’m going to take the liberty of responding to Peter and Cherry together here. Inline.

Rodney Van Meter
rdv at sfc.wide.ad.jp
Professor, Faculty of Environment and Information Studies, Keio University, Japan

> On Oct 28, 2021, at 7:29, cherry <cherry at cpal.pw> wrote:
> On 10/26/21 7:52 PM, Peter Gutmann wrote:
>> 15 was factored with Shor's algorithm twenty years ago, 21 was factored just
>> under a decade ago.  Drawing a line through these two data points (yeah, I
>> know, but it's all we've got), we can see that 1,024-bit RSA will be
>> vulnerable to Shor's algorithm some time around 4,000 AD.  However this
>> ignores two problems:
>> The first is that, like FTL travel, things get much, much harder as you scale
>> up.  So the line likely isn't straight, and may in fact never reach 1,024
>> bits.


Jaw-Shen Tsai has a really good plot showing the increase in memory lifetimes across the different flavors of superconducting qubits over their history. Since 1998 (I think), lifetimes have improved from less than one nanosecond to about 100 usec, about five decimal orders of magnitude in less than 25 years.

At this point, memory lifetime is no longer the biggest problem for transmon superconducting systems. It’s the two-qubit couplers, which still hover around a 0.5-1% gate error rate, or close to 0.1% in the recent IBM reports. I don’t have the data in front of me, but here, too, improvement continues quarterly if not daily.

One big improvement in IBM’s systems has been increased consistency from fab. It used to be (in the olden days of 2019) that qubit-to-qubit variance was high. Now the worst qubits & couplers on a chip are not so much worse than the best ones.
Jay Gambetta claims their best chip just hit “three nines”.
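A quick back-of-envelope on those trend numbers (the endpoints here are my rough reading of the figures above, not exact values from the plot):

```python
import math

# Coherence times: ~1 ns in 1998 to ~100 usec (1e5 ns) by 2021,
# per the trend described above (endpoints are approximate).
t_start_ns, t_now_ns, years = 1.0, 1e5, 2021 - 1998

orders = math.log10(t_now_ns / t_start_ns)
annual_factor = (t_now_ns / t_start_ns) ** (1 / years)
print(f"{orders:.0f} orders of magnitude, roughly {annual_factor:.2f}x per year")

# "Three nines" fidelity = 99.9% success, i.e. a 1e-3 gate error rate,
# vs. the 0.5-1% typical of two-qubit couplers.
print(f"three-nines error rate: {1 - 0.999:.0e}")
```

That steady ~1.6x-per-year compounding, rather than any single breakthrough, is what the straight-line extrapolations miss.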

This is good enough for quantum error correction, though there are a lot of issues before QEC starts to work. The biggest quantum-level issue is “feed forward”; you have to be able to measure qubits that have the error syndrome info, then adjust your actions in accordance with the syndrome. That’s complicated, especially because measurement is a slow, noisy process both for the individual qubit and because it tends to disrupt nearby qubits.
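To make the feed-forward loop concrete, here is a purely classical toy of the smallest case, the 3-qubit bit-flip repetition code (my illustration, not anyone’s hardware design): extract parity syndromes, then condition the correction on what you measured. Real hardware has to close this loop with slow, noisy quantum measurements under tight timing, which is exactly the hard part.

```python
import random

def encode(bit):
    # Repetition code: one logical bit -> three physical bits.
    return [bit, bit, bit]

def inject_errors(qubits, p):
    # Flip each bit independently with probability p.
    return [q ^ (random.random() < p) for q in qubits]

def syndrome(qubits):
    # Parity checks Z0Z1 and Z1Z2 (classically: XORs of neighbors).
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def feed_forward(qubits, s):
    # Decode the syndrome -> which qubit (if any) to flip back.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)
    if flip is not None:
        qubits[flip] ^= 1
    return qubits

block = encode(1)
noisy = inject_errors(block, p=0.05)
corrected = feed_forward(noisy, syndrome(noisy))
# Any single bit-flip is corrected; two or three flips defeat the code.
```

The decode table is trivial here; for a surface code the same measure-decode-react loop runs continuously, and the decoder itself becomes a serious classical real-time computing problem.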

Other technologies have their own problems, but IonQ and UMD claim to have demonstrated a fault-tolerant qubit in ion traps.

Once all that starts to work, there are a ton of real systems engineering questions to be addressed, so it’s really hard to draw a good roadmap yet with dates and milestones toward fault tolerance. But I’m sure the major vendors have plans that are constantly being revised.

So, when do we get to factoring RSA at scale? Who knows? But are you really willing to bet that IBM, Rigetti, Google, IonQ, PsiQuantum, and a ton of other folks *won’t* do it in the next twenty years?


> If you are straightforwardly using regular gates, each additional qbit is exponentially more difficult, with a large exponent.  The cost and difficulty of a quantum computer is something like `exp(a*N)` where N is the number of bits.  The way things have been going, it looks like each additional bit is something like ten to a thousand times more difficult than the last.  Could be worse than that.

I’m not quite sure what you’re trying to say here.  Are you talking about the difficulty of gate control or memory lifetime for operating on qubits? Or are you talking about the quantum states or algorithms?

Barring actual *events* like strong RF/uwave pulses, the basic control problems in almost all quantum hardware platforms (superconducting, ions, quantum dots, nitrogen vacancy diamond, neutral atoms; photons might be an exception) are “memoryless” Markov processes, so the goal is to reduce the constant per nanosecond or per operation.

There is nothing at all “exponentially more difficult” about any of this; the error processes (mostly) affect qubits independently of their state, including whether they are in superposition or in entangled states. A lot of the error process, in fact, can be characterized using straightforward classical probability. If you have twice as many qubits, you will have, on average, twice as many errors per nanosecond. Keep them for twice as long, get twice as many errors. Conversely, cut your error rate by half and you can use twice as many qubits or twice as many gates. (Crosstalk between distant qubits has been a serious problem, where the RF resonances of the chip package itself interfere with qubits; a lot of that has been handled, such that it’s now feasible to treat qubits as independent error processes, for most purposes.)
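That linear picture is easy to state as code; the per-qubit, per-nanosecond rate below is purely illustrative:

```python
# Independent-error model described above: expected errors grow linearly
# in qubit count and in time, and halving the per-unit error rate buys
# you twice the qubits or twice the circuit duration.

def expected_errors(n_qubits, duration_ns, rate_per_qubit_ns):
    return n_qubits * duration_ns * rate_per_qubit_ns

r = 1e-6  # illustrative per-qubit, per-nanosecond error rate (assumed)
base = expected_errors(100, 1000, r)

assert expected_errors(200, 1000, r) == 2 * base   # twice the qubits
assert expected_errors(100, 2000, r) == 2 * base   # twice the duration
assert expected_errors(200, 1000, r / 2) == base   # halved rate compensates
```

Nothing in that model depends on whether the qubits are entangled, which is the point: the error *processes* are classical bookkeeping, even though the *states* are not.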

> This can be addressed by quantum error correction, which drops the difficulty of increasing the number of qbits down to merely polynomial rather than exponential.

The problem is the ratio of physical to logical qubits, and (just like in classical ECC) the code distance. We have done analyses of systems using hundreds or thousands of physical qubits per logical qubit, but nobody really thinks those levels are practical. We are going to need to be in the, oh, say, 50-200 physical qubits per logical qubit range for the hardware to be reasonable. Then if you want 1,000 logical qubits (enough to factor a 200-bit classical number, maybe, depending on your implementation of arithmetic and other such topics), you need 50-200K physical qubits. Superconducting qubits are *huge* (ten microns on a side, maybe), so you need to couple together multiple chips, which is a hard problem. And you’ll need physical gate error rates down around 10^{-5} or 10^{-6} to run even mid-sized algorithms on such a machine; that might or might not be enough for factoring a 200-bit number. Aside from the needed code distance and physical error rate, the resources required will be linear in the number of qubits and in the size of the number you want to factor. Execution time scales, as expected, as O(N^3), where N is the length of the number you want to factor, in bits.
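Turning those figures into a back-of-envelope calculator (the 5 logical qubits per bit is the ratio implied by the 1,000-logical/200-bit pairing above, not a precise figure; real counts depend heavily on the arithmetic circuits chosen):

```python
# Resource estimate from the numbers above: 50-200 physical qubits per
# logical qubit, ~5 logical qubits per bit of the number being factored
# (assumed from the 1,000-logical/200-bit example), and O(N^3) runtime
# in the bit-length N.

def physical_qubits(n_bits, logical_per_bit=5, phys_per_logical=(50, 200)):
    logical = n_bits * logical_per_bit
    return tuple(logical * r for r in phys_per_logical)

def relative_runtime(n_bits, baseline_bits=200):
    # Shor's algorithm execution time scales as O(N^3).
    return (n_bits / baseline_bits) ** 3

print(physical_qubits(200))    # the 50-200K range quoted above
print(relative_runtime(2048))  # a 2048-bit RSA modulus: roughly 1000x the time
```

The takeaway is that qubit count grows only linearly with key size; it is the error rate and code distance, not raw qubit count, that dominate the gap between a 200-bit demo and RSA-2048.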

> Quantum error correction, however, is not just a matter of implementing ordinary error correction in ordinary gates.

See the discussion of syndrome extraction and feed forward, above.

> What you want is that the forbidden states are strongly coupled to the heat sink, and the permitted states are completely uncoupled from the heat sink.  Implementing this as a physical reality is at present just a gleam in the eyes of researchers.  They have some ideas of how it could be done, but no one has actually proposed a physical device.

Lots and lots of physical designs have been proposed. The early ones were naive. They get better every year. See the 100+ references here.

I do believe strongly that the experimental physicists designing chips don’t really understand systems, but they are gradually hiring people who do.

> We have lots of nice mathematics about quantum error correction, it is easy to produce publishable papers about it, but no idea how to embody it physically.  Any paper that proposed a physically realizable implementation would be talking about temperature, heat flow, and coupling between degrees of freedom.  It would look like a thermodynamics of bulk materials paper, not a quantum computing paper.

Not really sure where you’re getting this impression, but there was one paper about a decade ago from Fujii (and possibly Kitagawa?) on a self-repairing system, where syndromes would essentially leak out into the heat bath and the whole system would stay stable. I didn’t particularly like that paper, and I don’t think the design has had any real impact. The researchers are good, though.


In just a few days, we will have a preprint of a paper available on near-term issues in implementing Shor’s algorithm. I’ll try to remember to send y’all a note here when it’s available. We’ll certainly be looking for feedback.

