Setting the Scene for the ETSI Quantum-safe Cryptography Workshop

26 September 2013

The following is a transcript of the speech delivered by Michele Mosca to the Quantum-Safe-Crypto Workshop hosted by the European Telecommunications Standards Institute (ETSI). For more information about the workshop program, please see the e-proceedings.

MOSCA: There is a catastrophe looming for our information and communication infrastructure. Solutions to avert this catastrophe exist, but serious work needs to be done, starting immediately, to develop, deploy and standardize the solutions.

Mark Pecen nicely described in his presentation one of the unfortunate challenges – the problem of switching costs, which we often face when we want to replace one solution with a better one – and he outlined ways to mitigate this problem.

Let me outline the problem that needs a solution.

To appreciate the depth of the problem, we need to understand how profoundly our current ICT infrastructure, which is central to the stability of the global economy, depends on cryptography.

We necessarily start with some trust assumptions, and some physical security (at least at the end-points, for some period of time). Cryptography is what lets us leverage a relatively small amount of trust and physical security to take advantage of an untrusted ICT infrastructure.

Without cryptography, we essentially need to “unplug” from the ICT infrastructure, and stop using untrusted parties and media. This is simply not practical for the majority of applications, including anything involving a financial transaction that uses real-time communication (credit card purchases, money transfers, online banking, etc.), online communication (e-mail, texting, social media, etc.), online advertising, e-health, and so on.

So reliable cryptography is fundamental to the global economy and our daily lives.

It is worth noting that cryptography is generally regarded as one of the strongest parts in the “chain” of tools that provide information security. The weak links we see today in cyber-security are typically related to the other pieces of the chain. These urgent cyber-security challenges naturally distract from the need to act today to prevent a cyber-catastrophe in the future if the cryptography link in the chain of tools is broken. This is another of the challenges to solving this problem.

The problem we discussed at this workshop is a well-defined catastrophe on the horizon for some critical parts of the cryptography infrastructure.

Peter Shor discovered in 1994 that quantum computers will break factoring and discrete-logarithm-based cryptosystems. Fortunately, we didn’t have quantum computers at the time. But 20 years later, the picture is becoming clearer. I’ll return to this shortly.

Of course, this quantum computing risk is on top of the daily risk we already face – that of an unexpected algorithmic advance. For example, it was once believed that discrete logarithms in GF(2^127) were intractable, and the method was even being implemented in hardware until my colleagues at Waterloo found a way to solve instances of this size. Coppersmith found a way to build on this work and take the exponent in the complexity of solving such discrete logarithms from O(n^(1/2)) to O(n^(1/3)), which was a major milestone that led to other important advances in cryptanalysis. More recently, our colleagues in France found another major weakness for discrete logarithms in fields with low characteristic.

With regard to the specific threat of a quantum computer, one must ask: is this really something we need to worry about now?

Roughly speaking, this depends on three variables, X, Y and Z:

X is the number of years you need your public-key cryptography to remain unbroken. For example, how long is it necessary to protect health information, or national security information, or trade secrets?

Y is the number of years it will take to replace the current system with one that is quantum-safe.

Z is the number of years it will take to break the current tools, using quantum computers or other means.

If X+Y>Z, then we have a problem now, and immediate action needs to be taken.

This means that for the latter part of those Y years, we have to either stop doing business or continue using tools we know will be broken within Z years. Either way, these will not be pleasant times for doing business.
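To make the arithmetic concrete, here is a minimal sketch of the X + Y > Z test; the specific numbers are purely illustrative assumptions, not estimates from this talk.

```python
# Minimal sketch of the X + Y > Z condition. All numbers are illustrative assumptions.

def need_to_act_now(x_shelf_life, y_migration, z_time_to_break):
    """X: years the information must remain protected.
    Y: years needed to migrate to quantum-safe tools.
    Z: years until the current tools are broken."""
    return x_shelf_life + y_migration > z_time_to_break

# Hypothetical example: records must stay confidential for 15 years,
# migration takes 10 years, and the current tools fall in 20 years.
if need_to_act_now(x_shelf_life=15, y_migration=10, z_time_to_break=20):
    print("X + Y > Z: information protected today will eventually be exposed - act now.")
else:
    print("X + Y <= Z: migration can, in principle, complete in time.")
```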

So what is Z? We don’t know, but we can say something more useful.

Firstly, let me emphasize that the size of the number factored to date by quantum means is an essentially irrelevant metric. What we should ask is how close we are to meeting a meaningful fault-tolerance threshold. We do know that a scalable quantum computer is feasible (given reasonable error models and architectures) thanks to fault-tolerant quantum error correcting codes. If experimentalists can perform a handful of fundamental physical operations with precision better than a fault-tolerance threshold ɛ, then they can perform large-scale quantum computations with efficient scaling. In the past 10 years, great advances have been made on all three of these fronts. The error models are more realistic and better understood. The architectures are more realistic and basic components are being implemented. And we have much more powerful families of fault-tolerant codes. The threshold ɛ has risen from 10^-6 to roughly 1%. The experiments are starting to approach the threshold.
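To illustrate why crossing the threshold is the meaningful milestone, here is a rough sketch using the textbook concatenated-code estimate, in which the logical error rate after k levels of concatenation is about p_th * (p / p_th)^(2^k); the threshold value and physical error rates in the sketch are illustrative assumptions, not figures from this talk.

```python
# Rough threshold-theorem illustration: with a concatenated distance-3 code,
# the logical error rate after k levels is about p_th * (p / p_th)**(2**k),
# provided the physical error rate p is below the threshold p_th.
# The numbers here are illustrative assumptions only.

def logical_error_rate(p, p_th, levels):
    return p_th * (p / p_th) ** (2 ** levels)

p_th = 1e-2  # a threshold of "roughly 1%", as noted above
for p in (5e-3, 1e-3):       # physical error rates just below / well below threshold
    for k in (1, 2, 3):      # levels of concatenation
        print(f"p = {p:.0e}, levels = {k}: logical error ~ {logical_error_rate(p, p_th, k):.1e}")
```

Once p drops below p_th, each extra level of encoding drives the logical error rate down dramatically; above the threshold, adding more error correction only makes things worse.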

IBM states [1]: “We are still a long way from building a practical quantum computer, but the path toward this goal is becoming clearer. Rapid improvements in experimental quantum hardware suggest that a threshold for the design and construction of fault-tolerant systems may be reached in the next five years. At that point, the goal of building a useful and reliable quantum computer will be within our reach.” Note this was written in 2011.

Our colleagues at Yale [2] wrote a nice review in 2013 tracking relevant metrics, which seem to support the predictions made by IBM. For example, they show that qubit life-times have increased exponentially over the past 14 years. They nicely describe seven stages to building a fault-tolerant quantum computer, and point out that in the past 14 years, superconducting qubits have achieved the first three stages and are moving towards the fourth.

The bottom line is that progress toward building a quantum computer is measured in orders of magnitude; it has been truly impressive over the past two decades and will likely continue to accelerate. I personally would say the likelihood of a large quantum computer in the next 20 years is an order of magnitude higher than it was 10 years ago, and cannot be ignored.

Now on to the question of Y: the time required to transition from our current infrastructure to quantum-safe tools.

Cryptographers have been thinking of alternatives for many years; some of these proposed solutions are almost as old as public-key cryptography, others are much newer. Quantum algorithms experts are starting to look at these systems more and more, though still not enough. Standards organizations, including ETSI, NIST and NICT, have been looking at this space for many years. But I think it’s fair to say it would not be easy in the near future to change from the current cryptography algorithms to quantum-safe ones. The standards and practices are not ready.

Looking at past examples, we can ask how long it took for RSA to go from discovery to large-scale deployment. Or Elliptic Curve Cryptography (ECC)? Roughly 20 years.

What about the recent BEAST attacks? The vulnerability was known for nearly a decade, and a fix was available about five years prior to the attacks, but not widely deployed (and still not fully deployed).

The short answer is “a long time” – roughly a decade or two. The bottom line is that the “wait and see” approach is too risky. It’s optimistic to think we can wait until the point when people know that large quantum computers will arrive in less than 20 years, yet can guarantee they will not arrive sooner than, say, 15 years. Will anyone really be able to predict with this relative precision? Will organizations building quantum computers necessarily tell us when they are close?

Even 15 years doesn’t give us much time, given the broad range of work that needs to be done, including battle-testing these systems in the field for a number of years before people would really trust them. Not to mention that sometimes X>0.

We need alternatives, even just to protect against unexpected (classical) algorithmic advances, even modest ones. And these alternatives should also be quantum-safe in order to protect against the well-defined and emerging quantum threat.

We also need the infrastructure to have greater algorithmic agility, especially if it will contain computationally secure tools.

Fortunately, we have several solutions. There are two families of complementary solutions.

The first class of solutions are often called “post-quantum,” though some use the term to encompass practical quantum cryptography as well. Here I refer to classical, mathematics-based codes deployable on the existing infrastructure, which are believed to be secure against quantum attacks. The key advantage of these solutions is that they don’t require new physical infrastructure. The downside is that we are still dealing with computational security, with the occasional unexpected advances that we may or may not learn about until it’s too late to avert a problem.

The other set of tools are quantum tools, requiring some amount of quantum technology. In general, however, this is available technology, such as fiber, free-space optics, and single-photon sources and detectors. The performance parameters (distance, rate, etc.) are an issue for many applications, but they will continue to improve. That’s the downside. The main upside is that for quantum key establishment we don’t have computational assumptions, such as assuming that factoring large integers is infeasible.

These two sets of tools are not mutually exclusive; in fact, they complement each other and work very well in combination to fill the spectrum of security needs.

Let me highlight two of the most important cryptographic primitives: signatures and key-establishment.

In an era with quantum computers, the quantum-safe alternatives for authentication or signatures include:

  • Trapdoor-based public-key signatures
  • Hash-based signatures, which have the advantage of not requiring a trapdoor (a minimal sketch of this idea follows the list below)
  • Symmetric key authentication (if we are ready to leave the public-key setting in some circumstances).
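
To give a sense of why hash-based signatures need no trapdoor, here is a minimal sketch of the classic Lamport one-time signature, built from nothing but an ordinary hash function; it illustrates the idea only and is not a standardized or production scheme.

```python
import hashlib
import os

def H(data):
    # The only cryptographic assumption: a secure hash function.
    return hashlib.sha256(data).digest()

def keygen(bits=256):
    # Secret key: two random preimages per message-digest bit.
    # Public key: the hashes of those preimages. No trapdoor anywhere.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(digest, n):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def sign(sk, message):
    # Reveal one preimage per bit of H(message). Each key pair may sign ONCE.
    return [sk[i][b] for i, b in enumerate(_bits(H(message), len(sk)))]

def verify(pk, message, signature):
    return all(H(s) == pk[i][b]
               for i, (s, b) in enumerate(zip(signature, _bits(H(message), len(pk)))))

sk, pk = keygen()
sig = sign(sk, b"quantum-safe example")
assert verify(pk, b"quantum-safe example", sig)
# A different message fails verification (with overwhelming probability).
assert not verify(pk, b"tampered message", sig)
```

Each key pair signs only one message; practical hash-based schemes extend this one-time idea, for instance with Merkle trees, to sign many messages.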

Quantum-safe alternatives for key-establishment include:

  • Key establishment via public-key encryption, which requires trapdoors
  • Quantum key establishment.

We had several talks on both families of approaches throughout the first ETSI workshop on quantum-safe cryptography.

As I mentioned, the quantum and classical solutions can work very well together, achieving some things that they individually cannot. For example, one can use hash-based signatures to authenticate quantum key establishment, and thereby obtain long-term security while assuming only the short-term security of a hash function – something not achievable using public-key key establishment alone.

During the workshop we heard a number of presentations and had discussions that will help us answer the following questions:

  • How ready are these potentially quantum-safe systems for widespread deployment?
  • What gaps remain?
  • What are the pitfalls we should avoid? For example, do we face the risk of standardizing too little or too late, or too much or too soon?
  • Can we agree on some kind of roadmap, and identify meaningful next steps? If so, who will spearhead this effort?

This workshop was only a first step. We look forward to identifying the next steps and working on their implementation together.


References:
[1] M. Steffen, D. P. DiVincenzo, J. M. Chow, and T. N. Theis, “Quantum computing: An IBM perspective”, IBM J. Res. & Dev., vol. 55, no. 5, paper 13, September/October 2011.
[2] M. H. Devoret and R. J. Schoelkopf, “Superconducting circuits for quantum information: an outlook”, Science, vol. 339, no. 6124, pp. 1169–1174, 2013.