John Martinis, physics professor at the University of California, Santa Barbara, and head of Google's quantum computing lab, is less concerned with the well-being of Schrödinger's infamous cat than with how he can train it to solve complex maths problems.
Last year, Martinis achieved the first step towards building a quantum computer with a working group of nine quantum bits (qubits) able to perform error-checking [1]. Now he's begun scaling this up, with the aim of demonstrating a 100-qubit group within the next couple of years. WIRED talked to him about the challenges involved.
WIRED: Just how powerful does the kind of quantum computer you're building have the potential to be?
John Martinis: Classical computation is based on the storage and manipulation of simple bits of information, each of which can be either a 0 or a 1. With quantum computing, we use the laws of quantum mechanics to build bits that are both 0 and 1 at the same time. This allows us to create a parallel processing machine where, instead of an algorithm running the case 0 and then the case 1 and so on to get an answer, we can run 0 and 1 simultaneously. With a single qubit, that parallelisation speeds things up by a factor of two to the power of one - you've doubled the speed - but the power increases with every additional quantum bit you add, so the speed-up is exponential. Once you get to 300 qubits, you've sped things up by a factor of two to the power of 300, which is greater than the number of atoms in the observable universe. You can't achieve that with a classical computer.
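A rough way to check that scaling: an n-qubit register is described by 2^n amplitudes. The short Python sketch below tallies this growth; the ~10^80 figure for atoms in the observable universe is the commonly quoted estimate, not a number from the interview.

```python
# Sketch: how the quantum state space grows with qubit count.
# An n-qubit register is described by 2**n complex amplitudes,
# the source of the exponential parallelism Martinis describes.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # rough common estimate (assumption)

for n in (1, 9, 100, 300):
    print(f"{n:>3} qubits -> 2^{n} = {2**n:.3e} simultaneous amplitudes")

# 2**300 is about 2e90, comfortably above ~1e80 atoms.
print(2**300 > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
```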
How do you build qubits and why did you choose that method?
We build ours out of superconducting aluminium wires, cooled to 20 millikelvins (-273.13°C) and oscillating at about five or six gigahertz - mobile-phone frequencies. Then you can write wave functions such that current flows in the oscillator wire in two directions at the same time. That doesn't make sense under classical mechanics, but quantum mechanically it gives us a simultaneous 0 and 1 state. The advantage is that we can use microwave engineering concepts to design and operate these, which ties into existing technology. We can put them into integrated circuits [2], so once we learn how to do this with a few qubits, we can scale up using current semiconductor fabrication.
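One way to picture a "0 and 1 at the same time" state is as a two-component state vector in equal superposition. This tiny numpy sketch is purely illustrative and has nothing to do with Google's actual control stack:

```python
import numpy as np

# Sketch: a qubit state as a 2-component complex vector.
# Here |0> and |1> stand for the two circulating-current states
# of the superconducting oscillator described above.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition: "0 and 1 at the same time".
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes: 50/50.
print(np.abs(psi)**2)  # [0.5 0.5]
```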
How difficult is it to engineer qubits that can maintain both 0 and 1 states?
You can build a classical computer so that it has tiny error rates, but you cannot build that stability into your quantum bits. Take the energy decay process, for example. The 1 state of the qubit has more energy than the 0 state, and the difference in energy is very small, so if the qubit loses a tiny bit of energy it falls into the 0 state and you've lost the information it held. That's called decoherence. Because imperfect materials can absorb energy, we have to develop a whole new way of building our qubits. Typical microprocessors use silicon dioxide as an insulator between wires, for example, but it's really lossy at low temperatures, so we can't use it. Instead, we have to make a freestanding air bridge wherever we cross wires, so there are no materials to absorb the energy.
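Energy decay of this kind is usually modelled as exponential: a qubit prepared in the 1 state survives for time t with probability roughly exp(-t/T1). A back-of-envelope sketch, using the 50-microsecond figure Martinis gives below as the decay time:

```python
import math

# Sketch: exponential energy decay (T1 relaxation).
# A qubit in the excited |1> state survives time t with
# probability roughly exp(-t / T1).
T1 = 50e-6  # decay time of ~50 microseconds, per the interview

for t in (10e-9, 1e-6, 50e-6):
    print(f"after {t*1e6:8.3f} us: P(still |1>) = {math.exp(-t / T1):.4f}")
```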
For how long have you been able to maintain qubits in a coherent state?
About 50 microseconds or so. That sounds small, but what matters is the ratio between coherence time and operation time. We can run an operation in about ten nanoseconds, so the ratio is about 5,000 - quite a large number of operations before we lose coherence. That 50 microseconds is the result of ten years of research into how to build the integrated circuits. The next step is to learn how to control the qubits properly so we can extend the lifetime further.
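The arithmetic behind that figure, as a quick check:

```python
# Sketch: how many gate operations fit inside one coherence window.
coherence_time = 50e-6  # ~50 microseconds, per the interview
gate_time = 10e-9       # ~10 nanoseconds per operation

print(coherence_time / gate_time)  # 5000.0
```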
Can you give an indication of the scale of the challenge that the errors caused by decoherence pose in making your quantum computer reliable?
As with classical computing, there's a threshold for how often you can get an error - if you have too many, you're making errors faster than you can correct them. You need 1,000 qubits to store one bit of information, so you need the error rate to be less than one error in 1,000 operations. For that, we need to improve the error rate by a factor of 10¹⁸ (a quintillion). We've been able to show error checking with a factor of ten improvement in two qubits, and we're looking at scaling up to tens or hundreds of qubits. So we can just add more and more qubits and the error correction should get better and better [3].
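The "more qubits, better correction" claim can be made concrete with the standard below-threshold scaling estimate for codes like the surface code: the logical error rate falls roughly as (p/p_th) raised to the power (d+1)/2 as the code distance d grows. The numbers below are illustrative, not from the interview:

```python
# Sketch: below-threshold error suppression in a distance-d code.
# Standard approximation: p_logical ~ (p / p_th) ** ((d + 1) / 2)
p = 1e-3     # physical error rate, the 1-in-1,000 target above
p_th = 1e-2  # illustrative threshold (assumption)

for d in (3, 5, 7, 11):
    p_logical = (p / p_th) ** ((d + 1) // 2)
    print(f"distance {d:2d}: logical error rate ~ {p_logical:.0e}")
# Each step up in distance multiplies the suppression, which is
# why adding qubits keeps making the correction better.
```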
How many qubits are you working with and how will you scale this up?
We're still working at about the nine-qubit level, but we're building up the infrastructure for 100 or 200 qubits. The core problem is that you cannot copy quantum information, and everything is tied up with everything else. So, unlike with classical computing, you can't break the task up among different groups of engineers and then put it all back together again.
You're developing a quantum annealer, like the one built by Canadian startup D-Wave that Google purchased in 2013. What does this do?
A quantum annealer allows you to solve optimisation problems by finding the minimum-energy solution for a system, given how the qubits are interacting together. That's going to be very useful for machine learning, where you're trying to find the minimum of some function for a neural net that gives you the best fit for a large data set. This takes a long time with classical algorithms; with quantum algorithms we should be able to explore the entire possibility space all at once to find this minimum. We're taking a different approach from D-Wave. They've built up lots and lots of qubits without worrying too much about coherence time, so we think it's not going to get much more powerful than a laptop.
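As a toy version of the optimisation problem an annealer targets, consider finding the minimum-energy configuration of a few interacting spins (an Ising model). The brute-force classical search below scales as 2^n, which is exactly what makes large instances hard; the couplings are made up for illustration:

```python
import itertools

# Sketch: minimum-energy spin configuration of a tiny Ising model.
# Energy: E = sum over pairs of J[i, j] * s[i] * s[j], spins in {-1, +1}.
# An annealer seeks the lowest-energy state; brute force is 2**n.
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.7}  # illustrative couplings
n = 3

best = min(
    itertools.product((-1, 1), repeat=n),
    key=lambda s: sum(j * s[a] * s[b] for (a, b), j in J.items()),
)
print(best)  # (-1, 1, 1) for these couplings
```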
What else are you working on?
I view the quantum annealer as the analogue approach, but we're also trying to create a digital quantum computer - you could theoretically program any problem you wanted on it. We're also planning a quantum supremacy experiment, which would involve performing a calculation on a quantum computer that would require the biggest supercomputer in the world to check. Maybe no classical computer could check it at all. A classical computer can compete up to about 40-45 qubits, so we'd need at least that many with good coherence time. Then, in the next five to ten years, maybe longer, we hope to solve a useful real-world problem with it. Lots of people speculate on the timescale for this, and there are a lot of different numbers out there, but I'm actually trying to build it - and that's a really, really hard thing to do.
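The 40-45-qubit limit for classical checking follows from memory: simulating n qubits directly means storing 2^n complex amplitudes, at roughly 16 bytes each. A quick estimate:

```python
# Sketch: memory to hold a full n-qubit state vector classically.
# Each of the 2**n complex amplitudes takes ~16 bytes (two doubles).
for n in (30, 40, 45, 50):
    gib = (2**n) * 16 / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
# ~45 qubits already needs about half a petabyte of RAM, which is
# why classical simulation runs out of road around that point.
```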
[1] & [3] Kelly, J., Barends, R., Fowler, A. G., Megrant, A., Jeffrey, E., White, T. C., & Chen, Z. (2015). State preservation by repetitive error detection in a superconducting quantum circuit. Nature, 519.
[2] Devoret, M. H., & Martinis, J. M. (2005). Implementing qubits with superconducting integrated circuits. In Experimental Aspects of Quantum Computing (pp. 163-203). Springer US.
This article was originally published by WIRED UK