Ever since the news that John M. Martinis will join Google to develop a chip based on the work performed at UCSB, speculation has abounded as to what kind of quantum architecture this chip will implement. According to this report, it is now clear that it will be adiabatic quantum computing:
But examining the D-Wave results led to the Google partnership. D-Wave uses a process called quantum annealing. Annealing translates the problem into a set of peaks and valleys, and uses a property called quantum tunneling to drill through the hills to find the lowest valley. The approach limits the device to solving certain kinds of optimization problems rather than being a generalized computer, but it could also speed up progress toward a commercial machine. Martinis was intrigued by what might be possible if the group combined some of the annealing in the D-Wave machine with his own group’s advances in error correction and coherence time.
“There are some indications they’re not going to get a quantum speed up, and there are some indications they are. It’s still kind of an open question, but it’s definitely an interesting question,” Martinis said. “Looking at that, we decided it would be really interesting to start another hardware approach, looking at the quantum annealer but basing it on our fabrication technology, where we have qubits with very long memory times.”
This leads to the next question: Will this Google chip indeed be similarly restricted to implementing the Ising model, like D-Wave’s, or will it strive for more universal adiabatic quantum computation? The latter has been theoretically shown to be computationally equivalent to gate-based QC. It seems odd to aim for just a marginal improvement of the existing architecture, as this article implies.
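For readers who want the distinction spelled out, this is the standard transverse-field Ising form that D-Wave-style annealers implement (generic textbook notation, not taken from any particular spec sheet):

\[
H(s) \;=\; -A(s)\sum_i \sigma^x_i \;+\; B(s)\Big(\sum_i h_i\,\sigma^z_i + \sum_{i<j} J_{ij}\,\sigma^z_i\sigma^z_j\Big),
\qquad s = t/T \in [0,1],
\]

where A(s) is ramped down to zero and B(s) ramped up, so that at the end of the anneal only the classical Ising terms survive and the ground state encodes the answer to the optimization problem defined by the h_i and J_ij. The known universality constructions for adiabatic QC, by contrast, rely on richer two-qubit couplings than the purely \sigma^z\sigma^z terms above (for instance, the ZZXX-type constructions in the literature add \sigma^x_i\sigma^x_j couplings).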
At any rate, D-Wave may retain the lead in qubit numbers for the foreseeable future if it sticks with no, or less costly, error correction schemes (leaving it to the coders to create their own). It will be interesting to eventually compare which approach offers more practical benefits.
Hi Henning: This article, in IEEE Spectrum online, confirms the essence of your blog post more concretely. My only concern is this: what if there is a clash of cultures between the two groups (Martinis’s and D-Wave’s)? Then what? That’s when Google’s hand would be forced to do something drastic, such as acquiring D-Wave. We shall see where all this leads.
http://spectrum.ieee.org/tech-talk/computing/hardware/googles-first-quantum-computer-will-build-on-dwaves-approach?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+IeeeSpectrum+%28IEEE+Spectrum%29
The adiabatic model is only universal when there is no noise. Let’s say the D-Wave machine could perform arbitrary 2-local Hamiltonians, and let’s say the noise rate were 10^5 times slower than the gate speed (instead of 10^5 times faster). We would still have no idea how to perform fault-tolerant quantum computation (FTQC).
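For concreteness (my notation, not a definition from the thread): a 2-local Hamiltonian is any sum of terms that each act on at most two qubits,

\[
H \;=\; \sum_i h^{(1)}_i \;+\; \sum_{i<j} h^{(2)}_{ij},
\]

where every h^{(2)}_{ij} may be an arbitrary two-qubit operator. The annealing Hamiltonian sketched above is the special case in which the two-qubit terms are restricted to J_{ij}\,\sigma^z_i\sigma^z_j; the universality constructions need the more general class.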
What does ‘noise rate’ mean? There are no gates in quantum annealing?
Well, I guess there’s no definition of noise rate because there’s no threshold theorem.
One plausible proxy for a noise rate would be the coupling strength between a single qubit and the environment, times the amount of time required for a single qubit to experience a Rabi oscillation; roughly speaking, the ratio of uncontrolled terms in the Hamiltonian to controlled terms. However, I can’t speak strongly in favor of this definition because there’s no corresponding threshold theorem.
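Written out (my symbols, and only a sketch, since as noted there is no threshold theorem to anchor it): with g the qubit–environment coupling strength and \Delta the controlled single-qubit term that drives Rabi oscillations, so that T_{\mathrm{Rabi}} \sim 1/\Delta, the proxy would be

\[
\eta \;\sim\; g \cdot T_{\mathrm{Rabi}} \;\sim\; \frac{g}{\Delta} \;\approx\; \frac{\text{uncontrolled terms in } H}{\text{controlled terms in } H}.
\]

A small \eta means the environment perturbs the qubit only weakly on the timescale over which it can be coherently driven.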
Why should Rabi oscillations matter? The reason I’m asking these questions is that I don’t think you ‘get’ the underlying physical model we’re dealing with here. Your way of thinking about this seems to be to take concepts from the gate model and apply them directly to this situation, which is incorrect. As an example, the qubits in our set-up have relatively short coherence times, but it’s been shown experimentally that there is equilibrium entanglement in the system. The fact that coherence between the ground and first excited states is lost quickly does not change the fact that the system’s ground state is entangled and, even more strongly, that the states the system evolves through are entangled.
In terms of the ratio of uncontrolled to controlled terms in our Hamiltonian, that’s something that we can measure — it’s about 0.05 currently.
“time for Rabi oscillations” is just another way of saying “1 / strength of single-qubit Hamiltonian”. You can think of it as how long a Rabi oscillation would take if there were no noise in the system. My understanding is that your system is too noisy for a single Rabi oscillation. Is that right?
I understand that your model is different from the gate model. I’m saying that there is no known way of provably correcting noise in your model without having the local Hamiltonian strength grow with the length of the computation.
It’s true that your system is immune to dephasing in the energy eigenbasis. The problem is that as you scale up the system, the energy eigenstates become entangled over longer distances and the system-environment interaction remains local. Thus the effective interaction will eventually stop looking like decoherence in the energy eigenbasis.
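To spell the mechanism out (my notation, a sketch of the argument rather than a derivation): a local system–environment coupling has the form

\[
H_{SB} \;=\; \sum_i \sigma^z_i \otimes B_i,
\]

and it acts as pure dephasing in the energy eigenbasis only to the extent that the \sigma^z_i are (approximately) diagonal in that basis. Once the eigenstates |E_k\rangle are entangled across many qubits, the off-diagonal matrix elements \langle E_k|\sigma^z_i|E_l\rangle are generically nonzero, so the same local coupling drives transitions between eigenstates rather than merely dephasing them.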
The scaling of this in terms of the parameters of your system is an interesting question. I’m assuming it hasn’t been answered yet, right?
What Martinis actually said makes a lot of sense; in fact, it crossed my mind: what if they just used Martinis’s error correction technology on the D-Wave machine, considering a lot of folks are complaining about the D-Wave coherence time and noise (which is the crux of the issue of whether D-Wave is quantum or not)? The latest study from Lidar seems to indicate that a speedup can be achieved with the right error correction.
My concern is, can both parties (D-Wave and the Martinis group) actually agree to work on this? Lidar’s error correction is fine, but the Martinis approach provides not only error correction but also longer coherence times. The D-Wave machine is based on the principle that it can manage to operate quantum-mechanically with just enough coherence time, and this may run counter to Martinis’s approach.
Beyond the technical differences, they also have to deal with different teams, projects, management, patents, etc. Let’s hope they get along.
There are no known proposals for using “Martinis” error correction (I guess you mean Kitaev’s surface code?) on the D-Wave machine, and this is almost certainly impossible with their current architecture.
Looks like I jumped the gun 😉 but it would indeed be interesting if Martinis tries quantum annealing as well, but with different hardware and longer coherence times.
I think D-Wave is also interested in longer coherence times to improve performance, but they probably don’t have enough resources at this time to implement major changes in the hardware. In this instance, the findings from the Martinis hardware might help D-Wave.
On the other hand, I think D-Wave and Martinis belong to two different camps or schools of thought when it comes to the principle of coherence/decoherence. D-Wave is based on the idea that quantum computation is still possible well beyond the coherence time, while the Martinis approach may conclude otherwise. Personally, I think these ideas are hard to prove, and I can already see the debates that will ensue from this in the future. I would rather see them adopt an engineering approach to find out what works and what doesn’t, and implement that in the short term. The really hard and deep scientific questions they can probably leave to the PhDs at university labs to chew on.
Hi Ramsey! It’s not correct that our systems are based on a principle or theory that you don’t need high coherence times. Our approach has always been to build real computers at scale. When you do this, you encounter problems that you don’t if all you want to do is maximize the coherence times of a handful of qubits. Given the complexity of the circuitry required to build real processors, our noise levels are ridiculously low. No one has ever built circuits at the scale we’re at with the levels of noise we have.
Maximizing coherence time is fine and good, but in real-world situations that quantity often has trade-offs with other equally (or more) important quantities, such as being able to build actual computers. If all we wanted to do was build a handful of qubits with high coherence times, we could have done that 10 years ago. We didn’t, because the set-ups people have shown that exhibit long coherence times are inherently not scalable.
Engineering large-scale systems is a very different regime than building a handful of qubits.
Here are a couple of brief videos on QC by Google and Lockheed Martin:
Lockheed Martin video –
https://www.youtube.com/watch?feature=player_embedded&v=Fls523cBD7E
Google video –
Oh, what a tangled web that Neven weaves.