Tag Archives: Quantum Error Correction

Quantum Computing Coming of Age

Are We There Yet? That's the name of the talk that Daniel Lidar recently gave at Google (h/t Sol Warda, who posted this in a previous comment).

Spoiler alert, I will summarize some of the most interesting aspects of this talk as I finally found the time to watch it in its entirety.

You can skip the first 15 minutes if you follow this blog; he just gives a quick introduction to QC. In fact, if you follow the discussions on this blog closely, you will not find much news in most of the presentation until the very end, but I very much appreciated the graph eight minutes in, which is based on this Intel data:

Performance and clock speeds have been essentially flat for the last ten years; only the ability to squeeze more transistors and cores into one chip keeps Moore's law alive (data source: Intel Corp.).

Daniel, deservedly, spends quite some time on this to drive home the point that classical chips have hit a wall. Moving from silicon to germanium will only go so far in delaying the inevitable.

If you don't want to sit through the entire talk, I recommend skipping ahead to the 48-minute mark, where error correction on the D-Wave is discussed. The results are very encouraging, and in the Q&A Daniel points out that this EC scheme could be incorporated directly into the D-Wave design. I wouldn't be surprised to see this happen fairly soon. The details of the ECC scheme are available at arxiv, and Daniel spends some time on the graph shown below, pointing out that, to the extent that you can infer a slope, it looks very promising: it gets flatter as the problems get harder, and the gap between non-ECC and error-corrected annealing widens (solid vs. dashed lines). With ECC, I would therefore expect D-Wave machines to systematically outperform simulated annealing.

Number of repetitions to find a solution at least once.
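The metric in that graph follows from elementary probability: if a single annealing run finds the ground state with probability p, then the number of repetitions needed to see it at least once with some target confidence is a simple logarithm. A minimal sketch of that calculation (the function name and the 99% confidence default are my own illustrative choices, not taken from the paper):

```python
import math

def repetitions_needed(p_success, confidence=0.99):
    """Number of independent runs needed to observe a success at
    least once with the given confidence, assuming each run
    succeeds independently with probability p_success.

    Derivation: P(at least one success in R runs) = 1 - (1 - p)^R,
    so R = log(1 - confidence) / log(1 - p_success), rounded up.
    """
    if p_success >= 1.0:
        return 1
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_success))

print(repetitions_needed(0.5))   # 50% success per run -> 7 repetitions
print(repetitions_needed(0.01))  # 1% success per run -> 459 repetitions
```

This is why the widening gap between the solid and dashed curves matters so much: even a modest boost in per-run success probability from error correction translates into a large drop in the required number of repetitions.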

Daniel sums up the talk like this:

  1. Is the D-Wave device a quantum annealer?
    • It disagrees with all classical models proposed so far, and it exhibits entanglement. (i.e., yes, as far as we can tell)
  2. Does it implement a programmable Ising model in a transverse field and solve optimization problems as promised?
    • Yes
  3. Is there a quantum speedup?
    • Too early to tell
  4. Can we error-correct it and improve its performance?
    • Yes
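For readers unfamiliar with point 2: a programmable Ising model means the machine minimizes an energy function E(s) = Σ J_ij s_i s_j + Σ h_i s_i over spins s_i ∈ {−1, +1}, where the user programs the couplings J and fields h. A toy classical sketch, brute-forcing a tiny frustrated instance (the instance and function names are illustrative, not D-Wave's API):

```python
from itertools import product

def ising_energy(spins, J, h):
    """E(s) = sum_{i<j} J[i][j]*s_i*s_j + sum_i h[i]*s_i"""
    n = len(spins)
    energy = sum(h[i] * spins[i] for i in range(n))
    energy += sum(J[i][j] * spins[i] * spins[j]
                  for i in range(n) for j in range(i + 1, n))
    return energy

# Three antiferromagnetically coupled spins: a frustrated triangle,
# since no assignment can make all three pairs anti-aligned at once.
J = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
h = [0, 0, 0]

best = min(product([-1, 1], repeat=3), key=lambda s: ising_energy(s, J, h))
print(best, ising_energy(best, J, h))  # ground-state energy is -1
```

Brute force is exponential in the number of spins, which is exactly why the question of a quantum speedup (point 3) is interesting: an annealer attempts to reach such ground states by physical relaxation rather than enumeration.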

With regard to hardware-implemented qubit ECC, we also got some great news from Martinis' UCSB lab, which Google drafted for its quantum chip. The latest results have just been published in Nature (pre-print available at arxiv).

Martinis explained the concept in a talk I previously reported on, and clearly the work is progressing nicely. Unlike the ECC scheme for the D-Wave architecture, Martinis' approach targets a fidelity that will not only work for quantum annealing but should also allow for non-trivial gate-model computations.

Quantum computing may not have fully arrived yet, but after decades of research we are clearly entering the stage where this technology will no longer be just the domain of theorists and research labs, and at this time D-Wave is poised to take the lead.

About that Google Quantum Chip

In light of the recent news that John Martinis is joining Google, it is worthwhile to check out this Google talk from last year:

It is an hour-long talk but very informative. John Martinis does an excellent job of explaining, in very simple terms, how hardware-based surface code error correction works.

Throughout the talk he uses the gate model formalism, so it is quite natural to assume that this is what the Google chip will aim for. This is reinforced by the fact that other publications, such as the IEEE's, have also drawn a stark contrast between the Martinis approach and D-Wave's quantum annealing architecture. This is certainly how I interpreted the news as well.

But on second thought, and careful parsing of the press releases, the case is not as clear cut. For instance, Technology Review quotes Martinis in this fashion:

“We would like to rethink the design and make the qubits in a different way,” says Martinis of his effort to improve on D-Wave’s hardware. “We think there’s an opportunity in the way we build our qubits to improve the machine.”

This sounds more like Martinis wants to build a quantum annealing chip based on his logical, error-corrected qubits. From an engineering standpoint this would make sense: it should be easier to achieve than a fully universal gate-based architecture, and it would address the key complaint I have heard from developers programming the D-Wave chip, i.e. that they really would like to see error correction implemented on the chip.

On the other hand, in light of Martinis' presentation, I presume that he will regard such an architecture simply as another stepping stone toward universal quantum computation.