About that Google Quantum Chip

In light of the recent news that John Martinis is joining Google, it is worthwhile to check out this Google talk from last year:

It is an hour-long talk but very informative. John Martinis does an excellent job of explaining, in very simple terms, how hardware-based surface code error correction works.
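To get a feel for the redundancy-and-syndrome idea underlying this kind of error correction, a toy example helps. The sketch below is emphatically not the surface code, just the classical skeleton of the concept: a three-bit repetition code in plain Python, with a made-up 5% bit-flip rate.

```python
import random

# Toy illustration of the redundancy idea behind error correction: a 3-bit
# repetition code protecting one logical bit against single bit flips.
# (The surface code is far more involved; this is only the classical skeleton
# of "encode redundantly, measure a syndrome, correct".)

def encode(bit):
    return [bit, bit, bit]

def noisy_channel(codeword, p_flip=0.05):
    return [b ^ 1 if random.random() < p_flip else b for b in codeword]

def syndrome(codeword):
    # Parity checks between neighbouring bits, analogous to stabilizer measurements.
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def decode(codeword):
    s = syndrome(codeword)
    corrected = list(codeword)
    if s == (1, 0):
        corrected[0] ^= 1
    elif s == (1, 1):
        corrected[1] ^= 1
    elif s == (0, 1):
        corrected[2] ^= 1
    return corrected[0]  # logical bit after correction

random.seed(0)
trials = 100_000
failures = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print("logical error rate:", failures / trials)  # well below the physical 5% rate
```

The surface code plays the same game with stabilizer measurements on a two-dimensional lattice of qubits, which is what makes it such a good match for the planar chip layouts Martinis talks about.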

Throughout the talk he uses the gate model formalism. Hence it is quite natural to assume that this is what the Google chip will aim for. This is reinforced by other publications, such as the IEEE's coverage, which have drawn a stark contrast between the Martinis approach and D-Wave’s quantum annealing architecture. It is certainly how I interpreted the news as well.
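For readers unfamiliar with that formalism: in the gate model, states are vectors, gates are unitary matrices, and a circuit is just a sequence of matrix multiplications. A minimal numpy sketch (my own illustration, not anything taken from the talk) that prepares a Bell state:

```python
import numpy as np

# Gate model in a nutshell: apply H to qubit 0, then a CNOT, and the product
# of unitaries turns |00> into the entangled Bell state (|00> + |11>)/sqrt(2).

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)          # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])               # controlled-NOT, qubit 0 controls qubit 1

state = np.array([1, 0, 0, 0], dtype=float)   # |00>
state = CNOT @ (np.kron(H, I) @ state)        # apply H on qubit 0, then CNOT

print(state)   # [0.707..., 0, 0, 0.707...]  i.e. (|00> + |11>)/sqrt(2)
```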

But on second thought, and after careful parsing of the press releases, the case is not as clear-cut. For instance, Technology Review quotes Martinis in this fashion:

“We would like to rethink the design and make the qubits in a different way,” says Martinis of his effort to improve on D-Wave’s hardware. “We think there’s an opportunity in the way we build our qubits to improve the machine.”

This sounds more like Martinis wants to build a quantum annealing chip based on his logical, error-corrected qubits. From an engineering standpoint this would make sense, as it should be easier to achieve than a fully universal gate-based architecture, and it would address the key complaint I have heard from developers programming the D-Wave chip: that they really would like to see error correction implemented on the chip.
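To make that distinction concrete: an annealer's job is to drive a register of spins toward low-energy configurations of an Ising objective. The sketch below does this with plain classical simulated annealing over a made-up four-spin problem; it only shows the kind of objective such a chip targets, not how D-Wave's hardware, or an error-corrected successor, would actually work.

```python
import math
import random

# Classical simulated annealing over a small, made-up Ising problem:
#   E(s) = sum_{i<j} J[i,j] * s_i * s_j + sum_i h[i] * s_i,   s_i in {-1, +1}

J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0, (2, 3): -1.0}   # frustrated triangle + one extra bond
h = [0.1, 0.0, -0.1, 0.0]

def energy(s):
    e = sum(j * s[a] * s[b] for (a, b), j in J.items())
    return e + sum(hi * si for hi, si in zip(h, s))

def anneal(steps=20_000, t_start=2.0, t_end=0.01):
    s = [random.choice([-1, 1]) for _ in h]
    e = energy(s)
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)    # geometric cooling schedule
        i = random.randrange(len(s))
        s[i] *= -1                                        # propose a single spin flip
        e_new = energy(s)
        # Metropolis rule: accept downhill moves, uphill moves with Boltzmann probability.
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new
        else:
            s[i] *= -1                                    # reject: undo the flip
    return s, e

random.seed(1)
spins, e = anneal()
print("spins:", spins, "energy:", e)
```

Error correction on such a chip would then amount to making the logical spin variables and their couplings robust against the noise of the underlying physical qubits.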

On the other hand, in light of Martinis's presentation, I presume that he will regard such an architecture simply as another stepping stone towards universal quantum computation.

6 thoughts on “About that Google Quantum Chip”

  1. I would interpret this as an alternate approach betting (again) on superconducting electronics but with the introduction of some form of coherence preservation (error correction).

    One unfortunate feature of the traditional manner of viewing quantum computing (as superposition of basic qubit states) is that it takes one rather far from the true physics (coherent many-electron states).

    Since we can exhibit coherence on a large scale and with long coherence times in other settings, I think the advances will be on the control front, which is where this individual hails from.

    In many respects, the electronics tradition is the right heritage. Think, for instance, of electronic circuit innovations such as the phase-locked loop. This enabled much finer frequency stabilization and measurement in electronic circuits.

    The mainstream of physics has too much “stochastic quantum measurement” on the brain to really comprehend what might be possible in quantum control. This is because their mental model of the underlying physical process is pretty wayward and wedded to intrinsic stochasticity.

    There is (to my mind) no fundamental physical reason why many-body systems must lose quantum coherence quickly. However, to avoid that you need something akin to phase-locking and nonlinear processes. If I am correct and QED self-energy is in fact non-linear, then these superconducting electronics systems *will* have strong inherent non-linearities.

    That can lead to mode and phase locking.
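    For what it is worth, the phase-locking analogy is easy to make concrete in software. Below is a minimal first-order phase-locked loop in Python (all parameters invented for illustration): a detuned software oscillator is pulled into lock with a reference simply by feeding the phase error back into its frequency.

    ```python
    import numpy as np

    # Minimal first-order PLL sketch (illustrative parameters only): a software
    # "VCO", free-running at 95 Hz, locks to a 100 Hz reference because the
    # measured phase error is fed back into its frequency.
    fs = 10_000.0                  # sample rate (Hz)
    f_ref, f_vco = 100.0, 95.0     # reference and free-running VCO frequency (Hz)
    kp = 0.05                      # proportional loop gain

    n = int(fs)                    # one second of samples
    ref_phase = 2 * np.pi * f_ref * np.arange(n) / fs

    vco_phase = 1.0                # start the VCO with an arbitrary phase offset
    err = np.zeros(n)
    for i in range(n):
        # Phase detector: wrapped phase difference between reference and VCO.
        err[i] = np.angle(np.exp(1j * (ref_phase[i] - vco_phase)))
        # VCO update: nominal phase increment plus a correction from the error.
        vco_phase += 2 * np.pi * f_vco / fs + kp * err[i]

    print("initial phase error: %+.3f rad" % err[0])
    print("final phase error:   %+.3f rad" % err[-1])  # small steady residual: locked
    ```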

  2. On another front, the macroscopic nature of the underlying superconducting wave state raises the question of the appropriate treatment in Maxwell’s equations. The present Feynman-inspired treatments of QED, based upon the assumption of building up the dynamics from discrete virtual processes, have perhaps blinded many to the simple question: “What is the right phenomenological Maxwell equation?” Schwinger seemed to get this with his “Particles, Sources and Fields” approach, but the majority of mainstream physicists seem somewhat obtuse in not even comprehending the question.

    Hopefully, this addition of a new team and a different approach (now 3 contenders in superconducting electronics) will give a great boost to (again) asking the right fundamental questions… namely:

    “What is the correct statement of a fundamental and non-perturbative QED?”

    We have clues, but we don’t (as yet) have that theory.

    This is the major scientific prize that I feel confident will emerge from this race.

    As engineers, the various teams will realize what the physics community does not.

    QED is not good enough for this job: a new theory is *necessary*.

    1. To me, the surface code technology that the Martinis group pioneers feels very much like a computer science approach to error correction.

      So far, from what I have seen, nobody really seems to tackle this from the vantage point of a phenomenological Maxwell equation. I think the latter is more closely related to an analog computing mindset, whereas this approach is very much tailored to the quantum gate model.

      In a sense most engineers will build what they are asked to, and the vast majority of the QIS community wants quantum gates. D-Wave is an exception to the rule, having truly bucked and irritated the mainstream.

        1. Good point. The vast majority of physicists “believe” the physical picture promoted by Feynman diagrams, namely that there are actually point-like particles jumping around down there in stochastic fashion. Hence they think “discrete” is closest to the metal of Nature. I am saying the opposite. This is actually rather far from what is going on with a collective many-body coherent wave state. The very idea of “separate” gates is part of the problem and the reason why physicists had to adopt the notion of “entanglement”. Well… certainly the wave structure is complicated, since it is (in general) correlated in configuration space. However, if that is the way things are (that is my view), then the “digital” obsession is a passing phase. If nature is analog, then that is that. Of course, the digital mode of computation is general-purpose, but it comes at the high cost of irreversible operations and high power dissipation (for those devices built so far). Nonetheless, people feel this is the “right” and proper way of doing things. I am suggesting that such views may well prove a temporary aberration.

        The resurgence of interest in Boltzmann machines off the back of progress in Deep Learning is but one example of this trend. Neural networks ain’t discrete computational devices.
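        To illustrate the point: even in a tiny Boltzmann machine the units are binary, but the update rule is stochastic and driven by a real-valued energy. The weights below are invented; the sketch just shows Gibbs sampling settling into the machine's low-energy states.

        ```python
        import math
        import random

        # Tiny Boltzmann machine with made-up weights: binary units, but the
        # dynamics are stochastic and governed by a real-valued energy
        #   E(s) = -0.5 * sum_ij W[i][j] s_i s_j - sum_i b[i] s_i.
        W = [[0.0, 1.5, -1.0],
             [1.5, 0.0, 0.5],
             [-1.0, 0.5, 0.0]]      # symmetric couplings, zero diagonal
        b = [0.2, -0.1, 0.0]        # biases
        T = 1.0                     # temperature

        def gibbs_step(s):
            i = random.randrange(len(s))
            field = b[i] + sum(W[i][j] * s[j] for j in range(len(s)) if j != i)
            p_on = 1.0 / (1.0 + math.exp(-field / T))   # logistic activation
            s[i] = 1 if random.random() < p_on else 0
            return s

        random.seed(2)
        s = [random.randint(0, 1) for _ in b]
        counts = {}
        for _ in range(20_000):
            s = gibbs_step(s)
            counts[tuple(s)] = counts.get(tuple(s), 0) + 1

        # The most frequently visited states are the low-energy ones.
        print(sorted(counts.items(), key=lambda kv: -kv[1])[:3])
        ```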

  3. May I ask why digital should be considered the opposite of analog in the first place? That would only mean a certain lack of imagination. Did anyone consider the possibility of “embedding” digital calculations in analog signals? It is sufficient to think of a frequency comb with appropriate spacings as the classical equivalent of the famous quantum “collective” register (a rough sketch of this embedding idea follows at the end of this comment). And it is as conservative as anything, as long as moving frequencies around does not change the overall signal spectrum. The difficulty with analog lies elsewhere. Try to figure out a dynamical system capable of moving said frequencies according to another prescribed signal (the “program”) and you will get a perfectly classical SWIC (Single Wire Isentropic Computer). And who knows, maybe the whole goddamn CMB is but a collection of programs running around in the matrix fabric!

    http://www.youtube.com/watch?v=hjE2sxCQ_rU

    “Embed? Don’t you mean in bed?” (Madonna)

    (From header of Chap. 6 in P. Wesson’s book: “Five Dimensional Physics…”, World Sci. 2006)
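    For concreteness, here is a rough sketch of the embedding idea only (it says nothing about the dynamics that would reprogram such a comb): a bit string is written into one waveform as the presence or absence of evenly spaced spectral lines and read back with an FFT. All frequencies and spacings are invented for illustration.

    ```python
    import numpy as np

    # Encode a bit string as a frequency comb: bit k switches the comb line at
    # f0 + k * df on or off. Decoding is just reading the spectrum back out.
    fs = 8_000            # sample rate (Hz)
    n = fs                # one second of signal -> 1 Hz FFT bins
    f0, df = 100, 50      # comb start frequency and line spacing (Hz)
    bits = [1, 0, 1, 1, 0, 0, 1, 0]

    t = np.arange(n) / fs
    signal = sum(b * np.cos(2 * np.pi * (f0 + k * df) * t) for k, b in enumerate(bits))

    spectrum = np.abs(np.fft.rfft(signal)) / (n / 2)
    recovered = [int(spectrum[f0 + k * df] > 0.5) for k in range(len(bits))]

    print("sent:     ", bits)
    print("recovered:", recovered)   # matches the encoded bit string
    ```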
