Scott Aaronson (again) resigns as chief D-Wave critic and endorses their experiments

An exercise in positive spin.

Update below.

The English language is astoundingly malleable. It feels almost as if it were tailor-made for marketing spin. I noticed this long ago (feels like a lifetime) when working in a position that required me to sell software. Positioning products was much easier when I spoke English. Mind you, I never told a blatant lie, but I certainly spun the facts to put our product in the best light, and if a customer committed I'd do my darnedest to deliver the value that I promised. The kind of customers I dealt with were of course aware of this dance, and perfectly capable of performing their due diligence. From their perspective, in the end, it is always about buying into the vision, knowing full well that a cutting-edge technology, one that will give a real competitive benefit, will of course be too new to be without risk.

During the courting of the customers, any sales person worth their salt will do anything to make the product look as good as possible. One aspect of this is of course to stress positive things that others are saying about your offerings.

To accomplish this, selective quoting can come in very handy. For instance, after reviewing the latest pre-print paper on the performance of D-Wave's 503-qubit chip, Scott Aaronson stepped down for the second time as chief D-Wave critic. In the blog post where he announced this, he also observed that on "the ~10% of instances on which the D-Wave machine does best, (…) the machine does do slightly better (…) than simulated annealing".

This puts in words what the following picture shows in much more detail.

Instance-by-instance comparison of annealing times and wall-clock times. Shown is a scatter plot of the pure annealing time for the DW2 compared to a simulated classical annealer (SA) using an average over 16 gauges on the DW2. This is figure 6 of the recent benchmark paper. Wall clock times include the time for programming, cooling, annealing, readout and communication. Gauges refer to different encodings of a problem instance. (Only plot A and B are relevant to the settling of my bet).

Now, if you don't click through to Scott's actual blog post, you may take away that he actually changed his stance. But of course he hasn't. You can look at the above picture and think the glass is ninety percent empty, or you could proclaim it is ten percent full.

The latter may sound hopelessly optimistic, but let's contemplate what we are actually comparing. Current computer chips are the end product of half a century of highly focused R&D, with billions of dollars poured into developing them. Yet we know we are getting to the end of the line of Moore's law. Leakage currents are already a real problem, and the writing is on the wall that we are getting ever closer to the point where the current technology will no longer allow for tighter chip structures.

On the other hand, the D-Wave chip doesn't use transistors. It is an entirely different approach to computing, as profoundly different as the analog computers of yore.

The integration density of a chip is usually characterized by the length of the silicon channel between the source and drain terminals of its field-effect transistors (e.g. 25 nm). This measure obviously doesn't apply to D-Wave, and the quantum chip's integration density isn't even close to that scale. Yet with the ridiculously low number of about 500 qubits on D-Wave's chip, which was developed on a shoestring budget compared to the likes of Intel or IBM, the machine still manages to hold its own against a modern CPU.

Yes, this is not a universal gate-based quantum computer, and the NSA won't warm up to it because it cannot implement Shor's algorithm, nor is there a compelling theoretical reason that you can achieve a quantum speed-up with this architecture. What it is, though, is a completely new way to do practical computing, using circuit structures that leave plenty of room at the bottom. In a sense, it is resetting the clock to when Feynman delivered his famous and prophetic talk on the potential of miniaturization. Which is why, from a practical standpoint, I fully expect to see a D-Wave chip eventually unambiguously outperform a classical CPU.

On the other hand, if you look at this through the prism of complexity theory none of this matters, only proof of actual quantum speed-up does.

Scott compares the quantum computing skirmishes he entertains with D-Wave to the American Civil War:

If the D-Wave debate were the American Civil War, then my role would be that of the frothy-mouthed abolitionist pamphleteer

Although clearly tongue-in-cheek, this comparison still doesn't sit well with me. Fortunately, in this war, nobody will lose life or limb. The worst that could happen is a bruised ego, yet if we have to stick with this metaphor, I don't see this as Gettysburg 1863 but the town of Sandwich 1812.

Much more will be written on this paper. Once it has fully passed peer review and been published, I will also finally be able to reveal my betting partner. But in the meantime there is a Google+ meeting scheduled that will allow for more discussion (h/t Mike).

Update

Without careful reading of the paper, a casual observer may come away with the impression that this test essentially just pitted hardware against hardware. Nothing could be further from the truth: considerable effort had to go into constructing impressive classical algorithms to beat the D-Wave machine on its own turf. This Google Quantum AI lab post elaborates on this (h/t Robert R. Tucci).
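To give a flavor of what such a classical competitor does, here is a minimal simulated-annealing sketch for a random Ising spin glass. This is only an illustration of the basic idea, not the heavily optimized annealers used in the benchmark; the instance size, coupling density, and temperature schedule below are all illustrative assumptions.

```python
# Minimal simulated annealing on a random Ising instance (illustrative only;
# not the optimized classical code from the benchmark paper).
import math
import random

def simulated_annealing(J, h, n, sweeps=1000, beta0=0.1, beta1=3.0):
    """Anneal n Ising spins with couplings J[(i, j)] and local fields h[i],
    minimizing E = sum J_ij * s_i * s_j + sum h_i * s_i."""
    spins = [random.choice([-1, 1]) for _ in range(n)]
    # Precompute each spin's neighbours so the local field is cheap to evaluate.
    neigh = {i: [] for i in range(n)}
    for (i, j), coupling in J.items():
        neigh[i].append((j, coupling))
        neigh[j].append((i, coupling))
    for sweep in range(sweeps):
        # Linear inverse-temperature schedule from beta0 (hot) to beta1 (cold).
        beta = beta0 + (beta1 - beta0) * sweep / (sweeps - 1)
        for i in range(n):
            local = h.get(i, 0.0) + sum(c * spins[j] for j, c in neigh[i])
            dE = -2.0 * spins[i] * local  # energy change if spin i is flipped
            # Metropolis rule: always accept downhill moves, sometimes uphill.
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
    return spins

def energy(J, h, spins):
    e = sum(c * spins[i] * spins[j] for (i, j), c in J.items())
    return e + sum(hi * spins[i] for i, hi in h.items())

# Usage: a small random +/-1 spin glass (far smaller than the 503-qubit
# Chimera-graph instances studied in the paper).
random.seed(0)
n = 32
J = {(i, j): random.choice([-1.0, 1.0])
     for i in range(n) for j in range(i + 1, n) if random.random() < 0.2}
h = {}
best = simulated_annealing(J, h, n)
print(energy(J, h, best))
```

The benchmark's real difficulty lies in the parts this sketch glosses over: encoding comparable instances on both machines and tuning the classical code until the comparison is fair.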

Update 2

D-Wave’s Geordie Rose weighs in.


23 thoughts on “Scott Aaronson (again) resigns as chief D-Wave critic and endorses their experiments”

  1. Hi Henning: Thanks for the latest post. By the way, your blog is right on the money! In the pictures you posted, you say only A and B are relevant to settling your bet. If that’s the case, then I would say you are EVEN! I think, however, the reason for his second resignation as “Chief D-Wave Skeptic” may be more sinister! Somebody with an LL.D. after his name may know better!

    1. Well, it’s close, but no cigar.

      I am holding out for results that even Scott would find unambiguous. To stay with the war terminology, it’s only a battle (barely) lost …

  2. Interesting. I begin to understand better why these guys are so hung up. They seem to be evaluating a classical annealing algorithm versus a quantum annealing algorithm on the same “dimension” space without realizing that quantum dynamics on an N qubit space is *always* a special case of classical dynamics (at least in pseudo form) for a 2^N dimensional classical phase space.

    If that is what you are trying to do you will stay confused for a long time.

    It seems to me that this is what these folks are actually trying to do. No wonder they don’t get it.

    1. For all intents and purposes this can be regarded as a pragmatic test to see if the D-Wave machine can outperform a current CPU, and the trickiest aspect is the problem encoding to ensure good comparability.

      On the other hand, from a theoretical standpoint, to the extent that the connection of classical and QM phase space is informed by your generalized Schroedinger equation, I think it is a safe bet that this is virtually unknown territory.

  3. Henning, one thing your sales pitch leaves out (maybe not surprisingly, since it IS a sales pitch…) is that the D-Wave Two is just barely starting to become competitive with a conventional computer *for the problem of finding ground states of the D-Wave Two.* For “useful” (encoded) problems, one expects that the comparison wouldn’t even be close.

  4. Hi Henning: Just to get your mind off this heavy stuff of quantum computers, I thought I should share this short video with you. It’s just to remind us how insignificant we and our planet are in the scheme of things. Enjoy!

  5. Henning: Thanks for the update. This reminds me of a “race”, some 10 years ago, over who could write the fastest algorithm to compute Pi to a million decimal places. After countless people had tried to do just that, two students (one from the US and the other from Japan) collaborated and finally won the race! So now I can calculate Pi to a million places on my laptop in 1/3 to 1/2 of a second! By the way, they are also the record holders for computing Pi to 10^13 decimal places.

  6. Henning: I just came across an article in Scientific American that Scott wrote in March, 2008. This is the opening of his article:

    The Limits of Quantum Computers

    Quantum computers would be exceptionally fast at a few specific tasks, but it appears that for most problems they would outclass today’s computers only modestly. This realization may lead to a new fundamental physical principle.

    Isn’t that what D-Wave is doing, or trying to do? You may bring this contradiction in his position to his attention someday. Here is the link. Thanks.
    http://www.scientificamerican.com/article/the-limits-of-quantum-computers/

    1. Scott’s beef is that the D-Wave design is theoretically unproven. The QC model that he, like pretty much all theorists, prefers is the gate based model. Within this model you can for instance conclusively show, with mathematical rigor, that Shor’s algorithm scales better than anything possible on a classical machine. Given the absence of strong theoretical motivation for Quantum Annealing to deliver a speed-up, he pretty much doesn’t expect D-Wave to pull through at all.

      1. Henning, two quick corrections:

        (1) Yes, we can prove with mathematical rigor that Shor’s algorithm factors numbers in polynomial time (which, of course, is much more than we can show for the adiabatic algorithm), but not that it “scales better than anything possible on a classical machine”! For no one has ruled out a fast classical factoring algorithm — a necessary first step for that would be to prove P!=NP.

        (2) My beef with the D-Wave design is not just that it’s “theoretically unproven,” but also that it’s practically unproven! Right now, we don’t have any clear evidence at all—not even numerical, heuristic, “physicist” evidence—that the adiabatic algorithm (let alone quantum annealing…) should outperform simulated annealing on any practically-important class of NP-hard problem instances. It might or it might not. This is not a matter of dotting some i’s and crossing some t’s for theorists’ satisfaction, but of plunging into the unknown. That’s why I’ve said from the beginning that implementing the adiabatic algorithm and trying it out could make for great fundamental science, so long as one communicates honestly about what one has and hasn’t accomplished.

        (On the other hand, it’s worth stressing that, unless and until D-Wave improves its design to get better coherence times and non-stoquastic Hamiltonians, we have a pretty strong theoretical reason to not expect any asymptotic speedup. Namely, it seems like Quantum Monte Carlo should continue to be able to efficiently simulate everything that’s going on.)

        1. Thanks Scott, for clarifying this. I certainly implicitly assumed P!=NP, although this is of course just a conjecture. Yet it seems to me to be a pretty strong one. It would be rather earth-shattering if NP turned out to be just a mirage and the universe was polynomial to the core.

          As to D-Wave, regarding this as a great experiment to see what kind of performance can be squeezed out of quantum annealing is a valid way of looking at this.

          It seems to me that Google is a perfect fit to run the kind of data mining necessary to find out if there is a class of problems where this design has a clear edge. And of course, partnering with D-Wave, they will have first dibs at the IP.

          If they come up empty it’ll be disappointing.

          On the hardware front I think D-Wave has a lot of good options for improvements; in my mind, more spin couplings and longer coherence times top the list.

          1. (1) The trouble is, factoring is a very “special” problem: it could easily have a fast classical algorithm even assuming P!=NP. My personal guess is that it doesn’t, but the only real argument is that people have looked for such an algorithm for decades and only found mildly-subexponential ones. It’s not like a fast factoring algorithm would bring thousands of other problems crashing down with it, the way a fast 3SAT algorithm would.

            (2) I think D-Wave would be more interesting as a scientific experiment if the hardware were good enough to evade Quantum Monte Carlo simulation! That’s why one of my biggest concerns has been how little emphasis they’ve put on understanding and improving the fundamentals of their hardware: even the academics who work with them, like Lidar, have had little success in convincing them to do this. Instead, their approach—driven (I guess) by the demands of the VC world—has just been to scale up the existing hardware to as many qubits as possible, as fast as possible. The trouble is that, if (as is looking likely…) you indeed can’t get an asymptotic speedup that way, that still doesn’t tell us about the inherent limitations of quantum adiabatic optimization itself, but only about the limitations of some particular hardware. So it’s of limited use even as science.

          2. Scott, you are certainly right in pointing to the “creative tension” between business strategy and science. A vast topic that deserves its own post eventually.

            But give it some time, D-Wave’s architecture is a moving target. They may not let it on, but I’d be surprised if they wouldn’t consider some of Lidar’s input.

            One thing I’ll take for certain, because that’s where business and science interests clearly align: whatever their customers prioritize will probably also become a top priority for D-Wave.

            They may eventually have something that’ll even be interesting to a complexity theorist.

  7. Hi Henning: Did you see Dr. T. Ronnow’s presentation on the Google+ hangout? I did see it, and it wasn’t much of a presentation. It was simply a rehash of the findings in their recent paper. He got a couple of questions, the most interesting of which he knew nothing about! Namely, that the reason Lockheed bought a D-Wave machine was that they gave D-Wave the source code for one of their jet fighters, knowing there was an error in it that had taken them some six months to find. But D-Wave was able to find it much faster! Hence their reason for buying it. This story was confirmed by Dr. Daniel Lidar of USC, according to the questioner.
    By the way, why do you think this whole thing is spearheaded by a couple of Swiss physicists and sponsored by the Swiss government? Is politics a factor in all of this, and why? Thanks.

    1. Sol, actually I came away with a different impression from the presentation. I find it always helps to hear from the researchers themselves. My take away was that they clearly appreciate that D-Wave created something genuinely new in the computing space.

      Will summarize my impressions in a short follow-up blog post.

      As to why a Swiss institution is so strongly involved, that’s easy to answer: the ETH Zurich (Einstein’s old alma mater) is where Mathias Troyer, one of the lead authors, teaches, so much of the coding, and the yeoman’s work, has been performed by his graduate students. Mathias, incidentally, is not Swiss but Austrian. Not that it really matters; the ETHZ, being one of the best engineering schools you’ll find on this planet, attracts faculty from all over the world. It is a bastion of academic independence with a proud tradition, as trustworthy as a Swiss bank account 🙂 There’s no reason to suspect that the feeble Swiss government may run interference.

  8. Thanks for educating me on the ETH. I didn’t mention that he did acknowledge that D-Wave had created something interesting, but he spent more time mocking silly blogs and articles online from people who know next to nothing about physics, let alone quantum mechanics. I thought he was wasting his time on that aspect of his presentation.

    1. Agreed that was pretty juvenile. Then again the dude looked frighteningly young 🙂

      Can’t imagine he has much experience giving presentations. Throwing some humor into a technical talk can work magic – but when the magic doesn’t work, it’s just awkward.

Comments are closed.