D-Wave – Fast Enough to Win my Bet?

Really Would Like to Get That Raclette Cheese.

Last summer I had to ship a crate of maple syrup to Matthias Troyer at the ETHZ in Switzerland. The terms we had agreed on for our performance bet were such that, at that point, the D-Wave One could not show a clear performance advantage over a conventional, modern CPU running fine-tuned optimization code. The machine held its own, but there was no problem class to point to that really demonstrated massive performance superiority.

Impressive benchmark graph. Next on my Christmas wish list: a decisive widening of the gap between the green QMC curve and the blue D-Wave line as the problem size increases (as is already the case when compared to the red Simulated Annealing curve).


The big news to take away from the recent Google/D-Wave performance benchmark is that, on certain problem instances, the D-Wave machine clearly shines. A hundred-million-fold speed-up over a Quantum Monte Carlo simulation is nothing to sneeze at. This doesn't mean that I would now automatically win my bet with Matthias if we were to repeat it with the D-Wave Two, but it would certainly make things much more interesting.
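For readers curious what the classical baseline in these benchmarks actually does, here is a minimal single-core simulated-annealing sketch for an Ising-type problem, the kind of energy-minimization task the D-Wave hardware anneals. The function name, cooling schedule, and parameters are illustrative choices of mine, not the tuned code used in the actual benchmark:

```python
import math
import random

def simulated_annealing(h, J, steps=20000, t_start=5.0, t_end=0.05, seed=1):
    """Minimize the Ising energy
        E(s) = sum_i h[i]*s[i] + sum_{(i,j) in J} J[(i,j)]*s[i]*s[j]
    over spins s[i] in {-1, +1} using single-spin-flip Metropolis updates."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    # Adjacency list so the energy change of one flip costs O(degree).
    nbrs = [[] for _ in range(n)]
    for (i, j), c in J.items():
        nbrs[i].append((j, c))
        nbrs[j].append((i, c))

    # Current energy, updated incrementally as spins flip.
    e = sum(h[i] * s[i] for i in range(n)) + \
        sum(c * s[i] * s[j] for (i, j), c in J.items())
    best, best_e = s[:], e

    for k in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / (steps - 1))
        i = rng.randrange(n)
        # Energy change if spin i were flipped.
        delta = -2 * s[i] * (h[i] + sum(c * s[j] for j, c in nbrs[i]))
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            s[i] = -s[i]
            e += delta
            if e < best_e:
                best, best_e = s[:], e
    return best, best_e
```

The "time to 99% success probability" metric quoted in the paper comes from running many such restarts and asking how long until the ground-state energy has been found with 99% confidence.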

One advantage of being hard-pressed to find time for blogging is that once I get around to commenting on recent developments, most other reactions are already in. Matthias provided this excellent write-up, and the former D-Wave critic-in-chief remains in retirement. Scott Aaronson's blog entry on the matter strikes a (comparatively) conciliatory tone. One of his comments explains one of the reasons for this change:

"[John Martinis] told me that some of the engineering D-Wave had done (e.g., just figuring out how to integrate hundreds of superconducting qubits while still having lines into them to control the couplings) would be useful to his group. That’s one of the main things that caused me to moderate a bit (while remaining as intolerant as ever of hype)."

Scott also gave a pretty balanced interview to the MIT News (although I have to subtract a star on style for working in a dig at Geordie Rose - clearly the two won't become best buds in this lifetime).

Hype is generally and rightly scorned in the scientific community. And when it is pointed out (for instance, when the black hole information loss problem had been "solved"), the scientists involved are usually on the defensive.

Buddy the Elf believes anything Steve Jurvetson ever uttered and then some.

Of course, business follows very different rules, more along the lines of Donald Trump's rules of attention: any BS will do as long as it captures an audience. Customers are used to these kinds of commercial exaggerations, and so I am always a bit puzzled by the urge to debunk D-Wave "hype". To me it feels almost a bit patronizing. The average Joe is not like Buddy the Elf, the unlikely hero of my family's favorite Christmas movie. When Buddy comes to NYC and sees a diner advertising the world's best coffee, he takes this at face value and goes crazy over it. The average Joe, on the other hand, has been thoroughly desensitized to high-tech hype. He knows that neither Google Glass nor the Apple Watch will really change his life forever, nor will he believe Steve Jurvetson that the D-Wave machines will outperform the universe within a couple of years.

Steve, on the other hand, does what every good VC is supposed to do for a company he has invested in, i.e. create hype. The world has become a virtual bazaar, and your statements have to be outrageous and novel in order to be heard over the noise. What he wants to get across is that the D-Wave machines will grow in performance faster than conventional hardware. Condensing this into Rose's Law is the perfect pitch vehicle for that: hype with a clear purpose.

People like to pick an allegiance and cheer for their "side". This narrative has dominated the D-Wave story for many years, and it made for easy blogging, but I won't miss it. The hypers gonna hype, the haters gonna hate, but now the nerds should know to trust the published papers.

Max Planck famously quipped that science advances one funeral at a time, because even scientists have a hard time acting completely rationally and adjusting their stances when confronted with new data.  This is the 21st century, here's to hoping that the scientific community has lost this kind of rigidity, even while most of humanity remains as tribal as ever.

7 thoughts on “D-Wave – Fast Enough to Win my Bet?”

  1. My apologies that it took so long to finally post on this; numerous friends and acquaintances pointed me to this news. Too many to give a tip of the hat to in the main text, hence this comment to thank everybody who wrote me about it. Please keep it up.

    Currently I am unfortunately hard pressed to make enough time for blogging, but this too shall pass.

  2. Mmm…maple syrup! One of the four food groups–now all you need is some spaghetti to put it on! Well, as you know, I’m not a physicist, nor do I understand anything about how quantum computing actually works. But…I gotta tell you that the graph’s visuals make D Wave look bad. There has to be a better way of graphing the results to show that the lower graph line is the superior outcome. The impact of seeing D Wave’s line moseying across the bottom while the others soar above makes a bad first impression until you actually read the comments and the y axis label. As a non-scientist (but one with a communications degree), I am prompted to ask if there’s another graph style that might have more positive visual impact. Just wondering.

  3. I’m a rabid D-Wave fan, but I’ll make a bet: what is annoying are those in the other camp who trash the machine while pretending they are not taking sides and are just defending science (they may not even be aware of their own bias). Oh please, what a load 😉 Let’s face it, in part this is a race, and that creates competition and excitement. If D-Wave loses I will definitely feel bad about it, but it’s no different from the feeling of my team losing a hockey game; give me a day and I will move on. In the end you have to respect the outcome, whatever it may be, because this is all for science, which is where everybody should be.

    1. It is a rare occurrence that somebody is aware of their own bias. For the CS camp this is all about the fundamental quantum speed-up; anything less than that is considered a non-event, or worse, a cheapening of the pure quantum computing quest. D-Wave, on the other hand, pragmatically decided to go for the lower-hanging fruit and see how much mileage you can get out of qubits designed to be just good enough for annealing.

      In a sense the two camps never even agreed on the nature of their disagreement.

  4. I have built a beautiful and complex mobile from feathers, cane and string, similar to the ones Alexander Calder made. If I were to simulate its motion on a digital computer, the computer would be 100 million times slower because of the very fine details of the feather motions.

    Shall I write an article about my mobile being 100 million times faster than a digital computer and submit it to Nature?

    1. Nikos, I strongly suggest you write that paper and see if it will pass Nature’s peer review. Don’t be discouraged if it doesn’t at first. Just re-write it and try it again and again. At the same time you should try to see if you can attract venture capital for your idea. I am sure you already have a concept of how to encode general problems into your feather motion model.

      The world will be your oyster.

      1. Kidding aside, the Google/D-Wave manuscript that you characterise as ‘big news’ states in the abstract:

        […] the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA). For instances with 945 variables this results in a time-to-99%-success-probability that is ∼ 10^8 times faster than SA running on a single processor core.

        The above, in my simple mind, translates to: A purpose built analog device is 100M times faster than a digital computer simulating that analog device. Please correct me if I am wrong.

        The ability to map encodings of general problem instances is necessary (you are absolutely right about that), but it is not sufficient. You also have to actually solve those problems.

        I do not think there is a computational scientist who wouldn’t want a machine that solves, say, Quadratic Assignment Problems millions of times faster than conventional computers.

        But nowhere in the news was there any indication that we are now closer to such a feat.
