
D-Wave – Fast Enough to Win my Bet?

Really Would Like to Get That Raclette Cheese.

Last summer I had to ship a crate of maple syrup to Matthias Troyer at the ETHZ in Switzerland. The conditions we had agreed on for our performance bet were such that, at that point, the D-Wave One could not show a clear performance advantage over a conventional, modern CPU running fine-tuned optimization code. The machine held its own, but there weren’t any problem classes to point to that really demonstrated massive performance superiority.

[Figure: Google benchmark graph]
Impressive benchmark graph. Next on my Christmas wishlist: a decisive widening of the gap between the green QMC curve and the blue D-Wave line as the problem size increases (as is the case when compared to the red Simulated Annealing curve).

The big news to take away from the recent Google/D-Wave performance benchmark is that, for certain problem instances, the D-Wave machine clearly shines. Performing 100 million times faster than a Quantum Monte Carlo simulation is nothing to sneeze at. This doesn’t mean that I would now automatically win my bet with Matthias if we were to repeat it with the D-Wave Two, but it’ll certainly make it much more interesting.

One advantage of being hard-pressed to find time for blogging is that once I get around to commenting on recent developments, most other reactions are already in. Matthias provided this excellent write-up, and the former D-Wave critic-in-chief remains in retirement. Scott Aaronson’s blog entry on the matter strikes a (comparatively) conciliatory tone. One of his comments explains one of the reasons for this change:

“[John Martinis] told me that some of the engineering D-Wave had done (e.g., just figuring out how to integrate hundreds of superconducting qubits while still having lines into them to control the couplings) would be useful to his group. That’s one of the main things that caused me to moderate a bit (while remaining as intolerant as ever of hype).”

Scott also gave a pretty balanced interview to the MIT News (although I have to subtract a star on style for working in a dig at Geordie Rose – clearly the two won’t become best buds in this lifetime).

Hype is generally, and rightfully, scorned in the scientific community. And when it is pointed out (for instance, when the black hole information loss problem had been “solved”), the scientists involved are usually on the defensive.

[Image: Buddy the Elf]
Buddy the Elf believes anything Steve Jurvetson has ever uttered, and then some.

Of course, business follows very different rules, more along the Donald Trump rules of attention: any BS will do as long as it captures an audience. Customers are used to these kinds of commercial exaggerations, and so I am always a bit puzzled by the urge to debunk D-Wave “hype”. To me it feels almost a bit patronizing. The average Joe is not like Buddy the Elf, the unlikely hero of my family’s favorite Christmas movie. When Buddy comes to NYC and sees a diner advertising the world’s best coffee, he takes this at face value and goes crazy over it. The average Joe, on the other hand, has been thoroughly desensitized to high-tech hype. He knows that neither Google Glass nor the Apple Watch will really change his life forever, nor will he believe Steve Jurvetson that the D-Wave machines will outperform the universe within a couple of years.

Steve, for his part, does what every good VC businessman is supposed to do for a company he has invested in, i.e. create hype. The world has become a virtual bazaar, and your statements have to be outrageous and novel in order to be heard over the noise. What he wants to get across is that the D-Wave machines will grow in performance faster than conventional hardware. Condensing this into Rose’s Law is the perfect pitch vehicle for that: hype with a clear purpose.

People like to pick an allegiance and cheer for their “side”. That narrative has dominated the D-Wave story for many years, and it made for easy blogging, but I won’t miss it. The hypers gonna hype, the haters gonna hate, but by now the nerds should know to trust the published papers.

Max Planck famously quipped that science advances one funeral at a time, because even scientists have a hard time acting completely rationally and adjusting their stances when confronted with new data. This is the 21st century; here’s hoping that the scientific community has lost this kind of rigidity, even while most of humanity remains as tribal as ever.

A Bet Lost, Despite Entanglement

And Why There Still May be Some Free Cheese in my Future.

Occasionally I like to bet. And in Matthias Troyer I found somebody who took me up on it. I wrote about this bet a while ago, but back then I agreed not to identify him as my betting pal until his paper was published. Now the paper has been out for a while, and it is high time to settle the first part of this bet.

The conditions were straightforward: can the D-Wave machine beat a single classical CPU? But of course we specified things a bit more precisely.

The benchmark is the time needed to find the ground state with 99% probability. Not only the median over the problem instances is considered, but also the 90%, 95%, and 99% quantiles. We agreed to base the bet on the 90% quantile, i.e. the test needs to run long enough to ensure that for 90% or more of the instances a ground state is found with 99% probability.
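
To make this metric concrete, here is a minimal sketch in Python of how such a time-to-solution number is commonly computed (the per-instance success probabilities and the 20-microsecond run time below are made-up stand-ins, not figures from the paper): a run that succeeds with probability p must be repeated often enough that 1 - (1 - p)^k reaches 99%, and the bet is then settled on the 90% quantile of the resulting times across all instances.

    import numpy as np

    def time_to_solution(t_run, p_success, target=0.99):
        # A run succeeding with probability p must be repeated k times for
        # the chance of at least one success, 1 - (1 - p)**k, to reach the
        # target; solving for k and scaling by the time per run gives the
        # time to find the ground state with the target probability.
        return t_run * np.log(1.0 - target) / np.log(1.0 - p_success)

    # Hypothetical data: success probabilities for 1000 random instances,
    # with each annealing run assumed to take 20 microseconds.
    rng = np.random.default_rng(42)
    p_success = rng.uniform(0.001, 0.9, size=1000)
    tts = time_to_solution(20e-6, p_success)

    for q in (50, 90, 95, 99):
        print(f"{q}% quantile of time-to-solution: {np.percentile(tts, q):.4f} s")

Settling on the 90% quantile rather than the median means a machine cannot hide behind a few easy instances; it has to handle the hard tail of the problem set as well.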

Assuming that Matthias gets to conduct his testing on both the current and the next chip generation from D-Wave, we agreed to make this a two-part bet, i.e. the same betting conditions apply to each generation.

Unfortunately, I have to concede the first round. The D-Wave One more or less tied the classical machine, although there were some problem instances where it was doing better. So the following jars of maple syrup will soon be shipped to Zürich:

[Image: jars of maple syrup]
Not the most exquisite or expensive maple syrup, but 100% pure, and Canadian. The same kind I use at home, and I figure the plastic jars will tolerate shipping much better than glass.

What I was obviously hoping for was a decisively clear performance advantage, but at this point that isn’t the case, unless you compare the machine to off-the-shelf optimizer software, as was done in the test conducted by McGeoch et al.

This despite the evidence for quantum entanglement in D-Wave’s machines getting ever more compelling. A paper demonstrating eight-qubit entanglement has just been published in Physical Review X. Geordie blogged about it here, and it has already generated some great press (h/t Pierre O.), probably the most objective mainstream article on D-Wave I’ve seen yet. It is a welcome change from the drivel the BBC put out on QC in the past.

So will I ever get some Raclette cheese in return for my maple syrup? The chances of winning the next part of my bet with Matthias hinge on the scaling behavior, as well as on whether a class of hard problems can be identified on which quantum annealing finds the ground state significantly faster. For the generic randomly generated problem set, scaling alone does not seem to cut it (although more data will be needed to be sure). So I am counting on D-Wave’s ingenuity, as well as on the bright minds who now get to work hands-on with the machine.

Nevertheless, Matthias is confident he’ll win the bet even at 2000 qubits. He thinks D-Wave will have to improve much more than just the calibration to outperform a single classical CPU. On the other hand, when I had the pleasure of meeting him last year in Waterloo, he readily acknowledged that what the company has accomplished so far is impressive. After all, this is an architecture that was created within just ten years on a shoestring budget, compared to the multi-billion-dollar, decades-mature semiconductor industry.

Unfortunately, when his university, the venerated ETH Zürich (possibly the best engineering school on the planet), came out with this press release, it (accidentally?) played into the old canard that D-Wave falsely claimed to have produced a universal quantum computer.

It puts into context the Chinese-whispers process depicted in the cartoon I put up in an earlier post. Unlike in the cartoon, where the press gets most of the blame, ever since I started paying attention to university press releases I have noticed that they are more often than not the true starting point of the distortions.

[Cartoon: “The Science Newscycle” by Jorge Cham, www.phdcomics.com]