**Updated below**

This is essentially an extended update to my last D-Wave post. Rather than tack it on there, I think it is important enough to merit its own post: I wish I could make anybody who plans on writing anything about D-Wave first watch the video below, from the first *Q+* Google+ hang-out of this year.

It summarizes the results of the paper I blogged about in my last post on the matter, lays out what is objectively known about D-Wave's machine based on the analyzed data, and sets out to answer three questions.

- Does the machine work?
- Is it quantum or classical?
- Is it faster than a classical computer?

The short version is

- Yes.
- Based on their modeling, the D-Wave Two is indeed a true quantum annealer.
- While it can beat an off-the-shelf solver, it cannot (yet) outperform, on average, a highly targeted, hand-crafted classical algorithm.
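For context, the classical side of these benchmarks is essentially simulated annealing on random Ising spin-glass instances. Below is a minimal toy sketch of such a solver - illustrative only, nothing like the highly tuned codes used in the actual comparison (the real instances also live on the chip's Chimera graph, not on a complete graph as here):

```python
import math
import random

def random_ising(n, seed=0):
    """A random spin-glass instance: couplings J_ij drawn from {-1, +1}
    on all pairs, no local fields.  Loosely the class of problem used in
    the benchmarks (illustrative stand-in, not the actual instance set)."""
    rng = random.Random(seed)
    J = {(i, j): rng.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}
    h = [0.0] * n
    return J, h

def energy(s, J, h):
    """Ising energy E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j."""
    return (sum(h[i] * s[i] for i in range(len(s)))
            + sum(c * s[i] * s[j] for (i, j), c in J.items()))

def simulated_annealing(J, h, n, sweeps=2000, beta_max=3.0, seed=1):
    """Metropolis simulated annealing with a linear temperature ramp;
    returns the lowest-energy configuration seen along the way."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    best, best_e = list(s), energy(s, J, h)
    for sweep in range(sweeps):
        beta = beta_max * (sweep + 1) / sweeps  # cool down over time
        for i in range(n):
            # local field felt by spin i
            f = h[i]
            for (a, b), c in J.items():
                if a == i:
                    f += c * s[b]
                elif b == i:
                    f += c * s[a]
            dE = -2.0 * s[i] * f  # energy change if we flip spin i
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i] = -s[i]
                cur_e = energy(s, J, h)
                if cur_e < best_e:
                    best, best_e = list(s), cur_e
    return best_e, best
```

Even this naive version finds the ground state of small instances easily; the hand-crafted solvers in the benchmark add smarter schedules, cluster moves and heavy low-level optimization on top of the same basic idea.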

Of course there is much more in the video, and I highly recommend watching the whole thing. It comes with a good introduction to the subject, but if you only want the part about the test, skip to about 11 minutes in (this way you also cut out some of the cheap shots at completely clueless popular-media reports - an attempt at infusing some humor into the subject that may or may not work for you).

With regard to point (2), the academic discussion is not settled. A paper with heavyweight names on it just came out *(h/t Michael Bacon)*, proposing that a similar annealing behavior could be accomplished with a classical set-up after all. To me this is truly academic in the best and worst sense, i.e. a considerable effort to get all the i's dotted and the t's crossed. It simply seems a bit far-fetched that the company would set out to build a chip with coupled qubits that behave like a quantum annealer, yet somehow end up with an oddly behaving classical annealer.

From my point of view it is much more interesting to explore all the avenues that are open to D-Wave to improve their chip, such as this new paper on strategies for a quantum annealer to increase the success probability for hard optimization problems *(h/t Geordie Rose)*.

**Update 1**

Geordie Rose weighs in on the paper that claims that the D-Wave machine can be explained classically. He expected a Christmas present and felt he only got socks ...

**Update 2**

Helmut Katzgraber et al. propose in this paper that the current benchmarks use the wrong problem set to detect a possible quantum speed-up on D-Wave's machine.


Henning: Thanks for the new update. Your second link, “this new paper”, is not working! The two IBM fellows, Smith & Smolin, first tried in May of last year to shoot down the original paper but got trounced by its authors, and they kept mum for the last 8 months. Now they have recruited a couple of “Big Guns” from Berkeley to bolster their case. We'll see if they even respond to them!

Their first attempt was here: http://arxiv.org/abs/1305.4904

The original authors' response was here: http://arxiv.org/abs/1305.5837

Thanks for noticing. Fixed the link now.

Quantum information theory (QIT) academics like to paint D-Wave as dishonest (“full of hype”). And yet when it comes to quantum cryptography (which Smolin has based his career on, and Vazirani has written papers on, e.g., http://arxiv.org/abs/1210.1810), they don't point out that quantum cryptography is totally pointless, pure hucksterism. Hypocrisy?

http://qbnets.wordpress.com/2014/01/06/quantum-cryptography-succumbs-to-an-ant/

To me this paints an interesting contrast. You are right in observing that the practical value of quantum cryptography is dubious, as hardening already fairly secure point-to-point connections is not going to make much of a difference. Yet it is academically well understood and hence sacrosanct.

D-Wave is about to establish a completely different platform of computing, one that points towards a post-silicon IT future, with lots of avenues to explore that have the potential for imminent applicability.

Anyhow, my first association with the name Smolin is loop quantum gravity. I guess if you are in Waterloo you are just expected to do quantum cryptography as well, as they seem to like it so much there.

Henning: Are you thinking of Lee Smolin at the Perimeter Institute? Is John Smolin at IBM related to Lee Smolin? Just curious.

Sol is absolutely right. Lee Smolin and John Smolin (IBM) are different people.

Indeed, got my Smolins mixed up. Though it was Lee Smolin who co-authored this.

I think we should have a third party. I just don’t trust either the Democrats or the Republicans

🙂

Fortunately there is a third party in this case, the customers. Ultimately their vote is the only one that matters.

http://tech.slashdot.org/story/14/02/03/1449220/first-evidence-that-googles-quantum-computer-may-not-be-quantum-after-all

http://arxiv.org/abs/1401.7087

According to http://arxiv.org/abs/1401.7087, a classical model of spin dynamics outperforms D-Wave itself at emulating a quantum annealer. This is devastating, don't you think?

I didn't have time yet to read the paper in full, but you don't need this one to see that, yes, classical algos can still outperform this machine, especially if they throw enough hardware at it.

This was certainly true when they started this journey with a couple of qubits on a chip, and with over 500 they are just getting to the point of producing interesting performance. But given how far they've come within a rather short time frame and on a limited budget, there are absolutely no grounds to call this paper devastating.

To the contrary, it illustrates that D-Wave is now taken seriously enough to provoke these kinds of papers (which take some considerable effort) in order to put them back in the box 🙂

No, the point is not that classical algos can outperform D-Wave on its published benchmark, which, as you mentioned, was already known (given you have enough hardware, which means given you have one laptop 🙂 ).

The point is that even perfect quantum annealing behaves classically, at least for the random instances D-Wave used as a benchmark. If you still believe quantum annealing can do better than classical algorithms, then that should be on a set of instances that happened to be sparse or absent in the D-Wave benchmark.

The good news is this work makes it easy to find a set of instances for which quantum annealing outperforms classical algorithms (more precisely, it makes it easy to discard the instances for which classical algorithms correctly predict quantum annealing).

The devastating news is, this set appears at first sight as empty.
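For readers wondering what that classical model actually looks like: as I read the paper, it replaces each qubit with a classical planar rotor and anneals the rotor angles with Metropolis updates, ramping the transverse-field weight down and the Ising weight up. The sketch below is my own toy rendering of that idea - the schedule, temperature and move proposals are made up here, this is not the authors' code:

```python
import math
import random

def rotor_anneal(J, h, n, steps=1500, beta=5.0, seed=0):
    """Toy classical spin-vector annealer: each qubit becomes a planar
    rotor with angle theta in [0, pi].  The transverse-field weight A
    ramps down while the Ising weight B ramps up; angles are updated by
    Metropolis moves at a fixed inverse temperature beta."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, math.pi) for _ in range(n)]

    def energy(th, A, B):
        # -A * sum sin(theta_i)  models the transverse field;
        # the B terms are the Ising problem with s_i -> cos(theta_i)
        e = -A * sum(math.sin(t) for t in th)
        e += B * sum(h[i] * math.cos(th[i]) for i in range(n))
        e += B * sum(c * math.cos(th[i]) * math.cos(th[j])
                     for (i, j), c in J.items())
        return e

    for step in range(steps):
        s = (step + 1) / steps
        A, B = 1.0 - s, s                         # linear annealing schedule
        for i in range(n):
            old = theta[i]
            e0 = energy(theta, A, B)
            theta[i] = rng.uniform(0.0, math.pi)  # propose a fresh angle
            dE = energy(theta, A, B) - e0
            if dE > 0 and rng.random() >= math.exp(-beta * dE):
                theta[i] = old                    # reject the move
    # project each rotor onto an Ising spin at the end of the anneal
    return [1 if math.cos(t) >= 0 else -1 for t in theta]
```

The paper's claim is that a model of roughly this flavor reproduces the published success-probability statistics of the machine on the benchmarked random instances.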

One fast laptop with a decent GPU, quite a bit of time, and a really good coder 🙂

I'd be surprised if quantum annealing turns out to be perfectly classically emulatable. We'll see; if that's the case it'll be a bummer.

Yet not all would be lost: D-Wave has gained valuable know-how in creating these kinds of chips, and I'd certainly think it's within their capabilities to develop a universal adiabatic quantum computing chip. At any rate, I'd expect them to go down this route eventually.

Don't take it seriously! This is an IBM conspiracy (their second attempt, by the way) to shoot down the original paper. They won't have much luck, even with Vazirani & co.!

Well, as far as conspiracies go, they'd be really bad at it. After all, it is pretty much all in the open.

D-Wave is getting to the point where they are taken seriously, and of course that means they will get a lot of flak from players who have revenue streams to protect. IBM will love this kind of research, but that doesn't mean the results are compromised in any way.

It’s just business as usual.

Following the update: “Shin et al. focuses only on one experiment for which there was no expectation of an experimental difference between quantum and classical models”.

It seems Geordie Rose shares your ineffable sense of humour 🙂

Henning: I just posted this comment on rrtucci’s blog:

“Robert: That's quite a scoop on TIME magazine! Congratulations! TIME does say that they have sold one to a US intelligence agency, and you assumed it must be the NSA? Or is it just a guess on your part? And if the NSA got one, it's almost certain that its Canadian equivalent, the CSEC, has one as well, because they work very closely with each other, as was recently disclosed by E. Snowden. Also, in the recent Google blog about the DW2, Geordie insisted that his machine CAN factor numbers, but when pressed by me & others, he would NOT elaborate! Very interesting”

Do you believe that a DW2 may have been sold to a US intelligence agency?

Sol, In-Q-Tel is a D-Wave investor, so it's not a stretch to assume that US intelligence agencies will have a machine - and of course that is not anything anybody will advertise. There exists a factorization algorithm for universal adiabatic computing; maybe somebody at the NSA figured out one that could work with quantum annealing, but at any rate ~500 qubits won't get you very far at this point.
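For the curious: the usual way factoring is handed to an annealer is as an optimization problem, with a cost function that vanishes exactly at a factorization. Here is a toy illustration of that encoding idea, with a brute-force scan standing in for the annealer (only workable for tiny N; a real annealer would search binary encodings of p and q instead):

```python
def factor_as_optimization(N):
    """Toy illustration of factoring cast as optimization: the cost
    (N - p*q)**2 is non-negative and hits zero exactly at a factorization.
    An annealer would search binary encodings of p and q for this minimum;
    here we simply scan all candidate pairs."""
    best = None
    for p in range(2, N):
        for q in range(p, N):
            cost = (N - p * q) ** 2
            if best is None or cost < best[0]:
                best = (cost, p, q)
    return best  # cost 0 iff p * q == N with 2 <= p <= q

print(factor_as_optimization(35))  # → (0, 5, 7)
```

The encoding is easy; the hard part is that the cost landscape for numbers of cryptographic size is astronomically large and riddled with near-misses, which is exactly where the speed-up question lives.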

Henning: As a physicist, do you think this paper represents a possible breakthrough in factoring large numbers using simulated annealing? And could that be adapted to the DW2? Thanks.

http://arxiv.org/pdf/1402.1201v1.pdf

C'mon Sol, stop making a fool of yourself. If one could factor large numbers using simulated annealing, there would be no reason to do that on a QC.

Please RTFP again. They factored three (3) numbers, all utterly close to a squared number. This is such a specific property that you could do it by hand, in the same way you can factor most numbers because most numbers divide by 2, 3 or 5. Seriously, why do you think they're not already richer than Bill Gates?
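To see why “close to a square” makes it trivial: if N = x^2 - y^2, then N = (x - y)(x + y), so when the two factors are nearly equal the search over x terminates after a step or two. A quick sketch of this classic trick (Fermat's method), no annealer required:

```python
import math

def fermat_factor(N):
    """Fermat's factorization for odd N: find x >= ceil(sqrt(N)) such
    that x*x - N is a perfect square y*y; then N = (x - y) * (x + y).
    Fast exactly when N's factors are close together, i.e. when N is
    close to a square."""
    x = math.isqrt(N)
    if x * x < N:
        x += 1
    while True:
        y2 = x * x - N
        y = math.isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

print(fermat_factor(5959))  # → (59, 101)
```

For a balanced semitime of cryptographic size the factors are far enough apart that this loop runs essentially forever, which is why demonstrating it on hand-picked near-square numbers proves nothing about factoring in general.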

Please play nice, Sol doesn’t have a scientific background.

Yes, sorry for the tone of my post.

Henning & “One Spin”: Thanks guys for straightening me out! I thought it looked pretty simplistic even with my limited knowledge, but then I figured “what the heck do I know?” I'm an investment banker by trade, but have retained an interest in science as a hobby, especially math & physics. Now you guys know! Thanks a lot anyway.

The point is to be faster than what can run on a classical computer via simulation. Other than that, the algorithm could probably be adapted, but figuring out whether it'll be any good is another question.