Category Archives: D-Wave

He Said She Said – How Blogs are Changing the Scientific Discourse

The debate about D-Wave's "quantumness" shows no signs of abating, hitting a new high note with the company being prominently featured on Time magazine's recent cover, which prompted a dissection of the article on Scott Aaronson's blog. This was quickly followed by yet another scoop: a rebuttal by Umesh Vazirani to Geordie Rose, who had recently blogged about the Vazirani et al. paper that casts doubt on D-Wave's claim to implement quantum annealing. In his take on the Time magazine article, Scott bemoans the 'he said she said' template of journalism, which gives all sides equal weight, while acknowledging that the Time author Lev Grossman quoted him correctly and obviously tried to paint an objective picture.

If I had to pick the biggest shortcoming of the Time article, my choice would have been different. I find that Grossman entirely misses Scott's role in this story by describing him as "one of the closest observers of the controversy".

Scott isn't just an observer in this. For better or worse, he is central to this controversy. As far as I can tell, his reporting on D-Wave's original demo is what started it to begin with. Unforgettable was his inspired comparison of the D-Wave chip to a roast beef sandwich, which he then famously retracted when he resigned as D-Wave's chief critic. The latter is something he has done with some regularity: first when D-Wave started to publish results, then after visiting the company, and most recently after the Troyer et al. pre-print appeared on the arXiv (although the second time doesn't seem to count, since it was just a reiteration of the first resignation).

And they say sandwiches and chips go together ...

Scott's resignations never seem to last long. D-Wave has a knack for pushing his buttons. And the way he engages D-Wave and associated research is indicative of a broader trend in how blogs are changing the scientific discourse.

For instance, when Catherine McGeoch gave a talk about her benchmarking of the DW2, Scott did not immediately challenge her directly but took to his blog (a decision he later regretted and apologized for). Anybody who has spent more than five minutes on a Web forum knows how immediate, yet text-only, communication removes inhibitions and leads to more forceful exchanges. In the scientific context, this has the interesting effect of colliding head-on with the loftier public perception of scientists.

It used to be that arguments were conducted only via scientific publications, in person at scientific seminars, or in the occasional exchange of letters. It's interesting to contemplate how corrosive the arguments between Bohr and Einstein might have turned out had they been conducted via blogs rather than in person.

But it's not all bad. In the olden days, science could easily be mistaken for a bloodless intellectual game, but nobody could read through the hundreds of comments on Scott's blog that day and come away with that impression. On the contrary, the inevitable conclusion is that scientific arguments are fought with no less passion than the most heated bar brawl.

During this epic blog 'fight' Scott summarized his preference for the medium thusly:

“… I think this episode perfectly illustrates both the disadvantages and the advantages of blogs compared to face-to-face conversation. Yes, on blogs, people misinterpret signals, act rude, and level accusations at each other that they never would face-to-face. But in the process, at least absolutely everything gets out into the open. Notice how I managed to learn orders of magnitude more from Prof. McGeoch from a few blog comments, than I did from having her in the same room …”

It is by far not the only controversy he has courted, nor is this something unique to his blog. Peter Woit continues the heretical work he started with his 'Not Even Wrong' book, Robert R. Tucci fiercely defends his quantum algorithm work when he feels he is not credited, and Sabine Hossenfelder had to ban a highly qualified string theory troll due to his nastiness (she is also a mum of twins, so you know she has practice in being patient, and it's not as if she lacks a good sense of humor). But my second favorite science blog fight also occurred on Scott's blog, when Joy Christian challenged him to a bet to promote his theory that supposedly invalidates the essential non-locality of quantum mechanics implied by Bell's theorem.

It's instructive to look at the Joy Christian affair and ask how a mainstream reporter could possibly have reported on it. Not knowing Clifford algebra, what could a reporter do but triangulate the expert opinions? There are some outspoken, smart critics who point to mistakes in Joy Christian's reasoning, yet he claims these criticisms are based on a flawed understanding and have been repudiated. The reporter will also note that doubting Bell's theorem is very much a minority position, yet a journalist unable to check the math himself can only fall back on the 'he said she said' template. After all, this is not a simple, straightforward fact like reporting whether UN inspectors found Saddam Hussein's weapons of mass destruction or not (something that, surprisingly, most mainstream media outside the US accomplished just fine). One cannot expect a journalist to settle an open scientific question.

The nature of the D-Wave story is no different: how is Lev Grossman supposed to do anything but report the various stances on each side of the controversy? A commenter at Scott's blog dismissively pointed out that Grossman doesn't even have a science degree. As if that made any difference; it's not as though everybody else on each side of the story doesn't boast such degrees (non-PhDs are in the minority at D-Wave).

Mainstream media report as they always have, but unsettled scientific questions are the exception to the rule, one of the few cases where 'he said she said' journalism is actually the best format. For everything else, we fortunately now have the blogs.

One Video to Rule Them All

Updated below

This is essentially an extended update to my last D-Wave post. Rather than stick it there, I think it is important enough to merit its own post. The reason being, I wish I could make anybody who plans on writing anything about D-Wave first watch the video below from the first Q+ Google+ hangout this year.

It summarizes the results of the paper I blogged about in my last post on the matter. Ultimately, it lays out what is objectively known about D-Wave's machine based on the analyzed data, by setting out to answer three questions.

  1. Does the machine work?
  2. Is it quantum or classical?
  3. Is it faster than a classical computer?

The short version is:

  1. Yes
  2. Based on their modeling, the D-Wave Two is indeed a true quantum annealer.
  3. While it can beat an off-the-shelf solver, it cannot (yet) outperform, on average, a highly targeted, hand-crafted classical algorithm (a minimal sketch of such a classical baseline follows this list).
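To give a concrete sense of the classical competition referred to in point (3), below is a minimal simulated-annealing sketch for a small Ising-type problem, the kind of optimization task these benchmarks are built around. It is only a toy illustration written in Python: the problem size, couplings, and cooling schedule are arbitrary choices of mine, nothing like the highly optimized solvers used in the actual comparison.

```python
import math
import random

def ising_energy(J, h, spins):
    """E(s) = sum_{i<j} J[i][j]*s_i*s_j + sum_i h[i]*s_i, with spins in {-1, +1}."""
    n = len(spins)
    e = sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e += J[i][j] * spins[i] * spins[j]
    return e

def simulated_annealing(J, h, sweeps=2000, t_hot=5.0, t_cold=0.05, seed=0):
    """Single-spin-flip Metropolis updates with a geometric cooling schedule."""
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    best, best_e = spins[:], ising_energy(J, h, spins)
    for sweep in range(sweeps):
        t = t_hot * (t_cold / t_hot) ** (sweep / max(sweeps - 1, 1))
        for i in range(n):
            # Local field on spin i; flipping it changes the energy by -2*s_i*field.
            field = h[i] + sum(J[i][j] * spins[j] for j in range(n) if j != i)
            delta = -2.0 * spins[i] * field
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                spins[i] = -spins[i]
        e = ising_energy(J, h, spins)
        if e < best_e:
            best, best_e = spins[:], e
    return best, best_e

# Toy instance: 8 spins, random +/-1 couplings, no external fields.
rng = random.Random(42)
n = 8
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = rng.choice([-1.0, 1.0])
h = [0.0] * n
print(simulated_annealing(J, h))
```

The point of the toy is merely to show where the bar for a 'classical competitor' starts; the hand-crafted codes in the benchmark add vectorization, clever move updates, and problem-specific tricks on top of this basic scheme.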

Of course there is much more in the video, and I highly recommend watching the whole thing. It comes with a good introduction to the subject, but if you only want the part about the test, you may want to skip ahead to the 11-minute mark (this way you also cut out some of the cheap shots at completely clueless popular media reports – an attempt at infusing some humor into the subject that may or may not work for you).

 

With regard to point (2), the academic discussion is not settled. A paper with heavyweight names on it just came out (h/t Michael Bacon). It proposes that a similar annealing behavior could be accomplished with a classical set-up after all. To me this is truly academic, in the best and worst sense, i.e. a considerable effort to get all the i's dotted and the t's crossed. It simply seems a bit far-fetched that the company would set out to build a chip with coupled qubits that behave like a quantum annealer, yet somehow end up with an oddly behaving classical annealer.

From my point of view it is much more interesting to explore all the avenues that are open to D-Wave to improve their chip, such as this new paper on strategies for a quantum annealer to increase the success probability for hard optimization problems (h/t Geordie Rose).

Update 1

Geordie Rose weighs in on the paper that claims that the D-Wave machine can be explained classically.  He expected a Christmas present and felt he only got socks …

Update 2

Helmut Katzgraber et al. propose in this paper that the current benchmarks are using the wrong problem set to possibly find a quantum speed-up with D-Wave’s machine.

Scott Aaronson (again) resigns as chief D-Wave critic and endorses their experiments

An exercise in positive spin.

Update below.

The English language is astoundingly malleable. It feels almost as if it were tailor-made for marketing spin. I noticed this long ago (it feels like a lifetime) when working in a position that required me to sell software. Positioning products was much easier when I spoke English. Mind you, I never told a blatant lie, but I certainly spun the facts to put our product in the best light, and if a customer committed, I'd do my darnedest to deliver the value that I promised. The kind of customers I dealt with were of course aware of this dance, and perfectly capable of performing their due diligence. From their perspective, in the end, it is always about buying into the vision, knowing full well that a cutting-edge technology, one that will give a real competitive benefit, will of course be too new to be without risk.

During the courting of customers, any salesperson worth their salt will do anything to make the product look as good as possible. One aspect of this is of course to stress the positive things that others are saying about your offerings.

To accomplish this, selective quoting can come in very handy. For instance, after reviewing the latest pre-print paper that looks at the performance of D-Wave's 503-qubit chip, Scott Aaronson stepped down for the second time as chief D-Wave critic. In the blog post where he announced this, he also observed that on "the ~10% of instances on which the D-Wave machine does best, (…) the machine does do slightly better (…) than simulated annealing".

This puts in words what the following picture shows in much more detail.

Instance-by-instance comparison of annealing times and wall-clock times. Shown is a scatter plot of the pure annealing time for the DW2 compared to a simulated classical annealer (SA), using an average over 16 gauges on the DW2. This is figure 6 of the recent benchmark paper. Wall-clock times include the time for programming, cooling, annealing, readout and communication. Gauges refer to different encodings of a problem instance. (Only plots A and B are relevant to the settling of my bet.)

Now, if you don't click through to Scott's actual blog post, you may take away the impression that he actually changed his stance. But of course he hasn't. You can look at the above picture and think the glass is ninety percent empty, or you could proclaim it is ten percent full.

The latter may sound hopelessly optimistic, but let's contemplate what we are actually comparing. Current computer chips are the end product of half a century of highly focused R&D, with billions of dollars poured into developing them. Yet we know we are getting to the end of the line of Moore's law. Leakage currents are already a real problem, and the writing is on the wall that we are getting ever closer to the point where the current technology will no longer allow for tighter chip structures.

On the other hand, the D-Wave chip doesn't use transistors. It is an entirely different approach to computing, as profoundly different as the analog computers of yore.

The integration density of a chip is usually classified by the length of the silicon channel between the source and drain terminals of its field effect transistors (e.g. 25 nm). This measure obviously doesn't apply to D-Wave, and the quantum chip's integration density isn't even close to that. Yet with the ridiculously low number of about 500 qubits on D-Wave's chip, which was developed on a shoestring budget compared to the likes of Intel or IBM, the machine still manages to hold its own against a modern CPU.

Yes, this is not a universal gate-based quantum computer, and the NSA won't warm up to it because it cannot implement Shor's algorithm, nor is there a compelling theoretical reason that you can achieve a quantum speed-up with this architecture. What it is, though, is a completely new way to do practical computing, using circuit structures that leave plenty of room at the bottom. In a sense, it resets the clock to when Feynman delivered his famous and prophetic talk on the potential of miniaturization. Which is why, from a practical standpoint, I fully expect to see a D-Wave chip eventually unambiguously outperform a classical CPU.

On the other hand, if you look at this through the prism of complexity theory none of this matters, only proof of actual quantum speed-up does.

Scott compares the quantum computing skirmishes he entertains with D-Wave to the American Civil War.

If the D-Wave debate were the American Civil War, then my role would be that of the frothy-mouthed abolitionist pamphleteer

Although clearly tongue-in-cheek, this comparison still doesn't sit well with me. Fortunately, in this war, nobody will lose life or limb. The worst that could happen is a bruised ego, and if we have to stick with this metaphor, I don't see this as Gettysburg 1863 but as the town of Sandwich in 1812.

Much more will be written on this paper. Once it has fully passed peer review and been published, I will also finally be able to reveal my betting partner. But in the meantime, there is a Google+ meeting scheduled that will allow for more discussion (h/t Mike).

Update

Without careful reading of the paper, a casual observer may come away with the impression that this test essentially just pitted hardware against hardware. Nothing could be further from the truth; considerable effort had to go into constructing impressive classical algorithms to beat the D-Wave machine on its own turf. This Google Quantum AI Lab post elaborates on this (h/t Robert R. Tucci).

Update 2

D-Wave’s Geordie Rose weighs in.


Quantum Computing Interrupted

This is the second part of my write-up on my recent visit to D-Wave. The first one can be found here.


The recent shut-down of the US government had widespread repercussions. One of the side effects was that NASA had to stop all non-essential activities, and this included quantum computing. So the venture which, in cooperation with Google, jointly operates a D-Wave machine was left in limbo for a while. Fortunately, this was short-lived enough to hopefully not have any lasting adverse effects. At any rate, maybe it freed up some time to produce a QC mod for Minecraft and the following high-level and very artsy Google video that 'explains' why they want to do quantum computing in the first place.

If you haven't been raised on MTV music videos and find rapid-succession, sub-second cuts migraine-inducing (at about the 2:30 mark things settle down a bit), you may want to skip it. So here's the synopsis (spoiler alert). The short version of what motivates Google in this endeavor, to paraphrase their own words: We research quantum computing because we must.

In other news, D-Wave recently transferred its foundry process to a new location, partnering with Cypress Semiconductor Corp., a reminder that D-Wave has firmly raised the production of superconducting niobium circuitry to a new, industrial-scale level. Given these new capabilities, it may not be a coincidence that the NSA has recently announced its intention to fund research into superconducting computing. Depending on how they define "small-scale", the D-Wave machine should already fall within the description of the solicitation bid, which aspires to the following …

“… to demonstrate a small-scale computer based on superconducting logic and cryogenic memory that is energy efficient, scalable, and able to solve interesting problems.”

… although it is fair to assume this program is aimed at classical computing. Prototypes for such chips have already been researched and look rather impressive (direct link to paper). They use the same chip material and circuitry (Josephson junctions) as D-Wave, so it is not a stretch to consider that industrial-scale production of those more conventional chips could immediately benefit from the foundry process know-how that D-Wave has accumulated. It doesn't seem too much of a stretch to imagine that D-Wave may expand into this market space.

When I put the question to D-Wave's CTO Geordie Rose, he certainly took some pride in his company's manufacturing expertise. He stressed that, before D-Wave, nobody was able to scale superconducting VLSI chip production, so this now opens up many additional opportunities. He pointed out that one could, for instance, make an immediate business case for a high-throughput router based on this technology, but given the many avenues open for growth he stressed the need to choose wisely.

The capacity of the D-Wave fridges is certainly such that they could accommodate more superconducting hardware. Starting with the Vesuvius chip generation, measurement heat is now generated far away from the chip; having several chips in close proximity should therefore not disturb the thermal equilibrium at the core. Geordie is considering deploying stacks of quantum chips so that thousands could work in parallel, since they currently just throw away a lot of perfectly good chips that come off a wafer. This may eventually necessitate larger cooling units than the current ones, which draw 16 kW. This approach certainly could make a lot of sense for a hosting model where processing time is rented out to several customers in parallel.

One attractive feature that I pointed out was that if you had classical logic within the box, you'd eliminate a potential bottleneck that could occur if rapid re-initialization and read-out of the quantum chip is required, and it would also potentially open the possibility of direct optical interconnects between chips. Geordie seemed to like this idea. One of the challenges in making the current wired design work was to design high-efficiency low-pass filters to bring the noise level in these connectors down to an acceptable level. So, in a sense, an optical interconnect could reduce complexity, but it would clearly also require some additional research effort to bring down the heat signature of such an optical transmission.

This triggered an interesting, and somewhat ironic, observation on the challenges of managing an extremely creative group of people. Geordie pointed out that he has to think carefully about what to focus his team on, because an attractive side project, e.g. 'adiabatic' optical interconnects, could prove so interesting to many team members that they'd gravitate towards working on it rather than keeping their focus on the other work at hand.

Some other managerial headaches stem from the rapid development cycles. For instance, Geordie would like to develop a training program that would allow a customer's technical staff to be brought up to speed quickly. But by the time such a program is fully developed, chances are a new chip generation will be ready and necessitate a rewrite of any training material.

Some of D-Wave's challenges are typical for high-tech start-ups, others are specific to D-Wave. My next, and final, installment will focus on Geordie's approach to managing these growing pains.

The D-Wave Phenomenon

This is my first installment of the write-up on my recent visit to D-Wave in Burnaby, BC.

No matter where you stand on the merits of D-Wave's technology, there is no doubt they have already made computing history. Transistors have been the sole basis of our rapidly improving information technology since the last vacuum tube computer was sold in the early sixties. That is, until D-Wave started to ship their first systems. Having won business from the likes of NASA and Google, this company is now playing in a different league. D-Wave now gets to present at high-profile events such as the HPC IDP conference, and I strongly suspect that they have caught the eye of the bigger players in this market.

The entire multi-billion dollar IT industry is attached at the hip to the existing computing paradigm, and abhors cannibalizing existing revenue streams. This is why I am quite certain that, as I write this, SWOT assessments and talking points on D-Wave are being typed up in some nondescript Fortune 500 office buildings (relying on corporate research papers like this to give them substance). After all, ignoring D-Wave is no longer an option. Large companies like to milk cash cows for as long as possible. An innovative powerhouse like IBM, for instance, often follows the pattern of investing in R&D up to productization, but is prone to holding back even superior technology if it may jeopardize existing lines of business. Rather, they just wait until a competitor forces their hand, and then they rely on their size and market depth, in combination with their quietly acquired IP, to squash or co-opt the competition. They excel at this game and seldom lose it (it took somebody as nimble and ruthless as Bill Gates to beat them once).

This challenging competitive landscape weighed on my mind when I recently had the opportunity to sit down with D-Wave founder and CTO Geordie Rose, and so our talk first focused on D-Wave's competitive position. I expected that patent protection and technological barriers to entry would dominate this part of our conversation, and was very surprised by Geordie's stance, which certainly defied conventional thinking.
 

Geordie Rose, founder and CTO of D-Wave, in one of the Tardis-sized boxes that house his quantum chips. The interior is cooled close to absolute zero when in operation. If you subscribe to the multiverse interpretation of quantum mechanics, you may argue that it is then in fact bigger on the inside. After all, Hilbert space is a big place.

While he acknowledged the usefulness of the over 100 patents that D-Wave holds, he only considers them effectively enforceable in geographies like North America. Overall, he does not consider them an effective edge to keep out competition, but was rather sanguine that the fate of any computing hardware is to eventually become commoditized. He asserted that the academic community misjudged how hard it would be to produce a device like the D-Wave machine. Now that D-Wave has paved the way, he considers cloning and reverse engineering of this technology to be fairly straightforward. One possible scenario would be a government-funded QC effort in another geography to incubate this new kind of information processing. In that case, patent litigation would be expensive, and may ultimately be futile. Yet, he doesn't expect these kinds of competitive efforts unless D-Wave's technology has further matured and proven its viability in the marketplace.

I submitted that the academic push-back, which spreads some FUD with regard to their capabilities, may actually help in this regard. This prompted a short exchange on the disconnect with some of the academic QC community. D-Wave will continue to make its case with additional publications to demonstrate entanglement and the true quantum nature of their processor. But ultimately this is a side-show; the research focus is customer driven, and to the extent that this means deep learning (e.g. for pattern recognition), the use case of the D-Wave chip is evolving. Rather than only using it as an optimization engine, Geordie explained how multiple solution runs can be used to sample the problem space of a learning problem and facilitate more robust learning (a toy illustration of this sampling idea follows below).
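To illustrate the sampling idea Geordie described, here is a cartoon in Python: instead of keeping only a single best configuration, a stochastic solver is run many times and the frequencies of the low-energy configurations it returns are tallied into an empirical distribution, which is what a learning algorithm can then consume. This is a generic Metropolis toy of my own, not D-Wave's actual procedure, and all parameters are arbitrary.

```python
import math
import random
from collections import Counter

# Toy 4-spin Ising problem: E(s) = sum over coupled pairs of J*s_a*s_b.
J = {(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0, (0, 3): 1.0}

def energy(spins):
    return sum(j * spins[a] * spins[b] for (a, b), j in J.items())

def sample_once(sweeps=200, temp=0.5, rng=random):
    """One independent run: Metropolis sampling at a fixed temperature."""
    s = [rng.choice([-1, 1]) for _ in range(4)]
    for _ in range(sweeps):
        i = rng.randrange(4)
        old = energy(s)
        s[i] = -s[i]
        new = energy(s)
        if new > old and rng.random() >= math.exp((old - new) / temp):
            s[i] = -s[i]          # reject the move
    return tuple(s)

# Many independent runs yield a distribution over low-energy configurations,
# not just a single optimum; that distribution is the raw material for learning.
counts = Counter(sample_once() for _ in range(1000))
for config, freq in counts.most_common(5):
    print(config, freq / 1000.0, energy(config))
```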

It is the speed of customer-driven innovation that Geordie relies on to give D-Wave a sustainable edge, and ultimately he expects that software and services for his platform will prove to be the key to a sustainable business. The current preferred mode of customer engagement is what D-Wave calls a deep partnership, i.e. working in very close collaboration with the customer's staff. Yet, as the customer base grows, more management challenges appear, since clear lines have to be drawn to mark where the customer's intellectual property ends and D-Wave's begins. The company has to be able to re-sell solutions tied to its architecture.

D-Wave experiences the growing pains typical of a successful organization, along with some high-tech challenges in managing growth that are unique to it. How Geordie envisions tackling those will be the subject of the next installment.

Out of the AI Winter and into the Cold

A logarithmic scale doesn’t have the appropriate visual impact to convey how extraordinarily cold 20mK is.

Any quantum computer using superconducting Josephson junctions will have to be operated at extremely low temperatures. The D-Wave machine, for instance, runs at about 20 mK, which is much colder than anything in nature (including deep space). A logarithmic scale like the chart to the right, while technically correct, doesn’t really do this justice.  This animated one from D-Wave’s blog shows this much more drastically when scaled linearly (the first link goes to an SVG file that should play in all modern browsers, but it takes ten seconds to get started).

Given that D-Wave’s most prominent use case is the field of machine learning, a casual observer may be misled to think that the term “AI winter” refers to the propensity of artificial neural networks to blossom in this frigid environment. But what the term actually stands for is the brutal hype cycle that ravaged this field of computer science.

One of the first casualties of the collapse of artificial intelligence research in 1969 was the ancestor of the kind of learning algorithms that are now often implemented on D-Wave's machines. This incident is referred to as the XOR affair, and the story that circulates goes like this: "Marvin Minsky, being a proponent of structured AI, killed off the connectionism approach when he co-authored the now classic tome, Perceptrons. This was accomplished by mathematically proving that a single layer perceptron is so limited it cannot even be used (or trained for that matter) to emulate an XOR gate. Although this does not hold for multi-layer perceptrons, his word was taken as gospel, and smothered this promising field in its infancy."

Marvin Minsky begs to differ, and argues that he of course knew about the capabilities of artificial neural networks with more than one layer, and that if anything, only the proof that working with local neurons comes at the cost of some universality should have had any bearing. It seems impossible to untangle the exact dynamics that led to this most unfortunate AI winter, yet in hindsight it seems completely misguided and avoidable, given that a learning algorithm (backpropagation) that allowed for the efficient training of multi-layer perceptrons had already been published a year prior, although at the time it received very little attention.
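To make the XOR affair concrete, here is a small sketch (assuming NumPy is available) that first tries to fit XOR with a single-layer perceptron and then with a two-layer network trained by backpropagation. The first stalls because XOR is not linearly separable; the second learns it easily. Network size, learning rate, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# --- Single-layer perceptron: a lone weight vector and bias, no hidden layer. ---
w, b = rng.normal(size=(2, 1)), np.zeros((1, 1))
for _ in range(10000):
    p = sigmoid(X @ w + b)
    w -= 1.0 * X.T @ (p - y) / len(X)     # gradient of the cross-entropy loss
    b -= 1.0 * np.mean(p - y)
print("single layer:", np.round(sigmoid(X @ w + b).ravel(), 2))
# Hovers around 0.5 for every input: no single line separates XOR's classes.

# --- Two-layer network (one hidden layer), trained with backpropagation. ---
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    p = sigmoid(h @ W2 + b2)
    d2 = p - y                            # output error (sigmoid + cross-entropy)
    d1 = (d2 @ W2.T) * h * (1 - h)        # error propagated back to the hidden layer
    W2 -= 1.0 * h.T @ d2 / len(X)
    b2 -= 1.0 * d2.mean(axis=0, keepdims=True)
    W1 -= 1.0 * X.T @ d1 / len(X)
    b1 -= 1.0 * d1.mean(axis=0, keepdims=True)
print("two layers:  ", np.round(p.ravel(), 2))
# Typically approaches [0, 1, 1, 0]; a different random seed may need more iterations.
```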

The fact is that after Perceptrons was published, symbolic AI flourished and connectionism was almost dead for a decade. Given what the authors wrote in the foreword to the revised 1989 edition, there is little doubt how Minsky felt about this:

“Some readers may be shocked to hear it said that little of significance has happened in this field [since the first edition twenty year earlier]. Have not perceptron-like networks under the new name connectionism – become a major subject of discussion at gatherings of psychologists and computer scientists? Has not there been a “connectionist revolution?” Certainly yes, in that there is a great deal of interest and discussion. Possibly yes, in the sense that discoveries have been made that may, in time, turn out to be of fundamental importance. But certainly no, in that there has been little clear-cut change in the conceptual basis of the field. The issues that give rise to excitement today seem much the same as those that were responsible for previous rounds of excitement. The issues that were then obscure remain obscure today because no one yet knows how to tell which of the present discoveries are fundamental and which are superficial. Our position remains what it was when we wrote the book: We believe this realm of work to be immensely important and rich, but we expect its growth to require a degree of critical analysis that its more romantic advocates have always been reluctant to pursue – perhaps because the spirit of connectionism seems itself to go somewhat against the grain of analytic rigor.” [Emphasis added by the blog author]

When fast-forwarding to 2013 and the reception that D-Wave receives from some academic quarters, this feels like déjà vu all over again. Geordie Rose, founder and current CTO of D-Wave, unabashedly muses about spiritual machines, although he doesn't strike me as a particularly romantic fellow. But he is very interested in using his amazing hardware to make for better machine learning, very much in "the spirit of connectionism". He published an excellent mini-series on this at D-Wave's blog (part 1, 2, 3, 4, 5, 6, 7). It would be interesting to learn if Minsky would find fault with the analytic rigor on display there (he is now 86, but I hope he is still going as strong as ten years ago when this TED talk was recorded).

So, if we cast Geordie in the role of the 21st-century version of Frank Rosenblatt (the inventor of the original perceptron), then we surely must pick Scott Aaronson as the modern-day version of Marvin Minsky. Only this time the argument is not about AI, but about how 'quantum' D-Wave's device truly is. The argument feels very similar: On one side, the theoretical computer scientist, equipped with boatloads of mathematical rigor, strongly prefers the gate model of quantum computing. On the other, the pragmatist, whose focus is to build something usable within the constraints of what chip foundries can produce at this time.

But the ultimate irony, it seems, at least in Scott Aaronson's mind, is that the AI winter is the ultimate cautionary parable for his case (as was pointed out by an anonymous poster on his blog). That is, he thinks the D-Wave marketing hype can be equated to the over-promises of AI research in the past. Scott fears that if the company cannot deliver, the baby (i.e. quantum computing) will be thrown out with the bathwater, and so he blogged:

“I predict that the very same people now hyping D-Wave will turn around and—without the slightest acknowledgment of error on their part—declare that the entire field of quantum computing has now been unmasked as a mirage, a scam, and a chimera.”

A statement that of course didn't go unchallenged in the comment section (Scott is exemplary in allowing this kind of frankness on his blog).

I don’t pretend to have any deeper conclusion to draw from these observations, and will leave it at this sobering thought: While we expect science to be conducted in an eminently rational fashion, history gives ample examples of how the progress of science happens in fits and starts and is anything but rational.

Septimana Mirabilis – Major Quantum Information Technology Breakthroughs

Update 4: The award for the funniest photo commentary on this imbroglio goes to Robert Tucci.

Update 3: Congratulations to D-Wave for their recent sale of the D-Wave Two machine to the non-profit Space Research Association – to be used collaboratively by Google and NASA. (h/t Geordie Rose)

Update 2: Scott Aaronson finally weighs in and, as Robert Tucci predicted in the comments, he resumed his skeptical stance.

Update: Link to Catherine McGeoch and Cong Wang’s paper.

D-Wave Cooper-pair states in real space. Now the company that derived its name from this wavefunction makes some waves of its own.

What a week for quantum information science. D-Wave made some major news when the first peer-reviewed paper to conclusively demonstrate that their machine can drastically outperform conventional hardware was announced. It's hardly a contest. For the class of optimization problems that the D-Wave machines are designed for, the algorithms executed on the conventional chip didn't even come close. The D-Wave machine solved some of the tested problems about 3,600 times faster than the best conventional algorithm. (I'll leave it to gizmodo to not mince words.)

Apparently, my back-of-the-envelope calculation from last year, which was based on the D-Wave One's performance in a brute force calculation of Ramsey numbers, wasn't completely off. Back then I calculated that the 128-qubit chip performed at the level of about 300 Intel i7 Hex CPU cores (the current test ran on the next generation 512-qubit chip). So, I am now quite confident in my ongoing bet.

If conventional hardware requires thousands of conventional cores to beat the current D-Wave machine, then the company has certainly entered a stage where its offering becomes attractive to a wider market. Of course, other factors will weigh in when considering total cost of ownership. The biggest hurdle in this regard will be software, as to date any problem you want to tackle the D-Wave way requires dedicated coding for this machine. At first these skills will be rare and expensive to procure. On the other hand, there are other cost factors working in D-Wave's favor: Although I haven't seen power consumption numbers, the adiabatic nature of the chip's architecture suggests that it will require far less wattage than a massive server farm or conventional supercomputer. Ironically, while the latter operate at normal ambient temperature, they will always require far more cooling effort to keep them at this temperature than the D-Wave chips in their deep freeze vacuum.

That the power consumption of our supercomputers is on an unsustainable trajectory should be obvious from simply glancing at this chart.

Despite all efforts, there are hard efficiency limits for conventional CMOS transistors. (For the original pingdom.com article, click the image.)

D-Wave matures at just the right time to drive a paradigm change, and I hope they will pursue this opportunity aggressively.

But wait, there's more. This week was remarkable in unveiling yet another major breakthrough for quantum information technology: At Los Alamos National Labs, an Internet-scalable quantum cryptographic network has been operating without a hitch for the last two years. Now there's an example of research that will "directly benefit the American people" (something that should please Congressman Lamar Smith, the current controversial chairman of the House of Representatives Committee on Science).

Why it took two years for this news to be published is anybody's guess. Did somebody just flip a switch and then forget about it? More likely, this research was considered classified for some time.

Certainly this also suggests a technology whose time has come. Governmental and enterprise networks have been compromised at increasing rates, even prompting inflammatory talk of ongoing cyber warfare. And while there have been commercial quantum encryption devices on the market for quite some time now, these have been limited to point-to-point connections. Having a protocol that allows the seamless integration of quantum cryptography into the existing network stack raises this to an entirely different level. This is of course no panacea against security breaches, and it has been criticized as providing superfluous security illusions, since social engineering attacks clearly demonstrate that human users are the weakest link. Nevertheless, I maintain that it has the potential to relegate brute force attacks to history's dustbin.

The new quantum protocol uses a typical “hub-and-spoke” topology as illustrated in the following figure and explained in more detail in the original paper.

The NQC topology maps well onto those widely encountered in optical fiber networks, and permits a hierarchical trust architecture for a “hub” to act as the trusted authority (TA, “Trent”) to facilitate quantum authenticated key exchange.

Another key aspect is the quantum resource employed in the network:

The photonic phase-based qubits typically used in optical fiber QC require interferometric stability and inevitably necessitate bulky and expensive hardware. Instead, for NQC we use polarization qubits, allowing the QC transmitters – referred to as QKarDs – to be miniaturized and fabricated using integrated photonics methods [12]. This opens the door to a manufacturing process with its attendant economy of scale, and ultimately much lower-cost QC hardware.
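The paper spells out the actual protocol; as a rough illustration of the general idea behind a hub-and-spoke architecture with a trusted authority, here is a sketch of how a hub that shares a separate (QKD-derived) secret with each spoke can relay a session key between two spokes by one-time-pad re-encryption. This is a generic cartoon of my own, not the NQC protocol or the QKarD interface, and the class and method names are hypothetical.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """One-time-pad style XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

class TrustedHub:
    """'Trent': the hub shares a fresh secret key with every spoke."""
    def __init__(self):
        self.spoke_keys = {}                     # spoke name -> shared secret

    def enroll(self, name: str, key_len: int = 32) -> bytes:
        # Stand-in for a quantum key exchange between hub and spoke;
        # here we simply draw random bytes known to both ends.
        key = os.urandom(key_len)
        self.spoke_keys[name] = key
        return key

    def relay_session_key(self, sender: str, receiver: str, blob: bytes) -> bytes:
        # Decrypt with the sender's shared key, re-encrypt with the receiver's.
        session_key = xor_bytes(blob, self.spoke_keys[sender])
        return xor_bytes(session_key, self.spoke_keys[receiver])

# Alice and Bob each share a key only with the hub, never directly with each other.
hub = TrustedHub()
alice_key = hub.enroll("alice")
bob_key = hub.enroll("bob")

session_key = os.urandom(32)                     # key Alice wants to share with Bob
to_hub = xor_bytes(session_key, alice_key)       # Alice encrypts it for the hub
to_bob = hub.relay_session_key("alice", "bob", to_hub)
assert xor_bytes(to_bob, bob_key) == session_key # Bob recovers the same session key
```

The design choice this cartoon highlights is the one the paper emphasizes: every spoke only ever needs quantum hardware pointed at the hub, which is what makes the topology map so well onto existing optical fiber networks.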

It will be interesting to observe how quickly this technology will be commercialized, and whether the US export restrictions on strong cryptography will hamper global adoption.

Stretching Quantum Computing Credulity

Update: Corrected text (h/t Geordie)

My interest in D-Wave prompted me to start this blog, and it is no secret that I expect the company to deliver products that will have a significant impact on the IT market. Yet, to this day, I encounter the occasional low-information poster in various online forums who dismisses the company, smugly asserting that it is fraudulent and only milks its investors.

It would be one thing if you only encountered some hold-outs on Scott Aaronson's blog, which originally emerged as D-Wave's most prominent arch-nemesis before he moderated his stance. But it's actually an international phenomenon, as I just came across such a specimen in a German IT forum.

To understand where these individuals are coming from, it is important to consider how people usually go about identifying a "high tech" investment scam. The following list makes no claim to completeness, but is a good example of the hierarchy of filters used to form a quick judgment (h/t John Milstone):

  1. Claims of discovering some new physics that has been overlooked by the entire scientific world for centuries. (For each example of this actually happening, there are hundreds or thousands of con men using this line).
  2. Eagerness to produce “demos” of the device, but refusal to allow any independent testing. In particular, any refusal to even do the demos anywhere other than his own facilities is a clear warning sign (indicating that the facilities are somehow “rigged”).
  3. Demos that only work when the audience doesn’t contain competent skeptics.
  4. Demos that never demonstrate the real claims of the “inventor”.
  5. Lying about business relationships in order to “borrow” credibility from those other organizations.
  6. Failing to deliver on promises.
  7. Continually announcing “improvements” without ever delivering on the previous promises. This keeps the suckers pacified, even though the con man is never actually delivering.

One fateful day, when D-Wave gave an initial presentation to an IT audience, they inadvertently set a chain of events in motion that triggered several of these criteria in the minds of a skeptical audience.

Of course D-Wave never claimed new physics, but it irritated theoretical computer scientists when claiming that its computer can take on a Sudoku puzzle (the latter is known to be NP-hard) (#1). [Ed.: Changed wording to make clear that D-Wave didn't explicitly claim to efficiently solve NP-hard Sudoku.]

At the time, D-Wave was still not ready to allow independent testing (#2), and the audience did not contain theoretical computer scientists who would have scrutinized the company's claims (#3).

Subsequently, critics questioned how much the quantum computing chip was actually engaged in solving the demonstrated Sudoku puzzle, since a normal computer was also in the mix. Scott Aaronson also pointed out that there was no way of knowing whether actual entanglement was happening on the chip, and as such the demo wasn't proving D-Wave's central claim (#4).

To my knowledge, D-Wave never misrepresented any business relationships, but touting their relationship with Google may have inadvertently triggered criterion #5 in some people's minds.

Although D-Wave has been rapidly increasing their chip's integration density, and they are now shipping a product that I expect to outperform conventional hardware, they didn't deliver as quickly as initially anticipated (#6).

Criterion #7 held until they shipped the D-Wave One to Lockheed, and this marked the turning point after which the pattern rapidly unraveled. Only people who haven't paid attention could still hold on to the "investment fraud" canard:

  • D-Wave published internals of their machine in Nature and co-authored several papers that utilize their machine for research as diverse as Ramsey number calculations and protein folding.
  • Independent testers are now able to test the machine.  I can verify that the one tester I am corresponding with is a top notch academic from one of the best engineering and science faculties this world has to offer.  He is also fiercely independent, believing that he can outperform the D-Wave machine with hand-optimized code on a conventional chip.
  • The central claim that their chip is a true quantum chip leveraging massive qubit entanglement has been proven.

It’s time for the IT audience to come to terms with this.

Quantum computing has arrived.  It’s real. Better get used to it.


The Dark Horse of Quantum Computing

Updated below.

Recently, Science magazine prominently featured Quantum Information Processing on its cover:

 

The periodical has a great track record of publishing on QIS, and this is the main reason why I subscribe to it.

Unfortunately, reading this issue yet again drove home what a dark horse enterprise D-Wave is. And this despite some recent prominent news that D-Wave effortlessly passed a test devised to check the quality of the entanglement that they realize on their chip. There is hardly any lingering doubt that they managed to leverage real quantum annealing; yet neither their approach, nor adiabatic quantum computing in general, is featured at all in this issue of Science. In the editorializing introduction to the cover story, dubbed "The Future of Quantum Information Processing", these fields aren't even mentioned in passing. Are we to conclude that there is no future for adiabatic quantum computing?

This I found so puzzling that it prompted me to write my first ever letter to the editors of Science:

The Science journal has been invaluable in the past in advancing the QIS field, publishing an impressive roster of ground breaking papers. Yet, it seems to me the editorializing introduction of the March 8th cover story by Jelena Stajic dropped the ball.

If QIS is prominently featured on the cover of your journal shouldn’t the reader at least expect a cursory exposition of all prominent developments in the entire field? There is nothing wrong with the authors of the paper on the superconducting Josephson junctions approach to QC, restricting themselves to universal gate based architectures. Nevertheless, at least in the accompanying editorial, I would have expected a nod towards adiabatic quantum computing, and approaches utilizing quantum annealing. This oversight seems all the more glaring as the latter already resulted in a commercial offering.

Sincerely,

Henning Dekant

Disclaimer: I am not affiliated with the company D-Wave, which ships the first commercial quantum computing device, just puzzled that an exemplary publication like Science doesn’t feature the entire spectrum of approaches towards quantum computing.

My bet with a skeptical QC and CIS expert is still outstanding, and in my exchange with him he mentioned that he didn't expect D-Wave to pass this first entanglement hurdle. The next hurdle to clear is the matter of actual benchmarking against established chip architectures.

If the D-Wave One cannot outperform a conventional single-threaded architecture, I'll lose 4 gallons of maple syrup, but even if that were to come to pass, it wouldn't spell the end for D-Wave, as it would just be a matter of increasing the qubit density until a quantum annealing chip surpasses conventional hardware. The latter only improves linearly with integration density, while a quantum chip's performance grows exponentially with the number of qubits it can bring to bear (the scaling heuristic is written out below).
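For what it's worth, the scaling intuition behind that last sentence can be written out in one line. This is the heuristic as stated here, not an established result about quantum speed-up:

```latex
% Heuristic only: a conventional chip's single-threaded performance scales roughly
% with its device count, while the configuration space available to an annealer
% doubles with every added qubit.
\[
  P_{\text{classical}} \;\propto\; N_{\text{transistors}},
  \qquad
  \bigl|\{-1,+1\}^{N_{\text{qubits}}}\bigr| \;=\; 2^{\,N_{\text{qubits}}} .
\]
```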

Update:

Without further comment, here is the answer that I received from Science:

Thank you for your feedback regarding the introductory page to our recent QIP special section. I appreciate the point you are making, and agree that quantum annealing is an important concept. Let me, however, clarify my original reasoning. The Introduction was aimed at the general reader of Science, which, as you are aware, has a very broad audience. It was not meant to be an exhaustive account, or to complement the reviews in the issue, but rather to serve as a motivation for covering the topic, and hopefully to induce a non-specialist reader to delve into the reviews, while introducing only a minimal set of new concepts.

I hope that this is helpful, and once again, I am grateful for your feedback.
Best regards,
Jelena Stajic
Associate Editor

Quantum Computing Hype Cycle and a Bet of my Own

The year 2013 started turbulently for the quantum computing field, with a valiant effort by long-time skeptic and distinguished experimentalist Michel I. Dyakonov to relegate it to the status of a pathological science akin to cold fusion (he does not use the term in his paper but later stated: "The terms 'failed, pathological' are not mine, but the general sense is correct.").

Scott Aaronson took on this paper in his unique style (it's a long read but well worth it). There really isn't much to add to his arguments, but there is another angle that intrigues my inner "armchair psychologist": What exactly is it about this field that so provokes some physicists? Is it that …

  • … Computer Scientists of all people are committing Quantum Mechanics?
  • … these pesky IT nerds have the audacity to actually take the axioms of Quantum Mechanics so seriously as to regard them as a resource for computational engineering?
  • … this rabble of academics is breeding papers at a rabbit's pace, so that no one can possibly keep up and read them all?
  • … quantum information science turned the EPR paradox on its head and transformed it into something potentially very useful?
  • … this novel science sucks up all sorts of grant money?

The answer is probably all of the above, to some extent. But this still doesn't feel quite right. It seems to me the animosity goes deeper. Fortunately, Kingsley Jones (whom I greatly admire) blogged about similar sentiments, and he is much more clear-eyed about what is causing them.

It seems to me that the crux of this discomfort stems from the fact that many physicists have long harbored a discomfort with quantum mechanics' intractabilities, which were plastered over with the Copenhagen interpretation (which caused all sorts of unintended side effects). It's really a misnomer; it should have been called the ostrich interpretation, as its mantra was to ignore the inconsistencies and to just shut up and calculate. It is the distinct merit of quantum information science to have dragged this skeleton out of the closet and made it dance.

The quantum information scientists are agnostic about the various interpretations, and even joke about it. Obviously, if you believe there is a truth to be found, there can be only one, but you first need to acknowledge the cognitive dissonance if there's to be any chance of making progress on this front. (My favorite QM interpretation has been suggested by Ulrich Mohrhoff, and I have yet to find the inspiration to blog about it in a manner that does it justice – ironically, where he thinks of it as an endpoint, I regard it as allowing for a fresh start.)

Meanwhile, in the here and now, the first commercial quantum computing device, the D-Wave One, has to overcome its own challenges (or be relegated to a computing curiosity akin to analog neural VLSI). 2013 will be the year it has to prove its merits in comparison to conventional hardware. I've been in touch with a distinguished academic in the field (not Scott A.) who is convinced that optimization on a single conventional CPU will always outperform the D-Wave machines – even the next generation chip. So I proposed a bet, albeit not a monetary one: I will gladly ship a gallon of maple syrup to him if he is proven right and our dark horse Canadian trailblazer doesn't make it to the finish line. The results should be unambiguous and will be based on published research, but just in case there should be any disagreement, we will settle on Scott Aaronson as a mutually acceptable arbiter. Scott is blissfully unaware of this, but as he is also the betting kind (the really big ones), I hope he'd be so kind as to help us sort this out if need be. After all, I figure, he will be following the D-Wave performance tests and will at that time already have formed an informed opinion on the matter.

The year 2013 started off with plenty of QIS drama and may very well turn out to be the crucial one in determining whether the field has crossed the Rubicon. It's going to be a fun one.