Category Archives: D-Wave

The Creative Destruction Lab Reaches a New Quantum Level

Planet earth as seen from Toronto.

If excitement were a necessary and sufficient criterion for reaching higher quantum levels, they were certainly achieved yesterday morning in room 374 of the Rotman School of Business here in Toronto (aka "the centre of the universe", as our modest town is known to Canadians outside the GTA).

In Canadian start-up circles, the Creative Destruction Lab (CDL) is a household name by now, and ever since the program went global, its recognition has reached far past the borders of Canada.

The CDL kicked off with its first cohort in the quantum machine learning stream today, and our company Artiste has been honoured to be part of this exciting new chapter.

For a casual observer, the CDL may look like just another effort to bring venture capital and start-ups together, with some MBA students thrown in for that entrepreneurial spirit. That is, it may appear to be just another glorified pitch competition. But nothing could be further from the truth, as this program has essentially been built around an academic hypothesis about why there is so little start-up activity outside Silicon Valley, and why it has been so difficult to replicate this kind of ecosystem. It certainly is not for lack of scientific talent, capital, or trying.

Ajay Agrawal, the founder of the Creative Destruction Lab, beautifully laid out the core hypothesis around which he structured the CDL. He suspects a fundamental market mismatch, in that start-up founders are under-supplied with one crucial resource: sound entrepreneurial judgment. And the latter can make all the difference. He illustrated this with a somewhat comical email from the nineties, written by a Stanford Ph.D. student pitching a project to an Internet provider, arguing that the technology his small team would build could be extremely profitable, and indicating that they'd love to build it on a fixed salary basis. A handwritten note was scribbled on the email print-out by a Stanford business advisor, who suggested realizing this project as their own start-up venture. This company, of course, went on to become Google.

The linked chart should not be misconstrued as sound investment advice.
Two pretty things that are not alike at all, but the mania is very much the same.

Ajay's thinking throws some serious shade on the current ICO craze, which, like most start-up founders, I've been following very closely. Blockchain technology has some truly disruptive potential way beyond crypto-currency, and I see many synergies between this trustless distributed computing environment and how quantum information will interface with the classical world.

From a start-up’s standpoint, an ICO looks extremely attractive, but like all crowdfunding efforts it still requires a good campaign. However, it all hinges on a whitepaper and technology rather than a business plan, and the former typically comes pretty naturally to technical founders. There are also very few strings attached:

  • The (crypto-)money that comes in is essentially anonymous.
  • Fundraising occurs on a global basis.
  • The process is still essentially unregulated in most jurisdictions.

But if the CDL got it right, ICOs are missing the most critical ingredient for making a venture successful: sound entrepreneurial advice.

There is little doubt in my mind that we are currently experiencing something akin to tulip mania in the crypto-currency and ICO arena, but the market for tulips did not vanish after the 1637 mania ran its course, and neither will ICOs.  For my part, I expect we will see something of a hybrid model emerge: VC seed rounds augmented by ICOs.

From an entrepreneur’s stand-point, this would be the best of both worlds.

Let’s aspire to be more than just a friendly neighbour

The Canadarm – a fine piece of Canadian technology that would have gone nowhere without the US.

This blog is most emphatically not about politics, and although it has often been observed that everything is political, that exaggeration has actually become less true over time.

Whereas in a feudal society all activity is at the pleasure of the ruler, in a liberal democracy citizens and scientists alike don't have to pay constant attention to politics; their freedoms are guarded by an independent judiciary.

Globalism has been an attempt to free cross-border business from the whims of politics. Since history never moves in a straight line, we shouldn't be surprised that, after the 2008 financial meltdown, the trend towards greater global integration is facing major headwinds, which currently gust heavily from the White House.

Trudeau, who is one of the few heads of state who can explain what Quantum Computing is about, will do his best on his state visit to Washington to ensure that free trade continues across the world's longest open border, but Canada can't take anything for granted. Which brings me around to the topic that this blog is most emphatically about: Canada is punching way above its weight when it comes to Quantum Computing, not least because of the inordinate generosity of Mike Lazaridis, who was instrumental in creating the Perimeter Institute as well as giving his alma mater the fantastic Institute for Quantum Computing (IQC). This facility even has its own semiconductor fab, and offers tremendous resources to its researchers.

There have been some start-up spin-offs, and there is little doubt that this brings high-tech jobs to the region, but when I read headlines like the one about the quantum socket, I can't help but wonder whether Canada is again content to play second fiddle. It's a fine piece of engineering, but let's be real: when everything is said and done, it's still just a socket, the thing into which you will plug the really important piece, your quantum chip. I am sure Google will be delighted to use this solid piece of Canadian engineering, and we may even get some nice press about it, just as we did for the Canadarm on the space shuttle, another example of top-notch technology that would have gone nowhere without American muscle.

It’s what you plug in that counts.

But the Quantum Computing frontier is not like access to space. Yes, it takes some serious money to leave a mark, but I cannot help but think that Canada got much better bang for its loonies when the federal BDC fund invested early in D-Wave. The scrappy start-up stretched the dollars much further, and combined great ambition with brilliant pragmatism. It is the unlikely story of a small Canadian company driving development, and inspiring an American giant like Google to jump in with both feet.


Canada needs this kind of spirit. Let's be good neighbours, sure, but also ambitious. Let there be a Canadian QC chip for the Canadian quantum socket.


To Reach Quantum Supremacy We Need to Cross the Entanglement Firepoint

You can get ignition at the flash point, but it won't last.

There’s been a lot of buzz recently about Quantum Computing. Heads of state are talking about it, and lots of money is being poured into research. You may think the field is truly on fire, but could it still fizzle out? When you are dealing with fire, what makes the critical difference between just a flash in the pan, and reaching the firepoint when that fire is self-sustaining?

Finding an efficient quantum algorithm is all about quantum speed-up, which has, understandably, mesmerized theoretical computer scientists. Their field had been boxed into the Turing machine mold, and now, for the first time, there was something that demonstrably went beyond what was possible with this classical, discrete model.

Quantum speed-up is all about scaling behaviour. It's about establishing that a quantum algorithm's resource requirements grow more slowly with the problem size than those of the best known classical algorithm.

While this is a profound theoretical insight, it doesn't necessarily translate into practice right away, because the scaling advantage may only kick in at a problem size far beyond anything technically realizable for the foreseeable future.
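
To make this concrete, here is a toy back-of-the-envelope sketch (all constants are invented purely for illustration and describe no real device): even a quadratic scaling advantage only pays off beyond the crossover point set by the constant overheads.

```python
import math

# Illustrative only: made-up cost constants to show how a scaling advantage
# can be irrelevant below a (possibly huge) crossover threshold.
C_CLASSICAL = 1e-9   # hypothetical seconds per classical step
C_QUANTUM = 1e-3     # hypothetical seconds per quantum step (slow, noisy hardware)

def classical_time(n):
    return C_CLASSICAL * n           # assume linear classical cost

def quantum_time(n):
    return C_QUANTUM * math.sqrt(n)  # assume a Grover-like quadratic speed-up

# Find the problem size where the quantum device starts to win.
n = 1
while quantum_time(n) >= classical_time(n):
    n *= 2
print(f"Quantum advantage only kicks in around n ~ {n:,}")  # roughly 10^12 here
```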

For instance, Shor's algorithm requires tens of thousands of pristine, entangled qubits in order to become useful. While not Sci-Fi anymore, this kind of gate-based QC is still far off. On the other hand, Matthias Troyer et al. demonstrated that you can expect to perform quantum chemical calculations that will outmatch any classical supercomputer with much more modest resources (qubit counts in the hundreds, not thousands).

The condition of having a quantum computing device perform tasks outside the reach of any classical technology is what I'd like to define as quantum supremacy (a term coined by John Preskill that I first heard used by DK Matai).

Quantum speed-up virtually guarantees that you will eventually reach quantum supremacy for the posed problem (i.e. factoring, in Shor's algorithm's case), but it doesn't tell you anything about how quickly you will get there. Also, while quantum speed-up is a useful criterion for eventually reaching quantum supremacy, it is not a necessary one for outperforming conventional super-computers.

We are just now entering a stage where we see the kind of quantum annealing chips that can tackle commercially interesting problems. The integration density of these chips is still minute in comparison to that of established silicon-based ones (for quantum chips there is still lots of room at the bottom).

D-Wave just announced the availability of a 2000-qubit chip for early next year (h/t Ramsey and Rolf). If the chip's integration density can continue to double every 16 months, then quantum algorithms that don't scale better than classical ones (or only modestly so) may at some point still end up outperforming all classical alternatives, assuming that we are indeed living in the end times of Moore's law.
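
To put some rough numbers on that cadence (my own naive extrapolation, not D-Wave's roadmap):

```python
# Naive extrapolation, for illustration only: it simply assumes the historical
# 16-month doubling cadence continues, which is by no means guaranteed.
qubits, months = 2000, 0
while qubits < 1_000_000:
    qubits *= 2
    months += 16
print(f"~{qubits:,} qubits after {months} months (~{months / 12:.0f} years)")
# -> ~1,024,000 qubits after 144 months (~12 years)
```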

From a practical (and hence commercial) perspective, these algorithms won’t be any less lucrative.

Yet, the degree to which quantum correlations can be technologically controlled is still the key to going beyond what the current crop of "wild qubits" on a non-error-corrected adiabatic chip can accomplish. That is why we see Google invest in its own R&D, hiring Prof. Martinis from UCSB, and the work has already resulted in a nine-qubit prototype chip that combines "digital" error correction (ECC) with quantum annealing (AQC).

D-Wave is currently also re-architecting its chip, and it is a pretty safe bet that they will incorporate some form of error correction in the new design. More intriguingly, the company now also talks about a road map towards universal quantum computing (see the second-to-last paragraph in this article).

It is safe to say that before we get undeniable quantum supremacy, we will have to achieve a level of decoherence control that allows for essentially unlimited qubit scale-out. For instance, IBM researchers are optimistic that they’ll get there as soon as they incorporate a third layer of error correction into their quantum chip design.

D-Wave ignited the commercial quantum computing field. And with the efforts underway to build ECC into QC hardware, I am more optimistic than ever that we are very close to the ultimate firepoint where this technological revolution will become unstoppable. Firepoint Entanglement is drawing near, and when these devices enter the market, you will need software that will bring Quantum Supremacy to bear on the hardest challenges that humanity faces.

This is why I teamed up with Robert (Bob) Tucci, who pioneered an inspired way to describe quantum algorithms (and arbitrary quantum systems) with a framework that extends Bayesian Networks (B-nets, sometimes also referred to as Belief Networks) into the quantum realm. He did this in such a manner that an IT professional who knows this modelling approach, and is comfortable with complex numbers, can pick up on it without having to go through a quantum physics boot camp. It was this reasoning on a higher abstraction level that enabled Bob to come up with the concept of CMI entanglement (sometimes also referred to as squashed entanglement).

An IDE built on this paradigm will allow us to tap into the new quantum resources as they become available, and to develop intuition for this new frontier in information science with a visualization that goes far beyond a simple circuit model. The latter also suffers from the fact that in the quantum realm some classical logic gates (such as OR and AND) are not allowed, since they are irreversible and hence cannot be implemented as unitary operations, which can be rather confusing for a beginner. QB-nets, on the other hand, fully embed and encompass the classical networks, so any software that implements QB-nets can also be used to implement standard Bayesian network use cases, and the two can be freely mixed in hybrid nets. (This corresponds to density matrices that include classical thermodynamic probabilities.)
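
To illustrate that parenthetical remark with a toy example of my own (this is not the QB-net software itself): a density matrix whose off-diagonal elements vanish is nothing more than a classical probability distribution, which is exactly what lets a quantum network formalism contain classical Bayesian networks as a special case.

```python
import numpy as np

# A single bit/qubit as a 2x2 density matrix.
# Classical coin flip: purely diagonal -> just ordinary probabilities.
rho_classical = np.array([[0.5, 0.0],
                          [0.0, 0.5]])

# Equal superposition |+><+|: same diagonal, but non-zero off-diagonal
# coherences -- the genuinely quantum part a classical Bayesian network
# cannot represent.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_quantum = np.outer(plus, plus.conj())

for name, rho in [("classical mixture", rho_classical),
                  ("superposition", rho_quantum)]:
    probs = np.real(np.diag(rho))   # measurement statistics in the z-basis
    coherence = abs(rho[0, 1])      # off-diagonal magnitude
    print(f"{name}: P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}, "
          f"coherence={coherence:.2f}")
```

Both states give the same 50/50 measurement statistics; only the coherence term distinguishes them, and that is the extra ingredient the quantum version of the network has to carry around.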

So far, the back-end for the QB-net software is almost complete, as well as a stand-alone compiler/gate-synthesizer. Our end goal is to build an environment every bit as complete as Microsoft's Liqui|>. They make this software available for free, and ironically distribute it via Github, although the product is entirely closed source (but at least they are not asking for your firstborn if you develop with Liqui|>). Microsoft has also stepped up its patent activity in this space, in all likelihood to allow for a similar shake-down business model as the one that lets it derive a huge amount of revenue from the Linux-based (Google-developed) Android platform. We don't want the future of computing to be held in a stranglehold by Microsoft, which is why our software is Open Source, and we are looking to build a community of QC enthusiasts within and outside of academia to carry the torch of software freedom. If you are interested, please head over to our github repository. Any little bit will help: feedback, testing, documentation, and of course coding. Come and join the quantum computing revolution!


D-Wave – Fast Enough to Win my Bet?

Really Would Like to Get That Raclette Cheese.

Last summer I had to ship a crate of maple syrup to Matthias Troyer at the ETHZ in Switzerland. The conditions we had agreed on for our performance bet were such that, at that point, the D-Wave One could not show a clear performance advantage over a conventional, modern CPU running fine-tuned optimization code. The machine held its own, but there weren't any problem classes to point to that really demonstrated massive performance superiority.

Impressive benchmark graph. Next on my Christmas wishlist: A decisive widening of the gap between the green QMC curve and the blue D-Wave line as the problem size increases (as is the case when compared to the red Simulated Annealing curve).


The big news to take away from the recent Google/D-Wave performance benchmark is that, with certain problem instances, the D-Wave machine clearly shines. A factor of 100 million over a Quantum Monte Carlo simulation is nothing to sneeze at. This doesn't mean that I would now automatically win my bet with Matthias if we were to repeat it with the D-Wave Two, but it would certainly make it much more interesting.

One advantage of being hard-pressed to find time for blogging is that, once I get around to commenting on recent developments, most other reactions are already in. Matthias provided this excellent write-up, and the former D-Wave critic-in-chief remains in retirement. Scott Aaronson's blog entry on the matter strikes a (comparatively) conciliatory tone. One of his comments explains one of the reasons for this change:

“[John Martinis] told me that some of the engineering D-Wave had done (e.g., just figuring out how to integrate hundreds of superconducting qubits while still having lines into them to control the couplings) would be useful to his group. That’s one of the main things that caused me to moderate a bit (while remaining as intolerant as ever of hype).”

Scott also gave a pretty balanced interview to the MIT News (although I have to subtract a star on style for working in a dig at Geordie Rose – clearly the two won’t become best buds in this lifetime).

Hype is generally, and rightly, scorned in the scientific community. And when it is pointed out (for instance when the black hole information loss problem had been "solved"), the scientists involved are usually on the defensive.

Buddy the Elf believes anything Steve Jurvetson ever uttered and then some.

Of course, business follows very different rules, more along the Donald Trump rules of attention: any BS will do as long as it captures an audience. Customers are used to these kinds of commercial exaggerations, and so I am always a bit puzzled by the urge to debunk D-Wave "hype". To me it feels almost a bit patronizing. The average Joe is not like Buddy the Elf, the unlikely hero of my family's favorite Christmas movie. When Buddy comes to NYC and sees a diner advertising the world's best coffee, he takes this at face value and goes crazy over it. The average Joe, on the other hand, has been thoroughly desensitized to high-tech hype. He knows that neither Google Glass nor the Apple Watch will really change his life forever, nor will he believe Steve Jurvetson that the D-Wave machines will outperform the universe within a couple of years. Steve, on the other hand, does what every good VC businessman is supposed to do for a company that he invested in, i.e. create hype. The world has become a virtual bazaar, and your statements have to be outrageous and novel in order to be heard over the noise. What he wants to get across is that the D-Wave machines will grow in performance faster than conventional hardware. Condensing this into Rose's Law is the perfect pitch vehicle for that – hype with a clear purpose.

People like to pick an allegiance and cheer for their “side”. It is the narrative that has been dominating the D-Wave story for many years, and it made for easy blogging, but I won’t miss it. The hypers gonna hype, the haters gonna hate, but now the nerds should know to trust the published papers.

Max Planck famously quipped that science advances one funeral at a time, because even scientists have a hard time acting completely rationally and adjusting their stances when confronted with new data. This is the 21st century; here's hoping that the scientific community has lost this kind of rigidity, even while most of humanity remains as tribal as ever.

Riding the D-Wave

Update: Thanks to everybody who keeps pointing me to relevant news (Ramsey, Rolf, Sol and everybody else my overtired brain may not recall at this time).

There is no doubt that D-Wave is on a roll:

And then there’s the countdown to what is billed as a D-Wave related watershed announcement from Google coming Dec 8th.  Could this be an early Christmas present to D-Wave investors?


~~~~~~


Back in the day before he re-resigned as D-Wave’s chief critic, Scott Aaronson made a well-reasoned argument as to why he thought this academic, and at times vitriolic, scrutiny was warranted. He argued that a failure of D-Wave to deliver a quantum speed-up would set the field back, similar to the AI winter that was triggered by Marvin Minsky’s Perceptrons book.

Fortunately, quantum annealers are not perceptrons. For the latter, it can be rigorously proven that single-layer perceptrons are not very useful: they cannot even represent a function as simple as XOR. Ironically, at the time the book was published, multilayer perceptrons, a concept that is now fundamental to all deep learning algorithms, were already known, but in the ensuing backlash research funding for those also dried up completely. The term "perceptron" became toxic and is now all but extinct.
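
For the curious, here is a minimal sketch of that classic argument (standard textbook material, nothing specific to the Minsky book): the perceptron learning rule converges only on linearly separable data, and XOR isn't.

```python
import numpy as np

# XOR truth table: the textbook example of a function a single-layer
# perceptron provably cannot represent (Minsky & Papert, 1969).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def train_perceptron(X, y, epochs=1000, lr=0.1):
    """Classic perceptron learning rule on a single linear threshold unit."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = int(w @ xi + b > 0)
            update = lr * (target - pred)
            w += update * xi
            b += update
            errors += int(update != 0)
        if errors == 0:           # converged: data is linearly separable
            return w, b, True
    return w, b, False            # never converges for XOR

w, b, separable = train_perceptron(X, y)
print("XOR linearly separable by one perceptron?", separable)  # -> False
```

Adding a single hidden layer removes the limitation, which is exactly the point that got lost in the backlash.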

Could D-Wave be derailed by a proof showing that quantum annealing could, under no circumstances, deliver a quantum speed-up? To me this seems very unlikely, not only because I expect that no such proof exists, but also because, even if it did, there would still be a practical speed-up to be had. If D-Wave manages to double its integration density at the same rapid clip as in the past, then its machines will eventually outperform any classical computing technology in terms of annealing performance. This article (h/t Sol) expands on this point.

So far there is no sign that D-Wave will slow its manic pace. The company recently released its latest chip generation, featuring quantum annealing with an impressive 1000+ qubits (in practice, the usable number will be smaller, as qubits are consumed for problem encoding and software ECC). This was followed by a detailed test under the leadership of Catherine McGeoch, and it will be interesting to see what Daniel Lidar, and other researchers with access to D-Wave machines, will find.

My expectation has been from the get-go that D-Wave will accelerate the development of this emerging industry, and attract more money to the field. It seems to me that this is now playing out.

Intel recently (and finally as Robert Tucci points out) entered the fray with a $50M investment. While this is peanuts for a company of Intel’s size, it’s an acknowledgement that they can’t leave the hardware game to Google, IBM or start-ups such as Rigetti.

On the software side, there's a cottage industry of software start-ups hitching their wagons to the D-Wave engine. Many of these are still in stealth mode or at an early stage, such as QC Ware, while others are already starting to receive some well-deserved attention.

Then there are also smaller vendors of established software and services that already have a sophisticated understanding of the need to be quantum ready. The latter is something I expect to see much more of in the coming years as the QC hardware race heats up.

The latest big name entry into the quantum computing arena was Alibaba, but at this time it is not clear what this Chinese initiative will focus on. Microsoft, on the other hand, seems to be a known quantity and will not get aboard the D‑Wave train, but will focus exclusively on quantum gate computing.

Other start-ups, like our artiste-qb.net, straddle the various QC hardware approaches. In our case, this comes "out-of-the-box", because our core technology, Quantum Bayesian Networks, as developed by Robert Tucci, is an ideal tool to abstract from the underlying architecture. Another start-up that is similarly architecture-agnostic is Cambridge QC. The recent news about this company brings to mind that sometimes reality rather quickly imitates satire. While short of the $1B seed round of this April Fool's spoof, the influx of $50M from the Chile-based Grupo Arcano is an enormous amount for a QC software firm that, as far as I know, holds no patents.

Some astoundingly big bets are now being placed in this field.


Will Super Cool SQUIDs Make for an Emerging Industry Standard?

This older logarithmic (!) D-Wave graphic gives an idea of how extreme the cooling requirement is for SQUID-based QC (it used to be part of a really cool SVG animation, but unfortunately D-Wave no longer hosts it).

D-Wave had to break new ground in many engineering disciplines. One of them was the cooling and shielding technology required to operate its chip.

To this end they are now using ANSYS software, which of course makes for very good marketing for this company (h/t Sol Warda). So good, in fact, that I would hope D‑Wave negotiated a large discount for serving as an ANSYS reference customer.

Any SQUID-based quantum computing chip will have similar cooling and shielding requirements, i.e. Google and IBM will have to go through a similar kind of rigorous engineering exercise to productize their approaches to quantum computing, even though those approaches may look quite different.

Until recently, it would have been easy to forget that IBM is another contender in the ring for SQUID-based quantum computing, yet the company's researchers have been working diligently outside the limelight – they last created headlines three years ago. And unlike other quantum computing news, which often only touts marginal improvements, their recent results deserve to be called a breakthrough, as they improved upon the kind of hardware error correction that Google is betting on.

IBM has been conducting fundamental quantum technology research for a long time; this image shows the company's name spelled out using 35 xenon atoms, arranged via a scanning tunneling microscope (a nano-scale visualization and manipulation device invented at IBM).

Obviously, the better your error correction, the more likely you will be able to achieve quantum speed-up when you pursue an annealing architecture like D-Wave's, but IBM is not after yet another annealer. Most articles on the IBM program report that IBM is out to build a "real quantum computer", and the term clearly originates from within the company (e.g. this article attributes it to scientists at IBM Research in Yorktown Heights, NY). This leaves little doubt about their commitment to universal gate-based QC.

The difference in strategy is dramatic. D-Wave decided to forgo surface code error correction on the chip in order to get a device to market. Google, on the other hand, decided to snap up the best academic surface code implementation money could buy, while also emphasizing speed-to-market by first going for another quantum adiabatic design.

All the while, IBM researchers first diligently worked through the stability of SQUID-based qubits. Even now, having achieved the best available error correction, they clearly signaled that they don't consider it good enough for scale-up. It may take yet another three years for them to find the optimal number and configuration of logical qubits that achieves the kind of fidelity they need to then tackle an actual chip.

It is a very methodical engineering approach. Once the smallest building block is perfected, they will have the confidence that they can go for the moonshot. It's also an approach that only a company with very deep pockets can afford, one with a culture that allows for the pursuit of a decades-long research program.

Despite the differences, in the end all SQUID-based chips will have to be operated very close to absolute zero. IBM's error correction may eventually give it a leg up over the competition, but I doubt that standard liquid helium fridge technology will suffice for a chip that implements dozens or hundreds of qubits.

By the time IBM enters the market there will be more early adopters of the D-Wave and Google chips, and the co-opetition between these two companies may have given birth to an emerging industry standard for the fridge technology. In a sense, this may lower the barriers to entry for new quantum chips if a new entrant can leverage this existing infrastructure. It would probably be a first for IBM to cater to a chip interfacing standard that the company did not help to design.

So while there's been plenty of news to report in the quantum computing hardware space, it is curious, and a sign of the times, that a recent Washington Post article on the matter opted to headline with a quantum computing software company, QxBranch. (Robert R. Tucci channeled the journalists at the WP when he wrote last week that the IBM news bodes well for software start-ups in this space.)

While tech and business journalists may not (and possibly never will) understand what makes a quantum computer tick, they understand perfectly well that any computing device is just dead weight without software, and that the latter will make the value proposition necessary to create a market for these new machines.


How many social networks do you need?

The proliferation of social networks seems unstoppable now. Even the big ones can no longer be counted on one hand: Facebook, LinkedIn, GooglePlus, Twitter, Instagram, Tumblr, Pinterest, Snapchat – I am so uncool I didn't even know about the latter until very recently. It seems there has to be a natural saturation point, with diminishing marginal returns from signing up for yet another one, but apparently we are still far from it.

Recently, via LinkedIn, I learned about a targeted social network that I happily signed up for, which is quite out of character for me (i.e. I still don't have a Facebook account).

Free to join and no strings attached. (This targeted social network is not motivated by a desire to monetize your social graph).

The aptly named International Quantum Exchange for Innovation is a social network set up by DK Matai with the express purpose of bringing together people from all walks of life, anywhere on the globe, who are interested in the next wave of the coming Quantum Technology revolution. If you are as interested in this as I am, then joining this UN of Quantum Technology, as DK puts it, is a no-brainer.

The term 'revolution' is often carelessly thrown around, but in this case, when it comes to the new wave of quantum technologies, I think it is more than justified. After all, the first wave of QM-driven technologies powered the second leg of the Industrial Revolution. It started with a bang, in the worst possible manner, when the first nuclear bomb ignited, but the new insights gained led to a plethora of new high-tech products.

Quantum physics was instrumental in everything from solar cells to lasers to medical imaging (e.g. MRI) and, of course, first and foremost, the transistor. As computers became more powerful, Quantum Chemistry coalesced into an actual field, feeding on the ever-increasing computational power. Yet Moore's law proved hardly adequate for its insatiable appetite for the compute cycles required by the underlying quantum numerics.

During his (too short) lifespan, Richard Feynman was involved in military as well as civilian applications of quantum mechanics, and his famous "there is plenty of room at the bottom" talk can be read as a programmatic outline of the first Quantum Technology revolution. This QT 1.0 wave has almost run its course. We made our way to the bottom, but there we encountered entirely new possibilities by exploiting the essential, counter-intuitive non-localities of quantum mechanics. This takes us to the next step, and again Information Technology is at the forefront. It is a testament to Feynman's brilliance that he anticipated QT 2.0 as well, when he suggested a quantum simulator for the first time, much along the lines of what D-Wave built.

It is apt and promising that the new wave of quantum technology does not start with a destructive big bang, but with an intriguing and controversial black box.



Quantum Computing Road Map

No, we are not there yet, but we are working on it.

Qubit spin states in diamond defects don't last forever, but they can last outstandingly long even at room temperature (measured in microseconds, which is a long time when it comes to computing).

So this is yet another interesting system added to the list of candidates for potential QC hardware.

Nevertheless, when it comes to the realization of scalable quantum computers, qubit decoherence times may very well be eclipsed in importance by another time span: 20 years, the length of time for which patents are valid (in the US this can include software algorithms).

With D-Wave and Google leading the way, we may be getting there faster than most industry experts predicted. Certainly the odds are very high that it won't take another two decades for usable universal QC machines to be built.

But how do we get to the point of bootstrapping a new quantum technology industry? DK Matai addressed this in a recent blog post and identified five key questions, which I attempt to address below (I took the liberty of slightly abbreviating the questions; please check the link for the unabridged version).

The challenges DK laid out will require much more than a blog post (or the LinkedIn comment that I recycled here), especially since his view is wider than quantum information science alone. That is why the following thoughts are by no means comprehensive answers, and very much incomplete, but they may provide a starting point.

1. How do we prioritise the commercialisation of critical Quantum Technology 2.0 components, networks and entire systems both nationally and internationally?

The prioritization should be based on the disruptive potential. Take quantum cryptography versus quantum computing, for example. Quantum encryption could stamp out fraud that exploits some technical weaknesses, but it won't address the more dominant social engineering deceptions. On the upside, it will also facilitate iron-clad cryptocurrencies. Yet, if Feynman's vision of the universal quantum simulator comes to fruition, we will be able to tackle collective quantum dynamics that are computationally intractable with conventional computers. This encompasses everything from simulating high-temperature superconductivity to complex (bio-)chemical dynamics. ETH's Matthias Troyer gave an excellent overview of these killer apps for quantum computing in his recent Google talk; I especially like his example of nitrogen fixation. Nature manages to accomplish this with minimal energy expenditure in some bacteria, but industrially we only have the century-old Haber-Bosch process, which in modern plants still results in half a ton of CO2 for each ton of NH3. If we could simulate and understand the chemical pathway that these bacteria follow, we could eliminate one of the major industrial sources of carbon dioxide.

2. Which financial, technological, scientific, industrial and infrastructure partners are the ideal co-creators to invent, to innovate and to deploy new Quantum technologies on a medium to large scale around the world? 

This will vary drastically by technology. To pick a basic example, a quantum clock per se is just a better clock, but put it into a Galileo/GPS satellite and the drastic improvement in timekeeping will immediately translate into higher location triangulation accuracy, as well as allow for a better mapping of the earth's gravitational field and mass distribution.
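
A back-of-the-envelope illustration of why clock precision matters so much here (my own arithmetic, not from DK's post): satellite positioning converts time into distance at the speed of light,

```latex
\Delta x \;\approx\; c\,\Delta t, \qquad c \approx 3\times 10^{8}\,\mathrm{m/s}
\quad\Longrightarrow\quad \Delta t = 1\,\mathrm{ns} \;\rightarrow\; \Delta x \approx 0.3\,\mathrm{m},
```

so every order of magnitude gained in clock stability feeds directly into the positioning error budget.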

3. What is the process to prioritise investment, marketing and sales in Quantum Technologies to create the billion dollar “killer apps”?

As sketched out above, the real prize to me is universal quantum computation/simulation. Considerable efforts have to go into building such machines, but that doesn't mean that you cannot already start to develop software for them. Any coding for new quantum platforms, even those that are already here (as in the case of the D-Wave 2), will involve emulators on classical hardware, because you want to debug and prove out your code before submitting it to the more expensive quantum resource. In my mind, building such an environment in a collaborative fashion to showcase and develop quantum algorithms should be the first step. To me this appears feasible on an accelerated timescale (months rather than years). I think such an effort is critical to offset the closed-source and tightly license-controlled approach that, for instance, Microsoft is following with its development of the LIQUi|> platform.
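
To give a flavour of what such classical emulation looks like at its most basic (a minimal sketch with a made-up three-spin problem, not any vendor's actual API), here is a simulated annealing loop for the kind of toy Ising model an annealer would be asked to minimize:

```python
import math
import random

# Hypothetical toy Ising problem: spins s_i in {-1, +1}, couplings J, fields h.
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): +0.5}   # made-up couplings
h = {0: 0.1, 1: -0.2, 2: 0.0}                    # made-up local fields

def energy(s):
    """Classical Ising energy of a spin configuration."""
    e = sum(w * s[i] * s[j] for (i, j), w in J.items())
    e += sum(b * s[i] for i, b in h.items())
    return e

def simulated_annealing(n_spins, sweeps=5000, t_start=5.0, t_end=0.01):
    s = [random.choice([-1, 1]) for _ in range(n_spins)]
    e = energy(s)
    best, best_e = s[:], e
    for k in range(sweeps):
        t = t_start * (t_end / t_start) ** (k / (sweeps - 1))  # geometric cooling
        i = random.randrange(n_spins)
        s[i] *= -1                        # propose a single spin flip
        e_new = energy(s)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new                     # accept the move (Metropolis rule)
            if e < best_e:
                best, best_e = s[:], e
        else:
            s[i] *= -1                    # reject: undo the flip
    return best, best_e

best, best_e = simulated_annealing(3)
print("ground-state candidate:", best, "energy:", best_e)
```

The point is not the algorithm itself, but that a problem debugged against an emulator like this can then be handed to the real quantum resource unchanged.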

4. How do the government agencies, funding Quantum Tech 2.0 Research and Development in the hundreds of millions each year, see further light so that funding can be directed to specific commercial goals with specific commercial end users in mind?

This to me seems to be the biggest challenge. The number of research papers produced in this field is enormous. Much of it is computational theory. While the theory has its merits, I think government funding should try to emphasize programs that have a clearly defined agenda towards ambitious yet attainable goals: research that will result in actual hardware and/or commercially applicable software implementations (e.g. the UCSB Martinis agenda). Yet governments shouldn't be in the position to pick a winning design, as was inadvertently done for fusion research, where ITER's funding requirements are now crowding out all other approaches. The latter is a template for how not to go about it.

5. How to create an International Quantum Tech 2.0 Super Exchange that brings together all the global centres of excellence, as well as all the relevant financiers, customers and commercial partners to create Quantum “Killer Apps”?

On a grassroots level, I think open source initiatives (e.g. a LIQUi|> alternative) could become catalysts to bring academic centers of excellence and commercial players into alignment. This at least is my impression based on conversations with several people involved in the commercial and academic realms. On the other hand, as with any open source product, commercialization won't be easy, yet this may be less of a concern in this emerging industry, as the IP will be in the quantum algorithms, and they will most likely be executed with quantum resources tied to a SaaS offering.


Quantum Computing Coming of Age

Are We There Yet? That's the name of the talk that Daniel Lidar recently gave at Google (h/t Sol Warda, who posted this in a previous comment).

Spoiler alert: I will summarize some of the most interesting aspects of this talk, as I finally found the time to watch it in its entirety.

You may skip the first 15 minutes if you follow this blog; he just gives a quick introduction to QC. Actually, if you follow the discussion on this blog closely, you will not find much news in most of the presentation until the very end, but I very much appreciated the graph eight minutes in, which is based on this Intel data:

Performance and clock speeds have been essentially flat for the last ten years. Only the ability to squeeze more transistors and cores into one chip keeps Moore's law alive (data source: Intel Corp.).

Daniel, deservedly, spends quite some time on this to drive home the point that classical chips have hit a wall. Moving from silicon to germanium will only go so far in delaying the inevitable.

If you don't want to sit through the entire talk, I recommend skipping ahead to the 48-minute mark, when error correction on the D-Wave is discussed. The results are very encouraging, and in the Q&A Daniel points out that this EC scheme could be inherently incorporated into the D-Wave design. I wouldn't be surprised to see this happen fairly soon. The details of the ECC scheme are available on the arXiv, and Daniel spends some time on the graph shown below. He points out that, to the extent that you can infer a slope, it looks very promising, as it gets flatter as the problems get harder, and the gap between non-ECC and error-corrected annealing widens (solid vs. dashed lines). With ECC I would therefore expect D-Wave machines to systematically outperform simulated annealing.

Number of repetitions to find a solution at least once.
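
As a side note on how to read such a plot (my own gloss, not taken from the talk): if a single annealing run returns the ground state with probability p, then the number of repetitions R needed to see it at least once with, say, 99% confidence is

```latex
R \;=\; \left\lceil \frac{\ln(1-0.99)}{\ln(1-p)} \right\rceil ,
```

so a flatter curve directly reflects a higher per-run success probability on the harder problem instances.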

Daniel sums up the talk like this:

  1. Is the D-Wave device a quantum annealer?
    • It disagrees with all classical models proposed so far. It also exhibits entanglement. (I.e. Yes, as far as we can tell)
  2. Does it implement a programmable Ising model in a transverse field (the Hamiltonian is sketched below this list) and solve optimization problems as promised?
    • Yes
  3. Is there a quantum speedup?
    • Too early to tell
  4. Can we error-correct it and improve its performance?
    • Yes
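
For reference, the transverse-field Ising Hamiltonian mentioned in point 2 has the textbook form (schematic, not D-Wave's exact parametrization):

```latex
H(s) \;=\; -A(s)\sum_i \sigma^{x}_{i} \;+\; B(s)\Big(\sum_i h_i\,\sigma^{z}_{i} \;+\; \sum_{i<j} J_{ij}\,\sigma^{z}_{i}\sigma^{z}_{j}\Big),
```

where the annealing schedule slowly turns off the transverse driver term A(s) while turning on the classical Ising cost function weighted by B(s), whose ground state encodes the solution to the programmed optimization problem.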

With regard to hardware-implemented qubit ECC, we also got some great news from Martinis' UCSB lab, which Google drafted for its quantum chip. The latest results have just been published in Nature (pre-print available on the arXiv).

Martinis explained the concept in a talk I previously reported on, and clearly the work is progressing nicely. Unlike the ECC scheme for the D-Wave architecture, Martinis' approach is targeting a fidelity that will not only work for quantum annealing, but should also allow for non-trivial gate-based computations.

Quantum Computing may not have fully arrived yet, but after decades of research we are clearly entering the stage where this technology won't just be the domain of theorists and research labs, and at this time D-Wave is poised to take the lead.


The Year That Was <insert expletive of your choice>

Usually, I like to start a new year on an upbeat note, but this time I just cannot find the right fit. I was considering whether to revisit technology that can clean water; lauding the efforts of the Bill Gates Foundation came to mind, but while I think this is a great step in the right direction, this water-reclaiming technology is still a bit too complex and expensive to become truly transformational and liberating.

At other times, groundbreaking progress in increasing the efficiency of solar energy would have qualified, the key being that this can be done comparatively cheaply. Alas, the unprecedented drop in the price of oil is not only killing off the fracking industry, but also the economics of alternative energy. For a planet that has had its fill of CO2, fossil fuel this cheap is nothing but an unmitigated disaster.

So while it was a banner year for quantum computing, in many respects 2014 was utterly dismal, seeing the return of religiously motivated genocide, open warfare in Europe, a resurgence of diseases that could have been eradicated by now, and a pandemic that caused knee-jerk hysterical reactions and taught us how unprepared we are for these kinds of health emergencies. This year was so depressing it makes me want to wail along to my favorite science blogger's song about it (but then again I'd completely ruin it).

And there is another reason not to let go of the past just yet, namely corrections:

With these corrections out of the way, I will finally let go of 2014, but with the additional observation that in the world of quantum computing the new year started very much in the same vein as the old, generating positive business news for D-Wave, which just managed to raise another 29 million dollars, while at the same time still not getting respect from some academic QC researchers.

I.H. Deutsch (please note, not the Deutsch but Ivan) states at the end of this interview:

  1. "The D-Wave prototype is not a universal quantum computer.
  2. It is not digital, nor error-correcting, nor fault tolerant.
  3. It is a purely analog machine designed to solve a particular optimization problem.
  4. It is unclear if it qualifies as a quantum device."

No issues with points 1-3. But how many times do classical algos have to be ruled out before D-Wave is finally universally accepted as a quantum annealing machine? This is getting into climate-change-denial territory. It shouldn't really be that hard to define what makes for quantum computation. So I guess we have found a new candidate for D-Wave chief critic, after Scott Aaronson seems to have stepped down for good.

Then again, with a last name like Deutsch, you may have to step up your game to get some name recognition of your own in this field.  And there’s no doubt that controversy works.

So 2015 is shaping up to become yet another riveting year for QC news. And just in case you made the resolution that, this year, you will finally try to catch that rainbow, there’s some new tech for you.
SOURCE: chaosgiant.deviantart.com


Update: Almost forgot about this epic fail of popular science reporting at the tail end of 2014. For now I leave it as an exercise to the reader to spot everything that's wrong with it. Of course, most of the blame belongs to PLoS ONE, which supposedly practices peer review.