The Creative Destruction Lab Reaches a New Quantum Level

Planet Earth as seen from Toronto.

If excitement were a necessary and sufficient criterion for reaching higher quantum levels, they certainly must have been reached yesterday morning in room 374 of the Rotman School of Business here in Toronto (aka “the center of the universe”, as our modest town is known to Canadians outside the GTA).

In Canadian start-up circles, the Creative Destruction Lab (CDL) is by now a household name, and ever since the program went global, its recognition has spread far beyond Canada’s borders.

The CDL kicked off with its first cohort in the quantum machine learning stream today, and our company Artiste has been honoured to be part of this exciting new chapter.

For a casual observer, the CDL may look like just another effort to bring venture capital and start-ups together, with some MBA students thrown in for entrepreneurial spirit; that is, it may appear to be just another glorified pitch competition. But nothing could be further from the truth: the program has been built around an academic hypothesis about why there is so little start-up activity outside Silicon Valley, and why that kind of ecosystem has proven so difficult to replicate. It is certainly not for lack of scientific talent, capital, or trying.

Ajay Agrawal, the founder of the Creative Destruction Lab, beautifully laid out the core hypothesis around which he structured the CDL. He suspects a crucial market mismatch, in that start-up founders are under-supplied with one critical resource: sound entrepreneurial judgment. And the latter can make all the difference. He illustrated this with a somewhat comical email from the nineties, written by a Stanford Ph.D. student pitching a project to an Internet provider, arguing that the technology his small team would build could be extremely profitable, and indicating that they’d love to build it on a fixed-salary basis. A handwritten note from a Stanford business advisor was scribbled on the email print-out, suggesting they realize the project as their own start-up venture. That company, of course, went on to become Google.

The linked chart should not be misconstrued as sound investment advice.
Two pretty things that are not at all alike, but the mania is very much the same.

Ajay’s thinking throws some serious shade on the current ICO craze, which I, like most start-up founders, have been following very closely. Blockchain technology has some truly disruptive potential well beyond crypto-currency, and I see many synergies between this trustless distributed computing environment and how quantum information will interface with the classical world.

From a start-up’s standpoint, an ICO looks extremely attractive, but like all crowdfunding efforts it still requires a good campaign. However, everything hinges on a whitepaper and technology rather than a business plan, and the former typically comes naturally to technical founders. There are also very few strings attached:

  • The (crypto-)money that comes in is essentially anonymous.
  • Fundraising occurs on a global basis.
  • The process is still essentially unregulated in most jurisdictions.

But if the CDL got it right, ICOs are missing the most critical ingredient for making a venture successful: sound entrepreneurial advice.

There is little doubt in my mind that we are currently experiencing something akin to tulip mania in the crypto-currency and ICO arena, but the market for tulips did not vanish after the 1637 mania ran its course, and neither will ICOs.  For my part, I expect we will see something of a hybrid model emerge: VC seed rounds augmented by ICOs.

From an entrepreneur’s stand-point, this would be the best of both worlds.

Taking it to the Next Level

The summit of the Computing Pyramid is no longer just an academic specter.

It is no secret that I’ve always been impressed with D-Wave. Sure, they “only” perform quantum annealing on their chip, and there is no guarantee that any “quantum supremacy” is to be had, but with their machine, qubits can for the first time be harnessed to do something useful. That is, D-Wave now offers enough computational power to hold its own against the established computing architecture. It’s unrealistic to expect it to be a silver bullet for all optimization problems, but it is real and it is available, and as soon as something is actually useful you can put a price on it.
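
For readers who wonder what kind of problem an annealer actually chews on, here is a tiny illustrative sketch: a QUBO (quadratic unconstrained binary optimization) instance with made-up coefficients, solved by brute force in plain Python. A D-Wave machine minimizes this same kind of objective in hardware; the snippet only shows the shape of the problem, not how you would submit it to the actual API.

```python
# The kind of objective a quantum annealer minimizes: a QUBO (quadratic
# unconstrained binary optimization) over binary variables x_i.
# Coefficients are invented; we brute-force this tiny instance classically
# just to show the form of the problem, not how to use the D-Wave API.
from itertools import product

Q = {(0, 0): -1.5, (1, 1): -1.0, (2, 2): 2.0,   # E(x) = sum_{i,j} Q[i,j] * x_i * x_j
     (0, 1): 2.0, (1, 2): -1.0}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))   # (1, 0, 0) with energy -1.5
```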

My company is in the business of developing Open Source software in the QC space, and of offering advice and consulting to customers who want to assess and explore the possibilities of this new frontier of information technology. Our software can already talk to the IBM Quantum Experience chip, a universal gate-based device that is an impressive proof of concept but not yet of any practical use. It did not sit well with me that we could not address the D-Wave API in a similar manner.
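
To give a flavour of what a gate-based device computes, here is the canonical two-qubit Bell-state circuit written against the current Qiskit library. This is purely illustrative and not our own stack; the printed probabilities are what an ideal, noise-free chip would produce.

```python
# Illustrative only: a Bell-state circuit, the kind of gate-based program a
# chip like the IBM Quantum Experience executes. Uses the Qiskit library,
# which is separate from the software discussed in this post.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # put qubit 0 into an equal superposition
qc.cx(0, 1)   # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)   # ideal (noise-free) output state
print(state.probabilities_dict())          # ≈ {'00': 0.5, '11': 0.5}
```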

That’s why I picked up the phone and reached out to D-Wave to establish a relationship that will allow my company, artiste-qb.net, to do just that.

So while I will always strive for full transparency when discussing quantum information technologies, when I write about D-Wave in the future it will no longer be from the vantage point of an unaffiliated observer, but from the perspective of someone actively working to help them succeed on the merits of their technology.

Let’s aspire to be more than just a friendly neighbour

The Canadarm – a fine piece of Canadian technology that would have gone nowhere without the US.

This blog is most emphatically not about politics, and although it is often observed that everything is political, that claim is an exaggeration, and one that holds less true today than it once did.

Whereas in a feudal society all activity happens at the pleasure of the ruler, in a liberal democracy citizens and scientists alike don’t have to pay attention to politics; their freedoms are guarded by an independent judiciary.

Globalism has been an attempt to free cross-border business from the whims of politics. Since history never moves in a straight line, we shouldn’t be surprised that, after the 2008 financial meltdown, this trend towards global integration is facing major headwinds, which currently happen to gust heavily from the White House.

Trudeau, one of the few heads of state who can explain what Quantum Computing is about, will do his best on his state visit to Washington to ensure that free trade continues across the world’s longest open border, but Canada can’t take anything for granted.

Which brings me around to the topic this blog is most emphatically about: Canada punches way above its weight when it comes to Quantum Computing, not least because of the inordinate generosity of Mike Lazaridis, who was instrumental in creating the Perimeter Institute and gave his alma mater the fantastic Institute for Quantum Computing (IQC). The facility even has its own semiconductor fab and offers tremendous resources to its researchers. There have been some start-up spin-offs, and there is little doubt that this brings high-tech jobs to the region, but when I read headlines like the one about the quantum socket, I can’t help but wonder if Canada is again content to play second fiddle. It’s a fine piece of engineering, but let’s be real: when everything is said and done, it’s still just a socket, a thing you plug into the really important piece, your quantum chip. I am sure Google will be delighted to use this solid piece of Canadian engineering, and we may even get some nice press about it, just as we did for the Canadarm on the Space Shuttle, another example of top-notch technology that would have gone nowhere without American muscle.

It’s what you plug in that counts.

But the Quantum Computing frontier is not like access to space. Yes, it takes some serious money to leave a mark, but I cannot help but think that Canada got much better bang for its loonies when the federal BDC fund invested early in D-Wave. The scrappy start-up stretched those dollars much further, and combined great ambition with brilliant pragmatism. It is the unlikely story of a small Canadian company driving development and inspiring an American giant like Google to jump in with both feet.


Canada needs this kind of spirit. Let’s be good neighbours, sure, but also ambitious. Let there be a Canadian QC chip for the Canadian quantum socket.


Big Challenges Require Bold Visions

Unless we experience a major calamity that resets the world’s economy to a much lower output, it is a foregone conclusion that the world will miss the CO2 target needed to limit global warming to 1.5°C. This drives a slow-motion, multi-faceted disaster, exacerbated by ongoing growth in the global population, which puts additional stress on the environment. Unsurprisingly, we are in the midst of Earth’s sixth mass extinction event.

It just takes three charts to paint the picture:

1) World Population Growth

2) Temperature Increase

3) Species Extinction

We shouldn’t delude ourselves into believing that our species is safe from adding itself to the extinction list. The next decades are pivotal for stopping the damage we are doing to our planet. Given our current technologies, we have every reason to believe that we can stabilize population growth and replace fossil-fuel-dependent technologies with CO2-neutral ones, but the processes already set in motion will produce societal challenges of unprecedented proportions.

Population growth and the need for arable land keep pushing people ever closer to formerly isolated wildlife, most often with fatal consequences for the latter, but sometimes the damage goes both ways. HIV, Ebola and bird flu, for instance, are all health threats that were originally contracted from animal reservoirs (zoonosis), and we can expect more such pathogens, many of which will not have been observed before. At the same time, old pathogens can easily resurface. Take tuberculosis: even in an affluent country with good public health infrastructure, such as Canada, we see over a thousand new cases each year, and, as in other parts of the world, multi-resistant TB strains are on the rise.

Immunization and health management require functioning governmental bodies. In a world that will see ever more refugee crises and civil strife, the risk of disruptive pandemics will increase massively. The recent Ebola outbreak is a case study in how such mass infections can overwhelm the medical infrastructure of developing countries, and it should serve as a wake-up call to the first world to help establish a global framework for managing these kinds of health risks. The key is to identify emerging threats as early as possible, since the chances of containment and mitigation improve dramatically the sooner action can be taken.

Such a framework will require robust and secure data collection and dissemination capabilities, as well as advanced predictive analytics that can build on all available pooled health data and established medical ontologies. Medical doctor and bioinformatics researcher Andrew Deonarine has envisioned such a system, which he has dubbed Signa.OS, and he has assembled a stellar team with members from his alma mater Cambridge, UBC, and Harvard, where he will soon start post-graduate work. Any such system should be designed not just with our current hardware in mind, but with the technologies that will be available within the decade. That is why quantum-computer-accelerated Bayesian networks are an integral part of the analytical engine for Signa.OS. We are especially excited to have Prof. Marco Scutari from Oxford join the Signa.OS initiative; his work on Bayesian network training in R served as a guiding star for our Python implementation.
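
To give a flavour of the kind of inference such an analytical engine performs, here is a deliberately tiny, self-contained Bayesian network in plain Python. The variables and numbers are invented for illustration and have nothing to do with the actual Signa.OS models.

```python
# A toy Bayesian network: Outbreak -> ClinicReports, Outbreak -> LabSignal.
# All numbers are invented for illustration; a real system would learn these
# conditional probabilities from pooled health data.

p_outbreak = {True: 0.01, False: 0.99}          # prior P(outbreak)
p_reports_high = {True: 0.80, False: 0.10}      # P(clinic reports spike | outbreak)
p_lab_positive = {True: 0.70, False: 0.05}      # P(lab signal positive | outbreak)

def posterior_outbreak(reports_high, lab_positive):
    """P(outbreak | evidence) by brute-force enumeration over the network."""
    joint = {}
    for outbreak in (True, False):
        pr = p_reports_high[outbreak] if reports_high else 1 - p_reports_high[outbreak]
        pl = p_lab_positive[outbreak] if lab_positive else 1 - p_lab_positive[outbreak]
        joint[outbreak] = p_outbreak[outbreak] * pr * pl
    return joint[True] / sum(joint.values())

print(posterior_outbreak(reports_high=True, lab_positive=True))  # ~0.53 despite the weak prior
```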

Our young company, artiste-qb.net, which I recently started with Robert R. Tucci, could not have wished for a more meaningful research project to prove our technology.

[This video was produced by Andrew for entering the MacArthur challenge.]


To Reach Quantum Supremacy We Need to Cross the Entanglement Firepoint

You can get ignition at the flash point, but it won't last.

There’s been a lot of buzz recently about Quantum Computing. Heads of state are talking about it, and lots of money is being poured into research. You may think the field is truly on fire, but could it still fizzle out? When you are dealing with fire, what makes the critical difference between just a flash in the pan, and reaching the firepoint when that fire is self-sustaining?

Finding an efficient quantum algorithm is all about quantum speed-up, which has, understandably, mesmerized theoretical computer scientists. Their field had been boxed into the Turing-machine mold, and now, for the first time, there was something that demonstrably went beyond what is possible with this classical, discrete model.

Quantum speed-up is all about scaling behaviour: establishing that a quantum algorithm’s resource requirements grow more slowly with the problem size than those of the best known classical algorithm.

While this is a profound theoretical insight, it doesn’t necessarily translate into practice right away, because the favourable scaling may only come into play at a problem size far beyond anything technically realizable for the foreseeable future.

For instance, Shor’s algorithm requires tens of thousands of pristine, entangled qubits in order to become useful. While no longer science fiction, this kind of gate-based QC is still far off. On the other hand, Matthias Troyer et al. demonstrated that quantum chemical calculations can be expected to outmatch any classical supercomputer with much more modest resources (qubits numbering in the hundreds, not thousands).
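
To make the scaling argument concrete, here is a rough comparison of the textbook asymptotic costs of factoring an n-bit number: roughly n³ operations for Shor’s algorithm versus the sub-exponential growth of the general number field sieve. All constant factors are deliberately ignored, so the absolute numbers mean nothing; only the growth rates matter.

```python
import math

def shor_ops(bits):
    # Shor's algorithm: cost grows roughly like n^3 in the bit length n
    # (polylog factors and all constants ignored).
    return bits ** 3

def gnfs_ops(bits):
    # General number field sieve, the best known classical factoring method:
    # exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)), constants again ignored.
    ln_n = bits * math.log(2)           # ln N for an n-bit number N
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

for bits in (512, 1024, 2048, 4096):
    print(f"{bits:5d} bits   quantum ~{shor_ops(bits):.1e}   classical ~{gnfs_ops(bits):.1e}")
```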

The condition of having a quantum computing device perform tasks outside the reach of any classical technology is what I’d like to define as quantum supremacy (a term coined by John Preskill that I first heard used by DK Matai).

Quantum speed-up virtually guarantees that you will eventually reach quantum supremacy for the problem posed (i.e. factoring, in Shor’s algorithm’s case), but it doesn’t tell you anything about how quickly you will get there. Also, while quantum speed-up is a useful criterion for eventually reaching quantum supremacy, it is not a necessary one for outperforming conventional supercomputers.

We are just now entering a stage where quantum annealing chips can tackle commercially interesting problems. The integration density of these chips is still minute in comparison to that of established silicon-based ones (for quantum chips there is still lots of room at the bottom).

D-Wave just announced the availability of a 2000-qubit chip for early next year (h/t Ramsey and Rolf). If the chip’s integration density can continue to double every 16 months, then quantum algorithms that don’t scale better than classical ones (or only modestly so) may at some point still end up outperforming all classical alternatives, assuming that we are indeed living in the end times of Moore’s law.
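
To put that doubling assumption into numbers, here is a naive back-of-the-envelope extrapolation from the announced 2000-qubit chip. This is not a roadmap, just arithmetic.

```python
# Naive extrapolation: qubit count if integration density keeps doubling
# every 16 months, starting from a 2000-qubit chip in early 2017.
# Illustration only; real hardware roadmaps are obviously not this tidy.
start_year, start_qubits, doubling_months = 2017, 2000, 16

for years_out in range(0, 11, 2):
    doublings = years_out * 12 / doubling_months
    print(f"{start_year + years_out}: ~{start_qubits * 2 ** doublings:,.0f} qubits")
```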

From a practical (and hence commercial) perspective, these algorithms won’t be any less lucrative.

Yet the degree to which quantum correlations can be technologically controlled remains the key to going beyond what the current crop of “wild qubits” on a non-error-corrected adiabatic chip can accomplish. That is why we see Google investing in its own R&D, hiring Prof. Martinis from UCSB, an effort that has already resulted in a nine-qubit prototype chip that combines “digital” error correction (ECC) with quantum annealing (AQC).

D-Wave is currently also re-architecting its chip, and it is a pretty safe bet that the new design will incorporate some form of error correction. More intriguingly, the company now also talks about a roadmap towards universal quantum computing (see, for instance, the second-to-last paragraph of this article).

It is safe to say that before we get undeniable quantum supremacy, we will have to achieve a level of decoherence control that allows for essentially unlimited qubit scale-out. For instance, IBM researchers are optimistic that they’ll get there as soon as they incorporate a third layer of error correction into their quantum chip design.

D-Wave ignited the commercial quantum computing field. And with the efforts underway to build ECC into QC hardware, I am more optimistic than ever that we are very close to the ultimate firepoint where this technological revolution becomes unstoppable. Firepoint Entanglement is drawing near, and when these devices enter the market, you will need software that can bring Quantum Supremacy to bear on the hardest challenges humanity faces.

This is why I teamed up with Robert (Bob) Tucci, who pioneered an inspired way to describe quantum algorithms (and arbitrary quantum systems) with a framework that extends Bayesian networks (B-nets, sometimes also referred to as belief networks) into the quantum realm. He did this in such a manner that an IT professional who knows this modelling approach, and is comfortable with complex numbers, can pick it up without having to go through a quantum physics boot camp. It was this reasoning at a higher level of abstraction that enabled Bob to come up with the concept of CMI entanglement (sometimes also referred to as squashed entanglement).

An IDE built on this paradigm will allow us to tap into the new quantum resources as they become available, and to develop intuition for this new frontier in information science with a visualization that goes far beyond a simple circuit model. The latter also suffers from the fact that in the quantum realm some classical logic gates (such as OR and AND) are not allowed, because they are not reversible, which can be rather confusing for a beginner. QB-nets, on the other hand, fully embed and encompass classical networks, so any software that implements QB-nets can also be used for standard Bayesian network use cases, and the two can be freely mixed in hybrid nets. (This corresponds to density matrices that include classical thermodynamic probabilities.)
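
Here is a deliberately stripped-down toy, in my own ad-hoc notation rather than Bob’s actual QB-net formalism or our code, that captures the key idea: network entries are complex amplitudes, outcome probabilities come from squared magnitudes, and an ordinary Bayesian network is recovered when the amplitudes are simply square roots of classical probabilities.

```python
# Toy two-node "quantum Bayesian network": node X carries an amplitude table
# A(x), its child Y a conditional amplitude table A(y|x). For fully observed
# nodes the joint probability is |A(x) * A(y|x)|^2.
# Illustrative toy only; not the QB-net formalism and not our actual code.

amp_x = {0: 1 / 2 ** 0.5, 1: 1j / 2 ** 0.5}    # complex amplitudes, squared magnitudes sum to 1
amp_y_given_x = {
    (0, 0): 1.0, (1, 0): 0.0,                  # keys are (y, x)
    (0, 1): 0.0, (1, 1): 1.0,
}

def joint_prob(x, y):
    return abs(amp_x[x] * amp_y_given_x[(y, x)]) ** 2

print({(x, y): joint_prob(x, y) for x in (0, 1) for y in (0, 1)})
# ≈ {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

# Classical special case: choose amplitudes as square roots of ordinary
# probabilities with zero phase, and |A(x) * A(y|x)|^2 reduces to P(x) * P(y|x),
# i.e. a standard Bayesian network.
```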

So far, the back-end for the QB-net software is almost complete, as is a stand-alone compiler/gate-synthesizer. Our end goal is to build an environment every bit as complete as Microsoft’s Liqui|>. Microsoft makes this software available for free, and ironically distributes it via GitHub, although the product is entirely closed source (but at least they are not asking for your firstborn if you develop with Liqui|>). Microsoft has also stepped up its patent activity in this space, in all likelihood to enable a shake-down business model similar to the one that lets it derive a huge amount of revenue from the Linux-based (Google-developed) Android platform. We don’t want the future of computing to be held in a stranglehold by Microsoft, which is why our software is Open Source, and we are looking to build a community of QC enthusiasts within and outside of academia to carry the torch of software freedom. If you are interested, please head over to our GitHub repository. Every little bit helps: feedback, testing, documentation, and of course coding. Come and join the quantum computing revolution!


Fusion is Hotter Than You May Think

As I am preparing to get back into more regular blogging on Quantum Computing, I learned that my second-favourite Vancouver-based start-up, General Fusion, got some well-deserved social media traction. Michel Laberge’s TED talk has now been viewed over a million times (h/t Rolf D). Well deserved, indeed.

This reminded me of a Milken Institute fusion panel from earlier this year, which seems to have had less reach than TED, but is no less interesting. It also features Michel, together with representatives from other fusion ventures (Tri Alpha Energy and Lockheed Martin), as well as MIT’s Dennis Whyte. The panel makes a compelling case for why we are seeing private money flow into this sector now, and why ITER shouldn’t be the only iron we have in the fire.

Canadian PM Justin Trudeau talks Quantum Computing

He is already fluently bilingual, but he also speaks pretty good Quantum. This isn’t half bad for a head of state:

If you also want to impress your friends like this, I recommend Sabine Hossenfelder’s Quantum lingo crash course.

This bodes well for the prospects of seeing some federal initiatives for the emerging Canadian QC industry in the not too distant future.


Late Wave

It took only one scientist to predict them, but a thousand to get them confirmed (1,004 to be precise). I guess if the confirmation of gravitational waves couldn’t draw me out of my blogging hiatus, nothing could, although I am obviously catching a very late wave. The advantage of this: I can compile and link to all the best content that has already been written on the topic.

Of course, this latest spectacular confirmation will unfortunately not change the minds of those quixotic individuals who devote themselves to fighting the “wrongness” of all of Einstein’s work (I once had the misfortune of encountering the maker of this abysmal movie; safe to say I have had more meaningful conversations with Jehovah’s Witnesses).

But given the track record of science news journalism, what are the chances that this may be a fluke similar to the BICEP news that turned out to be far less solid than originally reported? Or another repeat of the faster than light neutrino measurements?

The beauty of a direct experimental measurement like LIGO’s is that the uncertainty can be quantified statistically. This is a “5-sigma” event, meaning the probability that random noise alone would produce such a signal is roughly one in 3.5 million. The graph at the bottom shows that the measurement matches the theoretically expected signal from a black hole merger so closely that the similarity is immediately compelling even for a non-scientist.
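
For the statistically inclined, the tail probability behind a “5-sigma” claim is a one-liner to compute (assuming Gaussian noise):

```python
# The one-sided Gaussian tail probability behind a "5-sigma" detection:
# the chance that pure noise fluctuates 5 standard deviations above the mean.
from scipy.stats import norm

print(norm.sf(5))   # ~2.9e-07, i.e. roughly 1 in 3.5 million
```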

But more importantly, unlike faster-than-light neutrinos, we have every reason to believe that gravitational waves exist. No new physics is required, and the phenomenon is strictly classical, in the sense that General Relativity yields a classical field equation that, unlike Quantum Mechanics, adheres to physical realism. That is why this discovery does nothing to advance the search for a unification of gravity with the other three forces. The importance of this discovery lies elsewhere, but is no less profound. Sabine Hossenfelder says it best:

Hundreds of millions of years ago, a primitive form of life crawled out of the water on planet Earth and opened their eyes to see, for the first time, the light of the stars. Detecting gravitational waves is a momentous event just like this – it’s the first time we can receive signals that were previously entirely hidden from us, revealing an entirely new layer of reality.

The importance of this really can’t be overstated. The universe is a big place and we keep encountering mysterious observations. There is, of course, the enduring puzzle of dark matter; lesser known may be the fast radio bursts, first observed in 2007, which are believed to be among the highest-energy events known to modern astronomy. Until recently it was thought that one-off cataclysmic events were the underlying cause, but these theories had to be thrown out when it was observed that the signals can repeat. (The Canadian researcher who published on this recently received the highest Canadian science award, and the CBC has a nice interview with her.)

We are a long way off from having good spatial resolution with the current LIGO setup. The next logical step is, of course, to drastically increase the scale of the device, and when it comes to laser interferometry this can be done on a much grander scale than with other experimental set-ups (e.g. accelerators). The eLISA space-based gravitational wave detector project is well underway. And I wouldn’t yet count out advanced quantum interferometry as a means to drastically improve the achievable resolution, even if it couldn’t beat LIGO to the punch.

After all, it was advanced interferometry that drove the hunt for gravitational waves for many decades. One of its pioneers, Heinz Billing, was determined to bring about and witness their discovery, reportedly stating that he refused to die before it was made. The universe was kind to him: at age 101 he is still around and got his wish.

LIGO measurement of gravitational waves. Shows the gravitational wave signals received by the LIGO instruments at Hanford, Washington (left) and Livingston, Louisiana (right) and comparisons of these signals to the signals expected due to a black hole merger event.

D-Wave – Fast Enough to Win my Bet?

Really Would Like to Get That Raclette Cheese.

Last summer I had to ship a crate of maple syrup to Matthias Troyer at the ETHZ in Switzerland. Under the conditions we had agreed on for our performance bet, the D-Wave One could not, at that point, show a clear performance advantage over a conventional, modern CPU running fine-tuned optimization code. The machine held its own, but there weren’t any problem classes to point to that really demonstrated massive performance superiority.

Impressive benchmark graph. Next on my Christmas wishlist: A decisive widening of the gap between the green QMC curve and the blue D-Wave line as the problem size increases (as is the case when compared to the red Simulated Annealing curve).


The big news to take away from the recent Google/D-Wave performance benchmark is that, on certain problem instances, the D-Wave machine clearly shines. A factor of 100 million over a Quantum Monte Carlo simulation is nothing to sneeze at. This doesn’t mean that I would now automatically win my bet with Matthias if we were to repeat it with the D-Wave Two, but it would certainly make it much more interesting.

One advantage of being hard-pressed to find time for blogging is that, once I get around to commenting on recent developments, most other reactions are already in. Matthias provided this excellent write-up, and the former D-Wave critic-in-chief remains in retirement. Scott Aaronson’s blog entry on the matter strikes a (comparatively) conciliatory tone. One of his comments explains one of the reasons for this change:

“[John Martinis] told me that some of the engineering D-Wave had done (e.g., just figuring out how to integrate hundreds of superconducting qubits while still having lines into them to control the couplings) would be useful to his group. That’s one of the main things that caused me to moderate a bit (while remaining as intolerant as ever of hype).”

Scott also gave a pretty balanced interview to the MIT News (although I have to subtract a star on style for working in a dig at Geordie Rose – clearly the two won’t become best buds in this lifetime).

Hype is generally, and rightly, scorned in the scientific community. And when it is pointed out (for instance, when the black hole information loss problem had been “solved”), the scientists involved usually end up on the defensive.

Buddy the Elf believes anything Steve Jurvetson ever uttered and then some.

Of course, business follows very different rules, more along the lines of the Donald Trump rules of attention: any BS will do as long as it captures an audience. Customers are used to these kinds of commercial exaggerations, and so I am always a bit puzzled by the urge to debunk D-Wave “hype”. To me it feels almost a bit patronizing. The average Joe is not like Buddy the Elf, the unlikely hero of my family’s favorite Christmas movie. When Buddy comes to NYC and sees a diner advertising the world’s best coffee, he takes this at face value and goes crazy over it. The average Joe, on the other hand, has been thoroughly desensitized to high-tech hype. He knows that neither Google Glass nor the Apple Watch will really change his life forever, nor will he believe Steve Jurvetson that the D-Wave machines will outperform the universe within a couple of years.

Steve, on the other hand, does what every good VC businessman is supposed to do for a company he has invested in, i.e. create hype. The world has become a virtual bazaar, and your statements have to be outrageous and novel in order to be heard over the noise. What he wants to get across is that the D-Wave machines will grow in performance faster than conventional hardware. Condensing this into Rose’s Law is the perfect pitch vehicle for that: hype with a clear purpose.

People like to pick an allegiance and cheer for their “side”. That narrative has dominated the D-Wave story for many years, and it made for easy blogging, but I won’t miss it. The hypers gonna hype, the haters gonna hate, but by now the nerds should know to trust the published papers.

Max Planck famously quipped that science advances one funeral at a time, because even scientists have a hard time acting completely rationally and adjusting their stances when confronted with new data. This is the 21st century; here’s hoping that the scientific community has shed this kind of rigidity, even while most of humanity remains as tribal as ever.

Riding the D-Wave

Update: Thanks to everybody who keeps pointing me to relevant news (Ramsey, Rolf, Sol and everybody else my overtired brain may not recall at this time).

There is no doubt that D-Wave is on a roll:

And then there’s the countdown to what is billed as a D-Wave related watershed announcement from Google, coming December 8th. Could this be an early Christmas present for D-Wave investors?


~~~~~~


Back in the day before he re-resigned as D-Wave’s chief critic, Scott Aaronson made a well-reasoned argument as to why he thought this academic, and at times vitriolic, scrutiny was warranted. He argued that a failure of D-Wave to deliver a quantum speed-up would set the field back, similar to the AI winter that was triggered by Marvin Minsky’s Perceptrons book.

Fortunately, quantum annealers are not perceptrons. For the latter, it can be rigorously proven that a single-layer perceptron cannot even learn a function as simple as XOR, which makes it of very limited use. Ironically, by the time the book was published, multilayer perceptrons, a concept that is now fundamental to all deep learning algorithms, were already known, but in the ensuing backlash research funding for those dried up as well. The term “perceptron” became toxic and is now all but extinct.

Could D-Wave be derailed by a proof showing that quantum annealing can, under no circumstances, deliver a quantum speed-up? To me this seems very unlikely, not only because I expect no such proof exists, but also because, even if it did, there would still be a practical speed-up to be had. If D-Wave manages to double its integration density at the same rapid clip as in the past, its machines will eventually outperform any classical computing technology in terms of annealing performance. This article (h/t Sol) expands on this point.

So far there is no sign that D-Wave will slow its manic pace. The company recently released its latest chip generation, featuring quantum annealing with an impressive 1000+ qubits (in practice, the usable number will be smaller, as qubits are consumed for problem encoding and software ECC). This was followed by a detailed test under the leadership of Catherine McGeoch, and it will be interesting to see what Daniel Lidar, and other researchers with access to D-Wave machines, will find.

My expectation has been from the get-go that D-Wave will accelerate the development of this emerging industry, and attract more money to the field. It seems to me that this is now playing out.

Intel recently (and finally as Robert Tucci points out) entered the fray with a $50M investment. While this is peanuts for a company of Intel’s size, it’s an acknowledgement that they can’t leave the hardware game to Google, IBM or start-ups such as Rigetti.

On the software side, there’s a cottage industry of software start-ups hitching their wagons to the D-Wave engine. Many of these are still in stealth mode or at an early stage, such as QC Ware, while others are already starting to receive some well-deserved attention.

Then there are also smaller vendors of established software and services that already have a sophisticated understanding of the need to be quantum ready. The latter is something I expect to see much more of in the coming years as the QC hardware race heats up.

The latest big-name entry into the quantum computing arena was Alibaba, but at this time it is not clear what this Chinese initiative will focus on. Microsoft, on the other hand, is a known quantity: it will not get aboard the D-Wave train, but will focus exclusively on gate-based quantum computing.

Other start-ups, like our artiste-qb.net, straddle the various QC hardware approaches. In our case, this comes “out of the box”, because our core technology, Quantum Bayesian Networks, as developed by Robert Tucci, is an ideal tool for abstracting away the underlying architecture. Another start-up that is similarly architecture-agnostic is Cambridge QC. The recent news about this company brings to mind how quickly reality can imitate satire: while short of the $1B seed round of this April Fool’s spoof, the influx of $50M from the Chile-based Grupo Arcano is an enormous amount for a QC software firm that, as far as I know, holds no patents.

Some astoundingly big bets are now being placed in this field.