Category Archives: Quantum Computing

The Creative Destruction Lab Reaches a New Quantum Level

Planet earth as seen from Toronto.

If excitement were a necessary and sufficient criterion for reaching higher quantum levels, they were certainly achieved yesterday morning in room 374 of the Rotman School of Management here in Toronto (aka “the center of the universe”, as our modest town is known to Canadians outside the GTA).

In Canadian start-up circles, the Creative Destruction Lab (CDL) is a household name by now, and ever since the program went global, its recognition has reached far past the borders of Canada.

The CDL kicked off with its first cohort in the quantum machine learning stream today, and our company Artiste has been honoured to be part of this exciting new chapter.

For a casual observer, the CDL may look like just another effort to bring venture capital and start-ups together, with some MBA students thrown in for that entrepreneurial spirit. That is, it may appear to be just another glorified pitch competition. But nothing could be further from the truth, as the program has essentially been built around an academic hypothesis about why there is so little start-up activity outside Silicon Valley, and why this kind of ecosystem has been so difficult to replicate. It certainly is not for lack of scientific talent, capital, or trying.

Ajay Agrawal, the founder of the Creative Destruction Lab, beautifully laid out the core hypothesis around which he structured the CDL. He suspects a crucial market mismatch, in that start-up founders are under-supplied with one critical resource: sound entrepreneurial judgment. And the latter can make all the difference. He illustrated this with a somewhat comical email from the nineties, written by a Stanford Ph.D. student pitching a project to an Internet provider, arguing that the technology his small team would build could be extremely profitable, and indicating that they’d love to build it on a fixed salary basis. A handwritten note from a Stanford business advisor was scribbled on the email print-out, suggesting they realize the project as their own start-up venture. That company, of course, went on to become Google.

The linked chart should not be misconstrued as sound investment advice.
Two pretty things that are not like the other at all, but the mania is very much the same.

Ajay’s thinking throws some serious shade on the current ICO craze, which, like many in the start-up world, I’ve been following very closely. Blockchain technology has some truly disruptive potential way beyond crypto-currencies, and I see many synergies between this trustless distributed computing environment and how quantum information will interface with the classical world.

From a start-up’s standpoint, an ICO looks extremely attractive, but like all crowdfunding efforts it still requires a good campaign. However, it all hinges on a whitepaper and technology rather than a business plan, and those typically come pretty naturally to technical founders. There are also very few strings attached:

  • The (crypto-)money that comes in is essentially anonymous.
  • Fundraising occurs on a global basis.
  • The process is still essentially unregulated in most jurisdictions.

But if the CDL got it right, ICOs are missing the most critical ingredient for making a venture successful: sound entrepreneurial advice.

There is little doubt in my mind that we are currently experiencing something akin to tulip mania in the crypto-currency and ICO arena, but the market for tulips did not vanish after the 1637 mania ran its course, and neither will ICOs. For my part, I expect something of a hybrid model to emerge: VC seed rounds augmented by ICOs.

From an entrepreneur’s standpoint, this would be the best of both worlds.

Let’s aspire to be more than just a friendly neighbour

The Canadarm – a fine piece of Canadian technology that would have gone nowhere without the US.

This blog is most emphatically not about politics, and although it has often been observed that everything is political, that exaggeration has actually become less true the more often it is raised.

Whereas in a feudal society all activity happens at the pleasure of the ruler, in a liberal democracy citizens and scientists alike don’t have to pay constant attention to politics; their freedoms are guarded by an independent judiciary.

Globalism has been an attempt to free cross-border business from the whims of politics. Since history never moves in a straight line, we shouldn’t be surprised that, after the 2008 financial meltdown, this trend towards more global integration is facing major headwinds, which currently happen to gust heavily from the White House.

Trudeau, who is one of the few heads of state who can explain what quantum computing is about, will do his best on his state visit to Washington to ensure that free trade continues across the world’s longest open border, but Canada can’t take anything for granted. Which brings me around to the topic that this blog is most emphatically about: Canada is punching way above its weight when it comes to quantum computing, not least because of the inordinate generosity of Mike Lazaridis, who was instrumental in creating the Perimeter Institute and gave his alma mater the fantastic Institute for Quantum Computing (IQC). This facility even has its own semiconductor fab, and offers tremendous resources to its researchers.

There have been some start-up spin-offs, and there is little doubt that this brings high-tech jobs to the region, but when I read headlines like the one about the quantum socket, I can’t help but wonder if Canada is again content to play second fiddle. It’s a fine piece of engineering, but let’s be real: after everything is said and done, it’s still just a socket, the thing you plug the really important piece, your quantum chip, into. I am sure Google will be delighted to use this solid piece of Canadian engineering, and we may even get some nice press about it, just as we did for the Canadarm on the space shuttle, another example of top-notch technology that would have gone nowhere without the American muscle.

It’s what you plug in that counts.

But the quantum computing frontier is not like access to space. Yes, it takes some serious money to leave a mark, but I cannot help but think that Canada got much better bang for its loonies when the federal BDC fund invested early in D-Wave. The scrappy start-up stretched its dollars much further, and combined great ambition with brilliant pragmatism. It is the unlikely story of a small Canadian company driving development and inspiring an American giant like Google to jump in with both feet.


Canada needs this kind of spirit. Let’s be good neighbours, sure, but also ambitious. Let there be a Canadian QC chip for the Canadian quantum socket.


Big Challenges Require Bold Visions

Unless we experience a major calamity resetting the world’s economy to a much lower output, it is a foregone conclusion that the world will miss the CO2 target to limit global warming to 1.5°C. This drives a slow-motion, multi-faceted disaster, exacerbated by the ongoing growth in global population, which puts additional stress on the environment. Unsurprisingly, we are in the midst of earth’s sixth mass extinction event.

It just takes three charts to paint the picture:

1) World Population Growth

2) Temperature Increase

3) Species Extinction

We shouldn’t delude ourselves into believing that our species is safe from adding itself to the extinction list. The next decades are pivotal in stopping the damage we do to our planet. Given our current technologies, we have every reason to believe that we can stabilize population growth and replace fossil-fuel-dependent technologies with CO2-neutral ones, but the processes already set in motion will produce societal challenges of unprecedented proportion.

Population growth and the need for arable land keep pushing people ever closer to formerly isolated wildlife. Most often with fatal consequences only for the latter, but sometimes the damage goes both ways. HIV, Ebola and bird flu, for instance, are all health threats that were originally contracted from animal reservoirs (zoonoses), and we can expect more such pathogens, many of which will not have been observed before. At the same time, old pathogens can easily resurface. Take tuberculosis, for instance: even in an affluent country with good public health infrastructure, such as Canada, we see over a thousand new cases each year, and, as in other parts of the world, multi-resistant TB strains are on the rise.

Immunization and health management require functioning governmental bodies. In a world that will see ever more refugee crises and civil strife, the risk of disruptive pandemics will increase massively. The recent outbreak of Ebola is a case study in how such mass infections can overwhelm the medical infrastructure of developing countries, and should serve as a wake-up call to the first world to help establish a global framework that can manage these kinds of health risks. The key is to identify emerging threats as early as possible, since the chances of containment and mitigation improve dramatically the sooner action can be taken.

Such a framework will require robust and secure data collection and dissemination capabilities, and advanced predictive analytics that can build on all available pooled health data as well as established medical ontologies. Medical doctor and bioinformatics researcher Andrew Deonarine has envisioned such a system, which he has dubbed Signa.OS, and he has assembled a stellar team including members from his alma mater Cambridge, UBC, and Harvard, where he will soon start post-graduate work. Any such system should not be designed with just our current hardware in mind, but with the technologies that will be available within the decade. That is why quantum-computer-accelerated Bayesian networks are an integral part of the analytical engine for Signa.OS. We are especially excited to also have Prof. Marco Scutari from Oxford join the Signa.OS initiative; his work on Bayesian network training in R is stellar, and served as a guiding star for our Python implementation.
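To give a flavour, at toy scale, of the kind of probabilistic reasoning such an analytical engine builds on, here is a deliberately tiny sketch (my own illustration, not the Signa.OS design): a two-node Bayesian network linking an outbreak to a spike in symptom reports, queried with Bayes’ rule. All numbers are made up for illustration.

```python
# Minimal toy example (illustrative numbers only, not Signa.OS):
# a two-node Bayesian network Outbreak -> ReportedSpike, queried with Bayes' rule.
p_outbreak = 0.01                      # prior P(Outbreak = yes)
p_spike_given_outbreak = 0.85          # P(Spike | Outbreak = yes)
p_spike_given_no_outbreak = 0.05       # false-alarm rate, P(Spike | Outbreak = no)

# Posterior P(Outbreak | Spike observed) by Bayes' rule
p_spike = (p_spike_given_outbreak * p_outbreak
           + p_spike_given_no_outbreak * (1 - p_outbreak))
posterior = p_spike_given_outbreak * p_outbreak / p_spike
print(f"P(outbreak | spike in reports) = {posterior:.2f}")   # ~0.15
```

Real surveillance networks would of course have many more nodes and learned conditional probability tables, which is where the computational burden, and the case for quantum acceleration, comes from.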

Our young company, artiste-qb.net, which I recently started with Robert R. Tucci, could not have wished for a more meaningful research project to prove our technology.

[This video was produced by Andrew for his entry to the MacArthur challenge.]


To Reach Quantum Supremacy We Need to Cross the Entanglement Firepoint

You can get ignition at the flash point, but it won't last.

There’s been a lot of buzz recently about quantum computing. Heads of state are talking about it, and lots of money is being poured into research. You may think the field is truly on fire, but could it still fizzle out? When you are dealing with fire, what makes the critical difference between a mere flash in the pan and reaching the firepoint, at which the fire becomes self-sustaining?

Finding an efficient quantum algorithm is all about quantum speed-up, which has, understandably, mesmerized theoretical computer scientists. Their field had been boxed into the Turing machine mold, and now, for the first time, there was something that demonstrably went beyond what is possible with this classical, discrete model.

Quantum speed-up is all about scaling behaviour. It’s about establishing that a quantum algorithm’s resource requirements will grow more slowly with the problem size than those of the best known classical algorithm.

While this is a profound theoretical insight, it doesn’t necessarily translate into practice right away, because this scaling behaviour may only come into play at a problem size that requires resources far beyond anything technically realizable for the foreseeable future.

For instance, Shor’s algorithm requires tens of thousands of pristine, entangled qubits in order to become useful. While not sci-fi anymore, this kind of gate-based QC is still far off. On the other hand, Matthias Troyer et al. demonstrated that you can expect to perform quantum chemistry calculations that will outmatch any classical supercomputer with much more modest resources (qubits numbering in the hundreds, not thousands).
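To make the scaling point concrete, here is a rough, self-contained sketch (my own illustration, not taken from any paper): Shor’s algorithm scales roughly polynomially with the bit-length of the number to be factored, while the general number field sieve scales sub-exponentially. The overhead constant below is a pure placeholder; the point is that the asymptotic shapes alone don’t tell you at what problem size a quantum machine actually pulls ahead once realistic per-operation costs enter.

```python
import math

def shor_cost(n_bits, overhead=1e6):
    # Polynomial scaling of Shor's algorithm; 'overhead' is a made-up constant
    # standing in for error correction and per-gate cost.
    return overhead * n_bits ** 3

def gnfs_cost(n_bits):
    # Sub-exponential scaling of the general number field sieve (classical).
    ln_n = n_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

for n in (256, 512, 1024, 2048, 4096):
    print(f"{n:5d} bits   quantum ~{shor_cost(n):.2e}   classical ~{gnfs_cost(n):.2e}")
```

Change the overhead and the crossover moves by orders of magnitude, which is exactly why a proven speed-up and a practically useful machine are two different milestones.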

Having a quantum computing device perform tasks outside the reach of any classical technology is the condition I’d like to define as quantum supremacy (a term coined by John Preskill that I first heard used by DK Matai).

Quantum speed-up virtually guarantees that you will eventually reach quantum supremacy for the posed problem (i.e. factoring in Shor’s algorithm’s case), but it doesn’t tell you anything about how quickly you will get there. Also, while quantum speed-up is a useful criterion for eventually reaching quantum supremacy, it is not a necessary one for outperforming conventional supercomputers.

We are just now entering a stage where we see the kind of quantum annealing chips that can tackle commercially interesting problems. The integration density of these chips is still minute in comparison to that of established silicon chips (for quantum chips there is still lots of room at the bottom).

D-Wave just announced the availability of a 2000 qubit chip for early next year (h/t Ramsey and Rolf). If the chip’s integration density can continue to double every 16 months, then quantum algorithms that don’t scale better than classical ones (or only modestly so) may at some point still end up outperforming all classical alternatives, assuming that we are indeed living in the end times of Moore’s law.
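A quick back-of-the-envelope illustration of that doubling argument (the 2000-qubit starting point and the 16-month doubling period come from the paragraph above; everything else is my own projection, not a vendor roadmap):

```python
# Back-of-envelope projection: qubit count if integration density keeps
# doubling every 16 months, starting from a 2000-qubit chip.
def projected_qubits(months_from_now, start=2000, doubling_period_months=16):
    return start * 2 ** (months_from_now / doubling_period_months)

for years in (2, 4, 6, 8, 10):
    print(f"+{years:2d} years: ~{projected_qubits(12 * years):,.0f} qubits")
```

Sustained for a decade, this simple compounding already lands in the hundreds of thousands of qubits, while classical clock speeds stay flat.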

From a practical (and hence commercial) perspective, these algorithms won’t be any less lucrative.

Yet, the degree to which quantum correlations can be technologically controlled is still the key to going beyond what the current crop of “wild qubits” on a non-error-corrected adiabatic chip can accomplish. That is why we see Google invest in its own R&D, hiring Prof. Martinis from UCSB, and the work has already resulted in a nine-qubit prototype chip that combines “digital” error correction (ECC) with quantum annealing (AQC).

D-Wave is currently also re-architecting its chip, and it is a pretty safe bet that the new design will incorporate some form of error correction as well. More intriguingly, the company now also talks about a road map towards universal quantum computing (see the second-to-last paragraph in this article).

It is safe to say that before we get undeniable quantum supremacy, we will have to achieve a level of decoherence control that allows for essentially unlimited qubit scale-out. For instance, IBM researchers are optimistic that they’ll get there as soon as they incorporate a third layer of error correction into their quantum chip design.

D-Wave ignited the commercial quantum computing field. And with the efforts underway to build ECC into QC hardware, I am more optimistic than ever that we are very close to the ultimate firepoint where this technological revolution will become unstoppable. Firepoint Entanglement is drawing near, and when these devices enter the market, you will need software that will bring quantum supremacy to bear on the hardest challenges that humanity faces.

This is why I teamed up with Robert (Bob) Tucci, who pioneered an inspired way to describe quantum algorithms (and arbitrary quantum systems) with a framework that extends Bayesian networks (B-nets, sometimes also referred to as belief networks) into the quantum realm. He did this in such a manner that an IT professional who knows this modelling approach, and is comfortable with complex numbers, can pick it up without having to go through a quantum physics boot camp. It was this reasoning at a higher abstraction level that enabled Bob to come up with the concept of CMI entanglement (sometimes also referred to as squashed entanglement).

An IDE built on this paradigm will allow us to tap into the new quantum resources as they become available, and to develop intuition for this new frontier in information science with a visualization that goes far beyond a simple circuit model. The latter also suffers from the fact that in the quantum realm some classical logic gates (such as OR and AND) are not allowed, which can be rather confusing for a beginner. QB-nets, on the other hand, fully embed and encompass the classical networks, so any software that implements QB-nets can also be used to implement standard Bayesian network use cases, and the two can be freely mixed in hybrid nets. (This corresponds to density matrices that include classical thermodynamic probabilities.)
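To give a flavour of what “classical B-nets embed into QB-nets” means, here is a deliberately tiny toy sketch (my own illustration, not Bob’s actual QB-net formalism or our API): a quantum node carries complex transition amplitudes, and taking squared magnitudes recovers an ordinary conditional probability table, so a purely classical network is just the special case with no phases.

```python
import numpy as np

# Classical node: conditional probability table P(B | A) for binary A, B
cpt = np.array([[0.9, 0.1],    # P(B=0|A=0), P(B=1|A=0)
                [0.3, 0.7]])   # P(B=0|A=1), P(B=1|A=1)

# Quantum node: complex transition amplitudes A(B | A), phases chosen arbitrarily
amp = np.array([[np.sqrt(0.9), np.sqrt(0.1) * 1j],
                [np.sqrt(0.3), np.sqrt(0.7) * np.exp(1j * np.pi / 4)]])

# |amplitude|^2 recovers the classical table, so a classical B-net is just a
# QB-net whose amplitudes are real and phase-free.
recovered = np.abs(amp) ** 2
assert np.allclose(recovered, cpt)
print(recovered)
```

The interesting quantum behaviour, of course, only shows up once amplitudes from different paths interfere before the squaring step, which is precisely what a plain probability table cannot express.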

So far, the back-end for the QB-net software is almost complete, as is a stand-alone compiler/gate-synthesizer. Our end goal is to build an environment every bit as complete as Microsoft’s Liqui|>. They make this software available for free, and ironically distribute it via Github, although the product is entirely closed source (but at least they are not asking for your firstborn if you develop with Liqui|>). Microsoft also stepped up its patent activity in this space, in all likelihood to allow for a similar shake-down business model as the one that lets it derive a huge amount of revenue from the Linux-based (Google-developed) Android platform. We don’t want the future of computing to be held in a stranglehold by Microsoft, which is why our software is open source, and we are looking to build a community of QC enthusiasts within and outside of academia to carry the torch of software freedom. If you are interested, please head over to our github repository. Any little bit will help: feedback, testing, documentation, and of course coding. Come and join the quantum computing revolution!


Riding the D-Wave

Update: Thanks to everybody who keeps pointing me to relevant news (Ramsey, Rolf, Sol and everybody else my overtired brain may not recall at this time).

There is no doubt that D-Wave is on a roll:

And then there’s the countdown to what is billed as a D-Wave related watershed announcement from Google coming Dec 8th.  Could this be an early Christmas present to D-Wave investors?


~~~~~~


Back in the day before he re-resigned as D-Wave’s chief critic, Scott Aaronson made a well-reasoned argument as to why he thought this academic, and at times vitriolic, scrutiny was warranted. He argued that a failure of D-Wave to deliver a quantum speed-up would set the field back, similar to the AI winter that was triggered by Marvin Minsky’s Perceptrons book.

Fortunately, quantum annealers are not perceptrons. For the latter, it can be rigorously proven that single-layer perceptrons are not very useful (they cannot even learn a simple XOR function). Ironically, at the time the book was published, multilayer perceptrons, a concept that is now fundamental to all deep learning algorithms, were already known, but in the ensuing backlash research funding for those also dried up completely. The term “perceptron” became toxic and is now all but extinct.

Could D-Wave be derailed by a proof that shows that quantum annealing could, under no circumstances, deliver a quantum speed-up? To me this seems very unlikely, not only because I expect that no such proof exists, but also because, even if this was the case, there will still be a practical speed-up to be had. If D-Wave manages to double their integration density at the same rapid clip as in the past, then their machines will eventually outperform any classical computing technology in terms of annealing performance. This article (h/t Sol) expands on this point.

So far there is no sign that D-Wave will slow its manic pace. The company recently released its latest chip generation, featuring quantum annealing with an impressive 1000+ qubits (in practice, the usable number will be smaller, as qubits are consumed for problem encoding and software ECC). This was followed by a detailed test under the leadership of Catherine McGeoch, and it will be interesting to see what Daniel Lidar, and other researchers with access to D‑Wave machines, will find.

My expectation has been from the get-go that D-Wave will accelerate the development of this emerging industry, and attract more money to the field. It seems to me that this is now playing out.

Intel recently (and finally, as Robert Tucci points out) entered the fray with a $50M investment. While this is peanuts for a company of Intel’s size, it’s an acknowledgement that they can’t leave the hardware game to Google, IBM or start-ups such as Rigetti.

On the software side, there’s a cottage industry of software start-ups hitching their wagons to the D-Wave engine. Many of these are still in stealth mode, or early stage such as QC Ware, while others are already starting to receive some well-deserved attention.

Then there are also smaller vendors of established software and services that already have a sophisticated understanding of the need to be quantum ready. The latter is something I expect to see much more in the coming years as the QC hardware race heats up.

The latest big name entry into the quantum computing arena was Alibaba, but at this time it is not clear what this Chinese initiative will focus on. Microsoft, on the other hand, seems to be a known quantity and will not get aboard the D‑Wave train, but will focus exclusively on quantum gate computing.

Other start-ups, like our artiste-qb.net, straddle the various QC hardware approaches. In our case, this comes “out of the box”, because our core technology, Quantum Bayesian Networks, as developed by Robert Tucci, is an ideal tool for abstracting from the underlying architecture. Another start-up that is similarly architecture-agnostic is Cambridge QC. The recent news about this company brings to mind that sometimes reality rather quickly imitates satire. While short of the $1B seed round of this April Fool’s spoof, the influx of $50M from the Chile-based Grupo Arcano is an enormous amount for a QC software firm that, as far as I know, holds no patents.

Some astoundingly big bets are now being placed in this field.


Classic Quantum Confusion

By now I am pretty used to egregiously misleading summarization of physics research in popular science outlets, with the flames sometimes fanned by the researchers themselves. Self-aggrandizing, ignorant papers sneaked into supposedly peer-reviewed journals by non-physicists are also just par for the course.

But this one is in a class of its own. Given the headline and the introductory statement that “a fully classical system behaves like a true quantum computer”, it essentially creates the impression that QC research must be pointless. Only much later does it sneak in the obvious: that an analog emulation, just like one on a regular computer, can’t possibly scale past about 40 qubits due to the exponential growth in required computational resources.

But that’s not the most irritating aspect of this article.

Don’t get me wrong, I am a big fan of classical quantum analog systems. I think they can be very educational, if you know what you are looking at (Spreeuw 1998). The latter paper is actually quoted by the authors, and it is very precise in distinguishing between quantum entanglement and its classical analog. But that’s not what their otherwise fine paper posits (La Cour et al. 2015). The authors write:

“What we can say is that, aside from the limits on scale, a classical emulation of a quantum computer is capable of exploiting the same quantum phenomena as that of a true quantum system for solving computational problems.”

If it weren’t for the phys.org reporting, I would put this down as sloppy wording that slipped past peer review, but if the authors are quoted correctly, then they indeed labour under the assumption that they faithfully recreated quantum entanglement in their classical analog computer – mistaking the model for the real thing.

It makes for a funny juxtaposition on phys.org though, when filtering by ‘quantum physics’ news.

[Screenshot: phys.org news listing filtered by ‘quantum physics’]

The second article refers to a new realization of Wheeler’s delayed choice experiment (where the non-local entanglement across space is essentially swapped for one across time).

If one takes Brian La Cour at his word, then according to his other paper he suggests that these kinds of phenomena should also have a classical analog.

So it’s not just hand-waving when he makes this rather outlandish-sounding statement with regard to achieving an analog of the violation of Bell’s inequality:

“We believe that, by adding an emulation of quantum noise to the signal, our device would be capable of exhibiting this type of [Bell’s inequality violating] entanglement as well, as described in another recent publication.”

Of course talk is cheap, but if this research group could actually demonstrate this Bell’s inequality loophole it certainly could change the conversation.

Will Super Cool SQUIDs Make for an Emerging Industry Standard?

This older logarithmic (!) D-Wave graphic gives an idea of how extreme the cooling requirements are for SQUID-based QC (it used to be part of a really cool SVG animation, but unfortunately D-Wave no longer hosts it).

D‑Wave had to break new ground in many engineering disciplines.  One of them was the cooling and shielding technology required to operate their chip.

To this end they are now using ANSYS software, which of course makes for very good marketing for that company (h/t Sol Warda). So good, in fact, that I would hope D‑Wave negotiated a large discount for serving as an ANSYS reference customer.

Any SQUID-based quantum computing chip will have similar cooling and shielding requirements, i.e. Google and IBM will have to go through a similar kind of rigorous engineering exercise to productize their approaches to quantum computing, even though these may look quite different.

Until recently, it would have been easy to forget that IBM is another contender in the ring for SQUID-based quantum computing, yet the company’s researchers have been working diligently outside the limelight – they last created headlines three years ago. And unlike other quantum computing news, which often only touts marginal improvements, their recent results deserve to be called a breakthrough, as they improved upon the kind of hardware error correction that Google is betting on.

IBM_in_atoms
IBM has been conducting fundamental quantum technology research for a long time; this image shows the company’s name spelled out using 35 xenon atoms, arranged via a scanning tunneling microscope (a nano-scale visualization and manipulation device invented at IBM).

Obviously, the better your error correction, the more likely you are to achieve quantum speed-up when you pursue an annealing architecture like D‑Wave’s, but IBM is not after yet another annealer. Most articles on the IBM program report that IBM is out to build a “real quantum computer”, and the term clearly originates from within the company (e.g. this article attributes it to scientists at IBM Research in Yorktown Heights, NY). This leaves little doubt about their commitment to universal gate-based QC.

The difference in strategy is dramatic. D‑Wave decided to forgo surface code error correction on the chip in order to get a device to the market.  Google, on the other hand, decided to snap up the best academic surface code implementation money could buy, and also emphasized speed-to-market by first going for another quantum adiabatic design.

All the while, IBM researchers first diligently worked through the stability of SQUID-based qubits. Even now, having achieved the best available error correction, they clearly signal that they don’t consider it good enough for scale-up. It may take yet another three years for them to find the optimal number and configuration of logical qubits that achieves the kind of fidelity they need before tackling an actual chip.

It is a very methodical engineering approach. Once the smallest building block is perfected, they will have the confidence that they can go for the moonshot. It’s also an approach that only a company with very deep pockets can afford, one with a culture that allows for the pursuit of a decades-long research program.

Despite the differences, in the end all SQUID-based chips will have to be operated very close to absolute zero. IBM’s error correction may eventually give it a leg up over the competition, but I doubt that standard liquid helium fridge technology will suffice for a chip that implements dozens or hundreds of qubits.

By the time IBM enters the market there will be more early adopters of the D‑Wave and Google chips, and the co-opetition between these two companies may have given birth to an emerging industry standard for the fridge technology. In a sense, this may lower the barriers to entry for new quantum chips, if a new entrant can leverage this existing infrastructure. It would probably be a first for IBM to cater to a chip-interfacing standard that the company did not help to design.

So while there’s been plenty of news to report in the quantum computing hardware space, it is curious, and a sign of the times, that a recent Washington Post article on the matter opted to headline with a quantum computing software company, QxBranch. (Robert R. Tucci channeled the journalists at the WP when he wrote last week that the IBM news bodes well for software start-ups in this space.)

While tech and business journalists may not (and may possibly never) understand what makes a quantum computer tick, they understand perfectly well that any computing device is just dead weight without software, and that the latter will make the value proposition necessary to create a market for these new machines.


How many social networks do you need?

The proliferation of social networks seems unstoppable now. Even the big ones can no longer be counted on one hand: Facebook, LinkedIn, GooglePlus, Twitter, Instagram, Tumblr, Pinterest, Snapchat – I am so uncool I didn’t even know about the last one until very recently. It seems there has to be a natural saturation point, with diminishing marginal returns for signing up to yet another one, but apparently we are still far from it.

Recently via LinkedIn I learned about a targeted social network that I happily signed up for, which is quite against my character (i.e. I still don’t have a Facebook account).

Free to join and no strings attached. (This targeted social network is not motivated by a desire to monetize your social graph).

The aptly named International Quantum Exchange for Innovation is a social network set up by DK Matai with the express purpose of bringing together people from all walks of life, anywhere on the globe, who are interested in the next wave of the Quantum Technology revolution. If you are as interested in this as I am, then joining this UN of Quantum Technology, as DK puts it, is a no-brainer.

The term ‘revolution’ is often carelessly thrown around, but in this case, when it comes to the new wave of quantum technologies, I think it is more than justified. After all, the first wave of QM-driven technologies powered the second leg of the Industrial Revolution. It started with a bang, in the worst possible manner, when the first nuclear bomb was ignited, but the new insights gained led to a plethora of new high-tech products.

Quantum physics was instrumental in everything from solar cells, to lasers, to medical imaging (e.g. MRI), and of course, first and foremost, the transistor. As computers became more powerful, quantum chemistry coalesced into an actual field, feeding on the ever increasing computational power. Yet Moore’s law proved hardly adequate for its insatiable appetite for the compute cycles required by the underlying quantum numerics.

During his (too short) life span, Richard Feynman was involved in both military and civilian applications of quantum mechanics, and his famous “there is plenty of room at the bottom” talk can be read as a programmatic outline of the first Quantum Technology revolution. This QT 1.0 wave has almost run its course. We made our way to the bottom, but there we encountered entirely new possibilities by exploiting the essential, counter-intuitive non-localities of quantum mechanics. This takes things to the next step, and again information technology is at the forefront. It is a testament to Feynman’s brilliance that he anticipated QT 2.0 as well, when he suggested a quantum simulator for the first time, much along the lines of what D-Wave built.

It is apt and promising that the new wave of quantum technology does not start with a destructive big bang, but an intriguing and controversial black box.



Quantum Computing Road Map

No, we are not there yet, but we are working on it.

Qubit spin states in diamond defects don’t last forever, but they can last outstandingly long even at room temperature (measured in microseconds, which is a long time when it comes to computing).

So this is yet another interesting system added to the list of candidates for potential QC hardware.

Nevertheless, when it comes to the realization of scalable quantum computers, qubit decoherence times may very well be eclipsed in importance by another time span: 20 years, the length of time for which patents are valid (in the US this can include software algorithms).

With D-Wave and Google leading the way, we may be getting there faster than most industry experts predicted. Certainly the odds are very high that it won’t take another two decades for useable universal QC machines to be built.

But how do we get to the point of bootstrapping a new quantum technology industry? DK Matai addressed this in a recent blog post and identified five key questions, which I attempt to address below (I took the liberty of slightly abbreviating the questions; please check the link for the unabridged version).

The challenges DK laid out will require much more than a blog post (or a LinkedIn comment that I recycled here), especially since his view is wider than Quantum Information science alone. That is why the following thoughts are by no means comprehensive answers, and very much incomplete, but they may provide a starting point.

1. How do we prioritise the commercialisation of critical Quantum Technology 2.0 components, networks and entire systems both nationally and internationally?

The prioritization should be based on disruptive potential: take quantum cryptography versus quantum computing, for example. Quantum encryption could stamp out fraud that exploits certain technical weaknesses, but it won’t address the more dominant social engineering deceptions. On the upside, it will also facilitate iron-clad cryptocurrencies. Yet, if Feynman’s vision of the universal quantum simulator comes to fruition, we will be able to tackle collective quantum dynamics that are computationally intractable with conventional computers. This encompasses everything from simulating high-temperature superconductivity to complex (bio-)chemical dynamics. ETH’s Matthias Troyer gave an excellent overview of these killer apps for quantum computing in his recent Google talk; I especially like his example of nitrogen fixation. Nature manages to accomplish this with minimal energy expenditure in some bacteria, but industrially we only have the century-old Haber-Bosch process, which in modern plants still results in half a ton of CO2 for each ton of NH3. If we could simulate and understand the chemical pathway that these bacteria follow, we could eliminate one of the major industrial sources of carbon dioxide.

2. Which financial, technological, scientific, industrial and infrastructure partners are the ideal co-creators to invent, to innovate and to deploy new Quantum technologies on a medium to large scale around the world? 

This will vary drastically by technology. To pick a basic example, a quantum clock per se is just a better clock, but put it into a Galileo/GPS satellite and the drastic improvement in timekeeping will immediately translate to a higher location triangulation accuracy, as well as allow for a better mapping of the earth’s gravitational field/mass distribution.

3. What is the process to prioritise investment, marketing and sales in Quantum Technologies to create the billion dollar “killer apps”?

As sketched out above, the real prize to me is universal quantum computation/simulation. Considerable efforts have to go into building such machines, but that doesn’t mean you cannot already start to develop software for them. Any coding for new quantum platforms, even those that are already here (as in the case of the D-Wave 2), will involve emulators on classical hardware, because you want to debug and prove out your code before submitting it to the more expensive quantum resource. In my mind, building such an environment in a collaborative fashion to showcase and develop quantum algorithms should be the first step. To me this appears feasible on an accelerated timescale (months rather than years). I think such an effort is critical to offset the closed-source and tightly license-controlled approach that, for instance, Microsoft is following with its development of the LIQUi|> platform.
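As a concrete (and deliberately trivial) example of what such an emulator workflow can look like, here is a sketch in plain Python, my own illustration and not tied to any vendor’s API: formulate a tiny Ising problem, then anneal it classically to validate the formulation before spending time on the scarcer quantum resource.

```python
import math, random

# A three-spin Ising toy problem: E(s) = sum_i h_i*s_i + sum_(i<j) J_ij*s_i*s_j
h = {0: 0.5, 1: -0.2, 2: 0.0}
J = {(0, 1): -1.0, (1, 2): 0.8}

def energy(s):
    e = sum(h[i] * s[i] for i in h)
    e += sum(j_ij * s[i] * s[j] for (i, j), j_ij in J.items())
    return e

def simulated_annealing(steps=5000, t_start=5.0, t_end=0.05):
    s = {i: random.choice((-1, 1)) for i in h}
    e = energy(s)
    best, best_e = dict(s), e
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)  # geometric cooling schedule
        i = random.choice(list(h))
        s[i] *= -1                                      # propose flipping one spin
        e_new = energy(s)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                                   # accept the move (Metropolis rule)
            if e < best_e:
                best, best_e = dict(s), e
        else:
            s[i] *= -1                                  # reject: undo the flip
    return best, best_e

print(simulated_annealing())
```

Once the couplings and fields express the intended problem and the classical annealer returns sensible ground states, the same h and J can be handed to actual quantum annealing hardware.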

4. How do the government agencies, funding Quantum Tech 2.0 Research and Development in the hundreds of millions each year, see further light so that funding can be directed to specific commercial goals with specific commercial end users in mind?

This to me seems to be the biggest challenge. The amount of research papers produced in this field is enormous. Much of it is computational theory. While the theory has its merits, I think governmental funding should emphasize programs that have a clearly defined agenda geared towards ambitious yet attainable goals: research that will result in actual hardware and/or commercially applicable software implementations (e.g. the UCSB Martinis agenda). Yet, governments shouldn’t be in the position of picking a winning design, as was inadvertently done for fusion research, where ITER’s funding requirements are now crowding out all other approaches. The latter is a template for how not to go about it.

5. How to create an International Quantum Tech 2.0 Super Exchange that brings together all the global centres of excellence, as well as all the relevant financiers, customers and commercial partners to create Quantum “Killer Apps”?

On a grassroots level, I think open source initiatives (e.g. a LIQUi|> alternative) could become catalysts that bring academic centres of excellence and commercial players into alignment. This at least is my impression, based on conversations with several people involved in the commercial and academic realms. On the other hand, as with any open source product, commercialization won’t be easy, yet this may be less of a concern in this emerging industry, as the IP will be in the quantum algorithms, and they will most likely be executed with quantum resources tied to a SaaS offering.


Quantum Computing Coming of Age

Are We There Yet? That’s the name of the talk that Daniel Lidar recently gave at Google (h/t Sol Warda who posted this in a previous comment).

Spoiler alert, I will summarize some of the most interesting aspects of this talk as I finally found the time to watch it in its entirety.

You may skip the first 15 minutes if you follow this blog; he just gives a quick introduction to QC. Actually, if you follow the discussions on this blog closely, you will find little news in most of the presentation until the very end, but I very much appreciated the graph 8 minutes in, which is based on this Intel data:

Performance and clock speeds have been essentially flat for the last ten years. Only the ability to squeeze more transistors and cores into one chip keeps Moore’s law alive (data source: Intel Corp.).

Daniel, deservedly, spends quite some time on this to drive home the point that classical chips have hit a wall. Moving from silicon to germanium will only go so far in delaying the inevitable.

If you don’t want to sit through the entire talk, I recommend skipping ahead to the 48 minute mark, when error correction on the D-Wave is discussed. The results are very encouraging, and in the Q&A Daniel points out that this EC scheme could be incorporated directly into the D-Wave design. I wouldn’t be surprised to see this happen fairly soon. The details of the ECC scheme are available at arxiv, and Daniel spends some time on the graph shown below. He points out that, to the extent that you can infer a slope, it looks very promising, as it gets flatter as the problems get harder, and the gap between non-ECC and error-corrected annealing widens (solid vs. dashed lines). With ECC I would therefore expect D-Wave machines to systematically outperform simulated annealing.

Number of repetitions to find a solution at least once.
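For readers wondering what is actually being plotted: a “repetitions to find a solution at least once” metric is typically derived from the per-run success probability. The snippet below shows the standard formula as I read it; the exact convention used in the paper may differ in detail.

```python
import math

def repetitions(p_success, confidence=0.99):
    # Number of independent runs R needed so that the probability of hitting the
    # ground state at least once, 1 - (1 - p)^R, reaches the target confidence.
    if p_success >= 1.0:
        return 1
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_success))

for p in (0.5, 0.1, 0.01, 0.001):
    print(f"p = {p:<6}: {repetitions(p):>5} repetitions for 99% confidence")
```

This is why a flattening slope matters so much: even modest gains in the per-run success probability on hard instances translate into dramatically fewer required repetitions.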

Daniel sums up the talk like this:

  1. Is the D-Wave device a quantum annealer?
    • It disagrees with all classical models proposed so far. It also exhibits entanglement. (I.e. Yes, as far as we can tell)
  2.  Does it implement a programmable Ising model in a transverse field and solve optimization problems as promised?
    • Yes
  3. Is there a quantum speedup?
    • Too early to tell
  4. Can we error-correct it and improve its performance?
    • Yes

With regard to hardware-implemented qubit ECC, we also got some great news from Martinis’ UCSB lab, which Google drafted for its quantum chip effort. The latest results have just been published in Nature (pre-print available at arxiv).

Martinis explained the concept in a talk I previously reported on, and clearly the work is progressing nicely. Unlike the ECC scheme for the D-Wave architecture, Martinis’ approach targets a fidelity that will not only work for quantum annealing, but should also allow for gate computing of non-trivial size.

Quantum Computing may not have fully arrived yet, but after decades of research we clearly are finally entering the stage where this technology won’t be just the domain of theorists and research labs, and at this time, D-Wave is poised to take the lead.