Category Archives: artiste-qb.net

The Creative Destruction Lab Reaches a New Quantum Level

Planet earth as seen from Toronto.

If excitement were a necessary and sufficient criterion for reaching higher quantum levels, they certainly must have been achieved yesterday morning in room 374 of the Rotman School of Business here in Toronto (aka “the center of the universe,” as our modest town is known to Canadians outside the GTA).

In Canadian start-up circles, the Creative Destruction Lab (CDL) is a household name by now, and ever since the program went global, its recognition has reached far past the borders of Canada.

The CDL kicked off with its first cohort in the quantum machine learning stream today, and our company Artiste has been honoured to be part of this exciting new chapter.

To a casual observer, the CDL may look like just another effort to bring venture capital and start-ups together, with some MBA students thrown in for that entrepreneurial spirit. That is, it may appear to be just another glorified pitch competition. But nothing could be further from the truth, as this program has been built around an academic hypothesis of why there is so little start-up activity outside Silicon Valley, and why that ecosystem has been so difficult to replicate. It certainly is not for lack of scientific talent, capital, or trying.

Ajay Agrawal, the founder of the Creative Destruction Lab, beautifully laid out the core hypothesis around which he structured the CDL. He suspects a crucial market mismatch, in that start-up founders are under-supplied with one crucial resource: sound entrepreneurial judgment. And the latter can make all the difference. He illustrated this with a somewhat comical email from the nineties, written by a Stanford Ph.D. student pitching a project to an Internet provider, arguing that the technology his small team would build could be extremely profitable, and indicating that they’d love to build it on a fixed-salary basis. A handwritten note was scribbled on the email print-out by a Stanford business advisor, who suggested realizing the project as their own start-up venture. This company, of course, went on to become Google.

The linked chart should not be misconstrued as sound investment advice.
Two pretty things that are not at all alike, but the mania is very much the same.

Ajay’s thinking throws some serious shade on the current ICO craze which, like most in the start-up world, I’ve been following very closely. Blockchain technology has some truly disruptive potential far beyond crypto-currency, and I see many synergies between this trustless distributed computing environment and how quantum information will interface with the classical world.

From a start-up’s standpoint, an ICO looks extremely attractive, but like all crowdfunding efforts it still requires a good campaign. However, it all hinges on a whitepaper and technology rather than a business plan, and those typically come pretty naturally to technical founders. There are also very few strings attached:

  • The (crypto-)money that comes in is essentially anonymous.
  • Fundraising occurs on a global basis.
  • The process is still essentially unregulated in most jurisdictions.

But if the CDL got it right, ICOs are missing the most critical ingredient for making a venture successful: sound entrepreneurial advice.

There is little doubt in my mind that we are currently experiencing something akin to tulip mania in the crypto-currency and ICO arena, but the market for tulips did not vanish after the 1637 mania ran its course, and neither will ICOs.  For my part, I expect we will see something of a hybrid model emerge: VC seed rounds augmented by ICOs.

From an entrepreneur’s stand-point, this would be the best of both worlds.

Big Challenges Require Bold Visions

Unless we experience a major calamity resetting the world’s economy to a much lower output, it is a foregone conclusion that the world will miss the CO2 target needed to limit global warming to 1.5 °C. This drives a slow-motion, multi-faceted disaster exacerbated by the ongoing growth in global population, which puts additional stress on the environment. Unsurprisingly, we are in the midst of earth’s sixth mass extinction event.

It just takes three charts to paint the picture:

1) World Population Growth

2) Temperature Increase

3) Species Extinction

We shouldn’t delude ourselves into believing that our species is safe from adding itself to the extinction list. The coming decades are pivotal in stopping the damage we are doing to our planet. Given our current technologies, we have every reason to believe that we can stabilize population growth and replace fossil-fuel-dependent technologies with CO2-neutral ones, but the processes already set in motion will produce societal challenges of unprecedented proportions.

Population growth and the need for arable land keep pushing people ever closer to formerly isolated wildlife, usually with fatal consequences for the latter, but sometimes the damage goes both ways. HIV, Ebola, and bird flu, for instance, are all health threats originally contracted from animal reservoirs (zoonoses), and we can expect more such pathogens, many of which will not have been observed before. At the same time, old pathogens can easily resurface. Take tuberculosis, for instance. Even in an affluent country with good public health infrastructure, such as Canada, we see over a thousand new cases each year, and, as in other parts of the world, multi-resistant TB strains are on the rise.

Immunization and health management require functioning governmental bodies. In a world that will see ever more refugee crises and civil strife, the risk of disruptive pandemics will increase massively. The recent outbreak of Ebola is a case study in how such mass infections can overwhelm the medical infrastructure of developing countries, and it should serve as a wake-up call to the first world to help establish a global framework that can manage these kinds of global health risks. The key is to identify emerging threats as early as possible, since the chance of containment and mitigation increases dramatically the sooner action can be taken.

Such a framework will require robust and secure data collection and dissemination capabilities, as well as advanced predictive analytics that can build on all available pooled health data and established medical ontologies. Medical doctor and bioinformatics researcher Andrew Deonarine has envisioned such a system, which he has dubbed Signa.OS, and he has assembled a stellar team including members from his former alma mater Cambridge, UBC, and Harvard, where he will soon start post-graduate work. Any such system should not be designed with just our current hardware in mind, but with the technologies that will be available within the decade. That is why quantum-computer-accelerated Bayesian networks are an integral part of the analytical engine for Signa.OS. We are especially excited to also have Prof. Marco Scutari from Oxford join the Signa.OS initiative; his excellent work on Bayesian network training in R served as a guiding star for our Python implementation.
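The kind of Bayesian reasoning such an analytical engine performs can be illustrated with a minimal two-node network. All names and numbers below are hypothetical, purely for illustration, and not part of Signa.OS:

```python
# Minimal sketch of discrete Bayesian inference over a two-node network
# (Disease -> Symptom, both binary). Probabilities are made up.

p_disease = 0.01                      # prior P(D=1), e.g. background prevalence
p_symptom_given = {1: 0.9, 0: 0.05}   # P(S=1 | D): sensitivity and false-alarm rate

def posterior_disease(symptom_observed: bool) -> float:
    """P(D=1 | S) by Bayes' rule over the two-node network."""
    s = 1 if symptom_observed else 0

    def likelihood(d: int) -> float:
        p = p_symptom_given[d]
        return p if s == 1 else 1.0 - p

    joint_1 = likelihood(1) * p_disease          # P(S, D=1)
    joint_0 = likelihood(0) * (1.0 - p_disease)  # P(S, D=0)
    return joint_1 / (joint_1 + joint_0)         # normalize

print(round(posterior_disease(True), 4))   # ~0.15: a rare disease stays unlikely
print(round(posterior_disease(False), 6))  # observing no symptom lowers the prior
```

Even this toy example shows why early detection matters: a single noisy observation shifts the posterior by an order of magnitude, and pooled observations across a population shift it far more.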

Our young company, artiste-qb.net, which I recently started with Robert R. Tucci, could not have wished for a more meaningful research project to prove our technology.

[This video was produced by Andrew as an entry for the MacArthur challenge.]


To Reach Quantum Supremacy We Need to Cross the Entanglement Firepoint

You can get ignition at the flash point, but it won't last.

There’s been a lot of buzz recently about Quantum Computing. Heads of state are talking about it, and lots of money is being poured into research. You may think the field is truly on fire, but could it still fizzle out? When you are dealing with fire, what makes the critical difference between just a flash in the pan, and reaching the firepoint when that fire is self-sustaining?

Finding an efficient quantum algorithm is all about quantum speed-up, which has, understandably, mesmerized theoretical computer scientists. Their field had been boxed into the Turing machine mold, and now, for the first time, there was something that demonstrably went beyond what was possible with this classical, discrete model.

Quantum speed-up is all about scaling behaviour. It’s about establishing that a quantum algorithm’s resource requirements grow more slowly with problem size than those of the best known classical algorithm.
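Grover’s unstructured-search algorithm is the textbook illustration of such scaling: a classical scan needs on the order of N oracle queries, while Grover’s algorithm needs on the order of √N. A quick sketch of how that gap widens (query counts here are the standard asymptotic estimates, not measurements on real hardware):

```python
import math

# Oracle-query scaling for unstructured search over N items:
# classical worst case is N queries; Grover needs ~ (pi/4) * sqrt(N) iterations.

def classical_queries(n: int) -> int:
    return n  # worst case: examine every item

def grover_queries(n: int) -> int:
    # standard estimate for the optimal number of Grover iterations
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (10**3, 10**6, 10**9):
    print(f"N={n:>10}  classical={classical_queries(n):>10}  grover={grover_queries(n):>6}")
```

The point of the surrounding paragraphs is exactly this table’s blind spot: the asymptotic gap says nothing about the constant overhead of building a machine that can run those √N iterations coherently.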

While this is a profound theoretical insight, it doesn’t necessarily translate into practice immediately, because this scaling behaviour may only come into play at a resource threshold far beyond anything technically realizable for the foreseeable future.

For instance, Shor’s algorithm requires tens of thousands of pristine, entangled qubits in order to become useful. While no longer science fiction, this kind of gate-based QC is still far off. On the other hand, Matthias Troyer et al. demonstrated that quantum chemistry calculations can be expected to outmatch any classical supercomputer with much more modest resources (qubits numbering in the hundreds, not thousands).

The condition of a quantum computing device performing tasks outside the reach of any classical technology is what I’d like to define as quantum supremacy (a term coined by John Preskill that I first heard used by DK Matai).

Quantum speed-up virtually guarantees that you will eventually reach quantum supremacy for the posed problem (i.e. factoring, in Shor’s algorithm’s case), but it doesn’t tell you anything about how quickly you will get there. Also, while quantum speed-up is a useful criterion for eventually reaching quantum supremacy, it is not a necessary one for outperforming conventional supercomputers.

We are just now entering a stage where we see the kind of quantum annealing chips that can tackle commercially interesting problems. The integration density of these chips is still minute in comparison to that of established silicon-based ones (for quantum chips, there is still plenty of room at the bottom).

D-Wave just announced the availability of a 2000-qubit chip for early next year (h/t Ramsey and Rolf). If the chip’s integration density continues to double every 16 months, then quantum algorithms that don’t scale better than classical ones (or only modestly so) may at some point still end up outperforming all classical alternatives, assuming that we are indeed living in the end times of Moore’s law.
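To see what such a doubling cadence implies, here is a back-of-the-envelope projection. The 16-month cadence and 2000-qubit starting point are the assumptions stated above, extrapolated naively; this is illustrative arithmetic, not a vendor roadmap:

```python
# Naive compound-growth projection of annealer qubit counts,
# assuming a 2000-qubit chip today and one doubling every 16 months.

def qubits_after(months: int, start: int = 2000, doubling_months: int = 16) -> int:
    return int(start * 2 ** (months / doubling_months))

for years in (2, 4, 8):
    print(f"after {years} years: ~{qubits_after(12 * years)} qubits")
```

Under these assumptions the count crosses 100,000 qubits in well under a decade, which is why even algorithms without asymptotic speed-up could become commercially interesting if classical scaling stalls.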

From a practical (and hence commercial) perspective, these algorithms won’t be any less lucrative.

Yet the degree to which quantum correlations can be technologically controlled remains the key to going beyond what the current crop of “wild qubits” on a non-error-corrected adiabatic chip can accomplish. That is why we see Google invest in its own R&D, hiring Prof. Martinis from UCSB, and the work has already resulted in a nine-qubit prototype chip that combines “digital” error correction (ECC) with quantum annealing (AQC).

D-Wave is currently also re-architecting its chip, and it is a pretty safe bet that the new design will incorporate some form of error correction. More intriguingly, the company now also talks about a road map towards universal quantum computing (see, e.g., the second-to-last paragraph in this article).

It is safe to say that before we get undeniable quantum supremacy, we will have to achieve a level of decoherence control that allows for essentially unlimited qubit scale-out. For instance, IBM researchers are optimistic that they’ll get there as soon as they incorporate a third layer of error correction into their quantum chip design.

D-Wave ignited the commercial quantum computing field. And with the efforts underway to build ECC into QC hardware, I am more optimistic than ever that we are very close to the ultimate firepoint where this technological revolution will become unstoppable. Firepoint Entanglement is drawing near, and when these devices enter the market, you will need software that will bring Quantum Supremacy to bear on the hardest challenges that humanity faces.

This is why I teamed up with Robert (Bob) Tucci, who pioneered an inspired way to describe quantum algorithms (and arbitrary quantum systems) with a framework that extends Bayesian networks (B-nets, sometimes also referred to as belief networks) into the quantum realm. He did this in such a manner that an IT professional who knows this modelling approach, and is comfortable with complex numbers, can pick it up without having to go through a quantum physics boot camp. It was this reasoning at a higher abstraction level that enabled Bob to come up with the concept of CMI entanglement (sometimes also referred to as squashed entanglement).

An IDE built on this paradigm will allow us to tap into the new quantum resources as they become available, and to develop intuition for this new frontier in information science with a visualization that goes far beyond a simple circuit model. The latter also suffers from the fact that in the quantum realm some classical logic gates (such as OR and AND) are not allowed, which can be rather confusing for a beginner.  QB-nets, on the other hand, fully embed and encompass the classical networks, so any software that implements QB-nets, can also be used to implement standard Bayesian network use cases, and the two can be freely mixed in hybrid nets. (This corresponds to density matrices that include classical thermodynamic probabilities.)
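The embedding of classical probabilities into the quantum formalism mentioned above can be sketched with density matrices: a classical network node becomes a diagonal density matrix, while a coherent superposition carries off-diagonal terms. This is a minimal numpy sketch of that correspondence, not artiste-qb.net’s actual QB-net API:

```python
import numpy as np

# A classical binary node with P(0)=0.7, P(1)=0.3 embeds as a
# diagonal density matrix: no off-diagonal coherences.
rho_classical = np.diag([0.7, 0.3])

# A coherent superposition (the |+> state) is also a valid density
# matrix, but its off-diagonal terms carry the quantum correlations.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_quantum = np.outer(psi, psi.conj())

# Both are proper density matrices: unit trace.
for rho in (rho_classical, rho_quantum):
    assert np.isclose(np.trace(rho), 1.0)

# Only the classical state survives dropping the off-diagonal terms.
print(np.allclose(rho_classical, np.diag(np.diag(rho_classical))))  # True
print(np.allclose(rho_quantum, np.diag(np.diag(rho_quantum))))      # False
```

This is why a QB-net can contain an ordinary B-net as a special case: restricting every node to diagonal density matrices recovers plain classical probability, so hybrid nets mixing the two are well-defined.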

So far, the back-end for the QB-net software is almost complete, as is a stand-alone compiler/gate-synthesizer. Our end goal is to build an environment every bit as complete as Microsoft’s Liqui|>. They make this software available for free and, ironically, distribute it via Github, although the product is entirely closed source (but at least they are not asking for your firstborn if you develop with Liqui|>). Microsoft has also stepped up its patent activity in this space, in all likelihood to allow for a shake-down business model similar to the one that lets it derive a huge amount of revenue from the Linux-based (Google-developed) Android platform. We don’t want the future of computing to be held in a stranglehold by Microsoft, which is why our software is open source, and we are looking to build a community of QC enthusiasts within and outside of academia to carry the torch of software freedom. If you are interested, please head over to our Github repository. Every little bit helps: feedback, testing, documentation, and of course coding. Come and join the quantum computing revolution!