Big Challenges Require Bold Visions

Unless we experience a major calamity that resets the world's economy to a much lower output, it is a foregone conclusion that the world will miss the CO2 target required to limit global warming to 1.5°C. This drives a slow-motion, multi-faceted disaster, exacerbated by the ongoing growth in global population, which puts additional stress on the environment. Unsurprisingly, we are in the midst of Earth's sixth mass extinction event.

It just takes three charts to paint the picture:

1) World Population Growth

2) Temperature Increase

3) Species Extinction

We shouldn't delude ourselves into believing that our species is safe from adding itself to the extinction list. The next decades are pivotal in stopping the damage we do to our planet. Given our current technologies, we have every reason to believe that we can stabilize population growth and replace fossil-fuel-dependent technologies with CO2-neutral ones, but the processes already set in motion will produce societal challenges of unprecedented proportions.

Population growth and the need for arable land keep pushing people ever closer to formerly isolated wildlife. Most often the consequences are fatal only for the latter, but sometimes the damage goes both ways. HIV, Ebola and bird flu, for instance, are all health threats that were originally contracted from animal reservoirs (zoonoses), and we can expect more such pathogens, many of which will not have been observed before. At the same time, old pathogens can easily resurface. Take tuberculosis, for instance: even in an affluent country with good public health infrastructure, such as Canada, we see over a thousand new cases each year, and, as in other parts of the world, multidrug-resistant TB strains are on the rise.

Immunization and health management require functioning governmental bodies. In a world that will see ever more refugee crises and civil strife, the risk of disruptive pandemics increases massively. The recent Ebola outbreak is a case study in how such mass infections can overwhelm the medical infrastructure of developing countries, and it should serve as a wake-up call to the first world to help establish a global framework that can manage these kinds of health risks. The key is to identify emerging threats as early as possible, since the chances of containment and mitigation improve dramatically the earlier action can be taken.

Such a framework will require robust and secure data collection and dissemination capabilities, as well as advanced predictive analytics that can build on all available pooled health data and established medical ontologies. Medical doctor and bioinformatics researcher Andrew Deonarine has envisioned such a system, which he has dubbed Signa.OS, and he has assembled a stellar team with members from his alma mater Cambridge and UBC, as well as Harvard, where he will soon start postgraduate work. Any such system should not be designed with just our current hardware in mind, but with the technologies that will become available within the decade. That is why quantum-computer-accelerated Bayesian networks are an integral part of the analytical engine for Signa.OS. We are especially excited that Prof. Marco Scutari from Oxford has also joined the Signa.OS initiative; his work on Bayesian network learning in R is outstanding and served as a guiding star for our Python implementation.
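
To give a flavour of the kind of reasoning such an analytical engine performs, here is a deliberately tiny, self-contained sketch of exact inference in a discrete Bayesian network (plain Python, no libraries). The network structure, the variable names and every probability below are invented for illustration only and have nothing to do with the actual Signa.OS models.

```python
# Toy Bayesian network: Outbreak -> SpikeInClinicReports
# All numbers are made up for illustration purposes.

p_outbreak = 0.01                          # prior P(Outbreak = true)
p_spike_given = {True: 0.80, False: 0.05}  # P(Spike = true | Outbreak)

def posterior_outbreak(spike_observed: bool) -> float:
    """Exact inference via Bayes' rule: P(Outbreak | Spike)."""
    joint = {}
    for outbreak in (True, False):
        prior = p_outbreak if outbreak else 1.0 - p_outbreak
        likelihood = (p_spike_given[outbreak] if spike_observed
                      else 1.0 - p_spike_given[outbreak])
        joint[outbreak] = prior * likelihood
    return joint[True] / (joint[True] + joint[False])

print(posterior_outbreak(True))   # ~0.14: a report spike raises the outbreak odds ~14x
```

Scaled up to thousands of interdependent variables and fed with real surveillance data, this is exactly the kind of computation that becomes expensive classically and where quantum acceleration is expected to pay off.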

Our young company, artiste-qb.net, which I recently started with Robert R. Tucci, could not have wished for a more meaningful research project to prove our technology.

[This video was produced by Andrew as an entry for the MacArthur challenge.]


To Reach Quantum Supremacy We Need to Cross the Entanglement Firepoint

You can get ignition at the flash point, but it won't last.

There's been a lot of buzz recently about Quantum Computing. Heads of state are talking about it, and lots of money is being poured into research. You may think the field is truly on fire, but could it still fizzle out? When you are dealing with fire, what makes the critical difference between just a flash in the pan, and reaching the firepoint when that fire is self-sustaining?

Finding an efficient quantum algorithm is all about quantum speed-up, which has, understandably, mesmerized theoretical computer scientists. Their field had been boxed into the Turing machine mold, and now, for the first time, there was something that demonstrably went beyond what was possible with this classical, discrete model.

Quantum speed-up is all about scaling behaviour. It's about establishing that a quantum algorithm's resource requirements grow more slowly with the problem size than those of the best known classical algorithm.

While this is a profound theoretical insight, it doesn't necessarily translate into practice right away, because this scaling behaviour may only come into play at a problem size, and hence a resource threshold, far beyond anything technically realizable for the foreseeable future.
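
A toy back-of-the-envelope calculation illustrates the point. The cost models and constants below are entirely invented; only the scaling exponents matter. Even when the quantum algorithm scales better, a large constant overhead per operation can push the crossover point out to problem sizes that are practically out of reach.

```python
import math

# Hypothetical cost models (arbitrary time units); only the scaling matters.
classical = lambda n: 1e-9 * n**2               # cheap per step, but quadratic scaling
quantum   = lambda n: 1e-3 * n * math.log2(n)   # expensive per step, better scaling

n = 2
while quantum(n) >= classical(n):
    n *= 2
print(f"quantum wins only beyond n ~ {n:,}")    # tens of millions in this toy model
```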

For instance, Shor's algorithm requires tens of thousands of pristine, entangled qubits in order to become useful. While no longer science fiction, this kind of gate-based QC is still far off. On the other hand, Matthias Troyer et al. demonstrated that quantum chemistry calculations can be expected to outmatch any classical supercomputer with much more modest resources (qubit counts in the hundreds, not thousands).

The condition of having a quantum computing device performing tasks outside the reach of any classical technology is what I'd like to define as quantum supremacy (a term invented by John Preskill that I first heard used by DK Matai).

Quantum speed-up virtually guarantees that you will eventually reach quantum supremacy for the posed problem (i.e. factoring in Shor's case), but it doesn't tell you anything about how quickly you will get there. Also, while quantum speed-up is a useful criterion for eventually reaching quantum supremacy, it is not a necessary one for outperforming conventional supercomputers.

We are just now entering a stage where we see the kind of quantum annealing chips that can tackle commercially interesting problems. The integration density of these chips is still minute in comparison to that of established silicon-based ones (for quantum chips there is still lots of room at the bottom).

D-Wave just announced the availability of a 2000-qubit chip for early next year (h/t Ramsey and Rolf). If the chip's integration density can continue to double every 16 months, then quantum algorithms that don't scale better than classical ones (or only modestly so) may at some point still end up outperforming all classical alternatives, assuming that we are indeed living in the end times of Moore's law.
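
A quick projection shows what such a doubling cadence would imply, purely as arithmetic. The starting point and the cadence are taken from the paragraph above; everything else about real-world chip engineering is deliberately ignored.

```python
from datetime import date, timedelta

qubits, t = 2000, date(2017, 1, 1)    # announced 2000-qubit chip, "early next year"
doubling = timedelta(days=16 * 30)    # ~16-month doubling cadence (approximate)

for _ in range(9):
    t += doubling
    qubits *= 2
    print(t.year, f"{qubits:,} qubits")
# If the cadence held, the ~2000 qubits of 2017 would pass a million around 2028.
```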

From a practical (and hence commercial) perspective, these algorithms won't be any less lucrative.

Yet the degree to which quantum correlations can be technologically controlled remains the key to going beyond what the current crop of "wild qubits" on a non-error-corrected adiabatic chip can accomplish. That is why we see Google invest in its own R&D, hiring Prof. Martinis from UCSB, and the work has already resulted in a nine-qubit prototype chip that combines "digital" error correction (ECC) with quantum annealing (AQC).

D-Wave is currently also re-architecting its chip, and it is a pretty safe bet that they will incorporate some form of error correction in the new design. More intriguingly, the company now also talks about a road map towards universal quantum computing (see the second-to-last paragraph in this article).

It is safe to say that before we get undeniable quantum supremacy, we will have to achieve a level of decoherence control that allows for essentially unlimited qubit scale-out. For instance, IBM researchers are optimistic that they'll get there as soon as they incorporate a third layer of error correction into their quantum chip design.

D-Wave ignited the commercial quantum computing field. And with the efforts underway to build ECC into QC hardware, I am more optimistic than ever that we are very close to the ultimate firepoint where this technological revolution becomes unstoppable. Firepoint Entanglement is drawing near, and when these devices enter the market, you will need software that brings Quantum Supremacy to bear on the hardest challenges that humanity faces.

This is why I teamed up with Robert (Bob) Tucci, who pioneered an inspired way to describe quantum algorithms (and arbitrary quantum systems) with a framework that extends Bayesian networks (B-nets, sometimes also referred to as belief networks) into the quantum realm. He did this in such a manner that an IT professional who knows this modelling approach, and is comfortable with complex numbers, can pick it up without having to go through a quantum physics boot camp. It was this reasoning at a higher level of abstraction that enabled Bob to come up with the concept of CMI entanglement (sometimes also referred to as squashed entanglement).

An IDE built on this paradigm will allow us to tap into the new quantum resources as they become available, and to develop intuition for this new frontier in information science with a visualization that goes far beyond a simple circuit model. The latter also suffers from the fact that in the quantum realm some classical logic gates (such as OR and AND) are not allowed, which can be rather confusing for a beginner. QB-nets, on the other hand, fully embed and encompass the classical networks, so any software that implements QB-nets can also be used to implement standard Bayesian network use cases, and the two can be freely mixed in hybrid nets. (This corresponds to density matrices that include classical thermodynamic probabilities.)
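
The parenthetical remark about density matrices can be made concrete with a few lines of NumPy. This is just the textbook construction, not anything specific to the QB-net software: a classical coin corresponds to a diagonal density matrix (a statistical mixture), while a qubit in superposition carries off-diagonal coherences.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Classical fair coin: a 50/50 *mixture* of |0> and |1> -> diagonal density matrix
rho_classical = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())

# Qubit in an equal *superposition* (|0>+|1>)/sqrt(2) -> off-diagonal coherences
plus = (ket0 + ket1) / np.sqrt(2)
rho_quantum = np.outer(plus, plus.conj())

print(rho_classical.real)   # [[0.5 0. ] [0.  0.5]]
print(rho_quantum.real)     # [[0.5 0.5] [0.5 0.5]]
```

Classical probabilities sit on the diagonal; the quantum information lives in the off-diagonal terms. A hybrid net simply mixes the two kinds of nodes.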

So far, the back-end for the QB-net software is almost complete, as is a stand-alone compiler/gate synthesizer. Our end goal is to build an environment every bit as complete as Microsoft's Liqui|>. They make this software available for free, and ironically distribute it via GitHub, although the product is entirely closed source (but at least they are not asking for your firstborn if you develop with Liqui|>). Microsoft has also stepped up its patent activity in this space, in all likelihood to allow for a shake-down business model similar to the one that lets it derive a huge amount of revenue from the Linux-based (Google-developed) Android platform. We don't want the future of computing to be held in a stranglehold by Microsoft, which is why our software is open source, and we are looking to build a community of QC enthusiasts within and outside of academia to carry the torch of software freedom. If you are interested, please head over to our GitHub repository. Every little bit helps: feedback, testing, documentation, and of course coding. Come and join the quantum computing revolution!


Fusion is Hotter Than You May Think

As I am preparing to get back into more regular blogging on quantum computing, I learned that my second favourite Vancouver-based start-up, General Fusion, has gotten some well-deserved social media traction. Michel Laberge's TED talk has now been viewed over a million times (h/t Rolf D). Well deserved, indeed.

This reminded me of a Milken Institute fusion panel from earlier this year, which seems to have less reach than TED, but is no less interesting. It also features Michel, together with representatives from other Fusion ventures (Tri Alpha Energy and Lockheed Martin) as well as MIT's Dennis Whyte. The panel makes a compelling case as to why we see private money flowing into this sector now, and why ITER shouldn't be the only iron we have in the fire.

Canadian PM Justin Trudeau talks Quantum Computing

He is already fluently bilingual, but he also speaks pretty good Quantum. This isn't half bad for a head of government:

If you also want to impress your friends like this, I recommend Sabine Hossenfelder's Quantum lingo crash course.

This bodes well for the prospects of seeing some federal initiatives for the emerging Canadian QC industry in the not too distant future.


Late Wave

It took only one scientist to predict them, but a thousand to get them confirmed (1004 to be precise). I guess if the confirmation of gravitational waves couldn't draw me out of my blogging hiatus, nothing could, although I am obviously catching a very late wave. The advantage of this: I can compile and link to all the best content that has already been written on the topic.

Of course, this latest spectacular confirmation will unfortunately not change the minds of those quixotic individuals who devote themselves to fighting the "wrongness" of all of Einstein's work (I once had the misfortune of encountering the maker of this abysmal movie; safe to say I have had more meaningful conversations with Jehovah's Witnesses).

But given the track record of science news journalism, what are the chances that this may be a fluke similar to the BICEP news that turned out to be far less solid than originally reported? Or another repeat of the faster than light neutrino measurements?

The beauty of a direct experimental measurement like LIGO's is that the uncertainty can be quantified statistically. This is a "5-sigma" event, meaning the probability that detector noise alone would mimic a signal of this strength is less than about one in three million, i.e. a confidence better than 99.9999%. The graph at the bottom shows that what has been measured matches the theoretically expected signal from a black hole merger so closely that the similarity is immediately compelling even for a non-scientist.
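
For the curious, the sigma-to-probability conversion is a one-liner. The 5-sigma figure is from the LIGO announcement; the rest is standard Gaussian statistics.

```python
from scipy.stats import norm

p_noise = norm.sf(5)                  # one-sided tail probability beyond 5 sigma
print(p_noise)                        # ~2.9e-7, i.e. about 1 in 3.5 million
print(f"confidence ~ {100 * (1 - p_noise):.5f}%")   # ~99.99997%
```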

But more importantly, unlike faster-than-light neutrinos, we have every reason to believe that gravitational waves exist. No new physics is required, and the phenomenon is strictly classical, in the sense that General Relativity produces a classical field equation that, unlike Quantum Mechanics, adheres to physical realism. That is why this discovery does nothing to advance the search for a unification of gravity with the other three forces. The importance of this discovery lies elsewhere, but it is no less profound. Sabine Hossenfelder says it best:

Hundreds of millions of years ago, a primitive form of life crawled out of the water on planet Earth and opened their eyes to see, for the first time, the light of the stars. Detecting gravitational waves is a momentous event just like this – it’s the first time we can receive signals that were previously entirely hidden from us, revealing an entirely new layer of reality.

The importance of this really can't be overstated. The universe is a big place and we keep encountering mysterious observations. There is, of course, the enduring puzzle of dark matter; lesser known are the fast radio bursts, first observed in 2007, which are believed to be among the most energetic events known to modern astronomy. Until recently it was believed that one-off cataclysmic events were the underlying cause, but all these theories had to be thrown out when it was observed that these signals can repeat. (The Canadian researcher who published on this recently received the highest Canadian science award, and the CBC has a nice interview with her.)

We are a long way off from having good spatial resolution with the current LIGO setup. The next logical step is of course to drastically increase the scale of the device, and when it comes to laser interferometry this can be done on a much grander scale than with other experimental set-ups (e.g. accelerators). The eLISA space-based gravitational wave detector project is well underway. And I wouldn't yet count out advanced quantum interferometry as a means to drastically improve the achievable resolution, even if it couldn't beat LIGO to the punch.

After all, it was advanced interferometry that had been driving the hunt for gravitational waves for many decades. One of its pioneers, Heinz Billing, was determined to bring about and witness their discovery, reportedly stating that he refused to die before it was made. The universe was kind to him: at age 101 he is still around and got his wish.

[Figure: LIGO measurement of gravitational waves, showing the signals received by the LIGO instruments at Hanford, Washington (left) and Livingston, Louisiana (right), compared to the signals expected from a black hole merger event.]

D-Wave – Fast Enough to Win my Bet?

Really Would Like to Get That Raclette Cheese.

Last summer I had to ship a crate of maple syrup to Matthias Troyer at ETH Zurich. The conditions we had agreed on for our performance bet were such that, at that point, the D-Wave One could not show a clear performance advantage over a conventional, modern CPU running fine-tuned optimization code. The machine held its own, but there weren't any problem classes to point to that really demonstrated massive performance superiority.

[Figure: the impressive Google benchmark graph. Next on my Christmas wishlist: a decisive widening of the gap between the green QMC curve and the blue D-Wave line as the problem size increases (as is already the case in comparison to the red Simulated Annealing curve).]


The big news to take away from the recent Google/D-Wave performance benchmark is that, on certain problem instances, the D-Wave machine clearly shines. A factor of 100 million over a Quantum Monte Carlo simulation is nothing to sneeze at. This doesn't mean that I would now automatically win my bet with Matthias if we were to repeat it with the D-Wave Two, but it would certainly make it much more interesting.

One advantage of being hard-pressed to find time for blogging is that once I get around to commenting on recent developments, most other reactions are already in. Matthias provided this excellent write-up, and the former D-Wave critic-in-chief remains in retirement. Scott Aaronson's blog entry on the matter strikes a (comparatively) conciliatory tone. One of his comments explains a reason for this change:

"[John Martinis] told me that some of the engineering D-Wave had done (e.g., just figuring out how to integrate hundreds of superconducting qubits while still having lines into them to control the couplings) would be useful to his group. That’s one of the main things that caused me to moderate a bit (while remaining as intolerant as ever of hype)."

Scott also gave a pretty balanced interview to the MIT News (although I have to subtract a star on style for working in a dig at Geordie Rose - clearly the two won't become best buds in this lifetime).

Hype is generally and rightly scorned in the scientific community. And when it is pointed out (for instance, when the black hole information loss problem had been "solved"), the scientists involved are usually on the defensive.

[Image: Buddy the Elf believes anything Steve Jurvetson ever uttered, and then some.]

Of course, business follows very different rules, more along the lines of the Donald Trump rules of attention: any BS will do as long as it captures an audience. Customers are used to these kinds of commercial exaggerations, so I am always a bit puzzled by the urge to debunk D-Wave "hype". To me it feels almost a bit patronizing. The average Joe is not like Buddy the Elf, the unlikely hero of my family's favorite Christmas movie. When Buddy comes to NYC and sees a diner advertising the world's best coffee, he takes this at face value and goes crazy over it. The average Joe, on the other hand, has been thoroughly desensitized to high-tech hype. He knows that neither Google Glass nor the Apple Watch will really change his life forever, nor will he believe Steve Jurvetson that the D-Wave machines will outperform the universe within a couple of years. Steve, on the other hand, does what every good VC businessman is supposed to do for a company he has invested in, i.e. create hype. The world has become a virtual bazaar, and your statements have to be outrageous and novel in order to be heard over the noise. What he wants to get across is that the D-Wave machines will grow in performance faster than conventional hardware. Condensing this into Rose's Law is the perfect pitch vehicle for that: hype with a clear purpose.

People like to pick an allegiance and cheer for their "side". It is the narrative that has been dominating the D-Wave story for many years, and it made for easy blogging, but I won't miss it. The hypers gonna hype, the haters gonna hate, but now the nerds should know to trust the published papers.

Max Planck famously quipped that science advances one funeral at a time, because even scientists have a hard time acting completely rationally and adjusting their stances when confronted with new data.  This is the 21st century, here's to hoping that the scientific community has lost this kind of rigidity, even while most of humanity remains as tribal as ever.

Riding the D-Wave

Update: Thanks to everybody who keeps pointing me to relevant news (Ramsey, Rolf, Sol and everybody else my overtired brain may not recall at this time).

There is no doubt that D-Wave is on a roll:

And then there's the countdown to what is billed as a D-Wave related watershed announcement from Google coming Dec 8th.  Could this be an early Christmas present to D-Wave investors?


~~~~~~

[Image: riding the D-Wave train]

Back in the day, before he re-resigned as D-Wave's chief critic, Scott Aaronson made a well-reasoned argument as to why he thought this academic, and at times vitriolic, scrutiny was warranted. He argued that a failure of D-Wave to deliver a quantum speed-up would set the field back, similar to the AI winter that was triggered by Marvin Minsky and Seymour Papert's Perceptrons book.

Fortunately, quantum annealers are not perceptrons. For the latter, it can be rigorously proven that a single-layer perceptron cannot compute even a simple function like XOR. Ironically, at the time the book was published, multilayer perceptrons, a concept that is now fundamental to all deep learning algorithms, were already known, but in the ensuing backlash research funding for those also dried up completely. The term "perceptron" became toxic and is now all but extinct.
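
The canonical illustration of that limitation is XOR: no single linear threshold unit can compute it, but two layers with hand-picked weights already can. A minimal sketch follows; the particular weights are just one convenient choice.

```python
import numpy as np

step = lambda x: (x > 0).astype(int)   # Heaviside threshold activation

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# A single linear threshold unit cannot compute XOR: no (w1, w2, b) satisfies
# all four input/output constraints at once (Minsky & Papert's point).
# Two layers suffice: hidden units compute OR and NAND, the output ANDs them.
W_hidden = np.array([[1, 1],      # OR-like unit
                     [-1, -1]])   # NAND-like unit
b_hidden = np.array([-0.5, 1.5])
w_out = np.array([1, 1])
b_out = -1.5

hidden = step(X @ W_hidden.T + b_hidden)
print(step(hidden @ w_out + b_out))    # -> [0 1 1 0], i.e. XOR
```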

Could D-Wave be derailed by a proof showing that quantum annealing could, under no circumstances, deliver a quantum speed-up? To me this seems very unlikely, not only because I expect that no such proof exists, but also because, even if it did, there would still be a practical speed-up to be had. If D-Wave manages to double its integration density at the same rapid clip as in the past, then its machines will eventually outperform any classical computing technology in terms of annealing performance. This article (h/t Sol) expands on this point.

So far there is no sign that D-Wave will slow its manic pace. The company recently released its latest chip generation, featuring quantum annealing with an impressive 1000+ qubits (in practice, the usable number will be smaller, as qubits are consumed for problem encoding and software ECC). This was followed by a detailed test under the leadership of Catherine McGeoch, and it will be interesting to see what Daniel Lidar, and other researchers with access to D-Wave machines, will find.

My expectation has been from the get-go that D-Wave will accelerate the development of this emerging industry, and attract more money to the field. It seems to me that this is now playing out.

Intel recently (and finally as Robert Tucci points out) entered the fray with a $50M investment. While this is peanuts for a company of Intel's size, it's an acknowledgement that they can't leave the hardware game to Google, IBM or start-ups such as Rigetti.

On the software side, there's a cottage industry of software start-ups hitching their wagons to the D-Wave engine. Many of these are still in stealth mode or at an early stage, such as QC Ware, while others are already starting to receive some well-deserved attention.

Then there are also smaller vendors of established software and services that already have a sophisticated understanding of the need to be quantum-ready. The latter is something I expect to see much more of in the coming years as the QC hardware race heats up.

The latest big-name entry into the quantum computing arena was Alibaba, but at this time it is not clear what this Chinese initiative will focus on. Microsoft, on the other hand, seems to be a known quantity: it will not get aboard the D-Wave train, but will focus exclusively on quantum gate computing.

Other start-ups, like our artiste-qb.net, straddle the various QC hardware approaches. In our case, this comes "out of the box", because our core technology, Quantum Bayesian Networks, as developed by Robert Tucci, is an ideal tool for abstracting away the underlying architecture. Another start-up that is similarly architecture-agnostic is Cambridge QC. The recent news about this company brings to mind that sometimes reality imitates satire rather quickly. While short of the $1B seed round of this April Fool's spoof, the influx of $50M from the Chile-based Grupo Arcano is an enormous amount for a QC software firm that, as far as I know, holds no patents.

Some astoundingly big bets are now being placed in this field.


Classic Quantum Confusion

[Image: facepalm statue in the Tuileries, Paris]

By now I am pretty used to egregiously misleading summarization of physics research in popular science outlets, sometimes fanned by the researchers themselves. Self-aggrandizing, ignorant papers sneaked into supposedly peer-reviewed journals by non-physicists are also just par for the course.

But this one is in a class of its own. Given the headline and the introductory statement that "a fully classical system behaves like a true quantum computer", it essentially creates the impression that QC research must be pointless. Only much later does it sneak in the obvious: that an analog emulation, just like one on a regular computer, can't possibly scale past about 40 qubits due to the exponential growth in required computational resources.

But that's not the most irritating aspect of this article.

Don't get me wrong, I am a big fan of classical quantum-analog systems. I think they can be very educational, if you know what you are looking at (Spreeuw 1998). The latter paper is actually quoted by the authors, and it is very precise in distinguishing between quantum entanglement and its classical analog. But that's not the distinction their otherwise fine paper upholds (La Cour et al. 2015). The authors write:

"What we can say is that, aside from the limits on scale, a classical emulation of a quantum computer is capable of exploiting the same quantum phenomena as that of a true quantum system for solving computational problems."

If it weren't for the phys.org reporting, I would put this down as sloppy wording that slipped past peer review, but if the authors are correctly quoted, then they indeed labour under the assumption that they faithfully recreated quantum entanglement in their classical analog computer, mistaking the model for the real thing.

It makes for a funny juxtaposition on phys.org though, when filtering by 'quantum physics' news.

[Screenshot: phys.org news feed filtered by 'quantum physics', 2015-05-28]

The second article refers to a new realization of Wheeler's delayed choice experiment (where the non-local entanglement across space is essentially swapped for one across time).

If one takes Brian La Cour at his word, then, according to his other paper, he suggests that these kinds of phenomena should also have a classical analog.

So it's not just hand-waving when he makes this rather outlandish-sounding statement with regard to achieving an analog of the violation of Bell's inequality:

"We believe that, by adding an emulation of quantum noise to the signal, our device would be capable of exhibiting this type of [Bell's inequality violating] entanglement as well, as described in another recent publication."

Of course talk is cheap, but if this research group could actually demonstrate this Bell's inequality loophole it certainly could change the conversation.

Will Super Cool SQUIDs Make for an Emerging Industry Standard?

[Figure: an older logarithmic (!) D-Wave graphic that gives an idea of how extreme the cooling requirements are for SQUID-based QC. It used to be part of a really cool SVG animation, but unfortunately D-Wave no longer hosts it.]

D‑Wave had to break new ground in many engineering disciplines.  One of them was the cooling and shielding technology required to operate their chip.

To this end they are now using ANSYS software, which of course makes for very good marketing for this company (h/t Sol Warda). So good, in fact, that I would hope D‑Wave negotiated a large discount for serving as an ANSYS reference customer.

Any SQUID-based quantum computing chip will have similar cooling and shielding requirements, i.e. Google and IBM will have to go through a similarly rigorous engineering exercise to productize their approaches to quantum computing, even though those approaches may look quite different.

Until recently, it would have been easy to forget that IBM is another contender in the ring for SQUID-based quantum computing; the company's researchers have been working diligently outside the limelight, having last created headlines three years ago. And unlike other quantum computing news, which often only touts marginal improvements, their recent results deserve to be called a breakthrough, as they improved upon the kind of hardware error correction that Google is betting on.

[Image: IBM has been conducting fundamental quantum technology research for a long time; this image shows the company's name spelled out with 35 xenon atoms, arranged via a scanning tunneling microscope (a nano-scale visualization and manipulation device invented at IBM).]

Obviously, the better your error correction, the more likely you are to achieve quantum speed-up when you pursue an annealing architecture like D-Wave's, but IBM is not after yet another annealer. Most articles on the IBM program report that IBM is out to build a "real quantum computer", and the term clearly originates from within the company (e.g. this article attributes it to scientists at IBM Research in Yorktown Heights, NY). This leaves little doubt about their commitment to universal gate-based QC.

The difference in strategy is dramatic. D-Wave decided to forgo surface-code error correction on the chip in order to get a device to market. Google, on the other hand, decided to snap up the best academic surface-code implementation money could buy, while also emphasizing speed to market by first going for another quantum adiabatic design.

All the while, IBM researchers first diligently worked through the stability of SQUID-based qubits. Even now, having achieved the best available error correction, they clearly signal that they don't consider it good enough for scale-up. It may take yet another three years for them to find the optimal number and configuration of logical qubits that achieves the kind of fidelity they need before tackling an actual chip.

It is a very methodical engineering approach. Once the smallest building block is perfected, they will have the confidence to go for the moonshot. It's also an approach that only a company with very deep pockets can afford, one with a culture that allows for the pursuit of a decades-long research program.

Despite the differences, in the end all SQUID-based chips will have to be operated very close to absolute zero. IBM's error correction may eventually give it a leg up over the competition, but I doubt that standard liquid helium fridge technology will suffice for a chip that implements dozens or hundreds of qubits.

By the time IBM enters the market there will be more early adopters of the D-Wave and Google chips, and the co-opetition between these two companies may have given birth to an emerging industry standard for the fridge technology. In a sense, this may lower the barriers to entry for new quantum chips if a new entrant can leverage this existing infrastructure. It would probably be a first for IBM to cater to a chip-interfacing standard that the company did not help design.

So while there's been plenty of news to report in the quantum computing hardware space, it is curious, and a sign of the times, that a recent Washington Post article on the matter opted to headline with a quantum computing software company, QxBranch. (Robert R. Tucci channeled the journalists at the WP when he wrote last week that the IBM news bodes well for software start-ups in this space.)

While tech and business journalists may not (and possibly never will) understand what makes a quantum computer tick, they understand perfectly well that any computing device is just dead weight without software, and that the latter will make the value proposition necessary to create a market for these new machines.


We need Big Data where it will actually make a difference

[Image: destroyed temples in Kathmandu]

Another earthquake has taken the lives of many thousands. As I write this blog post, scores of survivors are still trapped underneath debris and rubble.

It will take weeks, if not months, before the damage done to Nepal becomes fully apparent, in terms of life and limb but also economically and spiritually.

The world's poorest regions are often hardest hit because resilient structures that can withstand quakes of this magnitude are expensive.

Governments look to science to provide better earthquake warnings, but the progress of geophysical modeling is hampered by the lack of good, high quality data.

In this context, pushing the limits of remote sensing with new technologies such as quantum gravimeters becomes a matter of life and death, and it should make apparent that striving for ever more precise quantum clocks is anything but a vanity chase. After all, we are just now closing in on the level of accuracy needed to perform relativistic geodesy.
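
The connection between clock precision and geodesy is simple arithmetic: in Earth's gravity, two clocks separated by a height difference h tick at rates that differ by roughly gh/c², i.e. about one part in 10^16 per metre, which is exactly the regime the best optical clocks are now approaching. A quick sanity check of that number:

```python
g = 9.81        # m/s^2, Earth's surface gravity
c = 2.998e8     # m/s, speed of light
h = 1.0         # height difference in metres

fractional_shift = g * h / c**2
print(fractional_shift)   # ~1.1e-16 per metre of elevation
# A clock stable to 1e-17 could therefore resolve height differences of ~10 cm.
```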

It goes without saying that the resource extraction industry will be among the first to profit from these new techniques. While this industry has an image problem due to its less-than-stellar environmental track record, there's no denying that anything that drives the rapid, ongoing productization of these technologies is a net positive if it makes them affordable and widely accessible to the geophysicists who study the dynamics of active fault lines. Acquiring this kind of big data is our only chance of ever achieving a future in which our planet no longer shocks us with its deadly geological force.