Category Archives: D-Wave

Big Bad Quantum Computer Revisited

A recent DefenseNews article again put Shor’s algorithm front and center when writing about Quantum Computing.  Yet, there is so much more to this field, and with Lockheed Martin working with D-Wave one would expect this to be an open secret in the defence sector.  At any rate, this is a good enough reason to finally publish the full text of my quantum computing article, which the Hakin9 magazine asked me to write for their special issue earlier this year:

Who’s afraid of the big bad Quantum Computer?

“Be afraid; be very afraid, as the next fundamental transition in computing technology will obliterate all your encryption protection.”

If there is any awareness of quantum computing in the wider IT community, then odds are it is this phobia that is driving it.  Peter Shor probably didn’t realize that he was about to pigeonhole the entire research field when he published his work on what is now the best known quantum algorithm. But once the news spread that he had uncovered a method that could potentially speed up RSA decryption, the fear factor made it travel far and wide. Undoubtedly, if it wasn’t for the press coverage that this news received, quantum information technology research would still be widely considered just another academic curiosity.

So how realistic is this fear, and is breaking code the only thing a quantum computer is good for?  This article is an attempt to separate fact from fiction.

First let’s review how key exchange protocols that underlie most modern public key encryption schemes accomplish their task. A good analogy that illustrates the key attribute that quantum computing jeopardizes is shown in the following diagram (image courtesy of Wikipedia):

Diffie-Hellman key exchange (color mixing analogy)

Let’s assume we want to establish a common secret color shared by two individuals, Alice and Bob – in this example this may not be a primary color but one that can be produced as a mix of three others. The scheme assumes that there exists a common first paint component that our odd couple has already agreed on. The next component is a secret private color. This color is not shared with anybody. What happens next is the stroke of genius, the secret sauce that makes public key exchange possible. In our toy example it corresponds to the mixing of the secret, private color with the public one. As everybody has probably learned as early as kindergarten, it is easy to mix colors, but not so easy – practically impossible, in fact – to unmix them. From a physics standpoint the underlying issue is that entropy increases massively when the colors are mixed. Nature drives the process towards the mixed state, but makes it very costly to reverse the mixing. Hence, in thermodynamics, such processes are called “irreversible”.

This descent into the physics of our toy example may seem a rather pointless digression, but we will see later that in the context of quantum information processing, this will actually become very relevant.

But first let’s get back to Alice and Bob.  They can now publicly exchange their mix-color, safe in the knowledge that there are myriads of ways to get to this particular shade of paint, and that nobody has much of a chance of guessing their particular components.

Since in the world of this example nobody has any concept of chromatics, even if a potential eavesdropper were to discover the common color, they’d still be unable to discern the secret ones, as they cannot unmix the publicly exchanged color shades.

In the final step, Alice and Bob recover a common color by adding their private secret component. This is a secret that they now share to the exclusion of everybody else.

So how does this relate to the actual public key exchange protocol?  We get there by substituting numbers for the colors, say x and y for the common color and Alice’s secret one. The mixing of colors corresponds to a mathematical function G(x,y).  Usually the private secret numbers are picked from the set of prime numbers and the function G is simply a multiplication, exploiting the fact that integer factorization of large numbers is a very costly process. The next diagram depicts the exact same process, just mapped onto numbers in this way.

key_exchange_drawing2

If this simple method is used with sufficiently large prime numbers then the Shared Key is indeed quite safe, as there is no known efficient classical algorithm that allows for reasonably fast integer factorization.  Of course “reasonably fast” is a very fuzzy term, so let’s be a bit more specific: There is no known classical algorithm that scales polynomially with the size of the integer. So for instance, an effort to crack a 232-digit number (RSA-768) that concluded in 2009 took the combined CPU power of hundreds of machines (Intel Core2 equivalents) over two years to accomplish.
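
To make the mixing and exchange steps concrete, here is a minimal Python sketch of the simplified multiply-to-mix scheme described above. It is only a toy: real Diffie-Hellman mixes with modular exponentiation, and plain multiplication is trivially reversible by division, so the numbers and the mixing function are purely illustrative.

```python
# Toy sketch of the simplified "colour mixing by multiplication" scheme
# described above. Real Diffie-Hellman uses modular exponentiation, and
# multiplication alone is trivially reversible by division; this is only
# meant to make the mixing/exchange steps concrete.

common = 7919          # the publicly agreed "common colour"
alice_secret = 104729  # Alice's private "colour" (a prime she never shares)
bob_secret = 1299709   # Bob's private "colour"

# Each party mixes the common value with their secret: G(x, y) = x * y
alice_public = common * alice_secret   # sent openly to Bob
bob_public = common * bob_secret       # sent openly to Alice

# Each party mixes the received value with their own secret once more.
alice_shared = bob_public * alice_secret
bob_shared = alice_public * bob_secret

assert alice_shared == bob_shared      # both arrive at common * a * b
print("shared secret:", alice_shared)
```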

And this is where the quantum computing bogeyman comes into the picture, and the aforementioned Peter Shor.  This affable MIT researcher formulated a quantum algorithm almost twenty years ago that can factorize integers in polynomial time on the, as yet elusive, quantum hardware. So what difference would that actually make?  The following graph puts this into perspective:

scaling complexity graph
Scaling of Shor’s algorithm, roughly $ z^3 $ (red curve), versus the general number field sieve’s $ \exp\left( (64/9)^{1/3} \, z^{1/3} (\ln z)^{2/3} \right) $ (green curve), where z stands for the logarithm of the size of the integer. The green curve appears almost vertical on this scale because the necessary steps (y-axis) in this classical algorithm grow explosively with the size of the integer. Shor’s algorithm, in comparison, shows a fairly well behaved slope with increasing integer sizes, making it theoretically a practical method for factoring large numbers.
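
To get a feel for what these two curves mean in practice, here is a minimal sketch that evaluates both scaling functions from the caption for a few key sizes. Constants are ignored, so the numbers are only order-of-magnitude step counts, assuming the standard general number field sieve exponent for the classical side.

```python
import math

def shor_steps(z):
    """Polynomial scaling of Shor's algorithm, roughly z**3 with z = ln N."""
    return z ** 3

def gnfs_steps(z):
    """Super-polynomial scaling of the general number field sieve:
    exp((64/9)**(1/3) * z**(1/3) * (ln z)**(2/3)) with z = ln N."""
    return math.exp((64 / 9) ** (1 / 3) * z ** (1 / 3) * math.log(z) ** (2 / 3))

for bits in (256, 512, 768, 1024, 2048):
    z = bits * math.log(2)   # z = ln N for an integer N of the given bit length
    print(f"{bits:5d}-bit N:  Shor ~ {shor_steps(z):.2e}   GNFS ~ {gnfs_steps(z):.2e}")
```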

And that is why common encryptions such as RSA could not protect against a deciphering attack if a suitable quantum computer was to be utilized.  So now that commercial quantum computing devices such as the D-Wave One are on the market, where does that leave our cryptographic security?

First off: Not all quantum computers are created equal. There are universal gate-based ones, which are theoretically probably the best understood, and a textbook on the matter will usually introduce the subject from this vantage point.  But then there are also quantum simulators, topological designs and adiabatic ones (I will forgo quantum cellular automata in this article).  The only commercially available machine, i.e. the D-Wave One, belongs to the latter category but is not a universal machine, in that it cannot simulate an arbitrary Hamiltonian (this term describes the energy function that governs a quantum system).  Essentially this machine is a super-fast and accurate solver for only one class of equations.  This kind of equation was first written down to describe solid state magnets according to what is now called the Ising model.
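
To make this less abstract, here is a tiny brute-force sketch of the kind of Ising energy function such a machine minimizes. The fields and couplings below are made up for illustration; the point is only to show what “finding the ground state of an Ising model” means.

```python
from itertools import product

# Toy Ising instance: E(s) = sum_i h_i*s_i + sum_(i<j) J_ij*s_i*s_j, with s_i = +/-1.
# Values are invented; an annealer searches for the lowest-energy configuration
# of (much larger) instances of exactly this form.
h = {0: 0.5, 1: -1.0, 2: 0.2, 3: 0.0}
J = {(0, 1): -1.0, (1, 2): 0.8, (2, 3): -0.5, (0, 3): 0.3}

def energy(spins):
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# Brute force over all 2^n spin configurations -- feasible only for tiny n,
# which is exactly why a hardware solver for large instances is interesting.
best = min(product((-1, 1), repeat=len(h)), key=energy)
print("ground state:", best, "energy:", energy(best))
```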

But fear not: The D-Wave machine is not suitable for Shor’s algorithm.  The latter requires a gate programmable device (or a universal adiabatic machine) that provides plenty of qubits.  The D-Wave One falls short on both counts.  It has a special purpose adiabatic quantum chip with 128 qubits. Even if the architecture were compatible with Shor’s algorithm, the number of qubits falls far short: If d is the number of digits of the integer we want to factorize, then we need a bit more than d² qubits.  Since the integers we are interested in are pretty large, this is far outside anything that can be realized at this point.  For instance, for the RSA-768 challenge mentioned earlier, more than 232² = 53824 qubits would be required.

So you may wonder, what good is this vanguard of the coming quantum revolution if it can’t even handle the most famous quantum algorithm? To answer this let’s step back and look at what motivated the research into quantum computing to begin with. It wasn’t the hunt for new, more powerful algorithms but rather the insight, first formulated by Richard Feynman, that quantum mechanical systems cannot be efficiently simulated on classical hardware.  This is, of course, a serious impediment, as our entire science driven civilization depends on exploiting quantum mechanical effects.  I am not even referring to the obvious culprits such as semiconductor based electronics, laser technology etc., but to the more mundane chemical industry.  Everybody will probably recall the Styrofoam models of orbitals and simple molecules such as benzene (C6H6):

Representations of the benzene molecule

As the graphic illustrates, we know that sp2 orbitals facilitate the binding with the hydrogen atoms, and that a delocalized π electron cloud is formed from the overlapping pz orbitals.  Yet, these insights are inferred (and now, thanks to scanning electron microscopy, also measured), but they don’t flow from an exact solution of the corresponding Schrödinger equations that govern the physics of these kinds of molecules.

Granted, multi-body problems don’t have an exact solution in the classical realm either, but the corresponding equations are well behaved when it comes to numerical simulations.  The Schrödinger equation that rules quantum mechanical systems, on the other hand, is not. Simple scenarios are still within reach for classical computing, but not larger molecules (i.e. the kind that biological processes typically employ). Things get even worse when one wants to go further and model electrodynamics on the quantum level.  Quantum field theories require a summation over an infinite regime of interaction paths – something that will quickly bring any classical computer to its knees. Not a quantum computer, though.
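
A quick back-of-the-envelope sketch shows why exact classical simulation chokes so quickly: the state of n interacting two-level systems requires 2^n complex amplitudes, so memory alone becomes prohibitive long before run time does. The 16 bytes per amplitude below is just the usual double-precision complex number assumption.

```python
def state_vector_gib(n, bytes_per_amplitude=16):
    """Memory (in GiB) to store the full quantum state of n two-level systems:
    2**n complex amplitudes, one complex double (16 bytes) each."""
    return 2 ** n * bytes_per_amplitude / 2 ** 30

for n in (10, 30, 50, 100):
    print(f"{n:3d} two-level systems -> {state_vector_gib(n):.3e} GiB")
```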

This summer a paper was published (Science June 1st issue) that showed conclusively that for this new breed of machine a polynomial scaling of these notorious calculations is indeed possible. (As for String theory simulations, the jury is still out on that – but it has been suggested that maybe it should be considered as an indication of an unphysical theory if a particular flavor of a String theory cannot be efficiently simulated on a quantum computer).

Quantum Computing has, therefore, the potential to usher in a new era for chemical and nano-scale engineering, putting an end to the still common practice of having to blindly test thousands of substances for pharmaceutical purposes, and finally realizing the vision of designing smart drugs that specifically match targeted receptor proteins.

Of course, even if you can model protein structures, you still need to know which isomer is actually the biologically relevant one. Fortunately, a new technology deploying electron holography is expected to unlock a cornucopia of protein structure data.  But this data will be of limited use if you cannot understand how these proteins fold. The latter is going to be key for understanding the function of a protein within a living organism.

Unfortunately, simulating protein folding has been shown to be an NP-hard problem. Quantum computing once again comes to the rescue, allowing for a polynomial speed-up of these kinds of calculations.  It is not an exaggeration to expect that in the not too distant future lifesaving drug development will be facilitated this way, and the first papers using D-Wave’s machine in this manner have already been published.

This is just one tiny sliver of the fields that quantum computing will impact.  Just as with the unexpected applications that ever-increasing conventional computing power enabled, it is safe to say that we, in all likelihood, cannot fully anticipate how this technological revolution will impact our lives.  But we can certainly identify some more areas that will immediately benefit from it: Artificial Intelligence, graph theory, operational research (and its business applications), database design etc. One could easily file another article on each of these topics while only scratching the surface, so the following observations have to be understood as extremely compressed.

It shouldn’t come as a surprise that quantum computing will prove fruitful for artificial intelligence.  After all, one other major strand that arguably ignited the entire research field was contemplation of the nature of the human mind.  The prominent mathematician and physicist Roger Penrose, for instance, argued vehemently that the human mind cannot be understood as a classical computer, i.e. he is convinced (almost religious in his certainty) that a Turing machine in principle cannot emulate a human mind.  Since it is not very practical to try to put a human brain into a state of controlled quantum superposition, the next best thing is to think this through for a computer.  This is exactly the kind of thought experiment that David Deutsch discussed in his landmark paper on the topic.  (It was also the first time that a quantum algorithm was introduced, albeit not a very useful one, demonstrating that the imagined machine can do some things better than a classical Turing machine.)

So it is only fitting that one of the first demonstrations of D-Wave’s technology concerned the training of an artificial neural net.  This particular application maps nicely onto the structure of their system, as the training is mathematically already expressed as the search for a global minimum of an energy function that depends on several free parameters.  To the extent that an optimization problem can be recast in this way, it becomes a potential candidate to benefit from D-Wave’s quantum computer.  There are many applicable use cases for this in operational research (i.e. logistics, supply chain etc.) and business intelligence.
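
As a toy illustration of that recipe, the sketch below “trains” a minimal perceptron whose weights are restricted to ±1 by exhaustively minimizing its squared training error. The network and data are invented; an annealer would search the same kind of discrete energy landscape, just over far more variables than a brute-force loop could ever cover.

```python
from itertools import product

# Toy illustration of the recipe in the text: express training as the search
# for the global minimum of an energy function over discrete parameters.
# Here a tiny perceptron with weights restricted to {-1, +1} is "trained" by
# exhaustively minimizing its squared error over an invented data set.
data = [((1, -1, 1), 1), ((-1, -1, 1), -1), ((1, 1, -1), 1), ((-1, 1, -1), -1)]

def sign(x):
    return 1 if x >= 0 else -1

def energy(weights):
    return sum((label - sign(sum(w * x for w, x in zip(weights, features)))) ** 2
               for features, label in data)

best = min(product((-1, 1), repeat=3), key=energy)
print("best weights:", best, "training error:", energy(best))
```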

While this is all very exciting, a skeptic will rightfully point out that just knowing a certain tool can help with a task does not tell us how well it will stack up to conventional methods.  Given the price tag of $10 million, it had better be good.  There are unfortunately not a lot of benchmarks available, but a brute force search method implemented to find some obscure numbers from graph theory (Ramsey numbers) gives an indication that this machine can substitute for some considerable conventional computing horsepower i.e. about 55 MIPS, or the equivalent of a cluster of more than 300 of Intel’s fastest commercially available chips.

Another fascinating aspect that will factor into the all-important TCO (Total Cost of Ownership) considerations is that a quantum computer will actually require far less energy to achieve this kind of performance (its energy consumption will also only vary minimally under load).  Earlier I described a particular architecture as adiabatic, and it is this term that describes this counterintuitive energy characteristic.  It is a word that originated in thermodynamics and describes when a process progresses without heat exchange. I.e. throughout most of the QC processing there is no heat-producing entropy increase.  At first glance, the huge cooling apparatus that accompanies a quantum computer seems to belie this assertion, but the reason for this considerable cooling technology is not a required continuous protection of the machines from over-heating (like in conventional data centers) but because most QC implementations require an environment that is considerably colder than even the coldest temperature that can be found anywhere in space (the surface of Pluto would be outright balmy in comparison).

Amazingly, these days commercially available helium cooling systems can readily achieve these temperatures close to absolute zero.  After the initial cool-down, the entire remaining cooling effort is only employed to counteract the thermal in-flow that even the best high vacuum insulated environments will experience.  The quantum system itself will only dissipate a minimal amount of heat when the final result of an algorithm is read out. That is why the system pulls just 15 kW in total.  This is considerably less than what our hypothetical 300 CPU cluster would consume under load, i.e. well over 100 W per node, adding up to more than double D-Wave’s power consumption.  And the best part: The cooling system, and hence power consumption, will remain the same for each new iteration of chips (D-Wave recently introduced their new 512-qubit Vesuvius chip) – and so far the company has steadily followed its own version of Moore’s law, doubling integration about every 15 months.

So although D-Wave’s currently available quantum computing technology cannot implement Shor’s algorithm, or the second most famous one, Grover’s search over an unstructured list, the capabilities it delivers are nothing to scoff at. With heavyweights like IBM pouring considerable R&D resources into this technology, fully universal quantum processors will hit the market much earlier than most IT analysts (such as Gartner) currently project. Recently IBM demoed a 4-qubit universal chip (interestingly using the same superconducting foundry approach as D-Wave). If they were to manage a doubling of their integration density every 18 months, then 256-qubit chips would be less than a decade away.

While at this point current RSA implementations are not in jeopardy, this key exchange protocol is slowly reaching the end of its life cycle.  So how best to mitigate against future quantum computing attacks on the key exchange?  The most straightforward approach is simply to use a different “color-mixing” function than integer multiplication, i.e. a function that even a quantum computer cannot unravel within a polynomial time frame. This is an active field of research, but so far no consensus on a suitable post-quantum key exchange function has evolved. At least it is well established that most current symmetric crypto (ciphers and hash functions) can be considered secure from the looming threat.

As to key exchange, the ultimate solution may also be provided by quantum mechanics itself, in the form of quantum cryptography, which in principle allows a key to be transferred in such a manner that any eavesdropping will be detectable. To prove that this technology can be scaled for global intercontinental communication, the current record holder for the longest distance of quantum teleportation, the Chinese physicist Juan Yin, plans to repeat this feat in space, opening up the prospect of ultra secure communication around the world.  Welcome to the future.

Quantum Computing Taxonomy

The dodo and analog computers are best pals in the museum aisles of the extinct. Some argue this spares us the trouble of trying to figure out in what way they might be Turing complete.

Quantum computers are not created equal. In this, they also differ drastically from their common classical counterparts. Since analog computers went the way of the dodo, our current digital devices can all be modeled as Turing machines. This means that from a computational standpoint they are equivalent and cannot perform in the realm of what is now referred to as hyper-computation. David Deutsch’s seminal paper that jump-started modern quantum computing research followed this lead by considering a Turing machine under quantum state superposition.

This line of thought was extremely fruitful (e.g. Shor’s algorithm) and began to overshadow another approach more in line with Feynman’s original idea, i.e. a quantum simulator: a system that is quantum mechanical in nature but can be configured to simulate various scenarios.

This dichotomy came back to haunt D-Wave, and contributed to the harsh rejection on the part of many researchers in the field when the company introduced its impressive machines, which were firmly in the mold of a configurable quantum simulator.

Albeit with a very different hardware realization, the D-Wave simulator is very much in the same league as the notable architecture developed at NIST (don’t get confused by the use of the word crystal in the linked article, what’s meant is a grid). Both are scalable devices that solve a generalized Ising equation.

Interestingly, the NIST researchers didn’t have to face the kind of blow-back that D-Wave had to endure, which just goes to show that sometimes no marketing is the better marketing. These quantum simulator devices are in a sense mature technology and have proven to be scalable. D-Wave’s Rose’s law is a nice illustration of that (albeit somewhat exaggerated). The company managed to double the qubit count on its chips about every 15 months for the last 10 years. On the other hand, the holy grail of quantum computing, a truly universal machine in the mould of Deutsch’s Turing model, is harder to come by. There have been impressive, albeit small scale, demonstrations of Grover’s and Shor’s algorithms, but with the latter’s record achievement standing at the factorization of just 21, it is clear that there is still a long road ahead.

Intriguingly, adiabatic quantum computing, the concept that is so successfully exploited by the D-Wave company and NIST, also allows for universal computing: An alternative factorization algorithm has been demonstrated to crack this task for the number 143. Yet, it is unclear if universal adiabatic quantum computing can achieve the quantum speed-up that is theoretically guaranteed when implementing Shor’s algorithm.

On the other hand it is a certainty that the experimental set-up used in the record breaking experiment will not scale. (There are only so many hydrogen nuclei in molecules of 1-bromo-2-chlorobenzene). There is a solid theoretical framework that indicates that universal adiabatic quantum computing may indeed be equivalent to gate based QC, but then there are also some tentative numerical simulations that indicate that this may not pan out (comments and wagers on this issue are most welcome in the comment section).

Universal gate based quantum computing relies on quantum error correction to achieve scalability. It is clearly an uphill battle against decoherence. This motivated the topological quantum computing approach that requires anyons (Majorana quasi-particles) to implement a regime where an ingrained quantum symmetry counteracts decoherence. As the latter have only very recently been shown experimentally to exist, this concept is the furthest away from realization.

So without any claim to completeness and with the caveat that it is difficult to make predictions, especially about the future, here is my Gartner inspired quantum computing taxonomy graphic (caveat: it uses the SVG format and may not display in older browsers):

Click on the graph for a larger image (also, if you use Google Chrome, the browser sometimes seems to render embedded SVG only after directly going to the image file; please let me know in the comment section if you see any strange SVG effects). If there is more than one label the links are only on the text rather than the entire box. Here is how to read the graph: The axes are fairly straightforward; the more mature and scalable the technology the better. Maturity is measured in estimated time to market. The numbers are pure conjecture on my part (please feel free to submit your own estimates in the comment section). The following legends explain the color and icon choices:

Hardware: Ion Trap, Photonic (laser warning icon), SQUIDs, Other Solid State
Architecture: Adiabatic Ising (Quantum Simulator), Adiabatic Universal, Gate Based, Topological

Most architectures can be realized with a variety of very different quantum systems; the latter represent the hardware of a quantum computer and are listed on the left. The most universal systems (theoretically guaranteed to provide quantum speed-ups) are able to implement quantum circuits such as Shor’s algorithm and are depicted in green.  The quantum simulator family is kept in blue, the universal adiabatic architecture in yellow.

Please let me know if there is an effort that you feel should be placed on this chart. I believe there are many that I missed penciling in. To me this is just a starting point and a first draft. I plan to maintain this chart in one way or another to keep track of the various diverse QC efforts, and I will be happy to incorporate input.

After writing this blog entry a short Google search revealed that R.R. Tucci beat me to the punch of using the taxonomy headline (by a mere four years).  So here’s a shout-out to his classification of quantum computing software for the common PC (and Mac).

What’s in a Name

Thanks to Mad Men, the concept of brand equity has been widely popularized, the most prominent example being that of a dog food company’s brand becoming toxic after it was reported that it used horse meat in its product (a practice that was outlawed in the US in 1970).

D-Wave is still in the early stages of building its brand, so the fact that some casual observers have very negative views, as apparent from some comments on my last entry, is worrying. Unbeknownst to me, Scott Aaronson put up a blog entry around the same time, and I didn’t immediately pick up on it as I was travelling.  Turns out, the large comment thread points to a D-Wave 2007 event as the cause for this brand damage. At that time the expectations of theoretical computer scientists collided head-on with the reality of D‑Wave.

The comment thread at Scott’s blog is chock full of crazy, running the gamut from a poster accusing D-Wave of fraud to another one showering Scott with profanity. I always found it admirable that Scott doesn’t censor such comments but lets them speak for themselves (although Joy Christian proved that his saintly patience is not inexhaustible).

Scott initially challenged D-Wave strongly after the 2007 event (long before I started to pay closer attention – I heard about the 16-qubit claim at the time but didn’t consider such a chip commercially viable yet).  Recently he buried the hatchet, but of course for a theoretical computer scientist the question of what kind of complexity class a certain hardware design can address will remain at the forefront.

Fortunately Daniel Lidar, who is currently studying a D-Wave machine, reported in the comment section that he plans to publish on exactly that soon.  While other theorists such as Greg Kuperberg seem to be unwilling to move forward and past the 2007 “Sudoku crisis” (video of this “scandalous” demo can be found here), Scott and Daniel will clearly move this to a constructive debate based on published research.

The most prominent argument against this demo was that 16 qubits could not contain the entire parameter space of a Sudoku solution.  Of course this ignores how the D-Wave system is implemented, and that it works in conjunction with a classical system as a kind of quantum coprocessor.
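
The hybrid pattern is easy to caricature in a few lines of Python: the classical host owns the full problem and repeatedly hands sub-problems small enough for the available qubits to a solver call. The `anneal_block` function below is a brute-force stand-in for that call and emphatically not D-Wave’s actual API; the toy chain of fields and couplings is likewise invented.

```python
from itertools import product

# Hypothetical sketch of the "quantum coprocessor" pattern described above.
# `anneal_block` brute-forces a small block of variables and stands in for a
# call to the quantum hardware; it is NOT D-Wave's actual API.
N = 32
h = {i: ((-1) ** i) * 0.5 for i in range(N)}      # toy local fields
J = {(i, i + 1): -1.0 for i in range(N - 1)}      # toy chain couplings

def energy(spins):
    return (sum(h[i] * spins[i] for i in h)
            + sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items()))

def anneal_block(spins, block):
    """Stand-in for the coprocessor: exhaustively minimize the energy over
    the spins in `block`, holding all other spins fixed."""
    best, best_e = None, float("inf")
    for assignment in product((-1, 1), repeat=len(block)):
        trial = {**spins, **dict(zip(block, assignment))}
        e = energy(trial)
        if e < best_e:
            best, best_e = trial, e
    return best

spins = {i: 1 for i in range(N)}                  # classical first guess
for _ in range(2):                                # a couple of sweeps
    for start in range(0, N, 8):                  # 8 variables per "hardware" call
        spins = anneal_block(spins, list(range(start, start + 8)))

print("final energy:", energy(spins))
```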

The Sudoku solver is not currently given as a coding example on the D-Wave developer portal.  Since this old controversy can still cause damage to the company’s brand I think it may be a good idea to include it there.

It’s hard to gauge the extent of the brand damage that D-Wave suffered.  Most IT folks will probably not have heard of the company yet, which mitigates the problem, but on the other hand it left its traces in Wikipedia. The latter will almost certainly be one of the first stops for anybody new to the subject when trying to form an opinion.  It is the first non-affiliated site that comes up in a Google search (when discounting recent news headlines), and it doesn’t get better from there. The next search result is this outright defamatory article from the highly respected IEEE, an organisation that is supposed to be objective.

Although my professional life is mostly IT consulting, I have had my share of high tech marketing challenges to deal with (for a while as a BI software product manager).  In my experience a problem like this needs to be addressed head on.  My advice would be to very openly embrace this old controversy and to position how and why D-Wave’s design differs from universal gate-based quantum computing.  The company does that in various places of its web presence, but it took me 10 minutes of googling to find this old physicsandcake posting that addresses this nicely in simple terms. A potential customer coming across old and new misinformation should be able to very quickly find these kinds of counter arguments.


If Quantum Computing Makes a Splash and Nobody is Listening …

“If a tree falls in a forest, and no one is around to hear it, does it make a sound?” is a well known metaphysical toy question.  What is less known is its business corollary: If a start-up makes a splash, but nobody pays attention, did it actually ever really emerge from stealth mode?

It is rather welcome news that the latest Nobel prizes in physics bring some attention to quantum computing, but as always, these machines are presented as a future possibility rather than a commercial reality.

Yet, D-Wave already ships the first commercial quantum computing device, and hardly anybody seems to know. Sometimes this ignorance appears to be almost willful.  The company just doesn’t seem to be able to generate ubiquitous mainstream media awareness.

While their device is not a universal quantum computer, it nevertheless facilitates an extremely useful and versatile form of quantum optimization, with plenty of real life applications. The somewhat arcane Ramsey number research already demonstrated the power of the Rainier chip, and a recent paper published in Nature shows how NP-hard protein folding scenarios can be calculated on their hardware.  An interesting wrinkle to this latest research is that the device doesn’t find the optimal solution as reliably as in the Ramsey number case, but the other solutions are also valid folding modes of the examined protein and provide additional insight.

The importance of this kind of research can hardly be overstated, so why is this company not mentioned every time somebody writes about quantum computing?

Is it a case of crying wolf too many times?  Is there a perception that the company may have over-promised and under-delivered in the past? Is it a lingering after-effect of the Scott Aaronson ham sandwich controversy?

Your guess is as good as mine.  If you have any thoughts or insights on this please share them in the comments section.

Explaining Quantum Computing – Blogroll Memory Hole Rescue

So what is quantum computing?

This is the most dreaded question for anybody involved with this field if posed by a friend or relative without a physics background.  When I am too tired or busy to make an honest effort, I usually answer by quoting Feynman’s quip on quantum mechanics (click here to listen to the man himself – the quote appears about 6:45 min into the lecture):

“A lot of people understand the theory of relativity in some way or other.  (..) On the other hand, I think I can safely say that nobody understands quantum mechanics.”

The crux of the matter is that Quantum Computing derives its power from precisely the same attributes that make this realm of physics so alien to us.

It’s small comfort that greater minds than mine have been mulling this conundrum. For instance, a while back Scott Aaronson described his struggle to write a column for the New York Times that describes Quantum Computing.

Then there is Michael Nielsen’s take on it, a brilliant write-up illustrating why there is really no reason to expect a simple explanation.

But if this hasn’t utterly discouraged you then I have this little treat from D-Wave’s blog. You need to be willing to tolerate a little math, understanding that an expression like this

$ \displaystyle \sum_{i} x_{i} $

means you are summing over a bunch of variables x indexed by i

$ \displaystyle \sum_{i} x_{i} = x_{0}+x_{1}+x_{2}+ \dots $

Other than that it just requires you to contemplate Schroedinger’s light switches.  Just as his cat can be thought of as dead and alive at the same time, his light switches are in a superposition of On and Off.  Strictly speaking, D-Wave’s description is specific to their particular adiabatic quantum chip design, but nevertheless, if you get your head around this, you will have a pretty good idea why a Quantum Computer’s abilities go beyond the means of a classical Turing machine.

Quantum Computing – So what is it good for? – Updated below

If anybody in the wider IT community has heard about Quantum Computing, it’s usually in the context of how it may obliterate our current industry standard encryption (e.g. RSA). Probably Peter Shor didn’t realize that he was about to define the entire research field when he published his work on what is now the most famous (and notorious) quantum algorithm. But once it was reported that he uncovered a method that could potentially speed up RSA decryption, anxiety and angst spread the news far and wide.

Odds are, if it wasn’t for the IT media stir that this news caused, quantum information technology would still be mostly considered just another academic curiosity.

With the modest means of my blog, I have tried to create some awareness that quantum computing is much bigger than this, by pointing at use cases in enterprise IT and social web services, but admittedly these were general observations that did not descend to the level of implementation details.  That’s why I was very pleased when coming across this paper describing how quantum computing can be leveraged to calculate Google’s PageRank.  The authors argue that for a realistic web graph topology, their approach can

… provide a polynomial quantum speedup. Moreover, the quantum PageRank state can be used in “q-sampling” protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes …

So why does this matter?  Most bloggers will have heard about Google’s PageRank algorithm as it is crucial in determining how far up your site will appear in a Google search. But just as most people don’t really think about how power gets to your home’s outlet, few will have contemplated how this algorithm works and how much raw CPU power Google must spend to keep it updated.

For those inclined to learn more about how these kinds of algorithms work, there is no better treat than this blog post on exactly this topic from the one and only Michael Nielsen (whose standard tome on quantum computing is deservedly the first reference in the paper).
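
For readers who want something concrete before diving into Nielsen’s post, here is a minimal classical PageRank computed by power iteration on an invented four-page web graph. The quantum approach discussed in the paper is argued to reach the corresponding distribution with a polynomial speedup for realistic web graph topologies.

```python
# Minimal classical PageRank via power iteration on a toy four-page web graph.
# links[i] lists the pages that page i links to; damping 0.85 is the usual choice.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, damping = len(links), 0.85

rank = [1.0 / n] * n
for _ in range(100):                       # iterate until (roughly) converged
    new_rank = [(1.0 - damping) / n] * n
    for page, outgoing in links.items():
        for target in outgoing:
            new_rank[target] += damping * rank[page] / len(outgoing)
    rank = new_rank

print({page: round(r, 3) for page, r in enumerate(rank)})
```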

Google doesn’t advertise how often and how quickly they can run PageRank on the entire visible web space.  Clearly, the more often a search engine can update the rank values the better.  It is also a safe bet that executing this algorithm on a continuous basis contributes to Google’s considerable power consumption. It is laudable that Google invests in “green” data centers, but it seems they could save a lot on their energy bill if they got behind quantum computing.  The “adiabatic” classification of this algorithm is a give-away: the unitary process evolution does not increase entropy, and thus, unlike classical computing, does not require energy input for that part of the computation.  That is why adiabatic quantum computing machines like the D-Wave One have the potential to drastically lower power consumption in data centers.  It’s really a twofer:  You get superior algorithms and save power.

Oh, and by the way, the PageRank for this blog is still 0 out of 10.

Update: 
The way my post is written it may give the impression that the D-Wave One can execute this algorithm.  But as this machine is a specialized QC device, this is not necessarily the case.  (This question was raised in the related LinkedIn discussion to this post, which can be found here.)

Quantum Computing – A Matter of Life and Death

Even the greatest ships can get it wrong.

In terms of commercial use cases, I have looked at corporate IT, as well as how a quantum computer will fit in with the evolving cloud computing infrastructure.  However, where QC will make the most difference (as in, a difference between life and death) goes entirely unnoticed.  Certainly by those whose lives will eventually depend on it.

Hyperbole? I think not.

As detailed in my brief retelling of quantum computing history, it all started with the realization that most quantum mechanical systems cannot be efficiently simulated on classical computers.  Unfortunately, the sorry state of public science understanding means that this elicits hardly more than a shrug, even from those who make a living writing about it (not the case for this humble blogger, who toils away at it as a labor of love).

A prime example of this is a recent, poorly sourced article from the BBC that disses the commercial availability of turnkey-ready quantum computing without even mentioning D‑Wave, and at the same time proudly displays the author’s ignorance about why this technology matters (emphasis mine):

“The only applications that everyone can agree that quantum computers will do markedly better are code-breaking and creating useful simulations of systems in nature in which quantum mechanics plays a part.”

Well, it’s all good then, isn’t it? No reason to hurry and get a quantum computer on every scientist’s desk.  After all, only simulations of nature in which quantum mechanics plays a part will be affected.  It can’t possibly be all that important then.  Where the heck could this esoteric quantum mechanics stuff possibly play an essential part?

Oh, just all of solid state physics, chemistry, micro-biology and any attempts at quantum gravity unification.

For instance, one of the most important aspects of pharmaceutical research is to understand the 3D protein structure, and then to model how this protein reacts in vivo using very calculation-intensive computer simulations.

There has been some exciting progress in the former area.  It used to be that only proteins that lend themselves to crystallization could be structurally captured via X-ray scattering.  Now, recently developed low energy electron holography has the potential to revolutionize the field.  Expect to see a deluge of new protein structure data.  But despite some progress with numerical approaches to protein folding simulations, the latter remains NP-hard.  On the other hand, polynomial speed-ups are possible with quantum computing.  Without it, the inevitable computational bottleneck will ensure that pharmaceutical research is forever condemned to its current expensive scatter-shot approach to drug development.

There is no doubt in my mind that in the future, people’s lives will depend on drugs that are identified by strategically deploying quantum computing in the early drug discovery process.  It is just a matter of when. But don’t expect to learn about this following BBC’s science news feed.

Analog VLSI for Neural Networks – A Cautious Tale for Adiabatic Quantum Computing

Update:  This research is now again generating some mainstream headlines.  Will be interesting to see if this hybrid chip paradigm will have more staying power than previous analog computing approaches.

###

Fifteen years ago I attempted to find an efficient randomized training algorithm for simple artificial neural networks suitable for implementation on a specialized hardware chip. The latter’s design only allowed feed-forward connections, i.e. back-propagation on the chip was not an option.  The idea was that, given the massive acceleration of the network’s execution on the chip, some sort of random walk search might be at least as efficient as optimized backprop algorithms on general purpose computers.
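
For what it’s worth, the idea is easy to sketch: if the forward pass is essentially free (because it runs on dedicated hardware), even a naive random-walk hill climb over the weights can stand in for back-propagation. The tiny network, the XOR data and all parameters below are invented purely for illustration.

```python
import math
import random

# Sketch of the idea described above: if the forward pass is extremely cheap
# (e.g. runs on dedicated hardware), weights can be trained by a simple random
# walk / hill climb instead of back-propagation. Network and data are toy.
random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR

def forward(w, x):
    """2-2-1 feed-forward net with tanh units (weights include biases)."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in data)

weights = [random.uniform(-1, 1) for _ in range(9)]
best_loss = loss(weights)
for _ in range(20000):                               # random-walk search
    candidate = [w + random.gauss(0, 0.1) for w in weights]
    if loss(candidate) < best_loss:                  # keep only improvements
        weights, best_loss = candidate, loss(candidate)

print("final loss:", round(best_loss, 4))
```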

My research group followed a fairly conventional digital design, whereas at the time analog VLSI was all the rage.  A field (like so many others) pioneered by Carver Mead. On the face of it, this makes sense, given that biological neurons obviously work with analog signals, yet attain remarkable robustness (robustness being the typical weak spot of any sort of analog computing). Yet, it is also this robustness that makes the “infinite” precision that is the primary benefit of analog computing somewhat superfluous.

Looking back at this, I expected this analog VLSI approach to have been a bit of an engineering fad, as I wasn’t aware of any commercial systems ever hitting the market – of course I could have easily missed a commercial offering if it followed a similar trajectory as the inspiring but ultimately ill-fated transputer. In the end the latter was just as much a fad as the Furby toy of yesteryear, yet arguably much more inspiring.

To my surprise and ultimate delight a quick statistic on the publication count for analog neural VLSI proves me wrong and there is still some interesting science happening:

Google Scholar Publication Count for Neural Analog VLSI

So why are there no widespread commercial neuromorphic products on the market? Where is the neural analog VLSI co-processor to make my laptop more empathic and fault tolerant? I think the answer simply comes down to Moore’s law.  A flagship neuromorphic chip currently designed at MIT boasts a measly 400 transistors.  I don’t want to dispute its scientific usefulness – having a detailed synapse model in silicon will certainly have its uses in medical research (and the Humane Society will surely approve if it cuts down on the demise of guinea pigs and other critters). On the other hand, the Blue Brain project claims it has already successfully simulated an entire rat cortical column on its supercomputer, and its goal is nothing less than a complete simulation of the rodent’s brain.

So what does this have to do with adiabatic quantum computing?  Just like neuromorphic VLSI technology, its main competition for the foreseeable future is conventional hardware. This is the reason why I was badgering D-Wave when I thought the company didn’t make enough marketing noise about the Ramsey number research performed with its machine.  Analog neural VLSI technology may find a niche in medical applications, but so far there is no obvious market niche for adiabatic quantum computing. Scott Aaronson argued that the “coolness” of quantum computing will sell machines.  While this label has some marketing value, not least due to some of his inspired stunts, this alone will not do.  In the end, adiabatic quantum computing has to prove its mettle in raw computational performance per dollar spent.

(h/t to Thomas Edwards, who posted a comment a while back in the LinkedIn Quantum Information Science group that inspired this post)

Keep Walking – No Quantum Computing Jobs on Offer

UPDATE: Every so often I get a search engine click on this post, presumably from QIS job seekers. So despite the dreary title I can now give you a link to this job portal.

When promoting my last blog post in the LinkedIn “Quantum Computing and Quantum Information” group a strange thing happened.  Rather than staying in the main discussions section of that group my item was moved to the jobs section. So let’s be clear about this: I don’t have jobs to offer.

My last entry at this LinkedIn group is now the only one amidst a sea of white, fertile, untouched HTML background in the jobs section.  Quite a contrast to the High Performance Computing LinkedIn groups, which have dozens of jobs posted.

This made me ponder the Quantum Information Science (QIS) job market.  While I expect that in the not so distant future there will be plenty of jobs for highly qualified specialists to be had, we are certainly living in a very different reality as of now.  There is a reason that the byline of this blog is “Observations on the nascent quantum computing industry”.

With the exception of D-Wave the few other commercial players (i.e. the IBMs of the world) are still in basic research mode. Outside these companies, and a very small Quantum Key Distribution sector, there are essentially no QIS jobs in the private sector. It seems to me the job market still looks very much the same as described by Robert Tucci 1 ½ years ago.

Don’t peruse this blog because you are looking for a job in this area. But by all means please come here, and check back regularly, if you are intrinsically interested in Quantum Computing.

Update: Todd Brun was so kind as to point out his QIS job site as a worthwhile destination for the wary job seeker who wandered over here looking for QC related work.

Quantum Computing for the Rest of Us? – UPDATED

Everybody who is following the Quantum Computing story will have heard by now about IBM’s new chip. This certainly gives credence to the assumption that superconducting Josephson junction-based technology will win the race.  This may seem like bad news for everybody who was hoping to be able to tuck away a quantum computer under his or her desk some day.

Commercial liquid helium cooling has come a long way, but shrinking it in price and size to fit into a PC tower form factor is a long way off. Compare and contrast this to liquid nitrogen cooling. With a high temp superconductor and some liquid nitrogen poured from a flask, any high school lab can demonstrate this neat Meissner effect:

Cooled with liquid nitrogen (a perfectly harmless fluid unless you get it on your clothes)

Good luck trying this with liquid helium (for one thing it may try to climb up the walls of your container – but that’s a different story).

Nitrogen cooling on the other hand is quite affordable.  (Cheap enough to make the extreme CPU overclocking scene quite fond of it).

So why then are IBM and D-Wave making their chips from the classic low temp superconductor Niobium?  In part the answer is simple pragmatic engineering: This metal allows you to adopt fabrication methods developed for silicon.

The more fundamental reason for the low temperature is to prevent decoherence, or to formulate it positively, to preserve pure entangled qubit states. The latter is especially critical for the IBM chip.

The interesting aspect of D-Wave’s more natural – but less universal – adiabatic quantum computing approach is that dedicated control of the entanglement of qubits is not necessary.

It would be quite instructive to know how the D-Wave chip performs as a function of temperature below Tc (~9K). If the performance doesn’t degrade too much, maybe, just maybe, high temperature superconductors are suitable for this design as well.  After all, it has been shown that they can be used to realize Josephson junctions of high enough quality for SQUIDs. On the other hand, the new class of iron based superconductors shows promise for easier manufacturing, but has a dramatically different energy band structure, so that at this point all bets seem to be off on this new material.

So, not all is bleak (even without invoking the vision of topological QC that deserves its own post). There is a sliver of hope for us quantum computing aficionados that even the superconducting approach might lead to an affordable machine one of these days.

If anybody with a more solid Solid State Physics background than me wants to either feed the flames of this hope or dash it – please make your mark in the comment section.

(h/t to Noel V. from the LinkedIn Quantum Computing Technology group and to Mahdi Rezaei for getting me thinking about this.)

Update:

Elena Tolkacheva from D-Wave was so kind as to alert me to the fact that my key question has been answered in a paper that the company published in the summer of 2010. It contains graphs that illustrate how the chip performs at three different temperatures: (a) T = 20 mK, (b) T = 35 mK, and (c) T = 50 mK.  Note: These temperatures are far below niobium’s critical temperature of 9.2 K.

The probability of the system finding the ground state (i.e. the “correct” answer) clearly degrades with higher temperature – interestingly, though, not quite as badly as simulated. This puts a severe damper on my earlier expressed hope that this architecture might be suitable for HTCs, as the degradation happens so far away from Tc.

Kelly Loum, a member of the LinkedIn Quantum Physics group, helpfully pointed out that,

… you could gather results from many identical high temperature processors that were given the same input, and use a bit of statistical analysis and forward error correction to find the most likely output eigenstate of those that remained coherent long enough to complete the task.

… one problem here is that modern (classical, not quantum) FECs can fully correct errors only when the raw data has about a 1/50 bit error rate or better. So you’d need a LOT of processors (or runs) to integrate-out the noise down to a 1/50 BER, and then the statistical analysis on the classical side would be correspondingly massive. So my guess at the moment is you’d need liquid helium.

The paper also discusses some sampling approaches along those lines, but clearly there is a limit to how far they can be taken.  As much as I hate to admit it, I concur with Kelly’s conclusion.  Unless there is some unknown fundamental mechanism that makes the decoherence dynamics of HTCs fundamentally different, the D-Wave design does not seem a likely candidate for a nitrogen cooling temperature regime. On the other hand, drastically changing the decoherence dynamics is the forte of topological computing, but that is a different story for a later post.