Recently, Science magazine prominently featured Quantum Information Processing on their cover:
The periodical has a great track record in publishing on QIS, and this is the main reason why I subscribe to it.
Unfortunately, reading this issue yet again drove home what a dark horse enterprise D-Wave is. And this despite the recent prominent news that D-Wave effortlessly passed a test devised to check the quality of the entanglement realized on their chip. There is hardly any lingering doubt that they managed to leverage real quantum annealing. Yet neither their approach nor adiabatic quantum computing in general is featured at all in this issue of Science. In the editorializing introduction to the cover story, dubbed “the Future of Quantum Information Processing”, these fields aren’t even mentioned in passing. Are we to conclude that there is no future for adiabatic quantum computing?
I found this so puzzling that it prompted me to write my first ever letter to the editors of Science:
The Science journal has been invaluable in the past in advancing the QIS field, publishing an impressive roster of groundbreaking papers. Yet, it seems to me the editorializing introduction to the March 8th cover story by Jelena Stajic dropped the ball.
If QIS is prominently featured on the cover of your journal, shouldn’t the reader at least expect a cursory exposition of all prominent developments in the entire field? There is nothing wrong with the authors of the paper on the superconducting Josephson junction approach to QC restricting themselves to universal gate-based architectures. Nevertheless, at least in the accompanying editorial, I would have expected a nod towards adiabatic quantum computing and approaches utilizing quantum annealing. This oversight seems all the more glaring as the latter has already resulted in a commercial offering.
Sincerely,
Henning Dekant
Disclaimer: I am not affiliated with the company D-Wave, which ships the first commercial quantum computing device, just puzzled that an exemplary publication like Science doesn’t feature the entire spectrum of approaches towards quantum computing.
My bet with a sceptical QC and QIS expert is still outstanding, and in our exchange he mentioned that he didn’t expect D-Wave to pass this first entanglement hurdle. The next hurdle to clear is actual benchmarking against established chip architectures.
If the D-Wave One cannot outperform a conventional single-threaded architecture I’ll lose 4 gallons of maple syrup, but even if that were to come to pass, it wouldn’t spell the end for D-Wave: it would just be a matter of increasing the qbit density until a quantum annealing chip surpasses conventional hardware. The latter only improves linearly with the integration density, while a quantum chip’s performance grows exponentially with the number of qbits that it can bring to bear.
Update:
Without further comment, here is the answer that I received from Science:
Thank you for your feedback regarding the introductory page to our recent QIP special section. I appreciate the point you are making, and agree that quantum annealing is an important concept. Let me, however, clarify my original reasoning. The Introduction was aimed at the general reader of Science, which, as you are aware, has a very broad audience. It was not meant to be an exhaustive account, or to complement the reviews in the issue, but rather to serve as a motivation for covering the topic, and hopefully to induce a non-specialist reader to delve into the reviews, while introducing only a minimal set of new concepts.
I hope that this is helpful, and once again, I am grateful for your feedback.
Best regards,
Jelena Stajic
Associate Editor
Scott Aaronson took on this paper in his unique style (it’s a long read but well worth it). There really isn’t much to add to his arguments, but there is another angle that intrigues my inner “armchair psychologist”: What exactly is it about this field that so provokes some physicists? Is it that …
… Computer Scientists of all people are committing Quantum Mechanics?
… these pesky IT nerds have the audacity to actually take the axioms of Quantum Mechanics so seriously as to regard them as a resource for computational engineering?
… this rabble band of academics is breeding papers at a rabbit’s pace, so that no one can possibly keep up and read them all?
… this novel science sucks up all sorts of grant money?
The answer is probably all of the above, to some extent. But this still doesn’t feel quite right. It seems to me the animosity goes deeper. Fortunately, Kingsley Jones (whom I greatly admire) blogged about similar sentiments, but he is much more clear-eyed about what is causing them.
It seems to me that the crux of the matter is that many physicists have long harbored a discomfort with Quantum Mechanics’ intractabilities, which were plastered over with the Copenhagen Interpretation (causing all sorts of unintended side effects). That name is really a misnomer; it should have been called the ostrich interpretation, as its mantra was to ignore the inconsistencies and to just shut up and calculate. It is the distinct merit of Quantum Information science to have dragged this skeleton out of the closet and made it dance.
The quantum information scientists are agnostic on the various interpretations, and even joke about it. Obviously, if you believe there is a truth to be found, there can be only one, but you first need to acknowledge the cognitive dissonance if there’s to be any chance of making progress on this front. (My favorite QM interpretation has been suggested by Ulrich Mohrhoff, and I have yet to find the inspiration to blog about it in a manner that does it justice – ironically, where he thinks of it as an endpoint, I regard it as allowing for a fresh start.)
Meanwhile, in the here and now, the first commercial quantum computing device, the D‑Wave One, has to overcome its own challenges (or be relegated to a computing curiosity akin to analog neural VLSI). 2013 will be the year to prove its merits in comparison to conventional hardware. I’ve been in touch with a distinguished academic in the field (not Scott A.) who is convinced that optimization on a single conventional CPU will always outperform the D-Wave machines – even on the next generation chip. So I proposed a bet, albeit not a monetary one: I will gladly ship a gallon of maple syrup to him if he is proven right and our dark horse Canadian trailblazer doesn’t make the finish line. The results should be unambiguous and will be based on published research, but just in case there should be any disagreement, we will settle on Scott Aaronson as a mutually acceptable arbiter. Scott is blissfully unaware of this, but as he is also the betting kind (the really big ones), I hope he’d be so kind as to help us sort this out if need be. After all, I figure, he will be following the D-Wave performance tests and by then will already have formed an informed opinion on the matter.
The year 2013 started off with plenty of QIS drama and may very well turn out to be the crucial one to determine whether the field has crossed the Rubicon. It’s going to be a fun one.
A recent DefenseNews article again put Shor’s algorithm front and center when writing about Quantum Computing. Yet there is so much more to this field, and with Lockheed Martin working with D-Wave, one would expect this to be an open secret in the defence sector. At any rate, this is a good enough reason to finally publish the full text of my quantum computing article that the Hakin9 magazine asked me to write for their special issue earlier this year:
Who’s afraid of the big bad Quantum Computer?
“Be afraid; be very afraid, as the next fundamental transition in computing technology will obliterate all your encryption protection.”
If there is any awareness of quantum computing in the wider IT community then odds are it is this phobia that is driving it. Probably Peter Shor didn’t realize that he was about to pigeonhole the entire research field when he published his work on what is now the best known quantum algorithm. But once the news spread that he uncovered a method that could potentially speed up RSA decryption, the fear factor made it spread far and wide. Undoubtedly, if it wasn’t for the press coverage that this news received, quantum information technology research would still be widely considered to be just another academic curiosity.
So how realistic is this fear, and is breaking code the only thing a quantum computer is good for? This article is an attempt to separate fact from fiction.
First let’s review how key exchange protocols that underlie most modern public key encryption schemes accomplish their task. A good analogy that illustrates the key attribute that quantum computing jeopardizes is shown in the following diagram (image courtesy of Wikipedia):
Let’s assume we want to establish a common secret color shared by two individuals, Alice and Bob – in this example this may not be a primary color but one that can be produced as a mix of three other ones. The scheme assumes that there exists a common first paint component that our odd couple already agreed on. The next component is a secret private color. This color is not shared with anybody. What happens next is the stroke of genius, the secret sauce that makes public key exchange possible. In our toy example it corresponds to the mixing of the secret, private color with the public one. As everybody probably learned as early as kindergarten, it’s easy to mix colors, but not so easy – practically impossible, in fact – to unmix them. From a physics standpoint the underlying issue is that entropy massively increases when the colors are mixed. Nature drives the process towards the mixed state, but makes it very costly to reverse the mixing. Hence, in thermodynamics, such processes are called “irreversible”.
This descent into the physics of our toy example may seem a rather pointless digression, but we will see later that in the context of quantum information processing, this will actually become very relevant.
But first let’s get back to Alice and Bob. They can now publicly exchange their mix-color, safe in the knowledge that there are myriads of ways to get to this particular shade of paint, and that nobody has much of a chance of guessing their particular components.
Since in the world of this example nobody has any concept of chromatics, even if a potential eavesdropper were to discover the common color, they’d still be unable to discern the secret ones, as they cannot unmix the publicly exchanged color shades.
In the final step, Alice and Bob recover a common color by adding their private secret component. This is a secret that they now share to the exclusion of everybody else.
So how does this relate to the actual public key exchange protocol? We get there by substituting the colors with numbers, say x and y for the common and Alice’s secret color. The mixing of colors corresponds to a mathematical function G(x,y). In the simplest telling, the secret numbers are picked from the set of prime numbers and the function G is simply multiplication, exploiting the fact that integer factorization of large numbers is a very costly process (strictly speaking, the actual Diffie–Hellman protocol uses modular exponentiation as its mixing function, whose inversion, the discrete logarithm, is believed to be just as intractable; RSA’s security, in turn, rests directly on factoring). The next diagram depicts the exact same process, just mapped onto numbers in this way.
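Before moving on, a concrete rendering of the exchange may help. Here is a minimal Python sketch, using the modular exponentiation of actual Diffie–Hellman as the mixing function; the parameters are toy values chosen for illustration, not for security:

```python
import random

# Public parameters, agreed on in the open (the "common color"):
p = 2**127 - 1   # a Mersenne prime, far too small for real-world use
g = 3            # public base

alice_secret = random.randrange(2, p - 1)   # Alice's private "color"
bob_secret = random.randrange(2, p - 1)     # Bob's private "color"

# The publicly exchanged "color mixes": easy to compute, hard to invert
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each side folds their private component into the other's public mix ...
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)

# ... and both arrive at the same shared secret, never sent over the wire
assert alice_key == bob_key
```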
If this simple method is used with sufficiently large prime numbers then the Shared Key is indeed quite safe, as there is no known efficient classical algorithm that allows for a reasonably fast integer factorization. Of course “reasonably fast” is a very fuzzy term, so let’s be a bit more specific: There is no known classical algorithm that scales polynomially with the size of the integer. For instance, an effort to crack a 232-digit number (RSA-768) that concluded in 2009 took the combined CPU power of hundreds of machines (Intel Core2 equivalents) over two years to accomplish.
And this is where the quantum computing bogeyman comes into the picture, and the aforementioned Peter Shor. This affable MIT researcher formulated a quantum algorithm almost twenty years ago that can factorize integers in polynomial time on the as-yet elusive quantum hardware. So what difference would that actually make? The following graph puts this into perspective:
Scaling of Shor’s algorithm, $z^3$ (red curve), versus the fastest known classical algorithm, the general number field sieve, which requires on the order of $\exp\!\left(\sqrt[3]{64/9}\; z^{1/3} (\ln z)^{2/3}\right)$ steps (green curve); here z stands for the logarithm of the size of the integer. The green curve appears almost vertical on this scale because the number of steps (y-axis) in this classical algorithm grows explosively with the size of the integer. Shor’s algorithm, in comparison, shows a fairly well behaved slope with increasing integer sizes, making it theoretically a practical method for factoring large numbers.
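If you want to reproduce the shape of these curves yourself, the following sketch evaluates both step counts (asymptotic forms only, with constants and lower-order terms omitted, so only the trend is meaningful):

```python
import math

def shor_steps(z):
    # Shor's algorithm: polynomial, on the order of z**3 steps
    return z**3

def classical_steps(z):
    # General number field sieve: sub-exponential in z,
    # roughly exp((64/9)**(1/3) * z**(1/3) * (ln z)**(2/3))
    return math.exp((64 / 9) ** (1 / 3) * z ** (1 / 3) * math.log(z) ** (2 / 3))

for z in (50, 500, 5000):   # z ~ logarithm of the integer to be factored
    print(f"z={z}: Shor ~{shor_steps(z):.1e} steps, "
          f"classical ~{classical_steps(z):.1e} steps")
```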
And that is why common encryption schemes such as RSA could not protect against a deciphering attack if a suitable quantum computer were to be utilized. So now that commercial quantum computing devices such as the D-Wave One are on the market, where does that leave our cryptographic security?
First off: Not all quantum computers are created equal. There are universal gate-based ones, which are theoretically probably the best understood, and a textbook on the matter will usually introduce the subject from this vantage point. But then there are also quantum simulators, topological designs and adiabatic ones (I will forgo quantum cellular automatons in this article). The only commercially available machine, i.e. the D-Wave One, belongs to the latter category but is not a universal machine, in that it cannot simulate any arbitrary Hamiltonian (this term describes the energy function that governs a quantum system). Essentially this machine is a super-fast and accurate solver for only one class of equations. This kind of equation was first written out to describe solid state magnets according to what is now called the Ising model.
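To give a feel for what “solving the Ising model” means, here is a minimal classical sketch: given local fields and pairwise couplings (invented here purely for illustration), find the spin configuration that minimizes the energy. The annealer does this natively; the brute-force search below is the pedestrian stand-in:

```python
import itertools

h = {0: 1.0, 1: -0.5, 2: 0.3}                  # local fields (made up)
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.2}   # couplings (made up)

def ising_energy(spins):
    # E(s) = sum_i h_i*s_i + sum_(i<j) J_ij*s_i*s_j, with s_i in {-1, +1}
    return (sum(h[i] * s for i, s in enumerate(spins))
            + sum(c * spins[i] * spins[j] for (i, j), c in J.items()))

best = min(itertools.product([-1, 1], repeat=3), key=ising_energy)
print(best, ising_energy(best))
```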
But fear not: The D-Wave machine is not suitable for Shor’s algorithm. The latter requires a gate programmable device (or a universal adiabatic machine) that provides plenty of qbits. The D-Wave One falls short on both ends. It has a special purpose adiabatic quantum chip with 128 qbits. Even if the architecture were compatible with Shor’s algorithm, the number of qbits falls far short: If n is the number of digits of the integer we want to factorize, then this approach requires a bit more than n² qbits. Since the integers we are interested in are pretty large, this is far outside anything that can be realized at this point. For instance, for the RSA-768 challenge mentioned earlier, more than 232² = 53,824 qbits would be required.
So you may wonder, what good is this vanguard of the coming quantum revolution if it can’t even handle the most famous quantum algorithm? To answer this let’s step back and look at what motivated the research into quantum computing to begin with. It wasn’t the hunt for new, more powerful algorithms but rather the insight, first formulated by Richard Feynman, that quantum mechanical systems cannot be efficiently simulated on classical hardware. This is, of course, a serious impediment, as our entire science-driven civilization depends on exploiting quantum mechanical effects. I am not even referring to the obvious culprits such as semiconductor-based electronics, laser technology etc. but to the more mundane chemical industry. Everybody will probably recall the Styrofoam models of orbitals and simple molecules such as benzene (C6H6):
As the graphic illustrates, we know that sp² orbitals facilitate the bonds with the hydrogen atoms, and that there is a delocalized π electron cloud formed from the overlapping p orbitals. Yet these insights are inferred (and now, thanks to scanning probe microscopy, also measured); they don’t flow from an exact solution of the corresponding Schrödinger equations that govern the physics of these kinds of molecules.
Granted, many-body problems don’t have an exact solution in the classical realm either, but the corresponding equations are well behaved when it comes to numerical simulations. The Schrödinger equation that rules quantum mechanical systems, on the other hand, is not. Simple scenarios are still within reach for classical computing, but not so for larger molecules (i.e. the kind that biological processes typically employ). Things get even worse when one wants to go further and model electrodynamics on the quantum level. Quantum field theories require a summation over an infinite regime of interaction paths – something that will quickly bring any classical computer to its knees. Not a quantum computer, though.
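The root of the problem is easy to quantify: the state of n interacting quantum particles requires 2^n complex amplitudes to describe. A quick back-of-the-envelope calculation shows how fast a straightforward classical simulation runs out of memory:

```python
# Memory needed to store the full state vector of n quantum spins,
# assuming 16 bytes per complex amplitude
for n in (10, 30, 50):
    amplitudes = 2**n
    gib = amplitudes * 16 / 2**30
    print(f"{n} spins: {amplitudes:.2e} amplitudes, ~{gib:,.0f} GiB")
```

Already at 50 spins the state vector would occupy well over ten petabytes; molecules of biological interest are far beyond that.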
Quantum Computing therefore has the potential to usher in a new era for chemical and nano-scale engineering, putting an end to the still common practice of having to blindly test thousands of substances for pharmaceutical purposes, and finally realizing the vision of designing smart drugs that specifically match targeted receptor proteins.
Of course, even if you can model protein structures, you still need to know which isomer is actually the biologically relevant one. Fortunately, a new technology deploying electron holography is expected to unlock a cornucopia of protein structure data. But this data will remain stale if you cannot understand how these proteins can fold. The latter is going to be key for understanding the function of a protein within the living organism.
Unfortunately, simulating protein folding has been shown to be an NP-hard problem. Quantum computing is once again coming to the rescue, allowing for a polynomial speed-up of these kinds of calculations. It is not an exaggeration to expect that in the not too distant future lifesaving drug development will be facilitated this way. And the first papers using D-Wave’s machine in this manner have already been published.
This is just one tiny sliver of the fields that quantum computing will impact. Just as with the unexpected applications that ever-increasing conventional computing power enabled, it is safe to say that we, in all likelihood, cannot fully anticipate how this technological revolution will impact our lives. But we can certainly identify some more areas that will immediately benefit from it: Artificial Intelligence, graph theory, operational research (and its business applications), database design etc. One could easily fill another article on each of these topics while only scratching the surface, so the following observations have to be understood as extremely compressed.
It shouldn’t come as a surprise that quantum computing will prove fruitful for artificial intelligence. After all, another major strand that arguably ignited the entire research field was contemplation of the nature of the human mind. The prominent mathematician and physicist Roger Penrose, for instance, argued vehemently that the human mind cannot be understood as a classical computer, i.e. he is convinced (almost religious in his certainty) that a Turing machine in principle cannot emulate a human mind. Since it is not very practical to try to put a human brain into a state of controlled quantum superposition, the next best thing is to think this through for a computer. This is exactly the kind of thought experiment that David Deutsch discussed in his landmark paper on the topic. (It was also the first time that a quantum algorithm was introduced, albeit not a very useful one, demonstrating that the imagined machine can do some things better than a classical Turing machine.)
So it is only fitting that one of the first demonstrations of D-Wave’s technology concerned the training of an artificial neural net. This particular application maps nicely onto the structure of their system, as the training is mathematically already expressed as the search for a global minimum of an energy function that depends on several free parameters. To the extent that an optimization problem can be recast in this way, it becomes a potential candidate to benefit from D-Wave’s quantum computer. There are many applicable use cases for this in operational research (i.e. logistics, supply chain etc.) and business intelligence.
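Schematically, such a problem is handed to the machine as a quadratic unconstrained binary optimization (QUBO), which maps directly onto the Ising form the hardware minimizes. A toy sketch, with invented coefficients and an exhaustive search standing in for the annealer:

```python
import itertools

# QUBO: minimize the sum of Q[i,j] * x[i] * x[j] over binary variables x
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}   # coefficients (made up)

def qubo_value(x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

best = min(itertools.product([0, 1], repeat=2), key=qubo_value)
print(best, qubo_value(best))   # -> (0, 1) with value -1.0; (1, 0) ties
```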
While this is all very exciting, a skeptic will rightfully point out that just knowing a certain tool can help with a task does not tell us how well it will stack up against conventional methods. Given the price tag of $10 million, it had better be good. There are unfortunately not a lot of benchmarks available, but a brute force search method implemented to find some obscure numbers from graph theory (Ramsey numbers) gives an indication that this machine can substitute for considerable conventional computing horsepower, i.e. about 55 MIPS, or the equivalent of a cluster of more than 300 of Intel’s fastest commercially available chips.
Another fascinating aspect that will factor into the all-important TCO (Total Cost of Ownership) considerations is that a quantum computer will actually require far less energy to achieve this kind of performance (its energy consumption will also only vary minimally under load). Earlier I described a particular architecture as adiabatic, and it is this term that describes this counterintuitive energy characteristic. It is a word that originated in thermodynamics and describes a process that progresses without heat exchange, i.e. throughout most of the QC processing there is no heat-producing entropy increase. At first glance, the huge cooling apparatus that accompanies a quantum computer seems to belie this assertion, but the reason for this considerable cooling technology is not a required continuous protection of the machines from over-heating (as in conventional data centers), but the fact that most QC implementations require an environment considerably colder than even the coldest temperature that can be found anywhere in space (the surface of Pluto would be outright balmy in comparison).
Amazingly, these days commercially available helium cooling systems can readily achieve these temperatures close to absolute zero. After the cool-down, the entire remaining cooling effort is only employed to counteract the thermal flow that even the best high-vacuum insulated environments will experience. The quantum system itself will only dissipate a minimal amount of heat when the final result of an algorithm is read out. That is why the system pulls just 15 kW in total. This is considerably less than what our hypothetical 300-CPU cluster would consume under load, i.e. at >100 W per node more than double D-Wave’s power consumption. And the best part: The cooling system, and hence power consumption, will remain the same for each new iteration of chips (D-Wave recently introduced their new 512-qbit Vesuvius chip), which so far have steadily followed D-Wave’s own version of Moore’s law, doubling integration about every 15 months.
So although D-Wave’s currently available quantum computing technology cannot implement Shor’s algorithm, or the second most famous one, Grover’s search over an unstructured list, the capabilities it delivers are nothing to scoff at. With heavyweights like IBM pouring considerable R&D resources into this technology, fully universal quantum processors may hit the market much earlier than most IT analysts (such as Gartner) currently project. Recently IBM demoed a 4-qbit universal chip (interestingly using the same superconducting foundry approach as D-Wave). If they were to manage a doubling of their integration density every 18 months, then we’d be looking at 256-qbit chips within nine years.
While at this point current RSA implementations will not be in jeopardy, this key exchange protocol is slowly reaching its end of life. So how best to guard against future quantum computing attacks on the key exchange? The most straightforward approach is simply to use a different “color-mixing” function than integer multiplication, i.e. a function that even a quantum computer cannot unravel within a polynomial time frame. This is an active field of research, but so far no consensus on a suitable post-quantum key exchange function has evolved. At least it is well established that most current symmetric crypto (ciphers and hash functions) can be considered secure from the looming threat.
As to key exchange, the ultimate solution can also be provided by quantum mechanics in the form of quantum cryptography, which in principle allows a key to be transferred in such a manner that any eavesdropping will be detectable. To prove that this technology can be scaled for global intercontinental communication, the current record holder for the longest distance of quantum teleportation, the Chinese physicist Juan Yin, plans to repeat this feat in space, opening up the prospect of ultra-secure communication around the world. Welcome to the future.
Quantum computers are not created equal. In this, they also differ drastically from their common classical counterparts. Since analog computers went the way of the dodo, our current digital devices can all be modeled as Turing machines. This means that from a computational standpoint they are equivalent and cannot perform in the realm of what is now referred to as hyper-computation. David Deutsch’s seminal paper that jump-started modern quantum computing research followed this lead by considering a Turing machine under quantum state superposition.
This line of thought was extremely fruitful (e.g. Shor’s algorithm) and began to overshadow another approach more in line with Feynman’s original idea: the quantum simulator, a system that is quantum mechanical in nature but can be configured to simulate various scenarios.
This dichotomy came back to haunt D-Wave, and contributed to the harsh rejection on the part of many researchers in the field when the company introduced its impressive machines, which were firmly in the mold of a configurable quantum simulator.
Albeit a very different hardware realization, the D-Wave simulator is very much in the same league as the notable architecture developed at NIST (don’t get confused by the use of the word crystal in the linked article; what’s meant is a grid). Both are scalable devices built to solve a generalized Ising equation.
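For reference, the generalized Ising energy function that both devices are built to minimize has the standard form

$$ E(s) = \sum_i h_i\, s_i + \sum_{i<j} J_{ij}\, s_i s_j, \qquad s_i \in \{-1, +1\}, $$

where the local fields $h_i$ and the couplings $J_{ij}$ are the programmable inputs, and the machine’s job is to find the spin configuration $s$ of lowest energy.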
Interestingly, the NIST researchers didn’t have to face the kind of blow-back that D-Wave had to endure, which just goes to show that sometimes no marketing is the better marketing. These quantum simulator devices are in a sense mature technology and have proven to be scalable. D-Wave’s Rose’s law is a nice illustration of that (albeit a somewhat exaggerated one). The company managed to double the qubit count on their chips about every 15 months for the last 10 years. On the other hand, the holy grail of quantum computing, a truly universal machine in the mould of Deutsch’s Turing model, is harder to come by. There have been impressive, albeit small-scale, demonstrations of Grover’s and Shor’s algorithms, but with the latter’s record achievement standing at the factorization of just 21, it is clear that there is still a long road ahead.
Intriguingly, adiabatic quantum computing, the concept that is so successfully exploited by the D-Wave company and NIST, also allows for universal computing: An alternative factorization algorithm has been demonstrated to crack this task for the number 143. Yet, it is unclear if universal adiabatic quantum computing can achieve the quantum speed-up that is theoretically guaranteed when implementing Shor’s algorithm.
On the other hand, it is a certainty that the experimental set-up used in the record-breaking experiment will not scale. (There are only so many hydrogen nuclei in molecules of 1-bromo-2-chlorobenzene.) There is a solid theoretical framework that indicates that universal adiabatic quantum computing may indeed be equivalent to gate-based QC, but then there are also some tentative numerical simulations that indicate that this may not pan out (comments and wagers on this issue are most welcome in the comment section).
Universal gate-based quantum computing relies on quantum error correction to achieve scalability. It is clearly an uphill battle against decoherence. This motivated the topological quantum computing approach, which requires anyons (Majorana quasi-particles) to implement a regime where an ingrained quantum symmetry counteracts decoherence. As the latter have only very recently been experimentally shown to exist, this concept is the furthest away from realization.
So without any claim to completeness, and with the caveat that it is difficult to make predictions, especially about the future, here is my Gartner-inspired quantum computing taxonomy graphic (caveat: it uses the SVG format and may not display in older browsers):
Click on the graph for a larger image (if you use Google Chrome, the browser sometimes seems to render embedded SVG only after directly going to the image file; please let me know in the comment section if you see any strange SVG effects). If there is more than one label, the links are only on the text rather than the entire box. Here is how to read the graph: The axes are fairly straightforward: the more mature and scalable the technology, the better. Maturity is measured in estimated time to market. The numbers are pure conjecture on my part (please feel free to submit your own estimates in the comment section). The following legend explains the color and icon choices:
| Hardware | Architecture |
|---|---|
| Ion Trap | Adiabatic Ising (Quantum Simulator) |
| Photonic | Adiabatic Universal |
| SQUIDs | Gate Based |
| Other Solid State | Topological |
Most architectures can be realized with a variety of very different quantum systems; the latter represent the hardware of a quantum computer and are listed on the left. The most universal systems (theoretically guaranteed to provide quantum speed-ups) are able to implement quantum circuits such as Shor’s algorithm and are depicted in green. The quantum simulator family is kept in blue, the universal adiabatic architecture in yellow.
Please let me know if there is an effort that you feel should be placed on this chart. I am sure there are many that I missed penciling in. To me this is just a starting point and first draft. I plan to maintain this chart in one way or another to keep track of the various diverse QC efforts, and I will be happy to incorporate input.
Usually I restrict myself to physics on this blog. Not because I don’t love and adore pure math, but mostly because news from this field leaves most readers in about the same state as captured in this comic.
Take the recent proof of the ABC conjecture, for instance. By all accounts, Shinichi Mochizuki, the mathematician who came up with it, is brilliant. Unfortunately, that means that even experts close to his field have a hard time following and validating the proof that spans several arcane areas and four pre-print papers.
But I make an exception for the recent news about a new way to solve linear equations (proposed by Prasad Raghavendra). Everybody learned in school how to solve them via Gaussian elimination, which might be the most ubiquitous math skill after simple arithmetic. There have been some algorithms developed to improve on this method, but it is still extremely cool that there is a new algorithm for such a fundamental mathematical problem with such widespread applications.
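For contrast, here is the classical baseline the new method competes with, Gaussian elimination over the reals, invoked via numpy (whose solver performs an LU factorization, i.e. elimination, under the hood):

```python
import numpy as np

# Solve the system  2x + y = 3,  x + 3y = 5
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)   # O(n^3) elimination-style solve
print(x)                    # -> [0.8 1.4]
```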
And because this concerns an algorithm, there is even pseudocode and real code to play with.
What is especially interesting about this new method is that it employs randomness (a resource that has been shown many times to allow for more efficient algorithms). But there is a catch: This approach is only applicable to linear equation systems over finite fields (so applying it to QC simulation won’t work). Fields are commutative rings (sets equipped with two binary operations, addition and multiplication) in which every nonzero element has a multiplicative inverse.
This finite field constraint seems again to push this onto a more esoteric plane, but fortunately these structured sets are anything but. This becomes clear when looking at the simplest finite field, which consists of just the two members 0 and 1: Take the logical operations AND and XOR – et voilà – if you’ve been exposed to bit-wise coding you’re intimately familiar with the smallest finite field GF(2). GF stands for Galois Field, another name for the same thing. (The number denotes how many elements the field has, i.e. its cardinality.)
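A few lines of Python make the point:

```python
# GF(2) arithmetic on {0, 1}: addition is XOR, multiplication is AND
gf2_add = lambda a, b: a ^ b
gf2_mul = lambda a, b: a & b

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {gf2_add(a, b)}    {a} * {b} = {gf2_mul(a, b)}")

# Note: every element is its own additive inverse (a ^ a == 0), and the
# only nonzero element, 1, is its own multiplicative inverse.
```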
| A | B | A XOR B | A AND B |
|---|---|---------|---------|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 1 |
These familiar logic gate operations turn a bit set into the smallest finite field GF(2).
It shouldn’t come as a surprise that finite fields find application in coding theory and cryptography. There are also two connections to the main topic of this blog: The obvious one is that any randomized algorithm can benefit from the precursors of all Quantum Information devices i.e. quantum noise generators (you can buy one here, but if you need true randomness and cannot afford one of these, then this service might be for you).
The second connection is more subtle and interesting. Just as for any other calculation device, large universal gate-based quantum computers can only become feasible if there are robust error correction mechanisms in place to deal with inevitable random noise. One of the earliest quantum error correction schemes was published by the venerable Peter Shor. It turns out the basis for “stabilizer codes” that fix single-qbit errors is again the Pauli operators, which in turn map to GF(4). This can be generalized to GF(q) for non-binary systems. So it seems this new algorithm could very well find application in this area. A somewhat ironic twist, if a randomized algorithm should turn out to be an efficient tool for correcting random qbit errors.
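For the curious, one standard convention (up to relabeling) for this correspondence identifies the four elements of GF(4) = {0, 1, ω, ω²}, where ω² = ω + 1, with the single-qbit Pauli operators:

| GF(4) element | Pauli operator |
|---|---|
| 0 | I |
| 1 | X |
| ω | Z |
| ω² | Y |

Addition in GF(4) then tracks the composition of Pauli errors up to phase, e.g. X·Z ∝ Y just as 1 + ω = ω², which is what makes the field structure so useful for stabilizer codes.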
“If a tree falls in a forest, and no one is around to hear it, does it make a sound?” is a well known metaphysical toy question. What is less known is its business corollary: If a start-up makes a splash, but nobody pays attention, did it actually ever really emerge from stealth mode?
It is rather welcome news that the latest Nobel prizes in physics bring some attention to quantum computing, but as always, these machines are presented as a future possibility rather than a commercial reality.
Yet, D-Wave already ships the first commercial quantum computing device, and hardly anybody seems to know. Sometimes this ignorance appears to be almost willful. The company just doesn’t seem to be able to generate ubiquitous mainstream media awareness.
While their device is not a universal quantum computer, it nevertheless facilitates an extremely useful and versatile form of quantum optimization, with plenty of real-life applications. The somewhat arcane Ramsey number research already demonstrated the power of the Rainier chip, and a recent paper published in Nature shows how NP-hard protein folding scenarios can be calculated on their hardware. An interesting wrinkle of this latest research is that the device doesn’t find the optimal solution as reliably as in the Ramsey number case, but the other solutions it returns are also valid folding modes of the examined protein and provide additional insight.
The importance of this kind of research can hardly be overstated, so why is this company not mentioned every time somebody writes about quantum computing?
Is it a case of crying wolf too many times? Is there a perception that the company may have over-promised and under-delivered in the past? Is it a lingering after-effect of the Scott Aaronson ham sandwich controversy?
Your guess is as good as mine. If you have any thoughts or insights on this please share them in the comments section.
This is the most dreaded question for anybody involved with this field if posed by a friend or relative without a physics background. When I am too tired or busy to make an honest effort, I usually answer by quoting Feynman’s quip on quantum mechanics (click here to listen to the man himself – the quote appears about 6:45 min into the lecture):
“A lot of people understand the theory of relativity in some way or other. (..) On the other hand, I think I can safely say that nobody understands quantum mechanics.”
The crux of the matter is that Quantum Computing derives its power from precisely the same attributes that make this realm of physics so alien to us.
It’s small comfort that greater minds than mine have been mulling this conundrum. For instance, a while back Scott Aaronson described his struggle to write a column for the New York Times that describes Quantum Computing.
Then there is Michael Nielsen’s take on it, a brilliant write-up illustrating why there is really no reason to expect a simple explanation.
But if this hasn’t utterly discouraged you then I have this little treat from D-Wave’s blog. You need to be willing to tolerate a little math, i.e. not be put off by the kind of expression shown there.
Other than that it just requires you to contemplate Schrödinger’s light switches. Just as his cat can be thought of as dead and alive at the same time, his light switches are in a superposition of On and Off. Strictly speaking, D-Wave’s description is specific to their particular adiabatic quantum chip design, but nevertheless, if you get your head around this, you will have a pretty good idea why a Quantum Computer’s abilities go beyond the means of a classical Turing machine.
Astute followers of this blog know that quantum computing was the brainchild of Richard Feynman, whose contribution to the quantum field theory of electrodynamics earned him a Nobel prize. Feynman was the first to remark on the fact that classical computers cannot efficiently simulate quantum systems. Since then the field has come a long way, and it has been shown theoretically and experimentally that quantum computers can efficiently simulate quantum mechanical multi-body systems. And recent experimental setups like NIST’s 300-qbit quantum simulator are destined to surpass anything that could be modeled on a classical computer.
Yet, for the longest time it was not clear if quantum computers could also efficiently simulate quantum field theories.
Fields are a bit more tricky. Just recall the classic experiment to illustrate a magnetic field (iron filings tracing the field lines of a bar magnet), as shown in the picture.
Every point in space is imbued with a field value, so that even the tiniest volume element will contain an infinite number of these field values.
The typical way to get around this problem is to perform the calculations on a grid. And the algorithm introduced by Jordan et al. in this paper, just three pages long, does that as well.
Unfortunately, Feynman is no longer around to appreciate the work that has now made it official: Quantum field theories can be efficiently simulated, i.e. with polynomial time scaling.
It is quite clever how they spread their simulation over the qbits, represent scattering particles and manage to derive an error estimate. The fact that they actually do this within the Schrödinger picture makes this paper especially accessible.
If you don’t know the first thing about quantum mechanics, this paper will still give you a good sense that the construction of quantum algorithms does not look anything like conventional coding – even when, as is the case here, it uses the gate-based quantum computing model.
This goes to the heart of the challenge of bringing quantum computing to the masses. Steve Jobs’ quip about the iPhone is just as true for any quantum computer: “What would it be without the software? It would make a nice paperweight!” (h/t R. Tucci) The only difference is that a quantum computer will make a really big paperweight, but otherwise it’ll be just as dead.
This somewhat resembles the days of yore when computer programs had to be hand compiled for a specific machine architecture. Hence, the race is on to find a suitable abstraction layer on top of the underlying quantum weirdness, in order to make this power accessible to non-physicists.
Just in case you wondered: It is still not clear if String theories can be efficiently simulated on a quantum computer. But it has been suggested that those that cannot should be considered unphysical.
If anybody in the wider IT community has heard about Quantum Computing, it’s usually in the context of how it may obliterate our current industry standard encryption (e.g. RSA). Probably Peter Shor didn’t realize that he was about to define the entire research field when he published his work on what is now the most famous (and notorious) quantum algorithm. But once it was reported that he uncovered a method that could potentially speed up RSA decryption, anxiety and angst spread the news far and wide.
Odds are, if it wasn’t for the IT media stir that this news caused, quantum information technology would still be mostly considered just another academic curiosity.
With the modest means of my blog, I have tried to create some awareness that quantum computing is much bigger than this, pointing at use cases in enterprise IT and social web services, but admittedly these were general observations that did not descend to the level of implementation details. That’s why I was very pleased to come across this paper describing how quantum computing can be leveraged to calculate Google’s page rank. The authors argue that for a realistic web graph topology, their approach can
… provide a polynomial quantum speedup. Moreover, the quantum PageRank state can be used in “q-sampling” protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes …
So why does this matter? Most bloggers will have heard about Google’s PageRank algorithm as it is crucial in determining how far up your site will appear in a Google search. But just as most people don’t really think about how power gets to your home’s outlet, few will have contemplated how this algorithm works and how much raw CPU power Google must spend to keep it updated.
For those inclined to learn more about how these kinds of algorithms work, there is no better treat than this blog post on exactly this topic from the one and only Michael Nielsen (whose standard tome on quantum computing is deservedly the first reference in the paper).
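To give a flavor of what is involved classically, here is a minimal PageRank via power iteration for a made-up four-page web, using the damping factor 0.85 of the original algorithm (Google’s production version is of course vastly more elaborate):

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to
n, d = 4, 0.85                                # page count, damping factor

# Column-stochastic transition matrix of the "random surfer"
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)
for _ in range(100):                          # iterate until (nearly) converged
    rank = (1 - d) / n + d * M @ rank
print(rank)                                   # importance score per page
```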
Google doesn’t advertise how often and how quickly they can run PageRank on the entire visible web space. Clearly, the more often a search engine can update the rank values the better. It is also a safe bet that executing this algorithm on a continuous basis will contribute to Google’s considerable power consumption. It is laudable that Google invests in “green” data centers but it seems they could save a lot on their energy bill if they’d get behind quantum computing. The “adiabatic” classification of this algorithm is a give-away. The unitary process evolution actually does not increase entropy, and thus does not require energy input for that part of the computation, unlike classical computing. That is why adiabatic quantum computing machines like the D-Wave One have the potential to drastically lower power consumption in data centers. It’s really a twofer: You get superior algorithms and save power.
Oh, and by the way, the PageRank for this blog is still 0 out of 10.
Update: The way my post is written may give the impression that the D-Wave One can execute this algorithm. But as this machine is a specialized QC device, this is not necessarily the case. (This question was raised in the related LinkedIn discussion of this post, which can be found here.)