The curious fact that matter can exhibit wave-like properties (or should this rather be waves acting like particles?) is now referred to as the wave-particle duality. In olden times it was often believed that there was some magic in giving something a name, and that it would take some of the christened's power. Here's hoping that there may be some truth to this, as this obvious incompatibility has claimed at least one prominent life.
It was Einstein who first made this two-faced character of matter explicit when publishing on the photoelectric effect, assigning particle-like characteristics to light, which up to that point was firmly understood to be an electromagnetic wave phenomenon.
But just like the question of the true nature of reality, the source of this dichotomy is almost as old as science itself, and arguably already inherent in the very idea of atomism as the opposite extreme of an all-encompassing holism. The latter is often regarded as the philosophical consequence of Schroedinger's wave mechanics, since a wave phenomenon has no sharp and clear boundaries, and in this sense is often presented as connecting the entirety of the material world. Taken to the extreme, this holistic view finds its perfect expression in Everett's universal wavefunction (an interpretation happily embraced by Quantum Hippies of all ages), which gave rise to the now quite popular many-worlds interpretation of quantum mechanics.
While atomism proved to be extremely fruitful in the development of physics, it was never popular with religious authorities. You can find echoes of this to this day if you look up the term in the Catholic Encyclopaedia:
Scholastic philosophy finds nothing in the scientific theory of atomism which it cannot harmonize with its principles, though it must reject the mechanical explanation, often proposed in the name of science, …
Atomism is incompatible with Judeo-Christian principles because atomism views matter as independent of God, …
Religion of course really doesn't have a choice in the matter, as it can hardly maintain doctrine without some holistic principle. It is no coincidence that physics only progressed after the cultural revolution of the Renaissance loosened the church's dominance over the sheeple's minds. But history never moves in a straight line. With Romanticism, for instance, the pendulum swung back with a vengeance. It was at the height of this period that Ludwig Boltzmann achieved the greatest scientific breakthrough of atomism when he developed statistical mechanics as the proper foundation of thermodynamics. It was not received well. With James Clerk Maxwell having seemingly established a holistic ether that explained all radiation as a wave phenomenon, atomism had thoroughly fallen out of favour. Boltzmann vigorously defended his work and was no stranger to polemic exchanges to make his point, yet he was beset by clinical depression and feared in the end that his life's work was for naught. He committed suicide while on a summer retreat that was supposed to help his ailing health.
He must have missed the significance of Einstein's publication on Brownian motion just a year earlier. It is the least famous of his annus mirabilis papers, but it laid the foundation for experimentalists to settle the debate once and for all in Boltzmann's favor, just a few years after his tragic death.
Thermodynamics made no sense to me before I learned statistical mechanics, and it is befitting that his most elegant equation for the entropy of a system graces the memorial at his grave site (the k denoting the Boltzmann constant).
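For reference, the engraved formula is Boltzmann's entropy relation

$$ S = k \log W , $$

where $W$ counts the number of microstates compatible with the observed macroscopic state of the system.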
There is more than one path to classical mechanics.
So much to do, so little time. My own lost paper work (i.e. the translation of some of Hilbert's old papers that are not available in English) is progressing at a snail's pace, but at Kingsley Jones' blog we can learn about some papers that were truly lost and misplaced, and that he only recovered because, throughout all the moves and ups and downs of life, his parents had been hanging on to copies of the unpublished pre-prints. Kingsley's post affected me on a deeper level than the usual blog fare, because this is such a parent thing to do. Having (young) kids myself, I know exactly the emotional tug to never throw away anything they produce, even if they have seemingly moved on and abandoned it. On the other hand, the recollection of how he found these papers when going through his parents' belongings after they passed away brings into sharp relief the fact that I have already begun this process for my father, who has Alzheimer's. So many of his things (such as his piano sheet music) are now just stark reminders of all the things he can no longer do.
On a more upbeat note: The content of these fortuitously recovered papers is quite remarkable. They expand on a formalism that Steven Weinberg developed, one that essentially allows you to continuously deform quantum mechanics, making it ever less quantum. In the limit, you end up with a wave equation that is equivalent to Hamilton's extremal principle, i.e. you recover classical mechanics and have a "Schrödinger equation" that always fully satisfies the Ehrenfest theorem. In this sense, this mechanism is another route to Hamiltonian mechanics. The anecdote of Weinberg's reaction when he learned about this is priceless.
Ehrenfest's theorem is, in a sense, common sense put into mathematical form: the quantum mechanical expectation values of a system should obey classical mechanics in the classical limit. Within the usual QM framework this mostly works out, but the problem is that sometimes it does not, as every QM textbook will point out (e.g. these lecture notes). Ironically, at the time of writing, the Wikipedia entry on the Ehrenfest theorem does not mention this key caveat, which makes the entry rather miss the point (just another example that one cannot blindly trust Wikipedia content). The lecture notes linked above illustrate this with a simple example (a cubic potential) and make this observation:
“…. according to Ehrenfest’s theorem, the expectation values of position for this cubic potential will only agree with the classical behaviour insofar as the dispersion in position is negligible (for all time) in the chosen state.”
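For concreteness, here is the standard textbook statement of the theorem for a particle of mass $m$ in a potential $V(x)$ (not specific to the linked notes):

$$ \frac{d}{dt}\langle x \rangle = \frac{\langle p \rangle}{m}, \qquad \frac{d}{dt}\langle p \rangle = -\,\big\langle V'(x) \big\rangle . $$

The expectation values follow Newton's equations only if $\langle V'(x)\rangle \approx V'(\langle x \rangle)$, which holds exactly for potentials that are at most quadratic, and otherwise only as long as the wave packet's dispersion stays negligible, which is precisely the caveat the quote points out.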
So, in a sense, this is what this "classic Schrödinger equation" accomplishes: it is a wave equation that always enforces the necessary constraint on the dispersion. Another way to think about this is by invoking the analogy between Feynman's path integral and the classical extremal principle: essentially, as the parameter $\lambda$ shrinks in Kingsley's generalized Schrödinger equation, the paths are forced ever closer to the classically allowed extremal trajectory.
A succinct summary of the key math behind these papers can currently be found on Wikipedia, but you had better hurry, as the article has been marked for deletion by editors following rather mechanistic notability criteria, i.e. simply counting how many times the underlying papers were cited.
Unfortunately, the sheer number of citations is not a reliable measure of quality. A good example of this is the Quantum Discord research that is quite en vogue these days; it has recently been taken to task on R.R. Tucci's blog. Ironically, among many other aspects, it seems to me that Kingsley's approach may be rather promising for better understanding decoherence, and possibly even for putting some substance behind the Quantum Discord metric.
Scott Aaronson took on this paper in his unique style (it's a long read but well worth it). There really isn't much to add to his arguments, but there is another angle that intrigues my inner "armchair psychologist": What exactly is it about this field that so provokes some physicists? Is it that …
… Computer Scientists of all people are committing Quantum Mechanics?
… these pesky IT nerds have the audacity to actually take the axioms of Quantum Mechanics so seriously as to regard them as a resource for computational engineering?
… this rabble band of academics are breeding papers at a rabbit’s pace, so that no one can possibly keep up and read them all?
… this novel science sucks up all sorts of grant money?
The answer is probably all of the above, to some extent. But this still doesn't feel quite right. It seems to me the animosity goes deeper. Fortunately, Kingsley Jones (whom I greatly admire) blogged about similar sentiments, and he is much more clear-eyed about what is causing them.
It seems to me that the crux of this discomfort stems from the fact that many physicists have long harbored an unease with quantum mechanics' intractabilities, which were plastered over with the Copenhagen interpretation (which in turn caused all sorts of unintended side effects). It's really a misnomer; it should have been called the ostrich interpretation, as its mantra was to ignore the inconsistencies and just shut up and calculate. It is the distinct merit of quantum information science to have dragged this skeleton out of the closet and made it dance.
The quantum information scientists are agnostic about the various interpretations, and even joke about it. Obviously, if you believe there is a truth to be found, there can be only one, but you first need to acknowledge the cognitive dissonance if there's to be any chance of making progress on this front. (My favorite QM interpretation has been suggested by Ulrich Mohrhoff, and I have yet to find the inspiration to blog about it in a manner that does it justice; ironically, where he thinks of it as an endpoint, I regard it as allowing for a fresh start.)
Meanwhile, in the here and now, the first commercial quantum computing device, the D-Wave One, has to overcome its own challenges (or be relegated to a computing curiosity akin to analog neural VLSI). 2013 will be the year for it to prove its merits in comparison to conventional hardware. I've been in touch with a distinguished academic in the field (not Scott A.) who is convinced that optimization on a single conventional CPU will always outperform the D-Wave machines, even on the next generation chip. So I proposed a bet, albeit not a monetary one: I will gladly ship a gallon of maple syrup to him if he is proven right and our dark horse Canadian trail blazer doesn't make the finish line. The results should be unambiguous and will be based on published research, but just in case there should be any disagreement, we will settle on Scott Aaronson as a mutually acceptable arbiter. Scott is blissfully unaware of this, but as he is also the betting kind (the really big ones), I hope he'd be so kind as to help us sort this out if need be. After all, I figure, he will be following the D-Wave performance tests and will already have formed an informed opinion on the matter by then.
The year 2013 started off with plenty of QIS drama and may very well turn out to be the crucial one for determining whether the field has crossed the Rubicon. It's going to be a fun one.
An LED lamp suspended from the hair-thin nanotube wires that also supply it with power.
This has been reported all over the Web, but it is just too good to pass up, especially since the year in the Quantum Computing world started on a somewhat more contentious note (more about this in the next blog post).
This news item, on the other hand, deserves to be the first of the new year and is entirely positive: I was expecting carbon nanotubes to eventually become the material of choice for electrical wiring, but I didn't expect it to happen this soon. The video embedded below makes a compelling case that a research team at Rice University not only managed to produce wires superior to any metal wire, but also, at the same time, developed a production process that can be readily scaled up.
Copper price development over the last ten years.
Being able to produce this kind of wire at a competitive price will go a long way toward ameliorating one of humanity's key resource problems: peak copper (a term coined after the more publicized peak oil prognosis). And this resource constraint is anything but theoretical. Copper prices have increased so much over the last ten years that copper theft has become a serious global problem, one that often endangers lives when critical infrastructure is destroyed.
These new carbon nanotube wires have the potential to substitute for copper wiring in cars, airplanes and microchips, as well as in residential wiring, to name just a few applications. If the wires are as good as they are made to look in the video, they will be superior to copper wires to such an extent that adoption will simply be a matter of price.
This is the kind of science news I like to hear at the beginning of a new year.
A recent DefenseNews article again put Shor's algorithm front and center when writing about quantum computing. Yet there is so much more to this field, and with Lockheed Martin working with D-Wave, one would expect this to be an open secret in the defence sector. At any rate, this is a good enough reason to finally publish the full text of my quantum computing article that Hakin9 magazine asked me to write for their special issue earlier this year:
Who’s afraid of the big bad Quantum Computer?
“Be afraid; be very afraid, as the next fundamental transition in computing technology will obliterate all your encryption protection.”
If there is any awareness of quantum computing in the wider IT community, then odds are it is this phobia that is driving it. Peter Shor probably didn't realize that he was about to pigeonhole the entire research field when he published his work on what is now the best known quantum algorithm. But once the news spread that he had uncovered a method that could potentially break RSA encryption, the fear factor made it spread far and wide. Undoubtedly, if it wasn't for the press coverage that this news received, quantum information technology research would still be widely considered just another academic curiosity.
So how realistic is this fear, and is breaking codes the only thing a quantum computer is good for? This article is an attempt to separate fact from fiction.
First let’s review how key exchange protocols that underlie most modern public key encryption schemes accomplish their task. A good analogy that illustrates the key attribute that quantum computing jeopardizes is shown in the following diagram (image courtesy of Wikipedia):
Let's assume we want to establish a common secret color shared by two individuals, Alice and Bob; in this example this may not be a primary color but one that can be produced as a mix of three others. The scheme assumes that there exists a common first paint component that our odd couple has already agreed on. The next component is a secret private color. This color is not shared with anybody. What happens next is the stroke of genius, the secret sauce that makes public key exchange possible. In our toy example it corresponds to the mixing of the secret, private color with the public one. As everybody probably learned as early as kindergarten, it's easy to mix colors, but not so easy (practically impossible, in fact) to revert the process. From a physics standpoint, the underlying issue is that entropy massively increases when the colors are mixed. Nature drives the process towards the mixed state, but makes it very costly to reverse the mixing. Hence, in thermodynamics, these processes are called "irreversible".
This descent into the physics of our toy example may seem a rather pointless digression, but we will see later that in the context of quantum information processing, this will actually become very relevant.
But first let’s get back to Alice and Bob. They can now publicly exchange their mix-color, safe in the knowledge that there are myriads of ways to get to this particular shade of paint, and that nobody has much of a chance of guessing their particular components.
Since in the world of this example nobody has any concept of chromatics, even if a potential eavesdropper were to discover the common color, they’d still be unable to discern the secret ones, as they cannot unmix the publicly exchanged color shades.
In the final step, Alice and Bob recover a common color by adding their private secret component. This is a secret that they now share to the exclusion of everybody else.
So how does this relate to an actual public key exchange protocol? We get there by substituting numbers for the colors, say x and y for the common and Alice's secret color. The mixing of colors corresponds to a mathematical function G(x,y). Usually the private secret numbers are picked from the set of prime numbers and the function G is simply a multiplication, exploiting the fact that integer factorization of large numbers is a very costly process. The next diagram depicts the exact same process, just mapped onto numbers in this way.
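For readers who prefer code to diagrams, here is a minimal sketch of the textbook Diffie-Hellman exchange, the real-world counterpart of the color analogy. Note that it uses modular exponentiation as the mixing function rather than the plain multiplication described above, and the toy-sized numbers are purely illustrative:

```python
# Toy Diffie-Hellman key exchange mirroring the color analogy.
# The parameters are deliberately tiny; real implementations use
# moduli that are thousands of bits long.

p = 23       # public prime modulus (part of the "common paint")
g = 5        # public generator

a = 6        # Alice's private number (her "secret color"), never shared
b = 15       # Bob's private number, never shared

# "Mixing": each party combines the public base with their secret.
A = pow(g, a, p)   # Alice's public value, sent to Bob
B = pow(g, b, p)   # Bob's public value, sent to Alice

# Each side "adds their private color" to what they received.
shared_alice = pow(B, a, p)
shared_bob   = pow(A, b, p)

assert shared_alice == shared_bob   # both arrive at the same shared secret
print(shared_alice)                 # 2 in this toy example
```

An eavesdropper sees p, g and the two exchanged values, but recovering a private exponent from them is the discrete logarithm problem, the number-theoretic cousin of the "unmixing" problem from the paint analogy.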
If this simple method is used with sufficiently large prime numbers, then the shared key is indeed quite safe, as there is no known efficient classical algorithm that allows for reasonably fast integer factorization. Of course "reasonably fast" is a very fuzzy term, so let's be a bit more specific: there is no known classical algorithm that scales polynomially with the size of the integer. So, for instance, an effort to crack a 232-digit number (RSA-768) that concluded in 2009 took the combined CPU power of hundreds of machines (Intel Core2 equivalents) over two years to accomplish.
And this is where the quantum computing bogeyman comes into the picture, along with the aforementioned Peter Shor. This affable MIT researcher formulated a quantum algorithm almost twenty years ago that can factorize integers in polynomial time on, as yet elusive, quantum hardware. So what difference would that actually make? The following graph puts this into perspective:
Scaling of Shor's algorithm, roughly $z^3$ (red curve), versus the best known classical factoring algorithm, which grows like $\exp\!\left(\tfrac{4}{3^{2/3}}\,\sqrt[3]{z\,(\ln z)^2}\right)$ (green curve); the latter looks almost like a vertical ascent. Here $z$ stands for the logarithm of the size of the integer. The green curve appears almost vertical on this scale because the necessary number of steps (y-axis) of this classical algorithm grows explosively with the size of the integer. Shor's algorithm, in comparison, shows a fairly well-behaved slope with increasing integer size, making it, in theory, a practical method for factoring large numbers.
And that is why common encryption schemes such as RSA could not protect against a deciphering attack if a suitable quantum computer were to be utilized. So now that commercial quantum computing devices such as the D-Wave One are on the market, where does that leave our cryptographic security?
First off: not all quantum computers are created equal. There are universal gate-based ones, which are theoretically probably the best understood; a textbook on the matter will usually introduce the subject from this vantage point. But then there are also quantum simulators, topological designs and adiabatic ones (I will forgo quantum cellular automata in this article). The only commercially available machine, D-Wave's One, belongs to the adiabatic category but is not a universal machine, in that it cannot simulate an arbitrary Hamiltonian (the term for the energy function that governs a quantum system). Essentially, this machine is a super-fast and accurate solver for just one class of equations; this kind of equation was first written down to describe solid state magnets, according to what is now called the Ising model.
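For the curious, the energy function in question has the generic Ising form (my notation here, not D-Wave's exact programming model):

$$ E(s_1,\dots,s_n) = \sum_i h_i\, s_i + \sum_{i<j} J_{ij}\, s_i s_j , \qquad s_i \in \{-1,+1\} , $$

where the machine is configured by choosing the local fields $h_i$ and the couplings $J_{ij}$, and a computation consists of (approximately) finding the spin configuration of lowest energy.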
But fear not: the D-Wave machine is not suitable for Shor's algorithm. The latter requires a gate-programmable device (or a universal adiabatic machine) that provides plenty of qbits. The D-Wave One falls short on both counts: it has a special-purpose adiabatic quantum chip with 128 qbits. Even if the architecture were compatible with Shor's algorithm, the number of qbits falls far short. If the number we want to factorize is n digits long, then we need a bit more than n² qbits; since the integers we are interested in are pretty large, this is far outside anything that can be realized at this point. For instance, for the RSA-768 challenge mentioned earlier, more than 232² = 53,824 qbits would be required.
So you may wonder: what good is this vanguard of the coming quantum revolution if it can't even handle the most famous quantum algorithm? To answer this, let's step back and look at what motivated the research into quantum computing to begin with. It wasn't the hunt for new, more powerful algorithms but rather the insight, first formulated by Richard Feynman, that quantum mechanical systems cannot be efficiently simulated on classical hardware. This is, of course, a serious impediment, as our entire science-driven civilization depends on exploiting quantum mechanical effects. I am not even referring to the obvious culprits such as semiconductor-based electronics, laser technology etc., but to the more mundane chemical industry. Everybody will probably recall the Styrofoam models of orbitals and simple molecules such as benzene, C6H6:
As the graphic illustrates, we know that the sp² orbitals facilitate the bonds with the hydrogen atoms, and that a delocalized π electron cloud is formed from the overlapping unhybridized p orbitals. Yet these insights are inferred (and nowadays, thanks to scanning probe microscopy, also directly imaged); they do not flow from an exact solution of the corresponding Schrödinger equation that governs the physics of these kinds of molecules.
Granted, many-body problems don't have an exact solution in the classical realm either, but the corresponding equations are well behaved when it comes to numerical simulation. The Schrödinger equation that rules quantum mechanical systems, on the other hand, is not. Simple scenarios are still within reach of classical computing, but not larger molecules (i.e. the kind that biological processes typically employ). Things get even worse when one wants to go further and model electrodynamics at the quantum level: quantum field theories require a summation over an infinite regime of interaction paths, something that will quickly bring any classical computer to its knees. Not a quantum computer, though.
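The root of the problem is easy to state, and a quick back-of-the-envelope count (standard textbook reasoning, not tied to any particular simulator) makes it vivid: the state space of a quantum system grows exponentially with its size. For $n$ interacting two-level systems (spins, qbits) the wavefunction lives in a Hilbert space of dimension

$$ \dim \mathcal{H} = 2^{\,n} , $$

so a classical simulation has to keep track of on the order of $2^n$ complex amplitudes. Already at $n \approx 50$ that is roughly $10^{15}$ numbers, beyond the memory of any conventional machine, whereas a quantum simulator represents such a state natively.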
Quantum computing therefore has the potential to usher in a new era of chemical and nano-scale engineering, putting an end to the still common practice of blindly testing thousands of substances for pharmaceutical purposes, and finally realizing the vision of designing smart drugs that specifically match targeted receptor proteins.
Of course, even if you can model protein structures, you still need to know which isomer is actually the biologically relevant one. Fortunately, a new technology deploying electron holography is expected to unlock a cornucopia of protein structure data. But this data will remain stale if you cannot understand how these proteins can fold. The latter is going to be key for understanding the function of a protein within the living organism.
Unfortunately, simulating protein folding has been shown to be an NP-hard problem. Quantum computing once again comes to the rescue, allowing for a polynomial speed-up of these kinds of calculations. It is not an exaggeration to expect that, in the not too distant future, lifesaving drug development will be facilitated this way. The first papers using D-Wave's machine for this purpose have already been published.
This is just one tiny sliver of the fields that quantum computing will impact. Just as with the unexpected applications that ever-increasing conventional computing power enabled, it is safe to say that we, in all likelihood, cannot fully anticipate how this technological revolution will impact our lives. But we can certainly identify some more areas that will immediately benefit from it: Artificial Intelligence, graph theory, operational research (and its business applications), database design etc. One could easily file another article on each of these topics while only scratching the surface, so the following observations have to be understood as extremely compressed.
It shouldn't come as a surprise that quantum computing will prove fruitful for artificial intelligence. After all, the other major strand that arguably ignited the entire research field was contemplation of the nature of the human mind. The prominent mathematician and physicist Roger Penrose, for instance, argued vehemently that the human mind cannot be understood as a classical computer, i.e. he is convinced (almost religious in his certainty) that a Turing machine cannot, even in principle, emulate a human mind. Since it is not very practical to try to put a human brain into a state of controlled quantum superposition, the next best thing is to think this through for a computer. This is exactly the kind of thought experiment that David Deutsch discussed in his landmark paper on the topic. (It was also the first time that a quantum algorithm was introduced, albeit not a very useful one, demonstrating that the imagined machine can do some things better than a classical Turing machine.)
So it is only fitting that one of the first demonstrations of D-Wave's technology concerned the training of an artificial neural net. This particular application maps nicely onto the structure of their system, as the training is already expressed mathematically as the search for a global minimum of an energy function that depends on several free parameters. To the extent that an optimization problem can be recast in this way, it becomes a potential candidate to benefit from D-Wave's quantum computer. There are many applicable use cases for this in operational research (i.e. logistics, supply chain etc.) and business intelligence.
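To make the "recasting as an energy function" idea concrete, here is a deliberately tiny sketch (my own toy illustration, not D-Wave's API or toolchain): a three-spin optimization problem written as an Ising energy and minimized by brute force. The annealer's job is to perform exactly this kind of minimization for problem sizes where brute force is hopeless.

```python
from itertools import product

# Toy Ising energy E(s) = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j
# with spins s_i in {-1, +1}. Values are arbitrary, for illustration only.
h = {0: 0.5, 1: -1.0, 2: 0.25}          # local fields
J = {(0, 1): -1.0, (1, 2): 0.75}        # pairwise couplings

def energy(spins):
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# Brute force enumeration: fine for 3 spins, impossible for the
# hundreds of variables a hardware annealer is built to handle.
best = min(product((-1, +1), repeat=len(h)), key=energy)
print(best, energy(best))
```

In a neural-net training scenario the h and J values would encode the training data and the network's weights; here they are just numbers picked to make the example run.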
While this is all very exciting, a skeptic will rightfully point out that just knowing a certain tool can help with a task does not tell us how well it will stack up against conventional methods. Given the price tag of $10 million, it had better be good. There are unfortunately not a lot of benchmarks available, but a brute force search implemented to find some obscure numbers from graph theory (Ramsey numbers) gives an indication that this machine can substitute for considerable conventional computing horsepower, i.e. about 55 MIPS, or the equivalent of a cluster of more than 300 of Intel's fastest commercially available chips.
Another fascinating aspect that will factor into the all-important TCO (Total Cost of Ownership) considerations is that a quantum computer will actually require far less energy to achieve this kind of performance (and its energy consumption will vary only minimally under load). Earlier I described a particular architecture as adiabatic, and it is this term that describes this counterintuitive energy characteristic. The word originated in thermodynamics and describes a process that progresses without heat exchange, i.e. throughout most of the QC processing there is no heat-producing entropy increase. At first glance, the huge cooling apparatus that accompanies a quantum computer seems to belie this assertion, but the reason for this considerable cooling technology is not a continuous need to protect the machines from over-heating (as in conventional data centers); rather, most QC implementations require an environment that is considerably colder than even the coldest temperature found anywhere in space (the surface of Pluto would be outright balmy in comparison).
Amazingly, commercially available helium cooling systems can nowadays readily achieve these temperatures close to absolute zero. After the initial cool-down, the remaining cooling effort is only needed to counteract the thermal in-flow that even the best high-vacuum insulated environments experience. The quantum system itself will only dissipate a minimal amount of heat when the final result of an algorithm is read out. That is why the system pulls just 15 kW in total. This is considerably less than what our hypothetical 300-CPU cluster would consume under load, i.e. more than 100 W per node, which adds up to more than double D-Wave's power consumption. And the best part: the cooling system, and hence the power consumption, will remain the same for each new iteration of chips (D-Wave recently introduced their new 512-qbit Vesuvius chip), and so far the company has steadily followed its own version of Moore's law, doubling integration about every 15 months.
So although D-Wave's currently available quantum computing technology cannot implement Shor's algorithm, or the second most famous one, Grover's search over an unstructured list, the capabilities it delivers are nothing to scoff at. With heavyweights like IBM pouring considerable R&D resources into this technology, fully universal quantum processors will hit the market much earlier than most IT analysts (such as Gartner) currently project. Recently IBM demoed a 4-qbit universal chip (interestingly using the same superconducting foundry approach as D-Wave). If they also manage to double their integration density every 18 months, then we'd be looking at 256-qbit chips in roughly nine years.
While current RSA implementations are not yet in jeopardy, this key exchange protocol is slowly reaching the end of its life cycle. So how best to guard against future quantum computing attacks on the key exchange? The most straightforward approach is simply to use a different "color-mixing" function than integer multiplication, i.e. a function that even a quantum computer cannot unravel within a polynomial time frame. This is an active field of research, but so far no consensus on a suitable post-quantum key exchange function has emerged. At least it is well established that most current symmetric crypto (ciphers and hash functions) can be considered secure from the looming threat.
As for the key exchange itself, the ultimate solution may also be provided by quantum mechanics, in the form of quantum cryptography, which in principle allows a key to be transferred in such a manner that any eavesdropping will be detectable. To prove that this technology can be scaled to global, intercontinental communication, the current record holder for the longest distance of quantum teleportation, the Chinese physicist Juan Yin, plans to repeat this feat in space, opening up the prospect of ultra-secure communication around the world. Welcome to the future.
Quantum computers are not created equal. In this, they also differ drastically from their common classical counterparts. Since analog computers went the way of the dodo, our current digital devices can all be modeled as Turing machines. This means that from a computational standpoint they are equivalent and cannot perform in the realm of what is now referred to as hyper-computation. David Deutsch's seminal paper that jump-started modern quantum computing research followed this lead by considering a Turing machine under quantum state superposition.
This line of thought was extremely fruitful (e.g. Shor's algorithm) and began to overshadow another approach more in line with Feynman's original idea: the quantum simulator, i.e. a system that is quantum mechanical in nature but can be configured to simulate various scenarios.
This dichotomy came back to haunt D-Wave, and contributed to the harsh rejection by many researchers in the field when the company introduced its impressive machines, which were firmly in the mould of a configurable quantum simulator.
Albeit based on a very different hardware realization, the D-Wave simulator is very much in the same league as the notable architecture developed at NIST (don't get confused by the use of the word crystal in the linked article; what's meant is a grid). Both are scalable devices built to solve a generalized Ising equation.
Interestingly, the NIST researchers didn't have to face the kind of blow-back that D-Wave had to endure, which just goes to show that sometimes no marketing is the better marketing. These quantum simulator devices are in a sense mature technology and have proven to be scalable; D-Wave's Rose's law is a nice illustration of that (albeit a somewhat exaggerated one). The company managed to double the qubit count on their chips about every 15 months for the last 10 years. On the other hand, the holy grail of quantum computing, a truly universal machine in the mould of Deutsch's Turing model, is harder to come by. There have been impressive, albeit small scale, demonstrations of Grover's and Shor's algorithms, but with the latter's record achievement standing at the factorization of just 21, it is clear that there is still a long road ahead.
Intriguingly, adiabatic quantum computing, the concept that is so successfully exploited by the D-Wave company and NIST, also allows for universal computing: an alternative adiabatic factorization algorithm has been demonstrated to crack this task for the number 143. Yet it is unclear whether universal adiabatic quantum computing can achieve the quantum speed-up that is theoretically guaranteed when implementing Shor's algorithm.
On the other hand, it is a certainty that the experimental set-up used in that record breaking experiment will not scale (there are only so many hydrogen nuclei in molecules of 1-bromo-2-chlorobenzene). There is a solid theoretical framework indicating that universal adiabatic quantum computing may indeed be equivalent to gate-based QC, but then there are also some tentative numerical simulations indicating that this may not pan out (comments and wagers on this issue are most welcome in the comment section).
Universal gate-based quantum computing relies on quantum error correction to achieve scalability, and it is clearly an uphill battle against decoherence. This motivated the topological quantum computing approach, which requires anyons (Majorana quasi-particles) to implement a regime where an ingrained quantum symmetry counteracts decoherence. As the latter have only very recently been experimentally shown to exist, this concept is the furthest away from realization.
So, without any claim to completeness, and with the caveat that it is difficult to make predictions, especially about the future, here is my Gartner-inspired quantum computing taxonomy graphic (caveat: it uses the SVG format and may not display in older browsers):
Click on the graph for a larger image (also, if you use Google Chrome, the browser sometimes seems to render embedded SVG only after directly going to the image file; please let me know in the comment section if you see any strange SVG effects). If there is more than one label, the links are only on the text rather than the entire box. Here is how to read the graph: the axes are fairly straightforward, the more mature and scalable the technology the better. Maturity is measured in estimated time to market. The numbers are pure conjecture on my part (please feel free to submit your own estimates in the comment section). The following legend explains the color and icon choices:
Hardware: Ion Trap, Photonic, SQUIDs, Other Solid State
Architecture: Adiabatic Ising (Quantum Simulator), Adiabatic Universal, Gate Based, Topological
Most architectures can be realized with a variety of very different quantum systems; the latter represent the hardware of a quantum computer and are listed on the left. The most universal architectures (theoretically guaranteed to provide quantum speed-ups) are able to implement quantum circuits such as Shor's algorithm and are depicted in green. The quantum simulator family is kept in blue, and the universal adiabatic architecture in yellow.
Please let me know if there is an effort that you feel should be placed on this chart; I am sure there are many that I have missed. To me this is just a starting point and a first draft. I plan to maintain this chart in one way or another to keep track of the various diverse QC efforts, and I will be happy to incorporate input.
Usually I restrict myself to physics on this blog. Not because I don’t love and adore pure math, but mostly because news from this field leaves most readers in about the same state as captured in this comic.
Take the recent proof of the ABC conjecture, for instance. By all accounts, Shinichi Mochizuki, the mathematician who came up with it, is brilliant. Unfortunately, that means that even experts close to his field have a hard time following and validating the proof that spans several arcane areas and four pre-print papers.
But I make an exception for the recent news about a new way to solve linear equations (proposed by Prasad Raghavendra). Everybody learned in school how to solve them via Gaussian elimination, which might be the most ubiquitous math skill after simple arithmetic. Some algorithms have been developed to improve on this method, but it is still extremely cool that there is a new algorithm for such a fundamental mathematical problem with such widespread applications.
And because this concerns an algorithm, there is even pseudo and real code to play with.
What is especially interesting about this new method is that it employs randomness (a resource that has been shown many times to allow for more efficient algorithms). But there is a catch: this approach is only applicable to linear equation systems over finite fields (so applying it to QC simulation won't work). Fields are commutative rings in which every non-zero element has a multiplicative inverse, i.e. sets equipped with two binary operations, addition and multiplication, that behave much like ordinary arithmetic.
This finite field constraint may again seem to push this onto a more esoteric plane, but fortunately these structured sets are anything but. This becomes clear when looking at the simplest finite field, which consists of just the two members 0 and 1: add the logical operations AND and XOR, et voilà, if you've been exposed to bit-wise coding you're intimately familiar with the smallest finite field, GF(2). GF stands for Galois Field, another name for the same thing. (The number denotes how many elements the field has, i.e. its cardinality.) A small code sketch after the truth tables below shows this arithmetic in action.
XOR (addition in GF(2)):

Input A   Input B   Output
0         0         0
0         1         1
1         0         1
1         1         0

AND (multiplication in GF(2)):

Input A   Input B   Output
0         0         0
0         1         0
1         0         0
1         1         1
These familiar logic gate operations turn a bit set into the smallest finite field GF(2).
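To connect this back to the linear-equation theme, here is a small sketch (my own illustration of plain Gaussian elimination over GF(2), not Raghavendra's randomized algorithm) in which all the arithmetic reduces to the XOR and AND of the tables above:

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.
    A is a list of rows (lists of 0/1), b a list of 0/1.
    Returns one solution as a list of 0/1, or None if inconsistent."""
    n, m = len(A), len(A[0])
    # Work on an augmented matrix so row operations carry b along.
    rows = [A[i][:] + [b[i]] for i in range(n)]
    pivot_cols = []
    r = 0
    for c in range(m):
        # Find a row with a 1 in column c.
        pivot = next((i for i in range(r, n) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate column c from all other rows: adding rows is just XOR.
        for i in range(n):
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    # A leftover row of the form 0 ... 0 | 1 means there is no solution.
    if any(all(v == 0 for v in row[:m]) and row[m] for row in rows[r:]):
        return None
    x = [0] * m
    for i, c in enumerate(pivot_cols):
        x[c] = rows[i][m]
    return x

# Example: x0 + x1 = 1, x1 + x2 = 0, x0 + x2 = 1 (all mod 2)
print(solve_gf2([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 0, 1]))  # [1, 0, 0]
```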
It shouldn't come as a surprise that finite fields find application in coding theory and cryptography. There are also two connections to the main topic of this blog: the obvious one is that any randomized algorithm can benefit from the precursors of all quantum information devices, i.e. quantum noise generators (you can buy one here, but if you need true randomness and cannot afford one of these, then this service might be for you).
The second connection is more subtle and interesting. Just as for any other calculation device, large universal gate-based quantum computers can only become feasible if there are robust error correction mechanisms in place to deal with inevitable random noise. One of the earliest quantum error correction schemes was published by the veritable Peter Shor. It turns out that the basis for the "stabilizer codes" that fix the errors on a single qbit is again the Pauli operators, which in turn map to GF(4); this can be generalized to GF(q) for non-binary systems. So it seems this new algorithm could very well find application in this area. A somewhat ironic twist if a randomized algorithm should turn out to be an efficient tool for correcting random qbit errors.
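For the mathematically inclined: the correspondence works because the single-qbit Pauli operators, ignoring overall phases, form a group isomorphic to the additive group of GF(4). In one common convention

$$ I \leftrightarrow 0, \qquad X \leftrightarrow 1, \qquad Z \leftrightarrow \omega, \qquad Y \leftrightarrow \omega^{2} , $$

so that composing Pauli errors corresponds to adding field elements, which is what allows classical codes over GF(4) to be recycled as quantum stabilizer codes.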
Thanks to Mad Men, the concept of brand equity has been widely popularized, the most prominent example being that of a dog food company’s brand becoming toxic after it was reported that it used horse meat in its product (a practice that was outlawed in the US in 1970).
D-Wave is still in the early stages of building its brand, so the fact that some casual observers hold very negative views, as apparent from some comments on my last entry, is worrying. Unbeknownst to me, Scott Aaronson put up a blog entry at about the same time, and I didn't immediately pick up on it as I was travelling. It turns out the large comment thread points to a D-Wave 2007 event as the cause of this brand damage. At that time the expectations of theoretical computer scientists collided head-on with the reality of D-Wave.
The comment thread at Scott's blog is chock-full of crazy, running the gamut from one poster accusing D-Wave of fraud to another showering Scott with profanity. I've always found it admirable that Scott doesn't censor such comments but lets them speak for themselves (although Joy Christian proved that his saintly patience is not inexhaustible).
Scott initially challenged D-Wave strongly after the 2007 event (long before I started to pay closer attention; I heard about the 16-qbit claim at the time but didn't consider such a chip commercially viable yet). Recently he buried the hatchet, but of course for a theoretical computer scientist the question of what complexity class a certain hardware design can address will remain at the forefront.
Fortunately, Daniel Lidar, who is currently studying a D-Wave machine, reported in the comment section that he plans to publish on exactly that soon. While other theorists such as Greg Kuperberg seem unwilling to move past the 2007 "Sudoku crisis" (video of this "scandalous" demo can be found here), Scott and Daniel will clearly move this toward a constructive debate based on published research.
The most prominent argument against this demo was that 16 qbits could not contain the entire parameter space of a Sudoku solution. Of course, this ignores how the D-Wave system is implemented, and that it works in conjunction with a classical system as a kind of quantum coprocessor.
The Sudoku solver is not currently given as a coding example on the D-Wave developer portal. Since this old controversy can still cause damage to the company’s brand I think it may be a good idea to include it there.
It's hard to gauge the extent of the brand damage that D-Wave suffered. Most IT folks will probably not have heard of the company yet, which mitigates the problem, but on the other hand the episode has left its traces in Wikipedia. The latter will almost certainly be one of the first stops for anybody new to the subject when trying to form an opinion; it is the first non-affiliated site that comes up in a Google search (when discounting recent news headlines), and it doesn't get better from there. The next search result is an outright defamatory article from the highly respected IEEE, an organisation that is supposed to be objective.
Although my professional life is mostly IT consulting, I have had my share of high-tech marketing challenges to deal with (for a while as a BI software product manager). In my experience, a problem like this needs to be addressed head-on. My advice would be to very openly embrace this old controversy and to position how and why D-Wave's design differs from universal gate-based quantum computing. The company does that in various places of its web presence, but it took me 10 minutes of googling to find this old physicsandcake posting that addresses it nicely in simple terms. A potential customer coming across old and new misinformation should be able to find these kinds of counter-arguments very quickly.
“If a tree falls in a forest, and no one is around to hear it, does it make a sound?” is a well known metaphysical toy question. What is less known is its business corollary: If a start-up makes a splash, but nobody pays attention, did it actually ever really emerge from stealth mode?
It is rather welcome news that the latest Nobel prizes in physics bring some attention to quantum computing, but as always, these machines are presented as a future possibility rather than a commercial reality.
Yet, D-Wave already ships the first commercial quantum computing device, and hardly anybody seems to know. Sometimes this ignorance appears to be almost willful. The company just doesn’t seem to be able to generate ubiquitous mainstream media awareness.
While their device is not a universal quantum computer, it nevertheless facilitates an extremely useful and versatile form of quantum optimization, with plenty of real-life applications. The somewhat arcane Ramsey number research already demonstrated the power of the Rainier chip, and a recent paper published in Nature shows how NP-hard protein folding scenarios can be calculated on their hardware. An interesting wrinkle to this latest research is that the device doesn't find the optimal solution as reliably as in the Ramsey number case, but the other solutions it returns are also valid folding modes of the examined protein and provide additional insight.
The importance of this kind of research can hardly be overstated, so why is this company not mentioned every time somebody writes about quantum computing?
Is it a case of crying wolf too many times? Is there a perception that the company may have over-promised and under-delivered in the past? Is it a lingering after-effect of the Scott Aaronson ham sandwich controversy?
Your guess is as good as mine. If you have any thoughts or insights on this please share them in the comments section.