Astute followers of this blog know that quantum computing was the brainchild of Richard Feynman, whose contribution to the quantum field theory of electrodynamics earned him a Nobel prize. Feynman was the first to remark on the fact that classical computers cannot efficiently simulate quantum systems. Since then the field has come a long way, and it has been shown theoretically and experimentally that quantum computers can efficiently simulate quantum mechanical many-body systems. And recent experimental setups like NIST’s 300-qbit quantum simulator are destined to surpass anything that could be modeled on a classical computer.
Yet, for the longest time it was not clear if quantum computers could also efficiently simulate quantum field theories.
Fields are a bit trickier. Just recall the classic experiment to illustrate a magnetic field, as shown in the picture.
Every point in space is imbued with a field value, so that even the tiniest volume element will contain an infinite number of these field values.
The typical way to get around this problem is to perform the calculations on a grid. And the algorithm introduced by Jordan et al. in this paper (just three pages long) does that as well.
Unfortunately, Feynman is no longer around to appreciate the work that now made it official: quantum field theories can be efficiently simulated, i.e. with polynomial time scaling.
It is quite clever how they spread their simulation over the qbits, represent scattering particles and manage to derive an error estimate. The fact that they actually do this within the Schrödinger picture makes this paper especially accessible.
If you don’t know the first thing about quantum mechanics, this paper will still give you a good sense that the construction of quantum algorithms does not look anything like conventional coding – even when, as is the case here, it uses the gate-based quantum computing model.
This goes to the heart of the challenge to bring quantum computing to the masses. Steve Jobs’ quip about the iPhone is just as true for any quantum computer: “What would it be without the software? It would make a nice paperweight!” (h/t R. Tucci) The only difference is that a quantum computer will make a really big paperweight, but otherwise it’ll be just as dead.
This somewhat resembles the days of yore when computer programs had to be hand compiled for a specific machine architecture. Hence, the race is on to find a suitable abstraction layer on top of this underlying quantum weirdness, in order to make this power accessible to non-physicists.
Just in case you wondered: It is still not clear if String theories can be efficiently simulated on a quantum computer. But it has been suggested that those that cannot should be considered unphysical.
If anybody in the wider IT community has heard about Quantum Computing, it’s usually in the context of how it may obliterate our current industry standard encryption (e.g. RSA). Probably Peter Shor didn’t realize that he was about to define the entire research field when he published his work on what is now the most famous (and notorious) quantum algorithm. But once it was reported that he uncovered a method that could potentially speed up RSA decryption, anxiety and angst spread the news far and wide.
Odds are, if it weren’t for the IT media stir that this news caused, quantum information technology would still be mostly considered just another academic curiosity.
With the modest means of my blog, I have tried to create some awareness that quantum computing is much bigger than this by pointing at use cases in enterprise IT and social web services, but admittedly these were general observations that did not descend to the level of implementation details. That’s why I was very pleased to come across this paper describing how quantum computing can be leveraged to calculate Google’s PageRank. The authors argue that for a realistic web graph topology, their approach can
… provide a polynomial quantum speedup. Moreover, the quantum PageRank state can be used in “q-sampling” protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes …
So why does this matter? Most bloggers will have heard about Google’s PageRank algorithm as it is crucial in determining how far up your site will appear in a Google search. But just as most people don’t really think about how power gets to your home’s outlet, few will have contemplated how this algorithm works and how much raw CPU power Google must spend to keep it updated.
For those inclined to learn more about how these kinds of algorithms work, there is no better treat than this blog post on exactly this topic from the one and only Michael Nielsen (whose standard tome of quantum computing is deservedly the first reference in the paper).
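To give a rough flavor of what Google has to compute at web scale, here is a toy classical PageRank via power iteration – a sketch only, with the commonly cited damping constant of 0.85 and a made-up three-page “web”:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Classical PageRank via power iteration on a small adjacency matrix.
    adj[i][j] = 1 if page j links to page i."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    # Column-normalize: each page splits its vote among its outgoing links.
    out = adj.sum(axis=0)
    out[out == 0] = 1.0            # dangling pages: avoid division by zero
    M = adj / out
    rank = np.full(n, 1.0 / n)     # start from the uniform distribution
    while True:
        new = (1 - damping) / n + damping * M @ rank
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# Tiny 3-page web: page 0 links to 1, page 1 links to 2, page 2 links to 0 and 1.
adj = [[0, 0, 1],
       [1, 0, 1],
       [0, 1, 0]]
print(pagerank(adj))  # page 1, with two incoming links, ends up ranked highest
```

The point of the toy: even this simple iteration has to touch every link in the graph on every pass, which is exactly the kind of brute-force linear algebra that becomes expensive at the scale of the visible web.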
Google doesn’t advertise how often and how quickly they can run PageRank on the entire visible web space. Clearly, the more often a search engine can update the rank values the better. It is also a safe bet that executing this algorithm on a continuous basis will contribute to Google’s considerable power consumption. It is laudable that Google invests in “green” data centers but it seems they could save a lot on their energy bill if they’d get behind quantum computing. The “adiabatic” classification of this algorithm is a give-away. The unitary process evolution actually does not increase entropy, and thus does not require energy input for that part of the computation, unlike classical computing. That is why adiabatic quantum computing machines like the D-Wave One have the potential to drastically lower power consumption in data centers. It’s really a twofer: You get superior algorithms and save power.
Oh, and by the way, the PageRank for this blog is still 0 out of 10
Update: The way my post is written it may give the impression that the D-Wave One can execute this algorithm. But as this machine is a specialized QC device this is not necessarily the case. (This question was raised in the related LinkedIn discussion of this post that can be found here).
Science works in peculiar ways. Everything that matters will need to be published. Yet, this is no guarantee that it won’t be forgotten or lost. Recently a handwritten manuscript of Albert Einstein’s was recovered. The paper in question is widely regarded as the last of his greatest contributions to theoretical physics.
It is the last part of a three-piece set, the first paper of which was not authored but merely translated and submitted by Einstein after it had previously been rejected for publication.
It was the work of the young Satyendra Nath Bose, an often overlooked giant of modern quantum mechanics. To my knowledge, Bose’s original English manuscript that he sent to Einstein has been lost for good. The only copies in English have been translated back from the German paper that Einstein submitted on Bose’s behalf to the journal “Zeitschrift für Physik” in 1924.
Yet, no such translations of Einstein’s historic follow up paper are readily available. A cursory Google search comes up empty.
When the news of the recovered manuscript spread in various LinkedIn physics groups, many posters expressed frustration that the paper at the Leiden University Einstein Archive was merely a scan of the German original and therefore inaccessible to most.
So I decided to add the “Lost Papers” page to this blog to provide these papers in a modern English translation. Fortunately I have some help with this, as I am currently very busy.
To start, I am posting Bose’s short first paper; the translation of Einstein’s last paper is nearing completion and will then be linked there as well.
The Gentleman to the right places you into the Matrix. His buddy could help, if only he wasn’t a fictional character.
Dr. Gates, a distinguished theoretical physicist (with a truly inspiring biography), recently made an astounding statement during an interview on NPR (the clip from the On Being show can be found here – a transcript is also online). It gave the listener the distinct impression that he uncovered empirical evidence in his work that we live in a simulated reality. In his own words:
(…) I remember watching the movies, The Matrix. And so, the thought occurred to me, suppose there were physicists in this movie. How would they figure out that they lived in the matrix? One way they might do that is to look for evidence of codes in the laws of their physics. But, you see, that’s what had happened to me already.
I, and my colleagues indeed, we had found the presence of codes in the equations of physics. Not that we’re trying to compute something. It’s a little bit like doing biology where, if you studied an animal, you’d eventually run into DNA, and that’s essentially what happened to us. These codes that we found, they’re like the DNA that sits inside of the equations that we study.
Of course Dr. Gates made additional qualifying statements that cautioned against reading too much into this, but media, even the more even-handed NPR, feeds off sensationalism. And so they of course had to end the segment with a short excerpt from the Matrix to drive this home. It would be interesting to know how many physicists were subsequently badgered by family and friends to explain if we really live in the Matrix. So here’s how I tackled this reality distortion for my non-physicist mother-in-law:
Dr. Gates has been a pioneer in Supersymmetry research (affectionately abbreviated SUSY) but just as with String theory there is an absolute dearth of experimental verification (absolute dearth meaning not a single one). While SUSY proved to be of almost intoxicating mathematical beauty the recent results from LHC have been especially brutal. Obviously, if nature doesn’t play by SUSY’s rules it will be of no physical consequence if Dr. Gates finds block codes in these equations (although it certainly is still mathematically intriguing).
The codes uncovered in the SUSY equations are classic error correction bit codes. The bit, being the smallest informational unit, hints at a Matrix style reality simulated on a massive Turing complete machine. There are certainly other smart people who actually believe in such (or a very similar) scenario – e.g. Stephen Wolfram advocated something along these lines in his controversial book. The one massive problem with such a world view is that we rather conclusively know that classic computers are no good at simulating quantum mechanical systems, and that quantum computers can outperform classical Turing machines (the same holds in the world of cellular automata, where it can be shown that quantum cellular automata can emulate their Turing equivalents and vice versa).
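For readers wondering what a “classic error correction bit code” actually is, here is a minimal Hamming(7,4) sketch – purely an illustration of the genre, with no claim that this is the specific family of codes found in the SUSY equations:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming code word.
    Parity bits p1, p2, p3 each cover an overlapping subset of data bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(w):
    """Locate and fix a single flipped bit via the parity-check syndrome."""
    w = list(w)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
    if syndrome:
        w[syndrome - 1] ^= 1
    return w

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[2] ^= 1                     # flip one bit "in transit"
assert hamming74_correct(corrupted) == word
```

The redundancy built into the overlapping parity checks is what makes the transmitted word self-repairing – exactly the kind of structure you would not expect to stumble upon by accident inside fundamental equations.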
If Dr. Gates had discovered qbits and a quantum error correction code hidden in SUSY, that would have been significantly more convincing. I could entertain the idea of a Matrix world simulated on a quantum computer.
At any rate, his equations didn’t provide a better answer to the question of why anyone would go to the trouble of running a simulation like the Matrix. In the movie, the explanation is that human bodies perform as an energy source just like a battery. I always thought this explanation fell rather flat. If a mammalian body were all it took, why not use cows, for example? That should make for a significantly easier world simulation – an endless field of green should suffice. It probably wouldn’t even require a quantum computer to simulate a happy cow world.
Imagine a world before the advent of the steam engine that nevertheless imminently anticipates this marvelous machine’s arrival. Although no locomotive has been built, civil engineers are already busy discussing how to build rail-road bridges, architects try to determine the optimal layout of train stations, and the logistics of scheduling and maintaining passenger and freight traffic over the same tracks is heavily researched.
To some extent this seemingly absurd scenario is playing out in the world of quantum computing. For instance, take a look at this intriguing presentation by Rodney Van Meter:
While watching it I had to pinch myself a couple of times to make sure I wasn’t just hallucinating a beamed broadcast from the future. In fact it is more than two years old. All this impressive infrastructure work is being performed while we are still years away from an actual scalable universal quantum computer.
Of course there is ample reason for all this activity, as has been documented on this humble blog. To recap: Our conventional computing inevitably arrives at structure sizes where undesired quantum effects can no longer be ignored. Harnessing the peculiarities of quantum mechanics, on the other hand, will supercharge Moore’s law and enable us to tackle problems that are too complex for conventional computing.
Specialized quantum computing devices such as D-Wave’s machine or NIST’s impressive ion-based quantum simulator already allow us a glance at the potential that this new approach to computing will unleash (btw. the NIST article makes it sound as if a “crystal” were contained in the Penning trap. This, of course, is nonsense. What is meant is that the ions are arranged in a 2D crystal-like grid).
It is encouraging that this core technology is so feverishly anticipated and that considerable efforts to lay the groundwork for it are in progress. After all, conventional programming techniques won’t cut it if the goal is to leverage the additional power of a quantum computer. It will be key to empower software engineers to program these devices without forcing them to go through a quantum mechanics boot camp.
When picking up a textbook on the subject, the reader will very quickly be confronted with diagrams typically following the circuit model, where every line corresponds to a qbit. Such as:
Teleportation of the kind that’s only good to beam up qbits.
While this is useful to introduce a reader to the peculiarities of entanglement and how it can be leveraged as a computational resource, it is obviously of limited use once you have a meaningful device that offers hundreds of qbits. Even for a dedicated (Ising model solving) system such as D-Wave’s, you can no longer draw a complete graph (although it helps to introduce a matrix notation to the uninitiated).
A purist might stop there and observe that quantum computation just means working with density matrices, and hence brushing up on your linear algebra is what it takes. The conventional programming analog would be to observe that Boolean logic is all you need to program a conventional chip. Obviously, higher levels of abstraction serve us well in this area.
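To give a taste of the purist’s linear-algebra view, here is a minimal sketch – plain NumPy, no quantum framework assumed – of a qbit as a state vector, a gate as a unitary matrix, and an entangling operation:

```python
import numpy as np

# A qbit is just a normalized 2-component complex vector,
# and a gate is a unitary matrix acting on it.
ket0 = np.array([1, 0], dtype=complex)           # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

psi = H @ ket0                                   # equal superposition (|0>+|1>)/sqrt(2)
probs = np.abs(psi) ** 2                         # Born rule: measurement probabilities
print(probs)                                     # ~[0.5, 0.5]

# Two qbits live in the tensor product space; CNOT entangles them.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(psi, ket0)                 # Bell state (|00>+|11>)/sqrt(2)
```

Note how the state vector doubles in size with every added qbit – which is both why classical simulation chokes and why nobody wants to program a hundred-qbit machine one matrix entry at a time.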
The current state of affairs in quantum computing reminds me of the early days of visual programming research, long before UML arrived to provide a unified framework.
As this industry matures, expect a similar process as that which played out in the old world of visual programming. There is one important twist, though: Although UML is an excellent way to approach coding in a structured way (one that actually deserves to be called engineering), its adoption is lackluster, and sloppy coding still rules supreme.
To the extent that pictorial languages are at the heart of quantum computing programming, maybe another beneficial side effect of the coming quantum computing age will be to accelerate the maturing of the computer industry’s approaches towards software development.
Modern and ancient pictograms. Sometimes hard to piece together what a graphical representation is supposed to convey.
Currently I am spending way too much time commenting on Scott Aaronson’s blog where the Joy Christian “Bell Inequality Disproof” controversy is still in full swing. The latter also inspired the new “QC Bet Tracker” page on this humble blog of mine.
Head over if you want to see a first class science imbroglio.
One of the most fascinating aspects of quantum information research is that it sheds light on the connections between informational and thermodynamic entropy, as well as how time factors into quantum dynamics.
For instance, the Schrödinger and Heisenberg pictures are equivalent, although in the former the wave function changes with time while in the latter the operators do. Yet we don’t actually have any experimental insight into when the changes under adiabatic evolution are actually realized, since by its very nature we only have discrete observations to work with. This opens up room for various speculations, such as the idea that the “passage of time” is actually an unphysical notion for an isolated quantum system between measurements (as expressed, for instance, by Ulrich Mohrhoff in this paper).
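In symbols, the equivalence of the two pictures boils down to where one puts the time evolution operator:

```latex
\langle A \rangle(t)
  = \underbrace{\langle \psi(t) | A | \psi(t) \rangle}_{\text{Schr\"odinger picture}}
  = \langle \psi(0) | U^\dagger(t)\, A\, U(t) | \psi(0) \rangle
  = \underbrace{\langle \psi(0) | A_H(t) | \psi(0) \rangle}_{\text{Heisenberg picture}},
\qquad U(t) = e^{-iHt/\hbar}.
```

Every observable expectation value comes out identical, which is why no experiment can distinguish the two pictures – and why the question of when the state “really” changes between measurements stays empirically open.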
Lots of material there for future posts. But before going there it’s a good idea to revisit the oldest paradox on time with this fresh take on it by Perry Hooker.
Although widely admired and loved, in the end he died like so many who came to extremes of fame or fortune – estranged from family and separated from old friends. The only person to witness his death in exile was a nurse, incapable of understanding his last words which were uttered in a language foreign to her.
If his private life were a template for a telenovela, viewers would regard it as too over the top: As a teenager his parents leave him with relatives to complete school – they need to resettle to a foreign country. He rebels, his school teachers give up on him, he drops out. He travels across the Alps to reunite with his family. Were it not for the unwavering support of his mother, he would probably never move on to obtain a higher education. She manages to find him a place with relatives in a country of his native language so that he can finally gain his diploma. The same year he renounces his old citizenship and also quits the religion of his parents.
He subsequently enrolls in a prestigious university, but ignores the career choice that his parents had in mind for him. He falls in love with a beautiful fellow student from a far away land. His parents are against the relationship, and so are hers. Against the will of their families they want to get married, but our hero struggles to find a job after graduation. He hopes to be hired as an assistant at his university, just like the rest of his peers, but he has once again fallen out with some of his teachers. Many of the other members of the faculty only notice him because he skips so many lectures – especially the purely mathematical ones. Still, he passes all the tests, relying on his friends’ lecture notes.
His future wife-to-be becomes pregnant out of wedlock, has to return to her family and gives birth to a little girl with Down syndrome. He never even gets to see the girl. This summer – two years after graduation – with the help of a friend, he finally lands his first steady job. Later that year his father dies, and shortly after that our man marries his beloved Mileva.
Meet the Einsteins:
Images of old Albert Einstein are so iconic that some people tend to forget that he wasn’t always old.
Having settled down in Bern he now manages to find the discipline and inner calm for his subsequent groundbreaking works. I cannot even begin to fathom how he musters the strength to do so, coping with a full-time day job and a young family. Discussing his ideas with friends and colleagues certainly helps and surely he must discuss his research with Mileva as well (how much she influenced his work has been somewhat of a controversy). The following three years, even while working as a patent clerk, are the most fruitful of Albert Einstein’s life. His research culminates in four publications in the year 1905 that irreversibly change the very foundation of physics. His papers ….
… describe for the first time the theory of Special Relativity.
… show the equivalence of mass and energy i.e. the most famous E=mc².
… propose the idea of energy quanta (i.e. photons) to explain the photoelectric effect.
… explain the Brownian motion as the result of the thermal agitation of atoms.
Without the realization that mass and energy are equivalent (2), there’d be no nuclear energy and weapons. Without Einstein’s energy quanta hypothesis (3), there’d be no quantum mechanics, and his work that explains the Brownian motion (4) settled, once and for all, the question of whether atoms were real. At the same time, it provides the missing statistical underpinning for thermodynamics.
These were all amazing accomplishments in their own right, but nothing so resonated with the public as the consequences of Einstein’s theory of Special Relativity (1). This one was regarded as a direct affront to common sense and achieved such notoriety that it was later abused by Nazi propaganda to agitate against “Jewish physics”.
Already, at this time, physics was such a specialized trade that usually the man on the street would have no motivation to form an opinion on some physics paper. So what caused all this negative attention? Einstein’s trouble was that by taking Maxwell’s theory of Electrodynamics seriously he uncovered properties of something that everybody thought they intuitively understood. Any early 20th century equivalent to Joe the Plumber would have felt comfortable explaining how to measure the size of a space and how to measure time – they were understood as absolute immutable dimensions in which life played out. Only they cannot be if Maxwell’s equations are right and the speed of light is a constant in all frames of reference. This fact was really hiding in plain sight, and you don’t need any mathematics to understand it – you only need the willingness to entertain the possibility that the unthinkable might be true.
In 1923 an elaborate movie was produced that tried to explain Special Relativity to a broad audience. It turned out to be a blockbuster, but still didn’t convince the skeptical public – watching it made me wonder if that is where so many misconceptions about Einstein’s theories started. It does not contain any falsehoods, but it spends way too much time elaborating relativity, while the consequences of the invariability of light speed are mixed in with results from General Relativity, and neither are really explained. Apparently the creators of this old movie felt that they had to start with the most basic principles and couldn’t really expect their audience to follow some of Einstein’s arguments. Granted, this was before anybody even knew what our planet looked like from space, and the imagined astronaut of this flick is shot into space with a cannon as the preferred mode of transportation – as, for instance, imagined by Jules Verne. Nowadays this task is much easier in comparison. You can expect a blog reader to be desensitized by decades of SciFi. Also, having a plethora of educational videos at your fingertips makes for a straightforward illustration of some of the immediate outcomes of accepting light speed to be constant in all frames of reference.
For a modern audience, a thought experiment containing two spaceships traveling in parallel with a setup that has a laser signal being transferred between them requires little explanation. All that is necessary to come to grips with, is what it means that this laser signal travels at the same speed in all frames of reference. For instance, this short video does an excellent job explaining that an observer passing by these spaceships will have to conclude that the clocks for the space pilots must go slower.
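The size of the effect follows from the Lorentz factor. A quick back-of-the-envelope sketch (speed given as a fraction of c):

```python
import math

def dilated_time(proper_time, v_fraction_of_c):
    """Time elapsed for a stationary observer while the moving clock
    ticks off `proper_time`, at speed v = v_fraction_of_c * c."""
    gamma = 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)
    return gamma * proper_time

# At about 87% of light speed the Lorentz factor is roughly 2:
# one year on board corresponds to about two years for the outside observer.
print(dilated_time(1.0, 0.866))
```

Note how negligible the effect is at everyday speeds – plug in a jet airliner’s fraction of c and gamma is indistinguishable from 1 – which is exactly why our intuition never prepared us for it.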
Nevertheless, even nowadays you still get publications like this one, where two Stanford professors of psychology perpetuate this popular falsehood in the very first sentence of their long monograph:
[Einstein] established the subjective nature of the physical phenomenon of time.
Of course he did no such thing. He described how the flow of time and the temporal ordering of events transform between different inertial reference frames as an objective physical reality.
For over a hundred years special relativity has withstood all experimental tests (including the recent faster-than-light neutrino dust-up). Yet public education has still not caught up with it.
This is the second installment of my irregular biographical physics series intended to answer the question of how physics became so strange. Given Einstein’s importance I will revisit his lasting legacy in a future post.
In terms of commercial use cases, I have looked at corporate IT, as well as how a quantum computer will fit in with the evolving cloud computing infrastructure. However, where QC will make the most difference – as in, a difference between life and death – goes entirely unnoticed. Certainly by those whose lives will eventually depend on it.
Hyperbole? I think not.
As detailed in my brief retelling of quantum computing history, it all started with the realization that most quantum mechanical systems cannot be efficiently simulated on classical computers. Unfortunately, given the sorry state of public science understanding, this elicits hardly more than a shrug even from those who make a living writing about it (not the case for this humble blogger who toils away at it as a labor of love).
A prime example of this is a recent, poorly sourced article from the BBC that disses the commercial availability of turnkey-ready quantum computing without even mentioning D‑Wave, and at the same time proudly displays the author’s ignorance about why this technology matters (emphasis mine):
“The only applications that everyone can agree that quantum computers will do markedly better are code-breaking and creating useful simulations of systems in nature in which quantum mechanics plays a part.”
Well, it’s all good then, isn’t it? No reason to hurry and get a quantum computer on every scientist’s desk. After all, only simulations of nature in which quantum mechanics plays a part will be affected. It can’t possibly be all that important then. Where the heck could this esoteric quantum mechanics stuff possibly play an essential part?
For instance, one of the most important aspects of pharmaceutical research is to understand the 3D protein structure, and then to model how this protein reacts in vivo using very calculation-intensive computer simulations.
There has been some exciting progress in the former area. It used to be that only proteins that lend themselves to crystallization could be structurally captured via X-ray scattering. Now, recently developed low energy electron holography has the potential to revolutionize the field. Expect to see a deluge of new protein structure data. But despite some progress with numerical approaches to protein folding simulations, the latter remains an NP-hard problem. On the other hand, polynomial speed-ups are possible with quantum computing. Without it, the inevitable computational bottleneck will ensure that we forever condemn pharmaceutical research to its current expensive scatter-shot approach to drug development.
There is no doubt in my mind that in the future, people’s lives will depend on drugs that are identified by strategically deploying quantum computing in the early drug discovery process. It is just a matter of when. But don’t expect to learn about this following BBC’s science news feed.
Their last names start with the same two letters, and they lived in the same city at the same time – but that’s where the similarities end.
Only one of these two contemporaries was a revolutionary, whose life’s work would drastically improve the human condition. Who do you pick?
Undeservedly, the first man made it onto the BBC’s top-ten Millennium list (in 10th place), while it was arguably James Clerk Maxwell, the gentleman to the right, who considerably improved the lot of humanity.
Maxwell predicted the existence of electromagnetic waves (but didn’t live to see this prediction experimentally verified) and correctly identified light with electromagnetic waves. This seemingly settled an old score once and for all in favor of Christian Huygens’ theory of light and relegated Newton’s corpuscular theory (proposed in his famous work, Opticks) to the dustbin of history.
There was just one little problem, and over time it grew so big it could no longer be ignored.
Until then all natural laws were well behaved. They didn’t discriminate against you if you happened to live on another star that zips through the cosmos at a different speed than our solar system.
Physics laws are usually written down with respect to inertial frames of reference (usually represented by a simple Cartesian grid). Inertial means that these systems can have relative motion but don’t accelerate. Natural laws could always be transformed between such reference systems so that by just expressing the coordinates of system 1 in those of system 2 you retain the exact same form of your equations (this is referred to as being invariant under Galilean transformations).
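Written out for a boost with relative velocity v along the x-axis, the Galilean transformation and its effect on Newton’s second law look like this:

```latex
x' = x - vt, \qquad t' = t
\;\;\Longrightarrow\;\;
\frac{d^2 x'}{dt'^2} = \frac{d^2 x}{dt^2},
\qquad \text{so } F = m\ddot{x} \text{ keeps its form in both frames.}
```

A light wave obeying $\partial_x^2 E - \frac{1}{c^2}\partial_t^2 E = 0$, on the other hand, cannot survive this substitution with the same c in both frames – and that is precisely the trouble brewing in the next paragraph.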
Maxwell’s equations did not conform and steadfastly refused to follow these iron-clad transformation laws. And this wasn’t the only problem; in combination with statistical thermodynamics, electrodynamics also predicted that a hot object should radiate an infinite amount of energy, a peculiarity known as the ultraviolet catastrophe.
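The divergence is easy to see by comparing the classical Rayleigh–Jeans law for the spectral energy density of a hot cavity with Planck’s quantized fix:

```latex
u_{\text{RJ}}(\nu, T) = \frac{8\pi \nu^2}{c^3}\, k_B T
\qquad \text{vs.} \qquad
u_{\text{Planck}}(\nu, T) = \frac{8\pi h \nu^3}{c^3}\, \frac{1}{e^{h\nu/k_B T} - 1}.
```

The classical expression grows without bound as $\nu^2$, so integrating over all frequencies yields infinite energy; Planck’s exponential factor suppresses the high frequencies and keeps the total finite – at the price of assuming energy comes in discrete quanta $h\nu$.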
These two issues were the seeds for the main physics revolutions of the last century. The first one directly led to Special Relativity (one could even argue that this theory was already hidden within the Maxwell equations), while the second one required field quantization in order to be fixed and spawned modern Quantum Mechanics.
It all started with this unlikely revolutionary whose life was cut short at age 48 (succumbing to the same kind of cancer that killed his mother).
Maxwell, like no other, demonstrated the predictive power of mathematical physics. One wishes he could have lived to see Heinrich Hertz confirm the existence of electromagnetic waves – he would have been 55 at that time. But no human life span would have sufficed to see his first major insight verified: