A Brief History of Quantum Computing

An attempted apotheosis.

In the beginning there was Feynman.  And Feynman was with the scientific establishment.  Feynman was a god. Or at least as close to it as a mortal physicist can ever be. He tamed the light with his QED (pesky renormalization notwithstanding) and gave the lesser folks the Feynman diagrams, so that they could trade pictograms like baseball cards and feel that they partook.  But he saw that not all was good. So Feynman said: “Quantum mechanical systems cannot be efficiently simulated on conventional computing hardware.” Of course he did that using quite a few more words and a bit more math, and published it in a first-class journal, as was befitting.

And then he was entirely ignored.

Feynman walked upon earth.

Four grueling years went by without follow-up publications on quantum computing. That is, until David Deutsch got in on the act and laid the foundation for the entire field with his seminal paper.  His approach was motivated by the question of how quantum physics might affect information processing, including the kind that happens in your human mind.  So how to experiment on this? Obviously you cannot put a real brain into a controlled state of quantum superposition (i.e. a kind of “Schrödinger’s brain”) – but a computer, on the other hand, won’t feel any pain.

Let's put a computer into Schrödinger's box instead.
Less messy.

So was Feynman finally vindicated? No, not really, Deutsch wrote:

Although [Feynman’s ‘universal quantum simulator’] can surely simulate any system with a finite-dimensional state space (I do not understand why Feynman doubts that it can simulate fermion systems), it is not a computing machine in the sense of this article.

The old god just couldn’t get any respect anymore. His discontent with string theory didn’t help, and he was facing another inconvenience besides: he was dying. Way too young.  His brilliant mind extinguished at just 69.  Like most people, he did not relish the experience; his last words were famously reported as: “I’d hate to die twice. It’s so boring.”

The only upside to his death was that he didn’t have to suffer through Penrose’s book “The Emperor’s New Mind”, which was released the following year.  While it is a great read for a layperson who wants to learn about the thermodynamics of black holes as well as bathroom tile designs, none of the physics would have been news to Feynman. If he hadn’t already been dead, Feynman probably would have died of boredom before he made it to Penrose’s anti-climactic last chapter.  There, Penrose finally puts forward his main thesis, which can be distilled simply as “the human mind is just different”.  This insight comes after Penrose’s short paragraph on quantum computing, where the author concludes that his mind is just too beautiful to be a quantum computer, although the latter clearly can solve some NP (h/t Joshua Holden) problems in polynomial time. Not good enough for him.

Physics, meanwhile, has been shown to be an NP-hard sport, but more important for the advancement of quantum computing was the cracking of another problem in the NP class: Shor’s algorithm showed that a quantum computer could factor large numbers in polynomial time.  Now the latter might sound about as exciting as the recent Ramsey number discussion on this blog, but governments certainly paid attention to this news, as the public-key encryption schemes currently in wide use (RSA and friends) can be cracked if you accomplish this feat. Of course quantum information science offers a remedy for this as well, in the form of (already commercially available) quantum key distribution, but I digress.
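
For the curious, here is a minimal toy sketch (plain Python, my own illustration rather than anything lifted from Shor’s paper) of the classical half of that trick: if you can find the order r of a random a modulo N – the one step the quantum computer does in polynomial time – a gcd or two hands you a factor. The brute-force find_order below merely stands in for the quantum subroutine and only works for toy-sized numbers.

    # Toy sketch of the classical post-processing in Shor's algorithm.
    # The order-finding step is brute-forced here; that is the part a
    # quantum computer does in polynomial time.
    from math import gcd
    from random import randrange

    def find_order(a, n):
        """Smallest r > 0 with a^r = 1 (mod n) -- stand-in for the quantum step."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n):
        """Return a nontrivial factor of a small odd composite n."""
        while True:
            a = randrange(2, n)
            d = gcd(a, n)
            if d > 1:
                return d              # lucky guess: a shares a factor with n
            r = find_order(a, n)
            if r % 2 == 1:
                continue              # need an even order, try another a
            y = pow(a, r // 2, n)
            if y == n - 1:
                continue              # trivial square root of 1, try again
            return gcd(y - 1, n)      # nontrivial factor guaranteed here

    print(shor_factor(15))   # prints 3 or 5
    print(shor_factor(21))   # prints 3 or 7

Run it on 15 or 21 and it happily spits out a factor; point it at a 2048-bit RSA modulus and you will be waiting until the heat death of the universe, which is exactly the gap the quantum subroutine is supposed to close.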

With the kind of attention that Shor’s algorithm engendered, the field exploded, not least because computer science took an interest.  In fact Michael Nielsen, one of the co-authors of the leading textbook on the subject, is a computer scientist by trade, and if you want a gentle introduction to the subject I can highly recommend his video lectures (I just wish he’d get around to finishing them).

Of course, if you were stuck in the wrong place while all this exciting development took place, you would never have known.  My theoretical physics professors at the University of Bayreuth lived and died by the Copenhagen omertà, which in their minds decreed that you could not possibly get any useful information out of the phase factor of a wave function. I was taught this as an undergraduate physics student in the early nineties (in fairness, probably before Shor’s algorithm was published, but long after Deutsch’s paper). This nicely illustrates the inertia in the system and how long it takes before new scientific insights become widespread.

The fresh and unencumbered viewpoint that the computer scientists brought to quantum mechanics is more than welcome and has already helped immensely in softening up the long-established dogma of the Copenhagen interpretation. Nowadays Shor can easily quip (as he did on Scott Aaronson’s blog):

Interpretations of quantum mechanics, unlike Gods, are not jealous, and thus it is safe to believe in more than one at the same time. So if the many-worlds interpretation makes it easier to think about the research you’re doing in April, and the Copenhagen interpretation makes it easier to think about the research you’re doing in June, the Copenhagen interpretation is not going to smite you for praying to the many-worlds interpretation. At least I hope it won’t, because otherwise I’m in big trouble.

I think Feynman would approve.

8 thoughts on “A Brief History of Quantum Computing”

  1. “[…] the author concludes that his mind is just too beautiful to be a quantum computer, although the latter clearly can solve some NP complete problems in polynomial time.” Are you sure? I was under the impression that that question was unsolved.

  2. Found this page while doing some “serious” research and it really made me laugh… I mean, the subject itself is remarkable even without Feynman’s proving it, lol
