Category Archives: Popular Science

Quantum Computing – A Matter of Life and Death

Even the greatest ships can get it wrong.

In terms of commercial use cases, I have looked at corporate IT, as well as how a quantum computer will fit in with the evolving cloud computing infrastructure. However, where QC will make the most difference (as in, a difference between life and death) goes entirely unnoticed, certainly by those whose lives will eventually depend on it.

Hyperbole? I think not.

As detailed in my brief retelling of quantum computing history, it all started with the realization that most quantum mechanical systems cannot be efficiently simulated on classical computers. Unfortunately, the sorry state of public science understanding means that this elicits hardly more than a shrug from even those who make a living writing about it (not the case for this humble blogger, who toils away at it as a labor of love).

A prime example of this is a recent, poorly sourced article from the BBC that disses the commercial availability of turnkey-ready quantum computing without even mentioning D‑Wave, and at the same time proudly displays the author’s ignorance about why this technology matters (emphasis mine):

“The only applications that everyone can agree that quantum computers will do markedly better are code-breaking and creating useful simulations of systems in nature in which quantum mechanics plays a part.”

Well, it’s all good then, isn’t it? No reason to hurry and get a quantum computer on every scientist’s desk.  After all, only simulations of nature in which quantum mechanics plays a part will be affected.  It can’t possibly be all that important then.  Where the heck could this esoteric quantum mechanics stuff possibly play an essential part?

Oh, just all of solid state physics, chemistry, microbiology, and any attempt at quantum gravity unification.

For instance, one of the most important aspects of pharmaceutical research is to understand the 3D protein structure, and then to model how this protein reacts in vivo using very calculation-intensive computer simulations.

There has been some exciting progress in the former area. It used to be that only proteins that lend themselves to crystallization could be structurally captured via X-ray scattering. Now, recently developed low energy electron holography has the potential to revolutionize the field. Expect to see a deluge of new protein structure data. But despite some progress with numerical approaches to protein folding simulations, the latter remain computationally intractable (protein folding is known to be NP-hard). On the other hand, polynomial speed-ups are possible with quantum computing. Without it, the inevitable computational bottleneck will ensure that we forever condemn pharmaceutical research to its current expensive scatter-shot approach to drug development.
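To get a feel for what even a "mere" polynomial speed-up buys you, consider quantum search: Grover's algorithm finds a marked item among N candidates in about √N queries instead of the ~N/2 a classical scan needs on average. A minimal back-of-the-envelope sketch (the conformation-space size below is a made-up illustration, not a real folding workload):

```python
import math

# Unstructured search over N candidate configurations:
# a classical scan needs ~N/2 queries on average, while
# Grover's algorithm needs ~(pi/4) * sqrt(N) quantum queries.
def classical_queries(n_candidates: int) -> float:
    return n_candidates / 2

def grover_queries(n_candidates: int) -> float:
    return (math.pi / 4) * math.sqrt(n_candidates)

N = 2**40  # hypothetical conformation-space size, for illustration only
print(f"classical: ~{classical_queries(N):.3g} queries")
print(f"Grover:    ~{grover_queries(N):.3g} queries")
```

For a search space of 2^40 candidates this is the difference between roughly half a trillion classical queries and under a million quantum ones. Still not an exponential miracle, but enough to move a computation from hopeless to feasible.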

There is no doubt in my mind that in the future, people’s lives will depend on drugs that are identified by strategically deploying quantum computing in the early drug discovery process.  It is just a matter of when. But don’t expect to learn about this following BBC’s science news feed.

How Did Physics Become So Strange?

Let’s start with a quiz:

Their last names start with the same two letters, and they lived in the same city at the same time – but that’s where the similarities end.

Only one of these two contemporaries was a revolutionary, whose life’s work would drastically improve the human condition.

Who do you pick?

Undeservedly, the first man made it onto the BBC’s top-ten Millennium list (in 10th place), while it was arguably James Clerk Maxwell, the gentleman on the right, who considerably improved the lot of humanity.

He changed physics forever, single-handedly undermining the very foundation of the science when developing his theory of electromagnetism in the early 1860s.

At first, nobody noticed.

Maxwell predicted the existence of electromagnetic waves (but didn’t live to see this prediction experimentally verified) and correctly identified light with electromagnetic waves. This seemingly settled an old score once and for all in favor of Christiaan Huygens’ wave theory of light and relegated Newton’s corpuscular theory (proposed in his famous work, Opticks) to the dustbin of history.

There was just one little problem, and over time it grew so big it could no longer be ignored.

Until then all natural laws were well behaved. They didn’t discriminate against you if you happened to live on another star that zips through the cosmos at a different speed than our solar system.

Physics laws are usually written down with respect to inertial frames of reference (usually represented by a simple Cartesian grid). Inertial means that these systems can be in relative motion but don’t accelerate. Natural laws could always be transformed between such reference systems, so that by just expressing the coordinates of system 1 in those of system 2 you retain the exact same form of your equations (this is referred to as being invariant under Galilean transformations).
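Written out, the Galilean transformation for a boost with relative velocity v along x, and why Newton's second law survives it, looks like this:

```latex
% Galilean boost between two inertial frames (relative velocity v):
x' = x - vt, \qquad t' = t
% Velocities pick up a constant offset, but differentiating twice
% with respect to t (= t') leaves the acceleration untouched:
\frac{d^2 x'}{dt'^2} = \frac{d^2}{dt^2}\,(x - vt) = \frac{d^2 x}{dt^2}
% Hence F = ma has exactly the same form in both frames.
```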

Maxwell’s equations steadfastly refused to follow these iron-clad transformation laws. And this wasn’t the only problem; in combination with statistical thermodynamics, electrodynamics also predicted that a hot object should radiate an infinite amount of energy, a peculiarity known as the ultraviolet catastrophe.

These two issues were the seeds for the main physics revolutions of the last century. The first one led directly to Special Relativity (one could even argue that this theory was already hidden within the Maxwell equations), while the second one required field quantization to fix, and spawned modern Quantum Mechanics.

It all started with this unlikely revolutionary whose life was cut short at age 48  (succumbing to the same kind of cancer that killed his mother).

Maxwell, like no other, demonstrated the predictive power of mathematical physics. One wishes he could have lived to see Heinrich Hertz confirm the existence of electromagnetic waves – he would have been 55 at that time. But no human life span would have sufficed to see his first major insight verified:

A calculation early in his career conclusively demonstrated that the rings of Saturn had to be made up of small “brick-bat” rocks. It wasn’t until the Voyager probes encountered the planet in 1980/81 that he was proven right. Really, they should be called Maxwell’s rings.

A Brief History of Quantum Computing

An attempted apotheosis.

In the beginning there was Feynman.  And Feynman was with the scientific establishment.  Feynman was a god. Or at least as close to it as a mortal physicist can ever be. He tamed the light with his QED (pesky renormalization notwithstanding) and gave the lesser folks the Feynman diagrams, so that they could trade pictograms like baseball cards and feel that they partook.  But he saw that not all was good. So Feynman said: “Quantum mechanical systems cannot be efficiently simulated on conventional computing hardware.” Of course he did that using quite a few more words and a bit more math and published it in a first class journal as was befitting.

And then he was entirely ignored.

Feynman walked upon earth.

Four grueling years went by without follow-up publications on quantum computing. That is, until David Deutsch got in on the act and laid the foundation for the entire field with his seminal paper. His approach was motivated by how quantum physics might affect information processing, including the kind that happens in the human mind. So how to experiment on this? Obviously you cannot put a real brain into a controlled state of quantum superposition (i.e. a kind of “Schrödinger’s brain”) – but a computer, on the other hand, won’t feel any pain.

Let's rather put a computer into Schrödinger's box.
Less messy.

So was Feynman finally vindicated? No, not really, Deutsch wrote:

Although [Feynman’s ‘universal quantum simulator’] can surely simulate any system with a finite-dimensional state space (I do not understand why Feynman doubts that it can simulate fermion systems), it is not a computing machine in the sense of this article.

The old god just couldn’t get any respect anymore. His discontent with string theory didn’t help, and in addition he experienced another inconvenience: He was dying. Way too young.  His brilliant mind extinguished at just age 69.  Like most people, he did not relish the experience, his last words famously reported as: “I’d hate to die twice. It’s so boring.”

The only upside to his death was that he didn’t have to suffer through Penrose’s book “The Emperor’s New Mind”, which was released the following year. While it is a great read for a layperson who wants to learn about the thermodynamics of black holes as well as bathroom tile designs, none of the physics would have been news to Feynman. If he wasn’t already dead, Feynman probably would have died of boredom before he made it to Penrose’s anti-climactic last chapter. There, Penrose finally puts forward his main thesis, which can be simply distilled as “the human mind is just different”. This insight comes after Penrose’s short paragraph on quantum computing, where the author concludes that his mind is just too beautiful to be a quantum computer, although the latter can solve some hard problems, such as factoring (which, h/t Joshua Holden, is not actually NP-complete), in polynomial time. Not good enough for him.

Physics, meanwhile, has been shown to be an NP-hard sport, but more important for the advancement of quantum computing was the cracking of another problem in NP: Shor’s algorithm showed that a quantum computer could factorize large numbers in polynomial time. Now the latter might sound about as exciting as the recent Ramsey number discussion on this blog, but governments certainly paid attention to this news, since the public-key encryption algorithms currently in wide use can be cracked by anyone who accomplishes this feat. Of course quantum information science offers a remedy for this as well, in the form of (already commercially available) quantum key distribution, but I digress.
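The heavy lifting in Shor's algorithm is quantum period finding; everything around it is classical number theory. The sketch below brute-forces the order classically (which is exactly the exponentially expensive step a quantum computer replaces) just to show how the factors drop out:

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n). Found by brute force here;
    this is the step where Shor's quantum period finding earns its keep."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_postprocess(a: int, n: int):
    """Classical reduction: an even order r with a^(r/2) != -1 (mod n)
    yields nontrivial factors of n via gcd."""
    r = order(a, n)
    if r % 2 != 0:
        return None  # bad luck, retry with another random a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None  # also bad luck
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_postprocess(7, 15))  # order of 7 mod 15 is 4 -> factors (3, 5)
```

Running this with a = 7 and N = 15 finds the order r = 4 and recovers the factors 3 and 5.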

With the kind of attention that Shor’s algorithm engendered, the field exploded, not least because computer science took an interest. In fact Michael Nielsen, one of the co-authors of the leading textbook on the subject, is a computer scientist by trade, and if you want a gentle introduction to the subject I can highly recommend his video lectures (I just wish he’d get around to finishing them).

Of course, if you were stuck in the wrong place at the time that all this exciting development took place, you would never have known. My theoretical physics professors at the University of Bayreuth lived and died by the Copenhagen omertà, which in their minds decreed that you could not possibly get any useful information out of the phase factor of a wave function. I was taught this as an undergraduate physics student in the early nineties (in fairness, probably before Shor’s algorithm was published, but long after Deutsch’s paper). This nicely illustrates the inertia in the system and how long it takes before new scientific insights become widespread.

The fresh and unencumbered viewpoint that the computer scientists brought to quantum mechanics is more than welcome and has already helped immensely in softening up the long-established dogma of the Copenhagen interpretation. Nowadays Shor can easily quip (as he did on Scott Aaronson’s blog):

Interpretations of quantum mechanics, unlike Gods, are not jealous, and thus it is safe to believe in more than one at the same time. So if the many-worlds interpretation makes it easier to think about the research you’re doing in April, and the Copenhagen interpretation makes it easier to think about the research you’re doing in June, the Copenhagen interpretation is not going to smite you for praying to the many-worlds interpretation. At least I hope it won’t, because otherwise I’m in big trouble.

I think Feynman would approve.

Information is Physical

Even when the headlines are not gut-wrenching

Information processing is seldom that physical.

One of the most astounding theoretical predictions of the late 20th century was Landauer’s discovery that erasing memory is linked to entropy, i.e. heat is produced whenever a bit is fully and irrevocably erased. As far as theoretical work goes, this is even somewhat intuitively understandable: after all, increasing entropy essentially means moving to a less ordered state (technically, a macrostate that can be realized by a larger number of microstates). And what could possibly be more ordered than a computer memory register?
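Landauer's bound is concrete enough to put a number on: erasing one bit at temperature T must dissipate at least kT·ln 2 of heat. A quick calculation at room temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact since the 2019 SI redefinition)

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum heat dissipated by irreversibly erasing one bit: kT ln 2."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is tiny, roughly 2.9e-21 J --
# many orders of magnitude below what today's transistors dissipate per switch,
# which is why the recent experimental confirmation took such a clever setup.
print(landauer_limit_joules(300.0))
```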

Recently this prediction has been confirmed by a very clever experiment.  Reason enough to celebrate this with another “blog memory-hole rescue”:

If you ever wondered what the term “adiabatic” in conjunction with quantum computing means, Perry Hooker provides the answer in this succinct explanation. His logic gate discussion shows why Landauer’s principle has implications far beyond the memory chips, and in a sense, undermines the entire foundation of classical information processing.

Truly required reading if you want to appreciate why quantum computing matters.

The Rise of the Quantum Hippies

… and why I blame Niels Bohr.

A satirical hyperbolic polemic

Recently there was a bit of a tempest in a teapot in the LinkedIn quantum physics group, because it is very much over-run by members whom I dub “Quantum Hippies” – i.e. the kind of people who think they’ve read a quantum mechanics book after putting down Capra’s The Tao of Physics. You have probably encountered the type.

So this raises the question: where did they spring from?

It certainly didn’t start with Capra, he was just a catalyst.

I blame this guy:

Niels Bohr stands accused.

If it wasn’t for him, and his side-kick Heisenberg, Bohr’s Copenhagen Interpretation would have never become the kind of dogma that it did.  We are still suffering the consequences.

Science is a competitive sport, even more so in the olden days when the myth of the lone genius reigned supreme. Most of the founding fathers of quantum mechanics lacked many things, but not ego. Much has been written about the struggle between Bohr and Einstein. The latter of course never stood a chance, as he was far removed from the development of the new theory. It didn’t help that he was old at the time and easily painted as a relic. Other challengers to the Copenhagen Interpretation were dealt with in various ways.

  • It was helpful that David Bohm could be vilified as a communist and nobody listened to de Broglie anyway.
  • Schrödinger mocked the new dogma with his famous cat in a box thought experiment but did not have it in him to put up a real fight.
  • Max Planck fell into the same geezer category as Einstein, but was even easier to dismiss due to his far less prominent name recognition.
  • Karl Popper was “just a philosopher”.
  • Others like Landé weren’t much of a challenge, falling into the “Landé who?” category.

Hence the Copenhagen Interpretation reigned supreme, and much energy was now devoted to keeping its dirty little secret tucked away in the closet, under the stairs, with the key thrown away.

Maybe some of the energy directed at defending it against the other interpretations was partly motivated by the thought that it would be easier to keep this problematic aspect of the new theory under wraps. For whatever reason, Bohr and Heisenberg gave birth to a new physics omertà, the “shut up and calculate” doctrine. This would have far-reaching consequences – way beyond the realm of physics.

The raison d’être of the hippie revolution was to challenge authority (that arguably was asking for it).

What a delightful gift Bohr had prepared for a counter-culture movement that was already high on half-understood, Asian-influenced mysticism and other more regulated substances. And so the Copenhagen Interpretation’s dirty little secret was dragged out of the closet and subsequently prostituted. I am of course referring to the fact that the wave function collapse originally introduced by Heisenberg requires an observer or observing mind. This was subsequently bastardized into the idea that “the conscious mind creates reality”. Just as Einstein’s Special and General Relativity entered popular culture as the spectacularly wrong premise that “everything is relative”, carte blanche for magical thinking was brought to you courtesy of some of the greatest minds of the 20th century. A more spectacular blow-back is hard to imagine.

This was super-charged by Bell’s theorem, which confirmed quantum mechanics’ essential non-locality. This in turn was translated into the mystical certainty that “everything is instantaneously connected all the time”. And so to this day you get spectacularly wrong pop science articles like this one. It completely ignores that these days entangled qubits (the essential ingredient in the thought experiment on which this article hinges) are very well understood as a quantum information resource, and that they cannot facilitate an instantaneous connection between distant events. The term “instantaneous” has no absolute meaning when Special Relativity is taken into account. This is especially egregious when contemplating that this was published in the American Association of Physics Teachers’ journal.

Although it’s a well-established fact that the public American education system has hit rock bottom in the developed world, I still would have expected better.

The Flower Power movement has generally been associated with the left of the political spectrum, but it is in the nature of such powerful memes to eventually permeate all mainstream thinking. Hence American journalists subscribe to a “he said, she said” school of journalistic “objectivity” – after all, everything’s relative – and political operatives of all stripes feel fully justified in adopting a “Triumph of the Will” line of thinking.

When Ron Suskind interviewed inside staff from the Bush Jr. administration and questioned them as to how they thought they could just remake Iraq with the limited resources committed, the staffer famously answered: “… when we act, we create our own reality”.

Yes, I blame Niels Bohr for that too.

Rescued From the Blogroll Memory Hole

During the week my professional life leaves me no time to create original content.  Yet, there is a lot of excellent material out there pertinent to the nascent quantum information industry. So to fill the inter-week void I think it is very worthwhile to try to rescue recent blogroll posts from obscurity.

Very relevant to the surprising news that Scott Aaronson came around on D-Wave is Robert Tucci’s great technical review of D-Wave’s recent Nature paper. If you are not afraid of some math and are tired of the void verbiage that passes for popular science journalism, then this is for you.

Where Buzzwords Go to Die

It is a pretty sure sign that a buzzword is near the end of its life cycle when the academic world uses it for promotional purposes. Ever more science research comes with its own version of marketing hype. What makes this such a sad affair is that it is usually done pretty badly.

So why is spouting that quantum computing makes for perfect cloud computing really, really bad marketing?

“Cloud computing” is the latest buzzword iteration of “computing as a service”, and as far as buzzwords go it has served its purpose well. It is still in wide circulation, but the time is nigh when it will be put out to pasture and replaced with something that sounds shinier – while signifying the very same thing.

Quantum computing, on the other hand, is not a buzzword. It is a revolution in the making. To hitch it to the transitory cloud computing term is bad marketing in its own right, but the way it is done in this case is even more damaging. There is already one class of quantum information devices commercially available: Quantum Key Distribution systems. They are almost tailor-made to secure current cloud infrastructures and alleviate the security concerns that are holding this business model back (especially in Europe).

But you’d never know it from reading the sorry news stories about the (otherwise quite remarkable) experiment to demonstrate blind quantum computing. To the contrary, an uninformed reader will come away with the impression that you won’t have acceptable privacy in the cloud unless full-scale quantum computing becomes a reality.

Compare and contrast to this exquisite quantum computing marketing stunt. While the latter brings attention and confidence to the field at zero cost, this bought-and-paid-for marketing couldn’t be further off the mark. It is almost as if it were designed to hold the entire industry back. Simply pitiful.

Quantum Computing Bounty

If you can hunt down this cat and kill Quantum Computing for good, then there will be a mighty big reward waiting for you. Scott Aaronson has put up a bounty of $100,000. All you have to do is prove that universal Quantum Computing is impossible in the real world.

Cartoon drawing by Becky Thorn

On the surface there are a couple of surprises here: Scott doesn’t really hate quantum computing – he actually bases his academic career on it. And herein lies the rub for the even bigger surprise: this academic really knows how to create one heck of a marketing stunt. His blog was already flooded after Slashdot reported on this, and more media are now jumping on the bandwagon. This is an awful lot of free publicity for a marketing budget of exactly zero dimes (and this cost includes the net present value of the bounty money).

Hats off!

When Popular Science is Neither Science nor Popular

This is a detour from my usual subject of quantum computing, prompted by the unusual media attention that the story of faster-than-light neutrinos has caused.

As was to be expected, this brought out the special relativity detractors in droves. Usually my attitude towards this crowd is similar to the one depicted in this xkcd strip:

xkcd's take on faster than light neutrinos

Yet, I think this is symptomatic of a broader problem: I am convinced that when it comes to popular science, modern physics has utterly failed the public. TV science shows with fancy CGI graphics that are somehow supposed to make string theory and dark matter plausible don’t really help.

By often presenting untested theories such as super-strings as factual, they rather help to undermine trust in the entire endeavor. Then there is the obsession with avoiding math at all costs because it might hurt the sales of the pop science product, leaving us with the overuse of vague yet seemingly overbearing terminology and strained metaphors. Great, if meant as material for techno babble on Star Trek, but a sorry excuse for supposedly “scientific truth”.

This leaves the lay person very vulnerable when it comes to assessing any claims about physics. Rather than trying to sell the public on the latest scientific pet theory, I wish the media would go a bit meta and facilitate a better understanding of what actually makes good science – for instance, by exploring the question of what criteria a physics theory should fulfill.

Most people are quite familiar with the concept of falsifiability by experiment, but few contemplate where the power of a good theory comes from: reduction to plausible first principles that drives a drastic increase in the domain of applicability. Or, to state it like Kurt Lewin, in a much more straightforward and less abstruse manner: there is really nothing as practical as a good theory.

And it better be good. Way back, when I was a full-time physics student, I was struck by how much more satisfied experimental physicists seem to be with their lot in life. I only met a single career theoretical physicist who appeared genuinely happy and content (he was one of the great ones and close to retirement). He explained it to me like this: “As an experimentalist, chances are you can constantly make some incremental progress. Fixing hardware, eliminating some systematic errors, coming up with some new creative ideas of how to probe for a specific effect. Chances are as an experimentalist you will experience positive feedback from your work quite regularly. It also helps that you often get to work hands-on. A theoretical physicist, on the other hand, can count himself lucky if he has just one eureka moment in his life. And even then it might turn out your insight was plainly wrong. Experimental physicists win any which way – any result is a good result.”

Einstein once said “Any intelligent fool can make things bigger and more complex… It takes a touch of genius – and a lot of courage – to move in the opposite direction.” And moving in the opposite direction is exactly the hallmark of a good theory. But this reduced complexity doesn’t necessarily make understanding nature any easier. To illustrate this, let’s pick an example that pre-dates modern physics:

Newtonian physics requires several not immediately obvious first principles (axioms) i.e. his famous three laws:

  1. Inertia
  2. Force is proportional to acceleration
  3. Action equals reaction

These principles are anything but obvious in everyday life. They had to be distilled from carefully conducted and idealized experiments (after all, Newton didn’t have access to a perfect vacuum).

Now consider that these principles can be replaced by far more immediately plausible first principles:

  1. Time and space are homogeneous and the latter also isotropic. This is just a fancy way of saying that experiments behave the same if we move them to a different place and time. For instance a pendulum on the moon would have swung the same way thousands of years ago as it does today.
  2. The principle of least action – in colloquial terms: The system follows the path of least resistance or to be more precise it gets from point A to B with the least amount of reshuffling of energy. For instance from kinetic to potential energy in the case of a pendulum.
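To see that nothing is lost in this trade, here is how Newton's second law falls out of the least-action principle for a single particle in a potential V(x):

```latex
% Action for a particle of mass m in a potential V(x):
S[x] = \int_{t_A}^{t_B} \Bigl( \tfrac{1}{2} m \dot{x}^2 - V(x) \Bigr)\, dt
% Demanding \delta S = 0 yields the Euler--Lagrange equation
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\quad\Longrightarrow\quad m\ddot{x} = -\frac{dV}{dx} = F
% i.e. Newton's second law, recovered from the two principles above.
```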

So why did Newton not start with these simpler and more self-explanatory principles? No doubt he was a genius, and he invented Calculus to present his theory of mechanics, but he inconveniently didn’t get around to the Calculus of Variations. So he didn’t have the mathematical tools required to derive classical mechanics from these more fundamental first principles.

It turned out that this superior Hamiltonian approach to mechanics was so immensely successful and elegant in its mathematical execution that after it ran its course, a young Max Planck was told he’d be silly to want to pursue a career in physics – obviously everything was already known.

A good physics theory works a bit like a good lossless compression algorithm. You have to remember much less to derive all physics laws but you have to work harder to get there the more advanced the theory is.

This is our first important criterion for judging the merits of a theory.

It is not the only one, though. Another important one is nicely laid out by David Deutsch in this TED talk:

In a good theory every piece and part is needed – it cannot be easily varied to accommodate different outcomes.

So let’s see how some theories fare against these criteria.

For instance, Special Relativity can be derived from the same principles as Hamiltonian mechanics by adding group properties for the allowed spatial transformations, i.e. reversibility, and the requirement that applying several transformations in succession yields the same class of transformation. It can then be mathematically shown that only the Galilean and the Lorentz transformations satisfy these axioms. A great paper demonstrating this, while only requiring high school level math, was published in 1976.
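One common way to summarize the result of such group-theoretic derivations (in the spirit of the 1976 paper, though the notation here is mine) is as a one-parameter family of transformations:

```latex
% The group axioms admit exactly one free constant K:
x' = \frac{x - vt}{\sqrt{1 - K v^2}}, \qquad
t' = \frac{t - K v x}{\sqrt{1 - K v^2}}
% K = 0      recovers the Galilean transformation;
% K = 1/c^2  gives the Lorentz transformation, with the invariant
%            speed c left to be fixed by experiment.
```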

Yet, more than thirty years later, Special Relativity is still mostly taught in the same convoluted way in which Einstein originally established the theory. (It is very doubtful that Einstein would still teach it that way if he were still around.)

To get to General Relativity requires just one more axiom: the equivalence principle, which states that the effects of acceleration and gravity are locally indistinguishable. Following this through with mathematical rigor is beyond the scope of high-school math, but contrary to popular belief it is really not that complicated. After all, it’s the same math that underlies our ability to produce reasonably accurate maps of our curved planet.

General Relativity therefore satisfies my quality criteria: It can be derived from first principles.

How about David Deutsch’s criterion? It satisfies that one as well: what follows from the axioms doesn’t allow for any wiggle room. Einstein stumbled over this himself when he introduced the unmotivated cosmological constant into his field equations, because he just couldn’t believe that an expanding universe made any sense.

In summary:

  1. It is well tested
  2. Follows from first principles
  3. Can’t be easily varied to accommodate different results.

Now, let’s contrast this with what is considered to be the leading contender for a unifying theory: after decades of research, super-string theory has produced not a single testable prediction, there is no known approach that would allow super-string theory to be derived from first principles, and the theory is notorious for being tweakable to accommodate different results.

Thankfully, there has been an entire book written about this colossal failure on all counts.

Nevertheless, this hasn’t really reached the public sphere where super-string theory is still often presented as the current factual understanding of the universe rather than the Standard Model.

This growing befuddlement of the public with regards to the state of contemporary physics theories comes at an inopportune time. Long gone are the days of the cold war when particle physics was always funded – no questions asked.

With the Higgs Boson hunt sold to the public as the main motivation for CERN’s supercollider I fear that physics may be confronted with a major credibility crisis if this search comes up empty.  A crisis fully self-inflicted by selling untested theories as factual.