Monthly Archives: April 2012

The Greatest Tragic Hero of Physics

Although widely admired and loved, in the end he died like so many who attained extremes of fame or fortune - estranged from family and separated from old friends. The only person to witness his death in exile was a nurse, incapable of understanding his last words, which were uttered in a language foreign to her.

If his private life were the template for a telenovela, viewers would regard it as too over the top: As a teenager he is left behind with relatives to finish school when his parents have to resettle in a foreign country. He rebels, his school teachers give up on him, he drops out. He travels across the Alps to reunite with his family. If it weren't for the unwavering support of his mother, he would probably never go on to obtain a higher education. She manages to find him a place with relatives in a country of his native language so that he can finally gain his diploma. The same year he renounces his old citizenship and also quits the religion of his parents.

He subsequently enrolls in a prestigious university, but ignores the career choice that his parents had in mind for him. He falls in love with a beautiful fellow student from a faraway land. His parents are against the relationship, and so are hers. Against the will of their families they want to get married, but our hero struggles to find a job after graduation. He hopes to be hired as an assistant at his university, just like the rest of his peers, but he has once again fallen out with some of his teachers. Many of the other faculty members only notice him because he skips so many lectures - especially the purely mathematical ones. Still, he passes all the tests, relying on his friends' lecture notes.

His wife-to-be becomes pregnant out of wedlock, has to return to her family, and gives birth to a little girl with Down syndrome. He never even gets to see the girl. That summer - two years after graduation - he finally lands his first steady job with the help of a friend. Later that year his father dies, and shortly after that our man marries his beloved Mileva.

Meet the Einsteins:

Images of old Albert Einstein are so iconic that some people tend to forget that he wasn't always the white-haired old sage.

Having settled down in Bern, he now manages to find the discipline and inner calm for his subsequent groundbreaking works. I cannot even begin to fathom how he musters the strength to do so while coping with a full-time day job and a young family. Discussing his ideas with friends and colleagues certainly helps, and surely he must discuss his research with Mileva as well (how much she influenced his work has been a matter of some controversy). The following three years, even while working as a patent clerk, are the most fruitful of Albert Einstein's life. His research culminates in four publications in the year 1905 that irreversibly change the very foundation of physics. His papers ...

  1. ... describe for the first time the theory of Special Relativity.
  2. ... show the equivalence of mass and energy, i.e. the famous E=mc².
  3. ... propose the idea of energy quanta (i.e. photons) to explain the photoelectric effect.
  4. ... demonstrate that Brownian motion is a thermal phenomenon.

Without the realization that mass and energy are equivalent (2), there'd be no nuclear energy or weapons. Without Einstein's energy quanta hypothesis (3), there'd be no quantum mechanics, and his work explaining Brownian motion (4) settled, once and for all, the question of whether atoms are real. At the same time, it provided the missing statistical underpinning for thermodynamics.
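To get a sense of the scale involved in (2), a quick back-of-the-envelope calculation (my own illustration, not from the original papers) shows what converting a single gram of matter entirely into energy would yield:

$$E = mc^2 = 10^{-3}\,\mathrm{kg} \times (3 \times 10^{8}\,\mathrm{m/s})^2 \approx 9 \times 10^{13}\,\mathrm{J}$$

That is roughly 25 million kilowatt-hours, on the order of the yield of a 20-kiloton fission bomb.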

These were all amazing accomplishments in their own right, but none resonated with the public as much as the consequences of Einstein's theory of Special Relativity (1). This one was regarded as a direct affront to common sense and achieved such notoriety that it was later abused by Nazi propaganda to agitate against "Jewish physics".

Even then, physics was already such a specialized trade that the man on the street would usually have no motivation to form an opinion on some physics paper. So what caused all this negative attention? Einstein's trouble was that by taking Maxwell's theory of Electrodynamics seriously he uncovered properties of something that everybody thought they intuitively understood. Any early 20th century equivalent of Joe the Plumber would have felt comfortable explaining how to measure the size of a space and how to measure time - they were understood as absolute, immutable dimensions in which life played out. Only they cannot be absolute if Maxwell's equations are right and the speed of light is a constant in all frames of reference. This fact was hiding in plain sight, and you don't need any mathematics to understand it - you only need the willingness to entertain the possibility that the unthinkable might be true.

In 1923 an elaborate movie was produced that tried to explain Special Relativity to a broad audience. It turned out to be a blockbuster, but still didn't convince the skeptical public - watching it made me wonder if that is where so many misconceptions about Einstein's theories started. It does not contain any falsehoods, but it spends far too much time elaborating the general principle of relativity, while the consequences of the invariance of the speed of light are mixed in with results from General Relativity, and neither is really explained. Apparently the creators of this old movie felt that they had to start with the most basic principles and couldn't really expect their audience to follow some of Einstein's arguments. Granted, this was before anybody even knew what our planet looked like from space, and the imagined astronaut of this flick is shot into space with a cannon as the preferred mode of transportation - as, for instance, imagined by Jules Verne. Nowadays the task is much easier by comparison. You can expect a blog reader to be desensitized by decades of SciFi. Also, having a plethora of educational videos at your fingertips makes for a straightforward illustration of some of the immediate consequences of accepting that the speed of light is constant in all frames of reference.

For a modern audience, a thought experiment involving two spaceships traveling in parallel, with a laser signal being exchanged between them, requires little explanation. All that is necessary is to come to grips with what it means for this laser signal to travel at the same speed in all frames of reference. For instance, this short video does an excellent job explaining that an observer passing by these spaceships will have to conclude that the clocks of the space pilots must run slower.
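For completeness, the geometry behind that conclusion fits in two lines (this is the standard light-clock argument, not tied to any particular video): in the pilots' frame the laser simply crosses the distance $d$ between the ships in the time $t_0 = d/c$. For the passing observer the ships move sideways with speed $v$ while the signal is in flight, so the light has to cover the hypotenuse of a right triangle:

$$(c\,t)^2 = d^2 + (v\,t)^2 \quad\Rightarrow\quad t = \frac{t_0}{\sqrt{1 - v^2/c^2}} > t_0$$

The same tick of this light clock takes longer in the observer's frame, so the pilots' clocks must appear to run slow by the factor $\sqrt{1 - v^2/c^2}$.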

Nevertheless, even today you still get publications like this one, in which two Stanford professors of psychology perpetuate this popular falsehood in the very first sentence of their long monograph:

[Einstein] established the subjective nature of the physical phenomenon of time.

Of course he did no such thing. He described how the flow of time and the temporal ordering of events transform between different inertial reference frames as an objective physical reality.
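What he did establish is the Lorentz transformation, which tells any two inertial observers moving at relative speed $v$ along the $x$-axis exactly how to translate between their time and space coordinates:

$$t' = \gamma\left(t - \frac{v\,x}{c^2}\right), \qquad x' = \gamma\,(x - v\,t), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$$

Both observers apply the same rule and agree on the outcome of every measurement translated this way - the very opposite of subjectivity.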

For over a hundred years special relativity has withstood all experimental tests (including the recent faster-than-light neutrino dust-up). Yet public education has still not caught up with it.

This is the second installment of my irregular biographical physics series intended to answer the question of how physics became so strange. Given Einstein's importance I will revisit his lasting legacy in a future post.

Quantum Computing – A Matter of Life and Death

Even the greatest ships can get it wrong.

In terms of commercial use cases, I have looked at corporate IT, as well as how a quantum computer will fit in with the evolving cloud computing infrastructure. However, the area where QC will make the most difference - as in, a difference between life and death - goes entirely unnoticed, certainly by those whose lives will eventually depend on it.

Hyperbole? I think not.

As detailed in my brief retelling of quantum computing history, it all started with the realization that most quantum mechanical systems cannot be efficiently simulated on classical computers. Unfortunately, the sorry state of public science understanding means that this elicits hardly more than a shrug even from those who make a living writing about it (not the case for this humble blogger, who toils away at it as a labor of love).

A prime example of this is a recent, poorly sourced article from the BBC that dismisses the commercial availability of turnkey-ready quantum computing without even mentioning D-Wave, and at the same time proudly displays the author's ignorance of why this technology matters (emphasis mine):

“The only applications that everyone can agree that quantum computers will do markedly better are code-breaking and creating useful simulations of systems in nature in which quantum mechanics plays a part.”

Well, it’s all good then, isn’t it? No reason to hurry and get a quantum computer on every scientist’s desk.  After all, only simulations of nature in which quantum mechanics plays a part will be affected.  It can’t possibly be all that important then.  Where the heck could this esoteric quantum mechanics stuff possibly play an essential part?

Oh, just all of solid state physics, chemistry, microbiology, and any attempt at quantum gravity unification.

For instance, one of the most important tasks in pharmaceutical research is to determine a protein's 3D structure and then to model how the protein behaves in vivo using very calculation-intensive computer simulations.

There has been some exciting progress on the former front. It used to be that only proteins that lend themselves to crystallization could be structurally captured via X-ray scattering. Now, recently developed low energy electron holography has the potential to revolutionize the field. Expect to see a deluge of new protein structure data. But despite some progress with numerical approaches to protein folding simulations, the latter remains an NP-hard problem. On the other hand, polynomial speed-ups are possible with quantum computing. Without it, the inevitable computational bottleneck will ensure that pharmaceutical research stays forever condemned to its current expensive scatter-shot approach to drug development.
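To put some numbers on what even a quadratic speed-up would buy here, consider this back-of-the-envelope sketch (the peptide size, the discretization, and the assumption of a Grover-type unstructured search are my own illustrative choices, not taken from any source cited here):

# Hypothetical illustration: a Grover-type quadratic speed-up applied to a
# brute-force conformation search over a crudely discretized toy peptide.
import math

residues = 50            # hypothetical short peptide
states_per_residue = 3   # crude discretization of the backbone angles

N = states_per_residue ** residues   # size of the conformational search space
classical_queries = N                # exhaustive classical enumeration
quantum_queries = math.isqrt(N)      # ~sqrt(N) oracle calls for a Grover-type search

print(f"conformations:     {N:.2e}")
print(f"classical queries: {classical_queries:.2e}")
print(f"quantum queries:  ~{quantum_queries:.2e}")

Even for this toy example the classical count (about 7 x 10^23) is utterly out of reach, while its square root (under 10^12) is at least in the realm of what dedicated hardware could conceivably grind through.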

There is no doubt in my mind that in the future people's lives will depend on drugs that are identified by strategically deploying quantum computing in the early drug discovery process. It is just a matter of when. But don't expect to learn about this by following the BBC's science news feed.

How Did Physics Become So Strange?

Let's start with a quiz:

Their last names start with the same two letters, and they lived in the same city at the same time - but that's where the similarities end.

Only one of these two contemporaries was a revolutionary, whose life's work would drastically improve the human condition.

Who do you pick?

Undeservedly, the first man made it into the top ten of the BBC Millennium list (at number 10), while arguably it was James Clerk Maxwell, the gentleman on the right, who considerably improved the lot of humanity.

He changed physics forever, single-handedly undermining the very foundation of the science when developing his theory of electromagnetism in the early 1860s.

At first, nobody noticed.

Maxwell predicted the existence of electromagnetic waves (but didn't live to see this prediction experimentally verified) and correctly identified light as an electromagnetic wave. This seemingly settled an old score once and for all in favor of Christiaan Huygens' wave theory of light and relegated Newton's corpuscular theory (proposed in his famous work, Opticks) to the dustbin of history.

There was just one little problem, and over time it grew so big it could no longer be ignored.

Until then, all natural laws were well behaved. They didn't discriminate against you if you happened to live on another star that zips through the cosmos at a different speed than our solar system.

Physical laws are usually written down with respect to inertial frames of reference (typically represented by a simple Cartesian grid). Inertial means that these systems may move relative to one another but do not accelerate. Natural laws could always be transformed between such reference systems: by simply expressing the coordinates of system 1 in those of system 2, you retain the exact same form of your equations (this is referred to as invariance under Galilean transformations).
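For two such frames whose relative velocity $v$ points along the $x$-axis, a Galilean transformation takes the familiar textbook form:

$$x' = x - v\,t, \qquad t' = t$$

Velocities simply add, and since accelerations are unchanged ($a' = a$), a law like Newton's $F = ma$ looks exactly the same in both frames.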

Maxwell's equations did not conform and steadfastly refused to follow these iron-clad transformation laws. And this wasn't the only problem; in combination with statistical thermodynamics, electrodynamics also predicted that a hot object should radiate an infinite amount of energy, a peculiarity known as the ultraviolet catastrophe.
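The divergence is easy to see in the classical Rayleigh-Jeans expression for the spectral energy density of thermal radiation,

$$u(\nu, T) = \frac{8\pi\nu^2}{c^3}\,k_B T,$$

which grows without bound with frequency, so that the total energy $\int_0^\infty u(\nu, T)\,d\nu$ comes out infinite.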

These two issues were the seeds of the main physics revolutions of the last century. The first led directly to Special Relativity (one could even argue that this theory was already hidden within the Maxwell equations), while the second required field quantization in order to be fixed and spawned modern Quantum Mechanics.

It all started with this unlikely revolutionary, whose life was cut short at age 48 (he succumbed to the same kind of cancer that had killed his mother).

Maxwell, like no other, demonstrated the predictive power of mathematical physics. One wishes he could have lived to see Heinrich Hertz confirm the existence of electromagnetic waves - he would have been 55 at that time. But no human life span would have sufficed to see his first major insight verified:

A calculation early in his career conclusively demonstrated that the rings of Saturn had to be made up of small "brick-bat" rocks. It wasn't until the Voyager probes encountered the planet in 1980/81 that he was proven right. Really, they should be called Maxwell's rings.

Analog VLSI for Neural Networks – A Cautionary Tale for Adiabatic Quantum Computing

Update: This research is now again generating some mainstream headlines. It will be interesting to see if this hybrid chip paradigm has more staying power than previous analog computing approaches.


Fifteen years ago I attempted to find an efficient randomized training algorithm for simple artificial neural networks suitable for implementation on a specialized hardware chip. The chip's design only allowed feed-forward connections, i.e. back-propagation on the chip was not an option. The idea was that, given the massive acceleration of the network's execution on the chip, some sort of random-walk search might be at least as efficient as optimized backprop algorithms on general-purpose computers.
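For readers who have never seen such a search in action, here is a minimal sketch of the general idea - a random-walk weight search on a tiny feed-forward net, written in plain Python for an ordinary CPU. It illustrates the approach only; the network size, the XOR toy problem and every parameter are my own stand-ins, not the original algorithm or chip interface:

# Minimal sketch: random-walk (hill-climbing) weight search for a tiny
# 2-4-1 feed-forward network on the XOR toy problem.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, X):
    # Unpack a flat parameter vector into the 2-4-1 network's weights.
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    h = np.tanh(X @ W1 + b1)                               # hidden layer
    return 1.0 / (1.0 + np.exp(-((h @ W2).ravel() + b2)))  # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

# Random-walk search: perturb all weights, keep the step only if the loss improves.
w = rng.normal(scale=0.5, size=17)
best = loss(w)
for _ in range(20000):
    candidate = w + rng.normal(scale=0.1, size=w.shape)
    c_loss = loss(candidate)
    if c_loss < best:
        w, best = candidate, c_loss

print("final MSE:", round(best, 4))
print("outputs:  ", np.round(forward(w, X), 2))

The point of such a scheme on dedicated hardware is that the expensive part - evaluating loss(candidate), i.e. running the network forward - is exactly what the chip accelerates, while the search loop itself needs nothing beyond a random number generator and a comparison.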

My research group followed a fairly conventional digital design, whereas at the time analog VLSI was all the rage - a field (like so many others) pioneered by Carver Mead. On the face of it, this makes sense, given that biological neurons obviously work with analog signals, yet attain remarkable robustness (robustness being the typical problem with any sort of analog computing). Yet it is also this robustness that makes the "infinite" precision that is the primary benefit of analog computing somewhat superfluous.

Looking back at this, I expected the analog VLSI approach to be a bit of an engineering fad, as I wasn't aware of any commercial systems ever hitting the market - of course, I could easily have missed a commercial offering if it followed a similar trajectory to the inspiring but ultimately ill-fated transputer. In the end the latter was just as much a fad as the Furby toy of yesteryear, yet arguably much more inspiring.

To my surprise and ultimate delight, a quick statistic on the publication count for analog neural VLSI proves me wrong, and there is still some interesting science happening:

Google Scholar Publication Count for Neural Analog VLSI

So why are there no widespread commercial neuromorphic products on the market? Where is the neural analog VLSI co-processor to make my laptop more empathic and fault tolerant? I think the answer comes down simply to Moore's law. A flagship neuromorphic chip currently being designed at MIT boasts a measly 400 transistors. I don't want to dispute its scientific usefulness - having a detailed synapse model in silicon will certainly have its uses in medical research (and the Humane Society will surely approve if it cuts down on the demise of guinea pigs and other critters). On the other hand, the Blue Brain project claims it has already successfully simulated an entire rat cortical column on its supercomputer, and its goal is nothing less than a complete simulation of the rodent's brain.

So what does this have to do with Adiabatic Quantum Computing? Just like neuromorphic VLSI technology, its main competition for the foreseeable future is conventional hardware. This is the reason why I was badgering D-Wave when I thought the company didn't make enough marketing noise about the Ramsey number research performed with their machine. Analog neural VLSI technology may find a niche in medical applications, but so far there is no obvious market niche for adiabatic quantum computing. Scott Aaronson argued that the "coolness" of quantum computing will sell machines. While this label has some marketing value, not least due to some of his inspired stunts, this alone will not do. In the end, adiabatic quantum computing has to prove its mettle in raw computational performance per dollar spent.

(h/t to Thomas Edwards, who posted a comment a while back in the LinkedIn Quantum Information Science group that inspired this post)