When Popular Science is Neither Science nor Popular

This is a detour from my usual subject of quantum computing, prompted by the unusual media attention that the story of faster-than-light neutrinos attracted.

As was to be expected, this brought out the special relativity detractors in droves. Usually my attitude towards this crowd is similar to the one depicted in this xkcd strip:

xkcd's take on faster than light neutrinos

Yet I think this is symptomatic of a broader problem: I am convinced that when it comes to popular science, modern physics has utterly failed the public. TV science shows with fancy CGI graphics that are somehow supposed to make string theory and dark matter plausible don’t really help.

By often presenting untested theories such as super-strings as factual, they rather help to undermine trust in the entire endeavor. Then there is the obsession with avoiding math at all costs because it might hurt the sales of the pop science product, leaving us with an overuse of vague yet seemingly overbearing terminology and strained metaphors. Great if meant as material for technobabble on Star Trek, but a sorry excuse for supposedly “scientific truth”.

This leaves the lay person very vulnerable when it comes to assessing any claims about physics. Rather than trying to sell the public on the latest scientific pet theory, I wish the media would go a bit meta and facilitate a better understanding of what actually makes good science, for instance by exploring the question of what criteria a physics theory should fulfill.

Most people are quite familiar with the concept of falsifiability by experiment, but few contemplate where the power of a good theory comes from: reduction to plausible first principles that drives a drastic increase in the domain of applicability. Or, to state it like Kurt Lewin, in a much more straightforward and less abstruse manner: there is really nothing as practical as a good theory.

And it had better be good. Way back, when I was a full-time physics student, I was struck by how much more satisfied experimental physicists seemed with their lot in life. I only met a single career theoretical physicist who appeared genuinely happy and content (he was one of the great ones and close to retirement). He explained it to me like this: "As an experimentalist, chances are you can constantly make some incremental progress: fixing hardware, eliminating some systematic errors, coming up with new creative ideas for how to probe a specific effect. Chances are that as an experimentalist you will experience positive feedback from your work quite regularly. It also helps that you often get to work hands-on. A theoretical physicist, on the other hand, can count himself lucky if he has just one eureka moment in his life. And even then it might turn out your insight was plainly wrong. Experimental physicists win either way – any result is a good result.”

Einstein once said, "Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage - to move in the opposite direction." And moving in the opposite direction is exactly the hallmark of a good theory. But this reduced complexity doesn’t necessarily make understanding nature any easier. To illustrate this, let’s pick an example that pre-dates modern physics:

Newtonian physics requires several not immediately obvious first principles (axioms), namely Newton's famous three laws:

  1. Inertia
  2. Force is proportional to acceleration
  3. Action equals reaction

These principles are anything but obvious in everyday life. They had to be distilled from carefully conducted and idealized experiments (after all, Newton didn’t have access to a perfect vacuum).

Now consider that these principles can be replaced by far more immediately plausible first principles:

  1. Time and space are homogeneous and the latter also isotropic. This is just a fancy way of saying that experiments behave the same if we move them to a different place and time. For instance a pendulum on the moon would have swung the same way thousands of years ago as it does today.
  2. The principle of least action – in colloquial terms: the system follows the path of least resistance, or, to be more precise, it gets from point A to B with the least amount of reshuffling of energy, for instance between kinetic and potential energy in the case of a pendulum.

So why did Newton not start with these simpler and more self-explanatory principles? No doubt he was a genius, and he invented calculus to present his theory of mechanics, but the calculus of variations was only developed later, by Euler and Lagrange. So he didn’t have the mathematical tools required to derive classical mechanics from these more fundamental first principles.
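With the calculus of variations in hand, the derivation is remarkably short. A minimal sketch in standard textbook Lagrangian notation (one dimension; the notation is the usual convention, not something from this post):

```latex
% The action is the time integral of the Lagrangian L = T - V:
S[q] = \int_{t_1}^{t_2} L(q, \dot q)\, dt , \qquad
L = \tfrac{1}{2} m \dot q^{\,2} - V(q)

% Demanding that the action be stationary, \delta S = 0, yields the
% Euler--Lagrange equation:
\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0

% Inserting L from above recovers Newton's second law:
m \ddot q = -\frac{\partial V}{\partial q} = F
```

The first and third of Newton's laws follow similarly from the homogeneity assumptions (via momentum conservation), which is exactly the sense in which the two principles above replace the three laws.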

It turned out that this superior Hamiltonian approach to mechanics was so immensely successful and elegant in its mathematical execution that, after it had run its course, a young Max Planck was told he’d be silly to pursue a career in physics – obviously everything was already known.

A good physics theory works a bit like a good lossless compression algorithm: you have to remember much less to derive all the laws of physics, but the more advanced the theory, the harder you have to work to get there.
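As a toy illustration of the compression analogy (a hypothetical example using Python's standard zlib library, nothing to do with the physics itself):

```python
import zlib

# A highly regular "law-governed" message, like data produced by
# a simple physical law, compresses down to very little.
message = b"F = m * a; " * 1000

compressed = zlib.compress(message, 9)

# Lossless: the original is recovered exactly, but only by doing
# the extra work of decompression -- analogous to deriving all the
# familiar laws from a compact set of first principles.
assert zlib.decompress(compressed) == message
print(len(message), len(compressed))
```

The compressed form is drastically shorter than the message, yet nothing is lost; the price is the computational effort of unpacking it, just as the price of a compact theory is the mathematical effort of deriving its consequences.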

This is our first important criterion for judging the merits of a theory.

It's not the only one, though. Another important one is nicely laid out by David Deutsch in this TED talk:

In a good theory every piece and part is needed - it cannot be easily varied to accommodate different outcomes.

So let’s see how some theories fare against these criteria.

For instance, Special Relativity can be derived from the same principles as Hamiltonian mechanics by adding group properties for the allowed coordinate transformations, i.e. requiring that every transformation is reversible and that composing several transformations yields a transformation of the same class. It can then be shown mathematically that only the Galilean and the Lorentz transformations satisfy these axioms. A great paper demonstrating this, while requiring only high school level math, was published in 1976.
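A sketch of what such a derivation delivers, in my own summary using standard notation (consult the 1976 paper for the actual argument):

```latex
% The group axioms force the transformations between inertial frames
% into a one-parameter family
x' = \gamma(v)\,(x - vt), \qquad
t' = \gamma(v)\,(t - Kvx), \qquad
\gamma(v) = \frac{1}{\sqrt{1 - Kv^2}}

% with a single universal constant K that experiment must fix:
%   K = 0       \;\Rightarrow\; Galilean transformation (absolute time)
%   K = 1/c^2   \;\Rightarrow\; Lorentz transformation (invariant speed c)
```

Nothing about light enters the argument; the invariant speed falls out of the group structure, and experiment merely tells us which of the two allowed values of K nature picked.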

Yet, more than thirty years later, Special Relativity is still mostly taught in the same convoluted way in which Einstein originally established the theory. (It is very doubtful that Einstein would still teach it that way if he were still around.)

Getting to General Relativity requires just one more axiom: the equivalence principle, which states that the effects of acceleration and gravity are locally indistinguishable. Following this through with mathematical rigor is beyond the scope of high school math, but contrary to popular belief it is really not that complicated. After all, it’s the same math that underlies our ability to produce reasonably accurate maps of our curved planet.
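To make the map-making analogy concrete, here is a textbook sketch in standard differential-geometry notation (not taken from this post):

```latex
% Map-making: a metric encodes distances on a curved surface,
% e.g. on a sphere of radius R
ds^2 = R^2 \left( d\theta^2 + \sin^2\!\theta \, d\varphi^2 \right)

% General Relativity promotes the same machinery to spacetime,
% with metric g_{\mu\nu}, and free fall becomes geodesic motion:
\frac{d^2 x^{\mu}}{d\tau^2}
+ \Gamma^{\mu}_{\ \alpha\beta}
  \frac{dx^{\alpha}}{d\tau} \frac{dx^{\beta}}{d\tau} = 0
```

The Christoffel symbols $\Gamma^{\mu}_{\ \alpha\beta}$ are built from derivatives of the metric, exactly as in the geometry of curved surfaces that cartographers rely on; gravity simply becomes the curvature of the metric.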

General Relativity therefore satisfies my quality criterion: it can be derived from first principles.

How about David Deutsch's criterion? It satisfies that as well: what follows from the axioms doesn’t allow for any wiggle room. Einstein stumbled over this himself when he introduced the unmotivated cosmological constant into his field equations, because he just couldn’t believe that an expanding universe made any sense.
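For reference, the field equations with the cosmological constant term, in standard notation:

```latex
% Einstein's field equations with the cosmological constant \Lambda:
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}

% \Lambda is the one extra knob: without it, the equations leave no
% room to force a static universe.
```

The $\Lambda$ term is the exception that proves the rule: it is the only modification consistent with the theory's symmetries, and even it was famously regretted once the expansion of the universe was observed.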

In summary:

  1. It is well tested.
  2. It follows from first principles.
  3. It can’t be easily varied to accommodate different results.

Now, let’s contrast this with what is considered the leading contender for a unifying theory: after decades of research, super-string theory has produced not a single testable prediction, there is no known way to derive it from first principles, and the theory is notorious for being tweakable to accommodate different results.

Thankfully, there has been an entire book written about this colossal failure on all counts.

Nevertheless, this hasn’t really reached the public sphere, where super-string theory, rather than the Standard Model, is still often presented as our current factual understanding of the universe.

This growing befuddlement of the public with regard to the state of contemporary physics comes at an inopportune time. Long gone are the days of the Cold War, when particle physics was always funded, no questions asked.

With the Higgs boson hunt sold to the public as the main motivation for CERN's supercollider, I fear that physics may be confronted with a major credibility crisis if this search comes up empty. A crisis fully self-inflicted by selling untested theories as factual.



9 Responses to When Popular Science is Neither Science nor Popular

  1. Tihamer says:

    You wrote:
    “The equivalence principle that states that effects of acceleration and gravity are locally indistinguishable.”
    It depends on what you mean by “locally”, but if I wake up in small room, I know of at least three ways to determine whether that small room is in an accelerating starship or a hidden basement. I guess I better hurry up and publish!

    • Henning says:

      Yes by all means, you should publish this :-)

      I’d be interested to learn how you would go about it.

      • Mark says:

        I seem to remember one of those methods being (that is if there is more than one) the following thought experiment: when two objects are dropped at the same time in the accelerating ship, their paths to the floor are parallel; however, when it’s done on Earth, in the “basement”, their paths to the floor fall at an angle toward each other, pointing to a common center–the center of the Earth. Although experimental testing seems insurmountable, it is an issue worth considering.

        • Henning Dekant says:

You are correct: the equivalence principle is only valid locally, in a sufficiently small volume – mathematically you work with such infinitesimally small frames to derive GR.

  2. Pingback: Diamond is a Qubit’s Best Friend | Wavewatching

  3. Tihamer Toth-Fejel says:

    Well, it depends what you mean by locality. Within a small room, it’s an engineering possibility (barely). It’s not new science, just accurate measurements of Euclidean geometry, Newtonian physics, and Einstein’s relativity.
    Actually, I’m frying bigger fish than that. I’m looking for engineering measurements that will determine if Nick Bostrom is correct about his simulation argument. So far, I think he has as much as a 50% probability of being correct.

  4. BAReFOOt says:

    And you know what your problem with the thing is?

    TL;DR!

    *Seriously*! Cut it down! Nobody wants to read an unstructured wall of text! It’s inefficient and verbose! It wastes time! It’s inelegant and ugly and literally stupid!
    You get *one* page! If something is more complex, you split it into atomic one-page articles. And no whining about supposed “attention span problems”. IT has nothing to do with that! It has something to do with everyone of us being in a constant struggle to survive nowadays! There simply isn’t any *fucking* time! You wanna be heard? KEEP IT EFFICIENT AND SHORT!

    Also about math: If those retards would use some damn *descriptive identifiers* instead of fapping their math penis with deliberately obfuscating single-letter symbols, even worse when greek, and even worse when being symbols nobody ever fuckin saw, which cannot even be fucking spoken out(!), math wouldn’t be a *tenth* as annoying!