Category Archives: Popular Science

Physics and Math – a Complicated Love Affair

Usually I restrict myself to physics on this blog.  Not because I don’t love and adore pure math, but mostly because news from this field leaves most readers in about the same state as captured in this comic.

Take the recent proof of the ABC conjecture, for instance.  By all accounts, Shinichi Mochizuki,  the mathematician who came up with it, is brilliant.  Unfortunately, that means that even experts close to his field have a  hard time following and validating the proof that spans several arcane areas and four pre-print papers.

But I make an exception for the recent news about a new way to solve linear equations (proposed by Prasad Raghavendra). Everybody learned in school how to solve these via Gaussian elimination, which might be the most ubiquitous math skill after simple arithmetic.  Algorithms have since been developed to improve on this method, but it is still extremely cool that there is a new algorithm for such a fundamental mathematical problem with such widespread applications.

And because this concerns an algorithm, there is even pseudo and real code to play with.

What is especially interesting about this new method is that it employs randomness (a resource that has been shown many times to allow for more efficient algorithms).  But there is a catch: this approach is only applicable to systems of linear equations over finite fields (so applying it to QC simulation won't work).  A field is a commutative ring in which every non-zero element has a multiplicative inverse; less formally, it is a set with two binary operations, addition and multiplication, that behave much as they do for the rational numbers, with multiplication distributing over addition.

This finite field constraint seems again to push this onto a more esoteric plane, but fortunately these structured sets are anything but.  This becomes clear when looking at the simplest finite field, which consists of just two members, 0 and 1: add the logical operations AND and XOR – et voilà – if you've been exposed to bit-wise coding you're intimately familiar with the smallest finite field GF(2). GF stands for Galois Field, another name for the same thing.  (The number denotes how many elements the field has, i.e. its cardinality.)

 A  B | A XOR B | A AND B
 0  0 |    0    |    0
 0  1 |    1    |    0
 1  0 |    1    |    0
 1  1 |    0    |    1
These familiar logic gate operations turn a bit set into the smallest finite field GF(2).
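
Since the appeal here is that there is actual code to play with, here is a minimal sketch of plain Gaussian elimination over GF(2), which makes the point that the arithmetic really does reduce to XOR and AND. This is the textbook method, not Raghavendra's randomized algorithm:

```python
# A minimal sketch of plain Gaussian elimination over GF(2): addition is XOR,
# multiplication is AND, and rows are lists of 0/1 bits. This is the textbook
# method, not Raghavendra's randomized algorithm.

def solve_gf2(A, b):
    """Solve A x = b over GF(2); returns one solution or None if inconsistent."""
    n, m = len(A), len(A[0])
    rows = [A[i][:] + [b[i]] for i in range(n)]   # augmented matrix [A | b]
    pivot_cols = []
    r = 0
    for c in range(m):
        # Find a pivot row with a 1 in column c (1 is the only non-zero element).
        pivot = next((i for i in range(r, n) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(n):
            # rows[i][c] is the multiplier (an AND); if it is 1, "subtract" the
            # pivot row by XOR-ing it into row i.
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
        if r == n:
            break
    # A zero row with right-hand side 1 means the system is inconsistent.
    if any(rows[i][m] for i in range(r, n)):
        return None
    x = [0] * m
    for i, c in enumerate(pivot_cols):
        x[c] = rows[i][m]
    return x

# Example: x0 XOR x1 = 1 and x1 = 1  ->  x0 = 0, x1 = 1
print(solve_gf2([[1, 1], [0, 1]], [1, 1]))
```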

It shouldn’t come as a surprise that finite fields find application in coding theory and cryptography. There are also two connections to the main topic of this blog: The obvious one is that any randomized algorithm can benefit from the precursors of all Quantum Information devices i.e. quantum noise generators (you can buy one here, but if you need true randomness and cannot afford one of these, then this service might be for you).

The second connection is more subtle and interesting.  Just as for any other calculation device, large universal gate based quantum computers can only become feasible if there are robust error correction mechanisms in place to deal with inevitable random noise. One of the earliest quantum error correction schemes was published by the venerable Peter Shor.  It turns out that the basis for the "stabilizer codes" that fix the error on a single qbit are again the Pauli operators, which in turn map to GF(4); this can be generalized to GF(q) for non-binary systems. So it seems this new algorithm could very well find application in this area. It would be a somewhat ironic twist if a randomized algorithm turned out to be an efficient tool for correcting random qbit errors.
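
For the curious, here is a minimal sketch of the closely related binary picture behind stabilizer codes: each single-qubit Pauli operator is encoded as a pair of bits, and two Pauli strings commute exactly when a symplectic inner product vanishes. Identifying each bit pair with an element of GF(4) gives the GF(4) view mentioned above. (The five-qubit-code generators in the example are standard; the code itself is just an illustration.)

```python
# Minimal sketch of the binary picture behind stabilizer codes: an n-qubit
# Pauli string is encoded as two bit vectors (x, z), one bit pair per qubit
# (I=(0,0), X=(1,0), Z=(0,1), Y=(1,1)). Two Pauli strings commute exactly
# when their symplectic inner product is 0 mod 2. Identifying each (x, z)
# pair with an element of GF(4) gives the GF(4) view mentioned above.

PAULI_TO_BITS = {'I': (0, 0), 'X': (1, 0), 'Z': (0, 1), 'Y': (1, 1)}

def encode(pauli_string):
    x = [PAULI_TO_BITS[p][0] for p in pauli_string]
    z = [PAULI_TO_BITS[p][1] for p in pauli_string]
    return x, z

def commute(p, q):
    """True if the two Pauli strings commute."""
    xp, zp = encode(p)
    xq, zq = encode(q)
    # Symplectic inner product: sum over qubits of xp*zq + zp*xq, mod 2.
    s = sum(a * d + b * c for a, b, c, d in zip(xp, zp, xq, zq)) % 2
    return s == 0

# Two stabilizer generators of the 5-qubit code commute:
print(commute("XZZXI", "IXZZX"))   # True
# A single-qubit X and Z anticommute:
print(commute("X", "Z"))           # False
```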

Transmuting Waste and Worries Away

The philosopher's stone yearned for by alchemists was actually not a stone.

Alchemists of old yearned for the philosopher's stone, a substance of magical quality that would allow them to transmute other elements into gold.  Nowadays, it would be even more valuable to have this mythical device, since its transmuting power could be harnessed to transform long-lived nuclear waste, if not into gold, then at least into less dangerous isotopes.

Scientists at the Belgian nuclear research centre SCK CEN in Mol are working to accomplish exactly that. Lacking a philosopher's stone, they are deploying the Swiss army knife of contemporary physics: a particle accelerator that creates fast neutrons for the treatment of problematic nuclear waste.

A modern version of the philosopher's stone: The MYRRHA (Multi-purpose hYbrid Research Reactor for High-tech Applications) concept allows for industrial scale treatment of nuclear waste. Fast neutrons are produced via nuclear spallation.

By capturing these neutrons, the waste can be transmuted into fission products with much shorter half-lives. The beauty of this concept is that the nuclear waste simultaneously serves as fuel in a sub-critical reactor.  A recent paper on this concept can be found here, and there is also an excellent presentation online (which contains the MYRRHA diagram and radiotoxicity graph).

Not only does this technology allow us to get rid of extremely worrisome long-lived radioactive material, but a nice side effect is that it will also alleviate our energy security worries, as this design is well suited to using thorium as fuel as well.

The following graph illustrates how dramatically this could alter our nuclear waste problem (note: the x-axis is logarithmic).

The requirement for safe storage of nuclear waste could be drastically shortened from about 200,000 years to a mere 200, if processed in a suitable spallation reactor.

This flies in the face of conventional wisdom, as well as a world increasingly turned off by conventional nuclear energy in the wake of its latest catastrophe in Fukushima. After all, this new waste treatment still requires a nuclear reactor, and given the track record of this industry, is it a risk worth taking?

To answer this, one must take into consideration that the idea of using a particle accelerator as a neutron source for a nuclear reactor is actually quite old, and significantly predates the recent research in Mol.  I first encountered the concept when preparing a presentation for nuclear physics 101 almost twenty years ago.  My subject was differing approaches to inherently safe reactor designs, “safe” in this context defined as the inability of the reactor to engage in a run-away nuclear chain reaction. (The treatment of nuclear waste was not even on the horizon at this point because the necessary reprocessing to separate the waste material from the depleted fuel rods did not exist).

The ratio of surface to volume is key in determining whether a neutron triggers enough follow-up reactions to sustain a critical cascading chain reaction.

The idea is simple: design the reactor geometry in such a way that the neutrons produced by fission don't get the opportunity to spawn enough follow-up reactions, because too many of them escape the reactor vessel. Then make up the balance by providing enough neutrons from an accelerator-driven reaction to sustain the fission process.  Once you pull the plug on the accelerator, the fission reaction cannot sustain itself.  Compare this with the situation in Fukushima or Chernobyl: the latter was a classic run-away fission chain reaction, and the biggest problem at Fukushima was that melted-down fuel can become critical again (there is some indication that this may actually have happened to some degree).
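
A back-of-the-envelope sketch makes the "pull the plug" point quantitative: if each neutron spawns on average k < 1 successors, an external source of S neutrons per generation sustains a steady population of roughly S/(1−k), and that population collapses geometrically once the source is switched off. The numbers below are invented purely for illustration:

```python
# Back-of-the-envelope sketch of an accelerator-driven sub-critical core.
# Each generation, every neutron produces on average k new neutrons (k < 1
# because the geometry lets too many escape); the accelerator injects a fixed
# number S of spallation neutrons per generation. Numbers are illustrative only.

k = 0.95          # effective multiplication factor (sub-critical)
S = 1.0e6         # external spallation neutrons injected per generation

population = 0.0
for generation in range(200):
    population = k * population + S

# Steady state approaches S / (1 - k): each source neutron heads a finite
# chain of roughly 1/(1 - k) fission events.
print(f"with beam on : {population:.3e}  (S/(1-k) = {S/(1-k):.3e})")

# Now pull the plug on the accelerator: the population dies away geometrically.
for generation in range(200):
    population = k * population
print(f"with beam off: {population:.3e}")
```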

Will an inherently safe fission reactor design, one that can transmute the stockpiles of the most long-lived nuclear waste into something that needs to be kept in safe storage for only a few hundred years, sway the environmentally motivated opposition to nuclear technology? Doubtful.  Many will argue that this is too good to be true and point to the fly in the ointment: reprocessing is essential to make this happen, and doing it on an increased industrial scale will make accidental releases more likely.  Then there will be the talking point that the nuclear industry will simply use this as an opportunity to establish a plutonium fuel cycle (one of the purposes for which reprocessing technology was originally developed).  Not to mention the strongly entrenched ideological notion, especially in some European countries, with Germany topping the list, that humanity is simply too immature to handle this kind of technology. In a way, this is the temporal analog to the Not In My Back-Yard (NIMBY) attitude; maybe it should be called the NIMLT principle, as in Not In My LifeTime – let future generations deal with it.

Do you think a core catcher would have helped in Fukushima?

Of course, the nuclear industry did little to earn any trust when you consider what transpired in the wake of the Chernobyl disaster.  In a pathetic display of a "lesson learned" they added "core catchers" to existing blueprints rather than invest R&D dollars in an inherently safe design. Instead of fixing the underlying problem, they simply tried to make the worst imaginable accident more manageable.  It was as if a car manufacturer whose vehicles rarely, but occasionally, explode improved the situation in the next model line by adding a bigger fire extinguisher.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

In 2006 France changed its laws and regulations in anticipation of this new technology, and now requires that nuclear waste storage sites remain accessible for at least a hundred years so that the waste can be reclaimed. To me this seems eminently reasonable and pragmatic. The notion that we could safely compartmentalize dangerous nuclear waste for many thousands of years and guarantee that it would stay out of the biosphere always struck me as hubris. This technology offers a glimpse at a possible future where humanity may finally gain the capability to clean up its nuclear messes.

From the Annals of the Impossible (Experimental Physics)

Updated below.

Photons only slowly emerge from the Sun’s core. Neutrinos just pass through once produced.

Radioactive decay is supposed to be the ultimate random process, immutably governed by an element's half-life and nothing else.  There is no way to determine when a single radioactive atom will spontaneously decay, nor any way to speed up or slow down the process.  This ironclad certainty has always been the best argument of opponents of conventional nuclear fission power generation, as it means that the inevitable nuclear waste will have to be kept isolated from the biosphere for thousands of years (notwithstanding recent research attempts at stimulated transmutation of some of the longer-lived waste products).

When plotting the activity of a radioactive sample, you expect a graph like the following: a smooth decrease with slight, random variations.

Detected activity of the 137Cs source. The first two points correspond to the beginning of data taking. Dotted lines represent a 0.1% deviation from the exponential trend. Residuals (lower panel) of the measured activity to the exponential fit. Error bars include statistical uncertainties and fluctuations.

(This graph stems from a measurement of the beta decay of 137Cs and was taken deep underground.)
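
For comparison, here is a minimal simulation of what such a measurement should look like if nothing but ordinary exponential decay and Poisson counting noise were at work (half-life and count rate are arbitrary illustration values, not the parameters of the actual experiment):

```python
# Minimal sketch: a decay measurement with only exponential decay plus Poisson
# counting noise. Half-life and count rate are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(0)

half_life_days = 30.0 * 365.25          # a ~30-year half-life, in days
lam = np.log(2) / half_life_days        # decay constant per day
days = np.arange(0, 4 * 365)            # four years of daily measurements
expected = 1.0e6 * np.exp(-lam * days)  # expected counts per day

# Each day's count fluctuates around the smooth exponential.
measured = rng.poisson(expected)

residual = (measured - expected) / expected
print(f"typical relative scatter: {residual.std():.4%}")
# roughly 0.1% for a million counts per point, from counting statistics alone
```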

What you don’t expect are variations that follow a discernible pattern in the decay rate of a radioactive element, nor any correlation with outside events. But this is exactly what Jere H. Jenkins et al. found:

Plot of measured 36Cl decays taken at the Ohio State University Research Reactor (OSURR). The crosses are the individual data points, and the blue line is an 11-point rolling average. The red curve is the inverse of the square of the Earth–Sun distance. (Error bars are only shown for a limited number of points).
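
The red 1/R² curve mentioned in that caption is easy to reproduce: to first order in the Earth's orbital eccentricity (about 0.0167, with perihelion in early January), the Earth–Sun distance, and hence 1/R², follows a simple annual cosine. A rough sketch, purely for illustration:

```python
# Rough sketch of the annual 1/R^2 modulation: to first order in the orbital
# eccentricity e ~ 0.0167, the Earth-Sun distance in AU is approximately
# R(d) ~ 1 - e*cos(2*pi*(d - d_perihelion)/365.25), with perihelion near Jan 3.
import numpy as np

e = 0.0167
day = np.arange(365)
R = 1.0 - e * np.cos(2 * np.pi * (day - 3) / 365.25)   # distance in AU
inv_R2 = 1.0 / R**2

print(f"1/R^2 ranges from {inv_R2.min():.4f} to {inv_R2.max():.4f}")
# roughly a 6-7% peak-to-peak swing over the year, peaking in January
```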

And now this surprising result of the sun’s influence has been corroborated.

The latest research was a collaboration of Stanford and Purdue University with the Geological Survey of Israel – rather reputable research powerhouses, which makes these results difficult to dismiss. Their paper contains the following contour graph of the measured gamma decay during the day, plotted over several years. When comparing this with the same kind of graph of the Sun's elevation over the observed date range, the correlation is quite obvious:

Top panel: gamma measurements as a function of date and time of day. Bottom panel: solar elevation as a function of date and time of day. The color bar gives the power, S, of the observed signal.

There is a video talk on this phenomenon available online.  It takes some patience to sit through, but it gives a more complete picture by explaining how these observed patterns can be correlated with the Sun's core activity with surprising accuracy.

The evidence for the reality of this effect is surprisingly good, and that is rather shocking. It does not fit into any established theory at this time.

Update and Forums Round-Up

This was the second blog post from this site to be picked up on slashdot (this was the first one). Last time around, WordPress could not handle the load (the so-called slashdot effect), so I subsequently installed the W3 Total Cache plug-in. Before getting back to the physics, I want to use this space to give them a big shout-out: if you operate a WordPress blog, I can highly recommend this plug-in.

This article received almost 30,000 views over two days. The resulting discussions fleshed out some great additional information, but also highlighted what can easily be misread or misconstrued. Top of the list was the notion that this might undermine carbon dating.  For all practical purposes, this can be categorically ruled out: to have a noticeable effect, the phenomenon would have to be much more pronounced.  The proposed pattern is just slightly outside the error bars and only imposes a slight variation on top of the regular decay pattern.  Archaeologists should not lose sleep over this. An unintended side effect was that this attracted creationists. If you adhere to this belief, please don't waste your time commenting here; this is a site dedicated to physics, and off-topic comments will be treated like spam and deleted.
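
For those who want the back-of-the-envelope number behind that statement: the radiocarbon age depends only linearly on the decay constant, so even a hypothetical permanent shift of 0.1% (roughly the scale of the claimed modulation, and a wild overstatement of any persistent effect) would move a 10,000-year date by only about ten years. A quick sketch:

```python
# Quick arithmetic behind "archaeologists should not lose sleep": the
# radiocarbon age t = ln(N0/N) / lam depends only linearly on the decay
# constant, so even a hypothetical permanent 0.1% shift in lam moves the
# inferred age by 0.1%. A periodic wiggle of that size largely averages out.
import math

half_life_c14 = 5730.0                     # years
lam = math.log(2) / half_life_c14

true_age = 10_000.0                        # years
ratio = math.exp(-lam * true_age)          # surviving fraction N/N0

lam_shifted = lam * 1.001                  # hypothetical permanent 0.1% shift
age_inferred = -math.log(ratio) / lam_shifted

print(f"age shift: {true_age - age_inferred:.1f} years on {true_age:.0f}")
# about 10 years, far below other radiocarbon uncertainties
```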

Another source of confusion was the difference between induced radioactive reactions and spontaneous decay. The latter is what we are supposed to see when measuring the decay of a radioactive isotope in the lab, and this is what these papers address. Induced transmutation is what can be observed when matter is, for instance, irradiated with neutrons.  This process is pretty well understood and happens as a side effect within nuclear reactors (or even a nuclear warhead, before the fission chain reaction overwhelms all other neutron absorption).  The treatment of nuclear waste with a neutron flux is what I hinted at in the last sentence of the first paragraph.  This emerging technology is very exciting and merits its own article, but it is an entirely different story. The news buried in the papers discussed here is that there may be a yet unknown neutrino absorption reaction influencing decay rates that were believed to be governed only by the half-life.  At this point an inverse beta decay is known to exist, but its reaction rate is much smaller than what would be required to explain the phenomenon that these papers claim.

The spontaneous decay of a radioactive isotope is regarded as the gold standard for randomness in computer science, and there are some products that rely on this (h/t to Dennis Farr for picking up on this).  That is, if the decay of a lump of radioactive material is no longer governed by the simple function $ N(t) = N_0 \, 2^{-t/t_{1/2}} $, then the probability distribution that these random number generators rely on is no longer valid (the decay constant used in the distribution function at the link relates to the half-life via $ t_{1/2} = \frac{\ln 2}{\lambda} $).
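
To make this concrete, here is a minimal conceptual sketch of one common scheme for turning decay times into bits. It is not a description of any particular product, and the half-life used is an arbitrary illustration value:

```python
# Minimal conceptual sketch (not any specific product): a decay-based RNG can
# compare successive waiting times between decays, which are exponentially
# distributed with rate lam = ln(2)/t_half. Comparing two exchangeable
# intervals gives an unbiased bit; a drifting lam within a pair would skew
# the distribution such a generator relies on.
import math
import random

def decay_intervals(lam, n, rng):
    """Simulated waiting times between decays for decay constant lam."""
    return [rng.expovariate(lam) for _ in range(n)]

def bits_from_intervals(intervals):
    """Compare interval pairs: (t1 < t2) -> 1, (t1 > t2) -> 0."""
    return [1 if a < b else 0 for a, b in zip(intervals[0::2], intervals[1::2])]

rng = random.Random(42)
t_half = 30.0                      # arbitrary half-life (years)
lam = math.log(2) / t_half

bits = bits_from_intervals(decay_intervals(lam, 200_000, rng))
print(f"fraction of ones: {sum(bits)/len(bits):.4f}")   # ~0.5 if lam is constant
```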

There were various thoughtful critical comments on the methodology and experimental set-up. The most prominent point was the contention that this was essentially the outcome of data-mining for patterns and then hand-picking the results that showed some discernible pattern.  Ironically, this approach is exactly the kind of data processing that spawned a billion-dollar industry catering to the commercial Business Intelligence market.  To me, this actually looks like a pretty smart approach to get some more mileage out of old data series (assuming the authors didn't discard results diametrically opposed to their hypothesis). The downside is the lack of information on the care that went into collecting this data in the first place.  For instance, it was repeatedly pointed out that the experimenters should run a control to capture the background radiation, and needed to understand and control for the environmental impact on their measuring devices. Relying on third-party data also means relying on the reputation of the researchers who conducted the original experiments.

When the original claims were made they triggered follow-up research. Some of it was inconclusive, some of it contradicted the findings, and a measurement performed on the Cassini probe's 238Pu radioisotope fuel clearly ruled out any sun-distance related influence on that alpha emitter.

Inevitably, with controversial results like this, the old adage that "extraordinary claims require extraordinary proof" is repeatedly dragged out.

I always thought this statement was cut a bit short and should really read: "Extraordinary claims require extraordinary proof and merit extraordinary attention."

Because without the latter, sufficient proof may never be acquired even if it is out there. The experiments required to test this further are not expensive. An easy way to rule out seasonality is to perform these measurements closer to the equator, or to have them performed at the same time in a North American and a South American lab, as one slashdot poster suggested.

Ultimately, a Beta emitter measurement on another space probe could lay this to rest and help to conclusively determine if this is a real effect.  It would be very exciting if this can be confirmed but it is certainly not settled at this point.

The Unbearable Lightness of Quantum Mechanics

Updated below.

Gravity and Quantum Mechanics don't play nice together. Since Einstein's time, we have had two towering theories that have defied all attempts by some very smart people to reconcile them. The Standard Model, built on the foundations of quantum mechanics, has been spectacularly successful. It allows the treatment of the masses acquired from binding energies and, if the Higgs boson confirmation pans out, accounts for the elemental rest masses – but it does not capture gravity. (The current mass generation models that involve gravity are all rather speculative at this point.)

Einstein's General Relativity has been equally successful in explaining gravity as an innate geometric attribute of space and time itself. It has survived every conceivable test and made spectacular predictions (such as gravitational lenses).

On the surface this dysfunctional non-relationship between the two major generally accepted theoretical frameworks seems very puzzling. But it turns out that the nature of this conundrum can be described without recourse to higher math (or star-trek like animations with a mythical sound-track).

Much has been written about the origin of this schism: the historic struggle over the interpretation of Quantum Mechanics, with Einstein and Bohr as the figureheads of the divided physics community at the time. Mendel Sachs (who, sadly, passed away recently) drew the following distinction between the philosophies of the two factions:

[The Copenhagen Interpretation views] Schroedinger’s matter waves as [complex] waves of probability. The probability was then tied to quantum mechanics as a theory of measurement – made by macro observers on micro-matter. This view was then in line with the positivistic philosophy, whereby the elements of matter are defined subjectively in terms of the measurements of their properties, expressed with a probability calculus. […] Einstein’s idea [was] that the formal expression of the probability calculus that is called quantum mechanics is an incomplete theory of matter that originates in a complete [unified] continuous field theory of matter wherein all of the variables of matter are ‘predetermined’.

(From Quantum Mechanics and Gravity)

These days, the Copenhagen Interpretation no longer reigns supreme but has some serious competition: one crazy new kid on the block, for example, is the Many Worlds Interpretation.  (For an insightful take on MWI I highly recommend this recent blog post from Scott Aaronson.)

But the issue goes deeper than that. No matter what interpretation you favor, one fact remains immutable: quantum mechanics is a linear theory. Probability amplitudes simply add, and the equations that evolve them behave in a strictly linear fashion. This, despite its interpretative oddities, makes Quantum Mechanics fairly easy to work with.  General relativity, on the other hand, is an intrinsically non-linear theory.  It describes a closed system in which the field, generated by gravitating masses, propagates with finite speed and, in a general, non-equilibrium picture, dynamically affects these masses in turn, rearranging the overall field configuration.  (Little wonder Einstein's field equations only yield analytical solutions for drastically simplified scenarios.)

There is no obvious way to fit Quantum Mechanics, this linear peg, into this decidedly non-linear hole.
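
A tiny numerical illustration of what "linear" means on the quantum side: for any Hamiltonian, evolving a superposition gives exactly the superposition of the individually evolved states. (The two-level Hamiltonian below is random; nothing here depends on its details.)

```python
# Tiny numerical illustration of the linearity of quantum mechanics: for any
# Hamiltonian H, the evolution U = exp(-iHt) satisfies
# U(a*psi1 + b*psi2) = a*U(psi1) + b*U(psi2). A random 2-level system suffices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2                   # random Hermitian Hamiltonian
U = expm(-1j * H * 0.7)                    # evolution for some time t = 0.7

psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([0.0, 1.0], dtype=complex)
a, b = 0.6, 0.8j

lhs = U @ (a * psi1 + b * psi2)
rhs = a * (U @ psi1) + b * (U @ psi2)
print(np.allclose(lhs, rhs))               # True: superpositions evolve linearly
```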

Einstein considered Quantum Mechanics a theory that would prove to be an approximation of a fully unified field theory.  He spent his last years chasing this goal but never achieved it. Mendel Sachs claims to have succeeded where he failed, and indeed presents some impressive accomplishments, including a way to derive the structure of quantum mechanics from extended General Relativity field equations.  What always struck me as odd is how little resonance this generated, although this is clearly an experience shared by other theoretical physicists who work off the beaten path. For instance, Kingsley Jones approaches the same conundrum from a completely different angle in his original paper on Newtonian Quantum Gravity. Yet the citation statistics show that there was little up-take.

One could probably dedicate an entire blog to speculating on why this kind of research does not break into the mainstream, but I would rather end with the optimistic notion that, in the end, new experimental data will hopefully rectify the situation. Although the experiment on a neutral-particle Bose-Einstein condensate proposed in Kingsley Jones' paper has little chance of being performed unless it garners some more attention, other experiments probing the domain where gravity and quantum mechanics intersect get a loftier treatment: for instance this paper was featured in Nature although its premise is probably incorrect. (Sabine Hossenfelder took Nature and the authors to task on her blog – things get a bit acrimonious in the comment section.)

Nevertheless, it is encouraging to see such a high profile interest in these kinds of experiments, chances are we will get it right eventually.

Update

Kingsley Jones (whose 1995 paper I referenced above) has a new blog entry that reflects on the historic trajectory and current state of quantum mechanics.  I think it's fair to say that he does not subscribe to the Many Worlds Interpretation.


Lies, Damned Lies, and Quantum Statistics?

Statistics has a bad reputation, and has had for a long time, as demonstrated by Mark Twain’s famous quote[1] that I paraphrased to use as the title of this blog post. Of course physics is supposed to be above the fudging of statistical numbers to make a point.  Well, on second thought, theoretical physics should be above fudging (in the experimental branch, things are not so clear cut).

Statistical physics is strictly about employing all mathematically sound methods to deal with uncertainty. This program turned out to be incredibly powerful and gave a solid foundation to the thermodynamic laws.  The latter had been derived empirically, but only really started to make sense once statistical mechanics came into its own and temperature was understood in terms of the random motion of molecules. Incidentally, this was also the field that first attracted a young Einstein's attention. Among all his other accomplishments, his paper on the matter that finally settled the debate over whether atoms were real or just a useful model is often overlooked. (It is mind-boggling that within a short span of just 40 years ('05–'45) science went from finally accepting the reality of atoms to splitting them and unleashing nuclear destruction.)

Having cut his teeth on statistical mechanics early on, it shouldn't come as a surprise that Einstein's last great contribution to physics went back to this field. And it all started with fudging the numbers, in a far-away place, one that Einstein had probably never even heard of.

In the city that is now the capital of Bangladesh, a brilliant but entirely unknown scholar named Satyendra Nath Bose made a mistake while trying to demonstrate to his students that the contemporary theory of radiation was inadequate and contradicted experimental evidence.  It was a trivial mistake, simply a matter of not counting correctly. To add insult to injury, it led to a result that was in accordance with the correct electromagnetic radiation spectrum. A lesser person might have just erased the blackboard and dismissed the class, but Bose realized that there was some deeper truth lurking beneath the seemingly trivial oversight.

What Bose stumbled upon was a new way of counting quantum particles.  Conventionally, if you have two particles that can each take on only two states, you can model them as you would the probabilities for a coin toss. Let's say you toss two coins at the same time; the following table shows the possible outcomes:

                  Coin 1
                  Head    Tail
 Coin 2   Head     HH      HT
          Tail     TH      TT

It is immediately obvious that if you throw two coins, the combination head-head will have a likelihood of one in four.  But if you have the kind of "quantum coins" that Bose stumbled upon, then nature behaves rather differently.  Nature does not distinguish between the states tails-head and head-tails, i.e. the outcomes HT and TH in the table.  Rather, it treats these two states as one and the same.

In the quantum domain nature plays the ultimate shell game. If these shells were bosons the universe would not allow you to notice if they switch places.

This means that rather than four possible outcomes in the quantum world, we only have three, and the probability is spread evenly among them, i.e. our heads-heads quantum coin toss now has a one-in-three chance.
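
The two counting rules can be spelled out in a few lines of code – a purely illustrative enumeration of the coin-toss table above:

```python
# The two counting rules, spelled out. Classically the four labelled outcomes
# HH, HT, TH, TT are equally likely (P(HH) = 1/4). For indistinguishable
# "quantum coins" HT and TH are the same outcome, and Bose-Einstein counting
# assigns equal weight to the three remaining outcomes (P(HH) = 1/3).
from itertools import product
from fractions import Fraction

classical = list(product("HT", repeat=2))            # ('H','H'), ('H','T'), ...
p_hh_classical = Fraction(classical.count(('H', 'H')), len(classical))

# Identify outcomes that differ only by swapping the (indistinguishable) coins.
bose = {tuple(sorted(outcome)) for outcome in classical}
p_hh_bose = Fraction(1, len(bose))       # each unordered outcome equally likely

print(p_hh_classical)   # 1/4
print(p_hh_bose)        # 1/3
```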

Bose found out the hard way that if you try to publish something that completely goes against the  conventional wisdom, and you have to go through a peer review process, your chances of having your paper accepted are almost nil (some things never change).

That's where Einstein came into the picture.  Bose penned a very respectful letter to Einstein, who by then was already the most famous scientist alive and well on his way to becoming a pop icon (think Lady Gaga of science).  Yet, against all odds, Einstein read his paper and immediately recognized its merits.  The rest is history.

In his subsequent paper on Quantum Theory of Ideal Monoatomic Gases, Einstein clearly delineated these new statistics, and highlighted the contrast to the classical one that produces unphysical results in the form of an ultraviolet catastrophe. He then applied it to the ideal gas model, uncovering a new quantum state of matter that would only become apparent at extremely low temperatures.
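
For the radiation case, that contrast can be made concrete in a few lines: the classical (Rayleigh–Jeans) spectral energy density grows without bound at high frequencies, while the Planck/Bose expression stays finite. A quick numerical sketch (temperature and frequencies are arbitrary illustration values):

```python
# Sketch of the "ultraviolet catastrophe" contrast: the classical
# (Rayleigh-Jeans) spectral energy density grows without bound at high
# frequency, while the Planck/Bose form stays finite and falls off.
import numpy as np

h = 6.626e-34       # J s
k = 1.381e-23       # J / K
c = 2.998e8         # m / s
T = 5000.0          # K, arbitrary illustration temperature

nu = np.logspace(13, 16, 4)                        # a few frequencies in Hz

u_rayleigh_jeans = 8 * np.pi * nu**2 * k * T / c**3
u_planck = (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))

for f, urj, up in zip(nu, u_rayleigh_jeans, u_planck):
    print(f"nu = {f:.1e} Hz   RJ: {urj:.2e}   Planck: {up:.2e}")
# At the highest frequencies the classical value keeps climbing while the
# Planck value is already exponentially suppressed.
```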

His audacious work set the stage for the discovery of yet another fundamental quantum statistic, the one that governs fermions, and set experimental physics on the track to ever lower temperature records in order to find the elusive Bose-Einstein condensate.

This in turn gave additional motivation to the development of better particle traps and laser cooling – key technologies that are still at the heart of the NIST quantum simulator.

All because of one lousy counting mistake …

[1] Actually the source of the quote is somewhat murky, yet it was clearly inducted into popular culture thanks to Twain (h/t to my fact-checking commenters).

UPDATE: Because this site got slashdotted, new comments are currently not showing up in my heavily customized WordPress installation – I can see them in the admin view and approve them, but they are still missing here.

My apologies to everybody who took the time to write a comment! Like most bloggers I love comments so I’ll try to get this fixed ASAP.


For the time being, if you want to leave a comment please just use the associated slashdot story.

The comment functionality has been restored.

Feynman would have approved

Astute followers of this blog know that quantum computing was the brainchild of Richard Feynman, whose contribution to the quantum field theory of electrodynamics earned him a Nobel Prize. Feynman was the first to remark on the fact that classical computers cannot efficiently simulate quantum systems. Since then the field has come a long way, and it has been shown theoretically and experimentally that quantum computers can efficiently simulate quantum mechanical many-body systems.  Recent experimental setups like NIST's 300 qbit quantum simulator are destined to surpass anything that could be modeled on a classical computer.

Yet, for the longest time it was not clear if quantum computers could also efficiently simulate quantum field theories.

Fields are a bit trickier.  Just recall the classic experiment to illustrate a magnetic field, as shown in the picture.

Every point in space is imbued with a field value, so that even the tiniest volume element contains an infinite number of these field values.

The typical way to get around this problem is to perform the calculations on a grid.  And the algorithm introduced by Jordan et al. in this paper, just three pages long, does exactly that.
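
A rough resource-counting sketch shows why putting the field on a grid plays to a quantum computer's strength: with L lattice sites and a handful of qubits per site to encode the field value, the corresponding classical state vector already has an astronomically large number of amplitudes. The encoding and the numbers below are illustrative only and are not taken from the Jordan et al. paper:

```python
# Rough resource-counting sketch: discretize a field onto L lattice sites and
# keep n_b qubits per site for the field value. The quantum register is L*n_b
# qubits; the classical state vector describing it has 2**(L*n_b) complex
# amplitudes. Encoding and numbers are illustrative only.

def register_size(lattice_sites, qubits_per_site):
    qubits = lattice_sites * qubits_per_site
    classical_amplitudes = 2 ** qubits
    return qubits, classical_amplitudes

for L in (10, 100, 1000):
    q, amps = register_size(L, qubits_per_site=8)
    digits = len(str(amps))
    print(f"{L:>5} sites -> {q:>5} qubits, classical state vector with ~10^{digits - 1} amplitudes")
```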

Unfortunately, Feynman is no longer around to appreciate the work that has now made it official: quantum field theories can be efficiently simulated, i.e. with polynomial time scaling.

It is quite clever how they spread their simulation over the qbits, represent scattering particles and manage to derive an error estimate.  The fact that they actually do this within the Schrödinger picture makes this paper especially accessible.

Even if you don't know the first thing about quantum mechanics, this paper will still give you a good sense that the construction of quantum algorithms does not look anything like conventional coding – even when, as is the case here, the gate based quantum computing model is used.

This goes to the heart of the challenge of bringing quantum computing to the masses.  Steve Jobs' quip about the iPhone is just as true for any quantum computer:  "What would it be without the software? It would make a nice paperweight!" (h/t R. Tucci) The only difference is that a quantum computer will make a really big paperweight, but otherwise it'll be just as dead.

This somewhat resembles the days of yore when computer programs had to be hand-compiled for a specific machine architecture.  Hence, the race is on to find a suitable abstraction layer on top of the underlying quantum weirdness, in order to make this power accessible to non-physicists.

Just in case you wondered:  It is still not clear if String theories can be efficiently simulated on a quantum computer.  But it has been suggested that those that cannot should be considered unphysical.

Introducing the Lost Papers Page

Lost Papers Dropping from the Page of Time

Science works in peculiar ways. Everything that matters needs to be published, yet this is no guarantee that it won't be forgotten or lost. Recently a handwritten manuscript of Albert Einstein's was recovered. The paper in question is widely regarded as his last great contribution to theoretical physics.

It is the last part of a three-piece set, the first paper of which was not authored but merely translated and submitted by Einstein after it had previously been rejected for publication.

It was the work of the young Satyendra Nath Bose, an often overlooked giant of modern quantum mechanics.  To my knowledge, Bose's original English manuscript that he sent to Einstein has been lost for good.  The only copies in English have been translated back from the German paper that Einstein submitted on Bose's behalf to the journal "Zeitschrift für Physik" in 1924.

Yet no English translation of Einstein's historic follow-up paper is readily available.  A cursory Google search comes up empty.

When the news of the recovered manuscript spread in various LinkedIn physics groups, many posters expressed frustration that the paper at the Leiden University Einstein Archive was merely a scan of the German original and therefore inaccessible to most.

So I decided to add the “Lost Papers” page to this blog to provide these papers in a modern English translation. Fortunately I have some help with this, as I am currently very busy.

First off, I am starting with Bose's short first paper; the translation of Einstein's last paper is nearing completion and will then be linked there as well.


SUSY Matrix Blues

The gentleman on the right places you into the Matrix. His buddy could help, if only he weren't a fictional character.

Dr. Gates, a distinguished theoretical physicist (with a truly inspiring biography), recently made an astounding statement during an interview on NPR (the clip from the On Being show can be found here; a transcript is also online).  It gave the listener the distinct impression that he has uncovered empirical evidence in his work that we live in a simulated reality.  In his own words:

(…) I remember watching the movies, The Matrix. And so, the thought occurred to me, suppose there were physicists in this movie. How would they figure out that they lived in the matrix? One way they might do that is to look for evidence of codes in the laws of their physics. But, you see, that’s what had happened to me already.

I, and my colleagues indeed, we had found the presence of codes in the equations of physics. Not that we’re trying to compute something. It’s a little bit like doing biology where, if you studied an animal, you’d eventually run into DNA, and that’s essentially what happened to us. These codes that we found, they’re like the DNA that sits inside of the equations that we study.

Of course Dr. Gates made additional qualifying statements that cautioned against reading too much into this, but media, even the more even-handed NPR, feeds off sensationalism. And so they of course had to end the segment with a short excerpt from the Matrix to drive this home.  It would be interesting to know how many physicists were subsequently badgered by family and friends to explain if we really live in the Matrix. So here’s how I tackled this reality distortion for my non-physicist mother-in-law:

  • Dr. Gates has been a pioneer in Supersymmetry research (affectionately abbreviated SUSY), but just as with String theory there is an absolute dearth of experimental verification (absolute dearth meaning not a single one).  While SUSY has proved to be of almost intoxicating mathematical beauty, the recent results from the LHC have been especially brutal. Obviously, if nature doesn't play by SUSY's rules, it will be of no physical consequence if Dr. Gates finds block codes in these equations (although it certainly is still mathematically intriguing).
  • The codes uncovered in the SUSY equations are classic error correction bit codes. The bit, being the smallest informational unit, hints at a Matrix-style reality simulated on a massive Turing-complete machine.  There are certainly other smart people who actually believe in such (or a very similar) scenario – e.g. Stephen Wolfram advocated something along these lines in his controversial book.  The one massive problem with such a world view is that we know rather conclusively that classical computers are no good at simulating quantum mechanical systems, and that quantum computers can outperform classical Turing machines (the same holds in the world of cellular automata, where it can be shown that quantum cellular automata can emulate their Turing equivalents and vice versa).

If Dr. Gates had discovered qbits and a quantum error correction code hidden in SUSY, that would have been significantly more convincing.  I could entertain the idea of a Matrix world simulated on a quantum computer.

At any rate, his equations don't provide a better answer to the question of why anyone would go to the trouble of running a simulation like the Matrix.  In the movie, the explanation is that human bodies serve as an energy source, just like a battery.  I always thought this explanation fell rather flat.  If a mammalian body were all it took, why not use cows, for example?  That should make for a significantly easier world simulation – an endless field of green should suffice. It probably wouldn't even require a quantum computer to simulate a happy cow world.

About Time – Blogroll Memory Hole Rescue

One of the most fascinating aspects of quantum information research is that it sheds light on the connections between informational and thermodynamic entropy, as well as how time factors into quantum dynamics.

For instance, the Schrödinger and Heisenberg pictures are equivalent: in the former the wave function changes with time, in the latter the operators do. Yet we don't actually have any experimental insight into when the changes under adiabatic development are actually realized, since by its very nature we only have discrete observations to work with. This opens up room for various speculations, such as the idea that the "passage of time" is actually an unphysical notion for an isolated quantum system between measurements (as expressed, for instance, by Ulrich Mohrhoff in this paper).
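
The picture equivalence itself is easy to check numerically: expectation values come out the same whether the state evolves and the operator stays fixed, or the other way around. A small sketch with a random two-level system:

```python
# Small numerical check of the picture equivalence: expectation values agree
# whether the state evolves (Schroedinger picture) or the operator does
# (Heisenberg picture). Random 2-level Hamiltonian and observable.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

H = random_hermitian(2)                       # Hamiltonian
A = random_hermitian(2)                       # some observable
psi = np.array([0.8, 0.6], dtype=complex)     # normalized state
U = expm(-1j * H * 1.3)                       # evolution for time t = 1.3

schroedinger = np.vdot(U @ psi, A @ (U @ psi)).real        # <psi(t)| A |psi(t)>
heisenberg = np.vdot(psi, (U.conj().T @ A @ U) @ psi).real # <psi| A(t) |psi>
print(np.isclose(schroedinger, heisenberg))   # True
```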

Lots of material there for future posts. But before going there, it's a good idea to revisit the oldest paradox on time with this fresh take on it by Perry Hooker.


The Greatest Tragic Hero of Physics

Although widely admired and loved, in the end he died like so many who came to extremes of fame or fortune – estranged from family and separated from old friends. The only person to witness his death in exile was a nurse, incapable of understanding his last words, which were uttered in a language foreign to her.

If his private life were the template for a telenovela, viewers would regard it as too over the top: as a teenager he is left with relatives to complete school when his parents have to resettle in a foreign country. He rebels, his school teachers give up on him, he drops out. He travels across the Alps to reunite with his family. Were it not for the unwavering support of his mother, he would probably never move on to obtain a higher education. She manages to find him a place with relatives in a country of his native language so that he can finally gain his diploma. The same year he renounces his old citizenship and also quits the religion of his parents.

He subsequently enrolls in a prestigious university but ignores the career choice that his parents had in mind for him. He falls in love with a beautiful fellow student from a faraway land. His parents are against the relationship, and so are hers. Against the will of their families they want to get married, but our hero struggles to find a job after graduation. He hopes to be hired as an assistant at his university, just like the rest of his peers, but he has once again fallen out with some of his teachers. Many of the other members of the faculty only notice him because he skips so many lectures – especially the purely mathematical ones. Still, he passes all the tests, relying on his friends' lecture notes.

His wife-to-be becomes pregnant out of wedlock, has to return to her family, and gives birth to a little girl with Down syndrome. He never even gets to see the girl. That summer – two years after graduation – with the help of a friend, he finally lands his first steady job. Later that year his father dies, and shortly after that our man marries his beloved Mileva.

Meet the Einsteins:

Images of old Albert Einstein are so iconic that some people tend to forget that he wasn't always old.

Having settled down in Bern, he now manages to find the discipline and inner calm for his subsequent groundbreaking works. I cannot even begin to fathom how he musters the strength to do so while coping with a full-time day job and a young family. Discussing his ideas with friends and colleagues certainly helps, and surely he must discuss his research with Mileva as well (how much she influenced his work has been somewhat of a controversy). The following three years, even while working as a patent clerk, are the most fruitful of Albert Einstein's life. His research culminates in four publications in the year 1905 that irreversibly change the very foundation of physics. His papers …

  1. … describe  for the first time the theory of Special Relativity.
  2. … show the equivalence of mass and energy i.e. the most famous E=mc².
  3. … propose the idea of energy quanta (i.e. photons) to explain the photoelectric effect.
  4. … demonstrate that Brownian motion is a thermal phenomenon.

Without the realization that mass and energy are equivalent (2), there'd be no nuclear energy or weapons. Without Einstein's energy quanta hypothesis (3), there'd be no quantum mechanics, and his work explaining Brownian motion (4) settled, once and for all, the question of whether atoms were real.  At the same time, it provided the missing statistical underpinning for thermodynamics.

These were all amazing accomplishments in their own right, but nothing so resonated with the public as the consequences of Einstein’s theory of Special Relativity (1). This one was regarded as a direct affront to common sense and achieved such notoriety that it was later abused by Nazi propaganda to agitate against “Jewish physics”.

Already at this time, physics was such a specialized trade that the man on the street would usually have no motivation to form an opinion on some physics paper. So what caused all this negative attention? Einstein's trouble was that by taking Maxwell's theory of Electrodynamics seriously he uncovered properties of something that everybody thought they intuitively understood. Any early 20th century equivalent of Joe the Plumber would have felt comfortable explaining how to measure the size of a space and how to measure time – they were understood as absolute, immutable dimensions in which life played out. Only they cannot be absolute if Maxwell's equations are right and the speed of light is a constant in all frames of reference. This fact was hiding in plain sight, and you don't need any mathematics to understand it – you only need the willingness to entertain the possibility that the unthinkable might be true.

In 1923 an elaborate movie was produced that tried to explain Special Relativity to a broad audience. It turned out to be a blockbuster, but still didn't convince the skeptical public – watching it made me wonder if that is where so many misconceptions about Einstein's theories started. It does not contain any falsehoods, but it spends way too much time elaborating on relativity in general, while the consequences of the invariance of light speed are mixed in with results from General Relativity, and neither is really explained. Apparently the creators of this old movie felt that they had to start with the most basic principles and couldn't really expect their audience to follow some of Einstein's arguments. Granted, this was before anybody even knew what our planet looked like from space, and the imagined astronaut of this flick is shot into space with a cannon as the preferred mode of transportation – as, for instance, imagined by Jules Verne. Nowadays this task is much easier in comparison. You can expect a blog reader to be desensitized by decades of SciFi. Also, having a plethora of educational videos at your fingertips makes for a straightforward illustration of some of the immediate consequences of accepting that light speed is constant in all frames of reference.

For a modern audience, a thought experiment with two spaceships traveling in parallel and a laser signal being exchanged between them requires little explanation. All that is necessary is to come to grips with what it means that this laser signal travels at the same speed in all frames of reference. For instance, this short video does an excellent job explaining why an observer passing by these spaceships has to conclude that the clocks for the space pilots go slower.
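
The arithmetic behind that argument is short enough to sketch: the laser covers a longer diagonal path in the observer's frame but still travels at c, which forces the moving clocks to tick slower by the familiar factor 1/√(1 − v²/c²). (The separation and speed below are arbitrary illustration values.)

```python
# The arithmetic behind the light-clock argument: in the ships' rest frame the
# laser crosses the separation d in t0 = d/c. For an observer passing at speed
# v the ships drift sideways by v*t during the crossing, so the signal covers
# the diagonal sqrt(d**2 + (v*t)**2) at the same speed c. Solving
# (c*t)**2 = d**2 + (v*t)**2 gives t = t0 / sqrt(1 - v**2/c**2).
import math

c = 299_792_458.0          # m/s
d = 1_000.0                # separation between the ships, metres (illustration)
v = 0.6 * c                # relative speed of the passing observer

t0 = d / c                                     # tick length in the ships' frame
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
t_moving = t0 * gamma                          # tick length seen by the observer

print(f"gamma = {gamma:.3f}")                  # 1.25 at v = 0.6c
print(f"one tick: {t0*1e6:.4f} us at rest, {t_moving*1e6:.4f} us as seen in passing")
```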

Nevertheless, even nowadays you still get publications like this one, where two Stanford professors of psychology perpetuate this popular falsehood in the very first sentence of their long monograph:

[Einstein] established the subjective nature of the physical phenomenon of time.

Of course he did no such thing.  He described how the flow of time and the temporal ordering of events transform between different inertial reference frames as an objective physical reality.

For over a hundred years special relativity has withstood all experimental tests (including the recent faster-than-light neutrino dust-up).  Yet public education has still not caught up with it.

This is the second installment of my irregular biographical physics series intended to answer the question of how physics became so strange. Given Einstein’s importance I will revisit his lasting legacy in a future post.