Category Archives: Popular Science

Science News that isn’t really News

Usually, I don’t blog about things that don’t particularly interest me.  But even if you are a potted plant (preferably with a physics degree), you probably have people talking to you about this ‘amazing’ new paper by Stephen Hawking.

So, I am making the rare exception of re-blogging something, because Sabine Hossenfelder already wrote everything about this that I could possibly want to say, and she did it much better and more convincingly than I would have.

So, if you want to know what to make of Hawking’s latest paper, head over to the Backreaction blog.

Rainbow_Black_hole_by_Chriall
Stephen Hawking now thinks that there are only grey holes, which is a step up in the color scheme from black. But in honor of the Sochi Olympics, I really think the world needs rainbow colored black holes.

Here be Fusion

During my autumn trip to the Canadian West Coast I was given the opportunity to visit yet another high-tech start-up with a vision no less radical and bold than D-Wave’s.

I have written about General Fusion before, and it was a treat to tour their expanding facility, and to ask any question I could think of. The company made some news when they attracted  investment from Jeff Bezos, but despite this new influx of capital, in the world of fusion research, they are operating on a shoe-string budget. This makes it all the more impressive how much they have already accomplished.

At first glance, the idea seems ludicrous: how could a small start-up company possibly hope to accomplish something that the multi-national ITER consortium attempts with billions of dollars? Yet the approach they are following is scientifically sound, albeit one that has fallen out of favor with the mainstream of plasma physicists. It is an approach that is incidentally well suited to smaller-scale experiments, and the shell of the experiment that started it all is now on display in the reception area of General Fusion.

small_start
Doug Richardson, General Fusion co-founder, is a man on a mission who brings an intense focus to the job. Yet, when prompted by the receptionist, he managed a smile for this photo, which shows him next to the shell from the original experiment that started it all. The other founder and key driving force, Michel Laberge, was unfortunately out of town during the week of my visit.

Popular Science was the first major media outlet to take note of the company. It is very instructive to read the article they wrote back then to get a sense of how much bigger this undertaking has become. Of course, getting neutrons from fusion is one thing; getting excess energy is an entirely different matter. After all, the company that this start-up modeled its name after was enthusiastically demonstrating fusion to the public many decades ago.

The lackluster progress of the conventional approach to fusion does not deter the people behind this project; if anything, it adds to their sense of urgency. What struck me when first coming on site was the no-nonsense industrial feel to the entire operation. The company is renting some nondescript buildings, the interior more manufacturing floor than gleaming laboratory, every square inch purposefully utilized to run several R&D streams in parallel. Even before talking to co-founder Doug Richardson, the premises themselves sent a clear message: this is an engineering approach to fusion, and they are in a hurry. This is why, rather than just focusing on one aspect of the machine, they decided to work on several in parallel.

When asked where I wanted to start my tour, I opted for the optically most impressive piece, the scaled-down reactor core with its huge attached pistons. The reason I wanted to scrutinize this first is that, in my experience, this mechanical behemoth is what casual outside observers usually object to, based on the naive assumption that so many moving parts under such high mechanical stresses make for problematic technology. This argument was met with Doug’s derision: in his mind this is the easy part, just a matter of selecting the right materials and precision mechanical engineering. My point that a common objection is that moving parts mean wear and tear he swatted aside just as easily. In my experience, a layperson introduced to the concept is usually uncomfortable with the idea that pistons could be used to produce this enormous pressure; after all, everybody is well acquainted with the limited lifetime of a car engine that has to endure far less. Doug easily turned this analogy on its head, pointing out that a stationary mounted engine can run uninterrupted for a long time, and that reliability typically increases with scale.

Currently they have a 3:1 scaled-down reactor chamber built to test the vortex compression system (shown in the picture below).

vortex test reactor
The test version has a reactor sphere diameter of 1m. The envisioned final product will be three times the size.  Still a fairly compact envelope, but too large to be hosted in this building.

Another of my concerns with this piece of machinery was the level of accuracy required to align the piston cylinders. The final product will require 200 of them, and if the system is sensitive to misalignment it is easy to imagine how this could impact its reliability.

It came as a bit of a surprise that the precision required is actually less than I expected: 50 microns (half a tenth of a millimeter) should suffice, and in terms of timing, the synchronization can tolerate deviations of up to 10 microseconds, ten times more than initially expected. This is due to a nice property that the GF research uncovered during the experiments: the spherical shock wave they are creating within the reactor chamber is self-stabilizing, i.e. the phase shift when one of the actuators is slightly out of line causes a self-correcting interference that helps keep the ingoing compression symmetric as it travels through the vortex of molten lead-lithium that is at the heart of the machine.

The reason for this particular metal mix within the reactor is the shielding properties of lead, and the fact that lithium-6 has a large neutron absorption cross section that allows for breeding tritium fuel. This is a very elegant design that ensures that if the machine gets to the point of igniting fusion there will be no neutron activation problems like those which plague conventional approaches (i.e. with a tokamak design as used by ITER, neutrons, which cannot be magnetically contained, bombard the reactor wall, eventually wearing it down and turning it radioactive).
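For reference, the tritium-breeding reaction that the lithium-6 enables looks like this (standard textbook values, not numbers specific to General Fusion’s design):

```latex
n + {}^{6}\mathrm{Li} \;\longrightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}
```

So the very liquid liner that shields the wall also turns escaping neutrons into fresh fuel.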

Doug stressed that this reflects their engineering mindset. They need to get these problems under control from the get-go, whereas huge projects like ITER can afford to kick the can down the road, i.e. first measure the scope of the problem and then hope to address it with a later research effort (which is then supposed to provide a solution to a problem that General Fusion’s approach manages to eliminate altogether).

Another aspect of the design that I originally did not understand is the fact that plasma will be injected from both sides of the sphere simultaneously, so that the overall momentum of the plasma will cancel out at the center.  I.e. the incoming shock wave doesn’t have to hit a moving target.

The following YouTube video animation uploaded by the company illustrates how all these pieces are envisioned to work together.

 

Managing the plasma’s properties and dynamics, i.e. avoiding unwanted turbulence that may reduce temperature and/or density, is the biggest technological challenge.

To create plasma of the required quality, and in order to get it into place, the company constructed some more impressive machinery.  It is a safe bet that they have the largest plasma injectors ever built.

Plasma Injector
Admittedly, comparing this behemoth to the small plasma chamber in the upper left corner is comparing apples to oranges, but then this machine is in a class of its own.

When studying the plasma parameters, it turned out that the theoretical calculations had actually led to an over-engineering of this injector, and that smaller ones may be adequate for creating plasma of the desired density. But of course creating and injecting the plasma is only the starting point. The most critical aspect is how this plasma behaves under compression.

To fully determine this, GF faces the same research challenges as the related magnetized target fusion research program in the US, i.e. General Fusion needs to perform tests similar to those conducted at the SHIVA STAR Air Force facility in Albuquerque. In fact, due to budget cut-backs, SHIVA has spare capacity that could be used by GF, but exaggerated US security regulations unfortunately prevent such cooperation; it is highly doubtful that civilian Canadians would be allowed access to the military-class facility. So the company has to improvise and come up with its own approach to this kind of implosion test. The photo below shows an array of sensors that is used to scrutinize the plasma during one of these tests.

sensors
Understanding the characteristics of the plasma when imploded is critical; these sensors on top of one of the experimental set-ups are there to collect the crucial data. Many such experiments will be required before enough data has been amassed.

Proving that they can achieve the next target compression benchmark is critical in order to continue to receive funding from the federal Canadian SDTC fund. The latter is the only source of governmental fusion funding; Canada has no dedicated program for fusion research and even turned its back on the ITER consortium. This is a far cry from Canada’s technological vision in the sixties that resulted in nuclear leadership with the unique CANDU design. Yet there is no doubt that General Fusion has been doing the most with the limited funds it has received.

Here’s hoping that the Canadian government may eventually wake up to the full potential of a fusion reactor design ‘made in Canada’ and start looking beyond the oil patch for its energy security (although this will probably require that the torch is passed to a more visionary leadership in Ottawa).

ride
An obligatory photo for any visitor to General Fusion. Unfortunately, I forgot my cowboy hat.

~~~

Update: What a start to 2014 for this blog. This post was featured on Slashdot and received over 11K views within three days. Some of the comments on Slashdot asked how to dig deeper into the science of General Fusion. For those who want to follow through on this, the company’s papers, and those that describe important results that GF builds on, can be found on their site. In addition, specifically for the unique vortex technology, I find James Gregson’s Master’s thesis very informative.

Update 2: General Fusion can be followed on Twitter @MTF_Fusion (h/t Nathan Gilliland)

Update 3: Some Canadian mainstream media like the Edmonton Journal also noticed the conspicuous absence of dedicated fusion research. Ironically, the otherwise well written article argues for an Alberta-based research program while not mentioning General Fusion once. This despite the fact that the company is right next door (by Canadian standards) and in fact has one major Alberta-based investor, the oil company Cenovus Energy.

Blog Memory Hole Rescue – The Fun is Real

It seems that work and life are conspiring to leave me no time to finish the write-up on my General Fusion visit. I started it weeks ago, but I am still not ready to hit the publish button on this piece.

memory_hole

In the meantime I highly recommend the following blog that I came across. It covers topics very similar to the ones here, and also shares a similar outlook. For instance, this article beautifully sums up why I never warmed up to Everett’s multiverse interpretation (although I have to admit that reading Julian Barbour’s The End of Time softened my stance a bit – more on this later).

The ‘Fun Is Real’ blog is a cornucopia of good physics writing and should provide many hours of thought-provoking reading material to bridge over the dearth of my current posting schedule.

On a side note, given that this goes to the core of the topic I write about on this blog, the following news should not go unmentioned:  Australian researchers reportedly have created a cluster state of 10,000 entangled photonic qubits (h/t Raptis T.).

This is orders of magnitude more than has been previously reported. Now if they were to manage to apply some quantum gates to them, we’d be getting somewhere.

Blog Round-Up

Lots of travel last week delayed the second installment on my D-Wave visit write-up, but I came across some worthy re-blog material to bridge the gap.

I am usually very hard on poorly written popular science articles, which is all the more reason to point to some outstanding material in this area. I found that one writer, Brian Dodson at the Gizmag site, usually delivers excellent content. Due to his science background, he brings an unusual depth of understanding to his writing. His latest pieces are on General Relativity-compatible alternatives to dark energy and a theoretical quantum black hole study that puts the loop quantum gravity approach to some good use. The latter is a good example of why I am much more inclined toward Loop Quantum Gravity than toward the ephemeral String theory, as the former at least delivers some predictions.

Another constant topic of this blog is the unsatisfying situation with regard to the foundational interpretations of Quantum Mechanics. Lack of progress in this area can in no small measure be attributed to the ‘Shut up and calculate’ doctrine, a famous quip commonly attributed to Feynman (though it apparently originates with David Mermin) that has since been enshrined as an almost iron rule.

To get a taste of how this prohibitive attitude permeates the physics community, this arxiv paper/rant is a must-read. From the abstract:

If you have a restless intellect, it is very likely that you have played at some point with the idea of investigating the meaning and conceptual foundations of quantum mechanics. It is also probable (albeit not certain) that your intentions have been stopped in their tracks by an encounter with some version of the “Shut up and calculate!” command. You may have heard that everything is already understood. That understanding is not your job. Or, if it is, it is either impossible or very difficult. Maybe somebody explained to you that physics is concerned with “hows” and not with “whys”; that whys are the business of “philosophy” -you know, that dirty word. That what you call “understanding” is just being Newtonian; which of course you cannot ask quantum mechanics to be. Perhaps they also complemented this useful advice with some norms: The important thing a theory must do is predict; a theory must only talk about measurable quantities. It may also be the case that you almost asked “OK, and why is that?”, but you finally bit your tongue. If you persisted in your intentions and the debate got a little heated up, it is even possible that it was suggested that you suffered of some type of moral or epistemic weakness that tends to disappear as you grow up. Maybe you received some job advice such as “Don’t work in that if you ever want to own a house”.

At least if this blog post is any indication, the times seem to be changing and becoming more permissive.

Science – Don’t Ask What it Can Do for You.

The most important non-scientific book about science that you will ever read is Michael Nielsen’s Reinventing Discovery: The New Era of Networked Science.

It lays out how the current scientific publishing process is a holdover from the 19th century and passionately makes the case for Open Science. The latter is mostly understood to be synonymous with Open Access, i.e. no more hiding of scientific results in prohibitively expensive journals, especially when public tax-funded grants or institutions paid for the research.

But Michael has a more expansive view.  He makes the case that science can be measurably enriched by coming out of the Ivory tower and engaging the public via well designed crowdsourcing efforts such as the Galaxy Zoo.

On this blog, I have written many times about the shortcomings of science media large and small, as well as the unsatisfying status quo in theoretical physics.  And readers may be justified in wondering why this should matter to them. The answer to this is straightforward:  Science is too important for it to be left to the scientists.  Our society is shaped by science and technology, and to the extent that we’ve all learned about the scientific method, everybody has the capacity to raise valid questions.  Science, as any other major endeavor, benefits from a critical public, and that is why the fairytale science that I wrote about in my last post is a dangerous development.  It lulls the interested observers into believing that they are clearly out of their depth, incapable of even formulating some probing questions.  This can in fact be turned into a criterion for bad science: If a reasonably intelligent and educated person cannot follow up with some questions after a science presentation, it’s a pretty good indication that the latter is either very poorly done, or may deal in fairytale science (the only problem with this criterion is that usually everybody considers themselves reasonably intelligent).

The antidote to this pathological development is Open Science as described by Michael Nielsen, and Citizen Science. The latter I expect to have no less of an impact on the way we do science than the Open Source movement had on the way we do computing. Never have the means to do quality science been as affordable as today: a simple smartphone is already a pretty close match to the fabled Star Trek tricorder, and can easily be turned into a precision instrument. Science labs used to require skilled craftsmen to build scientific rigs, but 3D printers will level the playing field there as well. This means that experiments that would have required major funding just two decades ago are now within the means of high school students.

So, don’t ask what science can do for you, but what you can do for science.*

Don_t_ask
*In this spirit, I decided to step up this blog’s content, and didn’t shy away from the expense of engaging in some original reporting. Last week I took a trip to Canada’s high-tech wonderland, which happens to be Burnaby, BC, just outside Vancouver. Stay tuned for some upcoming first-hand reporting on D-Wave and General Fusion.

Just Say No to Fairytale Science

Terry Pratchett was one of my favorite authors, if not my all-time favorite. Luckily for me, he was also one of the most prolific ones, creating an incredibly rich, hilarious yet endearing universe, populated with the most unlikely yet humane characters. What drew me in, when I started reading his books twenty years ago, was his uncanny sense for the absurdities of modern physics. Therefore it shouldn’t really come as a surprise that he also wrote the best popular science book there is. To honor the man, and mark his passing, I republish this post from 2013.

Science
Until recently, there was no competition when I was asked what popular science book I’d recommend to a non-physicist. It was always Terry Pratchett’s The Science of Discworld. It comes with a healthy dose of humor and makes no bones about the fact that any popularized version of modern physics essentially boils down to “lies to children”.

farewell-to-reality

But there is now a new contender, one that I can highly recommend: Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth. This book does an excellent job of retelling how we got to the current state of theoretical physics, a state that quantum computing theorist Scott Aaronson described this way:

 

ROTFL! Have you looked recently at beyond-Standard-Model theoretical physics? It’s a teetering tower of conjectures (which is not to say, of course, that that’s inherently bad, or that I can do better). However, one obvious difference is that the physicists don’t call them conjectures, as mathematicians or computer scientists would. Instead they call them amazing discoveries, striking dualities, remarkable coincidences, tantalizing hints … once again, lots of good PR lessons for us! 🙂

This was in a comment on his recent blog post where he has some fun with Nima Arkani-Hamed’s Amplituhedron. The latter is actually one of the more interesting results I have seen come out of mainstream theoretical physics, because it allows us to calculate something in a much more straightforward manner than before. That it is currently restricted to the scope of an unphysical toy theory is all you need to know to understand how far current theoretical physics has ventured from actual verifiability by experiment.

For those who want to dig deeper and understand where to draw the line between currently established physics and fairytale science, Jim Baggott’s book is a one-stop shop. It is written in a very accessible manner and does a surprisingly good job of explaining what has been driving theoretical physics, without recourse to any math.

At the beginning, the author describes what prompted him to write the book: one too many of those fancifully produced science shows, with lots of CGI and dramatic music, that present String theory as established fact. Catching himself yelling at the TV (I’ve been there), he decided to do something about it, and his book is the pleasant result. I am confident it will inoculate any alert reader against the pitfalls of fairytale science and equip him (or her) with a critical framework for ascertaining what truthiness to assign to various theoretical physics conjectures (in popular science fare they are, of course, never referenced as such, as Scott correctly observed).

This isn’t the first book that addresses this issue. Peter Woit’s Not Even Wrong took it on at a time when calling out String theory was a rather unpopular stance, but the timing for another book in this vein that addresses a broad lay public is excellent. As Baggott wrote his book, it was already apparent that CERN’s LHC had not picked up any signs in support of SUSY and string theory. Theorists have long been in denial about these elaborately constructed castles in the sky, but the reality seems to be slowly seeping in.

The point is that the scientific mechanism for self-correction needs to reassert itself. It’s not that SUSY and String theory didn’t produce some remarkable mathematical results. They just didn’t produce actual physics (although in unanticipated ways the Amplituhedron may get there). Trying to spin away this failure is doing science a massive disservice. Let’s hope theoretical physicists take a cue from the quantum computing theorists and clearly mark their conjectures. It’ll be a start.

Alternatively, they could always present the theory as it is done in this viral video.  At least then it will be abundantly clear that this is more art than science (h/t Rolf D.):

Science Media in a Bubble – Ready to Implode?

An ongoing theme of this blog is the media coverage that science receives. Unsurprisingly, given that most journalists have little STEM background, the public is often treated to heedless rewordings of press releases, e.g. this example from the QC realm. Also, sensationalist science news is hardly ever put into context – the story of the faster-than-light CERN neutrinos is a perfect example of the latter.

What is more surprising is when dedicated publication powerhouses such as Nature or Science get it wrong, either by means of omission, such as when covering quantum computing while completely ignoring the adiabatic approach that D-Wave is following, or by short-circuiting the peer review process. The latter may have set back sonoluminescence research by decades.

Sonoluminescence is the name for a peculiar effect in which cavitation in a liquid can be stimulated by sound waves to the point where the small gaseous bubbles implode so rapidly that a plasma forms and produces a telltale light signal. The following video is a nice demonstration of the effect (full-screen mode recommended):

 

Since there is plasma involved, the idea that this could be used as yet another means to accomplish fusion was first patented as early as 1982.

In itself, the phenomenon is remarkable enough, and not well understood, giving ample justification for basic research of the effect.  After all, it is quite extraordinary that sound waves suffice to create such extreme conditions in a liquid.

But it is still quite a stretch to get from there to the necessary conditions for a fusion reaction. The nuclear energy barrier is orders of magnitude larger than the energies that are involved in the chemical domain, let alone the typical energy density of sound waves. The following cartoon puts this nicely into perspective:
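To complement the cartoon with rough numbers (standard order-of-magnitude figures, added here by way of illustration):

```latex
E_{\text{chemical bond}} \sim \text{a few eV}, \qquad
E_{\text{Coulomb barrier }(d,t)} \sim 0.1\text{--}1\ \mathrm{MeV}, \qquad
E_{d+t\,\rightarrow\,\alpha+n} \approx 17.6\ \mathrm{MeV}
```

That is a gap of five to seven orders of magnitude, and the energy quanta of sound waves sit far below even the chemical scale.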

That is why this approach to fusion always seemed rather far-fetched, and not very practical, to me. So when a Science article about ten years ago claimed fusion evidence, I was skeptical, and I wasn’t surprised when it was later contradicted by reports that portrayed the earlier results as ambiguous at best. I had no reason to question the Science reporting; I took the news at face value and paid little attention to this area of research until a recent report by Steven Krivit. He brings investigative journalism to the domain of science reporting, and the results are not pretty:

  1. The rebuttal to the original peer reviewed article first appeared on the Science blog without going through the usual review process.
  2. Contrary to what was reported, the scientists undermining the original research did not work independently on reproducing the results but only performed auxiliary measurements on the same experimental set-up.
  3. The detector they used was known to not be ideally suited to the neutron spectrum that was to be measured, and was too large to be ideally placed.
  4. The criticism relied on an ad-hoc coincidence criterion for the photon and neutron genesis that ignored the multi-bubble cavitation design of the original experiment.

The full investigative report is behind a pay-wall.  It is rather devastating.

To add insult to injury, the Science journalist instrumental in causing this mess, the one who promoted the rebuttal without peer review, later went on to teach journalism.

A casual and cynical observer may wonder why Steven makes such a fuss about this. After all, American mainstream journalism outside the realm of science is also a rather poor and sordid affair.  He-said-she-said reporting is equated with objectivity, and journalists are mostly reduced to being stenographers and water carriers of the political actors that they are supposed to cover (the few journalists who buck this trend I hold in the highest regard).

One may also argue that there wasn’t all that much damage done, because the critics, even if they didn’t work as advertised, may have gotten it right; the BBC, a couple of years later, sponsored an attempt at reproduction and also came up empty.

But there is one rather big and important difference:  Journals such as Science are not just media that report to the public at large.  Rather, they are the gatekeepers for what is accepted as scientific research, and must therefore be held to a higher standard.  Research that doesn’t get published in peer reviewed journals may as well not exist (unless it is privately financed applied R&D, that can be immediately commercialized, and is therefore deliberately kept proprietary).

The more reputable a peer reviewed journal, the higher the impact (calculating the impact factor is a science in itself). But arguably, it is worse to get work published in a reputable journal just to have the results then demolished by the same outfit, especially if the deck is stacked against you.

To me, this story raises a lot of questions and drives home that investigative science journalism is sorely lacking and badly needed. Who else is there to guard the gatekeepers?

Out of the AI Winter and into the Cold

dwave_log_temp_scale
A logarithmic scale doesn’t have the appropriate visual impact to convey how extraordinarily cold 20mK is.

Any quantum computer using superconducting Josephson junctions will have to be operated at extremely low temperatures. The D-Wave machine, for instance, runs at about 20 mK, which is much colder than anything in nature (including deep space). A logarithmic scale like the chart to the right, while technically correct, doesn’t really do this justice.  This animated one from D-Wave’s blog shows this much more drastically when scaled linearly (the first link goes to an SVG file that should play in all modern browsers, but it takes ten seconds to get started).
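As a quick illustration of why the linear view is so much more striking, here is a small plotting sketch (my own, not D-Wave’s animated chart; the temperatures are the usual round numbers):

```python
# Plot the same three temperatures on a log and a linear axis side by side.
import matplotlib.pyplot as plt

temps = {
    "room\n(~293 K)": 293.0,
    "deep space / CMB\n(~2.7 K)": 2.7,
    "D-Wave chip\n(~0.02 K)": 0.02,
}

fig, axes = plt.subplots(1, 2, figsize=(9, 3.5))
for ax, scale in zip(axes, ("log", "linear")):
    ax.bar(range(len(temps)), list(temps.values()))
    ax.set_yscale(scale)
    ax.set_xticks(range(len(temps)))
    ax.set_xticklabels(temps.keys())
    ax.set_ylabel("temperature [K]")
    ax.set_title(f"{scale} scale")

plt.tight_layout()
plt.show()  # on the linear axis the two cold bars all but vanish
```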

Given that D-Wave’s most prominent use case is the field of machine learning, a casual observer may be misled to think that the term “AI winter” refers to the propensity of artificial neural networks to blossom in this frigid environment. But what the term actually stands for is the brutal hype cycle that ravaged this field of computer science.

One of the first casualties of the collapse of artificial intelligence research in 1969 was the ancestor of the kind of learning algorithms that are now often implemented on D-Wave’s machines. This incident is referred to as the XOR affair, and the story that circulates goes like this:  “Marvin Minsky, being a proponent of structured AI, killed off the connectionism approach when he co-authored the now classic tome, Perceptrons. This was accomplished by mathematically proving that a single layer perceptron is so limited it cannot even be used (or trained for that matter) to emulate an XOR gate. Although this does not hold for multi-layer perceptrons, his word was taken as gospel, and smothered this promising field in its infancy.”

Marvin Minsky begs to differ, and argues that he of course knew about the capabilities of artificial neural networks with more than one layer, and that, if anything, only the proof that working with local neurons comes at the cost of some universality should have had any bearing. It seems impossible to untangle the exact dynamics that led to this most unfortunate AI winter, yet in hindsight it seems completely misguided and avoidable, given that a learning algorithm (backpropagation) that allowed for the efficient training of multi-layer perceptrons had already been published a year prior, although at the time it received very little attention.
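To see how little it takes to get past the XOR objection, here is a minimal sketch (my own illustration, not code from Minsky, Rosenblatt, or D-Wave): a perceptron with a single hidden layer, trained by plain backpropagation, learns the XOR truth table that a single-layer perceptron provably cannot represent.

```python
import numpy as np

rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

# one hidden layer with four sigmoid units
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: gradients of the mean squared error
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)

    # gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]] for most seeds
```

The point of the anecdote is precisely that this multi-layer case falls outside the scope of the famous single-layer proof.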

The fact is, after Perceptrons was published, symbolic AI flourished and connectionism was almost dead for a decade. Given what the authors wrote in the foreword to the revised 1989 edition, there is little doubt how Minsky felt about this:

“Some readers may be shocked to hear it said that little of significance has happened in this field [since the first edition twenty year earlier]. Have not perceptron-like networks under the new name connectionism – become a major subject of discussion at gatherings of psychologists and computer scientists? Has not there been a “connectionist revolution?” Certainly yes, in that there is a great deal of interest and discussion. Possibly yes, in the sense that discoveries have been made that may, in time, turn out to be of fundamental importance. But certainly no, in that there has been little clear-cut change in the conceptual basis of the field. The issues that give rise to excitement today seem much the same as those that were responsible for previous rounds of excitement. The issues that were then obscure remain obscure today because no one yet knows how to tell which of the present discoveries are fundamental and which are superficial. Our position remains what it was when we wrote the book: We believe this realm of work to be immensely important and rich, but we expect its growth to require a degree of critical analysis that its more romantic advocates have always been reluctant to pursue – perhaps because the spirit of connectionism seems itself to go somewhat against the grain of analytic rigor.” [Emphasis added by the blog author]

When fast-forwarding to 2013 and the reception that D-Wave receives from some academic quarters, this feels like déjà vu all over again. Geordie Rose, founder and current CTO of D-Wave, unabashedly muses about spiritual machines, although he doesn’t strike me as a particularly romantic fellow. But he is very interested in using his amazing hardware to make for better machine learning, very much in “the spirit of connectionism”. He published an excellent mini-series on this at D-Wave’s blog (part 1, 2, 3, 4, 5, 6, 7). It would be interesting to learn whether Minsky would find fault with the analytic rigor on display there (he is now 86, but I hope he is still going as strong as ten years ago when this TED talk was recorded).

So, if we cast Geordie in the role of the 21st-century version of Frank Rosenblatt (the inventor of the original perceptron), then we surely must pick Scott Aaronson as the modern-day version of Marvin Minsky. Only this time the argument is not about AI, but about how ‘quantum’ D-Wave’s device truly is. The argument feels very similar: on one side, the theoretical computer scientist, equipped with boat-loads of mathematical rigor, strongly prefers the gate model of quantum computing. On the other, the pragmatist, whose focus is to build something usable within the constraints of what chip foundries can produce at this time.

But the ultimate irony, it seems, at least in Scott Aaronson’s mind, is that the AI winter is the perfect cautionary parable for his case (as was pointed out by an anonymous poster on his blog), i.e. he thinks the D-Wave marketing hype can be equated with the over-promises of AI research in the past. Scott fears that if the company cannot deliver, the baby (i.e. quantum computing) will be thrown out with the bathwater, and so he blogged:

“I predict that the very same people now hyping D-Wave will turn around and—without the slightest acknowledgment of error on their part—declare that the entire field of quantum computing has now been unmasked as a mirage, a scam, and a chimera.”

A statement that of course didn’t go unchallenged in the comment section (Scott’s exemplary in allowing this kind of frankness on his blog).

I don’t pretend to have any deeper conclusion to draw from these observations, and will leave it at this sobering thought: While we expect science to be conducted in an eminently rational fashion, history gives ample examples of how the progress of science happens in fits and starts and is anything but rational.

The Other Kind of Cold Fusion

Cygnus_X-1
Nature clearly favours hot fusion no matter how cold the light. The cold glow in this image stems from a Blue Giant that is believed to orbit a black hole in the Cygnus X-1 system.

If you lived through the eighties there are certain things you could not miss, and since this is a science blog I am of course not referring to fashion aberrations, like mullets and shoulder pads, but rather to what is widely regarded as one of the most notorious science scandals to date: Fleischmann and Pons Cold Fusion, the claim of tapping the ultimate energy source within a simple electrochemical cell.

driver_license_photo
This blog’s author’s photo proves that he lived through the eighties. Since this driver’s licence picture was taken the same year as the Fleischmann and Pons disaster, the half smile was all that I could muster.

For a short time it felt like humanity’s prayers to be delivered from fossil fuel had been answered (at least to those who believe in that sort of thing). Of course, paying the ever increasing price at the gas pump is a constant (painful) reminder that this euphoric moment at the end of the eighties was but a short-lived aberration. But back then it felt so real. After all, there already existed a well-known process that allowed for nuclear fusion at room temperature, catalyzed by the enigmatic muons. One of the first scientific articles that I read in English was on that phenomenon, and it was published just a couple of years earlier. So initial speculations abounded that maybe muons from cosmic rays could somehow help trigger the reported reaction (although there was no explanation given as to how this low muon flux density could possibly accomplish this). While my fringe blog focuses on the intrepid researchers who, despite the enormous blowback, still work on Fleischmann-Pons-style research, this post is about the other kind, the oft forgotten muon-catalyzed fusion.

It is a beautiful nuclear reaction, highlighting two of the most basic peculiarities of quantum mechanics: quantum tunnelling and the Heisenberg uncertainty principle. Both of these are direct consequences of the manifest wave properties of matter at this scale. The former allows matter to seep into what should be impenetrable barriers, and the latter describes how a bound point particle is always “smeared out” over a volume – as if points are an abstraction that nature abhors. Last but not least, it showcases the mysterious muon, a particle that seems to be identical to the electron in every way but mass and stability (about 200 times more mass and a pretty long lifetime of about 2 μs). Because it behaves just like a heavier twin of the electron, it can substitute for the latter in atoms and molecules.

The Heisenberg uncertainty principle states that the product of the momentum (mass times velocity) and position uncertainties has a lower bound. Usually the term uncertainty is simply interpreted probabilistically, in terms of the standard deviation from the expectation value. But this view, while formally entirely correct, obscures the very real physical implication of trying to squeeze a particle into a small space, because the momentum uncertainty then becomes a very real physical effect of quantum matter. The particle’s velocity distribution will become ever broader, forcing the matter outwards and creating an orbital ‘cloud’ (e.g. specifically the spherical hydrogen s-orbital). There is really no good analogy in our everyday experience; they all sound silly. My version is that of a slippery bar of soap in a spherical sink: the harder you try to grasp it, the more powerfully you send it flying. If you were to map all trajectories of the soap over time, you would find that on average it was anywhere in the sink, with the probability decreasing towards the rim (that is, unless you squeeze it so hard that it acquires enough speed to jump out of the sink – I guess that would be an analog to ionization). In the atomic and chemical realm, on the other hand, the very concept of a trajectory doesn’t hold up (unless you are dealing with Rydberg atoms). You may as well think of electron orbitals as charge distributions (as this is exactly how they behave in the chemical domain).

Because the momentum rather than the velocity enters into the equation, the orbitals for a heavier version of the electron will be considerably smaller, i.e. about 200 times smaller for the muon, as this is the factor by which the particle’s velocity can be reduced while still yielding the same momentum. So muonic hydrogen is much smaller than the electron version. That’s already all that is needed to get fusion going, because if two heavy hydrogen nuclei are bound in a muonic μH2 molecule they are far too close for comfort. Usually the repellent force of the electrostatic Coulomb potential should be enough to keep them apart, but the quantum tunnel effect allows them to penetrate the ‘forbidden’ region. And at this distance, the probability that both nuclei occupy the same space becomes large enough to get a measurable incidence of nuclear fusion, i.e. μH2 → μHe.
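To make the scaling argument explicit (a standard textbook estimate, not taken from the original post, and ignoring the reduced-mass correction):

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}, \qquad
a_{\mathrm{Bohr}} \;=\; \frac{\hbar}{m\,c\,\alpha}
\quad\Longrightarrow\quad
\frac{a_{\mu}}{a_{e}} \;=\; \frac{m_{e}}{m_{\mu}} \;\approx\; \frac{1}{207}
```

The two nuclei in the muonic molecule therefore sit roughly 200 times closer together than in ordinary H2, which is what boosts the tunnelling probability to a useful level.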

The hydrogen used in the experimental realization is not the usual kind; as with other approaches to fusion, the heavier hydrogen isotopes deuterium and tritium are required, and since there is only one muon in the mix the d-t molecule is an ion, so that the equation looks more like this: (d-μ-t)+ → n + α (with n indicating a fast neutron and α a helium-4 nucleus).
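Written out with the energy split between the products (standard values for the d-t reaction, added here for reference):

```latex
d + t \;\longrightarrow\; \alpha\,(3.5\ \mathrm{MeV}) \;+\; n\,(14.1\ \mathrm{MeV}),
\qquad Q \approx 17.6\ \mathrm{MeV}
```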

The latter causes a lot of trouble, as the muon ‘sticks’ to this alpha particle with a 1% chance (making it a muonic helium ion). If this happens, the muon is no longer available to catalyze more fusion events. This, in combination with the limited lifetime of the muons, and the ‘set-up’ time the muons require to bind to the hydrogen isotopes, is the limiting factor of this reaction.

Without a constant, massive resupply of muons the reaction tapers off quickly. Despite decades of research this problem could never be surmounted. It takes pions to make muons, and the former are only produced in high-energy particle collisions. This costs significantly more energy than the cold muon-catalyzed fusion can recoup.
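A back-of-the-envelope tally (my own, using the ~1% sticking probability mentioned above and the 17.6 MeV released per fusion) shows how unforgiving the balance is:

```latex
N_{\mathrm{fusions/muon}} \;\lesssim\; \frac{1}{0.01} = 100, \qquad
E_{\mathrm{out}} \;\lesssim\; 100 \times 17.6\ \mathrm{MeV} \;\approx\; 1.8\ \mathrm{GeV}
```

Producing a single muon via pion production in an accelerator, on the other hand, is generally reckoned to cost several GeV, so the books don’t balance.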

But there is one Australian company that claims it has found a new, less costly way to make pions. They are certainly a very interesting ‘cold fusion’ start-up and at first glance seem far more credible than the outfits that my fringe blog covers. On the other hand, this company treats its proprietary pion production process with a level of secrecy that is reminiscent of the worst players in the LENR world. I could not find any hint of how this process is supposed to work, or why it could supposedly produce sufficient amounts of muons to make this commercially exploitable. (Pions could also be generated in two-photon processes, but this would require even more input energy.) So on a second read, the claims of Australia’s Star Scientific don’t really sound any less fantastic than the boasting of any other cold fusion outfit.

Any comments that could illuminate this mystery are more than welcome. Preliminary google searches on this company are certainly not encouraging.

Will you or will you not?

Every so often a piece of pop science writing pops up that stands out. It’s like a good bottle of wine: you want to savour it slowly, and the next day, when you wake up with a slight hangover and realize that maybe it was a bit disagreeable, you are still content that you have some more of it in your cellar.

Penrose’s “The Emperor’s New Mind” falls into this category for me. Despite all of the author’s immense scientific sophistication, it felt like he fell into the trap that my very first physics prof put like this: “The difference between theologians and philosophers is that the former have to argue towards a certain end.” In the final analysis, I find, it was a religious text.

After an exhausting rekindling of the D-Wave melee on his blog, Scott Aaronson’s latest paper, “The Ghost in the Quantum Turing Machine”, is a welcome change of pace. Yet, given the subject matter, I was mentally preparing for an experience similar to the one with Penrose, especially in light of the instant rejection that this paper received from some experts on Bayesian inference, such as Robert Tucci.

Scott’s analysis may be dismissed as the Copenhagen Interpretation on steroids, but while the innate problems with this old QM workhorse are quite apparent, in the end I think it actually manages to yet again deliver a plausible result (despite some apparent absurdities along the way). The structure of the essay is quite clever, as Scott anticipates many objections that could be raised, and at times it almost reads like a 21st-century version of a Platonic dialog. I think he missed some issues, and I will revisit this in a later post, but overall I think the big picture holds, and it is well painted.

Scott has always been a good writer. His book “Quantum Computing since Democritus” I find thoroughly enjoyable. Although, unlike the Hitchhiker’s Guide to the Galaxy (the real thing, not the book), it still had to fit the dead-tree format, and so there are gaps in the millennia of QC history covered. Scott had to pick and choose what’s most important to him in this story, and that means that the 495 complexity classes known to humanity these days get a fair share of attention. After all, he is a complexity theorist. Even for the best writer, making that part flow like honey will be difficult, but it gives an excellent window into how Scott approaches the subject. It also lays bare that the field is in dire straits similar to those physics was in when the number of elementary particles exploded without much understanding of exactly what was going on. So for now, we are stuck with a complexity class zoo rather than an elementary particle one, waiting for some kind of standard model that’ll impose order.

This latest, more contemplative paper is unburdened by this particular heavy load, yet takes on another one: the age-old philosophical question of free will, which is very close to the questions of consciousness and AI that Penrose pondered. It starts out with a beautifully written homage to Turing. The last piece of writing that resonated this strongly with me had an unapologetically religious sub-text (this blog entry penned by Kingsley Jones). So I was certain I was in for another Penrose moment.

The bait and switch that followed, to the much more decidable question of what minimal necessary resource nature needs to provide to make free will a valid concept, came as a pleasant surprise. All the more so, as this question seems so obvious in hindsight, yet apparently hasn’t been refined in this manner before.

It is a very good question, an important one, but for now your inclination toward or away from belief in this resource (which goes by the name Knightian uncertainty) is up to your religious leanings, and I don’t know if you actually have the freedom to make this choice.