As I am preparing to get back into more regular blogging on Quantum Computing, I learned that my second favourite Vancouver-based start-up, General Fusion, got some well-deserved social media traction. Michel Laberge’s TED talk has now been viewed over a million times (h/t Rolf D). Well deserved, indeed.
This reminded me of a Milken Institute fusion panel from earlier this year, which seems to have less reach than TED, but is no less interesting. It also features Michel, together with representatives from other Fusion ventures (Tri Alpha Energy and Lockheed Martin) as well as MIT’s Dennis Whyte. The panel makes a compelling case as to why we see private money flowing into this sector now, and why ITER shouldn’t be the only iron we have in the fire.
The US already has only observer status at CERN, so bailing on ITER would sideline the American physics community even further. Despite the cost overruns, and irrespective of its commercialisation prospects, ITER will be one of the most advanced testbeds for plasma physics. Should the US really shut itself out of prime access to this machine once it is operational?
John’s post provides an excellent round-up of the various approaches to fusion, and mentions the damage that cold fusion inflicted on the field, a story that deserves a separate article. But there is another plasma phenomenon that some hope could be exploited for nuclear fusion, one that goes unmentioned in John’s otherwise exhaustive post. It shares some commonality with the dubious cold fusion experiments: abysmally bad replicability that severely damaged the reputation of one of the lead researchers in the field. This speculative approach to fusion was recently prominently featured in a surprisingly well-researched Gawker article (h/t Ed B.). It mentions some private outfits that are hanging their hat on sonoluminescence, and since the latter phenomenon is, after all, an actual plasma-producing micro-cavitation, these companies don’t deserve to be lumped in with the shadier cold fusion hustlers.
However, it is quite apparent that none of these can produce neutrons at a significant rate, unlike PNL’s High Yield Neutron Generator, an already commercially valuable technology. So there is clearly not much reason to get too excited about sonoluminescence unless one of the companies invested in this approach can replicate this feat.
On balance, the influx of private money into nuclear fusion start-ups is the story here, one that gives hope that humanity may find a way to break its self-defeating fossil fuel habit within our lifetime.
Another post on D-Wave is in my blog queue, but with all this attention on quantum computing, my other favourite BC-based high-tech start-up isn’t getting enough of my time – I haven’t written anything on energy and fusion for quite a while, despite some dramatic recent news (h/t Theo) with regard to another dark horse fusion contender.
Another focus of mine, the trouble with contemporary theoretical physics, also keeps falling through the cracks. From my past posts one may get the impression that I am just yet another String apostate, but I don’t really have any quarrel with String Theory as such – my trouble is with uncritical confirmation bias. Unfortunately, the latter cuts across all fields, as Sabine Hossenfelder nicely demonstrates in this recent post of hers.
During my autumn travel to the Canadian West Coast I was given the opportunity to visit yet another high-tech start-up with a vision no less radical and bold than D-Wave’s.
I have written about General Fusion before, and it was a treat to tour their expanding facility, and to ask any question I could think of. The company made some news when they attracted investment from Jeff Bezos, but despite this new influx of capital, in the world of fusion research, they are operating on a shoe-string budget. This makes it all the more impressive how much they have already accomplished.
At the outset, the idea seems ludicrous: how could a small start-up company possibly hope to accomplish something that the multi-national ITER consortium attempts with billions of dollars? Yet the approach they are following is scientifically sound, albeit one that has fallen out of favor with the mainstream of plasma physicists. It is an approach that is incidentally well suited to smaller-scale experiments, and the shell of the experiment that started it all is now on display in the reception area at General Fusion.
But the lackluster progress of the conventional approach to fusion does not deter the people behind this project; rather, it seems to add to their sense of urgency. What struck me when first coming on site was the no-nonsense industrial feel of the entire operation. The company is renting some nondescript buildings, the interior more manufacturing floor than gleaming laboratory, every square inch purposefully utilized to run several R&D streams in parallel. Even before talking to co-founder Doug Richardson, the premises themselves sent a clear message: this is an engineering approach to fusion, and they are in a hurry. This is why, rather than focusing on just one aspect of the machine, they decided to work on all of them in parallel.
When asked where I wanted to start my tour, I opted for the optically most impressive piece, the scaled-down reactor core with its huge attached pistons. I wanted to scrutinize this first because, in my experience, this mechanical behemoth is what casual outside observers usually object to, based on the naive assumption that so many moving parts under such high mechanical stresses make for problematic technology. Doug met this argument with derision: in his mind this is the easy part, just a matter of selecting the right material and precision mechanical engineering. My point that moving parts mean wear and tear he swatted aside just as easily. A layperson introduced to the concept is usually uncomfortable with the idea that pistons could produce this enormous pressure – after all, everybody is well acquainted with the limited lifetime of a car engine that has to endure far less. Doug turned this analogy on its ear, pointing out that a stationary mounted engine can run uninterrupted for a long time, and that reliability typically increases with scale.
Currently they have a 3:1 scaled-down reactor chamber built to test the vortex compression system (shown in the picture below).
Another of my concerns with this piece of machinery was the level of accuracy required to align the piston cylinders. The final product will require 200 of them, and if the system is sensitive to misalignment it is easy to imagine how this could impact its reliability.
It came as a bit of a surprise that the precision required is actually less than I expected: an alignment tolerance of 50 microns (0.05 mm) should suffice, and in terms of timing, the synchronicity can tolerate deviations of up to 10 microseconds, ten times more than initially expected. This is due to a nice property that the GF research uncovered during the experiments: the spherical shock wave they are creating within the reactor chamber is self-stabilizing. The phase shift that occurs when one of the actuators is slightly out of line causes a self-correcting interference that helps keep the ingoing compression symmetric as it travels through the vortex of molten lead-lithium at the heart of the machine.
The reason for this particular metal mix within the reactor is the shielding property of lead, and the fact that lithium-6 has a large neutron absorption cross section that allows for breeding tritium fuel. This is a very elegant design that ensures that, if the machine gets to the point of igniting fusion, there will be no neutron activation problems like those which plague conventional approaches (in a tokamak design such as ITER’s, the neutrons, which cannot be magnetically contained, bombard the reactor wall, eventually wearing it down and rendering it radioactive).
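For reference, the breeding step alluded to here is the standard lithium-6 neutron capture reaction (textbook values, not figures from General Fusion’s own papers):

$$ n + {}^{6}\mathrm{Li} \;\rightarrow\; {}^{3}\mathrm{H} + {}^{4}\mathrm{He} + 4.8\,\mathrm{MeV} $$

The tritium bred this way feeds straight back into the D–T fuel cycle, $ {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n + 17.6\,\mathrm{MeV} $, so the very neutrons that would otherwise activate the reactor wall are put to work replenishing the fuel.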
Doug stressed that this reflects their engineering mindset. They need to get these problems under control from the get-go, whereas huge projects like ITER can afford to kick the can down the road – first measuring the scope of the problem, and then hoping to address it with a later research effort (which is then supposed to solve a problem that General Fusion’s approach eliminates altogether).
Another aspect of the design that I originally did not understand is that plasma will be injected from both sides of the sphere simultaneously, so that the overall momentum of the plasma cancels out at the center – i.e. the incoming shock wave doesn’t have to hit a moving target.
The following YouTube video animation uploaded by the company illustrates how all these pieces are envisioned to work together.
Managing the plasma properties and its dynamics, i.e. avoiding unwanted turbulence that may reduce temperature and/or density, is the biggest technological challenge.
To create plasma of the required quality, and in order to get it into place, the company constructed some more impressive machinery. It is a safe bet that they have the largest plasma injectors ever built.
When studying the plasma parameters, it turned out that the theoretical calculations had actually led to an over-engineering of this injector, and that smaller ones may be adequate for creating plasma of the desired density. But of course creating and injecting the plasma is only the starting point. The most critical aspect is how this plasma behaves under compression.
To fully determine this, GF faces the same research challenges as the related magnetized target fusion research program in the US; i.e. General Fusion needs to perform tests similar to those conducted at the SHIVA STAR Air Force facility in Albuquerque. In fact, due to budget cut-backs, SHIVA has spare capacity that could be used by GF, but exaggerated US security regulations unfortunately prevent such cooperation – it is highly doubtful that civilian Canadians would be allowed access to a military-class facility. So the company has to improvise and come up with its own approach to this kind of implosion test. The photo below shows an array of sensors that is used to scrutinize the plasma during one of these tests.
Proving that they can achieve the next target compression benchmark is critical in order to continue to receive funding from the federal Canadian SDTC fund. The latter is the only source of governmental fusion funding: Canada has no dedicated program for fusion research and even turned its back on the ITER consortium. This is a far cry from Canada’s technological vision in the sixties, which resulted in nuclear leadership with the unique CANDU design. Yet there is no doubt that General Fusion has been doing the most with the limited funds it has received.
Here’s to hoping that the Canadian government may eventually wake up to the full potential of a fusion reactor design ‘made in Canada’ and start looking beyond the oil patch for its energy security (although this will probably require that the torch be passed to a more visionary leadership in Ottawa).
Update: What a start to 2014 for this blog. This post has been featured on Slashdot, and received over 11K views within three days. Some of the commenters on Slashdot wanted to dig deeper into the science of General Fusion. For those who want to follow through on this, the company’s papers, and those describing important results that GF builds on, can be found on their site. In addition, specifically for the unique vortex technology, I find James Gregson’s Master’s thesis very informative.
Update 2: General Fusion can be followed on Twitter @MTF_Fusion (h/t Nathan Gilliland)
Update 3: Some Canadian mainstream media, like the Edmonton Journal, have also noticed the conspicuous absence of dedicated fusion research. Ironically, the otherwise well-written article argues for an Alberta-based research program while not mentioning General Fusion once – this despite the fact that the company is right next door (by Canadian standards) and in fact has one major Alberta-based investor, the oil company Cenovus Energy.
Yet, just as D-Wave was mostly off the radar with regard to quantum computing, there is another Vancouver-based high-tech venture that could similarly upset fusion research.
The hot fusion plasma ball up in the sky is, compared to the general fusion challenge down here on Earth, really, really big: it generates an enormous amount of pressure at its core, creating the kind of critical density that helps sustain the fusion reaction. So just heating a plasma to the Sun’s core temperature (about 16 million K) will not suffice; we have to reach roughly ten times that in order to compensate for the lack of gravitational pressure. It shouldn’t be surprising that designing a reactor chamber that can hold the hottest thing in our solar system poses a significant engineering challenge.
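To put rough numbers on that “ten times” claim, here is a back-of-the-envelope sketch using commonly quoted textbook ballpark values (assumed for illustration, not figures from General Fusion):

```python
# Ballpark temperatures in kelvin (assumed textbook values, not measurements)
SUN_CORE_TEMP_K = 1.6e7    # ~16 million K at the Sun's core
DT_REACTOR_TEMP_K = 1.5e8  # ~150 million K commonly quoted for D-T fusion reactors

# Without the Sun's gravitational pressure to provide density and confinement,
# a terrestrial reactor has to make up the difference with extra temperature.
ratio = DT_REACTOR_TEMP_K / SUN_CORE_TEMP_K
print(f"A D-T reactor needs roughly {ratio:.1f}x the Sun's core temperature")
```

With these figures the ratio comes out just under ten, which is the order of magnitude behind the statement above.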
On the other hand, the idea of tackling the second parameter, the plasma’s pressure, in a controllable manner was generally regarded as technically impossible (not counting NIF-like implosion scenarios, which more closely mimic the runaway implosion of an H-bomb – which is why they are of interest to the military).
This common wisdom held until General Fusion entered the fray and made the case that advances in electronics and process control opened up the possibility to tackle the density side of the fusion equation. And then they built this:
This device is following the age-old engineering adage that if you want compression you use a piston, and if you want large compression you use a large piston which focuses all the energy into a tiny space. The trick is to be able to do this in such a precise fashion that you can coordinate it with the injection of fuel gas along a central axis, so that you can get a succession of pulsed fusion ignitions with each coordinated firing of the pneumatic pistons.
This device may be testing the limits of mechanical engineering, but if it can create the conditions it aims for, then our current understanding of plasma and nuclear physics clearly indicates that it will result in fusion.
The interior of the reactor chamber will have to be cooled with liquid lead. Despite the high energy density, the overall footprint of the reactor itself is fairly compact, no bigger than the typical dimensions of a commercial nuclear fission reactor. If this design pans out, these reactors could be used to retrofit existing nuclear power stations with a fusion core, converting them to a much cleaner energy source that does not carry the risk of accidentally triggering an uncontrollable nuclear chain reaction.
The timeline for bringing this to the market is aggressive. If General Fusion delivers on it, there will be a commercial fusion offering available before ITER even opens its doors.
Given that the latter is not even attempting to deliver a commercially ready design yet, the company will be without competition (unless one of the other commercial fusion ventures, such as LPP, beats them to it).
Fortunately, with this company it won’t be hard to decide if and when they manage to deliver on their promises (there won’t be any grounds for the kind of academic backlash that D-Wave has to endure). Unlike in the world of fringe science, where even the simple act of measuring a (supposedly) substantial energy gain is obfuscated to the point of utter hilarity, once General Fusion achieves net energy gain, there will be little doubt that we have entered the dawn of a new energy age.