The Church of D-Wave

Are you a BDeliever? Science and religion have a difficult relationship, and sometimes they combine in the most peculiar manner, such as when Scientology was conceived. The latter seems to have lost much of its appeal and following, but it seems that another new religion is poised to grab the mantle. That is, if one is willing to follow Scott Aaronson's rationale that believing in the achievability of significant speed-up with D-Wave's architecture is a matter of faith. Ironically, Scott, who teaches computer science at MIT, made this comment around the same time that the MIT Technology Review named D-Wave to its Top 50 Smartest Companies list. An illustrious list that any company would be delighted to be included on. The only quibble I have with it is that it ranks Elon Musk's SpaceX ahead of D-Wave, my point being that quantum mechanics is harder than rocket science. After all, with the latter, everybody can tell whether your spacecraft made it into orbit or not (classical mechanics is so straightforward). On the other hand, we still have the ongoing high-profile battle over the question of how quantum D-Wave's machine actually is (ever since Schrödinger, uncertainty about what's in a box seems to be a constant of quantum mechanics).

Another paper buttresses the company's claims that there is substantial entanglement present on their chip. This prompted Prof. Vazirani, whom I found to be a most delightful, soft-spoken academic when checking out his quantum computing MOOC, to come out swinging. The New York Times quotes him as saying:

“What I think is going on here is that they didn’t model the ‘noise’ correctly. (....) One should have a little more respect with the truth.”

In academic parlance these are fighting words.  And so the show goes on.

But I want to take a break from this for a moment and focus on another question: How did a startup like D-Wave get to this point? Time magazine front-page material, coverage in the New York Times, being named in the same breath as SpaceX. From a business perspective, this is nothing short of an amazing success story. And to me, the question of what makes for successful entrepreneurship is of no less interest than the science and technology.


Geordie got closer to having a shot at Olympic gold than most of us, having been an alternate on the Canadian wrestling team at the 1996 Olympic Games, so getting this one may have been bittersweet.

Flying into Vancouver I imagined Geordie Rose to be a Steve Jobs-like character, about whom it was famously quipped that he was surrounded by his own reality distortion field, an invisible force that made others see the world like he did, and made them buy into his vision. And although I never had the pleasure of meeting Steve Jobs, I think it is safe to say that Geordie is nothing like him. If I had to describe him in one word, I'd say he is quintessentially "Canadian", in terms of the positive attributes that we typically like to associate with our national character. (Full disclosure: Technically I am not Canadian yet, just a permanent resident.)

Given the amazing success that D-Wave has had, and the awards and accolades that he himself has received, I was impressed with his unassuming demeanor. Hard to imagine Geordie would ever park his car in a handicap spot, as Jobs was fond of doing, to shave a couple minutes off his commute.

D-Wave has just moved to new, larger premises. In their old building, Geordie occupied an interior office without windows. I naturally assumed that he would have upgraded. So I was surprised to learn that his new workspace still doesn't have any windows. His explanation was simple: it allows him to be close to his team.

My takeaway is that visionaries cannot be pigeonholed. When talking to Geordie, it quickly became obvious that his focus and dedication to making his vision a reality are ironclad, and his excitement is infectious. So there is one key similarity to Steve Jobs after all, and then there is of course this, which goes without saying:

Great entrepreneurs never do it for the money.


Prof. Vazirani must have picked up on D-Wave's commitment to making quantum computing work, as the New York Times also quotes him as saying about D-Wave: “after talking with them I feel a lot better about them. They are working hard to prove quantum computing.”

That Geordie picked an approach so abhorred by theorists, I attribute to yet another trait that, in my mind, marks great entrepreneurship: an almost ruthless pragmatism. By focusing on the less proven approach of quantum annealing on a chip, he managed in just seven years to turn out an entirely new computing platform. Meanwhile, the advances in superconducting foundry know-how that his company ushered in will also benefit other approaches, such as the gate-based implementation that UCSB's John Martinis plans to scale up to 1000 qubits within five years.

To me, there is no doubt that the hurry to get something to market is a net benefit to the entire quantum computing field, as I expect it will attract more private capital. That is because quantum computing is no longer perceived as something nebulous that may or may not happen 25 years down the road.

Game changers polarize. So if we pay heed to Scott Aaronson's rhetoric, Geordie clearly has a leg up on Steve Jobs. Where the latter had a cult following, Geordie is on his way to having his own religion. Maybe that explains the following recent exchange on D-Wave's blog:



(h/t Rolf D. and commenter Copenhagen for pointing me to material for this post.)

Posted in D-Wave | 22 Comments

Inflation not Over-Inflated after all?

Updated Below

Cosmology is quintessential popular science, but I have always regarded it as the most dismal field of physics, because there is no avenue for experiments to keep runaway speculation at bay. It's like trying to catch a perpetrator by staring at a multi-billion year old crime scene with the evidence all scattered. And of course, since it deals with the beginning of time, scientists may have a hard time divorcing themselves from philosophical or religious beliefs (e.g. for a long time, Einstein presumably regarded the big bang theory as an invention by the clergy).

Given the ad hoc nature of cosmic inflation theory, introduced to fix problems with the big bang explanation, I always felt rather lukewarm about it. It just appeared too much like a convenient quick fix. But I am certainly warming up to it, given that the new detailed observations of the cosmic microwave background fit the picture quite nicely. This radiation is essentially a convenient thermometer for the entire universe, as it can be regarded as thermal black-body radiation. It is as close as we can get to the aforementioned primordial crime scene, taking advantage of the fact that the further we look into space, the earlier the events we observe. If the mainstream big bang theory is correct, then the evidence for it must be splattered all over space, encoded in this background radiation.
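To make the "thermometer" point concrete: for black-body radiation, the peak wavelength and the temperature fix each other via Wien's displacement law, so measuring the shape of the CMB spectrum pins down its temperature of about 2.725 K. A quick sketch (the numbers are standard textbook values, used here purely for illustration):

```python
# Wien's displacement law: lambda_peak = b / T for black-body radiation.
# Because the CMB is an almost perfect black body, its spectral peak
# directly reads out the temperature of the universe.
WIEN_B = 2.897771955e-3   # Wien displacement constant [m*K]
T_CMB = 2.725             # measured CMB temperature [K]

peak = WIEN_B / T_CMB     # peak wavelength [m]
print(f"{peak * 1e3:.2f} mm")  # ~1.06 mm, squarely in the microwave band
```

Run the other way, the same relation turns an observed peak wavelength into a temperature, which is exactly what makes the CMB such a convenient cosmic thermometer.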

One of the finest pop science writers on the web nicely explains why this data is such a treasure trove. I have little to add to his article other than the caveat that one should keep an open mind: the evidence may yet fit a completely different sequence of events (e.g. this one made some recent headlines, and it will be interesting to see how such alternative models may be adapted to fit the newly released data).

And then there is, of course, the other raison d'etre of this blog, pointing out when popular science writing gets the details wrong. The better outlets, such as the NYT, got it right when they wrote that this data offers the first direct evidence for gravitational waves as predicted by general relativity. And a layman can certainly relate to this simply by appreciating the released pictures, which almost look like ripples left in the sand by ocean waves.

Slight temperature fluctuations, indicated by variations in color, of the cosmic microwave background of a small patch of sky (as provided by the BICEP2 Collaboration).

But a lot of press releases and news blurbs leave out that crucial word "direct" when mentioning gravitational waves, ignoring the excellent indirect evidence that earned a Nobel Prize in 1993. The latter is based on one of the neatest astronomical observations I can think of, which used the precise signal of a pulsar in a binary system to measure the declining orbit of the two stars. The observed orbital decay precisely matches the theoretical prediction of how much energy the system should radiate away via gravitational waves.
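That 1993 Nobel result can even be checked on the back of an envelope: plugging the commonly quoted parameters of the Hulse-Taylor binary (PSR B1913+16) into the general-relativistic formula for orbital-period decay (going back to Peters' 1964 work) reproduces the observed shrinkage of the orbit. A minimal sketch; the numerical inputs are textbook values quoted from memory, so treat them as illustrative:

```python
import math

# GR prediction for the orbital-period decay of a binary pulsar.
# Inputs below are the commonly quoted parameters of PSR B1913+16
# (the Hulse-Taylor binary); masses are in solar units.
T_SUN = 4.925490947e-6     # G * M_sun / c^3, in seconds
P_b = 27906.98             # orbital period [s] (~7.75 hours)
e = 0.6171                 # orbital eccentricity
m_p, m_c = 1.4398, 1.3886  # pulsar and companion masses [M_sun]

# Eccentricity enhancement factor: an eccentric orbit radiates far more
# strongly near periastron than a circular one of the same period.
enhancement = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5

dPdt = (-192 * math.pi / 5
        * (2 * math.pi * T_SUN / P_b) ** (5 / 3)
        * enhancement
        * m_p * m_c / (m_p + m_c) ** (1 / 3))
print(f"{dPdt:.2e}")  # ~ -2.40e-12 s/s, i.e. the orbit shrinks as observed
```

The period loses about 76 microseconds per year, and decades of pulsar timing track this to within a fraction of a percent of the prediction.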

Just as accelerated electrical charges will under most circumstances emanate EM radiation, accelerated masses will send out gravitational waves, carrying away some of the kinetic energy of the system.

Of course, gravitational waves have the huge advantage of being, at least in principle, accessible to direct measurement, and this new cosmological evidence gives credence to the persistence in pushing for better gravitational-wave detectors to eventually measure these waves directly.


Update

It didn't take long for some prominent push-back to appear, pointing to discrepancies between the BICEP2 data and previous data from the Planck and WMAP telescopes.

(h/t Sol Warda for prompting me to write this post) 


Posted in Popular Science | 3 Comments

The Science Newscycle

As life keeps me otherwise busy, I am again late in finishing my next blog post, but in the meantime this web comic nicely summarizes much of the news dynamics of science in general and quantum computing in particular (h/t my lovely wife Sara).

"The Science Newscycle" by Jorge Cham


Posted in Popular Science, Quantum Computing | 2 Comments

The Most Important Discovery of the 21st Century

Last year I tried to establish a blog tradition of starting the new year with a hopeful science news item, something that shows enormous technological potential to change the world for the better. But come New Year's, it didn't work out: the quantum computing and D-Wave news was simply moving too fast, and I also didn't come across anything that felt significant enough.

Not any more. Recently a breakthrough discovery was made that has the potential to rival the impact of ammonia synthesis. When Fritz Haber discovered this process in the early 20th century, he single-handedly vanquished famine from the developed world, as fertilizer subsequently became an inexpensive commodity. This new discovery has the potential to do the same for thirst and droughts. It involves a surprising property of graphene and does justice to the hype that this miracle material receives: Although graphene is usually hydrophobic, it can be made to form capillaries that efficiently absorb water. Now researchers at the University of Manchester report having formed layers of graphene oxide that exploit this property to make efficient water filters at the molecular level.

These filters are reported to work astoundingly efficiently, keeping out anything above the size of nine angstroms (9.0 × 10^-10 m) at a speed comparable to an ordinary coffee filter. It is essentially sieving at the molecular level. This is not yet enough to remove ordinary sea salt, but the scientists, who just published their research in last week's issue of Science, are confident that the material can be scaled down to this level.

If so, it will change the world. Desalination of sea water is currently affordable only to the wealthiest countries, as the required investments are staggering and operating the necessary infrastructure is very energy intensive. For instance, Saudi Arabia recently commissioned the world's largest desalination plant for US$1.46 billion. The scope of the project is impressive, yet this amount of money will still only suffice to supply one large metropolis (~3.5M people) with enough water.

According to a new report by the Worldwatch Institute, 1.2 billion people, or nearly a fifth of the world's population, live in areas of physical water scarcity, i.e. places where there is simply not enough water to meet demand. Another 1.6 billion face economic water scarcity, where people do not have the financial means to access existing water sources. If this research succeeds in creating a material that can simply filter out sea salt, and if its production can be scaled up, then this scourge on humankind could be rapidly diminished.

Wars have been fought over water and many more have been predicted.  It is rare that any one area of research has the potential to so dramatically alter the course of history for the better.

Posted in Popular Science | 14 Comments

He Said She Said – How Blogs are Changing the Scientific Discourse

The debate about D-Wave's "quantumness" shows no signs of abating, hitting a new high note with the company being prominently featured on Time magazine's recent cover, prompting a dissection of the article on Scott Aaronson's blog. This was quickly followed by yet another scoop: a rebuttal by Umesh Vazirani to Geordie Rose, who recently blogged about the Vazirani et al. paper which casts doubt on D-Wave's claim to implement quantum annealing. In his take on the Time magazine article, Scott bemoans the 'he said she said' template of journalism, which gives all sides equal weight, while acknowledging that the Time author Lev Grossman quoted him correctly and obviously tried to paint an objective picture.

If I had to pick the biggest shortcoming of the Time article, my choice would have been different. I find Grossman entirely misses Scott's role in this story by describing him as "one of the closest observers of the controversy". Scott isn't just an observer; for better or worse, he is central to this controversy. As far as I can tell, his reporting on D-Wave's original demo is what started it in the first place. Unforgettable was his inspired comparison of the D-Wave chip to a roast beef sandwich, which he then famously retracted when he resigned as D-Wave's chief critic. The latter is something he has done with some regularity: first when D-Wave started to publish results, then after visiting the company, and most recently after the Troyer et al. pre-print appeared on the arXiv (although the second time doesn't seem to count, since it was just a reiteration of the first resignation).

And they say sandwiches and chips go together ...

Scott's resignations never seem to last long. D-Wave has a knack for pushing his buttons. And the way he engages D-Wave and the associated research is indicative of a broader trend in how blogs are changing the scientific discourse. For instance, when Catherine McGeoch gave a talk about her benchmarking of the DW2, Scott did not immediately challenge her directly but took to his blog (a decision he later regretted and apologized for). Anybody who has spent more than five minutes on a web forum knows how immediate, yet text-only, communication removes inhibitions and leads to more forceful exchanges. In the scientific context, this has the interesting effect of colliding head-on with the loftier perception of the scientist. It used to be that arguments were conducted only via scientific publications, in person such as at scientific seminars, or in the occasional exchange of letters. It's interesting to contemplate how corrosive the arguments between Bohr and Einstein might have turned out had they been conducted via blogs rather than in person. But it's not all bad. In the olden days, science could easily be mistaken for a bloodless intellectual game, but nobody could read through the hundreds of comments on Scott's blog that day and come away with that impression. On the contrary, the inevitable conclusion is that scientific arguments are fought with no less passion than the most heated bar brawl.

During this epic blog 'fight', Scott summarized his preference for the medium thusly:

"... I think this episode perfectly illustrates both the disadvantages and the advantages of blogs compared to face-to-face conversation. Yes, on blogs, people misinterpret signals, act rude, and level accusations at each other that they never would face-to-face. But in the process, at least absolutely everything gets out into the open. Notice how I managed to learn orders of magnitude more from Prof. McGeoch from a few blog comments, than I did from having her in the same room ..."

It is by far not the only controversy he has courted, nor is this something unique to his blog. Peter Woit continues the heretical work he started with his 'Not Even Wrong' book, Robert R. Tucci fiercely defends his quantum algorithm work when he feels he is not credited, and Sabine Hossenfelder had to ban a highly qualified string theory troll due to his nastiness (she is also a mum of twins, so you know she has practice in being patient, and it's not as if she doesn't have a good sense of humor). But my second favorite science blog fight also occurred on Scott's blog, when Joy Christian challenged him to a bet to promote his theory that supposedly invalidates the essential non-locality of quantum mechanics due to Bell's theorem.

It's instructive to look at the Joy Christian affair and ask how a mainstream reporter could possibly have reported it. Not knowing Clifford algebra, what could a reporter do but triangulate the expert opinions? There are some outspoken, smart critics who point to mistakes in Joy Christian's reasoning, yet he claims that these objections are based on a flawed understanding and have been repudiated. The reporter will also note that doubting Bell's theorem is very much a minority position, yet such a journalist, not being able to check the math himself, can only fall back on the 'he said she said' template. After all, this is not a simple, straightforward fact, like reporting whether UN inspectors found Saddam Hussein's weapons of mass destruction or not (something that, surprisingly, most mainstream media outside the US accomplished just fine). One cannot expect a journalist to settle an open scientific question.

The nature of the D-Wave story isn't any different; how is Lev Grossman supposed to do anything but report the various stances on each side of the controversy? A commenter at Scott's blog dismissively pointed out that Grossman doesn't even have a science degree. As if this made any difference: everybody else on each side of the story boasts such degrees (non-PhDs are in the minority at D-Wave).

Mainstream media report as they always have, but unsettled scientific questions are the exception to the rule, one of the few cases in which 'he said she said' journalism is actually the best format. For everything else, we fortunately now have the blogs.

Posted in D-Wave, Popular Science, Quantum Computing | 51 Comments

One Video to Rule Them All

Updated below

This is essentially an extended update to my last D-Wave post. Rather than sticking it there, I think it is important enough to merit its own post. The reason being, I wish I could make anybody who plans on writing anything about D-Wave first watch the video below, from the first Q+ Google+ hangout this year.

It summarizes the results of the paper I blogged about in my last post on the matter. Ultimately, it answers what is objectively known about D-Wave's machine based on the analyzed data, and sets out to answer three questions.

  1. Does the machine work?
  2. Is it quantum or classical?
  3. Is it faster than a classical computer?

The short version is:

  1. Yes
  2. Based on their modeling, the D-Wave 2 is indeed a true quantum annealer.
  3. While it can beat an off-the-shelf solver, it cannot (yet) outperform, on average, a highly targeted hand-crafted classical algorithm.

Of course there is much more in the video, and I highly recommend watching the whole thing. It comes with a good introduction to the subject, but if you only want the part about the test, you may want to skip to 11 minutes into the video (this way you also cut out some of the cheap shots at completely clueless popular media reports, an attempt at infusing some humor into the subject that may or may not work for you).


With regard to point (2), the academic discussion is not settled. A paper with heavyweight names on it has just come out (h/t Michael Bacon), proposing that a similar annealing behavior could be accomplished with a classical set-up after all. To me this is truly academic, in the best and worst sense, i.e. a considerable effort to get all the i's dotted and the t's crossed. It simply seems a bit far-fetched that the company would set out to build a chip with coupled qubits that behave like a quantum annealer, yet somehow end up with an oddly behaving classical annealer.

From my point of view it is much more interesting to explore all the avenues that are open to D-Wave to improve their chip, such as this new paper on strategies for a quantum annealer to increase the success probability for hard optimization problems. (h/t Geordie Rose).

Update 1

Geordie Rose weighs in on the paper that claims that the D-Wave machine can be explained classically.  He expected a Christmas present and felt he only got socks ...

Update 2

Helmut Katzgraber et al. propose in this paper that the current benchmarks are using the wrong problem set to possibly find a quantum speed-up with D-Wave's machine.

Posted in D-Wave, Quantum Computing | 26 Comments

Science News that isn’t really News

Usually, I don't blog about things that don't particularly interest me. But even if you are a potted plant (preferably one with a physics degree), you probably have people talking to you about this 'amazing' new paper by Stephen Hawking.

So, I am making the rare exception of re-blogging something, because Sabine Hossenfelder already wrote everything about this that I could possibly want to say, and she did it much better and more convincingly than I would have.

So, if you want to know what to make of Hawking's latest paper head over to the backreaction blog.


Stephen Hawking now thinks that there are only grey holes, which is a step up in the color scheme from black. But in honor of the Sochi Olympics, I really think the world needs rainbow colored black holes.

Posted in Popular Science | 1 Comment

Scott Aaronson (again) resigns as chief D-Wave critic and endorses their experiments

An exercise in positive spin.

Update below.

The English language is astoundingly malleable. It feels almost as if it were tailor-made for marketing spin. I noticed this long ago (feels like a lifetime) when working in a position that required me to sell software. Positioning products was much easier when I spoke English. Mind you, I never told a blatant lie, but I certainly spun the facts to put our product in the best light, and if a customer committed, I'd do my darnedest to deliver the value I had promised. The kind of customers I dealt with were of course aware of this dance, and perfectly capable of performing their due diligence. From their perspective, in the end, it is always about buying into the vision, knowing full well that a cutting-edge technology, one that will give a real competitive benefit, will of course be too new to be without risk.

During the courting of the customers, any sales person worth their salt will do anything to make the product look as good as possible. One aspect of this is of course to stress positive things that others are saying about your offerings.

To accomplish this, selective quoting can come in very handy. For instance, after reviewing the latest pre-print paper that looks at D-Wave's 503 qubit chip performance, Scott Aaronson stepped down for the second time as chief D-Wave critic. In the blog post where he announced this, he also observed that on "the ~10% of instances on which the D-Wave machine does best, (...) the machine does do slightly better (...) than simulated annealing".

This puts in words what the following picture shows in much more detail.


Instance-by-instance comparison of annealing times and wall-clock times. Shown is a scatter plot of the pure annealing time for the DW2 compared to a simulated classical annealer (SA), using an average over 16 gauges on the DW2. This is figure 6 of the recent benchmark paper. Wall-clock times include the time for programming, cooling, annealing, readout and communication. Gauges refer to different encodings of a problem instance. (Only plots A and B are relevant to the settling of my bet.)

Now, if you don't click through to Scott's actual blog post, you may take away that he actually changed his stance. But of course he hasn't. You can look at the above picture and think the glass is ninety percent empty, or you can proclaim it is ten percent full.

The latter may sound hopelessly optimistic, but let's contemplate what we are actually comparing. Current computer chips are the end product of half a century of highly focused R&D, with billions of dollars poured into developing them. Yet we know we are getting to the end of the line of Moore's law. Leakage currents are already a real problem, and the writing is on the wall that we are getting ever closer to the point where the current technology will no longer allow for tighter chip structures.

On the other hand, the D-Wave chip doesn't use transistors. It is an entirely different approach to computing, as profoundly different as the analog computers of yore.

The integration density of a chip is usually characterized by the length of the silicon channel between the source and drain terminals of its field effect transistors (e.g. 25nm). This measure obviously doesn't apply to D-Wave, and the quantum chip's integration density isn't even close to that of current CPUs. Yet with the ridiculously low number of about 500 qubits on D-Wave's chip, which was developed on a shoestring budget compared to the likes of Intel or IBM, the machine still manages to hold its own against a modern CPU.

Yes, this is not a universal gate-based quantum computer, and the NSA won't warm up to it because it cannot implement Shor's algorithm, nor is there a compelling theoretical reason that you can achieve a quantum speed-up with this architecture. What it is, though, is a completely new way to do practical computing, using circuit structures that leave plenty of room at the bottom. In a sense, it resets the clock to when Feynman delivered his famous and prophetic talk on the potential of miniaturization. This is why, from a practical standpoint, I fully expect to see a D-Wave chip eventually unambiguously outperform a classical CPU.

On the other hand, if you look at this through the prism of complexity theory none of this matters, only proof of actual quantum speed-up does.

Scott compares the quantum computing skirmishes he entertains with D-Wave to the American Civil War.

If the D-Wave debate were the American Civil War, then my role would be that of the frothy-mouthed abolitionist pamphleteer

Although clearly tongue-in-cheek, this comparison still doesn't sit well with me. Fortunately, in this war, nobody will lose life or limb. The worst that could happen is a bruised ego. Yet if we have to stick with this metaphor, I don't see this as Gettysburg 1863, but rather as the town of Sandwich, 1812.

Much more will be written about this paper. Once it has fully passed peer review and been published, I will also finally be able to reveal my betting partner. But in the meantime, there is a Google+ meeting scheduled that will allow for more discussion (h/t Mike).


Update 1

Without careful reading of the paper, a casual observer may come away with the impression that this test essentially just pitted hardware against hardware. Nothing could be further from the truth; some considerable effort had to go into constructing impressive classical algorithms to beat the D-Wave machine on its own turf. This Google Quantum AI lab post elaborates on this (h/t Robert R. Tucci).

Update 2

D-Wave's Geordie Rose weighs in.



Posted in D-Wave, Quantum Computing | 23 Comments

Quantum Computing NSA Round-Up


Ever since the Edward Snowden-provided news broke that the NSA spent in excess of $100M on quantum computing, I had meant to address this in a blog post. But Robert R. Tucci beat me to it, and he has some very interesting speculations to add.

He also picked up on this quantum computing article in the South China Morning Post reporting on research efforts in mainland China.  Unfortunately, but unsurprisingly, it is light on technical details. Apparently China follows a shotgun approach of funding all sorts of quantum computing research. The race truly seems to be on.

Not only is China investing in a High Magnetic Field Laboratory to rival the work conducted at the US based NHMFL, but there is also Prof. Wang Haohua's efforts based on superconducting circuitry.

Interestingly, the latter may very well follow a script that Geordie Rose was speculating on when I asked him where he thinks competition in the hardware space may one day originate from.  The smart move for an enterprising Chinese researcher would be to take the government's seed money, and focus on retracing a technological path that has already proven to be commercially successful.  This won't get the government an implementation of Shor's algorithm any faster, but adiabatic factorization may be a consolation prize.  After all, that one was already made in China.

But do the NSA revelations really change anything? Hopefully they will add some fuel to the research efforts, but at this point that will be the only effect. The NSA has many conventional ways to listen in on mostly unsecured Internet traffic. On the other hand, RSA with a sufficiently long key length is still safe. For now, if customers were to switch to email hardened in this way, it would certainly make the snoops' job significantly harder.

Posted in Quantum Computing | 34 Comments

Here be Fusion

During my autumn travel to the Canadian West Coast, I was given the opportunity to visit yet another high-tech start-up with a vision no less radical and bold than D-Wave's.

I have written about General Fusion before, and it was a treat to tour their expanding facility and to ask any question I could think of. The company made some news when they attracted investment from Jeff Bezos, but despite this new influx of capital, in the world of fusion research they are operating on a shoestring budget. This makes it all the more impressive how much they have already accomplished.

At the outset, the idea seems ludicrous: How could a small start-up company possibly hope to accomplish something that the multi-national ITER consortium attempts with billions of dollars? Yet the approach they are following is scientifically sound, albeit one that has fallen out of favor with the mainstream of plasma physicists. It is an approach that is incidentally well suited to smaller-scale experiments, and the shell of the experiment that started it all is now on display in the reception area of General Fusion.


Doug Richardson, General Fusion co-founder, is a man on a mission who brings an intense focus to the job. Yet, when prompted by the receptionist, he managed a smile for this photo, which shows him next to the shell from the original experiment that started it all. The other founder and key driving force, Michel Laberge, was unfortunately out of town during the week of my visit.

Popular Science was the first major media outlet to take note of the company. It is very instructive to read the article they wrote back then to get a sense of how much bigger this undertaking has become. Of course, getting neutrons from fusion is one thing; getting excess energy is an entirely different matter. After all, the company that this start-up modeled its name after was enthusiastically demonstrating fusion to the public many decades ago.

But the lackluster progress of the conventional approach to fusion does not deter the people behind this project; rather, it seems to add to their sense of urgency. What struck me when first coming on site was the no-nonsense industrial feel of the entire operation.  The company rents some nondescript buildings, the interior more manufacturing floor than gleaming laboratory, every square inch purposefully utilized to run several R&D streams in parallel.  Even before talking to co-founder Doug Richardson, the premises themselves sent a clear message: this is an engineering approach to fusion, and they are in a hurry. This is why, rather than focusing on just one aspect of the machine, they decided to work on several in parallel.

When asked where I wanted to start my tour, I opted for the optically most impressive piece, the scaled-down reactor core with its huge attached pistons.  I wanted to scrutinize this first because, in my experience, this mechanical behemoth is what casual outside observers usually object to, based on the naive assumption that so many moving parts under such high mechanical stresses make for problematic technology. Doug met this argument with derision; in his mind this is the easy part, just a matter of selecting the right materials and precision mechanical engineering.  The common objection that moving parts mean wear and tear he swatted aside just as easily.  A layperson introduced to the concept is usually uncomfortable with the idea that pistons could produce this enormous pressure; after all, everybody is well acquainted with the limited lifetime of a car engine that has to endure far less.  Doug turned this analogy on its head, pointing out that a stationary mounted engine can run uninterrupted for a long time, and that reliability typically increases with scale.

Currently they have a 3:1 scaled-down reactor chamber built to test the vortex compression system (shown in the picture below).

vortex test reactor

The test version has a reactor sphere diameter of 1 m; the envisioned final product will be three times that size.  Still a fairly compact envelope, but too large to be hosted in this building.

Another of my concerns with this piece of machinery was the level of accuracy required to align the piston cylinders. The final product will require 200 of them, and if the system is sensitive to misalignment it is easy to imagine how this could impact its reliability.

It came as a bit of a surprise that the precision required is actually less than I expected: 50 microns (half a tenth of a millimeter) should suffice, and in terms of timing, the synchronicity can tolerate deviations of up to 10 microseconds, ten times more than initially expected. This is due to a nice property that GF's research uncovered during the experiments: the spherical shock wave they are creating within the reactor chamber is self-stabilizing. The phase shift when one of the actuators is slightly out of line causes a self-correcting interference that helps to keep the ingoing compression symmetric as it travels through the vortex of molten lead-lithium at the heart of the machine.
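To get an intuition for why a machine with many actuators can tolerate modest individual timing errors, here is a toy model of my own devising (emphatically not GF's actual analysis): each piston contributes a unit phasor to the converging wave, and its timing jitter becomes a random phase error. The ~1 ms compression period is a purely illustrative assumption.

```python
import cmath
import math
import random

def coherent_fraction(n_pistons, timing_jitter_s, period_s, seed=0):
    """Toy model: each piston contributes a unit phasor; Gaussian
    timing jitter becomes a random phase error. Returns the magnitude
    of the mean phasor, i.e. the fraction of the perfectly
    synchronised amplitude that survives the jitter."""
    rng = random.Random(seed)
    sigma_phase = 2 * math.pi * timing_jitter_s / period_s
    total = sum(cmath.exp(1j * rng.gauss(0.0, sigma_phase))
                for _ in range(n_pistons))
    return abs(total) / n_pistons

# 200 pistons, 10 microseconds RMS jitter, assumed ~1 ms period
print(round(coherent_fraction(200, 10e-6, 1e-3), 3))
```

In this toy picture the independent per-piston errors largely cancel when 200 contributions are summed, so 10 microseconds of jitter on a millisecond-scale period costs well under a percent of the coherent amplitude. The real machine is of course far more complicated, but the statistics convey why averaging over many actuators is forgiving.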

The reason for this particular metal mix within the reactor is the shielding properties of lead, and the fact that lithium-6 has a large neutron absorption cross section that allows for breeding tritium fuel. This is a very elegant design that ensures that, if the machine gets to the point of igniting fusion, there will be no neutron activation problems like those which plague conventional approaches (in a tokamak design as used by ITER, the neutrons, which cannot be magnetically confined, bombard the reactor wall, eventually wearing it down and turning it radioactive).
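The breeding step relies on the well-known exothermic neutron capture reaction on lithium-6:

```latex
{}^{6}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.78\,\mathrm{MeV}
```

Since each D-T fusion event consumes one triton and releases one neutron, capturing those neutrons in the lithium blanket can in principle close the fuel cycle while simultaneously shielding the structure.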

Doug stressed that this reflects their engineering mindset: they need to get these problems under control from the get-go, whereas huge projects like ITER can afford to kick the can down the road, first measuring the scope of the problem and then hoping to address it with a later research effort (which is then supposed to provide a solution to a problem that General Fusion's approach eliminates altogether).

Another aspect of the design that I originally did not understand is that plasma will be injected from both sides of the sphere simultaneously, so that the overall momentum of the plasma cancels out at the center.  That is, the incoming shock wave doesn't have to hit a moving target.

The following YouTube video animation uploaded by the company illustrates how all these pieces are envisioned to work together.


Managing the plasma's properties and dynamics, i.e. avoiding unwanted turbulence that may reduce temperature and/or density, is the biggest technological challenge.

To create plasma of the required quality, and to get it into place, the company constructed some more impressive machinery.  It is a safe bet that these are the largest plasma injectors ever built.

Plasma Injector

Admittedly, comparing this behemoth to the small plasma chamber in the upper left corner is comparing apples to oranges, but then this machine is in a class of its own.

In studying the plasma parameters, the company found that the theoretical calculations had actually led to an over-engineering of this injector, and that smaller ones may be adequate for creating plasma of the desired density. But of course creating and injecting the plasma is only the starting point.  The most critical aspect is how this plasma behaves under compression.

To fully determine this, GF faces the same research challenges as the related magnetized target fusion research program in the US. That is, General Fusion needs to perform tests similar to those conducted at the SHIVA STAR Air Force facility in Albuquerque. In fact, due to budget cut-backs, SHIVA STAR has spare capacity that could be used by GF, but exaggerated US security regulations unfortunately prevent such cooperation; it is highly doubtful that civilian Canadians would be allowed access to a military-class facility.  So the company has to improvise and come up with its own approach to this kind of implosion test. The photo below shows an array of sensors used to scrutinize the plasma during one of these tests.


Understanding the characteristics of the plasma when imploded is critical; these sensors on top of one of the experimental set-ups are there to collect the crucial data. Many such experiments will be required before enough data has been amassed.

Proving that they can achieve the next target compression benchmark is critical in order to continue to receive funding from the federal Canadian SDTC fund.  The latter is the only source of governmental fusion funding, as Canada has no dedicated program for fusion research and even turned its back on the ITER consortium. This is a far cry from Canada's technological vision in the sixties, which resulted in nuclear leadership with the unique CANDU design. Yet there is no doubt that General Fusion has been doing the most with the limited funds it has received.

Here's hoping that the Canadian government may eventually wake up to the full potential of a fusion reactor design 'made in Canada' and start looking beyond the oil patch for its energy security (although this will probably require that the torch be passed to a more visionary leadership in Ottawa).


An obligatory photo for any visitor to General Fusion. Unfortunately, I forgot my cowboy hat.


Update: What a start into 2014 for this blog.  This post has been featured on Slashdot and received over 11K views within three days.  Some of the commenters on Slashdot wanted to dig deeper into the science of General Fusion. For those who want to follow through on this, the company's papers, and those that describe important results GF builds on, can be found on their site. In addition, specifically for the unique vortex technology, I find James Gregson's Master's thesis very informative.

Update 2: General Fusion can be followed on Twitter @MTF_Fusion (h/t Nathan Gilliland)

Update 3: Some Canadian mainstream media like the Edmonton Journal have also noticed the conspicuous absence of dedicated fusion research.  Ironically, the otherwise well-written article argues for an Alberta-based research program while not mentioning General Fusion once.  This despite the fact that the company is right next door (by Canadian standards) and in fact has one major Alberta-based investor, the oil company Cenovus Energy.

Posted in Popular Science | Tagged | 24 Comments