The Most Important Discovery of the 21st Century

Last year I tried to establish a blog tradition of starting the new year with a hopeful science news item, something that shows enormous technological potential to change the world for the better. But come New Year's, it didn't work out: the Quantum Computing and D-Wave news was simply moving too fast, and I also didn't come across anything that felt significant enough.

Not any more. Recently a breakthrough discovery has been made that has the potential to rival the impact of ammonia synthesis. When Fritz Haber developed this process in the early 20th century, he single-handedly vanquished famine from the developed world, as fertilizer subsequently became an inexpensive commodity. This new discovery has the potential to do the same for thirst and drought. It involves a surprising property of graphene and does justice to the hype that this miracle material receives: although graphene is usually hydrophobic, it can be made to form capillaries that transport water with remarkable efficiency. Now researchers at the University of Manchester report having formed layers of graphene oxide that exploit this property to make efficient water filters at the molecular level.

These filters are reported to work astoundingly well, keeping out anything larger than nine angstroms (9 × 10⁻¹⁰ m) at a speed comparable to an ordinary coffee filter. It is essentially sieving at the molecular level. This is not yet enough to remove ordinary sea salt, but the scientists, who just published their research in last week's issue of Science, are confident that the material can be scaled down to this level.
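
To put the nine-angstrom figure in perspective, here is a rough comparison with the approximate hydrated diameters of common sea-salt ions. These diameters are ballpark literature values I am quoting from memory, not numbers taken from the paper:

```python
# Compare the reported ~9 Angstrom cutoff with approximate hydrated ion
# diameters (rough literature values, NOT from the Science paper itself).
PORE_CUTOFF_ANGSTROM = 9.0

hydrated_ion_diameter_angstrom = {
    "Na+ (hydrated)": 7.2,
    "Cl- (hydrated)": 6.6,
    "Mg2+ (hydrated)": 8.6,
}

for ion, d in hydrated_ion_diameter_angstrom.items():
    verdict = "blocked" if d >= PORE_CUTOFF_ANGSTROM else "still passes through"
    print(f"{ion}: ~{d} A -> {verdict}")

# With a ~9 A cutoff the hydrated salt ions still slip through, which is why
# the pores have to be shrunk further before the membrane can desalinate seawater.
```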

If so, it will change the world. Desalination of sea water is currently affordable only to the wealthiest countries: the required investments are staggering, and operating the necessary infrastructure is very energy intensive. For instance, Saudi Arabia recently commissioned the world's largest desalination plant for US$1.46 billion. The scope of the project is impressive, yet this amount of money will still only suffice to supply one large metropolis (~3.5M people) with enough water.
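
A quick back-of-the-envelope calculation shows why (capital cost only; the energy and operating costs, which dominate over a plant's lifetime, are deliberately ignored here):

```python
# Back-of-the-envelope: up-front capital cost per person served by the Saudi
# plant mentioned above. Operating and energy costs are left out on purpose.
plant_cost_usd = 1.46e9   # reported construction cost
people_supplied = 3.5e6   # roughly one large metropolis

cost_per_person = plant_cost_usd / people_supplied
print(f"~${cost_per_person:,.0f} per person before a single litre is produced")
# => roughly $420 per person in capital alone, far beyond what most
#    water-scarce regions can raise.
```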

According to a new report by the Worldwatch Institute, 1.2 billion people, or nearly a fifth of the world's population, live in areas of physical water scarcity, i.e. places where there is simply not enough water to meet demand. Another 1.6 billion face economic water scarcity, where people do not have the financial means to access existing water sources. If this research succeeds in creating a material that can simply filter out sea salt, and if its production can be scaled up, then this scourge on humankind could be rapidly diminished.

Wars have been fought over water and many more have been predicted.  It is rare that any one area of research has the potential to so dramatically alter the course of history for the better.

Posted in Popular Science | 14 Comments

He Said She Said – How Blogs are Changing the Scientific Discourse

The debate about D-Wave's "quantumness" shows no signs of abating, hitting a new high note with the company being prominently featured on Time magazine's recent cover, prompting a dissection of the article on Scott Aaronson's blog. This was quickly followed by yet another scoop: a rebuttal by Umesh Vazirani to Geordie Rose, who had recently blogged about the Vazirani et al. paper which casts doubt on D-Wave's claim to implement quantum annealing. In his take on the Time magazine article, Scott bemoans the 'he said she said' template of journalism, which gives all sides equal weight, while acknowledging that Time's author Lev Grossman quoted him correctly and obviously tried to paint an objective picture.

If I had to pick the biggest shortcoming of the Time article, my choice would have been different. I find Grossman entirely misses Scott's role in this story by describing him as "one of the closest observers of the controversy". Scott isn't just an observer in this; for better or worse, he is central to this controversy. As far as I can tell, his reporting on D-Wave's original demo is what started it to begin with. Unforgettable was his inspired comparison of the D-Wave chip to a roast beef sandwich, which he then famously retracted when he resigned as D-Wave's chief critic. The latter is something he has done with some regularity: first when D-Wave started to publish results, then after visiting the company, and most recently after the Troyer et al. pre-print appeared on the arXiv (although the second time doesn't seem to count, since it was just a reiteration of the first resignation).

And they say sandwiches and chips go together ...

Scott's resignations never seem to last long. D-Wave has a knack for pushing his buttons. And the way he engages D-Wave and the associated research is indicative of a broader trend in how blogs are changing the scientific discourse. For instance, when Catherine McGeoch gave a talk about her benchmarking of the DW2, Scott did not immediately challenge her directly but took to his blog (a decision he later regretted and apologized for). Anybody who has spent more than five minutes on a Web forum knows how immediate, yet text-only, communication removes inhibitions and leads to more forceful exchanges. In the scientific context, this has the interesting effect of colliding head-on with the loftier public image of the scientist. It used to be that arguments were conducted only via scientific publications, in person at seminars, or in the occasional exchange of letters. It's interesting to contemplate how corrosive the arguments between Bohr and Einstein might have turned out had they been conducted via blogs rather than in person. But it's not all bad. In the olden days, science could easily be mistaken for a bloodless intellectual game, but nobody could read through the hundreds of comments on Scott's blog that day and come away with that impression. On the contrary, the inevitable conclusion is that scientific arguments are fought with no less passion than the most heated bar brawl.

During this epic blog 'fight', Scott summarized his preference for the medium thusly:

"... I think this episode perfectly illustrates both the disadvantages and the advantages of blogs compared to face-to-face conversation. Yes, on blogs, people misinterpret signals, act rude, and level accusations at each other that they never would face-to-face. But in the process, at least absolutely everything gets out into the open. Notice how I managed to learn orders of magnitude more from Prof. McGeoch from a few blog comments, than I did from having her in the same room ..."

It is by far not the only controversy he has courted, nor is this something unique to his blog. Peter Woit continues the heretical work he started with his 'Not Even Wrong' book, Robert R. Tucci fiercely defends his quantum algorithm work when he feels he is not credited, and Sabine Hossenfelder had to ban a highly qualified String theory troll due to his nastiness (she is also a mum of twins, so you know she has practice in being patient, and it's not like she doesn't have a good sense of humor). But my second favorite science blog fight also occurred on Scott's blog, when Joy Christian challenged him to a bet to promote his theory that supposedly invalidates the essential non-locality that quantum mechanics exhibits due to Bell's theorem.
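
For readers who need a refresher on what is actually being disputed: in its CHSH form, Bell's theorem puts a hard ceiling on the correlations any local hidden-variable model can produce, a ceiling that quantum mechanics demonstrably exceeds. This is the standard textbook statement, summarized here only for context:

```latex
% CHSH correlator built from measurements along settings a, a' (Alice) and b, b' (Bob):
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b')

% Any local hidden-variable model:        |S| <= 2
% Quantum mechanics (Tsirelson's bound):  |S| <= 2*sqrt(2), roughly 2.83
\lvert S \rvert \;\le\; 2 \quad \text{(local realism)}, \qquad
\lvert S \rvert \;\le\; 2\sqrt{2} \quad \text{(quantum mechanics)}
```

Joy Christian's claim, as I understand it, is that a local model formulated in Clifford algebra can reproduce the quantum correlations anyway, which is precisely what his critics dispute.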

It's instructive to look at the Joy Christian affair and ask how a mainstream reporter could possibly have covered it. Not knowing Clifford algebra, what could a reporter do but triangulate the expert opinions? There are some outspoken, smart critics who point to mistakes in Joy Christian's reasoning, yet he claims that these criticisms are based on a flawed understanding and have been repudiated. The reporter will also note that doubting Bell's theorem is very much a minority position, yet a journalist who cannot check the math himself can only fall back on the 'he said she said' template. After all, this is not a simple, straightforward fact, like reporting whether or not UN inspectors found Saddam Hussein's weapons of mass destruction (something that, surprisingly, most mainstream media outside the US accomplished just fine). One cannot expect a journalist to settle an open scientific question.

The nature of the D-Wave story isn't any different: how is Lev Grossman supposed to do anything but report the various stances on each side of the controversy? A commenter at Scott's blog dismissively pointed out that Grossman doesn't even have a science degree. As if that made any difference; it's not as though the people on either side of the story don't boast such degrees (non-PhDs are in the minority at D-Wave).

Mainstream media report as they always have, but unsettled scientific questions are the exception to the rule: one of the few cases in which 'he said she said' journalism is actually the best available format. For everything else we fortunately now have the blogs.

Posted in D-Wave, Popular Science, Quantum Computing | 51 Comments

One Video to Rule Them All

Updated below

This is essentially an extended update to my last D-Wave post. Rather than stick it there, I think it is important enough to merit its own post. The reason is that I wish I could make anybody who plans on writing anything about D-Wave first watch the video below, from the first Q+ Google+ hangout of this year.

It summarizes the results of the paper I blogged about in my last post on the matter. Ultimately, it lays out what is objectively known about D-Wave's machine based on the analyzed data, and sets out to answer three questions:

  1. Does the machine work?
  2. Is it quantum or classical?
  3. Is it faster than a classical computer?

The short version is

  1. Yes
  2. Based on their modeling, the D-Wave 2 is indeed a true quantum annealer.
  3. While it can beat an off-the-shelf solver, it cannot (yet) outperform, on average, a highly targeted, hand-crafted classical algorithm (for a sense of what such a classical baseline looks like, see the sketch right after this list).
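
To put point (3) in perspective: the classical competitor in these benchmarks is, among other things, simulated annealing on random Ising spin-glass instances. For readers who have never seen it, here is a minimal, purely illustrative sketch of what such a classical annealer does; it is emphatically not the optimized code used in the actual study:

```python
import math
import random

def simulated_annealing(J, h, n, sweeps=1000, T_start=3.0, T_end=0.05):
    """Minimal simulated annealing for an Ising energy
    E(s) = sum_{i<j} J[i][j]*s[i]*s[j] + sum_i h[i]*s[i], with s[i] in {-1,+1}.
    Purely illustrative -- the benchmark papers use heavily optimized code."""
    s = [random.choice((-1, 1)) for _ in range(n)]

    def local_field(i):
        # field felt by spin i from its neighbours and its bias term
        return h[i] + sum(J[i][j] * s[j] for j in range(n) if j != i)

    for sweep in range(sweeps):
        # geometric temperature schedule from T_start down to T_end
        T = T_start * (T_end / T_start) ** (sweep / max(sweeps - 1, 1))
        for i in range(n):
            dE = -2 * s[i] * local_field(i)   # energy change if spin i is flipped
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i] = -s[i]                  # Metropolis acceptance
    energy = sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    energy += sum(h[i] * s[i] for i in range(n))
    return s, energy

# Toy instance: 8 spins with random +/-1 couplings and no local fields,
# a (tiny) cousin of the random Ising instances used in the benchmark.
random.seed(0)
n = 8
J = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.choice((-1, 1))
h = [0] * n
print(simulated_annealing(J, h, n))
```

The D-Wave hardware does the physical analogue of this: instead of simulating thermal spin flips in software, it lets real superconducting flux qubits relax towards low energy while a transverse field is slowly switched off.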

Of course there is much more in the video, and I highly recommend watching the whole thing. It comes with a good introduction to the subject, but if you only want the part about the test, you may want to skip to the 11-minute mark (this way you also cut out some of the cheap shots at completely clueless popular media reports - an attempt at infusing some humor into the subject that may or may not work for you).

 

With regard to point (2), the academic discussion is not settled. A paper with heavyweight names on it just came out (h/t Michael Bacon). It proposes that a similar annealing behavior could be accomplished with a classical set-up after all. To me this is truly academic in the best and worst sense, i.e. a considerable effort to get all the i's dotted and the t's crossed. It simply seems a bit far-fetched that the company would set out to build a chip with coupled qubits that behave like a quantum annealer, yet somehow end up with an oddly behaving classical annealer.

From my point of view it is much more interesting to explore all the avenues that are open to D-Wave to improve their chip, such as this new paper on strategies for a quantum annealer to increase the success probability for hard optimization problems (h/t Geordie Rose).

Update 1

Geordie Rose weighs in on the paper that claims that the D-Wave machine can be explained classically.  He expected a Christmas present and felt he only got socks ...

Update 2

Helmut Katzgraber et al. propose in this paper that the current benchmarks use the wrong problem set for detecting a possible quantum speed-up with D-Wave's machine.

Posted in D-Wave, Quantum Computing | 26 Comments

Science News that isn’t really News

Usually, I don't blog about things that don't particularly interest me. But even if you are a potted plant (preferably one with a physics degree), you have probably had people talking to you about this 'amazing' new paper by Stephen Hawking.

So, I am making the rare exception of re-blogging something, because Sabine Hossenfelder already wrote everything about this that I could possibly want to say, and she did it much better and more convincingly than I would have.

So, if you want to know what to make of Hawking's latest paper head over to the backreaction blog.


Stephen Hawking now thinks that there are only grey holes, which is a step up in the color scheme from black. But in honor of the Sochi Olympics, I really think the world needs rainbow colored black holes.

Posted in Popular Science | 1 Comment

Scott Aaronson (again) resigns as chief D-Wave critic and endorses their experiments

An exercise in positive spin.

Update below.

The English language is astoundingly malleable. It feels almost as if it were tailor-made for marketing spin. I noticed this long ago (feels like a lifetime) when working in a position that required me to sell software. Positioning products was much easier when I spoke English. Mind you, I never told a blatant lie, but I certainly spun the facts to put our product in the best light, and if a customer committed I'd do my darnedest to deliver the value that I promised. The kind of customers I dealt with were of course aware of this dance, and perfectly capable of performing their due diligence. From their perspective, in the end, it is always about buying into the vision, knowing full well that a cutting-edge technology, one that will give a real competitive benefit, will of course be too new to be without risk.

During the courting of the customers, any sales person worth their salt will do anything to make the product look as good as possible. One aspect of this is of course to stress positive things that others are saying about your offerings.

To accomplish this, selective quoting can come in very handy. For instance, after reviewing the latest pre-print paper that looks at the performance of D-Wave's 503-qubit chip, Scott Aaronson stepped down for the second time as chief D-Wave critic. In the blog post where he announced this, he also observed that on "the ~10% of instances on which the D-Wave machine does best, (...) the machine does do slightly better (...) than simulated annealing".

This puts in words what the following picture shows in much more detail.


Instance-by-instance comparison of annealing times and wall-clock times. Shown is a scatter plot of the pure annealing time for the DW2 compared to a simulated classical annealer (SA), using an average over 16 gauges on the DW2. This is figure 6 of the recent benchmark paper. Wall-clock times include the time for programming, cooling, annealing, readout and communication. Gauges refer to different encodings of a problem instance. (Only plots A and B are relevant to the settling of my bet.)

Now, if you don't click through to Scott's actual blog post, you may take away the impression that he actually changed his stance. But of course he hasn't. You can look at the above picture and think the glass is ninety percent empty, or you can proclaim it is ten percent full.
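
To make the 'ten percent full' reading concrete: figure 6 is an instance-by-instance comparison, and the headline statistic is simply the fraction of instances on which the DW2's annealing time beats the simulated annealer's. With per-instance timing data in hand, the computation is trivial (the numbers below are made up for illustration; they are not the paper's data):

```python
import random

# Hypothetical per-instance annealing times in microseconds. These are
# made-up stand-ins for the ~1000 random Ising instances in the benchmark,
# NOT the published data.
random.seed(1)
dw2_times = [random.lognormvariate(3.0, 1.0) for _ in range(1000)]
sa_times = [random.lognormvariate(2.7, 1.0) for _ in range(1000)]

wins = sum(d < s for d, s in zip(dw2_times, sa_times))
print(f"DW2 faster on {wins / len(dw2_times):.0%} of instances")

# The optimist quotes the subset of instances where the DW2 wins;
# the pessimist quotes everything else. Same scatter plot, two headlines.
```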

The latter may sound hopelessly optimistic, but let's contemplate what we are actually comparing. Current computer chips are the end product of half a century of highly focused R&D, with billions of dollars poured into developing them. Yet we know we are getting to the end of the line of Moore's law. Leakage currents are already a real problem, and the writing is on the wall that we are getting ever closer to the point where the current technology will no longer allow for tighter chip structures.

On the other hand, the D-Wave chip doesn't use transistors. It is an entirely different approach to computing, as profoundly different as the analog computers of yore.

The integration density of a chip is usually characterized by the length of the silicon channel between the source and drain terminals of its field effect transistors (e.g. 25 nm). This measure obviously doesn't apply to D-Wave, but the quantum chip's integration density isn't anywhere close to that regime. With the ridiculously low number of about 500 qubits on D-Wave's chip, which was developed on a shoestring budget compared to the likes of Intel or IBM, the machine still manages to hold its own against a modern CPU.

Yes, this is not a universal gate-based quantum computer, and the NSA won't warm up to it because it cannot implement Shor's algorithm, nor is there a compelling theoretical reason that you can achieve a quantum speed-up with this architecture. What it is, though, is a completely new way to do practical computing using circuit structures that leave plenty of room at the bottom. In a sense, it is resetting the clock to when Feynman delivered his famous and prophetic talk on the potential of miniaturization. Which is why, from a practical standpoint, I fully expect to see a D-Wave chip eventually unambiguously outperform a classical CPU.

On the other hand, if you look at this through the prism of complexity theory, none of this matters; only proof of an actual quantum speed-up does.

Scott compares the quantum computing skirmishes he engages in with D-Wave to the American Civil War.

If the D-Wave debate were the American Civil War, then my role would be that of the frothy-mouthed abolitionist pamphleteer

Although clearly tongue in cheek, this comparison still doesn't sit well with me. Fortunately, in this war, nobody will lose life or limb. The worst that could happen is a bruised ego, yet if we have to stick with this metaphor, I don't see this as Gettysburg 1863 but as the town of Sandwich in 1812.

Much more will be written on this paper. Once it has fully passed peer review and been published, I will also finally be able to reveal my betting partner. But in the meantime there is a Google+ meeting scheduled that will allow for more discussion (h/t Mike).

Update

Without carefully reading the paper, a casual observer may come away with the impression that this test essentially just pitted hardware against hardware. Nothing could be further from the truth: considerable effort had to go into constructing impressive classical algorithms that could beat the D-Wave machine on its own turf. This Google Quantum AI lab post elaborates on this (h/t Robert R. Tucci).

Update 2

D-Wave's Geordie Rose weighs in.

Posted in D-Wave, Quantum Computing | 23 Comments

Quantum Computing NSA Round-Up


Ever since the news, courtesy of Edward Snowden, broke that the NSA is spending in excess of $100M on quantum computing, I have been meaning to address this in a blog post. But Robert R. Tucci beat me to it and has some very interesting speculations to add.

He also picked up on this quantum computing article in the South China Morning Post reporting on research efforts in mainland China.  Unfortunately, but unsurprisingly, it is light on technical details. Apparently China follows a shotgun approach of funding all sorts of quantum computing research. The race truly seems to be on.

Not only is China investing in a High Magnetic Field Laboratory to rival the work conducted at the US-based NHMFL, but there are also Prof. Wang Haohua's efforts based on superconducting circuitry.

Interestingly, the latter may very well follow a script that Geordie Rose speculated about when I asked him where he thinks competition in the hardware space may one day originate. The smart move for an enterprising Chinese researcher would be to take the government's seed money and focus on retracing a technological path that has already proven commercially successful. This won't get the government an implementation of Shor's algorithm any faster, but adiabatic factorization may be a consolation prize. After all, that one was already made in China.
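
For context, 'adiabatic factorization' means recasting factoring as a ground-state search: write the unknown factors in binary, build a cost function that vanishes exactly when the product matches, and let an annealing-type machine relax into that minimum. Schematically (my paraphrase of the general recipe, not the specific Hamiltonian used in the Chinese experiment):

```latex
% Cost function for factoring N: non-negative, and zero exactly when p*q = N.
H(p, q) \;=\; \bigl(N - p\,q\bigr)^{2},
\qquad
p = \sum_{i} 2^{i} x_{i}, \quad q = \sum_{j} 2^{j} y_{j}, \quad x_{i}, y_{j} \in \{0, 1\}

% Expanding H in the binary variables x_i, y_j yields a spin Hamiltonian whose
% ground state encodes the factors -- the problem an adiabatic machine then anneals into.
```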

But do the NSA revelations really change anything? Hopefully they will add some fuel to the research efforts, but at this point that will be the only effect. The NSA has many conventional ways to listen in on mostly unsecured Internet traffic. On the other hand, RSA with a sufficiently long key length is still safe for now; if customers were to switch to email that is hardened in this way, it would certainly make the snoops' job significantly harder.

Posted in Quantum Computing | 34 Comments

Here be Fusion

During my autumn travels to the Canadian West Coast I was given the opportunity to visit yet another high-tech start-up with a vision no less radical and bold than D-Wave's.

I have written about General Fusion before, and it was a treat to tour their expanding facility and to ask any question I could think of. The company made some news when they attracted investment from Jeff Bezos, but despite this new influx of capital, in the world of fusion research they are operating on a shoestring budget. This makes it all the more impressive how much they have already accomplished.

At the outset, the idea seems ludicrous: how could a small start-up company possibly hope to accomplish something that the multi-national ITER consortium attempts with billions of dollars? Yet the approach they are following is scientifically sound, albeit one that has fallen out of favor with the mainstream of plasma physicists. It is an approach that is, incidentally, well suited to smaller-scale experiments, and the shell of the experiment that started it all is now on display in the reception area of General Fusion.


Doug Richardson, General Fusion co-founder, is a man on a mission who brings an intense focus to the job. Yet, when prompted by the receptionist, he managed a smile for this photo, which shows him next to the shell from the original experiment that started it all. The other founder and key driving force, Michel Laberge, was unfortunately out of town during the week of my visit.

Popular Science was the first major media outlet to take note of the company. It is very instructive to read the article they wrote back then to get a sense of how much bigger this undertaking has become. Of course, getting neutrons from fusion is one thing; getting excess energy is an entirely different matter. After all, the company that this start-up modeled its name after was enthusiastically demonstrating fusion to the public many decades ago.

But the lackluster progress of the conventional approach to fusion does not deter the people behind this project; rather, it seems to add to their sense of urgency. What struck me when first coming on site was the no-nonsense industrial feel of the entire operation. The company is renting some nondescript buildings, the interior more manufacturing floor than gleaming laboratory, every square inch purposefully utilized to run several R&D streams in parallel. Even before talking to co-founder Doug Richardson, the premises themselves sent a clear message: this is an engineering approach to fusion, and they are in a hurry. This is why, rather than just focusing on one aspect of the machine, they decided to work on several in parallel.

When asked where I wanted to start my tour, I opted for the optically most impressive piece, the scaled-down reactor core with its huge attached pistons. The reason I wanted to scrutinize this first is that, in my experience, this mechanical behemoth is what casual outside observers usually object to, based on the naive assumption that so many moving parts under such high mechanical stresses make for problematic technology. This argument was met with Doug's derision. In his mind this is the easy part, just a matter of selecting the right material and of precision mechanical engineering. My point that moving parts mean wear and tear he swatted aside just as easily. In my experience, a layperson introduced to the concept is usually uncomfortable with the idea that pistons could be used to produce this enormous pressure; after all, everybody is well acquainted with the limited lifetime of a car engine that has to endure far less. Doug turned this analogy on its ear, pointing out that a stationary mounted engine can run uninterrupted for a long time, and that reliability typically increases with scale.

Currently they have a 3:1 scaled-down reactor chamber, built to test the vortex compression system (shown in the picture below).

vortex test reactor

The test version has a reactor sphere diameter of 1m. The envisioned final product will be three times the size.  Still a fairly compact envelope, but too large to be hosted in this building.

Another of my concerns with this piece of machinery was the level of accuracy required to align the piston cylinders. The final product will require 200 of them, and if the system is sensitive to misalignment it is easy to imagine how this could impact its reliability.

It came as a bit of a surprise that the precision required is actually less than I expected: 50 microns (a twentieth of a millimeter) should suffice, and in terms of timing, the synchronicity can tolerate deviations of up to 10 microseconds, ten times more than initially expected. This is due to a nice property that the GF research uncovered during the experiments: the spherical shock wave they are creating within the reactor chamber is self-stabilizing, i.e. the phase shift when one of the actuators is slightly out of line causes a self-correcting interference that helps to keep the ingoing compression symmetric as it travels through the vortex of molten lead-lithium at the heart of the machine.

The reason for this particular metal mix within the reactor is the shielding properties of lead, and the fact that lithium-6 has a large neutron absorption cross section, which allows for breeding tritium fuel. This is a very elegant design: it ensures that if the machine gets to the point of igniting fusion, there will be none of the neutron activation problems that plague conventional approaches (in a tokamak design as used by ITER, neutrons, which cannot be magnetically contained, bombard the reactor wall, eventually wearing it down and turning it radioactive).
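
For reference, the two textbook reactions behind that design choice are the deuterium-tritium fusion reaction and the tritium-breeding neutron capture on lithium-6 (standard values, quoted here from memory rather than from GF's documentation):

```latex
% Deuterium-tritium fusion: most of the released energy is carried off by a fast neutron.
\mathrm{D} + \mathrm{T} \;\longrightarrow\; {}^{4}\mathrm{He}\ (3.5\,\mathrm{MeV}) + n\ (14.1\,\mathrm{MeV})

% Tritium breeding in the lead-lithium liner: the same neutron is captured by lithium-6
% and regenerates the tritium fuel while depositing its energy in the liquid metal.
n + {}^{6}\mathrm{Li} \;\longrightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8\,\mathrm{MeV}
```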

Doug stressed that this reflects their engineering mindset. They need to get these problems under control from the get-go, whereas huge projects like ITER can afford to kick the can down the road, i.e. first measure the scope of the problem and then hope to address it with a later research effort (one that is then supposed to solve a problem General Fusion's approach eliminates altogether).

Another aspect of the design that I originally did not understand is the fact that plasma will be injected from both sides of the sphere simultaneously, so that the overall momentum of the plasma cancels out at the center, i.e. the incoming shock wave doesn't have to hit a moving target.

The following YouTube video animation uploaded by the company illustrates how all these pieces are envisioned to work together.

 

Managing the plasma properties and its dynamics, i.e. avoiding unwanted turbulence that may reduce temperature and/or density, is the biggest technological challenge.

To create plasma of the required quality, and to get it into place, the company has constructed some more impressive machinery. It is a safe bet that they have the largest plasma injectors ever built.

Plasma Injector

Admittedly, comparing this behemoth to the small plasma chamber in the upper left corner is comparing apples to oranges, but then this machine is in a class of its own.

When studying the plasma parameters, it turned out that the theoretical calculations had actually led to an over-engineering of this injector, and that smaller ones may be adequate for creating plasma of the desired density. But of course creating and injecting the plasma is only the starting point. The most critical aspect is how this plasma behaves under compression.

To fully determine this, GF faces the same research challenges as the related magnetized target fusion research program in the US, i.e. General Fusion needs to perform tests similar to those conducted at the Shiva Star Air Force facility in Albuquerque. In fact, due to budget cut-backs, Shiva Star has spare capacity that could be used by GF, but exaggerated US security regulations unfortunately prevent such cooperation; it is highly doubtful that civilian Canadians would be allowed access to a military-class facility. So the company has to improvise and come up with its own approach to these kinds of implosion tests. The photo below shows an array of sensors that is used to scrutinize the plasma during one of these tests.

sensors

Understanding the characteristics of the plasma when imploded is critical; these sensors, mounted on top of one of the experimental set-ups, are there to collect the crucial data. Many such experiments will be required before enough data has been amassed.

Proving that they can achieve the next target compression benchmark is critical in order to continue receiving funding from the federal Canadian SDTC fund. The latter is the only source of governmental fusion funding: Canada has no dedicated program for fusion research and even turned its back on the ITER consortium. This is a far cry from Canada's technological vision in the sixties, which resulted in nuclear leadership with the unique CANDU design. Yet there is no doubt that General Fusion has been doing the most with the limited funds it has received.

Here's hoping that the Canadian government will eventually wake up to the full potential of a fusion reactor design 'made in Canada' and start looking beyond the oil patch for its energy security (although this will probably require that the torch be passed to a more visionary leadership in Ottawa).


An obligatory photo for any visitor to General Fusion. Unfortunately, I forgot my cowboy hat.

~~~

Update: What a start to 2014 for this blog. This post has been featured on Slashdot and received over 11K views within three days. Some of the comments on Slashdot wanted to dig deeper into the science of General Fusion. For those who want to follow through on this, the company's papers, and those describing important results that GF builds on, can be found on their site. In addition, specifically for the unique vortex technology, I find James Gregson's Master's thesis very informative.

Update 2: General Fusion can be followed on Twitter @MTF_Fusion (h/t Nathan Gilliland)

Update 3: Some Canadian mainstream media, like the Edmonton Journal, also noticed the conspicuous absence of dedicated fusion research. Ironically, the otherwise well written article argues for an Alberta-based research program without mentioning General Fusion once, despite the fact that the company is right next door (by Canadian standards) and in fact has one major Alberta-based investor, the oil company Cenovus Energy.

Posted in Popular Science | 24 Comments

Blog Memory Hole Rescue – The Fun is Real

It seems that work and life are conspiring to leave me no time to finish my write-up on my General Fusion visit. I started it weeks ago, but I am still not ready to hit the publish button on this piece.


In the meantime I highly recommend the following blog that I came across. It covers topics very similar to the ones here, and also shares a similar outlook. For instance, this article beautifully sums up why I never warmed up to Everett's multiverse interpretation (although I have to admit that reading Julian Barbour's The End of Time softened my stance a bit - more on this later).

The 'Fun Is Real' blog is a cornucopia of good physics writing and should provide many hours of thought-provoking reading material to bridge over the dearth of my current posting schedule.

On a side note, given that this goes to the core of the topic I write about on this blog, the following news should not go unmentioned:  Australian researchers reportedly have created a cluster state of 10,000 entangled photonic qubits (h/t Raptis T.).

This is orders of magnitude more than has previously been reported. Now, if they manage to apply some quantum gates to them, we'll be getting somewhere.

Posted in Blogroll Rescue, Popular Science, Quantum Computing, Quantum Mechanics | 7 Comments

Quantum Computing Interrupted

This is the second part of my write-up on my recent visit to D-Wave. The first one can be found here.


The recent shutdown of the US government had widespread repercussions. One of the side effects was that NASA had to stop all non-essential activities, and this included quantum computing. So the venture that, in cooperation with Google, jointly operates a D-Wave machine was left in limbo for a while. Fortunately, this was short-lived enough to hopefully not have any lasting adverse effects. At any rate, maybe it freed up some time to produce a QC mod for Minecraft and the following high-level and very artsy Google video that 'explains' why they want to do quantum computing in the first place.

If you haven't been raised on MTV music videos and find rapid-succession sub-second cuts migraine-inducing (at about the 2:30 mark things settle down a bit), you may want to skip it. So here's the synopsis (spoiler alert). The short version of what motivates Google in this endeavor, to paraphrase their own words: we research quantum computing because we must.

In other news, D-Wave recently transferred its foundry process to a new location, partnering with Cypress Semiconductor Corp - a reminder that D-Wave has firmly raised the production of superconducting niobium circuitry to a new, industrial-scale level. Given these new capabilities, it may not be a coincidence that the NSA recently announced its intention to fund research into superconducting computing. Depending on how they define "small-scale", the D-Wave machine should already fall under the description in the solicitation bid, which aspires to the following ...

"... to demonstrate a small-scale computer based on superconducting logic and cryogenic memory that is energy efficient, scalable, and able to solve interesting problems."

... although it is fair to assume this program is aimed at classical computing. Prototypes for such chips have already been researched and look rather impressive (direct link to paper). They use the same chip material and circuitry (Josephson junctions) as D-Wave, so it is not a stretch to consider that industrial-scale production of these more conventional chips could immediately benefit from the foundry process know-how that D-Wave has accumulated. Nor does it seem too much of a stretch to imagine that D-Wave may expand into this market space.

When I put the question to D-Wave's CTO Geordie Rose, he certainly took some pride in his company's manufacturing expertise. He stressed that, before D-Wave, nobody had been able to scale superconducting VLSI chip production, so this now opens up many additional opportunities. He pointed out that one could, for instance, make an immediate business case for a high-throughput router based on this technology, but given the many avenues open for growth, he stressed the need to choose wisely.

The D-Wave fridges certainly have enough capacity to accommodate more superconducting hardware. Starting with the Vesuvius chip generation, measurement heat is now generated far away from the chip, so having several chips in close proximity should not disturb the thermal equilibrium at the core. Geordie is considering deploying stacks of quantum chips so that thousands could work in parallel, since the company currently throws away a lot of perfectly good chips that come off a wafer. This may eventually necessitate larger cooling units than the current ones, which draw 16 kW. The approach certainly could make a lot of sense for a hosting model in which processing time is rented out to several customers in parallel.

One attractive aspect I pointed out was that having classical logic within the box would eliminate a potential bottleneck if rapid reinitialization and read-out of the quantum chip were required, and it would also open the possibility of direct optical interconnects between chips. Geordie seemed to like this idea. One of the challenges in making the current wired design work was designing high-efficiency low-pass filters to bring the noise level in these connectors down to an acceptable level. So, in a sense, an optical interconnect could reduce complexity, but it would clearly also require some additional research effort to bring down the heat signature of such an optical transmission.

This triggered an interesting, and somewhat ironic, observation on the challenges of managing an extremely creative group of people. Geordie pointed out that he has to think carefully about what to focus his team on, because an attractive side project, e.g. 'adiabatic' optical interconnects, could prove so interesting to many team members that they'd gravitate towards working on it rather than keeping their focus on the work at hand.

Some other managerial headaches stem from the rapid development cycles.  For instance, Geordie would like to develop some training program that will allow a customer's technical staff to be quickly brought up to speed.  But by the time such a program is fully developed, chances are a new chip generation will be ready and necessitate a rewrite of any training material.

Some of D-Wave's challenges are typical of high-tech start-ups; others are specific to D-Wave. My next, and final, installment will focus on Geordie's approach to managing these growing pains.

Posted in D-Wave | 13 Comments

Blog Round-Up

Lots of travel last week delayed the second installment on my D-Wave visit write-up, but I came across some worthy re-blog material to bridge the gap.

I am usually very hard on poorly written popular science articles, which is all the more reason to point to some outstanding material in this area. I have found that one writer at the Gizmag site, Brian Dodson, usually delivers excellent content; thanks to his science background, he brings an unusual depth of understanding to his writing. His latest pieces are on General Relativity-compatible alternatives to dark energy and on a theoretical quantum black hole study that puts the loop quantum gravity approach to some good use. The latter is a good example of why I am much more inclined towards Loop Quantum Gravity than towards the ephemeral String theory, as the former at least delivers some predictions.

Another constant topic of this blog is the unsatisfying situation with regard to the foundational interpretations of quantum mechanics. The lack of progress in this area can in no small measure be attributed to the 'Shut up and calculate' doctrine, a famous quip attributed to Feynman that has since been enshrined as an almost iron rule.

To get a taste of how pervasively this attitude permeates the physics community, this arXiv paper/rant is a must-read. From the abstract:

If you have a restless intellect, it is very likely that you have played at some point with the idea of investigating the meaning and conceptual foundations of quantum mechanics. It is also probable (albeit not certain) that your intentions have been stopped in their tracks by an encounter with some version of the “Shut up and calculate!” command. You may have heard that everything is already understood. That understanding is not your job. Or, if it is, it is either impossible or very difficult. Maybe somebody explained to you that physics is concerned with “hows” and not with “whys”; that whys are the business of “philosophy” -you know, that dirty word. That what you call “understanding” is just being Newtonian; which of course you cannot ask quantum mechanics to be. Perhaps they also complemented this useful advice with some norms: The important thing a theory must do is predict; a theory must only talk about measurable quantities. It may also be the case that you almost asked “OK, and why is that?”, but you finally bit your tongue. If you persisted in your intentions and the debate got a little heated up, it is even possible that it was suggested that you suffered of some type of moral or epistemic weakness that tends to disappear as you grow up. Maybe you received some job advice such as “Don’t work in that if you ever want to own a house”.

At least, if this blog post is any indication, the times seem to be changing and becoming more permissive.

Posted in Popular Science, Quantum Mechanics | 1 Comment