Category Archives: Popular Science

Big Challenges Require Bold Visions

Unless we experience a major calamity resetting the world’s economy to a much lower output, it is a foregone conclusion that the world will miss the CO2 targets required to limit global warming to 1.5°C. This drives a slow-motion, multi-faceted disaster, exacerbated by the ongoing growth in global population, which puts additional stress on the environment. Unsurprisingly, we are in the midst of earth’s sixth mass extinction event.

It just takes three charts to paint the picture:

1) World Population Growth

2) Temperature Increase

3) Species Extinction

We shouldn’t delude ourselves into believing that our species is safe from adding itself to the extinction list. The next decades are pivotal in stopping the damage we do to our planet. Given our current technologies, we have every reason to believe that we can stabilize population growth and replace fossil-fuel-dependent technologies with CO2-neutral ones, but the processes already set in motion will produce societal challenges of unprecedented proportion.

Population growth and the need for arable land keep pushing people ever closer to formerly isolated wildlife, most often with fatal consequences for the latter, but sometimes the damage goes both ways. HIV, Ebola and bird flu, for instance, are all health threats that were originally contracted from animal reservoirs (zoonoses), and we can expect more such pathogens, many of which will not have been observed before. At the same time, old pathogens can easily resurface. Take tuberculosis, for instance: even in an affluent country with good public health infrastructure, such as Canada, we see over a thousand new cases each year, and, as in other parts of the world, multi-resistant TB strains are on the rise.

Immunization and health management require functioning governmental bodies. In a world that will see ever more refugee crises and civil strife, the risk of disruptive pandemics will increase massively. The recent outbreak of Ebola is a case study in how such mass infections can overwhelm the medical infrastructure of developing countries, and it should serve as a wake-up call to the first world to help establish a global framework that can manage these kinds of health risks. The key is to identify emerging threats as early as possible, since the chances of containment and mitigation improve by orders of magnitude the sooner action can be taken.

Such a framework will require robust and secure data collection and dissemination capabilities, as well as advanced predictive analytics that can build on all available pooled health data and on established medical ontologies. Medical doctor and bioinformatics researcher Andrew Deonarine has envisioned such a system, which he has dubbed Signa.OS, and he has assembled a stellar team with members from his alma mater Cambridge, UBC, and Harvard, where he will soon start post-graduate work. Any such system should not be designed with just our current hardware in mind, but with the technologies that will be available within the decade. That is why quantum-computer-accelerated Bayesian networks are an integral part of the analytical engine for Signa.OS. We are especially excited to also have Prof. Marco Scutari from Oxford join the Signa.OS initiative; his work on Bayesian network training in R served as a guiding star for our Python implementation.
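Signa.OS itself is still on the drawing board, but to give a flavour of what Bayesian-network-style reasoning over health signals looks like in code, here is a minimal toy sketch in Python (entirely my own illustration: the network structure, variable names and probabilities are invented, and brute-force enumeration stands in for a real inference engine):

```python
# Toy Bayesian network for outbreak detection (all numbers invented):
# Outbreak -> FeverSpike, Outbreak -> Absenteeism
PRIOR_OUTBREAK = 0.01                       # P(outbreak)
P_FEVER  = {True: 0.80, False: 0.05}        # P(fever spike | outbreak status)
P_ABSENT = {True: 0.70, False: 0.10}        # P(school absenteeism | outbreak status)

def joint(outbreak, fever, absent):
    """Joint probability under the assumed network structure."""
    p = PRIOR_OUTBREAK if outbreak else 1 - PRIOR_OUTBREAK
    p *= P_FEVER[outbreak] if fever else 1 - P_FEVER[outbreak]
    p *= P_ABSENT[outbreak] if absent else 1 - P_ABSENT[outbreak]
    return p

def posterior_outbreak(fever, absent):
    """P(outbreak | evidence), obtained by enumerating both outbreak states."""
    num = joint(True, fever, absent)
    return num / (num + joint(False, fever, absent))

print(posterior_outbreak(fever=True, absent=True))   # ~0.53: two weak signals combined
```

In a real system the structure and conditional probabilities would of course be learned from pooled surveillance data rather than hard-coded, which is exactly where structure-learning work like Scutari’s comes in.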

Our young company, artiste-qb.net, which I recently started with Robert R. Tucci, could not have wished for a more meaningful research project to prove our technology.

[This video was produced by Andrew as an entry to the MacArthur challenge.]

 

Fusion is Hotter Than You May Think

As I am preparing to get back into more regular blogging on quantum computing, I learned that my second favourite Vancouver-based start-up, General Fusion, got some well-deserved social media traction. Michel Laberge’s TED talk has now been viewed over a million times (h/t Rolf D). Well deserved, indeed.

This reminded me of a Milken Institute fusion panel from earlier this year, which seems to have less reach than TED, but is no less interesting. It also features Michel, together with representatives from other Fusion ventures (Tri Alpha Energy and Lockheed Martin) as well as MIT’s Dennis Whyte. The panel makes a compelling case as to why we see private money flowing into this sector now, and why ITER shouldn’t be the only iron we have in the fire.

Late Wave

It took only one scientist to predict them but a thousand to get them confirmed (1,004 to be precise). I guess if the confirmation of gravitational waves couldn’t draw me out of my blogging hiatus, nothing could, although I am obviously catching a very late wave. The advantage of this: I can compile and link to all the best content that has already been written on the topic.

Of course, this latest spectacular confirmation will unfortunately not change the minds of those quixotic individuals who devote themselves to fighting the “wrongness” of all of Einstein’s work (I once had the misfortune of encountering the maker of this abysmal movie; suffice it to say I have had more meaningful conversations with Jehovah’s Witnesses).

But given the track record of science news journalism, what are the chances that this may be a fluke similar to the BICEP news that turned out to be far less solid than originally reported? Or another repeat of the faster than light neutrino measurements?

The beauty of a direct experimental measurement like LIGO’s is that the uncertainty can be quantified statistically. This is a “5-sigma” event, meaning the odds that detector noise alone would produce such a signal are less than about one in 3.5 million. The graph at the bottom shows that what has been measured matches the theoretically expected signal from a black hole merger so closely that the similarity is immediately compelling even for non-scientists.
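For readers who want to see where such odds come from, this is just the standard Gaussian tail calculation (my own quick sanity check, not taken from the LIGO paper):

```python
# Convert a "sigma" significance into the one-sided Gaussian tail probability,
# i.e. the chance that pure noise fluctuates that far above the mean.
from math import erfc, sqrt

for sigma in (3, 4, 5):
    p = 0.5 * erfc(sigma / sqrt(2))
    print(f"{sigma} sigma -> p = {p:.1e} (about 1 in {1 / p:,.0f})")

# 5 sigma -> p ~ 2.9e-07, roughly 1 in 3.5 million
```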

But more importantly, unlike faster-than-light neutrinos, we have every reason to believe that gravitational waves exist. No new physics is required, and the phenomenon is strictly classical, in the sense that General Relativity produces a classical field equation that, unlike Quantum Mechanics, adheres to physical realism. That is why this discovery does nothing to advance the search for a unification of gravity with the other three forces. The importance of this discovery lies elsewhere, but is no less profound. Sabine Hossenfelder says it best:

Hundreds of millions of years ago, a primitive form of life crawled out of the water on planet Earth and opened their eyes to see, for the first time, the light of the stars. Detecting gravitational waves is a momentous event just like this – it’s the first time we can receive signals that were previously entirely hidden from us, revealing an entirely new layer of reality.

The importance of this really can’t be overstated. The universe is a big place and we keep encountering mysterious observations. There is of course the enduring puzzle of dark matter; lesser known are the fast radio bursts first observed in 2007, which are believed to be among the most energetic events known to modern astronomy. Until recently it was thought that one-off cataclysmic events were the underlying cause, but these theories had to be thrown out when it was observed that the signals can repeat. (The Canadian researcher who published on this recently received the highest Canadian science award, and the CBC has a nice interview with her.)

We are a long way off from having good spatial resolution with the current LIGO setup. The next logical step is of course to drastically increase the scale of the device, and when it comes to laser interferometry this can be done on a much grander scale than with other experimental set-ups (e.g. accelerators). The eLISA space-based gravitational wave detector project is well underway. And I wouldn’t yet count out advanced quantum interferometry as a means to drastically improve the achievable resolution, even if it couldn’t beat LIGO to the punch.

After all, it was advanced interferometry that had been driving the hunt for gravitational waves for decades. One of its pioneers, Heinz Billing, was determined to bring about and witness their discovery, reportedly stating that he refused to die before it was made. The universe was kind to him: at age 101 he is still around and got his wish.

[Figure: LIGO measurement of gravitational waves, showing the signals received by the LIGO instruments at Hanford, Washington (left) and Livingston, Louisiana (right), and comparisons of these signals to those expected from a black hole merger event.]

Classic Quantum Confusion

By now I am pretty used to egregiously misleading summarization of physics research in popular science outlets, sometimes fanned by the researchers themselves. Self-aggrandizing, ignorant papers sneaked into supposedly peer-reviewed journals by non-physicists are also just par for the course.

But this one is in a class of its own. Given the headline and the introductory statement that “a fully classical system behaves like a true quantum computer“, it essentially creates the impression that QC research must be pointless. Only much later does it sneak in the obvious: that an analog emulation, just like one on a regular computer, can’t possibly scale past about 40 qubits due to the exponential growth in required computational resources.

But that’s not the most irritating aspect of this article.

Don’t get me wrong, I am a big fan of classical analogs of quantum systems. I think they can be very educational, if you know what you are looking at (Spreeuw 1998). That paper is actually cited by the authors, and it is very precise in distinguishing between quantum entanglement and its classical analog. But that is not the distinction their otherwise fine paper draws (La Cour et al. 2015). The authors write:

“What we can say is that, aside from the limits on scale, a classical emulation of a quantum computer is capable of exploiting the same quantum phenomena as that of a true quantum system for solving computational problems.”

If it weren’t for the phys.org reporting, I would put this down as sloppy wording that slipped past peer review, but if the authors are quoted correctly, then they indeed labour under the assumption that they have faithfully recreated quantum entanglement in their classical analog computer – mistaking the model for the real thing.

It makes for a funny juxtaposition on phys.org though, when filtering by ‘quantum physics’ news.

[Screenshot of phys.org’s ‘quantum physics’ news listing, 2015-05-28.]

The second article refers to a new realization of Wheeler’s delayed choice experiment (where the non-local entanglement across space is essentially swapped for one across time).

If one takes Brian La Cour at his word, then, according to his other paper, he suggests that these kinds of phenomena should also have a classical analog.

So it’s not just hand-waving when he makes this rather outlandish-sounding statement with regard to achieving an analog of the violation of Bell’s inequality:

“We believe that, by adding an emulation of quantum noise to the signal, our device would be capable of exhibiting this type of [Bell’s inequality violating] entanglement as well, as described in another recent publication.”

Of course talk is cheap, but if this research group could actually demonstrate such a classical loophole to Bell’s inequality, it certainly would change the conversation.

Quantum Computing Road Map

No, we are not there yet, but we are working on it.

Qubit spin states in diamond defects don’t last forever, but they can last outstandingly long even at room temperature (measured in microseconds, which is a long time when it comes to computing).

So this is yet another interesting system added to the list of candidates for potential QC hardware.

Nevertheless, when it comes to the realization of scalable quantum computers, qubit decoherence times may very well be eclipsed in importance by another time span: 20 years, the length for which patents are valid (in the US this can include software algorithms).

With D-Wave and Google leading the way, we may be getting there faster than most industry experts predicted. Certainly the odds are very high that it won’t take another two decades for useable universal QC machines to be built.

But how do we get to the point of bootstrapping a new quantum technology industry? DK Matai addressed this in a recent blog post and identified five key questions, which I attempt to address below (I took the liberty of slightly abbreviating the questions; please check the link for the unabridged version).

The challenges DK laid out will require much more than a blog post (or the LinkedIn comment I recycled here), especially since his view extends beyond quantum information science. The following thoughts are therefore by no means comprehensive answers, but they may provide a starting point.

1. How do we prioritise the commercialisation of critical Quantum Technology 2.0 components, networks and entire systems both nationally and internationally?

The prioritization should be based on disruptive potential: take quantum cryptography versus quantum computing, for example. Quantum encryption could stamp out fraud that exploits certain technical weaknesses, but it won’t address the more prevalent social engineering deceptions. On the upside, it will also facilitate iron-clad cryptocurrencies. Yet, if Feynman’s vision of the universal quantum simulator comes to fruition, we will be able to tackle collective quantum dynamics that are computationally intractable on conventional computers. This encompasses everything from simulating high-temperature superconductivity to complex (bio-)chemical dynamics. ETH’s Matthias Troyer gave an excellent overview of these killer apps for quantum computing in his recent Google talk; I especially like his example of nitrogen fixation. Nature manages to accomplish this with minimal energy expenditure in some bacteria, but industrially we only have the century-old Haber-Bosch process, which in modern plants still results in 1/2 ton of CO2 for each ton of NH3. If we could simulate and understand the chemical pathway that these bacteria follow, we could eliminate one of the major industrial sources of carbon dioxide.

2. Which financial, technological, scientific, industrial and infrastructure partners are the ideal co-creators to invent, to innovate and to deploy new Quantum technologies on a medium to large scale around the world? 

This will vary drastically by technology. To pick a basic example, a quantum clock per se is just a better clock, but put it into a Galileo/GPS satellite and the drastic improvement in timekeeping immediately translates into higher triangulation accuracy, as well as better mapping of the earth’s gravitational field and mass distribution.
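A quick back-of-the-envelope calculation (mine, not from DK’s post) shows why clock quality matters so directly: the receiver converts signal travel time into distance by multiplying with the speed of light, so every bit of timing error becomes ranging error.

```python
# Ranging error caused by a given clock error: distance = c * time.
C = 299_792_458.0  # speed of light in m/s

for clock_error_s in (1e-6, 1e-9, 1e-12):   # microsecond, nanosecond, picosecond
    print(f"clock error {clock_error_s:.0e} s -> ranging error {C * clock_error_s:10.4f} m")

# A single nanosecond of clock error already amounts to ~0.3 m of position error.
```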

3. What is the process to prioritise investment, marketing and sales in Quantum Technologies to create the billion dollar “killer apps”?

As sketched out above, the real prize to me is universal quantum computation/simulation. Considerable effort has to go into building such machines, but that doesn’t mean you cannot already start developing software for them. Any coding for new quantum platforms, even ones that are already here (as in the case of the D-Wave 2), will involve emulators on classical hardware, because you want to debug and prove out your code before submitting it to the more expensive quantum resource. In my mind, building such an environment in a collaborative fashion to showcase and develop quantum algorithms should be the first step. To me this appears feasible on an accelerated timescale (months rather than years). I think such an effort is critical to offset the closed-source and tightly license-controlled approach that, for instance, Microsoft is following with its development of the LIQUi|> platform.
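To make concrete what such an emulation layer boils down to at its core, here is a deliberately minimal state-vector simulator in Python/NumPy preparing a Bell state (a sketch of my own, not tied to any particular vendor’s SDK, and obviously lacking everything a real environment needs, from circuit abstractions to noise models):

```python
import numpy as np

# Elementary gates as matrices acting on the 4-dimensional two-qubit state vector.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                   # control = qubit 0, target = qubit 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                   # start in |00>
state = np.kron(H, I) @ state                    # Hadamard on qubit 0
state = CNOT @ state                             # entangle the two qubits

print(np.round(state, 3))                        # [0.707 0. 0. 0.707] -> (|00> + |11>)/sqrt(2)
print(np.round(np.abs(state) ** 2, 3))           # measurement probabilities for 00, 01, 10, 11
```

Even this toy makes the scaling problem obvious: every extra qubit doubles the state vector, which is why classical emulation can only ever be a development and debugging aid.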

4. How do the government agencies, funding Quantum Tech 2.0 Research and Development in the hundreds of millions each year, see further light so that funding can be directed to specific commercial goals with specific commercial end users in mind?

This to me seems to be the biggest challenge. The amount of research papers produced in this field is enormous, and much of it is computational theory. While the theory has its merits, I think governmental funding should emphasize programs that have a clearly defined agenda towards ambitious yet attainable goals: research that will result in actual hardware and/or commercially applicable software implementations (e.g. the UCSB Martinis agenda). Yet governments shouldn’t be in the position of picking a winning design, as was inadvertently done for fusion research, where ITER’s funding requirements are now crowding out all other approaches. The latter is a template for how not to go about it.

5. How to create an International Quantum Tech 2.0 Super Exchange that brings together all the global centres of excellence, as well as all the relevant financiers, customers and commercial partners to create Quantum “Killer Apps”?

On a grassroots level, I think open-source initiatives (e.g. a LIQUi|> alternative) could become catalysts that bring academic centers of excellence and commercial players into alignment. This at least is my impression based on conversations with several people involved in the commercial and academic realms. On the other hand, as with any open-source product, commercialization won’t be easy, yet this may be less of a concern in this emerging industry, as the IP will be in the quantum algorithms, and they will most likely be executed on quantum resources tied to a SaaS offering.

 

Who is watching the watchmen gatekeepers?

The Technological Singularity won't happen without sound science.

Almost every human activity in the first world has been impacted by technology. Our means of production have been fundamentally altered, and while creating enormous wealth, the changes have often been disruptive, painfully so at times. As this ongoing transformation accelerates, “business as usual” has become an oxymoron.

Paradoxically, while science is at the foundation of all our technological progress, it is like the eye at the center of the storm: the academic mode of operation has hardly changed over the last two centuries. And why not? An argument could be made not to fix what isn’t broken. For instance, sometimes you hear the scientific process compared to the Open Source movement, with the argument that both strive for a transparent meritocracy where openness ensures that mistakes will not survive for long. Unfortunately, this idealized view is a fallacy on more than one count.

There are lots of good reasons for the Open Source coding paradigm, but it does not give license to forgo quality control and code review, as, for instance, the Heartbleed bug or the recent widespread (and ancient!) Bash vulnerability illustrated.

On the other hand, the scientific peer review process is not anything like the open communication that goes on in public forums and email lists of Open Source software like the Linux kernel. Peer review is completely closed off from public scrutiny, yet determines what enters the scientific discourse in the first place.

The main medium for communicating scientific results remains the publication of papers in scientific journals, some of which charge outrageous subscription fees that shut out poorer institutions and most individuals. But this is by no means the worst impediment to open scientific exchange. Rather, it is the anonymous peer review process itself, which is by design not public. Authors are often given opportunities to correct a paper by re-submitting if the original is rejected, but ultimately the peer reviewers serve as gatekeepers.

For a discipline that is the foundation of all our technology, the knowledge-generating process of science has remained surprisingly untouched, and due to its built-in anonymity it has also managed to escape scrutiny. That would be all well and good if it actually worked. But it does not. We know little about stellar papers that may have been rejected and now linger forever in obscurity in some pre-print archive, but we see all the trash that passed final approval for publication. For instance, we get sensational headlines like this, promising an entirely new take on dark energy, but if you actually read up on it you realize that the author of the underlying paper, who is just referred to as a University of Georgia professor, is not a physicist but a biologist. Now, far be it from me to discourage a biologist from wanting to do physics, but if you bother to read his paper on the matter you will quickly realize that the man apparently doesn’t grasp the basics of general and special relativity. Yet this was published in PLOS ONE, which supposedly follows a rigorous peer review process. They even bothered to issue a correction to an article that is just pseudoscience. Mind-boggling.

Now you may think: well, this is PLOS ONE, and although it is a leading Open Access journal, its underlying business model surely implies that it cannot pay decent money for peer review. Surely more prestigious journals, published by an organization known for its ludicrous subscription prices, such as Elsevier, will have a much more rigorous peer review process. Unfortunately, you would be wrong. May I present this Rainbow and Unicorns Gravity paper. It has, of course, caused the predictable splash in the mainstream media. The paper should never have been published in this form. You don’t have to take my word for it; you can read up on it in detail on the blog of Sabine Hossenfelder, whose 2004 paper on black hole production the authors list as a reference. When challenged to write up a criticism and submit it to the same journal, Sabine didn’t mince words:

This would be an entire waste of time. See, this paper is one of hundreds of papers that have been published on this and similar nonsense, and it is admittedly not even a particularly bad one. Most of the papers on the topic are far worse that that. I have already tried to address these problems by publishing this paper which explicitly rules out all models that give rise to a modified dispersion relation of the same type that the authors use. But look, it doesn’t make any difference. The model is ruled out – you’d think that’s the end of the story. But that isn’t how science works these days. People continue to publish papers on this by just ignoring it. They don’t even claim there is something wrong with my argument, they just ignore it and continue with their nonsense.

I have wasted enough time on this. There is something really, really going wrong with theoretical physics and this is only one indication for it.

Later in the comment thread she also had this to say:

I have had to talk to students who work on related things (not exactly the same thing) and who were never told that there are any problems with this idea. Even though I know for sure their supervisor knows. Even though I have published half a dozen of comments and papers explicitly explaining what the problems are. Honestly, it’s things like this that make me want to leave academia. This just isn’t research any more this is only sales strategies and networking.

The problem goes beyond peer review and comes down to accountability, but because peer review is anonymous by design, it is especially easily corrupted, and I know of cases that resulted in exactly what Sabine spelled out: top talent leaving academia and theoretical physics. The more I look into this, the more I believe that this process at the heart of the scientific endeavour is fundamentally broken and urgently needs fixing.

Outside the world of physics, failures in peer review can have egregious consequences, such as the anti-vaccination hysteria caused by a peer-reviewed (and now completely discredited) article linking vaccinations to autism. Although this was debunked, and the scandal was prominently featured in a 2011 BBC documentary, the damage was done. It is much easier to get away with such shoddy work in theoretical physics.

 

Nuclear Confusion Versus Dead-End Certainty?

While I am working on my next blog post, this excellent update on the state of fusion research by the Polywell Blog author John Smith shouldn’t go unnoticed. He makes a strong case that the US is neglecting promising avenues towards self-sustained nuclear fusion while the cost of ITER keeps skyrocketing. This echoes a sentiment I heard when visiting General Fusion. Nevertheless, I think quitting ITER completely, as John recommends, is unwise.

The US already only has observer status at CERN, so bailing on ITER would sideline the American physics community even more. Despite the cost overruns and irrespective of its commercialisation prospects, ITER will make for one of the most advanced testbeds for plasma physics.  Should the US really shut itself out of having prime access to this machine once it is operational?

John’s post provides an excellent round-up of the various approaches to fusion, and mentions the damage that cold fusion inflicted on the field, a story that deserves a separate article. But there is another plasma phenomenon that some hope could be exploited for nuclear fusion that goes unmentioned in John’s otherwise exhaustive post. It shares some commonality with the dubious cold fusion experiments: abysmally bad replicability, which severely damaged the reputation of one of the lead researchers in the field. This speculative approach to fusion was recently featured in a surprisingly well researched Gawker article (h/t Ed B.). It mentions some private outfits that are hanging their hat on sonoluminescence, and since that phenomenon is, after all, an actual plasma-producing micro-cavitation effect, these companies don’t deserve to be lumped in with the shadier cold fusion hustlers.

However, it is quite apparent that none of these can produce neutrons at a significant rate, unlike PNL’s High Yield Neutron Generator, an already commercially valuable technology. So there is clearly not much reason to get too excited about sonoluminescence unless one of the companies invested in this approach can replicate that feat.

[Image: Phoenix Nuclear Lab’s High Yield Neutron Generator, a piece of fusion technology you can buy today. It offers a much cleaner source of neutrons than any fission-based approach, and it avoids the proliferation headaches that come with the latter.]

On balance, the influx of private money into nuclear fusion start-ups is the story here, one that gives hope that humanity may find a way to break its self-defeating fossil fuel habit within our lifetime.

 

 

 

The Year That Was <insert expletive of your choice>

Usually, I like to start a new year on an upbeat note, but this time I just cannot find the right fit. I was considering whether to revisit technology that can clean water; lauding the effort of the Bill Gates Foundation came to mind, but while I think this is a great step in the right direction, this water-reclaiming technology is still a bit too complex and expensive to become truly transformational and liberating.

At other times, groundbreaking progress in increasing the efficiency of solar energy would have qualified, the key being that it can be done comparatively cheaply. Alas, the unprecedented drop in the price of oil is not only killing off the fracking industry, but also the economics of alternative energy. For a planet that has had its fill of CO2, fossil fuel this cheap is nothing but an unmitigated disaster.

So while it was a banner year for quantum computing, in many respects 2014 was utterly dismal, seeing the return of religiously motivated genocide, open warfare in Europe, a resurgence of diseases that could have been eradicated by now, and a pandemic that caused knee-jerk hysterical reactions and taught us how unprepared we are for these kinds of health emergencies. This year was so depressing it makes me want to wail along to my favorite science blogger’s song about it (but then again, I’d completely ruin it).

And there is another reason not to let go of the past just yet: corrections.

With these corrections out of the way, I will finally let go of 2014, but with the additional observation that in the world of quantum computing the new year started very much in the same vein as the old, generating positive business news for D-Wave, which just managed to raise another 29 million dollars, while still not getting respect from some academic QC researchers.

I.H. Deutsch (please note, not the Deutsch but Ivan) states at the end of this interview:

“[1] The D-Wave prototype is not a universal quantum computer.
[2] It is not digital, nor error-correcting, nor fault tolerant.
[3] It is a purely analog machine designed to solve a particular optimization problem.
[4] It is unclear if it qualifies as a quantum device.”

No issues with [1]–[3]. But how many times do classical algorithms have to be ruled out before D-Wave is finally, universally accepted as a quantum annealing machine? This is getting into climate-change-denial territory. It shouldn’t really be that hard to define what constitutes quantum computation. So I guess we have found a new candidate for D-Wave chief critic, now that Scott Aaronson seems to have stepped down for good.

Then again, with a last name like Deutsch, you may have to step up your game to get some name recognition of your own in this field.  And there’s no doubt that controversy works.

So 2015 is shaping up to become yet another riveting year for QC news. And just in case you made the resolution that, this year, you will finally try to catch that rainbow, there’s some new tech for you.
[Image source: chaosgiant.deviantart.com]

 

 

Update: I almost forgot about this epic fail of popular science reporting at the tail end of 2014. For now I leave it as an exercise for the reader to spot everything that’s wrong with it. Of course, most of the blame belongs to PLOS ONE, which supposedly practices peer review.

The Race Towards Universal Quantum Computing – Lost in Confusion

If headlines and news articles were all you had to go by when trying to form an opinion about quantum computing, you’d end up with one enormous migraine. For many years now, they have created a constant barrage of conflicting story lines.


For reasons known only to them, science news authors seem to have collectively decided to ignore that there are many competing approaches to quantum computing. This apparent inability to differentiate between architectures and computational models makes for a constant source of confusion, which is then compounded by the challenge of explaining the conceptual oddities of quantum computing, such as entanglement.

For instance, most authors, even if they already know it is wrong, run with the simplest trope about quantum computing, one that has been repeated ad nauseam: the pretense that these machines can execute every possible calculation within their input scope in parallel. It is hard to imagine a misconception better designed to put up a goalpost that no man-made machine could ever reach. Scott Aaronson is so incensed by this nonsense that it even inspired the title of his new book. It is truly a sorry state of affairs when even Nature apparently cannot find an author who doesn’t fall for it. Elizabeth Gibney’s recent online piece on quantum computing was yet another case in point. It starts off promising, as the subtitle is spot on:

After a 30-year struggle to harness quantum weirdness for computing, physicists finally have their goal in reach.

But then the reader’s mind is again poisoned with this nonsense:

Where a classical computer has to try each combination in turn, a quantum computer could process all those combinations simultaneously — in effect, carrying out calculations on every possible set of input data in parallel.

Part of the problem is that there are no other easy concepts a news author can quickly turn to when trying to offer an explanation that a casual reader can understand while at the same time having their mind blown. (‘Wow, every possible combination at the same time!’ It’s like double rainbow all over again.)

Here’s my attempt to remedy this situation, with a simple example that illustrates the extended capabilities of quantum computing versus classical machines. The latter are very fast, but when solving a complex puzzle, e.g. finding the lowest number in an unordered list, they have to take one stab at it at a time. It is like attacking an abstract problem space the way ancient mariners had to fathom the depth of the sea. (Gauging the depth with a rope in this manner is the original meaning of the word ‘fathom’.)

You may argue that having several guys fathoming at the same time will give you a ‘parallelizing’ speed-up, but you would have to be a Luddite to the core to convince yourself that this could ever measure up to echolocation. Just like the latter can perceive data from a larger patch of seafloor, quantum computing can leverage more than just local point data. But this comes at a price: The signal that comes back is not easy to interpret. It depends on the original set-up of the probing signal, and requires subsequent processing.

Like an echolocation system, a quantum computer doesn’t magically probe the entire configuration space. It ‘sees’ more, but it doesn’t provide this information in an immediately useful format.

The real challenge is to construct the process in a way that allows you to actually extract the answer to the computational problem you are trying to solve. This is devilishly difficult, which is why so few quantum algorithms exist. There are no simple rules to follow; creating one requires first and foremost inspiration, and is as much art as science. That is why, when I learned how Shor’s algorithm works, I was profoundly awed by the inordinate creativity it must have taken to think it up.
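To make the echolocation metaphor a little more concrete, here is a toy classical simulation of Grover-style amplitude amplification for “find the single marked item among eight” (my own illustration, not drawn from any of the articles discussed): the probe is prepared over the whole search space, shaped by the oracle and diffusion steps, and only then read out.

```python
import numpy as np

N, marked = 8, 5
amps = np.full(N, 1 / np.sqrt(N))       # uniform "probe" over all eight slots

for _ in range(2):                      # about (pi/4) * sqrt(N) iterations
    amps[marked] *= -1                  # oracle: flip the sign of the marked amplitude
    amps = 2 * amps.mean() - amps       # diffusion: reflect all amplitudes about the mean

print(np.round(amps ** 2, 3))           # probability of reading out each slot
print(int(np.argmax(amps ** 2)))        # 5 -- the marked item, found with ~94% probability
```

The post-processing caveat is right there in the code: the answer only pops out with high probability after the amplitudes have been carefully massaged, and running too many iterations would actually make things worse again.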

Regardless, if this were the only problem with Elizabeth Gibney’s article, that would just be par for the course. Yet, while reporting on Google’s efforts to build its own quantum computing chip, she manages to not even mention the other quantum computer Google is involved with, and that despite D-Wave publishing in Nature in 2011 and, just last year, in Nature Communications.

Maybe if she hadn’t completely ignored D-Wave, she might have thought to ask Martinis the most pressing question of all: what kind of chip will he build for Google? Everything indicates that it is yet another quantum annealer, but the quotes in the article make it sound as if he were talking about gate-model computing:

“It is still possible that nature just won’t allow it to work, but I think we have a decent chance.”

Obviously he cannot possibly be referring to quantum annealing in this context, since that clearly works just fine with fairly large numbers of qubits (as shown in the above-mentioned Nature publication).

The current state of news reporting on quantum computing is beyond frustrating. There is a very real and fascinating race underway to realize the first commercially useful universal quantum computer. Will it be adiabatic or the gate model? Are quantum cellular automata still in the running?

But of course in order to report on this, you must first know about these differences. Apparently, when it comes to science news reporting, this is just too much to expect.

The Nature article also contains this little piece of information:

… the best quantum computers in the world are barely able to do school-level problems such as finding the prime factors of the number 21. (Answer: 3 and 7.)

I guess the fact that the answer is provided gives us a hint as to what level of sophistication the author expects from her audience, which in turn must be terribly confused by a headline such as “New largest number factored on a quantum device is 56,153“.

This was of course not done with Shor’s algorithm but via adiabatic computing (and it also involves some sleight of hand, as the algorithm only works for a certain class of numbers and not for all integers).
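A quick check (my own addition) that also hints at the sleight of hand: the two prime factors of 56,153 have almost identical bit patterns, which is the kind of special structure this minimization-based approach relies on.

```python
p, q = 233, 241
assert p * q == 56_153              # 56,153 = 233 * 241

print(format(p, "08b"))             # 11101001
print(format(q, "08b"))             # 11110001
print(bin(p ^ q).count("1"))        # 2 -- the factors differ in only two bit positions
```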

Nevertheless, adiabatic computing seems to have the upper hand when it comes to scaling the problem scope with a limited number of qubits. But the gate model also made some major news last month: Simon’s algorithm, the field’s guinea pig (one of the first algorithms you learn when being introduced to the subject), has been demonstrated to provide the theoretically predicted quantum speed-up. This is huge news that was immediately translated into the rather misleading headline “Simon’s algorithm run on quantum computer for the first time—faster than on standard computer“.

Faster in this case means fewer processing iterations rather than less elapsed time, but regardless, having this theoretical prediction confirmed using the fairly recent one-way technique clearly bolsters the case that gate computing can deliver the goods.
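For a rough feel of what “fewer iterations” means here (my own sketch, not from the paper): Simon’s problem hides an XOR mask s inside a 2-to-1 function, and a classical approach has to stumble on a collision, which takes on the order of 2^(n/2) queries, while the quantum algorithm needs only on the order of n oracle calls.

```python
import random

def classical_query_count(f, n):
    """Query random inputs until a collision reveals the hidden mask s."""
    seen = {}
    for attempt in range(1, 2 ** n + 1):
        x = random.randrange(2 ** n)
        y = f(x)
        if y in seen and seen[y] != x:
            return attempt, seen[y] ^ x     # f(x) == f(x') implies x ^ x' == s
        seen[y] = x
    raise RuntimeError("no collision found")

n, s = 16, 0b1011001110001101
f = lambda x: min(x, x ^ s)                 # a valid 2-to-1 function with hidden period s

queries, recovered = classical_query_count(f, n)
print(queries, recovered == s)              # typically a few hundred queries classically,
                                            # versus on the order of n = 16 quantum oracle calls
```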

No doubt, the race between the architectures to deliver the first commercial-grade universal quantum computer is on. It is still wide open, and makes for a compelling story. Now, if only we could get somebody to properly report on it.

 

 

The Unintentional Obfuscation of Physics

Sometimes it only takes one person’s untimely demise to change history. There’s an entire genre of literature that explores these possibilities, typically involving the biggest baddies of human history. The following video is an artful example that makes this point rather succinctly – while also leaving me profoundly uncomfortable (after all, it does involve the death of a child).

I am not aware of many examples of alternative histories with regard to science, at least not in more detail than what steampunk has to offer, although William Gibson and Bruce Sterling do a pretty good job of imagining a world in which Charles Babbage succeeded in introducing a mechanical computer in their book “The Difference Engine“. The subject matter is certainly a worthwhile topic for another post, especially when contrasted with the challenge we now face in going beyond the Turing machine by getting quantum computing to market. (h/t vznvzn)

[William Kingdon Clifford (1845–1879). Had he lived longer, physics would be taught differently.]

The untimely death I am contemplating here is that of William Kingdon Clifford. If you are not immersed in physics and math, you have probably never heard his name, because we live in a world where he died young.

That meant it fell to Gibbs and Heaviside to clean up the Maxwell equations, which gave us the insufferable cross product that confused legions of students by requiring them to distinguish between polar and axial vectors. It also meant that complex function theory got stuck in two dimensions, and that group theory was developed without the obvious geometric connection, which in turn, once this approach started to take over, provoked older physicists such as Schrödinger to coin the term “Gruppenpest” (group pestilence). It also created a false symmetry between the electric and magnetic fields, motivating the quest for the ever elusive magnetic monopole. Last but not least, it led to the confused notion that spin is an intrinsically quantum mechanical property, something that is still taught in universities across the globe to this day.

It’s impossible to do Geometric Algebra (GA) justice in one short blog post, but David Hestenes managed to do so in a fairly concise and highly readable paper, the 2002 Oersted Medal Lecture.

It is hard to overstate the profound effect this paper had on me. The only thing it compares to is when I first learned of Euler’s formula many years ago, in my first physics semester. And the similarities are striking, not least due to the power of bringing together seemingly disparate areas of mathematics by putting them into a geometric context. In the latter case, the key is the imaginary unit, which was originally introduced to solve for negative square roots and thus allows for the fundamental theorem of algebra. In fact, it turns out that the complex numbers can be neatly embedded into geometric algebra, being isomorphic to the even subalgebra of the 2d GA case. Quaternions, likewise, are part of the 3d geometric algebra and have a similarly satisfying geometric interpretation.

All this is accomplished by introducing a more general concept of a vector. For instance, rather than using a cross product, an outer product is defined, which creates a bivector that can be thought of as a directed plane segment.
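As a concrete, minimal illustration of these claims (my own toy code, not from Hestenes’ lecture), here is the 2d geometric algebra written out by hand: the geometric product of two vectors splits into a scalar (dot) part and a bivector (wedge) part, and the unit bivector squares to -1, which is precisely the embedding of the complex numbers mentioned above.

```python
# Multivectors of Cl(2,0) stored as (scalar, e1, e2, e12) components.
def gmul(a, b):
    """Full geometric product of two multivectors in the plane."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part (contains the dot product)
            a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 component
            a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 component
            a0*b3 + a3*b0 + a1*b2 - a2*b1)   # e12 component (the bivector / wedge part)

e12 = (0, 0, 0, 1)
print(gmul(e12, e12))        # (-1, 0, 0, 0): the unit bivector squares to -1,
                             # so scalar + bivector behaves exactly like a complex number

u, v = (0, 3, 4, 0), (0, 1, 2, 0)            # two ordinary vectors
print(gmul(u, v))            # (11, 0, 0, 2): dot product 11 plus the bivector 2*e12
```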

Hestenes makes a convincing case that geometric algebra should be incorporated into every physics curriculum. He has written some excellent textbooks on the subject, and thankfully numerous other authors have picked up the mantle (outstanding are John W. Arthur’s take on electrodynamics and Chris Doran’s ambitious and extensive treatment).

The advantages of geometric algebra are so glaring, and the concepts so natural, that one has to wonder why it took a century for it to be rediscovered. John Snygg puts it best in the preface to his textbook on differential geometry:

Although Clifford was recognized worldwide as one of England’s most distinguished mathematicians, he chose to have the first paper published in what must have been a very obscure journal at the time. Quite possibly it was a gesture of support for the efforts of James Joseph Sylvester to establish the first American graduate program in mathematics at Johns Hopkins University. As part of his endeavors, Sylvester founded the American Journal of Mathematics and Clifford’s first paper on what is now known as Clifford algebra appeared in the very first volume of that journal.

The second paper was published after his death in unfinished form as part of his collected papers. Both of these papers were ignored and soon forgotten. As late as 1923, math historian David Eugene Smith discussed Clifford’s achievements without mentioning “geometric algebra” (Smith, David Eugene 1923). In 1928, P.A.M. Dirac reinvented Clifford algebra to formulate his equation for the electron. This equation enabled him to predict the discovery of the positron in 1931. (…)

Had Clifford lived longer, “geometric algebra” would probably have become mainstream mathematics near the beginning of the twentieth century. In the decades following Clifford’s death, a battle broke out between those who wanted to use quaternions to do physics and geometry and those who wanted to use vectors. Quaternions were superior for dealing with rotations, but they are useless in dimensions higher than three or four without grafting on some extra structure.

Eventually vectors won out. Since the structure of both quaternions and vectors are contained in the formalism of Clifford algebra, the debate would have taken a different direction had Clifford lived longer. While alive, Clifford was an articulate spokesman and his writing for popular consumption still gets published from time to time. Had Clifford participated in the quaternion–vector debate, “geometric algebra” would have received more serious consideration.