All posts by Henning Dekant

Classic Quantum Confusion

By now I am pretty used to egregiously misleading summarization of physics research in popular science outlets, sometimes fanned by the researchers themselves. Self-aggrandizing, ignorant papers sneaked into supposedly peer-reviewed journals by non-physicists are also just par for the course.

But this is in a class of its own.  Given the headline and the introductory statement that “a fully classical system behaves like a true quantum computer“, it essentially creates the impression that QC research must be pointless. Much later it sneaks in the obvious: an analog emulation, just like one on a regular computer, can’t possibly scale past about 40 qubits due to the exponential growth in required computational resources.
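To see why roughly 40 qubits is where brute-force emulation hits a wall, it is enough to tally the memory needed to hold a full state vector of 2^n complex amplitudes. A minimal back-of-the-envelope sketch, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory needed to store a full n-qubit state vector: 2**n complex amplitudes,
# assuming 16 bytes each (double-precision real and imaginary parts).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (20, 30, 40, 50):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**30:,.3f} GiB")

# 20 qubits fit in a laptop (~16 MiB), 30 need ~16 GiB, 40 already demand
# ~16 TiB, and every additional qubit doubles the requirement from there.
```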

But that’s not the most irritating aspect of this article.

Don’t get me wrong, I am a big fan of classical analogs of quantum systems. I think they can be very educational, if you know what you are looking at (Spreeuw 1998).  The latter paper is actually quoted by the authors, and it is very precise in distinguishing between quantum entanglement and its classical analog. But that’s not what their otherwise fine paper posits (La Cour et al. 2015).  The authors write:

“What we can say is that, aside from the limits on scale, a classical emulation of a quantum computer is capable of exploiting the same quantum phenomena as that of a true quantum system for solving computational problems.”

If it weren’t for the phys.org reporting, I would put this down as sloppy wording that slipped past peer review, but if the authors are correctly quoted, then they indeed labour under the assumption that they faithfully recreated quantum entanglement in their classical analog computer, mistaking the model for the real thing.

It makes for a funny juxtaposition on phys.org though, when filtering by ‘quantum physics’ news.

[Screenshot: phys.org news feed filtered by ‘quantum physics’, showing the two articles side by side]

The second article refers to a new realization of Wheeler’s delayed choice experiment (where the non-local entanglement across space is essentially swapped for one across time).

If one takes Brian La Cour at his word, then, according to his other paper, he suggests that these kinds of phenomena should also have a classical analog.

So it’s not just hand-waving when he makes this rather outlandish-sounding statement about being able to achieve an analog to the violation of Bell’s inequality:

“We believe that, by adding an emulation of quantum noise to the signal, our device would be capable of exhibiting this type of [Bell’s inequality violating] entanglement as well, as described in another recent publication.”

Of course talk is cheap, but if this research group could actually demonstrate such a classical analog of a Bell inequality violation, it certainly could change the conversation.
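For readers who want to see what is at stake here, the CHSH form of Bell’s inequality can be checked numerically: any local hidden-variable strategy is capped at |S| ≤ 2, while quantum correlations on a singlet state reach 2√2. A minimal sketch using the standard textbook measurement angles (not the authors’ setup):

```python
# CHSH check: local hidden-variable models give |S| <= 2, while the quantum
# singlet reaches 2*sqrt(2) (Tsirelson's bound). Textbook angles below.
import itertools
import numpy as np

a, a2 = 0.0, np.pi / 2            # Alice's two measurement angles
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement angles

def E(x, y):
    """Correlation E(x, y) = -cos(x - y) for the spin singlet."""
    return -np.cos(x - y)

S_quantum = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Brute force over all deterministic local strategies: each side pre-assigns
# an outcome (+1 or -1) to each of its two settings.
S_classical = max(
    abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
    for A1, A2, B1, B2 in itertools.product((-1, 1), repeat=4)
)

print(f"classical bound: {S_classical}")           # 2
print(f"quantum singlet: {abs(S_quantum):.3f}")    # 2.828
```

The interesting question is exactly what part of this gap a classical analog machine with added “emulated quantum noise” could reproduce, and at what cost in resources.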

Will Super Cool SQUIDs Make for an Emerging Industry Standard?

[Figure: D-Wave temperature scale, logarithmic]
This older logarithmic (!) D-Wave graphic gives an idea of how extreme the cooling requirement is for SQUID-based QC (it used to be part of a really cool SVG animation, but unfortunately D-Wave no longer hosts it).
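To put the temperature scale in perspective, here is a quick comparison with a couple of familiar reference points; the ~15 mK operating point is a ballpark figure that has been quoted for D-Wave’s fridge and is used here purely for illustration:

```python
# Orders of magnitude between everyday temperatures and a SQUID chip's
# operating point. The 0.015 K figure is an assumed ballpark, not a spec.
temperatures_K = {
    "room temperature": 293.0,
    "liquid helium": 4.2,
    "deep space (CMB)": 2.7,
    "SQUID chip (approx.)": 0.015,
}

for label, T in temperatures_K.items():
    print(f"{label:22s} {T:8.3f} K   (room temperature / T = {293.0 / T:,.0f}x)")
```

The chip ends up colder than deep space by two orders of magnitude, which is why the cooling and shielding engineering is such a big part of the story.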

D‑Wave had to break new ground in many engineering disciplines.  One of them was the cooling and shielding technology required to operate their chip.

To this end they are now using ANSYS software, which of course makes for very good marketing for this company (h/t Sol Warda). So good, in fact, that I would hope D‑Wave negotiated a large discount for serving as an ANSYS reference customer.

Any SQUID based quantum computing chip will have similar cooling and shielding requirements, i.e. Google and IBM will have to go through a similar kind of rigorous engineering exercise to productize their approach to quantum computing, even though this approach may look quite different.

Until recently, it would have been easy to forget that IBM is another contender in the ring for SQUID-based quantum computing, yet the company’s researchers have been working diligently outside the limelight; they last created headlines three years ago. And unlike other quantum computing news, which often only touts marginal improvements, their recent results deserved to be called a breakthrough, as they improved upon the kind of hardware error correction that Google is betting on.

[Image: “IBM” spelled out in xenon atoms]
IBM has been conducting fundamental quantum technology research for a long time; this image shows the company’s name spelled out using 35 xenon atoms, arranged via a scanning tunneling microscope (a nano-scale visualization and manipulation device invented at IBM).

Obviously, the better your error correction, the more likely you will be able to achieve quantum speed-up when you pursue an annealing architecture like D‑Wave’s, but IBM is not after yet another annealer. Most articles on the IBM program report that IBM is intent on building a “real quantum computer”, and the term clearly originates from within the company (e.g. this article attributes it to scientists at IBM Research in Yorktown Heights, NY). This leaves little doubt about their commitment to universal gate-based QC.

The difference in strategy is dramatic. D‑Wave decided to forgo surface code error correction on the chip in order to get a device to market.  Google, on the other hand, decided to snap up the best academic surface code implementation money could buy, while also emphasizing speed-to-market by first going for another quantum adiabatic design.

All the while, IBM researchers first diligently worked through the stability of SQUID-based qubits.  Even now, having achieved the best available error correction, they clearly signaled that they don’t consider it good enough for scale-up. It may take yet another three years for them to find the optimal number and configuration of logical qubits that achieves the kind of fidelity they need before tackling an actual chip.

It is a very methodical engineering approach. Once the smallest building block is perfected, they will have the confidence to go for the moonshot. It’s also an approach that only a company with very deep pockets can afford, one with a culture that allows for the pursuit of a decades-long research program.

Despite the differences, in the end all SQUID-based chips will have to be operated very close to absolute zero.  IBM’s error correction may eventually give it a leg up over the competition, but I doubt that standard liquid helium fridge technology will suffice for a chip that implements dozens or hundreds of qubits.

By the time IBM enters the market, there will be more early adopters of the D‑Wave and Google chips, and the co-opetition between these two companies may have given birth to an emerging industry standard for the fridge technology. In a sense, this may lower the barriers to entry for new quantum chips if a new entrant can leverage this existing infrastructure. It would probably be a first for IBM to cater to a chip interfacing standard that the company did not help to design.

So while there’s been plenty of news in the quantum computing hardware space to report, it is curious, and a sign of the times, that a recent Washington Post article on the matter opted to headline with a quantum computing software company, QxBranch. (Robert R. Tucci channeled the journalists at the WP when he wrote last week that the IBM news bodes well for software start-ups in this space.)

While tech and business journalists may not (and may possibly never) understand what makes a quantum computer tick, they understand perfectly well that any computing device is just dead weight without software, and that the latter will make the value proposition necessary to create a market for these new machines.

We need Big Data where it will actually make a difference

[Image: temples destroyed by the earthquake in Kathmandu]

Another earthquake took the lives of many thousands. As I am writing this blog post, scores of survivors will still be trapped underneath debris and rubble.

It will take weeks, if not months, before the damage done to Nepal becomes fully apparent, in terms of life and limb but also economically and spiritually.

The world’s poorest regions are often hardest hit because resilient structures that can withstand quakes of this magnitude are expensive.

Governments look to science to provide better earthquake warnings, but the progress of geophysical modeling is hampered by the lack of high-quality data.

In this context, pushing the limits of remote sensing with new technologies such as quantum gravimeters becomes a matter of life and death, and it should make it apparent that striving for ever more precise quantum clocks is anything but a vanity chase. After all, we are just now closing in on the level of accuracy needed to perform relativistic geodesy.
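The connection between clock precision and geodesy is a one-line formula: two clocks at different heights tick at rates that differ by roughly gΔh/c². A quick sanity check, taking surface gravity as 9.81 m/s²:

```python
# Gravitational redshift between two clocks separated in height:
# fractional frequency shift ~ g * dh / c^2.
G_SURFACE = 9.81        # m/s^2, approximate surface gravity
C = 299_792_458.0       # m/s, speed of light

def fractional_shift(delta_h_m: float) -> float:
    return G_SURFACE * delta_h_m / C**2

print(f"per metre:      {fractional_shift(1.0):.2e}")   # ~1.1e-16
print(f"per centimetre: {fractional_shift(0.01):.2e}")  # ~1.1e-18

# A clock stable at the 1e-18 level can therefore resolve centimetre-scale
# height differences in the geoid, which is what relativistic geodesy exploits.
```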

It goes without saying that the resource extraction industry will be among the first to profit from these new techniques.  While this industry has an image problem due to its less-than-stellar environmental track record, there’s no denying that anything driving the rapid and ongoing productization of these technologies is a net positive if it makes them affordable and widely accessible to geophysicists who study the dynamics of active fault lines. Acquiring this kind of big data is the only chance to ever achieve a future in which our planet no longer shocks us with its deadly geological force.

How many social networks do you need?

The proliferation of social networks seems unstoppable now. Even the big ones can no longer be counted on one hand: Facebook, LinkedIn, GooglePlus, Twitter, Instagram, Tumblr, Pinterest, Snapchat – I am so uncool I didn’t even know about the last one until very recently. It seems there has to be a natural saturation point, with diminishing marginal returns from signing up to yet another one, but apparently we are still far from it.

Recently via LinkedIn I learned about a targeted social network that I happily signed up for, which is quite against my character (i.e. I still don’t have a Facebook account).

[Logo: International Quantum Exchange for Innovation (iQEi)]
Free to join and no strings attached. (This targeted social network is not motivated by a desire to monetize your social graph.)

The aptly named International Quantum Exchange for Innovation is a social network set up by DK Matai with the express purpose of bringing together people from all walks of life, anywhere on the globe, who are interested in the next wave of the coming Quantum Technology revolution. If you are as interested in this as I am, then joining this UN of Quantum Technology, as DK puts it, is a no-brainer.

The term ‘revolution’ is often carelessly thrown around, but in this case, when it comes to the new wave of quantum technologies, I think it is more than justified. After all, the first wave of QM-driven technologies powered the second leg of the Industrial Revolution. It started with a bang, in the worst possible manner, when the first nuclear bomb ignited, but the new insights gained led to a plethora of new high-tech products.

Quantum physics was instrumental in everything from solar cells to lasers to medical imaging (e.g. MRI), and of course, first and foremost, the transistor. As computers became more powerful, quantum chemistry coalesced into an actual field, feeding on the ever-increasing computational power. Yet Moore’s law proved hardly adequate for its insatiable appetite for the compute cycles required by the underlying quantum numerics.

During his (too short) lifespan, Richard Feynman was involved in military as well as civilian applications of quantum mechanics, and his famous “There’s Plenty of Room at the Bottom” talk can be read as a programmatic outline of the first Quantum Technology revolution.  This QT 1.0 wave has almost run its course. We made our way to the bottom, but there we encountered entirely new possibilities by exploiting the essential, counter-intuitive non-localities of quantum mechanics.  This takes things to the next step, and again information technology is at the forefront. It is a testament to Feynman’s brilliance that he anticipated QT 2.0 as well, when he suggested a quantum simulator for the first time, much along the lines of what D-Wave built.

It is apt and promising that the new wave of quantum technology does not start with a destructive big bang, but with an intriguing and controversial black box.


Dumbing Down for Smartphones

Google changed its site ranking: if a site is not mobile friendly, it will now be heavily penalized. I was quite fond of my old design, but it failed the Google Mobile test miserably.  Hence a hasty redesign based on a newer WordPress theme was in order.

[Screenshot of the old blog theme]
Goodbye my beloved theme, Google and that nasty smartphone killed you.


Quantum Computing Road Map

[Image caption: No, we are not there yet, but we are working on it.]

Qubit spin states in diamond defects don’t last forever, but they can last remarkably long even at room temperature (measured in microseconds, which is a long time when it comes to computing).
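To get a feel for why microseconds count as “long”, compare the coherence window to a typical gate duration. Both numbers below are illustrative assumptions (gate and coherence times vary a lot between systems and techniques), not measured values for any particular diamond-defect qubit:

```python
# Rough feel for a coherence budget: how many gates fit into the window
# before decoherence sets in. Both figures are illustrative assumptions.
coherence_time_s = 2e-6   # 2 microseconds, an assumed coherence time
gate_time_s = 50e-9       # 50 nanoseconds per gate, an assumed gate duration

print(f"~{coherence_time_s / gate_time_s:.0f} gate operations per coherence window")

# If techniques such as dynamical decoupling push coherence toward
# milliseconds, the same arithmetic yields tens of thousands of operations.
```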

So this is yet another interesting system added to the list of candidates for potential QC hardware.

Nevertheless, when it comes to the realization of scalable quantum computers, qubit decoherence times may very well be eclipsed in importance by another time span: 20 years, the length of time for which patents are valid (in the US this can include software algorithms).

With D-Wave and Google leading the way, we may be getting there faster than most industry experts predicted. Certainly the odds are very high that it won’t take another two decades for usable universal QC machines to be built.

But how do we get to the point of bootstrapping a new quantum technology industry? DK Matai addressed this in a recent blog post and identified five key questions, which I attempt to address below (I took the liberty of slightly abbreviating the questions; please check the link for the unabridged versions).

The challenges DK laid out will require much more than a blog post (or the LinkedIn comment that I recycled here), especially since his view extends beyond quantum information science alone. The following thoughts are therefore by no means comprehensive answers, and very much incomplete, but they may provide a starting point.

1. How do we prioritise the commercialisation of critical Quantum Technology 2.0 components, networks and entire systems both nationally and internationally?

The prioritization should be based on the disruptive potential: take quantum cryptography versus quantum computing, for example. Quantum encryption could stamp out fraud that exploits certain technical weaknesses, but it won’t address the more dominant social-engineering deceptions. On the upside, it would also facilitate ironclad cryptocurrencies. Yet, if Feynman’s vision of the universal quantum simulator comes to fruition, we will be able to tackle collective quantum dynamics that are computationally intractable on conventional computers. This encompasses everything from simulating high-temperature superconductivity to complex (bio-)chemical dynamics. ETH’s Matthias Troyer gave an excellent overview of these killer apps for quantum computing in his recent Google talk; I especially like his example of nitrogen fixation. Nature manages to accomplish this with minimal energy expenditure in some bacteria, but industrially we only have the century-old Haber-Bosch process, which in modern plants still results in half a ton of CO2 for each ton of NH3. If we could simulate and understand the chemical pathway that these bacteria follow, we could eliminate one of the major industrial sources of carbon dioxide.

2. Which financial, technological, scientific, industrial and infrastructure partners are the ideal co-creators to invent, to innovate and to deploy new Quantum technologies on a medium to large scale around the world? 

This will vary drastically by technology. To pick a basic example, a quantum clock per se is just a better clock, but put it into a Galileo/GPS satellite and the improvement in timekeeping will immediately translate into higher triangulation accuracy for locating a receiver, as well as allow for better mapping of the earth’s gravitational field and mass distribution.
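The link between timekeeping and positioning is direct: a ranging error is simply the speed of light multiplied by the timing error, so every order of magnitude gained on the clock translates straight into location accuracy. A trivial illustration:

```python
# Ranging error = speed of light * timing error.
C = 299_792_458.0  # m/s

for timing_error_s in (1e-9, 1e-10, 1e-12):
    print(f"{timing_error_s:.0e} s clock error -> "
          f"{C * timing_error_s:8.4f} m ranging error")

# 1 ns corresponds to ~30 cm, 100 ps to ~3 cm, 1 ps to ~0.3 mm.
```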

3. What is the process to prioritise investment, marketing and sales in Quantum Technologies to create the billion dollar “killer apps”?

As sketched out above, the real prize to me is universal quantum computation/simulation. Considerable effort has to go into building such machines, but that doesn’t mean you cannot already start developing software for them. Any coding for new quantum platforms, even ones that are already here (as in the case of the D-Wave 2), will involve emulators on classical hardware, because you want to debug and prove out your code before submitting it to the more expensive quantum resource. In my mind, building such an environment in a collaborative fashion to showcase and develop quantum algorithms should be the first step. To me this appears feasible on an accelerated timescale (months rather than years). I think such an effort is critical to offset the closed-source and tightly license-controlled approach that, for instance, Microsoft is following with its development of the LIQUi|> platform.
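What such a classical emulator boils down to is little more than linear algebra on a state vector. The toy sketch below prepares a Bell state with a Hadamard and a CNOT; it is a generic illustration only, not the API of LIQUi|>, D-Wave, or any other platform mentioned here:

```python
# Minimal state-vector emulation: prepare a Bell state on two qubits.
# Generic sketch for illustration only, not any platform's actual API.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                       # start in |00>
state = np.kron(H, I) @ state        # Hadamard on the first qubit
state = CNOT @ state                 # entangle -> (|00> + |11>) / sqrt(2)

print(np.round(state, 3))                # amplitudes: [0.707 0. 0. 0.707]
print(np.round(np.abs(state) ** 2, 3))   # measurement probabilities
```

Debugging against something like this is cheap; every run on the actual quantum resource is not.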

4. How do the government agencies, funding Quantum Tech 2.0 Research and Development in the hundreds of millions each year, see further light so that funding can be directed to specific commercial goals with specific commercial end users in mind?

This to me seems to be the biggest challenge. The number of research papers produced in this field is enormous, and much of it is computational theory. While the theory has its merits, I think government funding should emphasize programs that have a clearly defined agenda towards ambitious yet attainable goals: research that will result in actual hardware and/or commercially applicable software implementations (e.g. the UCSB Martinis agenda). Yet governments shouldn’t be in the position of picking a winning design, as was inadvertently done for fusion research, where ITER’s funding requirements are now crowding out all other approaches. The latter is a template for how not to go about it.

5. How to create an International Quantum Tech 2.0 Super Exchange that brings together all the global centres of excellence, as well as all the relevant financiers, customers and commercial partners to create Quantum “Killer Apps”?

On a grassroots level, I think open source initiatives (e.g. a LIQUi|> alternative) could become catalysts that bring academic centers of excellence and commercial players into alignment. This, at least, is my impression based on conversations with several people involved in the commercial and academic realms. On the other hand, as with any open source product, commercialization won’t be easy, yet this may be less of a concern in this emerging industry, as the IP will be in the quantum algorithms, and they will most likely be executed on quantum resources tied to a SaaS offering.


Who is watching the watchmen, er, gatekeepers?

[Image caption: The Technological Singularity won’t happen without sound science.]

Almost every human activity in the first world has been impacted by technology. Our means of production have been fundamentally altered, and while creating enormous wealth, the changes have often been disruptive, painfully so at times. As this ongoing transformation accelerates, “business as usual” has become an oxymoron.

Paradoxically, while science is the foundation of all our technological progress, it is like the eye at the center of the storm: the academic mode of operation has hardly changed over the last two centuries. And why not? An argument could be made not to fix what isn’t broken. For instance, you sometimes hear the scientific process compared to the Open Source movement, the argument being that both strive for a transparent meritocracy where openness ensures that mistakes will not survive for long. Unfortunately, this idealized view is a fallacy on more than one count.

There are lots of good reasons for the Open Source coding paradigm, but it does not give license to forgo quality control and code review, as, for instance, the Heartbleed bug or the recent widespread (and ancient!) bash vulnerability illustrated.

On the other hand, the scientific peer review process is not anything like the open communication that goes on in public forums and email lists of Open Source software like the Linux kernel. Peer review is completely closed off from public scrutiny, yet determines what enters the scientific discourse in the first place.

The main medium for communicating scientific results remains the publication of papers in scientific journals, some of which charge outrageous subscription fees that shut out poorer institutions and most individuals. But this is far from the worst impediment to open scientific exchange. Rather, it is the anonymous peer review process itself, which is by design not public. Authors are often given opportunities to correct a paper by resubmitting if the original submission is rejected, but ultimately the peer reviewers serve as gatekeepers.

For a discipline that is the foundation of all our technology, the knowledge-generating process of science has remained surprisingly untouched, and due to its built-in anonymity, it has also managed to escape any scrutiny. That would be all fine and well if it were actually working. But it is not. We know little about stellar papers that may have been rejected and now linger forever in obscurity in some pre-print archive, but we know all about the trash that passed final approval for publication. For instance, we get sensational headlines like this, promising an entirely new take on dark energy, but if you actually read up on it you realize that the author of the underlying paper, who is just referred to as a University of Georgia professor, is actually not a physicist but a biologist. Now, far be it from me to discourage a biologist from wanting to do physics, but if you bother to read his paper on the matter you will quickly realize that the man apparently doesn’t grasp the basics of general and special relativity. Yet this was published in PLOS One, which supposedly follows a rigorous peer review process. They even bothered to issue a correction to an article that is just pseudo-science. Mind-boggling.

Now you may think: well, this is PLOS One; although it is a leading open-access journal, its business model surely means it cannot pay decent money for peer review. Surely more prestigious journals, published by an organization known for its ludicrous subscription prices, such as Elsevier, will have a much more rigorous peer review process. Unfortunately, you would be wrong. May I present this Rainbow and Unicorns Gravity paper. It has, of course, caused the predictable splash in the mainstream media. The paper should never have been published in this form. You don’t have to take my word for it; you can read up on it in detail on the blog of Sabine Hossenfelder, whose 2004 paper on black hole production the authors listed as a reference. When challenged to write up a criticism to submit to the same journal, Sabine didn’t mince words:

This would be an entire waste of time. See, this paper is one of hundreds of papers that have been published on this and similar nonsense, and it is admittedly not even a particularly bad one. Most of the papers on the topic are far worse that that. I have already tried to address these problems by publishing this paper which explicitly rules out all models that give rise to a modified dispersion relation of the same type that the authors use. But look, it doesn’t make any difference. The model is ruled out – you’d think that’s the end of the story. But that isn’t how science works these days. People continue to publish papers on this by just ignoring it. They don’t even claim there is something wrong with my argument, they just ignore it and continue with their nonsense.

I have wasted enough time on this. There is something really, really going wrong with theoretical physics and this is only one indication for it.

Later in the comment thread she also had this to say:

I have had to talk to students who work on related things (not exactly the same thing) and who were never told that there are any problems with this idea. Even though I know for sure their supervisor knows. Even though I have published half a dozen of comments and papers explicitly explaining what the problems are. Honestly, it’s things like this that make me want to leave academia. This just isn’t research any more this is only sales strategies and networking.

The problem goes beyond peer review and comes down to accountability, but because peer review is anonymous by design, it is especially easy to corrupt, and I know of cases that resulted in exactly what Sabine spelled out: top talent leaving academia and theoretical physics. The more I look into this, the more I believe that this process at the heart of the scientific endeavour is fundamentally broken and urgently needs fixing.

Outside the world of physics, failures in peer review can have egregious consequences, such as the anti-vaccination hysteria caused by a peer-reviewed (and now completely discredited) article linking vaccinations to autism. Although this was debunked, and the scandal was prominently featured in a 2011 BBC documentary, the damage was done. It is much easier to get away with such shoddy work in theoretical physics.


Quantum Computing Coming of Age

Are We There Yet? That’s the name of the talk that Daniel Lidar recently gave at Google (h/t Sol Warda, who posted this in a previous comment).

Spoiler alert: I will summarize some of the most interesting aspects of this talk, as I finally found the time to watch it in its entirety.

You may skip the first 15 minutes if you follow this blog; he just gives a quick introduction to QC. In fact, if you follow the discussion on this blog closely, you will not find much news in most of the presentation until the very end, but I very much appreciated the graph 8 minutes in, which is based on this Intel data:

[Chart: CPU performance and clock speed over time, based on Intel data]
Performance and clock speeds have been essentially flat for the last ten years. Only the ability to squeeze more transistors and cores onto one chip keeps Moore’s law alive (data source: Intel Corp.).

Daniel deservedly spends quite some time on this, to drive home the point that classical chips have hit a wall. Moving from silicon to germanium will only go so far in delaying the inevitable.

If you don’t want to sit through the entire talk, I recommend skipping ahead to the 48-minute mark, when error correction on the D-Wave is discussed. The results are very encouraging, and in the Q&A Daniel points out that this EC scheme could be inherently incorporated into the D-Wave design. I wouldn’t be surprised to see this happen fairly soon. The details of the ECC scheme are available on arXiv, and Daniel spends some time on the graph shown below. He points out that, to the extent that you can infer a slope, it looks very promising: it gets flatter as the problems get harder, and the gap between non-ECC and error-corrected annealing widens (solid vs. dashed lines). With ECC, I would therefore expect D-Wave machines to systematically outperform simulated annealing.

Number of repetitions to find a solution at least once.

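As I read the arXiv paper, each logical qubit is encoded redundantly in several physical qubits (plus a penalty coupling on the chip), and the readout is decided by majority vote. The toy sketch below only illustrates that classical decoding step under an assumed independent-flip noise model; it does not capture the actual penalty-Hamiltonian construction:

```python
# Toy illustration of majority-vote decoding over redundant physical qubits.
# Assumes independent readout flips; the real scheme also adds energy
# penalties on-chip, which this sketch does not model.
import random
from collections import Counter

def majority_vote(bits):
    """Decode one logical bit from an odd number of physical copies."""
    return Counter(bits).most_common(1)[0][0]

def logical_error_rate(p_flip, copies=3, trials=100_000, seed=1):
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        # the true logical value is 0; each copy flips independently
        readout = [1 if rng.random() < p_flip else 0 for _ in range(copies)]
        errors += majority_vote(readout)
    return errors / trials

for p in (0.05, 0.10, 0.20):
    print(f"physical error {p:.2f} -> logical error {logical_error_rate(p):.4f}")

# As long as the physical error rate stays below 50%, redundancy plus
# majority vote suppresses the logical error rate.
```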
Daniel sums up the talk like this:

  1. Is the D-Wave device a quantum annealer?
    • It disagrees with all classical models proposed so far. It also exhibits entanglement. (I.e. yes, as far as we can tell.)
  2. Does it implement a programmable Ising model in a transverse field and solve optimization problems as promised?
    • Yes.
  3. Is there a quantum speedup?
    • Too early to tell.
  4. Can we error-correct it and improve its performance?
    • Yes.

With regard to hardware-implemented qubit ECC, we also got some great news from Martinis’ UCSB lab, which Google drafted for its quantum chip. The latest results have just been published in Nature (pre-print available on arXiv).

Martinis explained the concept in a talk I previously reported on, and clearly the work is progressing nicely. Unlike the ECC scheme for the D-Wave architecture, Martinis’ approach targets a fidelity that will not only work for quantum annealing but should also allow for gate-model computation of non-trivial size.

Quantum computing may not have fully arrived yet, but after decades of research we are clearly entering the stage where this technology will no longer be just the domain of theorists and research labs, and at this point D-Wave is poised to take the lead.


Nuclear Confusion Versus Dead-End Certainty?

While I am working on my next blog post, this excellent update on the state of fusion research from Polywell Blog author John Smith shouldn’t go unnoticed.  He makes a strong case that the US is neglecting promising avenues towards self-sustained nuclear fusion as ITER’s cost keeps skyrocketing.  This echoes a similar sentiment that I heard when visiting General Fusion. Nevertheless, I think quitting ITER completely, as John recommends, is unwise.

The US already has only observer status at CERN, so bailing on ITER would sideline the American physics community even more. Despite the cost overruns, and irrespective of its commercialisation prospects, ITER will make for one of the most advanced testbeds for plasma physics.  Should the US really shut itself out of prime access to this machine once it is operational?

John’s post provides an excellent round-up of the various approaches to fusion and mentions the damage that cold fusion inflicted on the field, a story that deserves a separate article. But there is another plasma phenomenon that some hope could be exploited for nuclear fusion, and it goes unmentioned in John’s otherwise exhaustive post. It shares some commonality with the dubious cold fusion experiments: abysmally bad replicability that severely damaged the reputation of one of the lead researchers in the field. This speculative approach to fusion was recently featured prominently in a surprisingly well-researched Gawker article (h/t Ed B.). It mentions some private outfits that are hanging their hats on sonoluminescence, and since the latter phenomenon is, after all, an actual plasma-creating micro-cavitation effect, these companies don’t deserve to be lumped in with the shadier cold fusion hustlers.

However, it is quite apparent that none of these can produce neutrons at a significant rate, unlike Phoenix Nuclear Lab’s High Yield Neutron Generator, an already commercially valuable technology. So there clearly is not much reason to get too excited about sonoluminescence unless one of the companies invested in this approach can replicate that feat.

[Image: Phoenix Nuclear Lab’s High Yield Neutron Generator]
Phoenix Nuclear Lab’s High Yield Neutron Generator, a piece of fusion technology you can buy today. It offers a much cleaner and less messy source of neutrons than any fission-based approach (and it avoids the proliferation headaches that come with the latter).

On balance, the influx of private money into nuclear fusion start-ups is the story here, one that gives hope that humanity may find a way to break its self-defeating fossil fuel habit within our lifetime.


The Year That Was <insert expletive of your choice>

Usually, I like to start a new year on an upbeat note, but this time I just cannot find the right fit. I considered revisiting technology that can clean water; lauding the efforts of the Bill Gates foundation came to mind, but while I think this is a great step in the right direction, this water-reclaiming technology is still a bit too complex and expensive to become truly transformational and liberating.

At other times, groundbreaking progress in increasing the efficiency of solar energy would have qualified, the key being that this can be done comparatively cheaply. Alas, the unprecedented drop in the price of oil is killing off not only the fracking industry but also the economics of alternative energy.  For a planet that has had its fill of CO2, fossil fuel this cheap is nothing but an unmitigated disaster.

So while it was a banner year for quantum computing, in many respects 2014 was utterly dismal, seeing the return of religiously motivated genocide, open warfare in Europe, a resurgence of diseases that could have been eradicated by now, and a pandemic that caused knee-jerk hysterical reactions and taught us how unprepared we are for these kinds of health emergencies. The year was so depressing it makes me want to wail along to my favorite science blogger’s song about it (but then again I’d completely ruin it).

And there is another reason not to let go of the past just yet: corrections.

With these corrections out of the way, I will finally let go of 2014, but with the additional observation that, in the world of quantum computing, the new year started very much in the same vein as the old: positive business news for D-Wave, which just managed to raise another 29 million dollars, while still not getting respect from some academic QC researchers.

I.H. Deutsch (please note, not the Deutsch but Ivan) states at the end of this interview:

  1. “The D-Wave prototype is not a universal quantum computer.
  2. It is not digital, nor error-correcting, nor fault tolerant.
  3. It is a purely analog machine designed to solve a particular optimization problem.
  4. It is unclear if it qualifies as a quantum device.”

No issues with points 1 to 3.  But how many more times do classical algorithms have to be ruled out before D-Wave is finally universally accepted as a quantum annealing machine?  This is getting into climate-change-denial territory. It shouldn’t really be that hard to define what makes for quantum computation. So I guess we have found a new candidate for D-Wave chief critic, now that Scott Aaronson seems to have stepped down for good.

Then again, with a last name like Deutsch, you may have to step up your game to get some name recognition of your own in this field.  And there’s no doubt that controversy works.

So 2015 is shaping up to become yet another riveting year for QC news. And just in case you made the resolution that, this year, you will finally try to catch that rainbow, there’s some new tech for you.
[Image source: chaosgiant.deviantart.com]


Update: I almost forgot about this epic fail of popular science reporting at the tail end of 2014.  For now I leave it as an exercise for the reader to spot everything that’s wrong with it. Of course, most of the blame belongs to PLOS One, which supposedly practices peer review.