
Will Super Cool SQUIDs Make for an Emerging Industry Standard?

This older logarithmic (!) D-Wave graphic gives an idea of how extreme the cooling requirement is for SQUID-based QC (it used to be part of a really cool SVG animation, but unfortunately D-Wave no longer hosts it).

D‑Wave had to break new ground in many engineering disciplines.  One of them was the cooling and shielding technology required to operate their chip.

To this end they are now using ANSYS software, which of course makes for very good marketing for that software vendor (h/t Sol Warda). So good, in fact, that I would hope D‑Wave negotiated a large discount for serving as an ANSYS reference customer.

Any SQUID-based quantum computing chip will have similar cooling and shielding requirements, i.e. Google and IBM will have to go through a similarly rigorous engineering exercise to productize their approaches to quantum computing, even though those approaches may look quite different.

Until recently, it would have been easy to forget that IBM is another contender in the ring for SQUID-based quantum computing, yet the company’s researchers have been working diligently outside the limelight – they last created headlines three years ago. And unlike other quantum computing news, which often only touts marginal improvements, their recent results deserved to be called a breakthrough, as they improved upon the kind of hardware error correction that Google is betting on.

IBM has been conducting fundamental quantum technology research for a long time; this image shows the company’s name spelled out using 35 xenon atoms, arranged via a scanning tunneling microscope (a nano-scale visualization and manipulation device invented at IBM).

Obviously, the better your error correction, the more likely you are to achieve quantum speed-up when you pursue an annealing architecture like D‑Wave’s, but IBM is not after yet another annealer. Most articles on the IBM program report that IBM is out to build a “real quantum computer”, and the term clearly originates from within the company (e.g. this article attributes it to scientists at IBM Research in Yorktown Heights, NY). This leaves little doubt about their commitment to universal gate-based QC.

The difference in strategy is dramatic. D‑Wave decided to forgo surface code error correction on the chip in order to get a device to market. Google, on the other hand, decided to snap up the best academic surface code implementation money could buy, while also emphasizing speed-to-market by first going for another quantum adiabatic design.

All the while, IBM researchers first diligently worked through the stability of SQUID-based qubits. Even now, having achieved the best available error correction, they clearly signal that they don’t consider it good enough for scale-up. It may take yet another three years for them to find the optimal number and configuration of logical qubits that achieves the kind of fidelity they need before tackling an actual chip.

It is a very methodical engineering approach. Once the smallest building block is perfected, they will have the confidence to go for the moonshot. It’s also an approach that only a company with very deep pockets can afford, one with a culture that allows for the pursuit of a decades-long research program.

Despite the differences, in the end all SQUID-based chips will have to be operated very close to absolute zero. IBM’s error correction may eventually give it a leg up over the competition, but I doubt that standard liquid helium fridge technology will suffice for a chip that implements dozens or hundreds of qubits.

By the time IBM enters the market there will be more early adopters of the D‑Wave and Google chips, and the co-opetition between these two companies may have given birth to an emerging industry standard for the fridge technology. In a sense, this may lower the barriers to entry for new quantum chips if a new entrant can leverage the existing infrastructure. It would probably be a first for IBM to cater to a chip-interfacing standard that the company did not help design.

So while there’s been plenty of news in the quantum computing hardware space to report, it is curious, and a sign of the times, that a recent Washington Post article on the matter opted to headline with a quantum computing software company, QxBranch. (Robert R. Tucci channeled the journalists at the WP when he wrote last week that the IBM news bodes well for software start-ups in this space.)

While tech and business journalists may not (and may possibly never) understand what makes a quantum computer tick, they understand perfectly well that any computing device is just dead weight without software, and that the latter will make the value proposition necessary to create a market for these new machines.


Quantum Computing for the Rest of Us? – UPDATED


Everybody who is following the quantum computing story will have heard by now about IBM’s new chip. This certainly gives credence to the assumption that superconducting Josephson junction-based technology will win the race. This may seem like bad news for everybody who was hoping to be able to tuck away a quantum computer under his or her desk some day.

Commercial cryogenic cooling has come a long way, but shrinking it in price and size to fit into a PC tower form factor is still a long way off. Compare and contrast this to liquid nitrogen cooling. With a high-temperature superconductor and some liquid nitrogen poured from a flask, any high school lab can demonstrate this neat Meissner effect:

Cooled with Liquid Nitrogen (a perfectly harmless fluid unless you get it on your clothes)

Good luck trying this with liquid helium (for one thing it may try to climb up the walls of your container – but that’s a different story).

Nitrogen cooling, on the other hand, is quite affordable (cheap enough to make the extreme CPU overclocking scene quite fond of it).

So why, then, are IBM and D-Wave making their chips from the classic low-temperature superconductor niobium? In part the answer is simple, pragmatic engineering: this metal allows you to adopt fabrication methods developed for silicon.

The more fundamental reason for the low temperature is to prevent decoherence or, to put it positively, to preserve pure entangled qubit states. The latter is especially critical for the IBM chip.

The interesting aspect of D-Wave’s more natural – but less universal – adiabatic quantum computing approach is that dedicated control of the entanglement of qubits is not necessary.

It would be quite instructive to know how the D-Wave chip performs as a function of temperature below Tc (~9 K). If the performance doesn’t degrade too much, maybe, just maybe, high-temperature superconductors are suitable for this design as well. After all, it has been shown that they can be used to realize Josephson junctions of high enough quality for SQUIDs. On the other hand, the new class of iron-based superconductors shows promise for easier manufacturing, but has a dramatically different energy band structure, so at this point it seems all bets are off on this new material.

So, not all is bleak (even without invoking the vision of topological QC, which deserves its own post). There is a sliver of hope for us quantum computing aficionados that even the superconducting approach might lead to an affordable machine one of these days.

If anybody with a more solid solid-state physics background than mine wants to either feed the flames of this hope or dash it, please make your mark in the comment section.

(h/t to Noel V. from the LinkedIn Quantum Computing Technology group and to Mahdi Rezaei for getting me thinking about this.)

Update:

Elena Tolkacheva from D-Wave was so kind as to alert me to the fact that my key question has been answered in a paper that the company published in the summer of 2010. It contains graphs that illustrate how the chip performs at three different temperatures: (a) T = 20 mK, (b) T = 35 mK, and (c) T = 50 mK. Note: these temperatures are far below niobium’s critical temperature of 9.2 K.

The probability of the system finding the ground state (i.e. the “correct” answer) clearly degrades with higher temperature – though, interestingly, not quite as badly as simulated. This puts a severe damper on my earlier expressed hope that this architecture might be suitable for high-temperature superconductors, since the degradation sets in so far away from Tc.
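To get a feel for why temperature matters so much even this far below Tc, here is a quick back-of-envelope sketch (my own toy calculation, not from the paper): compare the thermal energy k_B*T with a typical superconducting qubit energy splitting. The ~5 GHz gap below is an assumed, illustrative value, not a D-Wave specification.

# Back-of-envelope check (my own numbers, not from the paper): compare the
# thermal energy k_B*T with a typical superconducting qubit energy splitting.
# The ~5 GHz gap is an assumed, illustrative value, not a D-Wave specification.
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s

QUBIT_GAP_GHZ = 5.0   # assumed qubit energy splitting (illustrative only)

for label, temp_k in [("20 mK", 0.020), ("50 mK", 0.050),
                      ("4.2 K (liquid helium)", 4.2), ("77 K (liquid nitrogen)", 77.0)]:
    thermal_ghz = K_B * temp_k / H / 1e9                 # k_B*T expressed in GHz
    boltzmann = math.exp(-QUBIT_GAP_GHZ / thermal_ghz)   # relative weight of the excited state
    print(f"{label:>22}: k_B*T ~ {thermal_ghz:8.2f} GHz, exp(-E/k_BT) ~ {boltzmann:.1e}")

At 20 mK the excited state is strongly suppressed, but already at liquid helium temperature k_B*T dwarfs a GHz-scale gap, and at 77 K the Boltzmann suppression is essentially gone – which is the crux of the nitrogen-cooling problem.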

Kelly Loum, a member of the LinkedIn Quantum Physics group, helpfully pointed out that,

… you could gather results from many identical high temperature processors that were given the same input, and use a bit of statistical analysis and forward error correction to find the most likely output eigenstate of those that remained coherent long enough to complete the task.

… one problem here is that modern (classical, not quantum) FECs can fully correct errors only when the raw data has about a 1/50 bit error rate or better. So you’d need a LOT of processors (or runs) to integrate-out the noise down to a 1/50 BER, and then the statistical analysis on the classical side would be correspondingly massive. So my guess at the moment is you’d need liquid helium.
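To put a rough number on the “LOT of processors (or runs)” Kelly mentions, here is a small toy model (again my own illustration, not anything from the D-Wave paper, with a simple majority vote standing in for the FEC and statistical analysis he describes): if a single run returns the correct ground state with probability p, the number of runs needed before the vote is reliable explodes as p drops toward 1/2.

# Toy model (my own illustration, not from the D-Wave paper): repeat a noisy
# computation n times and take a simple majority vote among the runs.
# How many runs are needed before the vote is right 99% of the time,
# as a function of the single-run success probability p?
from math import comb

def majority_success(p, n):
    """Probability that more than half of n independent runs return the right answer."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

def runs_needed(p, target=0.99, n_max=100_001):
    """Smallest odd number of runs whose majority vote reaches the target confidence."""
    for n in range(1, n_max, 2):  # odd n avoids tied votes
        if majority_success(p, n) >= target:
            return n
    return None  # target not reached within n_max runs

for p in (0.95, 0.75, 0.60, 0.55):
    print(f"single-run success p = {p:.2f} -> runs needed: {runs_needed(p)}")

With a high single-run success probability a handful of repetitions suffice, but by p = 0.55 you already need hundreds of runs, which is the scaling worry behind Kelly’s back-of-the-envelope pessimism.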

The paper also discusses some sampling approaches along those lines, but clearly there is a limit to how far they can be taken. As much as I hate to admit it, I concur with Kelly’s conclusion. Unless there is some unknown fundamental mechanism that makes the decoherence dynamics of high-temperature superconductors fundamentally different, the D-Wave design does not seem a likely candidate for the nitrogen-cooling temperature regime. On the other hand, drastically changing the decoherence dynamics is the forte of topological computing, but that is a different story for a later post.