Everybody who is following the Quantum Computing story will have heard by now about IBM’s new chip. This certainly gives credence to the assumption that superconducting Josephson junction-based technology will win the race. This may seem like bad news for everybody who was hoping to be able to tuck away a quantum computer under his or her desk some day.
Commercial liquid helium cooling has come a long way, but shrinking its price and size to fit into a PC tower form factor remains far off. Compare and contrast this to liquid nitrogen cooling: with a high-temperature superconductor and some liquid nitrogen poured from a flask, any high school lab can demonstrate the neat Meissner effect.
Good luck trying this with liquid helium (for one thing it may try to climb up the walls of your container – but that’s a different story).
Liquid nitrogen cooling, on the other hand, is quite affordable (cheap enough that the extreme CPU overclocking scene is quite fond of it).
So why, then, are IBM and D-Wave making their chips from the classic low-temperature superconductor niobium? In part the answer is simple, pragmatic engineering: this metal allows you to adapt fabrication methods developed for silicon.
The more fundamental reason for the low temperature is to prevent decoherence or, to put it positively, to preserve pure entangled qubit states. The latter is especially critical for the IBM chip.
The interesting aspect of D-Wave’s more natural, but less universal, adiabatic quantum computing approach is that dedicated control of the entanglement of qubits is not necessary.
It would be quite instructive to know how the D-Wave chip performs as a function of temperature below Tc (~9 K). If the performance doesn’t degrade too much, then maybe, just maybe, high-temperature superconductors are suitable for this design as well. After all, it has been shown that they can be used to realize Josephson junctions of high enough quality for SQUIDs. On the other hand, the new class of iron-based superconductors shows promise for easier manufacturing but has a dramatically different energy band structure, so at this point all bets seem to be off on this new material.
So, not all is bleak (even without invoking the vision of topological QC that deserves its own post). There is a sliver of hope for us quantum computing aficionados that even the superconducting approach might lead to an affordable machine one of these days.
If anybody with a more solid solid-state physics background than mine wants to either feed the flames of this hope or dash it – please make your mark in the comment section.
(h/t to Noel V. from the LinkedIn Quantum Computing Technology group and to Mahdi Rezaei for getting me thinking about this.)
Update: Elena Tolkacheva from D-Wave was kind enough to alert me to the fact that my key question has been answered in a paper that the company published in the summer of 2010. It contains graphs that illustrate how the chip performs at three different temperatures: (a) T = 20 mK, (b) T = 35 mK, and (c) T = 50 mK. Note: these temperatures are far below niobium’s critical temperature of 9.2 K.
The probability of the system finding the ground state (i.e. the “correct” answer) clearly degrades with higher temperature – interestingly, though, not quite as badly as simulated. This puts a severe damper on my earlier expressed hope that this architecture might be suitable for high-temperature superconductors, since the degradation happens so far below Tc.
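To get a feel for why mK-scale temperatures matter at all, here is a back-of-the-envelope sketch (my own illustration, not from the paper): for a hypothetical two-level system in thermal equilibrium, the Boltzmann weight of the ground state collapses toward a coin flip once kT dwarfs the energy gap. The 1 GHz gap used below is an assumed, illustrative value, not a D-Wave specification.

```python
import math

# Physical constants
h = 6.62607015e-34    # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K

def ground_state_probability(gap_hz, temp_k):
    """Equilibrium ground-state weight of a two-level system:
    p = 1 / (1 + exp(-E/kT)), with gap E = h * f."""
    return 1.0 / (1.0 + math.exp(-h * gap_hz / (kB * temp_k)))

# mK operating points from the paper vs. liquid nitrogen temperature
for temp in (0.020, 0.035, 0.050, 77.0):
    p = ground_state_probability(1e9, temp)
    print(f"T = {temp:6.3f} K -> p(ground) = {p:.4f}")
```

At 77 K the ground-state weight of such a gap is essentially 0.5 – the thermal bath scrambles the answer long before nitrogen temperatures are reached, which is consistent with the degradation the paper sees already in the tens of mK.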
Kelly Loum, a member of the LinkedIn Quantum Physics group, helpfully pointed out that,
… you could gather results from many identical high temperature processors that were given the same input, and use a bit of statistical analysis and forward error correction to find the most likely output eigenstate of those that remained coherent long enough to complete the task.
… one problem here is that modern (classical, not quantum) FECs can fully correct errors only when the raw data has about a 1/50 bit error rate or better. So you’d need a LOT of processors (or runs) to integrate-out the noise down to a 1/50 BER, and then the statistical analysis on the classical side would be correspondingly massive. So my guess at the moment is you’d need liquid helium.
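The “many runs plus statistics” idea Kelly describes can be sketched in a few lines. This is a toy illustration only – the per-bit flip probability of 20 % and the 8-bit answer are made-up values, and real error correction would use a proper FEC rather than a bare majority vote:

```python
import random

def majority_vote(runs):
    """Bitwise majority vote across many noisy result bitstrings."""
    n_bits = len(runs[0])
    return [1 if sum(r[i] for r in runs) * 2 > len(runs) else 0
            for i in range(n_bits)]

def noisy_run(answer, flip_p):
    """Simulate one noisy processor run: each bit flips with probability flip_p."""
    return [b ^ (random.random() < flip_p) for b in answer]

random.seed(42)
true_answer = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical correct eigenstate
flip_p = 0.2                             # assumed per-bit error rate per run

# Odd number of runs avoids ties in the vote
runs = [noisy_run(true_answer, flip_p) for _ in range(101)]
recovered = majority_vote(runs)
print(recovered == true_answer)
```

The catch is exactly the scaling Kelly mentions: the number of runs needed to vote away the noise grows rapidly as the raw error rate climbs, so the classical post-processing can quickly dwarf the quantum part of the computation.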
The paper also discusses some sampling approaches along those lines, but clearly there is a limit to how far they can be taken. As much as I hate to admit it, I concur with Kelly’s conclusion. Unless there is some unknown fundamental mechanism that makes the decoherence dynamics of high-temperature superconductors fundamentally different, the D-Wave design does not seem a likely candidate for the nitrogen cooling temperature regime. On the other hand, drastically changing the decoherence dynamics is the forte of topological computing, but that is a different story for a later post.