
Quantum Computing for the Rest of Us? – UPDATED


Everybody who is following the quantum computing story will have heard by now about IBM’s new chip. This certainly gives credence to the assumption that superconducting Josephson junction-based technology will win the race. This may seem like bad news for everybody who was hoping to be able to tuck away a quantum computer under his or her desk some day.

Commercial liquid helium cooling has come a long way, but shrinking it in price and size to fit into a PC tower form factor is still a long way off. Compare and contrast this to liquid nitrogen cooling. With a high-temperature superconductor and some liquid nitrogen poured from a flask, any high school lab can demonstrate this neat Meissner effect:

Cooled with Liquid Nitrogen (a perfectly harmless fluid unless you get it on your clothes)

Good luck trying this with liquid helium (for one thing it may try to climb up the walls of your container – but that’s a different story).

Nitrogen cooling, on the other hand, is quite affordable (cheap enough to make the extreme CPU overclocking scene quite fond of it).

So why, then, are IBM and D-Wave making their chips from niobium, the classic low-temperature superconductor? In part the answer is simple, pragmatic engineering: this metal allows you to adopt fabrication methods developed for silicon.

The more fundamental reason for the low temperature is to prevent decoherence or, to formulate it positively, to preserve pure entangled qubit states. The latter is especially critical for the IBM chip.

The interesting aspect of D-Wave’s more natural – but less universal – adiabatic quantum computing approach is that dedicated control of the entanglement of qubits is not necessary.

It would be quite instructive to know how the D-Wave chip performs as a function of temperature below Tc (~9 K). If the performance doesn’t degrade too much, maybe, just maybe, high-temperature superconductors are suitable for this design as well. After all, it has been shown that they can be used to realize Josephson junctions of high enough quality for SQUIDs. On the other hand, the new class of iron-based superconductors shows promise for easier manufacturing, but has a dramatically different energy band structure, so that at this point it seems all bets are off on this new material.

So, not all is bleak (even without invoking the vision of topological QC that deserves its own post). There is a sliver of hope for us quantum computing aficionados that even the superconducting approach might lead to an affordable machine one of these days.

If anybody with a more solid solid-state physics background than mine wants to either feed the flames of this hope or dash it – please make your mark in the comment section.

(h/t to Noel V. from the LinkedIn Quantum Computing Technology group and to Mahdi Rezaei for getting me thinking about this.)

Update:

Elena Tolkacheva from D-Wave was so kind as to alert me to the fact that my key question has been answered in a paper that the company published in the summer of 2010. It contains graphs that illustrate how the chip performs at three different temperatures: (a) T = 20 mK, (b) T = 35 mK, and (c) T = 50 mK. Note: these temperatures are far below niobium’s critical temperature of 9.2 K.

The probability of the system finding the ground state (i.e. the “correct” answer) clearly degrades at higher temperature – interestingly, though, not quite as badly as simulated. This puts a severe damper on my earlier expressed hope that this architecture might be suitable for high-temperature superconductors, since the degradation happens so far below Tc.

Kelly Loum, a member of the LinkedIn Quantum Physics group, helpfully pointed out that

… you could gather results from many identical high temperature processors that were given the same input, and use a bit of statistical analysis and forward error correction to find the most likely output eigenstate of those that remained coherent long enough to complete the task.

… one problem here is that modern (classical, not quantum) FECs can fully correct errors only when the raw data has about a 1/50 bit error rate or better. So you’d need a LOT of processors (or runs) to integrate-out the noise down to a 1/50 BER, and then the statistical analysis on the classical side would be correspondingly massive. So my guess at the moment is you’d need liquid helium.

The paper also discusses some sampling approaches along those lines, but clearly there is a limit to how far they can be taken. As much as I hate to admit it, I concur with Kelly’s conclusion. Unless there is some unknown fundamental mechanism that makes the decoherence dynamics of high-temperature superconductors fundamentally different, the D-Wave design does not seem a likely candidate for a nitrogen-cooling temperature regime. On the other hand, drastically changing the decoherence dynamics is the forte of topological computing, but that is a different story for a later post.
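For the curious, here is a minimal sketch of the repetition-and-voting idea Kelly describes. The noisy-processor model and the p_correct parameter are purely illustrative assumptions of mine, not anything specific to D-Wave’s hardware:

    import random
    from collections import Counter

    def noisy_run(true_answer, p_correct, rng):
        # Toy model of one run on a flaky processor: with probability
        # p_correct it returns the right bit string, otherwise random noise.
        if rng.random() < p_correct:
            return true_answer
        return tuple(rng.randint(0, 1) for _ in range(len(true_answer)))

    def majority_vote(true_answer, n_runs=101, p_correct=0.6, seed=1):
        # Repeat the computation many times and keep the most frequent
        # outcome -- the "statistical analysis" step of the proposal.
        rng = random.Random(seed)
        counts = Counter(noisy_run(true_answer, p_correct, rng) for _ in range(n_runs))
        return counts.most_common(1)[0][0]

    answer = (1, 0, 1, 1, 0, 0, 1, 0)
    print(majority_vote(answer) == answer)  # True for these toy parameters

Even this toy version illustrates Kelly’s point: the less reliable each individual run, the more repetitions you need before the correct outcome reliably dominates the vote – and the classical post-processing grows accordingly.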

Where Buzzwords Go to Die

It is a pretty sure sign that a buzzword is near the end of its life cycle when the academic world uses it for promotional purposes. Ever more scientific research comes with its own version of marketing hype. What makes this such a sad affair is that it is usually done pretty badly.

So why is spouting that quantum computing makes for perfect cloud computing really, really bad marketing?

“Cloud computing” is the latest buzzword iteration of “computing as a service”, and as far as buzzwords go it has served its purpose well. It is still in wide circulation, but the time is nigh for it to be put out to pasture and replaced with something that sounds more shiny – while signifying the very same thing.

Quantum computing, on the other hand, is not a buzzword. It is a revolution in the making. To hitch it to the transitory cloud computing term is bad marketing in its own right, but the way that it is done in this case is even more damaging. There is already one class of quantum information devices commercially available: Quantum Key Distribution systems. They are almost tailor-made to secure current cloud infrastructures and alleviate the security concerns that are holding this business model back (especially in Europe).
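As an aside for readers who haven’t come across QKD: the commercially available boxes typically implement a prepare-and-measure scheme in the spirit of BB84. The toy sketch below only shows the basis-sifting step – no eavesdropper, no error correction, and all parameters are my own illustrative choices:

    import random

    def bb84_sift(n_bits=1000, seed=42):
        # Alice sends random bits encoded in randomly chosen bases (X or Z);
        # Bob measures each one in his own randomly chosen basis.
        rng = random.Random(seed)
        alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
        alice_bases = [rng.choice("XZ") for _ in range(n_bits)]
        bob_bases = [rng.choice("XZ") for _ in range(n_bits)]
        # When the bases match Bob reads Alice's bit; otherwise his result is random.
        bob_bits = [a if ab == bb else rng.randint(0, 1)
                    for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
        # The bases (not the bits) are compared publicly; only matching positions are kept.
        key_alice = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
        key_bob = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
        return key_alice, key_bob

    ka, kb = bb84_sift()
    print(len(ka), ka == kb)  # roughly half the bits survive sifting; the keys agree

The quantum part only distributes the key; the payload is then encrypted classically, which is exactly why this technology slots so naturally into existing cloud infrastructure.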

But you’d never know it from reading the sorry news stories about the (otherwise quite remarkable) experiment to demonstrate blind quantum computing. On the contrary, an uninformed reader will come away with the impression that you won’t have acceptable privacy in the cloud unless full-scale quantum computing becomes a reality.

Compare and contrast with this exquisite quantum computing marketing stunt. While the latter brings attention and confidence to the field at zero cost, this bought-and-paid-for marketing couldn’t be further off the mark. It is almost as if it were designed to hold the entire industry back. Simply pitiful.

Dust to dust – Science for Science

No, this is not an obituary for D-Wave.

But the reporting of the latest news connected to D-Wave just doesn’t sit well with me.

Ever tried to strike up a conversation about Ramsey numbers around the water cooler, or just before a business meeting started? No? I wouldn’t think so.

I don’t mean to denigrate the scientific feat of calculating Ramsey numbers on D-Wave’s machine, but the way this news is reported is entirely science for science’s sake.

It puts D-Wave squarely into the ghetto of specialized scientific computation. Although I am convinced that quantum computing will be fantastic for science, and having a physics background I am quite excited about this, I nevertheless strongly believe that this is not a big enough market for D-Wave.

It is one thing to point to the calculation of numbers that fewer than one in ten CIOs will ever have heard of. It is another matter entirely not to milk this achievement for every drop of marketing value.

In all the news articles I perused, it is simply stated that calculating Ramsey numbers is notoriously difficult. What exactly this means is left to the reader’s imagination.

If your goal is to establish that you are making an entirely new type of supercomputer, then you need an actual comparison or benchmark. From Wikipedia we can learn the formula for how many graphs have to be searched to determine a Ramsey number.

For R(8,2), D-Wave’s machine required 270 milliseconds. This comes to more than 68,719 million search operations. For a conventional computer, one graph search will take multiple operations, depending on the size of the graph (the largest graphs have 8 nodes, requiring about 1277 operations). Assuming the per-graph cost grows as O(2^n), I estimate about 800 operations on average.

Putting this together – assuming I calculated this correctly – the D-Wave machine performs at the equivalent of about 55 million MIPS. For comparison: this is more than what a cluster of 300 Intel i7 hex-core CPUs could deliver.
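For what it’s worth, here is that back-of-the-envelope arithmetic spelled out. It simply reuses the figures quoted above – the search count, the assumed ~800 operations per graph search, and the 270 ms runtime – so treat it as a rough sanity check rather than a benchmark:

    def labelled_graphs(n):
        # Number of labelled graphs on n vertices: one edge/no-edge choice
        # per vertex pair, i.e. 2^(n*(n-1)/2).
        return 2 ** (n * (n - 1) // 2)

    searches = 68_719_000_000   # graph searches quoted above (roughly 2**36)
    ops_per_graph = 800         # assumed average classical operations per graph search
    runtime_s = 0.270           # D-Wave runtime quoted for R(8,2)

    total_ops = searches * ops_per_graph
    print(f"labelled graphs on 9 vertices (2**36): {labelled_graphs(9):,}")
    print(f"total classical work: {total_ops / 1e6:,.0f} million instructions")
    print(f"spread over {runtime_s} s: {total_ops / runtime_s / 1e6:,.0f} MIPS equivalent")

Note the unit subtlety: the total work comes out to roughly 55 million million instructions, which is presumably where the figure above comes from; spreading it over the 270 ms run pushes the equivalent rate a few times higher still. Either way, it is far beyond a single desktop CPU of the day.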

Certainly some serious computational clout. But why do I have to waste my spare time puzzling this out?  At the time of writing I cannot find a press release about this on the company’s web site. Why? This needs to be translated into something that your average CIO can comprehend and then shouted from the rooftops.

D-Wave used to be good at performing marketing stunts and the company was harshly criticized for this from some academic quarters. Did these critics finally get under D-Wave’s skin?

…. I hope not.

Update: Courtesy of Geordie Rose from D-Wave (lifted from the comment section), here is a link to a very informative presentation on the Ramsey number paper. While you’re at it, you may also want to check out his talk. That one definitely makes for better water cooler conversation material – less steeped in technicalities, but with lots of apples and Netflix thrown in for good measure. Neat stuff.

Update 2: More videos from the same event now available on D-Wave’s blog.