Category Archives: Quantum Computing

SUSY Matrix Blues

The Gentleman to the right places you into the Matrix. His buddy could help, if only he wasn’t a fictional character.

Dr. Gates, a distinguished theoretical physicist (with a truly inspiring biography), recently made an astounding statement during an interview on NPR (the clip from the On Being show can be found here; a transcript is also online).  It gave the listener the distinct impression that he uncovered empirical evidence in his work that we live in a simulated reality.  In his own words:

(…) I remember watching the movies, The Matrix. And so, the thought occurred to me, suppose there were physicists in this movie. How would they figure out that they lived in the matrix? One way they might do that is to look for evidence of codes in the laws of their physics. But, you see, that’s what had happened to me already.

I, and my colleagues indeed, we had found the presence of codes in the equations of physics. Not that we’re trying to compute something. It’s a little bit like doing biology where, if you studied an animal, you’d eventually run into DNA, and that’s essentially what happened to us. These codes that we found, they’re like the DNA that sits inside of the equations that we study.

Of course Dr. Gates made additional qualifying statements that cautioned against reading too much into this, but the media, even the comparatively even-handed NPR, feed off sensationalism. And so they of course had to end the segment with a short excerpt from The Matrix to drive this home.  It would be interesting to know how many physicists were subsequently badgered by family and friends to explain whether we really live in the Matrix. So here's how I tackled this reality distortion for my non-physicist mother-in-law:

  • Dr. Gates has been a pioneer in Supersymmetry research (affectionately abbreviated SUSY), but just as with String theory there is an absolute dearth of experimental verification (absolute dearth meaning not a single one).  While SUSY proved to be of almost intoxicating mathematical beauty, the recent results from the LHC have been especially brutal. Obviously, if nature doesn't play by SUSY's rules, it will be of no physical consequence if Dr. Gates finds block codes in these equations (although it certainly is still mathematically intriguing).
  • The codes uncovered in the SUSY equations are classic error correction bit codes. The bit, being the smallest informational unit, hints at a Matrix-style reality simulated on a massive Turing-complete machine.  There are certainly other smart people who actually believe in such (or a very similar) scenario – e.g. Stephen Wolfram advocated something along these lines in his controversial book.  The one massive problem with such a world view is that we rather conclusively know that classic computers are no good at simulating quantum mechanical systems, and that quantum computers can outperform classical Turing machines (the same holds in the world of cellular automata, where it can be shown that quantum cellular automata can emulate quantum Turing machines and vice versa).
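For readers curious what a "classic error correction bit code" actually looks like, here is a minimal illustration in Python. This is the plain Hamming(7,4) code, a textbook example of the general species – not the specific doubly-even self-dual codes reported in the SUSY literature.

```python
import numpy as np

# Hamming(7,4): encodes 4 data bits into 7, correcting any single bit flip.
# G is the generator matrix, H the parity-check matrix, both over GF(2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data):
    return data @ G % 2

def correct(received):
    syndrome = H @ received % 2
    if syndrome.any():
        # The syndrome matches exactly one column of H: the flipped position.
        pos = np.where((H.T == syndrome).all(axis=1))[0][0]
        received = received.copy()
        received[pos] ^= 1
    return received

codeword = encode(np.array([1, 0, 1, 1]))
noisy = codeword.copy()
noisy[2] ^= 1                              # flip one bit in transit
assert (correct(noisy) == codeword).all()  # single-bit error is repaired
```

The point for the discussion above: these codes operate on classical bits, which is exactly why finding them embedded in equations evokes a classical, Turing-style simulation.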

If Dr. Gates had discovered qubits and a quantum error correction code hidden in SUSY, that would have been significantly more convincing.  I could entertain the idea of a Matrix world simulated on a quantum computer.

At any rate, his equations didn't provide a better answer to the question of why anyone would go to the trouble of running a simulation like the Matrix.  In the movie, the explanation is that human bodies perform as an energy source, just like a battery.  I always thought this explanation fell rather flat.  If a mammalian body were all it took, why not use cows, for example?  That should make for a significantly easier world simulation – an endless field of green should suffice. It probably wouldn't even require a quantum computer to simulate a happy cow world.

A Picture is Worth More Than a Thousand Lines of Code

Imagine a world before the advent of the steam engine that nevertheless imminently anticipates this marvelous machine’s arrival. Although no locomotive has been built, civil engineers are already busy discussing how to build rail-road bridges, architects try to determine the optimal layout of train stations, and the logistics of scheduling and maintaining passenger and freight traffic over the same tracks is heavily researched.

To some extent this seemingly absurd scenario is playing out in the world of quantum computing.  For instance, take a look at this intriguing presentation by Rodney Van Meter:

While watching it I had to pinch myself a couple of times to make sure I wasn't just hallucinating a beamed broadcast from the future. In fact, it is more than two years old. All this impressive infrastructure work is being performed while we are still years away from an actual scalable universal quantum computer.

Of course there is ample reason for all this activity, as has been documented on this humble blog.  To recap: conventional computing is inevitably arriving at structure sizes where undesired quantum effects can no longer be ignored. Harnessing the peculiarities of quantum mechanics, on the other hand, will supercharge Moore's law and enable us to tackle problems that are too complex for conventional computing.

Specialized quantum computing devices such as D-Wave's machine or NIST's impressive ion-based quantum simulator already allow us a glimpse of the potential that this new approach to computing will unleash. (By the way, the NIST article makes it sound as if a "crystal" were contained in the Penning trap.  This of course is nonsense.  What is meant is that the ions are arranged in a 2D crystal-like grid.)

It is encouraging that this core technology is so feverishly anticipated and that considerable efforts to lay the groundwork for it are in progress.  After all, conventional programming techniques won’t cut it if the goal is to leverage the additional power of a quantum computer. It will be key to empower software engineers to program these devices without forcing them to go through a quantum mechanics boot camp.

When picking up a textbook on the subject, the reader will very quickly be confronted with diagrams typically following the circuit model, where every line corresponds to a qubit. Such as:

Teleportation of the kind that's only good to beam up qubits.

While this is useful to introduce a reader to the peculiarities of entanglement and how it can be leveraged as a computational resource, it is obviously of limited use once you have a meaningful device that offers hundreds of qubits.  Even for a dedicated (Ising model solving) system such as D-Wave's, you can no longer draw a complete graph (although it helps to introduce a matrix notation to the uninitiated).

A purist might stop there and observe that quantum computation just means working with density matrices, and hence brushing up on your linear algebra is what it takes.  The conventional programming analog would be to observe that Boolean logic is all you need to program a conventional chip.  Obviously, higher levels of abstraction serve us well in this area.
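To make the purist's point concrete, here is what "just working with density matrices" looks like for a single qubit – a minimal numpy sketch. Keep in mind that n qubits already require 2^n × 2^n matrices, which is exactly why higher abstractions matter:

```python
import numpy as np

# A pure single-qubit state |+> = (|0> + |1>)/sqrt(2) as a density matrix.
plus = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# A maximally mixed qubit: an equal classical mixture of |0> and |1>.
rho_mixed = 0.5 * np.eye(2)

def purity(rho):
    # Tr(rho^2) equals 1 for pure states and 1/dim for maximally mixed ones.
    return np.trace(rho @ rho).real

print(purity(rho_pure))   # 1.0
print(purity(rho_mixed))  # 0.5
```

Perfectly tractable for one qubit; at a few hundred qubits the matrices outgrow any conceivable classical memory, and pictures (or some other abstraction) have to take over.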

The current state of affairs in quantum computing reminds me of the early days of visual programming research, long before the advent of UML provided a unified framework.

For instance, there is Robert Tucci’s remarkable work to extend Bayesian Network diagrams into the quantum computing realm.  There are also considerable efforts underway to develop a universal visual Tensor Network “language”.  Last but not least, there are some convincing arguments that topological quantum computers are most amenable to a schema dubbed “quantum picturalism”. A nice talk on this is also available online (courtesy of Microsoft’s research division).

As this industry matures, expect a process similar to the one that played out in the old world of visual programming.  There is one important twist, though: Although UML is an excellent way to approach coding in a structured way (one that actually deserves to be called engineering), its adoption is lackluster, and sloppy coding still rules supreme.

To the extent that pictorial languages are at the heart of programming quantum computers, maybe another beneficial side effect of the coming quantum computing age will be to accelerate the maturing of the computer industry's approach to software development.

Modern and ancient pictograms: sometimes it is hard to piece together what a graphical representation is supposed to convey.

 

About Time – Blogroll Memory Hole Rescue

One of the most fascinating aspects of quantum information research is that it sheds light on the connections between informational and thermodynamic entropy, as well as how time factors into quantum dynamics.

For instance, the Schrödinger and Heisenberg pictures are equivalent: in the former the wave function changes with time, in the latter the operator does. Yet we don't actually have any experimental insight into when the changes under adiabatic evolution are realized, since by its very nature we only have discrete observations to work with. This opens up room for various speculations, such as that the "passage of time" is actually an unphysical notion for an isolated quantum system between measurements (as expressed, for instance, by Ulrich Mohrhoff in this paper).
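In symbols, the equivalence of the two pictures just amounts to shuffling the unitary time evolution from the states to the operators:

```latex
% Schrödinger picture: states evolve, operators are fixed.
|\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U(t) = e^{-iHt/\hbar}

% Heisenberg picture: operators evolve, states are fixed.
A_H(t) = U^\dagger(t)\, A\, U(t)

% Both give identical expectation values, which is all experiment ever sees:
\langle\psi(t)|\,A\,|\psi(t)\rangle = \langle\psi(0)|\,A_H(t)\,|\psi(0)\rangle
```

Since only the expectation values are observable, nothing pins down "where" the time dependence lives between measurements, which is the opening the speculations above exploit.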

Lot’s of material there for future posts. But before going there it’s a good idea to to revisit the oldest paradox on time with this fresh take on it by Perry Hooker.

 

Quantum Computing – A Matter of Life and Death

Even the greatest ships can get it wrong.

In terms of commercial use cases, I have looked at corporate IT, as well as how a quantum computer will fit in with the evolving cloud computing infrastructure.  However, where QC will make the most difference – as in, a difference between life and death – goes entirely unnoticed.  Certainly by those whose lives will eventually depend on it.

Hyperbole? I think not.

As detailed in my brief retelling of quantum computing history, it all started with the realization that most quantum mechanical systems cannot be efficiently simulated on classical computers.  Unfortunately, the sorry state of public science understanding means that this elicits hardly more than a shrug from even those who make a living writing about it (not the case for this humble blogger, who toils away at it as a labor of love).

A prime example of this is a recent, poorly sourced article from the BBC that disses the commercial availability of turnkey-ready quantum computing without even mentioning D-Wave, and at the same time proudly displays the author's ignorance of why this technology matters (emphasis mine):

“The only applications that everyone can agree that quantum computers will do markedly better are code-breaking and creating useful simulations of systems in nature in which quantum mechanics plays a part.”

Well, it’s all good then, isn’t it? No reason to hurry and get a quantum computer on every scientist’s desk.  After all, only simulations of nature in which quantum mechanics plays a part will be affected.  It can’t possibly be all that important then.  Where the heck could this esoteric quantum mechanics stuff possibly play an essential part?

Oh, just all of solid state physics, chemistry, micro-biology and any attempts at quantum gravity unification.

For instance, one of the most important aspects of pharmaceutical research is to understand the 3D protein structure, and then to model how this protein reacts in vivo using very calculation-intensive computer simulations.

There has been some exciting progress in the former area.  It used to be that only proteins that lend themselves to crystallization could be structurally captured via X-ray scattering.  Now, recently developed low energy electron holography has the potential to revolutionize the field.  Expect to see a deluge of new protein structure data.  But despite some progress with numerical approaches to protein folding simulations, the latter remains computationally intractable – the underlying optimization problem is NP-hard.  Polynomial speed-ups, on the other hand, are possible with quantum computing.  Without it, the inevitable computational bottleneck will ensure that we forever condemn pharmaceutical research to its current expensive scatter-shot approach to drug development.
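The kind of polynomial speed-up at stake can be illustrated with Grover-style query counts. Treating a conformational search as unstructured search is of course a gross simplification, but the back-of-envelope arithmetic makes the point:

```python
import math

# Classically, finding a marked item among N candidates takes O(N) checks;
# Grover's algorithm needs only about (pi/4) * sqrt(N) oracle queries.

def classical_queries(n_candidates):
    return n_candidates  # worst case: check every candidate

def grover_queries(n_candidates):
    return math.ceil(math.pi / 4 * math.sqrt(n_candidates))

for n in (10**6, 10**9, 10**12):
    print(n, classical_queries(n), grover_queries(n))
```

At a trillion candidate conformations, the quadratic speed-up turns a trillion evaluations into well under a million quantum queries – the difference between hopeless and merely hard.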

There is no doubt in my mind that in the future, people’s lives will depend on drugs that are identified by strategically deploying quantum computing in the early drug discovery process.  It is just a matter of when. But don’t expect to learn about this following BBC’s science news feed.

Analog VLSI for Neural Networks – A Cautious Tale for Adiabatic Quantum Computing

Update:  This research is now again generating some mainstream headlines.  Will be interesting to see if this hybrid chip paradigm will have more staying power than previous analog computing approaches.

###

Fifteen years ago I attempted to find an efficient randomized training algorithm for simple artificial neural networks suitable for implementation on a specialized hardware chip. The latter's design only allowed feed-forward connections, i.e. back-propagation on the chip was not an option.  The idea was that, given the massive acceleration of the network's execution on the chip, some sort of random walk search might be at least as efficient as optimized backprop algorithms on general-purpose computers.
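The random-walk idea can be sketched in a few lines. This toy version (purely illustrative, not the original chip's algorithm) perturbs the weights of a single feed-forward neuron at random and keeps any perturbation that does not increase the error – the fast hardware forward pass is what would make the many evaluations cheap:

```python
import random

def forward(w, x):
    # A 2-input, 1-output "network": one neuron with a step activation.
    s = w[0] * x[0] + w[1] * x[1] + w[2]  # w[2] is the bias
    return 1 if s > 0 else 0

def loss(w, samples):
    # Count of squared classification errors over the training set.
    return sum((forward(w, x) - y) ** 2 for x, y in samples)

def random_walk_train(samples, steps=5000, scale=0.5):
    # No gradients: propose Gaussian weight perturbations, keep non-worsening ones.
    random.seed(0)
    w = [0.0, 0.0, 0.0]
    best = loss(w, samples)
    for _ in range(steps):
        candidate = [wi + random.gauss(0, scale) for wi in w]
        c_loss = loss(candidate, samples)
        if c_loss <= best:
            w, best = candidate, c_loss
    return w, best

# Learn a simple AND gate.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, final_loss = random_walk_train(samples)
print(final_loss)
```

The acceptance rule is the whole trick: each step only needs forward passes, which is exactly what a feed-forward-only chip provides.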

My research group followed a fairly conventional digital design, whereas at the time analog VLSI was all the rage – a field (like so many others) pioneered by Carver Mead. On the face of it this makes sense, given that biological neurons obviously work with analog signals, yet nevertheless attain remarkable robustness (the latter being the typical problem with any sort of analog computing). Yet it is also this robustness that makes the "infinite" precision – the primary benefit of analog computing – somewhat superfluous.

Looking back at this, I expected this analog VLSI approach to be a bit of an engineering fad, as I wasn't aware of any commercial systems ever hitting the market – of course, I could have easily missed a commercial offering if it followed a similar trajectory as the inspiring but ultimately ill-fated transputer. In the end, the latter was just as much a fad as the Furby toy of yesteryear, yet arguably much more inspiring.

To my surprise and ultimate delight, a quick statistic on the publication count for analog neural VLSI proved me wrong: there is still some interesting science happening:

Google Scholar Publication Count for Neural Analog VLSI

So why are there no widespread commercial neuromorphic products on the market? Where is the neural analog VLSI co-processor to make my laptop more empathic and fault tolerant? I think the answer comes down simply to Moore's law.  A flagship neuromorphic chip currently designed at MIT boasts a measly 400 transistors.  I don't want to dispute its scientific usefulness – having a detailed synapse model in silicon will certainly have its uses in medical research (and the Humane Society will surely approve if it cuts down on the demise of guinea pigs and other critters). On the other hand, the Blue Brain project claims it has already successfully simulated an entire rat cortical column on its supercomputer, and its goal is nothing less than a complete simulation of the rodent's brain.

So what does this have to do with Adiabatic Quantum Computing?  Just like in the case of neuromorphic VLSI technology, its main competition for the foreseeable future is conventional hardware. This is the reason why I was badgering D-Wave when I thought the company didn't make enough marketing noise about the Ramsey number research performed with their machine.  Analog neural VLSI technology may find a niche in medical applications, but so far there is no obvious market niche for adiabatic quantum computing. Scott Aaronson argued that the "coolness" of quantum computing will sell machines.  While this label has some marketing value, not least due to some of his inspired stunts, this alone will not do.  In the end, adiabatic quantum computing has to prove its mettle in raw computational performance per dollar spent.

(h/t to Thomas Edwards, who posted a comment a while back in the LinkedIn Quantum Information Science group that inspired this post)

Keep Walking – No Quantum Computing Jobs on Offer

UPDATE: Every so often I get a search engine click on this post, presumably from QIS job seekers. So despite the dreary title, I can now give you a link to this job portal.

When promoting my last blog post in the LinkedIn "Quantum Computing and Quantum Information" group, a strange thing happened.  Rather than staying in the main discussions section of that group, my item was moved to the jobs section. So let's be clear about this: I don't have jobs to offer.

My last entry at this LinkedIn group is now the only one amidst a sea of white, fertile, untouched HTML background in the jobs section.  Quite a contrast to the High Performance Computing LinkedIn groups, which have dozens of jobs posted.

This made me ponder the Quantum Information Science (QIS) job market.  While I expect that in the not so distant future there will be plenty of highly qualified jobs to be had, we are certainly living in a very different reality as of now.  There is a reason that the byline of this blog is "Observations on the nascent quantum computing industry".

With the exception of D-Wave, the few other commercial players (i.e. the IBMs of the world) are still in basic research mode. Outside these companies, and a very small Quantum Key Distribution sector, there are essentially no QIS jobs in the private sector. It seems to me the job market still looks very much the same as described by Robert Tucci 1 ½ years ago.

Don’t peruse this blog because you are looking for a job in this area. But by all means please come here, and check back regularly, if you are intrinsically interested in Quantum Computing.

Update: Todd Brun was so kind as to point out his QIS job site as a worthwhile destination for the weary job seeker who wandered over here looking for QC-related work.

A Brief History of Quantum Computing

An attempted apotheosis.

In the beginning there was Feynman.  And Feynman was with the scientific establishment.  Feynman was a god. Or at least as close to it as a mortal physicist can ever be. He tamed the light with his QED (pesky renormalization notwithstanding) and gave the lesser folks the Feynman diagrams, so that they could trade pictograms like baseball cards and feel that they partook.  But he saw that not all was good. So Feynman said: "Quantum mechanical systems cannot be efficiently simulated on conventional computing hardware." Of course he did that using quite a few more words and a bit more math, and published it in a first-class journal as was befitting.

And then he was entirely ignored.

Feynman walked upon earth.

Four grueling years went by without follow-up publications on quantum computing. That is, until David Deutsch got in on the act and laid the foundation for the entire field with his seminal paper.  His approach was motivated by how quantum physics might affect information processing, including the processing that happens in the human mind.  So how to experiment on this? Obviously you cannot put a real brain into a controlled state of quantum superposition (i.e. a kind of "Schrödinger's brain") – but a computer, on the other hand, won't feel any pain.

Let's rather put a computer into Schrödinger's box.
Less messy.

So was Feynman finally vindicated? No, not really, Deutsch wrote:

Although [Feynman’s ‘universal quantum simulator’] can surely simulate any system with a finite-dimensional state space (I do not understand why Feynman doubts that it can simulate fermion systems), it is not a computing machine in the sense of this article.

The old god just couldn’t get any respect anymore. His discontent with string theory didn’t help, and in addition he experienced another inconvenience: He was dying. Way too young.  His brilliant mind extinguished at just age 69.  Like most people, he did not relish the experience, his last words famously reported as: “I’d hate to die twice. It’s so boring.”

The only upside to his death was that he didn't have to suffer through Penrose's book "The Emperor's New Mind", which was released the following year.  While it is a great read for a layperson who wants to learn about the thermodynamics of black holes as well as bathroom tile designs, none of the physics would have been news to Feynman. If he weren't already dead, Feynman probably would have died of boredom before he made it to Penrose's anti-climactic last chapter.  There, Penrose finally puts forward his main thesis, which can be simply distilled as "the human mind is just different".  This insight comes after Penrose's short paragraph on quantum computing, where the author concludes that his mind is just too beautiful to be a quantum computer, although the latter clearly can solve some NP (h/t Joshua Holden) problems in polynomial time. Not good enough for him.

Physics, meanwhile, has been shown to be an NP-hard sport, but more important for the advancement of quantum computing was the cracking of another problem in NP: Shor's algorithm showed that a quantum computer could factorize large numbers in polynomial time.  Now, the latter might sound about as exciting as the recent Ramsey number discussion on this blog, but governments certainly paid attention to this news, as the most widely used public-key encryption algorithms can be cracked if you accomplish this feat.  Of course, quantum information science offers a remedy for this as well in the form of (already commercially available) quantum encryption, but I digress.
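For the curious, the structure of Shor's algorithm can be sketched by replacing its quantum heart – period finding – with a classical brute-force stand-in; everything else is ordinary number theory:

```python
from math import gcd

def find_period(a, N):
    # Stand-in for the quantum subroutine: find the order r of a mod N,
    # i.e. the smallest r > 0 with a**r % N == 1.  A quantum computer does
    # this in polynomial time; here we brute-force it classically.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    # Classical post-processing of Shor's algorithm for a chosen base a.
    if gcd(a, N) != 1:
        return gcd(a, N)   # the lucky guess already shares a factor
    r = find_period(a, N)
    if r % 2 == 1:
        return None        # odd period: try another base
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None        # trivial square root: try another base
    return gcd(y - 1, N)

print(shor_factor(15, 7))  # 3, since 7 has period 4 mod 15
```

The classical brute-force period search is exponential in the number of digits; the quantum Fourier transform does the same job in polynomial time, which is the whole threat to RSA-style cryptography.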

With the kind of attention that Shor's algorithm engendered, the field exploded, not least due to computer science taking an interest.  In fact, Michael Nielsen, one of the co-authors of the leading textbook on the subject, is a computer scientist by trade, and if you want a gentle introduction to the subject I can highly recommend his video lectures (I just wish he'd get around to finishing them).

Of course, if you were stuck in the wrong place at the time that all this exciting development took place, you would have never known.  My theoretical physics professors at the University of Bayreuth lived and died by the Copenhagen omertà, which in their minds decreed that you could not possibly get any useful information out of the phase factor of a wave function. I was taught this as an undergraduate physics student in the early nineties (in fairness, probably before Shor's algorithm was published, but long after Deutsch's paper). This nicely illustrates the inertia in the system and how long it takes before new scientific insights become widespread.

The fresh and unencumbered view-point that the computer scientists brought to quantum mechanics is more than welcome and has already helped immensely in softening up the long-established dogma of the Copenhagen interpretation. Nowadays Shor can easily quip (as he did on Scott Aaronson's blog):

Interpretations of quantum mechanics, unlike Gods, are not jealous, and thus it is safe to believe in more than one at the same time. So if the many-worlds interpretation makes it easier to think about the research you’re doing in April, and the Copenhagen interpretation makes it easier to think about the research you’re doing in June, the Copenhagen interpretation is not going to smite you for praying to the many-worlds interpretation. At least I hope it won’t, because otherwise I’m in big trouble.

I think Feynman would approve.

Quantum Computing for the Rest of Us? – UPDATED

<Go to update>

Everybody who is following the Quantum Computing story will have heard by now about IBM's new chip. This certainly gives credence to the assumption that superconducting Josephson junction-based technology will win the race.  This may seem like bad news for everybody who was hoping to be able to tuck away a quantum computer under his or her desk some day.

Commercial liquid cooling has come a long way, but shrinking it in price and size to fit into a PC tower form factor is a long way off. Compare and contrast this to liquid nitrogen cooling: with a high-temp superconductor and some liquid nitrogen poured from a flask, any high school lab can demonstrate this neat Meissner effect:

Cooled with Liquid Nitrogen (a perfectly harmless fluid, unless you get it on your clothes)

Good luck trying this with liquid helium (for one thing it may try to climb up the walls of your container – but that’s a different story).

Nitrogen cooling on the other hand is quite affordable.  (Cheap enough to make the extreme CPU overclocking scene quite fond of it).

So why then are IBM and D-Wave making their chips from the classic low temp superconductor Niobium?  In part the answer is simple pragmatic engineering: This metal allows you to adopt fabrication methods developed for silicon.

The more fundamental reason for the low temperature is to prevent decoherence, or to formulate it positively, to preserve pure entangled qubit states. The latter is especially critical for the IBM chip.

The interesting aspect about D-Wave’s more natural – but less universal – adiabatic quantum computing approach, is that a dedicated control of the entanglement of qubits is not necessary.

It would be quite instructive to know how the D-Wave chip performs as a function of temperature below Tc (~9 K). If the performance doesn't degrade too much, maybe, just maybe, high temperature superconductors are suitable for this design as well.  After all, it has been shown that they can be used to realize Josephson junctions of high enough quality for SQUIDs. On the other hand, the new class of iron-based superconductors shows promise for easier manufacturing, but has a dramatically different energy band structure, so at this point it seems all bets are off on this new material.

So, not all is bleak (even without invoking the vision of topological QC that deserves its own post). There is a sliver of hope for us quantum computing aficionados that even the superconducting approach might lead to an affordable machine one of these days.

If anybody with a more solid Solid State Physics background than mine wants to either feed the flames of this hope or dash it – please make your mark in the comment section.

(h/t to Noel V. from the LinkedIn Quantum Computing Technology group and to Mahdi Rezaei for getting me thinking about this.)

Update:

Elena Tolkacheva from D-Wave was so kind as to alert me to the fact that my key question has been answered in a paper that the company published in the summer of 2010. It contains the following graphs, which illustrate how the chip performs at three different temperatures: (a) T = 20 mK, (b) T = 35 mK, and (c) T = 50 mK.  Note: these temperatures are far below Niobium's critical temperature of 9.2 K.

The probability of the system finding the ground state (i.e. the "correct" answer) clearly degrades with higher temperature – interestingly, though, not quite as badly as simulated. This puts a severe damper on my earlier expressed hope that this architecture might be suitable for HTCs, as the degradation happens so far below Tc.

Kelly Loum, a member of the LinkedIn Quantum Physics group, helpfully pointed out that,

… you could gather results from many identical high temperature processors that were given the same input, and use a bit of statistical analysis and forward error correction to find the most likely output eigenstate of those that remained coherent long enough to complete the task.

… one problem here is that modern (classical, not quantum) FECs can fully correct errors only when the raw data has about a 1/50 bit error rate or better. So you’d need a LOT of processors (or runs) to integrate-out the noise down to a 1/50 BER, and then the statistical analysis on the classical side would be correspondingly massive. So my guess at the moment is you’d need liquid helium.

The paper also discusses some sampling approaches along those lines, but clearly there is a limit to how far they can be taken.  As much as I hate to admit it, I concur with Kelly's conclusion.  Unless there is some unknown fundamental mechanism that makes the decoherence dynamics of HTCs fundamentally different, the D-Wave design does not seem a likely candidate for a nitrogen-cooling temperature regime. On the other hand, drastically changing the decoherence dynamics is the forte of topological computing, but that is a different story for a later post.
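Kelly's suggestion – run many unreliable processors and let statistics sort out the answer – can be illustrated with a toy model (the success probability and problem size here are purely illustrative, not D-Wave data):

```python
import random
from collections import Counter

def noisy_run(ground_state, p_success):
    # Toy model of a single annealing run: it returns the true ground state
    # with probability p_success, otherwise a random wrong bit string.
    if random.random() < p_success:
        return ground_state
    return tuple(random.randint(0, 1) for _ in ground_state)

def majority_vote(ground_state, p_success, n_runs):
    # Repeat the run many times and pick the most frequent answer.
    results = Counter(noisy_run(ground_state, p_success) for _ in range(n_runs))
    return results.most_common(1)[0][0]

random.seed(42)
truth = (1, 0, 1, 1)
# Even with only a 30% per-run success rate, repetition recovers the answer.
print(majority_vote(truth, 0.3, 1000) == truth)
```

The catch Kelly identifies is the scaling: as the per-run success probability drops with temperature, the number of runs (and the classical post-processing) needed to integrate out the noise grows rapidly, which is why this trick cannot buy an arbitrarily warm chip.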

Rescued From the Blogroll Memory Hole

During the week my professional life leaves me no time to create original content.  Yet, there is a lot of excellent material out there pertinent to the nascent quantum information industry. So to fill the inter-week void I think it is very worthwhile to try to rescue recent blogroll posts from obscurity.

Very relevant to the surprise that Scott Aaronson came around on D-Wave is Robert Tucci's great technical review of D-Wave's recent Nature paper.  If you are not afraid of some math and are tired of the vapid verbiage that passes for popular science journalism, then this is for you.

Scott Aaronson Visits D-Wave – No Casualties Reported

Those following the quantum computing story and D-Wave are well aware of the controversial scuffle between Scott and the company.  So it's quite newsworthy that, by Scott's own account, the hatchet has been buried. No more comparing the D-Wave One to a roast beef sandwich (hopefully BLT is out of the picture too).

Scott is still taking on D-Wave’s pointy-haired bosses though.  He wants them to open the purse-strings to determine more clearly how “quantum” the Rainier chip really is.

Unfortunately, I think he overlooks that pointy-haired bosses are only interested in the bottom line.  At this point there is nothing stopping D-Wave from selling their system as a quantum computer – not a universal one, but who's counting.  Any closer inquiry into the nature of their qubits only risks adding qualifiers to this claim.  So why should they bother?

In terms of marketing, the label “Quantum Computer” is just a nice nifty term signifying to potential customers that this is a shiny, cool new device.  Something different.  It is supposed to serve as a door opener for sales.  Afterwards it just comes down to the price/performance ratio.

At this point, D-Wave One sales won't benefit from further clarifying how much entanglement is occurring in their system – I only see this changing once there is actual competition in this space.

Update:  I finally don’t have to develop a cognitive dissonance about adding Scott’s and D‑Wave’s blog to my blogroll. So to celebrate this historic peace accord this is my next step in the development of this most important website.  BTW if you still need an argument why it is worthwhile to pay attention to Scott, treat yourself to his TedX talk (h/t to Perry Hooker).