All posts by Henning Dekant

Quantum Computing Bounty

If you can hunt down this cat and kill Quantum Computing for good, then there will be a mighty big reward waiting for you.  Scott Aaronson has put up a bounty of $100,000.  All you have to do is prove that universal Quantum Computing is impossible in the real world.

Cartoon drawing by Becky Thorn

On the surface there are a couple of surprises here:  Scott doesn’t really hate quantum computing – he has actually built his academic career on it. And herein lies the rub for the even bigger surprise: this academic really knows how to pull off one heck of a marketing stunt.  His blog was already flooded after Slashdot reported on the bounty, and more media outlets are now jumping on the bandwagon.  This is an awful lot of free publicity for a marketing budget of exactly zero dimes (and this cost includes the net present value of the bounty money).

Hats off!

Dust to dust – Science for Science

No, this is not an obituary for D-Wave.

But the reporting of the latest news connected to D-Wave just doesn’t sit well with me.

Ever tried to strike up a conversation about Ramsey numbers around the water cooler, or just before a business meeting started? No? I wouldn’t think so.

I don’t mean to denigrate the scientific feat of calculating Ramsey numbers on D-Wave’s machine, but the way this news is reported is entirely science for science’s sake.

It puts D-Wave squarely into the ghetto of specialized scientific computation. Although I am convinced that quantum computing will be fantastic for science, and having a physics background I am quite excited about this, I nevertheless strongly believe that this is not a big enough market for D-Wave.

It is one thing to point to the calculation of numbers that fewer than one in ten CIOs will ever have heard of. It is another matter entirely not to milk this achievement for every drop of marketing value.

In all the news articles I perused, it is simply stated that calculating Ramsey numbers is notoriously difficult. What exactly this means is left to the reader’s imagination.

If your goal is to establish that you are making an entirely new type of super-computer, then you need an actual comparison or benchmark. From Wikipedia we can learn the formula for how many graphs have to be searched to determine a Ramsey number.

For R(8,2) D-Wave’s machine required 270 milliseconds. This comes to more than 68,719 million search operations. On a conventional computer one graph search will take multiple operations, depending on the size of the graph (the largest graphs, at 8 nodes, require about 1277 operations).  Assuming the per-graph cost grows with O(2^n), I estimate about 800 operations on average.

Putting this together – assuming I calculated this correctly – the D-Wave machine performs at the equivalent of about 55 million MIPS. For comparison: this is more than what a cluster of 300 Intel i7 hex-core CPUs could deliver.
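The back-of-envelope arithmetic can be sketched in a few lines of Python. All inputs below are this post’s estimates (the search count and the ~800 operations per search), not measured values, and the final MIPS-style figure treats the total operation count as delivered within roughly one second:

```python
# Number of labeled simple graphs on n nodes: 2^C(n, 2) possible edge sets.
def labeled_graphs(n):
    return 2 ** (n * (n - 1) // 2)

searches = 68_719_476_736   # "more than 68,719 million" searches (= 2**36)
ops_per_search = 800        # assumed average cost of one graph search

total_ops = searches * ops_per_search
print(f"total conventional operations: {total_ops:,}")
# ~55 million MIPS, if the whole run is taken as a one-second window:
print(f"in millions of instructions: {total_ops / 1e6:,.0f}")
```

Nothing deeper than multiplication is going on here, which is exactly the point: the comparison a CIO needs is one line of arithmetic away from the published numbers.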

Certainly some serious computational clout. But why do I have to waste my spare time puzzling this out?  At the time of writing I cannot find a press release about this on the company’s web site. Why? This needs to be translated into something that your average CIO can comprehend and then shouted from the rooftops.

D-Wave used to be good at performing marketing stunts, and the company was harshly criticized for it in some academic quarters. Did these critics finally get under D-Wave’s skin?

…. I hope not.

Update: Courtesy of Geordie Rose from D-Wave (lifted from the comment section), here is a link to a very informative presentation on the Ramsey number paper.  While you’re at it, you may also want to check out his talk. That one definitely makes for better water cooler conversation material – less steeped in technicalities but with lots of apples and Netflix thrown in for good measure. Neat stuff.

Update 2: More videos from the same event now available on D-Wave’s blog.

The story patterns of your life – and what do they have to do with quantum computing?

Storytelling is an integral part of how we usually organize our world. How did you choose your line of work? How did you meet the love of your life – or, if you haven’t yet, why not?  There will be a story in the answers to these questions.

In our minds we constantly write the story arc of our lives. But there is a different view of the world that is no less valid. Once I read the story of Goldilocks to a child who had been diagnosed with a mild form of autism spectrum disorder. When asked to summarize the story, the child answered:  “There was a pattern that repeated three times, then the bears came home.”

A brain like this is tuned to see patterns where other kids just perceive the story.

We humans like to share our stories and by doing this we make the patterns of our lives accessible.

A whole industry sprang from this, with its poster boy Facebook valued in the billions. The money, of course, is not in the stories but in the exposed patterns.  Tantalizingly, an unparalleled depth of market research seems to be within reach. Corporations in the B2C space seem to be on the cusp of achieving a completely transparent view of their customers.

There are only two obstacles: customers’ concerns for privacy and the sheer amount of data. Both bode well for quantum computing, and create opportunities for a customer-driven market that D-Wave can take advantage of.

Social networks currently thrive on sharing images.  Let’s consider a concrete example:  a report that breaks out over time how many teenagers are wearing Nike versus Adidas shoes in photos shared online – broken out by gender and ethnicity. This kind of data is worth a lot of money, but not enough to hire a small army of market analysts to compile it manually.  On the other hand, fast, advanced image recognition is a good fit for D-Wave and one of their demo cases.  If they are as good as they claim, this kind of customer intelligence will quickly amortize an investment in their hardware.

Given the price tag and size of D-Wave’s machine it is easy to jump to the conclusion that quantum computing is in its mainframe age, similar to the early days of information technology.  Will quantum computing power therefore be skewed toward the 1% of big corporations?  Not necessarily, because the analogy is flawed: the early mainframes were introduced into a pre-Internet world.

Consider the following business model:  Don’t want photos of you posted online without your knowledge? How about a web-based service that tracks any photos that show your face?

If enough individuals were willing to subscribe to such a service, it could harness the power of a D-Wave One system to sift through millions of photos a day.

The patterns in the plethora of image data that we leave online are just the tip of the iceberg.  Apps like Google Goggles will ensure yet another data explosion in this area (and in due time they will be always online). Additionally, I expect ever more health-relevant data to be added to the pile. Eventually some truly life-changing and life-enhancing patterns will be uncovered. If another product as deadly as cigarettes is ever introduced, one will be able to connect the dots quickly, and a cover-up becomes unfeasible.

We may not see a quantum computing eharmony.com matching you on an infinite number of dimensions any time soon.  But I am convinced that, largely unnoticed, quantum computing will come to the masses sooner than generally anticipated.

Time to start investing in quantum computing software?

While the argument over whether D-Wave represents true quantum computing still rages, the company moves on and has opened a software developer portal.

Assuming D-Wave succeeds in the marketplace (and I wish them all the best), they will own the term “quantum computing” for the time being. It is very unlikely that another vendor will introduce a quantum computing device in the intermediate future. As the ultimate first mover in this nascent market, D-Wave has the unique chance to shape it to their advantage.

The (mostly academic) critics won’t cease to point out that D-Wave doesn’t implement a universal gate-programmable quantum computer – not that the company ever claimed such a thing – but assuming that their device delivers superior computational power, the market won’t care.

But there is a catch: D-Wave’s optimization engine is versatile enough to help with many real-life business optimization cases, but the path to actual usefulness is all in the software.

Let’s first look at the B2B market (I’ll discuss applications in the consumer space in a later blog entry).

It is hard to get market share numbers that break out spending on IT solutions for optimization software. Currently this is mostly folded into the BI and CRM categories, both areas with plenty of business cases (e.g. channel, price, revenue, and supply chain optimization, to name just a few). According to Forrester, BI and CRM are the main drivers of IT spending in 2012.

Even if D-Wave could just dip a toe into this huge market, that would make it a commercial success.

But selling any optimization solution to businesses is no small feat. The potential customers don’t care about the underlying technology and won’t be interested in talking about hardware and algorithms.

Making a persuasive case is tricky because it requires a special mix of skills: domain expertise as well as an understanding of the parameters within which an optimal solution can be found. While it suffices to mention these as “boundary conditions” to those with a mathematical background, the term will draw blank stares from your average business crowd and is guaranteed to sink your sales pitch.

The magic realm of financial engineering may seem like a good fit for D-Wave. The quants who reside in this world will clearly understand where the company is coming from – many of them will have been lured away from a career in math or physics, and there is no doubt that they could utilize a quantum optimization device to its fullest potential. That is, if this magic kingdom weren’t on fire. Wall Street’s profits are up again, but on a shaky foundation, and faith in the reliability of financial modeling has certainly taken quite a hit. (For a great account of this I recommend the hilariously named book “Models Behaving Badly”.) This will certainly have reduced the appetite to gamble sizable amounts on – as of yet – untested technology (although “sizable” is of course rather relative when contrasted with the kind of volume that high frequency trading employs).

In the end, D-Wave’s best option may be to lower the barrier to engaging with their tech, e.g. by developing a stack of accelerator software that allows programs written in SAS or R to execute without modification in a hosted environment. That means engineering a pre-processor that diverts invocations of proc nlp or optim() to an underlying D-Wave parser that compiles the optimization problems for execution on their hardware.
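A minimal sketch of what such a dispatch layer could look like, in Python for brevity. Everything here is invented for illustration – the backend object and its `solve()` API are hypothetical placeholders, and a brute-force search stands in for the conventional solver:

```python
# Hypothetical sketch of an accelerator dispatch layer. A real shim would
# compile the optimization problem for D-Wave's hardware instead of calling
# this invented solve() API.

def brute_force_minimize(f, candidates):
    """Conventional fallback: evaluate f over a finite candidate set."""
    return min(candidates, key=f)

class Dispatcher:
    def __init__(self, backend=None):
        self.backend = backend  # e.g. a remote quantum annealing service

    def minimize(self, f, candidates):
        if self.backend is not None:
            try:
                return self.backend.solve(f, candidates)
            except Exception:
                pass  # accelerator unavailable: degrade gracefully
        return brute_force_minimize(f, candidates)

dispatcher = Dispatcher()                       # no accelerator configured
best = dispatcher.minimize(lambda x: (x - 3) ** 2, range(10))
print(best)  # → 3
```

Existing code would keep calling `minimize()` unchanged; whether the problem runs on the accelerator or falls back to conventional hardware is decided behind the interface, which is exactly the "without modification" property described above.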

“Build it and they will come” is rarely a winning business strategy, but if D-Wave can make it effortless enough to test drive their optimization engine, then they stand a good chance of competing on the merits of their technology.

When they finally came ...

When Popular Science is Neither Science nor Popular

This is a detour from my usual subject of quantum computing, prompted by the unusual media attention that the story of faster-than-light neutrinos has caused.

As was to be expected, this brought out the special relativity detractors in droves. Usually my attitude towards this crowd is the one depicted in this xkcd strip:

xkcd's take on faster than light neutrinos

Yet I think this is symptomatic of a broader problem: I am convinced that, when it comes to popular science, modern physics has utterly failed the public. TV science shows with fancy CGI graphics that are somehow supposed to make string theory and dark matter plausible don’t really help.

By presenting untested theories such as super-strings as factual, they rather help to undermine trust in the entire endeavor. Then there is the obsession with avoiding math at all costs because it might hurt the sales of the pop science product, leaving us with an overuse of vague yet seemingly overbearing terminology and strained metaphors. Great as material for technobabble on Star Trek, but a sorry excuse for supposedly “scientific truth”.

This leaves the lay person very vulnerable when it comes to assessing any claims about physics.  Rather than trying to sell the public on the latest scientific pet theory, I wish the media would go a bit meta and facilitate a better understanding of what actually makes good science – for instance by exploring the question of what criteria a physics theory should fulfill.

Most people are quite familiar with the concept of falsifiability by experiment, but few contemplate where the power of a good theory comes from: reduction to plausible first principles that drives a drastic increase in the domain of applicability. Or, to state it like Kurt Lewin, in a much more straightforward and less abstruse manner: there is really nothing as practical as a good theory.

And it better be good. Way back, when I was a full-time physics student, I was struck by how much more satisfied experimental physicists seemed to be with their lot in life. I only met a single career theoretical physicist who appeared genuinely happy and content (he was one of the great ones and close to retirement). He explained it to me like this: “As an experimentalist, chances are you can constantly make some incremental progress. Fixing hardware, eliminating some systematic errors, coming up with creative new ideas for how to probe for a specific effect. Chances are that as an experimentalist you will experience positive feedback from your work quite regularly. It also helps that you often get to work hands-on. A theoretical physicist, on the other hand, can count himself lucky if he has just one eureka moment in his life. And even then it might turn out that his insight was plainly wrong. Experimental physicists win any which way – any result is a good result.”

Einstein once said “Any intelligent fool can make things bigger and more complex… It takes a touch of genius – and a lot of courage to move in the opposite direction.” And moving in the opposite direction is exactly the hallmark of a good theory.  But this reduced complexity doesn’t necessarily make understanding nature any easier. To illustrate this, let’s pick an example that pre-dates modern physics:

Newtonian physics requires several not immediately obvious first principles (axioms), i.e. his famous three laws:

  1. Inertia
  2. Force is proportional to acceleration
  3. Action equals reaction

These principles are anything but obvious in everyday life. They had to be distilled from carefully conducted and idealized experiments (after all, Newton didn’t have access to a perfect vacuum).

Now consider that these principles can be replaced by far more immediately plausible first principles:

  1. Time and space are homogeneous and the latter also isotropic. This is just a fancy way of saying that experiments behave the same if we move them to a different place and time. For instance a pendulum on the moon would have swung the same way thousands of years ago as it does today.
  2. The principle of least action – in colloquial terms: The system follows the path of least resistance or to be more precise it gets from point A to B with the least amount of reshuffling of energy. For instance from kinetic to potential energy in the case of a pendulum.
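To sketch how the second principle does the heavy lifting (standard textbook material, stated here only for illustration): demanding that the action be stationary yields the Euler–Lagrange equation,

```latex
S[x] = \int_{t_1}^{t_2} L(x,\dot{x})\,dt , \qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 .
```

With the single-particle Lagrangian L = T − V = ½ m ẋ² − V(x) this reduces to m ẍ = −dV/dx = F: force proportional to acceleration, i.e. Newton’s second law recovered from the least-action principle.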

So why did Newton not start with these simpler and more self-explanatory principles? No doubt he was a genius, and he invented Calculus to present his theory of mechanics, but he inconveniently didn’t get around to the Calculus of Variations. So he didn’t have the mathematical tools required to derive classical mechanics from these more fundamental first principles.

It turned out that this superior Hamiltonian approach to mechanics was so immensely successful and elegant in its mathematical execution that after it ran its course a young Max Planck was told he’d be silly to want to pursue a career in physics – obviously everything was already known.

A good physics theory works a bit like a good lossless compression algorithm: you have to remember much less to derive all the laws of physics, but the more advanced the theory, the harder you have to work to get there.

This is our first important criterion for judging the merits of a theory.

It is not the only one, though. Another important one is nicely laid out by David Deutsch in this TED talk:

In a good theory every piece and part is needed – it cannot be easily varied to accommodate different outcomes.

So let’s see how some theories fare against these criteria.

For instance, Special Relativity can be derived from the same principles as Hamiltonian mechanics by adding group properties for the allowed spatial transformations, i.e. reversibility and the requirement that composing several transformations yields a transformation of the same class. It can then be shown mathematically that only the Galilean and the Lorentz transformations satisfy these axioms. A great paper demonstrating this, while requiring only high school level math, was published in 1976.
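The endpoint of that derivation can be stated compactly. Under these axioms, any boost with velocity v is forced into a one-parameter family (shown here for one spatial dimension; this is the standard textbook form of the result):

```latex
x' = \gamma\,(x - v t) , \qquad
t' = \gamma\,(t - \alpha v x) , \qquad
\gamma = \frac{1}{\sqrt{1 - \alpha v^{2}}} ,
```

where α is a universal constant that the axioms alone cannot fix: α = 0 yields the Galilean transformations, α = 1/c² the Lorentz transformations, and only experiment can decide between the two.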

Yet, more than thirty years later, Special Relativity is still mostly taught in the same convoluted way in which Einstein originally established the theory. (It is very doubtful that Einstein would still teach it that way if he were still around.)

To get to General Relativity requires just one more axiom: the equivalence principle, which states that the effects of acceleration and gravity are locally indistinguishable. Following this through with mathematical rigor is beyond the scope of high-school math, but contrary to popular belief it is really not that complicated. After all, it’s the same math that underlies our ability to produce reasonably accurate maps of our curved planet.

General Relativity therefore satisfies my quality criteria: It can be derived from first principles.

How about David Deutsch’s criterion? It satisfies that as well: what follows from the axioms doesn’t allow for any wiggle room. Einstein stumbled over this himself when he introduced the unmotivated cosmological constant into his field equations, because he just couldn’t believe that an expanding universe made any sense.

In summary:

  1. It is well tested.
  2. It follows from first principles.
  3. It can’t be easily varied to accommodate different results.

Now, let’s contrast this with what is considered the leading contender for a unifying theory: after decades of research, super-string theory has produced not a single testable prediction, there is no known approach that would allow super-string theory to be derived from first principles, and the theory is notorious for being tweakable to accommodate different results.

Thankfully, there has been an entire book written about this colossal failure on all counts.

Nevertheless, this hasn’t really reached the public sphere, where super-string theory – rather than the Standard Model – is still often presented as the current factual understanding of the universe.

The public’s growing befuddlement with regard to the state of contemporary physics theories comes at an inopportune time. Long gone are the cold war days when particle physics was always funded – no questions asked.

With the Higgs boson hunt sold to the public as the main motivation for CERN’s supercollider, I fear that physics may be confronted with a major credibility crisis if this search comes up empty.  A crisis that would be fully self-inflicted, caused by selling untested theories as factual.

 

Nascent industry killed by birthing pains?

Networkworld.com has an instructive article on the state of quantum computing. While the headline is somewhat sensationalist (though not as much as mine), the article turned out to be well written and solidly sourced. It does a good job of highlighting what D-Wave has on offer while contrasting it with the universal quantum computing approach.

Dr. Suzanne Gildert from D-Wave is quoted extensively and, I think, succeeds in conveying why their product is relevant. It is also instructive that she envisions quantum computing as a kind of co-processing unit for cloud service providers. This very much meshes with my own expectation of where we will see the first wave of commercial applications of quantum computers (no pun intended).

Where the article appears somewhat disjointed is in not making clear that the argument it presents against the rapid availability of quantum computing hardly applies to the approach D-Wave is taking. Another testament to the fact that the waters have been considerably muddied as to what quantum computing actually means. Maybe D-Wave would be better served by consistently calling its method “Natural Quantum Computing”, as positioned by Dr. Gildert.

The gist of all the pessimism voiced in the article is the ever-present struggle to preserve the coherence of qbits. While this is the obvious Achilles’ heel of any quantum system, the critical voices quoted in the article seem oddly out of step with the current state of the art. The relatively new approach of cluster-state quantum computation significantly mitigates the decoherence problem. When it was first published in 2005 it prompted David Deutsch to shorten his expectation of the realization of the first universal quantum computer from many decades to within the decade.

These recent research trends in cluster-state/measurement-based quantum computation were nowhere reflected in the article, whereas the ultimate naysayer Artur Ekert from Oxford University was quoted as gleefully stating that “the best outcome of our research in this field would be to discover that we cannot build a quantum computer for some very fundamental reason, then maybe we would learn something new and something profound about the laws of nature.”

Makes you wonder if Ekert and his fellow faculty member Deutsch, who co-authored a couple of older papers with him, are still on the same page – or on speaking terms, for that matter.

Update: I should have realized that IT journalism subscribes to the same “he said – she said” sort of reporting that makes mainstream media so useless in the US. Ekert’s quote is probably taken out of context just to provide some semblance of faux balance in Networkworld’s article.

You keep using that word. I do not think it means what you think it means.

UCSB physicists claim to have implemented a von Neumann architecture for quantum computing.

Far be it from me to denigrate what they achieved but I have to take issue with the marketing.

They keep using that “von Neumann” word. I do not think it means what they think it means. In a strict sense a quantum computer cannot possibly implement a von Neumann architecture. Peter Hines, a theoretical computer science researcher, wrote a nice paper about just that question not too long ago.

I think this is an example of a somewhat careless appropriation of a term that has a well-defined meaning in computer science.

In my mind this is rather problematic, because the sloppy adoption of classical CS terms does nothing to illuminate the fundamental differences of quantum computing and is destined to just add confusion.

Update:

The UCSB researchers talk about a “Quantum von Neumann” architecture in their original paper. The “Quantum” unfortunately got dropped in many news releases.

The purpose of this blog

Gartner underestimates an emerging technology, and I will use this humble blog to redress that.  Unlike Gartner, I will do it for free, because this new and developing industry is utterly fascinating.

Gartner places quantum computing firmly at the early stages of what they call the “Hype Cycle”, classifying it as a technology that is more than 10 years away from maturity.

When googling for the latest publicly available report, I found that Gartner hadn’t revisited this assessment even in 2009.

This seems like a reasonable and conservative classification when taking into account how most quantum computing is performed at this time. The leading approach requires a roomful of high-tech lab equipment to create ultra-cold single ions that are then trapped with uncanny precision and manipulated by lasers to create entangled states that represent qbits. At the time of writing the record stands at 14 qbits that could potentially be used to perform quantum algorithms.

An impressive scientific feat but hardly with the potential to impact business computing any time soon.

Yet already in 2007 there was a small start-up company making some marketing noise, claiming it could deliver a pre-packaged quantum computer on a scale that won’t crowd your average corporate data center.

Initially, they kept their technology a black box. A demonstration in 2007 showed that they couldn’t convincingly outperform conventional computers. So Gartner’s stance back then was quite justifiable. They weren’t alone in dismissing this as vaporware.

But things have changed. The company, D-Wave, is now selling their (big) black box and has also found a first paying customer in Lockheed Martin.

The controversy over whether this is real quantum computing hasn’t gone away, but when the company published some of their results in Nature, even one of D-Wave’s fiercest critics had to admit that they had finally demonstrated the use of quantum annealing on eight qbits.

For the currently shipping system they claim 128 qbits. If all of these could actually be used as one entangled entity, then this new computational device would have some serious clout.

As my time allows I will use this blog to cover this emerging technology and strive to make it a useful resource that helps to distinguish between the hype and reality of quantum computing. I will delve into what makes this type of information processing so different from coding for good old fashioned Turing machines and explore the difference between the “conventional” quantum computing approach and what D-Wave is offering.

Some of my blog entries will be technical, and some will be lighter reading for the more business-minded crowd; I will mark them accordingly. In the end I hope this blog will prove to be a useful resource for any professional who may be confounded by the questions of what quantum computing is and what it’s good for.

Update:

D-Wave’s recent paper in Nature makes clear that they don’t use entangled qbit states. Rather, any extra power over a classical analog annealing scheme will stem from quantum tunneling.