Update: This research is now again generating some mainstream headlines. It will be interesting to see whether this hybrid chip paradigm has more staying power than previous analog computing approaches.
###
Fifteen years ago I attempted to find an efficient randomized training algorithm for simple artificial neural networks suitable for implementation on a specialized hardware chip. The latter’s design only allowed feed-forward connections, i.e. back-propagation on the chip was not an option. The idea was that, given the massive acceleration of the network’s execution on the chip, some sort of random walk search might be at least as efficient as optimized backprop algorithms on general-purpose computers.
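To give a flavor of what such a random walk search looks like, here is a minimal sketch in Python. The network size (2-4-1), the XOR toy task, the perturbation scale, and the iteration count are all illustrative assumptions on my part, not the actual chip’s architecture or the algorithm we studied back then; the point is simply that only forward passes are needed, which is exactly what a feed-forward-only chip could accelerate.

```python
import numpy as np

# Toy XOR task (illustrative only -- not the original chip's workload).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

rng = np.random.default_rng(0)

def forward(weights, x):
    """Forward pass of a tiny 2-4-1 network; this is the only operation
    a feed-forward-only chip would need to accelerate."""
    W1, b1, W2, b2 = weights
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(weights):
    pred = forward(weights, X).ravel()
    return np.mean((pred - y) ** 2)

# Random initial weights for the 2-4-1 network.
weights = [rng.normal(0, 0.5, (2, 4)), np.zeros(4),
           rng.normal(0, 0.5, (4, 1)), np.zeros(1)]

best = loss(weights)
step = 0.1  # perturbation scale (an arbitrary choice for this sketch)

# Random walk search: perturb all weights, keep the step only if the loss improves.
for it in range(20000):
    candidate = [w + rng.normal(0, step, w.shape) for w in weights]
    c_loss = loss(candidate)
    if c_loss < best:
        weights, best = candidate, c_loss

print(f"final loss: {best:.4f}")
print("predictions:", forward(weights, X).ravel().round(2))
```

No gradients are ever computed; each candidate is evaluated purely by running the network forward, so the cost per trial is dominated by inference, which is where a dedicated chip would pay off.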
My research group followed a fairly conventional digital design, whereas at the time analog VLSI was all the rage, a field (like so many others) pioneered by Carver Mead. On the face of it this makes sense, given that biological neurons obviously work with analog signals, yet nevertheless attain remarkable robustness (the latter being the typical problem with any sort of analog computing). It is also this robustness, however, that makes the “infinite” precision that is the primary benefit of analog computing somewhat superfluous.
Looking back at this, I expected this analog VLSI approach to have been a bit of an engineering fad, as I wasn’t aware of any commercial systems ever hitting the market – of course I could have easily missed a commercial offering if it followed a similar trajectory as the inspiring but ultimately ill-fated transputer. In the end the latter was just as much a fad as the Furby toy of yesteryear, yet arguably much more inspiring.
To my surprise and ultimate delight, a quick look at the publication count for analog neural VLSI proves me wrong, and there is still some interesting science happening:
So why are there no widespread commercial neuromorphic products on the market? Where is the neural analog VLSI co-processor to make my laptop more empathic and fault tolerant? I think the answer comes down simply to Moore’s law. A flagship neuromorphic chip currently designed at MIT boasts a measly 400 transistors. I don’t want to dispute its scientific usefulness – having a detailed synapse model in silicon will certainly have its uses in medical research (and the Humane Society will surely approve if it cuts down on the demise of guinea pigs and other critters). On the other hand, the Blue Brain Project claims it has already successfully simulated an entire rat cortical column on its supercomputer, and its goal is nothing less than a complete simulation of the rodent’s brain.
So what does this have to do with Adiabatic Quantum Computing? Just as in the case of neuromorphic VLSI technology, its main competition for the foreseeable future is conventional hardware. This is the reason why I was badgering D-Wave when I thought the company didn’t make enough marketing noise about the Ramsey number research performed with their machine. Analog neural VLSI technology may find a niche in medical applications, but so far there is no obvious market niche for adiabatic quantum computing. Scott Aaronson argued that the “coolness” of quantum computing will sell machines. While this label has some marketing value, not least due to some of his inspired stunts, this alone will not do. In the end, adiabatic quantum computing has to prove its mettle in raw computational performance per dollar spent.
(h/t to Thomas Edwards, who posted a comment a while back in the LinkedIn Quantum Information Science group that inspired this post)