This is essentially an extended update to my last D-Wave post. Rather than tack it onto that post, I think it is important enough to merit its own. The reason: I wish I could make anybody who plans on writing anything about D-Wave first watch the video below, from the first Q+ Google+ hang-out of this year.
It summarizes the results of the paper I blogged about in my last post on the matter. Ultimately, it lays out what is objectively known about D-Wave's machine based on the analyzed data, and sets out to answer three questions:
Does the machine work?
Is it quantum or classical?
Is it faster than a classical computer?
The short version is:
Yes.
Based on their modeling, the D-Wave 2 is indeed a true quantum annealer.
While it can beat an off-the-shelf solver, it cannot (yet) outperform, on average, a highly targeted, hand-crafted classical algorithm.
Of course there is much more in the video, and I highly recommend watching the whole thing. It comes with a good introduction to the subject, but if you only want the part about the test, you may want to skip ahead to the 11-minute mark (this way you also cut out some of the cheap shots at completely clueless popular media reports – an attempt at infusing some humor into the subject that may or may not work for you).
With regards to point (2), the academic discussion is not settled. A paper with heavyweight names on it just came out (h/t Michael Bacon). It proposes that a similar annealing behavior could be accomplished with a classical set-up after all. To me this is truly academic in the best and worst sense, i.e. a considerable effort to get all the i's dotted and the t's crossed. It simply seems a bit far-fetched that the company would set out to build a chip with coupled qubits that behave like a quantum annealer, yet somehow end up with an oddly behaving classical annealer.
From my point of view it is much more interesting to explore all the avenues that are open to D-Wave to improve their chip, such as this new paper on strategies for a quantum annealer to increase the success probability for hard optimization problems (h/t Geordie Rose).
Usually, I don’t blog about things that don’t particularly interest me. But even if you are a potted plant (preferably with a physics degree), you probably have people talking to you about this ‘amazing’ new paper by Stephen Hawking.
So, I am making the rare exception of re-blogging something, because Sabine Hossenfelder already wrote everything about this I could possibly want to say, and she did it much better and more convincingly than I would.
Stephen Hawking now thinks that there are only grey holes, which is a step up in the color scheme from black. But in honor of the Sochi Olympics, I really think the world needs rainbow colored black holes.
The English language is astoundingly malleable. It feels almost as if it was tailor-made for marketing spin. I noticed this long ago (feels like a lifetime) when working in a position that required me to sell software. Positioning products was much easier when I spoke English. Mind you, I never told a blatant lie, but I certainly spun the facts to put our product in the best light, and if a customer committed I'd do my darnedest to deliver the value that I promised. The kind of customers I dealt with were of course aware of this dance, and perfectly capable of performing their due diligence. From their perspective, in the end, it is always about buying into the vision, knowing full well that a cutting-edge technology, one that will give a real competitive benefit, will of course be too new to be without risk.
During the courting of the customers, any sales person worth their salt will do anything to make the product look as good as possible. One aspect of this is of course to stress positive things that others are saying about your offerings.
To accomplish this, selective quoting can come in very handy. For instance, after reviewing the latest pre-print paper that looks at D-Wave’s 503 qubit chip performance, Scott Aaronson stepped down for the second time as chief D-Wave critic. In the blog post where he announced this, he also observed that on “the ~10% of instances on which the D-Wave machine does best, (…) the machine does do slightly better (…) than simulated annealing”.
This puts in words what the following picture shows in much more detail.
Instance-by-instance comparison of annealing times and wall-clock times. Shown is a scatter plot of the pure annealing time for the DW2 compared to a simulated classical annealer (SA), using an average over 16 gauges on the DW2. This is figure 6 of the recent benchmark paper. Wall-clock times include the time for programming, cooling, annealing, readout and communication. Gauges refer to different encodings of a problem instance. (Only plots A and B are relevant to the settling of my bet.)
Now, if you don't click through to Scott's actual blog post, you may take away that he actually changed his stance. But of course he hasn't. You can look at the above picture and think the glass is ninety percent empty, or you could proclaim it is ten percent full.
The latter may sound hopelessly optimistic, but let's contemplate what we are actually comparing. Current computer chips are the end product of half a century of highly focused R&D, with billions of dollars poured into developing them. Yet we know we are getting to the end of the line of Moore's law. Leakage currents are already a real problem, and the writing is on the wall that we are getting ever closer to the point where the current technology will no longer allow for tighter chip structures.
On the other hand, the D-Wave chip doesn't use transistors. It is an entirely different approach to computing, as profoundly different as the analog computers of yore.
The integration density of a chip is usually classified by the length of the silicon channel between the source and drain terminals in its field-effect transistors (e.g. 25 nm). This measure obviously doesn't apply to D-Wave, but the quantum chip's integration density isn't even close to that. With the ridiculously low number of about 500 qubits on D-Wave's chip, which was developed on a shoestring budget compared to the likes of Intel or IBM, the machine still manages to hold its own against a modern CPU.
Yes, this is not a universal gate-based quantum computer, and the NSA won't warm up to it because it cannot implement Shor's algorithm, nor is there a compelling theoretical reason that you can achieve a quantum speed-up with this architecture. What it is, though, is a completely new way to do practical computing using circuit structures that leave plenty of room at the bottom. In a sense, it resets the clock to when Feynman delivered his famous and prophetic talk on the potential of miniaturization. This is why, from a practical standpoint, I fully expect to see a D-Wave chip eventually unambiguously outperform a classical CPU.
On the other hand, if you look at this through the prism of complexity theory none of this matters, only proof of actual quantum speed-up does.
Scott compares the quantum computing skirmishes he engages in with D-Wave to the American Civil War.
If the D-Wave debate were the American Civil War, then my role would be that of the frothy-mouthed abolitionist pamphleteer
Although clearly tongue in cheek, this comparison still doesn't sit well with me. Fortunately, in this war nobody will lose life or limb. The worst that could happen is a bruised ego, and if we have to stick to this metaphor, I don't see this as Gettysburg 1863 but as the town of Sandwich in 1812.
Without careful reading of the paper, a casual observer may come away with the impression that this test essentially just pitted hardware against hardware. Nothing could be further from the truth: considerable effort had to go into constructing impressive classical algorithms to beat the D-Wave machine on its own turf. This Google Quantum AI lab post elaborates on this (h/t Robert R. Tucci).
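To make the classical side of this comparison a bit more tangible, below is a minimal simulated annealing sketch for a toy Ising spin-glass instance. To be clear, this is only an illustration of the general technique: the instance (a simple ring of spins), the annealing schedule and all parameters are arbitrary choices of mine, and the solvers used in the benchmark are far more heavily optimized and run on instances with the chip's actual Chimera connectivity.

```python
import math
import random

def random_ring_ising(n, seed=0):
    """Toy spin-glass: random +/-1 couplings on a ring of n spins.
    Illustration only, not the Chimera-graph instances used in the benchmark."""
    rng = random.Random(seed)
    return {(i, (i + 1) % n): rng.choice([-1, +1]) for i in range(n)}

def energy(spins, J):
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def simulated_annealing(J, n, sweeps=2000, t_hot=3.0, t_cold=0.05, seed=1):
    rng = random.Random(seed)
    spins = [rng.choice([-1, +1]) for _ in range(n)]
    # precompute each spin's neighbours for cheap local-field updates
    nbrs = {i: [] for i in range(n)}
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))
    for s in range(sweeps):
        temp = t_hot * (t_cold / t_hot) ** (s / (sweeps - 1))  # geometric cooling
        for k in range(n):
            local_field = sum(Jij * spins[j] for j, Jij in nbrs[k])
            dE = -2 * spins[k] * local_field  # energy change if spin k is flipped
            if dE <= 0 or rng.random() < math.exp(-dE / temp):
                spins[k] = -spins[k]
    return spins, energy(spins, J)

if __name__ == "__main__":
    n = 64
    J = random_ring_ising(n)
    spins, E = simulated_annealing(J, n)
    print("final energy:", E)
```

Even this naive version finds low-energy states of small toy instances quickly; the real effort goes into making such codes fast and well matched to the problem structure at the scale of the benchmark.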
Ever since the news broke, courtesy of Edward Snowden, that the NSA spent in excess of $100M on quantum computing, I have meant to address this in a blog post. But Robert R. Tucci beat me to it, and he has some very interesting speculations to add.
He also picked up on this quantum computing article in the South China Morning Post reporting on research efforts in mainland China. Unfortunately, but unsurprisingly, it is light on technical details. Apparently China follows a shotgun approach of funding all sorts of quantum computing research. The race truly seems to be on.
Interestingly, the latter may very well follow a script that Geordie Rose was speculating on when I asked him where he thinks competition in the hardware space may one day originate from. The smart move for an enterprising Chinese researcher would be to take the government’s seed money, and focus on retracing a technological path that has already proven to be commercially successful. This won’t get the government an implementation of Shor’s algorithm any faster, but adiabatic factorization may be a consolation prize. After all, that one was already made in China.
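For readers wondering what adiabatic factorization actually looks like, the standard trick is to recast factoring as a minimization problem whose ground state encodes the factors; an annealer then searches for that ground state. The sketch below only illustrates the cost function, with a brute-force scan standing in for the annealer and the tiny semiprime 143 (the number famously factored adiabatically) as the example; the real experiments reduce such a cost to a qubit Hamiltonian, which is not shown here.

```python
def factoring_cost(N, p, q):
    """Cost whose global minimum (zero) is reached exactly when p * q == N.
    An adiabatic/annealing approach encodes p and q in qubits and minimizes
    a reduced form of this cost; here we simply scan small odd candidates."""
    return (N - p * q) ** 2

def factor_by_minimization(N):
    best = None
    for p in range(3, int(N ** 0.5) + 1, 2):        # odd candidates only
        for q in range(p, N // 2 + 1, 2):
            c = factoring_cost(N, p, q)
            if best is None or c < best[0]:
                best = (c, p, q)
            if c == 0:
                return p, q
    return best[1], best[2]

if __name__ == "__main__":
    print(factor_by_minimization(143))  # -> (11, 13)
```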
But do the NSA revelations really change anything? Hopefully they will add some fuel to the research efforts, but at this point that is likely to be the only effect. The NSA has many conventional ways to listen in on the mostly unsecured Internet traffic. On the other hand, RSA with a sufficiently long key length is still safe; for now, if customers were to switch to email hardened in this way, it would certainly make the snoops' job significantly harder.
During my autumn travel to the Canadian West Coast I was given the opportunity to visit yet another High Tech Start-up with a vision no less radical and bold than D-Wave’s.
I have written about General Fusion before, and it was a treat to tour their expanding facility, and to ask any question I could think of. The company made some news when they attracted investment from Jeff Bezos, but despite this new influx of capital, in the world of fusion research, they are operating on a shoe-string budget. This makes it all the more impressive how much they have already accomplished.
At the outset, the idea seems ludicrous: how could a small start-up company possibly hope to accomplish something that the multi-national ITER consortium attempts with billions of dollars? Yet the approach they are following is scientifically sound, albeit one that has fallen out of favor with mainstream plasma physicists. It is an approach that is incidentally well suited to smaller-scale experiments, and the shell of the experiment that started it all is now on display in the reception area of General Fusion.
Doug Richardson, General Fusion co-founder, is a man on a mission who brings an intense focus to the job. Yet, when prompted by the receptionist, he managed a smile for this photo, which shows him next to the shell from the original experiment that started it all. The other founder and key driving force, Michel Laberge, was unfortunately out of town during the week of my visit.
Popular Science was the first major media outlet to take note of the company. It is very instructive to read the article they wrote on the company back then to get a sense of how much bigger this undertaking has become. Of course, getting neutrons from fusion is one thing; getting excess energy is an entirely different matter. After all, the company that this start-up modeled its name after was enthusiastically demonstrating fusion to the public many decades ago.
But the lackluster progress of the conventional approach to fusion does not deter the people behind this project; rather, it seems to add to their sense of urgency. What struck me when first coming on site was the no-nonsense industrial feel to the entire operation. The company is renting some nondescript buildings, the interior more manufacturing floor than gleaming laboratory, every square inch purposefully utilized to run several R&D streams in parallel. Even before talking to co-founder Doug Richardson, the premises themselves sent a clear message: this is an engineering approach to fusion, and they are in a hurry. This is why, rather than just focusing on one aspect of the machine, they decided to work on several in parallel.
When asked where I wanted to start my tour, I opted for the optically most impressive piece: the scaled-down reactor core with its huge attached pistons. I wanted to scrutinize this first because, in my experience, this mechanical behemoth is what casual outside observers usually object to, based on the naive assumption that so many moving parts under such high mechanical stresses make for problematic technology. Doug met this argument with derision; in his mind this is the easy part, just a matter of selecting the right materials and precision mechanical engineering. He easily swatted aside my point that moving parts mean wear and tear. A layperson introduced to the concept is usually uncomfortable with the idea that pistons could produce this enormous pressure; after all, everybody is well acquainted with the limited lifetime of a car engine that has to endure far less. Doug turned this analogy on its ear, pointing out that a stationary mounted engine can run uninterrupted for a long time, and that reliability typically increases with scale.
Currently they have a 3:1 scaled-down reactor chamber built to test the vortex compression system (shown in the picture below).
The test version has a reactor sphere diameter of 1m. The envisioned final product will be three times the size. Still a fairly compact envelope, but too large to be hosted in this building.
Another of my concerns with this piece of machinery was the level of accuracy required to align the piston cylinders. The final product will require 200 of them, and if the system is sensitive to misalignment it is easy to imagine how this could impact its reliability.
It came as a bit of a surprise that the precision required is actually less than I expected: 50 microns (half a tenth of a millimeter) should suffice, and in terms of timing, the synchronization can tolerate deviations of up to 10 microseconds, ten times more than initially expected. This is due to a nice property that GF's research uncovered during the experiments: the spherical shock wave they are creating within the reactor chamber is self-stabilizing, i.e. the phase shift when one of the actuators is slightly out of line causes a self-correcting interference that helps to keep the ingoing compression symmetric as it travels through the vortex of molten lead-lithium at the heart of the machine.
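To get a feel for why a 10 microsecond tolerance can be so forgiving, a toy calculation helps: as long as the timing jitter is small compared to the duration of each piston's pressure pulse, the superposition of all the pulses barely changes. The sketch below assumes, purely for illustration, Gaussian pulses with a 150 microsecond width and 200 pistons; the real pulse shapes, widths and acoustic focusing are of course set by GF's actual hardware and are far richer than this one-dimensional cartoon.

```python
import math
import random

def relative_peak(n_pistons=200, jitter_us=10.0, pulse_width_us=150.0,
                  seed=0, samples=1201):
    """Toy model: sum n Gaussian pressure pulses with jittered arrival times and
    return the peak of the sum relative to perfect synchronization (1.0).
    Pulse width (Gaussian sigma) and shape are assumptions for illustration only."""
    rng = random.Random(seed)
    offsets = [rng.gauss(0.0, jitter_us) for _ in range(n_pistons)]
    t_axis = [-3 * pulse_width_us + i * (6 * pulse_width_us) / (samples - 1)
              for i in range(samples)]
    summed = [sum(math.exp(-0.5 * ((t - dt) / pulse_width_us) ** 2) for dt in offsets)
              for t in t_axis]
    return max(summed) / n_pistons

if __name__ == "__main__":
    for jitter in (1.0, 10.0, 50.0, 150.0):
        print(f"jitter {jitter:6.1f} us -> relative peak {relative_peak(jitter_us=jitter):.3f}")
```

With these assumed numbers, a 10 microsecond jitter costs essentially nothing, while degradation only becomes noticeable once the jitter approaches the pulse width itself.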
The reason for this particular metal mix within the reactor is the shielding properties of lead, and the fact that lithium-6 has a large neutron absorption cross section that allows for breeding tritium fuel. This is a very elegant design that ensures that if the machine gets to the point of igniting fusion, there will be no neutron activation problems like those which plague conventional approaches (i.e. with a tokamak design as used by ITER, neutrons, which cannot be magnetically contained, bombard the reactor wall, eventually wearing it down and turning it radioactive).
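For reference, the two textbook reactions behind this fuel cycle are the D-T fusion reaction itself and the tritium-breeding reaction in the lithium; the energy values below are the standard ones, not GF-specific figures:

```latex
% D-T fusion: most of the released energy is carried away by a fast neutron
\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})

% Tritium breeding in the lead-lithium liner: the neutron is put to good use
n + {}^{6}\mathrm{Li} \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV}
```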
Doug stressed that this reflects their engineering mindset: they need to get these problems under control from the get-go, whereas huge projects like ITER can afford to kick the can down the road, i.e. first measure the scope of the problem and then hope to address it with a later research effort (which is then supposed to solve a problem that General Fusion's approach eliminates altogether).
Another aspect of the design that I originally did not understand is that plasma will be injected from both sides of the sphere simultaneously, so that the overall momentum of the plasma cancels out at the center, i.e. the incoming shock wave doesn't have to hit a moving target.
The following YouTube video animation uploaded by the company illustrates how all these pieces are envisioned to work together.
Managing the plasma properties and its dynamics, i.e. avoiding unwanted turbulence that may reduce temperature and/or density, is the biggest technological challenge.
To create plasma of the required quality, and in order to get it into place, the company constructed some more impressive machinery. It is a safe bet that they have the largest plasma injectors ever built.
Admittedly, comparing this behemoth to the small plasma chamber in the upper left corner is comparing apples to oranges, but then this machine is in a class of its own.
When studying the plasma parameters, it turned out that the theoretical calculations actually led to an over-engineering of this injector, and that smaller ones may be adequate for creating plasma of the desired density. But of course creating and injecting the plasma is only the starting point. The most critical aspect is how this plasma behaves under compression.
To fully determine this, GF faces the same research challenges as the related magnetized target fusion research program in the US, i.e. General Fusion needs to perform tests similar to those conducted at the SHIVA STAR Air Force facility in Albuquerque. In fact, due to budget cut-backs, SHIVA has spare capacity that could be used by GF, but exaggerated US security regulations unfortunately prevent such cooperation; it is highly doubtful that civilian Canadians would be allowed access to the military-class facility. So the company has to improvise and come up with its own approach to this kind of implosion test. The photo below shows an array of sensors that is used to scrutinize the plasma during one of these tests.
Understanding the characteristics of the plasma when imploded is critical; these sensors on top of one of the experimental set-ups are there to collect the crucial data. Many such experiments will be required before enough data has been amassed.
Proving that they can achieve the next target compression benchmark is critical in order to continue receiving funding from the federal Canadian SDTC fund. The latter is the only source of governmental fusion funding; Canada has no dedicated program for fusion research and even turned its back on the ITER consortium. This is a far cry from Canada's technological vision in the sixties, which resulted in nuclear leadership with the unique CANDU design. Yet there is no doubt that General Fusion has been doing the most with the limited funds it has received.
Here's hoping that the Canadian government may eventually wake up to the full potential of a fusion reactor design 'made in Canada' and start looking beyond the oil patch for its energy security (although this will probably require that the torch be passed to a more visionary leadership in Ottawa).
An obligatory photo for any visitor to General Fusion. Unfortunately, I forgot my cowboy hat.
~~~
Update: What a start to 2014 for this blog. This post has been featured on Slashdot and received over 11K views within three days. Some of the commenters on Slashdot wanted to dig deeper into the science of General Fusion. For those who want to follow through on this, the company's papers, as well as those describing important results that GF builds on, can be found on their site. In addition, specifically for the unique vortex technology, I find James Gregson's Master's thesis very informative.
Update 2: General Fusion can be followed on Twitter @MTF_Fusion (h/t Nathan Gilliland)
Update 3: Some Canadian mainstream media, like the Edmonton Journal, have also noticed the conspicuous absence of dedicated fusion research. Ironically, the otherwise well-written article argues for an Alberta-based research program while not mentioning General Fusion once, despite the fact that the company is right next door (by Canadian standards) and in fact has one major Alberta-based investor, the oil company Cenovus Energy.
It seems that work and life are conspiring to leave me no time to finish my write-up on my General Fusion visit. I started it weeks ago, but I am still not ready to hit the publish button on that piece.
The ‘Fun Is Real’ blog is a cornucopia of good physics writing and should provide many hours of thought-provoking reading material to bridge over the dearth of my current posting schedule.
This is orders of magnitude more than has previously been reported. Now, if they were to manage to get some quantum gates applied to them, we'd be getting somewhere.
This is the second part of my write-up on my recent visit to D-Wave. The first one can be found here.
The recent shut-down of the US government had widespread repercussions. One of the side effects was that NASA had to stop all non-essential activities, and this included quantum computing. So the venture that, in cooperation with Google, jointly operates a D-Wave machine was left in limbo for a while. Fortunately, this was short-lived enough to hopefully not have any lasting adverse effects. At any rate, maybe it freed up some time to produce a QC mod for Minecraft and the following high-level and very artsy Google video that 'explains' why they want to do quantum computing in the first place.
If you haven't been raised on MTV music videos and find rapid-succession, sub-second cuts migraine-inducing (at about the 2:30 mark things settle down a bit), you may want to skip it. So here's the synopsis (spoiler alert). The short version of what motivates Google in this endeavor, to paraphrase their own words: we research quantum computing, because we must.
In other news, D-Wave recently transferred its foundry process to a new location, partnering with Cypress Semiconductor Corp., a reminder that D-Wave has firmly raised the production of superconducting niobium circuitry to a new, industrial-scale level. Given these new capabilities, it may not be a coincidence that the NSA recently announced its intention to fund research into superconducting computing. Depending on how they define "small-scale", the D-Wave machine should already fall under the description in the solicitation bid, which aspires to the following …
“… to demonstrate a small-scale computer based on superconducting logic and cryogenic memory that is energy efficient, scalable, and able to solve interesting problems.”
… although it is fair to assume this program is aimed at classical computing. Prototypes for such chips have already been researched and look rather impressive (direct link to paper). They use the same chip material and circuitry (Josephson junctions) as D-Wave, so it is not a stretch to consider that industrial-scale production of those more conventional chips could immediately benefit from the foundry process know-how that D-Wave has accumulated, nor to imagine that D-Wave may expand into this market space.
When I put the question to D-Wave's CTO Geordie Rose, he certainly took some pride in his company's manufacturing expertise. He stressed that, before D-Wave, nobody was able to scale superconducting VLSI chip production, so this now opens up many additional opportunities. He pointed out that one could, for instance, make an immediate business case for a high-throughput router based on this technology, but given the many avenues open for growth, he stressed the need to choose wisely.
The capacity of the D-Wave fridges is certainly such that they could accommodate more superconducting hardware. Starting with the Vesuvius chip generation, measurement heat is now generated far away from the chip, so having several chips in close proximity should not disturb the thermal equilibrium at the core. Geordie is considering deploying stacks of quantum chips so that thousands could work in parallel, since they currently just throw away a lot of perfectly good chips that come off a wafer. This may eventually necessitate larger cooling units than the current ones, which draw 16 kW. Such an approach certainly could make a lot of sense for a hosting model where processing time is rented out to several customers in parallel.
One attractive feature that I pointed out was that having classical logic within the box would eliminate a potential bottleneck when rapid re-initialization and read-out of the quantum chip is required, and it would also open the possibility of direct optical interconnects between chips. Geordie seemed to like this idea. One of the challenges in making the current wired design work was to design high-efficiency low-pass filters to bring the noise level in these connectors down to an acceptable level. So, in a sense, an optical interconnect could reduce complexity, but it would clearly also require some additional research effort to bring down the heat signature of such an optical transmission.
This triggered an interesting, and somewhat ironic, observation on the challenges of managing an extremely creative group of people. Geordie pointed out that he has to think carefully about what to focus his team on, because an attractive side project, e.g. 'adiabatic' optical interconnects, could prove so interesting to many team members that they'd gravitate towards working on it rather than keeping their focus on the other work at hand.
Some other managerial headaches stem from the rapid development cycles. For instance, Geordie would like to develop some training program that will allow a customer’s technical staff to be quickly brought up to speed. But by the time such a program is fully developed, chances are a new chip generation will be ready and necessitate a rewrite of any training material.
Some of D-Wave's challenges are typical for high-tech start-ups, others specific to D-Wave. My next, and final, installment will focus on Geordie's approach to managing these growing pains.
Lots of travel last week delayed the second installment on my D-Wave visit write-up, but I came across some worthy re-blog material to bridge the gap.
I am usually very hard on poorly written popular science articles, which is all the more reason to point to some outstanding material in this area. I found that one writer, Brian Dodson at the Gizmag site, usually delivers excellent content. Due to his science background, he brings an unusual depth of understanding to his writing. His latest pieces are on General Relativity-compatible alternatives to dark energy and a theoretical quantum black hole study that puts the loop quantum gravity approach to some good use. The latter is a good example of why I am much more inclined towards Loop Quantum Gravity than towards the ephemeral String theory, as the former at least delivers some predictions.
To get a taste of how this prohibitive attitude permeates the physics community, this arXiv paper/rant is a must-read. From the abstract:
If you have a restless intellect, it is very likely that you have played at some point with the idea of investigating the meaning and conceptual foundations of quantum mechanics. It is also probable (albeit not certain) that your intentions have been stopped in their tracks by an encounter with some version of the “Shut up and calculate!” command. You may have heard that everything is already understood. That understanding is not your job. Or, if it is, it is either impossible or very difficult. Maybe somebody explained to you that physics is concerned with “hows” and not with “whys”; that whys are the business of “philosophy” -you know, that dirty word. That what you call “understanding” is just being Newtonian; which of course you cannot ask quantum mechanics to be. Perhaps they also complemented this useful advice with some norms: The important thing a theory must do is predict; a theory must only talk about measurable quantities. It may also be the case that you almost asked “OK, and why is that?”, but you finally bit your tongue. If you persisted in your intentions and the debate got a little heated up, it is even possible that it was suggested that you suffered of some type of moral or epistemic weakness that tends to disappear as you grow up. Maybe you received some job advice such as “Don’t work in that if you ever want to own a house”.
At least, if this blog post is any indication, the times seem to be changing and becoming more permissive.
This is my first installment of the write-up on my recent visit to D-Wave in Burnaby, BC.
No matter where you stand on the merits of D-Wave's technology, there is no doubt they have already made computing history. Transistors have been the sole basis of our rapidly improving information technology since the last vacuum tube computer was sold in the early sixties. That is, until D-Wave started to ship their first system. Having won business from the likes of NASA and Google, this company is now playing in a different league. D-Wave now gets to present at high-profile events such as the HPC IDP conference, and I strongly suspect that they have caught the eye of the bigger players in this market.
The entire multi-billion dollar IT industry is attached at the hip to the existing computing paradigm, and abhors cannibalizing existing revenue streams. This is why I am quite certain that, as I write this, SWOT assessments and talking points on D-Wave are being typed up in some nondescript Fortune 500 office buildings (relying on corporate research papers like this to give them substance). After all, ignoring them is no longer an option. Large companies like to milk cash cows as long as possible. An innovative powerhouse like IBM, for instance, often follows the pattern of investing in R&D up to productization, but is prone to holding back even superior technology if it may jeopardize existing lines of business. Rather, they just wait until a competitor forces their hand, and then they rely on their size and market depth, in combination with their quietly acquired IP, to squash or co-opt the challenger. They excel at this game and seldom lose it (it took somebody as nimble and ruthless as Bill Gates to beat them once).
This challenging competitive landscape weighed on my mind when I recently had an opportunity to sit down with D-Wave founder and CTO Geordie Rose, and so our talk first focused on D-Wave's competitive position. I expected that patent protection and technological barriers to entry would dominate this part of our conversation, and was very surprised by Geordie's stance, which certainly defied conventional thinking.

Geordie Rose, founder and CTO of D-Wave, in one of the Tardis-sized boxes that host his quantum chip. The interior is cooled close to absolute zero when in operation. If you subscribe to the multiverse interpretation of quantum mechanics, one may argue that it is in fact bigger on the inside. After all, Hilbert space is a big place.
While he acknowledged the usefulness of the over 100 patents that D-Wave holds, he only considers them effectively enforceable in geographies like North America. Overall, he does not consider them an effective edge to keep out competition, but was rather sanguine that the fate of any computing hardware is to eventually become commoditized. He asserted that the academic community misjudged how hard it would be to produce a device like the D-Wave machine. Now that D-Wave has paved the way, he considers cloning and reverse-engineering of this technology to be fairly straightforward. One possible scenario would be a government-funded QC effort in another geography to incubate this new kind of information processing; in that case, patent litigation would be expensive, and may ultimately be futile. Yet he doesn't expect these kinds of competitive efforts until D-Wave's technology has further matured and proven its viability in the marketplace.
I submitted that the academic push-back, which spreads some FUD with regard to D-Wave's capabilities, may actually help in this respect. This prompted a short exchange on the disconnect with some of the academic QC community. D-Wave will continue to make its case with additional publications that demonstrate entanglement and the true quantum nature of their processor. But ultimately this is a side-show; the research focus is customer-driven, and to the extent that this means deep learning (e.g. for pattern recognition), the use case of the D-Wave chip is evolving. Rather than only using it as an optimization engine, Geordie explained how multiple solution runs can be used to sample the problem space of a learning problem and facilitate more robust learning.
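A rough sketch of what 'sampling rather than optimizing' means in practice: instead of keeping only the single best result, you keep the whole set of low-energy configurations returned by many anneals and use their empirical frequencies as an approximate distribution, which is what a learning algorithm actually consumes. In the sketch below the annealer is stubbed out with a classical routine, purely to illustrate the bookkeeping; the energy function and all parameters are illustrative assumptions of mine, not D-Wave's actual training pipeline.

```python
import collections
import math
import random

def anneal_once(energy_fn, n_vars, rng, sweeps=200, t_hot=2.0, t_cold=0.1):
    """Stand-in for a single annealing run (a real setup would query the hardware)."""
    state = [rng.choice([-1, +1]) for _ in range(n_vars)]
    for s in range(sweeps):
        temp = t_hot * (t_cold / t_hot) ** (s / (sweeps - 1))
        k = rng.randrange(n_vars)
        flipped = state[:]
        flipped[k] = -flipped[k]
        dE = energy_fn(flipped) - energy_fn(state)
        if dE <= 0 or rng.random() < math.exp(-dE / temp):
            state = flipped
    return tuple(state)

def sample_distribution(energy_fn, n_vars, n_reads=500, seed=0):
    """Collect many runs and return empirical frequencies over returned states."""
    rng = random.Random(seed)
    counts = collections.Counter(anneal_once(energy_fn, n_vars, rng)
                                 for _ in range(n_reads))
    return {state: c / n_reads for state, c in counts.items()}

if __name__ == "__main__":
    # Tiny toy energy: a ferromagnetic pair with two degenerate ground states
    energy = lambda s: -s[0] * s[1]
    dist = sample_distribution(energy, n_vars=2)
    print(dist)  # roughly half the probability mass on each aligned configuration
```

The point of the exercise is that the returned dictionary of frequencies, rather than any single configuration, is the object a learning procedure can work with.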
It is the speed of customer-driven innovation that Geordie relies on to give D-Wave a sustainable edge, and ultimately he expects that software and services for his platform will prove to be the key to a sustainable business. The current preferred mode of customer engagement is what D-Wave calls a deep partnership, i.e. working in very close collaboration with the customer's staff. Yet, as the customer base grows, more management challenges appear, since clear lines have to be drawn to mark where the customer's intellectual property ends and D-Wave's begins; the company has to be able to re-sell solutions tied to its architecture.
D-Wave experiences some growing pains typical of a successful organization, and some unique high-tech challenges in managing growth. How Geordie envisions tackling those will be the subject of the next installment.
It lays out how the current scientific publishing process is a holdover from the 19th century and passionately makes the case for Open Science. The latter is mostly understood to be synonymous with Open Access, i.e. no more hiding of scientific results in prohibitively expensive journals, especially when tax-funded grants or public institutions paid for the research.
But Michael has a more expansive view. He makes the case that science can be measurably enriched by coming out of the Ivory tower and engaging the public via well designed crowdsourcing efforts such as the Galaxy Zoo.
On this blog, I have written many times about the shortcomings of science media large and small, as well as the unsatisfying status quo in theoretical physics. Readers may be justified in wondering why this should matter to them. The answer is straightforward: science is too important to be left to the scientists. Our society is shaped by science and technology, and to the extent that we've all learned about the scientific method, everybody has the capacity to raise valid questions. Science, like any other major endeavor, benefits from a critical public, and that is why the fairytale science that I wrote about in my last post is a dangerous development. It lulls interested observers into believing that they are clearly out of their depth, incapable of even formulating some probing questions. This can in fact be turned into a criterion for bad science: if a reasonably intelligent and educated person cannot follow up with some questions after a science presentation, it's a pretty good indication that the latter is either very poorly done or deals in fairytale science (the only problem with this criterion is that usually everybody considers themselves reasonably intelligent).
The antidote to this pathological development is Open Science as described by Michael Nielsen, together with Citizen Science. I expect the latter to have no less of an impact on the way we do science than the Open Source movement had on the way we do computing. Never have the means to do quality science been as affordable as they are today; a simple smartphone is already a pretty close match to the fabled Star Trek tricorder, and can easily be turned into a precision instrument. Science labs used to require skilled craftsmen to build scientific rigs, but 3D printers will level the playing field there as well. This means that experiments that would have required major funding just two decades ago are now within the means of high school students.
So, don’t ask what science can do for you, but what you can do for science.*
*In this spirit, I decided to step up this blog's content, and didn't shy away from the expense of engaging in some original reporting. Last week I took a trip to Canada's high-tech wonderland, which happens to be Burnaby, BC, just outside Vancouver. Stay tuned for some upcoming first-hand reporting on D-Wave and General Fusion.