Category Archives: Popular Science

The Other Kind of Fusion – General Fusion

When it comes to nuclear fusion size matters.

Recent news regarding nuclear fusion research has not been good.

Yet, just as D-Wave was mostly off the radar in quantum computing, there is another Vancouver-based high-tech venture that could similarly upset fusion research.

The hot fusion plasma ball up in the sky is, compared to the general fusion challenge down here on Earth, really, really big; it generates an enormous amount of pressure at its core, creating the kind of critical density that helps sustain the fusion reaction. So just heating a plasma to the Sun’s core temperature (about 16 million K) will not suffice; we have to go about ten times higher in order to compensate for the lack of gravitational pressure. It shouldn’t be surprising that designing a reactor chamber that can hold the hottest thing in our solar system poses a significant engineering challenge.

On the other hand, the idea of tackling the second parameter, the plasma’s pressure, in a controllable manner was generally regarded as technically impossible (not counting NIF-like implosion scenarios, which mimic the runaway implosion of an H-bomb, which is why they are of interest to the military).

This common wisdom held until General Fusion entered the fray and made the case that advances in electronics and process control have opened up the possibility of tackling the density side of the fusion equation.  And then they built this:

A device that would fit nicely into the engine room of a spaceship and not look out of place on a SciFi set.

This device follows the age-old engineering adage that if you want compression you use a piston, and if you want large compression you use a large piston that focuses all the energy into a tiny space.  The trick is to do this so precisely that the piston strokes can be coordinated with the injection of fuel gas along a central axis, yielding a succession of pulsed fusion ignitions with each coordinated firing of the pneumatic pistons.

As with most fusion reactor schemes, the envisioned reactor will be fairly compact.

When I first heard about this concept, I thought it was completely off the wall, but the math checks out and there have been other experiments to confirm the viability of this approach.

This device may be testing the limits of mechanical engineering, but if it can create the conditions it aims for, then our current understanding of plasma and nuclear physics clearly indicates that fusion will result.

The interior of the reactor chamber will have to be cooled with liquid lead. Despite the enormous energy density at its core, the overall footprint of the reactor itself is fairly compact, no bigger than the typical dimensions of a commercial nuclear fission reactor. If this design pans out, such reactors could be used to retrofit existing nuclear power stations with a fusion core, converting them to a much cleaner energy source that does not carry the risk of accidentally triggering an uncontrollable nuclear chain reaction.

The timeline for bringing this to the market is aggressive.  If General Fusion delivers on it, there will be a commercial fusion offering available before ITER even opens its doors.

Given that the latter is not even attempting to deliver a commercial-ready design, the company would be without competition (unless one of the other commercial fusion ventures, such as LPP, beats them to it).

GF Timeline

Fortunately, with this company it won’t be hard to decide if and when they manage to deliver on their promises (there won’t be any grounds for the kind of academic backlash that D-Wave has to endure). Unlike in the world of fringe science, where even the simple act of measuring (supposedly) substantial energy gain is obfuscated to the point of utter hilarity, once General Fusion achieves net energy gain there will be little doubt that we have entered the dawn of a new energy age.

(SOURCES: General Fusion Web site, GF 2012 progress report)

The Meaning of Wave Mechanics and the Mongol Physics Project

Mongols knew that a horse was either dead or alive, but never in a state of superposition between the twain.

Kingsley Jones, an Australian theoretical physicist turned entrepreneur, recently introduced what he dubs Mongol physics, a bold undertaking to “fix” QM and QED.

The name is aptly chosen, because if he succeeds in this, most of academic physics will be as taken by surprise as Europe was when the Golden Horde arrived. After all, physics doesn’t perceive these theories as defective, despite the enduring confusion as to what QM interpretation makes the most sense.

Kingsley dubs Erwin Schrödinger “Mongol #1”, and there is a good reason for this. Having just received my copy of his collected papers, the first thing I came across was the little gem that I include below. The fact that it reads just as relevant 60 years later speaks volumes.  The only thing that has changed since then is that clever ways were found to deal with the runaway infinities in QED, so that accurate numbers could be forced out of it. Schrödinger knew better than to hinge any of his arguments on these major technical challenges at the time.  Rather, the article details his discomfort with the Copenhagen interpretation based on very fundamental considerations.  It makes me wonder how he’d feel about the fact that the cat in a box, which he made up to mock the status quo, entered popular culture as a supposedly valid illustration of quantum weirdness.

(Austrian copyright protection expires after 70 years, yet since scans of the article are freely accessible at this University of Vienna site, I assume this text has already been placed in the public domain and is hence free for online reproduction.  Please note this is not a translation: Schrödinger was fluent in several languages and originally penned this in English.)

THE MEANING OF WAVE MECHANICS
by Erwin Schrödinger
(For the July Colloquium, Dublin 1952)

Louis de Broglie’s great theoretical discovery of the wave phenomenon associated with the electron was followed within a few years, on the one hand by incontrovertible experimental evidence (based on interference patterns) of the reality of the de Broglie waves (Davisson and Germer, G. P. Thomson), and on the other hand by a vast generalization of his original ideas, which embraces the entire domain of physics and chemistry, and may be said to hold the field today along the whole line, albeit not precisely in the way de Broglie and his early followers had intended.

For it must have given to de Broglie the same shock and disappointment as it gave to me, when we learnt that a sort of transcendental, almost psychical interpretation of the wave phenomenon had been put forward, which was very soon hailed by the majority of leading theorists as the only one reconcilable with experiments, and which has now become the orthodox creed, accepted by almost everybody, with a few notable exceptions. Our disappointment consisted in the following. We had believed that the eigenfrequencies of the wave phenomenon, which were in exact numerical agreement with the, until then so called, energy levels, gave a rational understanding of the latter. We had confidence that the mysterious “fit and jerk theory” about the jump-like transition from one energy level to another was now ousted. Our wave equations could be expected to describe any changes of this kind as slow and actually describable processes. This hope was not informed by personal predilection for continuous description, but if anything, by the wish for any kind of description at all of these changes. It was a dire necessity. To produce a coherent train of light waves of 100 cm length and more, as is observed in fine spectral lines, takes a time comparable with the average interval between transitions. The transition must be coupled with the production of the wave train. Hence if one does not understand the transition, but only understands the “stationary states”, one understands nothing. For the emitting system is busy all the time in producing the trains of light waves, it has no time left to tarry in the cherished “stationary states”, except perhaps in the ground state.

Another disconcerting feature of the probability interpretation was and is that the wave function is deemed to change in two entirely distinct fashions; it is thought to be governed by the wave equation as long as no observer interferes with the system, but whenever an observer makes a measurement, it is deemed to change into an eigenfunction of that eigenvalue of the associated operator that he has measured. I know only of one timid attempt (J. von Neumann in his well known book) to put this “change by measurement” to the door of a perturbing operator introduced by the measurement, and thus to have it also controlled solely by the wave equation. But the idea was not pursued, partly because it seemed unnecessary to those who were prepared to swallow the orthodox tenet, partly because it could hardly be reconciled with it. For in many cases the alleged change involves an actio in distans, which would contradict a firmly established principle, if the change referred to a physical entity. The non-physical character of the wave function (which is sometimes said to embody merely our knowledge) is even more strongly emphasized by the fact that according to the orthodox view its change by measurement is dependent on the observer’s taking cognizance of the result. Moreover the change holds only for the observer who does. If you are present, but are not informed of the result, then for you even if you have the minutest knowledge both of the wave function before the measurement and of the appliances that were used, the changed wave function is irrelevant, not existing, as it were; for you there is, at best, a wave function referring to the measuring appliances plus the system under consideration, a wave function in which the one adopted by the knowing observer plays no distinguished role.

M. de Broglie, so I believe, disliked the probability interpretation of wave mechanics as much as I did. But very soon and for a long period one had to give up opposing it, and to accept it as an expedient interim solution. I shall point out some of the reasons why the originally contemplated alternative seemed deceptive and, after all, too naive. The points shall be numbered for later reference; the illustrating examples are representative of wide classes.

  • i) As long as a particle, an electron or proton etc., was still believed to be a permanent, individually identifiable entity, it could not adequately be pictured in our mind as a wave parcel. For as a rule, apart from artificially constructed and therefore irrelevant exceptions, no wave parcel can be indicated which does not eventually disperse into an ever increasing volume of space.
  • ii) The original wave-mechanical model of the hydrogen atom is not self-consistent. The electronic cloud effectively shields the nuclear charge towards outside, making up a neutral whole, but is inefficient inside; in computing its structure its own field that it will produce must not be taken into account, only the field of the nucleus.
  • iii) It seemed impossible to account for e.g. Planck’s radiation formula without assuming that a radiation oscillator (proper mode of the hohlraum) can only have energies nhν, with n an integer (or perhaps a half odd integer). Since this holds in all cases of thermodynamic equilibrium that do not follow the classical law of equipartition we are thrown back to the discrete energy states with abrupt transitions between them, and thus to the probability interpretation.
  • iv) Many non-equilibrium processes suggest even more strongly the “transfer of whole quanta”; the typical, often quoted example is the photoelectric effect, one of the pillars of Einstein’s hypothesis of light quanta in 1905.

All this was known 25 years ago, and abated the hopes of “naive” wave-mechanists. The now orthodox view about the wave function as “probability amplitude” was put forward and was worked out into a scheme of admirable logical consistency. Let us first review the situation after the state of knowledge we had then. The view suggested by (iii) and (iv), that radiation oscillators, electrons and similar constituents of observable systems always find themselves at one of their respective energy levels except when they change abruptly to another one handing the balance over to, or receiving it from, some other system, this view, so I maintain, is in glaring contradiction with the above mentioned scheme in spite of the admirable logical self-consistency of the latter. For one of the golden rules of this scheme is, that any observable is always found at one of its eigenvalues, when you measure it, but that you must not say that it has any value, if you do not measure it. To attribute sharp energy values to all those constituents, whose energies we could not even dream of measuring (except in a horrible nightmare), is not only gratuitous but strictly forbidden by this rule.

Now let us review the situation as it is today. Two new aspects have since arisen which I consider very relevant for reconsidering the interpretation. They are intimately connected. They have not turned up suddenly. Their roots lie far back, but their bearing was only very gradually recognized.

I mean first the recognition that the thing which has always been called a particle and, on the strength of habit, is still called by some such name is, whatever it may be, certainly not an individually identifiable entity. I have dwelt on this point at length elsewhere [“Endeavour”, Vol.IX, Number 35, July 1950; reprinted in the Smithsonian Institution Report for 1950, pp. 183, – 196; in German “Die Pyramide”, Jan. and Feb. 1951 (Austria)]. The second point is the paramount importance of what is sometimes called “second quantization”.

To begin with, if a particle is not a permanent entity, then of the four difficulties labelled above, (i) is removed. As regards (ii), the quantization of de Broglie’s waves around a nucleus welds into one comprehensive scheme all the 3n-dimensional representations that I had proposed for the n-body problems. It is not an easy scheme, but it is logically clear and it can be so framed that only the mutual Coulomb energies enter.

As regards (iii) – keeping to the example of black body radiation – the situation is this. If the radiation is quantized each radiation oscillator (proper mode) obtains the frequencies or levels nhν. This is sufficient to produce Planck’s formula for the radiation in a cavity surrounded by a huge heat bath. I mean to say, the level scheme suffices: it is not necessary to assume that each oscillator is at one of its levels, which is absurd from any point of view. The same holds for all thermodynamical equilibria. I have actually given a general proof of this in the last of my “Collected Papers” (English version: Blackie and Son, Glasgow 1928). A better presentation is added as an appendix to the forthcoming 2nd impression of “Statistical Thermodynamics” (Cambridge University Press).

Under (iv) we alluded to a vast range of phenomena purported to be conclusive evidence for the transfer of whole quanta. But I do not think they are, provided only that one holds on to the wave aspect throughout the whole process. One must, of course, give up thinking of e.g. an electron as of a tiny speck of something moving within the wave train along a mysterious unknowable path. One must regard the “observation of an electron” as an event that occurs within a train of de Broglie waves when a contraption is interposed in it which by its very nature cannot but answer by discrete responses: a photographic emulsion, a luminescent screen, a Geiger counter. And one must, to repeat this, hold on to the wave aspect throughout. This includes, that the equations between frequencies and frequency differences, expressing the resonance condition that governs wave mechanics throughout, must not be multiplied by Planck’s constant h and then interpreted as tiny energy balances of microscopic processes between tiny specks of something that have, to say the least, no permanent existence.

This situation calls for a revision of the current interpretation, which involves computing transition probabilities from level to level, and disregards the fact that the wave equation, with few exceptions if any, indicates nothing of the sort, but leads each of the reacting systems into a state composed of a wide spread of energy eigenstates. To assume that the system actually leaps into just one of them which is selected by “playing dice”, as it were, is not only gratuitous, but as was pointed out above, contradicts in most cases even the current interpretation. These inconsistencies will be avoided by returning to a wave theory that is not continually abrogated by dice-miracles; not of course to the naive wave theory of yore, but to a more sophisticated one, based on second quantization and the non-individuality of “particles”. Originating from contraptions that by their very nature cannot but give a discrete, discontinuous response, the probability aspect has unduly entered the fundamental concepts and has domineeringly dictated the basic structure of the present theory.

In giving it up we must no longer be afraid of losing time-honoured atomism. It has its counterpart in the level-scheme (of second quantization) and nowhere else. It may be trusted to give atomism its due, without being aided by dice-playing.

To point here to the general failure of the present theory to obtain finite transition probabilities and finite values of the apparent mass and charge, might seem a cheap argument and a dangerous one at that. The obvious retort would be: Can you do better, sir? Let me frankly avow that I cannot. Still I beg to plead that I am at the moment groping for my way almost single-handed, as against a host of clever people doing their best along the recognized lines of thought.

But let me still draw attention to a point that is seldom spoken of. I called the probability interpretation a scheme of admirable logical consistency. Indeed it gives us a set of minute prescriptions, not liable ever to be involved in contradiction, for computing the probability of a particular outcome of any intended measurement, given the wave function and the hermitian operator associated with that particular measuring device. But, of course, an abstract mathematical theory cannot possibly indicate the rules for this association between operators and measuring devices. To describe one of the latter is a long and circumstantial task for the experimentalist. Whether the device which he recommends really corresponds to the operator set up by the theorist, is not easy to decide. Yet this is of paramount importance. For a measuring appliance means now much more than it did before the advent of quantum mechanics and of its interpretation which I am opposing here. It has a physical influence on the object; it is deemed to press it infallibly into one of the eigenstates of the associated operator. If it fails to put it in an eigenstate belonging to the value resulting from the measurement, the  latter is quantum-mechanically not repeatable. I cannot help feeling that the precariousness of the said association makes that beautiful, logically consistent theoretical scheme rather void. At any rate its contact with actual laboratory work is very different from what one would expect from its fundamental enunciations.

A further discussion of the points raised in this paper can be found in a forthcoming longer (but equally non-mathematical) essay in the British Journal for the Philosophy of Science.

(Dublin Institute for Advanced Studies)

Time Crystal – A New Take on Perpetual Motion

Update: Here’s the link to Wilczek’s time crystal paper

Not a time crystal but perpetually moving at room temperature. (Illustration of Nitrogen-inversion).

It is a given that at room temperature there is plenty of chaotic, and truly perpetual, motion to be had.  And sometimes this motion takes on more organized forms, as is the case with nitrogen inversion.

Also, it is well established that unexpected movement can occur close to absolute zero, when, for instance, superfluid liquids climb up the walls of their containers.

In general, unperturbed quantum systems evolve in a unitary manner (i.e. a kind of motion) and will do so perpetually, until measured.

In the case of super-sized Rydberg atoms you can also approach an almost classical orbit (and that should hold at very low temperatures as well).  But to have sustained, detectable perpetual motion in the ground state of a system at absolute zero would be something qualitatively new.

That is what “Time Crystals” might be adding to the quantum cabinet of oddities.  The idea that led to this theoretical prediction, formulated by Frank Wilczek, is indeed quite clever:

“I was thinking about the classification of crystals, and then it just occurred to me that it’s natural to think about space and time together, (…) So if you think about crystals in space, it’s very natural also to think about the classification of crystalline behavior in time.”

It’ll be up to some creative experimentalist to determine if the resulting theory holds water.  If so, this may open up an interesting new avenue for tackling the frustrating problem of getting General Relativity (where space and time form a combined entity) and QM to play together.

If a Fighter Writes a Paper to go for the Kill …

You don’t want to take on this man in the ring:

And you don’t want to take on his namesake in the scientific realm.

In my last post I wrote about the Kish cipher protocol, and wondered about its potential to supplant Quantum Cryptography.

The very same day, as if custom-ordered, this fighter’s namesake, none other than Charles Bennett himself, published this pre-print paper (h/t Alessandro F.).

It is not kind to the Kish cipher protocol, and that’s putting it mildly.  To quote from the abstract (emphasis mine):

We point out that arguments for the security of Kish’s noise-based cryptographic protocol have relied on an unphysical no-wave limit, which if taken seriously would prevent any correlation from developing between the users. We introduce a noiseless version of the protocol, also having illusory security in the no-wave limit, to show that noise and thermodynamics play no essential role. Then we prove generally that classical electromagnetic protocols cannot establish a secret key between two parties separated by a spacetime region perfectly monitored by an eavesdropper. We note that the original protocol of Kish is vulnerable to passive time-correlation attacks even in the quasi-static limit.

Ouch.

The ref’s counting …

Quantum Cryptography Made Obsolete?

The background story.

Electrical engineering is often overshadowed by other STEM fields. Computer Science is cooler, and physics has the aura of the Faustian quest for the most fundamental truths science can uncover.  Yet this discipline has produced a quite remarkable bit of research with profound implications for Quantum Information Science.  It is not very well publicized. Maybe that is because it’s a bit embarrassing to the physicists and computer scientists who are heavily invested in Quantum Cryptography?

After all, the typical, one-two punch elevator-pitch for QIS is entirely undermined by it. To recap, the argument goes like this:

  1. Universal Quantum Computing will destroy all effective cryptography as we know it.
  2. Fear not, for Quantum Cryptography will come to your rescue.

Significant funds went into the latter.  And it’s not like there isn’t some significant progress, but what if all this effort proved futile because an equally strong encryption could be had with far more robust methods?  This is exactly what the Kish cipher protocol promises. It has been around for several years, and in a recent paper, Laszlo Bela Kish discusses several variations of his protocol, which he modestly calls the Kirchhoff-Law-Johnson-(like)-Noise (KLJN) secure key exchange – although otherwise it goes by his name in the literature. A 2012 paper that describes the principle behind it can be found here.  The abstract of the latter makes no bones about the challenge to Quantum Information Science:

It has been shown recently that the use of two pairs of resistors with enhanced Johnson-noise and a Kirchhoff-loop—i.e., a Kirchhoff-Law-Johnson-Noise (KLJN) protocol—for secure key distribution leads to information theoretic security levels superior to those of a quantum key distribution, including a natural immunity against a man-in-the-middle attack. This issue is becoming particularly timely because of the recent full cracks of practical quantum communicators, as shown in numerous peer-reviewed publications.

There are some commonalities between quantum cryptography and this alternative, inherently safe, protocol.  The obvious one is that they are both key exchange schemes; the more interesting one is that they both leverage fundamental physical properties of the systems they employ.  In one case, it is the specific quantum correlations of entangled qubits; in the other, the correlations in classical thermodynamic noise (i.e. the former picks out the specific quantum entanglement correlations of the system’s density matrix, the latter only requires the classical entries that remain after decoherence and tracing of the density matrix).

Since this protocol works in the classical regime, it shouldn’t come as a surprise that it is much easier to implement than a scheme that has to create and preserve an entangled state. The following schematic illustrates the underlying principle:

Core of the KLJN secure key exchange system. Alice encodes her message by connecting her two resistors to the wire in the required sequence. Bob, on the other hand, connects his resistors to the wire at random.

The recipient (Bob) connects his resistors to the wire at random, in predefined synchronicity with the sender (Alice).  The actual current and voltage on the wire are random, ideally pure Johnson noise. The resistors determine the characteristics of this noise; Bob can determine which resistor Alice used because he knows which one he connected, while the fluctuation-dissipation theorem ensures that wire-tapping by an attacker (Eve) is futile: the noise characteristics of the signal ensure that no information can be extracted from it.
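
To make the logic of the exchange a bit more tangible, here is a minimal toy simulation of the idea in Python. It is emphatically not a model of the actual analog electronics: the resistor values, temperature and bandwidth are arbitrary assumptions, and only the voltage noise level on the wire is considered, but it shows why the “mixed” rounds yield a bit that Bob can infer while Eve cannot.

```python
import numpy as np

# Toy sketch of the KLJN idea (illustrative only -- the real protocol is an
# analog circuit; the resistor values, temperature and bandwidth below are
# arbitrary assumptions, not taken from Kish's papers).

k_B = 1.380649e-23          # Boltzmann constant [J/K]
T, bandwidth = 300.0, 1e3   # assumed temperature [K] and bandwidth [Hz]
R_L, R_H = 1e3, 100e3       # assumed "low" and "high" resistors [Ohm]

def wire_noise_power(r_alice, r_bob):
    """Johnson-noise voltage variance seen on the wire; in this simplified
    picture it depends only on the parallel combination of the two resistors."""
    r_parallel = r_alice * r_bob / (r_alice + r_bob)
    return 4.0 * k_B * T * r_parallel * bandwidth

# The distinguishable noise levels an eavesdropper could observe:
p_mixed = wire_noise_power(R_L, R_H)   # one of each -- but which side has which?

rng = np.random.default_rng(1)
key_bits = []
for _ in range(32):
    alice = rng.choice([R_L, R_H])         # Alice's secret choice this round
    bob = rng.choice([R_L, R_H])           # Bob's independent random choice
    p_eve = wire_noise_power(alice, bob)   # all that Eve gets to measure
    if np.isclose(p_eve, p_mixed):
        # Eve sees the ambiguous level; Bob, knowing his own resistor, can
        # still infer Alice's choice and record a shared secret bit.
        key_bits.append(0 if alice == R_L else 1)
    # rounds where both parties chose the same resistor are simply discarded

print(f"{len(key_bits)} secure bits exchanged:", key_bits)
```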

Given that the amount of effort and funding that goes into Quantum Cryptography is substantial (some even mock it as a distraction from the ultimate prize, which is quantum computing), it seems to me that the fact that classical thermodynamic resources allow for similar inherent security should give one pause.  After all, this line of research may provide a much more robust approach to the next-generation, “Shor-safe”, post-quantum encryption infrastructure.

Nobel Laureates on the QM Interpretation Mess

Update:  Perusing the web I noticed that John Preskill [not yet a Nobel laureate 🙂 ] also blogged on the same survey.  Certainly another prominent voice to add to the mix.

~~~

In the LinkedIn discussion of my earlier blog post lamenting the disparate landscape of QM interpretations, I had Nobel laureate Gerard ‘t Hooft weighing in:

Don’t worry, there’s nothing rotten. The point is that we all agree about how to calculate something in qm. The interpretation question is something like: what language should one use to express what it is that is going on? The answer to the question has no effect on the calculations you would do with qm, and thus no effect on our predictions what the outcome of an experiment should be. The only thing is that the language might become important when we try to answer some of the questions that are still wide open: how do we quantize gravity? And: where does the Cosmological Constant come from? And a few more. It is conceivable that the answer(s) here might be easy to phrase in one language but difficult in another. Since no-one has answered these difficult questions, the issue about the proper language is still open.

His name certainly seemed familiar, yet owing to the very long hours I am currently working, it was not until now that I realized that it was that ‘t Hooft.  So I answered with this, in hindsight, slightly cocky response:

Beg to differ, the interpretations are not merely language, but try to answer what constitutes the measurement process. Or, with apologies to Ken, what “collapses the wave function”: the latter is obviously a physical process. There has been some yeoman’s work to better understand decoherence, but ultimately what I want to highlight is that this state of affairs, of competing QM interpretations, should be considered unsatisfactory. IMHO there should be an emphasis on trying to find ways to decide experimentally between them.

My point is we need another John Bell.  And I am happy to see papers like this that may allow us to rule out some many-worlds interpretations that rely on classical probabilities.

So why does this matter?  It is one thing to argue that there can be only one correct QM interpretation, and that it is important to identify that one in order to be able to develop a better intuition for the quantum realities (if such a thing is possible at all).

But I think there are wider implications, and so I want to quote yet another Nobel laureate, Julian Schwinger, to give testament to how this haunted us when the effective theory of quantum electrodynamics was first developed (preface to Selected Papers on Quantum Electrodynamics, 1956):

Thus also the starting point of the theory is the independent assignment of properties to the two fields, they can never be disengaged to give those properties immediate observational significance. It seems that we have reached the limits of the quantum theory of measurement, which asserts the possibility of instantaneous observations, without reference to specific agencies.  The localization of charge with indefinite precision requires for its realization a coupling with the electromagnetic field that can attain arbitrarily large magnitudes. The resulting appearance of divergences, and contradictions, serves to deny the basic measurement hypothesis.

John Bell never got one of these, because of his untimely death.

Something is Rotten in the State of Physics.

How else to explain that, almost a century after the most successful theory of modern physics was formulated, leading experts in the field still cannot agree on how to interpret it?

Exhibit (A): this bar chart from a survey taken at a quantum foundations meeting.  It has been called the most embarrassing graph of modern physics (and rightly so).

(Bar chart: survey responses on the respondents’ preferred interpretation of quantum mechanics.)

Unsurprisingly, my favorite interpretation of QM, Ulrich Mohrhoff’s Pondicherry Interpretation, is such a dark horse candidate it did not even make the list.

In keeping with this main confusion, the views on the role of the observer are also all over the map:

(Bar chart: survey responses on the role of the observer in quantum mechanics.)

The majority settles on a statement that, no matter how I try to parse it, doesn’t make any sense to me: if our formalism describes nature correctly, and the observer plays a fundamental role in the latter, how is the observer supposed not to occupy a distinguished physical role? The cognitive dissonance required to take this stance is dizzying. At least the quantum hippie choice of option (d) has some internal consistency.

So it shouldn’t come as a surprise that, with regard to quantum computing, these experts are as ignorant as the public at large and completely ignore that D-Wave is already shipping a quantum computer (had the question been about a universal quantum computer, these results would have been easier to tolerate).  Invited to opine on the availability of the first working and useful quantum computer, this was the verdict:

(Bar chart: survey responses on when a useful quantum computer will become available.)

The paper contains another graph that could almost pass as a work of art; it visualizes the medium-to-strong correlations between the survey answers.  To me it is the perfect illustration of the current State of Physics with regard to the interpretation of quantum mechanics:

It is a mess.

Given this state of affairs it’s small wonder that one of my heroes, Carver Mead, recently described the QM revolution that started early in the last century as an aborted one. It is indeed time to kick-start it again.

The Wave Particle Duality – A Deadly Divide

A particle and its associated wave function.

The curious fact that matter can exhibit wave-like properties (or should this rather be waves acting like particles?) is now referred to as the wave-particle duality.  In old times it was often believed that there was some magic in giving something a name, and that it would take away some of the christened’s power. Here’s hoping that there is some truth to this, as this obvious incompatibility has claimed at least one prominent life.

It was Einstein who first made this two-faced character of matter explicit when publishing on the photoelectric effect, assigning particle-like characteristics to light, which up to that point had been firmly understood to be an electromagnetic wave phenomenon.

But just like the question of the true nature of reality, the source of this dichotomy is almost as old as science itself, and arguably already inherent in the very idea of atomism as the other extreme of an all-encompassing holism. The latter is often regarded as the philosophical consequence of Schrödinger’s wave mechanics, since a wave phenomenon has no sharp and clear boundaries, and in this sense is often presented as connecting the entirety of the material world. Taken to the extreme, this holistic view finds its perfect expression in Everett’s universal wavefunction (an interpretation happily embraced by Quantum Hippies of all ages), which gave rise to the now quite popular many-worlds interpretation of quantum mechanics.

While atomism proved to be extremely fruitful in the development of physics, it was never popular with religious authorities.  You can find echoes of this to this day if you look up this term at the Catholic Encyclopaedia:

Scholastic philosophy finds nothing in the scientific theory of atomism which it cannot harmonize with its principles, though it must reject the mechanical explanation, often proposed in the name of science, …

Or at this site of religious physicists:

Atomism is incompatible with Judeo-Christian principles because atomism views matter as independent of God, …

Religion, of course, really doesn’t have a choice in the matter, as it can hardly maintain doctrine without some holistic principle.  It is no coincidence that physics only progressed after the cultural revolution of the Renaissance loosened the church’s dominance over the sheeple’s minds. But history never moves in a straight line.  For instance, with Romanticism the pendulum swung back with a vengeance. It was at the height of this period that Ludwig Boltzmann achieved the greatest scientific breakthrough of atomism by developing statistical mechanics as the proper foundation of thermodynamics. It was not received well. With James Clerk Maxwell having seemingly established a holistic ether that explained all radiation as a wave phenomenon, atomism had thoroughly fallen out of favour.  Boltzmann vigorously defended his work and was no stranger to polemic exchanges to make his point, yet he was beset by clinical depression and feared in the end that his life’s work was for naught. He committed suicide while on a summer retreat that was supposed to help his ailing health.

He must have missed the significance of Einstein’s publication on Brownian motion just a year earlier.  It is the least famous of his Annus Mirabilis papers, but it laid the foundation for experimentalists to once and for all settle the debate in Boltzmann’s favor, just a few years after his tragic death.

Thermodynamics made no sense to me before I learned statistical mechanics, and it is fitting that his most elegant equation for the entropy of a system, S = k log W, graces the memorial at his grave site (with k denoting the Boltzmann constant).

A physicist can't ask for more to be remembered by than his most fundamental equation.
Ludwig Boltzmann Tombstone in Vienna.

Peak Copper – No More

An LED lamp suspended from the hair-thin nanotube wires that also supply it with power.

This has been reported all over the Web, but it is just too good to pass up, especially since the year in the Quantum Computing world started on a somewhat more contentious note (more about this in the next blog post).

This news item, on the other hand, deserves to be the first of the new year and is entirely positive: I was expecting carbon nanotubes to eventually become the material of choice for electrical wiring, but I didn’t expect it to happen this soon.  The video embedded below makes a compelling case that a research team at Rice University not only managed to produce wires superior to any metal wire, but at the same time developed a production process that can be readily scaled up.

 

Copper price development over the last ten years.

Being able to produce these kinds of wires at a competitive price will go a long way toward ameliorating one of humanity’s key resource problems: peak copper (a term coined after the more publicized peak oil prognosis). And this resource constraint is anything but theoretical.  Copper prices have risen so much over the last ten years that copper theft has become a serious global problem, one that often endangers lives when critical infrastructure is destroyed.

These new carbon nanotube wires have the potential to substitute for copper wiring in cars, airplanes, and microchips, as well as residential wiring, to name just a few applications.  If the wires are as good as they are made to look in the video, they will be superior to copper wires to such an extent that adoption will simply be a matter of price.

This is the kind of science news I like to hear at the beginning of a new year.

Big Bad Quantum Computer Revisited

A recent DefenseNews article again put Shor’s algorithm front and center when writing about Quantum Computing.  Yet there is so much more to this field, and with Lockheed Martin working with D-Wave one would expect this to be an open secret in the defence sector.  At any rate, this is a good enough reason to finally publish the full text of my quantum computing article, which the Hakin9 magazine asked me to write for their special issue earlier this year:

Who’s afraid of the big bad Quantum Computer?

“Be afraid; be very afraid, as the next fundamental transition in computing technology will obliterate all your encryption protection.”

If there is any awareness of quantum computing in the wider IT community, then odds are it is this phobia that is driving it.  Peter Shor probably didn’t realize that he was about to pigeonhole an entire research field when he published his work on what is now the best-known quantum algorithm. But once the news spread that he had uncovered a method that could potentially crack RSA encryption, the fear factor carried it far and wide. Undoubtedly, if it weren’t for the press coverage this news received, quantum information technology research would still be widely considered just another academic curiosity.

So how realistic is this fear, and is breaking codes the only thing a quantum computer is good for?  This article is an attempt to separate fact from fiction.

First let’s review how key exchange protocols that underlie most modern public key encryption schemes accomplish their task. A good analogy that illustrates the key attribute that quantum computing jeopardizes is shown in the following diagram (image courtesy of Wikipedia):

Diffie-Hellman key exchange, illustrated with colors.

Let’s assume we want to establish a common secret color shared by two individuals, Alice and Bob – in this example this may not be a primary color but one that can be produced as a mix of three other ones. The scheme assumes that there exists a common first paint component that our odd couple already agreed on. The next component is a secret private color. This color is not shared with anybody. What happens next is the stroke of genius, the secret sauce that makes public key exchange possible. In our toy example it corresponds to the mixing of the secret, private color with the public one. As everybody probably learned as early as kindergarten, it’s easy to mix colors, but not so easy – try practically impossible – to unmix them. From a physics standpoint the underlying issue is that entropy massively increases when the colors are mixed. Nature drives the process towards the mixed state, but makes it very costly to reverse the mixing. Hence, in thermodynamics, these processes are called “irreversible”.

This descent into the physics of our toy example may seem a rather pointless digression, but we will see later that in the context of quantum information processing, this will actually become very relevant.

But first let’s get back to Alice and Bob.  They can now publicly exchange their mixed color, safe in the knowledge that there are myriad ways to arrive at this particular shade of paint, and that nobody has much of a chance of guessing their particular components.

Since in the world of this example nobody has any concept of chromatics, even if a potential eavesdropper were to discover the common color, they’d still be unable to discern the secret ones, as they cannot unmix the publicly exchanged color shades.

In the final step, Alice and Bob recover a common color by adding their private secret component. This is a secret that they now share to the exclusion of everybody else.

So how does this relate to the actual public key exchange protocol?  We get there by substituting the colors with numbers, say x and y for the common color and Alice’s secret color. The mixing of colors corresponds to a mathematical function G(x,y).  Usually the private secret numbers are picked from the set of prime numbers and the function G is simply a multiplication, exploiting the fact that integer factorization of large numbers is a very costly process. The next diagram depicts the exact same process, just mapped onto numbers in this way.

(Diagram: the same key exchange, with the colors replaced by numbers and the mixing function G.)

If this simple method is used with sufficiently large prime numbers then the Shared Key is indeed quite safe, as there is no known efficient classical algorithm that allows for a reasonably fast integer factorization.  Of course “reasonably fast” is a very fuzzy term, so let’s be a bit more specific: there is no known classical algorithm that scales polynomially with the size of the integer. So, for instance, an effort to factor a 232-digit number (RSA-768) that concluded in 2009 took the combined CPU power of hundreds of machines (Intel Core2 equivalents) over two years to accomplish.
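
To get a feel for the asymmetry this scheme relies on, here is a small Python sketch with made-up, far-too-small primes: multiplying the two secrets is instantaneous, while recovering them by naive trial division already takes noticeable effort, and the gap widens dramatically with the size of the numbers.

```python
import time

# Toy illustration of the asymmetry the article relies on: multiplying two
# primes is trivial, recovering them from the product is not. The primes are
# tiny compared to real RSA moduli and the factoring method is naive trial
# division, so this only shows the direction of the asymmetry.

p, q = 1_000_003, 1_000_033        # two (small) primes, our "secret colors"
n = p * q                          # the public "mixed color" -- instantaneous

def trial_division(n):
    """Naive factorization by trial division -- cost grows with sqrt(n)."""
    if n % 2 == 0:
        return 2, n // 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return n, 1

start = time.perf_counter()
factors = trial_division(n)
print(f"n = {n}, recovered factors {factors} "
      f"in {time.perf_counter() - start:.3f} s")
# Doubling the number of digits of p and q squares the work for trial
# division; the general number field sieve grows more gently but still
# super-polynomially, which keeps RSA-sized moduli out of classical reach.
```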

And this is where the quantum computing bogeyman comes into the picture, and with him the aforementioned Peter Shor.  This affable MIT researcher formulated a quantum algorithm almost twenty years ago that can factorize integers in polynomial time on the, as yet elusive, quantum hardware. So what difference would that actually make?  The following graph puts this into perspective:

Scaling of Shor’s algorithm, roughly $z^3$ (red curve), versus the best known classical approach, the general number field sieve, which scales roughly as $\exp\!\left(\sqrt[3]{\tfrac{64}{9}\, z\, (\ln z)^2}\right)$ (green curve), where z stands for the logarithmic size of the integer to be factored. The classical curve appears almost vertical on this scale because the necessary steps (y-axis) grow explosively with the size of the integer. Shor’s algorithm, in comparison, shows a fairly well-behaved slope with increasing integer sizes, making it theoretically a practical method for factoring large numbers.
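
For readers who want to play with these growth laws, the following Python snippet tabulates the two curves for a few digit counts. The constant prefactors are ignored, so the absolute numbers are meaningless; only the relative growth matters.

```python
import math

# Rough illustration of the two growth laws from the figure caption: the
# polynomial cost of Shor's algorithm (~z^3) versus the sub-exponential cost
# of the general number field sieve. Constant prefactors are ignored.

def shor_steps(z):
    return float(z) ** 3

def gnfs_steps(z):
    ln_n = z * math.log(10)              # z decimal digits -> ln(N) = z*ln(10)
    return math.exp((64 / 9 * ln_n * math.log(ln_n) ** 2) ** (1 / 3))

for z in (50, 100, 232, 500, 1000):
    print(f"{z:5d} digits:  Shor ~ {shor_steps(z):.1e}   "
          f"classical ~ {gnfs_steps(z):.1e}")
```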

And that is why common encryption schemes such as RSA could not protect against a deciphering attack if a suitable quantum computer were to be utilized.  So now that commercial quantum computing devices such as the D-Wave One are on the market, where does that leave our cryptographic security?

First off: not all quantum computers are created equal. There are universal gate-based ones, which are theoretically probably the best understood, and a textbook on the matter will usually start introducing the subject from this vantage point.  But then there are also quantum simulators, topological designs and adiabatic ones (I will forgo quantum cellular automata in this article).  The only commercially available machine, i.e. D-Wave’s One, belongs to the latter category but is not a universal machine, in that it cannot simulate any arbitrary Hamiltonian (this term describes the energy function that governs a quantum system).  Essentially this machine is a super-fast and accurate solver for only one class of equations.  This kind of equation was first written down to describe solid-state magnets, in what is now called the Ising model.
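
For the curious, this is roughly what “one class of equations” means in practice: finding the spin configuration that minimizes an Ising energy. The toy Python sketch below solves a four-spin instance by classical brute force; the couplings and fields are made-up numbers, and the brute-force loop merely stands in for what the annealing hardware does physically.

```python
import itertools

# Minimal sketch of the problem class an Ising/annealing machine solves:
# find the spins minimizing E(s) = sum_ij J_ij*s_i*s_j + sum_i h_i*s_i.
# J and h below are made-up numbers for a tiny 4-spin toy instance.

J = {(0, 1): 1.0, (1, 2): -0.5, (2, 3): 1.5, (0, 3): -1.0}   # assumed couplings
h = [0.1, -0.2, 0.0, 0.3]                                    # assumed local fields

def energy(spins):
    coupling = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    field = sum(hi * si for hi, si in zip(h, spins))
    return coupling + field

# Exhaustive search over all 2^4 spin configurations (the "classical annealer").
ground_state = min(itertools.product([-1, +1], repeat=len(h)), key=energy)
print("ground state:", ground_state, "energy:", energy(ground_state))
```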

But fear not: the D-Wave machine is not suitable for Shor’s algorithm.  The latter requires a gate-programmable device (or a universal adiabatic machine) that provides plenty of qbits.  The D-Wave One falls short on both counts.  It has a special-purpose adiabatic quantum chip with 128 qbits. Even if the architecture were compatible with Shor’s algorithm, the number of qbits falls far short: the algorithm needs a register that grows with the size of the integer to be factored, at the very least about twice its length in bits.  Since the integers we are interested in are pretty large, this is far outside anything that can be realized at this point.  For the RSA-768 challenge mentioned earlier, for instance, well over a thousand logical qbits would be required.

So you may wonder, what good is this vanguard of the coming quantum revolution if it can’t even handle the most famous quantum algorithm? To answer this, let’s step back and look at what motivated the research into quantum computing to begin with. It wasn’t the hunt for new, more powerful algorithms but rather the insight, first formulated by Richard Feynman, that quantum mechanical systems cannot be efficiently simulated on classical hardware.  This is, of course, a serious impediment, as our entire science-driven civilization depends on exploiting quantum mechanical effects.  I am not even referring to the obvious culprits such as semiconductor-based electronics, laser technology etc., but to the more mundane chemical industry.  Everybody will probably recall the Styrofoam models of orbitals and simple molecules such as benzene (C6H6):

Different representations of the benzene molecule.

As the graphic illustrates, we know that sp2 orbitals facilitate the bonding with the hydrogen atoms, and that there is a delocalized π electron cloud formed from the overlapping pz orbitals.  Yet these insights are inferred (and now, thanks to scanning probe microscopy, also measured); they do not flow from an exact solution of the corresponding Schrödinger equations that govern the physics of these kinds of molecules.

Granted, multi-body problems don’t have an exact solution in the classical realm either, but the corresponding equations are well behaved when it comes to numerical simulations.  The Schrödinger equation that rules quantum mechanical systems, on the other hand, is not. Simple scenarios are still within reach for classical computing, but not so for larger molecules (i.e. the kind that biological processes typically employ). Things get even worse when one wants to go further and model electrodynamics at the quantum level.  Quantum field theories require a summation over an infinite regime of interaction paths – something that will quickly bring any classical computer to its knees. Not a quantum computer, though.

This summer a paper was published (Science June 1st issue) that showed conclusively that for this new breed of machine a polynomial scaling of these notorious calculations is indeed possible. (As for String theory simulations, the jury is still out on that – but it has been suggested that maybe it should be considered as an indication of an unphysical theory if a particular flavor of a String theory cannot be efficiently simulated on a quantum computer).

Quantum Computing has, therefore, the potential to usher in a new era for chemical and nano-scale engineering, putting an end to the still common practice of having to blindly test thousands of substances for pharmaceutical purposes, and finally realizing the vision of designing smart drugs that specifically match targeted receptor proteins.

Of course, even if you can model protein structures, you still need to know which isomer is actually the biologically relevant one. Fortunately, a new technology deploying electron holography is expected to unlock a cornucopia of protein structure data.  But this data will remain stale if you cannot understand how these proteins can fold. The latter is going to be key for understanding the function of a protein within the living organism.

Unfortunately, simulating protein folding has been shown to be an NP-hard problem. Quantum computing is once again coming to the rescue, allowing for a polynomial speed-up of these kinds of calculations.  It is not an exaggeration to expect that in the not-too-distant future lifesaving drug development will be facilitated this way. And the first papers using D-Wave’s machine in this manner have already been published.

This is just one tiny sliver of the fields that quantum computing will impact.  Just as with the unexpected applications that ever-increasing conventional computing power enabled, it is safe to say that we, in all likelihood, cannot fully anticipate how this technological revolution will impact our lives.  But we can certainly identify some more areas that will immediately benefit from it: artificial intelligence, graph theory, operational research (and its business applications), database design etc. One could easily fill another article on each of these topics while only scratching the surface, so the following observations have to be understood as extremely compressed.

It shouldn’t come as a surprise that quantum computing will prove fruitful for artificial intelligence.  After all, one other major strand that arguably ignited the entire research field was contemplation of the nature of the human mind.  The prominent mathematician and physicist Roger Penrose, for instance, argued vehemently that the human mind cannot be understood as a classical computer, i.e. he is convinced (almost religious in his certainty) that a Turing machine in principle cannot emulate a human mind.  Since it is not very practical to try to put a human brain into a state of controlled quantum superposition, the next best thing is to think this through for a computer.  This is exactly the kind of thought experiment that David Deutsch discussed in his landmark paper on the topic.  (It was also the first time that a quantum algorithm was introduced, albeit not a very useful one, demonstrating that the imagined machine can do some things better than a classical Turing machine.)

So it is only fitting that one of the first demonstrations of D-Wave’s technology concerned the training of an artificial neural net.  This particular application maps nicely onto the structure of their system, as the training is mathematically already expressed as the search for a global minimum of an energy function that depends on several free parameters.  To the extent that an optimization problem can be recast in this way, it becomes a potential candidate to benefit from D-Wave’s quantum computer.  There are many applicable use cases for this in operational research (i.e. logistics, supply chain etc.) and business intelligence.
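
As a toy example of such a recasting, the sketch below maps a small number-partitioning problem (split a list of numbers into two groups of nearly equal sum) onto the same kind of energy minimization used in the Ising sketch above: the squared imbalance expands into an Ising energy with couplings Jij = ai·aj. The input numbers are arbitrary, and brute force again stands in for the annealer.

```python
import itertools

# Recasting an optimization problem as an energy minimum: assign a spin
# s_i = +/-1 to each number; the squared imbalance (sum_i a_i*s_i)^2 is an
# Ising energy with couplings J_ij = a_i*a_j, so its ground state encodes
# the most balanced partition. The input list is arbitrary.

numbers = [4, 7, 9, 1, 3, 5, 8]

def imbalance_energy(spins):
    return sum(a * s for a, s in zip(numbers, spins)) ** 2

best = min(itertools.product([-1, +1], repeat=len(numbers)), key=imbalance_energy)
group_a = [a for a, s in zip(numbers, best) if s == +1]
group_b = [a for a, s in zip(numbers, best) if s == -1]
print(group_a, "vs", group_b, "-> imbalance:", abs(sum(group_a) - sum(group_b)))
```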

While this is all very exciting, a skeptic will rightfully point out that just knowing a certain tool can help with a task does not tell us how well it will stack up to conventional methods.  Given the price tag of $10 million, it had better be good.  There are unfortunately not a lot of benchmarks available, but a brute force search method implemented to find some obscure numbers from graph theory (Ramsey numbers) gives an indication that this machine can substitute for some considerable conventional computing horsepower i.e. about 55 MIPS, or the equivalent of a cluster of more than 300 of Intel’s fastest commercially available chips.

Another fascinating aspect that will factor into the all-important TCO (Total Cost of Ownership) considerations is that a quantum computer will actually require far less energy to achieve this kind of performance (its energy consumption will also vary only minimally under load).  Earlier I described a particular architecture as adiabatic, and it is this term that describes this counterintuitive energy characteristic.  It is a word that originated in thermodynamics and describes a process that progresses without heat exchange, i.e. throughout most of the QC processing there is no heat-producing entropy increase.  At first glance, the huge cooling apparatus that accompanies a quantum computer seems to belie this assertion, but this considerable cooling technology is not needed to continuously protect the machine from overheating (as in conventional data centers); rather, most QC implementations require an environment that is considerably colder than even the coldest temperature that can be found anywhere in space (the surface of Pluto would be outright balmy in comparison).

Amazingly, these days commercially available helium cooling systems can readily achieve these temperatures close to absolute zero.  Once cooled down, the entire remaining cooling effort only counteracts the thermal inflow that even the best high-vacuum-insulated environments experience.  The quantum system itself dissipates only a minimal amount of heat when the final result of an algorithm is read out. That is why the system pulls just 15 kW in total.  This is considerably less than what our hypothetical 300-CPU cluster would consume under load, i.e. well over 100 kW across the cluster, several times D-Wave’s power consumption.  And the best part: the cooling system, and hence the power consumption, will remain the same for each new iteration of chips, while the chips themselves (D-Wave recently introduced their new 512-qbit Vesuvius chip) have so far steadily followed D-Wave’s own version of Moore’s law, doubling integration about every 15 months.

So although D-Wave’s currently available quantum computing technology cannot implement Shor’s algorithm, or the second most famous one, Grover’s search over an unstructured list, the capabilities it delivers are nothing to scoff at. With heavyweights like IBM pouring considerable R&D resources into this technology, fully universal quantum processors will hit the market much earlier than most IT analysts (such as Gartner) currently project. Recently IBM demoed a 4 qbit universal chip (interestingly using the same superconducting foundry approach as D-Wave). If they also were to manage a doubling of their integration density every 18 months then we’d be looking at 256 qbit chips within three years.

While at this point current RSA implementations are not in jeopardy, this key exchange protocol is slowly reaching the end of its life cycle.  So how best to mitigate against future quantum computing attacks on the key exchange?  The most straightforward approach is simply to use a different “color-mixing” function than integer multiplication, i.e. a function that even a quantum computer cannot unravel within a polynomial time frame. This is an active field of research, but so far no consensus on a suitable post-quantum key exchange function has emerged. At least it is well established that most current symmetric crypto (ciphers and hash functions) can be considered secure from the looming threat.

As to key exchange, the ultimate solution can also be provided by quantum mechanics in the form of quantum cryptography, which in principle allows a key to be transferred in such a manner that any eavesdropping will be detectable. To prove that this technology can be scaled for global intercontinental communication, the current record holder for the longest distance of quantum teleportation, the Chinese physicist Juan Yin, plans to repeat this feat in space, opening up the prospect of ultra-secure communication around the world.  Welcome to the future.