Monday morning. Coffee table updated, presentation for the DPG conference ready. My room is empty. I take a minute to wonder what the next step is, and in the meantime start organizing the articles and notes on my desk. That this leads to a full-scale wondering on the inner mechanisms of science should not come as a surprise. The branching possibilities are infinite.

As of late, many of our lunchtime discussions have revolved around Science 2.0, with a particular focus on questions posed by Michael Nielsen on his blog. How much knowledge goes unnoticed or must be (constantly) reinvented or rediscovered? How many thousands of collective heads must bang against the walls throughout Academialand before one solution finally becomes feasible? Is this really the optimal solution?

I often question how progress is effectively made. Progress, here, taken as 'the potential for the collective to effectively apply the resulting knowledge'. Maybe this isn't clear enough, so I'll try again. How does research - and in this particular case, say, quite abstract results in a highly specialized mathematical formalism - end up providing something to a broader class of scientists, and eventually to engineers, so that they might at some point give back to society as a whole?

(Perhaps not all progress should lead to technological applications; possible implications for philosophy should be included, for instance, but this is a ramble I'll leave for another occasion.)

In the case of Quantum Computation, we have half a dozen possible physical implementations, ranging from superconducting charge or flux qubits to nuclear magnetic resonance to trapped ions to quantum optics - which currently pays my bills... Each has its pros and cons: while a given unitary transformation might be trivial in one implementation, the required interaction might be unavailable in another; equally, some carriers are more robust against decoherence but possibly less scalable, and so on. Even within the optics paradigm, there are still three or four main architectures, and it's not entirely clear which one (if any) will ever be considered a clear winner.

So. Under a thread of 'Quantum Error Correction' - i.e., protection against decoherence - I'm investigating schemes on both discrete (|0> and |1>) and continuous (|x> and |p>) variables (and eventually hybrid configurations), both in traditional circuit models (i.e., sequences of gates) and in cluster-state representations (where measurements implement the transformations); a more daring possibility would include investigating toric, surface and other topological codes, which give access to a whole new formalism with intrinsic error correction (for those interested, references will be posted as a comment).
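To make the discrete-variable case a bit more concrete, here is a minimal sketch of the simplest error-correcting code of that family, the three-qubit bit-flip repetition code, simulated with plain numpy (no quantum framework assumed; all function names here are my own invention for illustration). A logical qubit a|0> + b|1> is encoded as a|000> + b|111>, a single bit-flip error is applied, and the error syndrome - the eigenvalues of the stabilizers Z1Z2 and Z2Z3 - tells us which qubit to flip back:

```python
import numpy as np

# Single-qubit operators
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product of a list of operators."""
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def encode(a, b):
    """Encode a|0> + b|1> into the code space a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = a
    state[0b111] = b
    return state

# Stabilizers of the bit-flip code
S1 = kron(Z, Z, I)   # Z1 Z2
S2 = kron(I, Z, Z)   # Z2 Z3

def syndrome(state):
    """Eigenvalues of the stabilizers; the state is an eigenvector
    of both whenever at most one bit-flip error occurred."""
    s1 = round(np.real(np.vdot(state, S1 @ state)))
    s2 = round(np.real(np.vdot(state, S2 @ state)))
    return s1, s2

# Each syndrome points to a unique single-qubit correction
CORRECTIONS = {
    (+1, +1): kron(I, I, I),   # no error
    (-1, +1): kron(X, I, I),   # flip qubit 1 back
    (-1, -1): kron(I, X, I),   # flip qubit 2 back
    (+1, -1): kron(I, I, X),   # flip qubit 3 back
}

# Demo: encode, corrupt the middle qubit, diagnose, recover
psi = encode(0.6, 0.8)
corrupted = kron(I, X, I) @ psi          # bit flip on qubit 2
s = syndrome(corrupted)                  # (-1, -1)
recovered = CORRECTIONS[s] @ corrupted
print(np.allclose(recovered, psi))       # True
```

The key point, and the reason this works in the genuinely quantum case too, is that the syndrome measurement reveals only *where* the error sits, never the amplitudes a and b, so the encoded superposition survives the diagnosis intact.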

Which brings us to yet another central question - a long-known Shakespearean dilemma: to provide a solution through brute force and hope that scalability will eventually allow for such an implementation, or to keep searching for a simple and elegant way employing some phenomenon or resource that has so far gone unnoticed? After reading dozens of Nature, Science and PRL articles, it is clear by now that not all of them bring truly novel ideas and breakthrough results - some are simply the fruit of working a scheme to exhaustion and reporting back on the findings. And yet they provide some of the most fundamental building blocks from which one is then able to step forward.

I still believe the right inspiration leading to a brilliant idea can change a whole field, but it's fruitless to sit staring at the wall waiting for such a spark to come - with which I wrap up this post, citing Bernard Lonergan: "It is not by sinking into some inert passivity but by positive effort and rigorous training that a man becomes a master of the difficult art of scientific observation". Off to another espresso and then back to the readings...


## 3 comments:

Discrete Variables: P. Kok et al, "Linear optical quantum computing", Rev. Mod. Phys. 79, 135 (2007), quant-ph/0512071

Continuous Variables: S. Braunstein et al, "Quantum information with continuous variables", Rev. Mod. Phys. 77, 513 (2005), quant-ph/0410100

Circuit Model: E. Knill, R. Laflamme & G. Milburn (KLM), "Efficient Scheme for Quantum Computation with Linear Optics", Nature 409, 46 (2001), quant-ph/0006088

Cluster Model: M. Nielsen, "Optical quantum computation using cluster states", Phys. Rev. Lett. 93, 040503 (2004), quant-ph/0402005

Topological Model: A. Kitaev, "Fault-tolerant quantum computation by anyons", Ann. Phys. 303, 2 (2003), quant-ph/9707021

Nice quote. Staring at the wall is nice for retaking a thought, though, and seeing if the brick wall that's standing in your way crumbles. RB

This is too tough a subject to be fully exploited in a mere comment. It has to be talked out loud. =D
