Archive for the ‘papers’ category

Refuting nonlocal realism?

2nd May 2007

Posting has been light of late. I would like to say this is due to the same sort of absorption that JoAnne has described over at Cosmic Variance, but in fact my attention span is currently too short for that and it has more to do with my attempts to work on three projects simultaneously. In any case, a report of an experiment on quantum foundations in Nature cannot possibly go ignored for too long on this blog. See here for the arXiv eprint.

What Gröblacher et al. report on is an experiment showing violations of an inequality proposed by Leggett, aimed at ruling out a class of nonlocal hidden-variable theories, whilst simultaneously violating the CHSH inequality, so that local hidden-variable theories are also ruled out in the same experiment. This is of course subject to the usual caveats that apply to Bell experiments, but let’s grant the correctness of the analysis for now and take a look at the class of nonlocal hidden-variable theories that are ruled out.

It is well-known that Bell’s assumption of locality can be factored out into two conditions.

  • Outcome independence: given the hidden variables and both detector settings, the probability of the outcome at site A does not depend on the outcome obtained at site B.
  • Parameter independence: given the hidden variables, the probability of the outcome at site A does not depend on the choice of detector setting at site B.

Leggett has proposed to consider theories that maintain the assumption of outcome independence, but drop the assumption of parameter independence. It is worth remarking at this point that the attribution of fundamental importance to this factorization of the locality assumption can easily be criticized. Whilst it is usual to describe the outcome at each site by ±1, this is an oversimplification. For example, if we are doing Stern-Gerlach measurements on electron spins then the actual outcome is a deflection of the path of the electron either up or down with respect to the orientation of the magnet. Thus, the outcome cannot be so easily separated from the orientation of the detector, as its full description depends on that orientation.

Nevertheless, whatever one makes of the factorization, it is the case that one can construct toy models that reproduce the quantum predictions in Bell experiments by dropping parameter independence.  Therefore, it is worth considering what other reasonable constraints we can impose on theories when this assumption is dropped.  Leggett’s assumption amounts to assuming that the hidden variable states in the theory can be divided into subensembles, in each of which the two photons have a definite polarization (which may however depend on the distant detector setting).  The total ensemble corresponding to a quantum state is then a statistical average over such states.  This is the class of theories that has been ruled out by the experiment.
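
As a quick illustration of the CHSH side mentioned above, here is a minimal numerical sketch in Python with numpy. It is my own toy example rather than the actual analysis of Gröblacher et al.: the state, observables and angles are illustrative choices only. It checks that the quantum predictions for a maximally entangled polarization state reach about 2.83 at the standard settings, comfortably above the local-hidden-variable bound of 2.

    import numpy as np

    # Maximally entangled polarization state (|HH> + |VV>)/sqrt(2), an illustrative choice.
    phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

    def polarization_obs(theta):
        """+1/-1 observable for linear polarization analysis at angle theta (radians)."""
        c, s = np.cos(2 * theta), np.sin(2 * theta)
        return np.array([[c, s], [s, -c]])

    def E(a, b):
        """Correlation <A(a) x B(b)> in the state phi_plus."""
        op = np.kron(polarization_obs(a), polarization_obs(b))
        return float(phi_plus @ op @ phi_plus)

    # Standard CHSH settings: a, a' on one side; b, b' on the other.
    a, a2 = 0.0, np.pi / 4
    b, b2 = np.pi / 8, 3 * np.pi / 8

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(S)   # ~2.828 > 2, so no local hidden-variable model reproduces these correlations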

This is all well and good, and I am certainly in favor of any experiment that places constraints on the space of possible interpretations of quantum theory.  However, the experiment has been sold in some quarters as a “refutation of nonlocal realism”, so we should consider the extent to which this is true.  The first point to make is that there are perfectly good nonlocal realistic models, in the sense of reproducing the predictions of quantum theory, that do not satisfy Leggett’s assumptions – the prime example being Bohmian mechanics.  In the Bohm theory photons do not have a well-defined value of polarization, but instead it is determined nonlocally via the quantum potential.   Therefore, if we regard this as a reasonable theory then no experiment that simply confirms the predictions of quantum theory can be said to rule out nonlocal realism.

What can decoherence do for us?

24th January 2007

OK, so it’s time for the promised post about decoherence, but where to begin? Decoherence theory is now a vast subject with an enormous literature covering a wide variety of physical systems and scenarios. I will not deal with everything here, but just make some comments on how the theory looks from my point of view about the foundations of quantum theory. Alexei Grinbaum pointed me to a review article by Maximilian Schlosshauer on the role of decoherence in solving the measurement problem and in interpretations of quantum theory. That’s a good entry into the literature for people who want to know more.

OK, let me start by defining two problems that I take to be at the heart of understanding quantum theory:

1) The Emergence of Classicality: Our most fundamental theories of the world are quantum mechanical, but the world appears classical to us at the everyday level. Explain why we do not find ourselves making mistakes in using classical theories to make predictions about the everyday world of experience. By this I mean not only classical dynamics, but also classical probability theory, information theory, computer science, etc.

2) The ontology problem: The mathematical formalism of quantum theory provides an algorithm for computing the probabilities of outcomes of measurements made in experiments. Explain what things exist in reality and what laws they obey in such a way as to account for the correctness of the predictions of the theory.

I take these to be the fundamental challenges of understanding quantum mechanics. You will note that I did not mention the measurement problem, Schroedinger’s cat, or the other conventional ways of expressing the foundational challenges of quantum theory. This is because, as I have argued before, these problems are not interpretation neutral. Instead, one begins with something like the orthodox interpretation and shows that unitary evolution and the measurement postulates are in apparent conflict within that interpretation depending on whether we choose to view the measuring apparatus as a physical system obeying quantum theory or to leave it unanalysed. The problems with this are twofold:

i) It is not the case that we cannot solve the measurement problem. Several solutions exist, such as the account given by Bohmian mechanics, that of Everett/many-worlds, etc. The fact that there is more than one solution, and that none of them have been found to be universally compelling, indicates that it is not solving the measurement problem per se that is the issue. You could say that it is solving the measurement problem in a compelling way that is the issue, but I would say it is better to formulate the problem in such a way that it is obvious how it applies to each of the different interpretations.

ii) The standard way of describing the problems essentially assumes that the quantum state-vector corresponds more or less directly to whatever exists in reality, and that it is in fact all that exists in reality. This is an assumption of the orthodox interpretation, so we are talking about a problem with the standard interpretation and not with quantum theory itself. Assuming the reality of the state-vector simply begs the question. What if it does not correspond to an element of reality, but is just an epistemic object with a status akin to a probability distribution in classical theories? This is an idea that I favor, but now is not the time to go into detailed arguments for it. The mere fact that it is a possibility, and is taken seriously by a significant section of the foundations community, means that we should try to formulate the problems in a language that is independent of the ontological status of the state-vector.

Given this background viewpoint, we can now ask to what extent decoherence can help us with 1) and 2), i.e. the emergence and ontology problems. Let me begin with a very short description of what decoherence is in this context. The first point is that it takes seriously the idea that quantum systems, particularly the sort that we usually describe as “classical”, are open, i.e. interact strongly with a large environment. Correlations between system and environment are typically established very quickly in some particular basis, determined by the form of the system-environment interaction Hamiltonian, so that the density matrix of the system quickly becomes diagonal in that basis. Furthermore, the basis in which the correlations exist is stable over a very long period of time, which can typically be much longer than the lifetime of the universe. Finally, for many realistic Hamiltonians and a wide variety of systems, the decoherence basis corresponds very well to the kind of states we actually observe.
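
To make the system-environment story above a little more concrete, here is a small numpy toy model (entirely my own illustrative construction, not taken from the decoherence literature): a single system qubit starts in a superposition, each of a handful of environment qubits gets kicked conditional on the system's sigma_z value, and the system's reduced density matrix ends up very nearly diagonal in the sigma_z (pointer) basis. The number of environment qubits and the coupling angles are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n_env = 10                              # number of environment qubits (toy choice)
    thetas = rng.uniform(0.5, 2.5, n_env)   # random system-environment coupling angles

    def ry(theta):
        """Single-qubit rotation about the y-axis."""
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    # The interaction correlates the environment with the system's sigma_z (pointer)
    # basis: the environment is left alone if the system is |0> and kicked if it is |1>.
    ket0 = np.array([1.0, 0.0])
    E0, E1 = np.ones(1), np.ones(1)
    for th in thetas:
        E0 = np.kron(E0, ket0)
        E1 = np.kron(E1, ry(th) @ ket0)

    # System starts in the superposition (|0> + |1>)/sqrt(2).
    psi = (np.kron(ket0, E0) + np.kron(np.array([0.0, 1.0]), E1)) / np.sqrt(2)

    # Reduced density matrix of the system (partial trace over the environment).
    m = psi.reshape(2, -1)
    rho_sys = m @ m.conj().T

    print(np.round(np.diag(rho_sys), 3))   # diagonal entries stay at [0.5, 0.5]
    print(abs(rho_sys[0, 1]))              # off-diagonal term = |<E0|E1>|/2, tiny compared to 0.5

The off-diagonal element is suppressed by the overlap of the two conditional environment states, and adding more environment qubits suppresses it further, which is the basic mechanism referred to above.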

From my point of view, the short answer to the role of decoherence in foundations is that it provides a good framework for addressing emergence, but has almost nothing to say about ontology.  The reason for saying that should be clear:  we have a good correspondence with our observations, but at no point in my description of decoherence did I find it necessary to mention a reality underlying quantum mechanics.  Having said that, a couple of caveats are in order. Firstly, decoherence can do much more if it is placed within a framework with a well defined ontology. For example, in Everett/many-worlds, the ontology is the state-vector, which always evolves unitarily and never collapses. The trouble with this is that the ontology doesn’t correspond to our subjective experience, so we need to supplement it with some account of why we see collapses, definite measurement outcomes, etc. Decoherence theory does a pretty good job of this by providing us with rules to describe this subjective experience, i.e. we will experience the world relative to the basis that decoherence theory picks out. However, the point here is that the work is not being done by decoherence alone, as claimed by some physicists, but also by a nontrivial ontological assumption about the state-vector. As I remarked earlier, the latter is itself a point of contention, so it is clear that decoherence alone is not providing a complete solution.

The second caveat is that some people, including Max Schlosshauer in his review, would deny that the ontology question needs to be answered at all. So long as we can account for our subjective experience in a compelling manner, why should we demand any more of our theories? The idea is then that decoherence solves the emergence problem, and we are done because the ontology problem need not be solved at all. One could argue for this position, but to do so is thoroughly wrongheaded in my opinion, and this is so independently of my conviction that physics is about trying to describe a reality that goes beyond subjective experience. The simple point is that someone who takes this view seriously really has no need for decoherence theory at all. Firstly, given that we are not assigning ontological status to anything, let alone the state-vector, we are free to collapse it, uncollapse it, evolve it, swing it around our heads or do anything else we like with it. After all, if it is not supposed to represent anything existing in reality then there need not be any physical consequences for reality of any mathematical manipulation, such as a projection, that we might care to do. The second point is that if we are prepared to give a privileged status to observers in our physical theories, by saying that physics needs to describe their experience and nothing more, then we can simply say that collapse is a subjective property of the observer’s experience and leave it at that. We already have privileged systems in our theory on this view, so what extra harm could that do?

Of course, I don’t subscribe to this viewpoint myself, but on both views described so far, decoherence theory either needs to be supplemented with an ontology, or is not needed at all for addressing foundational issues.

Finally, I want to make a couple of comments about how odd the decoherence solution looks from my particular point of view as a believer in the epistemic nature of wavefunctions. The first is that, from this point of view, the decoherence solution appears to have things backwards. When constructing a classical probabilistic theory, we first identify the ontological entities, e.g. particles that have definite trajectories, and describe their dynamics, e.g. Hamilton’s equations. Only then do we introduce probabilities and derive the corresponding probabilistic theory, e.g. Liouville mechanics. Decoherence theory does things in the other direction, starting from Schroedinger mechanics and then seeking to define the states of reality in terms of the probabilistic object, i.e. the state-vector. Whilst this is not obviously incorrect, since we don’t necessarily have to do things the same way in classical and quantum theories, it does seem a little perverse from my point of view. I’d rather start with an ontology and derive the fact that the state-vector is a good mathematical object for making probabilistic predictions, instead of the other way round.

The second comment concerns an analogy between the emergence of classicality in QM and the emergence of the second law of thermodynamics from statistical mechanics. For the latter, we have a multitude of conceptually different approaches, which all arrive at somewhat similar results from a practical point of view. For a state-vector epistemist like myself, the interventionist approach to statistical mechanics seems very similar to the decoherence approach to the emergence problem in QM. Both say that the respective problems cannot be solved by looking at a closed Hamiltonian system, but only by considering interaction with a somewhat uncontrollable environment. In the case of stat-mech, this is used to explain the statistical fluctuations observed in what would otherwise be a deterministic system. The introduction of correlations between system and environment is the mechanism behind both processes. Somewhat bizarrely, most physicists currently prefer closed-system approaches to the derivation of the second law, based on coarse-graining, but prefer the decoherence approach when it comes to the emergence of classicality from quantum theory. Closed-system approaches have the advantage of being applicable to the universe as a whole, where there is no external environment to rely on. However, apart from special cases like this, one can broadly say that the two types of approach are complementary for stat mech, and neither has a monopoly on explaining the second law. It is then natural to ask whether closed-system approaches to emergence in QM are available making use of coarse graining, and whether they ought to be given equal weight to the decoherence explanation. Indeed, such arguments have been given – here is a recent example, which has many precursors too numerous to go through in detail. I myself am thinking about a similar kind of approach at the moment. Right now, such arguments have a disadvantage compared to decoherence in that the “measurement basis” has to be put in by hand, rather than emerging from the physics as in decoherence. However, further work is needed to determine whether this is an insurmountable obstacle.

In conclusion, decoherence theory has done a lot for our understanding of the emergence of classicality from quantum theory. However, it does not solve all the foundational questions about quantum theory, at least not on its own. Further, its importance may have been overemphasized by the physics community because other less-developed approaches to emergence could turn out to be of equal importance.

Steane Roller

15th December 2006

Earlier, I promised some discussion of Andrew Steane’s new paper: Context, spacetime loops, and the interpretation of quantum mechanics. Whilst it is impossible to summarize everything in the paper, I can give a short description of what I think are the most important points.

  • Firstly, he does believe that the whole universe obeys the laws of quantum mechanics, which are not required to be generalized.
  • Secondly, he does not think that Everett/Many-Worlds is a good way to go because it doesn’t give a well-defined rule for when we see one particular outcome of a measurement in one particular basis.
  • He believes that collapse is a real phenomenon and so the problem is to come up with a rule for assigning a basis in which the wavefunction collapses, as well as, roughly speaking, a spacetime location at which it occurs.
  • For now, he describes collapse as an unanalysed fundamentally stochastic process that achieves this, but he recognizes that it might be useful to come up with a more detailed mechanism by which this occurs.

Steane’s problem therefore reduces to picking a basis and a spacetime location. For the former, he uses the standard ideas from decoherence theory, i.e. the basis in which collapse occurs is the basis in which the reduced state of the system is diagonal. However, the location of collapse is what is really interesting about the proposal, and makes it more interesting and more bizarre than most of the proposals I have seen so far.

Firstly, note that the process of collapse destroys the phase information between the system and the environment. Therefore, if the environmental degrees of freedom could ever be gathered together and re-interacted with the system, then QM would predict interference effects that would not be present if a genuine collapse had occurred. Since Steane believes in the universal validity of QM, he has to come up with a way of having a genuine collapse without getting into a contradiction with this possibility.

His first innovation is to assert that the collapse need not be associated to an exactly precise location in spacetime. Instead, it can be a function of what is going on in a larger region of spacetime. Presumably, for events that we would normally regard as “classical” this region is supposed to be rather small, but for coherent evolutions it could be quite large.

The rule is easiest to state for special cases, so for now we will assume that we are talking about particles with a discrete quantum degree of freedom, e.g. spin, but that the position and momentum can be treated classically. Now, suppose we have 3 qubits and that they are in the state (|000> + e^i phi |111>)/sqrt(2). The state of the first two qubits is a density operator, diagonal in the basis {|00>, |11>}, with a probability 1/2 for each of the two states. The phase e^i phi will only ever be detectable if the third qubit re-interacts with the first two. Whether or not this can happen is determined by the relative locations of the qubits, since the interaction Hamiltonians found in nature are local. Since we are treating position and momentum classically at the moment, there is a matter of fact about whether this will occur, and Steane’s rule is simple: if the qubits re-interact in the future then there is no collapse, but if they don’t then the first two qubits have collapsed into the state |00> or the state |11>, with probability 1/2 for each one.
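
Here is a quick numpy check of the two claims in the preceding paragraph, using the state (|000> + e^i phi |111>)/sqrt(2): the reduced state of the first two qubits is diagonal in {|00>, |11>} with weight 1/2 each and carries no dependence on phi, while the phase does show up in joint statistics once the third qubit is brought back in (here via the product of X measurements on all three qubits). The value of phi is an arbitrary illustrative choice.

    import numpy as np

    phi = 0.7                                  # arbitrary illustrative phase

    # (|000> + e^{i phi} |111>)/sqrt(2) as a vector of 8 amplitudes.
    psi = np.zeros(8, dtype=complex)
    psi[0b000] = 1 / np.sqrt(2)
    psi[0b111] = np.exp(1j * phi) / np.sqrt(2)

    # Reduced state of qubits 1 and 2: diagonal in {|00>, |11>}, independent of phi.
    m = psi.reshape(4, 2)                      # rows: qubits 1-2, columns: qubit 3
    rho12 = m @ m.conj().T
    print(np.round(rho12, 3))                  # diag(1/2, 0, 0, 1/2)

    # The phase is only visible in statistics involving the third qubit, e.g.
    # <X x X x X> = cos(phi), whereas <X x X x 1> = 0 for the first two alone.
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    I = np.eye(2)
    print(np.real(psi.conj() @ np.kron(np.kron(X, X), X) @ psi), np.cos(phi))
    print(np.real(psi.conj() @ np.kron(np.kron(X, X), I) @ psi))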

Things are going to get more complicated if we quantize the position and momentum, or indeed if we move to quantum field theory, since then we don’t have definite particle trajectories to work with. It is not entirely clear to me whether Steane’s proposal can be made to work in the general case, and he does admit that further technical work is needed. However, he still asserts that whether or not a system has collapsed at a given point in spacetime is in principle a function of its entire future, i.e. whether or not it will eventually re-interact with the environment it has decohered with respect to.

At this point, I want to highlight a bizarre physical prediction that can be made if you believe Steane’s point of view. Really, it is metaphysics, since the experiment is not at all practical. For starters, the fact that I experience myself being in a definite state rather than a superposition means that there are environmental degrees of freedom that I have interacted with in the past that have decohered me into a particular basis. We can in principle imagine an omnipotent “Maxwell’s demon” type character, who can collect up every degree of freedom I have ever interacted with, bring it all together and reverse the evolution, eliminating me in the process. Whilst this is impractical, there is nothing in principle to stop it happening if we believe that QM applies to the entire universe. However, according to Steane, the very fact that I have a definite experience means that we can predict with certainty that no such interaction happens in the future. If it did, there would be no basis for my definite experience at the moment.

Contrast this with a many-worlds account a la David Wallace. There, the entire global wavefunction still exists, and the fact that I experience the world in a particular basis is due to the fact that only certain special bases, the ones in which decoherence occurs, are capable of supporting systems complex enough to achieve consciousness. There is nothing in this view to rule out the Maxwell’s demon conclusively, although we may note that he is very unlikely to be generated by a natural process due to the second law of thermodynamics.

Therefore, there is something comforting about Steane’s proposal. If true, my very existence can be used to infer that I will never be wiped out by a Maxwell’s demon. All we need to do to test the theory is to try and wipe out a conscious being by constructing such a demon, which is obviously impractical and also unethical. Needless to say, there is something troubling about drawing such a strong metaphysical conclusion from quantum theory, which is why I still prefer the many-worlds account over Steane’s proposal at the moment. (That’s not to say that I agree with the former either though.)

New Papers

20th November 2006

I don’t normally like to just list new papers without commenting on them, but I don’t have much reading time at the moment so here are two that look interesting.

Firstly, Andrew Steane has a new paper entitled “Context, spacetime loops, and the interpretation of quantum mechanics”, which was written for the Ghirardi festschrift. Steane is best known for his work on quantum error correction, fault tolerance and ion trap quantum computing, which may not engender a lot of confidence in his foundational speculations. However, the abstract looks interesting and the final sentence: “A single universe undergoing non-unitary evolution is a viable interpretation.” would seem to fit with my “Church of the smaller Hilbert space” point of view. Steane has also addressed foundational issues before in his paper “A quantum computer only needs one universe”, and I like the title even if I am not familiar with the contents. Both of these are on my reading list, so expect further comments in the coming weeks.

The second paper is a survey entitled “Philosophical Aspects of Quantum Information Theory” by Chris Timpson. The abstract makes it seem like it would be a good starting point for philosophers interested in the subject. Timpson is one of the most careful analysers of quantum information on the philosophy side of things, so it should be an interesting read.

Quantum foundations before WWII

24th September 2006

The Shtetl Optimizer informs me that there has not been enough contemplation of Quantum Quandaries for his taste recently. Since there has not been a lot of interesting foundational news, the only sensible thing to do is to employ the usual blogger’s trick of cut, paste, link and plagiarize other blogs for ideas.

Scott recently posted a list of papers on quantum computation that a computer science student should read in order to prepare themselves for research in quantum complexity. Now, so far, nobody has asked me for a list of essential readings in the Foundations of Quantum Theory, which is incredibly surprising given the vast numbers of eager grad students who are entering the subject these days. In a way, I am quite glad about this, since there is no equivalent of “Mike and Ike” to point them towards. We are still waiting for a balanced textbook that gives each interpretation a fair hearing to appear. For now, we are stuck trawling the voluminous literature that has appeared on the subject since QM cohered into its present form in the 1920’s. Still, it might be useful to compile a list of essential readings that any foundational researcher worth their salt should have read.

Since this list is bound to be several pages long, today we will stick to those papers written before the outbreak of WWII, when physicists switched from debating foundational questions to the more nefarious applications of their subject. This is not enough to get you up to the cutting edge of modern research, so more specialized lists on particular topics will be compiled when I get around to it. I have tried to focus on texts that are still relevant to the debates going on today, so many papers that were important in their time but fairly uncontroversial today, such as Born’s introduction of the probability rule, have been omitted. Still, it is likely that I have missed something important, so feel free to add your favourites in the comments with the proviso that it must have been published before WWII.

  • P.A.M. Dirac, The Principles of Quantum Mechanics, Oxford University Press (1930).
  • J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton University Press (1955). This is the first English translation, but I believe the original German version was published prior to WWII.
  • W. Heisenberg, Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Zeitschrift für Physik, 43, 172-198 (1927). The original uncertainty principle paper.
  • A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47, 777 (1935).
  • N. Bohr, Can quantum-mechanical description of physical reality be considered complete?, Phys. Rev. 48, 696 (1935).
  • N. Bohr, The Philosophical Writings of Niels Bohr (vols. I and II), Oxbow Press (1987). It is a brave soul who can take this much Bohrdom in one sitting. All papers in vol. I and about half of vol. II were written prior to WWII. There is also a vol. III, but that contains post 1958 papers.
  • E. Schrödinger, Discussion of probability relations between separated systems, Proceedings of the Cambridge Philosophical Society. 31, 555-562 (1935).
  • E. Schrödinger, Die Gegenwärtige Situation in der Quantenmechanik, Die Naturwissenschaften. 23, 807-812; 824-828; 844-849 (1935). Translated here.
  • Birkhoff, G., and von Neumann, J., The Logic of Quantum Mechanics, Annals of Mathematics 37, 823-843 (1936).

Many of the important papers are translated and reproduced in:

  • J. A. Wheeler and W.H. Zurek (eds.), Quantum Theory and Measurement, Princeton University Press (1983).

Somewhat bizarrely, it is out of print, but you should find a copy in your local university library.

I am also informed that Anthony Valentini and Guido Bacciagaluppi have recently finished translating the proceedings of the 5th Solvay conference (1927), which is famous for the Bohr-Einstein debates, and produced one of the most well-known photos in physics. It should be worth a read when it comes out. A short video showing many of the major players at the 1927 Solvay conference is available here.

Update: A draft of the Valentini & Bacciagaluppi book has just appeared here.

Anyone for frequentist fudge?

14th June 2006

Having just returned from several evenings of Bayesian discussion in Vaxjo, I was inspired to read Facts, Values and Quanta by Marcus Appleby. Whilst not endorsing a completely subjectivist view of probability, the paper is an appropriate remedy for anyone who thinks that the frequentist view is the way to understand probability in physics, and particularly in quantum theory.

In fact, Appleby's paper provides good preparation for tackling a recent paper by Buniy, Hsu and Zee, pointed out by the Quantum Pontiff. The problem they address is how to derive the Born rule within the many-worlds interpretation, or simply from the eigenvalue-eigenstate (EE) link. The EE link says that if you have a system in an eigenstate of some operator, then the system possesses a definite value (the corresponding eigenvalue) for the associated physical quantity with certainty. Note that this is much weaker than the Born rule, since it does not say anything about the probabilities for observables of which the system is not in an eigenstate.

An argument dating back to Everett, but also discussed by Graham, Hartle and Farhi, Goldstone and Gutmann, runs as follows. Suppose you have a long sequence of identically prepared systems in a product state:

|psi>|psi>|psi>…|psi>

For the sake of definiteness, suppose these are qubits. Now suppose we are interested in some observable, with an eigenbasis given by |0>,|1>. We can construct a sequence of relative frequency operators, the first few of which are:

F1 = |1><1|

F2 = 1/2(|01><01| + |10><10|) + 1|11><11|

F3 = 1/3(|001><001| + |010><010| + |100><100|) + 2/3( |011><011| + |101><101| + |110><110|) + 1|111><111|

It is straightforward to show that in the limit of infinite copies, the state |psi>|psi>|psi>…|psi> becomes an eigenstate of Fn with eigenvalue |<psi|1>|^2. Thus, in this limit, the infinite system possesses a definite value for the relative frequency operator, given by the Born probability rule. The argument is also relevant for many-worlds, since one can show that if the |0> vs. |1> measurement is repeated on each copy in the state |psi>|psi>|psi>…|psi>, then the total squared norm of the worlds in which non-Born-rule relative frequencies are found tends to zero.
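
For the curious, here is a small numpy sketch of the construction above (an illustration with an arbitrary choice of |psi>, not a substitute for the infinite-copy argument): it builds Fn explicitly for a few small n, and checks that the mean of Fn in |psi>^n is |<1|psi>|^2 while its variance falls off as p(1-p)/n, which is the sense in which the state approaches an Fn eigenstate.

    import numpy as np
    from itertools import product
    from functools import reduce

    # Arbitrary single-copy state |psi> = a|0> + b|1>, chosen for illustration.
    psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])
    p = psi[1] ** 2                              # |<1|psi>|^2 = 0.7

    def frequency_operator(n):
        """F_n = sum over n-bit strings s of (number of 1s in s / n) |s><s|."""
        freqs = [sum(bits) / n for bits in product([0, 1], repeat=n)]
        return np.diag(freqs)

    for n in [2, 4, 6, 8, 10]:
        Fn = frequency_operator(n)
        psi_n = reduce(np.kron, [psi] * n)       # |psi> tensored with itself n times
        mean = psi_n @ Fn @ psi_n
        var = psi_n @ Fn @ Fn @ psi_n - mean ** 2
        print(n, round(mean, 3), round(var, 5), round(p * (1 - p) / n, 5))

    # The mean sits at |<1|psi>|^2 for every n and the variance matches p(1-p)/n,
    # so the spread of F_n in the n-copy state shrinks to zero as n grows.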

Of course, there are many possible objections to this argument (see Caves and Schack for a rebuttal of the Farhi, Goldstone, Gutmann version). One is that there are no infinite sequences available in the real world. For finite but large sequences, one can show that although the total squared norm of the worlds with non-Born-rule frequencies is small, there are actually still far more of them than worlds which do have Born-rule frequencies. Therefore, since we have no a priori reason to assign worlds with small amplitudes a small probability (and we cannot assume one, because that is precisely what we are trying to derive), we should expect to see non-Born-rule probabilities.
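
To put rough numbers on this, here is a short Python calculation (parameters are purely illustrative): for n copies of a state with |<1|psi>|^2 = 0.9, it counts the branches whose relative frequency lies within 0.05 of 0.9 and compares both their number and their total squared norm with those of all the other branches.

    import math

    p, n, delta = 0.9, 100, 0.05       # illustrative Born weight, number of copies, tolerance

    born_count = born_norm = other_count = other_norm = 0
    for k in range(n + 1):
        count = math.comb(n, k)                        # branches with k outcomes equal to 1
        weight = count * p ** k * (1 - p) ** (n - k)   # their total squared norm
        if abs(k / n - p) <= delta:
            born_count, born_norm = born_count + count, born_norm + weight
        else:
            other_count, other_norm = other_count + count, other_norm + weight

    print(f"branches with frequency near 0.9: {born_count:.3e}, total squared norm {born_norm:.3f}")
    print(f"all other branches:               {other_count:.3e}, total squared norm {other_norm:.3f}")
    # The non-Born-frequency branches vastly outnumber the Born ones, even though
    # they carry only a small fraction of the total squared norm.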

Buniy, Hsu and Zee point out that this problem can be avoided if we assume that the state space is fundamentally discrete, i.e. if two state-vectors are sufficiently close, say || |psi> - |phi> || < epsilon for some small epsilon, then |psi> and |phi> are actually the same physical state. They provide a way of discretizing the Hilbert space such that the small-amplitude worlds disappear for some large but finite number of copies of the state. They also argue that this discreteness of the state space might be derived from some future theory of quantum gravity.
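
Continuing the toy numbers above, one can ask how the total squared norm of the non-Born-frequency branches behaves as the number of copies grows; once it falls below whatever cutoff the discretization imposes, those branches are gone, which is roughly the effect Buniy, Hsu and Zee are after. This is only a caricature of their proposal, with arbitrary parameters.

    import math

    p, delta = 0.9, 0.05               # same illustrative Born weight and tolerance as above

    def tail_norm(n):
        """Total squared norm of branches with relative frequency outside p +/- delta."""
        total = 0.0
        for k in range(n + 1):
            if abs(k / n - p) > delta:
                log_w = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                         + k * math.log(p) + (n - k) * math.log(1 - p))
                total += math.exp(log_w)
        return total

    for n in [50, 200, 800, 3200]:
        print(n, tail_norm(n))
    # The non-Born component shrinks exponentially with n, so for any fixed cutoff
    # epsilon it eventually drops below epsilon, at which point a discretized state
    # space would identify the n-copy state with an exact frequency eigenstate.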

I have to say that I do not buy their argument at all. For one thing, I hope that the conceptual problems of quantum theory have good answers independently of anything to do with quantum gravity. In any case, the question of whether the successful theory will really entail a discrete state space is still open to doubt. More seriously, it should be realized that the problem they are trying to solve is not unique to quantum mechanics. The same issue exists if one tries to give a frequentist account of classical probability based on large but finite ensembles. In that case, their solution would amount to the procrustean method of just throwing away probabilities that are smaller than some epsilon. Hopefully, this already seems like a silly thing to do, but if you still have doubts then you can find persuasive arguments against this approach in the Appleby paper.

For me, the bottom line is that the problem being addressed has nothing to do with quantum theory, but is based on an erroneous frequentist notion of probability. Better to throw out frequentism and use something more sensible, i.e. a Bayesian approach. Even then, the notion of probability in many-worlds remains problematic, but I think that Wallace has given the closest we are likely to get to a derivation of the Born rule for many-worlds along Bayesian lines.

Shameless self-promotion

5th June 2006

As is traditional with physics blogs, it is time to indulge in a spot of shameless self-promotion of my own work. I have just posted a paper on quantum dynamics as an analog of conditional probability on the arXiv. This is about a generalization of the isomorphism between bipartite quantum states and completely positive maps, which is often used in quantum information. The main point is that it provides a good quantum analog of conditional probability, so it may be of interest to foundations-types who like to think of quantum theory as a generalization of classical probability theory.
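
For readers who haven't met it, here is a small numpy illustration of the standard isomorphism being referred to, between a completely positive map and a bipartite state (often called the Choi-Jamiolkowski isomorphism). The dephasing channel and the recovery formula below are textbook material and just a warm-up, not the generalization worked out in the paper.

    import numpy as np

    d, q = 2, 0.3                              # a qubit dephasing channel, illustrative strength

    def channel(rho):
        """E(rho) = (1 - q) rho + q Z rho Z : a simple dephasing channel."""
        Z = np.diag([1.0, -1.0])
        return (1 - q) * rho + q * Z @ rho @ Z

    # Bipartite (Choi) state: J = (id x E)(|Phi+><Phi+|) = (1/d) sum_ij |i><j| x E(|i><j|).
    choi = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            ket_ij = np.outer(np.eye(d)[i], np.eye(d)[j])   # |i><j| on the input system
            choi += np.kron(ket_ij, channel(ket_ij)) / d

    def apply_via_choi(rho):
        """Recover the channel's action from the state: E(rho) = d Tr_1[(rho^T x 1) J]."""
        M = np.kron(rho.T, np.eye(d)) @ choi
        return d * np.trace(M.reshape(d, d, d, d), axis1=0, axis2=2)

    rho_in = np.array([[0.6, 0.3], [0.3, 0.4]])             # an arbitrary test state
    print(np.allclose(channel(rho_in), apply_via_choi(rho_in)))   # True: state and map encode the same data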

The paper was completed in somewhat of a hurry, to get it out in time for the conference on Foundations of Probability and Physics in Vaxjo taking place this week, where I am due to give a talk on the subject. No doubt it still contains a few typos, so you can expect it to get updated in the next couple of weeks. Any comments would be appreciated.

More on the Vaxjo meeting to follow soon.

Realists on the counter attack

25th April 2006

Martin Daumer, Detlef Duerr, Sheldon Goldstein, Tim Maudlin, Roderich Tumulka and Nino Zanghi, a collection of scholars noted for their advocacy of realist interpretations of quantum mechanics, and Bohmian mechanics in particular, have posted an article on quant-ph that attacks the idea that quantum theory is “fundamentally about information”. The article is a response to a recent essay in Nature by Anton Zeilinger, and is mainly a criticism of his particular viewpoint.

Most of their argument is based on the fact that interpretations like Bohmian mechanics offer a clear counterexample to various claims, such as that QM shows nature is fundamentally indeterministic and that the Bell and Kochen-Specker no-go theorems rule out realism. I think this is all fair enough, and I agree that it is well worth taking the time to become familiar with the Bohm interpretation if one is at all interested in foundations. It is quite amazing how often it can be used as an example to clear up confusion and misunderstandings about what we can infer from QM. On the other hand, this is a far cry from saying that Bohmian mechanics should be taken seriously as a description of reality. There are several arguments against doing so, which would take too long to go into right now. Perhaps I will do so in another post when I have more free time.

In any case, Zeilinger’s Nature essay seems a rather easy target to me. It was a short article, and there was clearly not enough space for any detailed arguments. Whether or not you think that Zeilinger in fact has any compelling arguments, there are many other contemporary approaches that also claim QM is about “information” in some sense, and it would be good to see a more in depth response to all of these from the realist camp. Examples include the quantum Bayesianism of Caves, Fuchs and Schack; the axiomatic approach of Bub, Clifton and Halvorson; and Hardy’s axiomatics.

Those of you who are waiting for Rovellifest 2 – fear not, for it is coming within the next week or so. For now, I feel like I need to write something on a topic I feel positive about, to avoid this blog descending into a sea of negative criticisms.

Rovellifest 1

17th April 2006

Carlo Rovelli has recently put 3 papers on the arXiv, which have attracted some attention within the blogosphere (see here, here, here and here). The one that concerns us here at QQ is the paper about EPR in the relational approach to QM. I don't want to comment on the particular argument in that paper, which seems fine as far as it goes, but I do want to say a couple of things about Rovelli's approach in general, since it seems to be a popular topic at the moment. The main ideas of the approach can be found in Rovelli's original paper.

Here is an (admittedly cartoonish) summary:

1. We should shift attention from things like the measurement problem and instead try to derive QM from the idea that it is a theory of the information about one system that is available relative to other systems.

2. Quantum states are not absolute concepts and the state of a system is only defined relative to some other reference systems. Different reference systems do not have to agree on this state. If they do come to agreement it is only after the reference systems themselves interact with each other according to some Hamiltonian.

3. The question of whether a system has some particular property has no absolute meaning. However, some property of a system can be well-defined relative to some other system, provided the systems happen to have interacted in such a way that the second system records the appropriate information about the first system.

4. All the relational states just represent the subjective point of view that one system has about another. There is no absolute meaning to such states and no meaningful "wave-vector of the universe" can be constructed because there is no external system for it to enter into relations with.

5. This is all just a twist on the usual kind of relationalism that we have in other physical theories, e.g. special and general relativity.

In my opinion, there is a good deal wrong with relational QM as formulated by Rovelli, although I am not particularly opposed to relationalism in general. In this post, I'll make some comments about 4 and 5. A forthcoming "Rovellifest 2" post will point out a problem with 3, which I believe is more serious.

To address 5, it is worth noting a striking disanalogy between relational QM and other sorts of relational theories in physics. For example, in Newtonian mechanics we are very used to the idea that there is no absolute meaning of the position of a particle A, but you can define its position relative to a reference system B. This is generally different from the position of A relative to another reference system C. Similarly, there is no absolute notion of when two events are simultaneous in special relativity, but this is well defined relative to any inertial reference frame.

However, in these cases it is always possible to find some transformation that relates the descriptions relative to different reference frames, provided you know the relations between the frames themselves, e.g. the Lorentz transformations in special relativity.

Now consider a quantum system composed of a subsystem A and two observers B and C. Suppose both B and C separately interact with A, possibly measuring different observables on A. Relative to B, A is supposed to have some definite property after this interaction, and similarly for C. However, you generally can't convert between B's and C's descriptions of the situation if you only know the state of B relative to C. You can if they happened to measure the same observable, but that's a very special case.

In fact, the only way to reliably convert between different observers' relative states of the same system is to know the entire "wave-vector of the universe", something that is meaningless for Rovelli due to 4.

So, it seems we are left with two options:

1. Add in a "state of the universe" so that one can reliably transform between different descriptions of the same subsystem.

2. Abandon the classical notion that one can reliably transform between different descriptions of the same system.

Adopting 1 would essentially entail accepting an Everettian/many-worlds type scenario, something that Rovelli is keen to distance himself from. Therefore, I conclude that he must accept 2.

Abandoning reliable transformations is not a completely absurd thing to do, but it is important to note that this is a departure from what we usually mean by the term "relational". I am still not entirely convinced that it is consistent, although I haven't managed to think up a scenario where it would cause a problem yet. My suspicion is that it might be attacked by a "Wigner's Enemy" type of argument of the sort that was levelled against Chris Fuchs' Bayesian approach by Amit Hagar, which seems much more relevant to the relational approach than to its original target.

N.B. "Wigner's Enemy" is a new name I just thought up for the argument.  I figure he must be an enemy rather than a friend because friends don't usually try to erase your memory. 

The Free Will Theorem

13th April 2006

Michael Nielsen recently posted a comment by John Sidles about a preprint by Kochen and Conway that was posted on the quant-ph arXiv yesterday. It's called "The Free Will Theorem", which is certainly a provocative title. Here's my comment on the paper that I left on Mike's blog.

Hmm… I had a look at this paper. The title sounds a bit crackpot, but given the status of the authors I was willing to give it a chance.

First of all, the name “Free Will Theorem” opens a whole can of worms, which we probably don’t want to get into. Suffice it to say, what they actually prove is an “indeterminism theorem”, i.e. they use a Bell-type argument + a no-signalling requirement to prove that nature must be indeterministic. I have heard similar arguments before, in particular Y. Aharonov and D. Rohrlich mention it in their book, although I’ve never seen it written down formally before.

To call this a “free will theorem” one has to get into the debates about whether free will is compatible with determinism and, if not, whether indeterminism even solves the problem. Most contemporary philosophers seem to answer yes and no respectively, so I don’t think this theorem has much to do with free will, although it would take a lot more space to go through the arguments for and against thoroughly.

However, what I did think was interesting about the paper was the “hexagon universe” toy model that they introduced in the second half of the paper. Given the current interest in understanding aspects of QM via simpler toy theories, e.g. nonlocal boxes and Spekkens’ toy theory, this might be a useful addition to the canon. I haven’t managed to decipher all the details of this model yet, so I’ll have to defer judgement on that.

