Baez on Quantum Foundations

Posted 11th May 2007 by Matthew Leifer
Categories: Blogs, Quantum, Web

I just wrote another post on the FQXi site, but to cut a long story short, it gives a link to the latest “This Week’s Finds…” on quantum foundations.

Changes

Posted 7th May 2007 by Matthew Leifer
Categories: Blog Admin

OK, it is time to announce some changes that I alluded to in comments to an earlier post.

Firstly, I have decided to take up an invitation to blog over at the new FQXi community pages.  At the moment, I’m just doing this on an experimental basis for a couple of months and I am fully intending to bring my foundational musings back over here at the end of it.  One reason for this is that the FQXi blogs currently work a bit more like forums than a regular blog and they are missing a number of key features, e.g. RSS feeds, that I think are important.  Still, I think it will be worthwhile, as I will potentially be able to reach a wider audience of people working on fundamental physics who would not read this blog.  For now, I will be posting links here when I write an FQXi post, so you can keep track of them.  The first one, on ontological vs. epistemic wave-vectors, can be found here.

Secondly, I have decided that my strategy to keep this blog purely focussed on foundations and maintain another blog about technology in academia is not really working.  For one thing, I can hardly ever be bothered to write posts on the other blog and it is certainly not the case that I have groundbreaking new things to say about foundations every day.  Therefore, I think it would be an improvement if I allow myself to write about a wider array of subjects in fundamental science and other things that I think are interesting.  Rest assured that foundations will remain the focus, so this will not become just another general physics blog and you will definitely never find me writing any posts about my pet dog.

Refuting nonlocal realism?

Posted 2nd May 2007 by Matthew Leifer
Categories: Experiment, papers, Quantum

Posting has been light of late. I would like to say this is due to the same sort of absorption that JoAnne has described over at Cosmic Variance, but in fact my attention span is currently too short for that and it has more to do with my attempts to work on three projects simultaneously. In any case, a report of an experiment on quantum foundations in Nature cannot possibly go ignored for too long on this blog. See here for the arXiv eprint.

What Gröblacher et al. report is an experiment showing violations of an inequality proposed by Leggett, aimed at ruling out a class of nonlocal hidden-variable theories, whilst simultaneously violating the CHSH inequality, so that local hidden-variable theories are also ruled out in the same experiment. This is of course subject to the usual caveats that apply to Bell experiments, but let’s grant the correctness of the analysis for now and take a look at the class of nonlocal hidden-variable theories that are ruled out.

It is well-known that Bell’s assumption of locality can be factored into two conditions (written out in symbols below).

  • Outcome independence: given the hidden variables and both detector settings, the probability of the outcome at site A does not depend on the outcome of the experiment at site B.
  • Parameter independence: given the hidden variables, the probability of the outcome at site A does not depend on the choice of detector setting at site B.
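
For concreteness, here is one standard way of writing these conditions. The notation (settings a, b; outcomes A, B = ±1; hidden variable λ) is only a sketch of the usual textbook formulation, not a transcription of Leggett’s paper:

  \[ \text{Outcome independence:} \quad P(A \mid a, b, B, \lambda) = P(A \mid a, b, \lambda) \]
  \[ \text{Parameter independence:} \quad P(A \mid a, b, \lambda) = P(A \mid a, \lambda) \]
  \[ \text{Bell locality (both together):} \quad P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda) \]

For reference, the CHSH inequality mentioned above, which any theory satisfying both conditions must obey after averaging over λ, is

  \[ |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2, \qquad E(a,b) = \sum_{A,B = \pm 1} A B \, P(A, B \mid a, b). \]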

Leggett has proposed to consider theories that maintain the assumption of outcome independence, but drop the assumption of parameter independence.  It is worth remarking at this point that the attribution of fundamental importance to this factorization of the locality assumption can easily be criticized.  Whilst it is usual to describe the outcome at each site by ±1, this is an oversimplification.  For example, if we are doing Stern-Gerlach measurements on electron spins then the actual outcome is a deflection of the path of the electron either up or down with respect to the orientation of the magnet.  Thus, the outcome cannot be so easily separated from the orientation of the detector, as its full description depends on the orientation.

Nevertheless, whatever one makes of the factorization, it is the case that one can construct toy models that reproduce the quantum predictions in Bell experiments by dropping parameter independence.  Therefore, it is worth considering what other reasonable constraints we can impose on theories when this assumption is dropped.  Leggett’s assumption amounts to assuming that the hidden variable states in the theory can be divided into subensembles, in each of which the two photons have a definite polarization (which may however depend on the distant detector setting).  The total ensemble corresponding to a quantum state is then a statistical average over such states.  This is the class of theories that has been ruled out by the experiment.
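
As an illustration of the first point in the paragraph above, here is a minimal toy model of my own (not Leggett’s class of theories, and making no attempt to satisfy his subensemble assumption) in which the outcome at site B is allowed to depend on the distant setting. Dropping parameter independence in this way is enough to reproduce the quantum singlet correlation E(a,b) = -a·b exactly:

  import numpy as np

  rng = np.random.default_rng(0)

  def sample_run(a, b, n=200_000):
      # Toy nonlocal hidden-variable model (illustration only).
      # Site A respects parameter independence: its outcome depends only on its
      # own setting `a` and the shared hidden variable lam1.
      # Site B drops parameter independence: its outcome also uses the distant
      # setting `a`, which is what lets the model match E(a,b) = -cos(theta).
      lam1 = rng.normal(size=(n, 3))
      lam1 /= np.linalg.norm(lam1, axis=1, keepdims=True)   # random unit vectors
      lam2 = rng.uniform(size=n)                            # second hidden variable

      A = np.sign(lam1 @ a)                                 # outcomes at A, = +/-1
      theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
      anticorrelate = lam2 < np.cos(theta / 2) ** 2         # probability that B = -A
      B = np.where(anticorrelate, -A, A)                    # B's rule uses `a` nonlocally
      return A, B

  a = np.array([0.0, 0.0, 1.0])
  b = np.array([np.sin(np.pi / 3), 0.0, np.cos(np.pi / 3)])  # settings 60 degrees apart
  A, B = sample_run(a, b)
  print("model E(a,b) =", np.mean(A * B))    # approx -0.5
  print("quantum -a.b =", -np.dot(a, b))     # exactly -0.5

The marginal statistics at each site are still uniformly random, so the parameter dependence is invisible in the local data and no-signalling is respected even though the model is grossly nonlocal.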

This is all well and good, and I am certainly in favor of any experiment that places constraints on the space of possible interpretations of quantum theory.  However, the experiment has been sold in some quarters as a “refutation of nonlocal realism”, so we should consider the extent to which this is true.  The first point to make is that there are perfectly good nonlocal realistic models, in the sense of reproducing the predictions of quantum theory, that do not satisfy Leggett’s assumptions – the prime example being Bohmian mechanics.  In the Bohm theory photons do not have a well-defined value of polarization, but instead it is determined nonlocally via the quantum potential.   Therefore, if we regard this as a reasonable theory then no experiment that simply confirms the predictions of quantum theory can be said to rule out nonlocal realism.

Why is many-worlds winning the foundations debate?

Posted 11th April 2007 by Matthew Leifer
Categories: Philosophy, Quantum

Almost every time the foundations of quantum theory are mentioned in another science blog, the comments contain a lot of debate about many-worlds. I find it kind of depressing the extent to which many people are happy to jump on board with this interpretation without asking too many questions. In fact, it is almost as depressing as the fact that Copenhagen has been the dominant interpretation for so long, despite the fact that most of Bohr’s writings on the subject are pretty much incoherent.

Well, this year is the 50th anniversary of Everett’s paper, so perhaps it is appropriate to lay out exactly why I find the claims of many-worlds so unbelievable.

WARNING: The following rant contains Philosophy!

Traditionally, philosophers have made a distinction between analytic and synthetic truths. Analytic truths are those things that you can prove to be true by deduction alone. They are necessary truths and essentially they are just the tautologies of classical logic, e.g. either this is a blog post or this is not a blog post. On the other hand, synthetic truths are things we could imagine to have been another way, or things that we need to make some observation of the world in order to confirm or deny, e.g. Matt Leifer has never written a blog post about his pet cat.

Perhaps the central problem of the philosophy of science is whether the correctness of the scientific method is an analytic or a synthetic truth. Of course this depends a lot on how exactly you decide to define the scientific method, which is a topic of considerable controversy in itself. However, it’s pretty clear that the principle of induction is not an analytic truth, and even if you are a falsificationist you have to admit that it has some role in science, i.e. if a single experiment contradicts the predictions of a dominant theory then you call it an anomaly rather than a falsification. Of the other alternatives, if you’re a radical Kuhnian then you’re probably not reading this blog, since you are busy writing incoherent postmodern junk for a sociology journal. If you are a follower of Feyerabend then you are a conflicted soul and I sympathize. Anyway, back to the plot for people who do believe that induction has some role to play in science.

Kant’s resolution to this dilemma was to divide the synthetic truths into two categories, the a priori truths and the rest (I don’t know a good name for non-a priori synthetic truths). The a priori synthetic truths are things that cannot be directly deduced, but are nevertheless so basic to our functioning as beings living in this world that we must assume them to be true, i.e. it would be impossible to make any sense of the world without them. For example, we might decide that the fact that the universe is regular enough to perform approximately repeatable scientific experiments and to draw reliable inferences from them should be in the category of a priori truths. This seems reasonable because it is pretty hard to imagine that any kind of intelligent life could exist in a universe where the laws of physics were in continual flux.

One problem with this notion is that we can’t know a priori exactly what the a priori truths are. We can write down a list of what we currently believe to be a priori truths – our assumed a priori truths – but this is open to revision if we find that we can in fact still make sense of the world when we discard some of these assumed truths. The most famous example of this comes from Kant himself, who assumed that the way our senses are hooked up meant that we must describe the world in terms of events happening in space and time, implicitly assuming a Euclidean geometry. As we now know, the world still makes sense if we drop the Euclidean assumption, unifying space and time and working with much more general geometries. Still, even in relativity we have the concept of events occurring at spacetime locations as a fundamental primitive. If you like, you can modify Kant’s position to take this as the fundamental a priori truth, and explain that he was simply misled by the synthetic fact that our spacetime is approximately flat on ordinary length scales.

At this point, it is useful to introduce Quine’s pudding-bowl analogy for the structure of knowledge (I can’t remember what kind of bowl Quine actually used, but he’s making pudding as far as we are concerned). If you make a small chip at the top of a pudding bowl, then you won’t have any problem making pudding with it and the chip can easily be fixed up. On the other hand, if you make a hole near the bottom then you will have a sticky mess in the oven. It will take some considerable surgery to fix up the bowl and you are likely to consider just throwing out the bowl and sitting down at the pottery wheel to build a new one. The moral of the story is that we should be more skeptical of changes in the structure of our knowledge that seem to undermine assumptions that we think are fundamental. We need to have very good reasons to make such changes, because it is clear that there is a lot of work to be done in order to reconstruct all the dependent knowledge further up the bowl that we rely on every day. The point is not that we should never make such changes – just that we should be careful to ensure that there isn’t an equally good explanation that doesn’t require making such a drastic change.

Aside: Although Quine has in mind a hierarchical structure for knowledge – the parts of the pudding bowl near the bottom are the foundation that supports the rest of the bowl – I don’t think this is strictly necessary. We just need to believe that some areas of knowledge have higher connectivity than others, i.e. more other important things that depend on them. It would work equally well if you think knowledge is structured like a power-law graph, for example.

The Quinian argument is often levelled against proposed interpretations of quantum theory, e.g. the idea that quantum theory should be understood as requiring a fundamental revision of logic or probability theory rather than these being convenient mathematical formalisms that can coexist happily with their classical counterparts. The point here is that it is bordering on the paradoxical for a scientific theory to entail changes to things on which the scientific method itself seems to depend, given that we used logical and statistical arguments to confirm quantum theory in the first place. Thus, if we revise logic or probability then the theory seems to be “eating its own tail”. This is not to say that this is an actual paradox, because it could be the case that when we reconstruct the entire edifice of knowledge according to the new logic or probability theory we will still find that we were right to believe quantum theory, but just mistaken about the reasons why we should believe it. However, the whole exercise is question-begging, because if we allow changes to such basic things then why not make a more radical change and consider the whole space of possible logics or probability theories? There are clearly some possible alternatives under which all hell breaks loose and we are seriously deluded about the validity of all our knowledge. In other words, we’ve taken a sledgehammer to our pudding bowl and we can’t even make jelly (jello for North American readers) any more.

At this point, you might be wondering whether a Quinian argument can be levelled against the revision of geometry implied by general relativity as well. The difference is that we do have a good handle on what the space of possible alternative geometries looks like. We can imagine writing down countless alternative theories in the language of differential geometry and figuring out what the world looks like according to them. We can adopt the assumed a priori truth that the world is describable in terms of events in some spacetime geometry, and then we find the synthetic fact that general relativity is in accordance with our observations, while most of the other theories are not. We did some significant damage close to the bottom of the bowl, but it turned out that we could fix it relatively easily. There are still some fancy puddings – like the theory of quantum gravity (baked Alaska) – that we haven’t figured out how to make in the repaired bowl, but we can live without them most of the time.

Now, is there a Quinian argument to be made against the many-worlds interpretation? I think so. The idea is that when we apply the scientific method we assume we can do experiments which have actual definite outcomes. These are the basic data from which we build a confirmation or refutation of our theories. Many-worlds says that this assumption is wrong: there are no fundamental definite outcomes – it just appears that way to us because we are all entangled up in the wavefunction of the universe. This is a pretty dramatic assertion and it does seem to be bordering on the “theory eating its own tail” type of assertion. We need to be pretty sure that there isn’t an equally good alternative explanation in which experiments really do have definite outcomes before accepting it. Also, as with the case of revising logic or probability, we don’t have a good understanding of the space of possible theories in which experiments do not have definite outcomes. I can think of one other theory of this type, namely a bizarre interpretation of classical probability theory in which all outcomes that are assigned nonzero probability occur in different universes, but two possible theories does not amount to much in the grand scheme of things. The problem is that on dropping the assumption of definite outcomes, we have not replaced it with an adequate new assumed a priori truth. That the world is describable by vectors in Hilbert space that evolve unitarily seems much too specific to be considered as a potential candidate. Until we do come up with such an assumption, I can’t see why many-worlds is any less radical than proposing a revision of logic or probability theory. Until then, I won’t be making any custard tarts in that particular pudding bowl myself.

Teaching Quantum Theory

Posted 26th March 2007 by Matthew Leifer
Categories: Quantum, Teaching

The recent article by Chandralekha Singh, Mario Belloni and Wolfgang Christian on Students’ understanding of Quantum Mechanics in Physics Today provoked an interesting series of letters in response. Both Robert Griffiths and Travis Norsen argue that students’ understanding would be improved by replacing the usual Copenhagen/Orthodox dogma by discussion of some more recent developments in the foundations of quantum theory.

Given that I don’t actually have much experience teaching quantum theory (I have only covered a lecturer’s absence for two lectures), it is perhaps a bit presumptuous for me to contribute my thoughts on this topic. Nevertheless, I do agree wholeheartedly with the basic sentiment of both these letters. I think one can easily see that at least some of the misconceptions that Singh, Belloni and Christian have written about could be easily remedied by a bit more foundational discussion at the ground level. For example, I think the common misconception that stationary states are the only allowed states of a quantum system could be dispelled by a deeper discussion of the sense in which quantum theory is analogous to classical probability theory.
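
To make the stationary-state point concrete, here is a minimal sketch of my own (a generic two-level example, not anything taken from the Physics Today article or the letters): any normalized superposition of energy eigenstates is a perfectly legitimate state, and unlike a stationary state its measurement probabilities change in time.

  import numpy as np

  # Two-level system in its energy eigenbasis; illustrative energies, hbar = 1.
  E = np.array([0.0, 1.0])
  psi0 = np.array([1.0, 1.0]) / np.sqrt(2)      # superposition, not an energy eigenstate

  def evolve(psi, t):
      # Schroedinger evolution exp(-iHt) is diagonal in the energy eigenbasis.
      return np.exp(-1j * E * t) * psi

  plus = np.array([1.0, 1.0]) / np.sqrt(2)      # measure the projector onto |+> = (|0>+|1>)/sqrt(2)
  for t in [0.0, np.pi / 2, np.pi]:
      p = abs(np.vdot(plus, evolve(psi0, t))) ** 2
      print(f"t = {t:4.2f}  P(+) = {p:.3f}")    # oscillates: 1.000, 0.500, 0.000

Running an energy eigenstate through the same code gives a time-independent answer, which is the actual content of the word “stationary”.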

However, I think both Griffiths and Norsen make a mistake in the approaches they advocate in their letters. Griffiths suggests replacing the orthodoxy with his own favored approach, namely decoherent/consistent histories, and Norsen thinks we should teach students Bohmian mechanics. In fact, in his letter Griffiths gives the misleading impression that his approach is universally and unproblematically accepted by all right-thinking physicists. Whilst the formalism certainly has quite a few adherents in quantum cosmology, it is far from true that it has received universal support from all serious thinkers on the foundations of quantum theory. Similarly, whilst I agree that Bohmian mechanics presents the clearest counterexample to many common misconceptions about quantum theory, it is far from clear that it represents the best road to future progress.

In my view, the problem is not that we are teaching the wrong orthodoxy to students, but rather that we are teaching them any orthodoxy at all, since foundations is a subject that is still mired in controversy to this day. It is hard for me to imagine any physicist who is not directly involved in foundations taking either Griffiths’ or Norsen’s arguments seriously, since their letters directly contradict each other about what is the best approach to teach, and a non-specialist really has no way of deciding which one of them they should trust. The view that foundations is a murky area, with no clear reason for choosing one approach over any other, is only reinforced by such arguments, and it is unlikely to persuade a skeptic to change their whole teaching strategy.

On the other hand, I do believe that there are a lot of developments in foundations that have made our current understanding much clearer, and these could be usefully communicated to students. For example, we have a much clearer understanding of the “no-go” theorems, such as Bell’s theorem, and their possible loopholes, and a much clearer understanding of the space of possible realist interpretations of quantum theory. We have an improved understanding of the classical limit, via decoherence theory amongst other approaches, and quantum information theory has shown that entanglement and the understanding of quantum theory as a generalized probability theory actually have useful consequences. I believe we should teach these things as a central part of quantum mechanics courses, and not just as peripheral topics covered in the last one or two lectures, which students are instructed not to worry about because it won’t be on the final exam! We should also give students an understanding of the space of possible resolutions to foundational problems, to equip them with a BS detector for statements they are likely to hear about quantum theory. Why do I believe this? Well, simply because I think it will leave students less confused about how to understand quantum theory and because I think these areas are all increasingly fruitful avenues of research that we might want to encourage them to pursue.

The difficult question, I think, is not the why but the how. It would entail battling against the prevailing wisdom that foundations are to be de-emphasised and relegated to the end of the course. Also, good teaching materials at an appropriate level that could supplement the existing curriculum are not readily available, and that is a problem we definitely have to address if we want this to happen.

Foundations Summer School: Apply Now!

Posted 12th March 2007 by Matthew Leifer
Categories: Meetings, Quantum

Just a short note to let you know that the application form for the Perimeter Institute Quantum Foundations Summer School is now available online from here. The application deadline is 20th May.

Update: I should have mentioned that for successful applicants who are grad students all expenses will be paid by Perimeter. That should make it easier to persuade your advisor to let you go. You don’t have to be an expert on foundations and we are hoping that students studying a wide variety of areas of Physics will attend.

Update 2: Whether non-students, e.g. postdocs, will be allowed to attend is still an open question. I’m waiting to hear more about this from the organizers. Clearly, the priority for a summer school has to be grad students, so I would speculate that it will depend on the number and quality of applications that we get. I’m just guessing at the moment though and I’ll post another update once I hear the official word.

Update 3: I have just heard that up to 10 places will be made available at the summer school for postdocs and junior faculty.

Foundations at APS, take 2

Posted 6th March 2007 by Matthew Leifer
Categories: Meetings, Quantum

It doesn’t seem that a year has gone by since I wrote about the first sessions on quantum foundations organized by the topical group on quantum information, concepts and computation at the APS March meeting. Nevertheless it has, and I am here in Denver after possibly the longest day of continuous sitting through talks in my life. I arrived at 8am to chair the session on Quantum Limited Measurements, which was interesting, but readers of this blog won’t want to hear about such practical matters, so instead I’ll spill the beans on the two foundations sessions that followed.

In the first foundations session, things got off to a good start with Rob Spekkens as the invited speaker explaining to us once again why quantum states are states of knowledge. OK, I’m biased because he’s a collaborator, but he did throw us a new tidbit on how to make an analog of the Elitzur-Vaidman bomb experiment in his toy theory by constructing a version for field theory.

Next, there was a talk by some complete crackpot called Matt Leifer. He talked about this.

Frank Schroeck gave an overview of his formulation of quantum mechanics on phase space, which did pique my interest, but 10 minutes was really too short to do it justice. Someday I’ll read his book.

Chris Fuchs gave a talk which was surprisingly not the same as his usual quantum Bayesian propaganda speech. It contained some new results about Symmetric Informationally Complete POVMs, including the fact that the states the POVM elements are proportional to are minimum uncertainty states with respect to mutually unbiased bases. This should be hitting an arXiv near you very soon.
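
For readers who haven’t met the term, recall the standard definition (just the textbook definition, nothing specific to the new results Fuchs described): a SIC-POVM in dimension d is a set of d² subnormalized rank-one projectors onto equiangular states,

  \[ \Pi_k = \tfrac{1}{d}\, |\psi_k\rangle\langle\psi_k|, \qquad k = 1, \dots, d^2, \qquad \sum_{k} \Pi_k = I, \qquad |\langle \psi_j | \psi_k \rangle|^2 = \frac{1}{d+1} \;\; (j \neq k). \]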

Caslav Brukner talked about his recent work on the emergence of classicality via coarse graining. I’ve mentioned it before on this blog, and it’s definitely a topic I’m becoming much more interested in.

Later on, Jeff Tollaksen talked about generalizing a theorem proved by Rob Spekkens and myself about pre- and post-selected quantum systems to the case of weak measurements. I’m not sure I agree with the particular spin he gives on it, especially his idea of “quantum contextuality”, but you can decide for yourself by reading this.

Jan-Åke Larsson gave a very comprehensible talk about a “loophole” (he prefers the term “experimental problem”) in Bell inequality tests to do with coincidence times of photon detection. You can deal with it by having a detection efficiency just a few percent higher than that needed to overcome the detection loophole. Read all about it here.

Most of the rest of the talks in this session were more quantum information oriented, but I suppose you can argue they were at the foundational end of quantum information. Animesh Datta talked about the role of entanglement in the Knill-Laflamme model of quantum computation with one pure qubit, Anil Shaji talked about using easily computable entanglement measures to put bounds on those that aren’t so easy to compute, and finally Ian Durham made some interesting observations about the connections between entropy, information and Bell inequalities.

The second foundations session was more of a mixed bag, but let me just mention a couple of the talks that appealed to me. Marcello Sarandy and Alioscia Hamma talked about generalizing the quantum adiabatic theorem to open systems, where you don’t necessarily have a Hamiltonian with well-defined eigenstates to talk about, and Kicheon Kang talked about a proposal for a quantum eraser experiment with electrons.

On Tuesday, Bill Wootters won a prize for best research at an undergraduate teaching college. He gave a great talk about his discrete Wigner functions, which included some new stuff about minimum uncertainty states and analogs of coherent states.

That’s pretty much it for the foundations talks at APS this year. It’s all quantum information from here on in. That is unless you count Zeilinger, who is talking on Thursday. He’s supposed to be talking about quantum cryptography, but perhaps he will say something about the more foundationy experiments going on in his lab as well.

