  • Bernard d’Espagnat. We have the pleasure of welcoming among us Alexei Grinbaum. During our last session, thanks to Jean-Michel Raimond’s brilliant presentation [1], we saw experimentally that physical systems which have a typically quantum behaviour at time 0+ have a classical behaviour later on. This constitutes a very strong indication, not to say an experimental proof, that there are not two types of systems, one obeying quantum physics and the other obeying classical physics, but on the contrary that there is only one physics, even if, depending on the circumstances, physical systems can appear to us under either a quantum or a classical aspect. The interaction between macroscopic systems and the environment, and the resulting decoherence, do indeed seem to be the cause of this progressive passage from the quantum to the classical world, classical mechanics being derived under certain conditions from quantum mechanics. Today, it is logically the theoretical and particularly the conceptual aspects of decoherence that we propose to explore together. A number of us here have thought about the subject. However, perhaps it is right to ask those newly present among us to speak first.

To make for a more informative read, this report has been divided into four sections:

  1. Overview of a non-standard conception of decoherence and discussion;

  2. Discussion of the standard conception of decoherence;

  3. Arguments for and against realism;

  4. Mathematical aspects of the conflict between quantum mechanics and spatio-temporal localization.

4.1 Overview of a Non-standard Conception of Decoherence and Discussion

  • Alexei Grinbaum. I have a rather unorthodox theoretical view of decoherence, which I associate with the problem of the observer. Some of you here have already heard me present this idea. To summarize, I think that decoherence seems, from a theoretical perspective, to be a phenomenon that is linked to the analysis of the complexity relation that exists between the observer and the observed system.

In my opinion, the main characteristic of the quantum observer (I summarize, of course, in words what can be described with mathematical formalism) is that it must necessarily be much more complex than the observed system. When I say complex, we can think either of the degrees of freedom used by the observer to store information, like a sort of memory, or, on a more formal level, of the Kolmogorov complexity of the observer, since to my mind the observer is essentially a system identification algorithm, and therefore has a formal characteristic that is invariant under changes of its physical implementation, namely its Kolmogorov complexity [2]. I leave to one side the mathematical details of this description.

In my opinion, decoherence appears when the number of degrees of freedom of the system observed by the observer comes close to a certain threshold characterizing the observer. By that I mean that each observer is characterized by a complexity threshold below which it can always observe the quantum system, i.e., it describes the system completely without doing any coarse graining. In other words, decoherence is linked to the complexity of the observer. The threshold for a quantum description of a system is not the same for two observers one of which is much more complex than the other [3].

Ultimately, all this results from the analysis, which I think is necessary but has not been done in the history of quantum mechanics, of the notion of system which, in a way, is pre-acquired and predefined: quantum mechanics “begins” when the observer and the system are already in place. I think it is possible, using informational language, to go one step back and analyse the system in informational terms and draw from this analysis a theoretical explanation of decoherence.

  • Jean-Michel Raimond. If I may ask a question, does decoherence as you define it do without the notion of environment?

  • Alexei Grinbaum. Not necessarily. Indeed, if the observer observes a system whose number of degrees of freedom is fixed and is much smaller than that of the observer, then we speak, in everyday language, of a closed system. In that case quantum mechanics usually works without any problems. That said, what does it mean to say that the number of degrees of freedom can increase and come close to a certain threshold? It means that the observer begins to take into account, or attempts to take into account, degrees of freedom other than those initially identified. We can say that before, these other degrees of freedom were classed as belonging to the environment.

  • Jean-Michel Raimond. If there are two observers, can there be an objective description of the system that the two agree on? In other words, if two observers are in the same room, what do they do?

  • Alexei Grinbaum. Indeed, the main result of my article is a theorem stating that observers whose complexities are not very different—and I give a mathematical criterion of what that means—will agree on the characterization of the systems. This result, on the possibility of agreement between two observers, seems to me to be very important. Among other things, it means that two observers that are very different will not characterize quantum systems in the same way.

  • Jean-Michel Raimond. There is therefore a system that is more or less big and totally decoupled from the Universe, and in this system there are one or two observers who are decoupled from the Universe and who interact together?

  • Alexei Grinbaum. I do not use the term Universe because I use an informational language. I do not take a realistic, or antirealistic, position, in the sense of the realism of physical systems. The observer observes what he identifies as a quantum system. This observer is a system identification algorithm. It is implemented physically, and can be implemented in different ways, but the informational description on its own provides interesting theoretical results.

  • Jean-Michel Raimond. Is this quantum system a perfectly isolated system, apart from its interaction with the observer? I was thinking of the problem of the environment in that sense, in the most standard view of decoherence.

  • Alexei Grinbaum. My vision does not begin with the theatre of nature as physics frequently does. I do not say that there is first of all the theatre of nature with its objects and that we use physics to account for it. My point of view regarding quantum mechanics stems from the epistemic observation that the observer interacts with systems and observes them, thereby obtaining information. “What really exists around us” has no place in this vision; consequently the notions of universe or of a system isolated from the environment are not defined. That is why I specify that this point is not orthodox, even if in reality, it is close to Bohr’s position.

  • Bernard d’Espagnat. Does the observer, in your vision of things, differ by some qualitative trait from what is being observed?

  • Alexei Grinbaum. The observer differs from the observed system by the label of observer, that is, by a feature related to complexity, not inscribed in nature as such, but implemented, for example, in the form of memory size. The difference between observer and observed system resides in the characterization of informational exchanges.

  • Bernard d’Espagnat. Your thesis that the observer differs from what is observed by a feature “not inscribed in nature as such” reminds me of Niels Bohr: according to him, an instrument is an instrument not because of its physical composition, but because we use it as an instrument.

  • Alexei Grinbaum. Exactly.

  • Bernard d’Espagnat. Is there a strong analogy between Bohr’s approach and yours?

  • Alexei Grinbaum. Absolutely. In my opinion, any informational approach, at least the one I defend, falls into a neo-Bohrian framework. It is a “neo-Bohrism”.

  • Michel Bitbol. I like your neo-Bohrian connection, which I also relate to, if perhaps in a slightly different way. The crucial characteristic of this group of positions is that the observation process belongs to a profoundly different category from what is involved in the description of physical systems. For Bohr, this categorial leap is represented by the quantum/classical boundary (the process that is observed being explained by quantum mechanics, the process of observation being explained by classical mechanics). For you, this categorial leap amounts to going from a quantum language to an informational language. In neo-Bohrian positions, the reason underlying these categorial leaps is to be found not in physics itself but in the necessities of the act of knowing: it amounts to bringing to light the conditions for the possibility of knowledge, in a typically Kantian manner. I would like to specify some of the consequences of your categorial leap.

You have insisted on the importance (particularly for decoherence processes) of the degree of complexity of the observer. You have highlighted that the degree of complexity you had in mind regarding the observer was not directly linked to its first-order constitution as a physical system, but to its higher-order capacity to memorize, analyse and conceptualize. Does this dual characterization of the observer, as a physical system and as an information-processing device, not lead you to a functionalist conception of the mind, where there is on the lowest level a certain material device, and on a higher organizational level the equivalent of software?

  • Alexei Grinbaum. I believe I can indeed avoid falling into the trap associated with the word “mind”. When I speak of the complexity of the observer as an observer, I speak of a rather precise notion, the Kolmogorov complexity, or algorithmic complexity, namely an invariant mathematical characteristic (the only one, in fact) of the system identification algorithm. First of all, the observer determines what the quantum system is. He determines the degrees of freedom of the quantum system, in order to take them into account. This process can be described as an algorithm in a very abstract sense.

Whatever the physical medium of the observer, this algorithm has an invariant characteristic which is the Kolmogorov complexity. The physical content does not play a role in this description. Indeed, we know (these theorems have been proven) that, up to a constant, it does not depend on the physical content of the given system—in the same way that a computer can run the same programme on different physical media.

In my opinion, the question of medium does not necessarily lead to dualism, and is simply not relevant to this description. This level of description has nothing to say about the concrete physical system determined by the observer—who can be a human being, a butterfly, the entire planet, etc. Once the observer identifies a quantum system, there are invariant characteristics in the form of an algorithm that characterize the systems. That is sufficient.
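
For reference, the invariance property invoked here is the standard invariance theorem of algorithmic information theory. Writing $K_U$ and $K_V$ for the Kolmogorov complexities defined with respect to two universal machines $U$ and $V$ (notation introduced only for this illustration, not used by the speakers), there is a constant $c_{U,V}$, independent of the object described, such that

$$K_U(x) \;\le\; K_V(x) + c_{U,V} \qquad \text{for every finite string } x,$$

so the complexity of the identification algorithm is well defined up to an additive constant, whatever machine or physical medium implements it.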

  • Bernard d’Espagnat. An “old school” realist—a school of thought I personally do not adhere to, but I know and have known many of its followers, in particular John Bell—would probably ask you if your analysis as a whole is compatible with his views and can be accepted by him. “Old school” realists consider that things, like atoms, exist in themselves. They exist with their properties, completely independently of the question of knowing whether there are conscious beings that could know them. Can we transpose your general conception to a language that would call upon only the be-ables [4] of John Bell? That is the question.

  • Alexei Grinbaum. Indeed. In the same way that a word processor can be completely described in the language of the atoms that make up the computer, or likewise of its transistors and their physical states—a description that is useless for understanding how the word processor works—an observer can be described, to quote Einstein, by a “constructive theory”. However, this does not help us understand quantum mechanics.

  • Bernard d’Espagnat. It does not help, but do you think it is compatible? Therein lies the question.

  • Alexei Grinbaum. I think each observer, while being a physical observer, can be described as a physical system. This does not provide us with information regarding its capacities as a quantum observer, in the same way that the description of a computer as a physical system does not tell us anything about the word processor it operates.

  • Jean Petitot. That is exactly the definition of functionalism in cognitive theories.

  • Alexei Grinbaum. There is the question of the medium. What carries the function?

  • Jean Petitot. The medium on which we implement the algorithm is not relevant to what the algorithm does as an information processing algorithm.

  • Alexei Grinbaum. I agree on this point.

  • Jean Petitot. That is really the definition of functionalism. This brings us back to the question Michel Bitbol asked earlier.

  • Alexei Grinbaum. Not entirely, because there is no mention of the mind.

  • Jean Petitot. The mind is simply the mental process, which stands to the neuronal process in precisely this functionalist relation.

  • Michel Bitbol. It appears that there are different conceptions of the mind at play here. Alexei, you probably fear (justifiably perhaps) that invoking the mind necessarily leads to associating consciousness with it. However, in the functionalist paradigm, the mind is defined in the first instance only as a set of information-processing and decision-making functions that are implementable on all types of physical media. The term “mind” is used specifically as a marker of a functional and informational level of organization, as opposed to the level of its basic substrate.

  • Alexei Grinbaum. The element that allows us to distinguish these two levels is, I believe, the effectiveness of a given description for constructing theories. The only reason that could make us take this step is to explain things that could not be explained otherwise.

  • Michel Bitbol. What you have said is very important. It means that a process as important for quantum physics as decoherence cannot be explained if we describe the observer only in terms of a physical system: we must assign functional properties to it, properties that pertain to the software rather than to the hardware. We must perhaps even assign to it projects, goals, for example that of extracting a fraction of what appears and treating it as a physical system.

  • Alexei Grinbaum. I would not go as far as saying “goals”.

  • Michel Bitbol. Nevertheless, is there not a form of circularity in your approach? On the one hand, to derive the process of decoherence, we must from the start call upon a non-physical level of description of the observer (of an informational and algorithmic order). On the other hand, we ask decoherence to account for a level of organisation above what is described by quantum physics: i.e., precisely that which allows an informational and algorithmic description.

  • Alexei Grinbaum. This circle is one of the explanatory circles known elsewhere, which appear each time we are dealing with a principle theory, meaning a theory based on principles or postulates. I am thinking about the distinction between principle theories and constructive theories; it has been the subject of discussion for over a century, and you no doubt have much to say about it.

A constructive theory begins with a fundamental physical level that we believe is part of reality, and constructs a theory from this reality, whereas a principle theory always works in a circle: where do the principles on which we base our theory come from? They stem from our own analysis of the systems around us. We then “raise” them, as Einstein would say, to the rank of principles. In the same way, I raise certain things to the rank of principles—things which come from the physical experiences we all have.

  • Bernard d’Espagnat. From the physical experiences we all have, in other words, without any preconception regarding the nature of what creates the experience in us. Is that what you mean?

  • Alexei Grinbaum. In a pragmatic sense and not necessarily in an empirical sense.

  • Bernard d’Espagnat. OK.

  • Alexei Grinbaum. This is not to defend a type of empiricism, but a pragmatic reason for favouring one principle over another.

  • Hervé Zwirn. The concept of “the Kolmogorov complexity of the algorithm by which the observer defines the system” seems unclear to me. It is something that seems to me to be extremely difficult to define rigorously. We know how to define accurately the Kolmogorov complexity of a string of characters or bits.

However, it seems to me that we will encounter great difficulties in defining what characterizes an observer as a system to which we would attribute a Kolmogorov complexity. I have my doubts concerning even the meaning of the expression “the Kolmogorov complexity of the observer”.

  • Alexei Grinbaum. What Kolmogorov complexity are we talking about? That of the observer as an algorithm defining the observed system. What does “defining the system” mean? The image I use of this process is the following. Imagine a long strip of tape, like in a Turing machine, with all possible degrees of freedom. This algorithm consists of putting a cross next to the relevant degrees of freedom. This vision is obviously very abstract, as abstract as a Turing machine.

For example, when a human observer indicates that he is observing an electron, this means that this electron has a certain number of degrees of freedom. However, does a fullerene define a quantum system, such as a photon, in the same way a human would? We are currently debating this subject with colleagues in Vienna. Can we observe differences at the thermodynamic level? We have already expressed certain ideas on this matter.

The observer’s memory can consist of a certain number of degrees of freedom, realized in various ways. What the observer does, as an algorithm that defines systems, is to indicate which degrees of freedom he will observe. That is the algorithm whose Kolmogorov complexity is invariant, at an abstract level. A man does not define the degrees of freedom in the same way as a fullerene would, but a fullerene can also do this.

  • Hervé Zwirn. Indeed, but I believe there is a significant difference between an algorithm that would consist of putting crosses on a list of degrees of freedom (which can effectively be done with a Turing machine with the right programme) and what we mean by “observer”. It is not the same thing. I have somewhat lost the thread of the discussion regarding the question we were trying to answer.

Let us come back to the end of Jean-Michel Raimond’s presentation from the previous session. We set the problem surrounding decoherence, not experimentally, but philosophically, as being a necessity, firstly, to explain the appearance of the classical world to human observers, and secondly, to know whether this classical appearance was simply an appearance or whether the world had really—for the realists—become classical. The debate, at least as it was presented during the previous session, consisted of identifying two questions: why does the world appear classical and is it really classical since it appears as such?

I am struggling to see which question is being addressed with what you are proposing and whether there are links with the questions we were asking, which are the usual questions when decoherence is mentioned. These are the questions that Wojciech Zurek himself mentioned at the start of his articles [5]. Zurek’s first position was extreme in the sense that he concluded his first articles by stating that the world does become classical and the problem is, in fact, solved. Mr. d’Espagnat discussed this with him—and he was not the only one to do so. Zurek’s second position was consequently a moderate conception of decoherence. Let me repeat, I struggle to find a link between this and what you propose.

  • Alexei Grinbaum. I can see the irony here. Indeed, what I propose corresponds rather well, even if I propose some changes, to the work Zurek did much later, around 1994.

I will attempt to answer your first question, leaving aside the second question, which requires a different type of reasoning. In the first question, you mention the human observer. I was not at the previous session, but I think the question of the human observer needs to be broadened. We must, first of all, ask ourselves whether the observer is necessarily human or not. Then there is the question of knowing what its minimal characteristics are. How can we understand this notion of observer? Scientifically, we need to give the minimal informational characteristics of what an observer is. My answer would be to say that the observer is a system identification algorithm. Period. That is my definition.

From there, I try to conceptualize the notion of decoherence. That is my line of reasoning.

4.2 Discussion of the Standard Conception of Decoherence

  • Bernard d’Espagnat. The research on algorithmic complexity applied to the observer is undoubtedly promising and we will have the opportunity to come back to it. However, I believe, like Hervé Zwirn, that it must not make us forget the questions we were asking concerning the standard conception of decoherence, founded on the notion of environment. You have mentioned Zurek and his recent papers. This may be the occasion for me to tell you what I made of his publications, in particular of his important 2003 paper [6].

  • Jean-Michel Raimond. The one in Reviews of Modern Physics?

  • Bernard d’Espagnat. Yes, that one. I have three points to make regarding this article.

First of all, Zurek writes on page 51 of the arXiv version: “many conceptual and technical issues (such as what constitutes ‘a system’) are still open”, backing up the comment he makes as early as page 4 that we must accept the existence of the environment, in other words, the distinction between system and environment. This shows that this article does not claim to solve the issue which, during our previous sessions, appeared to be still open to more or less all of us, i.e., the problem of the existence of systems.

  • Jean-Michel Raimond. Except that if we abandon this existence, we have a slight problem…

  • Bernard d’Espagnat. Yes indeed, as decoherence is based on this distinction, and therefore ultimately, so is the theoretical resolution of the cat paradox. On this first point, I would like to say here that, in my opinion, within the framework of standard quantum mechanics, the existence of systems is an appearance to us; they must not be considered as existing per se. I find in Zurek’s article an echo of this idea, since he writes on page 4 that: “Einselection delineates how much of the Universe will appear classical to observers who monitor it from within using their limited capacity to acquire, store, and process information”.

This was my first point, which amounts to highlighting that apparently we are not capable of knowing reality per se, and that we can only know the appearances that are valid for everyone. The two go in the same direction.

The second point deals with the way that, implicitly, Zurek brings a type of solution to the “conceptual” problem raised by some of us last time, namely the passage from an “and” to an “or”, and more precisely the passage to an “or” understood as the choice of one eventuality among others. How and where does he do this? He does this in his paper at the place where, with the aim of proving Born’s rule, he focuses on probabilities. He shows first of all (from quantum principles not involving probabilities) that the measurement outcome of an observable can only be one of the eigenvalues of this observable. Then (p. 37) he examines the case where multiple outcomes are possible, and in particular the case where the coefficients of the different components of the wave function of the system, expanded in the eigenvectors of the measured observable, have the same absolute value. He still speaks, of course, of the uniqueness of the measurement outcome, since he accepts implicitly (and this, for me, is the essential point!) the self-evident fact that a measurement has only one outcome. If two outcomes are possible, then it must be either one or the other. He shows, explicitly this time, that in that case it will be with a 50/50 probability. As you can see, for this passage from the “and” to the “or” (and I stress once again that “or” is used in the strong sense of the term, namely implying a truly random choice between multiple eventualities), the notion of measurement as such, implying the idea that an outcome is necessarily unique, plays an essential role in his article. From my point of view, I cannot see what could replace, to this end, the notion of measurement carried out by an agent.
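
In schematic form, the equal-amplitude case described here can be written as follows (a sketch in standard notation, not a quotation from Zurek’s paper): after the measurement interaction the joint state is

$$|\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|s_1\rangle|A_1\rangle + |s_2\rangle|A_2\rangle\bigr),$$

where $|s_1\rangle, |s_2\rangle$ are eigenstates of the measured observable and $|A_1\rangle, |A_2\rangle$ the corresponding records. Once it is granted, as Zurek implicitly grants, that a measurement has exactly one outcome, the perfect symmetry between the two branches leaves no ground for favouring one over the other, and each must be assigned probability 1/2.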

My third point deals with the notions of event and history, for which Zurek introduced that of “relatively objective past”. He writes: “When many observers can independently gather compatible evidence concerning an event, we call it relatively objective. Relatively objective history is then a time-ordered sequence of relatively objective events.” It seems to me that we have here the well-known antirealistic interpretation: to say that an event took place at a given time means that many of us have documents in support of this, but nothing more.

As you can see, all this leads to what is sometimes called “weak objectivity”. Zurek calls it “relative objectivity”, but this is not a question of words. As Pascal wrote in the Provinciales: “I never quarrel about a name, provided I am apprised of the sense in which it is understood”. Therefore Zurek, despite appearances to the contrary (a quick reading of his work may give the impression that he wants to return to traditional realism), does not come back to this type of realism, nor does he want to.

Such is my conception. It seems to me that in certain ways, that of Alexei Grinbaum is not far off.

  • Alexei Grinbaum. I try to go a bit further, all the while accepting Zurek’s reasoning. When he says we do not know what the notion of system means and the question remains open, he uses an algorithmic argument, the Kolmogorov complexity, to study the change of state of the system. He asks the question of knowing what the complexity of this algorithm, which passes from a given state to another, is. This is where he uses algorithmic ideas.

I think that prior to that, before speaking of states of a quantum system, along the same lines as Zurek, we must apply an algorithmic reasoning to the question of the notion of system. As for everything else, I think that we can perfectly follow Zurek, and that we are not talking about realism in the sense of Bell. The keyword, with Zurek, is “relatively”: everything is relative to the observer.

  • Bernard d’Espagnat. That is right, everything is relative to the observer.

  • Jean-Michel Raimond. The important contribution of this paper is to explain that there is a relative objectivity with multiple independent observers, which are all part of the Universe and share parts of the environment. All these parts of the environment provide the same information on the system if the latter is in a pointer state. There is therefore a common objective reality for these observers.

  • Alexei Grinbaum. Absolutely.

  • Jean-Michel Raimond. It is an interesting point.

  • Alexei Grinbaum. From there, as I have tried to show in my paper, we can speak of objectivity in relation to a class of observers. The question then arises of the boundaries of this class for shared objectivity to have any meaning.

  • Jean-Michel Raimond. In your opinion, a single spin of the environment is not an acceptable observer because it is not complex enough to have unambiguous and classical information on the state of the system?

  • Alexei Grinbaum. Indeed. For instance, how far can we go while maintaining the same notion of objectivity?

  • Bernard d’Espagnat. It seems to me there has been much technical, and even conceptual, progress in what has been achieved by many people—Zurek, you, and many others. However, ultimately, from a philosophical point of view, all this is closer to the line of reasoning of Bohr than of traditional realists.

  • Alexei Grinbaum. Yes.

  • Jean-Michel Raimond. If all the observers of the Universe, whatever they are, provided they are sufficiently complex, agree on a reality, then this reality takes on an objectivity that appears strong rather than weak.

  • Bernard d’Espagnat. I called this objectivity “weak” in my writings because in itself it is not capable of conferring any meaning on seemingly obvious claims such as: “the Sun would exist even if no observer had ever existed”; and as a consequence I needed an adjective to distinguish it from the—probably illusory!—objectivity of conventional realism, which considers the claim in question to be sensible and even self-evident. Basically, my aim was to show that there are two possible conceptions of objectivity, not one as commonly thought, and it is wrong to think they are the same. That is why I gave the name of strong objectivity to that of conventional realism. Zurek calls the objectivity I call weak “relative”, implying relative to the observer. That is all very well. For the sake of clarity, it might even be better since, as you pointed out, this objectivity is nonetheless extremely strong.

  • Olivier Rey. I have a problem: if we speak on the one hand of objectivity arising from an agreement between observers, and on the other hand, if we extend the class of observers to include a huge number of things including molecules, how do you observe alongside or agree with a molecule?

  • Alexei Grinbaum. I think this agreement is neither given nor something obvious to be proven. I think we need first to ask whether we can categorize observers into classes within which agreement is possible. Frankly, I think that, for me as an observer, a fullerene does not provide the same idea of objectivity as a human observer does.

I then asked myself whether we could imagine a physical experiment that would corroborate the idea that a fullerene is an observer. I have tried to do this—but this is not the topic. Anyway, the agreement between different observers is not an agreement between all observers but between certain classes of observers.

  • Bertrand Saint-Sernin. I would like to take part in this discussion to ask you for information or advice. I must speak tomorrow morning in front of the French Society for Plant Biology about the difference between nature and artifice, and about the particular problem of genetically modified organisms (GMOs). You know there is a position that is specific to France, which is the refusal of GMOs. Now, one of the arguments used touches upon the notion of realism. In essence, the problem is the following: can something obtained artificially or through modifications in the laboratory have the same properties and be considered identical to something that has evolved under so-called natural conditions?

The problem is therefore not that of the observer, but that of the user. We notice something quite peculiar: anti-GMO advocates who have cancer, heart disease or diabetes have no problem using insulin or blood-thinning drugs, etc. Yet they are anxious when it comes to food.

The first question I would like to ask is this: what is the point of view of the chemist regarding synthetically derived compounds, of which there have been, from what I have heard, 22 million types since 1928? In my opinion, the problem of realism is especially relevant to something that has been carried out on a massive scale across the world’s population. The question is how to provide theoretical justifications for saying either that they are right to be realists or that they are wrong. Are they right from the perspective of chemistry? Are they right from the perspective of biology? I do not know.

  • Olivier Rey. We must distinguish between situations. In the case of insulin for diabetics, it is not a GMO that is absorbed but only what it produces. That is, genetically modified yeast, grown and maintained in the laboratory, is used to produce insulin molecules that are exactly identical to those synthesized directly by the human body. In the case of GM food, corn for example, we eat the genetically modified organism itself, which has a different molecular composition from that of traditional corn since its DNA is different. The impact of this difference on the consumer or the environment is another matter.

  • Bertrand Saint-Sernin. Yes indeed, but the question I ask is this: is it right to claim that synthetically-derived chemical molecules, proven to be identical to the naturally-occurring molecules, have the same effect as those? Is this line of reasoning valid, or does quantum mechanics change this perspective?

  • Jean-Michel Raimond. A molecule with the same chemical composition and the same conformation is the same molecule. Quantum mechanics does not say anything different. Unless you are an animist, a molecule produced by whatever means (a GMO, explicit organic synthesis, or natural synthesis) has exactly the same functions and properties.

  • Bertrand Saint-Sernin. This is accepted in America, but not in France.

  • Bernard d’Espagnat. Jean-Michel Raimond is of course completely right. However, I would like to specify the relation between Bertrand Saint-Sernin’s question and what we are debating today. I would say that, following the terminology we arrived at earlier, if we believe that quantum mechanics is a universal theory, the realism of chemists as well as of pro- or anti-GM camps necessarily refers back to the objectivity I called “weak” and which Zurek called “relative”. All phenomena, like this table or all the elements present in this room, are appearances that are the same for all human beings and probably for all conscious beings. They belong to the reality that is relative to the notion of the observer. What I suggest here is only that, in the light of the constitutive principles of standard quantum mechanics, they cannot be considered as objective in what I called the “strong” sense, meaning in the sense of the term “objective” given by common realism.

Consequently, I do not believe that quantum mechanics has anything particular to say regarding the problem you have raised. That is my first conclusion; my second conclusion being, I repeat, that on this point I completely agree with Jean-Michel Raimond. There is no difference.

  • Hervé Zwirn. If we come back to the questions we were asking previously, I would like to present a very simple point of view to see if we share it or not. It is about the description of the problem as we defined it at the end of the last session.

We asked ourselves whether the problem of measurement (which consists, in orthodox quantum mechanics without decoherence, in being unable to escape the chain of successive entanglements produced by each new interaction between the initial system and a measuring device, the observer, etc., a chain which seems to suggest that the observer is in a superposed state) is resolved by decoherence breaking the chain. We asked ourselves whether we needed an observer that was conscious or not, and whether the outcome can be interpreted as being real in the sense of strong realism, or whether we must consider that, beyond appearances, the world remains profoundly quantum. These are the questions we were asking ourselves.

Jean-Michel Raimond had strongly insisted, and rightly so, that the phrase “for all practical purposes” meant that we can consider all that happens as if it was classical. The debate we had at that point consisted of saying that on a practical level we obviously agreed, while asking ourselves whether it was possible to claim, philosophically speaking, that the conclusion is that the world is classical or not.

I have a proposition to make. It is very simple. I would simply like to ask whether we agree on the following process. When a system is coupled to a measuring apparatus and to the environment, we have a big system S + A + E comprising the system itself, the measuring apparatus, and the environment. This big system is in a superposed state and its density operator contains off-diagonal elements. The usual rules of quantum mechanics state that if an observer takes a measurement without measuring the degrees of freedom of the environment (such measurements would involve observables that are inaccessible to the observer), the system S + A will be described by taking the partial trace of the density operator of the big system over the environment, and calculations show that, in general, the off-diagonal elements of this reduced operator become very small very rapidly (on a timescale known as the “decoherence time”) and remain that way for an extremely long time. The description of the system S + A is practically equivalent, for an observer who does not carry out unfeasible measurements, to that given by a diagonal density operator. Last time, we debated the off-diagonal elements, which can eventually become non-negligible again after a time that is possibly greater than the age of the universe—thus we left this topic to one side.
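
In symbols, the construction just outlined reads as follows (a schematic rendering; the product states and the environment overlaps are written here only for illustration). If the big system ends up in the entangled state

$$|\Psi\rangle \;=\; \sum_i c_i\, |s_i\rangle |a_i\rangle |e_i\rangle,$$

then the state accessible to an observer who does not measure the environment is the reduced density operator

$$\rho_{SA} \;=\; \mathrm{Tr}_E\, |\Psi\rangle\langle\Psi| \;=\; \sum_{i,j} c_i c_j^{*}\, \langle e_j | e_i \rangle\, |s_i a_i\rangle\langle s_j a_j|,$$

whose off-diagonal elements are weighted by the environment overlaps $\langle e_j | e_i \rangle$; these overlaps fall to nearly zero on the decoherence timescale, leaving $\rho_{SA}$ practically diagonal.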

Since the density operator that describes the system as it is accessible to the observer is practically equivalent, in all its observable consequences, to a diagonal density operator, do we agree to say that in reality, in the strong sense of the term, the world remains quantum and can be superposed, but that this is not important because we cannot see it that way? This is somewhat analogous to what happens with relativity: the world is not a classical world, it is a relativistic world. However, at low speeds, the relativistic effects being completely invisible, the world seems classical to us. We would have an identical situation here: the world is in fact quantum, but the quantum effects being undetectable at our scale, for this type of measurement, it appears classical to us.

We could therefore reconcile the points of view from our previous discussion by considering that the world is quantum and never becomes classical in the old sense of the term, but that its quantum character has no visible effect—the world thus appears classical to us. One of the problems, it seems to me, in this philosophical discussion, is that very often we think that if something is considered quantum, then this must have a surprising visible effect, different from the classical world. In fact, what decoherence shows is that a system can be totally quantum while having a behaviour that appears classical to us, without this posing the slightest problem.

Do we agree or not on this point?

  • Bernard d’Espagnat. Let us vote! I for one am for it.

  • Jean-Michel Raimond. I must say I would not like a theory where the observer is necessarily defined by consciousness. That the observer must be a complex system, that he is a part of the environment seems to me to be a reasonable approach. In which case, this part of the environment can be the 60 × 3 degrees of freedom of a fullerene or a normally constituted Doctoral student. Anything between the two would seem a reasonable observer.

Effectively, I have the impression that you propose a “decoherence plus Everett” approach. We know that everything is superposed. We know that we are in the wave function of the Universe. But I, the observer, you, the observers, observing the same phenomenon, and all other reasonable observers, observing copies of the same phenomenon scattered in the environment, agree on the fact that this phenomenon has an objective reality and that the measured observable has an objective value.

  • Bernard d’Espagnat. A relative value.

  • Jean-Michel Raimond. In our Universe, we completely agree on everything, such that if everything in the Universe agrees to say that the electron spin is positive, why not say that it is the real reality?

  • Bernard d’Espagnat. It’s a matter of convention. Indeed, why not say that when everything in the Universe that possesses the quality of observer agrees with a certain observation, what is observed is a “real” reality. That is what many, including me, call “empirical reality”, in which all living things are immersed, and what many philosophers just call “reality”. That being said, it is worth differentiating between this notion and that of a reality “per se” that would exist even if no observer existed. It is a priori conceivable that these two notions have the same field of reference, but this is a postulate and not a truism. And it is a postulate that is difficult to reconcile with standard quantum mechanics, even complemented with decoherence.

  • Jean-Michel Raimond. If I may say so, the very “Zurekian” notion of the existence of a complex environment split into multiple parts that all agree on the state of the system (this is what Zurek says, quite rightly I think, in his papers from the 2000s) means that it is not defined until there is an observer, but that the Universe in its entirety agrees to consider that this sub-part of the Universe is in that state.

  • Bernard d’Espagnat. The Zurekian notion you speak of appears as a consequence of his demonstration, effectively very enlightening, of the fact that the non-isolation of macroscopic systems leads to the publicly known existence of robust observables, in the sense that once one of them has been measured by someone, all other observers know they will be able to measure it again, possibly in turn, without changing the values. And that is true even if we take only indirect measurements on intermediate objects, hence this universal agreement you mentioned. It is undeniable that this agreement provides observers with a very strong feeling for the reality of what they observe. Nevertheless the problem of the passage from “and” to “or” is still not resolved, as it is a more fundamental question, which arises prior to this. Zurek resolves this question implicitly by calling upon the notion of branches of the Universe, in other words Everett’s theory (“Distinct memory states label and “inhabit” different branches of the Everett’s “Many Worlds” Universe” [7]). It is effectively a possible solution, but I personally do not adhere to the theory of multiple worlds, in which I see an attempt at a metaphysical explanation comparable in its detail to many previous metaphysical attempts, and which, like them, lacks credibility precisely for that reason.

  • Hervé Zwirn. On this point, there is an alternative reasoning.

We can either say that decoherence takes place “in the manner of Everett”, meaning there is only one wave function with coexistence of all the possibilities, or we can say that there is at a given time a choice, within the “or”, of one among all the possible states, without this coexistence—even though this brings about many problems. That is the alternative: we can make either one choice or the other, and both pose problems of a different nature.

In addition, even if we accept Everett’s picture, in which everything coexists and each branch of the Universe corresponding to a choice is in agreement with itself, it does not imply that reality as it is described becomes objective (this takes us back to our previous discussion), simply because the off-diagonal elements, inaccessible to us but not strictly null, come back to the fore. We can then think that there is a difference between the fact that the spin is rigorously up and the fact that it is practically up, with off-diagonal elements that will (even if only after a very, very long time) become important again. There is a nuance here. The debate we had last time on small probabilities regains its significance: either we consider that small probabilities have no meaning and there is a cut-off point below which we set them at zero; or we consider that, no matter how small, they still have meaning.

  • Jean-Michel Raimond. From the observers’ point of view, the two possibilities (either there is a global wave function and we are on one of its branches, or there has effectively been a choice) are indistinguishable. The question, of course, is to know whether we can devise experiments to discriminate between these points of view. There have been many proposals, based on stochastic quantum mechanics and other approaches, that perform a reduction of the wave packet in a way that is not experimentally detectable in the current state of the art but could become so. As of yet, we have not managed to do so.

  • Alexei Grinbaum. Decoherence is part of physics.

  • Hervé Zwirn. We agree.

  • Alexei Grinbaum. Then any interpretation of quantum mechanics among all the interpretations we have known for decades will do: each accommodates perfectly well the existence of decoherence phenomena. I think Zurek confuses matters—and not only him, but Murray Gell-Mann and James Hartle for example. Indeed, in my opinion, their description of physical phenomena and their philosophical interpretation of quantum mechanics are too close to each other. What Zurek describes is his interpretation of quantum physics. It is not something required by the existence of the decoherence phenomenon. Consequently, I am not sure we need to seek an agreement among us: each can have his own favoured interpretation.

  • Hervé Zwirn. Allow me to repeat myself: I just wanted to know if we agreed on the fact that the decoherence mechanism, as I described it in a simple manner earlier, provides an explanation for the appearance of the world to a human observer. This is the first step. Many other questions ensue. Simply, regarding the question of why the world appears to us as it does, which was problematic without decoherence (many hypotheses, including the reduction of the wave packet by consciousness, have now been mostly abandoned by physicists), do we agree that this problem is practically resolved by the decoherence mechanism? Of course, this does not provide a definitive solution to the problem of realism.

  • Alexei Grinbaum. Provided that we agree to consider this problem of the appearance or occurrence of the world as we see it as distinct from the measurement problem. The measurement problem, for me anyway, is not resolved by decoherence. The problem of understanding what happens for all practical purposes is resolved by decoherence. However, it is not the same thing as the measurement problem, which is not resolved. The choice of the preferred basis, the passage from “and” to “or”, etc., all these reformulations of the measurement problem, are not…

  • Hervé Zwirn. …the choice of the preferred basis is settled.

  • Alexei Grinbaum. Yes, the choice of the preferred basis depends on the observer…

  • Jean-Michel Raimond. …who depends on the environment.

  • Alexei Grinbaum. Who depends on the environment, but…

  • Jean-Michel Raimond. … I believe the rather clever idea of Zurek’s article was to say that the observer never directly interacts with the system. The environment is between him and the system, the observer being in fact only a part of the environment, which interacts indirectly with the system. The dynamics of the system/environment interactions are what determine the preferred basis. This is both an experimental matter and, I think, it provides for all practical purposes a solution to the question of the preferred basis.

  • Hervé Zwirn. The problem of the “or” at the final stage remains unresolved. Unless we remain in Everett’s model where we say that everything coexists but we do not take that into account.

  • Jean-Michel Raimond. It is a way of “sweeping things under the carpet”…

  • Bernard d’Espagnat. As I was saying earlier, I think the use of the notion of conscious observer allows us to do better.

4.3 Arguments for and Against Realism

  • Michel Bitbol. I have a question I would like to ask you as a group, and in particular the decoherence specialists. We have mentioned two different aspects of the measurement problem: (1) we have spoken of the disappearance, for all practical purposes, of off-diagonal terms in the density matrix, and (2) we have also spoken of the ability, or rather the inability, of decoherence theory to resolve the so-called “and/or” question, meaning the passage from a superposition of states of an observable to a disjunction of singular values of this observable. The question I would like to ask you is the following: is there a link between the two?

To formulate my question more precisely, I would like to ask you this: if decoherence was really capable of making off-diagonal terms of the density matrix disappear, if it could really impose a value strictly equal to zero, would you consider the “and/or” problem resolved?

  • Jean-Michel Raimond. I am very pragmatic and very “for all practical purposes” (because something is wrong if you are not “for all practical purposes” in the lab). I would say that for all practical purposes, decoherence resolves the “and/or” problem. For all practical purposes and for any experiment conceivable by man or any normally constituted extra-terrestrial. It claims that the density matrix is, for all practical purposes, diagonal.

  • Michel Bitbol. Bear in mind, I was pushing the problem to its limits, by saying: “let us accept that the density matrix is really diagonal, not just for practical purposes, give or take negligible values, but strictly diagonal”. Would your answer to the question “is the and/or problem resolved” be affirmative? Allow me to ask you again.

  • Jean-Michel Raimond. This clearly does not resolve the problem of choice.

  • Hervé Zwirn. We agree.

  • Jean-Michel Raimond. We can perfectly well reason “à la Everett”, where all the branches are realized. Either in the entire Universe, if there is no interpretation “à la Everett”, or in each of the branches, everyone agrees on what took place.

  • Michel Bitbol. We clearly agree on this point. But please note that in my opinion, the “or” problem is not really different from what you call the problem of choice. Indeed, if one “or” another possible measurement outcome of an observable is obtained, this means there is only one, chosen among all possible outcomes, which is achieved although we do not yet know which one it is.

  • Bernard d’Espagnat. I have wondered, as I mentioned earlier, how Zurek would resolve this problem. Ultimately, in his previously cited 2003 article, he resolves it implicitly through this obvious observation—provided that we introduce the notion of observer and that of measurement—that a measurement has only one outcome. When there are N possibilities, as is the case once the density matrix has been diagonalized, Zurek says explicitly that, since a measurement outcome can only be one of the eigenvalues of the observable, it is necessarily identical to one or the other if N = 2 (or to one of them in the general case), with equal probabilities for each when the coefficients of the wave function are equal. It is the first step in his attempt to demonstrate Born’s rule.

I found that presented in that way, his reasoning was correct. However, I can see that it is also fundamentally derived from the notion of measurement per se. Unless we resort (somewhat problematically) to Everett, no “purely physical” interaction would give you this, it seems to me. We need measurement.

  • Michel Bitbol. I would like to add a point to make you understand what my motivation was when I asked about the resolution (or lack of resolution) of the problem of choice through decoherence. This question seems almost naive when taken on its own, but it takes on another dimension when you relate it to a certain probabilistic idea of the status of quantum formalism. Let us suppose therefore that quantum formalism, through decoherence theory, produces a rigorously diagonal density matrix (with strictly null off-diagonal terms). In that case, the quantum formalism is exactly like classical probability theory with its Kolmogorovian axiomatics. Yet no one has ever asked classical probability theory to designate the choice that is effectively observed in the laboratory: we do not even ask it to justify that a particular choice is made among all those possible, as this is taken for granted from the moment we accept that one or the other is achieved; we only ask it to determine a priori the probability of each possible choice. It is therefore surprising that, even at the boundary where quantum theory meets classical probability theory, we still ask the former to justify on its own why a particular choice is made. This is why I am perplexed.
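
To make the comparison explicit (a schematic statement under the hypothesis just granted): a strictly diagonal density matrix in the preferred basis,

$$\rho \;=\; \sum_i p_i\, |i\rangle\langle i|, \qquad p_i \ge 0, \quad \sum_i p_i = 1,$$

is formally nothing more than a classical probability distribution $\{p_i\}$ over the possible outcomes $i$, satisfying the Kolmogorov axioms; and classical probability theory is never asked to say which outcome is actually realized, only how probable each one is.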

  • Jean-Michel Raimond. I think that this vision of things gives quantum mechanics exactly the same status as ordinary statistical physics. We know there are an infinite number of microscopic realities for the same macroscopic state. We do not know which one is achieved, but we know only one is achieved. I believe we could manage by simply suggesting that a first postulate be added to the postulates of quantum mechanics, saying that if all the observers of the Universe agree on one physical reality, then this reality is unique. Period.

  • Bernard d’Espagnat. I like your “first postulate” all the more in that it is the one made by all the antirealist philosophers starting with… Schrödinger. He added that under these conditions there was no need, for science to be done rigorously, to implicitly add (just by using the word “reality”) the metaphysical, unverifiable and now problematic (non-separability) postulate that this physical reality exists “per se”, meaning that it would exist as we apprehend it even if there were not, and never had been, an observer.

However, I am less convinced by the equivalence of status that you are suggesting, following Michel Bitbol’s comment and within the context of his hypothesis. Even within this framework, I do not see it being achieved, and that is because of the “ontological” realism implicitly postulated in many presentations of classical statistical physics (with the exception, I think, of Gibbs-like presentations).

Admittedly, nothing stops us from conceiving a mechanics that would be both realistic (i.e., ontologically interpretable) and fundamentally non-deterministic. In such a theory, certain events would have an intrinsic probability. For example, during a measurement and for certain states of the measured system, the pointer would have, independently of any consideration of the environment and of decoherence, a certain probability to move “as one block” to the right and the complementary probability to move “as one block” to the left (this is similar to the idea of “propensity”). In such a theory, there would be no need to explain the “or” using the notions of observer, measurement, etc. There would be no need because in such a theory the “or” is introduced, in a way, “by hand” as an integral part of the axiomatics. However, this theory agrees with neither quantum mechanics nor experiment (your experiments show that at time 0+ it would be impossible to assign a position or even a determinate form to the pointer). With quantum mechanics, we are dealing with a completely different theory, where it is in principle always possible to push back the “or” (i.e., non-determinism) and at the same time to assign a credible form to the system, up to the point where an observer takes a measurement. It is therefore not surprising that the problem of the “or”, which is non-existent in a theory that sets the “or” at the start, is a real problem in a theory with such characteristics, a problem that is furthermore linked to measurement.

  • Jean-Michel Raimond. We need to have choices without determinism.

  • Bertrand Saint-Sernin. Could I make just one point? Historically, the 19th-century philosopher [8] who was the first to explicitly investigate the notion of realism using the experiments of synthetic chemistry was a probability theoretician. He thought that contingency was part of nature’s make-up. He was not at all a determinist. He criticized Laplace’s demon in very strong terms.

  • Alexei Grinbaum. From this point of view, Jean-Michel (Raimond), it is absolutely true that there would be no difference between statistical physics and quantum mechanics when the off-diagonal elements are equal to zero. That being said, what is different is that when we say “statistical physics” we think of “a system with multiple components”. We do not think of a gas with a single molecule, which is an extreme example—one which, incidentally, can be studied, and is studied, and is even rather interesting. Whereas when we say “quantum mechanics”, we also want to study not the statistical aspect of things but a single photon (which is an example of a single system). There is friction here, not physical (since the mechanics works) but conceptual: to say that quantum mechanics is a statistical theory is, in my opinion, akin to what was said in the 1920s or 1930s. Nowadays, when we use quantum mechanics to describe single systems, we want to say that there are things which are probabilistic and not statistical.

  • Jean-Michel Raimond. I agree. I have spent my life manipulating single systems! Decoherence provides probabilities that are not of the same nature as the probabilities of statistical physics, because they concern not a large ensemble of systems but the description of a single system or a single object; yet their conceptual status—or philosophical status if you prefer, supposing I understand it—does not seem to me so different from that of a probability of statistical physics, which is a probability of ignorance. This can concern a spin, a photon, or a molecule.
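
In formulas, the point at issue can be sketched as follows (an illustrative schema in generic notation, not part of the discussion itself): after tracing out the environment E, the reduced density matrix of a single system S is driven towards diagonal form in the pointer basis,

\[
\rho_S \;=\; \mathrm{Tr}_E\bigl(|\Psi_{SE}\rangle\langle\Psi_{SE}|\bigr) \;\longrightarrow\; \sum_i p_i\,|i\rangle\langle i| ,
\]

and the weights p_i can then be read, for all practical purposes, like the ignorance probabilities of a classical statistical mixture, even though they refer to a single system.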

  • Alexei Grinbaum. Yes, but what is interesting is that when you consider single systems in quantum mechanics, this leads to a number of paradoxes. In quantum mechanics, paradoxes (which are not logical paradoxes, but very strange counter-intuitive phenomena) are linked to post-selection. The nature of these paradoxes is different from the conceptual problems raised for example when considering a gas with a single molecule.

  • Jean-Michel Raimond. Most of these paradoxes are linked to a non-trivial description of non-trivial experiments where we suppose entirely quantum behaviour and where, at the end, we analyse the measurements. We take a measurement first and only acknowledge it at the end. This is simply determined by complicated quantum behaviour. I am not saying that quantum behaviour is not complicated. I simply say that if we introduce decoherence, at a given point probabilities appear that do not have a conceptually different role from those of statistical physics. I am not saying that quantum mechanics is statistical physics.

  • Jean Petitot. Yes, it is definitely not statistical physics.

  • Olivier Rey. In fact there are two ways of considering statistical physics in a classical framework. One approach consists of trying to construct a physical theory from what we can know empirically of reality. The other consists of thinking that everything is determined within reality, but that the number of elements forces us to treat them statistically. There are thus two ways of considering statistical physics, both leading to the same outcomes, but which are philosophically very different.

The hypothesis of underlying determinism is not necessary for statistical physics. Recently, Jean Bricmont argued for a so-called “Bayesian” formulation of statistical physics. The main question is then: what is the most rational way of thinking about reality, taking all the available information into account?

  • Alexei Grinbaum. Which we can do, as well, with quantum mechanics.

  • Olivier Rey. Yes, precisely. The “Bayesian” approach is much more in line with quantum physics than any statistical approach that supposes an integral determinism and deploys the statistical arsenal on the basis of this hypothesis.

  • Michel Bitbol. If I may comment, I find it amusing and paradoxical that it should be Jean Bricmont who puts forth this view of statistical physics. It apparently supposes that we devise a global stochastic description by excluding any preoccupation with hypothetical underlying microscopic processes. And yet Jean Bricmont has made himself the advocate of Bohm’s “ontological” interpretation, the very one that claims to use mechanisms supposedly underlying quantum probabilities…

  • Olivier Rey. He is part of a long tradition.

  • Jean-Michel Raimond. I will once again be outrageously “for all practical purposes”! It seems to me that we can agree that, independently of everything else, all the observers of the Universe can agree that there are objective realities in the physical world that are one, unique and indivisible. Perhaps this should be the common denominator of all the physical theories we attempt to formulate, be it in statistical physics or in quantum physics. In other words, there is an objective reality. We should not ask physical theories to extract this from their own formalism—at least not the physical theories we have now.

  • Bernard d’Espagnat. With nonetheless some reservation regarding the term “there is”, which is too close to “per se” to my mind. What you have defined is an objectivity that is relative to all possible observers or all possible conscious beings.

  • Jean-Michel Raimond. And even to all possible fractions of the environment.

  • Bernard d’Espagnat. I am not sure about this last point. It depends on what we mean by “fractions of the environment”. We have a tendency, when we speak of the environment, to keep in the back of our minds a physicalist notion of reality made up of I do not know what, perhaps atoms linked by forces, or something more complex yet similar (made up for example of objects and fields existing per se and scattered here and there in space). We must do away with this, to say the least, questionable image (non-separability). In fact, if we try to go beyond the “for all practical purposes”—which is sufficient for science, we all agree on this point—we have no valid mental representation at our disposal of what could be “an environment per se” made up of “fragments”. As we are readily quoting Zurek, I would say that the quotation, which we have already mentioned, where he says that the Universe “will appear classical to observers who monitor it from within, using their limited capacity to acquire, store and process information”, shows that this is also what he thinks.

  • Hervé Zwirn. Moreover, it seems to me that the agreement reached by observers pertains to phenomena. The problem of realism is whether something exists per se that accounts for our agreeing on phenomena. That we agree on phenomena is not being contested. We can use whatever vocabulary we wish. I for one call this not empirical but phenomenal reality: we all agree on phenomena as they appear to us, and this constitutes for me phenomenal reality.

The problem of realism arises later. Given this reality, which I call phenomenal (but which is sometimes called empirical reality), is there an underlying reality per se which “causes” it? That is the problem of realism. As for the fact that there are phenomena on which everyone agrees, I think no one denies that.

  • Jean-Michel Raimond. The link between phenomenology and reality per se seems to me to be outside the grasp of physics.

  • Hervé Zwirn. It is philosophy.

  • Jean-Michel Raimond. I am not sure that this link is any different for any type of science, in particular for classical physics compared to quantum physics.

  • Hervé Zwirn. It seems, and that is the debate we are having here, that for reasons linked to what we were talking about, the relatively simple link (without mentioning other problems) that can exist between the two in classical physics is more complex to establish with quantum formalism. The reasons previously mentioned include non-locality, contextuality, etc. This is what the debate hinges on. Does quantum formalism, or quantum mechanics in the largest sense, enable this transition or not?

  • Jean-Michel Raimond. What we have been saying, nonetheless, is that formalism and decoherence, despite their notorious inadequacies, give quantum physics the status of a physics of classical probability—thereby making the link easier.

  • Hervé Zwirn. It appears that decoherence has been a major step forward: from the beginnings of quantum physics (with the debates of the founding fathers) up to its discovery, some of the world’s greatest physicists formulated rather far-fetched hypotheses to resolve this problem. Is this a definitive paradigm shift? In that case, do we consider that the philosophical problems involved in bridging the two are resolved?

  • Jean-Michel Raimond. They are no more or less resolved than in the other branches of physics.

  • Hervé Zwirn. There. It is getting closer. Of course.

  • Bertrand Saint-Sernin. Historically, the problem of realism is linked to a very classical theological question: the problem of divine guarantee. In other words, the first definition of realism, the one we find in Antiquity, is: “do we have access to divine reason as it created the world and as it maintains it?”. The nature of the problem of realism changes the moment we say we need to construct a science without divine guarantee. The founders of modern science, be they Descartes, Newton or Leibniz, thought we could achieve, with more or less difficulty, a sort of vision of God. However, in the 18th century, the nature of the problem changed completely. What does it mean to create a science by strictly human means and without referring to the idea of an infinite spirit with which we could communicate? From that point onwards, the nature of the notion of realism was profoundly changed.

  • Jean-Michel Raimond. I would really like us to do physics without consciousness and without God.

  • Bertrand Saint-Sernin. Of course. But historically, this is what happened. That is all.

  • Jean-Michel Raimond. I do not know what consciousness is, but the fact that we must bring in thinking objects for the physical description of the world is not at all to my liking.

  • Hervé Zwirn. This is a central point for our discussion. We all seek to avoid resorting to consciousness, which is akin in a way to resorting to God. God was eliminated, and no one supports the idea anymore that the reduction of the wave packet is due to the direct action of consciousness on the system. Nevertheless, it seems to me that it is possible to take consciousness into account in the following sense: consciousness has no physical effect that reduces the system, but what we observe is observed, in a Kantian sense, through a set of “filters”, so that what we observe may not be totally independent of what we are. This seems to me to be something to consider, and it is less troublesome than resorting to a divine idea or to a consciousness that has a direct action. New theories that bring in information theory are close to this idea.

  • Jean-Michel Raimond. I hope that, if we have to bring in consciousness, it is more like the minimal degree of complexity of what is doing the observing.

  • Hervé Zwirn. Allow me to quote Zurek, from one of his articles from 2003 [9] reformulating his earlier publications: “Hence, the ontological features of a state vector—objective existence of the einselected states—is acquired through the epistemological information transfer.” This means that he links the ontological aspect of the state vector with some form of epistemological information. However, epistemological information…

  • Alexei Grinbaum. … the keyword in “objective existence” is “objective” not “existence”. For Zurek, existence is a philosophical term. What concerns him is to put objectivity in the description.

To reiterate what Jean-Michel (Raimond) was saying, I think we can ask exactly the same question regarding the observation of a quantum system by a fullerene. We are not saying that a fullerene has a consciousness, which would be a bit strange; however we can ask ourselves how a rather complex molecule, like C60, observes photons. With the support of evidence, I can show you that a fullerene can observe up to ten photons and keep this information in its memory. There is no reason, in my opinion, to think that there is a fundamental difference between a fullerene and a human being as quantum observers.

  • Bernard d’Espagnat. Unless perhaps when you consider probabilistic events and decide to exclude all hidden determinism (of the Bohm type). I think there is a difference. If you consider your fullerene as a purely physical, yet quantum system, then I believe it will not be able to tell you by itself that there is one single answer. In other words: that a given observable that could have taken on a number of different values has in fact taken on one of them and not the others. I believe that for this to be possible you need to consider your fullerene as classical, in the same way Bohr considered his instruments as classical because they were used as instruments.

  • Alexei Grinbaum. By stating that “it will not be able to tell you”, you are assuming there is a communication or interaction problem between observers. It is not the same framework.

  • Bernard d’Espagnat. But the two problems are linked because you, as a conscious observer, know there is not (or rather, in my opinion, “are compelled by human mental processes to say that there is not”) a single answer. In the sole light of quantum principles, we cannot see how a fullerene, a simple quantum system, could be brought to make such a “choice”.

  • Hervé Zwirn. What does this mean? What meaning do you give to what a fullerene feels when it observes spin up or spin down?

  • Jean-Michel Raimond. It does not feel anything. However, when my computer saves the results of an experiment, it stores them in a RAM. In modern RAMs, to store one bit of data, you need twelve electrons. These twelve electrons carry the data I am saving from my experiments. That is neither very big nor very small compared to a fullerene; it is of the same order of magnitude. That is what ultimately carries objective reality. Countless conscious minds can look at these electrons, which, until I inadvertently press the “start” button, will hold this information. If we consider the way a modern computer works, what holds information in an objective and verifiable manner is a small set of quantum particles.

  • Michel Bitbol. That is true, but as long as you have not observed this system of electrons which holds the information you speak of, you have to describe it by a non-diagonal density operator.

  • Jean-Michel Raimond. Or a completely diagonal one, because these twelve electrons are very strongly coupled with a very complicated environment that stops them completely from being in superposition.

  • Michel Bitbol. Mr d’Espagnat would say “Diagonal indeed, but only for all practical purposes”. We always come back to this!

  • Alexei Grinbaum. I think that is not quite right, because you cannot know what outcome was read by the fullerene. However, I think you can devise an experiment that will show you it acted as an observer. The measurement outcome is not accessible, because you are outside the observer-observed system pair. However, you can observe the thermodynamic consequences of the fact that the fullerene has acted for a time as an observer.

  • Michel Bitbol. The ease with which you consider a molecule to be an observer troubles me. What exactly is being an observer?

  • Alexei Grinbaum. It is to keep something in memory.

  • Michel Bitbol. To keep something in memory… What exactly is memory? What does keeping something in memory mean?

  • Jean-Michel Raimond. It means to be classically correlated with the state of the measured system. It is the classical correlation between the state of the measured system and the state of the meter, meaning a purely classical correlation, a classical probabilistic superposition of classically entangled states.
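
To make “classically correlated” concrete, one may contrast, in an idealized von Neumann-type measurement scheme (generic notation, not a description of any particular experiment), the entangled pre-decoherence state with the decohered system-meter state:

\[
|\Psi\rangle \;=\; \sum_i c_i\,|s_i\rangle\otimes|m_i\rangle
\qquad\text{versus}\qquad
\rho_{SM} \;=\; \sum_i |c_i|^2\,|s_i\rangle\langle s_i|\otimes|m_i\rangle\langle m_i| .
\]

The second expression is a purely probabilistic alternative over pairs of system and meter states; it is this alternative that the residual off-diagonal terms discussed below would spoil if they were not negligible.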

  • Alexei Grinbaum. There is a problem with photons, which are absorbed—thus are no longer there.

  • Jean-Michel Raimond. Indeed, I agree.

  • Michel Bitbol. Why do you speak of “classically” entangled states? It is the adverb “classically” I do not understand here.

  • Jean-Michel Raimond. I mean described by a probabilistic alternative: either this state where the needle is in this position, or this state where the needle is in another position.

  • Michel Bitbol. The problem is that the alternative is, in theory, of a quantum nature, meaning that the off-diagonal terms can be extremely small but are not strictly null.

  • Jean-Michel Raimond. Yes, I agree. But for all practical purposes, no one will ever see them, not the entire Universe itself.

  • Michel Bitbol. I concede you this, of course. However, what I wanted to say was that the problem remains. The alternative has not tipped, as Alexei (Grinbaum) said, to the side of strict determination. A true observer would see a single, strictly determined outcome, whereas a fullerene molecule remains in theory in a superposed state (more or less intensely) entangled with its correlated system.

  • Jean-Michel Raimond. The problem is the same whether we are dealing with a fullerene, an electron in a RAM or a postdoc.

  • Olivier Rey. You would concede that this is a physicist’s definition of the act of observing. It is not the usual definition of the word, which tends to suppose that there is a subject carrying out the observation.

  • Jean-Michel Raimond. Yes.

  • Olivier Rey. Words have a certain meaning.

  • Jean-Michel Raimond. I would not like the results of my experiments to depend on my state of consciousness, on knowing which consciousness is looking and whether that consciousness has had too much whisky or not!

  • Olivier Rey. What I wanted to highlight was that we need to be careful… When doing physics, we tend, for practical reasons, to use words from everyday vocabulary rather than create new ones. Therefore, we can easily be led to believe that physics always aims in the same direction as everyday language, when, words having taken on a new meaning, it speaks of something else. The definition of observation you have just given us is not in the dictionary for example.

  • Jean-Michel Raimond. I agree. In my opinion, observation is in a way a classical recording. That is what observation is for me.

  • Bernard d’Espagnat. We all have, I think, the impression that we are verging on an agreement without having completely achieved it, and consequently the debate is still open. That is all the truer since we have not yet tackled the specific spatio-temporal aspects of decoherence, although these raise particular problems that Jean Petitot, I believe, would like to address now.

4.4 Mathematical Aspects of the Conflict Between Quantum Mechanics and Spatio-temporal Localization

  • Jean Petitot. I prepared a comment on the link between problems of decoherence and spatio-temporal localization—it’s my geometrician side!—and more generally on the conflict between quantum mechanics and the localization of measurements. For us, human observers, the macroscopic world is characterized by its spatio-temporal localization. In Decoherence and the Appearance of a Classical World in Quantum Theory [10], Erich Joos broaches this question in the following way (p. 63): one of the fundamental characteristics of macroscopic objects is that they are spatio-temporally localized. In particular, Joos cites the debate between Born and Einstein, in which Einstein warns that spatio-temporal localization (or the fact that there is a very well-localized wave packet that does not disperse “with respect to the macro-coordinates”, namely space-time macro-coordinates, positions, momenta, etc.) is in contradiction with the axioms of quantum mechanics. I would like to come back to this point, reprising mathematical reflections on the work of the Gelfand, Naimark and Segal school, followed by Mackey, who tried to compare the mathematical formalisms of classical mechanics and quantum mechanics in order to really “pinpoint” their fundamental difference. I am using as a template Jerrold Marsden’s presentation in Applications of Global Analysis in Mathematical Physics [11].

In classical mechanics, state space is a phase space (the coordinates p and q of Hamiltonian mechanics). There is a differentiable manifold of states (call it P, for phase space). The observables are functions defined on this phase space P, taking their values in some set of values. In general, these are real-valued functions, meaning complex-valued functions equal to their own conjugates (in quantum mechanics they become self-adjoint operators). The measurement of an observable f on a state represented by a point x of phase space P is simply the value f(x) of the function. It is an evaluation: f(x) is the value taken on f by the Dirac delta distribution at x. This means we have a duality between space and functions: we have points (states) in a space of representation, and the observables are functions on it; but we can equally start with the functions and recover the points as Dirac measures, i.e., as certain linear functionals on the commutative algebra of observables.
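
In symbols, this classical scheme reads (a compressed restatement of what has just been said, in standard notation):

\[
f : P \longrightarrow \mathbb{R}, \qquad \text{measurement of } f \text{ in the state } x \in P :\quad f(x) \;=\; \langle \delta_x , f \rangle ,
\]

so that the points of P can be recovered from the commutative algebra of observables as the evaluation functionals, i.e., the Dirac measures δ_x.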

Mathematically, it is extremely important to note—this is the heart of Gelfand’s theory—that there is a perfect equivalence between the space where we can localize phenomena and observables, and algebraic properties; in particular, the points are in bijective correspondence with the maximal ideals of the commutative algebra of functions, i.e., the ideals of functions that vanish at a given point. This is a fundamental property of commutative algebras which disappears completely in the non-commutative algebras of observables we find in quantum mechanics.
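
The correspondence invoked here can be stated compactly (the commutative Gelfand-Naimark theorem, quoted as background rather than as part of the discussion):

\[
x \;\longleftrightarrow\; \mathfrak{m}_x \;=\; \{\, f \in C(P) \;:\; f(x) = 0 \,\}, \qquad
A \ \text{commutative unital C*-algebra} \;\Longrightarrow\; A \,\cong\, C(X)
\]

for some compact Hausdorff space X, the spectrum of A; for a non-commutative algebra of observables no such underlying space of points is available.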

In quantum mechanics, we know the situation is completely different: we have as state space a Hilbert space H; we have a non-commutative algebra of operators for the observables; and we have the measurement of observables: if A is an observable, i.e., an operator, and if ψ is a state, the measurement is given by the scalar product in H, 〈ψ, Aψ〉 (the expectation value of A in the state ψ).

The problem is comparing classical statistical physics to this scheme of quantum mechanics.

Very early on, I believe as far back as the 1930s, Bernard Koopman tried to find a Hilbertian and operatorial formalism for Hamiltonian mechanics, in order to compare quantum mechanics and Hamiltonian mechanics. He proposed to formulate Hamiltonian mechanics in the closest possible way to what we find in quantum mechanics. It is rather easy: we take statistical states (thus we have a distribution on phase space P, a state ψ now being a distribution on phase space), from which we derive a measure (in the mathematical sense) for the measured values (in the physical sense) of the observables. The formalism is then exactly the same. The fundamental difference comes from the fact that this measure, essentially the squared modulus of ψ, |ψ|², multiplied by the Liouville measure on phase space, has a huge group that leaves it invariant: you can multiply ψ by exp(iα(x)), where α(x) is any real function on phase space P. Therefore you have a huge group, and the quotient by this group of the Hilbert space H (which is the space L² of square-integrable functions on phase space) gives back phase space P. As it is phase space that guarantees localization, localization is linked to the fact that a huge group operates on the Hilbert space of states. The incoherence of classical mechanics, meaning its decoherence, is fundamentally linked to this type of “localizability”.
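
The construction just described can be summarized as follows (a sketch of the Koopman scheme in the form used above; dμ_L denotes the Liouville measure):

\[
\mathcal{H}_{\mathrm{cl}} \;=\; L^2(P,\, d\mu_L), \qquad
\text{probability measure } |\psi|^2\, d\mu_L, \qquad
\psi \;\longmapsto\; e^{\,i\alpha(x)}\,\psi \ \text{ leaves } |\psi|^2\, d\mu_L \text{ invariant}
\]

for an arbitrary real function α on P; quotienting this Hilbert space by the resulting huge gauge group gives back the phase space P and, with it, localizability.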

By contrast, in quantum mechanics the invariance group is minuscule: it is the group U(1). The quotient of the Hilbert space H by this group is simply the projective space of H, whose points are what we call rays. This is what, in this perspective, expresses coherence, the possibility of interferences, etc. In short, it is the size of the invariance group in a Hilbertian formulation of classical mechanics that explains the characteristics of classical mechanics. With such an approach, it becomes easy to show (this dates back to von Neumann) that it is impossible to have theories of local hidden variables, whose idea is to try to reuse the formalism of quantum mechanics while adding a space in which we could localize things in a manner somewhat analogous to what happens in Hamiltonian mechanics.
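
By contrast, in the quantum case (same notation), only a global phase acts trivially:

\[
\psi \;\longmapsto\; e^{\,i\theta}\,\psi \quad (\theta \in \mathbb{R}), \qquad
\{\, \psi \in \mathcal{H} \;:\; \|\psi\| = 1 \,\}\,/\,U(1) \;=\; \mathbb{P}(\mathcal{H}),
\]

the projective space of rays; relative phases between distinct rays retain physical meaning, which is precisely what makes interference possible.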

This theorem, revised by Mackey, dates back to von Neumann: any formalism of this type, which tries to enrich quantum mechanics by saying that there is an underlying manifold, whatever it may be, that allows us to localize the measurements, is necessarily commutative. This was the first fundamental impossibility result highlighting the obstruction we face when we try to complete quantum mechanics in this way.
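
Loosely, in the form evoked here (a schematic statement under simplifying assumptions, not Mackey’s precise hypotheses): if every observable A were represented by a function f_A on some underlying space Λ in such a way that all expectation values are reproduced,

\[
\langle \psi , A\,\psi \rangle \;=\; \int_{\Lambda} f_A(\lambda)\, d\mu_{\psi}(\lambda) \quad \text{for every state } \psi,
\]

then the algebra generated by the functions f_A would be commutative, whereas the quantum observables it is supposed to represent are not.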

I consider it interesting to see that even at the level of the foundations of mathematics, the problem of measurement “localizability” is a fundamental obstruction. As pointed out by many philosophers (including Husserl), if spatio-temporal localization can no longer be a principle of individuation, then we destroy the classical world. One of the characteristics of the classical world is that spatio-temporal localization is individuating. Yet this is really in contradiction with the non-commutativity of observables in quantum mechanics. To my knowledge, nowadays only non-commutative geometry attempts to resolve this problem.

That is the somewhat mathematical comment I wanted to make. If we try to “pinpoint” the irreducible difference between classical and quantum, it is essentially linked to that. Can we, “underneath” the Hilbertian formalism, introduce geometrical substrates that allow the localization of phenomena and their measurements? I think it would be interesting to discuss this from a philosophical perspective.

Obviously, this does not stop wave functions from being defined on space-time. What I have spoken about has nothing to do with that.

  • Alexei Grinbaum. My first comment is that von Neumann’s theorem is wrong, as we know. Von Neumann’s hidden variables theorem needs to be modified.

  • Michel Bitbol. It is not wrong. Simply, it is very partial, very incomplete. It does not prove, contrary to what von Neumann claimed, that no hidden variables theory is compatible with quantum mechanics. It excludes only a very specific (but at the time the most likely) type of hidden variables theory. All subsequent theorems (Bell, Kochen and Specker, Leggett, etc.) are of this type: they lead to the exclusion of an increasingly large class of hidden variables theories, without excluding all theories of this kind.

  • Jean Petitot. I was speaking of Mackey’s demonstration, which is correct [12].

  • Alexei Grinbaum. The question has been asked many times, especially since the resurgence of approaches that use C-star algebras…

  • Jean Petitot. Absolutely. All I have talked about, from Gelfand’s theory to that of Connes, is expressed in terms of C-star algebras.

  • Alexei Grinbaum. The problem becomes more complicated when we consider field theory. The general problem is the following. You have a C-star algebra: how do you know whether it is classical, quantum, or something else? In the literature, systems of axioms have been proposed for understanding how an extremely general algebra becomes quantum, by specifying the constraints that must be added. It is a very active area of research.

  • In 1927, during the Solvay Conference (I would like to recommend, on this topic, a book that has just been published, containing all the abstracts and notes of all the participants of this conference [13]), Schrödinger asked the same question at the start of his presentation, without using mathematical language. He stated that we do mechanics in three dimensions, then in four dimensions (by adding time), then in 3N dimensions—and we did not know exactly what that meant (that was before Hilbert space, before von Neumann). What does this shift mean? Why go from three to 3N dimensions, i.e., from a Euclidean space to a space in the abstract sense of the term? That is the reason why I think the term “localization” has changed meaning. Regarding the discussion we were having earlier, in my opinion the problem of localization consists of knowing how decoherence takes place with respect to the position variable. That is really at a basic level. However, when we say “localization” in this way, we raise the problem of system composition: in algebraic language, how can we conceive of the fact that there is a geometric link between two systems? It is a tremendous problem for algebraic approaches.

  • Jean Petitot. Yes. Individuation and separation of systems are fundamentally linked to localization in the sense I was speaking of.

  • Alexei Grinbaum. I would like to make a distinction. There is the fascinating algebraic problem, which is that of separation. It is not the same as the more down-to-earth problem of Euclidean space. We use Euclidean space, which in fact has nothing to do with this story, to understand an algebraic aspect of the structure of quantum mechanics. Euclidean space, which has nothing to do with all this, has been used, in a way, as a starting point.

  • Jean Petitot. That is what I said. The impossibility of localization does not stop wave functions from being functions of space-time.

  • Bernard d’Espagnat. Problems of “and/or”, “impossibility of localization”… Is the objectivity of the classical world only “weak” (or “relative”)? On this philosophical question, we have not, as expected, reached an agreement; however, our respective views have been refined. Each of us now sees the outlines better. I have no doubt we will have the opportunity to come back to it.

This meeting is the last of the academic year. We will meet again in the autumn, probably at the end of September. Our next session will be dedicated, as you know, to an old theory, not one of ours, but that of Louis de Broglie and David Bohm. It is not the double solution theory, but the one that Louis de Broglie presented during the Solvay Conference of 1927 and which was rediscovered by David Bohm.

This theory has the particularity of being ontologically interpretable, like classical physics. It has been snubbed by physicists since its inception, yet there are people who, even today, ask: “Why is this theory so criticized? Is it because it is not relativistic, etc.?”. I think, honestly, that we should examine this question.

Franck Laloë has very kindly agreed, although he is not a supporter of the theory, to present it to us. We await him for our next meeting at the start of October [14].