
1 The Representations of Change

1.1 From Change to Time

1.1.1 Nothing Changes/Everything Changes

Two opposing views of the world have marked more than 25 centuries of philosophy since its birth in Ancient Greece. The first one, due to Parmenides, states that nothing changes: change is only appearance. According to Popper, his argument goes as follows [1]: (1) Only being exists; (2) non-being does not exist; (3) non-being would be the absence of being: the vacuum; (4) vacuum cannot exist; (5) if there is no vacuum the world is full: there is no space for movement; (6) movement and change are impossible.

The second one, due to Heraclitus, states that everything changes: “All things are always in movement … even if this escapes our sensations”. Things are not real things, they are processes, they are continuously changing. “Panta rhei”. They are like fire: a flow of matter, or like a river. “Nobody bathes twice in the same river”. The apparent stability of things is only a consequence of the laws which constrain the processes of the world [2].

Today we are still grappling with the same conflict. In fact it simply reflects the dispute between those who believe the universe to be ruled by absolute and eternal laws of nature written in mathematical language, and those who see it as a network of interconnected processes and prefer to look for explanations of phenomena based on generalizations of empirical evidence.

I personally share the view of J.R. Oppenheimer who says: “These two ways of thinking, the one which is based on time and history, and the one which is based on eternity and timelessness, are two components of man’s effort to understand the world in which he lives. Neither is capable of including the other one, nor can they be reduced one to the other, because they are both insufficient to describe everything”.

1.1.2 Objective Time, Subjective Time

For a long time men have discussed the nature of time. It is appropriate, I think, to start this discussion by quoting Augustine’s statement: “Time does not exist without a change produced by movement”. [Footnote 1]

The first thing to do, in fact, is to dissipate the belief that the concept of time is a necessary premise for describing and interpreting change. The reverse is instead true. Already Aristotle, several centuries before Augustine, said that “time is the number of change, in accordance with what comes before or after”, and recognized explicitly that without something that changes there is no time: “Since we have no cognizance of time when we do not detect any change, while on the contrary when we perceive a change we say that time has elapsed, it is clear that there is no time without change and movement” [4].

There is more. The concept of time may arise only from a comparison between two processes of change. A period of stillness may be long or short compared with another one, just as a change may be quick or slow only compared with the rate of another one. The comparison between two processes outside us leads us to conceive time as an objective entity, while the comparison between the change of an exterior object and our internal, conscious or unconscious, rhythms leads to the concept of a subjective time.

Let us start from the first one. It should be clear that “objective time” is not a substance flowing at a constant rate, as many colloquial expressions such as “time flows” or “clocks measure the flow of time” imply. It is sufficient to notice that the velocity of this hypothetical fluid would be one second per second in order to realize that this is a tautological nonsense. A more precise proof that absolute time does not exist comes, as is well known, from Einstein’s relativity: time is a form of relationship between succeeding events in different space locations. Even after Einstein, however, the idea that time “contracts” or “dilates” implies a misleading reification of the concept, which hinders its comprehension.

The second notion of time—which defines it, according to Kant, as an innate a priori capacity of human perception to synthesize events in the form of temporal sequences before having access to any kind of experience—should be equally criticized. The knowledge accumulated in two centuries of scientific development has shown how tight the connections are between mind and body on the one hand, and between the individual and society on the other. On empirical grounds, moreover, the classic studies of Piaget [5] on the development of the notion of time in the child have shown how little of it is innate.

Neither of these concepts, therefore, can be defined without recognizing that both are reciprocally connected within the pattern of the social fabric in a given historical context. It is appropriate at this point to quote Norbert Elias, an author who has investigated this point of view in depth: “Time is not the reproduction of an objectively existing flux, nor a form of common experience of all men, antecedent to any other experience… The word ‘time’ is, so to say, the symbol of a relationship created by a group of human beings, endowed with a given biological capacity of remembering and synthesizing, between two or more series of happenings, one of which is standardized as a frame of reference or a unit of measure of the other one” [6].

1.1.3 Time’s Arrow, Time’s Cycle

Besides the dichotomy between objective and subjective time, another dichotomy, which also goes back to the depths of the ages, contrasts two conceptions of change—reversibility and irreversibility—as mutually exclusive; it consequently gives rise to two conflicting views of time. An example of the latter is human life, which inexorably flows from birth to death; an example of the former is the motion of celestial bodies, with their eternal going forward and coming back.

“A crucial dichotomy—writes Stephen J. Gould—covers the most ancient and deep themes of western thought about the central subject of time: two visions, linear and circular, are summed up under the notions of time’s arrow and time’s cycle. At one end—time’s arrow—history is viewed as an irreversible sequence of unrepeatable events. Each moment occupies a distinct position in this sequence, and altogether they tell a story of successively connected events moving in one direction. At the other end—time’s cycle—events have no significance as distinct episodes with a causal impact on a contingent history. Apparent motions are part of repeated cycles and differences of the past will become realities in the future. Time has no direction” [7].

These two conflicting ways of conceiving time have alternately dominated human cultures. At the roots of western culture, according to Gould, we find in the Bible the arrow of time. Not always and not everywhere, however, has this vision of time marked the birth of civilizations. According to Mircea Eliade [8] the majority of peoples in the history of mankind have believed in a cyclic time, and considered the arrow of time as inconceivable and even frightening.

Only in recent times, however, has the notion of an arrow of time—again according to Gould—become “the familiar and orthodox conception for the majority of cultured western people”. Without it the idea of progress, or the concept of biological or cosmic evolution, would be impossible. The phenomenon is in fact very recent: its origins can be traced back to the early decades of the 19th century, when Sadi Carnot introduced into physics a way of looking at the world based on the irreversibility of natural phenomena, an alternative to the view on which the galilean revolution was based, which made the reversibility of any kind of motion the key to explaining everything that happens.

It is, however, only a century after Carnot, in the second half of the 20th century, that the metaphor of the arrow of time became a component of the “metaphysical core” of many contemporary scientific disciplines. As a striking example I quote from a report of the physicist Jean Pierre Luminet the following list of five different arrows of time currently envisaged in physics [9]:

  1. The radiative arrow (spherical waves always propagate outwards from a source).

  2. The thermodynamic arrow (transformations in an isolated system always proceed in the direction of increasing entropy).

  3. The microscopic arrow (weak interactions show an asymmetry between decays and inverse reactions).

  4. The quantum arrow (the interaction between a measuring instrument and a microscopic object changes irreversibly the state of the latter).

  5. The cosmological arrow (the universe apparently expands irreversibly).

1.1.4 Causality and the Two Forms of Time

A close connection between time’s arrow and time’s cycle can be found by using the concept of causality. Intuitively, to explain an event one has to find its cause. Of the four Aristotelian types of causes (material, formal, final, and efficient) only the last one is still considered a cause in a proper sense, because it considers the occurrence of an event or the accomplishment of an action as a necessary and sufficient condition for the occurrence of a subsequent event. The latter is therefore the effect of the former. This sequence establishes an arrow of time: the effect always comes after its cause. This is what we call linear causality. However, this apparently trivial remark turns out to entail non-trivial consequences when the same cause and the same effect are repeated in a steady sequence of time’s cycles.

A simple example of this relation between the two representations of time is given by the connection between the stress applied to an elastic body and its deformation (strain). If the stress is applied suddenly and remains constant thereafter (step function), the strain starts from zero and increases with time, reaching a final value asymptotically. This is due to the presence of internal friction, which dissipates into heat a part of the work done in deforming the body. The arrow of time goes from the application of the stress towards the onset of the strain. On the other hand, if the applied stress is periodic, the response is also periodic. There is no longer something which comes before (or after) something else: when a steady state is reached, the peak of the strain comes after the preceding peak of the stress, but before the next one, and vice versa. Let us see how the relation between the two descriptions emerges.

In the periodic case we can express the strain S in terms of the unit stress e^{iωt} simply by multiplying it (Hooke’s law) by a complex response factor:

$$S = \bigl[A(\omega)+i B(\omega)\bigr]e^{i\omega t} $$

where A(ω) and B(ω) are two apparently independent functions of the frequency ω, characteristic of the material of which the body is made: the elastic modulus and the damping coefficient. In the case of a sudden application of the stress, the time evolution of the strain can be obtained in terms of A(ω) and B(ω) by expanding the step function, through a Fourier transform, as an integral of periodic exponentials.

The remarkable result is that, by simply imposing the causality condition, namely that the effect must be zero before the onset of the cause, the two functions A(ω) and B(ω) turn out not to be independent, but rather to be connected by a relation of the form

$$A(\omega) = \int B\bigl(\omega'\bigr)/\bigl(\omega- \omega'\bigr) \,d\omega' $$

and vice versa with A and B exchanged. Relations of this type are called dispersion relations and have played an important role in several fields of physics. I personally discovered it [10] in 1947 in dealing with the field of elasticity, without knowing that it had been discovered 20 years before by Kramers and Kronig [11, 12] in the field of optics (where A and B represent the refractive index and the absorption coefficient of light in a medium). In the mid-1950s similar relations were found for scattering amplitudes in elementary particle physics [13], by using a generalized form of causality expressing the impossibility of a causal connection between two events in space-time separated by a spacelike distance. I published a brief history of dispersion relations [14] in Fundamenta Scientiae many years ago.
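To make the role of causality concrete, here is a minimal numerical sketch (not the original elasticity calculation of [10]): it takes a hypothetical single-resonance damped-oscillator response, written in the common e^{-iωt} convention so that causality places its poles in the lower half-plane, and reconstructs the real part A(ω) from the imaginary part B(ω) through the principal-value form of the relation above (which also carries a factor 1/π).

```python
import numpy as np

# Hypothetical causal single-resonance response chi(w) = 1/(w0^2 - w^2 - i*gamma*w),
# in the e^{-iwt} convention, so A = Re chi and B = Im chi.
w0, gamma = 1.0, 0.2
w = np.linspace(-200.0, 200.0, 400001)        # fine, wide grid: the tails are negligible
chi = 1.0 / (w0**2 - w**2 - 1j * gamma * w)
A, B = chi.real, chi.imag

def kramers_kronig(w_eval, w, B):
    """A(w) = (1/pi) P-integral of B(w')/(w' - w) dw', with a crude principal value."""
    dw = w[1] - w[0]
    out = []
    for we in w_eval:
        d = w - we
        d[np.argmin(np.abs(d))] = np.inf      # drop the singular point
        out.append(np.sum(B / d) * dw / np.pi)
    return np.array(out)

w_test = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
print(kramers_kronig(w_test, w, B))           # real part reconstructed from B alone
print(np.interp(w_test, w, A))                # direct real part: the two agree closely
```

The reconstruction uses only the dissipative part B(ω), which is the whole point: causality ties the reactive and dissipative responses of the material together.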

1.2 Reversible and Irreversible Changes

1.2.1 Reversibility of Motion: Galileo

A lantern oscillates in Pisa’s Cathedral. A young man—we are at the end of the 16th century—does not pay much attention to the religious service. His attention is attracted by the lantern. Slowly the oscillations are damped, the amplitude gradually decreases until the motion comes to an end. In those times anyone would have interpreted the phenomenon as a verification of the Aristotelian doctrine: in its natural motion a body tends to reach its “natural” place, the lowest attainable. This is the important “fact”. The upward phases of the motion are only accidental consequences of the initial “artificial” motion impressed on the body by an external impact.

But the young man—Galileo—looks at the phenomenon with a different eye. Oscillations are the important “fact”. They disclose that, if one neglects as accidental the gradual damping, the downward and the upward motions are equally “natural”, because one is the reverse of the other one. Only from this point of view is it possible to ask oneself how long it takes the pendulum to perform a complete oscillation: if the downward and the upward motion are qualitatively different there is no oscillation. Only by deciding to unify conceptually the two motions is Galileo able to find that the time required is practically constant and independent of the amplitude. The pendulum becomes the symbol of cyclical time.

1.2.2 Reversibility and Irreversibility in the Earth’s History: Burnet, Hutton, and Lyell

With Newton’s triumph this new way of looking at things penetrated into all the domains of science. Gould’s reconstruction of the work of three pioneers of modern geology clearly illustrates this diffusion [15]. The task they had to accomplish, one after the other, was to explain the empirical discovery that the Earth’s history had originated many millions of years before the biblical date of Creation.

The first one, Thomas Burnet, a man of the church, bound to the necessity of reconciling this explanation with Holy Writ, divides the span of time between the Creation of the Earth and its Final End into recurring cycles whose phases change from one to the other, but maintain a substantially similar pattern.

The second one, James Hutton, solves the problem within the boundaries of science by elaborating a conception of the Earth as a “machine” in which disrupting forces and restoring forces balance each other continuously so as to sustain a cycle of events which rigorously repeats itself: the first ones eroding mountains and continents, the second ones raising and reconstructing them. The influence of Newton’s thought is explicitly recognized by Hutton himself: “When we find that there are means cleverly devised in order to make possible the renewal of the parts which necessarily decay … we are able to connect the Earth’s mineral system with the system by means of which celestial bodies are made to move perpetually along their orbits”.

The third and most famous of them, Charles Lyell, is usually known for upholding the thesis that the same forces acting today have been responsible for all the gradual geological changes of the past (uniformitarianism) against Georges Cuvier, according to whom the history of the Earth is a succession of unrepeatable and unpredictable events (catastrophism). Here again Lyell’s view was based on the belief that “many enigmas of both the moral and the physical worlds, rather than being the effect of irregular and external causes, depend on invariable and fixed laws.” The metaphor of time’s cycle is therefore for him the conceptual tool for explaining the Earth’s history in terms of repeated phases of reversible changes (climatic and morphological) produced by alternating upward and downward movements of lands and seas, leading to the gradual and steady variation of the different forms of life.

1.2.3 Irreversibility of Thermodynamical Transformations: Carnot

Sadi Carnot wrote: “Because by reaching in any way a new caloric equilibrium one may obtain the production of motive power, any new equilibrium reached without production of motive power should be considered as a true loss; in other words, any change in temperature not due to a change in volume of bodies is nothing else than a useless attainment of a new equilibrium” [16].

A newtonian scientist would never use words such as loss and useless. They are concepts referring to man, not to the phenomenon in itself. The caloric is lost for man. The variation of temperature without production of motive power is useless for man.

This new way of looking at nature has two consequences. The first one is that the type of “law” looked for by Carnot is qualitatively different from Newton’s laws, which prescribe what has to happen. His aim is to discover the interdictions set by nature to the use of its forces, to determine the constraints which limit their reciprocal transformations. His “laws” establish what is forbidden. The consequence is that the type of abstraction needed to pursue this aim is different in the two cases. For Carnot dissipation is important, while for Newtonians dissipation is negligible.

As is well known, Carnot proved in this way that any transformation involving the transfer of heat from a source to a body in order to produce mechanical power must inevitably involve the irreversible transfer of a part of this heat to another body at a lower temperature.
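In modern terms this interdiction is summarized by the familiar Carnot bound (a standard result, stated here for completeness rather than quoted from the text): an engine absorbing heat from a source at temperature T_1 and rejecting part of it to a body at the lower temperature T_2 can convert into motive power at most the fraction

$$\eta \le 1 - T_2/T_1 $$

of the absorbed heat; the rest must be irreversibly delivered to the colder body.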

1.2.4 Irreversibility of Biological Evolution: Darwin

Darwin explains the evolutionary process of life on Earth by the concurrent action of two factors: a mechanism (on the nature of which Darwin does not express himself) producing a variability of the somatic features of the different individuals belonging to a given species, and a filter which selects the individuals with the features most convenient for survival, leading to the formation of species better adapted to the changing environment. Natural selection is the result of the capacity of these individuals to reproduce at a higher rate than the more disadvantaged ones, whose lineages gradually die out.

At the beginning of the 20th century, with the rediscovery by Hugo de Vries of Mendel’s laws, the origin of variability was traced back to the random mutations of a discontinuous genetic material possessed by the individuals. The irruption of chance produces irreversibility. The model of evolution which has dominated the community of biologists (the New Synthesis) for almost 60 years is therefore characterized by the gradual irreversible change of the population of a species under the action of the two complementary processes of random generation of genetic variability and deterministic selection of the fittest phenotypes.

At the beginning of the 1970s a new model was introduced by N. Eldredge and S.J. Gould. Their theory of punctuated equilibria rejects the gradualism of the standard evolutionary process and replaces it with a discontinuous process in which species remain unchanged for long periods (millions of years) until they disappear abruptly (thousands of years) and are replaced by new ones. Both their birth and their death may often be due to chance.

This intervention of chance at the two levels of individuals and of species leads Gould to conclude his book Wonderful Life with the words:

And so, if you wish to ask the question of all the ages—why do humans exist?—a major part of the answer, touching those aspects of the issue that science can treat at all, must be: because Pikaia survived the Burgess decimation. This response does not cite a single law of nature; it embodies no statement about predictable evolutionary pathways, no calculation of probabilities based on general rules of anatomy or ecology. The survival of Pikaia was a contingency of “just history”. [17]

2 From Macro to Micro

2.1 From Reversibility to Irreversibility and Back

2.1.1 From Dichotomy to Statistics

A macroscopic volume V of gas, at normal pressure and temperature, contains a number N of molecules of the order of 10^23. Suppose that initially the volume is divided by a partition into two non-communicating volumes V_R and V_L, and that all of the N molecules are contained in V_R. The density in V_R will be d_R = N/V_R and in V_L it will be d_L = 0. If the partition is removed, the densities in both volumes will very quickly become equal to N/V. At the macroscopic scale the change is irreversible.

On the other hand, if N is of the order of a few molecules, it may happen that d_R and d_L will be different. Perhaps we may even find again d_R = N/V_R and d_L = 0. The change may therefore be reversible. Its probability can be easily calculated, and amounts to 1/2^N. The sharp dichotomy has become a statistical evaluation. Of course, when N is 10^23 the probability of reversal becomes ridiculously small. The spontaneous expansion of a macroscopic quantity of gas into vacuum is therefore, “for all practical purposes”, always irreversible.
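A toy simulation makes the 1/2^N estimate concrete; the sketch below assumes, as the estimate does, that V_R and V_L are equal halves of V, and the trial count is arbitrary.

```python
import random

# Place N non-interacting molecules at random in the full volume (V_R and V_L
# taken as equal halves) and count how often all of them happen to lie in V_R.
def prob_all_in_right(N, trials=1_000_000):
    hits = sum(all(random.random() < 0.5 for _ in range(N)) for _ in range(trials))
    return hits / trials

for N in (1, 5, 10, 20):
    print(N, prob_all_in_right(N), 0.5 ** N)   # empirical frequency vs. 1/2^N
```

Already at N = 20 the recurrence shows up only about once per million trials; for N of the order of 10^23 it never happens, for all practical purposes.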

This simple argument shows that the Second law of thermodynamics, introduced by Clausius and Thomson for macroscopic bodies, has a microscopic justification. However, things are not that simple.

2.1.2 Boltzmann, Loschmidt and Zermelo: Time’s Arrow or Time’s Cycle?

The central point of the debate which animated the physicists’ community at the end of the 19th century is the nature of the Second law. The reversibility of newtonian motions is in fact incompatible with the Second law of thermodynamics, which excludes the possibility of reversing the direction of the spontaneous transformation leading a system from a non-equilibrium state to the equilibrium one. We all know that this fact was not easily recognized and that Boltzmann in 1872 worked out a theorem (the H theorem) which seemed to prove that the irreversible transition from any non-equilibrium state to the equilibrium one was a consequence of Newton’s laws. We also know that Loschmidt first (1876) and Zermelo later (1896) rejected Boltzmann’s result and presented counterexamples showing that his claim was untenable. I am not going to dwell on the details of the dispute (denoted in the following as BLZ), which had been forgotten for almost a century, and was reconsidered only recently in two interesting books to which I refer [18, 19].

What matters here is that Boltzmann, as a consequence of this dispute, changed his approach to the problem, and introduced a distinction between initial conditions which lead to an evolution towards equilibrium and those which tend to lead the system away from it. Since it turns out that the former are enormously more numerous than the latter, the irreversibility of the Second law can be reconciled, according to Boltzmann, with the reversibility of newtonian motion. The Second law therefore loses the character of absolute necessity, which was attributed to it up to that moment by the majority of the physicists’ community, to acquire the status of a probabilistic prediction about the properties of a system made of a great number of elementary constituents. The law of increasing entropy therefore expresses a statistical property: the great majority of evolutionary paths lead from less probable to more probable states.

2.1.3 Order, Disorder, and Information

The free expansion of a perfect gas presented in Sect. 2.1.1 can be interpreted as a transition from order to disorder. In fact the initial state (all the molecules are concentrated in V_R) is more ordered than the final state (molecules may be in V_R as well as in V_L). Order is, however, a “subjective” concept. We “know” that all the molecules are in V_R initially, while we do not know at the end where any given molecule is. We can describe the change as a loss of information on the position of the molecules. The expansion is, however, an “objective” phenomenon. We can describe it in terms of increase of entropy. It turns out, as is well known, that the two quantities are proportional, with a minus sign in front.
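The proportionality can be made explicit for the free expansion above (assuming, for definiteness, V_L = V_R = V/2): each molecule ends up in one of two equal halves, so N bits of positional information are lost, while the entropy of the ideal gas grows by

$$\Delta S = N k_B \ln(V/V_R) = N k_B \ln 2 $$

that is, k_B ln 2 of entropy per lost bit: the information decreases exactly as the entropy increases.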

All is clear, therefore, at the macroscopic scale. However, as Loschmidt and Zermelo claimed, at the microscopic scale the motion of any given molecule is “in principle” completely determined, once its initial state is completely given, by its collisions with the other ones and against the walls. Since the forces are conservative, the classical motion of each molecule is reversible and the collective motion of all of them should be equally reversible. Is this claim well founded? What are its implications?

Apart from the consideration that the motion of molecules is not classical, but is ruled by the laws of quantum mechanics (this argument will be discussed in Sect. 2.2.1), it is clear that a physical experiment capable of proving this kind of reversibility will never be possible, even if the number of molecules could be drastically reduced. However, the experiment can be simulated in a computer, and, of course, it works. A whole new field of research (Molecular Dynamics) has developed along these lines.
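The following is a minimal sketch of such a simulated reversal (all parameters are illustrative; it stands in for no specific Molecular Dynamics code): a few particles with a soft pair repulsion are integrated forward with the time-reversible velocity Verlet scheme, their velocities are then flipped, and running the same number of steps brings them back to the initial configuration up to round-off.

```python
import numpy as np

# A few particles with a soft pair repulsion, integrated with velocity Verlet,
# which is exactly time-reversible; flipping the velocities and integrating
# the same number of steps retraces the trajectory back to the start.
rng = np.random.default_rng(0)
n, dt, steps = 8, 1e-3, 5000
x0 = rng.uniform(0.0, 1.0, (n, 2))      # initial positions
v0 = rng.normal(0.0, 1.0, (n, 2))       # initial velocities

def forces(x):
    # soft repulsion: pairs closer than 0.2 push each other apart
    d = x[:, None, :] - x[None, :, :]
    r = np.linalg.norm(d, axis=-1) + np.eye(n)      # eye avoids self-division
    s = np.clip(0.2 - r, 0.0, None) / r
    return 50.0 * (s[..., None] * d).sum(axis=1)

def verlet(x, v, nsteps):
    f = forces(x)
    for _ in range(nsteps):
        v = v + 0.5 * dt * f
        x = x + dt * v
        f = forces(x)
        v = v + 0.5 * dt * f
    return x, v

x1, v1 = verlet(x0, v0, steps)          # forward in time
x2, v2 = verlet(x1, -v1, steps)         # flip velocities and run "backwards"
print(np.abs(x2 - x0).max())            # essentially zero: the motion retraces itself
```

The simulation confirms that the microscopic dynamics retraces itself; for realistically large N and long times, however, round-off errors amplified by the chaotic dynamics make even the numerical reversal impracticable.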

2.2 Reversibility/Irreversibility in the Quantum World

2.2.1 The Role of Chance in Quantum Mechanics

At the level of the dynamics of an individual system, newtonian motion is reversible. Irreversibility is brought in, we have seen, by the necessity of describing, by means of the probability distribution in phase space of classical statistical mechanics, the macroscopic properties of a collection of a great number of particles.

Quantum mechanics, however, had to take into account two new facts. The first one is that different events may follow from apparently equal external and initial conditions; the second one is that it is impossible to fix exactly the value of all the variables of a given system. Position x and momentum p of a particle are the simplest example of incompatible variables. Heisenberg’s principle sets the lower limit of h/4π to the product of their uncertainties.

The solution of the problem, as we all know, was found by loosening the connection between the state of the system and its variables. The former, represented by means of a suitable (wavelike) function, was still completely determined by the initial conditions and the laws of motion, while the latter were left free to acquire at random, with the probabilities given by that function, one of the possible values within their range of variability. The difference from classical statistical mechanics is radical: while in the latter the probabilistic description of a system’s state is simply due to our ignorance of the precise value of its variables, which nevertheless do actually have a precise value, the probabilistic nature of the quantum system’s properties is considered, by the overwhelming majority of physicists, to be “ontological”.

Now the question arises: at what stage does chance come in? The usual answer is: the evolution of the wave is deterministic and reversible, while the measurement brings in randomness and irreversibility. Of course this answer introduces a lot of problems of a fundamental nature: on the role of the “observer”, on the power of man’s mind to manipulate “reality” and so on. I will come back to these questions at the end.

2.2.2 Irreversibility of Quantum Measurement

The origin of irreversibility is therefore generally ascribed to the so-called “wave function collapse” or “reduction of the wave packet” produced by the act of measuring a quantum variable by means of a macroscopic measuring instrument. As is well known, the problem was tackled, in the early days of quantum mechanics, by Bohr, who postulated the existence of classical objects in order to explain how quantum objects could abruptly acquire, in the interaction with them, sharp values of either position or velocity. This dichotomy between classical and quantum worlds was questioned by von Neumann, who insisted that, after all, macroscopic objects should also obey the laws of quantum mechanics.

The problem, in my opinion, should be formulated as follows. On the one hand we have microscopic objects (quantons) which have context dependent properties. This means that these properties, which have generally blunt values, only occasionally, but not simultaneously, may acquire sharp values. This happens when a quanton interacts with a suitable piece of matter which constrains it to assume, at random but with a given probability, a sharp value. On the other hand our everyday experience shows that macroscopic objects have context independent properties. It becomes therefore necessary to prove that the existence of macroscopic pieces of matter with context independent properties is not a postulate (as Bohr assumed) but follows from the equations of quantum mechanics themselves.

2.2.3 Ontic and Epistemic Uncertainties

This question was investigated and answered by my group in Rome 20 years ago in two papers [20, 21], which at the time received some attention (Nature dedicated a whole page of comment to the second one [22]). It is, however, fair to give credit to K. Gottfried [23] for having correctly approached the problem many years before.

In these papers we proved that when a quanton P in a given state interacts with a suitable “instrument” S_q made of N quantons, the difference between the probabilistic predictions of quantum mechanics on the possible outcomes of this interaction and the predictions of classical statistical mechanics, for an ideal statistical ensemble in which a classical instrument S_c replaces S_q (with the same values of its macroscopic variables), tends to vanish when N becomes very large (≫1). This means that, after all, Bohr was right in assuming that classical bodies exist. Needless to say, our result proved also that Schrödinger’s cat cannot be at the same time dead and alive, simply because it is a macroscopic “object”.

A similar problem—namely whether a single particle, whose wave function is represented by two distant wave packets, materializes instantly in one or the other only when its position is measured—has been investigated, I believe with success, by Maurizio Serva and myself a few years ago [24], and further clarified in collaboration with Philippe Blanchard [25]. In this case we can explicitly calculate the uncertainties Δx and Δp of position and momentum, which appear in the well known general expression of the Heisenberg uncertainty principle. The standard inequality becomes

$$(\Delta x \Delta p)^2 = (h/4\pi)^2 + \bigl[(\Delta x \Delta p)_{\mathrm{csm}}\bigr]^2 $$

where (Δx Δp)_csm is the uncertainty product of the corresponding probability distribution of classical statistical mechanics. The second term, therefore, expresses an epistemic uncertainty, while the first one expresses the irreducible nature of chance at the quantum level.
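A numerical sketch illustrates the point (illustrative parameters only; this is not the calculation of [24, 25]): for a single particle whose wave function consists of two well-separated Gaussian packets, the uncertainty product computed below lies far above the Heisenberg bound, and the excess grows with the separation of the packets, i.e. with a purely “which packet” ignorance of classical, epistemic origin.

```python
import numpy as np

# One particle whose wave function is two well-separated Gaussian packets.
# Units with hbar = 1, so the Heisenberg bound h/4pi reads 1/2.
hbar = 1.0
sigma, a = 1.0, 20.0                                 # packet width, half-separation
x = np.linspace(-60.0, 60.0, 2**14)
dx = x[1] - x[0]
psi = np.exp(-(x - a)**2 / (4 * sigma**2)) + np.exp(-(x + a)**2 / (4 * sigma**2))

def spread(values, prob):
    prob = prob / prob.sum()                         # normalize the distribution
    mean = np.sum(values * prob)
    return np.sqrt(np.sum((values - mean)**2 * prob))

dx_q = spread(x, np.abs(psi)**2)                     # position uncertainty
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)  # momentum grid of the DFT
dp_q = spread(p, np.abs(np.fft.fft(psi))**2)         # momentum uncertainty

print(dx_q * dp_q, hbar / 2)    # product far above the bound: the excess tracks the
                                # packet separation, a "which packet" ignorance
```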

This interpretation of the uncertainty principle solves the paradox of the particle localization in one of the two distant isolated wave packets. In fact we can conclude that the particle actually was in one or the other even before the measurement was performed, because the large Δx has a purely classical epistemic origin.

This also clarifies the different nature of ontic randomness and epistemic randomness. The first one is reversible (no dissipation); the second one is irreversible (the loss of information [entropy] increases with time).

Here again we are faced with the old debate (BLZ): how can a macroscopic system (particle + instrument) made of a great number of microscopic objects acquire a property (irreversibility of evolution) that its elementary components do not have? The answer is the same: the great majority of evolutionary paths lead from less probable to more probable states.

2.2.4 A Unified Statistical Description of the Quantum World

If randomness has an irreducible origin the fundamental laws should allow for the occurrence of different events under equal conditions. The language of probability, suitably adapted to take into account all the relevant constraints, seems therefore to be the only language capable of expressing this fundamental role of chance. If the probabilistic nature of the microscopic phenomena is fundamental, and not simply due to our ignorance as in classical statistical mechanics, it should be possible to describe them in probabilistic terms from the very beginning.

The proper framework in which a solution of the conceptual problems discussed above should be looked for is therefore, after all, the birthplace of the quantum of action, namely phase space, where no probability amplitudes exist. It is of course clear that joint probabilities for both position and momentum having sharp given values cannot exist in phase space, because they would contradict the uncertainty principle. Wigner [26], however, introduced the functions called pseudoprobabilities (which may also assume negative values) to represent quantum mechanics in phase space, and showed that by means of them one can compute any physically meaningful statistical property of quantum states. It therefore seems reasonable to consider these functions not only as useful tools for computations, but as a framework for looking at quantum mechanics from a different point of view.
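As a small illustration of why these functions are pseudoprobabilities rather than probabilities, the sketch below (illustrative parameters, ħ = 1, state left unnormalized since only the sign matters) evaluates the Wigner function of a two-packet state at two phase-space points and finds a negative value in the interference region between the packets.

```python
import numpy as np

# Wigner pseudoprobability of a two-packet state, evaluated on one packet
# and in the region between the packets (hbar = 1, unnormalized state).
hbar, sigma, a = 1.0, 1.0, 5.0
y = np.linspace(-40.0, 40.0, 20001)
dy = y[1] - y[0]

def psi(x):
    return np.exp(-(x - a)**2 / (4 * sigma**2)) + np.exp(-(x + a)**2 / (4 * sigma**2))

def wigner(x, p):
    # W(x, p) = (1/pi hbar) * integral of psi*(x+y) psi(x-y) exp(2ipy/hbar) dy
    integrand = np.conj(psi(x + y)) * psi(x - y) * np.exp(2j * p * y / hbar)
    return (integrand.sum() * dy / (np.pi * hbar)).real

print(wigner(a, 0.0))                  # positive on top of one packet
print(wigner(0.0, np.pi / (2 * a)))    # negative in the interference region
```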

This program has recently been carried out [27] by generalizing the formalism of classical statistical mechanics in phase space with the introduction of a single quantum postulate, which imposes mathematical constraints on the set of variables in terms of which any physical quantity can be expressed (usually denoted as characteristic variables). It turns out, however, that these constraints cannot be fulfilled by ordinary random numbers, but are satisfied by the mathematical objects that Dirac called q-numbers. The introduction of these q-numbers into quantum theory is therefore not assumed as a postulate from the beginning, but is a consequence of a well defined physical requirement. The whole structure of quantum mechanics in phase space is therefore deduced from a single quantum postulate without ever introducing wave functions or probability amplitudes.

This approach has some advantages. First of all, many paradoxes typical of wave-particle duality disappear. On the one hand, in fact, as already shown by Feynman [28], it becomes possible to express the correlations between two distant particles in terms of the product of two pseudoprobabilities independent of each other. All the speculations on the nature of a hypothetical superluminal signal between them therefore become meaningless. Similarly, the long debated question of the meaning of the superposition of state vectors for macroscopic objects may also be set aside as equally baseless.

Secondly, this approach eliminates the conventional hybrid procedure for describing the dynamical evolution of a system, which consists of a first stage in which the theory provides a deterministic evolution of the wave function, followed by a construction by hand of the physically meaningful probability distributions. The direct deduction of Wigner functions from first principles therefore solves a puzzling unanswered question which has worried beginners approaching the study of our fundamental theory of matter for the past 75 years, namely “Why should one take the modulus squared of a wave amplitude in order to obtain the corresponding probability?” We can now say that there is no longer need of an answer, because there is no longer any need to ask the question.

Finally, it should be stressed that the approach suggested here does not, of course, put in question the practical use of the formalism of quantum mechanics. From a conceptual point of view, however, the elimination of the waves from quantum theory is in line with the procedure inaugurated by Einstein with the elimination of the ether from the theory of electromagnetism. Maybe it can provide a new way of musing on the famous statement of Feynman: “It is fair to say that nobody understands quantum mechanics”. [Footnote 2]