
1 From CPT Invariance Violation and Cosmic Asymmetry to the Fundamental Concept of Entropy

Let us start with a brief review of some fundamental concepts of modern physics in which the role of time enters as a fundamental part of the study of nature. We begin with a few remarks on the development of thermodynamics, before presenting the second law and the related concept of entropy, where for the first time the possibility appears that time could be irreversible. The other context in which the notion of an “arrow of time” enters (i.e. the fact that the physical laws governing the universe should not be invariant with respect to time reversal) is cosmology, and particularly the quantum theory of space-time singularities, which leads one to consider the existence of a cosmic asymmetry between matter and antimatter as the realistic scenario followed by the universe since its origins. Finally, there has been a very surprising result in quantum field theory: the 1956 discovery of parity (P) non-conservation in weak interaction phenomena. Even more surprising was the discovery of CP violation in 1964. It shattered the belief that invariance under time reversal, under charge conjugation and under space inversion or mirror symmetry are general principles to be satisfied separately by the equations of motion, and hence firm roots in the foundations of physics, and it opened up questions concerning its origin and its profound implications for our conception of physics and nature. These questions have not yet been answered satisfactorily despite an enormous effort in theoretical and experimental physics. Nevertheless, the developments of physics and of the other natural sciences in the last two decades have led to the belief that the violation of CPT invariance is needed to deal with interactions that are not invariant under one or more of these transformations.

One should distinguish two aspects of the violation of the three fundamental symmetries of nature, namely time reversal invariance (T), charge conjugation invariance (C) and space inversion invariance (P). The first concerns the consequences of T invariance for those properties of matter that depend on electromagnetic and strong interactions, and even on the grosser features of the weak interactions; the other concerns the violation of CP invariance and T invariance in some special aspects of the weak interactions. The ability to separate these two aspects rests on the fact that the observed violation is an extremely small effect, not influencing in a (so far) measurable way even high-precision weak-interaction measurements other than those specific, particularly sensitive ones by means of which CP violation was discovered. Nevertheless, most physicists believe firmly in the notion of a theory that unifies the electromagnetic, weak, and strong interaction phenomena at some level. At that level the separation of the physical phenomena into two classes should likely become meaningless. And the very fact that the observed violation occurs in such a limited though very meaningful way suggests that the level of unification at which T violation originates in a fundamental way must be very deep indeed. Therefore its elucidation may have profound implications for our understanding of the nature of physical theories. It may also have important implications for cosmology, and notably for our present conception of the structure of space and time.

1.1 Arrows of Time and Their Relations

So in our universe, as we find it, there are at least five arrows of time. Physicists do not yet know how they are interrelated. The preferred time direction on the subatomic microlevel, in certain weak interactions involving K-mesons, is still a mystery. It may have no connection with the macroscopic arrows, just as the handedness of particles seems to have no connection with the handedness of molecules, and the handedness of molecules in turn has no bearing on the bilateral symmetry of the human body. On the macrolevel there are four arrows. First, there is the entropy arrow, which has a precise technical definition in both thermodynamic theory and information theory. The notion of entropy was introduced into thermodynamics by Rudolf Clausius; its statistical interpretation was given by the 19th-century Austrian physicist Ludwig Boltzmann, the founder of statistical thermodynamics, whose starting point was the study of a system of gas molecules moving about randomly in a closed container. According to his vision, entropy is the principal foundation for the arrow of time. We can think of it in a rough way as a measure of disorder—the absence of pattern. The “information” content of a system, roughly speaking, is a measure of order (see below for a mathematical definition). The two measures vary inversely. If the entropy of a system goes up, its information content goes down, and vice versa.

We suggest distinguishing between two classes of phenomena and events in which time acts in a fundamental way. In one case we use the term geometrical arrow for those processes in which order is increasing. Such processes are deeply grounded in historical as well as in biological evolution. The formation of matter, moving in an orderly fashion outward from the site of the big bang, was the first gigantic instance of an event stamped with the geometrical arrow. The evolutions of stars and planets are later examples. The formation of strongly ordered crystals is another example. Finally, the energy radiating from a highly ordered sun allowed the rise and proliferation of life, the most highly patterned thing we know. The entropy arrow points the opposite way with respect to order, and hence applies to those natural phenomena which evolve towards disorder. Let us now mention the other arrows of time. Second, there is the arrow defined by events radiating from a centre, like expanding circular ripples on a pond or energy radiating from a star. This kind of arrow (for example, those concerned with dissipative chaotic systems) seems to derive from the probability of initial or boundary conditions. Third, there is the expansion of the universe, or the cosmic arrow. Fourth, there is the psychological arrow of consciousness. (For some remarks about the last two arrows of time, see below.)

1.2 The Fundamental Principle of Entropy in Thermodynamics Theory

The second law of thermodynamics has “various formulations”, but they all lead to the existence of an entropy function whose reason for existence is to tell us which processes can occur and which cannot. We shall take the existence of entropy itself as the statement of the second law. The entropy we are talking about is that defined by thermodynamics, and not some analytic quantity that appears in information theory, probability theory and statistical mechanical models. The statement of the first law of thermodynamics is essentially the statement of the principle of the conservation of energy for thermodynamical systems. As such, it may be expressed by stating that the variation in energy of a system during any transformation is equal to the amount of energy that the system receives from its environment. Briefly, it is a concept that provides the connection between mechanics (and things like falling weights) and thermodynamics. The first law arose as the result of the impossibility of constructing a machine that could create energy. However, it places no limitations on the possibility of transforming energy from one form into another. Thus, for instance, on the basis of the first law alone, the possibility of transforming heat into work or work into heat always exists, provided the total amount of heat is equivalent to the total amount of work.

The three popular formulations of the second law are: (i) No process is possible the sole result of which is that heat is transferred from a body to a hotter one (postulate of Clausius). (ii) No process is possible the sole result of which is that a body is cooled and work is done (postulate of Kelvin and Planck). (iii) In any neighbourhood of any state there are states that cannot be reached from it by an adiabatic process (postulate of Carathéodory). All three formulations are supposed to lead to the entropy principle (defined below).

Definition

A state Y is adiabatically accessible from a state X, in symbols X ≺ Y, if it is possible to change the state from X to Y by means of an interaction with some device consisting of some auxiliary system and a weight, in such a way that the auxiliary system returns to its initial state at the end of the process, whereas the weight may have risen or fallen.

We could have (in principle, at least) both X ≺ Y and Y ≺ X, and we could call such a process a reversible adiabatic process. Let us write X ≺≺ Y if X ≺ Y but not Y ≺ X. In this case we say that we can go from X to Y by an irreversible adiabatic process. If X ≺ Y and Y ≺ X (i.e., X and Y are connected by a reversible adiabatic process), we say that X and Y are adiabatically equivalent and write X ∼ Y.

Entropy Principle

There is a real-valued function on all states of all systems (including compound systems) called entropy, denoted by S, such that:

(a) Monotonicity: When X and Y are comparable states, then X ≺ Y if and only if S(X) ≤ S(Y).

(b) Additivity and extensivity: If X and Y are states of some (possibly different) systems and if (X,Y) denotes the corresponding state in the compound system, then the entropy is additive for these states; i.e., S(X,Y) = S(X) + S(Y). S is also extensive; i.e., for each λ>0 and each state X and its scaled copy λX ∈ Γ^(λ) (where Γ is the space of states of the system), S(λX) = λS(X).

A formulation logically equivalent to (a) is the following pair of statements: X ∼ Y ⇒ S(X) = S(Y), and X ≺≺ Y ⇒ S(X) < S(Y). The last statement is especially noteworthy. It says that entropy must increase in an irreversible adiabatic process. Irreversibility then means that for each X ∈ Γ there is a point Y ∈ Γ such that X ≺≺ Y.

The reversibility of time in elementary physical processes (both in classical and in quantum mechanics, as well as in the relativistic theories) is commonly accepted and very well established; that means that the fundamental laws of physics are invariant under time reversal. However, it is an obvious fact that most phenomena in Nature distinguish a direction of time; time is irreversible in complex systems. Electromagnetic waves are observed in their retarded form only, where the fields causally follow from their sources. The increase of entropy, as expressed in the second law of thermodynamics, also defines a time direction. This is directly connected with the psychological arrow of time—we remember the past but not the future. In quantum mechanics it is the irreversible measurement process, and in cosmology the expansion of the universe, as well as the local growth of inhomogeneities, which determine a direction of time.

1.3 Irreversibility of Complex Systems

In order to make clear the irreversible character of most complex systems, let us consider a simple case of a droplet of ink added to water in a jar. The droplet spreads out rapidly, so that the colour becomes uniform in the entire vessel. Anyone can observe these phenomena. However, no one has ever seen a process developing in the opposite direction: ink particles collecting from the whole volume into a single droplet. Take now an iron rod, heat it and then put it into a vessel with cold water. The rod will cool down, the water will get warmer and their temperatures will become equal. The process always goes this way. Heat is never transferred from cold water to hot iron, raising its temperature still further. This is another example of an irreversible process, similar to the spreading of a droplet. Why does irreversibility always arise in all such processes, even though they are composed of particle motions that are definitely time-reversible? Where and how does reversibility perish?

The answer to that question, as we have seen above, lies in the second law of thermodynamics discovered by the physicists Rudolf Clausius and William Thomson. Their thermodynamic ideas were then developed and extended by Ludwig Boltzmann, who uncovered the meaning of the second law. Heat is, in fact, the chaotic motion of the atoms and molecules of which material bodies consist. Hence the transition of the energy of mechanical motion of individual constituents of the system into heat signifies the transition from the organised motion of large parts of the system to the chaotic motion of the smallest particles; this means that an increase in chaos is inevitable owing to the random motion of particles, unless the system is influenced from outside so as to maintain the level of order. Boltzmann showed that the measure of chaos in a system is a quantity called entropy. The greater the chaos, the higher the entropy. The transition of different types of motion of matter into heat means that entropy grows. When all forms of energy have transformed into heat, and this heat has spread uniformly through the system, this state of maximum chaos ceases to change with time and corresponds to maximum entropy.
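Boltzmann’s picture can be illustrated by a small numerical experiment. The following Python sketch (our own illustration; the box size, cell number and step count are arbitrary choices) follows random walkers that all start at one point, like the ink droplet of the preceding paragraph, and tracks the entropy −Σ p ln p of their coarse-grained distribution over cells. Although each individual step is perfectly reversible, the coarse-grained entropy climbs towards its maximum value, the logarithm of the number of cells.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES, N_CELLS, L = 10_000, 20, 200   # arbitrary illustrative sizes

# All "ink" particles start at the centre of the box: a low-entropy state.
pos = np.full(N_PARTICLES, L // 2)

def coarse_entropy(positions):
    """Entropy -sum p ln p of the coarse-grained occupation of N_CELLS cells."""
    counts, _ = np.histogram(positions, bins=N_CELLS, range=(0, L))
    p = counts[counts > 0] / len(positions)
    return -(p * np.log(p)).sum()

for t in range(5001):
    if t % 1000 == 0:
        print(f"t={t:4d}  S={coarse_entropy(pos):.3f}  (max ln {N_CELLS} = {np.log(N_CELLS):.3f})")
    # one unbiased, microscopically reversible step per particle;
    # reflecting walls keep the particles inside the box
    pos = np.clip(pos + rng.choice((-1, 1), size=N_PARTICLES), 0, L - 1)
```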

This is the gist of the matter! In complex systems consisting of many particles or other elements, disorder (chaos) inevitably increases as a result of the random nature of numerous interactions. Entropy is that very measure of the degree of chaos. It is very important that when we create a more ordered state in a system by influencing it from outside (that is, from within some larger system), we inevitably insert additional disorder into that larger system. The laws of thermodynamics state that the “chaos” added to the larger system is inevitably greater than the “order” introduced into the smaller system. Hence the “chaos”, and “entropy”, of the whole world must grow, even though order may be established in some parts of the world. One realises then that the second law of thermodynamics is of great importance for the evolution of the universe. Indeed, exchange of energy between the world and “other systems” being impossible, the universe must be treated as an isolated system. Therefore, all types of energy in the universe must ultimately convert to heat spread uniformly through matter, after which all macroscopic motion peters out. Even though the law of conservation of energy is not violated (the energy does not disappear but remains in the form of heat), the energy “loses all force”: it loses any possibility of transformation, any possibility of doing the work of motion. This bleak state became known as the “thermal death” of the universe. The irreversible process in the universe is thus the growth of entropy. The question, however, remains open: can this process entirely dictate the direction of the flow of time? I suspect that we shall have to search for some other key feature of time and of space-time if we want to be able to give a satisfactory answer to this question.

For the moment, we may ask: how can one understand that most phenomena distinguish a direction of time? One of the most interesting answers likely lies in the possibility of very special boundary conditions, such as an initial condition of low entropy (see [6]). Such an assumption transcends the Newtonian separation into laws and boundary conditions by also seeking physical explanations for the latter. Where lies the key to the understanding of the irreversibility of time? According to Roger Penrose, it is primarily the high unoccupied entropy capacity of the gravitational field that allows for the emergence of structure far from thermodynamical equilibrium. As he has stressed, the presence and the apparent structure of space-time singularities contain the key to the solution of one of the long-standing mysteries of physics: the origin of the arrow of time [7]. He has emphasised that the statistical notion of entropy is crucial for the discussion of time-symmetry. And if the fundamental local laws are all time-symmetric, then the place to look for the origin of statistical asymmetries is in the boundary conditions. This assumes that the local laws are of such a form that, like Newtonian theory, standard Maxwell–Lorentz theory, Hamiltonian theory, Schrödinger’s equation, etc., they determine the evolution of the system once we have boundary conditions either in the past or in the future. Then the statistical arrow of time can arise via the fact that, for some reason, the initial boundary conditions have an overwhelmingly lower entropy than do the final boundary conditions. Penrose has convincingly shown that the expansion of the universe cannot, in itself, be responsible for the entropy imbalance either. Accordingly, the arrows of entropy and retarded radiation can be explained if a reason is found for the initial state of the universe (the big bang singularity) to be of comparatively low entropy and for the final state to be of high entropy. Consequently some low-entropy assumption does need to be imposed on the big bang; the mere fact that the universe expands away from a singularity is in no way sufficient. We need some assumption on initial singularities that rules out those which would lie at the centres of white holes. But what is it in the nature of the big bang that is of “low entropy”? The answer to this question lies in the unusual nature of gravitational entropy.

Many authors have pointed out that gravity behaves in a somewhat anomalous way with regard to entropy. This is true just as much for Newtonian theory as for general relativity. Thus, in many circumstances in which gravity is involved, a system may behave as though it has a negative specific heat. This is directly true in the case of a black hole emitting Hawking radiation, since the more it emits, the hotter it gets (its energy decreases). This is essentially an effect of the universally attractive nature of the gravitational interaction. As a gravitating system “relaxes” more and more, velocities increase and the sources clump together—instead of uniformly spreading throughout space in a more familiar high-entropy arrangement. With other types of forces, their attractive aspects tend to saturate (such as with a system bound electromagnetically), but this is not the case with gravity. Only non-gravitational forces can prevent parts of a gravitationally bound system from collapsing further inwards as the system relaxes. Kinetic energy itself can halt collapse only temporarily. In the absence of significant non-gravitational forces, when dissipative effects come further into play, clumping becomes more and more marked as the entropy increases. Finally, maximum entropy is achieved with collapse to a black hole.

Consider a universe that expands from a “big bang” singularity and then re-collapses to an all-embracing final singularity. The entropy in the late stages ought to be much higher than the entropy in the early stages. How does this increase in entropy manifest itself? In what way does the high entropy of the final singularity distinguish it from the big bang, with its comparatively low entropy? We may suppose that, as is apparently the case with the actual universe, the entropy in the initial matter is high. The kinetic energy of the big bang, also, is easily sufficient (at least on average) to overcome the attraction due to gravity, and the universe expands. But then, relentlessly, gravity begins to win out. The precise moment at which it does so, locally, depends upon the degree of irregularity already present, and probably on various other unknown factors. Then clumping occurs, resulting in clusters of galaxies, galaxies themselves, globular clusters, ordinary stars, planets, white dwarfs, neutron stars, black holes, etc. The elaborate and interesting structures that we are familiar with all owe their existence to this clumping, whereby the gravitational potential energy begins to be taken up and the entropy can consequently begin to rise above the apparently very high value that the system had initially. This clumping must be expected to increase; more black holes are formed; smallish black holes swallow material and congeal with each other to form bigger ones. This process accelerates in the final stages of re-collapse when the average density becomes very large again, and one must expect a very irregular and clumpy final state.

As Roger Penrose [3] has emphasised, there is very likely a qualitative relation between gravitational clumping and an entropy increase due to the taking up of gravitational potential energy. In terms of space-time curvature, the absence of clumping corresponds to the absence of Weyl conformal curvature (since absence of clumping implies spatial isotropy, and hence no gravitational principal null-directions). When clumping takes place, each clump is surrounded by a region of nonzero Weyl curvature. As the clumping gets more pronounced owing to gravitational contraction, new regions of empty space appear with Weyl curvature of greatly increased magnitude. Finally, when gravitational collapse takes place and a black hole forms, the Weyl curvature in the interior region is larger still and diverges to infinity at the singularity. In other words, Penrose formulated his Weyl tensor hypothesis: the Weyl tensor vanishes at singularities in the past but not at those in the future. The Weyl tensor is that part of the Riemann tensor which is not fixed by the field equations (into which only the Ricci tensor enters) but by the boundary conditions only. It describes the degrees of freedom of the gravitational field. Since it vanishes exactly for a homogeneous and isotropic Friedmann universe, it can be taken as a heuristic measure of inhomogeneity and, therefore, of gravitational entropy.

2 Symmetry and Symmetry Breaking in Nature

2.1 The Meanings of Symmetry

In general terms, what symmetry means is that the (physical) system possesses the possibility of a change that leaves some aspect of the system unchanged. Symmetry of the laws of nature is connected with conservation. There are a number of conservations, called “conservation laws”, that hold for quasi-isolated systems. The best known of them are conservation of energy, conservation of linear momentum, conservation of angular momentum and conservation of electric charge. What is meant is that, if the initial state of any quasi-isolated physical system is characterised by having definite values for one or more of those quantities, then any state that evolves naturally from that initial state will have the same values for those quantities. The conceptual definition of symmetry can thus be stated: symmetry is immunity to a possible change. We can point out the following two essential components of symmetry: 1. Possibility of change. 2. Immunity. If a change is possible but some aspect of the system is not immune to it, we have asymmetry. The system can be said to be asymmetric under the change with respect to that aspect.

The symmetry principle is fundamental to the applications of symmetry in science, and especially in physics. It states that the symmetry group of the cause is a subgroup of the symmetry group of the effect. In other words: the effect is at least as symmetric as the cause. However, this principle is in many situations apparently contradicted by the phenomenon of “spontaneous symmetry breaking”. There appear to be cases of physical systems where the effect simply has less symmetry than the cause, where the symmetry of the cause is possessed by the effect only as a badly broken symmetry, so that the exact symmetry group of the effect is a subgroup of the symmetry group of the cause, rather than vice versa. In fact, what is assumed to be the exact symmetry of the cause is really only an approximate symmetry. How do small, symmetry-breaking perturbations of a cause affect the symmetry of the effect? What can be said about the symmetry of an effect relative to the approximate symmetry of its cause? That depends on the actual nature of the physical system, on whatever it is that links cause and effect in each case. But we can consider the possibilities.

1. Stability. The deviation from the exact symmetry limit of the cause, introduced by the perturbation, is “damped out”, so that the approximate symmetry group of the cause is the minimal symmetry group of the effect.

2. Lability. The approximate symmetry group of the cause is the minimal approximate symmetry group of the effect, of more or less the same goodness of approximation.

3. Instability. The deviation from the exact symmetry limit of the cause, introduced by the perturbation, is “amplified”, and the minimal symmetry of the effect is only the exact symmetry of the cause (including perturbation), with the approximate symmetry of the cause appearing in the effect as a badly broken symmetry. That is what is commonly called spontaneous symmetry breaking. Thus, although symmetric causes must produce symmetric effects, nearly symmetric causes need not produce nearly symmetric effects: a symmetry problem need have no stable symmetric solutions.

2.2 Examples of Symmetry Breaking

As an example of instability, we can take the solar system, its origin and evolution. Modern theory has the solar system originating as a rotating cloud of approximate axial symmetry and reflection symmetry with respect to a plane perpendicular to its axis. If that state of what is now the solar system is taken as the cause, the present state can be taken as the effect. Any axial symmetry the proto-solar system had has clearly practically disappeared during the course of evolution, leaving the solar system as we now observe it. The random, symmetry-breaking fluctuations in the original cloud grew in importance as the system evolved, until the original axial symmetry became hopelessly broken. Consider now, for another example of spontaneous symmetry breaking, a volume of liquid at rest in a container; such a liquid is isotropic, which is to say that its physical properties are independent of direction, hence it is a symmetric system. A small crystal of the frozen liquid thrown into the liquid breaks the symmetry, but it soon melts and isotropy returns. However, when the liquid is cooled to below its freezing point, the situation alters drastically. If we now throw in a crystal, the supercooled liquid will immediately crystallise and thus become highly anisotropic. In the subfreezing temperature range the system is unstable with respect to isotropy: any anisotropic perturbation is immediately amplified until the whole volume becomes anisotropic and stays that way. The cooler the liquid (below its freezing point), the greater its instability. The freezing point is the boundary between the temperature range of stability and that of instability.

It must be pointed out that one of the most important upheavals in the scientific vision of nature in the last century has been the discovery that spontaneous symmetry breaking, bifurcations and singularities are three mechanisms which play a fundamental role in the organisation of physical and living matter and the unfolding of natural phenomena. These mechanisms are deeply related, because each time that a physical or living system bifurcates, the immediate consequence is that the symmetry of the system breaks down and in its place a new, broader symmetry appears. Besides, the fact that a system may bifurcate at some moment of its evolution means that its unfolding ceases to be (mathematically speaking) continuous or linear and becomes discontinuous and non-linear. In many situations, this non-linearity (of partial differential equations) leads to the emergence of new order–disorder transition phenomena, which exhibit non-equilibrium states mathematically expressible by time-dependent equations, and it is a source of instability, bifurcation and symmetry-breaking phenomena. Many of these macroscopic and local dynamical laws and phenomena manifest time asymmetry or irreversibility, which is a feature of key significance. Let me first mention some examples and fields in which spontaneous symmetry breaking manifests itself as a primary feature of the problem.

Example 1

(Morphogenesis and molecular biology)

A striking example of symmetry breaking in a biological system is the breakdown of rotational symmetry in the Fucus seaweed egg. At a critical stage in the development of the egg a transition is made from a spherically symmetric membrane potential distribution to a polarised state with an axial symmetry, and a net trans-cellular current leaving one pole and entering the opposite. This phenomenon (or effect) is termed “self-electrophoresis”. The net trans-cellular potential gradient is believed to be essential in the development of the asymmetry that leads to dramatically different rhizoid and thallus cells after the first division of the egg. The symmetry breakdown in the Fucus egg is of the form rotational invariance to axial invariance. That is, prior to self-electrophoresis the solutions are invariant under the entire rotation group O(3), while the bifurcating solutions are invariant only under a subgroup of rotations about a fixed axis. The solutions thus appear in two-dimensional orbits with one-dimensional isotropy subgroup. This, however, is by no means the only symmetry breakdown that can occur in rotationally invariant systems.

Processes underlying the growth and reproduction of living organisms seem to be governed by a fundamental asymmetrical structure. In particular, sister cells can be born different by an asymmetric cell division. At each stage in its development, a cell in an embryo is presented with a limited set of options according to the state it has attained: the cell travels along a developmental pathway that branches repeatedly. At each branch in the pathway it has to make a choice, and its sequence of choices determines its final destiny. In this way, a complicated array of different cell types is produced. To understand development, we need to know how each choice between options is controlled, and how those options depend on the choices made previously. To reduce the question to its simplest form: how do two cells with the same genome come to be different? When a cell undergoes mitosis, both of the resulting daughter cells receive a precise copy of the mother cell’s genome. Yet those daughters will often have different specialised fates, and, at some point, they or their progeny must acquire different characters. In some cases, the two sister cells are born different as a result of an asymmetric cell division, in which some significant set of molecules is divided unequally between the two daughter cells at the time of division. This asymmetrically segregated molecule (or set of molecules) then acts as a determinant for one of the cell fates by directly or indirectly altering the pattern of gene expression within the daughter cell that receives it. Asymmetric divisions are particularly common at the beginning of development, when the fertilised egg divides to give daughter cells with different fates, but they also occur at later stages—in the genesis of nerve cells, for example.

Example 2

(Wave propagation in neural networks)

Bifurcation phenomena in simple mathematical models of excitatory–inhibitory neural networks have been discussed recently by many people (see, for instance, [1]). Neural networks are aggregates of nerve cells which interact with other neurones in the network in either an excitatory or inhibitory way, and so it is plausible to expect these networks to exhibit such non-linear collective phenomena as bifurcation, threshold effects, and hysteresis. One can model these networks by a system of equations

$$ \mu \dot{Y} = - Y + S(KY + P) $$
(1)

where Y is a two-component vector, S is a non-linear vector-valued function, K is a linear convolution operator, and P is the external stimulus. Equation (1) may be studied in one, two, or three dimensions. Some neurophysiologists seek to model the patterns of activity of the central nervous system by showing how organised space-time neuronal activity patterns can arise through the mechanisms of bifurcation from an initially uniform resting state. They investigate the structure of the bifurcation when two pairs of complex conjugate eigenvalues cross the imaginary axis simultaneously. In that case one gets secondary bifurcation as some of the parameters in the problem are varied. J.D. Cowan and G.B. Ermentrout [2] have treated hallucinatory phenomena from the standpoint of symmetry-breaking bifurcations. Recent experiments on mescaline-induced hallucinations have led to the conclusion that most simple hallucinations could be classified into one of four categories: (a) grating, lattice, honeycomb or chessboard; (b) cobweb; (c) funnel, tunnel, cone or vessel; (d) spiral. Cowan and Ermentrout base their analysis on the contention that simple formed hallucinations arise from an instability of the resting state leading to concomitant spatial patterns of activity in the cortex. This instability arises from a combination of enhanced excitatory modulation and decreased inhibition. They demonstrate that such spatial patterns are a property of neural nets with long strong lateral interactions acting to provide a dominant negative feedback. They formalise these postulates into a simple mathematical model and then use bifurcation theory to demonstrate the existence of the relevant spatial patterns.
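To give a feeling for how a spatial pattern can bifurcate from the uniform resting state in Eq. (1), here is a minimal one-dimensional, scalar caricature in Python. It is not the Cowan–Ermentrout model itself: the kernel, the sigmoid and all parameter values are illustrative guesses of our own, chosen only so that local excitation together with dominant lateral inhibition destabilises the uniform state.

```python
import numpy as np

# Scalar one-dimensional caricature of Eq. (1), mu * dY/dt = -Y + S(K*Y + P),
# on a ring of n points.  Kernel, sigmoid and every number below are
# illustrative guesses, not values from the Cowan--Ermentrout papers.
n, mu, dt = 256, 1.0, 0.05
x = np.arange(n)
d = np.minimum(x, n - x)                            # distances on the ring
kernel = 3.0 * np.exp(-d**2 / 8.0) - 1.2 * np.exp(-d**2 / 72.0)

S = lambda u: 1.0 / (1.0 + np.exp(-(u - 1.0)))      # sigmoidal firing rate
P = 0.5                                             # uniform external stimulus

Y = 0.01 * np.random.default_rng(1).standard_normal(n)   # noisy rest state
for _ in range(4000):
    KY = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(Y)))  # K * Y
    Y += (dt / mu) * (-Y + S(KY + P))

# With these numbers the uniform state is linearly unstable and Y relaxes
# to a spatially periodic ("striped") profile; the spread between min and
# max signals the symmetry-breaking bifurcation.
print("Y min/max after relaxation:", Y.min(), Y.max())
```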

The relevant spatial patterns are none other than those crystallographic patterns that have already made their appearance in the Bénard problem, with one additional factor. Experimental observations have established that in primates there is a conformal transformation from the retinal field, which is circular, to the cortical field, which has Cartesian (rectangular) symmetry. This implies that the transformation from retinal polar co-ordinates to cortical rectangular co-ordinates must be essentially logarithmic in nature. Such a logarithmic transformation would take a tunnel pattern consisting of concentric circles of activity to a pattern of rolls parallel to the y-axis. Similarly, spirals are transforms of rolls with some other direction. Thus the patterns observed in hallucinatory phenomena are images under the log transformation of the cellular patterns familiar in the analysis of the Bénard problem: hexagons, squares, rectangles, and rolls. One can in fact assume that, as some parameter λ increases, the strength of the excitation increases until, beyond some critical value λ_c, the rest state becomes unstable and gives way to the stationary patterns of spatial activity. Thus, according to this theory, the drug-induced hallucinatory patterns are precisely those which one would see when Euclidean invariance is broken.
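The geometrical claim made here, that the retino-cortical map sends tunnel patterns to rolls, amounts to the statement that w = log z maps circles |z| = r to vertical lines Re w = log r. A few lines of Python (with arbitrary radii and sample angles) confirm it:

```python
import numpy as np

for r in (0.5, 1.0, 2.0):
    angles = np.linspace(0.0, 2 * np.pi, 8)
    w = np.log(r * np.exp(1j * angles))   # image of the retinal circle |z| = r
    print(f"circle r={r}: Re(w) =", np.round(w.real, 4))   # a constant: log r
```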

Example 3

(Phase transitions in statistical mechanics)

The notion of symmetry breaking is fundamental to phase transitions, yet much harder to treat mathematically. Until the renormalisation theories developed in the last two decades, the primary approach to phase transitions was, in one way or another, a mean-field approximation coupled with a bifurcation analysis of the mean-field equations. The simplest mean-field theories for critical phenomena were the scalar equations of state, such as the Van der Waals equation for a gas or the Curie–Weiss model for a ferromagnet. In more elaborate theories the state of the ensemble is described, for example, by a single-particle density function, and an integral equation is derived for this function by some kind of closure hypothesis for the hierarchy of higher-order (multiple-particle) correlation functions. Nevertheless, these approximations are still mean-field theories, and depend, for their validity, on the assumption that fluctuations are negligible; the major difficulty is that in many cases large fluctuations become important precisely at the critical point. In fact, at a critical point the fluctuations very often diverge to infinity, making the mean-field approximation invalid, and it is this fact which accounts for the deviation of the critical exponents from the “classical exponents” predicted by bifurcation (mean-field) models. All this notwithstanding, the bifurcation models do have some areas of validity, and they are generally successful in predicting the symmetry changes actually observed. Landau’s theory of second-order phase transitions is a phenomenological description of phase transitions, which is essentially a theory of “symmetry-breaking bifurcations”. According to this point of view, the generalised mean-field approximation usually brings us to the formulation of the broken-symmetry problem in terms of the bifurcation of the solution of a non-linear integral equation for the Bogolyubov quasi-average. In particular, the liquid–solid phase transition is considered as a bifurcation of the solution of an equation of Hammerstein type

$$ \varPhi(r_1) - \mu \int K(r_1,r_2) f \bigl(\varPhi(r_2),r_2 \bigr)\,dr_2 = 0. $$
(2)

The phase transitions of the ensemble are described in terms of bifurcations of this integral equation. In the area of non-equilibrium thermodynamics the operation of the laser can be described by a mean-field theory, which is amenable to a bifurcation analysis. In the Dicke–Haken–Lax model of the laser it is possible to describe the many body photon field by a mean-field theory as N (the number of degrees of freedom) tends to infinity. Thus it is possible in this case to solve a non-linear quantum-mechanical model, far from equilibrium, by reducing the problem to a system of ordinary differential equations for the expectation values of the extensive variables. The onset of laser action in these theories is then described by the bifurcation of time-periodic solutions from the equilibrium solution, that is, so-called Hopf bifurcations.
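As a toy illustration of how a bifurcation analysis of an equation of the type (2) can proceed, the following Python sketch discretises a rank-one kernel K(r,s) = cos(πr)cos(πs) with the odd saturating non-linearity f(u) = u − u³ (both our own choices, not taken from the literature) and iterates to a fixed point. The trivial solution Φ = 0 loses stability when μ crosses the reciprocal of the largest eigenvalue of the linearised operator, here μ = 2, and a non-trivial solution branch appears:

```python
import numpy as np

# Toy discretisation of the Hammerstein equation (2) on [0, 1], with the
# illustrative choices K(r, s) = cos(pi r) cos(pi s), f(u, s) = u - u**3:
# Phi(r) = mu * integral of K(r, s) f(Phi(s), s) ds.
n = 200
r = np.linspace(0.0, 1.0, n)
h = 1.0 / (n - 1)
K = np.outer(np.cos(np.pi * r), np.cos(np.pi * r))

def solve(mu, steps=2000):
    phi = 0.1 * np.cos(np.pi * r)            # small seed breaking phi = 0
    for _ in range(steps):
        phi = mu * h * K @ (phi - phi**3)    # fixed-point iteration
    return phi

# The linearisation has largest eigenvalue mu/2 (the integral of cos^2 is
# 1/2), so the trivial branch loses stability at mu = 2:
for mu in (1.5, 2.5, 3.0):
    print(f"mu={mu}:  max|phi| = {np.abs(solve(mu)).max():.3f}")
```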

3 Spontaneous Symmetry Breakdown, Gauge Fields and Particle Physics

Here are some long-standing problems in particle theory: (1) How can we understand the hierarchical structure of the fundamental interactions? Are the strong, medium strong (i.e. SU(3)-breaking), electromagnetic, and weak interactions truly independent, or is there some principle that establishes connections between them? (2) How can we construct a renormalisable theory of the weak interactions, one which reproduces the low-energy successes of the Fermi theory but predicts finite higher-order corrections? (3) How can we construct a theory of electromagnetic interactions in which electromagnetic mass differences within isotopic multiplets are finite? (4) How can we reconcile Bjorken scaling in deep inelastic electro-production with quantum field theory? The SLAC-MIT experiments seem to be telling us that the light-cone singularities in the product of two currents are canonical in structure; ordinary perturbation theory, on the other hand, tells us that the canonical structure is spoiled by logarithmic factors, which get worse and worse as we go to higher and higher orders in the perturbation expansion. Are there any theories of the strong interactions for which we can tame the logarithms, sum them up and show they are harmless? Very significant advances have been made on all of these problems in the last 20 years. There now exists a large family of models of the weak and electromagnetic interactions that solve the second and third problems, and there has been discovered a somewhat smaller family of models of the strong interactions that solve the fourth problem. As we shall see, the structure of these models is such that we are beginning to get ideas about the solution of the (very deep) first problem; connections are beginning to appear in unexpected places, and one might optimistically say that we are on the road to the first truly unified theory of the fundamental interactions. All these marvellous developments are based upon the ideas of spontaneous symmetry breakdown and gauge fields.

Let us briefly discuss spontaneous symmetry breakdown, Goldstone bosons, gauge fields, and the Higgs phenomenon in the simplest context, that is, classical field theory. I will have no time to go into the renormalisation problem, nor into the non-Abelian generalisations of the Ward identities, and other aspects of the quantisation of gauge fields. In general, there is no reason why an invariance of the Hamiltonian of a quantum-mechanical system should also be an invariance of the ground state of the system. Thus, for example, the nuclear forces are rotationally invariant, but this does not mean that the ground state of a nucleus is necessarily rotationally invariant (i.e. of spin zero). This is a triviality for nuclei, but it has highly non-trivial consequences if we consider systems which, unlike nuclei, are of infinite spatial extent. The standard example is the Heisenberg ferromagnet, an infinite crystalline array of spin-1/2 magnetic dipoles, with spin–spin interactions between nearest neighbours such that neighbouring dipoles tend to align. Even though the Hamiltonian is rotationally invariant, the ground state is not; it is a state in which all the dipoles are aligned in some arbitrary direction, and is infinitely degenerate for an infinite ferromagnet. A little man living inside such a ferromagnet would have a hard time detecting the rotational invariance of the laws of nature; all his experiments would be corrupted by the background magnetic field. If his experimental apparatus interacted only weakly with the background field, he might detect rotational invariance as an approximate symmetry; if it interacted strongly, he might miss it altogether; in any case, he would have no reason to suspect that it was in fact an exact symmetry. Also, the little man would have no hope of detecting directly that the ground state in which he happens to find himself is in fact part of an infinitely degenerate multiplet. Since he is of finite extent (this is the technical meaning of “little”), he can only change the direction of a finite number of dipoles at a time; but to go from one ground state of the ferromagnet to another, he must change the directions of an infinite number of dipoles—an impossible task.

At least at first glance, there appears to be nothing in this picture that cannot be generalised to relativistic quantum mechanics. For the Hamiltonian of a ferromagnet, we can substitute the Hamiltonian of a quantum field theory; for rotational invariance, some internal symmetry; for the ground state of the ferromagnet, the vacuum state; and for the little man, ourselves. That is to say, we conjecture that the laws of nature may possess symmetries that are not manifest to us because the vacuum state is not invariant under them. This situation is usually called “spontaneous symmetry breakdown”. Let us investigate spontaneous symmetry breakdown in the case of classical field theory. For simplicity, we will restrict ourselves to theories involving a set of n real scalar fields, which we assemble into a real n-vector, ϕ, with Lagrange density

$$ L = 1/2 (\partial_\mu \phi)\cdot \bigl( \partial^\mu \phi \bigr) - U(\phi), $$
(3)

where U is some function of the ϕ’s, but not of their derivatives. We treat these theories purely classically, but use quantum-mechanical language; thus, we call the state of lowest energy “the vacuum”, and refer to the quantities which characterise the spectra of small oscillations about the vacuum as “particle masses”. For any of these theories, the energy density is

$$ H = 1/2(\partial_0 \phi)^2 + 1/2(\nabla \phi)^2 + U(\phi). $$
(4)

Thus the state of lowest energy is one for which the value of ϕ is a constant, which we denote by 〈ϕ〉. The value of 〈ϕ〉 is determined by the detailed dynamics of the particular theory under investigation, that is to say, by the location of the minimum (or minima) of the potential U. We call 〈ϕ〉 “the vacuum expectation value of ϕ”. Within this class of theories, it is easy to find examples for which symmetries are either manifest or spontaneously broken. The simplest one is the theory of a single field for which the potential is

$$ U = ( \lambda /4!) \phi^4 + \bigl(\mu^2/2 \bigr)\phi^2, $$
(5)

where λ is a positive number and μ² can be either positive or negative. This theory admits the symmetry

$$ \phi \rightarrow -\phi. $$
(6)

If μ² is positive, the potential has one minimum. The vacuum is at 〈ϕ〉 = 0, the symmetry is manifest, and μ is the mass of the scalar meson. If μ² is negative, though, the situation is quite different; the potential has two minima. In this case, it is convenient to introduce the quantity

$$ a^2 = -6\mu^2/\lambda, $$
(7)

and to rewrite the potential as

$$ U = \lambda /4! \bigl(\phi^2-a^2 \bigr)^2, $$
(8)

plus an (irrelevant) constant. It is clear from this formula that the potential now has two minima, at ϕ = ±a. Because of the symmetry (6), which one we choose as the vacuum is irrelevant to the resulting physics; however, whichever one we choose, the symmetry is spontaneously broken. Let us choose 〈ϕ〉 = a. To investigate physics about the asymmetric vacuum, let us define a new field

$$ \phi' = \phi - a. $$
(9)

In terms of the new (“shifted”) field,

$$ U = \lambda /4! \bigl(\phi'^2 + 2a \phi' \bigr)^2 = (\lambda/4!) \phi'^4 + (\lambda a/6)\phi'^3 + \bigl(\lambda a^2/6 \bigr)\phi'^2. $$
(10)

We see that the squared mass of the meson is now λa²/3. Note that a cubic meson self-coupling has appeared as a result of the shift, which would make it hard to detect the hidden symmetry (6) directly.
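The small computation leading from (8) to (10), and to the squared mass λa²/3, can be checked mechanically; the following sympy sketch (our own illustration) performs the shift and reads off the coefficient:

```python
import sympy as sp

lam, a = sp.symbols("lambda a", positive=True)
phi, phip = sp.symbols("phi phi_prime")

U = lam / 24 * (phi**2 - a**2)**2            # potential (8); lambda/4! = lambda/24
U_shifted = sp.expand(U.subs(phi, phip + a)) # the shift (9): phi = phi' + a
print(U_shifted)
# -> lambda*phi_prime**4/24 + a*lambda*phi_prime**3/6 + a**2*lambda*phi_prime**2/6

mass_sq = 2 * U_shifted.coeff(phip, 2)       # (1/2) m^2 phi'^2  gives  m^2
print(mass_sq)                               # -> a**2*lambda/3
```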

A new phenomenon appears if we consider the spontaneous breakdown of continuous symmetries. Let us consider the theory of two scalar fields, A and B, with

$$ U = \lambda /4! \bigl[A^2 + B^2 - a^2 \bigr]^2. $$
(11)

This theory admits a continuous group of symmetries isomorphic to the two-dimensional rotation group, SO(2):

$$ A \rightarrow A \cos \omega + B \sin \omega,\qquad B \rightarrow - A \sin \omega + B \cos \omega. $$
(12)

The minima of the potential lie on the circle

$$ A^2 + B^2 = a^2. $$
(13)

Just as before, which of these we choose as the vacuum is irrelevant, but whichever one we choose, the SO(2) internal symmetry is spontaneously broken. Let us choose

$$ \langle A \rangle = a,\qquad \langle B \rangle = 0. $$
(14)

As before, we shift the fields,

$$ \phi' = \phi - \langle \phi \rangle, $$
(15)

and find

$$ U = (\lambda/4!) \bigl(A'^2 + B'^2 + 2aA' \bigr)^2. $$
(16)

Expanding this, we see that the A-meson has the same mass as before, but the B-meson is massless. Such a massless spinless meson is called a Goldstone boson; for the class of theories under consideration, its appearance does not depend at all on the special form of the potential U, but is a consequence only of the spontaneous breakdown of the continuous SO(2) symmetry group (12). To show this, let us introduce “angular variables”,

$$ A = \rho \cos \theta,\qquad B = \rho \sin \theta. $$
(17)

In terms of these variables, (12) becomes

$$ \rho \rightarrow \rho,\qquad \theta \rightarrow \theta + \omega, $$
(18)

and the Lagrange density becomes

$$ L = 1/2 (\partial_\mu\rho)^2 + 1/2 \rho^2(\partial_\mu\theta)^2 - U(\rho). $$
(19)

In terms of these variables, SO(2) invariance is simply the statement that U does not depend on θ. The transformation to angular variables is, of course, ill-defined at the origin, and this is reflected in the singular form of the derivative part of the Lagrange density (19). However, this is of no interest to us, since we wish to do perturbation expansions not about the origin, but about an assumed asymmetric vacuum. With no loss of generality, we can assume this vacuum is at 〈ρ〉=a, 〈θ〉=0. Introducing shifted fields as before,

$$ \rho' = \rho - a,\qquad \theta' = \theta, $$
(20)

we find

$$ L = 1/2 \bigl(\partial_\mu\rho' \bigr)^2 + 1/2 \bigl(\rho' + a \bigr)^2 \bigl( \partial_\mu\theta' \bigr)^2 - U \bigl( \rho' + a \bigr). $$
(21)

It is clear from this expression that the θ-meson is massless, just because the θ-field enters the Lagrangian only through its derivatives. This can also be seen purely geometrically, without writing down any formulae. If the vacuum is not invariant under SO(2) rotations, then there is a curve passing through the vacuum along which the potential is constant; this is the curve of points obtained from the vacuum by SO(2) rotations—in terms of our variables, the curve of constant ρ. If we expand the potential around the vacuum, no terms can appear involving the variable that measures displacement along this curve—the θ variable. Hence we always have a massless meson. This argument can easily be generalised to the spontaneous breakdown of a general continuous internal symmetry group.
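The masslessness of the Goldstone mode can likewise be checked mechanically: the Hessian of the potential (11) at the asymmetric vacuum 〈A〉 = a, 〈B〉 = 0 has one eigenvalue λa²/3 (the massive A′-meson) and one zero eigenvalue (the Goldstone boson). A short sympy sketch (our own illustration):

```python
import sympy as sp

lam, a = sp.symbols("lambda a", positive=True)
A, B = sp.symbols("A B")

U = lam / 24 * (A**2 + B**2 - a**2)**2       # potential (11)

vac = {A: a, B: 0}                           # the asymmetric vacuum (14)
fields = (A, B)
hessian = sp.Matrix(2, 2, lambda i, j: sp.diff(U, fields[i], fields[j]).subs(vac))
print(hessian)   # -> Matrix([[a**2*lambda/3, 0], [0, 0]]): one massive, one massless mode
```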

Summarising, we can make the following remarks relating to the above description:

(i) There is a large family of field theories that display spontaneous breakdown of internal symmetries. If the spontaneously broken symmetry is discrete, this causes no problems; however, if the symmetry is continuous, symmetry breakdown is associated with the appearance of Goldstone bosons. This can be cured by coupling gauge fields to the system and promoting the internal symmetry group to a gauge group; the Goldstone bosons then disappear and the gauge mesons acquire masses. It should be remembered that, at the time of their invention, both the theory of non-Abelian gauge fields and the theory of spontaneous symmetry breakdown were thought to be theoretically amusing but physically untenable, because both predicted unobserved massless particles, the gauge mesons and the Goldstone bosons. It was only later that it was discovered that each of these diseases was the other’s cure.

(ii) What has been done for classical field theory can be extended to some extent into the quantum domain. At least for weak couplings, the phenomenon of spontaneous breakdown of internal symmetries survives substantially unchanged; in particular, all of the equations we have derived can be reinterpreted as the first terms in a systematic quantum expansion.

(iii) Regarding theories with fermions, it is clear that if we couple fermions to the scalar-meson systems we have discussed, either directly (through Yukawa couplings) or indirectly (through gauge field couplings), then the shift in the scalar fields will induce an apparent symmetry-violating term in the fermion part of the Lagrangian. A more interesting question is whether spontaneous symmetry breakdown can occur in a theory without fundamental scalar fields. For example, perhaps bilinear forms in Fermi fields can develop symmetry-breaking vacuum expectation values all by themselves. There is one exactly soluble model without fundamental scalars that displays the full Goldstone–Higgs phenomenon. This is the Schwinger model, quantum electrodynamics of massless fermions in two-dimensional space-time.

(iv) It is important to realise that we can make the effects of spontaneous symmetry breakdown as large or as small as we want, by appropriately fudging the parameters in our models. Thus, in the real world, some of the spontaneously broken symmetries of nature may be totally inaccessible to direct observation. Also, of course, there is no objection to exact or approximate symmetries of the usual kind coexisting with spontaneously broken symmetries. Presumably symmetries such as nucleon number conservation, neither broken nor coupled to a massless gauge meson, are of this sort.

4 Some Mathematical Aspects of Bifurcations, Singularities and Universality

Bifurcation, as a scientific term, has been used to describe significant and qualitative changes that occur in the solution curves of a dynamical system as the key system parameters are varied. Very frequently, it is used to describe the qualitative stability changes of the solution curves of a non-linear dynamical system. In other words, the concept of bifurcation allows one to study the branch points of non-linear equations, that is, singular points of the equations where several solutions come together. It is important in applications because bifurcation phenomena typically accompany the transition to instability when a characteristic parameter passes through a critical value. Most dynamical systems naturally depend on parameters. For some special (critical) values of the parameters, say c, non-generic situations may occur. For example, two stationary points A and B, depending on the parameter c, may collide at c = 1/4. If c is then decreased, the unique stationary point existing at c = 1/4 is “subdivided” into two points A and B. In such examples, all topological changes of the phase portraits under the change of the parameters are called bifurcations. A phase singularity is a point at which phase is ambiguous and near which phase takes on all values. In other words, singularity means a place where slopes become infinite, where the rate of change of one variable with another exceeds all bounds, and where a big change in an observable is caused by an arbitrarily small change in something else. Various areas of physics (solid state physics, hydrodynamics, fluid mechanics, physical chemistry and statistical physics) are a rich source of instability and bifurcation phenomena. We mention the formation of convection cells in the Bénard problem, which furnishes an excellent example of what is called a “symmetry-breaking instability”. Prior to the onset of instability the solution is invariant under the entire group of rigid motions, whereas the bifurcating convective motions are invariant only under a crystallographic subgroup. Symmetry is broken “spontaneously”, because the symmetry group of the equations is unchanged, while the bifurcating solutions have a smaller symmetry group.
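The collision of two stationary points at c = 1/4 is realised, for instance, by the fixed points of the map x ↦ x² + c (our own choice of example): they solve x² − x + c = 0 and merge when the discriminant 1 − 4c vanishes. A short Python check:

```python
import numpy as np

for c in (0.15, 0.24, 0.25, 0.26):
    disc = 1.0 - 4.0 * c                  # discriminant of x**2 - x + c = 0
    if disc >= 0:
        A = (1 - np.sqrt(disc)) / 2
        B = (1 + np.sqrt(disc)) / 2
        print(f"c={c}: stationary points A={A:.4f}, B={B:.4f}")
    else:
        print(f"c={c}: no stationary points (A and B have collided and vanished)")
```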

The centre manifold theorem is one of the most useful tools for giving a representation of the solution trajectories of a non-linear dynamical system in a neighbourhood of a non-hyperbolic equilibrium (for further details and a mathematical statement, see [10] and [9]). It permits us to understand the transition from stability to instability in many non-linear dynamical systems—stability may be lost at criticality, where different kinds of bifurcation points, trajectories or other singularities appear—and the emergence of new periodic or non-periodic solutions (such as the chaotic time evolutions of hydrodynamic turbulence and strange attractors). The concepts of bifurcation and attractor are very important for understanding the transition from stable dynamical systems to unstable dynamical systems in a large class of natural phenomena.

Closely related to this last question is a recent remarkable discovery—the so-called Feigenbaum universality, which is based on the well-known renormalisation-group method of theoretical physics, first used in statistical mechanics and quantum field theory. The problem with which Feigenbaum started consists in studying how dynamical systems depending on a parameter pass from a stable type of motion, which it is natural to call laminar, to an unstable type that involves the appearance of strong statistical properties frequently associated with turbulence. The Feigenbaum universality refers directly to sequences of period-doubling bifurcations. In traditional bifurcation theory it is usual to consider the local behaviour of families of dynamical systems in a neighbourhood of a bifurcation value of the parameter. Here, however, we encounter a completely new problem: the local behaviour of a family of dynamical systems in a neighbourhood of a parameter value where infinitely many bifurcation values of the parameter accumulate. It should be observed that, for a broad class of one-parameter families of maps of a closed interval into itself, the form of the trajectories becomes more complicated as the parameter increases: a stable periodic trajectory becomes unstable, and a stable periodic trajectory with twice the period is created, which attracts all points except for unstable cycles. Feigenbaum observed that the successive parameter values where such bifurcations take place for the family of maps x ↦ μx(1−x) (0 ≤ μ ≤ 4) of [0, 1] into itself converge to a limit at the rate of a geometric progression with ratio δ = 4.6692…, the famous Feigenbaum constant. He then made analogous calculations with the family f(x; μ) = μ sin(πx) and observed here a geometric progression with the same ratio. This led to the natural conjecture that δ does not depend at all on the form of the specific family of maps. Feigenbaum also proposed a theory explaining the universality of δ. It is useful to form a qualitative, intuitive picture of the phenomenon taking place when there is an infinite sequence of period-doubling bifurcations.
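Feigenbaum’s observation is easy to reproduce numerically. The Python sketch below (our own illustration; the bisection brackets are hand-tuned) locates the superstable parameters m_k of the 2^k-cycles of x ↦ μx(1−x), i.e. the values of μ at which the orbit of x = 1/2 returns to 1/2 after 2^k steps, and prints the successive ratios (m_{k−1} − m_{k−2})/(m_k − m_{k−1}), which approach δ = 4.6692…:

```python
import numpy as np

def F(mu, k):
    """f^(2^k)(1/2) - 1/2 for the logistic map f(x) = mu*x*(1-x); its roots
    in mu are the superstable parameters of the 2^k-cycles."""
    x = 0.5
    for _ in range(2 ** k):
        x = mu * x * (1.0 - x)
    return x - 0.5

def superstable(k, lo, hi):
    """Bisection for the root of F(., k) bracketed by [lo, hi]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F(lo, k) * F(mid, k) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# m_0 = 2 exactly, since f(1/2) = mu/4 = 1/2 at mu = 2; the brackets for
# the next roots are hand-tuned so that each contains exactly one root.
m = [2.0,
     superstable(1, 3.0, 3.4),
     superstable(2, 3.45, 3.54),
     superstable(3, 3.55, 3.5644),
     superstable(4, 3.5648, 3.5687)]
for k in range(2, len(m)):
    print(f"delta_{k} ~ {(m[k-1] - m[k-2]) / (m[k] - m[k-1]):.4f}")
# the printed ratios tend towards Feigenbaum's delta = 4.6692...
```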

5 Brief Remarks on Conservative and Dissipative Systems

Very roughly, one can classify natural phenomena into two great classes: those that do not depend on time, i.e. which are invariant with respect to changes of time, and those that do depend on time, that is, which are transformed in the course of their evolution. The most striking example of such transformation in many natural phenomena and living systems is spontaneous symmetry breaking, which produces a qualitative change in the state of those phenomena and systems.

Consider a (non-linear) oscillator, an archetypal system whose behaviour depends on time. Consider further the periodic movement of a physical pendulum; this movement will stop after a certain time owing to friction (and other perturbations). In other words, the amplitude of the oscillations will inevitably decrease with time. This phenomenon consists in a dissipation of energy, which turns out to be very general, to bear on everyday physical experience, and to admit a precise mathematical formulation. One can first consider the ideal case of a simple pendulum where we have, in addition to the point-like character of the mass, the absence of any kind of friction; in that case some physical quantities and properties of the system, such as the total energy, are conserved.

The damped oscillator provides a typical example of a dissipative dynamical system, and its most striking dynamical properties may be summarised as follows.

1. For such dissipative systems there is in general no time-independent Hamiltonian H, and hence no conservation of the energy of the system.

2. In some cases, on the other hand, there exists a function of the dynamical variables, called a Lyapunov function, which is positive and monotonically decreasing with time; this means that the system under consideration undergoes an irreversible process.

3. One can also have, in the case of a dissipative system, a regime of evolution much more complicated than a simple decay. In any case, whenever there is dissipation the equations of motion change under time reversal: the dynamics of dissipative systems is therefore irreversible. (A minimal numerical illustration follows this list.)
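As a numerical sketch of properties 1 and 2, take a standard damped harmonic oscillator (the parameter values below are illustrative): the total energy plays the role of a Lyapunov function and decreases monotonically along the motion.

```python
import numpy as np

# Damped harmonic oscillator  x'' + 2*gamma*x' + omega**2 * x = 0 ,
# integrated with a classical fourth-order Runge-Kutta step.
# The total energy E = v**2 / 2 + omega**2 * x**2 / 2 satisfies
# dE/dt = -2 * gamma * v**2 <= 0, so E can only decrease: it is a
# Lyapunov function witnessing the irreversibility of the dynamics.

gamma, omega = 0.15, 2.0       # illustrative damping and frequency

def rhs(s):
    x, v = s
    return np.array([v, -2.0 * gamma * v - omega**2 * x])

def rk4_step(s, dt):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

s, dt = np.array([1.0, 0.0]), 0.01
for step in range(2001):
    if step % 400 == 0:
        x, v = s
        E = 0.5 * v**2 + 0.5 * omega**2 * x**2
        print(f"t = {step * dt:5.1f}   x = {x:+.4f}   E = {E:.6f}")
    s = rk4_step(s, dt)
```

Since dE/dt = −2γv² ≤ 0, the printed energies can only decrease; integrating the time-reversed equations would make them grow, which is exactly the failure of time-reversal invariance noted in point 3.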

6 Some Qualitative and Geometrical Properties of Psychological Time

This last reflection has a twofold aim. First, we would like to emphasise some properties of time which make up its peculiar structure in the conception of the physical world that governs everyday life. Let us begin by distinguishing quantitative from qualitative properties of time. In measuring time with the help of clocks we make use of its quantitative, or metrical, properties. Such measurements concern the determination of time distances of equal length, represented, for instance, by two consecutive hours, and, in addition, the determination of simultaneity, that is, of equal time values for spatially distant points. The theory of the metrical properties of time has been developed in great detail in modern physics, in particular in Einstein’s theory of relativity. The qualitative, or topological, properties of time are fundamental, in that they hold independently of specific procedures of measurement and remain unchanged even if the methods of measuring time are varied. They comprise notably those properties that confer upon time its specific nature as distinct from space and that account for our sensory perception of time. Following the precise analysis given by Hans Reichenbach [8] (see also [11]), we can formulate the most evident qualitative properties of time in the following statements:

(1) Time goes from the past to the future. This statement refers to the flow of time; it expresses what we call becoming. Time is not static; it moves. We may regard the flow of time as the common product of an objective (physical) factor and a subjective one (connected with the structure of human consciousness).

(2) The present, which divides the past from the future, is now. The meaning of “now” might express our subjective and intentional approach to the “present”, or the fact that we see the things around us in a certain spatial perspective, or both at once. However that may be, this statement appears, from the psychological point of view, rather enigmatic.

(3) The past never comes back. This statement appears to be closely connected with the flow of time, that is, with the fact that time flows linearly in one and the same direction, along a straight line, never intersecting itself; the one-dimensional, linear continuum is the model of this conception of time.

The following three statements are intended to express the differences between the “past” and the “future”.

(4) We cannot change the past, but we can change the future. The statement means, among other things, that there are some future happenings which we can predict and control, though, owing to the random and complex nature of many macroscopic phenomena, we cannot predict and control cosmic events, the weather, or earthquakes; and we are rather poor at controlling human society, which continues to drift from crisis to crisis and from war to war. There are, however, no events of the past which we can change.

(5) We can make records of the past, but not of the future. It is not possible to predict the future from isolated indications; and even if such a prediction from a few isolated causes is possible, it can be made only in approximate terms. Moreover, even the knowledge of the total cause does not permit sure predictions.

(6) The past is determined; the future is undetermined. In some sense, the past consists of established facts, whereas the future does not; an established fact is something that we cannot change, whereas the future concerns uncertain and questionable facts and is open to very different outcomes.

We now want to sketch the essential features of a geometrical model suited to representing the multidimensional and polycyclical nature of psychological, and possibly physical, time, upon which our perception of the world partly rests. We borrow the fundamental ideas of this model from the mathematical theory of superstrings and from the theory of Calabi–Yau spaces. The central idea is that the space-time structure of the universe may have both extended dimensions and curled-up dimensions. This is an astounding suggestion made in 1919 by the mathematician Theodor Kaluza, and refined some years later by the Swedish physicist Oskar Klein. It means that our spatial universe has dimensions that are large, extended and easily visible, namely the three spatial dimensions of common experience, but that it may also have additional spatial dimensions tightly curled up into a very tiny space. For instance, circular loops may exist at every point of the familiar extended dimensions. It is worthy of note that the circular dimension is not merely a circular bump within the familiar extended dimensions; rather, it is a new dimension, one that exists at every point of the familiar extended dimensions, a new and independent direction in which some being, if it were small enough, could move.

In the 1980s it was shown that one may generalise the Kaluza–Klein theory to higher-dimensional theories with numerous curled-up spatial directions, for example extra dimensions curled up into the surface of a sphere. Beyond proposing a different number of extra dimensions, one can also imagine other shapes for them, for instance that of a torus; and still more complicated possibilities can be conceived, in which three, four, five, essentially any number of extra spatial dimensions are curled up into a wide spectrum of exotic shapes. In fact, the curled-up dimensions, which seem to influence basic physical properties of the universe very profoundly, take the form of a class of six-dimensional geometrical shapes known as Calabi–Yau spaces. Roughly, we have to imagine replacing each of the spheres, which represented two curled-up dimensions, with a Calabi–Yau space. That is, at every point in the three familiar extended dimensions, string theory claims that there are six hitherto unexpected dimensions, tightly curled up into one of these rather complicated-looking shapes. These dimensions are an integral and ubiquitous part of the space’s structure; they exist everywhere. For instance, if you sweep your hand in a large arc, you are moving not only through the three extended dimensions but also through these curled-up dimensions; and because the curled-up dimensions are very small, as you move your hand you circumnavigate them an enormous number of times, repeatedly returning to your starting point.
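To convey the magnitudes involved in this “circumnavigation”, here is a back-of-envelope sketch; the circumference value is an assumed, purely illustrative number near the Planck length, not a prediction of the theory.

```python
# Hypothetical magnitudes: windings around a curled-up circular dimension
# when a hand sweeps L = 1 metre through the extended dimensions.
L = 1.0        # metres swept through the extended dimensions
C = 1.0e-35    # assumed circumference of the curled-up circle, in metres
print(f"approximate number of circumnavigations: {L / C:.1e}")   # ~1e+35
```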

Now, given the requirement of numerous extra dimensions, is it possible that some of them are additional time dimensions, as opposed to additional space dimensions? We all have an understanding of what it means for the universe to have multiple space dimensions, since we live in a world in which we constantly deal with a plurality of them, namely three. But what would it mean to have multiple times? Would one align with time as we presently experience it psychologically, while the other would somehow be “different”? The idea becomes even harder to accept when one thinks of a curled-up time dimension. Nevertheless, we may think of time not solely as a dimension we can traverse in only one direction with absolute inevitability, never being able to return to an instant after it has passed. At any rate, it might be that curled-up time dimensions have vastly different properties from the familiar, vast time dimension that we imagine reaching back to the creation of the universe and forward to the present moment.

But, in contrast to extra spatial dimensions, new and previously unknown time dimensions would clearly require an even more profound change in our intuition. It seems to us that the intriguing possibility of new time dimensions could well play a role in future developments of our conceptions of physical reality and of natural phenomena. Starting from these mathematical objects, one might suggest a geometrical model, notably of psychological time, which can no longer be conceived as a linear and one-dimensional concept, but rather as a multidimensional and polycyclical one.