
Introduction

Near the end of the 19\(^\text {th}\) century, physics appeared to be slowing down. The mechanics of Newton and others rested on solid ground, statistical mechanics explained the link between the microscopic and the macroscopic, Maxwell’s equations unified electricity, magnetism, and light, and the steam engine had transformed society. But the blade of progress is double-edged and, as more problems were sliced through, fewer legitimate fundamental issues remained. Physics, it seemed, was nearing an end.

Or was it? Among the few remaining unsolved issues were two experimental anomalies. As Lord Kelvin allegedly announced: “The beauty and clearness of the dynamical theory [...] is at present obscured by two clouds” [1]. One of these clouds was the ultra-violet catastrophe: an embarrassing prediction that hot objects like the sun should emit infinite energy. The other anomaly was an experiment by Michelson and Morley that measured the speed of light to be independent of how an observer was moving. Given the tremendous success of physics at that time, it would have been a safe bet that, soon, even these clouds would pass.

Never bet on a sure thing. The ultra-violet catastrophe led to the development of quantum mechanics and the Michelson–Morley experiment led to the development of relativity. These discoveries completely overturned our understanding of space, time, measurement, and the perception of reality. Physics was not over, it was just getting started.

Fast-forward a hundred years or so. Quantum mechanics and relativity rest on solid ground. The microchip and GPS have transformed society. These frameworks have led to an understanding that spans from the microscopic constituents of the nucleus to the large-scale structure of the Universe. The corresponding models have become so widely accepted and successful that they have been dubbed the standard models of particle physics and cosmology. As a result, the number of truly interesting questions appears to be slowly disappearing. In well over 30 years, there have been no experimental results in particle physics that cannot be explained within the basic framework laid out by the standard model of particle physics. With the ever-increasing cost of particle physics experiments, it seems that the data is drying up. But without input from experiment, how can physics proceed? It would appear that physics is, again, in danger of slowing down.

Or is it? Although the number of interesting fundamental questions appears to be decreasing, the importance of the remaining questions is growing. Consider two of the more disturbing experimental anomalies. The first is the naturalness problem, i.e., the presence of unnaturally large and small numbers in Nature. The most embarrassing of these numbers—and arguably the worst prediction of science—is the one governing the accelerated expansion of the Universe, which is some 120 orders of magnitude smaller than its natural value. The second is the dark matter problem: some 85–90 % of the matter content of our Universe is of an exotic nature that we have not yet seen in the lab. It would seem that we actually understand very little of what is happening in our Universe!

The problem is not that we don’t have enough data. The problem is that the data we do have does not seem to be amenable to explanation through incremental theoretical progress. The belief that physics is slowing down or, worse, that we are close to a final theory is just as unimaginative now as it would have been before 1900. Our thesis here will be that the lesson to take from that period is that the way forward is to question the fundamental assumptions of our physical theories in a radical way. This is easier said than done: one must not throw out the baby with the bath water. What is needed is a careful examination of our physical principles in the context of real experimental facts, with the aim of explaining more data using fewer assumptions.

The purpose of this work is to point out three specific assumptions made by our physical theories that might be wrong. We will not offer a definite solution to these problems but suggest a new scenario, supported by a suggestive calculation, that puts these assumptions into a new light and unifies them. The three assumptions we will question are

  1. Time and space are unified.

  2. Scale is physical.

  3. Physical laws are independent of the measurement process.

We will argue that these three assumptions inadvertently violate the same principle: the requirement that the laws of physics depend only on what is knowable through direct measurement. They fall into a unique category of assumptions that are challenged when we ask how to adapt the scientific method, developed for understanding processes in the lab, to the cosmological setting. In other words, how can we do science on the Universe as a whole?

We will not directly answer this question but, rather, suggest that this difficult issue may require a radical answer that questions the very origin of time. The flow of time, we will argue, may be fundamentally linked to the process of measurement. We will then support this argument with an intriguing calculation that recovers the black hole entropy law from a simple toy model. Before getting to this, let us explain the three questionable assumptions.

Three Questionable Assumptions

Many of our most basic physical assumptions are made in the first week of physics education. A good example is one of the first equations we are taught: the definition of velocity,

$$\begin{aligned} v = \frac{\Delta x}{\Delta t}. \end{aligned}$$
(6.1)

It is perhaps a bit over-dramatic—but, at the same time, not inaccurate—to say that giving this equation a precise operational meaning has been an outstanding issue in physics for its entire history. This is because, to understand this equation, one has to have an operational definition of \(x\), \(t\), and \(\Delta \). Great minds have pondered this question and their insights have led to scientific revolutions. These include the development of Newtonian mechanics, relativity, and quantum mechanics.Footnote 1 Recently, the meaning of \(x\) and, in particular, \(t\) has been the subject of a new debate whose origin is in a theory of quantum gravity. This brings us to our first questionable assumption.

Time and Space Are Unified

The theory of relativity changed our perception of time. As Minkowski put it in 1908 [2], “space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality”. Nowhere is this more apparent than in the main equation physicists use to construct the solutions of general relativity (GR):

$$\begin{aligned} S_\text {Einstein-Hilbert} = \int d^4x \left( R + \mathcal L_\text {matter} \right) \sqrt{-g} \;. \end{aligned}$$
(6.2)

Can you spot the \(t\)? It’s hidden in the \(4\) of \(d^4x\). But there are important structures hidden by this compact notation.

We will start by pointing out an invisible minus sign in Eq. (6.2). When calculating spacetime distances, one needs to use

$$\begin{aligned} x^2 + y^2 + z^2 - t^2, \end{aligned}$$
(6.3)

which has a \(-\) in front of the \(t^2\) instead of Pythagoras’ \(+\). The minus sign looks innocent but has important consequences for the solutions of Eq. (6.2). Importantly, the minus sign implies causal structure, which means that only events close enough to us that light signals sent from them can reach us now can affect what is going on now. This, in turn, implies that generic solutions of GR can only be found by specifying information at a particular time and then seeing how this information propagates into the future. Doing the converse, i.e., specifying information at a particular place and seeing how that information propagates to another place, is, in general, not consistent.Footnote 2 Thus, the minus sign already tells you that you have to use the theory in a way that treats time and space differently.
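
To make the causal statement concrete (a minimal restatement, in units where the speed of light is 1): an event at time \(t_e\) and position \(\mathbf {x}_e\) can affect us here and now, at \((t, \mathbf {x})\), only if it lies on or inside our past light cone,

$$\begin{aligned} t_e \le t \quad \text {and} \quad (t - t_e)^2 - |\mathbf {x} - \mathbf {x}_e|^2 \ge 0 \;. \end{aligned}$$

No analogous condition singles out a spatial direction, which is why evolving data forward in time is well posed while “evolving” it sideways in space generally is not.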

There are other ways to see how time and space are treated differently in gravity. In his 2009 essay The Nature of Time [3], Julian Barbour points out that Newton’s “absolute” time is not “absolute” at all. Indeed, the Newtonian notion of duration—that is, how much time has ticked by between two distinct instants—can be inferred from the total change in the spatial separations of particles in the Universe. He derives the equation

$$\begin{aligned} \Delta t^2 \propto \sum _i \Delta d^2_i, \end{aligned}$$
(6.4)

where the \(d_i\) are inter-particle separations in units where the masses of the particles are one. The factor of proportionality is important, but not for our argument. What is important is that changes in time can be inferred from changes in distances, so that absolute duration is not an input of the classical theory. This equation can be generalized to gravity, where it must be solved at every point in space. The implications for the quantum theory are severe: time completely drops out of the formalism.
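
For readers who want a sense of where Eq. (6.4) comes from, the inferred duration can be written (a sketch of Barbour’s “ephemeris time”; the precise form and normalization are not needed for our argument) as

$$\begin{aligned} \Delta t \propto \sqrt{\frac{\sum _i m_i \, \Delta d_i^2}{E - V}} \;, \end{aligned}$$

where \(E\) and \(V\) are the total energy and potential energy of the Universe. Setting the masses to one and treating \(E - V\) as part of the proportionality factor—precisely the factor set aside above—recovers the form of Eq. (6.4): duration is simply read off from change.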

Expert readers will recognize this as one of the facets of the Problem of Time [4]. The fact that there is no equivalent Problem of Space can be easily traced back to the points just made: time is singled out in gravity as the variable in terms of which the evolution equations are solved. This in turn implies that local duration should be treated as an inferred quantity rather than something fundamental. Clearly, time and space are not treated on the same footing in the formalism of GR despite the rather misleading form of Eq. (6.2). Nevertheless, it is still true that the spacetime framework is incredibly useful and, as far as we know, correct. How can one reconcile this fact with the space-time asymmetry in the formalism itself? We will investigate this in Sect. “Time from Coarse Graining”.

Scale Is Physical

Before even learning the definition of velocity, the novice physicist is typically introduced to an even more primary concept that usually makes up one’s first physics lesson: units. Despite the rudimentary nature of units, they are probably the most inconsistently understood concept in all of physics. If you ask ten different physicists for the physical meaning of a unit, you will likely get ten different answers. To avoid confusion, most theoreticians set all dimensionful constants equal to 1. However, one can’t predict anything until one has painfully reinserted these dimensionful quantities into the final result.

And yet, no one has ever directly observed a dimensionful quantity. This is because all measurements are comparisons. A ‘meter’ has no intrinsic operational meaning, only the ratio of two lengths does. One can define an object A to have a length of one meter and make a measurement that reveals that some other object B has twice the length of object A. Then, we can deduce that object B has a length of 2 meters. This, however, tells you nothing about the intrinsic absolute length of object A for if a demon doubled the intrinsic size of the Universe, the result of the experiment would be exactly the same. So, where do units come from?

Some units, like the unit of pressure, are the result of emergent physics. We understand how they are related to more “fundamental” units like meters and seconds. However, even our most fundamental theories of Nature have dimensionful quantities in them. The standard model of particle physics and classical GR require only a single unit: mass. Scale or, more technically, conformal invariance is then broken by only two quantities with the units of mass. The first is the recently observed Higgs mass, which can be related to all the masses of the particles in the standard model. The second is the Planck mass, which sets the scale of quantum gravity. As already discussed, there is a naturalness problem associated with writing all other constants of nature as dimensionless quantities, but this will not bother us too much here.

The presence of dimensionful quantities is an indication that our “fundamental” theories are not fundamental at all. Instead, scale independence should be a basic principle of a fundamental theory. As we will see in Sect. “Time from Coarse Graining”, there is a formulation of gravity that is nearly scale invariant. We will try to address the “nearly” with the considerations of the next section.

Physical Laws Are Independent of the Measurement Process

There is one assumption that is so fundamental it doesn’t even enter the physics curriculum. This is the assumption that the scientific method is generally applicable for describing everything in the Universe taken together. We know that the scientific method can be applied in the laboratory, where external agents (i.e., scientists) carefully control the inputs of some subsystem of the Universe and observe the subsystem’s response to these inputs. We don’t know, however, whether it is possible to apply these techniques to the Universe as a whole. On the other hand, when it comes to quantum mechanics, we do know whether our formalism can be consistently applied to the Universe. The answer is ‘NO’! The reasons are well understood—if disappointingly underappreciated—and the problem even has a name: the measurement problem.

The measurement problem results from the fact that quantum mechanics is a framework more like statistical physics than classical mechanics. In statistical physics, one has practical limitations on one’s knowledge of a system so one takes an educated guess at the results of a specific experiment by calculating a probability distribution for the outcome using one’s current knowledge of the system. In quantum mechanics, one has fundamental limitations on one’s knowledge of the system—essentially because of the uncertainty principle—so one can only make an educated guess at the outcome of a specific experiment by calculating a probability distribution for the outcome using one’s current knowledge of the system. However, it would be strange to apply statistical mechanics to the whole Universe because the Universe itself is only given once. It is hard to imagine what an ensemble of Universes, for which one can calculate and give meaning to a probability distribution, would even mean.Footnote 3 The same is true in quantum mechanics, but the problem is worse. The framework itself is designed to give you a probability distribution for the outcome of some measurement but how does one even define a measurement when the observer itself is taken to be part of the system? The answer is not found in any interpretation of quantum mechanics, although the problem itself takes a different form in a given interpretation. The truth is that quantum mechanics requires some additional structure, which can be thought of as describing the observer, in order for it to make sense. In other words, quantum mechanics alone, without additional postulates, can never be a theory of the whole Universe.

As a consequence of this, any approach to quantum gravity that uses quantum mechanics unmodified—including all major approaches to quantum gravity—is not, and can never be, a theory of the whole Universe. It could still be used for describing quantum gravity effects on isolated subsystems of the Universe, but that falls short of the ambition of a full-fledged quantum gravity theory. Given such a glaring foundational issue at the core of every major approach to quantum gravity, we believe that the attitude that we are nearing the end of physics is unjustified. The “shut-up and calculate” era is over. It is time for the quantum gravity community to return to these fundamental issues.

One approach is to change the ambitions of science. This is the safest and, in some ways, easiest option, but it would mean that science is inherently a restricted framework. The other possibility is to try to address the measurement problem directly. In the next section, we will give a radical proposal that embraces the role of the observer in our fundamental description of Nature. To understand how this comes about, we need one last ingredient: renormalization, or the art of averaging.

A Way Forward

The Art of Averaging

It is somewhat unfortunate that the great discoveries of the first half of the 20th century have overshadowed those of the second half. One of these, the theory of renormalization, is arguably the uncelebrated triumph of twentieth-century physics. Renormalization was born as a rather ugly set of rules for removing some undesirable features of quantum field theories. From these humble beginnings, it has grown into one of the gems of physics. In its modern form due to Wilson [6], renormalization has become a powerful tool for understanding what happens in a general system when one lacks information about the details of its fine-grained behavior. Renormalization’s reach extends far beyond particle physics and explains, among other things, what happens during phase transitions. But the theory of renormalization does even more: it helps us understand why physics is possible at all.

Imagine what it would be like if, to calculate everyday physics like the trajectory of Newton’s apple, one had to compute the motions of every quark, gluon, and electron in the apple and use quantum gravity to determine the trajectory. This would be completely impractical. Fortunately, one doesn’t have to resort to this. High-school physics is sufficient to determine the motion of what is, fundamentally, an incredibly complicated system. This is possible because one can average, or coarse grain, over the detailed behavior of the microscopic components of the apple. Remarkably, the average motion is simple. This fact is the reason why Newtonian mechanics is expressible in terms of simple differential equations and why the standard model is made up of only a couple of interactions. In short, it is why physics is possible at all. The theory of renormalization provides a framework for understanding this.

The main idea behind renormalization is to be able to predict how the laws of physics will change when a coarse graining is performed. This is similar to what happens when one changes the magnification of a telescope. With a large magnification, one might be able to see the moons of Jupiter and some details of the structure of their atmospheres. But, if the magnification, or the renormalization scale, is steadily decreased, the resolution is no longer good enough to make out individual moons and the lens averages over these structures. The whole of Jupiter and its moons becomes a single dot. As we vary the renormalization scale, the laws of physics that govern the structures of the system change from the hydrodynamic laws governing atmospheres to Newton’s law of gravity.

The theory of renormalization produces precise equations that say how the laws of physics will change, or flow, as we change the renormalization scale. In what follows, we will propose that flow under changes of scale may be related to the flow of time.
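
As a purely illustrative sketch (the beta function, parameter values, and function names below are our own assumptions, not taken from any specific physical theory), the following code integrates a one-coupling toy flow equation, \(dg/d\ln \mu = -\epsilon g + g^2\), toward the infrared. It shows the behavior that matters for the next section: the flow grinds to a halt at a fixed point, where the coarse-grained laws no longer change with the renormalization scale.

```python
import numpy as np

def beta(g, eps=0.1):
    """Toy beta function: dg/d(ln mu) = -eps*g + g**2.
    Fixed points (beta = 0) sit at g = 0 and g = eps."""
    return -eps * g + g**2

def flow_to_infrared(g0, eps=0.1, dlnmu=-0.05, steps=2000):
    """Euler-integrate the coupling g toward the infrared
    (decreasing renormalization scale mu) and return the trajectory."""
    g, traj = g0, [g0]
    for _ in range(steps):
        g += beta(g, eps) * dlnmu
        traj.append(g)
    return np.array(traj)

# Couplings started anywhere above g = 0 are driven to the non-trivial
# fixed point g* = eps, where the flow stops and the toy "laws" become
# scale invariant.
for g0 in (0.02, 0.09, 0.20):
    print(f"g0 = {g0:.2f}  ->  g_IR ~ {flow_to_infrared(g0)[-1]:.4f}")
```

A fixed point of this kind, where nothing changes under further coarse graining, is the sort of scale-free endpoint invoked in the next section.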

Time from Coarse Graining

We are now prepared to discuss an idea that puts our three questionable assumptions into a new light by highlighting a way in which they are connected. First, we point out that there is a way to trade a spacetime symmetry for conformal symmetry without altering the physical structures of GR. This approach, called Shape Dynamics (SD), was initially advocated by Barbour [7] and was developed in [8, 9]. Symmetry trading is allowed because symmetries don’t affect the physical content of a theory. In SD, the irrelevance of duration in GR is traded for local scale invariance (we will come to the word “local” in a moment). This can be done without altering the physical predictions of the theory but at the cost of having to treat time and space on a different footing. In fact, the local scale invariance is only an invariance of space, so that local rods—not clocks—can be rescaled arbitrarily. Time, on the other hand, is treated differently. It is a global notion that depends on the total change in the Universe.

The equivalence between SD and GR is a rather remarkable thing. What can be proved is that a very large class of spacetimes that are solutions of GR can be reproduced by a framework that does not treat spacetime as fundamental. Instead, what is fundamental in SD is scale-invariant geometry. Recently [10], it has been discovered that some solutions of SD do not actually correspond to spacetimes at all, although they are still in agreement with experiment. These are solutions that describe certain kinds of black holes in SD. In these solutions, there is no singularity where the curvature of spacetime becomes infinite. Rather, there is a traversable wormhole that connects the event horizon of a black hole to another region of space. This exciting discovery could pave the way to a completely different understanding of black holes.

Symmetry trading is the key to understanding how GR and SD are related. In 2 spatial dimensions, we know that this trading is possible because of an accidental mathematical relationship between the structure of conformal symmetry in 2 dimensions and the symmetries of 3 dimensional spacetime [11].Footnote 4 We are investigating whether this result will remain true in 3 spatial dimensions. If it does, it would mean that the spacetime picture and the conformal picture can coexist because of a mere mathematical accident.

We now come to a key point: for any time evolution to survive in SD, one cannot eliminate scale entirely. Only a redistribution of scale from point to point can be traded away (this is the significance of the word “local”); the global scale of the Universe cannot be traded, since then no time would flow. In other words, global scale must remain for change to be possible. How can we understand this global scale?

Consider a world with no scale and no time. In this world, only 3 dimensional Platonic shapes exist. This kind of world has a technical name: it is a fixed point of renormalization, “fixed” because such a world does not flow, since the renormalization scale there is meaningless. This cannot yet be our world because nothing happens in it. Now, allow for something to happen and call this “something” a measurement. One thing we know about measurements is that they can never be perfect. We can only compare the smallest objects of our device to larger objects and coarse grain the rest. Try as we may, we can never fully resolve the Platonic shapes of the fixed point. Thus, coarse graining by real measurements produces flow away from the fixed point. But what about time? How can a measurement happen if no time has gone by? The scenario that we are suggesting is that the flow under the renormalization scale is exchangeable with the flow of time. Using the trading procedure of SD, the flow of time might be relatable to renormalization away from a theory of pure shape.

In this picture, time and measurement are inseparable. Like a diamond with many faces, scale and time are different reflections of a single entity. This scenario requires a radical re-evaluation of our notions of time, scale, and measurement.

To be sure, a lot of thought is still needed to turn this into a coherent picture. A couple of comments are in order. Firstly, some authors [12, 13] have investigated a similar scenario, called holographic cosmology, using something called gauge/gravity duality. However, our approach suggests that one may not have to assume gauge/gravity duality for this scenario but, instead, can make use of symmetry trading in SD. Furthermore, our motivation and our method of implementation are more concrete. Secondly, in the context of scale-invariant particle “toy models”, Barbour, Lostaglio, and one of the authors [14] have investigated a scenario where quantum effects ‘ruin’ the classical scale invariance. In these models, the quantum theory has an emergent scale, which can then be used as a clock that measures the quantum time evolution of the scale-invariant shapes of the system. This simple model illustrates one way in which the radical scenario discussed here could be implemented in a concrete theory. Finally, why should we expect that there is enough structure in a coarse graining of pure shapes to recover the rich structure of spacetime? A simple answer is the subject of the next section.

The Size that Matters

In this section, we perform a simple calculation suggesting that the coarse graining of shapes described in the last section could lead to gravity. This section is more technical than the others but this is necessary to set up our final result. Brave souls can find the details of the calculations in the “Technical Appendix”.

We will consider a simple “toy model” that, remarkably, recovers a key feature of gravity. Before getting into the details, we should quickly point out that this model should be taken as an illustration of one way in which it is possible to define a notion of coarse graining on Shape Space, not as a literal model for gravity or black holes, even though some of the results seem suggestive in this regard. Certainly much more work would be needed to flesh this out in a convincing way.

Fig. 6.1 Each point in Shape Space is a different shape (represented by triangles). These correspond to an equivalence class (represented by arrows) of points of the Extended Configuration Space describing the same shape with a different position, orientation, and size

The model we will consider is a set of \(N\) free Newtonian point particles. To describe the calculation we will need to talk about two spaces: Shape Space and Extended Configuration Space (ECS). Shape Space is the space of all the shapes of the system. If \(N=3\), this is the space of all triangles. ECS is the space of all Cartesian coordinates of the particles, that is, the space of all ways you can put a shape into a Cartesian coordinate system. The ECS is larger than Shape Space because it carries information about the position, orientation, and size of the shapes. Although this information is unphysical, it is convenient to work with it anyway because the math is simpler; a theory described in terms of such redundant information is called a gauge theory. We can work with gauge theories provided we remove, or quotient out, the unphysical information. To understand how this is done, examine Fig. 6.1, which shows schematically the relation between the ECS and Shape Space. Each point on Shape Space is a different shape of the system, like a triangle. All the points along the arrows represent the same shape with a different position, orientation, or size. By picking a representative point along each arrow, we get a one-to-one correspondence between ECS and Shape Space. This is called picking a gauge. Mathematically, this is done by imposing constraints on the ECS. In our case, we need to specify a constraint that will select a triangle with a certain center of mass, orientation, and size. For technical reasons, we will assume that all particles are confined to a line so that we don’t have to worry about orientation. To specify the size of the system, we can take the “length” of the system, \(R\), on ECS. This is the moment of inertia. By fixing the center of mass and moment of inertia in ECS, we can work indirectly with Shape Space. The main advantage of doing this is that there is a natural notion of distance in ECS. This can be used to define the distance between two shapes, which is a key input of our calculations.
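
As a concrete illustration of this quotient (a minimal sketch under the assumptions just stated: particles on a line, unit masses, and size measured by the ECS “length”; the function name and normalization are ours), the map from an arbitrary configuration to its gauge-fixed representative can be written in a few lines:

```python
import numpy as np

def shape_representative(x, R=1.0):
    """Map a configuration x (positions of N unit-mass particles on a line)
    to the gauge-fixed representative of its shape: translate the center of
    mass to the origin and rescale so that the ECS 'length' (the root of the
    moment of inertia about the center of mass) equals R."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()              # fix translations: center of mass -> 0
    size = np.sqrt(np.sum(x**2))  # ECS length of the centered configuration
    return R * x / size           # fix dilatations: size -> R

# Two configurations differing only by where they sit and how big they are
# represent the same point of Shape Space:
a = np.array([0.0, 1.0, 3.0])
b = 5.0 * a + 7.0                 # translated and rescaled copy of a
print(shape_representative(a))
print(shape_representative(b))    # same output: same shape
```

The ordinary Euclidean distance between two such representatives is one way to realize the “natural notion of distance” mentioned above.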

To describe the calculation, we need to specify a notion of entropy on Shape Space. Entropy can be thought of as the amount of information needed to single out a particular macroscopic state of the system. To make this precise, we can use the notion of distance on ECS to calculate a “volume” on Shape Space. This volume roughly corresponds to the number of shapes that satisfy a particular property describing the state. The fewer the shapes that share this property, the more information is needed to single the state out. The entropy of that state is then related to its volume, \(\Omega _m\), divided by the total volume of Shape Space, \(\Omega _\text {tot}\). Explicitly,

$$\begin{aligned} S = -k_\text {B} \log \frac{\Omega _m}{\Omega _\text {tot}}, \end{aligned}$$
(6.5)

where \(k_\text {B}\) is Boltzmann’s constant.

Fig. 6.2 Left: approximation of a line using a grid. Right: further approximation of the line as a strip of thickness equal to the grid spacing

We will be interested in states described by a subsystem of \(n<N\) particles that has a certain center of mass \(x_0\) and moment of inertia \(r\). To make sense of the volume, we need a familiar concept: coarse graining. We can approximate the volume of the state by chopping up the ECS into a grid of size \(\ell \). Physically, the coarse graining means that we have a measuring device with a finite resolution given by \(\ell \). Consider a state that is represented by some surface in ECS. This is illustrated in Fig. 6.2 by a line. The volume of the state is well approximated by counting the number of dark squares intersected by the line. In the “Technical Appendix”, we calculate this volume explicitly. The result is

$$\begin{aligned} \Omega _\text {m} \propto \ell ^2 \; r^{n-2} \; \left( R^2 - r^2 - \left( 1 + \frac{m}{M-m} \right) \frac{m}{M} \; x_0^2\right) ^{\frac{N-n-2}{2}} , \end{aligned}$$
(6.6)

where \(M\) and \(R\) are the total mass and moment of inertia of the whole system and \(m\) is the mass of the subsystem. We can then compare this volume to the total volume of Shape Space, which goes like the volume of an \(N-1\) dimensional sphere (the \(-1\) is because of the center of mass gauge fixing). Thus,

$$\begin{aligned} \Omega _\text {tot} \propto R^{N-1}. \end{aligned}$$
(6.7)

The resulting entropy is

$$\begin{aligned} S = \frac{1}{2} \, k_B \, \frac{N}{n} \, \left( \frac{r}{R} \right) ^2 - \, k_B \, \log \frac{r}{R} + \cdots . \end{aligned}$$
(6.8)

Remarkably, the first term is exactly the entropy of a black hole calculated by Bekenstein and Hawking [15, 16]. More remarkably, the second term is exactly the first correction to the Bekenstein–Hawking result calculated in field theory [17, 18]. However, one should be careful not to interpret this result too literally. After all, we are considering only a very simplified case. A much more detailed analysis is necessary to draw any conclusions from this about real black holes. Note, however, that Erik Verlinde [19] discovered a way to interpret Newtonian gravity as an entropic force for systems whose entropy behaves in this way. It would appear that this simple model of a coarse graining of pure shapes has the right structure to reproduce Newtonian gravity.
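
For readers who want to play with this counting themselves, the sketch below estimates the entropy of Eq. (6.5) by brute force: it samples configurations uniformly on the gauge-fixed surface (unit masses, total center of mass zero, total size \(R\)) and counts the fraction whose first \(n\) particles have center of mass and size within a resolution \(\ell \) of the prescribed values. The sampling scheme, tolerances, parameter values, and function names are our own illustrative choices; they are not part of the calculation in the “Technical Appendix”.

```python
import numpy as np

def shape_entropy(N=50, n=10, R=1.0, x0=0.0, r=0.3, ell=0.05,
                  samples=200_000, seed=0):
    """Monte Carlo estimate of S/k_B = -log(Omega_m / Omega_tot) for the
    toy model: N unit-mass particles on a line with total center of mass
    fixed to 0 and total size fixed to R; the macrostate fixes the center
    of mass (x0) and size (r) of an n-particle subsystem to within ell."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((samples, N))
    x -= x.mean(axis=1, keepdims=True)                 # center of mass -> 0
    x *= R / np.linalg.norm(x, axis=1, keepdims=True)  # total size -> R
    sub = x[:, :n]                                     # the n-particle subsystem
    cm = sub.mean(axis=1)
    size = np.linalg.norm(sub - cm[:, None], axis=1)
    hits = (np.abs(cm - x0) < ell) & (np.abs(size - r) < ell)
    frac = hits.mean()
    return -np.log(frac) if frac > 0 else np.inf

for r in (0.1, 0.2, 0.3, 0.4):
    print(f"r/R = {r:.1f}:  S/k_B ~ {shape_entropy(r=r):.2f}")
```

Plotting this estimate against \(r/R\) and comparing it with Eq. (6.8) gives a quick, if crude, check of where the toy formula captures the counting and where the neglected terms and the choice of \(\ell \) start to matter.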

Conclusions

We have questioned the basic assumptions that (i) time and space should be treated on the same footing, (ii) scale should enter our fundamental theories of Nature, and (iii) the evolution of the Universe is independent of the measurement process. This has led us to a radical proposal: that time and scale emerge from a coarse graining of a theory of pure shape. The possibility that gravity could come out of this formalism was suggested by a simple toy model. The results of this model are non-trivial. The key result was that the entropy (6.8) scales like \(r^2\), which, dimensionally, is an area. In three dimensions, this is the signature of holography. Thus, in this simple model, Shape Space is holographic. If this is a generic feature of Shape Space, it would be an important observation for quantum gravity.

Moreover, the toy model may shed light on the nature of the Planck length. In this model, the Planck length is the emergent length arising in ECS given by

$$\begin{aligned} L_\text {Planck}^2 = G \, \hbar \propto \frac{R^2}{N} \;. \end{aligned}$$
(6.9)

This dimensionful quantity, however, is not observable in this model. What is physical, instead, is the dimensionless ratio \(r/R\). This illustrates how a dimensionful quantity can emerge from a scale-independent framework. Size doesn’t matter—but a ratio of sizes does. The proof could be gravity.