I characterize the objectives of fundamental physics in such a way that the only admissible “return” on investments in a research program is the experimental discovery of previously unknown physical phenomena. Accordingly, scientists should assess, however subjectively, the “winning probability” of their research programs, here defined as the product of the probability that the idea is “good” and the probability that the idea, if indeed good, will lead to the experimental discovery of previously unknown physical phenomena. I observe that these criteria could affect in a particularly significant way the strategic choices made in quantum-gravity research, where for most predictions of a new theory the probability that they will be tested experimentally is very low. I also observe that estimates of the winning probability must be frequently updated in light of relevant theoretical and experimental developments, as I here illustrate in relation to tests of Planck-scale effects for macroscopic systems and tests of Planck-scale effects for the propagation of particles observed from cosmological sources.

The Winning Probability of a Research Program

Humankind invests resources (money, working hours) in physics with the objective of “getting to know Nature better”: a research program is successful when the return on the investments takes the shape of the experimental discovery of previously unknown physical phenomena. The measure of success of a research program must be some ratio of the quantity and quality of these discoveries to the amount invested. While precise quantification is hard, evidently the research program on quantum mechanics of about a century ago is the most successful research program ever, while, for example, research programs on the magnetic monopole [1] are so far at a total loss.

Assessing this return on investment a posteriori is of course a merely academic exercise. We need good decisions on investments, not historical accounts of good and bad investments. Here is where estimates of the “winning probability” of a research program play a role: we need estimates of the probability that a research program will provide a good return on investment. In an appropriate sense the winning probability is the product of two probabilities, \(P_{th}\) and \(P_{exp}\):

$$P_{win} = P_{th} \cdot P_{exp} \, ,$$

where \(P_{th}\) is the “probability that the idea is good” (the idea inspiring the research program is good), while \(P_{exp}\) is the probability that the idea, if indeed good, will lead to the experimental discovery of previously unknown physical phenomena. A proper definition of \(P_{th}\) is the value the winning probability would take under the hypothetical assumption that \(P_{exp} =1\).

A key observation for this essay is that in most research areas \(P_{win} \simeq P_{th}\) (i.e. \(P_{exp} \simeq 1\)), while evidently \(P_{exp} \ll 1\) for all quantum-gravity research programs. In most research areas testing the predictions of a new theory is relatively simple (\(P_{win} \simeq P_{th}\)), and this explains why it is not customary in physics to also worry about \(P_{exp}\). Working in most areas of physics one could be led to assume that the winning probability is \(P_{th}\). Even quantum-gravity researchers are first trained in other areas of physics, exposing them to the risk of adopting (however unknowingly) the professional attitude of assuming that the winning probability is \(P_{th}\). However, with the information available at the present time we must expect that quantum-gravity effects are terribly small, resulting in estimates of \(P_{exp}\) which are \({\ll }1\). The most commonly encountered quantum-gravity predictions are indeed very small effects [2], since their magnitude is proportional to \(\left( {E \over E_P}\right) ^\alpha \), some power of the ratio of the typical energy \(E\) of the particles involved to the gigantic Planck scale (\(E_P \sim 10^{28}\,\mathrm{eV}\)).
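To make this suppression tangible, consider a purely illustrative estimate (round numbers of my choosing, not taken from the references): for particles with energies of order \(10^{13}\,\mathrm{eV}\), roughly the highest energies reached at particle accelerators,

$$\left( {E \over E_P}\right) ^\alpha \sim \left( \frac{10^{13}\,\mathrm{eV}}{10^{28}\,\mathrm{eV}}\right) ^\alpha = 10^{-15\,\alpha } \, ,$$

i.e. a suppression by 15 orders of magnitude already for \(\alpha =1\), and by 30 orders of magnitude for \(\alpha =2\).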

As recommended by the editors, this essay is addressed to “readers without a higher degree in physics”. My main objective is to render tangible for such readers some of the challenges that the peculiarities of quantum-gravity research pose for strategic decisions.

Estimating the Winning Probability

Estimates of the winning probability are to a large extent subjective. Scientists can do no more than estimate subjectively the winning probability of research programs, in good faith and to the best of their abilities. I can illustrate the nature of this effort by briefly discussing my subjective estimates of the winning probabilities of some research programs.

First, however, let me state explicitly a rather obvious fact: if the objective of a research program is exclusively that of providing a more elegant (more “satisfactory”) description of known physical phenomena, without leading to the experimental discovery of any previously unknown physical phenomena, it is certain to produce no return on the investment, and automatically \(P_{win} =0\).

I mentioned above research on the magnetic monopole as an illustrative example of a case where the return on investment is so far zero. I should mention, however, that at the present time my subjective assessment of the winning probability for magnetic-monopole research is \(P_{win} \sim 0.01\), which is evidently not good but also not so bad. My decisions on investments in magnetic-monopole research should weigh the costs, the amount of resources that appear to be needed, against the value of the possible “return”, factoring in this small (but non-negligible) \(P_{win}\). Overall I choose not to invest personally (my working hours) in magnetic-monopole research, but it is a rather close call, and I would not at all be surprised if other individuals (or funding agencies) chose to invest in magnetic-monopole research.

Moving on to topics of interest in quantum-gravity research, let me start by considering quantum-gravity research programmes focused on the hypothesis of compact spatial dimensions of size given by the Planck length (the inverse of the Planck scale, \({\sim }10^{-35}\,\mathrm{m}\)). For this my subjective estimate is \(P_{th} \sim 0.1\), which is very high among the values of \(P_{th}\) that I attribute to physical predictions emerging from quantum-gravity research. However, my subjective estimate of \(P_{exp}\) for this case is \(P_{exp} \sim 10^{-80}\), reflecting the fact that, according to theoretical evidence gathered so far, these extra dimensions produce effects with a very steep onset (they leave no trace at length scales larger than the compactification length scale). This \(10^{-80}\) reflects my estimate of how difficult it would be to devise experiments capable of probing directly length scales comparable to the Planck length. So overall my subjective estimate of the winning probability for research programmes on the hypothesis of compact spatial dimensions of size given by the Planck length is \(P_{win} = P_{th} \cdot P_{exp} \sim 10^{-81}\), a minute value of the sort typical of quantum-gravity research. I shall not invest my working hours in the phenomenology of compact spatial dimensions of size given by the Planck length.

My interest in research on Planck-scale effects affecting relativistic symmetries reflects of course my subjective estimate of the winning probability for that research program. In this case it is useful to separate the discussion of the winning probability into two subcases, depending on the value of \(\alpha \) in the factors of \(\left( {E \over E_P}\right) ^\alpha \) that give the Planck-scale dependence of the effects. For \(\alpha \le 1\) my subjective estimate is \(P_{th} \sim 0.01\), but the trends of sensitivity improvements over the last decade lead me to estimate \(P_{exp} \simeq 1\). The case \(\alpha > 1\) appears to be more generic in theory studies (more probably a good idea) but is more challenging experimentally, a situation which I subjectively characterize as a case of \(P_{th} \sim 0.1\) and \(P_{exp} \simeq 0.1\). So overall, combining the two subcases (as sketched just below), I estimate the winning probability for research on Planck-scale effects for relativistic symmetries at \(P_{win} \sim 0.02\), which is by far the biggest winning probability I see among quantum-gravity research programs.
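To spell out the arithmetic behind this \(P_{win} \sim 0.02\) (this is my own bookkeeping, treating the two subcases as roughly independent routes to a discovery and simply adding their contributions):

$$P_{win} \simeq P_{th}^{(\alpha \le 1)} \, P_{exp}^{(\alpha \le 1)} + P_{th}^{(\alpha > 1)} \, P_{exp}^{(\alpha > 1)} \sim 0.01 \cdot 1 + 0.1 \cdot 0.1 \sim 0.02 \, .$$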

Of course my subjective estimates have no objective quantitative value, but they illustrate how scientists could deal with the challenge of estimating, however tentatively and subjectively, both \(P_{th}\) and \(P_{exp}\). Instead it often happens, particularly among young quantum-gravity researchers, that only \(P_{th}\) is taken into account in choosing a research program. Discussions about the relative priority of different quantum-gravity research programs often focus exclusively on which \(P_{th}\) could be higher, even though a high \(P_{th}\) accompanied by a particularly low \(P_{exp}\) still gives a very low \(P_{win}\).

Reassessing Winning Probabilities

Estimates of winning probabilities are not only subjective but also reflect the status of theoretical and experimental knowledge at the time when the estimate is performed. Good practice requires that one frequently reassess the overall situation and produce updated estimates of the winning probability.

In-Vacuo Dispersion

The possibility of quantum-gravity-induced in-vacuo dispersion, an energy dependence of the travel times of ultrarelativistic particles from a given source to a given detector, has been motivated in several studies (see e.g. Refs. [2,3,4,5,6,7] and references therein). This is in particular the most studied example of a quantum-gravity effect affecting relativistic symmetries. Part of the interest in this possibility comes from the fact that it is a rare case of a candidate quantum-gravity effect that could lead to observably large manifestations, even if its characteristic length scale is of the order of the Planck length.

The best opportunity so far studied for such experimental tests is provided by observations of gamma-ray bursts (GRBs) [2,3,4], which set up for us a sort of race among photons of different energies and neutrinos of different energies, all emitted within a relatively small time window. A characterization of the present status of these studies is given in Fig. 1, relying on observations reported in Refs. [6, 8,9,10,11,12,13].

Fig. 1

The points here shown correspond to values of \(E^*/(1+z)\) and \(|\Delta t|/(1+z)\) for the GRB photons (blue, red and green, all observed after the first peak seen by the Fermi GBM, the Gamma-ray Burst Monitor) of highest energy at emission observed by the Fermi telescope, and for IceCube-telescope neutrinos (black for those observed after the GRB trigger, gray for those observed before the GRB trigger) that fit the criteria for GRB-neutrino candidates proposed in Ref. [8]. Here \(z\) is the redshift, while \(E^*\) and \(\Delta t\) are discussed in the main text. The photon point in red is from 2009 (GRB090510) and its impact on the winning probability of these studies had to be reanalyzed when the photon point in green became available in 2016 (GRB160509a).

The neutrinos and photons in Fig. 1 were selected using criteria [6, 8,9,10,11,12,13] which do not a priori favor the emergence of the correlation visible there. That correlation is the sort of feature one could expect from in-vacuo dispersion, as follows immediately from the definitions of \(\Delta t\) and \(E^*\):

\(\bullet \) For the photons in Fig. 1, \(\Delta t\) is the time-of-observation difference between that high-energy GRB photon (interpreted tentatively as a photon emitted at or near the first peak of the GRB) and the first GBM peak [13] of the GRB, while for neutrinos \(\Delta t\) is the time-of-observation difference between that candidate GRB neutrino and the trigger of the relevant GRB.

\(\bullet \) The values of \(E^*\) are obtained from the energy of the particles (photons or neutrinos), rescaled by a suitable redshift-dependent factor [13], in such a way that for in-vacuo dispersion with \(\alpha =1\) one would expect an exactly linear dependence between \(\Delta t\) and \(E^*\) (up to uncertainties in the values of redshift, and the possible presence of spurious points corresponding to high-energy photons emitted not exactly at the first peak or neutrinos misidentified as GRB neutrinos); a sketch of the relevant parametrization is given just after this list.
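For orientation, here is a minimal sketch of the parametrization most commonly adopted for the linear (\(\alpha =1\)) case; this is my summary of the standard form used in in-vacuo-dispersion studies, not a quotation from the specific references, with \(\eta \) a dimensionless coefficient that would be of order one for a genuinely Planckian effect and \(H(z)\) the Hubble rate of standard cosmology:

$$\Delta t \simeq \eta \, \frac{E}{E_P} \int _0^z \frac{(1+z')\, dz'}{H(z')} \, , \qquad H(z') = H_0 \sqrt{\Omega _\Lambda + \Omega _m (1+z')^3} \, ,$$

where \(E\) is the energy of the particle at observation. \(E^*\) is then, in essence, \(E\) rescaled by a factor built from this redshift dependence, so that for \(\alpha =1\) one expects \(\Delta t\) to depend linearly on \(E^*\) with a source-independent slope.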

The data point in Fig. 1 taken from GRB090510 is not in agreement with the overall correlation shown there, and was one of the first such photons to be reported. When that photon was reported my subjective estimate of the winning probability for \(\alpha \le 1\) was lower than it is now. Some of the photons reported more recently (perhaps most notably one from GRB160509a [13]) strengthened the correlation now shown in Fig. 1, and inform my present assessment of the relevant winning probability.

Planck-Scale Effects for Macroscopic Systems

Quantum-gravity effects are usually postulated for “fundamental” microscopic particles, but of course it is important to then investigate the implications of those effects for macroscopic systems, composed of many microscopic particles. It is in principle possible that the effects are amplified for a macroscopic system, as a result of cumulative manifestations of the microscopic effects. First of all one must check that this amplification (if at all present) still keeps the proposal consistent with experimental facts, since of course we have very good experimental information on certain types of macroscopic system. Most interestingly, the amplification could bring the effects to an observable level, still consistent with available experimental information but within reach of forthcoming experiments.

The windows of opportunity for this sort of study of macroscopic bodies should evidently be very rare: it takes a delicate balance, which will rarely occur, for the effects to cumulate to a level observable by foreseeable experiments while still remaining safe from falsification by already available experimental facts. Moreover, some arguments suggest that the types of effects for microscopic particles that most naturally arise in quantum-gravity research should automatically fade away as large numbers of microscopic particles combine to form a macroscopic system. Let me here briefly discuss the simplest of these arguments, where the microscopic effects take the form of noncommutativity of the spacetime coordinates of microscopic particles. It suffices for my purposes to use as an illustrative example noncommutativity of the type

$$\begin{aligned}{}[x_n,y_n]=i \ell ^2 + i \ell ' y_n \, , \end{aligned}$$
(1)

where the index \(n\) labels the different particles composing a macroscopic system, while \(\ell \) and \(\ell '\) are length scales characteristic of the noncommutativity.

For the description of the coordinates of the center of mass of a macroscopic system composed of \(N\) constituent particles I take \(X\) and \(Y\), with

$$\begin{aligned} X = \frac{1}{N}\sum \limits ^{N}_{n=1} x_n \,\, , \,\,\, Y = \frac{1}{N}\sum \limits ^{N}_{n=1} y_n \end{aligned}$$
(2)

Combining (1) and (2), and assuming that the coordinates of different particles commute with each other, one easily finds that

$$\begin{aligned}{}[X,Y] = i \left( \frac{\ell }{\sqrt{N}}\right) ^2 + i \, \frac{\ell '}{N} \, Y \, , \end{aligned}$$
(3)

which evidently shows that the effects of coordinate noncommutativity are suppressed for the center of mass of a macroscopic system: the commutator is reduced by a factor of \(1/N\), i.e. the length scale \(\ell \) is effectively replaced by \(\ell /\sqrt{N}\) (and \(\ell '\) by \(\ell '/N\)).
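For completeness, here is a minimal sketch of the step leading from (1) and (2) to (3), spelling out the assumption (left implicit above) that coordinates belonging to different particles commute:

$$[X,Y] = \frac{1}{N^2} \sum _{n,m=1}^{N} [x_n , y_m] = \frac{1}{N^2} \sum _{n=1}^{N} \left( i \ell ^2 + i \ell ' y_n \right) = i \, \frac{\ell ^2}{N} + i \, \frac{\ell '}{N} \, Y \, .$$

For a macroscopic body, with \(N\) of the order of the Avogadro number (\({\sim }10^{23}\) constituents for a gram-scale system), this is an enormous suppression.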

This observation based on Eqs. (1), (2) and (3) is an example of a theory result which, once established, must be taken into account when reassessing winning probabilities. My present subjective estimate of \(P_{win}\) for research programs on Planck-scale effects for macroscopic systems is \(P_{win} \sim 0.001\), taking into account theory arguments of the type in Eqs. (1), (2) and (3), but it could have been much higher without such arguments.

Ace on the River

Some readers will be uncomfortable with the role played by subjective assessments and with the role played by chance in the methodology here advocated. I have argued that these are inevitable on the road to the only objective enrichment we can aspire to, which is the experimental discovery of previously unknown physical phenomena. Knowledge is the collection of the physical phenomena we have witnessed (not their interpretation).

Those dreaming of a procedure for the objective quantitative assessment of different ongoing research programs will be unimpressed, but might still appreciate that comparisons based on subjective assessments of both \(P_{th}\) and \(P_{exp}\) are better than comparisons based exclusively on subjective assessments of \(P_{th}\).

Even more unsatisfied will be those feeling the urge to evaluate theories on the basis of their “internal qualities” (like being “absolutely true”) rather than on their temporary usefulness for the experimental discovery of some previously unknown physical phenomena (being “temporarily true”). I shall write elsewhere about the futility of the notion of an “absolutely true theory” (i.e. a “theory of everything”), but let me note here how the fact that this weak notion still has a hold on so much of our scientific efforts is probably to be attributed (and here I am at least in part influenced by Lakatos [14]) to the fact that the pivotal works by Galilei and Newton emerged against the background of centuries dominated by the all-pervading idea that religious knowledge was certain and indubitable. Science inevitably took shape at first as an alternative path to knowledge which should also produce certain and indubitable theories. However, neither theories nor religions can aspire to objectivity. We are lucky enough to have the objectivity of physical phenomena (not of their interpretation) to share.

Emphasis on winning probabilities might not be common in essays on knowledge, but it will be recognized as well placed by anyone who analyzes successful scientific practice without prejudice. Talent is the ability to assess winning probabilities well; courage is the willingness to accept a low winning probability when a big “return” is desired; honesty (with others and with oneself) lies especially in reassessing winning probabilities without bias (or at least attempting, in good faith, to keep bias under control) in light of novel evidence from theory work or experiments. However, ultimately some luck is needed on the way to any good “return”. It is just that, more often than not, it takes a lot of hard preparation to be lucky. I like in this respect the book in Ref. [15]: at a certain point it describes how an ace on the river played a peculiarly important role in the career of a poker champion; before that, it offers a detailed account of the hard work and talent that prepared the ground for that lucky ace.