1 Strong Interactions in the Mid Sixties

Having graduated from the University of Florence in 1965, I had the enormous luck of entering the field just at the beginning of the period which, a posteriori, can rightly be called the “golden decade” of elementary particle physics. At that time the theory of strong (nuclear) interactions was not in very good shape. Data were abundant, but we could only confront them with a handful of models, each capturing one or another aspect of the complicated hadronic world (hadron is a generic name for any particle feeling the strong force). Many hadrons had been identified, most of them metastable (resonances) and with large mass and angular momentum (spin): the “hadronic zoo” seemed to be increasing in size every day.

Today, with hindsight, we can easily assert that, in the late sixties, we took the wrong way by rejecting, a priori, a description of these phenomena based on quantum field theory (QFT), the framework that had already been so successful for the electromagnetic interactions via quantum electrodynamics (QED). There were (at least) two very good excuses for having chosen the wrong way:

  • Unlike in QED, the theory of just photons and electrons, there were too many particles to deal with; actually, as I just said, an ever-increasing number;

  • Particles with high angular momentum were known to be very difficult, if not impossible, to deal with in a QFT framework.

Instead, a so-called S-matrix approach looked much more promising.

The constraint of relativistic causality forces the S-matrix elements to be analytic functions of the kinematical variables they depend upon, like the energy of the collision. Also, the symmetries of the strong interactions can easily be implemented at the level of the S-matrix. These symmetries could also be used to put some order in the hadronic zoo by grouping particles with the same spin into multiplets (with respect to symmetries such as SU(2) of isospin or its SU(3) extension to include strange particles).

The recently developed Regge theory [1] was also able to assemble together particles of different angular momentum. One amazing empirical observation at the time was that the masses M and angular momenta J of particles lying on the same “Regge trajectory” approximately satisfied a simple relation:

$$\begin{aligned} J = \alpha ( M^{2}) = \alpha _{0} + \alpha ' M^{2} \;, \end{aligned}$$
(9.1)

with \(\alpha _{0}\) a parameter depending on the particular Regge family under consideration and \(\alpha '\) a universal constant (\(\alpha ' \sim 0.9~\textrm{GeV}^{-2}\) in natural units where \(c= \hbar = 1\)).
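
As an illustrative check (using approximate present-day meson masses and the values \(\alpha _0 \simeq 0.5\), \(\alpha ' \simeq 0.9~\textrm{GeV}^{-2}\) for the leading \(\rho \)-meson trajectory, quoted here only as an order-of-magnitude exercise), relation (9.1) works remarkably well:

$$\begin{aligned} \alpha \bigl( M^2_{\rho (770)}\bigr) &\simeq 0.5 + 0.9 \times (0.78)^2 \simeq 1.0 \quad (J=1)\,, \\ \alpha \bigl( M^2_{a_2(1320)}\bigr) &\simeq 0.5 + 0.9 \times (1.32)^2 \simeq 2.1 \quad (J=2)\,, \\ \alpha \bigl( M^2_{\rho _3(1690)}\bigr) &\simeq 0.5 + 0.9 \times (1.69)^2 \simeq 3.1 \quad (J=3)\,, \end{aligned}$$

with states of increasing spin lining up, to good accuracy, on a single straight trajectory.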

Regge theory had a second important facet, pointed out later by Gribov, Chew, Mandelstam and others [2]: it could be used to describe the behaviour of the S-matrix at high energy. These two uses of Regge theory are illustrated in Fig. 9.1, where we see the linear and parallel Regge trajectories (with one exception, the so-called vacuum or Pomeranchuk trajectory) and the fact that the trajectory interpolates among different particles at positive \(J, M^{2}\) while it determines high-energy scattering at negative \(M^{2}\).

Fig. 9.1 Regge trajectories at positive and negative values of \(M^{2}\)

Chew [3] had invoked these two appealing features of Regge’s theory to formulate what I will call (for reasons that will become clear later) an “expensive bootstrap”. Chew’s idea was to add to the already mentioned constraints (unitarity, analyticity, symmetry) the assumption of “Nuclear Democracy”, according to which:

  • All hadrons, whether stable or unstable, lie on Regge trajectories (at \(M^{2} \ge 0\)) and are on the same footing;

  • The high-energy behaviour of the S-matrix is entirely given in terms of the same Regge trajectories (at \(M^{2} \le 0\)).

In Chew’s bootstrap, unitarity (i.e. conservation of probability) played a crucial role. It represented a non-linear, and thus very non-trivial, constraint. Would that give a unique solution to the bootstrap? The S-matrix knew about both uses of Regge theory:

$$\begin{aligned} S = S_{s\text {-channel}} + S_{t\text {-channel}} \;. \end{aligned}$$
(9.2)

Considering, for instance, \(\pi ^+ \pi ^-\) scattering, we would expect to find contributions both from the formation of s-channel resonances (\(\rho ^0\) and the like) and from the exchange of the corresponding (\(\rho ^0\) and the like) Regge trajectories in the t-channel. This would mimic, for the strong interactions, the situation in \(e^+ e^-\) scattering, where the exchange of either a photon or a \(Z^0\) contributes in both channels.

However, an interesting surprise came in 1967 through a fundamental observation made by Dolen, Horn and Schmid [4] who, after looking carefully at some pion-nucleon scattering data, concluded that the contributions from resonance formation and those from particle exchange should not be added: each was, by itself, a complete representation of the process. This property became known as DHS duality.

In the summer of 1967, at a summer school in Erice, I was strongly influenced by a talk given by Murray Gell-Mann reporting on DHS duality and stressing that such a framework could lead to what he called a “cheap bootstrap”, as opposed to Chew’s expensive one. In order to get interesting constraints on the Regge trajectories themselves it was enough to require that the two dual descriptions of a process produce the same answer. This was a non-trivial constraint, yet a linear one, thus providing a “cheaper” bootstrap.

DHS duality prompted Harari and Rosner [5] to introduce “Duality Diagrams” (see Fig. 9.2), where hadrons are represented by a set of quark lines (two for the mesons, three for the baryons) and the scattering process is described in terms of the flow of these quark lines through the diagram. By looking at the diagram in different directions (channels), the process is seen to proceed in different, but equivalent (in the sense of DHS duality), ways. Note that in those days quarks were just a mnemonic to keep track of quantum numbers and internal symmetries: they were not considered as having any real substance.

Fig. 9.2 Duality diagrams illustrating DHS duality

2 Dual Resonance Models

The crucial question was: can we associate a precise mathematical expression to duality diagrams, as we do with Feynman diagrams in quantum field theory?

A tentative answer to that question was found in 1968 [6] for a very simple and convenient (theoretically speaking!) process: \(\pi \pi \rightarrow \pi \omega \), represented pictorially by three duality diagrams (two of which are shown in Fig. 9.3).

Fig. 9.3 Duality diagrams for \(\pi \pi \rightarrow \pi \omega \)

The educated guess for this process was in terms of the well-known Euler Beta-function:

$$\begin{aligned} A &= \beta \left[ B\left( 1- \alpha (u), 1- \alpha (t)\right) + B\left( 1- \alpha (s), 1- \alpha (u)\right) + B\left( 1- \alpha (s), 1- \alpha (t)\right) \right] \nonumber \\ B(x,y) &\equiv \frac{\Gamma (x) \Gamma (y)}{\Gamma (x+y)}~;~ \alpha (t) = \alpha _0 + \alpha ' t~;~ s +t +u = 3 m_{\pi }^2 + m_{\omega }^2 \end{aligned}$$
(9.3)

where \(\Gamma (x)\) denotes Euler’s Gamma-function and the three terms in (9.3) are in one-to-one correspondence with the three duality diagrams of Fig. 9.3. Note the exact linearity of the Regge trajectory and the consequent appearance there of a dimensionful constant, the Regge slope \(\alpha '\).
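
To make the two dual readings of a single term in (9.3) slightly more explicit, here is a sketch based on standard properties of the \(\Gamma \)-function (glossing over the kinematical prefactors specific to this particular process):

$$\begin{aligned} B\bigl( 1- \alpha (s), 1- \alpha (t)\bigr) &\;\simeq \; \frac{P_{n-1}\bigl(\alpha (t)\bigr)}{n - \alpha (s)} \qquad \text {near } \alpha (s) = n\,,\; n=1,2,\dots \,, \\ B\bigl( 1- \alpha (s), 1- \alpha (t)\bigr) &\;\simeq \; \Gamma \bigl(1- \alpha (t)\bigr)\, \bigl(- \alpha ' s\bigr)^{\alpha (t)-1} \qquad \text {for } |s| \rightarrow \infty \text { at fixed } t\,, \end{aligned}$$

where \(P_{n-1}\) is a polynomial of degree \(n-1\) in \(\alpha (t)\), i.e. in the cosine of the scattering angle. The same function thus exhibits, at once, a tower of narrow s-channel resonances (with a finite range of spins at each mass level) and Regge behaviour in the crossed channel: DHS duality in its purest form.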

Although measuring the process \(\pi \pi \rightarrow \pi \omega \) is challenging, the same amplitude (9.3) can be used, by analytic continuation, to describe the decay \( \omega \rightarrow 3 \pi \). The result turned out to be very satisfactory (in particular the presence of zeroes [7] due to the \(\Gamma \)-functions in the denominators). The amplitude was also successfully extended to describe \(\pi \pi \rightarrow \pi \pi \) scattering in the so-called Lovelace-Shapiro model [8].

Finally, (9.3) was generalized to production processes, i.e. to amplitudes with more than four external legs. This last generalization became known as the Dual Resonance Model (DRM), the progenitor of string theory as we know it today.

3 The Dual Resonance Model and Relativistic Quantum Strings: From Hints to Proof

From the early days of DRM research there were definite hints of some sort of underlying vibrating string (as particularly emphasized by H. Nielsen, L. Susskind and Y. Nambu). We can list some of them:

  • The linear Regge trajectories imply a constant ratio between angular momentum and squared mass. That fits very well with an object whose mass M is proportional to its size L (then \(J \sim M \cdot L \sim M^2\)), where the constant of proportionality (\(\alpha '\)) has dimensions of length per unit mass, i.e. of the inverse of a string tension (a minimal version of this argument is sketched right after this list). In this reasoning we took the characteristic speed to be of \(\mathcal{O}(c)\), hence the string is supposed to undergo relativistic motion.

  • The duality diagrams (say for meson-meson scattering) can be visualized (Fig. 9.4) in terms of strings connecting quark-antiquark pairs first joining to form a single string (by quark antiquark annihilation) and then splitting again (by pair creation).

  • The spectrum of the DRM could be described [9] in terms of an infinite set of (quantized) harmonic oscillators whose frequencies are integer multiples of a fundamental one. This latter property is typical of a classical (say violin) string, but it was also obvious that the putative string had to be quantum-mechanical.

  • There was a two-dimensional (conformal) field theory underlying the DRM, with its Virasoro operators [10] and algebra [11]. And this would be the natural description of the dynamics of one-dimensional objects (in analogy with the world-line description of point-like particles).

  • ...
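
A minimal version of the first of these hints, for a string of tension T spun up to relativistic speeds (all factors of order one dropped; the exact classical result for the leading trajectory, quoted here just as a reminder, is \(\alpha ' = 1/(2\pi T)\) in units \(c = \hbar = 1\)):

$$\begin{aligned} M \sim T\, L \;, \qquad J \sim M\, v\, L \sim M\, L \;\; (v \sim c) \qquad \Longrightarrow \qquad J \sim \frac{M^2}{T} \;, \qquad \alpha ' \sim \frac{1}{T} \;. \end{aligned}$$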

Fig. 9.4 Strings joining and splitting?

These hints were not missed, but the connection with strings remained qualitative for some time. Eventually, it was put on solid ground through a precise formulation of the classical relativistic string by Nambu and Goto [12] in 1970–1971. Its first correct (light-cone) quantization, by Goddard, Goldstone, Rebbi and Thorn [13], had to wait till 1972. I refer to P. Di Vecchia’s contribution for more details on this part.

4 Beautiful, Elegant, But Not the Right Theory!

Paradoxically, now that the DRM had been raised to the level of a respectable theory, it became apparent that it was not the right one for the (strong) interactions it had been conceived for! There were actually both good and bad news for the newly born string!

The good news (mainly theoretical):

  • The Neveu-Schwarz-Ramond extensions for adding fermions.

  • The Gliozzi-Scherk-Olive (GSO) projection, leading to the discovery of supersymmetry (in the West).

  • The combination of all these developments gave fully consistent superstring theories, with neither negative-norm states (ghosts) nor imaginary-mass states (tachyons).

The bad news (basically phenomenological):

  • Unwanted massless states, giving problems at large distance (strong interactions are short-range forces);

  • Softness, giving problems at short distance (see below);

  • The need for six extra dimensions of space, for a total of ten space-time dimensions.

On the other hand, the following experimental facts:

  • The constant high-energy limit of \(R = \sigma (e^+ e^- \rightarrow \textrm{hadrons})/\sigma (e^+ e^- \rightarrow \mu ^+ \mu ^-)\),

  • Bjorken scaling in deep inelastic lepton-hadron collisions,

  • The relative abundance of large-\(p_t\) events in CERN’s pp collisions at the Intersecting Storage Rings (ISR),

were providing strong evidence for the existence of point-like structures inside the hadrons, structures completely absent in the Nambu-Goto string.

5 QCD Takes Over

Around 1973–1974 QCD clearly took the upper hand over the hadronic string. The points in its favor were many:

  • Its proven ultraviolet (asymptotic) freedom explaining the abundance of hard collisions;

  • Its conjectured, and later proven (see Guido Martinelli’s talk), infrared slavery (confinement) leading to string-like excitations via chromo-electric flux tubes. The string tension is a well-defined quantity in QCD, via the behavior of large Wilson loops;

  • Its reinterpretation of duality diagrams (and their higher-order topologies) in terms of large-N expansions [14]. In large-\(N_c\) QCD (at fixed ’t Hooft coupling \(\lambda = g^2 N_c\)) duality diagrams acquire a precise meaning: they are the sum of planar Feynman diagrams bounded by quark propagators and filled with gluons (as shown in Fig. 9.5); a minimal version of the counting is sketched right after this list. In this approximation resonances have zero width, the scattering amplitude is meromorphic, exhibits (most likely) DHS duality, and generates a scale (\(\Lambda ^{-2} \sim \alpha '\)) via a renormalization-group phenomenon known as dimensional transmutation.
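
A minimal reminder of the counting behind this reinterpretation (with \(\lambda = g^2 N_c\) held fixed): a connected diagram that can be drawn on a surface of genus h with b boundaries (quark loops) scales as

$$\begin{aligned} \mathcal {A} \;\sim \; N_c^{\,2-2h-b}\; f_{h,b}(\lambda ) \;, \end{aligned}$$

so the leading contribution comes from planar gluon diagrams bounded by a single quark loop (\(h=0, b=1\)), precisely the topology of the duality diagram in Fig. 9.5, while extra handles and quark loops are suppressed by powers of \(1/N_c\), mirroring the topological (loop) expansion of the dual string.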

With the exception of the first one, these properties of what we now believe to be the correct theory of strong interactions explain why, starting from a bottom-up approach, we landed on a string theory of hadrons, albeit not on the right one! Strings are there in QCD, and possibly represent the best description of its large-distance confining dynamics, but their precise formulation (even in the large-\(N_c\) limit) is still missing.

Fig. 9.5 Reinterpretation of a duality diagram in the ’t Hooft limit [14]

6 Turning a Defeat into a Victory?

Around 1974 most people working in string theory turned their attention to the newly constructed Standard Model (of which QCD is a basic component). An important proposal by Scherk and Schwarz [15], in retrospect perhaps too daring for its time, went almost unnoticed for a full decade. It was as follows.

Upon rescaling the string tension by some twenty orders of magnitude, string theory could perhaps be reinterpreted as a candidate theory of all truly elementary particles: not of hadrons, but of their constituents (quarks and gluons), as well as of leptons, gauge bosons, and, all the way up, the graviton.

Under this reinterpretation the shortcomings of the hadronic string became advantages:

  • Massless particles of spin \(J = 1, 2\) are exactly what is needed for gauge interactions and gravity.

  • Softness cures the long-standing problem of QFT’s UV divergences, making quantum string gravity well defined (at least perturbatively).

  • Extra dimensions, if compact, can be used to generate new gauge interactions through (a stringy version of) the Kaluza-Klein idea.

The combination of these properties could possibly provide a finite quantum theory of all interactions, including gravity.

It took, however, till 1984 for a breakthrough paper by Green and Schwarz [16] to make it possible for people to take such a dream seriously. Their paper showed how to eliminate (almost miraculously) the only remaining inconsistency, a quantum gauge/gravitational anomaly (a well-known constraint in more conventional quantum field theories such as the Standard Model), by severely restricting the underlying gauge symmetry.

Overnight many theorists went back to (if old enough), or jumped into (if young enough), the new adventure, and many new results quickly followed. For lack of space I will mention just three of them.

6.1 Stringy Symmetries

The stringy version of Kaluza-Klein theory leads to new kinds of symmetries, known as T-dualities: for closed strings, large and small compactification radii (with respect to \(\sqrt{\alpha '}\)) are equivalent, upon swapping momentum and winding modes, implying a minimal compactification radius \(R_c \sim \sqrt{\alpha '}\).
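
In the simplest setting (one coordinate of the closed bosonic string compactified on a circle of radius R, with momentum number n, winding number w and oscillator levels \(N, \tilde{N}\) subject to \(N - \tilde{N} = n w\)), the mass spectrum makes this symmetry manifest:

$$\begin{aligned} M^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha '^{\,2}} + \frac{2}{\alpha '}\bigl( N + \tilde{N} - 2 \bigr) \;, \end{aligned}$$

which is left invariant by the simultaneous exchange \(n \leftrightarrow w\), \(R \leftrightarrow \alpha '/R\), whose fixed point is the self-dual radius \(R_c = \sqrt{\alpha '}\).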

At that minimal (self-dual) radius, compactification gives non-abelian gauge interactions with \(R_c\) playing the role of the Higgs field. A cosmological variant of T-duality is also at the basis of new (big bounce) cosmologies [17].

6.2 The D-Brane Revolution

T-duality looked possible only for closed strings, since open strings can carry momentum but, apparently, no winding. That looked suspicious to J. Polchinski, who found a way out of the puzzle in 1994 [18]. It went as follows.

T-duality is deeply rooted in the canonical transformation [19] \(P \leftrightarrow X'\) (the latter being related to winding). For open strings such a transformation relates open strings with Neumann boundary conditions (i.e. with free ends as one had assumed to be the case till then) to open strings with Dirichlet boundary conditions (i.e. with fixed ends). The latter were called D-strings. Note that different boundary conditions can be specified in different spatial coordinates.

While Neumann open strings (N-strings) carry momentum but no winding, D-strings carry winding but no momentum. T-duality then simply connects N- to D-strings. Instead, as we have already discussed, it relates closed strings to themselves (as they move/wind in apparently different but equivalent compact spaces).

D-branes is the name given to the sub-manifolds of the full (typically 9-dimensional) space on which the ends of D-strings are, by definition, stuck. Their dimensionality, p, is thus given by the number of Neumann directions along which those ends can freely move. One thus talks about \(D_p\)-branes.

The brane revolution led to many important results, e.g.:

  • The first example, by Strominger and Vafa [20], of black holes whose Bekenstein-Hawking entropy can be given a statistical-mechanics interpretation by counting their micro-states.

  • Apparently unrelated string theories are actually connected to each other through a web of dualities so that, eventually, they all appear to descend from different limits of a common ancestor, a mysterious M-theory in eleven dimensions [21], with the finite size of the 11th dimension playing the role of the string coupling (the string analog of the fine-structure constant).

  • The most recent (and amazing) use of D-branes came, however, in 1997.

6.3 Gauge-Gravity Duality

A stack of N coincident \(D_3\)-branes has an associated U(N) gauge theory living on their four-dimensional (in general \((p+1)\)-dimensional) space-time. One can then take the large-N limit, keeping \(\lambda = g^2 N\) fixed (cf. the already mentioned ’t Hooft limit in QCD).

In the ambient ten-dimensional space-time the branes (whose energy density is known) generate a geometry which, near the branes, approaches five-dimensional Anti-de Sitter space-time (\(AdS_5\)) times a five-dimensional sphere (\(S^5\)), with the AdS and sphere radii both fixed (in string units) in terms of \(\lambda \).
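
The relation behind this last statement, in the standard conventions where \(g_{YM}^2 = 4\pi g_s\) (with \(g_s\) the string coupling and \(R_{AdS}\) the common AdS and sphere radius), reads

$$\begin{aligned} \frac{R_{AdS}^4}{\alpha '^{\,2}} = 4\pi g_s N = g_{YM}^2 N = \lambda \;, \end{aligned}$$

so that the small-curvature (supergravity) regime \(R_{AdS} \gg \sqrt{\alpha '}\) corresponds to strong ’t Hooft coupling, \(\lambda \gg 1\), on the gauge-theory side.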

In 1997 Maldacena [22] conjectured an equivalence (made precise soon after by E. Witten) between a maximally supersymmetric gauge theory in four dimensions (the boundary of \(AdS_5\)) and a ten-dimensional supergravity theory in \(AdS_5 \times S^5\). The large-\(\lambda \) limit of the gauge theory gets related to the large-AdS-radius limit of the gravity theory. Difficult non-perturbative phenomena on the gauge-theory side thus get mapped into an “easy” small-curvature regime on the gravity side.

Example: a lower bound on the ratio of shear viscosity to entropy density (\(\frac{\eta }{s} \ge \frac{1}{4 \pi }\)) was predicted and is apparently nearly saturated by the quark-gluon plasma produced at Brookhaven (RHIC) and at the LHC. There is by now overwhelming evidence for the validity of Maldacena’s conjecture.

7 Back to Square One?

Maldacena’s conjecture has been generalized to other gauge-gravity pairs. Attempts have been made, with some success, to extend the correspondence to less supersymmetric theories and even to (large-\(N_c\)) QCD.

We seem to be back to the problem we mentioned earlier: Can we find out, at least in ’t Hooft’s limit, how to describe the true string lurking behind the hadronic world? Perhaps a simple gravity problem can shed light on a hard gauge theory problem ...

That would close a 50-year-old circle!