Abstract
We show how the difficulties faced by quantum field theory in advancing beyond QED led to various models, one of which was Regge theory, with the addition of the dual resonance idea. This model achieved significant empirical successes, had several powerful theoretical virtues, and was therefore pursued with some excitement. We trace the story from Regge’s introduction of complex angular momentum into quantum mechanics, to its extension into the relativistic domain. This was combined with ‘bootstrap’ physics, according to which the properties of elementary particles, such as coupling constants, could be predicted from a few basic principles coupled with just a small amount of empirical input. This journey culminated in the finite energy sum rules of Dolen, Horn, and Schmid, which were elevated to the status of a duality principle. The primary researcher network guiding research in this period was fairly narrowly confined, and can be charted quite precisely, with Geoffrey Chew featuring as a key hub leading an anti-QFT school.
As you can see, the new mistress is full of mystery but correspondingly full of promise. The old mistress is clawing and scratching to maintain her status, but her day is past.
Geoffrey Chew, Rouse Ball Lecture. Cambridge 1963
An errata to this chapter is available at DOI 10.1007/978-3-642-45128-7_11
David Gross has described the early 1960s as a period of “experimental supremacy” [28, p. 9099]. The theoretical situation was almost entirely phenomenologically-oriented, with a profusion of new particle data being generated by experiments at Brookhaven, CERN, DESY, SLAC, and elsewhere. Theory was in a rather sorry state. Most of the work was concerned with model building to try and get some kind of foothold on the diversity of new phenomena coming out of the latest generation of particle accelerators. There was genuine uncertainty about the correct framework for describing elementary particles, and even doubts as to whether there were such things as elementary particles. Footnote 1
One of the central problems was posed by the strong interactions, which involve hadronsFootnote 2 and determine the properties of nuclei. True to their name the strongly interacting particles have large coupling constants determining how strongly they interact with one another, so the standard field theoretical tool of expanding quantities in powers of these constants fails to give sensible results.Footnote 3 Steven Weinberg notes that the “uselessness of the field theory of strong interactions led in the early 1950s to a widespread disenchantment with quantum field theory” [55, p. 17]. It didn’t end with the strong interactions: the weak interaction too was then described by the non-renormalizable Fermi theory. This situation led to a move to bypass quantum field theory and instead deal directly with the S-matrix (i.e. the scattering probability amplitude), imposing on it the fundamental constraints expected of a good relativistic quantum theory, together with other general properties characteristic of the strong interaction.Footnote 4 As Pierre Ramond writes, “[i]n the absence of a theory, reliance on general principles alone was called for” [46, p. 503].
String theory did not spontaneously emerge from a theoretical vacuum; it emerged precisely from the conditions supplied by this profound foundational disenchantment.Footnote 5 With hindsight, the earliest developments in string theory—i.e. the dual resonance models alluded to in the previous chapter—can be viewed as perfectly rational and progressive steps given the state of physics just prior to it. In this first part we describe this state of affairs, and introduce the mathematical and physical concepts, formalism, and terminology necessary in order to make sense of early (and large portions of later) string theory.Footnote 6 Moreover, many of the old concepts, though no longer so well known, still make an appearance in modern string theory. This part might therefore also serve as a useful primer for those wishing to learn some string theory, by providing some of the original physical intuitions and motivations.
1 Hadrontology
Recall that hadrons come in two families: baryons (particles composed of three quarks, such as the protons and neutrons making up an atomic nucleus) and mesons (force-mediating particles composed of a quark and an anti-quark, such as the pions and kaons found in cosmic rays)—particles that do not interact via the strong force are called leptons. The interactions amongst the components of nuclei were originally thought to be mediated entirely by \(\pi \)-mesons (a name contracted to ‘pions’). However, the early models did not consider hadrons as internally structured entities composed of point-like constituents that interact through hard collisions, but as extended objects with ‘soft’ interactions.Footnote 7 From our present position we would say that the models were latching on to low-energy, long-range aspects of hadron physics in which the pions were the tip of an iceberg. There were many more mesons lurking below the surface. Unifying the profusion of mesons and baryons posed one of the most serious challenges of mid-twentieth century physics.
The challenge was further intensified as technological advances made possible proton acceleratorsFootnote 8 and bubble chambers capable of registering events involving hadrons by photographing bubbles formed by charged particles as they dart through a superheated liquid, thereby superseding earlier cosmic rays observations.Footnote 9
Of course, quantum mechanics renders events probabilistic. This infects the natural observables in particle physics too. One of the observable quantities is the scattering cross-section, which offers a measure of the effective area over which colliding particles interact (for a pair of colliding beams, or a beam and a static target). It tells you the likelihood of a collision given that two particles are moving towards one another: the magnitude of the cross-section is directly proportional to this likelihood. The cross-section itself is a function of the energy of the incoming beams, and if one examines its behaviour as a function of this energy, one finds peaks, of which one can ask whether they correspond to particles or not.
Analysing the data from these scattering experiments pointed to the production of very many more new particles, or ‘hadronic resonances’ (very short lived, ‘fleeting’ strongly interacting particlesFootnote 10 corresponding to sharp peaks in the total cross section, as a function of the energy)—of course, the strong interaction’s being strong implies that such particle production will be plentiful. As described in the Particle Data Group’s documents, resonant cross sections are described using the Breit-Wigner formula:
$$\begin{aligned} \sigma _{BW}(E) = \frac{2J+1}{(2S_{1}+1)(2S_{2}+1)}\frac{4\pi }{k^{2}}\left[ \frac{\varGamma ^{2}/4}{(E-E_{0})^{2}+\varGamma ^{2}/4}\right] B_{in}B_{out} \end{aligned}$$
where \(E\) is the energy in the centre of mass frame, \(J\) is the resonance spin, \((2S_{1}+1)\) and \((2S_{2}+1)\) are the polarisation states of a pair of incident particles, \(k\) is the initial momentum in the centre of mass frame, \(E_{0}\) is the resonance energy (again in the centre of mass frame), \(\varGamma \) describes the resonance width (with \(\frac{1}{\varGamma }\) giving the mean resonance lifetime), and the \(B_{in}B_{out}\) pair describe the resonance branching fractions for the incoming and outgoing channels, where \(B_{channel}\) would be computed as \(\frac{\varGamma _{channel}}{\varGamma _{all}}\) (that is, one counts the total number of decays through some channel relative to the total number of particles produced)—see http://pdg.lbl.gov/2013/reviews/rpp2012-rev-cross-section-formulae.pdf for more details.
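To make the formula concrete, here is a minimal numerical sketch in Python. The resonance parameters below (roughly those of the \(\rho (770)\) meson in \(\pi \pi \) scattering) and the momentum \(k\) are illustrative assumptions, not fitted values:

```python
import math

def breit_wigner(E, E0, Gamma, J, S1, S2, k, B_in=1.0, B_out=1.0):
    """Breit-Wigner resonant cross-section (natural units, hbar = c = 1)."""
    spin_factor = (2 * J + 1) / ((2 * S1 + 1) * (2 * S2 + 1))
    shape = (Gamma ** 2 / 4) / ((E - E0) ** 2 + Gamma ** 2 / 4)
    return spin_factor * (4 * math.pi / k ** 2) * shape * B_in * B_out

# Illustrative parameters only (roughly the rho(770) in pi-pi scattering):
E0, Gamma = 0.775, 0.149   # resonance energy and width, GeV
k = 0.36                   # centre of mass momentum at resonance, GeV (assumed)
peak = breit_wigner(E0, E0, Gamma, J=1, S1=0, S2=0, k=k)
half = breit_wigner(E0 + Gamma / 2, E0, Gamma, J=1, S1=0, S2=0, k=k)
print(peak, half)          # the cross-section at E0 + Gamma/2 is half the peak
```

At \(E = E_{0}\) the bracketed shape factor equals one, so the peak cross-section is just the spin factor times \(4\pi /k^{2}\); at \(E = E_{0} \pm \varGamma /2\) it falls to half that value, which is why \(\varGamma \) is called the width.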
The search for patterns in this jumble of data led to the discovery of a new symmetry principle and a deeper quark structure underlying the dynamics of hadrons. This work can be viewed in terms of a drive to systematise.Footnote 11 A central concern was whether these new particles (or, indeed any of the particles) were ‘fundamental’ (i.e. elementary)—with the sheer number of different particle types naturally casting doubt on the idea that they were all elementary. If only a few were elementary, then the others might be constructed as bound states of that small number of elementary particles.Footnote 12
One of the most hotly pursued approaches, S-matrix theory, involved focusing squarely on just those properties of the scattering process—or more precisely of the probability amplitude for such a scattering event—that had to be obeyed by a physically reasonable relativistic quantum field theory. The combination of these general principles with (minimal) empirical evidence drawn from observations of hadrons was believed to offer a way of (eventually) getting a predictive physics of strong interactions.Footnote 13 In its most radical form, espoused by Berkeley physicist Geoffrey Chew, the question of which hadrons were elementary and which were bound states was simply not appropriate; instead, one should treat them democratically, as on all fours.Footnote 14
The S-matrix was originally developed by John Wheeler, as a way of condensing the complex bundle of information that is needed to describe a collision process, encapsulating the experimentally accessible information about any scattering experiment one could think of. Heisenberg actually named the object that achieves this condensation and imbued it with far more significance than Wheeler ever did.Footnote 15 Wheeler saw it as a mere tool “to deal with processes to be analysed by a more fundamental treatment” [56]. This might, as in the case of quantum electrodynamics [QED] be provided by a quantum field theory, which delivers up an S-matrix as an infinite expansion in the coupling constant (as we saw, in the case of QED this is the fine-structure constant \(\alpha _{\textit{EM}} = \frac{e^{2}}{4\pi }\)).Footnote 16 Alternatively, one can sidestep talk of fields entirely, and focus on the scattering probability amplitude itself, which after all should contain all physically observable information (including the cross sections mentioned above, which can be written in terms of the matrix elements).
In this latter sense the S-matrix has an affinity with Bohr’s positivistic strategy of ignoring what happens between energy transition processes involving electrons orbiting atoms. In this case what is ignored (as unphysical or meaningless, being too short-lived to be observable) are the unmeasurable processes occurring between initial and final states of a collision process.Footnote 17 Rather than describing what happens at the precise spacetime point (the vertex) at which the two or more particles meet (in which case there is no measurement to ascertain what is happening), one focuses on the measurable ‘free’ (non-interacting) situation when the systems were not yet and are no longer able to causally interact (mathematically speaking, at infinity, in the asymptotic limits), and therefore the particles have straight trajectories at constant velocities. In effect one draws a black box around the innards of the process and focuses on the particles entering and leaving the box and the probabilities of their doing so. This is somewhat paradoxical since the interaction between particles is described by an expression involving the particles’ being far apart!
The S-matrix catalogues these possible relations between inputs and outputs along with their various probabilities. Measurable quantities such as scattering cross-sections can be written in terms of the matrix elements of the (unitary) S-matrix operator \(\mathsf {S}\). Recall that in quantum mechanics the state of a system is represented by a wave function \(\psi (p)\), a square-integrable function of the system’s momentum \(p\) (a 3-vector). For \(n\) particles it is a function of all the particles’ momenta \(p_{1}, ..., p_{n}\) (each a 3-vector).Footnote 18 The S-matrix is then an operation that transforms an initial state (a free wavefunction) of such incident particles to a final state (another free wavefunction), which, under the action of the unitary operator \(\mathsf {S}\), will have the general form of a superposition of all possible final states. The amplitude for finding one of these final states (say \(|p'_{1}, p'_{2} \rangle \)) in a measurement (for which the initial state is \(|p_{1}, p_{2}\rangle \)), is given by \(\langle p'_{1}, p'_{2} | \mathsf {S} | p_{1}, p_{2}\rangle \).
As in many episodes in the history of physics, what was essentially a mathematical result, here from complex analysis, led in 1959 to a breakthrough in physical theory. Analytic continuation allows one to extend the domain of definition of a complex function. A (complex) function is said to be analytic (or holomorphic, in mathematical terms) if it is differentiable at every point in some region. It was already known, thanks to the work of Gell-Mann, Chew, and others, that the S-matrix was an analytic function of its variables (representing physical quantities: the momenta of ‘incoming’ and ‘outgoing’ particles). This allowed the properties of the S-matrix to be probed almost independently of field theoretical notions in a quasi-axiomatic fashion (with very little by way of direct experimental input). The S-matrix theory (also known as the ‘theory of dispersion relations’,Footnote 19 though the links between dispersion relations and Heisenberg’s theory took more time to emerge) then sought to derive the S-matrix by imposing various natural consistency conditions on it: Lorentz invariance, crossing,Footnote 20 unitarity, and analyticity (see the box below).
-
Lorentz invariance is satisfied when physical quantities are unchanged by Lorentz transformations (of the form \(x'^{\mu } = \varLambda ^{\mu }_{\nu } x^{\nu }\) for all 4-vectors \(x^{\nu } = (x^{0}, \mathbf {x}) = ( {t} , \mathbf {x})\), where \(\varLambda ^{\mu }_{\nu }\) is a Lorentz transformation matrix). (Of course, this also implies that energy, momentum, and angular momentum are conserved.)
-
Analyticity is satisfied just in case a scattering amplitude \(A\) is an analytic function of the Lorentz invariant objects used to represent the physical process in which one is interested. This formal condition is the mathematical counterpart of causality (i.e. the outlawing of effects preceding causes). (This condition has its origins in the dispersion relations of classical optics—see footnote 19.)
-
Crossing is a symmetry relating a pair of processes differing by the exchange of one of the input and output particles (mapping particle to anti-particle and vice versa); for example, \(a+b \rightarrow c+d\) and \(a + \overline{c} \rightarrow \overline{b}+d\) (where \( \overline{b}\) and \( \overline{c}\) are \(b\) and \(c\)’s anti-particles).
-
Unitarity is simply the condition that the scattering matrix \(S\) is unitary: \(S^{\dag }S = 1\). Or, in other words, probability (that is, the squared modulus of the amplitude) must be conserved over time. (This also includes the condition of coherent superposition for reaction amplitudes.)
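The unitarity condition can be illustrated with a toy two-channel \(S\)-matrix in Python; the parametrization below, via a real mixing angle and two phase shifts, is purely illustrative, not a physical model:

```python
import numpy as np

# Toy two-channel S-matrix built from a real mixing angle and two
# elastic phase shifts (illustrative values only).
theta, d1, d2 = 0.3, 0.7, 1.1
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = U @ np.diag([np.exp(2j * d1), np.exp(2j * d2)]) @ U.T

# Unitarity: S^dagger S = 1 ...
unitary = np.allclose(S.conj().T @ S, np.eye(2))

# ... which is equivalent to conservation of total probability:
psi_in = np.array([1.0, 0.0])          # all flux incoming in channel 1
psi_out = S @ psi_in
total_prob = np.sum(np.abs(psi_out) ** 2)
print(unitary, total_prob)
```

However the flux distributes itself over the outgoing channels, the squared moduli always sum to one: no probability is created or destroyed by the scattering.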
As indicated above, one of the central objects of elementary particle physics is the scattering (or transition) amplitude \(A\). This is a function that churns out probabilities for the outcomes of collision experiments performed on pairs of particlesFootnote 21—note, this is not the same as the matrix of such amplitudes described above. It takes properties of the particles as its arguments. For example, the function might depend on the energy \(E\) of the collision event and the scattering angle \(\theta \) representing a particle’s deflection; the function \(f(E,\theta )\) thus encodes the nature of this interaction. The general representation involves the incoming energy and the momentum that is transferred in the collision, \(s\) and \(t\) respectively, defined as follows:
-
\(t\) is the square of the difference between the initial and final momenta of the particles involved in some process (also known as “the momentum transfer”):
$$\begin{aligned} t = (p_{a} - p_{c})^{2} = (p_{b} - p_{d})^{2} \end{aligned}$$(2.2)
-
\(s\) is the square of the sum of the momenta of the initial states on the one hand and the final states on the other:
$$\begin{aligned} s = (p_{a} + p_{b})^{2} = (p_{c} + p_{d})^{2} \end{aligned}$$(2.3)
We denote the incoming momenta of the particles, \(p_{a}\) and \(p_{b}\), with outgoing momenta \(-p_{c}\) and \(-p_{d}\). In this process there is a conservation of total momentum (4-momentum); i.e. \(p_{a} + p_{b} = p_{c} + p_{d}\) (also, \(p_{i}^{2} = m_{i}^{2}\), with \(m_{i}\) being the \(i\)th particle’s mass).Footnote 22 The scattering amplitude is, then, a function of certain conserved (invariant) quantities (‘channel invariants’). Suppose we have some process involving a pair of incoming particles going into some pair of outgoing particles (of the same mass \(m\), for simplicity): \(a+b \rightarrow c+d\). This will involve a 4-point amplitude \(A(s,t)\). The amplitude is then written as \(A(s,t) \sim \beta (t) (s/s_{0})^{\alpha (t)}\) (where \(\beta \) is a residue function). The squared modulus of this object delivers the observable scattering cross-section discussed above.
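These definitions are easy to check numerically. The sketch below sets up an elastic equal-mass collision in the centre of mass frame (the mass, momentum, and angle are illustrative values) and verifies that the two defining expressions for each invariant agree, as guaranteed by 4-momentum conservation:

```python
import numpy as np

def msq(p):
    """Minkowski square of a 4-vector (E, px, py, pz), signature (+, -, -, -)."""
    return p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2

# Elastic a + b -> c + d in the centre of mass frame, equal masses m.
# The mass, momentum, and scattering angle are illustrative values only.
m, k, theta = 0.14, 0.3, 0.8          # GeV, GeV, radians
E = np.sqrt(m ** 2 + k ** 2)
pa = np.array([E, 0.0, 0.0,  k])
pb = np.array([E, 0.0, 0.0, -k])
pc = np.array([E,  k * np.sin(theta), 0.0,  k * np.cos(theta)])
pd = np.array([E, -k * np.sin(theta), 0.0, -k * np.cos(theta)])

s = msq(pa + pb)   # squared centre of mass energy
t = msq(pa - pc)   # squared momentum transfer
print(s, t)

# The two defining expressions for each invariant agree, because
# 4-momentum is conserved (pa + pb = pc + pd) and each leg is on-shell:
print(np.isclose(s, msq(pc + pd)), np.isclose(t, msq(pb - pd)))
print(np.isclose(msq(pa), m ** 2))
```

Note that \(s = 4E^{2} \ge (2m)^{2}\) here, i.e. the configuration sits in the physical region of the \(s\)-channel, while \(t = -2k^{2}(1-\cos \theta ) \le 0\), as expected for a physical momentum transfer.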
The Mandelstam variables define reaction channels as follows (see Fig. 2.4):
-
The reaction \(a + b \rightarrow c+d\) occurs in the \(s\)-channel, with the physical (real) region defined by values \(s \ge (m_{a} + m_{b})^{2}\).
-
The ‘crossed’ reaction \(a + \overline{c} \rightarrow \overline{b}+d\) occurs in the \(t\)-channel (as noted in the box above), with the physical (real) region defined by values \(t \ge (m_{a} + m_{c})^{2}\).Footnote 23
Recall that Feynman diagrams were originally intended to provide a mathematical representation of the various contributions to the S-matrix in the context of perturbative (Lagrangian) field theories. However, in the late 1950s Landau [34] had instigated the examination of the links between Feynman graphs and singularities of the S-matrix, thus liberating the former from weakly-coupled quantum field theories to which they were previously thought to be hitched. The singularity conditions that Landau found pointed to a correspondence between tree graphsFootnote 24 and poles (and loop diagrams and branch points). Thus was born the idea that general conditions imposed on the structure of the scattering amplitude might be enough to determine the physical behaviour of particles.
These considerations led to a variety of features that could be aimed at in model building. It was from this search that the Veneziano model was born. Before we discuss that model, we first need to say something about some important intervening work, of Tullio Regge, Stanley Mandelstam, and Geoffrey Chew, that will help us make better sense of the foregoing.
2 Chew’s Boots and Their Reggean Roots
In 1959 Tullio Regge [47] suggested that one think of solutions to the Schrödinger equation for the potential scattering problem in terms of the complex plane, treating the angular momentum (which, physically, of course, takes on discrete values) as a complex variable. This ignited a surge of research linking ‘Regge theory’ to the world of hadrons and high energy (special relativistic) physics.Footnote 25
A pole of a complex function is a singularity at which the function diverges (in graphical terms, poles correspond to tree graphs, and branch points to loops). Regge focused on the potential scattering problem, where the amplitude has simple poles at certain values of the (complex) angular momentum. The locations of these poles are determined by the energy of the system, and the poles themselves were taken to correspond to the propagation of intermediate particles. As one tunes the energy parameter, one traces out a graph (a Regge trajectory) describing the properties of resonances and scattering amplitudes (for which the transfer of momentum is large). In the relativistic case one must introduce another class of singularity in angular momentum, in particular at \(j = -1\). Stanley Mandelstam tamed these singularities by introducing a second Riemann sheetFootnote 26 of the complex \(j\)-plane and performing branch cuts in the \(j\)-plane, known as “Regge cuts”.Footnote 27
A Regge pole is then the name given to a singularity that arises when one treats angular momentum \(J\) as a complex variable.Footnote 28 Physically a Regge pole corresponds to a kind of particle that ‘lives’ in the complex angular momentum plane, whose spin is linearly related to its mass. Tuning the energy of such a particle to a value which would spit out an integer or half-integer value for the spin would produce a particle that one ought to be able to detect. Confirmation of this relationship was indeed found in early hadron spectroscopy which generated Regge plots showing (for mass squared plotted against spin) a linearly rising family of particles on what became known as a ‘Regge trajectory’ (see Fig. 2.5).Footnote 29 In this way specific types of particles could be classified by these trajectories, each trajectory containing a family of resonances differing with respect to spin (but sharing all other quantum numbers).
There was a curious feature about some of the spin values,Footnote 30 as represented in the plots of Regge trajectories, namely that they were seemingly unbounded from above. Particles with large spins are more like finite-sized objects possessing angular momentum (from real rotationFootnote 31). In the case of baryons, one can find experimentally observed examples of spin \(J = 10\)! According to Regge theory, the high energy behaviour of scattering amplitudes is dominated by the leading singularity in the complex angular momentum plane. Crucially, if such a singularity is a pole at \(J=\alpha (t)\) (in other words, a Regge pole) then the scattering amplitude has the asymptotic behaviour \(\varGamma (1-\alpha (t)) (1+ e^{-i\pi \alpha (t)}) s^{\alpha (t)}\) (as \(s\rightarrow \infty \) with \(t<0\)).
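The content of a linear Regge trajectory can be sketched numerically. The intercept and slope below are merely illustrative round numbers of the kind quoted for meson trajectories (slope in GeV\(^{-2}\)), not fitted data:

```python
import numpy as np

# A linear Regge trajectory alpha(t) = alpha0 + alphap * t. The intercept
# and slope are illustrative round numbers (slope in GeV^-2), not a fit.
alpha0, alphap = 0.5, 0.9

def alpha(t):
    return alpha0 + alphap * t

# Resonances sit where alpha(m^2) equals a physical spin J; inverting
# gives the (mass)^2 of each member of the trajectory's family:
for J in (1, 3, 5):
    m2 = (J - alpha0) / alphap
    print(J, np.sqrt(m2))

# Regge asymptotics: for large s at fixed t < 0, |A| ~ s**alpha(t), so
# log|A| is linear in log s with slope alpha(t):
t = -0.5
s = np.array([10.0, 100.0, 1000.0])
slope = np.diff(np.log(s ** alpha(t))) / np.diff(np.log(s))
print(slope)
```

The first loop inverts \(J = \alpha (m^{2})\) to find the masses at which physical resonances sit on the trajectory; the second confirms the operational meaning of Regge asymptotics, namely a power-law growth of the amplitude governed by \(\alpha (t)\).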
The bootstrap approach grew out of these developments of Regge and Mandelstam.Footnote 32 In dispersion theory one tries to generate physics from a few basic axioms, such as Lorentz invariance, unitarity, and causality discussed above. These are used as (high-level physical) constraints on the space of possible theories as input data from the world is fed in. The dispersion theory approach and the old S-matrix approach were merged together in Chew’s ‘bootstrap’ approach to physics.Footnote 33
A crucial component of Chew’s approach was the ‘pole-particle’ correspondence. According to this principle, there is a one-to-one correspondence holding between the poles of an (analytic) S-matrix and resonances, so that the position of a pole in the complex energy plane gives the mass of the resonance while the residue gives the couplings. When the pole is complex, the imaginary part gives the resonance width, and hence the inverse of its lifetime. The idea was that the axioms of the dispersion approach would uniquely pin down the correct S-matrix, and thereby deliver physical predictions. The focus would be on the analytic properties of the S-matrix. The theory had some degree of success at a phenomenological level.
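The pole-particle correspondence can be made concrete in a few lines of Python; the numbers are again illustrative (roughly a \(\rho \)-like resonance), and the single-pole amplitude is the simplest caricature of the idea, not Chew’s full machinery:

```python
# Pole-particle correspondence: a resonance shows up as a pole of the
# amplitude in the complex energy plane at E = M - i*Gamma/2. The real
# part gives the mass, the imaginary part the width (and so the inverse
# lifetime). Numbers are illustrative only (natural units, hbar = c = 1).
M, Gamma = 0.775, 0.149
pole = complex(M, -Gamma / 2)

mass = pole.real
width = -2 * pole.imag
lifetime = 1 / width
print(mass, width, lifetime)

# Near the pole the amplitude is dominated by a single-pole term whose
# squared modulus reproduces the Breit-Wigner peak shape:
def amp(E, g=1.0):
    return g ** 2 / (E - pole)

print(abs(amp(M)), abs(amp(M + Gamma)))   # largest at E = M
```

Reading off \((M, \varGamma )\) from the pole position, and the coupling \(g^{2}\) from the residue, is exactly the dictionary the principle asserts.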
Presently, of course, our best description of nature at very small subatomic scales is couched in the framework of quantum field theory [QFT]—a framework Chew believed unhealthily imported concepts from classical electromagnetism. It is thought that there are six fundamental leptons and six fundamental quarks. These are bound together by forces that are understood as involving quantum fields. The unified theory of the weak and electromagnetic interactions, the electroweak force, is understood via the exchange of four kinds of particle: the photon, the \(W^{+}\), the \(W^{-}\), and the \(Z^{0}\). The strong force is mediated via the exchange of eight types of massless gluon. The standard model also involves Higgs particles, \(H^{0}\), whose associated field is responsible for the generation of the masses of observed particles.Footnote 34 In quantum field theory the dynamics is delivered through a Lagrangian, from which one derives equations of motion. Essentially what Chew proposed was to eliminate equations of motion in favour of general principles. In the case of strong interactions, at least, Chew believed that a Lagrangian model simply wasn’t capable of delivering up a satisfactory S-matrix.
At the root of Chew’s proposal was the belief that field theory could simply not cope with the demands imposed by strong interaction physics. He wrote that “no aspect of strong interactions has been clarified by the field concept” [6, p. 1]. Though there was a family of hadrons, no family members appeared to be fundamental, and a field for each and every hadron would result in filling space with an absurd number of fields. For this reason, Chew suggested that all hadrons should be treated on an equal footing: neither more nor less fundamental than any other. The notion of fundamentality dropped out in favour of nuclear democracy, with the particles understood as in some sense composed out of each other as in footnote 33, with the forces and particles bundled together as a package deal. Chew expresses it as follows:
The forces producing a certain reaction are due to the intermediate states that occur in the two “crossed” reactions belonging to the same diagram. The range of a given part of the force is determined by the mass of the intermediate state producing it, and the strength of the force by the matrix elements connecting that state to the initial and final states of the crossed reaction. By considering all three channels [i.e., orientations of the Feynman diagram] on this basis we have a self-determining situation. One channel provides forces for the other two—which in turn generate the first [6, p. 32].
A further development that played a crucial role was made by Chew’s postdoc at Berkeley, Stanley Mandelstam. He had discovered a way to resolve a problem in understanding the strong interaction in terms of particle exchange (à la YukawaFootnote 35). The problem was that the strong force is short range, and the exchanged particles therefore massive—Yukawa had calculated a characteristic mass of \(100\) MeV, corresponding to a sub-nuclear range of the strong force of \(10^{-13}\) cm. The old cosmic ray observations delivered a candidate for such a particle in the form of the pion. Yet, by the late 1950s, particles were also being discovered with spins greater than 1, increasing linearly. This would imply that the exchange forces would also grow in such a way, without limit. Referring back to the discussion above, this would further imply that the scattering cross-section describing the size of the area over which the particles interact would also grow indefinitely. This is in direct conflict with the idea that exchanging massive particles demands smaller areas: the more massive the particles are, the less capable they are of covering large distances.
The solution was to treat the entire series of particles (with increasing spins) laid out along a Regge trajectory as the subjects of exchange (named a “pomeron” by Vladimir Gribov, after Pomeranchuk)—that is, rather than the individual points lying within the trajectories.Footnote 36 Applying this procedure keeps the cross-sections finite—a calculation that was performed by Chew and Steven Frautschi [5].Footnote 37
3 Enter Duality
An important step in the bootstrap approach was the principle of duality introduced by Dolen, Horn, and Schmid in 1967, at Caltech (they referred to it as “average duality” or “FESR duality”, for reasons given below).Footnote 38 They noticed that Regge pole exchange (at high energy) and resonance (at low energy) descriptions offer multiple representations (or rather approximations) of one and the same physically observable process. In other words, the physical situation (the scattering amplitude, \(A(s,t)\) Footnote 39) can be described using two apparently distinct notions (see Fig. 2.6):
-
A large number of resonances (poles) exchanged in the s-channel.
-
Regge asymptotics: \(A(s,t) \sim \beta (t) (s/s_{0})^{\alpha (t)}\) as \(s \rightarrow \infty \), involving the exchange of Regge poles in the t-channel.
That these are in some sense ‘equivalent’ in terms of the physical description was elevated to a duality principle Footnote 40:
DHS Duality Direct \(s\)-channel resonance particles are generated by an exchange of particles in the \(t\)-channel.
This has the effect that the representative Feynman diagrams for such processes are identified, so as to avoid the surplus states that would arise from “double counting”. For this reason, the two contributions to the amplitude are not to be summed together: summing over one channel is sufficient to cover the behaviour encapsulated in the other. This was matched by the experimental data. So-called “interference models” would demand that the two descriptions (both \(s\)- and \(t\)-channel contributions) be added together like ordinary Feynman tree diagrams, which, of course, would be empirically inadequate (see Fig. 2.7). As with any duality there is an associated epistemic gain: if we know about the resonances at low energies, we know about the Regge poles at high energies.Footnote 41
One can make some physical sense of the existence of such a duality by thinking about the ‘black box’ nature of the scattering methodology, as discussed previously. Since one makes measurements only of the free states (the asymptotic wave-functions), one cannot discern the internal structure between these measurements, and so given that both the \(s\)-channel (resonance) and \(t\)-channel (interaction via exchange) situations have the same asymptotic behaviour, they correspond to ‘the same physics’. However, the precise mathematical reason would have to wait first for the formulation of a dual amplitude, and then for the string picture, at which point it would become clear that conformal invariance was grounding the equivalence between such dual descriptions.
Mention must be made of the Finite Energy Sum Rules (i.e. where the energy has been truncated or cut in \(s\)), which are further consistency conditions, flowing from analyticity.Footnote 42 They are an expression of a linear relationship between the particles exchanged in the \(s\)- and \(t\)-channels and were a crucial step on the way to the DHS duality principle. They have enormous utility in terms of applications, not least in allowing the low and high energy domains of scattering amplitudes to be analytically connected: at high energies the scattering amplitude will be ruled by a handful of Regge poles (in the so-called ‘crossed’ \(t\)-channel), while at low energies the amplitude will be ruled instead by a handful of resonances (in the so-called ‘direct’ \(s\)-channel), as above. Thus, the FESR already establish a kind of duality between these two regimes so that \(t\)-channel (Regge) values can be determined from \(s\)-channel resonances. More formally, one begins with the (imaginary part of the) low energy amplitude characterised by resonances (which sits on the left hand side of the FESR equation) and builds up the Regge terms by analytic continuation (cf. [49, p. 246]). Schematically one has (borrowing from [43, p. 204]):
$$\begin{aligned} \langle \mathrm {Im}\, A^{\mathrm {resonance}}(s,t) \rangle = \langle \mathrm {Im}\, A^{\mathrm {Regge}}(s,t) \rangle \end{aligned}$$
The averaging refers to the fact that one is integrating over Regge and resonance terms (Fig. 2.8).Footnote 43 The FESR are formally expressed as follows:
$$\begin{aligned} \int _{0}^{N} ds\, s^{n}\, \mathrm {Im}\, A(s,t) = \sum _{i} \frac{\beta _{i}(t)\, N^{\alpha _{i}(t)+n+1}}{\alpha _{i}(t)+n+1} \end{aligned}$$
Hence, DHS duality is sometimes also called FESR-duality.
Though this duality in some ways embodies Chew’s Nuclear Democracy (since, in the case of \(\pi \pi \) scattering, both channels contain the same particles) it also paved the way for a departure from this picture. Using diagrammatic representations of the duality, Harari and Rosner reinterpreted the duality in terms of the flow of hadron constituents (quarks and anti-quarks Footnote 44) and the exchange of such.
Although the link wasn’t explicitly made at the time, these diagrams, in eliminating the links and vertices from standard Feynman graphs, already contain the germ of what would become string scattering diagrams, according to which only the topological characteristics are relevant in the scattering process—one can easily see that the exchange and resonance diagrams are deformable into one another, and so topologically equivalent. This equivalence was given a graphical representation in the work of Haim Harari (see Fig. 2.9).
Harari was then working at the Weizmann Institute. At around the same time, at Tel-Aviv University, Jonathan Rosner also came up with the idea of duality diagrams.Footnote 45 Rosner’s version can be seen in Fig. 2.10.
Since it makes an appearance in the following pair of chapters, we should also say something about the Pomeron (that is, the Pomeranchuk pole) in this context. The duality principle links Regge poles to resonances, but the Pomeron, with vacuum quantum numbers, falls outside of this scheme. It satisfies duality in a sense, but it turns out to be dual to the non-resonating background terms.
Another problematic issue was simple one-pion exchange. The problem with this case, vis-à-vis duality, is that the amplitudes for such exchange processes are real-valued, whereas, as we have seen, duality involves only the imaginary parts of amplitudes. Though this problem was discussed (see, e.g., the remark of Harari following Chan’s talk at a symposium on Duality-Reggeons and Resonances in Elementary Particle Processes [11, p. 399]), it doesn’t seem to have been satisfactorily resolved until André Neveu and John Schwarz’s dual pion model in 1971.
As we will see in the next chapter, Veneziano’s achievement was to display a solution to the FESR in terms of the Euler Beta function (thus giving an implementation of a dual version of the bootstrap). The solution is an amplitude that displays precisely the required Regge behaviour (that is, Regge asymptotics) and satisfies all of the principles laid out by the S-matrix philosophy (Lorentz invariance, analyticity, crossing, duality), apart from unitarity, on account of the particular approximation scheme employed (on which more later). The hope was that, using the bootstrap principle, this framework could then eventually be employed to predict specific physical properties of hadrons, such as masses.
The ability of dual models to encompass so many then ill-understood features of hadronic physics led to their very quick uptake. Quite simply, there was no alternative capable of doing what dual models did. Hence, though the framework was not yet able to make novel testable predictions, the fact that it resolved so many thorny problems with hadrons, and explained so many features in a unified manner, meant that it was still considered to be serious physics. It has to be said, though, that not all were enamoured, precisely on the grounds that it failed to make experimental predictions.
Before we shift to consider the Veneziano model, a further important step towards the dual models, and away from Chew-style bootstrap models, was the introduction of the narrow-resonance (or zero-width) approximation alluded to above, which initially ignored the instability of hadrons, treating all of them instead as stable particles, with scattering and decays then progressively added as perturbations.Footnote 46 Stanley Mandelstam [38, p. 1539], wishing to model the rising Regge trajectories within the double dispersion relations approach, introduced the “simplifying assumption” that the scattering amplitude is dominated by narrow resonances (where the amplitude is understood to be approximated by a finite number of Regge poles). In this scheme, Mandelstam was able to implement crossing symmetry using the FESR. To achieve the rising trajectories, Mandelstam used two subtraction constants,Footnote 47 which in turn introduced a pair of new parameters into the scheme: the Regge slope \(a\) and the intercept \(b\) (now written \(\alpha'\) and \(\alpha (0)\) respectively). These two parameters are absolutely central to the physical implications of the early attempts to construct dual-symmetric, Regge-behaved models, and still play a vital role today. Mandelstam made the additional (well-motivated) assumption that the trajectories built from these parameters, namely \(\alpha (s) = as+b\), do not rise “more than linearly with \(s\)” (p. 1542). For this reason, it might be prudent to call \(\alpha (s)\) the ‘Regge-Mandelstam slope’ rather than the Regge-slope.Footnote 48
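The physical role of the slope and intercept can be made concrete with a toy Chew–Frautschi calculation. The following sketch is purely illustrative: the slope, intercept, and resonance data are commonly quoted approximate values for the \(\rho\) trajectory, assumed here rather than taken from the text. Spins of resonances lying on a linear trajectory \(\alpha(s) = as + b\) are predicted from their squared masses.

```python
# Toy Chew-Frautschi check: spins on a linear Regge trajectory
# alpha(s) = b + a*s, with s = m**2 in GeV**2.
# Slope/intercept and resonance data are illustrative, commonly
# quoted approximate values for the rho trajectory (assumed).
a = 0.9  # Regge-Mandelstam slope, GeV^-2 (now written alpha')
b = 0.5  # intercept (now written alpha(0))

def alpha(s):
    """Linear Regge trajectory alpha(s) = b + a*s."""
    return b + a * s

# (name, mass in GeV, observed spin J)
resonances = [("rho(770)", 0.775, 1), ("rho_3(1690)", 1.689, 3)]

for name, mass, spin in resonances:
    print(f"{name}: alpha(m^2) = {alpha(mass**2):.2f}  (observed J = {spin})")
```

With these inputs, \(\alpha(m^{2})\) comes out close to the observed spins 1 and 3, which is the sense in which the two parameters fix the resonance spectrum.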
4 A Note on Early Research Networks
For reasons that should by now be clear, those working on the S-matrix programme and the bootstrap approach to strong interaction physics play a ‘statistically significant’ role in string theory’s early life, the latter being an outgrowth of the former via the dual resonance model (as we will see in the subsequent pair of chapters). An important subset of the current string theory researcher network can be traced back quite easily to a small group of physicists from this period in the 1960s, all working in and around the S-matrix programme (or dispersion relations) and Regge theory. This is quite natural, of course, since the dual resonance models can be viewed as a culmination of the bootstrap approach (recall Cushing’s remark about superstring theory constituting “the ultimate bootstrap” [12]). The lines of influence are presented below.Footnote 49
In particular, we can see a clear clustering around Geoffrey Chew and Berkeley. In a key move, Chew invited Stanley Mandelstam to Berkeley as a postdoc; Mandelstam brought with him the techniques of complex analysis. It seems that Chew liked to be in close proximity to his students, and held weekly group meetings with them to discuss what they were working on. This close proximity clearly led to Chew’s idiosyncratic positions being transmitted throughout the group.Footnote 50 Note that prior to joining Berkeley, Chew was based at the University of Illinois, Urbana-Champaign, together with Francis Low. Nearby, at the University of Chicago, were Nambu and Goldberger. Richard Brower, whom we will encounter later, had been at Berkeley, interacting closely with Chew and Mandelstam (his supervisor).
Note that John Schwarz was working on sum rules while at Princeton University in 1967. Schwarz’s doctoral advisor was Geoffrey Chew. While at Berkeley, heavily influenced by Chew, he would have been steered away from work on elementary quarks Footnote 51 and quantum fields. Of course, this can’t by itself explain why Schwarz and a few others from Chew’s workshop continued to avoid quantum field theory. After all, David Gross (one of the few responsible for putting the finishing touches to QCD) was also a student of Chew’s at roughly the same time as Schwarz; indeed, the two shared an office during their final three years (1963–6), writing a joint paper in 1965.Footnote 52
Gross pinpoints the moment he became disillusioned with his supervisor’s approach following a remark from Francis Low, at the 1966 Rochester meeting:
I believe that when you find that the particles that are there in S-matrix theory, with crossing matrices and all of the formalism, satisfy all these conditions, all you are doing is showing that the S-matrix is consistent with the way the world is; that is, the particles have put themselves there in such a way that it works out, but you have not necessarily explained that they are there [28, p. 9101].
Gross did briefly return to the bootstrap approach with Veneziano’s discovery of the beta function formula, but quickly became disillusioned once again, this time by its inability to explain scaling. As a result, Gross quickly brought himself up to speed on quantum field theory (especially renormalization group techniques) to try to find an explanation of scaling within field theory. As we see in Chap. 9, he would return to a descendant of the bootstrap programme much later, in 1985, when he helped construct the heterotic string theory.
Though things obviously become near-exponentially complicated once we move outwards from the origins of the bootstrap approach and dual models, we can trace paths of several important string researchers from Mandelstam too, including Joseph Polchinski and Charles Thorn.
There were two quite distinct styles of physics associated with the West Coast (roughly: Berkeley, Caltech) and the East Coast (roughly: Chicago, Princeton, Harvard). In particular, the East Coast seems to have been less dominated by ‘physics gurus’ (if I might be permitted to use that term).Footnote 53 However, this is to ignore the European influence: there is clearly a strong European component, though this will really come to dominate the theory of strong interactions in the period around Veneziano’s presentation of his dual model.
This is, of course, very USA-centric, and much is missed. However, the influence spread across the Atlantic, especially to Cambridge University.Footnote 54 Mention should certainly be made too of the Japanese school. The ‘D’ of DHS duality, Richard Dolen, was based at Kyoto University for a time (at the Research Institute for Theoretical Physics). In his letters to Murray Gell-Mann (from 1966: in the Gell-Mann archives at Caltech [Box 6, Folder 20]) he explicitly mentions interactions with several local physicists who went on to do important work on dual models and string theory—including Keiji Kikkawa, who later visited Rochester in 1967.Footnote 55
Though it involves jumping ahead a little, much of the early detailed dual model work (including string models) took place at CERN. As has often been pointed out, this had much to do with the strong leadership and dual-model advocacy of Daniele Amati.Footnote 56 One could find David Olive (who would later take a post as a staff member, rather than a regular visitor, turning his back on a tenured position at Cambridge University), Peter Goddard, Ian Drummond, David Fairlie, and very many more centrally involved in the construction of string theory from the early dual resonance models.Footnote 57 Olive captures the hub-like dual model scene at CERN in the early 1970s as follows:
Amati had gathered together from around Europe a galaxy of young enthusiasts for this new subject as research fellows and visitors. This was possible as centres of activity had sprung up around Europe, in Copenhagen, Paris, Cambridge, Durham, Torino and elsewhere. I already knew Peter Goddard from Cambridge University who was in his second year as Fellow, Lars Brink from Chalmers in Gothenburg was just starting, as was Jöel Scherk from Orsay, in Paris, all as Fellows, and destined to be collaborators and, particularly, close friends. Also present as Fellows were Paolo Di Vecchia (who arrived in January 1972), Holger Nielsen, Paul Frampton, Eugène Cremmer, Claudio Rebbi and others. Many visitors came from Italy, Stefano Sciuto, Nando Gliozzi, Luca Caneschi and so on. Visiting from the United States for the academic year were Charles Thorn and Richard Brower. Summer visitors included John Schwarz, and later Pierre Ramond, Joel Shapiro, Korkut Bardakçi, Lou Clavelli and Stanley Mandelstam, all from the United States [42, p. 349].
The early phase involving dual models was a particularly interconnected one, then, and also one featuring very many collaborative efforts. Stefano Sciuto, who had earlier been a part of Sergio Fubini’s group in Turin, explicitly refers to the willingness to “join forces, cooperating rather than competing” as “fruit of the spirit of 1968” ([48], p. 216).
5 Summary
We have shown how the difficulties faced by quantum field theory in advancing beyond QED led to various models, one of which was Regge theory, with the addition of the dual resonance idea. This model achieved significant empirical successes, had several powerful theoretical virtues, and was therefore pursued with some excitement. We traced the story from Regge’s introduction of complex angular momentum into quantum mechanics, to its extension into the relativistic domain. This combined with ‘bootstrap’ physics according to which the properties of elementary particles, such as coupling constants, could be predicted from a few basic principles coupled with just a small amount of empirical input. This journey culminated in the finite energy sum rules of Dolen, Horn, and Schmid, which were elevated to the status of a duality principle. The primary researcher network guiding research in this period was fairly narrowly confined, and can be charted quite precisely, with Geoff Chew as a key hub leading an anti-QFT school, as far as strong interactions were concerned. The bulk of later developments which place Regge-resonance duality at the heart of hadron physics (and the true beginnings of string theory) take place across the Atlantic, at CERN. We turn to these in the next chapter in which we discuss the Veneziano (dual resonance) model and its many extensions and generalisations.
Notes
- 1.
In fact, the beginnings of an erosion of confidence in quantum field theory (the orthodox framework for describing elementary particles) can be traced back to at least the 1950s, when the likes of Heisenberg, Landau, Pauli, and Klein were debating whether field theoretic infinities could be dealt with by invoking some natural (possibly gravitational) cutoff—of course, at this time non-Abelian gauge theories (and asymptotic freedom) were not known. (There was also a positivistic distaste for the notion of unobservable ‘bare’ masses and coupling constants.) One might also note that a new spirit flowed through the rest of physics at this time; not simply because it was a time of great social upheaval following WWII (“the physicist’s war”), but also because many of the ‘old guard’ of physics had passed away. In the immediate aftermath of WWII, there was extreme confidence in the available theoretical frameworks, and little concern with foundational issues. By the late-1950s and into the 1960s, this confidence was beginning to wane, as Chew’s remarks in the above quotation make clear—the “new mistress” is S-matrix theory, while the “old mistress” is quantum field theory (amusingly, Marvin Goldberger had used the terminology of “old, but rather friendly, mistress” to describe quantum field theory in his Solvay talk from 1961—clearly Chew’s remarks are a reference to this).
- 2.
The name ‘hadron’ was introduced by Lev Okun in a plenary talk on “The Theory of Weak Interaction” at CERN in 1962, invoking the Greek word for large or massive (in contrast to lepton: small, light).
- 3.
Recall that the combination of relativity and quantum mechanics implies that particles (quanta of the field) can be created and destroyed at a rate depending on the energy of the system. Therefore, any such combination of relativity and quantum theory will involve many-body physics. This is compounded as the energy is increased. If the coupling constant is less than 1, then one can treat the increasing number of particles as negligible ‘corrections’ to the lowest order terms—note that the simpler, non-relativistic field theoretic case (the ‘potential-scattering’ problem) does not involve varying particle number. If the coupling constant is greater than 1, then going to higher order in the perturbation series (and adding more and more particles) means that the corrections will not be negligible, so the first few terms will not give a good approximation to the whole series.
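The convergence point can be illustrated with a toy geometric series standing in for the perturbation expansion (a schematic model only, not the actual field-theoretic series):

```python
# Toy stand-in for a perturbation series: the nth-order 'correction'
# is modelled as g**n (schematic only -- not the real expansion).
def partial_sums(g, orders):
    total, sums = 0.0, []
    for n in range(orders):
        total += g**n
        sums.append(total)
    return sums

weak = partial_sums(1 / 137, 6)  # QED-like coupling: converges fast
strong = partial_sums(2.0, 6)    # strong-like coupling: terms grow

print("g = 1/137:", [f"{x:.6f}" for x in weak])
print("g = 2:    ", [f"{x:.0f}" for x in strong])
```

For \(g = 1/137\) the partial sums settle almost immediately near \(1/(1-g)\); for \(g = 2\) each new ‘order’ dwarfs the previous total, so truncating the series is useless.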
- 4.
Not everyone was enchanted by this new S-matrix philosophy. As Leonard Susskind remembers it, the “general opinion among leaders of the field was that hadronic length and time scales were so small that in principle it made no sense to probe into the guts of a hadronic process—the particles and the reactions were unopenable black boxes. Quantum field theory was out; Unitarity and Analyticity were in. Personally, I so disliked that idea that when I got my first academic job I spent most of my time with my close friend, Yakir Aharonov, on the foundations of quantum mechanics and relativity.” [51, p. 262]. As mentioned in the preceding chapter, Susskind would go on to make important contributions to the earliest phase of string theory research, including the discovery that if you break open the black box that is the Veneziano amplitude, you find within it vibrating strings. In his popular book The Cosmic Landscape Susskind compares this black box ideology to the behaviourist psychology of B. F. Skinner [50, pp. 202–203].
- 5.
Indeed, Stanley Deser remarked that the reason he got into general relativity and quantum gravity, after a background in particle physics, was precisely because “quantum field theory appeared to be degenerating while gravitational physics looked like a new frontier” (interview with the author, 2011—available via the AIP oral history archives [Call number OH 34507]). This suggests that there was something like a ‘crisis’ in Kuhn’s sense. It was, of course, resolved to the satisfaction of many physicists (in quantum chromodynamics [QCD]) by a complex series of discoveries, culminating in a solid understanding of scaling and renormalization, dimensional regularization, non-Abelian gauge theories, and asymptotic freedom—recall that QCD is based on quark theory, where the ‘chromo’ refers to the extra degree of freedom postulated by Oscar Greenberg (in addition to space, spin, and flavour), labeled ‘colour’ by Gell-Mann. I don’t discuss these discoveries in any detail in this book. For a good recent historical discussion, see [2] (see also: [29, 44]). However, QCD, while an excellent description of the high-energy behaviour of hadrons, still cannot explain certain low energy features that the earliest dual models (leading to string theory) had at least some limited success with.
- 6.
Naturally, many important concepts (from the point of understanding the development of string theory) have fallen out of fashion as the theories and models to which they belonged have been superseded.
- 7.
Of course, in QCD the strong interaction is governed by the exchange of gluons (massless, spin-1 bosons) which are coupled to any objects with strong charge or ‘colour’ (i.e. quarks). This has many similarities to QED, albeit with a coupling \(\alpha _{strong} = g_{s}^{2}/4\pi \approx 1\), instead of the much weaker \(\alpha _{\textit{EM}} = e^{2}/4\pi \approx 1/137\). However, in the early days of hadron physics quarks were seen as convenient fictions used as a mere book-keeping tool for the various properties of hadrons—the nomenclature met with some resistance: Victor Weisskopf, for example, wanted to call them ‘trions,’ while George Zweig wanted to call them ‘aces’! The gluons are themselves coloured, which implies that they self-interact. This results in a characteristic property of quarks, namely that they are confined within hadrons, unable to be observed in isolation. The gluons attract the field lines of the colour field together, forming a ‘tube’. Accounting for this tube-like behaviour was considered to be an empirical success of the early string models of hadrons, as we see below.
- 8.
Primarily the Proton Synchrotron [PS], turned on in 1959, becoming the highest energy accelerator at that time, attaining a beam energy of 28 GeV. By comparison, the Cosmotron at Brookhaven reached energies of just 3 GeV, though at the time of its first operation it was six times more powerful than other accelerators. For a good, technical review of these experiments see [31].
- 9.
The tracks of these particles are bent using strong magnetic fields. The quantum numbers of the particles can then be computed from the curvature of paths, thus enabling (under the assumption of energy-momentum conservation) the identification of various particle types.
- 10.
Such resonance particles are too short-lived and localised to leave a directly observable trace. Resonances possess lifetimes of the order of \(10^{-24}\) s. They would simply not travel far enough to leave a track before decaying: taking such particles to travel at close to the speed of light \(c\), solving for the distance traveled gives just \(\approx 10^{-15}\) m. They are simply not stable enough to warrant the title ‘particle,’ which implies some degree of robust and continued existence. Of course, bubble chambers cannot allow one to see such particles directly, but one can infer their existence by observing decay products via various channels (see Figs. 2.1 and 2.2). (However, Chew [8, pp. 81–82] argued that, since both were to be represented by S-matrix poles, particles and resonances should not be distinguished in any significant way.)
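The distance estimate here is just the back-of-envelope product \(d \approx c\tau\); a quick check (illustrative arithmetic only):

```python
# Back-of-envelope flight distance of a resonance: d ~ c * tau.
c = 3.0e8      # speed of light, m/s
tau = 1.0e-24  # typical resonance lifetime, s

d = c * tau
print(f"d = {d:.1e} m")  # a few 10^-16 m: femtometre order
```

This gives a few \(\times 10^{-16}\) m, i.e. of the femtometre order quoted, far below anything that could leave a track in a bubble chamber.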
- 11.
As we will see below, it was consideration of hard scattering processes that led to quantum field theory once again providing the framework in which to couch fundamental interactions. What such processes revealed was a hard, point-like interior structure of hadrons, much as the classic gold foil experiments of Rutherford had revealed a point-like atomic nucleus.
- 12.
Of course, the quark model postulated a deeper layer of elements of which the new particles were really bound states. Although I won’t discuss it, mention should be made here of the ‘current algebra’ approach to strong interactions, of Murray Gell-Mann (see, e.g., [24]). In this programme, although the underlying theory of quarks and their interactions wasn’t determined, certain high-level algebraic aspects of the free theory were, and these were believed to be stable under the transition to the interacting theory. The current algebra is an \({\textit{SU}}(n) \otimes {\textit{SU}}(n)\) algebra (with \(n\) the number of what would now be called ‘flavours’), generated by the equal-time commutation relations between the vector current \(V_{\mu }^{a}(x)\) and the axial vector current \(A^{a}_{\mu }(x)\). One of the crucial approximation methods employed in the construction of dual models (that of infinitely many narrow hadronic resonances) was developed in the context of current algebra. (See [2] for a conceptually-oriented discussion of current algebra, including an extended argument to the effect that this amounts to a ‘structural realist’ position in which the structural (broad algebraic) aspects constituted a pivotal element of the development of the theory.)
- 13.
James Cushing’s Theory Construction and Selection in Modern Physics [12] is a masterly account of the historical development of this new way of doing particle physics. In it he argues that the S-matrix methodology, of employing general mathematical principles to constrain the physics (at least, of the strong interaction), was perfectly viable and bore much fruit, despite the confirmation of QCD that knocked S-matrix theory off its pedestal. I agree with this general sentiment, and string theory can be found amongst such fruit.
- 14.
A concept Gell-Mann had labeled ‘nuclear democracy’—surely a term coloured by the political and social climate of Berkeley in the 1960s. For a discussion of the context surrounding Chew’s ‘democratic’ physics, see [32]. To this idea was appended the notion of ‘bootstrapping’ strongly interacting particle physics, in the sense that hadrons are bound states of other hadrons, that are themselves held together by hadron exchange forces—a purely endogenous mechanism.
- 15.
Holger Nielsen notes that he gave a talk on string theory while Heisenberg was visiting the Niels Bohr Institute at a conference given in his honour, but, as he puts it, “I do not think though that I managed to make Heisenberg extremely enthusiastic about strings” [40, p. 272]. Interestingly, David Olive also spoke on multi-Veneziano theory (that is, the generalised Veneziano model) and its relationship to quarks and duality diagrams, on the occasion of Heisenberg’s 70th birthday, in Munich, June 1971. He notes that Heisenberg’s reaction was a protest denying that the quark model was physics [37, p. 348].
- 16.
This connection was at the core of Freeman Dyson’s equivalence proof of Feynman’s and the Schwinger-Tomonaga formulations of QED [16], which employed the S-matrix to knit them together—the method of proof was to derive from both approaches the same set of rules by which the matrix element of the S-matrix operator between two given states could be written down.
- 17.
In this sense, Heisenberg’s way with the S-matrix corresponds to a repetition of the ideas that led to his matrix mechanics, now in the context of high-energy particle physics. Once scattering matrix elements have been fixed, then all cross-sections and observables have thereby been determined. Heisenberg’s view was that one needn’t ask for more (e.g., equations of motion are not required—on which, see Dirac [14]). The rough chronology that follows is that renormalisation techniques are developed, leading to quantum electrodynamics (with its phenomenal precision), leading to the demise of S-matrix theory. It was the subsequent fall from grace of quantum field theory at the hands of mesons that led to the resurrection of S-matrix theory, as we will see (see Fig. 2.5 for a visual impression of this “resurrection”). The trouble was that the finite, short-range nature of the forces mediated by mesons seemed to imply that the exchanged particles were massive (in the context of Yukawa’s exchange theory). Chen Ning Yang and Robert Mills had argued otherwise, of course, in order to preserve gauge invariance (now generalised to non-Abelian cases), but this view (famously discredited by Pauli) had to wait for an understanding of confinement and the concept of asymptotic freedom to emerge. Fortunately, by that time S-matrix theory had had enough time to spawn string theory—’t Hooft gives a good description of this progression (including the impact of dual models and hadronic string theory) in [54] (see also [27, 28]).
- 18.
In the case of quantum mechanics this will be with respect to a Lebesgue measure, \(d\mu (p) = \varPi d^{3}p_{i}\). In the context of a relativistic quantum theory the measure must be Lorentz-invariant, so one has a mass term: \(d\mu (p) = \varPi ( m^{2} + p^{2}_{i})^{-\frac{1}{2}} d^{3}p_{i}\) (with \(m\) the particle mass).
- 19.
The term ‘dispersion’ harks back to Kramers and Kronig’s work in optics and the theory of material dispersion involving the absorption and transmission (in the form of a spectrum of different colours, or rainbow) of white light through a prism (or, more generally, some dispersive medium). In this case, a dispersion relation connects the frequency \(\nu \), wavelength \(\lambda \), and velocity \(v\) of the light: \(\nu \lambda = v(\lambda )\). The spatial dispersion of light into different colours occurs because the different wavelengths possess different (effective) velocities when traveling through the prism. A good guide to dispersion relations is [41]. It was Murray Gell-Mann (at the 1956 Rochester conference [23]) who had initially suggested that dispersion relations might be useful in computing observables for the case of strong interaction physics. In simple terms, the idea is to utilise S-matrix dispersion relations to tie up experimental facts about hadron scattering with information about the behaviour of the resonances (independently of any underlying field theory). More technically, this would be achieved by expressing an analytic S-matrix in terms of its singularities, using the Cauchy-Riemann equations. Chew developed this (initially in collaboration with Goldberger, Low, and Nambu: [4]) into the general idea that strong forces correspond to singularities of an analytic S-matrix.
- 20.
In more orthodox terms, crossed processes are represented by the same amplitude and correspond to continuing energies from positive to negative values (whence the particle-antiparticle switch)—this corresponds, of course, to CPT symmetry. This idea of crossing also harks back to Murray Gell-Mann, this time to a paper coauthored with Marvin Goldberger [22]. Of course, if analyticity is satisfied, then the operation of analytic continuation can amplify knowledge of the function in some region of its domain to other regions—as Cushing puts it, “an analytic function is determined globally once it has been precisely specified in the neighbourhood of any point” [13, p. 38].
- 21.
More generally, it is more appropriate to think about channels of particles. One can think of a channel, loosely, as providing a possible ‘route’ through which the final state emerges. There might be many such possible routes, in which case one has a multichannel collision process; otherwise one has a single channel process. Such channels are indexed by the kinds of particles they involve and their relative properties. In scattering theory one is interested in inter-channel transitions; i.e. the transition from some process generated through an input channel and decaying through an output channel. Given a set of available channels, unitarity in this case is simply the property that every intermediate state must decay through some channel, so that \(\sum _{out} |S_{\langle in,out\rangle }|^{2} = 1\).
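The condition \(\sum _{out} |S_{\langle in,out\rangle }|^{2} = 1\) can be checked on a toy two-channel example (all numerical values assumed, chosen only for illustration): any unitary S-matrix satisfies it row by row.

```python
import cmath
import math

# Toy two-channel S-matrix, built to be unitary by construction:
# S = U D U^T, with U a real rotation (mixing angle theta) and
# D = diag(e^{2i*d1}, e^{2i*d2}) pure phase factors.
# All numbers are illustrative (assumed), not from the text.
theta = 0.3
e1 = cmath.exp(2j * 0.5)  # phase shift d1 = 0.5
e2 = cmath.exp(2j * 1.1)  # phase shift d2 = 1.1
c, s = math.cos(theta), math.sin(theta)

S = [[c * c * e1 + s * s * e2, c * s * (e1 - e2)],
     [c * s * (e1 - e2), s * s * e1 + c * c * e2]]

# Unitarity: every 'in' state decays through *some* 'out' channel.
for i, row in enumerate(S):
    total = sum(abs(x) ** 2 for x in row)
    print(f"sum_out |S[{i},out]|^2 = {total:.12f}")
```

Both rows sum to 1 to machine precision: probability that leaks out of one channel must reappear in another.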
- 22.
The variables \(s\) and \(t\) are known as Mandelstam variables, with a third, \(u = (p_{a} - p_{d})^{2} = (p_{b} - p_{c})^{2}\), completing the set of Lorentz invariant scalars. These variables are not all independent, because of the constraint \(s + t + u = \sum _{i=1}^{4} m_{i}^{2}\); any two of them can be used to construct the scattering amplitudes, so we can dispense with \(u\) for convenience.
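The constraint \(s + t + u = \sum _{i} m_{i}^{2}\) is easy to verify numerically for a sample process (a sketch with assumed kinematics: elastic equal-mass scattering in the centre-of-mass frame):

```python
import math

def mdot(p, q):
    """Minkowski product, signature (+, -, -, -)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

# Elastic a + b -> c + d, all masses equal, centre-of-mass frame.
# Mass, momentum, and scattering angle are assumed illustrative values.
m, p, th = 0.14, 0.5, 0.7  # GeV, GeV, radians
E = math.sqrt(m*m + p*p)

pa = (E, 0.0, 0.0, p)
pb = (E, 0.0, 0.0, -p)
pc = (E, p*math.sin(th), 0.0, p*math.cos(th))
pd = (E, -p*math.sin(th), 0.0, -p*math.cos(th))

s = mdot(add(pa, pb), add(pa, pb))
t = mdot(sub(pa, pc), sub(pa, pc))
u = mdot(sub(pa, pd), sub(pa, pd))

print(f"s + t + u = {s + t + u:.6f}")
print(f"sum m_i^2 = {4 * m * m:.6f}")
```

The two printed values agree to rounding error, which is why only two of the three variables are independent.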
- 23.
The \(u\)-channel would be obtained from the \(t\)-channel by switching particles \(c\) and \(d\): \(u = (p_{a} - p_{d})^{2} = (p_{b} - p_{c})^{2}\). In the \(u\)-channel is the reaction: \(a + \overline{d} \rightarrow \overline{b}+d\), where the physical region is \(u \ge (m_{a} + m_{d})^{2}\).
- 24.
In other words, a tree graph in the sense of Landau is understood to represent, directly, physical hadrons via the lines. Landau’s singularity conditions are satisfied by a classical process sharing the topological (network) structure of the graph. Coleman and Norton later provided a proof of this graph-process correspondence. As they put it: “a Feynman amplitude has singularities on the physical boundary if and only if the relevant Feynman diagram can be interpreted as a picture of an energy- and momentum-conserving process occurring in space-time, with all internal particles real, on the mass shell, and moving forward in time” [11, p. 438].
- 25.
This expansion into the complex plane has a significant impact on the mathematics employed. For example, integration takes on a different appearance since, whereas given the real numbers one follows a single path to integrate between two points, in complex analysis one can take many different paths in the plane, leading to planar diagrams and contour integration. Note, however, that not all were taken with the complex expansion. ’t Hooft mentions that his PhD supervisor, Martinus Veltman, was of the opinion “Angular momentum aren’t complex. They’re real. Why do you have to go to a complex thing? What does it mean?” (interview with the author, 10 February 2010). See [17] for a good general overview of Regge theory, including its place within Veneziano’s dual resonance model.
- 26.
A Riemann surface provides a domain for a many-valued complex function.
- 27.
To put some ‘physical’ flesh on these concepts, it is safe in this context to think of simple poles as particle exchanges at a vertex, while a cut is a singularity corresponding to pair production (of particles). Technically, of course, a branch cut is a kind of formal ‘barrier’ that one imposes on a domain in order to keep a complex function single valued.
- 28.
The singularity is of the form \(\frac{1}{J-\alpha }\) (where \(\alpha \), the Regge trajectory, is a function of the collision energy of the process in which the particle is involved).
- 29.
The slope \(\alpha ^{\prime }\) of the Regge trajectories was one of the concepts that would enter string theory in a rather direct way. It was suggested later that the slope has the air of a universal constant of nature, and one that might be connected to the extended, non-point-like character of hadrons, leading to a fundamental length scale set by hadron constituents, \(\lambda \approx \sqrt{\alpha '}\) of the order \(10^{-14}\) cm [36]. As Daniel Freedman and Jiunn-Ming Wang showed in 1966, in addition to the ‘leading trajectory,’ one would also have ‘daughter trajectories’ lying parallel (with spins separated by one unit), underneath the leading trajectory, and separated by a spacing of integer multiples of a half.
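The length scale quoted can be recovered from the slope by the usual \(\hbar c\) conversion (the slope value \(\alpha' \approx 0.9\ \mathrm{GeV}^{-2}\) is a commonly quoted hadronic figure, assumed here for illustration):

```python
import math

# Convert the Regge slope alpha' into a length scale via hbar*c.
# alpha' ~ 0.9 GeV^-2 is a commonly quoted hadronic value (assumed).
hbar_c = 0.1973     # GeV * fm  (1 fm = 10^-13 cm)
alpha_prime = 0.9   # GeV^-2

lam_fm = math.sqrt(alpha_prime) * hbar_c
print(f"lambda ~ {lam_fm:.2f} fm = {lam_fm * 1e-13:.1e} cm")
```

This gives roughly \(2 \times 10^{-14}\) cm, matching the order of magnitude cited from [36].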
- 30.
The spin values of the resonances themselves can be inferred from the angular distribution of the decay products in the various reactions.
- 31.
Quantum field theories face severe problems with conservation of probability (i.e. unitarity) for particles of spins greater than 1, in which case the amplitudes diverge at high energies. One of Regge theory’s key successes was the ability to deal with the exchange of particles of very high spins by conceptualizing the process in terms of ‘Reggeon’ and ‘multi-Reggeon’ exchange (where Reggeons are composite objects associated with \(\alpha (t)\)).
- 32.
This story begins in 1958, with Mandelstam’s paper marking the beginning of the so-called ‘double dispersion representation’ (in both energy and momentum transfer): [37]. Such double dispersion relations were later renamed the ‘Mandelstam representation’. Mandelstam was explicitly taking up the suggestion made by Gell-Mann in [23] that one might “actually replace the more usual equations of field theory and ... calculate all observable quantities in terms of a finite number of coupling constants” [37, p. 1344]. Elliot Leader has written that “Tullio Regge’s great imaginative leap, the introduction of complex angular momentum in non-relativistic quantum mechanics, might have ended in oblivion, weighed down by its overpowering mathematical sophistry and rigour, had not S. Mandelstam, seizing upon its crucial element and casting off the mathematical shroud, demonstrated a direct and striking consequence in the behaviour of high-energy elementary particle collision processes” [35, p. 213]. Mandelstam’s insight was the realization that unphysical regions of the scattering plane (involving very large values of the cosine of the scattering angle \(\theta \)), for a scattering event like \(A+B \rightarrow A+B\), are mathematically related to the physical reaction \(A+\overline{A} \rightarrow B+\overline{B}\).
- 33.
As Chew describes it, the bootstrap idea originated in discussion with Mandelstam before the 1959 Kiev Conference, when they discovered that “a spin 1 \(\pi \pi \) resonance could be generated by a force due to Yukawa-like exchange of this same resonance” [9, p. 605]—a resonance that was later to be named the \(\rho \)-meson. The bootstrap, more generally, refers to the notion that one can build up a pole in some variable via an infinite sum of singularities in some other variable—that is, a pole generates singularities in the crossed channel, and these singularities in turn generate the original pole. A pole thus generated can then be viewed as a bound state of other particles: “\(\rho \) as a force generates \(\rho \) as a particle” [9, p. 606]. Or, in more general terms, hadrons are to be viewed as bound states of other hadrons (see [5] for the more general bootstrap theory).
- 34.
Gravitation is not incorporated in this scheme, and is modelled only classically. The particle physics approach to quantum gravity was being pursued at around the same time that the standard model was being formed. Indeed, the tools and methods used to construct the standard model were very much bound up with work in quantum gravity. The electroweak, the strong force, and the gravitational force were, after all, described by non-Abelian theories. The properties powerfully represented by the standard model form a target that any future theory that hopes to probe still higher energies (‘beyond the standard model’) will have to hit. This includes string theory.
- 35.
Yukawa had attempted, in 1935, to construct a quantum field theory of the nuclear force along the lines suggested by quantum electrodynamics. His approach proposed a connection between the mass of the exchanged particle and the range of the interaction.
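The mass–range connection can be stated simply (a standard textbook relation, added here for illustration): the potential mediated by a particle of mass \(m\) falls off exponentially, with range set by the particle’s Compton wavelength.

```latex
% Yukawa potential mediated by a massive particle of mass m:
\[
  V(r) \;=\; -\,g^{2}\,\frac{e^{-r/\lambda}}{r},
  \qquad
  \lambda \;=\; \frac{\hbar}{m c} .
\]
% A nuclear-force range of about 1.4 fm then predicts a mediator mass of
%   m c^2 = (hbar c)/lambda = (197 MeV fm)/(1.4 fm) ~ 140 MeV,
% close to the observed pion mass.
```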
- 36.
The Pomeron was later understood to be the trajectory \(\alpha (t)=2+ \frac{\alpha '}{2}t\) (the Pomeron sector), corresponding at \(t=0\) to the massless states of gravitons and dilatons (associated with closed strings). Its defining quality is that it is, in some sense, ‘without qualities,’ carrying no quantum numbers (or, equivalently, it has ‘vacuum quantum numbers’: no charge, spin, baryon number, etc.). This basic idea of the Pomeron was introduced in Chew and Frautschi’s “Principle of Equivalence for all Strongly Interacting Particles Within the S-Matrix Framework” [5]. Pomerons were to be distinguished from Reggeons (later interpreted in terms of open strings). It was subsequently found that the states of the Pomeron sit on a Regge trajectory with twice the intercept and half the slope of the Reggeon trajectory. As we shall see, the vacuum quantum numbers were later explained by the fact that closed-string worldtubes have no boundaries on which to ‘attach’ quantum numbers using the then-standard ‘Chan-Paton’ method.
- 37.
This paper also introduced the representation of Regge trajectories (as in Fig. 2.5, now known as a ‘Chew-Frautschi plot’). The original Chew-Frautschi plot consisted of a line drawn between just two points (the only two then known experimentally)—cf. [5, pp. 57–58]. As Frautschi noted in an interview, “Originally, we had just drawn a straight line between two points, because two points were all we had for the data. And then as more data occurred, the straight line continued through the next particle discovered and through the Yukawa exchanges in a different kinematic region. So the straight lines we’d originally drawn for our Regge particles turned out to be a pervasive feature, and eventually that came to be regarded as very strong evidence for strings” [21, p. 19].
- 38.
Indeed, James Cushing referred to the combined S-matrix theory \(+\) duality framework as “the ultimate bootstrap” [12, p. 190]. However, duality really is just an implementation of the bootstrap principle of generating a pole (particle) by summing over (infinitely many) singularities in some other amplitude variable. In the case of duality one has a physical (that is, observational) equivalence between a description without forces (but with resonance production: i.e. fermions, though without spin degrees of freedom) and one with forces (mediated by an exchange particle: i.e. bosons).
- 39.
A simple expression of the duality is through the symmetry of the amplitude under the interchange of energy \(s\) and momentum transfer \(t\): \(A(s,t) = A(t,s)\). One can think of this either as a duality between the \(s\)- and \(t\)-channels or as a duality between resonances and Regge poles—for the former reason it is sometimes called ‘\(s\)–\(t\) duality’.
- 40.
As Pierre Ramond notes, this was “elevated to a principle to be added to the Chew bootstrap program, regarding resonance and Regge trajectories as aspects of the same entities” [46, p. 505].
- 41.
I borrow the term “epistemic gain” from Ralf Krömer to refer to the fact that there are circumstances in which “dual objects are epistemically more accessible than the original ones” [33, p. 4]. The most significant case of this is seen in the final chapter when we look at S-duality, relating strongly coupled to weakly coupled limits of certain theories.
- 42.
According to Mahiko Suzuki, who shared an office with Horn and Schmid and collaborated with them briefly, it was Horn who coined the name “finite energy sum rule”. Richard Dolen entered the collaboration (as Suzuki departed) because of his computational and data-handling skills (private communication).
- 43.
By contrast, in the competing interference-model scheme mentioned above, one would have the sum rule \(f (\mathrm {Resonance}) + f (\mathrm {Regge})\) (see [1]).
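Schematically, the finite energy sum rules equate a low-energy resonance integral to a Regge-pole sum (the notation below is the standard Dolen–Horn–Schmid form, supplied here rather than taken from the text):

```latex
% Finite energy sum rule (schematic): the imaginary part of the amplitude,
% integrated up to a cutoff N, is saturated equally well by s-channel
% resonances (left side, evaluated from data) and by t-channel Regge poles
% (right side):
\[
  \int_{0}^{N} d\nu \,\nu^{\,n}\, \mathrm{Im}\,A(\nu ,t)
  \;=\;
  \sum_{i} \beta_{i}(t)\,\frac{N^{\,\alpha_{i}(t)+n+1}}{\alpha_{i}(t)+n+1} .
\]
% Duality reads this as an equivalence of the two descriptions; the
% interference model would instead ADD the resonance and Regge contributions,
% which double counts by the lights of duality.
```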
- 44.
At this stage the quarks were, in general, not invested with any physical reality, but were merely viewed as a kind of book-keeping method. George Zweig was entertaining the idea that quarks were real, but Gell-Mann’s view that they were purely formal prevailed. Gell-Mann would, of course, receive his Nobel prize in 1969 for his contributions to the classification of elementary particles and their interactions. In fact, it should be pointed out that the purely formal reading does not appear to have been Gell-Mann’s actual position, and his usage of the term “mathematical” to describe certain quarks was non-standard (cf. [53, p. 634]): he simply meant ‘unliberated’ or “permanently confined” and chose “mathematical” to avoid what he called “the philosopher problem”! He was worried that philosophers would grumble about the possibility of unobservable entities—and, indeed, we saw earlier that Heisenberg objected on just such grounds. David Fairlie goes further, arguing that the positivistic commandment against talking about “unobservable features of particle interactions, but only about properties of asymptotic states...inhibited the invention of the concept of quarks” [19, p. 283].
- 45.
Rosner notes in his paper that he became aware of Harari’s work once the bulk of his own work was completed [20, p. 691]. This feature of multiple near-simultaneous discoveries is especially rife in the history of string theory—it surely points to an underlying common set of heuristics.
- 46.
The resonance width gives us an indication of the uncertainty in the particle’s mass. The terminology of ‘narrow resonance’ is something of an oxymoron, of course, since a narrow width corresponds to a comparatively long-lived state, whereas the wider the resonance, the shorter-lived the particle (a resonance particle!).
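The width–lifetime trade-off is just the energy–time uncertainty relation (standard numbers, added for illustration):

```latex
% Width-lifetime relation for an unstable state:
\[
  \Gamma \,\tau \;\simeq\; \hbar ,
  \qquad
  \hbar \approx 6.58 \times 10^{-22}\ \mathrm{MeV\,s}.
\]
% A typical hadronic resonance width Gamma ~ 100 MeV thus corresponds to a
% lifetime tau ~ 6.6 x 10^{-24} s -- far too short to leave a track; the
% state shows up only as a bump (of width Gamma) in the cross section.
```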
- 47.
Chu, Epstein, and Kaus [10] argued that Mandelstam’s scheme for computing the subtraction constants depends too sensitively both on the cutoff used in the FESR and on the specific value of the momentum transfer \(t\) at which the FESR are evaluated.
- 48.
Note that Veneziano’s paper, “Construction of a Crossing-Symmetric, Regge-Behaved Amplitude for Linearly Rising Trajectories,” was (by far) the most highly cited paper to have been influenced by Mandelstam’s. Note also that the most highly cited paper to have in turn been influenced by Veneziano’s paper was Neveu and Schwarz’s paper introducing the dual pion model, “Factorizable Dual Model of Pions”. Continuing, the paper “Vacuum Configurations for Superstrings” by Candelas, Horowitz, Strominger, and Witten is the most highly cited paper citing the Neveu-Schwarz paper—again, by a fairly large margin. (Citation analysis performed with Thomson-Reuters Web of Science.) This gives some indication of the level of continuity between the earliest work on duality and modern superstring theory.
- 49.
Though this is a very selective network, of course, and misses many other important contributors, many of those associated with what have been labelled ‘revolutionary’ developments in string theory are located on this graph. (Note that neither circle size nor overlap has any representational relevance in this diagram.)
- 50.
See p. 10 of Frautschi, Steven C. Interview by Shirley K. Cohen. Pasadena, California, June 17 and 20, 2003. Oral History Project, California Institute of Technology Archives. Retrieved [24th July, 2013]: http://resolver.caltech.edu/CaltechOH:OH_Frautschi_S. Frautschi, also a postdoc under Chew, shared an office with Mandelstam in 1960. Interestingly, Frautschi mentions (pp. 18–19) that his later work on the so-called “statistical bootstrap” (employing some of Rolf Hagedorn’s ideas) reproduced facts of the Regge phenomenology (such as equal spacing between successive spin states and exponential growth in the number of particle species with mass) without invoking string theory—indeed, without his being aware that what he was doing had any connection to the derivation of equal spacing from the oscillations of a string system. By this stage, 1971–1972, Frautschi was at Cornell, and that he was unaware of the work that had by then been carried out using string models perhaps indicates that work on dual models and string models did not travel widely or easily outside of the primary groups.
- 51.
Chew had referred disparagingly to quarks, in 1965, as “strongly interacting aristocrats” [7, p. 95].
- 52.
Schwarz was much aided by Murray Gell-Mann’s advocacy during the quieter years of superstring theory. In his closing talk at the 2nd Nobel Symposium on Particle Physics, Gell-Mann pointed out that Sergio Fubini joked that he (Gell-Mann) had “created at Caltech, during the lean years ... a nature reserve for an endangered species—the superstring theorist” [25, p. 202].
- 53.
At a session on dual models at CERN in 1974, Harry Lipkin put forth the following as a ‘motto’ of the session: “Dual theory should be presented in such a way that it becomes understandable to non-dualists. At least as understandable as East Coast theories are for West Coast physicists and vice versa” (http://www.slac.stanford.edu/econf/C720906/papers/v1p415.pdf). John Polkinghorne speaks of “Californian free-wheeling (bootstrappers)” and “New England Sobriety (field theory)” [45, p. 138]. Peter Woit’s book [57, p. 150] includes a discussion of the East-West divide.
- 54.
- 55.
Kikkawa later joined CUNY (with another dual model/string theorist Bunji Sakita) in 1970.
- 56.
I might add to this brief review of networks the fact that Amati took a sabbatical year in Orsay while André Neveu and Joël Scherk were doing their PhDs there, spreading the gospel of dual resonance models to two of its future central proponents. Note that Neveu and Scherk later joined Schwarz in Princeton (in 1969) on NATO fellowships. However, since French higher degrees were not called PhDs, Neveu and Scherk were mistakenly classified as graduate students and assigned to Schwarz as such.
- 57.
David Fairlie himself oversaw a significant dual model group at Durham University in the UK, supervising several PhD theses on the subject in the 1970s.
References
Barger, V. & Cline, D. (1968). Phenomenological theories of high energy scattering. New York: Benjamin.
Cao, T. Y. (2010). From current algebra to quantum chromodynamics: A case for structural realism. Cambridge: Cambridge University Press.
TC Division. (1961). Track Chambers. CERN Annual Reports E (pp. 91–99): http://library.web.cern.ch/library/content/ar/yellowrep/varia/annual_reports/1961_E_p91.pdf.
Chew, G. F., Goldberger, M. L., Low, F. E., & Nambu, Y. (1957). Application of dispersion relations to low energy meson-nucleon scattering. Physical Review, 106, 1337–1344.
Chew, G., & Frautschi, S. (1961). Principle of equivalence for all strongly interacting particles within the S-Matrix framework. Physical Review Letters, 7, 394–397.
Chew, G. (1962). S-Matrix theory of strong interactions. New York: W.A. Benjamin.
Chew, G. (1966). The analytic S-matrix: A basis for nuclear democracy. New York: W.A. Benjamin.
Chew, G. (1968). Aspects of the resonance-particle-pole relationship which may be useful in the planning and analysis of experiments. In G. Puppi (Ed.), Old and new problems in elementary particles (pp. 80–95). Amsterdam: Elsevier.
Chew, G. (1989). Particles as S-Matrix poles: Hadron democracy. In L. Hoddeson et al. (Eds.), Pions to Quarks (pp. 600–607). Cambridge: Cambridge University Press.
Chu, S.-Y., Epstein, G., & Kaus, P. (1969). Crossing-symmetric rising Regge trajectories. Physical Review, 175(5), 2098–2105.
Coleman, S., & Norton, R. (1965). Singularities in the physical region. Il Nuovo Cimento, 38(1), 438–442.
Cushing, J. T. (1990). Theory construction and selection in modern physics. Cambridge: Cambridge University Press.
Cushing, J. T. (1985). Is there just one possible world? Contingency vs. the bootstrap. Studies in the History and Philosophy of Science, 16(1), 31–48.
Dirac, P. A. M. (1970). Can equations of motion be used in high-energy physics? Physics Today, 23(4), 29–31.
Donnachie, S. (1999). Probing the pomeron. CERN Courier, Mar 29: http://cerncourier.com/cws/article/cern/27985/2.
Dyson, F. (1949). The radiation theories of Tomonaga, Schwinger, and Feynman. Physical Review, 75, 486–502.
Eden, R. (1971). Regge poles and elementary particles. Reports on Progress in Physics, 34, 995–1053.
Eden, R. J., Landshoff, P. V., Olive, D. I., & Polkinghorne, J. C. (1966). The analytic S-Matrix. Cambridge: Cambridge University Press.
Fairlie, D. (2012). The analogue model for string amplitudes. In A. Cappelli et al. (Eds.), The birth of string theory (pp. 283–293). Cambridge: Cambridge University Press.
Frautschi, S. (1995). Statistical studies of hadrons. In J. Letessier et al. (Eds.), Hot hadronic matter: Theory and experiment (pp. 57–62). New York: Plenum Press.
Frautschi, S. C. (2003). Interview by Shirley K. Cohen. Pasadena, California, June 17 and 20, 2003. Oral History Project, California Institute of Technology Archives:http://resolver.caltech.edu/CaltechOH:OH_Frautschi_S.
Gell-Mann, M., & Goldberger, M. L. (1954). The scattering of low energy photons by particles of spin \(1/2\). Physical Review, 96, 1433–1438.
Gell-Mann, M. (1956). Dispersion relations in pion-pion and photon-nucleon scattering. In J. Ballam et al. (Eds.), High energy nuclear physics: Proceedings of the sixth annual Rochester conference (pp. 30–36). New York: Interscience Publishers.
Gell-Mann, M. (1964). The symmetry group of vector and axial vector currents. Physics, 1, 63–75.
Gell-Mann, M. (1987). Superstring theory. Physica Scripta, T15, 202–209.
Green, M. B. (2012). From string to superstrings: A personal perspective. In A. Cappelli et al. (Eds.), The birth of string theory (pp. 527–543). Cambridge: Cambridge University Press.
Gross, D. (1992). Gauge theory—past, present, and future. Chinese Journal of Physics, 30(7), 955–971.
Gross, D. (2005). The discovery of asymptotic freedom and the emergence of QCD. Proceedings of the National Academy of Science, 102(26), 9099–9108.
Hoddeson, L., Brown, L., Riordan, M., & Dresden, M. (Eds.). (1997). The rise of the standard model: Particle physics in the 1960s and 1970s. Cambridge: Cambridge University Press.
Jackson, J. D. (1969). Models for high-energy processes. Reviews of Modern Physics, 42(1), 12–67.
Jacob, M. (Ed.). (1981). CERN: 25 years of physics, physics reports reprint book series, (Vol. 4). Amsterdam: North Holland.
Kaiser, D. (2002). Nuclear democracy: Political engagement, pedagogical reform, and particle physics in postwar America. Isis, 93, 229–268.
Krömer, R. (2011). The duality of space and function, and category-theoretic dualities. Unpublished manuscript: http://www.univ-nancy2.fr/poincare/documents/CLMPS2011ABSTRACTS/14thCLMPS2011_C1_Kroemer.pdf.
Landau, L. D. (1959). On analytic properties of vertex parts in quantum field theory. Nuclear Physics, 13, 181–192.
Leader, E. (1978). Why has Regge pole theory survived? Nature, 271, 213–216.
Lusanna, L. (1974). Extended hadrons and Regge slope. Lettere Al Nuovo Cimento, 11(3), 213–217.
Mandelstam, S. (1958). Determination of the pion-nucleon scattering amplitude from dispersion relations and unitarity general theory. Physical Review, 112(4), 1344–1360.
Mandelstam, S. (1968). Dynamics based on rising Regge trajectories. Physical Review, 166, 1539–1552.
Mandelstam, S. (1974). Dual-resonance models. Physics Reports, 13(6), 259–353.
Nielsen, H. (2012). String from Veneziano model. In A. Cappelli et al. (Eds.), The birth of string theory (pp. 266–274). Cambridge: Cambridge University Press.
Nussenzveig, H. M. (1972). Causality and dispersion relations. New York: Academic Press.
Olive, D. I. (2012). From dual fermion to superstring. In A. Cappelli et al. (Eds.), The birth of string theory (pp. 346–360). Cambridge: Cambridge University Press.
Phillips, R. J. N., & Ringland, G. A. (1972). Regge phenomenology. In E. Burhop (Ed.). High energy physics. Massachusetts: Academic Press.
Pickering, A. (1984). Constructing quarks: A sociological history of particle physics. Chicago: University of Chicago Press.
Polkinghorne, J. (1989). Rochester Roundabout. London: Longman.
Ramond, P. (1987). The early years of string theory: The dual resonance model. In R. Slansky & G. B. West (Eds.). Proceedings of Theoretical Advanced Study Institute Lectures in Elementary Particle Physics (pp. 501–571). Singapore: World Scientific.
Regge, T. (1959). Introduction to complex orbital momenta. Il Nuovo Cimento, 14(5), 951–976.
Sciuto, S. (2012). The ‘3-Reggeon vertex’. In A. Cappelli et al. (Eds.), The birth of string theory (pp. 214–217). Cambridge: Cambridge University Press.
Schmid, C. (1970). What is duality? Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 318(1534), 257–278.
Susskind, L. (2006). The cosmic landscape. USA: Back Bay Books.
Susskind, L. (2012). The first string theory: Personal recollections. In A. Cappelli et al. (Eds.), The birth of string theory (pp. 262–265). Cambridge: Cambridge University Press.
Taylor, J. R. (2000). Scattering theory: The quantum theory of nonrelativistic collisions. New York: Dover.
Teller, P. (1997). The philosopher problem. In L. Hoddeson, L. Brown, M. Riordan, & M. Dresden (Eds.), The rise of the standard model: Particle physics in the 1960s and 1970s (pp. 634–636). Cambridge: Cambridge University Press.
’t Hooft, G. (1999). When was asymptotic freedom discovered? Or, the rehabilitation of quantum field theory. Nuclear Physics B (Proceedings Supplements), 74(1–3), 413–425.
Weinberg, S. (1977). The search for unity: Notes for a history of quantum field theory. Daedalus, 106(4), 17–35.
Wheeler, J. (1994). Interview of John Wheeler by Kenneth Ford on March 28, 1994, Niels Bohr Library and Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history/ohilist/5908_12.html.
Woit, P. (2007). Not even wrong: The failure of string theory and the search for unity in physical law. New York: Basic Books.
© 2014 Springer-Verlag Berlin Heidelberg
Rickles, D. (2014). Particle Physics in the Sixties. In: A Brief History of String Theory. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-45128-7_2
Print ISBN: 978-3-642-45127-0
Online ISBN: 978-3-642-45128-7