So far we have talked a lot about the properties of the S-matrix, but there is an important concept closely related to the S-matrix, namely, the concept of Green’s functions. We first define Green’s functions in the interaction picture, and then rewrite them in the Heisenberg picture. This leads to the so-called Gell-Mann–Low relation [111]. We then discuss Matthews’ theorem [112] for the relation between the Hamiltonian formalism and the Lagrangian formalism. Finally, we turn our attention to the reduction formula connecting the S-matrix and Green’s functions. This formula relates to fundamental conditions called asymptotic conditions in the Heisenberg picture.

11.1 Gell-Mann–Low Relation

As in Chap. 3, writing the free-field energy–momentum four-vector in the interaction picture as \(P^{(0)}_{\mu }\), we find that (3.29) holds true for a field operator \(\mathscr {O}(x)\) without external fields, i.e.,

$$\displaystyle \begin{aligned} \big[P^{(0)}_{\mu} , \mathscr{O}(x)\big] = \mathrm{i}\frac{\partial}{\partial x_{\mu}} \mathscr{O}(x)\;. {} \end{aligned} $$
(11.1)

In particular, for μ = 0,

$$\displaystyle \begin{aligned} \big[P^{(0)}_{0}, \mathscr{O}(x)\big] =-\mathrm{i}\frac{\partial}{\partial t} \mathscr{O}(x)\;. {} \end{aligned} $$
(11.2)
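
Relation (11.2) can be illustrated by a small numerical experiment (purely illustrative, not part of the argument): if an interaction-picture operator evolves as \(\mathscr {O}(t) = \mathrm {e}^{\mathrm {i}P_{0}t}\,\mathscr {O}\,\mathrm {e}^{-\mathrm {i}P_{0}t}\), with an arbitrary Hermitian matrix standing in for \(P^{(0)}_{0}\), its commutator with the generator reproduces the time derivative. All matrices below are random stand-ins.

```python
import numpy as np

# Toy check of (11.2): with O(t) = e^{i P0 t} V e^{-i P0 t},
# one has [P0, O(t)] = -i dO/dt.  P0 and V are arbitrary
# Hermitian 4x4 matrices standing in for the free-field
# energy operator and a field operator.
rng = np.random.default_rng(0)

def hermitian(rng):
    a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    return (a + a.conj().T) / 2

P0, V = hermitian(rng), hermitian(rng)
w, S = np.linalg.eigh(P0)                # spectral decomposition of P0

def O(t):                                # interaction-picture evolution
    E = S @ np.diag(np.exp(1j * w * t)) @ S.conj().T
    return E @ V @ E.conj().T

t0, h = 0.4, 1e-5
dO_dt = (O(t0 + h) - O(t0 - h)) / (2 * h)   # central difference
commutator = P0 @ O(t0) - O(t0) @ P0
error = np.max(np.abs(commutator - (-1j) * dO_dt))
```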

We now choose for \(\mathscr {O}\) the following transformation function:

$$\displaystyle \begin{aligned} U(t,t_{0}) = T \exp \left[-\mathrm{i}\int^{t}_{t_{0}}\mathrm{d} t^{\prime}\, H_{\text{int}} (t^{\prime})\right]\;. {} \end{aligned} $$
(11.3)

We insert this into (11.2), bearing in mind that

$$\displaystyle \begin{aligned} \bigg[ P^{(0)}_{0} , -\mathrm{i} \int^{t}_{t_{0}}\mathrm{d} t^{\prime} H_{\text{int}} (t^{\prime})\bigg] &=- \int^{t}_{t_{0}}\mathrm{d} t^{\prime} \frac{\partial}{\partial t^{\prime}} H_{\text{int}}(t^{\prime}) \\ &=-\big[H_{\text{int}}(t) - H_{\text{int}}(t_{0})\big]\;. {} \end{aligned} $$
(11.4)

This immediately yields

$$\displaystyle \begin{aligned} \big[P^{(0)}_{0} , U(t , t_{0})\big] &= - T\big[ [ H_{\text{int}} (t) - H_{\text{int}} (t_{0}) ]U(t,t_{0})\big] \\ &=-H_{\text{int}}(t)U(t,t_{0}) + U(t,t_{0})H_{\text{int}}(t_{0})\;. {} \end{aligned} $$
(11.5)

We now set \(t = 0\) and take the limit \(t_{0} \rightarrow - \infty \). In this case, putting

$$\displaystyle \begin{aligned} P^{(0)}_{0} + H_{\text{int}}(0) = H_{\text{total}}\;, {} \end{aligned} $$
(11.6)

we obtain

$$\displaystyle \begin{aligned} H_{\text{total}}U(0, - \infty) = U(0, -\infty) \big[P^{(0)}_{0} + H_{\text{int}}(- \infty)\big]\;. \end{aligned}$$

Assuming that interactions exist only in a finite space-time region, as in Sects. 4.4 and 7.3, we consider the limit as this space-time region extends to infinity. This corresponds to introducing interactions adiabatically, and a variety of consequences depend on how the limit is actually taken. With some loss of rigour, we assume that

$$\displaystyle \begin{aligned} H_{\text{int}}(- \infty) = 0\;, {} \end{aligned} $$
(11.7)

whence

$$\displaystyle \begin{aligned} H_{\text{total}} U (0 , - \infty) = U(0 , -\infty)P^{(0)}_{0}\;. {} \end{aligned} $$
(11.8)

Then, writing the vacuum in the interaction picture as \(\varPhi_{0}\), we have

$$\displaystyle \begin{aligned} P^{(0)}_{0} \varPhi_{0} = 0\;.{} \end{aligned} $$
(11.9)

Hence, writing the vacuum in the Heisenberg picture as \(\varPsi_{0}\), we find

$$\displaystyle \begin{aligned} H_{\text{total}}\varPsi_{0} = 0\;,\quad \varPsi_{0} = U(0 , - \infty) \varPhi_{0}\;. {} \end{aligned} $$
(11.10)

As shown in Chap. 6, the Heisenberg picture and the interaction picture are related by (6.30), viz.,

$$\displaystyle \begin{aligned} U(t , t_{0})^{-1} \varphi_{\alpha} (\boldsymbol{x} , t) U (t,t_{0}) = \varphi^{\mathrm{(H)}}_{\alpha}(\boldsymbol{x} , t)\;. {} \end{aligned} $$
(11.11)

In this section, we set \(t_{0} = 0\). Therefore, for \(t_{1} > t_{2}\),

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \big(\varPhi_{0}, T\big[ U (\infty , - \infty) A(\boldsymbol{x}_{1} , t_{1}) B(\boldsymbol{x}_{2} , t_{2})\big]\varPhi_{0}\big) \\ & &\displaystyle \qquad \qquad = \big( \varPhi_{0} , U(\infty , t_{1}) A(\boldsymbol{x}_{1} , t_{1}) U(t_{1} , t_{2}) B(\boldsymbol{x}_{2} , t_{2})U(t_{2} , - \infty) \varPhi_{0} \big) \\ & &\displaystyle \qquad \qquad =\big( \varPhi_{0} , U(\infty , 0) A^{\mathrm{(H)}}(\boldsymbol{x}_{1} , t_{1}) B^{\mathrm{(H)}}(\boldsymbol{x}_{2} , t_{2})U( 0 , - \infty) \varPhi_{0} \big) \\ & &\displaystyle \qquad \qquad =\big( \varPhi_{0} , U(\infty , - \infty) U(0 , - \infty)^{-1} A^{\mathrm{(H)}}(\boldsymbol{x}_{1} , t_{1}) B^{\mathrm{(H)}}(\boldsymbol{x}_{2} , t_{2})U( 0 , - \infty) \varPhi_{0} \big) \\ & &\displaystyle \qquad \qquad =\big( \varPhi_{0} , U(\infty , - \infty) \varPhi_{0} \big)\big( \varPhi_{0} , U(0 , - \infty)^{-1} A^{\mathrm{(H)}}(\boldsymbol{x}_{1} , t_{1}) B^{\mathrm{(H)}}(\boldsymbol{x}_{2} , t_{2})U( 0 , - \infty) \varPhi_{0} \big) \\ & &\displaystyle \qquad \qquad =\big( \varPhi_{0} , U(\infty , - \infty) \varPhi_{0} \big) \big( \varPsi_{0} , A^{\mathrm{(H)}}(\boldsymbol{x}_{1} , t_{1}) B^{\mathrm{(H)}}(\boldsymbol{x}_{2} , t_{2}) \varPsi_{0} \big)\,, {} \end{array} \end{aligned} $$
(11.12)

where we have used the composition rule (6.27) for the transformation function and \(U(\infty , - \infty)\), and the fact that \(U(\infty , - \infty)\) transforms the vacuum \(\varPhi_{0}\) into the vacuum up to a phase. This can be written in the form

$$\displaystyle \begin{aligned} \big(\varPsi_{0} , T \big[ A^{\mathrm{(H)}}(x_{1})B^{\mathrm{(H)}}(x_{2})\big] \varPsi_{0}\big) = \frac{\big(\varPhi_{0} , T\big[U(\infty , - \infty) A(x_{1}) B(x_{2})\big] \varPhi_{0} \big)}{\big( \varPhi_{0} , U (\infty , - \infty) \varPhi_{0} \big) }\;. {} \end{aligned} $$
(11.13)

The presence of the denominator corresponds to neglecting bubble graphs, as discussed in connection with (8.70). This equation is easily generalized:

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} & &\displaystyle \big(\varPsi_{0} , T \big[ A^{\mathrm{(H)}}(x_{1}) B^{\mathrm{(H)}}(x_{2}) \ldots Z^{\mathrm{(H)}}(x_{n})\big] \varPsi_{0}\big)\\ & &\displaystyle =\frac{ \big( \varPhi_{0} , T \big[ U(\infty , - \infty) A(x_{1}) B(x_{2}) \ldots Z(x_{n})\big] \varPhi_{0}\big)}{ \big( \varPhi_{0} , U(\infty , - \infty) \varPhi_{0}\big)}\;. \end{array} \end{aligned} $$
(11.14)

This is called the Gell-Mann–Low relation [111]. It gives the relationship between the Heisenberg picture and the interaction picture. Additionally, any quantity like the left-hand side, i.e., a vacuum expectation value of a product of time-ordered field operators, is called a Green’s function.
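
A zero-dimensional caricature of (11.14) makes the role of the denominator visible. This is a sketch under the assumption that the path integral is replaced by a single ordinary integral with a Gaussian "free" measure: dividing by the analogue of \(\big(\varPhi_{0}, U(\infty , -\infty)\varPhi_{0}\big)\) removes the disconnected (bubble) contributions, and first-order perturbation theory then predicts 〈φ²〉 ≈ 1 − g/2.

```python
import numpy as np

# Zero-dimensional analogue of the Gell-Mann--Low relation:
# "free" theory = Gaussian weight exp(-phi^2/2), interaction
# S_int = (g/4!) phi^4.  The full two-point function is the
# ratio <phi^2 e^{-S_int}>_0 / <e^{-S_int}>_0, mirroring the
# numerator and denominator of (11.14).
g = 0.01
phi = np.arange(-8.0, 8.0, 1e-3)
free = np.exp(-phi**2 / 2)
u = np.exp(-g * phi**4 / 24)            # analogue of U(inf, -inf)

numerator = np.sum(phi**2 * free * u)
denominator = np.sum(free * u)
two_point = numerator / denominator     # full <phi^2>

# With the bubble (disconnected) pieces cancelled by the
# denominator, perturbation theory gives 1 - g/2 + O(g^2).
deviation = abs(two_point - (1 - g / 2))
```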

11.2 Green’s Functions and Their Generating Functionals

When we discuss the properties of Green’s functions, instead of considering them one by one, it is sometimes useful to discuss their generating function, or more precisely their generating functional. In this section, we describe the simplest generating functional, the one which produces the Green’s functions of a neutral scalar field.

We consider the Lagrangian density

$$\displaystyle \begin{aligned} \mathscr{L} = - \frac{1}{2} \big[ (\partial_{\mu} \varphi)^{2} + m^{2} \varphi^{2}\big] - \frac{g}{4 !} \varphi^{4}\;. {} \end{aligned} $$
(11.15)

We split this into the free part and the interaction part. In fact, such a separation is not trivial, and becomes particularly complicated when we take into account renormalization in the next chapter. However, in this section, we simply choose

$$\displaystyle \begin{aligned} \mathscr{H}_{\text{int}}(x) = - \mathscr{L}_{\text{int}} (x) = \frac{g}{4!} \varphi^{4}\;. {} \end{aligned} $$
(11.16)

In the following, we shall express the vacuum \(\varPhi_{0}\) simply as |0〉. We introduce a c-number external field J(x) and define the functional

$$\displaystyle \begin{aligned} \mathscr{T}^{(0)} [J] = \frac{ \Big\langle 0 \Big| T \exp \Big\{ -\mathrm{i}\int \mathrm{d}^4x \big[ \mathscr{H}_{\text{int}}(x) + J(x) \varphi (x)\big] \Big\} \Big| 0 \Big\rangle }{ \big\langle 0 \big| T \exp \big[ -\mathrm{i}\int \mathrm{d}^4 x \mathscr{H}_{\text{int}} (x)\big] \big| 0 \big\rangle }\;. {} \end{aligned} $$
(11.17)

We now derive the equation satisfied by the generating functional of the Green’s functions \(\mathscr {T}^{(0)}[J]\). To do this, we just have to use the reduction formula (8.57), i.e.,

(11.18)

Writing the denominator of (11.17) as \(\langle 0 | U(\infty , - \infty) | 0 \rangle \),

(11.19)

Integrating this equation, we obtain the functional equation for \(\mathscr {T}^{(0)}[J]\):

$$\displaystyle \begin{aligned} \mathrm{i}\frac{\updelta}{\updelta J(x)} \mathscr{T}^{(0)} [J] = -\mathrm{i}\int \mathrm{d}^4 y \varDelta_{\mathrm{F}} (x-y) \bigg\{ \frac{g}{3 !} \left[\mathrm{i} \frac{\updelta}{\updelta J (y)} \right]^{3} + J(y) \bigg\} \mathscr{T}^{(0)} [J]\;. {} \end{aligned} $$
(11.20)
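
The structure of (11.20) can be illustrated in a zero-dimensional Euclidean analogue (an assumption-laden sketch: \(\varDelta _{\mathrm {F}}\) is replaced by 1 and the factors of i are absorbed into the conventions). There, Z(J) = ∫dφ exp(−φ²/2 − gφ⁴/4! − Jφ) obeys Z′(J) = J Z(J) − (g/3!) Z‴(J), so the first J-derivative is again expressed through the source term and the third derivative.

```python
import numpy as np

# Zero-dimensional analogue of the functional equation (11.20):
# Z(J) = \int dphi exp(-phi^2/2 - g phi^4/4! - J phi) satisfies
# Z'(J) = J Z(J) - (g/3!) Z'''(J), which follows from the
# vanishing integral of a total derivative (the analogue of
# using the field equation).  J-derivatives are taken by
# finite differences.
g, J0, h = 0.5, 0.3, 0.05
phi = np.arange(-8.0, 8.0, 1e-3)

def Z(J):
    return np.sum(np.exp(-phi**2 / 2 - g * phi**4 / 24 - J * phi)) * 1e-3

Zm2, Zm1, Z0, Zp1, Zp2 = (Z(J0 + k * h) for k in (-2, -1, 0, 1, 2))
Zp = (Zp1 - Zm1) / (2 * h)                            # Z'
Zppp = (Zp2 - 2 * Zp1 + 2 * Zm1 - Zm2) / (2 * h**3)   # Z'''
residual = abs(Zp - (J0 * Z0 - g / 6 * Zppp))
```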

Differentiating \(\mathscr {T}^{(0)} [J]\) n times with respect to J and setting J = 0, we obtain the n-point Green’s function:

$$\displaystyle \begin{aligned} \left. {\mathrm{i}}^{n} \frac{ \updelta^{n} }{\updelta J(x_{1}) \ldots \updelta J (x_{n}) } \mathscr{T}^{(0)} [J] \right|{}_{J=0} = \frac{ \big\langle 0 \big| T \big[ \varphi (x_{1}) \ldots \varphi (x_{n}) U (\infty , - \infty)\big] \big| 0 \big\rangle }{ \langle 0 | U(\infty , - \infty) | 0 \rangle }\;. {} \end{aligned} $$
(11.21)

We now turn to the Heisenberg picture, defining the generating functional by

$$\displaystyle \begin{aligned} \mathscr{T} [J] = \big\langle \boldsymbol{0} \big| T \exp \Big[ -\mathrm{i}\int \mathrm{d}^4 x\, J(x) \boldsymbol{\varphi} (x) \Big] \big| \boldsymbol{0} \big\rangle \;, {} \end{aligned} $$
(11.22)

where we have written \(\varPsi_{0}\) and \(\varphi^{\mathrm{(H)}}\) in bold face as |0〉 and φ, respectively. Therefore,

$$\displaystyle \begin{aligned} \left. {\mathrm{i}}^{n} \frac{\updelta^{n}}{\updelta J(x_{1}) \ldots \updelta J(x_{n})} \mathscr{T} [J] \right|{}_{J=0} = \big\langle \boldsymbol{0} \big| T\big[ \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi}(x_{n})\big] \big| \boldsymbol{0} \big\rangle\;. {} \end{aligned} $$
(11.23)

In this picture,

(11.24)

Inserting this into (11.23) and using the equation of motion for φ, we recover (11.19). Hence, (11.20) can also be recovered, i.e.,

$$\displaystyle \begin{aligned} \mathrm{i}\frac{\updelta}{\updelta J (x)} \mathscr{T} [J] = -\mathrm{i}\int \mathrm{d}^4 y\, \varDelta_{\mathrm{F}} (x-y) \bigg\{ \frac{g}{3!} \left[\mathrm{i} \frac{\updelta}{\updelta J(y)}\right]^{3} + J(y)\bigg\} \mathscr{T}[J]\;. {} \end{aligned} $$
(11.25)

This implies that, even if the boundary conditions are the same, we can expect

$$\displaystyle \begin{aligned} \mathscr{T} [J] = \mathscr{T}^{(0)} [J]\;. {} \end{aligned} $$
(11.26)

In fact, differentiating this n times with respect to J and setting J = 0, it becomes

$$\displaystyle \begin{aligned} \big\langle \boldsymbol{0} \big| T [ \boldsymbol{\varphi} (x_{1}) \ldots \boldsymbol{\varphi} (x_{n}) ] | \boldsymbol{0} \rangle = \frac{\big\langle 0 \big| T \big[ \varphi (x_{1}) \ldots \varphi (x_{n}) U (\infty , - \infty)\big] \big| 0 \big\rangle}{ \langle 0 | U(\infty , - \infty) | 0 \rangle}\;, {} \end{aligned} $$
(11.27)

which is nothing other than the Gell-Mann–Low relation. Thus, it turns out that (11.26) always holds true.

When a Green’s function is expressed by a Feynman diagram, it will generally consist of a set of disconnected parts. We focus only on the graphs in which the n points are all connected to each other by lines. We write the contribution corresponding to such a graph as

$$\displaystyle \begin{aligned} \big\langle \boldsymbol{0} \big|T \big[ \boldsymbol{\varphi } (x_{1}) \boldsymbol{\ldots} \boldsymbol{\varphi} (x_{n})\big]\big| \boldsymbol{0} \big\rangle_{\text{conn}}\;, {} \end{aligned} $$
(11.28)

where conn stands for ‘connected.’ So what is the relationship between the connected Green’s function and the original Green’s function? To find out, we start with some point x and separate into the part connected with x and the other parts. This gives a recursion formula:

$$\displaystyle \begin{aligned} \big\langle \boldsymbol{0} \big| T\big[ \boldsymbol{\varphi}(x) \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi}(x_{n})\big] \big| \boldsymbol{0} \big\rangle = \sum \big\langle \boldsymbol{0} \big| T\big[ \boldsymbol{\varphi}(x) \boldsymbol{\varphi}(x^{\prime}_{1}) \ldots \boldsymbol{\varphi}(x^{\prime}_{k})\big] \big| \boldsymbol{0} \big\rangle_{\text{conn}} \big\langle \boldsymbol{0} \big| T\big[ \boldsymbol{\varphi}(x^{\prime}_{k+1}) \ldots \boldsymbol{\varphi}(x^{\prime}_{n})\big] \big| \boldsymbol{0} \big\rangle\;, {} \end{aligned} $$
(11.29)

where the summation has been taken over all combinations separating \(x_{1}, \ldots , x_{n}\) into \(x^{\prime }_{1}, \ldots , x^{\prime }_{k}\) and \(x^{\prime }_{k+1}, \ldots , x^{\prime }_{n}\). In order to express this relation in a closed form, we introduce a generating functional \(\mathscr {R}[J]\) for the connected Green’s functions:

$$\displaystyle \begin{aligned} \mathscr{R}[J] = \sum^{\infty}_{n=1} \frac{(-\mathrm{i})^{n}}{n!} \int \mathrm{d}^4x_{1} \ldots \int \mathrm{d}^4x_{n} \big\langle \boldsymbol{0} \big| T[ \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi} (x_{n})]\big| \boldsymbol{0} \big\rangle_{\text{conn}} J(x_{1}) \ldots J(x_{n})\,, {}\end{aligned} $$
(11.30)
$$\displaystyle \begin{aligned} \left. {\mathrm{i}}^{n} \frac{\updelta^{n}}{\updelta J(x_{1}) \ldots \updelta J(x_{n})} \mathscr{R} [J] \right|{}_{J=0} = \big\langle \boldsymbol{0} \big| T[ \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi} (x_{n})]\big| \boldsymbol{0} \big\rangle_{\text{conn}}\;. {} \end{aligned} $$
(11.31)

The recursion formula can now be expressed in closed form:

$$\displaystyle \begin{aligned} \mathrm{i}\frac{\updelta}{\updelta J(x)}\mathscr{T}[J] = \left(\mathrm{i}\frac{\updelta}{\updelta J(x)} \mathscr{R} [J] \right) \mathscr{T}[J]\;. {} \end{aligned} $$
(11.32)

We solve this functional equation with the boundary conditions

$$\displaystyle \begin{aligned} \mathscr{R}[0] = 0\;,\quad \mathscr{T} [0] =1\;. {} \end{aligned} $$
(11.33)

The solution is

$$\displaystyle \begin{aligned} \mathscr{T} [J] = \exp \mathscr{R} [J]\;. {} \end{aligned} $$
(11.34)
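
The relation 𝒯 = exp ℛ is the field-theoretic version of the statement that a moment generating function is the exponential of the cumulant generating function. In zero dimensions, the recursion relating the two becomes the standard moment–cumulant recursion, sketched below for an illustrative distribution whose n th moment is n! (so the n th cumulant should come out as (n−1)!).

```python
from math import comb, factorial

# Zero-dimensional analogue of T = exp R: full Green's functions
# play the role of moments m_n, connected ones of cumulants k_n,
# related by m_n = sum_k C(n-1, k-1) k_k m_{n-k} (the analogue of
# the recursion expressed in closed form by (11.32)).
def cumulants(m, N):
    """m[n] = n-th moment, m[0] = 1; returns k[0..N]."""
    k = [0] * (N + 1)
    for n in range(1, N + 1):
        k[n] = m[n] - sum(comb(n - 1, j - 1) * k[j] * m[n - j]
                          for j in range(1, n))
    return k

moments = [factorial(n) for n in range(6)]   # m_n = n!
kappa = cumulants(moments, 5)                # expect (n-1)!
```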

Then, writing the expectation value of φ(x) in the presence of the external field J as 〈φ(x)〉, we have

$$\displaystyle \begin{aligned} \langle \boldsymbol{\varphi}(x) \rangle =\frac{\mathrm{i} \dfrac{\updelta}{\updelta J(x)} \mathscr{T}[J] }{ \mathscr{T}[J]} =\mathrm{i} \frac{\updelta}{\updelta J(x)} \mathscr{R} [J]\;. {} \end{aligned} $$
(11.35)

In some situations, it is more useful to treat 〈φ(x)〉, rather than J(x), as the independent external variable, so we introduce the following Legendre transformation:

$$\displaystyle \begin{aligned} \mathscr{F} = \mathscr{R} +\mathrm{i}\int \mathrm{d}^4x J(x) \langle \boldsymbol{\varphi}(x) \rangle\;. {} \end{aligned} $$
(11.36)

This yields

$$\displaystyle \begin{aligned} \updelta \mathscr{F} &= \updelta \mathscr{R} +\mathrm{i}\int \mathrm{d}^4x \big[ J(x)\updelta \langle \boldsymbol{\varphi}(x) \rangle + \updelta J(x) \langle \boldsymbol{\varphi}(x) \rangle\big] \\ &= - \mathrm{i}\int \mathrm{d}^4x\, \updelta J(x) \langle \boldsymbol{\varphi}(x) \rangle +\mathrm{i}\int \mathrm{d}^4x\,\big[ J(x) \updelta \langle \boldsymbol{\varphi}(x) \rangle + \updelta J(x) \langle \boldsymbol{\varphi}(x) \rangle\big] \\ &=\mathrm{i}\int \mathrm{d}^4x\, J(x) \updelta \langle \boldsymbol{\varphi}(x) \rangle\;, {} \end{aligned} $$
(11.37)

i.e., taking 〈φ(x)〉 as an independent variable instead of J(x),

$$\displaystyle \begin{aligned} \frac{\updelta \mathscr{F}}{\updelta \langle \boldsymbol{\varphi}(x) \rangle } =\mathrm{i} J(x)\;. {} \end{aligned} $$
(11.38)

Variable transformations like this will play an important role in the discussion of spontaneous symmetry breaking later on. Moreover, differentiating (11.35) again,

$$\displaystyle \begin{aligned} \mathrm{i}\frac{ \updelta \langle \boldsymbol{\varphi} (x) \rangle }{ \updelta J (y) } = - \frac{ \updelta^2 \mathscr{R} }{ \updelta J(x) \updelta J (y) } = \varDelta_{\mathrm{F}} (x,y)\;. {} \end{aligned} $$
(11.39)

This is the two-point Green’s function in the case where an external field exists. Differentiating (11.38),

$$\displaystyle \begin{aligned} \frac{\updelta^{2} \mathscr{F}}{\updelta \langle \boldsymbol{\varphi}(x) \rangle \updelta \langle \boldsymbol{\varphi} (y) \rangle} = \mathrm{i}\frac{\updelta J(y)}{\updelta \langle \boldsymbol{\varphi }(x) \rangle} =-\varDelta^{-1}_{\mathrm{F}}(x,y)\;, {} \end{aligned} $$
(11.40)

which is the inverse of the two-point Green’s function.
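
Equations (11.39) and (11.40) express a general property of Legendre transforms: the second derivative of the transform is the inverse of the second derivative of the original functional. A one-variable sketch (real Euclidean conventions, with W playing the role of ℛ and Γ the role of 𝓕; the particular function W is an arbitrary illustrative choice):

```python
import math

# One-variable analogue of (11.39)-(11.40): for W(J) = log cosh J,
# m = W'(J) = tanh J, and the Legendre transform
# Gamma(m) = J m - W(J) satisfies Gamma''(m) = 1 / W''(J),
# i.e. it is the inverse of the "two-point function" W''.
def W(J):
    return math.log(math.cosh(J))

def Wpp(J):
    return 1.0 / math.cosh(J) ** 2

def Gamma(m):
    J = math.atanh(m)          # invert m = tanh J
    return J * m - W(J)

J0 = 0.7
m0 = math.tanh(J0)
h = 1e-4
Gamma_pp = (Gamma(m0 + h) - 2 * Gamma(m0) + Gamma(m0 - h)) / h**2
mismatch = abs(Gamma_pp - 1.0 / Wpp(J0))
```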

11.3 Different Time-Orderings in the Lagrangian Formalism

In the previous section, we investigated the relationship between representations of Green’s functions in the Heisenberg picture and in the interaction picture. In this section, we shall discuss the difference between time-orderings in the Hamiltonian formalism and the Lagrangian formalism.

For simplicity, we start with particle dynamics. Considering a one-dimensional system, we assume that

$$\displaystyle \begin{aligned} p(t) = m \dot{q} (t)\;. {} \end{aligned} $$
(11.41)

Therefore, under the time-ordering operator T, p(t) is ordered as an operator corresponding to time t. This is because in the Hamiltonian formalism, the operators q(t) and p(t) are treated as independent variables, so these are considered to be quantities that need no further temporal decomposition. However, in the Lagrangian formalism, only q(t) is an independent quantity, so \(\dot {q}(t)\) becomes

$$\displaystyle \begin{aligned} \dot{q}(t) = \lim_{\epsilon \rightarrow 0} \frac{q(t + \epsilon) - q(t)}{\epsilon}\;. {} \end{aligned} $$
(11.42)

Assuming that 𝜖 is small but finite, it turns out that, in the Lagrangian formalism, \(\dot {q}(t)\) is associated with two clock times in time-orderings. We denote the time-ordering operator in this treatment by \(T^{\ast }\). It should be noted here that, in the path-integral method, the same decomposition (11.42) is used. Therefore, the method using \(T^{\ast }\) is closely related to the path-integral method. So what is the difference between using \(T\) or \(T^{\ast }\)?

In the Hamiltonian formalism and hence under T, we assume that \(\dot {q}(t)\) can be treated as a one-clock-time quantity, as in (11.41). Now,

$$\displaystyle \begin{aligned} T^{\ast} [\dot{q}(t) , q(t^{\prime})] &= T^{\ast} \bigg[ \lim_{\epsilon \rightarrow 0} \frac{q(t + \epsilon) - q(t)}{\epsilon} , q(t^{\prime}) \bigg] \\ &=\lim_{\epsilon \rightarrow 0} T \bigg[ \frac{q(t + \epsilon) - q(t)}{\epsilon} , q(t^{\prime}) \bigg] \\ &=\frac{\partial}{\partial t} T \big[ q(t) , q(t^{\prime}) \big]\;. {} \end{aligned} $$
(11.43)

However, if q obeys a second-order differential equation, then only \(\dot {q}\) can enter in \(T^{\ast }\), because \(\ddot {q}\) is not independent of q. Another example is

$$\displaystyle \begin{aligned} T^{\ast} \big[\dot{q}(t) , \dot{q}(t^{\prime})\big] = \frac{\partial^{2}}{\partial t \partial t^{\prime}} T \big[q(t) , q(t^{\prime})\big]\;. {} \end{aligned} $$
(11.44)

The derivation of this equation is perfectly analogous to the one in the previous example. However, using T, we find

$$\displaystyle \begin{aligned} \frac{\partial}{\partial t} T \big[ q(t) , q(t^{\prime})\big] &= T[\dot{q}(t) , q(t^{\prime}) ] + \frac{\partial}{\partial t} \left[ \frac{1}{2} \epsilon (t -t^{\prime}) \right] \big[ q(t) , q(t^{\prime})\big] \\ &= T\big[\dot{q}(t) , q(t^{\prime})\big] + \delta (t - t^{\prime}) \big[ q(t) , q(t^{\prime})\big] \\ &=T\big[\dot{q}(t) , q (t^{\prime})\big]\;. {} \end{aligned} $$
(11.45)

Therefore, this is the same as \(T^{\ast }\) in (11.43). However, assuming (11.41),

$$\displaystyle \begin{aligned} \frac{\partial}{\partial t} T \big[ q(t) , \dot{q} (t^{\prime})\big] &= T\big[\dot{q}(t) , \dot{q}(t^{\prime})\big] + \delta (t-t^{\prime}) \big[ q(t), \dot{q}(t^{\prime})\big] \\ &= T\big[\dot{q}(t) , \dot{q}(t^{\prime})\big] + \frac{\mathrm{i}}{m} \delta (t - t^{\prime})\;. {} \end{aligned} $$
(11.46)

The left-hand side is equal to the right-hand side of (11.44). So setting the left-hand side of (11.46) equal to the left-hand side of (11.44),

$$\displaystyle \begin{aligned} T\big[ \dot{q}(t) , \dot{q}(t^{\prime}) \big] = T^{\ast} \big[ \dot{q} (t) , \dot{q}(t^{\prime})\big] - \frac{\mathrm{i}}{m} \delta (t - t^{\prime})\;. {} \end{aligned} $$
(11.47)

The difference between \(T\) and \(T^{\ast }\) becomes clear in this way.
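
The contact term in (11.47) can also be exhibited numerically in a discretized-time sketch (an illustration, not the derivation above): for the oscillator kernel 〈T q(t)q(s)〉 = (1/2mω) e^{−iω|t−s|}, the two-clock-time finite-difference velocity correlator differs from the naive one-time continuation by a spike at equal times whose integral is i/m.

```python
import numpy as np

# T vs T*: for C(t,s) = e^{-i w |t-s|}/(2 m w), the double finite
# difference (the two-clock-time, T*-type qdot-qdot correlator)
# approximates d^2 C/dt ds, while the naive T-type correlator is
# (w/2m) e^{-i w |t-s|}.  Their difference is concentrated at
# t = s and integrates to i/m, reproducing (11.47).
m_, w = 2.0, 1.3
A = 1.0 / (2 * m_ * w)
eps = 1e-3
t = np.arange(-5000, 5001) * eps        # s is fixed at 0

def C(t, s):
    return A * np.exp(-1j * w * np.abs(t - s))

Tstar = (C(t + eps, eps) - C(t + eps, 0.0)
         - C(t, eps) + C(t, 0.0)) / eps**2      # two-time derivative
Tnaive = (w / (2 * m_)) * np.exp(-1j * w * np.abs(t))
contact = np.sum(Tstar - Tnaive) * eps          # integral over t
gap = abs(contact - 1j / m_)
```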

This argument can be extended to field theory. For simplicity, we consider a neutral scalar field in the interaction picture:

$$\displaystyle \begin{aligned} \frac{\partial}{\partial x_{\mu}} T\big[\varphi (x) , \varphi (y)\big] &= T\big[\partial_{\mu} \varphi (x) , \varphi (y)\big] + \frac{1}{\mathrm{i}} \delta_{\mu 4} \delta (x_{0} - y_{0})\big[ \varphi (x) , \varphi (y)\big] \\ &= T \big[ \partial_{\mu} \varphi (x) , \varphi (y)\big]\;. {} \end{aligned} $$
(11.48)

In order to differentiate this once more, we introduce a unit time-like vector \(n_{\mu }\). Here we assume that \(n_{1} = n_{2} = n_{3} = 0\) and \(n_{4} = \mathrm{i}\):

$$\displaystyle \begin{aligned} \frac{\partial}{\partial y_{\nu}} T \big[ \partial_{\mu} \varphi (x) , \varphi (y)\big] = T\big[ \partial_{\mu} \varphi (x) , \partial_{\nu} \varphi (y)\big] + \mathrm{i}\delta_{\nu 4} \delta (x_{0} - y_{0}) \big[ \partial_{\mu} \varphi (x) , \varphi (y)\big]\;. \end{aligned} $$

The second term becomes

$$\displaystyle \begin{aligned} \delta_{\mu 4} \delta_{\nu 4} \delta (x_{0} - y_{0}) \big[ \dot{\varphi} (x) , \varphi (y)\big] &= - \mathrm{i}\delta_{\mu 4} \delta_{\nu 4} \delta^{4} (x-y) \\ &=\mathrm{i} n_{\mu}n_{\nu} \delta^{4} (x-y)\;. \end{aligned} $$

Taking the vacuum expectation value,

$$\displaystyle \begin{aligned} \big\langle 0 \big| T \big[ \partial_{\mu} \varphi (x) , \partial_{\nu} \varphi (y)\big]\big| 0 \big\rangle = \frac{\partial^{2}}{\partial x_{\mu} \partial y_{\nu}} \varDelta_{\mathrm{F}} (x-y) - \mathrm{i} n_{\mu}n_{\nu} \delta^{4} (x-y)\;. {} \end{aligned} $$
(11.49)

On the other hand,

$$\displaystyle \begin{aligned} \big\langle 0 \big| T^{\ast} \big[ \partial_{\mu} \varphi (x), \partial_{\nu} \varphi (y)\big] \big| 0 \big\rangle = \frac{\partial^{2}}{\partial x_{\mu} \partial y_{\nu}} \varDelta_{\mathrm{F}} (x-y)\;. {} \end{aligned} $$
(11.50)

This difference corresponds to (11.47). Comparing the two equations above, we see that \(T^{\ast }\) is covariant and simpler. When there are derivatives of field operators in the interaction term, the Hamiltonian density is not a scalar, but a tensor involving the time-like vector n. Although in this case both the Hamiltonian density and the contraction function depend on n, when we compute the S-matrix, the two types of n-dependence cancel out, so the final result does not depend on n. This is called Matthews’ theorem [112].

11.4 Matthews’ Theorem

If interactions do not include derivatives of the field operators, we have

$$\displaystyle \begin{aligned} \mathscr{H}_{\text{int}} (x) = - \mathscr{L}_{\text{int}}(x)\;. {} \end{aligned} $$
(11.51)

Clearly, in this case,

$$\displaystyle \begin{aligned} S = T \exp \left[ -\mathrm{i}\int \mathrm{d}^4 x \mathscr{H}_{\text{int}}(x) \right] = T^{\ast} \exp \left[\mathrm{i}\int \mathrm{d}^4x \mathscr{L}_{\text{int}}(x) \right]\;. {} \end{aligned} $$
(11.52)

This is because the difference between \(T\) and \(T^{\ast }\) does not appear anywhere here.

When derivatives are included in the interactions, the situation becomes more complicated. For example, when a charged scalar field and the electromagnetic field interact with each other,

$$\displaystyle \begin{aligned} \mathscr{L} =-\big[ (\partial_{\mu} + \mathrm{i} eA_{\mu}) \varphi^{\dagger} \cdot ( \partial_{\mu} - \mathrm{i} eA_{\mu}) \varphi + m^{2}\varphi^{\dagger}\varphi\big] + \mathscr{L}_{\text{em}}\;, {}\end{aligned} $$
(11.53)
$$\displaystyle \begin{aligned} \mathscr{L}_{\text{int}} = -\mathrm{i} eA_{\mu} (\varphi^{\dagger} \cdot \partial_{\mu} \varphi - \partial_{\mu} \varphi^{\dagger} \cdot \varphi) - e^{2}A^{2}_{\mu} \varphi^{\dagger}\varphi\;. {} \end{aligned} $$
(11.54)

Thus, when derivatives of field operators are included, \(\mathscr {H}_{\text{int}}\) differs from \(-\mathscr {L}_{\text{int}}\).

First, we decompose the Lagrangian density:

$$\displaystyle \begin{aligned} \mathscr{L}(x) = \mathscr{L}_{\text{f}}(x) + \mathscr{L}_{\text{int}}(x)\;. {} \end{aligned} $$
(11.55)

If \(\varphi_{\alpha}\) is a real scalar field, the quantity canonically conjugate to \(\varphi_{\alpha}\) in the free-field case is

$$\displaystyle \begin{aligned} \pi_{\alpha}(x) = \frac{\partial \mathscr{L}_{\text{f}}(x)}{\partial \dot{\varphi}_{\alpha}(x)} = \dot{\varphi}_{\alpha}(x)\;. {} \end{aligned} $$
(11.56)

If we now consider interactions, the canonically conjugate field is

$$\displaystyle \begin{aligned} \pi^{\prime}_{\alpha} (x) = \frac{\partial \mathscr{L} (x)}{\partial \dot{\varphi}_{\alpha}(x)} = \dot{\varphi}_{\alpha}(x) + \frac{ \partial \mathscr{L}_{\text{int}} (x) }{ \partial \dot{\varphi}_{\alpha}(x)}\;. {} \end{aligned} $$
(11.57)

By the Yang–Feldman equation (6.34), the relation between the Heisenberg picture and the interaction picture is

$$\displaystyle \begin{aligned} \varphi_{\alpha}(x) = U(x_{0} , - \infty)^{-1} \varphi^{\text{in}}_{\alpha} (x) U (x_{0} , - \infty)\;, {}\end{aligned} $$
(11.58)
$$\displaystyle \begin{aligned} \pi^{\prime}_{\alpha} (x) = U(x_{0} , -\infty)^{-1} \pi^{\text{in}}_{\alpha}(x) U(x_{0} , - \infty)\;. {} \end{aligned} $$
(11.59)

The reason why \(\pi^{\prime}\) transforms into \(\pi\) is that the transformation by U should not change the canonical commutation relations. In the following, we shall drop the superscript ‘in’ on operators in the interaction picture, and express Heisenberg operators in bold face. Therefore, the Hamiltonian density is

$$\displaystyle \begin{aligned} \boldsymbol{\mathscr{H}}(x) = \sum_{\alpha} \boldsymbol{\pi}^{\prime}_{\alpha}(x) \boldsymbol{\dot{\varphi}}_{\alpha}(x) - \boldsymbol{\mathscr{L}}\big( \boldsymbol{\varphi}_{\alpha}(x) , \boldsymbol{\dot{\varphi}}_{\alpha}(x)\big)\;, {} \end{aligned} $$
(11.60)

where we have assumed that a spatial derivative is a linear combination of the φ α(x). Then,

$$\displaystyle \begin{aligned} \mathscr{H}(x) &= U(x_{0} , - \infty) \boldsymbol{\mathscr{H}}(x) U(x_{0} , - \infty)^{-1} \\ &= \sum_{\alpha} \pi_{\alpha}(x)\, U(x_{0} , - \infty) \boldsymbol{\dot{\varphi}}_{\alpha}(x) U(x_{0} , - \infty)^{-1} - \mathscr{L}\big( \varphi_{\alpha}(x) ,\, U(x_{0} , - \infty) \boldsymbol{\dot{\varphi}}_{\alpha}(x) U(x_{0} , - \infty)^{-1}\big)\;. {} \end{aligned} $$
(11.61)

In this computation we use the inverse transformations of (11.58) and (11.59). Therefore,

$$\displaystyle \begin{aligned} U(x_{0}, - \infty) \boldsymbol{\dot{\varphi}}_{\alpha}(x) U (x_{0} , - \infty)^{-1} &= U(x_{0} , -\infty) \big[\boldsymbol{\pi^{\prime}}_{\alpha}(x) - \boldsymbol{\sigma}_{\alpha}(x) \big] U( x_{0} , - \infty )^{-1} \\ &= \pi_{\alpha}(x) - \sigma_{\alpha}(x) \\ &=\dot{\varphi}_{\alpha} (x) - \sigma_{\alpha} (x)\;, {} \end{aligned} $$
(11.62)

where σ α is defined by

$$\displaystyle \begin{aligned} \sigma_{\alpha}(x) = \frac{\partial \mathscr{L}_{\text{int}} (x)}{\partial \dot{\varphi}_{\alpha}(x)} =- n_{\mu} \frac{\partial \mathscr{L}_{\text{int}} (x) }{\partial \varphi_{\alpha , \mu}(x)}\;. {} \end{aligned} $$
(11.63)

Thus,

$$\displaystyle \begin{aligned} \mathscr{H}(x) = \sum_{\alpha} \pi_{\alpha}(x) \big[ \pi_{\alpha}(x) - \sigma_{\alpha}(x)\big] - \mathscr{L}\big( \varphi_{\alpha} (x) , \dot{\varphi}_{\alpha}(x) - \sigma_{\alpha}(x)\big)\;. {} \end{aligned} $$
(11.64)

We use a Taylor expansion for the second term. Since \(\mathscr {L}\) includes at most second-order terms in \(\dot {\varphi }_{\alpha }(x)\), using

$$\displaystyle \begin{aligned} \frac{\partial^{2}}{\partial \dot{\varphi}_{\alpha} \partial \dot{\varphi}_{\beta}}\mathscr{L} = \delta_{\alpha \beta}\;,\quad \frac{\partial}{\partial \dot{\varphi}_{\alpha}}\mathscr{L} = \pi_{\alpha} + \sigma_{\alpha}\;, {} \end{aligned} $$
(11.65)

we obtain the following power series expansion in σ :

$$\displaystyle \begin{aligned} \mathscr{L}(\varphi_{\alpha} , \dot{\varphi}_{\alpha} - \sigma_{\alpha}) = \mathscr{L} (\varphi_{\alpha} , \dot{\varphi}_{\alpha}) - \sum_{\alpha} \sigma_{\alpha} \pi_{\alpha} - \frac{1}{2} \sum_{\alpha} \sigma^{2}_{\alpha} (x)\;. {} \end{aligned} $$
(11.66)

Inserting this into (11.64),

$$\displaystyle \begin{aligned} \mathscr{H}(x) = \sum_{\alpha} \pi^{2}_{\alpha}(x) - \mathscr{L} \big( \varphi_{\alpha} (x) , \dot{\varphi}_{\alpha}(x)\big) + \frac{1}{2} \sum_{\alpha} \sigma^2_\alpha (x)\;. {} \end{aligned} $$
(11.67)

Therefore,

$$\displaystyle \begin{aligned} \mathscr{H}_{\text{int}} (x) = - \mathscr{L}_{\text{int}}(x) + \frac{1}{2} \sum_{\alpha}n_{\mu}n_{\nu} \frac{\partial \mathscr{L}_{\text{int}} (x) }{\partial \varphi_{\alpha, \mu} (x)} \frac{\partial \mathscr{L}_{\text{int} } (x) }{\partial \varphi_{\alpha,\nu} (x)}\;. {} \end{aligned} $$
(11.68)

It turns out that the second term on the right-hand side expresses a shift from (11.51). We now prove the equality (11.52) in this case, i.e., Matthews’ theorem. To do so, we first specify the relation between the T-product and the normal product for the simple neutral scalar field theory.
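
Before turning to that, the shift in (11.68) can be checked in a one-degree-of-freedom model (an illustrative stand-in with L = q̇²/2 − V + c q q̇, so that L_int = c q q̇ contains the velocity linearly and σ = ∂L_int/∂q̇ = cq): the canonical construction of the Hamiltonian gives H_int = −L_int(q̇ → p) + σ²/2, the particle-mechanics version of (11.68).

```python
from sympy import symbols, solve, simplify

# One-degree-of-freedom version of (11.68): for
# L = qdot^2/2 - V + c q qdot (a derivative coupling),
# the Legendre transform gives H_int = -L_int + sigma^2/2
# with sigma = dL_int/dqdot = c q.  V is an opaque symbol
# standing for the potential (independent of qdot).
q, qdot, p, c, V = symbols('q qdot p c V')
L_f = qdot**2 / 2 - V
L_int = c * q * qdot
L = L_f + L_int

p_def = L.diff(qdot)                     # p = qdot + c q
qdot_of_p = solve(p_def - p, qdot)[0]    # qdot = p - c q
H = (p * qdot - L).subs(qdot, qdot_of_p)
H_f = (p * qdot - L_f).subs(qdot, p)     # free part: qdot -> p
H_int = simplify(H - H_f)

sigma = L_int.diff(qdot)                 # sigma = c q
shift = simplify(H_int - (-L_int.subs(qdot, p) + sigma**2 / 2))
```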

We expand the T-product T[AB…Z] into normal products in the neutral scalar theory. We write the T-product in which the contraction function \(\varDelta _{\mathrm {F}}\) is replaced by \(\lambda \varDelta _{\mathrm {F}}\) as \(T_{\lambda }[AB \ldots Z]\). Since differentiating this with respect to λ is equivalent to contracting a pair φ(x) and φ(y) and multiplying by \(\varDelta _{\mathrm {F}}\), we have

$$\displaystyle \begin{aligned} \frac{\partial}{\partial \lambda} T_{\lambda} [AB \ldots Z] = \frac{1}{2} \int \mathrm{d}^4x \int \mathrm{d}^4 y \frac{\updelta}{\updelta \varphi (x)} \varDelta_{\mathrm{F}} (x-y) \frac{\updelta}{\updelta \varphi (y)} T_{\lambda} [AB \ldots Z]\;, {} \end{aligned} $$
(11.69)

where we have treated the φ’s in the normal product as c-numbers and differentiated with respect to them. Moreover, if we take λ = 0, then since this is exactly the same as the case with no contractions, we have

$$\displaystyle \begin{aligned} T_{0}[AB \ldots Z] =\; :\! AB \ldots Z\! :\;. {} \end{aligned} $$
(11.70)

Solving the differential equation (11.69) under the initial condition (11.70) and taking λ = 1, we find

$$\displaystyle \begin{aligned} T[AB \ldots Z] = \exp \left[ \frac{1}{2} \int \mathrm{d}^4 x \int \mathrm{d}^4y \frac{\updelta}{\updelta \varphi (x)} \varDelta_{\mathrm{F}}(x-y) \frac{\updelta}{\updelta \varphi (y)} \right] :\! AB \ldots Z \! :\;. {} \end{aligned} $$
(11.71)

Next, we take the vacuum expectation value of this equation. Since the vacuum expectation value of a normal product vanishes, only those terms on the right-hand side in which all field operators are contracted survive. Setting to zero the operators φ which are not contracted on the right-hand side, we are left with precisely those contributions in which all operators are contracted:

$$\displaystyle \begin{aligned} \big\langle 0 \big| T [AB \ldots Z] \big| 0 \big\rangle = \left.\exp \left[ \frac{1}{2} \int \mathrm{d}^4 x \int \mathrm{d}^4 y \frac{\updelta}{\updelta \varphi (x)} \varDelta_{\mathrm{F}} (x-y) \frac{\updelta}{\updelta \varphi (y)} \right] :\! AB \ldots Z\!: \right|{}_{\varphi =0}\;. {} \end{aligned} $$
(11.72)
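
The content of (11.71) and (11.72) survives even in a zero-dimensional sketch, under the assumption that \(\varDelta _{\mathrm {F}}\) is replaced by a constant λ and the normal product by a monomial with φ set to zero at the end: the exponential of half the contraction operator, acting on φ⁴, counts the three pairings.

```python
from sympy import symbols, diff, factorial, Rational

# Zero-dimensional version of (11.72): with Delta_F -> lam, the
# vacuum expectation value of a monomial in phi is obtained by
# applying exp[(lam/2) d^2/dphi^2] and then setting phi = 0.
# For phi^4 this counts the 3 pairings; for phi^6, the 15.
phi, lam = symbols('phi lam')

def vev(expr, order=6):
    s = sum((Rational(1, 2) * lam)**n / factorial(n)
            * diff(expr, phi, 2 * n) for n in range(order))
    return s.subs(phi, 0)

vev4 = vev(phi**4)   # 3 pairings
vev6 = vev(phi**6)   # 15 pairings
```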

In fact, (11.71) is only true if no derivative of φ is included in AB…Z. The operator appearing in the argument of the exponential function is related to the field quantization: it tells us how to contract operators. Denoting it by D, the operation which converts the normal product involving φ and its derivatives into the T-product is

$$\displaystyle \begin{aligned} T[AB \ldots Z] = \mathrm{e}^{D} :\! AB \ldots Z\! :\;. {} \end{aligned} $$
(11.73)

When the normal product includes φ, its derivative, and fermionic fields ψ and \(\bar {\psi }\), we have

(11.74)

where we have used (11.49) as a contraction function. Regarding the functional derivatives with respect to the Dirac field, the reader is referred to the caution after (8.59). Likewise, for the \(T^{\ast }\)-product,

$$\displaystyle \begin{aligned} T^{\ast} [AB \ldots Z] = \mathrm{e}^{D^{\ast}}\! :\! AB \ldots Z\! :\;, {} \end{aligned} $$
(11.75)

where \(D^{\ast }\) is defined so that (11.50) applies to contractions of the φ derivatives in (11.74), whence

$$\displaystyle \begin{aligned} D=D^{\ast} + \frac{1}{2} \int \mathrm{d}^4 x \int \mathrm{d}^4 y \frac{\updelta}{\updelta \varphi_{,\mu} (x)} \big[-\mathrm{i} n_{\mu}n_{\nu} \delta^{4}(x-y)\big] \frac{\updelta}{\updelta \varphi_{,\nu}(y)}\;. {} \end{aligned} $$
(11.76)

In both cases, we have treated φ and its derivative as independent when taking functional derivatives.

We now choose the interaction Lagrangian density to have the form

$$\displaystyle \begin{aligned} \mathscr{L}_{\text{int}} (x) = - j_{\mu}(x) \partial_{\mu} \varphi (x) + \mathscr{L}^{(0)}_{\text{int}}(x)\;, {} \end{aligned} $$
(11.77)

where \(\mathscr {L}_{\text{int}}^{(0)}\) is a term which includes no derivatives. Therefore, from (11.68),

$$\displaystyle \begin{aligned} \mathscr{H}_{\text{int} } (x) =-\mathscr{L}_{\text{int}} (x) + \frac{1}{2}\big[ n_{\mu} j_{\mu} (x)\big]^{2}\;. {} \end{aligned} $$
(11.78)

In order to compute the S-matrix, we consider

$$\displaystyle \begin{aligned} T \exp \left[ -\mathrm{i}\int \mathrm{d}^4 x \mathscr{H}_{\text{int}} (x) \right] &= \mathrm{e}^{D} :\! \exp \left[ -\mathrm{i}\int \mathrm{d}^4 x \mathscr{H}_{\text{int}} (x) \right] \!: \\ &=\mathrm{e}^{D^{\ast}} \mathrm{e}^{D - D^{\ast}} :\! \exp \left[ -\mathrm{i}\int \mathrm{d}^4x \mathscr{H}_{\text{int}} (x) \right] \! : {} \end{aligned} $$
(11.79)

where \(D - D^{\ast }\) is the second term on the right-hand side of (11.76). This is a functional derivative with respect to the derivative of φ. Hence, it acts only on the term \(j_{\mu}\partial_{\mu}\varphi\) in \(\mathscr {H}_{\text{int}}(x)\), which contains the derivative of φ. We must therefore compute

$$\displaystyle \begin{aligned} \mathrm{e}^{D-D^{\ast}} :\! \exp \left[ -\mathrm{i}\int \mathrm{d}^4x j_{\mu}(x) \partial_{\mu} \varphi (x)\right]\! :\;. {} \end{aligned} $$
(11.80)

To carry out this computation, we use

$$\displaystyle \begin{aligned} \exp \left( \lambda \frac{\mathrm{d}^{2}}{\mathrm{d} x^{2}} \right) \mathrm{e}^{ax} = \mathrm{e}^{ax + a^{2} \lambda}\;. {} \end{aligned} $$
(11.81)
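As a quick consistency check, the identity (11.81) can be verified symbolically: applying the series of \(\exp (\lambda \,\mathrm{d}^{2}/\mathrm{d} x^{2})\) term by term to \(\mathrm{e}^{ax}\) resums into the claimed exponential. A minimal sketch using sympy (the truncation order N is an arbitrary choice):

```python
import sympy as sp

x, a, lam = sp.symbols('x a lam')
f = sp.exp(a*x)
N = 10  # truncation order of the operator exponential (arbitrary choice)

# Apply exp(lam * d^2/dx^2) to e^{ax} term by term:
# each power (d^2/dx^2)^n just multiplies e^{ax} by a^{2n}.
lhs = sum(lam**n / sp.factorial(n) * sp.diff(f, x, 2*n) for n in range(N))

# Right-hand side of (11.81), expanded to the same order in lam
rhs = f * sum((a**2 * lam)**n / sp.factorial(n) for n in range(N))

print(sp.simplify(lhs - rhs))  # 0
```

The resummation works because \(\mathrm{e}^{ax}\) is an eigenfunction of \(\mathrm{d}^{2}/\mathrm{d} x^{2}\) with eigenvalue \(a^{2}\).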

Generalizing this formula and using it to calculate (11.80), we find that the latter is equivalent to something of the form

$$\displaystyle \begin{aligned} :\! \exp \bigg[ -\mathrm{i}\int \mathrm{d}^4x \left( j_{\mu}(x) \partial_{\mu} \varphi (x) - \frac{1}{2} \big[ n_{\mu} j_{\mu} (x)\big]^{2}\right)\bigg]\! :\;. {} \end{aligned} $$
(11.82)

Therefore, (11.79) can be written in the form

$$\displaystyle \begin{aligned} \begin{array}{rcl} T \exp \left[ -\mathrm{i}\int \mathrm{d}^4 x \mathscr{H}_{\text{int}}(x)\right] & =&\displaystyle \mathrm{e}^{D^{\ast}}:\! \exp \bigg[ -\mathrm{i}\int \mathrm{d}^4x \left( \mathscr{H}_{\text{int}}(x) - \frac{1}{2} \big[ n_{\mu} j_{\mu} (x)\big]^{2}\right)\bigg]\! : \\ & =&\displaystyle \mathrm{e}^{D^{\ast}} :\! \exp \left[\mathrm{i}\int \mathrm{d}^4 x \mathscr{L}_{\text{int}} (x) \right]\! : \\ & =&\displaystyle T^{\ast} \exp \left[\mathrm{i}\int \mathrm{d}^4 x \mathscr{L}_{\text{int}}(x) \right]\;. {} \end{array} \end{aligned} $$
(11.83)

Dividing both sides of this equation by its vacuum expectation value yields the S-matrix:

$$\displaystyle \begin{aligned} S=\frac{T^{\ast} \exp \big[\mathrm{i}\int \mathrm{d}^4x \mathscr{L}_{\text{int}}(x) \big] }{ \big\langle 0 \big| T^{\ast} \exp \big[\mathrm{i}\int \mathrm{d}^4x \mathscr{L}_{\text{int}}(x) \big] \big| 0 \big\rangle}\;. {} \end{aligned} $$
(11.84)

This is known as Matthews’ theorem. Note that it holds only if derivatives of the field operator enter (11.77) at most linearly.

11.5 Example of Matthews’ Theorem with Modification

In the last section, we considered a situation where Matthews’ theorem holds true, so that T and \(\mathscr {H}_{\text{int}}\) can be replaced by \(T^{\ast}\) and \(-\mathscr {L}_{\text{int}}\), respectively. In general, \(\mathscr {H}_{\text{int}}\) must be replaced by something slightly different from \(-\mathscr {L}_{\text{int}}\). We thus generalize Matthews’ theorem to

$$\displaystyle \begin{aligned} T \exp \left[ -\mathrm{i}\int \mathrm{d}^4 x \mathscr{H}_{\text{int}}(x) \right] = T^{\ast} \exp \left[\mathrm{i}\int \mathrm{d}^4 x \mathscr{L}_{\text{eff}}(x) \right]\;, {} \end{aligned} $$
(11.85)

where \(\mathscr {L}_{\text{eff}}\) is an effective interaction Lagrangian density, which is equivalent to \(\mathscr {L}_{\text{int}}\) only when Matthews’ theorem holds true. As an example of a situation where the theorem does not hold true, we consider the Lagrangian density

$$\displaystyle \begin{aligned} \mathscr{L} = -\frac{1}{2} D_{ab} (\varphi)\partial_{\lambda} \varphi_{a} \partial_{\lambda} \varphi_{b} - V(\varphi)\;, {} \end{aligned} $$
(11.86)

where \(\varphi_{a}\) (a = 1, 2, …, N) are real scalar fields and \(D_{ab}\) is a real, positive-definite matrix which is a function of φ. In addition, we assume that D and V do not include derivatives of φ. We shall now derive the effective interaction Lagrangian density for this theory.

When \(x_{0} = y_{0}\), the canonical commutation relations read

$$\displaystyle \begin{aligned} \big[ \boldsymbol{\varphi}_{a}(x) , \boldsymbol{\varphi}_{b}(y)\big]=0\;, {}\end{aligned} $$
(11.87)
$$\displaystyle \begin{aligned} \big[ \boldsymbol{\varphi}_{a}(x) , \boldsymbol{\dot{\varphi}}_{b}(y)\big] = \mathrm{i} C_{ab}(\boldsymbol{\varphi})\delta^{3}(x-y)\;, {} \end{aligned} $$
(11.88)

where C is the inverse matrix of D, i.e.,

$$\displaystyle \begin{aligned} \sum_{b}C_{ab}D_{bc} = \sum_{b}D_{ab}C_{bc} = \delta_{ac}\;. {} \end{aligned} $$
(11.89)

Summing over repeated indices, the Euler–Lagrange equation gives

$$\displaystyle \begin{aligned} \Box \boldsymbol{\varphi}_{a} = \boldsymbol{j}_{a}\;, {} \end{aligned} $$
(11.90)
$$\displaystyle \begin{aligned} \boldsymbol{j}_{a} = C_{ab} \left( \frac{1}{2} \frac{\partial}{\partial \boldsymbol{\varphi}_{b}} D_{cd} - \frac{\partial}{ \partial \boldsymbol{\varphi}_{d}}D_{bc} \right) \partial_{\lambda} \boldsymbol{\varphi}_{c} \partial_{\lambda} \boldsymbol{\varphi}_{d} + C_{ab}\frac{\partial}{\partial \boldsymbol{\varphi}_{b}}V\;. {} \end{aligned} $$
(11.91)

We now introduce a generating functional for the Green’s functions:

$$\displaystyle \begin{aligned} \mathscr{T}[J] = \Big\langle \boldsymbol{0} \Big| T \exp \left[ -\mathrm{i}\int \mathrm{d}^4 x\, J_{a} (x) \boldsymbol{\varphi}_{a}(x)\right]\Big| \boldsymbol{0} \Big\rangle\;. {} \end{aligned} $$
(11.92)

Therefore, combining (11.90) and the equal-time commutation relation (11.88),

(11.93)

Expressing this in terms of the generating functional,

(11.94)

Using \(\mathscr {T}[J]\), the second term on the right-hand side can be expressed as

$$\displaystyle \begin{aligned} J_{b}(x)C_{ab}\left(\mathrm{i}\frac{\updelta}{\updelta J(x)}\right) \mathscr{T}[J]\;. {} \end{aligned} $$
(11.95)

In the interaction picture, considering the T-product which includes only two derivatives of φ,

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} T\big[ \partial_{\mu} \varphi (x) , \partial_{\nu} \varphi (y) , \varphi (z), \ldots\big] & =&\displaystyle T^{\ast}\big[ \partial_{\mu}\varphi (x) , \partial_{\nu}\varphi (y) , \varphi (z), \ldots \big]\\ & &\displaystyle - \mathrm{i} n_{\mu}n_{\nu} \delta^{4} (x-y)T\big[\varphi (z) , \ldots \big]\;. \end{array} \end{aligned} $$
(11.96)

Noting that this derivation uses only the equal-time commutation relations and not the field equation, we can easily extend it to the Heisenberg picture:

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} T\big[ \partial_{\mu} \boldsymbol{\varphi}_{a} (x) , \partial_{\nu} \boldsymbol{\varphi}_{b} (y) , \boldsymbol{\varphi}_{c} (z), \ldots \big] & =&\displaystyle T^{\ast}\big[ \partial_{\mu}\boldsymbol{\varphi}_{a} (x) , \partial_{\nu} \boldsymbol{\varphi}_{b} (y) , \boldsymbol{\varphi}_{c} (z), \ldots \big] \\ & &\displaystyle -\mathrm{i} n_{\mu}n_{\nu} \delta^{4} (x-y)T\big[ C_{ab} ( \boldsymbol{\varphi }(x)) , \boldsymbol{\varphi}_{c}(z), \ldots \big]\;. \end{array} \end{aligned} $$
(11.97)

In particular, if we take y → x and μ = ν = λ, then since \(n_{\lambda}n_{\lambda} = -1\),

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} T\big[ \partial_{\lambda} \boldsymbol{\varphi}_{a} (x) , \partial_{\lambda} \boldsymbol{\varphi}_{b} (y) , \boldsymbol{\varphi}_{c} (z), \ldots \big] & =&\displaystyle T^{\ast}\big[ \partial_{\lambda}\boldsymbol{\varphi}_{a} (x) , \partial_{\lambda} \boldsymbol{\varphi}_{b} (y) , \boldsymbol{\varphi}_{c} (z), \ldots \big]\\ & &\displaystyle +\mathrm{i}\delta^{4} (0)T\big[ C_{ab} ( \boldsymbol{\varphi }(x)) , \boldsymbol{\varphi}_{c}(z), \ldots \big]\;. \end{array} \end{aligned} $$
(11.98)

Using this result, the first term on the right-hand side of (11.94) can be written

(11.99)

where \(C_{ab}(x)\) is an abbreviation for \(C_{ab}(\boldsymbol{\varphi}(x))\), and similarly for \(D_{cd}(x)\).

Thus, \(\mathscr {T}[J]\) satisfies

(11.100)

where

$$\displaystyle \begin{aligned} F_{a}\big(\boldsymbol{\varphi}(x)\big) = C_{ab} \left( \frac{1}{2} \frac{\partial D_{cd}(x)}{\partial \boldsymbol{\varphi}_{b}(x)} - \frac{\partial D_{bc}(x)}{\partial \boldsymbol{\varphi}_{d}(x)} \right) C_{cd}(x)\;. {} \end{aligned} $$
(11.101)

Note that \(\boldsymbol{j}_{a}\) can be taken out of the T-product as a functional derivative with respect to J. Rewriting the third term on the right-hand side of (11.100), we have

$$\displaystyle \begin{aligned} J_{b}(x)C_{ab}\left(\mathrm{i}\frac{\updelta}{\updelta J (x)} \right) = C_{ab} \left(\mathrm{i} \frac{\updelta}{\updelta J(x)} \right)J_{b}(x) - \mathrm{i}\delta^{4}(0)\left. \frac{\partial C_{ab}(x)}{\partial \boldsymbol{\varphi}_{b}(x)} \right|{}_{\varphi \rightarrow \mathrm{i} \updelta / \updelta J}\;. {} \end{aligned} $$
(11.102)

Then the coefficient of \(\mathrm{i}\delta^{4}(0)\) in (11.100) is

$$\displaystyle \begin{aligned} F_{a} - \frac{\partial C_{ab}}{\partial \boldsymbol{\varphi}_{b}} &= \frac{1}{2} C_{ab}\frac{\partial D_{cd}}{\partial \boldsymbol{\varphi}_{b}}C_{cd} - C_{ab}\left( \frac{\partial D_{bc}}{\partial \boldsymbol{\varphi}_{d}}C_{cd} + D_{bc}\frac{\partial C_{cd}}{\partial \boldsymbol{\varphi}_{d}} \right) \\ &=\frac{1}{2}C_{ab}\frac{\partial}{\partial \boldsymbol{\varphi}_{b}}\ln (\det D) - C_{ab}\frac{\partial}{\partial \boldsymbol{\varphi}_{d}}\delta _{bd} \\ &=\frac{1}{2}C_{ab}\frac{\partial}{\partial \boldsymbol{\varphi}_{b}}\ln (\det D) \\ &=\frac{1}{2}G_{a}\;. {} \end{aligned} $$
(11.103)
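The step from the first to the second line of (11.103) uses Jacobi’s formula, \(\partial \ln (\det D)/\partial \boldsymbol{\varphi}_{b} = C_{cd}\, \partial D_{cd}/\partial \boldsymbol{\varphi}_{b}\). This can be spot-checked numerically; the matrix D(φ) below is a hypothetical positive-definite example chosen for illustration, not taken from the text:

```python
import numpy as np

def D_of_phi(phi):
    # Hypothetical positive-definite, phi-dependent matrix:
    # D_ab = delta_ab + phi_a * phi_b
    return np.eye(len(phi)) + np.outer(phi, phi)

rng = np.random.default_rng(0)
phi = rng.normal(size=4)
C = np.linalg.inv(D_of_phi(phi))   # C_ab is the inverse of D_ab, eq. (11.89)

eps = 1e-6
errs = []
for b in range(len(phi)):
    d = np.zeros_like(phi); d[b] = eps
    # finite-difference derivative of ln det D with respect to phi_b
    num = (np.log(np.linalg.det(D_of_phi(phi + d)))
           - np.log(np.linalg.det(D_of_phi(phi - d)))) / (2 * eps)
    # Jacobi's formula: sum over c, d of C_cd * dD_cd/dphi_b
    dD = (D_of_phi(phi + d) - D_of_phi(phi - d)) / (2 * eps)
    errs.append(abs(num - np.sum(C * dD)))

max_err = max(errs)
```

Here `np.sum(C * dD)` is the double contraction \(C_{cd}\,\partial D_{cd}/\partial \boldsymbol{\varphi}_{b}\), using the fact that both matrices are symmetric.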

Consequently, (11.100) can be written as

(11.104)

Multiplying this on the left by D, we obtain

(11.105)

This is the equation satisfied by the generating functional \(\mathscr {T}[J]\).

In order to derive a Feynman–Dyson-like formula in a theory like this, we must express the solution of the equation for \(\mathscr {T}[J]\) in terms of φ in the interaction picture. We thus test the following quantity:

$$\displaystyle \begin{aligned} \text{``}\mathscr{T}[J]\text{''} = \Big\langle 0 \Big| T^{\ast} \exp \bigg[ \mathrm{i}\int \mathrm{d}^4x\big[ \mathscr{L}_{\text{int}}(x) - J_{a}(x)\varphi_{a}(x)\big] \bigg] \Big| 0 \Big\rangle\;, {} \end{aligned} $$
(11.106)

where

$$\displaystyle \begin{aligned} \mathscr{L}_{\text{int}} = \mathscr{L} + \frac{1}{2} (\partial_{\lambda}\varphi_{a})^{2}\;. {} \end{aligned} $$
(11.107)

To obtain the functional equation satisfied by this \(\text{``}\mathscr {T}[J]\text{''}\), we begin with

$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{``}\mathscr{T}[J]\text{''} & =&\displaystyle \exp \bigg[\mathrm{i}\int \mathrm{d}^4 x \mathscr{L}_{\text{int}} \left(\mathrm{i}\frac{\updelta}{\updelta J(x)} \right) \bigg] \Big\langle 0 \Big| \exp \left[ -\mathrm{i}\int \mathrm{d}^4x J_{b}(x) \varphi_{b}(x)\right]\Big| 0 \Big\rangle \\ & =&\displaystyle \exp \bigg[\mathrm{i}\int \mathrm{d}^4 x \mathscr{L}_{\text{int}} \left(\mathrm{i}\frac{\updelta}{\updelta J(x)} \right) \bigg] \exp \left[ - \frac{1}{2} \int \mathrm{d}^4x \int \mathrm{d}^4y J_{b}(x)\varDelta_{\mathrm{F}}(x-y)J_{b}(y) \right]\,,\\ {} \end{array} \end{aligned} $$
(11.108)

where we replace φ by iδ∕δJ in \(\mathscr {L}_{\text{int}}\). For the transformation on the right-hand side, we have used (11.72). Hence,
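The Gaussian form of the free generating functional in the second line of (11.108) is just Wick’s theorem for the pairing of sources. A zero-dimensional analogue, in which the field is a single Gaussian variable of variance σ² playing the role of Δ_F, can be checked by Monte Carlo (the sample size and parameter values are arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, J = 1.3, 0.7                       # "propagator" and source strength
x = rng.normal(0.0, sigma, size=2_000_000)

# Zero-dimensional analogue of <0| exp(-i J phi) |0>
lhs = np.mean(np.exp(-1j * J * x)).real

# Gaussian pairing of the sources: exp(-J^2 sigma^2 / 2)
rhs = np.exp(-0.5 * J**2 * sigma**2)
```

The two values agree up to the Monte Carlo statistical error of order \(1/\sqrt{N}\).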

(11.109)

Since the last term on the right-hand side is the same as the left-hand side, these terms cancel out:

(11.110)

If we replace V  in this equation by

$$\displaystyle \begin{aligned} V^{\prime} = V + \frac{\mathrm{i}}{2} \delta^{4}(0) \ln(\det D)\;, {} \end{aligned} $$
(11.111)

it coincides with (11.105). Hence, \(\mathscr {T}[J]\) can be obtained by replacing V by \(V^{\prime}\) in \(\text{``}\mathscr {T}[J]\text{''}\) and normalizing in such a way that it is equal to unity when J = 0. Thus,

$$\displaystyle \begin{aligned} \mathscr{T}[J] = \frac{ \big\langle 0 \big| T^{\ast} \exp \Big\{\mathrm{i}\int \mathrm{d}^4x \big[\mathscr{L}_{\text{eff}} (\varphi (x)) - J_{a}(x) \varphi_{a}(x)\big] \Big\}\big| 0 \big\rangle }{ \big\langle 0 \big| T^{\ast} \exp \big[\mathrm{i}\int \mathrm{d}^4x \mathscr{L}_{\text{eff}} (\varphi (x))\big] \big| 0 \big\rangle}\;, {} \end{aligned} $$
(11.112)

where

$$\displaystyle \begin{aligned} \mathscr{L}_{\text{eff}}= \mathscr{L}_{\text{int}} - \frac{\mathrm{i}}{2}\delta^{4}(0)\ln (\det D)\;. {} \end{aligned} $$
(11.113)

It thus turns out that, in this example, Matthews’ theorem has been modified.

The next question concerns the properties of the additional term. First of all, this term is purely imaginary and contains the divergent factor \(\delta^{4}(0)\). In space-time coordinates, we have the following representation of the δ-function:

$$\displaystyle \begin{aligned} \delta^{4}(x) = \frac{1}{(2\pi)^{4}}\int \mathrm{d}^4k\, \mathrm{e}^{\mathrm{i} k \cdot x} \rightarrow \delta^{4}(0) = \frac{1}{(2\pi)^{4}} \int \mathrm{d}^4k\;. {} \end{aligned} $$
(11.114)

Thus \(\delta^{4}(0)\) is a fourth-order divergence in momentum space, and here it appears with an imaginary coefficient. Indeed, if we compute a closed loop without the additional term, a fourth-order divergence arises in this theory. Introducing a cutoff Λ in momentum space, we obtain

$$\displaystyle \begin{aligned} \int \mathrm{d}^4k \frac{k^{2}}{k^{2} + m^{2} - \mathrm{i}\epsilon} \sim \frac{\pi^{2}}{2}\varLambda^{4} - \mathrm{i}\pi^{2} m^{2} \varLambda^{2}\;. {} \end{aligned} $$
(11.115)

This implies that the divergences up to second order give divergent contributions to the masses and coupling constants, as will be discussed later, but those contributions are real. The fourth-order divergence in the equation above, on the other hand, is imaginary. Such a contribution cannot be removed by renormalization and breaks unitarity. Fortunately, the additional term mentioned above automatically cancels this fourth-order divergence, so in this sense a safety mechanism is built into the theory.

The result above can also be derived by the path-integral method, which gives the additional term a different interpretation and produces the result more easily than the method used here. Since the path-integral method is based on the Lagrangian, it is better suited to deriving results involving the Lagrangian and the T-product.

11.6 Reduction Formula in the Interaction Picture

So far we have discussed how to compute the S-matrix in the interaction picture. Combining the reduction formula given in Sect. 8.3 with the Gell-Mann–Low relation, we can also express the S-matrix elements in terms of Green’s functions in the Heisenberg picture.

In Dyson’s formula, the interaction Hamiltonian density or Lagrangian density appears explicitly in the expression for the S-matrix. We may ask whether it is possible to derive an expression which does not depend explicitly on the form of the interaction.

For simplicity, we consider the charged scalar field and analyze the S-matrix elements for the scattering process

$$\displaystyle \begin{aligned} a+b \rightarrow a^{\prime}+b^{\prime}\;. {} \end{aligned} $$
(11.116)

To do so, we expand \(U(\infty , -\infty)\) in normal products using Wick’s theorem and determine the coefficient of the term

$$\displaystyle \begin{aligned} :\! \varphi^{\dagger}_{a} \varphi^{\dagger}_{b}\varphi_{a}\varphi_{b}\!:\;. {} \end{aligned} $$
(11.117)

Since this requires us to read off the normal product from \(U(\infty , -\infty)\) and contract the rest, we need to calculate

$$\displaystyle \begin{aligned} \Big\langle 0 \Big| \frac{\updelta}{\updelta \varphi^{\dagger}_{a}(x^{\prime}_{1})} \frac{\updelta}{\updelta \varphi^{\dagger}_{b}(x^{\prime}_{2})} \frac{\updelta}{\updelta \varphi_{a}(x_{1})} \frac{\updelta}{\updelta \varphi_{b} (x_{2})} U(\infty , -\infty) \Big| 0 \Big\rangle\;. {} \end{aligned} $$
(11.118)

Writing the final state as |a , b 〉, the S-matrix element is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} \langle a^{\prime} , b^{\prime} | S - 1 | a,b \rangle & =&\displaystyle \int \mathrm{d}^4x^{\prime}_{1} \mathrm{d}^4x^{\prime}_{2} \mathrm{d}^4x_{1}\mathrm{d}^4x_{2} \big\langle a^{\prime} \big| \varphi^{\dagger}_{a} (x^{\prime}_{1}) \big| 0 \big\rangle \big\langle b^{\prime} \big| \varphi^{\dagger}_{b}(x^{\prime}_{2}) \big|0 \big\rangle \\ & &\displaystyle \times \Big\langle 0 \Big| \frac{\updelta}{\updelta \varphi^{\dagger}_{a}(x^{\prime}_{1})} \frac{\updelta}{\updelta \varphi^{\dagger}_{b}(x^{\prime}_{2})} \frac{\updelta}{\updelta \varphi_{a}(x_{1})} \frac{\updelta}{\updelta \varphi_{b} (x_{2})} U(\infty , -\infty) \Big| 0 \Big\rangle \\ & &\displaystyle \times \frac{\langle 0 | \varphi_{a} (x_{1}) | a\rangle\langle 0 | \varphi_{b} (x_{2}) | b \rangle}{ \langle 0 | U(\infty , - \infty) | 0 \rangle}\;. {} \end{array} \end{aligned} $$
(11.119)

From the reduction formula, the functional derivative is given by

(11.120)

Denoting the Klein–Gordon operator on the left-hand side by \(K^{a}_{x}\) and using (11.120), equation (11.119) becomes

$$\displaystyle \begin{aligned} \begin{array}{rcl} \langle a^{\prime} , b^{\prime} | S - 1 | a, b \rangle & =&\displaystyle \langle 0 | U(\infty , - \infty) | 0 \rangle^{-1} \\ & &\displaystyle \times \int \mathrm{d}^4x^{\prime}_{1}\mathrm{d}^4x^{\prime}_{2}\mathrm{d}^4x_{1}\mathrm{d}^4x_{2}\big\langle a^{\prime} \big| \varphi^{\dagger}_{a} (x^{\prime}_{1}) \big| 0 \big\rangle \big\langle b^{\prime} \big| \varphi^{\dagger}_{b}(x^{\prime}_{2}) \big|0 \big\rangle \\ & &\displaystyle \times (-\mathrm{i})^{4} K^{a}_{x^{\prime}_{1}}K^{b}_{x^{\prime}_{2}}K^{a}_{x_{1}}K^{b}_{x_{2}} \big\langle 0 \big| T \big[ \varphi_{a}(x^{\prime}_{1}) \varphi_{b}(x^{\prime}_{2}) \varphi^{\dagger}_{a}(x_{1}) \varphi^{\dagger}_{b}(x_{2}) U(\infty , - \infty)\big]\big| 0 \big\rangle \\ & &\displaystyle \times \langle 0 | \varphi_{a} (x_{1}) | a\rangle\langle 0 | \varphi_{b} (x_{2}) | b \rangle\;. {} \end{array} \end{aligned} $$
(11.121)

Here \(\langle 0 | \varphi_{a}(x) | a\rangle\), which we may call a one-body wave function, has the same structure in both the interaction picture and the Heisenberg picture; the only difference would be a proportionality coefficient. Although it is not trivial to separate the whole Lagrangian density into the free part and the interaction part, the two expressions are equal if we use the renormalized interaction picture discussed in the next chapter:

$$\displaystyle \begin{aligned} \langle 0 | \varphi_{a}(x) |a \rangle = \langle \boldsymbol{0} | \boldsymbol{\varphi}_{a}(x) | \boldsymbol{a} \rangle = \frac{1}{\sqrt{2p_{0}V}} \mathrm{e}^{\mathrm{i} p\cdot x}\;. {} \end{aligned} $$
(11.122)

If we use the Gell-Mann–Low relation in this case, (11.121) can be expressed solely in terms of quantities in the Heisenberg picture:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \langle a^{\prime} , b^{\prime} | S - 1 | a, b \rangle & =&\displaystyle \int \mathrm{d}^4x^{\prime}_{1}\mathrm{d}^4x^{\prime}_{2}\mathrm{d}^4x_{1}\mathrm{d}^4x_{2} \big\langle \boldsymbol{a^{\prime}} \big| \boldsymbol{\varphi}^{\dagger}_{a} (x^{\prime}_{1}) \big| \boldsymbol{0} \big\rangle \big\langle \boldsymbol{b}^{\prime} \big| \boldsymbol{\varphi}^{\dagger}_{b}(x^{\prime}_{2}) \big| \boldsymbol{0} \big\rangle \\ & &\displaystyle \times K^{a}_{x^{\prime}_{1}}K^{b}_{x^{\prime}_{2}}K^{a}_{x_{1}}K^{b}_{x_{2}} \big\langle \boldsymbol{0} \big| T \big[ \boldsymbol{\varphi}_{a}(x^{\prime}_{1}) \boldsymbol{\varphi}_{b}(x^{\prime}_{2}) \boldsymbol{\varphi}^{\dagger}_{a}(x_{1}) \boldsymbol{\varphi}^{\dagger}_{b}(x_{2}) \big] \big|\boldsymbol{0} \big\rangle \\ & &\displaystyle \times \langle \boldsymbol{0} | \varphi_{a} (x_{1}) | \boldsymbol{a} \rangle \langle \boldsymbol{0} | \varphi_{b} (x_{2}) | \boldsymbol{b} \rangle\;. {} \end{array} \end{aligned} $$
(11.123)

Unlike Dyson’s formula, in the above expression of the S-matrix element, the explicit form of the interaction does not appear. The problem in the interaction picture of separating the Lagrangian into the free part and the interaction part does not arise. However, in the process of deriving this formula, we have made the assumption (11.7), which is hard to justify. In fact, this result is justified only when we start with the renormalized interaction picture discussed above. Thus, we have to discuss the asymptotic conditions which lead to the above formula in the framework of the Heisenberg picture.

11.7 Asymptotic Conditions

The derivation of the S-matrix element in the Heisenberg picture in the previous section has been based on several assumptions. The question is whether or not we can derive the same result from clearer assumptions. In fact, this was done by Lehmann, Symanzik, and Zimmermann. The assumptions they made are called asymptotic conditions [113].

In Sect. 6.3, we introduced two kinds of asymptotic field in connection with the Yang–Feldman formalism. The asymptotic fields \(\varphi^{\text{in}}\) and \(\varphi^{\text{out}}\) for the real scalar field φ satisfy

$$\displaystyle \begin{aligned} (\Box - m^{2})\varphi^{\text{in}}(x) = 0\;, \qquad (\Box - m^{2})\varphi^{\text{out}}(x) = 0\;, {} \end{aligned} $$
(11.124)
$$\displaystyle \begin{aligned} \big[\varphi^{\text{in}}(x) , \varphi^{\text{in}}(y)\big] = \mathrm{i} \varDelta (x-y) = \big[ \varphi^{\text{out}}(x) , \varphi^{\text{out}}(y)\big]\;. {} \end{aligned} $$
(11.125)

Since these two asymptotic fields are generated from the same Heisenberg field \(\boldsymbol{\varphi}\), we know that \(\varphi^{\text{in}}\) and \(\varphi^{\text{out}}\) are not independent of one another. This implies that they will not commute. Intuitively speaking, as in the case of the Yang–Feldman formalism,

$$\displaystyle \begin{aligned} \boldsymbol{\varphi}(x) \;\longrightarrow\; \left\{ \begin{array}{l}\varphi^{\text{in}}(x)\;,\quad t \rightarrow - \infty\;,\\ \varphi^{\text{out}}(x)\;,\quad t \rightarrow \infty\;.\end{array}\right. \end{aligned} $$
(11.126)

As just described, the reason why the field approaches a free field as t → ±∞ is that the particles are then far away from each other and unaffected by the other particles, so they behave like free particles. This fact is closely related to the issue of renormalization discussed in the next chapter, and to make this idea more rigorous we have to express the wave function of a particle, not by a plane wave, but by a wave packet.

We assume that some function f(x) satisfies the conditions

$$\displaystyle \begin{aligned} (\Box - m^{2})f(x) = 0\;, {} \end{aligned} $$
(11.127)
$$\displaystyle \begin{aligned} -\mathrm{i}\int \mathrm{d}^3x \left( f \frac{\partial f^{\ast}}{\partial x_{0}} - f^{\ast}\frac{\partial f}{\partial x_{0}} \right) =1\;. {} \end{aligned} $$
(11.128)

Then corresponding to this f, we introduce the operators

$$\displaystyle \begin{aligned} &\boldsymbol{\varphi}_{f}(t) =-\mathrm{i} \int \mathrm{d}^3x \left[ \boldsymbol{\varphi}(x)\frac{\partial f^{\ast}(x)}{\partial x_{0}} - f^{\ast}(x)\frac{\partial \boldsymbol{\varphi}(x)}{\partial x_{0}} \right]\;, {} \end{aligned} $$
(11.129)
$$\displaystyle \begin{aligned} &\boldsymbol{\varphi}^{\dagger}_{f}(t) =\mathrm{i}\int \mathrm{d}^3x \left[ \boldsymbol{\varphi} (x) \frac{\partial f(x)}{\partial x_{0}} - f(x)\frac{\partial \boldsymbol{\varphi}(x)}{\partial x_{0}} \right]\;, {} \end{aligned} $$
(11.130)

where t = x 0. We then define the corresponding asymptotic fields \(\varphi ^{\text{in}}_{f}\) and \(\varphi ^{\text{out}}_{f}\) by

$$\displaystyle \begin{aligned} &\lim_{\tau \rightarrow - \infty} \big(\varPhi , \boldsymbol{\varphi}_{f}(\tau) \varPsi\big) = \big( \varPhi, \varphi^{\text{in}}_{f} \varPsi\big)\;, {} \end{aligned} $$
(11.131)
$$\displaystyle \begin{aligned} &\lim_{\tau \rightarrow \infty} \big( \varPhi , \boldsymbol{\varphi}_{f}(\tau) \varPsi\big) = \big( \varPhi, \varphi^{\text{out}}_{f} \varPsi\big)\;. {} \end{aligned} $$
(11.132)

Here Φ and Ψ are arbitrary normalizable state vectors; starting instead from \(\boldsymbol{\varphi}^{\dagger}_{f}(\tau)\), we can define the asymptotic fields \(\varphi^{\dagger\text{in}}_{f}\) and \(\varphi^{\dagger\text{out}}_{f}\) in the same way. Note that the right-hand sides of (11.131) and (11.132) no longer depend on the time variable. Such a limit of an operator in the sense of matrix elements is called weak convergence, in contrast to strong convergence, which is defined in the sense of the norm.

Next we consider an orthogonal system of wave functions. A wave function here means the matrix element of a field operator between the vacuum and a one-particle state. We consider the set of functions \(\{f_{\alpha}(x)\}\) satisfying (11.127), (11.128), and the condition

$$\displaystyle \begin{aligned} -\mathrm{i}\int \mathrm{d}^3x \left( f_{\alpha}\frac{\partial f^{\ast}_{\beta}}{\partial x_{0}} - f^{\ast}_{\beta}\frac{\partial f_{\alpha}}{\partial x_{0}}\right) = \delta_{\alpha \beta}\;. {} \end{aligned} $$
(11.133)
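The Klein–Gordon inner product in (11.133) can be checked numerically for the simplest choice of \(f_{\alpha}\): box-normalized plane waves \(\mathrm{e}^{\mathrm{i}(px - Et)}/\sqrt{2EL}\) in one spatial dimension (the box size, mass, and grid below are arbitrary choices for this sketch):

```python
import numpy as np

L, m, N = 10.0, 1.0, 512
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

def mode(n, t=0.0):
    # box-normalized positive-frequency plane wave, p_n = 2*pi*n/L
    p = 2 * np.pi * n / L
    E = np.sqrt(p**2 + m**2)
    return np.exp(1j * (p * x - E * t)) / np.sqrt(2 * E * L), E

def kg_inner(na, nb):
    # -i * integral( f_a d0(f_b*) - f_b* d0(f_a) ) dx, with d0 f = -iE f
    fa, Ea = mode(na)
    fb, Eb = mode(nb)
    integrand = -1j * (fa * (1j * Eb) * np.conj(fb)
                       - np.conj(fb) * (-1j * Ea) * fa)
    return (np.sum(integrand) * dx).real
```

On this grid, `kg_inner(n, n)` returns 1 and `kg_inner(n, n')` vanishes for distinct mode numbers, reproducing \(\delta_{\alpha\beta}\).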

The completeness condition for this system of orthogonal functions is

$$\displaystyle \begin{aligned} \sum_{\alpha} f_{\alpha}(x)f^{\ast}_{\alpha}(y) = \mathrm{i} \varDelta^{(+)}(x-y)\;. {} \end{aligned} $$
(11.134)

We now introduce the complete system of state vectors \(\{\varPhi^{\text{in}}\}\). Assuming that \(\varPhi_{0}\) is the vacuum,

$$\displaystyle \begin{aligned} \begin{array}{c} \varPhi_{0}\;, \\ \varPhi^{\text{in}}_{\alpha} = \varphi^{\dagger \text{in}}_{\alpha}\varPhi_{0}\;,\\ \vdots \\ \varPhi^{\text{in}}_{\alpha_{1} \ldots \alpha_{k}} = (p_{\alpha_{1} \ldots \alpha_{k}})^{-1/2}\varphi^{\dagger \text{in}}_{\alpha_{1}} \ldots \varphi^{\dagger \text{in}}_{\alpha_{k}}\varPhi_{0}\;, \end{array} \end{aligned} $$
(11.135)

where \(p_{\alpha _{1} \ldots \alpha _{k}} = n_{1}!n_{2}! \ldots n_{r}!\) and \(n_{i}\) stands for the number of particles occupying the i th one-particle state among (α 1, …, α k). Replacing \(\varphi^{\dagger \text{in}}\) by \(\varphi^{\dagger \text{out}}\), we can also construct the complete system \(\{\varPhi^{\text{out}}\}\). The S-matrix can be defined as the unitary transformation between these two complete orthonormal systems. It will be shown later that this definition reproduces the S-matrix elements given in the previous section. In the next chapter, it will be shown that it also coincides with the definition of the S-matrix in the Lippmann–Schwinger theory, viz.,

$$\displaystyle \begin{aligned} S_{\beta \alpha} = (\varPhi^{\text{out}}_{\beta} , \varPhi^{\text{in}}_{\alpha})\;. {} \end{aligned} $$
(11.136)

An equivalent definition is

$$\displaystyle \begin{aligned} \varPhi^{\text{in}}_{\alpha} = S \varPhi^{\text{out}}_{\alpha}\;. {} \end{aligned} $$
(11.137)

It is clear from the definition that, for the two asymptotic fields,

$$\displaystyle \begin{aligned} (\varPhi^{\text{in}}_{\beta} , \varphi^{\text{in}}_{f} \varPhi^{\text{in}}_{\alpha}) =(\varPhi^{\text{out}}_{\beta} , \varphi^{\text{out}}_{f}\varPhi^{\text{out}}_{\alpha})\;. {} \end{aligned} $$
(11.138)

Combining (11.137) and (11.138),

$$\displaystyle \begin{aligned} \varphi^{\text{out}}_{f} = S^{-1}\varphi^{\text{in}}_{f}S\;. {} \end{aligned} $$
(11.139)

Similarly,

$$\displaystyle \begin{aligned} \varphi^{\dagger \text{out}}_{f} = S^{-1}\varphi^{\dagger \text{in}}_{f} S\;. {} \end{aligned} $$
(11.140)

We introduce φ in(x) by

$$\displaystyle \begin{aligned} \varphi^{\text{in}}(x) = \sum_{\alpha} \big[ f^{\ast}_{\alpha}(x) \varphi^{\dagger \text{in}}_{\alpha} + f_{\alpha}(x)\varphi^{\text{in}}_{\alpha}\big]\;, {} \end{aligned} $$
(11.141)

and define φ out by the same equation, viz.,

$$\displaystyle \begin{aligned} \varphi^{\text{out}}(x) = S^{-1}\varphi^{\text{in}}(x)S\;. {} \end{aligned} $$
(11.142)

Since the latter coincides with (6.32), we see that this is the same S-matrix as given previously. In connection with (11.122) in the previous section, we mentioned that we did not distinguish between the in-state and the out-state for the one-particle state in the Heisenberg picture. This implies that, for a stable one-particle state α, we must have

$$\displaystyle \begin{aligned} \varPhi^{\text{in}}_{\alpha} = \varPhi^{\text{out}}_{\alpha}\;. {} \end{aligned} $$
(11.143)

Combining this with (11.122) in the previous section,

(11.144)
$$\displaystyle \begin{aligned} \big( \varPhi_{0} , \boldsymbol{\varphi}(x) \varPhi^{\text{in}}_{\alpha}\big) = \big( \varPhi_{0} , \boldsymbol{\varphi}(x) \varPhi^{\text{out}}_{\alpha}\big) = f_{\alpha}(x)\;. {} \end{aligned} $$
(11.145)

We will discuss this requirement in the context of renormalization in the next chapter.

Equations (11.131) and (11.132), together with the assumption of the existence of the asymptotic field, are called asymptotic conditions. Starting from these conditions, we will derive the LSZ reduction formula in the Heisenberg picture, or as they called it, the magic formula (Zauberformel) [113]. We introduce a more concise notation:

$$\displaystyle \begin{aligned} T (x_{1} , \ldots , x_{n}) = T [ \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi}(x_{n}) ]\;, {}\end{aligned} $$
(11.146)
$$\displaystyle \begin{aligned} \tau (x_{1} , \ldots , x_{n}) = \big(\varPhi_{0} , T [ \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi}(x_{n}) ] \varPhi_{0}\big)\;, {}\end{aligned} $$
(11.147)
(11.148)

To begin with, we prove the following equation:

$$\displaystyle \begin{aligned} \big(\varPhi_{0} , T(x_{1}, \ldots , x_{n}) \varPhi^{\text{in}}_{\alpha}\big) = -\mathrm{i}\int \mathrm{d}^4y f_{\alpha}(y)K_{y}\tau (x_{1}, \ldots , x_{n},y)\;. {} \end{aligned} $$
(11.149)

The left-hand side is

(11.150)

Noting that \(\varphi^{\text{out}}_{\alpha}\) is an annihilation operator, in the limit \(y_{0} \rightarrow \infty\), we have

(11.151)

Taking the difference between the two equations above,

(11.152)

We combine this with Green’s theorem:

(11.153)

Here, we use the fact that, since \(f_{\alpha}(y)\) is a wave packet, i.e., a localized wave, it vanishes at spatial infinity. Therefore, combining (11.152) with (11.153),

(11.154)

where we have used \(K_{y}f_{\alpha}(y) = 0\). Generalizing this equation further, we obtain

$$\displaystyle \begin{aligned} \big( \varPhi^{\text{out}}_{\alpha} , T (x_{1} , \ldots , x_{n}) \varPhi^{\text{in}}_{\beta} \big) & = -\mathrm{i} \int \mathrm{d}^4\eta f_{\beta_{l}}(\eta) K_{\eta} \big( \varPhi^{\text{out}}_{\alpha} , T( x_{1} , \ldots , x_{n} , \eta ) \varPhi^{\text{in}}_{\beta_{1} \ldots \beta_{l-1}}\big) \\ &= -\mathrm{i} \int \mathrm{d}^4 \zeta f^{\ast}_{\alpha_{k}}(\zeta) K_{\zeta} \big( \varPhi^{\text{out}}_{\alpha_{1} \ldots \alpha_{k-1}} , T(x_{1} , \ldots , x_{n} , \zeta) \varPhi^{\text{in}}_{\beta}\big)\;, {} \end{aligned} $$
(11.155)

where \(\alpha = (\alpha_{1}, \ldots, \alpha_{k})\) and \(\beta = (\beta_{1}, \ldots, \beta_{l})\). We have assumed that there is no common one-particle state between α and β. Under a similar assumption, the S-matrix element becomes

(11.156)

What we understand from this is that, when k = l = 2, the above expression is basically the same as (11.123).

Although we considered the matrix element in the above derivation, it also holds true for the operator, i.e.,

$$\displaystyle \begin{aligned} -\mathrm{i}\int \mathrm{d}^4y\, f_{\alpha}(y)K_{y}T(x_{1}, \ldots , x_{n} , y) = T( x_{1}, \ldots , x_{n} ) \varphi^{\dagger \text{in}}_{\alpha} - \varphi^{\dagger \text{out}}_{\alpha} T (x_{1} , \ldots , x_{n})\;, {} \end{aligned} $$
(11.157)

and

$$\displaystyle \begin{aligned} \mathrm{i}\int \mathrm{d}^4y\, f^{\ast}_{\alpha}(y) K_{y} T(x_{1} , \ldots , x_{n} , y) = T (x_{1} , \ldots , x_{n}) \varphi^{ \text{in}}_{\alpha} - \varphi^{ \text{out}}_{\alpha} T (x_{1} , \ldots , x_{n})\;. {} \end{aligned} $$
(11.158)

Combining (11.134) and (4.18),

$$\displaystyle \begin{aligned} \int \mathrm{d}^4y\, \varDelta (y - x)K_{y}T(x_{1} , \ldots , x_{n} , y) = T (x_{1} , \ldots , x_{n}) \varphi^{ \text{in}}(x) - \varphi^{ \text{out}}(x) T (x_{1} , \ldots , x_{n})\;. {} \end{aligned} $$
(11.159)

This is the operator form of the LSZ reduction formula, which corresponds to (11.120). Putting together

$$\displaystyle \begin{aligned} S \varphi^{\text{out}} (x) = \varphi^{\text{in}}(x)S {} \end{aligned} $$
(11.160)

and (11.159),

$$\displaystyle \begin{aligned} \int \mathrm{d}^4y \varDelta (y - x) K_{y} ST (x_{1} , \ldots , x_{n} , y) = \big[ ST (x_{1} , \ldots , x_{n}), \varphi^{\text{in}}(x) \big]\;, {} \end{aligned} $$
(11.161)
$$\displaystyle \begin{aligned} \int \mathrm{d}^4y \varDelta (x - y) K_{y} ST (x_{1} , \ldots , x_{n} , y) = \big[\varphi^{\text{in}}(x), ST (x_{1} , \ldots , x_{n}) \big]\;. {} \end{aligned} $$
(11.161′)

Using the above recursively,

(11.162)

Taking the vacuum expectation value of this and using one of the renormalization conditions mentioned in the next chapter, viz.,

$$\displaystyle \begin{aligned} S \varPhi_{0} = \varPhi_{0}\;, {} \end{aligned} $$
(11.163)

we obtain

(11.164)

For n = 0,

(11.165)

The operator form of the S-matrix is determined by (11.165). Expanding the S-matrix in the normal product form based on Wick’s theorem, we have

$$\displaystyle \begin{aligned} S = \sum^{\infty}_{l=0} \frac{1}{l !} \int \mathrm{d}^4y_{1} \ldots \mathrm{d}^4y_{l} c (y_{1}, \ldots , y_{l}):\! \varphi^{\text{in}}(y_{1}) \ldots \varphi^{\text{in}}(y_{l})\!:\;, {} \end{aligned} $$
(11.166)

where we have assumed that c is symmetric with respect to \(y_{1}, y_{2}, \ldots, y_{l}\). When we insert (11.166) into (11.165), the only term remaining on the right-hand side is the one containing the normal-ordered product of l operators, whence

(11.167)

Then c can be determined uniquely from (11.167), at least on the mass shell, i.e., for the Fourier components satisfying the Einstein energy–momentum dispersion relation. Moreover, since only the value of c on the mass shell contributes to (11.166),

$$\displaystyle \begin{aligned} c (y_{1}, \ldots , y_{l}) = (-\mathrm{i})^{l}K_{y_{1}} \ldots K_{y_{l}}\tau (y_{1}, \ldots , y_{l})\;. {} \end{aligned} $$
(11.168)

Substituting this into (11.166),

$$\displaystyle \begin{aligned} S=\sum^{\infty}_{l=0}\frac{1}{l!}\int \mathrm{d}^4y_{1} \ldots \mathrm{d}^4y_{l}\, (-\mathrm{i})^{l} K_{y_{1}} \ldots K_{y_{l}}\tau (y_{1}, \ldots , y_{l}) :\! \varphi^{\text{in}}(y_{1}) \ldots \varphi^{\text{in}}(y_{l})\!:\;, {} \end{aligned} $$
(11.169)

where we have assumed that the term corresponding to l = 0 is equal to unity. This is the operator form of the S-matrix. In addition, going back to (11.164), we have

(11.170)

As just described, many reduction formulae can be obtained from the asymptotic conditions. Indeed, the last formula effectively defines the quantization method for fields.

11.8 Unitarity Condition on the Green’s Function

The unitarity of the S-matrix is obvious as long as the S-matrix element is defined by (11.136) as a transition matrix between two complete orthonormal systems \(\{\varPhi^{\text{in}}\}\) and \(\{\varPhi^{\text{out}}\}\). When the asymptotic states form complete systems like this, we speak of asymptotic completeness. In this section, we extend the unitarity of the S-matrix from unitarity on the mass shell to unitarity off the mass shell. This can be expressed by the unitarity condition for the Green’s functions.

To begin with, we consider the operator

$$\displaystyle \begin{aligned} T \exp \bigg[ -\mathrm{i}\int \mathrm{d}^4x\, J(x) \boldsymbol{\varphi}(x) \bigg]\;. {} \end{aligned} $$
(11.171)

This operator is unitary, and denoting the operator for inverse time-ordering by \(\tilde {T}\),

$$\displaystyle \begin{aligned} T \exp \bigg[ -\mathrm{i}\int \mathrm{d}^4x\, J(x) \boldsymbol{\varphi}(x) \bigg] \tilde{T}\exp \bigg[ \mathrm{i}\int \mathrm{d}^4y J(y)\boldsymbol{\varphi}(y) \bigg] = 1\;. {} \end{aligned} $$
(11.172)
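The unitarity expressed by (11.172) can be illustrated numerically in a finite-dimensional toy model (a sketch, not the field-theoretic operator): the time-ordered exponential is approximated by an ordered product of short-time factors, each of which is exactly unitary for a Hermitian generator, so the product is unitary as well.

```python
import numpy as np

# Toy check of the unitarity of a time-ordered exponential (finite-dimensional
# sketch): T exp[-i ∫ dt H(t)] is approximated by an ordered product of
# short-time factors exp(-i H(t_k) Δt), each exactly unitary since H(t_k) is
# Hermitian; hence the product is unitary as well.

def expm_mi(H):
    """exp(-i H) for a Hermitian matrix H, via its spectral decomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T

def time_ordered_exp(H_of_t, t0, t1, steps=2000):
    dt = (t1 - t0) / steps
    U = np.eye(H_of_t(t0).shape[0], dtype=complex)
    for k in range(steps):
        tk = t0 + (k + 0.5) * dt
        U = expm_mi(H_of_t(tk) * dt) @ U   # later times act from the left
    return U

# A non-commuting family of generators, so the time ordering matters.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U = time_ordered_exp(lambda t: np.cos(t) * sx + np.sin(t) * sz, 0.0, 3.0)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True
```

The discretization mirrors the definition of the T-exponential as an ordered product over infinitesimal time slices.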

Functionally differentiating this equation n times with respect to J and subsequently setting J = 0,

$$\displaystyle \begin{aligned} \sum_{\text{comb}}(-\mathrm{i})^{k}{\mathrm{i}}^{n-k}T(x^{\prime}_{1}, \ldots , x^{\prime}_{k}) \tilde{T} (x^{\prime}_{k+1}, \ldots , x^{\prime}_{n}) = 0\;, {} \end{aligned} $$
(11.173)

where \((x^{\prime }_{1} , \ldots , x^{\prime }_{n})\) is a permutation of \((x_{1}, \ldots , x_{n})\) and we sum over all ways of dividing the set of n variables into two complementary subsets. We take the vacuum expectation value of this equation. Inserting the complete system \(\{\varPhi^{\text{in}}\}\) between T and \(\tilde {T}\), we use the equation

(11.174)

and its complex conjugate

(11.175)

Then using (11.134), we sum over intermediate states. We use the notation

$$\displaystyle \begin{aligned} \bar{\tau}(x_{1}, \ldots , x_{n}) = (-\mathrm{i})^{n} K_{x_{1}} \ldots K_{x_{n}} \tau (x_{1}, \ldots , x_{n})\;. {} \end{aligned} $$
(11.176)

The Fourier transform of \(\bar{\tau}\) is the S-matrix element itself if all momenta are on the mass shell. Rewriting the expectation value of (11.173),

(11.177)

where l! in the denominator is a factor introduced to ensure that we do not count the same state more than once, and the prime on \(\sum ^{\prime }\) indicates that we omit the terms with k = 0 and k = n. When all momenta in the Fourier transform of this equation are restricted to the mass shell, it becomes the unitarity condition; hence, when the momenta lie off the mass shell, the Fourier transform can be taken as its generalization. We call (11.177) the generalized unitarity condition. In fact, it should be obvious from the following discussion that (11.177) yields the unitarity condition for the S-matrix on the mass shell. Using

$$\displaystyle \begin{aligned} S=1+ \sum^{\infty}_{l=1}\frac{1}{l !} \int \mathrm{d}^4x_{1} \ldots \mathrm{d}^4x_{l} \bar{\tau} (x_{1}, \ldots , x_{l}):\! \varphi^{\text{in}}(x_{1}) \ldots \varphi^{\text{in}}(x_{l})\!:\;, {} \end{aligned} $$
(11.178)
$$\displaystyle \begin{aligned} S^{\dagger}=1+ \sum^{\infty}_{l=1}\frac{1}{l !} \int \mathrm{d}^4x_{1} \ldots \mathrm{d}^4x_{l} \bar{\tau}^{\ast} (x_{1}, \ldots , x_{l}) :\! \varphi^{\text{in}}(x_{1}) \ldots \varphi^{\text{in}}(x_{l})\!:\;, {} \end{aligned} $$
(11.179)

we expand \(SS^{\dagger}\) as a sum of normal products:

(11.180)

Therefore, comparing the coefficients of the normal products, we see that the coefficient of

$$\displaystyle \begin{aligned} :\! \varphi^{\text{in}}(x_{1}) \ldots \varphi^{\text{in}}(x_{n})\!: \end{aligned}$$

is equal to the right-hand side of (11.177). Thus,

$$\displaystyle \begin{aligned} SS^{\dagger} = 1\;. {} \end{aligned} $$
(11.181)

Similarly,

$$\displaystyle \begin{aligned} S^{\dagger}S=1\;, {} \end{aligned} $$
(11.182)

using (11.177) with \(\bar{\tau}\) replaced by \(\bar{\tau}^{\ast}\). As claimed, the unitarity of the S-matrix and of the Green’s functions are consequences of asymptotic completeness.
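As an elementary finite-dimensional analogue (a sketch, with a random Hermitian matrix standing in for the interaction), the content of (11.181) can be recast as a nonlinear condition on the transition part of S: writing S = 1 + iT, the condition SS† = 1 is equivalent to i(T† − T) = TT†, the generalized optical theorem.

```python
import numpy as np

# Finite-dimensional analogue of S-matrix unitarity.  Writing S = 1 + iT,
# S S† = 1 is equivalent to i(T† − T) = T T†.  (Toy sketch; H is a random
# Hermitian stand-in for the interaction.)
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                  # Hermitian "interaction"
w, V = np.linalg.eigh(H)
S = (V * np.exp(1j * w)) @ V.conj().T     # S = exp(iH), unitary by construction

T = -1j * (S - np.eye(n))                 # S = 1 + iT
lhs = 1j * (T.conj().T - T)               # i(T† − T)
rhs = T @ T.conj().T                      # T T†
print(np.allclose(lhs, rhs))              # True
```

The off-diagonal elements of this identity are the analogue of the unitarity sum over intermediate states.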

In the above, we considered T-products of Heisenberg operators. Next, we introduce Green’s functions based on the retarded product, introduced in Sect. 6.2.

11.9 Retarded Green’s Functions

If A(x) is a local field, its retarded product is defined by

$$\displaystyle \begin{aligned} R \big[ \boldsymbol{A}(x) : \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi}(x_{n}) \big] = (-\mathrm{i})^{n} \sum_{\text{perm}} \theta (x - x^{\prime}_{1}) \theta (x^{\prime}_{1} - x^{\prime}_{2}) \ldots \theta (x^{\prime}_{n-1} - x^{\prime}_{n}) \Big[ \ldots \big[ [ \boldsymbol{A}(x), \boldsymbol{\varphi}(x^{\prime}_{1}) ], \boldsymbol{\varphi}(x^{\prime}_{2}) \big], \ldots , \boldsymbol{\varphi}(x^{\prime}_{n}) \Big]\;, {} \end{aligned} $$
(11.183)

where θ(x) stands for \(\theta (x_{0})\), with \(x^{\prime }_{1}, \ldots , x^{\prime }_{n}\) a permutation of \(x_{1}, \ldots , x_{n}\), and we sum over all permutations. The only permutations to contribute are those satisfying \(x^{\prime }_{1}>x^{\prime }_{2}> \ldots > x^{\prime }_{n}\) for the time variables. We introduce the unitary operator (11.171), denoting it by U :

$$\displaystyle \begin{aligned} U=T \exp \left[ -\mathrm{i}\int \mathrm{d}^4x\, J(x)\boldsymbol{\varphi}(x) \right]\;. {} \end{aligned} $$
(11.184)

Therefore,

$$\displaystyle \begin{aligned} U^{-1} = U^{\dagger} = \tilde{T}\exp \left[\mathrm{i}\int \mathrm{d}^4x\, J(x) \boldsymbol{\varphi}(x) \right]\;. {} \end{aligned} $$
(11.185)

We now introduce the generating functional

$$\displaystyle \begin{aligned} A_{R}[x,J] = U^{\dagger} T [U \boldsymbol{A}(x)]\;. {} \end{aligned} $$
(11.186)

It is then easy to check that the R-product above can be expressed by

$$\displaystyle \begin{aligned} R [ \boldsymbol{A}(x) : \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi}(x_{n}) ] = \left. \frac{\updelta^{n}A_{R}[x,J]}{\updelta J(x_{1}) \ldots \updelta J(x_{n})} \right|{}_{J=0}\;. {} \end{aligned} $$
(11.187)
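For orientation, the lowest cases take the following explicit form (a standard form, consistent with (11.187)):

$$\displaystyle \begin{aligned} R\big[\boldsymbol{A}(x) : \boldsymbol{\varphi}(x_{1})\big] &= -\mathrm{i}\,\theta (x - x_{1}) \big[\boldsymbol{A}(x), \boldsymbol{\varphi}(x_{1})\big]\;, \\ R\big[\boldsymbol{A}(x) : \boldsymbol{\varphi}(x_{1})\boldsymbol{\varphi}(x_{2})\big] &= (-\mathrm{i})^{2} \Big\{\theta (x - x_{1})\theta (x_{1} - x_{2}) \big[[\boldsymbol{A}(x), \boldsymbol{\varphi}(x_{1})], \boldsymbol{\varphi}(x_{2})\big] + (x_{1} \leftrightarrow x_{2})\Big\}\;. \end{aligned} $$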

Directly from the definition,

$$\displaystyle \begin{aligned} \frac{\updelta}{\updelta J(y)} A_{R}[x,J] = - \mathrm{i}\theta (x-y)\big[ A_{R}[x,J] , \varphi_{R}[y,J] \big]\;. {} \end{aligned} $$
(11.188)

In particular, taking A = φ, we have

$$\displaystyle \begin{aligned} \frac{\updelta}{\updelta J(y)} \varphi_{R}[x,J] - \frac{\updelta}{\updelta J(x)} \varphi_{R}[y,J] + \mathrm{i} \big[ \varphi_{R} [ x , J ] , \varphi_{R} [ y , J ] \big] = 0\;. {} \end{aligned} $$
(11.189)

This is called a unitarity condition. It corresponds to (11.173) in the case of the T-product. In addition, functionally differentiating A R a total of n times with respect to J,

(11.190)

Then taking the Hermitian conjugate of the reduction formula for the T-product, viz.,

$$\displaystyle \begin{aligned} \int \mathrm{d}^4 y \varDelta (x-y) K_{y} ST (x_{1}, \ldots , x_{n}, y) =\big[ \varphi^{\text{in}}(x) , ST(x_{1} , \ldots , x_{n}) \big]\;, {} \end{aligned} $$
(11.191)

we obtain

$$\displaystyle \begin{aligned} \int \mathrm{d}^4 y \varDelta (x-y) K_{y} \tilde{T} (x_{1}, \ldots , x_{n}, y)S^{\dagger} = - \big[ \varphi^{\text{in}}(x) , \tilde{T}(x_{1} , \ldots , x_{n}) S^{\dagger} \big]\;. {} \end{aligned} $$
(11.192)

Combining the three equations above,

$$\displaystyle \begin{aligned} \int \mathrm{d}^4 y \varDelta (x-y) K_{y}R (w : x_{1}, \ldots , x_{n}, y)= - \mathrm{i}\big[ \varphi^{\text{in}}(x) , R(w : x_{1} , \ldots , x_{n}) \big]\;, {} \end{aligned} $$
(11.193)

where

$$\displaystyle \begin{aligned} R (w : x_{1}, \ldots , x_{n}, y) = R\big[ \boldsymbol{A}(w) : \boldsymbol{\varphi}(x_{1}) \ldots \boldsymbol{\varphi}(x_{n}) \big]\;. {} \end{aligned} $$
(11.194)

Then using (11.193) iteratively,

(11.195)

We now expand R as a sum of normal products:

$$\displaystyle \begin{aligned} R (w : x_{1}, \ldots , x_{n}) = \sum^{\infty}_{l=0} \frac{1}{l!} \int \mathrm{d}^4y_{1} \ldots \mathrm{d}^4y_{l}\, f (w : x_{1}, \ldots , x_{n} , y_{1} , \ldots , y_{l}) :\! \varphi^{\text{in}}(y_{1}) \ldots \varphi^{\text{in}}(y_{l})\!:\;, {} \end{aligned} $$
(11.196)

where we have assumed that f is a symmetric function with respect to \(y_{1}, \ldots , y_{l}\). Inserting this into (11.195) and taking the vacuum expectation value,

(11.197)

However, from (11.195), the right-hand side is equivalent to

$$\displaystyle \begin{aligned} \int \mathrm{d}^4y_{1} \ldots \mathrm{d}^4y_{l} \varDelta (z_{1} - y_{1}) \ldots \varDelta (z_{l} - y_{l}) r (w : x_{1}, \ldots , x_{n} , y_{1} , \ldots , y_{l})\;, {} \end{aligned} $$
(11.198)

where

$$\displaystyle \begin{aligned} r (w : x_{1}, \ldots , x_{n} , y_{1} , \ldots , y_{l}) = K_{y_{1}} \ldots K_{y_{l}} (\varPhi_{0}, R(w : x_{1}, \ldots , x_{n} , y_{1} , \ldots , y_{l}) \varPhi_{0})\;. {} \end{aligned} $$
(11.199)

Thus, if the momenta corresponding to \(y_{1}, \ldots , y_{l}\) are on the mass shell,

$$\displaystyle \begin{aligned} f(w : x_{1}, \ldots , x_{n} , y_{1} , \ldots , y_{l}) = r(w : x_{1}, \ldots , x_{n} , y_{1} , \ldots , y_{l})\;. {} \end{aligned} $$
(11.200)

Since only the values on the mass shell contribute,

$$\displaystyle \begin{aligned} R (w : x_{1}, \ldots , x_{n}) = \sum^{\infty}_{l=0} \frac{1}{l!} \int \mathrm{d}^4y_{1} \ldots \mathrm{d}^4y_{l}\, r (w : x_{1}, \ldots , x_{n} , y_{1} , \ldots , y_{l}) :\! \varphi^{\text{in}}(y_{1}) \ldots \varphi^{\text{in}}(y_{l})\!:\;. {} \end{aligned} $$
(11.201)

In particular, for n = 0,

$$\displaystyle \begin{aligned} \boldsymbol{A}(w) = \sum^{\infty}_{l =0} \frac{1}{l!} \int \mathrm{d}^4y_{1} \ldots \mathrm{d}^4y_{l}r (w : y_{1}, \ldots , y_{l}) :\! \varphi^{\text{in}}(y_{1}) \ldots \varphi^{\text{in}}(y_{l})\!:\;. {} \end{aligned} $$
(11.202)

Moreover, if we take A = φ, then when \((\varPhi_{0}, \boldsymbol{\varphi}(x)\varPhi_{0}) = 0\), we have

$$\displaystyle \begin{aligned} \boldsymbol{\varphi}(x) = \varphi^{\text{in}}(x) + \sum^{\infty}_{l=2} \frac{1}{l!} \int \mathrm{d}^4y_{1} \ldots \mathrm{d}^4y_{l}r ( x : y_{1}, \ldots , y_{l} ) :\! \varphi^{\text{in}}(y_{1}) \ldots \varphi^{\text{in}}(y_{l}) \!:\;. {} \end{aligned} $$
(11.203)

This gives the formal solution to the Yang–Feldman equation introduced in Sect. 6.3. In addition, from (11.189),
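The iterative structure behind this formal solution can be illustrated with a classical toy model (a sketch under simplifying assumptions: one degree of freedom, and a cubic source −λφ³ chosen purely for illustration). The analogue of the Yang–Feldman equation is φ(t) = φ_in(t) + ∫₀ᵗ ds G_ret(t−s)(−λφ(s)³) with the retarded Green’s function G_ret(t) = θ(t) sin(mt)/m, and iterating from φ_in converges to a solution of the nonlinear equation of motion.

```python
import numpy as np

# Classical 0+1-dimensional toy of the Yang–Feldman iteration (a sketch; the
# cubic source −λφ³ is an illustrative choice, not from the text):
#   (d²/dt² + m²) φ = −λ φ³,
#   φ(t) = φ_in(t) + ∫₀ᵗ ds G_ret(t−s) (−λ φ(s)³),  G_ret(t) = θ(t) sin(mt)/m.
m, lam = 1.0, 0.05
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
phi_in = np.cos(m * t)                       # free incoming solution

phi = phi_in.copy()
for _ in range(25):                          # fixed-point (Volterra) iteration
    src = -lam * phi**3
    new = phi_in.copy()
    for i in range(1, len(t)):
        g = np.sin(m * (t[i] - t[:i + 1])) / m * src[:i + 1]
        new[i] += dt * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoid rule
    phi = new

# Verify the equation of motion by central differences in the interior.
phi_ddot = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dt**2
residual = phi_ddot + m**2 * phi[1:-1] + lam * phi[1:-1]**3
print(np.max(np.abs(residual)) < 1e-2)       # True: the iterate solves the ODE
```

Because the kernel is retarded, each iterate is a Volterra integral, and the iteration converges on any finite time interval.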

(11.204)

Taking the vacuum expectation value of this equation and using the reduction formula obtained by inserting the complete system \(\{\varPhi^{\text{in}}\}\), we obtain a non-linear equation for the system \((\varPhi_{0}, R(x : x_{1}, \ldots , x_{n})\varPhi_{0})\). This is also one of the generalized unitarity conditions.

Both the in- and the out-states appear in the reduction formula for the T-product, while only the in-states appear in the R-products.