1 Integral Transforms

In many different situations encountered in engineering and in physics, one may find it useful to define the integral transform of a given function f(t), namely

$$\begin{aligned} \tilde{f} (k) = \int \limits _a^b f(t) \, K(t, k)\, \mathrm {d}\, t, \end{aligned}$$
(2.1)

where K(t, k) is the kernel of the integral transform. Typically, different integral transforms correspond to different choices of the kernel function K(t, k) and of the integration interval (a, b). The purpose of defining an integral transform is to establish a rule linking a function of “time”, f(t), to a transformed function, \(\tilde{f} (k)\), of the independent variable k. This independent variable can be physically interpreted either as a “frequency” or as a “wave number”. On the other hand, depending on the type of transform and on its applications, t can be interpreted either as time or as a spatial Cartesian coordinate; in the latter case, it is denoted by x.

Examples of integral transforms are:

  • the Fourier transform,

    $$ K(t, k) = \frac{\mathrm {e}^{-\mathrm {i}\, k t}}{\sqrt{2 \pi }} \; , \quad (a,b) = (-\infty , \infty ) \;, \quad k \in \mathbb {R} \; ; $$
  • the Laplace transform,

    $$ K(t, k) = \mathrm {e}^{- k t} \; , \quad (a,b) = (0, \infty ) \;, \quad k \in \mathbb {C} \; ; $$
  • the Hankel, or Fourier–Bessel, transform,

    $$ K(t, k) = t\, J_n( k t) \; , \quad (a,b) = (0, \infty ) \;, \quad k \in \mathbb {R} \; , $$

    where \(J_n\) denotes the Bessel function of the first kind and order \(n \in \mathbb {N}_0\);

  • the Mellin transform,

    $$ K(t, k) = t^{k-1} \; , \quad (a,b) = (0, \infty ) \;, \quad k \in \mathbb {R} \; . $$

In the following, we will focus on the Fourier transform, as it is by far the most important one for the stability analysis of flow systems.
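
As a purely illustrative numerical sketch (assuming NumPy and SciPy are available), Eq. (2.1) may be approximated by quadrature. The example below evaluates the Laplace transform of f(t) = e^{-t}, whose exact transform is 1/(k + 1), at the arbitrarily chosen value k = 2; the helper name `integral_transform` is an illustrative choice, not standard terminology.

```python
# A minimal sketch of Eq. (2.1) evaluated by numerical quadrature.
import numpy as np
from scipy.integrate import quad

def integral_transform(f, kernel, a, b, k):
    """Evaluate f_tilde(k) = int_a^b f(t) K(t, k) dt by quadrature."""
    value, _ = quad(lambda t: f(t) * kernel(t, k), a, b)
    return value

f = lambda t: np.exp(-t)
laplace_kernel = lambda t, k: np.exp(-k * t)   # Laplace kernel from the list above

k = 2.0
print(integral_transform(f, laplace_kernel, 0.0, np.inf, k))   # ~ 0.3333
print(1.0 / (k + 1.0))                                         # exact value: 1/3
```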

2 The Fourier Transform

The aim of this section is not to provide an exhaustive and mathematically rigorous survey of the Fourier transform and its properties. In fact, there exist several monographs devoted to the study of integral transforms, more or less mathematically oriented [1, 2, 3, 6].

2.1 Definition

Hereafter, we will choose to define and study the Fourier transform of functions of x, intended as a coordinate. Thus, we interpret k as a wave number along the x-direction. This choice, which at this stage is just a matter of notation, will turn out to be appropriate for the stability analysis applications.

Given a function f(x), the Fourier transform is defined as

$$\begin{aligned} \mathfrak {F}\{f(x)\}(k) = \tilde{f} (k) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } f(x) \, \mathrm {e}^{-\mathrm {i}\, k x}\, \mathrm {d}\, x \; . \end{aligned}$$
(2.2)

The conditions for this transform to be well defined are:

  • f(x) is piecewise continuously differentiable;

  • f(x) is absolutely integrable on \(\mathbb {R}\), namely

    $$\begin{aligned} \int \limits _{-\infty }^{\infty } |f(x)|\, \mathrm {d}\, x < \infty \; . \end{aligned}$$
    (2.3)
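
A simple numerical check of definition (2.2) can be made for the Gaussian f(x) = e^{-x²/2}, whose Fourier transform with the convention adopted here is e^{-k²/2}. The sketch below, assuming NumPy and SciPy, splits the complex integral into its real and imaginary parts and evaluates them by quadrature; the sample values of k are arbitrary.

```python
# Quadrature-based sketch of the Fourier transform (2.2).
import numpy as np
from scipy.integrate import quad

def fourier_transform(f, k):
    """f_tilde(k) = (1/sqrt(2 pi)) * int f(x) exp(-i k x) dx, via real/imaginary parts."""
    re, _ = quad(lambda x: f(x) * np.cos(k * x), -np.inf, np.inf)
    im, _ = quad(lambda x: -f(x) * np.sin(k * x), -np.inf, np.inf)
    return (re + 1j * im) / np.sqrt(2 * np.pi)

f = lambda x: np.exp(-x**2 / 2)
for k in (0.0, 1.0, 2.0):
    print(k, fourier_transform(f, k), np.exp(-k**2 / 2))   # numerical vs exact
```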

2.2 Inversion of the Fourier Transform

The possibility of determining the function f(x), when its Fourier transform \(\tilde{f} (k)\) is known, arises from Fourier’s integral formula,

$$\begin{aligned} f(x) = \frac{1}{2 \pi } \int \limits _{-\infty }^{\infty } \left[ \ \int \limits _{-\infty }^{\infty } f(s) \, \mathrm {e}^{-\mathrm {i}\, k s} \, \mathrm {d}\, s \right] \mathrm {e}^{\mathrm {i}\, k x} \, \mathrm {d}\, k \; . \end{aligned}$$
(2.4)

If x is a point where f(x) has a finite jump discontinuity, then Eq. (2.4) must be modified as follows:

$$\begin{aligned} \frac{1}{2} \left[ f(x + 0) + f(x - 0) \right] = \frac{1}{2 \pi } \int \limits _{-\infty }^{\infty } \left[ \ \int \limits _{-\infty }^{\infty } f(s) \, \mathrm {e}^{-\mathrm {i}\, k s} \, \mathrm {d}\, s \right] \mathrm {e}^{\mathrm {i}\, k x} \, \mathrm {d}\, k \; , \end{aligned}$$
(2.5)

where \(x + 0\) and \(x - 0\) indicate that x is approached from the right and from the left, respectively. From Eqs. (2.2) and (2.4), one may obtain the so-called inversion of the Fourier transform

$$\begin{aligned} f(x) = \mathfrak {F}^{-1}\{\tilde{f} (k)\}(x) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \tilde{f} (k) \, \mathrm {e}^{\mathrm {i}\, k x}\, \mathrm {d}\, k \; . \end{aligned}$$
(2.6)

Equation (2.6) is the inversion formula of the Fourier transform. We note that the conditions for the existence of the Fourier transform pose some strong restrictions on the function f(x). These restrictions can be relaxed if one goes beyond the usual concept of function and introduces the generalised functions, or distributions. A rigorous theory of distributions was first introduced by Laurent Schwartz (1915–2002). This French mathematician was awarded the Fields Medal in 1950 for his work on the theory of distributions. We will not describe the details of this mathematical topic here, and we refer the reader to the book by Schwartz [5].

The most important distribution is Dirac’s delta function, \(\delta (x - x_0)\). Dirac’s delta function is such that \(\delta (x - x_0) = 0\) for every \(x \ne x_0\), and it is defined through the relationship

$$\begin{aligned} \int \limits _{-\infty }^{\infty } \varphi (x)\, \delta (x - x_0) \, \mathrm {d}\, x = \varphi (x_0) \; , \end{aligned}$$
(2.7)

where \(\varphi (x)\) is any test function taken from a suitably defined function space. Without any attempt to be rigorous, we merely mention that a typical choice is the space of smooth functions on \((-\infty , \infty )\) with compact support, i.e. functions that vanish outside a bounded open interval in \(\mathbb {R}\) [5]. The typical role of Dirac’s delta function in the mathematical modelling of physical systems is to express the density of a point-like object: a quantity which, in a limiting sense, is infinite at a point \(x_0\) and zero everywhere else.

If we evaluate the Fourier transform of Dirac’s delta function through Eq. (2.2), and we employ Eq. (2.7), then we obtain

$$\begin{aligned} \begin{array}{ccc} &{}f(x) = \delta (x - x_0) \; , \\ &{}\tilde{f} (k) = \displaystyle \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \delta (x - x_0) \, \mathrm {e}^{-\mathrm {i}\, k x}\, \mathrm {d}\, x = \frac{\mathrm {e}^{-\mathrm {i}\, k x_0}}{\sqrt{2 \pi }} \; . \end{array} \end{aligned}$$
(2.8)

Conversely, one may note that the Fourier transform of function

$$\begin{aligned} f(x) = \mathrm {e}^{\mathrm {i}\, a x} \;, \end{aligned}$$
(2.9)

where a is a constant, is given by

$$\begin{aligned} \tilde{f} (k) = \sqrt{2 \pi } \, \delta (k - a) \;. \end{aligned}$$
(2.10)

This result is easily proved by employing the inversion formula of the Fourier transform, Eqs. (2.6) and (2.7). Obviously, the function f(x) defined by Eq. (2.9) is not absolutely integrable on \(\mathbb {R}\), that is, it does not satisfy Eq. (2.3). In fact, its Fourier transform exists only in the extended sense of the theory of distributions.
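
Although Dirac’s delta is not an ordinary function, the sifting property (2.7) can be checked in a limiting sense by replacing \(\delta (x - x_0)\) with a narrow Gaussian of unit area (cf. Eq. (2.49) below). In the sketch that follows, the test function, the point x₀ = 1.5 and the chosen widths are arbitrary illustrative choices.

```python
# Limiting check of the sifting property (2.7) with a narrow Gaussian.
import numpy as np
from scipy.integrate import quad

def delta_approx(x, x0, eps):
    """Gaussian of unit area that tends to delta(x - x0) as eps -> 0+ (cf. Eq. 2.49)."""
    return np.exp(-(x - x0)**2 / (4 * eps)) / (2 * np.sqrt(np.pi * eps))

phi = lambda x: np.cos(x) * np.exp(-x**2 / 10)   # a smooth, rapidly decaying test function
x0 = 1.5
for eps in (1e-1, 1e-2, 1e-3):
    value, _ = quad(lambda x: phi(x) * delta_approx(x, x0, eps),
                    -20.0, 20.0, points=[x0], limit=200)
    print(eps, value)        # tends to phi(x0) as eps decreases
print(phi(x0))               # the exact sifted value
```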

2.3 Some Properties of the Fourier Transform

The Fourier transform has several important properties, most of which are straightforward consequences of its definition (2.2). Some of these properties are listed below; a brief numerical check of the shifting and derivative properties is sketched after the list:

  • Linearity

    $$\begin{aligned} \mathfrak {F}\{ a\, f(x) + b\, g(x) \}(k) = a\, \tilde{f}(k) + b \ \tilde{g}(k) \; , \quad \forall a, b \in \mathbb {R}\; . \end{aligned}$$
    (2.11)
  • Scaling

    $$\begin{aligned} \mathfrak {F}\{ f(a x) \}(k) = \frac{1}{|a|}\ \tilde{f}\bigg ( \frac{k}{a} \bigg ) , \quad \forall a \in \mathbb {R}\;,\quad a \ne 0\; . \end{aligned}$$
    (2.12)
  • Shifting

    $$\begin{aligned} \mathfrak {F}\{ f(x - x_0) \}(k) = \mathrm {e}^{-\mathrm {i}\, k x_0}\, \tilde{f}(k) , \quad \forall x_0 \in \mathbb {R}\; . \end{aligned}$$
    (2.13)
  • Translation

    $$\begin{aligned} \mathfrak {F}\{ \mathrm {e}^{\mathrm {i}\, k_0 x} f(x) \}(k) = \tilde{f}(k - k_0) , \quad \forall k_0 \in \mathbb {R}\; . \end{aligned}$$
    (2.14)
  • Derivative

    $$\begin{aligned} \mathfrak {F}\{ f'(x) \}(k) = \mathrm {i}\, k \, \tilde{f}(k)\; , \end{aligned}$$
    (2.15)
    $$\begin{aligned} \mathfrak {F}\{ f^{(n)}(x) \}(k) = (\mathrm {i}\, k)^n \, \tilde{f}(k) \; , \quad \forall n \in \mathbb {N}\; . \end{aligned}$$
    (2.16)
  • Partial derivative

    $$\begin{aligned} \mathfrak {F}\left\{ \frac{\partial }{\partial x} f(x,t) \right\} (k) = \mathrm {i}\, k \, \tilde{f}(k,t) \; , \quad \mathfrak {F}\left\{ \frac{\partial }{\partial t} f(x,t) \right\} (k) = \frac{\partial }{\partial t} \tilde{f}(k,t) \; , \end{aligned}$$
    (2.17)
    $$\begin{aligned} \mathfrak {F}\left\{ \frac{\partial ^m}{\partial x^m} \frac{\partial ^n}{\partial t^n} f(x,t) \right\} (k) = (\mathrm {i}\, k)^m \, \frac{\partial ^n}{\partial t^n} \tilde{f}(k,t) \; , \quad \forall m,n \in \mathbb {N}_0\; , \end{aligned}$$
    (2.18)

    where, conventionally, \(\partial ^0/\partial x^0 = 1\) and \(\partial ^0/\partial t^0 = 1\).

  • Convolution

    The convolution of two functions f(x) and g(x) is defined as

    $$\begin{aligned} f(x)\! *\! g (x) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } f(x - \hat{x}) g(\hat{x}) \, \mathrm {d}\,\hat{x}\;. \end{aligned}$$
    (2.19)

    Hence, another important property of the Fourier transform is the following:

    $$\begin{aligned} \mathfrak {F}\{ f(x)\! *\! g (x) \}(k) = \tilde{f}(k) \; \tilde{g}(k) \; . \end{aligned}$$
    (2.20)

    We mention that the convolution between two functions has the usual properties of a product, namely

    $$\begin{aligned} \text {commutative} :\quad&\qquad f\! *\! g = g\! *\! f \; ;\nonumber \\ \text {associative} :\quad&\qquad f\! *\! (g\! *\! h) = (f\! *\! g)\! *\! h \; ; \nonumber \\ \text {distributive} :\quad&\qquad f\! *\! (g + h) = f\! *\! g + f\! *\! h \; . \end{aligned}$$
    (2.21)

    Dirac’s delta function plays the role of the neutral element for the convolution,

    $$\begin{aligned} f\! * \delta = \delta *\! f = f \; . \end{aligned}$$
    (2.22)
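
As a consistency check, the shifting property (2.13) and the derivative property (2.15) can be verified numerically for the Gaussian f(x) = e^{-x²/2}, reusing the quadrature-based transform sketched after Eq. (2.3); the values of x₀ and k below are arbitrary choices.

```python
# Numerical check of the shifting (2.13) and derivative (2.15) properties.
import numpy as np
from scipy.integrate import quad

def fourier_transform(f, k):
    re, _ = quad(lambda x: f(x) * np.cos(k * x), -np.inf, np.inf)
    im, _ = quad(lambda x: -f(x) * np.sin(k * x), -np.inf, np.inf)
    return (re + 1j * im) / np.sqrt(2 * np.pi)

f  = lambda x: np.exp(-x**2 / 2)
fp = lambda x: -x * np.exp(-x**2 / 2)          # f'(x)
x0, k = 0.7, 1.3

# Shifting: F{f(x - x0)}(k) = exp(-i k x0) * f_tilde(k)
print(fourier_transform(lambda x: f(x - x0), k),
      np.exp(-1j * k * x0) * fourier_transform(f, k))

# Derivative: F{f'(x)}(k) = i k * f_tilde(k)
print(fourier_transform(fp, k), 1j * k * fourier_transform(f, k))
```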

2.4 Solution of the One-Dimensional Wave Equation

We aim to solve the partial differential equation

$$\begin{aligned} \frac{\partial ^2 \psi (x,t)}{\partial t^2} = c^2\, \frac{\partial ^2 \psi (x,t)}{\partial x^2} \; , \end{aligned}$$
(2.23)

for \(x \in (-\infty , \infty )\) and \(t \in [0, \infty )\) with the initial conditions

$$\begin{aligned} \psi (x, 0) = f(x), \quad \left. \frac{\partial \psi (x,t)}{\partial t}\right| _{t=0} = c\, g'(x) \; . \end{aligned}$$
(2.24)

Here, f(x) and g(x) are functions known a priori, while c is the phase velocity of the wave, k is the wave number, and \(\omega = c k\) is the angular frequency. We apply the Fourier transform operator to both sides of the wave equation and employ Eq. (2.18), so that we obtain

$$\begin{aligned} \frac{\partial ^2 \tilde{\psi }(k,t)}{\partial t^2} + c^2 k^2 \tilde{\psi }(k,t) = 0 \; . \end{aligned}$$
(2.25)

By the same method, the initial conditions can be rewritten as

$$\begin{aligned} \tilde{\psi }(k, 0) = \tilde{f}(k), \quad \left. \frac{\partial \tilde{\psi }(k,t)}{\partial t}\right| _{t=0} = \mathrm {i}\, c k \,\tilde{g}(k) \; . \end{aligned}$$
(2.26)

Therefore, we just have to solve an ordinary differential problem where t is the independent variable and k is a parameter. The general solution of Eq. (2.25) is

$$\begin{aligned} \tilde{\psi }(k,t) = \tilde{a}(k) \, \mathrm {e}^{\mathrm {i}\, c k t} + \tilde{b}(k) \, \mathrm {e}^{- \mathrm {i}\, c k t} \; . \end{aligned}$$
(2.27)

The integration constants \(\tilde{a}(k)\) and \(\tilde{b}(k)\) can be determined from the initial conditions, namely

$$\begin{aligned} \tilde{a}(k) + \tilde{b}(k) = \tilde{f}(k)\; , \quad \tilde{a}(k) - \tilde{b}(k) = \tilde{g}(k)\; , \end{aligned}$$
(2.28)

so that we obtain

$$\begin{aligned} \tilde{a}(k) = \frac{1}{2} \left[ \tilde{f}(k) + \tilde{g}(k) \right] \; , \quad \tilde{b}(k) = \frac{1}{2} \left[ \tilde{f}(k) - \tilde{g}(k) \right] \; . \end{aligned}$$
(2.29)

Hence, the Fourier transform of our solution is given by

$$\begin{aligned} \tilde{\psi }(k,t) = \frac{1}{2}\ \tilde{f}(k) \left( \mathrm {e}^{\mathrm {i}\, c k t} + \mathrm {e}^{- \mathrm {i}\, c k t} \right) + \frac{1}{2}\ \tilde{g}(k) \left( \mathrm {e}^{\mathrm {i}\, c k t} - \mathrm {e}^{- \mathrm {i}\, c k t} \right) \; . \end{aligned}$$
(2.30)

On account of the shifting property, Eq. (2.13), we have

$$\begin{aligned} \mathfrak {F}^{-1}\!\left\{ \tilde{f}(k) \, \mathrm {e}^{\mathrm {i}\, c k t} \right\} = f(x + c t), \quad \mathfrak {F}^{-1}\!\left\{ \tilde{f}(k) \, \mathrm {e}^{- \mathrm {i}\, c k t} \right\} = f(x - c t) \; , \end{aligned}$$
(2.31)
$$\begin{aligned} \mathfrak {F}^{-1}\!\left\{ \tilde{g}(k) \, \mathrm {e}^{\mathrm {i}\, c k t} \right\} = g(x + c t), \quad \mathfrak {F}^{-1}\!\left\{ \tilde{g}(k) \, \mathrm {e}^{- \mathrm {i}\, c k t} \right\} = g(x - c t) \; . \end{aligned}$$
(2.32)

The solution of our problem is

$$\begin{aligned} {\psi }(x,t) = \frac{1}{2} \left[ f(x + c t) + f(x - c t) \right] + \frac{1}{2} \left[ g(x + c t) - g(x - c t) \right] \; . \end{aligned}$$
(2.33)

It is easily verified that, if we introduce \(G(x) = g'(x)\), then we have

$$\begin{aligned} {\psi }(x,t) = \frac{1}{2} \left[ f(x + c t) + f(x - c t) \right] + \frac{1}{2} \int \limits _{x - c t}^{x + c t} G(s) \, \mathrm{d}s \; . \end{aligned}$$
(2.34)
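
A quick numerical sanity check of Eq. (2.33) can be made by choosing simple profiles for f and g (two Gaussians in the sketch below, an arbitrary choice) and verifying, by central finite differences, that the resulting \(\psi\) satisfies the wave equation (2.23) together with the initial conditions (2.24).

```python
# Finite-difference check of the d'Alembert-type solution (2.33).
import numpy as np

c = 2.0
f = lambda x: np.exp(-x**2)
g = lambda x: np.exp(-(x - 1.0)**2)

def psi(x, t):
    return 0.5 * (f(x + c*t) + f(x - c*t)) + 0.5 * (g(x + c*t) - g(x - c*t))

x, t, h = 0.3, 0.7, 1e-4
psi_tt = (psi(x, t + h) - 2*psi(x, t) + psi(x, t - h)) / h**2
psi_xx = (psi(x + h, t) - 2*psi(x, t) + psi(x - h, t)) / h**2
print(psi_tt, c**2 * psi_xx)              # the two sides of Eq. (2.23) agree

# Initial conditions (2.24): psi(x,0) = f(x), d(psi)/dt|_{t=0} = c g'(x)
print(psi(x, 0.0), f(x))
dpsidt0 = (psi(x, h) - psi(x, -h)) / (2*h)
gprime  = (g(x + h) - g(x - h)) / (2*h)
print(dpsidt0, c * gprime)
```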

2.5 Solution of the One-Dimensional Diffusion Equation

Let us now solve the partial differential equation,

$$\begin{aligned} \frac{\partial {\psi (x,t)}}{\partial {t}} = \alpha \, \frac{\partial ^2 {\psi (x,t)}}{\partial {x}^2}\; , \end{aligned}$$
(2.35)

for \(x \in (-\infty , \infty )\) and \(t \in [0, \infty )\) with the initial condition

$$\begin{aligned} \psi (x, 0) = f(x) \; . \end{aligned}$$
(2.36)

Here, \(\alpha \) is the diffusion coefficient, or the thermal diffusivity in the case of the heat conduction equation for a solid.

On applying the Fourier transform operator to both sides of the diffusion equation and employing Eqs. (2.17) and (2.18), we obtain

$$\begin{aligned} \frac{\partial {\tilde{\psi }(k,t)}}{\partial {t}} + \alpha k^2 \tilde{\psi }(k,t) = 0\; . \end{aligned}$$
(2.37)

The initial condition can be rewritten as

$$\begin{aligned} \tilde{\psi }(k, 0) = \tilde{f}(k) \; . \end{aligned}$$
(2.38)

Again, we have to solve an ordinary differential problem where t is the independent variable and k is a parameter. The general solution is

$$\begin{aligned} \tilde{\psi }(k,t) = \tilde{a}(k) \, \mathrm {e}^{- \alpha k^2 t} \; . \end{aligned}$$
(2.39)

The integration constant \(\tilde{a}(k)\) is easily determined from the initial condition, namely

$$\begin{aligned} \tilde{a}(k) = \tilde{f}(k)\; . \end{aligned}$$
(2.40)

Therefore, the Fourier transform of our solution is given by

$$\begin{aligned} \tilde{\psi }(k,t) = \tilde{f}(k) \, \mathrm {e}^{- \alpha k^2 t} \; . \end{aligned}$$
(2.41)

If we denote \(\mathrm {e}^{- \alpha k^2 t}\) as \(\tilde{g}(k,t)\), then we may write

$$\begin{aligned} \tilde{\psi }(k,t) = \tilde{f}(k) \,\tilde{g}(k,t) \; , \end{aligned}$$
(2.42)

and invoke the convolution property, Eq. (2.20), for expressing the solution as

$$\begin{aligned} {\psi }(x,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } f(\hat{x}) g(x - \hat{x},t) \, \mathrm {d}\, \hat{x} \; . \end{aligned}$$
(2.43)

We just have to evaluate the inverse Fourier transform of \(\tilde{g}(k,t)\) by employing Eq. (2.6),

$$\begin{aligned} g(x,t) = \mathfrak {F}^{-1}\{ \mathrm {e}^{- \alpha k^2 t} \}(x) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \mathrm {e}^{- \alpha k^2 t + \mathrm {i}\, k x}\, \mathrm {d}\, k \; . \end{aligned}$$
(2.44)

The integral can be evaluated as follows:

$$\begin{aligned} \begin{array}{ccc} &{}\displaystyle - \alpha k^2 t + \mathrm {i}\, k x = - \left( k \sqrt{\alpha t} - \frac{\mathrm {i}\, x}{2 \sqrt{\alpha t}} \right) ^2 - \frac{x^2}{4 \alpha t} \; , \\ &{}\displaystyle w = k \sqrt{\alpha t} - \frac{\mathrm {i}\, x}{2 \sqrt{\alpha t}} \; , \quad \mathrm {d}\, w = \sqrt{\alpha t}\ \mathrm {d}\, k \; , \quad \mathrm {d}\,k = \frac{\mathrm {d}\,w}{\sqrt{\alpha t}} \; , \\ &{}\displaystyle g(x,t) = \frac{\mathrm {e}^{- x^2/(4 \alpha t)}}{\sqrt{2 \pi \alpha t}} \int \limits _{-\infty }^{\infty } \mathrm {e}^{- w^2}\, \mathrm {d}\, w = \frac{\mathrm {e}^{- x^2/(4 \alpha t)}}{\sqrt{2 \pi \alpha t}} \ \sqrt{\pi } = \frac{\mathrm {e}^{- x^2/(4 \alpha t)}}{\sqrt{2 \alpha t}}\; . \end{array} \end{aligned}$$
(2.45)

Fig. 2.1 Illustration of the limit in Eq. (2.49)

To conclude, the solution can be expressed as

$$\begin{aligned} {\psi }(x,t) = \frac{1}{2 \sqrt{\pi \alpha t}} \int \limits _{-\infty }^{\infty } f(\hat{x}) \, {\mathrm {e}^{- (x - \hat{x})^2/(4 \alpha t)}} \, \mathrm {d}\,\hat{x} \; . \end{aligned}$$
(2.46)

Equation (2.46) allows one to infer that, when the initial condition (2.36) involves a point-like source at \(x=0\),

$$\begin{aligned} \psi (x,0) = f(x) = \delta (x) \;, \end{aligned}$$
(2.47)

Equations (2.43) and (2.45) yield

$$\begin{aligned} \psi (x,t) = \frac{g(x,t)}{\sqrt{2\pi }} = \frac{\mathrm {e}^{- x^2/(4 \alpha t)}}{2 \sqrt{\pi \alpha t}}\; . \end{aligned}$$
(2.48)

This means that the initial point-like distribution gradually spreads over the real axis with a Gaussian profile as time t increases. In fact, Eqs. (2.47) and (2.48) suggest one of the many limit formulae that lead to Dirac’s delta, namely

$$\begin{aligned} \lim _{s \rightarrow 0^{+}} \frac{\mathrm {e}^{- x^2/(4 s)}}{2 \sqrt{\pi s}} = \delta (x) \; . \end{aligned}$$
(2.49)

A sketch of how the Gaussian function becomes more and more peaked as \(s \rightarrow 0^{+}\), thus resembling more and more a distribution with point-like support and infinite strength, viz. Dirac’s delta function, is shown in Fig. 2.1.
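
The Fourier-transform recipe (2.41) can also be tested on a discrete grid by means of the fast Fourier transform. In the sketch below, assuming NumPy, a Gaussian initial condition f(x) = e^{-x²} is evolved in k-space and compared with its exact evolution, \(\psi (x,t) = \mathrm{e}^{-x^2/(1+4\alpha t)}/\sqrt{1+4\alpha t}\); grid size and parameter values are illustrative choices.

```python
# FFT-based sketch of the Fourier method for the diffusion equation (2.35).
import numpy as np

alpha, t = 0.1, 2.0
n, L = 1024, 40.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)          # discrete wave numbers

f = np.exp(-x**2)
psi_hat = np.fft.fft(f) * np.exp(-alpha * k**2 * t)   # Eq. (2.41) on the grid
psi = np.fft.ifft(psi_hat).real

psi_exact = np.exp(-x**2 / (1 + 4*alpha*t)) / np.sqrt(1 + 4*alpha*t)
print(np.max(np.abs(psi - psi_exact)))            # small discretisation error
```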

2.6 Solution of the One-Dimensional Advection–Diffusion Equation

The advection–diffusion equation is an extension of the diffusion equation discussed in Sect. 2.2.5,

$$\begin{aligned} \frac{\partial {\psi (x,t)}}{\partial {t}} + U_0\, \frac{\partial {\psi (x,t)}}{\partial {x}} = \alpha \, \frac{\partial ^2 {\psi (x,t)}}{\partial {x}^2}\; , \end{aligned}$$
(2.50)

for \(x \in (-\infty , \infty )\) and \(t \in [0, \infty )\) with the initial condition

$$\begin{aligned} \psi (x, 0) = f(x) \; . \end{aligned}$$
(2.51)

The constant \(U_0\) defines an imposed uniform flow that advects the diffusing quantity along the x-axis.

By evaluating the Fourier transform of both sides of the advection–diffusion equation and by using Eqs. (2.17) and (2.18), we get

$$\begin{aligned} \frac{\partial {\tilde{\psi }(k,t)}}{\partial {t}} + (\alpha k^2 + \mathrm {i}\, k U_0)\, \tilde{\psi }(k,t) = 0\; . \end{aligned}$$
(2.52)

The initial condition (2.51) yields

$$\begin{aligned} \tilde{\psi }(k, 0) = \tilde{f}(k) \; . \end{aligned}$$
(2.53)

Equations (2.52) and (2.53) define an ordinary differential problem where t is the independent variable and k is a parameter. The general solution is

$$\begin{aligned} \tilde{\psi }(k,t) = \tilde{a}(k) \, \mathrm {e}^{- (\alpha k^2 + \mathrm {i}\, k U_0) t} \; . \end{aligned}$$
(2.54)

The integration constant \(\tilde{a}(k)\) is evaluated from the initial condition, namely

$$\begin{aligned} \tilde{a}(k) = \tilde{f}(k)\; . \end{aligned}$$
(2.55)

The Fourier transform of \(\psi (x,t)\) is expressed as

$$\begin{aligned} \tilde{\psi }(k,t) = \tilde{f}(k) \, \mathrm {e}^{- (\alpha k^2 + \mathrm {i}\, k U_0) t} \; . \end{aligned}$$
(2.56)

On writing \(\mathrm {e}^{- (\alpha k^2 + \mathrm {i}\, k U_0) t}\) as \(\tilde{g}(k,t)\), we obtain

$$\begin{aligned} \tilde{\psi }(k,t) = \tilde{f}(k) \,\tilde{g}(k,t) \; . \end{aligned}$$
(2.57)

We now employ the convolution property, Eq. (2.20), and express \(\psi (x,t)\) as

$$\begin{aligned} {\psi }(x,t) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } f(\hat{x}) g(x - \hat{x}, t) \, \mathrm {d}\, \hat{x} \; . \end{aligned}$$
(2.58)

We evaluate the inverse Fourier transform of \(\tilde{g}(k,t)\) by employing Eq. (2.6),

$$\begin{aligned} g(x,t) = \mathfrak {F}^{-1}\{ \mathrm {e}^{- (\alpha k^2 + \mathrm {i}\, k U_0) t} \}(x) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } \mathrm {e}^{- \alpha k^2 t + \mathrm {i}\, k (x - U_0 t)}\, \mathrm {d}\, k \; . \end{aligned}$$
(2.59)

The evaluation of the integral in Eq. (2.59) yields

$$\begin{aligned} \begin{array}{ccc} &{}\displaystyle - \alpha k^2 t + \mathrm {i}\, k (x - U_0 t) = - \left[ k \sqrt{\alpha t} - \frac{\mathrm {i}\, (x - U_0 t)}{2 \sqrt{\alpha t}} \right] ^2 - \frac{(x - U_0 t)^2}{4 \alpha t} \; , \\ &{}\displaystyle w = k \sqrt{\alpha t} - \frac{\mathrm {i}\, (x - U_0 t)}{2 \sqrt{\alpha t}} \; , \quad \mathrm {d}\, w = \sqrt{\alpha t}\ \mathrm {d}\, k \; , \quad \mathrm {d}\,k = \frac{\mathrm {d}\,w}{\sqrt{\alpha t}} \; , \\ &{}\displaystyle g(x,t) = \frac{\mathrm {e}^{- (x - U_0 t)^2/(4 \alpha t)}}{\sqrt{2 \pi \alpha t}} \int \limits _{-\infty }^{\infty } \mathrm {e}^{- w^2}\, \mathrm {d}\, w = \frac{\mathrm {e}^{- (x - U_0 t)^2/(4 \alpha t)}}{\sqrt{2 \alpha t}}\; . \end{array} \end{aligned}$$
(2.60)

The final expression of \(\psi (x,t)\) can be written as

$$\begin{aligned} {\psi }(x,t) = \frac{1}{2 \sqrt{\pi \alpha t}} \int \limits _{-\infty }^{\infty } f(\hat{x}) \, {\mathrm {e}^{- (x - \hat{x} - U_0 t)^2/(4 \alpha t)}} \, \mathrm {d}\,\hat{x} \; . \end{aligned}$$
(2.61)

With a reasoning similar to that presented in Sect. 2.2.5, Eq. (2.61) implies that, when the initial condition (2.51) describes a point-like source at \(x=0\),

$$\begin{aligned} f(x) = \delta (x) \;, \end{aligned}$$
(2.62)

Equations (2.58) and (2.60) lead to

$$\begin{aligned} \psi (x,t) = \frac{g(x,t)}{\sqrt{2\pi }} = \frac{\mathrm {e}^{- (x - U_0 t)^2/(4 \alpha t)}}{2 \sqrt{\pi \alpha t}}\; . \end{aligned}$$
(2.63)

Equation (2.63) describes a situation where the initial condition, given by a point-like distribution at \(x=0\), gradually spreads over the real axis as time t increases, while the maximum of this Gaussian signal travels along the x-direction with constant velocity \(U_0\).
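
One may also verify numerically, by central finite differences, that the point-source solution (2.63) indeed satisfies the advection–diffusion equation (2.50); the parameter values in the sketch below are arbitrary illustrative choices.

```python
# Finite-difference check that Eq. (2.63) solves Eq. (2.50).
import numpy as np

alpha, U0 = 0.3, 1.5

def psi(x, t):
    return np.exp(-(x - U0*t)**2 / (4*alpha*t)) / (2*np.sqrt(np.pi*alpha*t))

x, t, h = 0.8, 1.2, 1e-4
psi_t  = (psi(x, t + h) - psi(x, t - h)) / (2*h)
psi_x  = (psi(x + h, t) - psi(x - h, t)) / (2*h)
psi_xx = (psi(x + h, t) - 2*psi(x, t) + psi(x - h, t)) / h**2
print(psi_t + U0*psi_x, alpha*psi_xx)     # the two sides of Eq. (2.50) agree
```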

2.7 Solution of the One-Dimensional Schrödinger Equation

The Schrödinger equation for the one-dimensional quantum evolution of a free particle reads [4],

$$\begin{aligned} \mathrm {i}\, \hbar \, \frac{\partial {\psi (x,t)}}{\partial {t}} = {\mathsf {H}}\, \psi (x,t) \;, \end{aligned}$$
(2.64)

where \(\hbar = 1.05457\times 10^{-34}\, \mathrm {J\, s}\) is the reduced Planck’s constant, \(\psi (x,t)\) is the wave function of the particle, and \({\mathsf {H}}\) is the Hamiltonian operator. For a one-dimensional free particle, \({\mathsf {H}}\) is given by

$$\begin{aligned} {\mathsf {H}} = -\, \frac{\hbar ^2}{2 m}\, \frac{\partial ^2 {}}{\partial {x}^2} \;, \end{aligned}$$
(2.65)

where m is the particle mass. Thus, Eq. (2.64) can be rewritten as

$$\begin{aligned} \frac{\partial {\psi (x,t)}}{\partial {t}} = \frac{\mathrm {i}\, \hbar }{2 m}\, \frac{\partial ^2 {\psi (x,t)}}{\partial {x}^2} \;. \end{aligned}$$
(2.66)

Mathematically speaking, Eq. (2.66) is nothing but a diffusion equation (2.35) with an imaginary diffusion coefficient,

$$\begin{aligned} \alpha = \frac{\mathrm {i}\, \hbar }{2 m} . \end{aligned}$$
(2.67)

This information is all that is needed to deduce from Eq. (2.46) the evolution formula, at a given time t, of an initial wave function \(\psi (x,0) = f(x)\), namely

$$\begin{aligned} {\psi }(x,t) = \sqrt{\frac{m}{2 \pi \mathrm {i}\, \hbar t}}\, \int \limits _{-\infty }^{\infty } f(\hat{x}) \, {\mathrm {e}^{\mathrm {i}\, m\, (x - \hat{x})^2/(2 \hbar t)}} \, \mathrm {d}\,\hat{x} \; . \end{aligned}$$
(2.68)
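
A brief numerical sketch of the free-particle evolution follows, using the same FFT recipe adopted above for the diffusion equation but with the imaginary diffusion coefficient (2.67); natural units with \(\hbar = m = 1\) and the initial wave packet are illustrative choices. The check displays the conservation of the norm \(\int |\psi |^2\, \mathrm {d}\, x\), as expected for a unitary evolution.

```python
# FFT-based free-particle evolution with the imaginary diffusivity (2.67).
import numpy as np

hbar, m, t = 1.0, 1.0, 3.0
n, L = 2048, 80.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)

f = np.exp(-x**2) * np.exp(2j * x)              # Gaussian packet with mean momentum
psi = np.fft.ifft(np.fft.fft(f) * np.exp(-1j * hbar * k**2 * t / (2*m)))

dx = L / n
print(np.sum(np.abs(f)**2) * dx, np.sum(np.abs(psi)**2) * dx)   # norm is conserved
```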

3 Plane Waves and Wave Packets

In general, we will call plane wave a function of the form

$$\begin{aligned} \psi (x,t) = A \, \mathrm {e}^{\mathrm {i}\, (k x - \omega t)} \; . \end{aligned}$$
(2.69)

Here, A is the amplitude of the wave, while k and \(\omega \) are the wave number and the angular frequency, respectively. With A, k and \(\omega \) considered as constants, plane waves are typical solutions of the wave equation (2.23), provided that

$$\begin{aligned} \omega = \pm \, c k \;. \end{aligned}$$
(2.70)

This is a special case: c is a characteristic positive constant of the governing equation (2.23) and, hence, it is independent of k; equivalently, \(\omega = \pm \, c k\) means that \(\omega \) depends linearly on k. Such a situation defines a non-dispersive wave propagation. On the other hand, dispersive waves occur when \(\omega \) is a function of k and \(\mathrm {d}^2 \omega /\mathrm {d}\, k^2\) is not identically zero. We also mention that A is not, in general, a constant, as it can be time-dependent.

A linear combination of several plane waves yields a wave packet, namely

$$\begin{aligned} \psi (x,t) = \sum _{k} B(k,t) \, \mathrm {e}^{\mathrm {i}\, [k x - \omega (k)\, t]} \; , \end{aligned}$$
(2.71)

where B(k, t) is the product of the coefficient of the linear combination and the amplitude of each plane wave. An alternative name for the plane waves is normal modes.

Hence, a wave packet is often devised as a superposition of several normal modes. Such a superposition may involve a continuously varying k over a given real interval or, possibly, over the whole real axis. In that case, the sum in Eq. (2.71) becomes an integral over all real values of k. For the continuum limit to be mathematically coherent, B(k, t) becomes infinitesimal, namely \(B(k,t) = b(k,t)\, \mathrm {d}\,k\), so that Eq. (2.71) is rewritten as

$$\begin{aligned} \psi (x,t) = \int \limits _{-\infty }^{\infty } b(k,t) \, \mathrm {e}^{\mathrm {i}\, [k x - \omega (k)\, t]} \, \mathrm {d}\, k \;. \end{aligned}$$
(2.72)

By comparing Eq. (2.6) with Eq. (2.72), one can immediately recognise that the Fourier transform of \(\psi (x,t)\) is given by

$$\begin{aligned} \tilde{\psi }(k,t) = \sqrt{2\pi } \; b(k,t) \, \mathrm {e}^{-\mathrm {i}\, \omega (k)\, t} \; . \end{aligned}$$
(2.73)

On the basis of Eq. (2.73), every function \(\psi (x,t)\) that admits a Fourier transform can be considered as a wave packet.

The main features of wave packets are illustrated in the following Sects. 2.3.1 and 2.3.2. This discussion follows that presented by [4] in Chap. 2 of his book, where the wave–particle duality of quantum mechanics is illustrated. Such a method applies well beyond the domain of quantum theory and is extremely illuminating for general wave phenomena.

3.1 Stationary Waves in x-Space and k-Space

Stationary wave packets are given by Eq. (2.72) when \(\omega (k)=0\) and b(kt) is time-independent, namely

$$\begin{aligned} \psi _\mathrm{s}(x) = \int \limits _{-\infty }^{\infty } b_\mathrm{s}(k)\, \mathrm {e}^{\mathrm {i}\, k x}\, \mathrm {d}\,k \; . \end{aligned}$$
(2.74)

Equation (2.74) defines \(\psi _\mathrm{s}(x)\) as a linear superposition of infinitely many standing plane waves with wavelength \(\lambda = 2\pi /k\). This means that, say, two neighbouring maxima of the real and imaginary parts of \(\mathrm {e}^{\mathrm {i}\, k x}\) are separated by a distance \(2\pi /k\). Each stationary wave, \(\mathrm {e}^{\mathrm {i}\, k x}\), is weighted by the coefficient function \(b_\mathrm{s} (k)\).

One may consider a Gaussian weight function

$$\begin{aligned} b_\mathrm{s}(k) = \mathrm {e}^{- \gamma \left( k - k_0 \right) ^2} \;, \end{aligned}$$
(2.75)

where \(\gamma > 0\) is a constant parameter. One can substitute Eq. (2.75) into (2.74) and evaluate the integral on the right-hand side of Eq. (2.74),

$$\begin{aligned} \psi _\mathrm{s}(x) = \int \limits _{-\infty }^{\infty } \mathrm {e}^{- \gamma \left( k - k_0 \right) ^2}\, \mathrm {e}^{\mathrm {i}\, k x}\, \mathrm {d}\,k = \mathrm {e}^{\mathrm {i}\, k_0 x} \int \limits _{-\infty }^{\infty } \mathrm {e}^{- \gamma \kappa ^2}\, \mathrm {e}^{\mathrm {i}\, \kappa x}\, \mathrm {d}\kappa \nonumber \\ = \mathrm {e}^{\mathrm {i}\, k_0 x} \sqrt{\frac{\pi }{\gamma }} \ \mathrm {e}^{-x^2/(4 \gamma )} \;. \end{aligned}$$
(2.76)

Here, the change of variable \(\kappa = k - k_0\) has been made. On considering the square modulus of \(b_\mathrm{s}(k)\) and the square modulus of \(\psi _\mathrm{s}(x)\),

$$\begin{aligned} |b_\mathrm{s}(k)|^2 = \mathrm {e}^{- 2 \gamma \left( k - k_0 \right) ^2} , \qquad |\psi _\mathrm{s}(x)|^2 = \frac{\pi }{\gamma } \; \mathrm {e}^{-x^2/(2 \gamma )} \;, \end{aligned}$$
(2.77)
Fig. 2.2 Gaussian signals in x-space and in k-space

one realises that we have a Gaussian signal both in k-space and in x-space. We may easily check that, when \(k = k_0 \pm \varDelta k/2\), where \(\varDelta k = 2/\sqrt{2 \gamma }\), the Gaussian signal in k-space drops to \(\mathrm {e}^{-1}\) times its peak value. When \(x = \pm \varDelta x/2\), where \(\varDelta x = 2\sqrt{2\gamma }\), the Gaussian signal in x-space drops to \(\mathrm {e}^{-1}\) times its peak value (a qualitative sketch is given in Fig. 2.2). If \(\gamma \) becomes smaller and smaller, the signal in k-space increases its width \(\varDelta k\), while the signal in x-space decreases its width \(\varDelta x\). One may easily check that

$$\begin{aligned} \varDelta k \; \varDelta x = 4 \;. \end{aligned}$$
(2.78)

The precise numerical value of the product is not important. What is important is that the product \(\varDelta k \ \varDelta x\) is finite and independent of \(\gamma \). A highly localised Gaussian distribution in k-space, i.e. one with a small \(\varDelta k\), means a poorly localised Gaussian distribution in x-space, i.e. one with a large \(\varDelta x\), and vice versa.

It is not possible to reduce the width of the Gaussian signal both in k-space and in x-space. This feature is a statement of the Heisenberg uncertainty principle relative to general wave phenomena [4].
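
The relationship (2.78) can be illustrated numerically by measuring, on a fine grid, the \(\mathrm{e}^{-1}\) widths of the Gaussian signals (2.77) for several values of \(\gamma \); the grid parameters and the value of \(k_0\) below are arbitrary choices.

```python
# Numerical illustration of the width product (2.78).
import numpy as np

def e_inv_width(u, signal):
    """Full width of a signal at e^{-1} of its peak value."""
    above = u[signal >= signal.max() / np.e]
    return above.max() - above.min()

k0 = 5.0
k = np.linspace(-200, 200, 400001)
x = np.linspace(-200, 200, 400001)
for gamma in (0.5, 2.0, 10.0):
    bk2   = np.exp(-2 * gamma * (k - k0)**2)   # |b_s(k)|^2, Eq. (2.77)
    psix2 = np.exp(-x**2 / (2 * gamma))        # |psi_s(x)|^2, up to a constant factor
    print(gamma, e_inv_width(k, bk2) * e_inv_width(x, psix2))   # ~ 4 in every case
```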

3.2 Travelling Wave Packets

Let us now consider a non-stationary wave packet given by Eq. (2.72). The simplest case is that of non-dispersive waves, with \(\omega = c k\), where c is a constant. In this case, Eq. (2.72) describes a superposition of plane waves having a constant phase velocity c. Then, a comparison between Eqs. (2.72) and (2.74) allows one to write

$$\begin{aligned} \psi (x,t) = \psi _\mathrm{s}(x - c t) \; . \end{aligned}$$
(2.79)

The effect of the standing waves being replaced by travelling waves is just a rigid translational motion of the wave packet with a velocity c. No distortion of the wave packet is caused by the time evolution. This is the typical behaviour of the solutions of Eq. (2.23) as it can be inferred from Eq. (2.33).

With dispersive waves, \(\omega \) is not simply proportional to k. We are, therefore, interested in assuming a general relationship \(\omega = \omega (k)\), where \(\omega (k)/k\) is not a constant. We assume a wave packet strongly localised in k-space, with \(b(k,t)=b_\mathrm{s}(k)\) given by Eq. (2.75) and a marked peak at \(k=k_0\), namely a quasi-monochromatic wave packet. This means that \(\gamma \) in Eq. (2.75) is assumed to have a large value.

The strong localisation in k-space suggests that one may express \(\omega (k)\) as a Taylor expansion around \(k=k_0\) truncated to second order,

$$\begin{aligned} \omega (k) \approx \omega (k_0) + \left. \frac{\mathrm {d}\, {\omega }}{\mathrm {d}\, {k}} \right| _{k=k_0}\!\! \left( k - k_0 \right) + \frac{1}{2} \left. \frac{\mathrm {d}^2 {\omega }}{\mathrm {d}\, {k}^2} \right| _{k=k_0}\!\! \left( k - k_0 \right) ^2 \; . \end{aligned}$$
(2.80)

We use the notations

$$\begin{aligned} \omega _0 = \omega (k_0)\;, \qquad c_\mathrm{g} = \left. \frac{\mathrm {d}\, {\omega }}{\mathrm {d}\, {k}} \right| _{k=k_0} \;, \qquad \sigma = \frac{1}{2} \left. \frac{\mathrm {d}^2 {\omega }}{\mathrm {d}\, {k}^2} \right| _{k=k_0} \; , \end{aligned}$$
(2.81)

where \(c_\mathrm{g}\) is called the group velocity.

We now substitute Eqs. (2.75), (2.80) and (2.81) in Eq. (2.72) and we obtain

$$\begin{aligned} \begin{array}{ccc} &{}\displaystyle \psi (x,t) = \int \limits _{-\infty }^{\infty } \mathrm {e}^{-\gamma \left( k - k_0 \right) ^2} \mathrm {e}^{\mathrm {i}\left\{ k x - \left[ \omega _0 + (k - k_0) c_\mathrm{g} + (k - k_0)^2 \sigma \right] t \right\} } \, \mathrm {d}\, k \\ &{}\displaystyle = \int \limits _{-\infty }^{\infty } \mathrm {e}^{-\gamma \kappa ^2} \mathrm {e}^{\mathrm {i}\left\{ (k_0 + \kappa ) x - \left[ \omega _0 + \kappa c_\mathrm{g} + \kappa ^2 \sigma \right] t \right\} } \, \mathrm {d}\kappa \\ &{}\displaystyle = \mathrm {e}^{\mathrm {i}\left( k_0 x - \omega _0 t \right) } \int \limits _{-\infty }^{\infty } \mathrm {e}^{- \left( \gamma + \mathrm {i}\, \sigma t \right) \kappa ^2} \mathrm {e}^{\mathrm {i}\, \kappa \left( x - c_\mathrm{g} t \right) } \, \mathrm {d}\kappa \\ &{}\displaystyle = \mathrm {e}^{\mathrm {i}\left( k_0 x - \omega _0 t \right) } \int \limits _{-\infty }^{\infty } \mathrm {e}^{- \gamma ^* \kappa ^2} \mathrm {e}^{\mathrm {i}\, \kappa x^*} \, \mathrm {d}\kappa \; , \end{array} \end{aligned}$$
(2.82)

where \(\gamma ^* = \gamma + \mathrm {i}\, \sigma t\) and \(x^* = x - c_\mathrm{g} t\). We note that the integral appearing in Eq. (2.82) is just the same as that evaluated in Eq. (2.76), with \(\gamma \) replaced by \(\gamma ^*\) and x replaced by \(x^*\). Thus, we may write

$$\begin{aligned} \begin{array}{ccccc} &{}\displaystyle \psi (x,t) = \mathrm {e}^{\mathrm {i}\left( k_0 x - \omega _0 t \right) } \int \limits _{-\infty }^{\infty } \mathrm {e}^{- \gamma ^* \kappa ^2} \mathrm {e}^{\mathrm {i}\, \kappa x^*} \, \mathrm {d}\kappa \\ &{}\displaystyle = \mathrm {e}^{\mathrm {i}\left( k_0 x - \omega _0 t \right) } \sqrt{\frac{\pi }{\gamma + i \sigma t}} \; \exp \left[ - \frac{\left( x - c_\mathrm{g} t \right) ^2}{4 \left( \gamma + i \sigma t \right) } \right] \; . \end{array} \end{aligned}$$
(2.83)

Again, we consider the square moduli of b(kt) and of \(\psi (x,t)\) as in Eq. (2.77),

$$\begin{aligned}&\displaystyle |b(k,t)|^2 = \mathrm {e}^{- 2 \gamma \left( k - k_0 \right) ^2} \;, \nonumber \\&\displaystyle |\psi (x,t)|^2 = \frac{\pi }{\sqrt{ \gamma ^2 + \sigma ^2 t^2 }} \; \exp \left[ - \frac{\gamma \left( x - c_\mathrm{g} t \right) ^2}{ 2 \left( \gamma ^2 + \sigma ^2 t^2 \right) } \right] \;. \end{aligned}$$
(2.84)

One recognises Gaussian signals both in k-space and in x-space. The peak of the Gaussian signal in x-space is located at \(x = c_\mathrm{g} t\), and thus, it travels in the x-direction with the constant group velocity, \(c_\mathrm{g}\).

The width of the Gaussian signal in k-space is still defined as in Sect. 2.3.1 and it is given by \(\varDelta k = 2/\sqrt{2 \gamma }\). The width of the Gaussian signal in x-space, also defined as in Sect. 2.3.1, is now a function of time,

$$\begin{aligned} \varDelta x = 2 \sqrt{2 \gamma }\ \sqrt{1 + \frac{\sigma ^2 t^2}{\gamma ^2}} \;. \end{aligned}$$
(2.85)

Equation (2.85) shows that the width of the Gaussian signal in x-space increases in time. This means that the time evolution of the wave packet implies a spreading in x-space with a decreasing value at the peak position, \(x = c_\mathrm{g} t\). The latter feature can be easily inferred from Eq. (2.84), and it is qualitatively sketched in Fig. 2.3.

Fig. 2.3 Spreading of the Gaussian wave packet in x-space
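
The spreading law (2.85) can also be illustrated numerically by synthesising the wave packet (2.72) as a Riemann sum over a fine k-grid and measuring the \(\mathrm{e}^{-1}\) width of \(|\psi (x,t)|^2\) at a few times; the quadratic dispersion relation \(\omega (k) = k^2/2\) and the parameter values are illustrative choices, for which \(c_\mathrm{g} = k_0\) and \(\sigma = 1/2\).

```python
# Numerical check of the wave-packet spreading law (2.85).
import numpy as np

gamma, k0 = 4.0, 6.0
omega = lambda k: 0.5 * k**2          # quadratic dispersion: c_g = k0, sigma = 1/2
sigma, cg = 0.5, k0

k = np.linspace(k0 - 4.0, k0 + 4.0, 1601)
dk = k[1] - k[0]
b = np.exp(-gamma * (k - k0)**2)      # Eq. (2.75)

def packet_width(t):
    x = cg * t + np.linspace(-25.0, 25.0, 2001)
    phase = np.exp(1j * (np.outer(x, k) - omega(k) * t))   # e^{i[k x - omega(k) t]}
    psi2 = np.abs(phase @ (b * dk))**2                     # |psi(x,t)|^2 from Eq. (2.72)
    above = x[psi2 >= psi2.max() / np.e]
    return above.max() - above.min()

for t in (0.0, 2.0, 5.0):
    predicted = 2*np.sqrt(2*gamma) * np.sqrt(1 + (sigma*t/gamma)**2)   # Eq. (2.85)
    print(t, packet_width(t), predicted)
```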

4 Three-Dimensional Fourier Transform and Wave Packets

Up to this point, we have discussed cases where the symmetries of the physical system are such that the solution of the governing equation depends on just one Cartesian coordinate, x. In the general case, with a function \(f(\mathbf {x})\), where \(\mathbf {x} = (x,y,z)\) is the position vector, we can define the three-dimensional Fourier transform, namely

$$\begin{aligned} \begin{array}{ccc} &{}\mathfrak {F}_3\{f(x,y,z)\}(k_x, k_y, k_z) = \tilde{f} (k_x, k_y, k_z) \\ &{}\displaystyle = \frac{1}{(2 \pi )^{3/2}} \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } f(x,y,z) \, \mathrm {e}^{-\mathrm {i}\left( k_x x + k_y y + k_z z \right) }\, \mathrm {d}\, x\, \mathrm {d}\, y\, \mathrm {d}\, z \; . \end{array} \end{aligned}$$
(2.86)

Such a definition extends Eq. (2.2). In Eq. (2.86), \(\mathbf {k}=(k_x, k_y, k_z)\) is the wave vector. The wave number, k, is given by the modulus of the vector \(\mathbf {k}\), namely

$$\begin{aligned} k = |\mathbf {k}| = \sqrt{k_x^2 + k_y^2 +k_z^2} \;. \end{aligned}$$
(2.87)

The inversion formula of the Fourier transform, Eq. (2.6), can be extended to the three-dimensional case,

$$\begin{aligned} \begin{array}{cccc} &{}\mathfrak {F}_3^{-1}\{\tilde{f}(k_x,k_y,k_z)\}(x, y, z) = f(x,y,z) \\ &{}\displaystyle = \frac{1}{(2 \pi )^{3/2}} \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } \tilde{f}(k_x,k_y,k_z) \, \mathrm {e}^{\mathrm {i}\left( k_x x + k_y y + k_z z \right) }\, \mathrm {d}\, k_x\, \mathrm {d}\, k_y\, \mathrm {d}\, k_z \; . \end{array} \end{aligned}$$
(2.88)

The main properties of the three-dimensional Fourier transform are quite similar to those reviewed in Sect. 2.2.3. In particular, the transform of derivatives now reads

$$\begin{aligned} \begin{array}{cccc} &{}\mathfrak {F}_3\{ \varvec{\nabla }f \} = \mathrm {i}\, \mathbf {k}\, \tilde{f}(\mathbf {k}) \; , \\ &{}\mathfrak {F}_3\{ \nabla ^2 f \} = - k^2\, \tilde{f}(\mathbf {k}) \; . \end{array} \end{aligned}$$
(2.89)
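
The Laplacian rule in Eq. (2.89) can be checked on a periodic grid with the three-dimensional FFT: in the sketch below, assuming NumPy, the inverse transform of \(-k^2 \tilde{f}\) is compared with the analytically computed Laplacian of \(f = \mathrm{e}^{-(x^2+y^2+z^2)}\). Grid size and domain length are illustrative choices.

```python
# FFT-based check of the Laplacian rule in Eq. (2.89).
import numpy as np

n, L = 64, 20.0
s = np.linspace(-L/2, L/2, n, endpoint=False)
x, y, z = np.meshgrid(s, s, s, indexing="ij")
r2 = x**2 + y**2 + z**2
f = np.exp(-r2)

ks = 2 * np.pi * np.fft.fftfreq(n, d=L/n)
kx, ky, kz = np.meshgrid(ks, ks, ks, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

lap_fft = np.fft.ifftn(-k2 * np.fft.fftn(f)).real
lap_exact = (4*r2 - 6) * np.exp(-r2)          # analytic Laplacian of exp(-r^2)
print(np.max(np.abs(lap_fft - lap_exact)))    # small discretisation error
```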

The definition of the three-dimensional transform is really useful when the physical domain is the whole space \(\mathbb {R}^3\). If, on the contrary, one or two coordinates have a limited range of variation, it is more appropriate to transform only the functional dependence on the unbounded coordinate or coordinates. For instance, if both \(y \in [0,a]\) and \(z \in [0, b]\), while \(x\in \mathbb {R}\), it is more useful to employ a one-dimensional Fourier transform, given by

$$\begin{aligned} \mathfrak {F}\{f(x,y,z)\}(k,y,z) = \tilde{f} (k,y,z) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } f(x,y,z) \, \mathrm {e}^{-\mathrm {i}\, k x}\, \mathrm {d}\, x \; . \end{aligned}$$
(2.90)

If \(z \in [0, b]\), while \((x,y)\in \mathbb {R}^2\), one should rather employ a two-dimensional Fourier transform, given by

$$\begin{aligned} \begin{array}{ccc} &{}\mathfrak {F}_2\{f(x,y,z)\}(k_x, k_y, z) = \tilde{f} (k_x, k_y, z) \\ &{}\displaystyle = \frac{1}{2 \pi } \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } f(x,y,z) \, \mathrm {e}^{-\mathrm {i}\left( k_x x + k_y y \right) }\, \mathrm {d}\, x\, \mathrm {d}\, y \; . \end{array} \end{aligned}$$
(2.91)

In Eq. (2.91), the wave vector is just two-dimensional, \(\mathbf {k} = (k_x, k_y)\). The inversion formula for \(\mathfrak {F}_2\) reads

$$\begin{aligned} \begin{array}{cccc} &{}\mathfrak {F}_2^{-1}\{\tilde{f}(k_x,k_y,z)\}(x, y, z) = f(x, y, z) \\ &{}\displaystyle = \frac{1}{2 \pi } \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } \tilde{f}(k_x,k_y,z) \, \mathrm {e}^{\mathrm {i}\left( k_x x + k_y y \right) }\, \mathrm {d}\, k_x\, \mathrm {d}\, k_y \; . \end{array} \end{aligned}$$
(2.92)

A variant of Eq. (2.90) is appropriate for cylindrical systems where the polar coordinates \((r, \phi )\) are employed instead of (y, z). For instance, one may have \(r \in [r_1, r_2]\) and \(\phi \in [0, 2 \pi ]\) in the case of a cylindrical layer. In these cases, Eq. (2.90) is written as

$$\begin{aligned} \mathfrak {F}\{f(x,r,\phi )\}(k,r,\phi ) = \tilde{f} (k,r,\phi ) = \frac{1}{\sqrt{2 \pi }} \int \limits _{-\infty }^{\infty } f(x,r,\phi ) \, \mathrm {e}^{-\mathrm {i}\, k x}\, \mathrm {d}\, x \; . \end{aligned}$$
(2.93)

The three-dimensional wave equation is given by

$$\begin{aligned} \frac{\partial ^2 {\psi (\mathbf {x},t)}}{\partial {t}^2} = c^2\, \nabla ^2 \psi (\mathbf {x},t) \; . \end{aligned}$$
(2.94)

Simple solutions of Eq. (2.94) are the plane waves,

$$\begin{aligned} \psi (\mathbf {x},t) = A\, \mathrm {e}^{\mathrm {i}\, ( \mathbf {k}\cdot \mathbf {x} - \omega t)} \; . \end{aligned}$$
(2.95)

In fact, \(\psi (\mathbf {x},t)\) given by Eq. (2.95) is a solution of Eq. (2.94) provided that A is a constant and Eq. (2.70) is satisfied with \(k=|\mathbf {k}|\). More general, dispersive, plane waves entail a function \(\omega (\mathbf {k})\) not necessarily given by a linear function of k. Thus, a three-dimensional wave packet built by the superposition of plane waves with all possible wave vectors, \(\mathbf {k} \in \mathbb {R}^3\), is expressed as

$$\begin{aligned} \psi (\mathbf {x},t) = \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } b(\mathbf {k},t) \, \mathrm {e}^{\mathrm {i}\, [\mathbf {k} \cdot \mathbf {x} - \omega (\mathbf {k})\, t]} \, \mathrm {d}\, k_x\, \mathrm {d}\, k_y\, \mathrm {d}\, k_z \;, \end{aligned}$$
(2.96)

which is an extended form of Eq. (2.72). Comparison with Eq. (2.88) suggests that function \(b(\mathbf {k},t)\) is directly related to the Fourier transform of \(\psi (\mathbf {x}, t)\), namely

$$\begin{aligned} b(\mathbf {k},t) = \frac{1}{(2 \pi )^{3/2}}\, \tilde{\psi }(\mathbf {k}, t) \, \mathrm {e}^{\mathrm {i}\, \omega (\mathbf {k})\, t} \; . \end{aligned}$$
(2.97)

In three dimensions, there is much more that can be done in terms of wave packets. For instance, one can create a superposition of those plane waves whose wave vector is directed, say, along the x-axis, namely \(\mathbf {k}=(k,0,0)\). In this case, one has

$$\begin{aligned} \psi (\mathbf {x},t) = \int \limits _{-\infty }^{\infty } b(k,y,z,t) \, \mathrm {e}^{\mathrm {i}\, [k x - \omega (k)\, t]} \, \mathrm {d}\, k \;. \end{aligned}$$
(2.98)

Another possibility is that the linear combination of plane waves involves just those waves having a wave vector lying on the (xy) plane, \(\mathbf {k} = (k_x, k_y, 0)\), namely

$$\begin{aligned} \psi (\mathbf {x},t) = \int \limits _{-\infty }^{\infty } \int \limits _{-\infty }^{\infty } b(k_x, k_y, z, t) \, \mathrm {e}^{\mathrm {i}\, [k_x x + k_y y - \omega (\mathbf {k})\, t]} \, \mathrm {d}\, k_x\, \mathrm {d}\, k_y \;. \end{aligned}$$
(2.99)

Clearly, Eqs. (2.98) and (2.99) are to be compared with the partial Fourier transforms in one and two dimensions as defined in Eqs. (2.90) and (2.91).

In three dimensions, other possibilities for wave propagation exist that go beyond the limited class of plane waves. For instance, a point-like source may generate spherical waves, invariant under general rotations around the origin. Such waves are solutions of the spherically symmetric wave equation, i.e. a special case of Eq. (2.94),

$$\begin{aligned} \frac{\partial ^2 {\psi (r,t)}}{\partial {t}^2} = \frac{c^2}{r^2}\, \frac{\partial {}}{\partial {r}} \left[ r^2\, \frac{\partial {\psi (r,t)}}{\partial {r}} \right] \; , \end{aligned}$$
(2.100)

where \(r=|\mathbf {x}|\) is the spherical radial coordinate. Spherical normal modes are given by

$$\begin{aligned} \psi (r,t) = A \; \frac{\mathrm {e}^{\mathrm {i}\left( k r - \omega t \right) }}{r} \; , \end{aligned}$$
(2.101)

and they solve Eq. (2.100) provided that \(\omega = \pm \,k c\) and A is a constant. As for the plane waves given by Eq. (2.69), Eq. (2.101) with a constant amplitude A defines the case of non-dispersive waves. Other partial differential equations might involve dispersive spherical waves, where A is time-dependent and \(\omega \) is a nonlinear function of k. Wave packets can be built up from the superposition of these more general, dispersive, spherical waves, so that we can write the analogues of Eqs. (2.71) and (2.72),

$$\begin{aligned} \psi (r,t) = \frac{1}{r}\ \sum _{k} B(k,t) \, \mathrm {e}^{\mathrm {i}\, [k r - \omega (k)\, t]} \; , \end{aligned}$$
(2.102)

and

$$\begin{aligned} \psi (r,t) = \frac{1}{r} \int \limits _{-\infty }^{\infty } b(k,t) \, \mathrm {e}^{\mathrm {i}\, [k r - \omega (k)\, t]} \, \mathrm {d}\, k \;, \end{aligned}$$
(2.103)

respectively, where Eq. (2.103) refers to the case where k spans the whole real axis continuously.