
1 Introduction

The spectral analysis of deterministic functions and the formulation of stochastic processes belong to the well-established statistical tools in various fields within geodesy (see, e.g. Koch and Schmidt 1994; Moritz 1989; Welsch et al. 2000). We found, however, that in particular the nature of the spectral representation of stochastic processes in terms of the stochastic Fourier integral and its relationships with the autocovariance and spectral distribution (or density) function is far less well known than the details of the time-domain and Fourier analyses of deterministic functions. Our motivation for this paper is therefore to take a step towards closing this gap in understanding. In particular, we will provide the reader with the key definitions of the involved stochastic processes as well as of their crucial properties (Sect. 2). Then we will state and explain the computational formulae for the spectral analysis of general real-valued covariance-stationary stochastic processes (Sect. 3). This is in contrast to the usual presentation of these formulae in the literature oriented towards mathematical statistics (e.g. Brockwell and Davis 1991; Priestley 2004), where one generally finds only the results for complex-valued stochastic processes, which often complicates their application in practical situations. To aid the understanding of the mathematical relationships between the involved fundamental statistical quantities (stochastic process, autocovariance function, spectral representation of the process, spectral distribution or density function), we will use a corresponding graphical representation in the form of a “magic square” (also in Sect. 3). We will conclude this paper with an outlook on extensions of the presented example [moving average (MA) process].

2 Basic Elements of Stochastic Processes

In this section, we will provide a summary of basic definitions (D) and properties of the stochastic processes considered in Sect. 3.

(D1):We say that \(\mathcal{X}_{T} = (\varOmega,\mathcal{A},P,\{\mathcal{X}_{t},t \in T\})\) is a (general) stochastic process if and only if (iff)

  • \((\varOmega,\mathcal{A},P)\) is any probability space (where Ω denotes the sample space, \(\mathcal{A}\) a σ-algebra of events, and P a probability measure),

  • T is any non-empty set, and

  • \(\mathcal{X}_{t}\) is a random variable defined on \((\varOmega,\mathcal{A})\) for any t ∈ T.

In this paper, we will restrict our attention to real-valued and one-dimensional stochastic processes as given in the following definition.

(D2):We say that \(\mathcal{X}_{T}\) is a real-valued (one-dimensional) stochastic process iff \(\mathcal{X}_{t}: (\varOmega,\mathcal{A}) \rightarrow (\mathbb{R},\mathcal{B})\) for any t ∈ T, where \(\mathcal{B}\) is the Borel sigma algebra generated by the set of all real-valued, one-dimensional, left-open and right-closed intervals.

In Sect. 3, we will use stochastic processes that have a discrete parameter set T in the time domain as well as processes with a continuous parameter set in the frequency domain. This distinction is made by the following definition.

(D3): We say that \(\mathcal{X}_{T}\) is a

  • discrete-parameter stochastic process (or stochastic process with discrete parameter) iff \(T \subset \mathbb{Z}\). Furthermore, we call \(\mathcal{X}_{T}\) a discrete-time stochastic process or discrete-time time series iff the elements of T refer to points in time.

  • continuous-parameter stochastic process (or stochastic process with continuous parameter) iff \(T \subset \mathbb{R}\). In addition, we call \(\mathcal{X}_{T}\) a continuous-frequency stochastic process iff the elements of T refer to (angular) frequencies, in which case we will also write T = W.

As far as discrete-parameter stochastic processes are concerned, we will focus our attention on covariance-stationary processes (in the time domain). The precise meaning of this concept is provided as follows.

(D4): We say that \(\mathcal{X}_{T}\) is covariance stationary iff

  • \(E\{\mathcal{X}_{t}\} =\mu < \infty \;(\mathrm{i.e.\;constant/finite})\) for any t ∈ T,

  • \(E\{(\mathcal{X}_{t}-\mu )^{2}\} =\sigma _{ \mathcal{X}}^{2} < \infty \;(\mathrm{i.e.\;constant/finite})\) for any t ∈ T, and

  • \(\gamma _{\mathcal{X}}(t_{1},t_{2}) =\gamma _{\mathcal{X}}(t_{1} +\varDelta t,t_{2} +\varDelta t)\) for any \(t_{1},t_{2} \in T\) and any \(\varDelta t\) with \(t_{1} +\varDelta t,t_{2} +\varDelta t \in T\),

where E{. } denotes the expectation operator and \(\gamma _{\mathcal{X}}\) the autocovariance function, defined by \(\gamma _{\mathcal{X}}(t_{1},t_{2}) = E\{(\mathcal{X}_{t_{1}}-\mu )(\mathcal{X}_{t_{2}}-\mu )\}\). For a covariance-stationary stochastic process, we have that \(\gamma _{\mathcal{X}}(t_{1},t_{2}) =\gamma _{\mathcal{X}}(t_{1} - t_{2},0)\) for any \(t_{1},t_{2} \in T\) (and 0 ∈ T) such that also \(t_{1} - t_{2} \in T\); that is, we can always rewrite \(\gamma _{\mathcal{X}}\) by using only a single variable argument, the second one taking the constant value 0. In light of this, we redefine the autocovariance function for covariance-stationary processes as

$$\displaystyle\begin{array}{rcl} \gamma _{\mathcal{X}}(k):=\gamma _{\mathcal{X}}(k,0) =\gamma _{\mathcal{X}}(t + k,t)& & {}\\ \end{array}$$

for any \(k,t \in T\) with t + k ∈ T; the parameter k is called the lag (cf. Brockwell and Davis 1991, pp. 11–12).

The fundamental instance of a covariance-stationary process and primary building block for certain other stochastic processes is white noise, defined as follows.

(D5): We say that \(\mathcal{E}_{T}:= \mathcal{X}_{T}\) is (discrete-parameter) white noise with mean 0 and variance \(\sigma _{\mathcal{X}}^{2}\) iff

  • \(T \subset \mathbb{Z}\),

  • \(E\{\mathcal{X}_{t}\} = 0\) for any t ∈ T, and

  • \(\gamma _{\mathcal{X}}(k) = \left \{\begin{array}{cl} \sigma _{\mathcal{X}}^{2} & \;\mathrm{if}\;k = 0, \\ 0 &\;\mathrm{if}\;k\neq 0\end{array} \right.\).
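As a numerical illustration of (D5), the sample autocovariance of a simulated white-noise sequence should be close to the variance at lag 0 and close to 0 at all other lags. The following Python sketch (the estimator `sample_autocovariance` is our own illustrative helper, not part of the formal development) demonstrates this:

```python
import numpy as np

def sample_autocovariance(x, k):
    """Biased sample estimate of the autocovariance at lag k:
    gamma_hat(k) = (1/N) * sum_t (x_{t+k} - mean)(x_t - mean)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = abs(k)                  # the autocovariance of a real process is even
    xc = x - x.mean()
    return float(np.dot(xc[k:], xc[:n - k]) / n)

rng = np.random.default_rng(0)
e = rng.normal(0.0, 1.0, size=100_000)   # simulated white noise, sigma^2 = 1
print(sample_autocovariance(e, 0))       # close to 1 (the variance)
print(sample_autocovariance(e, 3))       # close to 0
```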

Now let us consider a non-recursive filter C, defined by the filter equation \(y_{t} =\sum _{ k=-\infty }^{\infty }c_{k}u_{t-k}\) for any \(t \in \mathbb{Z}\), or in lag operator notation \(y_{t} = C(L)u_{t}\) with \(L^{k}u_{t}:= u_{t-k}\) and \(C(L) =\sum _{ k=-\infty }^{\infty }c_{k}L^{k}\), where \((u_{t}\,\vert \,t \in \mathbb{Z})\) is any filter input sequence and \((y_{t}\,\vert \,t \in \mathbb{Z})\) any filter output sequence (in either case of real numbers or random variables), and \((c_{k}\,\vert \,k \in \mathbb{Z})\) is any sequence of real-valued filter coefficients. If we view the random variables of a white noise process \(\mathcal{E}_{T}\) as filter input to a

  • causal (i.e. c k  = 0 for any k < 0),

  • either finite or infinite (i.e. a finite or an infinite number of filter coefficients is non-zero),

  • absolutely summable (i.e. \(\sum _{k=-\infty }^{\infty }\vert c_{k}\vert < \infty \)), and

  • invertible (i.e. there exists an inverse filter \(\overline{C}\) with filter coefficients \((\bar{c}_{k}\,\vert \,k \in \mathbb{N}^{0})\) such that \([\overline{C}(L)C(L)]u_{t} = u_{t}\) where \(\overline{C}(L) =\sum _{ k=0}^{\infty }\bar{c}_{k}L^{k}\))

version of such a non-recursive filter, then we obtain the moving average process as filter output, as explained in the following definition.

(D6): If \(\mathcal{E}_{T}\) with \(T \subset \mathbb{Z}\) is (discrete) white noise with mean 0 and variance \(\sigma _{\mathcal{E}}^{2}\), then we say that \(\mathcal{L}_{T} = (\varOmega,\mathcal{A},P,\{\mathcal{L}_{t},t \in T\})\) is a (discrete-parameter) moving average process of order q (or MA(q) process, with \(q \in \mathbb{N}\)) iff the random variables \(\mathcal{L}_{t}\) satisfy, for any t ∈ T, the equation

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{t} = \mathcal{E}_{t} +\beta _{1}\mathcal{E}_{t-1} +\ldots +\beta _{q}\mathcal{E}_{t-q} =\beta (L)\mathcal{E}_{t}& & {}\\ \end{array}$$

with \(\beta (L) = 1 +\beta _{1}L +\ldots +\beta _{q}L^{q}\). In the limiting case of \(q = \infty \), we call \(\mathcal{L}_{T}\) an MA(\(\infty \)) process.

Treating β(L) as a complex polynomial: if β(z) ≠ 0 for any \(z \in \mathbb{C}\) with | z | ≤ 1, then the filter β, and hence the MA(q) process, is invertible (cf. Brockwell and Davis 1991, pp. 86–87). Furthermore, whereas any MA(q) process with \(q < \infty \) is covariance-stationary (cf. Priestley 2004, p. 137), the MA(\(\infty \)) process is covariance-stationary iff the sequence \((\beta _{k}\,\vert \,k \in \mathbb{N}^{0})\) of filter coefficients is absolutely summable (i.e. iff \(\sum _{k=0}^{\infty }\vert \beta _{k}\vert < \infty \)) (cf. Brockwell and Davis 1991, pp. 89–91).
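Both the construction of an MA(q) process as filtered white noise and the root criterion for invertibility can be checked numerically. The following Python sketch is illustrative (function names are our own choice; the root criterion follows the statement above):

```python
import numpy as np

def apply_ma_filter(beta, eps):
    """Generate MA(q) output L_t = eps_t + beta_1*eps_{t-1} + ... + beta_q*eps_{t-q}
    from an input sequence eps, with beta = (beta_0=1, beta_1, ..., beta_q).
    Pre-sample inputs are treated as zero."""
    return np.convolve(eps, np.asarray(beta, dtype=float))[:len(eps)]

def is_invertible(beta):
    """Invertibility check: all roots of beta(z) = 1 + beta_1*z + ... + beta_q*z^q
    must lie strictly outside the unit circle."""
    roots = np.roots(list(beta)[::-1])   # np.roots expects highest degree first
    return bool(np.all(np.abs(roots) > 1.0))

print(is_invertible([1.0, 0.5]))   # root z = -2   -> True
print(is_invertible([1.0, 2.0]))   # root z = -0.5 -> False

eps = np.array([1.0, 0.0, 0.0])            # unit impulse as input
print(apply_ma_filter([1.0, 0.5], eps))    # impulse response: 1.0, 0.5, 0.0
```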

On the other hand, as far as continuous-parameter stochastic processes are concerned, we will only deal with stochastic processes that are in a certain sense stochastically continuous and that have orthogonal increments, which is explained in the following.

(D7): We say that a continuous-parameter stochastic process \(\mathcal{X}_{T}\) is mean-square right-continuous (or mean-square continuous from the right) at \(t_{0} \in T\) iff \(\lim _{t\rightarrow t_{0}^{+}}E\{[\mathcal{X}_{t} -\mathcal{X}_{t_{0}}]^{2}\} = 0\) holds (cf. Gilgen 2006, p. 453).

(D8): We say that a continuous-parameter stochastic process \(\mathcal{X}_{T}\) is a stochastic process with orthogonal increments \(\varDelta \mathcal{X}_{s,t} = \mathcal{X}_{t} -\mathcal{X}_{s}\) (with s, t ∈ T and s < t) iff \(E\{\varDelta \mathcal{X}_{t_{1},t_{2}}\varDelta \mathcal{X}_{t_{3},t_{4}}\} = 0\) for any \(t_{1},t_{2},t_{3},t_{4} \in T\) with \(t_{1} < t_{2} < t_{3} < t_{4}\) or \(t_{3} < t_{4} < t_{1} < t_{2}\), i.e. iff any two “non-overlapping” increments are uncorrelated.

3 Magic Square for Covariance-Stationary, Discrete-Time Processes

In the current section, we will review the mathematical operations that connect the following four fundamental quantities of a spectral analysis in the time and the frequency domain:

  1. stochastic process,

  2. autocovariance function,

  3. the spectral representation of the stochastic process,

  4. the spectral distribution function (or, if it exists, its derivative, the spectral density function).

We may view these quantities as forming the corners of a square with various connecting lines, which represent certain mathematical operations between them (see Fig. 1). We will refer to this as the Magic Square. To demonstrate the evaluation of these operations, we will consider the example of an MA(q) process. The reader should note that, with this understanding, it would be straightforward to apply the Magic Square to more complicated processes such as ARMA(p,q) processes, which would, however, exceed the scope of this paper. We begin the discussion by constructing the time domain (the “left-hand side”) of the Magic Square.

Fig. 1

Magic square for covariance-stationary discrete-time stochastic processes; upper left: stochastic process, lower left: autocovariance function, upper right: spectral representation of the stochastic process, lower right: spectral distribution function; the numbers in brackets indicate the mathematical operations as defined in Sects. 3.1–3.3

3.1 Time Domain (Left-Hand Side)

We consider any stochastic process \(\mathcal{X}_{T}\) which is

  • one-dimensional and real-valued (i.e. \(\varOmega = \mathbb{R}^{1}\)),

  • discrete in time (with \(T = \mathbb{Z}\)), and

  • covariance-stationary with zero mean, variance \(\sigma _{\mathcal{X}}^{2}\) and autocovariance function \(\gamma _{\mathcal{X}}\).

Furthermore, we consider any stochastic process \(\mathcal{L}_{T}\) obtained by filtering the process \(\mathcal{X}_{T}\), where we assume that the filter ψ(L) is

  • non-recursive,

  • causal,

  • either finite or infinite,

  • absolutely summable, and

  • invertible (with inverse filter \(\overline{\psi }(L)\)).

It follows that the process \(\mathcal{L}_{T}\) is

  • one-dimensional and real-valued (i.e. \(\varOmega = \mathbb{R}^{1}\)),

  • discrete in time (with \(T = \mathbb{Z}\)), and

  • covariance-stationary (cf. Brockwell and Davis 1991, p. 84) with zero mean, variance \(\sigma _{\mathcal{L}}^{2}\) and autocovariance function \(\gamma _{\mathcal{L}}\).

The general mathematical operations within the time domain can be summarized as follows:

  (1) \(\mathcal{X}_{T}\Longleftrightarrow\mathcal{L}_{T}\):

    $$\displaystyle\begin{array}{rcl} \mathcal{L}_{t}& =& \psi (L)\mathcal{X}_{t},\qquad \mathcal{X}_{t} = \overline{\psi }(L)\mathcal{L}_{t}, {}\\ \end{array}$$

    hold for any \(t \in \mathbb{Z}\). The first of these equations is an expression of the above assumption that \(\mathcal{L}_{T}\) is a stochastic process obtained by non-recursive filtering of \(\mathcal{X}_{T}\). The second of these equations reflects the presumed invertibility of the filter operation.

  (2) \(\mathcal{X}_{T}\Longrightarrow\gamma _{\mathcal{X}},\mathcal{L}_{T}\Longrightarrow\gamma _{\mathcal{L}}\):

    $$\displaystyle\begin{array}{rcl} \gamma _{\mathcal{X}}(k)& =& E\{\mathcal{X}_{t+k}\mathcal{X}_{t}\},\qquad \gamma _{\mathcal{L}}(k) = E\{\mathcal{L}_{t+k}\mathcal{L}_{t}\}, {}\\ \end{array}$$

    hold for any \(t,k\,\in \,\mathbb{Z}\). These equations are simply an expression of the definition of the autocovariance function applied to the stochastic processes \(\mathcal{X}_{T}\) and \(\mathcal{L}_{T}\) with the properties stated above (cf. Priestley 2004, p. 107).

  (3) \(\mathcal{X}_{T}\Longrightarrow\gamma _{\mathcal{L}},\mathcal{L}_{T}\Longrightarrow\gamma _{\mathcal{X}}\):

    Substitution of (1) and shifted versions thereof into (2) yields the expressions for \(\gamma _{\mathcal{L}}(k)\) and \(\gamma _{\mathcal{X}}(k)\).

  (4) \(\gamma _{\mathcal{X}}\Longleftrightarrow\gamma _{\mathcal{L}}\):

    $$\displaystyle\begin{array}{rcl} \gamma _{\mathcal{L}}(k)& =& \sum \limits _{m=0}^{\infty }\sum \limits _{ n=0}^{\infty }\psi _{ m}\psi _{n}\gamma _{\mathcal{X}}(k - m + n), {}\\ \gamma _{\mathcal{X}}(k)& =& \sum \limits _{m=0}^{\infty }\sum \limits _{ n=0}^{\infty }\overline{\psi }_{ m}\overline{\psi }_{n}\gamma _{\mathcal{L}}(k - m + n) {}\\ \end{array}$$

    hold for any \(k\,\in \,\mathbb{Z}\). These equations show how the autocovariance function of a covariance-stationary stochastic process is propagated by an essentially absolutely summable filter to the autocovariance function of the filtered (covariance-stationary) process (see Proposition 3.1.2 in Brockwell and Davis 1991, p. 84).
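For a causal, finite filter, the double sum in (4) can be evaluated directly. The following Python sketch (function names are our own, illustrative choices) does so; with white-noise input and an MA(1)-type filter, it reproduces the familiar autocovariances:

```python
def propagate_autocov(psi, gamma_x, k):
    """Propagate an autocovariance function through a causal finite filter psi:
    gamma_L(k) = sum_m sum_n psi_m * psi_n * gamma_x(k - m + n)."""
    total = 0.0
    for m, pm in enumerate(psi):
        for n, pn in enumerate(psi):
            total += pm * pn * gamma_x(k - m + n)
    return total

# White-noise input: gamma_x(0) = sigma^2, and 0 at all other lags
sigma2 = 2.0
gamma_e = lambda k: sigma2 if k == 0 else 0.0

psi = [1.0, 0.5]                             # filter 1 + 0.5 L
print(propagate_autocov(psi, gamma_e, 0))    # sigma^2 * (1 + 0.25) = 2.5
print(propagate_autocov(psi, gamma_e, 1))    # sigma^2 * 0.5 = 1.0
print(propagate_autocov(psi, gamma_e, 2))    # 0.0
```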

Example: MA(q) Process

Let us consider a non-recursive, causal, finite, absolutely summable and invertible filter β(L). If we apply such a filter to white noise \(\mathcal{E}_{T}\) (which satisfies the conditions made for the input process \(\mathcal{X}_{T}\)), then we obtain, by definition, an invertible MA(q) process, which then satisfies the above stated properties of \(\mathcal{L}_{T}\). Hence, we may apply the general mathematical operations stated in equations under (1)–(4) as follows.

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{t}& =& \beta (L)\mathcal{E}_{t}, {}\\ \mathcal{E}_{t}& =& \overline{\beta }(L)\mathcal{L}_{t} =\sum \limits _{ k=0}^{\infty }\overline{\beta }_{ k}\mathcal{L}_{t-k} {}\\ \end{array}$$

The first of these equations defines the MA(q) process; the second equation yields white noise expressed in terms of the random variables of \(\mathcal{L}_{T}\), filtered by means of a non-recursive, causal and infinite filter \(\overline{\beta }\). As far as autocovariance functions are concerned, \(\gamma _{\mathcal{X}} =\gamma _{\mathcal{E}}\) takes a very simple form (see the definition of white noise in Sect. 2); then, the first equation of (4) may be simplified to

$$\displaystyle\begin{array}{rcl} \gamma _{\mathcal{L}}(k) = \left \{\begin{array}{ll} \sigma _{\mathcal{E}}^{2}\sum \limits _{n=0}^{q-\vert k\vert }\beta _{n}\beta _{n+\vert k\vert },&\mbox{ if}\vert k\vert \leq q \\ 0, &\mbox{ if}\vert k\vert > q \end{array} \right.& & {}\\ \end{array}$$

(cf. Brockwell and Davis 1991, pp. 78–79).
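The closed-form expression for \(\gamma _{\mathcal{L}}\) is straightforward to implement. A minimal Python sketch, with the convention \(\beta _{0} = 1\) (the function name is our own):

```python
import numpy as np

def ma_autocov(beta, sigma2, k):
    """Closed-form autocovariance of an MA(q) process with
    beta = (beta_0=1, beta_1, ..., beta_q):
    gamma_L(k) = sigma2 * sum_{n=0}^{q-|k|} beta_n * beta_{n+|k|} for |k| <= q, else 0."""
    beta = np.asarray(beta, dtype=float)
    q = len(beta) - 1
    k = abs(k)
    if k > q:
        return 0.0
    return float(sigma2 * np.dot(beta[:q - k + 1], beta[k:]))

beta = [1.0, 0.4, 0.3]              # MA(2) example
print(ma_autocov(beta, 1.0, 0))     # 1 + 0.16 + 0.09 = 1.25
print(ma_autocov(beta, 1.0, 1))     # 0.4 + 0.4*0.3 = 0.52
print(ma_autocov(beta, 1.0, 2))     # 0.3
print(ma_autocov(beta, 1.0, 3))     # 0.0
```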

3.2 Frequency Domain (Right-Hand Side)

The usual approach to a spectral representation of a stochastic process given in the time domain is to define it in the frequency domain in terms of a complex stochastic process associated with complex exponential base functions. Besides allowing a shorter notation, this also covers the case where the stochastic process in the time domain is complex-valued. Whenever the process in the time domain is real-valued, as is the case with the applications we have in mind, this complication is, however, unnecessary. We therefore restate the main results, given in complex notation in the literature, in terms of pairs of real stochastic processes associated with sine and cosine base functions. We find these to be closer to our natural understanding of the concept of “frequency” than complex exponentials. Thus, we will consider as the spectral representations of the processes \(\mathcal{X}_{T}\) and \(\mathcal{L}_{T}\) (defined in Sect. 3.1), in each case, a tuple of two stochastic processes

$$\displaystyle\begin{array}{rcl} \left (\mathcal{U}_{W}^{(\mathcal{X})},\mathcal{V}_{ W}^{(\mathcal{X})}\right )& =& (\varOmega,\mathcal{A},P,\{\left (\mathcal{U}_{\omega }^{(\mathcal{X})},\mathcal{V}_{\omega }^{(\mathcal{X})}\right ),\omega \in W\}), {}\\ \left (\mathcal{U}_{W}^{(\mathcal{L})},\mathcal{V}_{ W}^{(\mathcal{L})}\right )& =& (\varOmega,\mathcal{A},P,\{\left (\mathcal{U}_{\omega }^{(\mathcal{L})},\mathcal{V}_{\omega }^{(\mathcal{L})}\right ),\omega \in W\}), {}\\ \end{array}$$

which we assume to be

  • one-dimensional and real-valued (i.e. \(\varOmega = \mathbb{R}^{1}\)),

  • frequency-continuous (with \(W = [-\pi,\pi ]\)),

  • mean-square right-continuous, and

  • processes with orthogonal increments.

The relationships of these processes in the frequency domain with \(\mathcal{X}_{T}\) and \(\mathcal{L}_{T}\) in the time domain will become evident in Sect. 3.3. The general mathematical operations within the frequency domain are:

  (5) \((\mathcal{U}_{W}^{(\mathcal{X})},\mathcal{V}_{W}^{(\mathcal{X})})\Longleftrightarrow(\mathcal{U}_{W}^{(\mathcal{L})},\mathcal{V}_{W}^{(\mathcal{L})})\):

    For any ω ∈ W,

    $$\displaystyle\begin{array}{rcl} \mathcal{U}_{W}^{(\mathcal{L})}(\omega )& =& \int _{ -\pi }^{\omega }\mbox{ Re}(H(\lambda ))d\mathcal{U}_{ W}^{(\mathcal{X})}(\lambda ) {}\\ & & -\mbox{ Im}(H(\lambda ))d\mathcal{V}_{W}^{(\mathcal{X})}(\lambda ), {}\\ \mathcal{V}_{W}^{(\mathcal{L})}(\omega )& =& \int _{ -\pi }^{\omega }\mbox{ Im}(H(\lambda ))d\mathcal{U}_{ W}^{(\mathcal{X})}(\lambda ) {}\\ & & +\mbox{ Re}(H(\lambda ))d\mathcal{V}_{W}^{(\mathcal{X})}(\lambda ), {}\\ \mathcal{U}_{W}^{(\mathcal{X})}(\omega )& =& \int _{ -\pi }^{\omega }\mbox{ Re}(\overline{H}(\lambda ))d\mathcal{U}_{ W}^{(\mathcal{L})}(\lambda ) {}\\ & & -\mbox{ Im}(\overline{H}(\lambda ))d\mathcal{V}_{W}^{(\mathcal{L})}(\lambda ), {}\\ \mathcal{V}_{W}^{(\mathcal{X})}(\omega )& =& \int _{ -\pi }^{\omega }\mbox{ Im}(\overline{H}(\lambda ))d\mathcal{U}_{ W}^{(\mathcal{L})}(\lambda ) {}\\ & & +\mbox{ Re}(\overline{H}(\lambda ))d\mathcal{V}_{W}^{(\mathcal{L})}(\lambda ) {}\\ \end{array}$$

    hold (Theorem 4.10.1 in Brockwell and Davis 1991, pp. 154–155), where \(H(\omega ) =\sum _{ k=0}^{\infty }\psi _{k}e^{-ik\omega }\) with ω ∈ [−π, π] is the transfer function of the filter ψ, and where \(\overline{H}(\omega )\) is the transfer function of the inverse filter \(\overline{\psi }\) (note that the transfer function is generally one-dimensional, frequency-continuous and complex-valued). The relations are described by stochastic Riemann-Stieltjes integrals, which will be explained more precisely in Sect. 3.3.

  (6) \((\mathcal{U}_{W}^{(\mathcal{X})},\mathcal{V}_{W}^{(\mathcal{X})})\Longrightarrow F_{\mathcal{X}},\,\,(\mathcal{U}_{W}^{(\mathcal{L})},\mathcal{V}_{W}^{(\mathcal{L})})\Longrightarrow F_{\mathcal{L}}\):

    $$\displaystyle\begin{array}{rcl} F_{\mathcal{X}}(\omega )& =& E\{(\mathcal{U}_{W}^{(\mathcal{X})}(\omega ))^{2}\} + E\{(\mathcal{V}_{ W}^{(\mathcal{X})}(\omega ))^{2}\}, {}\\ F_{\mathcal{L}}(\omega )& =& E\{(\mathcal{U}_{W}^{(\mathcal{L})}(\omega ))^{2}\} + E\{(\mathcal{V}_{ W}^{(\mathcal{L})}(\omega ))^{2}\}, {}\\ \end{array}$$

    hold for ω ∈ W (Priestley 2004, pp. 250–251). These equations express the relations between the stochastic processes \(\mathcal{U}_{W}^{(\cdot )},\mathcal{V}_{W}^{(\cdot )}\) and the so-called spectral distribution function \(F_{(\cdot )}(\omega )\). This function is real-valued and, due to \(F_{(\cdot )}(-\pi ) = 0\) and \(F_{(\cdot )}(\pi ) =\gamma _{(\cdot )}(0)\), has properties similar to those of a probability distribution function. If the derivative \(f_{(\cdot )}(\omega ) = \mathit{dF}_{(\cdot )}(\omega )/d\omega \) exists, \(f_{(\cdot )}(\omega )\) is called the spectral density function and is also known as the power spectrum.

  (7) \((\mathcal{U}_{W}^{(\mathcal{X})},\mathcal{V}_{W}^{(\mathcal{X})})\Longrightarrow F_{\mathcal{L}},\,\,(\mathcal{U}_{W}^{(\mathcal{L})},\mathcal{V}_{W}^{(\mathcal{L})})\Longrightarrow F_{\mathcal{X}}\):

    Substitution of (5) into (6) yields expressions for \(F_{\mathcal{L}}(\omega )\) and \(F_{\mathcal{X}}(\omega )\) in terms of \(\mathcal{U}_{W}^{(\mathcal{X})},\mathcal{V}_{W}^{(\mathcal{X})}\) and \(\mathcal{U}_{W}^{(\mathcal{L})},\mathcal{V}_{W}^{(\mathcal{L})}\).

  (8) \(F_{\mathcal{X}}\Longleftrightarrow F_{\mathcal{L}}\):

    $$\displaystyle\begin{array}{rcl} F_{\mathcal{L}}(\omega ) =\int \limits _{ -\pi }^{\omega }\vert H(\lambda )\vert ^{2}\mathit{dF}_{ \mathcal{X}}(\lambda ),& & {}\\ F_{\mathcal{X}}(\omega ) =\int \limits _{ -\pi }^{\omega }\vert \overline{H}(\lambda )\vert ^{2}\mathit{dF}_{ \mathcal{L}}(\lambda )& & {}\\ \end{array}$$

    (cf. Theorem 4.4.1 in Brockwell and Davis 1991, p. 122). These equations depend on the transfer function of the corresponding filter and reflect direct relationships between the spectral distribution functions of \(\mathcal{X}_{T}\) and \(\mathcal{L}_{T}\). These integrals are the usual (deterministic) Riemann-Stieltjes integrals (see also Sect. 3.3).

Example: MA(q) Process

As explained in Sect. 3.1, the input process of an MA(q) process is white noise. The spectral distribution function for white noise, given by \(F_{\mathcal{E}}(\omega ) = \frac{\sigma _{\mathcal{E}}^{2}} {2\pi } (\omega +\pi ),\omega \in [-\pi,\pi ]\), can be calculated from (12) in Sect. 3.3. The derivative \(f_{\mathcal{E}}\) of this function, the spectral density function for white noise, clearly exists and is \(f_{\mathcal{E}}(\omega ) = \frac{\sigma _{\mathcal{E}}^{2}} {2\pi }\). To evaluate (5)–(8), we need the transfer function \(H(\omega ) =\sum _{ k=0}^{q}\beta _{k}e^{-ik\omega }\); substituting Euler's formula, we may rewrite this as

$$\displaystyle\begin{array}{rcl} H(\omega ) =\sum \limits _{ k=0}^{q}\beta _{ k}\cos (k\omega ) - i\sum \limits _{k=1}^{q}\beta _{ k}\sin (k\omega ).& & {}\\ \end{array}$$

Hence, (8) can be rewritten for an MA(q) process as

$$\displaystyle\begin{array}{rcl} F_{\mathcal{L}}(\omega )& =& \frac{\sigma _{\mathcal{E}}^{2}} {2\pi } \int \limits _{-\pi }^{\omega }\left (\sum \limits _{ k=0}^{q}\beta _{ k}\cos (k\lambda )\right )^{2} + \left (\sum \limits _{ k=1}^{q}\beta _{ k}\sin (k\lambda )\right )^{2}d\lambda. {}\\ \end{array}$$
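Since the spectral density of white noise is constant, the integrand above is simply \(\frac{\sigma _{\mathcal{E}}^{2}}{2\pi }\vert H(\lambda )\vert ^{2}\), the spectral density of the MA(q) process. The following Python sketch (illustrative, with an MA(1) example; names are our own) evaluates this density and checks numerically that it integrates to \(\gamma _{\mathcal{L}}(0) =\sigma _{\mathcal{E}}^{2}\sum _{k}\beta _{k}^{2}\):

```python
import numpy as np

def ma_spectral_density(beta, sigma2, omega):
    """Spectral density of an MA(q) process:
    f_L(omega) = sigma2/(2*pi) * |H(omega)|^2,
    with transfer function H(omega) = sum_{k=0}^{q} beta_k * exp(-i*k*omega)."""
    beta = np.asarray(beta, dtype=float)
    k = np.arange(len(beta))
    H = np.sum(beta * np.exp(-1j * k * omega))
    return sigma2 / (2.0 * np.pi) * np.abs(H) ** 2

# Check: integral of f_L over [-pi, pi] equals gamma_L(0) = sigma2*(1 + beta_1^2)
beta, sigma2 = [1.0, 0.5], 1.0
grid = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
f = np.array([ma_spectral_density(beta, sigma2, w) for w in grid])
total = np.sum(f) * (2.0 * np.pi / len(grid))    # Riemann sum over one period
print(total)   # close to 1.25
```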

3.3 Transitions Between the Time and Frequency Domain

In pursuing a spectral analysis of time series, one establishes a link between discrete-time covariance-stationary stochastic processes and continuous-frequency mean-square right-continuous stochastic processes with orthogonal increments in the form of a stochastic integral, which is very similar to the connection of continuous deterministic functions and the Fourier transform via the Fourier integral. This link can be explained in four steps:

  (a) First we have to familiarize ourselves with the usual (i.e. deterministic) form of the Riemann-Stieltjes integral. The key idea here is that one seeks to integrate some function f (the integrand) with respect to some other function g (the integrator) over some domain of integration; this is achieved by defining a Riemann-Stieltjes sum with respect to some partition of the domain of integration and then determining its “limit” (i.e. the integral value) as the partition becomes infinitely fine (cf. Bartle 1976, Chap. 29).

  (b) The next step is to replace the deterministic integrator by some continuous-parameter stochastic process with parameter set T. Then, one defines a stochastic Riemann-Stieltjes sum with respect to some partition of the interval T and subsequently determines its “limit in mean square” as the partition becomes infinitely fine; thus, the integral value becomes a random variable (cf. Priestley 2004, pp. 154–155).

  (c) Then, one replaces the general integrator process by a continuous-frequency mean-square right-continuous stochastic process with orthogonal increments (which may be viewed as the variables of a “stochastic Fourier transform”) and the general integrand by some complex exponential or sine/cosine with discrete-time parameter t. Then, the time-variable random integral variables constitute a discrete-time stochastic process (cf. Brockwell and Davis 1991, Sects. 4.6–4.8).

  (d) Finally, we have to distinguish two cases: Either some discrete-time covariance-stationary stochastic process \(\mathcal{X}_{T}\) is given and one has to find a corresponding continuous-frequency mean-square right-continuous process with orthogonal increments as its spectral representation, or one defines a process in the frequency domain and seeks its time-domain representation (cf. Brockwell and Davis 1991, Sect. 4.9).
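Step (a), the deterministic Riemann-Stieltjes sum, can be sketched numerically as follows (an illustrative approximation on a fine uniform partition, not part of the formal development):

```python
import numpy as np

def riemann_stieltjes(f, g, a, b, n=100_000):
    """Approximate the deterministic Riemann-Stieltjes integral int_a^b f(t) dg(t)
    by the sum over a fine uniform partition: sum_i f(t_i) * (g(t_{i+1}) - g(t_i))."""
    t = np.linspace(a, b, n + 1)
    return float(np.sum(f(t[:-1]) * np.diff(g(t))))

# For a differentiable integrator, int f dg = int f(t) g'(t) dt;
# e.g. int_0^1 t d(t^2) = int_0^1 2 t^2 dt = 2/3.
print(riemann_stieltjes(lambda t: t, lambda t: t**2, 0.0, 1.0))   # close to 2/3
```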

In the following, we will treat the two cases described in (d) by formulating the mathematical operations from the frequency domain into the time domain and vice versa. In addition, the mathematical relationships between the autocovariance and spectral distribution functions will be explained. We will, however, not mention certain obvious transition relationships that can be obtained via simple substitution.

  (9) \(\mathcal{X}_{T}\Longleftarrow(\mathcal{U}_{W}^{(\mathcal{X})},\mathcal{V}_{W}^{(\mathcal{X})}),\,\,\mathcal{L}_{T}\Longleftarrow(\mathcal{U}_{W}^{(\mathcal{L})},\mathcal{V}_{W}^{(\mathcal{L})})\):

    $$\displaystyle\begin{array}{rcl} \left \{\begin{array}{c} \mathcal{X}_{t} \\ \mathcal{L}_{t}\end{array} \right \}& =\int \limits _{ -\pi }^{\pi }\cos (\omega t)\,\left \{\begin{array}{c} d\mathcal{U}_{W}^{(\mathcal{X})}(\omega ) \\ d\mathcal{U}_{W}^{(\mathcal{L})}(\omega )\end{array} \right \}& {}\\ & \;\;\;\;\;+\sin (\omega t)\,\left \{\begin{array}{c} d\mathcal{V}_{W}^{(\mathcal{X})}(\omega ) \\ d\mathcal{V}_{W}^{(\mathcal{L})}(\omega )\end{array} \right \} & {}\\ \end{array}$$

    hold for any \(t \in \mathbb{Z}\) and ω ∈ W. These equations may be viewed as the stochastic counterparts to the Fourier integral (written in terms of sine/cosine base functions) of some deterministic function; the reader will find these equations in terms of complex exponentials, for instance, in Brockwell and Davis (1991, Theorem 4.8.2, pp. 145–147) or Priestley (2004, pp. 246–252).

  (10) \(\mathcal{X}_{T}\Longrightarrow(\mathcal{U}_{W}^{(\mathcal{X})},\mathcal{V}_{W}^{(\mathcal{X})}),\,\,\mathcal{L}_{T}\Longrightarrow(\mathcal{U}_{W}^{(\mathcal{L})},\mathcal{V}_{W}^{(\mathcal{L})})\):

    $$\displaystyle\begin{array}{rcl} & & \left \{\begin{array}{*{10}c} \mathcal{U}_{W}^{(\mathcal{X})}(\omega ) -\mathcal{U}_{W}^{(\mathcal{X})}(\nu ) \\ \mathcal{V}_{W}^{(\mathcal{X})}(\omega ) -\mathcal{V}_{W}^{(\mathcal{X})}(\nu )\end{array} \right \}\mathop{\longleftarrow }\limits^{m.s.}\frac{1} {2\pi } {}\\ & & \quad \times \sum \limits _{t=-\infty }^{\infty }\mathcal{X}_{ t}\int \limits _{\nu }^{\omega }\left \{\begin{array}{*{10}c} \cos (t\lambda )d\lambda \\ \sin (t\lambda )d\lambda \\ \end{array} \right \}, {}\\ & & \left \{\begin{array}{*{10}c} \mathcal{U}_{W}^{(\mathcal{L})}(\omega ) -\mathcal{U}_{W}^{(\mathcal{L})}(\nu ) \\ \mathcal{V}_{W}^{(\mathcal{L})}(\omega ) -\mathcal{V}_{W}^{(\mathcal{L})}(\nu )\end{array} \right \}\mathop{\longleftarrow }\limits^{m.s.}\frac{1} {2\pi } {}\\ & & \quad \times \sum \limits _{t=-\infty }^{\infty }\mathcal{L}_{ t}\int \limits _{\nu }^{\omega }\left \{\begin{array}{*{10}c} \cos (t\lambda )d\lambda \\ \sin (t\lambda )d\lambda \\ \end{array} \right \}, {}\\ \end{array}$$

    hold for any \(t \in \mathbb{Z}\) and ω, ν ∈ W. These equations represent the operation inverse to (9), so that one obtains the increments \(\mathcal{U}_{W}^{(\cdot )}(\omega ) -\mathcal{U}_{W}^{(\cdot )}(\nu )\), \(\mathcal{V}_{W}^{(\cdot )}(\omega ) -\mathcal{V}_{W}^{(\cdot )}(\nu )\) and not \(\mathcal{U}_{W}^{(\cdot )}(\omega )\) or \(\mathcal{V}_{W}^{(\cdot )}(\omega )\) themselves (Theorem 4.9.1 in Brockwell and Davis 1991, pp. 151–152). Here, \(\mathop{\longleftarrow }\limits^{m.s.}\) denotes convergence in mean square.

  (11) \(\gamma _{\mathcal{X}}\Longleftarrow F_{\mathcal{X}},\,\gamma _{\mathcal{L}}\Longleftarrow F_{\mathcal{L}}\):

    $$\displaystyle{\left \{\begin{array}{*{10}c} \gamma _{\mathcal{X}}(k) \\ \gamma _{\mathcal{L}}(k)\end{array} \right \} =\int \limits _{ -\pi }^{\pi }\cos (k\omega )\left \{\begin{array}{*{10}c} \mathit{dF}_{\mathcal{X}}(\omega ) \\ \mathit{dF}_{\mathcal{L}}(\omega )\end{array} \right \},}$$

    hold for \(k \in \mathbb{Z},\omega \in W\) and describe the mathematical relationships between a given spectral distribution and the autocovariance function (known as Wold’s theorem, a discrete version of the Wiener-Khintchine Theorem), see Brockwell and Davis (1991, Corollary 4.3.1, p. 119) or Priestley (2004, pp. 222–226). The described Fourier transform is reduced to a cosine transform due to the fact that the autocovariance function of any real stochastic process is even (see Priestley 2004, p. 214).

  (12) \(\gamma _{\mathcal{X}}\Longrightarrow F_{\mathcal{X}},\,\gamma _{\mathcal{L}}\Longrightarrow F_{\mathcal{L}}\):

    $$\displaystyle\begin{array}{rcl} \left \{\begin{array}{*{10}c} F_{\mathcal{X}}(\omega ) \\ F_{\mathcal{L}}(\omega )\end{array} \right \} = \left \{\begin{array}{*{10}c} \gamma _{\mathcal{X}}(0) \\ \gamma _{\mathcal{L}}(0)\end{array} \right \}\frac{\omega +\pi } {2\pi } + \frac{1} {\pi } \sum \limits _{k=1}^{\infty }\left \{\begin{array}{*{10}c} \gamma _{\mathcal{X}}(k)\frac{\sin k\omega } {k} \\ \gamma _{\mathcal{L}}(k)\frac{\sin k\omega } {k}\end{array} \right \}& & {}\\ \end{array}$$

    hold for \(k \in \mathbb{Z},\omega \in W\) and constitute the inverse operation to (11); see Brockwell and Davis (1991, Theorem 4.9.1, pp. 151–152) and Priestley (2004, pp. 222–226).

Example: MA(q) Process

In the previous sections, the main results in the time and frequency domain for an MA(q) process were presented. The equations stated in this section can be used to verify these results.
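As a numerical sketch of such a verification for the white-noise case (function names are our own, illustrative choices), the following computes \(F_{\mathcal{E}}\) from the autocovariances via (12) and recovers the autocovariances from the density \(f_{\mathcal{E}}\) via the cosine transform in (11):

```python
import numpy as np

def spectral_distribution(gamma, omega, kmax=200):
    """Operation (12): spectral distribution from the autocovariances,
    F(omega) = gamma(0)*(omega + pi)/(2*pi)
             + (1/pi) * sum_{k>=1} gamma(k) * sin(k*omega)/k."""
    F = gamma(0) * (omega + np.pi) / (2.0 * np.pi)
    for k in range(1, kmax + 1):
        F += gamma(k) * np.sin(k * omega) / (k * np.pi)
    return F

def autocov_from_density(f, k, n=20000):
    """Operation (11) for an absolutely continuous F with density f:
    gamma(k) = int_{-pi}^{pi} cos(k*omega) f(omega) d(omega), via a Riemann sum."""
    w = np.linspace(-np.pi, np.pi, n, endpoint=False)
    return float(np.sum(np.cos(k * w) * f(w)) * (2.0 * np.pi / n))

# White noise with sigma^2 = 1: gamma(0) = 1, gamma(k) = 0 otherwise,
# so F(omega) = (omega + pi)/(2*pi) and f(omega) = 1/(2*pi).
gamma_wn = lambda k: 1.0 if k == 0 else 0.0
print(spectral_distribution(gamma_wn, np.pi))    # 1.0 = gamma(0)
print(spectral_distribution(gamma_wn, -np.pi))   # 0.0

f_wn = lambda w: np.full_like(w, 1.0 / (2.0 * np.pi))
print(autocov_from_density(f_wn, 0))             # close to 1.0
print(autocov_from_density(f_wn, 5))             # close to 0.0
```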

Conclusion and Outlook

In this paper we demonstrated certain aspects of the Magic Square, which connects a covariance-stationary stochastic process with its autocovariance function, its spectral representation, and the corresponding spectral distribution or density function (if it exists). To keep the presentation short, we focussed on the example of a moving average process and its transition from the time into the frequency domain. The application of more complex (and more widely used) stochastic processes in the time domain such as autoregressive moving average processes would be an obvious extension of this scenario, which we will deal with in the future. Furthermore, it would be valuable to explore the principles behind the transition from the frequency into the time domain by specifying suitable spectral processes and to find their time-domain representations.