
1 Introduction

In practice, many phenomena have random characteristics and cannot be represented by deterministic functions. In these cases, stochastic processes often allow for a sufficient description (see e.g. Koch and Schmidt 1994; Moritz 1980; Welsch et al. 2000). Applications of stochastic processes in geodesy tend to focus on analyses within the time domain at the level of the measurements themselves. In contrast, the relationships of a process with its spectral representation, autocovariance function and spectral distribution (or density) function are used less frequently, or even incorrectly. One reason for this is that the mathematics and thus the computational aspects of these relationships and representations are rather intricate and often not readily available for the specific type of process to be used in a practical situation. To remedy this problem, Krasbutter et al. (2015) discussed a kind of Magic Square with respect to a general real-valued, one-dimensional, discrete-time, covariance-stationary stochastic process. The Magic Square has the advantage of being a well-arranged representation of these four quantities of a process (the process itself, its spectral representation, the autocovariance function and the spectral distribution function) in the time and frequency domain, together with their relationships. Furthermore, Krasbutter et al. (2015) expanded the Magic Square to stochastic processes obtained by non-recursive filtering of an input process and evaluated the results by application to a q-th order moving average (MA(q)) process (see the following Sect. 2 for an overview).

In this contribution the Magic Square will be formulated for another well-known process: a discrete-time p-th order autoregressive (AR(p)) process. The use of such a process as a description of the random error term in linear observation equations seems to have been proposed first by the econometricians D. Cochrane and G. H. Orcutt (Cochrane and Orcutt 1949). In geodesy, this kind of model is becoming increasingly popular; see Koch and Schmidt (1994) and Schuh (2003) for general descriptions and Schubert et al. (2019) for a current application to the highly correlated measurement errors of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) satellite mission. The AR(p) process may be viewed as being obtained by recursive filtering. The Magic Square of such a process is explained in Schuh et al. (2014) and further discussed in Schuh (2016).

In contrast, in the current contribution we study the Magic Square for AR(p) processes obtained through non-recursive filtering. For this purpose, we elaborate a reformulation of that process. In Sect. 3 the transformation is presented, with a MA process of infinite order (MA(∞)) as a result. The Magic Square of this transformed stochastic process is an alternative representation of the four quantities of an AR process given in Schuh et al. (2014). The transformation can be seen as a link to the Magic Square described in Krasbutter et al. (2015). To demonstrate the evaluation of these results, the transformation is applied to an AR(1) process and the corresponding Magic Square is compared with representations given in the literature (cf. Sect. 3). In Sect. 4 this paper is concluded with a summary and an outlook.

2 The Magic Square of a Non-Recursive Filtered Stochastic Process

A general stochastic process \({\mathcal {X}}_T\) is defined by a family of random variables on a probability space, symbolically

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\mathcal{X}}_T=(\varOmega,{\mathcal{A}}, P, \{{\mathcal{X}}_t,t\in T\}), \end{array} \end{aligned} $$
(1)

where Ω denotes the sample space, \({\mathcal {A}}\) a σ-algebra of events, P the probability measure and \({\mathcal {X}}_t\) a random variable.

Additionally we restrict our representation to stochastic processes with the following properties:

  • One-dimensional and real-valued:

    $$\displaystyle \begin{aligned} \begin{array}{rcl} (\varOmega,{\mathcal{A}})\rightarrow (\mathbb{R},{\mathcal{B}}), \end{array} \end{aligned} $$

    where \({\mathcal {B}}\) is the Borel σ-algebra, which is generated by all real-valued, one-dimensional, left-open and right-closed intervals.

  • Discrete in time with constant and general sampling rate Δt:

    $$\displaystyle \begin{aligned} \begin{array}{rcl} t=n\cdot \varDelta t , n\in \mathbb{Z}, \varDelta t \in \mathbb{R}. \end{array} \end{aligned} $$

    On account of this, a random variable depends only on n and it will be symbolised by \({\mathcal {X}}_n\) in the following.

  • Covariance-stationary with zero mean, variance \(\sigma^2\) and autocovariance function \(\gamma ^{\mathcal {X}}_k\), where k is called the lag (cf. Brockwell and Davis 1991, pp. 11–12).

Additionally, it is assumed that the process \({\mathcal {X}}_T\) is obtained by filtering another one-dimensional, discrete-time, covariance-stationary stochastic process \({\mathcal {U}}_T\) by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} {\mathcal{X}}_n=\sum_{j=0}^{q}\psi_j{\mathcal{U}}_{n-j}=\varPsi(L){\mathcal{U}}_{n}, \end{array} \end{aligned} $$
(2)

where \(\varPsi(L) = \psi_0 + \psi_1 L + \ldots + \psi_q L^q\) with lag operator notation \(L^j{\mathcal {U}}_n={\mathcal {U}}_{n-j}\) is a non-recursive, causal, absolutely summable and invertible filter. The order q of the filter can be infinite (q = ∞) or finite (\(q \in \mathbb {N}\)).
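
As a minimal numerical sketch of the non-recursive filtering in (2), a finite filter can be applied to a white-noise input with NumPy; the coefficient values and variable names here are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative filter coefficients psi_0, ..., psi_q with q = 2 (assumed values)
psi = np.array([1.0, 0.5, 0.25])
u = rng.standard_normal(1000)          # input process U_n, here white noise

# X_n = sum_{j=0}^{q} psi_j * U_{n-j}: the causal convolution of (2);
# trimming to len(u) keeps the outputs aligned with the inputs
x = np.convolve(u, psi)[: len(u)]
```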

The Magic Square of this real-valued, one-dimensional, discrete-time, covariance-stationary, non-recursive filtered stochastic process is presented in Fig. 1. In the upper left corner the stochastic process itself is given; it can be seen as a collection of random variables at equidistant points in time.

Fig. 1

Magic Square for covariance-stationary, discrete-time, non-recursive filtered stochastic process with upper left: stochastic process, lower left: autocovariance function, upper right: spectral representation of the stochastic process and lower right: spectral representation of the autocovariance function. The interrelations are symbolised by arrows with the corresponding mathematical operations, where \(\mathcal {F}\{\cdot \}\)/\(\mathcal {F}^{-1}\{\cdot \}\) symbolise the Fourier transform/integral, \(\mathcal {F}_{(s)}\{\cdot \}\)/\(\mathcal {F}_{(s)}^{-1}\{\cdot \}\) the stochastic Fourier transform/integral, \({E \left \{ \cdot \right \}}\) the expectation and ‘⋆ ’ a correlation

The spectral representation of the described stochastic process (upper right corner) is denoted by \(d\widehat {{\mathcal {Z}}}^{{\mathcal {X}}}_s(\nu )\). In the following, the hat symbolises a quantity in the frequency domain and the subscript s the fact that the corresponding representation in the time domain is discrete in time.

Moreover, \(d\widehat {{\mathcal {Z}}}^{{\mathcal {X}}}_s(\nu )\) is a stochastic orthogonal increment process with the following properties:

  • One-dimensional and complex-valued,

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \varOmega \rightarrow \mathbb{C}, \end{array} \end{aligned} $$
  • Frequency-continuous with ν defined within the interval \([-\nu^N, \nu^N]\), where \(\nu ^N=\frac {1}{2\varDelta t}\) is known as the Nyquist frequency.

  • Orthogonal increments (cf. Brockwell and Davis 1991, pp. 138–140):

    $$\displaystyle \begin{aligned} \begin{array}{rcl} {E \left\{ d\widehat{{\mathcal{Z}}}^{{\mathcal{X}}}_s(\nu_1)(d\widehat{{\mathcal{Z}}}^{{\mathcal{X}}}_s(\nu_2))^* \right\}}=0, \quad \text{for } \nu_1\neq\nu_2, \end{array} \end{aligned} $$

    where ∗ denotes the complex conjugate.

While the proposed process can be described as a filtering of an input process \({\mathcal {U}}_n\) in the time domain (see (2)), the description in the frequency domain is:

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} d\widehat{{\mathcal{Z}}}^{{\mathcal{X}}}_s(\nu)=\widehat{\varPsi}_s(\nu)d\widehat{{\mathcal{Z}}}^{\mathcal{U}}_s(\nu), \end{array} \end{aligned} $$
(3)

where \(d\widehat {{\mathcal {Z}}}^{{\mathcal {U}}}_s(\nu )\) is the corresponding spectral representation of the input stochastic process \({\mathcal {U}}_n\) and \(\widehat {\varPsi }_s(\nu )=\sum _{j=0}^{q}\psi _{j}e^{-i2\pi \nu j \varDelta t}\) is the Fourier transform of the filter Ψ(L), called the transfer function (cf. Priestley 2004, pp. 263–270).

The interrelation between the stochastic process and its spectral representation, which is indicated by the harpoons, can be described by the “stochastic Fourier transform” \(\mathcal {F}_{(s)}\{\cdot \}\) and the converse relationship by the “stochastic Fourier integral” \(\mathcal {F}_{(s)}^{-1}\{\cdot \}\). The mathematical formulas and a detailed explanation of these relations are not the focus of this paper and are omitted; the interested reader is referred to Krasbutter et al. (2015) and Lindquist and Picci (2015, Chapter 3).

The two quantities in the lower row of the square are the autocovariance function \(\gamma _k^{{\mathcal {X}}}\) (left side) and its spectral representation \(d\widehat {\varGamma }_s^{{\mathcal {X}}}(\nu )\) (right side). This spectral representation is an increment process of the spectral distribution function \(\widehat {\varGamma }^{{\mathcal {X}}}_s(\nu )\). If the derivative \(d\widehat {\varGamma }^{{\mathcal {X}}}_s(\nu )/d\nu \) exists, it is called the spectral density function \(\widehat {\gamma }^{{\mathcal {X}}}_s(\nu )\).

Both quantities in the lower row of the square are deterministic functions; the autocovariance function is discrete and its spectral representation is frequency-continuous, with ν defined in the interval \([-\nu^N, \nu^N]\) and continued periodically outside of this range. As described in Priestley (2004, p. 214), \(\gamma _k^{{\mathcal {X}}}\) and \(\widehat {\gamma }_s^{{\mathcal {X}}}(\nu )\) are even if the related stochastic process is real-valued, which we have ensured by the characteristics of the process described above. Furthermore, the autocovariance function can be formulated in terms of the filter Ψ(L) and the autocovariance function \(\gamma _k^{{\mathcal {U}}}\) of the input process, while its spectral representation is given by the transfer function \(\widehat {\varPsi }_s(\nu )\) of the filter and the increment \(d\widehat {\varGamma }_s^{{\mathcal {U}}}(\nu )\) of the spectral distribution function of the input process. Thus, the autocovariance function \(\gamma _k^{{\mathcal {X}}}\) is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \gamma_k^{\mathcal{X}}=\sum_{m=0}^{\infty}\sum_{s=0}^{\infty}\psi_m\psi_s\gamma_{|k|-m+s}^{\mathcal{U}} \end{array} \end{aligned} $$
(4)

and the corresponding spectral representation by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} d\widehat{\varGamma}^{\mathcal{X}}_{s}(\nu)=|\widehat{\varPsi}_s(\nu)|{}^2d\widehat{\varGamma}^{{\mathcal{U}}}_{s}(\nu). \end{array} \end{aligned} $$
(5)

The interrelation of these two equations can be described by the “deterministic Fourier transform” \(\mathcal {F}\{\cdot \}\) and the converse relationship by the “deterministic Fourier integral” \(\mathcal {F}^{-1}\{\cdot \}\). A detailed explanation of these two equations and their interrelation is given by Priestley (2004, Sect. 4.12), Brockwell and Davis (1991, Proposition 3.1.2) and Krasbutter et al. (2015).
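
The following sketch evaluates (4) for a finite filter; the function name, the truncation of the sums and the white-noise test input are illustrative assumptions. For white noise, (4) must reduce to \(\sigma^2\sum_{s}\psi_{s+|k|}\psi_s\), which the printed values confirm:

```python
def filtered_autocovariance(psi, gamma_u, k):
    """Evaluate Eq. (4): gamma^X_k = sum_m sum_s psi_m psi_s gamma^U_{|k|-m+s},
    for a finite filter psi and an even input autocovariance gamma_u,
    given as gamma_u[lag] for lag = 0, 1, ... (zero beyond its length)."""
    total = 0.0
    for m in range(len(psi)):
        for s in range(len(psi)):
            lag = abs(abs(k) - m + s)          # gamma^U is even in the lag
            if lag < len(gamma_u):
                total += psi[m] * psi[s] * gamma_u[lag]
    return total

# white-noise input: gamma^U_0 = sigma^2 = 1, zero otherwise
psi = [1.0, 0.5, 0.25]
gamma_u = [1.0]
print([filtered_autocovariance(psi, gamma_u, k) for k in range(4)])
# -> [1.3125, 0.625, 0.25, 0.0]
```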

As explained above, the upper corners contain stochastic functions, whereas the lower corners contain deterministic functions, so that the transformation from top to bottom can be interpreted as a reduction from stochastic to deterministic. This reduction is achieved by taking the expectation (symbolised by \({E \left \{ \cdot \right \}}\) in Fig. 1), which has the drawback of information loss; consequently, the reverse way (from bottom to top) is not possible without additional information (indicated by the missing arrow from the lower to the upper corners). On the left side, the operation from top to bottom can be seen as a stochastic correlation, symbolised by ‘⋆’ in Fig. 1. The corresponding operation in the frequency domain is a stochastic multiplication.

3 The Magic Square of a p-th Order Autoregressive Process

The discrete-time, covariance-stationary, invertible AR(p) process with \(p\in \mathbb {N}\) is defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} {{\mathcal{X}}}_n: &\displaystyle =&\displaystyle \theta_1{\mathcal{X}}_{n-1}+\ldots+\theta_p{{\mathcal{X}}}_{n-p}+{\mathcal{E}}_n, \,\, (n \in \mathbb{Z}) \\ \Longleftrightarrow \varTheta(L){\mathcal{X}}_n&\displaystyle =&\displaystyle {\mathcal{E}}_n, \end{array} \end{aligned} $$
(6)

where \(\varTheta(L) = 1 - \theta_1 L - \ldots - \theta_p L^p\) is a recursive, causal filter and \({\mathcal {E}}_n\) denotes white noise with \({\mathcal {E}}_n\sim N(0,\sigma _{\mathcal {E}}^2)\) (Brockwell and Davis 1991, Definition 3.1.1). The AR process is covariance-stationary if and only if the roots of the polynomial

$$\displaystyle \begin{aligned} \begin{array}{rcl} 1-\theta_1z-\theta_2z^2-\ldots - \theta_pz^p =0, \quad z\in\mathbb{C} \end{array} \end{aligned} $$

lie outside of the unit circle. Furthermore, AR processes with finite order are invertible (cf. Box and Jenkins 1976, Sect. 3.2.1).
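
This root condition can be checked numerically. The following fragment is a sketch under the stated definition; the helper name and the test coefficients are our own illustrative choices:

```python
import numpy as np

def is_covariance_stationary(theta):
    """theta = [theta_1, ..., theta_p]; test whether all roots of
    1 - theta_1 z - ... - theta_p z^p = 0 lie outside the unit circle."""
    # numpy.roots expects coefficients ordered from the highest power down
    coeffs = [-t for t in reversed(theta)] + [1.0]
    return bool(np.all(np.abs(np.roots(coeffs)) > 1.0))

print(is_covariance_stationary([0.5]))        # AR(1) with |theta_1| < 1: True
print(is_covariance_stationary([1.2, -0.9]))  # an AR(2) example: True
```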

In Schuh et al. (2014) the Magic Square of an AR(p) process is presented. In contrast, the idea of this contribution is to transform the AR(p) process given by (6) into the form of (2). The advantage of a non-recursive representation is that it is easier to evaluate with respect to warm-up, the behaviour of the covariance and the stationarity of the process. In so doing, the Magic Square explained in Sect. 2 can be applied to the reformulated process. The result is an alternative but equivalent representation of the AR(p) process within the Magic Square; the transformation can be seen as a link between these two representations of an AR(p) process within the Magic Square.

In a first step, the reformulation of the AR(p) process into the form of (2) is described, and afterwards the alternative representation of the Magic Square is given. All outcomes of each step are applied to an AR(1) process and compared with the results given in the literature, where no reformulation is applied.

3.1 Reformulation of the AR(p) Process

The reformulation can be seen as a pre-processing step and is symbolised in Fig. 2, where the Magic Square of the AR(p) process is presented. This additional step starts with the multiplication of (6) by \(\varTheta(L)^{-1}\) on both sides:

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\mathcal{X}}_n&\displaystyle =&\displaystyle \varTheta(L)^{-1}{\mathcal{E}}_n. \end{array} \end{aligned} $$

(cf. Gilgen 2006, p. 251).

Fig. 2

Magic Square for covariance-stationary, discrete-time autoregressive process of order p (AR(p)) with general sampling rate Δt. This AR(p) process is reformulated in a pre-processing step to a moving average process of infinite order. The Magic Square is then derived from this reformulated process with upper left: reformulated stochastic process itself, lower left: autocovariance function, upper right: spectral representation of the reformulated stochastic process and lower right: spectral density function. The interrelations are symbolised by arrows with the corresponding mathematical operations, where \(\mathcal {F}\{\cdot \}\)/\(\mathcal {F}^{-1}\{\cdot \}\) symbolise the Fourier transform/integral, \(\mathcal {F}_{(s)}\{\cdot \}\)/\(\mathcal {F}_{(s)}^{-1}\{\cdot \}\) the stochastic Fourier transform/integral, \({E \left \{ \cdot \right \}}\) the expectation and ‘⋆ ’ a correlation

To familiarise ourselves with the filter \(\varTheta(L)^{-1}\), whose order we do not yet know, a reformulation is carried out. The inverse filter of Θ(L) can be formulated as

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varTheta(L)^{-1}&\displaystyle =&\displaystyle \frac{1}{1-\theta_1L-\ldots-\theta_pL^p} \end{array} \end{aligned} $$
(7)

(see Hamilton 1994, Chapter 2.4). This inverse representation can be rewritten by using the infinite geometric series

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \frac{1}{1-x}=1+x+x^2+x^3+\ldots \end{array} \end{aligned} $$
(8)

with |x| < 1 (cf. Andrews 1998, Eq. (3.3)) and results in

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \varTheta(L)^{-1} &\displaystyle =&\displaystyle 1+(\theta_1L+\ldots +\theta_pL^p)\\ &\displaystyle &\displaystyle + (\theta_1L+\ldots +\theta_pL^p)^2+\ldots. \end{array} \end{aligned} $$
(9)

The next step is to expand and rearrange (9) to

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varTheta(L)^{-1} &\displaystyle =&\displaystyle 1+\overline{\theta}_1L+\overline{\theta}_2L^2+\ldots\\ &\displaystyle =&\displaystyle \overline{\varTheta}(L), \end{array} \end{aligned} $$
(10)

where \(\overline {\varTheta }(L)\) is the reformulated representation of Θ(L). The determination of \(\overline {\theta }_i\) for \(i \in \mathbb {N}\) can be achieved by the following algorithm (a short numerical sketch follows after Step 3):

Step 1:

Find all s combinations of the sorted product \(\theta _1^{l_{1,j}}\cdot \theta _2^{l_{2,j}}\cdot \ldots \cdot \theta _p^{l_{p,j}}\), where \(\sum _{m=1}^{p}m\cdot l_{m,j}=i\) with \(l_{m,j}\in \mathbb {N}_0\) holds. The index j = 1, …, s specifies the combination number.

Step 2:

Determination of

$$\displaystyle \begin{aligned} d_j=\frac{(\sum_{m=1}^{p}l_{m,j})!}{l_{1,j}!\cdot l_{2,j}! \cdot \ldots \cdot l_{p,j}!}, \end{aligned} $$

where ! denotes the factorial function. This factor indicates the number of possibilities of arranging the filter coefficients \(\theta_1, \theta_2, \ldots, \theta_p\) within combination j.

Step 3:

The last step is to determine the filter coefficient \(\overline {\theta }_i\) by

$$\displaystyle \begin{aligned} \overline{\theta}_i=\sum_{j=1}^{s}(d_j\cdot \theta_1^{l_{1,j}}\cdot \theta_2^{l_{2,j}}\cdot \ldots \cdot \theta_p^{l_{p,j}}). \end{aligned} $$
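
Instead of enumerating the combinations of Step 1, the coefficients can also be obtained from the equivalent recursion \(\overline{\theta}_i=\sum_{m=1}^{\min(p,i)}\theta_m\overline{\theta}_{i-m}\) with \(\overline{\theta}_0=1\), which follows from \(\varTheta(L)\overline{\varTheta}(L)=1\). The following sketch (an illustrative implementation with assumed coefficient values, not part of the original algorithm) reproduces the AR(2) expansion shown below:

```python
def ma_coefficients(theta, n):
    """First n+1 coefficients bar_theta_0, ..., bar_theta_n of
    Theta(L)^{-1}, via bar_theta_i = sum_m theta_m * bar_theta_{i-m}."""
    p = len(theta)
    bar = [1.0]                                   # bar_theta_0 = 1
    for i in range(1, n + 1):
        bar.append(sum(theta[m - 1] * bar[i - m]
                       for m in range(1, min(p, i) + 1)))
    return bar

t1, t2 = 0.5, 0.3                                  # assumed AR(2) coefficients
print(ma_coefficients([t1, t2], 3))
print([1.0, t1, t1**2 + t2, t1**3 + 2 * t1 * t2])  # matches the expansion
```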

Hence, the alternative representation of (6) is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} {\mathcal{X}}_n&\displaystyle =&\displaystyle \overline{\varTheta}(L){\mathcal{E}}_n = \sum_{j=0}^{\infty}\overline{\theta}_j{\mathcal{E}}_{n-j}, \text{with }\overline{\theta}_0=1. \end{array} \end{aligned} $$
(11)

This process is known as a moving average process of infinite order (MA(∞) process), which is a special form of the filtered stochastic process described in Sect. 2.

Example of the Transformation

The transformation is applied, by way of example, to an AR(1) process, which is defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varTheta(L){\mathcal{X}}_n:={\mathcal{E}}_n, \end{array} \end{aligned} $$
(12)

with \(\varTheta(L) = 1-\theta_1 L\) and \(|\theta_1| < 1\) (cf. Hamilton 1994, p. 53). This process has \(\theta_1\) as its only filter coefficient, and therefore the described algorithm to determine \(\overline {\theta }_i\) simplifies, because \(l_{1,\cdot} = i\). Hence, the reformulated AR(1) process is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {\mathcal{X}}_n&\displaystyle =&\displaystyle \sum_{j=0}^{\infty}\theta_1^j{\mathcal{E}}_{n-j}. \end{array} \end{aligned} $$
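
As a numerical illustration (the value of \(\theta_1\), the series length and the truncation J are assumptions for the sketch), the recursively generated AR(1) process and a truncated version of this MA(∞) representation agree up to effects of order \(\theta_1^J\):

```python
import numpy as np

rng = np.random.default_rng(1)
theta1, n, J = 0.8, 500, 100        # assumed parameter values
e = rng.standard_normal(n + J)      # white noise E_n

# recursive generation according to (6)
x_rec = np.zeros(n + J)
for t in range(1, n + J):
    x_rec[t] = theta1 * x_rec[t - 1] + e[t]

# non-recursive generation with psi_j = theta_1^j, truncated at J terms
x_ma = np.convolve(e, theta1 ** np.arange(J))[: n + J]

# after discarding the warm-up, the difference is of order theta_1^J
print(np.max(np.abs(x_rec[J:] - x_ma[J:])))
```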

The results for transformed AR processes of higher order are much more complicated. For instance, the reformulated AR(2) filter is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} \overline{\varTheta}(L)&\displaystyle =&\displaystyle \underbrace{1}_{\overline{\theta}_0}+\underbrace{\theta_1}_{\overline{\theta}_1}L+\underbrace{(\theta_1^2+\theta_2)}_{\overline{\theta}_2}L^2+\underbrace{(\theta_1^3+2\theta_1\theta_2)}_{\overline{\theta}_3}L^3\\ &\displaystyle +&\displaystyle \underbrace{(\theta_1^4+3\theta_1^2\theta_2+\theta_2^2)}_{\overline{\theta}_4}L^4+\underbrace{(\theta_1^5+4\theta_1^3\theta_2+3\theta_1\theta_2^2)}_{\overline{\theta}_5}L^5\\ &\displaystyle +&\displaystyle \underbrace{(\theta_1^6+5\theta_1^4\theta_2+6\theta_1^2\theta_2^2+\theta_2^3)}_{\overline{\theta}_6}L^6 + \ldots. \end{array} \end{aligned} $$

In this contribution, due to the complexity of AR processes of higher order, the results are applied only to AR(1) processes.

3.2 Magic Square of the Reformulated AR(p) Process

The Magic Square of the reformulated AR(p) process is described and the results are presented in Fig. 2. The derivation starts with the quantities in the time domain (corners on the left-hand side of the square), followed by the results in the frequency domain (corners on the right-hand side of the square).

3.2.1 Time Domain (Left-Hand Side)

The upper left corner of the square is given by (11), being the reformulated AR process itself. This process has an autocovariance function (lower left corner), which can be derived by using (4) and the property

$$\displaystyle \begin{aligned} \begin{array}{rcl} \gamma_{k}^{\mathcal{E}}=\left \{\begin{array}{ccc} \sigma_{\mathcal{E}}^2 &\displaystyle \text{for} &\displaystyle k=0\\ 0 &\displaystyle \text{else} &\displaystyle \end{array}\right. \end{array} \end{aligned} $$

of white noise, resulting in

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \gamma_k^{{\mathcal{X}}}=\sigma_{\mathcal{E}}^2\sum_{s=0}^{\infty}\overline{\theta}_{s+|k|}\overline{\theta}_s. \end{array} \end{aligned} $$
(13)

Example: AR(1) Process

As described in the last section, the reformulation of the AR(1) process to an MA(∞) process is given by \(\overline {\theta }_i=\theta _1^i\). This result is substituted into (13), leading to

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \gamma_k^{{\mathcal{X}}} &\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2\sum_{s=0}^{\infty}\theta_1^{s+|k|}\theta_1^s =\sigma_{\mathcal{E}}^2\sum_{s=0}^{\infty}\theta_1^{2s+|k|}. \end{array} \end{aligned} $$
(14)

It can be shown that (14) is an alternative representation of the autocovariance function

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \gamma_k^{\mathcal{X}} &\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2\theta_1^{|k|}\cdot\frac{1}{1-\theta_1^2}, \end{array} \end{aligned} $$
(15)

which is often stated in the literature (cf. Priestley 2004, Eq. (3.5.16)). To show the equivalence between these two representations of the autocovariance function, (14) is rearranged to

$$\displaystyle \begin{aligned} \begin{array}{rcl} \gamma_k^{{\mathcal{X}}} &\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2\theta_1^{|k|}\sum_{s=0}^{\infty}\theta_1^{2s}\\ &\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2\theta_1^{|k|}(1+\theta_1^{2}+\theta_1^{4}+\ldots ). \end{array} \end{aligned} $$

In the next step, the pre-processing step is undone by using the definition (8) of the infinite geometric series; the result is (15).
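
A short numerical check of this equivalence (the parameter values and the truncation length are illustrative assumptions):

```python
theta1, sigma2 = 0.7, 2.0                      # assumed theta_1 and sigma^2

def gamma_series(k, terms=200):                # Eq. (14), truncated
    return sigma2 * sum(theta1 ** (2 * s + abs(k)) for s in range(terms))

def gamma_closed(k):                           # Eq. (15)
    return sigma2 * theta1 ** abs(k) / (1.0 - theta1 ** 2)

# the differences vanish up to the truncation error of the series
print([round(gamma_series(k) - gamma_closed(k), 12) for k in range(5)])
```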

3.2.2 Frequency Domain (Right-Hand Side)

By using (3), the spectral representation of the reformulated AR(p) process is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} d\widehat{\mathcal{Z}}_s^{\mathcal{X}}(\nu) &\displaystyle =&\displaystyle d\widehat{{\mathcal{Z}}}_s^{\mathcal{E}}(\nu)\widehat{\overline{\varTheta}}_s(\nu)\\ &\displaystyle =&\displaystyle \,d\widehat{\mathcal{Z}}_s^{\mathcal{E}}(\nu)\sum_{j=0}^{\infty}\overline{\theta}_je^{-i2\pi\nu j\varDelta t}, \end{array} \end{aligned} $$
(16)

where \(\widehat {\overline {\varTheta }}_s(\nu )=\sum _{j=0}^{\infty }\overline {\theta }_je^{-i2\pi \nu j\varDelta t}\) is the transfer function of the filter \(\overline {\varTheta }(L)\) and \(d\widehat {{\mathcal {Z}}}_s^{\mathcal {E}}(\nu )\) the spectral representation of discrete-time white noise, also known as the increment process of a Wiener process (see Lindquist and Picci 2015, Sect. 3.3.3). The spectral representation is defined for ν in the interval \([-\nu^N, \nu^N]\) and is periodic outside of this range.

The spectral representation of the autocovariance function is obtained by using (5) and the white-noise property \(d\widehat {\varGamma }_s^{\mathcal {E}}(\nu )=\sigma _{\mathcal {E}}^2d\nu \):

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} d\widehat{\varGamma}_s^{\mathcal{X}}(\nu)=\sigma_{\mathcal{E}}^2d\nu (\sum_{j=0}^{\infty}\sum_{s=0}^{\infty}\overline{\theta}_j\overline{\theta}_s e^{-i2\pi \nu j\varDelta t}e^{i2\pi\nu s\varDelta t}).\quad \end{array} \end{aligned} $$
(17)

This sum can be reorganised to

$$\displaystyle \begin{aligned} d\widehat{\varGamma}_s^{\mathcal{X}}(\nu)&= \sigma_{\mathcal{E}}^2d\nu\Big(\overline{\theta}_0\overline{\theta}_0+\sum_{j=1}^{\infty}\overline{\theta}_j\overline{\theta}_0e^{-i2\pi\nu j\varDelta t}\\ &+\sum_{s=1}^{\infty}\overline{\theta}_s\overline{\theta}_0e^{i2\pi\nu s \varDelta t}+\sum_{j=1}^{\infty}\overline{\theta}_j^2\\ &+ \sum_{j=1}^{\infty}\sum_{s=1, s \neq j}^{\infty}\overline{\theta}_j\overline{\theta}_se^{-i2\pi\nu j\varDelta t}e^{i2\pi\nu s\varDelta t}\Big). \end{aligned} $$

Now, Euler’s formula and the relation \(\overline {\theta }_0=1\) are applied, resulting in

$$\displaystyle \begin{aligned} d\widehat{\varGamma}_s^{\mathcal{X}}(\nu)&=\sigma_{\mathcal{E}}^2d\nu\Big(1+2\sum_{j=1}^{\infty}\overline{\theta}_j\cos{(2\pi\nu j \varDelta t)}+\sum_{j=1}^{\infty}\overline{\theta}_j^2\\ &+ 2\sum_{j=1}^{\infty}\sum_{s=j+1}^{\infty}\overline{\theta}_j\overline{\theta}_s\cos{(2\pi\nu (s-j)\varDelta t)}\Big), \end{aligned} $$
(18)

where the frequency ν takes values within the interval \([-\nu^N, \nu^N]\). Obviously, the derivative of (18) exists, so the spectral density function of the AR(p) process is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \widehat{\gamma}_s^{\mathcal{X}}(\nu)&\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2\Big(1+2\sum_{j=1}^{\infty}\overline{\theta}_j\cos{(2\pi\nu j \varDelta t)}+\sum_{j=1}^{\infty}\overline{\theta}_j^2\\ &\displaystyle +&\displaystyle 2\sum_{j=1}^{\infty}\sum_{s=j+1}^{\infty}\overline{\theta}_j\overline{\theta}_s\cos{(2\pi\nu (s-j)\varDelta t)}\Big). \end{array} \end{aligned} $$
(19)

Example: AR(1) Process

The spectral representation and the spectral density function of the reformulated AR(1) process are obtained by substituting \(\overline {\theta }_j=\theta _1^j\) into (16) and (19). The spectral representation is then defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} d\widehat{\mathcal{Z}}_s^{\mathcal{X}}(\nu) &\displaystyle =&\displaystyle d\widehat{\mathcal{Z}}_s^{\mathcal{E}}(\nu)\sum_{j=0}^{\infty}\theta_1^je^{-i2\pi\nu j\varDelta t} \end{array} \end{aligned} $$
(20)

and the spectral density function by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \widehat{\gamma}_s^{\mathcal{X}}(\nu)&\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2\Big(1+2\sum_{j=1}^{\infty}\theta_1^j\cos{(2\pi\nu j \varDelta t)}+\sum_{j=1}^{\infty}\theta_1^{2j}\\ &\displaystyle +&\displaystyle 2\sum_{j=1}^{\infty}\sum_{s=j+1}^{\infty}\theta_1^{j+s}\cos{(2\pi\nu (s-j)\varDelta t)}\Big). \end{array} \end{aligned} $$
(21)

In Priestley (2004, p. 238) the spectral density function of an AR(1) process is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \widehat{\gamma}_s^{\mathcal{X}}(\nu)= \frac{\sigma_{\mathcal{E}}^2}{(1-2\theta_1\cos{(2\pi\nu\varDelta t)}+\theta_1^2)}. \end{array} \end{aligned} $$
(22)

It can be shown that this result is equivalent to (21) by applying Euler’s formula:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \widehat{\gamma}_s^{\mathcal{X}}(\nu)&\displaystyle =&\displaystyle \frac{\sigma_{\mathcal{E}}^2}{(1-\theta_1e^{i2\pi\nu\varDelta t}-\theta_1e^{-i2\pi\nu\varDelta t}+\theta_1^2)}\\ &\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2\left[ (\frac{1}{1-\theta_1e^{-i2\pi\nu\varDelta t}})(\frac{1}{1-\theta_1e^{i2\pi\nu\varDelta t}})\right]. \end{array} \end{aligned} $$

Applying the infinite geometric series (8) results in

$$\displaystyle \begin{aligned} \begin{array}{rcl} \widehat{\gamma}_s^{\mathcal{X}}(\nu)&\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2(\sum_{j=0}^{\infty}\theta_1^je^{-i2\pi\nu j\varDelta t})(\sum_{s=0}^{\infty}\theta_1^se^{i2\pi\nu s\varDelta t})\\ &\displaystyle =&\displaystyle \sigma_{\mathcal{E}}^2(\sum_{j=0}^{\infty}\sum_{s=0}^{\infty}\theta_1^j\theta_1^se^{-i2\pi\nu j\varDelta t}e^{i2\pi\nu s\varDelta t}). \end{array} \end{aligned} $$

As described above, a rearrangement of the sums results in (21).
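
The equivalence of (21) and (22) can also be verified numerically; in the following sketch the parameter values and the truncation J of the infinite sums are illustrative assumptions:

```python
import numpy as np

theta1, sigma2, dt, J = 0.6, 1.0, 1.0, 60      # assumed values; J truncates the sums
nu = np.linspace(-0.5 / dt, 0.5 / dt, 201)     # frequencies in [-nu^N, nu^N]

def density_series(nu):                        # Eq. (21), truncated at J
    val = 1.0 + 2 * sum(theta1 ** j * np.cos(2 * np.pi * nu * j * dt)
                        for j in range(1, J))
    val += sum(theta1 ** (2 * j) for j in range(1, J))
    val += 2 * sum(theta1 ** (j + s) * np.cos(2 * np.pi * nu * (s - j) * dt)
                   for j in range(1, J) for s in range(j + 1, J))
    return sigma2 * val

def density_closed(nu):                        # Eq. (22)
    return sigma2 / (1 - 2 * theta1 * np.cos(2 * np.pi * nu * dt) + theta1 ** 2)

print(np.max(np.abs(density_series(nu) - density_closed(nu))))  # ~ 0
```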

4 Conclusion and Outlook

Within this paper, the transformation of an AR(p) process into a MA(∞) process, which in practical use is easier to interpret concerning warm-up, covariance and stationarity, is demonstrated. In so doing, the graphical representation of a stochastic process in the time and frequency domain given by Krasbutter et al. (2015) can be applied to determine the explicit mathematical expressions of each corner of the Magic Square for an AR(p) process. The practical application, for instance to AR processes estimated by means of the data of the satellite mission GOCE, and the convergence behaviour of the transformed AR(p) process are still to be examined. Due to lack of space, this investigation is omitted in this contribution.

The application of the transformation to other widely used stochastic processes, for instance the autoregressive moving average (ARMA) process, would be an extension of this scenario and will be considered in the future.