1 Introduction

Many processes encountered in practice exhibit seasonal behavior which cannot be modeled by conventional stationary processes (see Castro and Girardin (2002) for examples). For modeling such data, Gladyshev (1961) introduced the class of periodically correlated processes: a process \(\left\{ {X_{n} , \, n \in {\mathbb{Z}}} \right\}\) is said to be periodically correlated (PC) with period \(T\) if

$$E\left( {X_{n + T} } \right) = E\left( {X_{n} } \right)\quad {\text{and}}\quad {\text{Cov}}\left( {X_{n + T} ,X_{m + T} } \right) = {\text{Cov}}\left( {X_{n} ,X_{m} } \right), \quad n,m \in {\mathbb{Z}},$$

where \({\mathbb{Z}}\) stands for the set of all integers. Since we propose to deal with second-order properties, it will be assumed throughout the paper that \(E\left( {X_{n} } \right) = 0\). (In practice, the periodic sample mean is subtracted from the data.) Associated with a PC process \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\},\) there is a T-dimensional stationary process \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\), where

$$\varvec{Y}_{n} = \left( {X_{nT} ,X_{nT + 1} , \ldots ,X_{nT + T - 1} } \right)^{*} ,\quad n \in {\mathbb{Z}}.$$
(1.1)

The Markov property is the most important property describing the relation between the present and past values of a process. A process \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) is said to be wide-sense Markov (i.e., Markov in the wide sense), henceforth abbreviated by WM, if

$$\hat{E}\left( {X_{n} |X_{{m_{1} }} , \ldots ,X_{{m_{k} }} } \right) = \hat{E}\left( {X_{n} |X_{{m_{k} }} } \right),\quad m_{1} < \cdots < m_{k} < n \in {\mathbb{Z}},$$

where \(\hat{E}\) stands for the linear projection (which is a version of the conditional expectation for Gaussian processes). A PC process which is also Markov in the wide sense will be denoted by PCWM. Nematollahi and Soltani (2000) characterized the covariance function and the spectral density matrix of a zero mean second-order PCWM process \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) and its associated T-dimensional stationary process \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\), defined in (1.1). Castro and Girardin (2008) considered the structure of the covariance, correlation, and reflection coefficient matrices for WM processes which are either periodically correlated or multivariate stationary. They also characterized these processes in terms of autoregressive models of order one.

In this paper, we consider the structure of univariate and multivariate PCWM processes and their associated multi-dimensional stationary processes. In Sect. 2, we correct some results obtained by Nematollahi and Soltani (2000) and Castro and Girardin (2008) in the univariate case. In Sect. 3, we characterize multivariate discrete time WM processes. In Sect. 4, we study autoregressive characterizations of PCWM processes. Finally, in Sect. 5, the results of Sect. 2 (for the univariate case) are extended to the multivariate case.

2 Results for univariate PCWM processes

In this section, we correct some results obtained by Nematollahi and Soltani (2000) and Castro and Girardin (2008) for the T-dimensional stationary process \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\), defined in (1.1), which is associated with the PCWM process \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\).

Theorem 2.1

Let \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) be a PCWM process and let \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\) be its associated T-dimensional stationary process. Let \(\varvec{Q}\left( k \right) = {\text{Cov}}\left( {\varvec{Y}_{n + k} ,\varvec{Y}_{n} } \right) = \left[ {Q_{ij} \left( k \right)} \right]_{i,j = 0}^{T - 1}\), \(n,k \in {\mathbb{Z}}\). Then

$$Q_{ij} \left( k \right) = \left\{ {\begin{array}{*{20}l} {\left[ {h\left( T \right)} \right]^{ - k} d_{ji} C\left( {i,i} \right), } & {k \le - 1,} \\ {\left\{ {\begin{array}{*{20}c} {d_{ji} C\left( {i,i} \right),} & { i \le j, } \\ {d_{ij} C\left( {j,j} \right),} & { i \ge j,} \\ \end{array} \;\;\;\;\;} \right.} & {k = 0,} \\ {\left[ {h\left( T \right)} \right]^{k} d_{ij} C\left( {j,j} \right), } & {k \ge 1,} \\ \end{array} } \right.$$

where \(C\left( {n,m} \right) = {\text{Cov}}(X_{n} , X_{m} ) \ne 0, \;n,m \in {\mathbb{Z}}\), \(h\left( 0 \right) = 1,\) and

$$h\left( l \right) = \mathop \prod \limits_{i = 1}^{l} \frac{C(i,i - 1)}{C(i - 1,i - 1)}, \quad l = 1, 2, \ldots ; \quad d_{ij} = \frac{h\left( i \right)}{h\left( j \right)}, \quad i, \; j = 0, 1, \ldots ,T - 1.$$

Proof

For \(k = 0\) and \(i \ge j\), using relation (3.5) in Nematollahi and Soltani (2000, p. 131) with \(m = k,n = j,\) and \(v = i - j,\) we have \(Q_{ij} \left( 0 \right) = C\left( {i,j} \right) = \left[ {h\left( T \right)} \right]^{0} h\left( i \right)\left[ {h\left( j \right)} \right]^{ - 1} C\left( {j,j} \right) = d_{ij} C\left( {j,j} \right)\). Since \(\varvec{Q}\left( 0 \right) = {\text{Var}}\left( {\varvec{Y}_{n} } \right)\) is a symmetric matrix, we have \(Q_{ij} \left( 0 \right) = Q_{ji} \left( 0 \right) = d_{ji} C\left( {i,i} \right)\), for \(i \, < \, j\). For \(k \ge 1\) and \(i \ge j,\) we apply relation (3.5) in Nematollahi and Soltani (2000, p. 131), with \(n = j\), \(v = i - j\), \(m = k\), to conclude that

$$Q_{ij} \left( k \right) = C\left( {kT + i,j} \right) = \left[ {h\left( T \right)} \right]^{k} h\left( i \right)\left[ {h\left( j \right)} \right]^{ - 1} C\left( {j,j} \right) = \left[ {h\left( T \right)} \right]^{k} d_{ij} C\left( {j,j} \right), \quad i \ge j,\quad k \ge 1.$$

Similarly, for \(k \ge 1\) and \(i < j,\) we have

$$Q_{ij} \left( k \right) = C\left( {kT + i,j} \right) = C\left( {\left( {k - 1} \right)T + T + i,j} \right) = \left[ {h\left( T \right)} \right]^{k - 1} h\left( {T + i} \right)\left[ {h\left( j \right)} \right]^{ - 1} C\left( {j,j} \right).$$

But, since \(C\left( {X_{n + T} ,X_{m + T} } \right) = C\left( {X_{n} ,X_{m} } \right)\), we have

$$\begin{aligned} h\left( {T + i} \right) & = \mathop \prod \limits_{j = 1}^{T + i} \frac{{C\left( {j,j - 1} \right)}}{{C\left( {j - 1,j - 1} \right)}} \\ \, = \left( {\frac{{C\left( {1,0} \right)}}{{C\left( {0,0} \right)}}\frac{{C\left( {2,1} \right)}}{{C\left( {1,1} \right)}} \cdots \frac{{C\left( {T,T - 1} \right)}}{{C\left( {T - 1,T - 1} \right)}}} \right)\left( {\frac{{C\left( {1,0} \right)}}{{C\left( {0,0} \right)}}\frac{{C\left( {2,1} \right)}}{{C\left( {1,1} \right)}} \cdots \frac{{C\left( {i,i - 1} \right)}}{{C\left( {i - 1,i - 1} \right)}}} \right) = h\left( T \right)h\left( i \right). \\ \end{aligned}$$

Therefore,

$$Q_{ij} \left( k \right) = \left[ {h\left( T \right)} \right]^{k} h\left( i \right)\left[ {h\left( j \right)} \right]^{ - 1} C\left( {j,j} \right) = \left[ {h\left( T \right)} \right]^{k} d_{ij} C\left( {j,j} \right), \quad i\,<\,j,\quad k \ge 1.$$

Finally, for \(k < 0\), we have \(Q_{ij} \left( k \right) = Q_{ji} \left( { - k} \right) = \left[ {h\left( T \right)} \right]^{ - k} d_{ji} C\left( {i,i} \right), i,j = 0,1, \ldots ,T - 1.\)

Remark 2.1

Nematollahi and Soltani (2000, Theorem 3.3) gave the expression \(\varvec{Q}\left( k \right) = \left[ {h\left( T \right)} \right]^{k} \varvec{DR},\,k \ge 0,\) where \(\varvec{D} = \left[ {d_{ij} } \right]_{i,j = 0}^{T - 1}\) and \(\varvec{R} = {\text{diag}}\left\{ {C\left( {0,0} \right), C\left( {1,1} \right), \ldots ,C\left( {T - 1,T - 1} \right)} \right\}.\) But their expression is not correct: for \(k = 0\), it gives \(\varvec{Q}\left( 0 \right) = \varvec{DR}\), which disagrees with the expression in Theorem 2.1 for the entries with \(i < j\).

Remark 2.2

Castro and Girardin (2008, Eq. 11) gave the expression

$$\varvec{Q}\left( k \right) = \left[ {h\left( T \right)} \right]^{k} \varvec{A B}^{*} ,$$

where the column vectors \(\varvec{A} = (a_{0} ,a_{1} , \ldots ,a_{T - 1} )^{*}\) and \(\varvec{B} = (b_{0} ,b_{1} , \ldots ,b_{T - 1} )^{*}\) are defined by \(a_{i} = h\left( i \right)\) and \(b_{i} = \left[ {h\left( i \right)} \right]^{ - 1} C\left( {i,i} \right),\) for \(0 \le i \le T - 1\). But this expression is not correct either: for \(k = 0\), it gives \(\varvec{Q}\left( 0 \right) = \varvec{AB}^{*}\), which again disagrees with the expression in Theorem 2.1 for the entries with \(i < j\).

Remark 2.3

In Theorem 2.1, comparing \(Q_{ij} \left( 1 \right)\) and \(Q_{ij} \left( k \right)\) for \(k \ge 1,\) we conclude that

$$\varvec{Q}\left( k \right) = \left[ {h\left( T \right)} \right]^{k - 1} \varvec{Q}\left( 1 \right),\quad k \ge 1.$$
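
As a concrete illustration, the following minimal numpy sketch (not part of the original results; the period, the coefficients \(a_{n}\), and the innovation variances are arbitrary choices) builds the covariance function of a PCWM process from the first-order recursion \(C\left( {n,m} \right) = a_{n} C\left( {n - 1,m} \right)\), \(n > m\) (cf. the PAR(1) representation in Sect. 4), and checks Theorem 2.1 and Remark 2.3 numerically.

```python
# Minimal numerical check of Theorem 2.1 and Remark 2.3 (illustrative values).
# The period T, coefficients a_n, and innovation variances sigma2_n are
# arbitrary choices; cov(n, m) is the covariance of the resulting PCWM process.
import numpy as np

T = 3
a = np.array([0.9, -0.5, 0.7])       # a_n, period T (a[n % T])
sigma2 = np.array([1.0, 2.0, 0.5])   # innovation variances, period T

# Periodic variances v[i] = C(i, i): fixed point of v[i] = a[i]^2 v[i-1] + sigma2[i].
v = np.ones(T)
for _ in range(1000):
    for i in range(T):
        v[i] = a[i] ** 2 * v[(i - 1) % T] + sigma2[i]

def cov(n, m):
    """C(n, m) = Cov(X_n, X_m) = prod_{j=m+1}^{n} a_j * C(m, m), for n >= m >= 0."""
    if n < m:
        return cov(m, n)
    return np.prod([a[j % T] for j in range(m + 1, n + 1)]) * v[m % T]

def h(l):
    """h(l) = prod_{i=1}^{l} C(i, i-1) / C(i-1, i-1), with h(0) = 1."""
    return np.prod([cov(i, i - 1) / cov(i - 1, i - 1) for i in range(1, l + 1)])

d = np.array([[h(i) / h(j) for j in range(T)] for i in range(T)])  # d_ij = h(i)/h(j)

def Q_theorem(k):
    """Q_ij(k) according to Theorem 2.1."""
    Q = np.empty((T, T))
    for i in range(T):
        for j in range(T):
            if k >= 1:
                Q[i, j] = h(T) ** k * d[i, j] * cov(j, j)
            elif k == 0:
                Q[i, j] = d[i, j] * cov(j, j) if i >= j else d[j, i] * cov(i, i)
            else:
                Q[i, j] = h(T) ** (-k) * d[j, i] * cov(i, i)
    return Q

def Q_direct(k, base=3):
    """Q_ij(k) = Cov(X_{(base+k)T+i}, X_{base*T+j}), computed directly from cov()."""
    return np.array([[cov((base + k) * T + i, base * T + j) for j in range(T)]
                     for i in range(T)])

for k in range(-2, 4):
    assert np.allclose(Q_theorem(k), Q_direct(k))           # Theorem 2.1
assert np.allclose(Q_theorem(3), h(T) ** 2 * Q_theorem(1))  # Remark 2.3
print("Theorem 2.1 and Remark 2.3 verified on this example.")
```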

3 Characterization of multivariate WM processes

A \(p\)-variate (\(p \ge 1\)) process \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is said to be wide-sense Markov (WM) if

$$\hat{E}\left( {\varvec{X}_{n} |\varvec{X}_{{m_{1} }} , \ldots ,\varvec{X}_{{m_{k} }} } \right) = \hat{E}\left( {\varvec{X}_{n} |\varvec{X}_{{m_{k} }} } \right),\quad m_{1} < \cdots < m_{k} < n \in {\mathbb{Z}},$$

with probability 1, where \(\hat{E}\) stands for the linear projection (which is a version of the conditional expectation for Gaussian processes); see, e.g., Doob (1953, p. 90). Doob (1953, p. 233) showed that a one-dimensional process \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) is WM if and only if

$$C\left( {n,n} \right)C\left( {m,u} \right) = C\left( {m,n} \right)C\left( {n,u} \right),\quad m \le n \le u \in {\mathbb{Z}},$$
(3.1)

or, equivalently, if and only if

$$C\left( {m,u} \right) = C\left( {m,n} \right)\left[ {C\left( {n,n} \right)} \right]^{ - 1} C\left( {n,u} \right), \quad m \le n \le u \in {\mathbb{Z}},$$
(3.2)

assuming \(C\left( {n,n} \right) \ne 0\). (The case where \(C\left( {n,n} \right) = 0\) is trivial.) Beutler (1963, p. 428) extended this characterization to multivariate continuous time processes. In this section, we prove that the characterization (3.2) holds for multivariate discrete time processes, correcting a result given by Castro and Girardin (2008, pp. 160–161), who wrongly concluded that for multivariate discrete time processes, the above “triangular characterization does not hold true”.

In what follows \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is a multivariate discrete time zero mean second-order process with nonzero covariance matrices \(\varvec{C}\left( {.,.} \right)\) defined by

$$\varvec{C}\left( {n,m} \right) = {\text{Cov}}\left( {\varvec{X}_{n} ,\varvec{X}_{m} } \right) = E\left[ {\varvec{X}_{n} \varvec{X}_{m}^{*} } \right], \quad m,n \in {\mathbb{Z}}.$$

(Here and in what follows \(\varvec{X}^{*}\) denotes the transpose of \(\varvec{X}\)). It will be assumed throughout that \(\varvec{C}\left( {m,m} \right) = {\text{Var}}(\varvec{X}_{m} )\) is nonsingular for all \(m \in {\mathbb{Z}}\).

Note that except for \(m = n\), these covariance matrices are in general not symmetric, \(\left[ {\varvec{C}\left( {n,m} \right)} \right]^{*} \ne \varvec{C}\left( {n,m} \right)\), \(m \ne n\) (and \(\varvec{C}\left( {n,m} \right) \ne \varvec{C}\left( {m,n} \right), \;m \ne n\)), but we have

$$\left[ {\varvec{C}\left( {n,m} \right)} \right]^{*} = \varvec{C}\left( {m,n} \right),\quad m,n \in {\mathbb{Z}}.$$
(3.3)

Let \(\varvec{R}\left( {.,.} \right)\) denote the “normalized covariance” function, defined by

$$\varvec{R}\left( {n,m} \right) = \varvec{C}\left( {n,m} \right)\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1} ,\quad m,n \in {\mathbb{Z}}.$$
(3.4)

Then it is easily verified that

$$\hat{E}\left( {\varvec{X}_{n} |\varvec{X}_{m} } \right) = \varvec{R}\left( {n,m} \right)\varvec{X}_{m} , \quad m \le n\text{ } \in {\mathbb{Z}}.$$
(3.5)
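
For completeness, (3.5) follows from a one-line orthogonality check based on the definition (3.4) and \(E\left( {\varvec{X}_{n} \varvec{X}_{m}^{*} } \right) = \varvec{C}\left( {n,m} \right)\):

$$E\left\{ {\left[ {\varvec{X}_{n} - \varvec{R}\left( {n,m} \right)\varvec{X}_{m} } \right]\varvec{X}_{m}^{*} } \right\} = \varvec{C}\left( {n,m} \right) - \varvec{C}\left( {n,m} \right)\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1} \varvec{C}\left( {m,m} \right) = 0,$$

so that \(\varvec{X}_{n} - \varvec{R}\left( {n,m} \right)\varvec{X}_{m}\) is orthogonal to \(\varvec{X}_{m}\).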

Remark 3.1

Castro and Girardin (2008, Eq. 4) define the “correlation function” of the process as follows:

$$\varvec{\rho}\left( {m,n} \right) = \varvec{C}\left( {m,n} \right)[\varvec{C}\left( {m,m} \right)]^{ - 1} , \quad m,n \in {\mathbb{Z}}.$$
(3.6)

They then assert that \(\hat{E}\left( {\varvec{X}_{n} |\varvec{X}_{m} } \right) =\varvec{\rho}\left( {m,n} \right)\varvec{X}_{m}\), \(m < n\), which is not true since

$$E\left\{ {\left[ {\varvec{X}_{n} -\varvec{\rho}\left( {m,n} \right)\varvec{X}_{m} } \right]\varvec{X}_{m}^{*} } \right\} = \varvec{C}\left( {n,m} \right) - \varvec{C}\left( {m,n} \right) \ne 0.$$

(Note that \(\varvec{\rho}\left( {m,n} \right)\) is not a “correlation matrix” in the usual sense, i.e., it is not a matrix of correlations. This is why we have used the term “normalized covariance” for (3.4).)

Theorem 3.1

Let \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) be a zero mean second-order multivariate process with nonzero covariance matrices \(\varvec{C}\left( {.,.} \right)\) and nonsingular variance matrices \(\varvec{C}\left( {n,n} \right)\), \(n \in {\mathbb{Z}}\). Then \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is WM if and only if one of the following equivalent conditions holds

  1. (a)
    $$\varvec{R}\left( {u,m} \right) = \varvec{R}\left( {u,n} \right)\varvec{R}\left( {n,m} \right), \quad m \le n \le u \in {\mathbb{Z}},$$
    (3.7)
  2. (b)
    $$\varvec{C}\left( {u,m} \right) = \varvec{C}\left( {u,n} \right)\left[ {\varvec{C}\left( {n,n} \right)} \right]^{ - 1} \varvec{C}\left( {n,m} \right), \quad m \le n \le u \in {\mathbb{Z}},$$
    (3.8)
  3. (c)
    $$\varvec{C}\left( {m,u} \right) = \varvec{C}\left( {m,n} \right)\left[ {\varvec{C}\left( {n,n} \right)} \right]^{ - 1} \varvec{C}\left( {n,u} \right), \quad m \le n \le u \in {\mathbb{Z}},$$
    (3.9)
  4. (d)
    $$\varvec{R}\left( {m,u} \right) = \varvec{R}\left( {m,n} \right)\varvec{R}\left( {n,u} \right), \quad m \le n \le u \in {\mathbb{Z}}.$$
    (3.10)

Proof

  1. (a)

    Suppose that \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is WM and let \(m \le n \le u \in {\mathbb{Z}}\). Then using (3.5), we have \(\hat{E}\left( {\varvec{X}_{u} |\varvec{X}_{m} ,\varvec{X}_{n} } \right) = \hat{E}\left( {\varvec{X}_{u} |\varvec{X}_{n} } \right) = \varvec{R}\left( {u,n} \right)\varvec{X}_{n}\). Hence, \(\varvec{X}_{u} - \varvec{R}\left( {u,n} \right)\varvec{X}_{n}\) is orthogonal to \(\varvec{X}_{m} ;\) i.e., \(E\left\{ {\left[ {\varvec{X}_{u} - \varvec{R}\left( {u,n} \right)\varvec{X}_{n} } \right]\varvec{X}_{m}^{*} } \right\} = 0.\) It follows that \(E\left( {\varvec{X}_{u} \varvec{X}_{m}^{*} } \right) - \varvec{R}\left( {u,n} \right)E\left( {\varvec{X}_{n} \varvec{X}_{m}^{*} } \right) = 0,\) which yields \(\varvec{C}\left( {u,m} \right) = \varvec{R}\left( {u,n} \right)\varvec{C}\left( {n,m} \right)\). Then (3.7) follows by post-multiplication of both sides of this relation by \(\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1}\). Conversely, suppose that (3.7) holds and \(m_{1} < \cdots < m_{k} < n\). Let \(\varvec{W} = \varvec{X}_{n} - \hat{E}\left( {\varvec{X}_{n} |\varvec{X}_{{m_{k} }} } \right)\). Then \(\varvec{W} = \varvec{X}_{n} - \varvec{R}\left( {n,m_{k} } \right)\varvec{X}_{{m_{k} }} ,\) using (3.5), and, for any \(i = 1, \ldots , k\),

    $$\begin{aligned} E\left( {\varvec{WX}_{{m_{\varvec{i}} }}^{\varvec{*}} } \right) & = \varvec{C}\left( {n, m_{i} } \right) - \varvec{ R}\left( {n,m_{k} } \right)\varvec{C}(m_{k} ,m_{i} ) \\ & = \varvec{R}\left( {n, m_{i} } \right)\varvec{C}(m_{i} ,m_{i} ) - \varvec{ R}\left( {n,m_{k} } \right)\varvec{R}(m_{k} ,m_{i} )\varvec{C}(m_{i} ,m_{i} ) \\ & = \left[ {\varvec{R}\left( {n, m_{i} } \right) - \varvec{R}\left( {n,m_{k} } \right)\varvec{R}\left( {m_{k} ,m_{i} } \right)} \right]\varvec{ C}\left( {m_{i} ,m_{i} } \right) = 0, \\ \end{aligned}$$

    using (3.7). Thus, \(\varvec{X}_{n} - \hat{E}\left( {\varvec{X}_{n} |\varvec{X}_{{m_{k} }} } \right)\) is orthogonal to \(\{ \varvec{X}_{{m_{1} }} , \ldots ,\varvec{X}_{{m_{k} }} \}\). Hence

    $$\hat{E}\left( {\varvec{X}_{n} |\varvec{X}_{{m_{1} }} , \ldots ,\varvec{X}_{{m_{k} }} } \right) = \hat{E}\left( {\varvec{X}_{n} |\varvec{X}_{{m_{k} }} } \right);$$

    i.e., \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is a WM process.

  2. (b)

    Rewriting (3.7) in terms of covariance matrices, we have

    $$\varvec{C}\left( {u,m} \right)\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1} = \varvec{C}\left( {u,n} \right)\left[ {\varvec{C}\left( {n,n} \right)} \right]^{ - 1} \varvec{C}\left( {n,m} \right)\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1} ,$$

    which is obviously equivalent to (3.8).

  3. (c)

    Taking the transpose of both sides and using (3.3), it is seen that (3.8) is equivalent to (3.9).

  4. (d)

    It is easily verified that (3.9) and (3.10) are equivalent.

Remark 3.2

Castro and Girardin (2008, Proposition 2) state that \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is a WM process if and only if

$$\varvec{\rho}\left( {m,u} \right) =\varvec{\rho}\left( {m,n} \right)\varvec{\rho}\left( {n,u} \right), \quad m \le n \le u \in {\mathbb{Z}},$$
(3.11)

where \(\varvec{\rho}\left( {m,n} \right)\) is as defined by (3.6). (Cf. condition (3.11) with condition (3.10).) But with their definition of \(\varvec{\rho}\left( {m,n} \right)\), condition (3.11) is incorrect (see Remark 3.1). Note that rewriting (3.11) in terms of covariance matrices, we get

$$\varvec{C}\left( {m,u} \right) = \varvec{C}\left( {m,n} \right)\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1} \varvec{C}\left( {n,u} \right)\left[ {\varvec{C}\left( {n,n} \right)} \right]^{ - 1} \varvec{C}\left( {m,m} \right) ,$$
(3.12)

which is different from (3.9). (See the following counterexample, for which (3.12) does not hold.) Rewriting (3.12) in the form

$$\varvec{C}\left( {m,u} \right)\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1} = \varvec{C}\left( {m,n} \right)\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1} \varvec{C}\left( {n,u} \right)\left[ {\varvec{C}\left( {n,n} \right)} \right]^{ - 1} ,$$

we obtain the result given by Castro and Girardin (2008, Eq. 13). Based on this (wrong) result, Castro and Girardin (2008, pp. 160–161) (wrongly) concluded that, for multivariate WM processes, the “triangular characterization does not hold true”; whereas condition (3.9) is clearly the multivariate analog of condition (3.2).

Example 3.1

Let \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) be a nonstationary Markov process with

$$C\left( {j,j} \right) = {\text{Var}}\left( {X_{j} } \right) = j^{2} + 1,\quad {\text{and}}\quad C\left( {j,j + 1} \right) = {\text{Cov}}\left( {X_{j} ,X_{j + 1} } \right) = 1,\quad j \in {\mathbb{Z}}.$$

Then, according to Doob (1953, Theorem 8.1, p. 233), we have

$$\begin{aligned} C\left( {j,j + 2} \right) & = \frac{{C\left( {j,j + 1} \right)C\left( {j + 1,j + 2} \right)}}{{C\left( {j + 1,j + 1} \right)}} = \frac{1}{{\left( {j + 1} \right)^{2} + 1}} , \\ C\left( {j,j + 3} \right) & = \frac{{C\left( {j,j + 2} \right)C\left( {j + 2,j + 3} \right)}}{{C\left( {j + 2,j + 2} \right)}} = \frac{1}{{\left[ {\left( {j + 1} \right)^{2} + 1} \right]\left[ {\left( {j + 2} \right)^{2} + 1} \right]}} , \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \cdots \\ C\left( {j,j + k} \right) & = \frac{{C\left( {j,j + k - 1} \right)C\left( {j + k - 1,j + k} \right)}}{{C\left( {j + k - 1,j + k - 1} \right)}} = \frac{{C\left( {j,j + k - 1} \right)}}{{C\left( {j + k - 1,j + k - 1} \right)}} \\ & = \frac{1}{{\left[ {\left( {j + 1} \right)^{2} + 1} \right] \ldots \left[ {\left( {j + k - 1} \right)^{2} + 1} \right]}} = \mathop \prod \limits_{i = 1}^{k - 1} \frac{1}{{\left( {j + i} \right)^{2} + 1}} . \\ \end{aligned}$$

Now we define \(\varvec{Y}_{n} = \left[ {\begin{array}{*{20}c} {X_{2n} } \\ {X_{2n + 1} } \\ \end{array} } \right]\), \(n \in {\mathbb{Z}}\). Then, for \(m < n \in {\mathbb{Z}}\), we have

$$\begin{aligned} \varvec{C}\left( {m,n} \right) = {\text{Cov}}\left( {\varvec{Y}_{m} ,\varvec{Y}_{n} } \right) & = \left[ {\begin{array}{*{20}c} {C\left( {2m,2n} \right)} & {C\left( {2m,2n + 1} \right)} \\ {C\left( {2m + 1,2n} \right)} & {C\left( {2m + 1,2n + 1} \right)} \\ \end{array} } \right] \\ & = \mathop \prod \limits_{i = 1}^{{2\left( {n - m} \right) - 2}} \frac{1}{{\left( {2m + 1 + i} \right)^{2} + 1}}\left[ {\begin{array}{*{20}c} {\frac{1}{{\left( {2m + 1} \right)^{2} + 1}}} & {\frac{1}{{\left[ {\left( {2m + 1} \right)^{2} + 1} \right]\left[ {\left( {2n} \right)^{2} + 1} \right]}}} \\ 1 & {\frac{1}{{\left( {2n} \right)^{2} + 1}}} \\ \end{array} } \right]. \\ \end{aligned}$$

and

$$\varvec{C}\left( {n,n} \right) = {\text{Cov}}\left( {\varvec{Y}_{n} ,\varvec{Y}_{n} } \right) = \left[ {\begin{array}{*{20}c} {C\left( {2n,2n} \right)} & {C\left( {2n,2n + 1} \right)} \\ {C\left( {2n + 1,2n} \right)} & {C\left( {2n + 1,2n + 1} \right)} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\left( {2n} \right)^{2} + 1} & 1 \\ 1 & {\left( {2n + 1} \right)^{2} + 1} \\ \end{array} } \right].$$

It is then easily verified that, for every \(m \le n \le u \in {\mathbb{Z}} ,\) we have

$$\begin{aligned} \varvec{C}\left( {m,n} \right)\left[ {\varvec{C}\left( {n,n} \right)} \right]^{ - 1} \varvec{C}\left( {n,u} \right) \hfill \\ \quad = \mathop \prod \limits_{i = 1}^{{2\left( {u - m} \right) - 2}} \frac{1}{{\left( {2m + 1 + i} \right)^{2} + 1}}\left[ {\begin{array}{*{20}c} {\frac{1}{{\left( {2m + 1} \right)^{2} + 1}}} & {\frac{1}{{\left[ {\left( {2m + 1} \right)^{2} + 1} \right]\left[ {\left( {2u} \right)^{2} + 1} \right]}}} \\ 1 & {\frac{1}{{\left( {2u} \right)^{2} + 1}}} \\ \end{array} } \right] \hfill \\ \varvec{ }\quad = \varvec{C}\left( {m,u} \right). \hfill \\ \end{aligned}$$

Therefore, (3.9) holds for every \(m \le n \le u \in {\mathbb{Z}},\) and \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\) is a 2-dimensional (nonstationary) wide-sense Markov process. But, for this example, Proposition 2 in Castro and Girardin (2008), which states that \(\varvec{C}\left( {m,u} \right) = \varvec{C}\left( {m,n} \right)\left[ {\varvec{C}\left( {m,m} \right)} \right]^{ - 1} \varvec{C}\left( {n,u} \right)\left[ {\varvec{C}\left( {n,n} \right)} \right]^{ - 1} \varvec{C}\left( {m,m} \right)\), does not hold. For example, for \(m = - 1\), \(n = 0\), \(u = 1\), we have

$$\begin{aligned} \varvec{C}\left( { - 1,0} \right)\left[ {\varvec{C}\left( { - 1, - 1} \right)} \right]^{ - 1} \varvec{C}\left( {0,1} \right)\left[ {\varvec{C}\left( {0,0} \right)} \right]^{ - 1} \varvec{C}\left( { - 1, - 1} \right) \hfill \\ \quad = \left[ {\begin{array}{*{20}c} {\frac{1}{2}} & {\frac{1}{2}} \\ 1 & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\frac{2}{9}} & {\frac{ - 1}{9}} \\ {\frac{ - 1}{9}} & {\frac{5}{9}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\frac{1}{2}} & {\frac{1}{10}} \\ 1 & {\frac{1}{5}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} 2 & { - 1} \\ { - 1} & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} 5 & 1 \\ 1 & 2 \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\frac{41}{20}} & {\frac{1}{20}} \\ {\frac{41}{10}} & {\frac{1}{10}} \\ \end{array} } \right] \ne \left[ {\begin{array}{*{20}c} {\frac{1}{4}} & {\frac{1}{20}} \\ {\frac{1}{2}} & {\frac{1}{10}} \\ \end{array} } \right] = \varvec{C}\left( { - 1,1} \right). \hfill \\ \end{aligned}$$
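
The matrix computations of Example 3.1 are easy to reproduce; the short numpy sketch below (an illustration, not part of the original text) builds \(\varvec{C}\left( {m,n} \right)\) from the covariances of Example 3.1 and confirms that the triangular characterization (3.9) holds while relation (3.12) fails for \(m = - 1\), \(n = 0\), \(u = 1\).

```python
# Numerical confirmation of Example 3.1 (illustrative check).
import numpy as np

def c(j, m):
    """Covariance C(j, m) of the univariate Markov process of Example 3.1."""
    if j > m:
        return c(m, j)
    if j == m:
        return j ** 2 + 1
    # C(j, m) = prod_{i=j+1}^{m-1} 1 / (i^2 + 1)  (empty product = 1 when m = j + 1)
    return np.prod([1.0 / (i ** 2 + 1) for i in range(j + 1, m)])

def C(m, n):
    """Block covariance C(m, n) = Cov(Y_m, Y_n) for Y_n = (X_{2n}, X_{2n+1})^*."""
    return np.array([[c(2 * m, 2 * n), c(2 * m, 2 * n + 1)],
                     [c(2 * m + 1, 2 * n), c(2 * m + 1, 2 * n + 1)]])

m, n, u = -1, 0, 1
lhs = C(m, u)
# Triangular characterization (3.9): C(m, u) = C(m, n) C(n, n)^{-1} C(n, u).
rhs_39 = C(m, n) @ np.linalg.inv(C(n, n)) @ C(n, u)
# Relation (3.12): C(m, u) = C(m, n) C(m, m)^{-1} C(n, u) C(n, n)^{-1} C(m, m).
rhs_312 = C(m, n) @ np.linalg.inv(C(m, m)) @ C(n, u) @ np.linalg.inv(C(n, n)) @ C(m, m)

print(np.allclose(lhs, rhs_39))    # True:  (3.9) holds
print(np.allclose(lhs, rhs_312))   # False: (3.12) fails
print(rhs_312)                     # [[2.05, 0.05], [4.1, 0.1]], i.e., 41/20, 1/20, 41/10, 1/10
```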

Corollary 3.1

Let \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\) be a nonsingular multivariate stationary WM process, with covariance function \(\varvec{Q}\left( k \right) = {\text{Cov}}(\varvec{Y}_{n + k} , \varvec{Y}_{n} )\). Then

$$\varvec{Q}\left( k \right) = \left( {\varvec{Q}\left( 1 \right)\varvec{Q}^{ - 1} \left( 0 \right)} \right)^{k} \varvec{Q}\left( 0 \right),\quad k \ge 0,$$
(3.13)
$$\varvec{Q}\left( k \right) = \left( {\varvec{Q}^{\varvec{*}} \left( 1 \right)\varvec{Q}^{ - 1} \left( 0 \right)} \right)^{ - k} \varvec{Q}\left( 0 \right),\quad k < 0.$$
(3.14)

Proof

Clearly (3.13) holds for \(k = 0\) and \(k = 1\). Also, for \(k \ge 0\), using (3.8) with \(m = n - 1\) and \(u = n + k\), we have \(\varvec{Q}\left( {k + 1} \right) = \varvec{Q}\left( k \right)\varvec{Q}^{ - 1} \left( 0 \right)\varvec{Q}\left( 1 \right),\) and (3.13) follows by induction. For \(k < 0\), (3.14) follows from (3.13) by noting that

$$\begin{aligned} \varvec{Q}\left( k \right) & = \varvec{Q}^{\varvec{*}} \left( { - k} \right) = \left[ {\left( {\varvec{Q}\left( 1 \right)\varvec{Q}^{ - 1} \left( 0 \right)} \right)^{ - k} \varvec{Q}\left( 0 \right)} \right]^{\varvec{*}} = \left[ {\left( {\varvec{Q}\left( 1 \right)\varvec{Q}^{ - 1} \left( 0 \right)} \right)^{ - k - 1} \varvec{Q}\left( 1 \right)} \right]^{\varvec{*}} \\ & = \varvec{Q}^{\varvec{*}} \left( 1 \right)\left( {\varvec{Q}^{ - 1} \left( 0 \right)\varvec{Q}^{\varvec{*}} \left( 1 \right)} \right)^{ - k - 1} = \left( {\varvec{Q}^{\varvec{*}} \left( 1 \right)\varvec{Q}^{ - 1} \left( 0 \right)} \right)^{ - k - 1} \varvec{Q}^{\varvec{*}} \left( 1 \right) = \left( {\varvec{Q}^{\varvec{*}} \left( 1 \right)\varvec{Q}^{ - 1} \left( 0 \right)} \right)^{ - k} \varvec{Q}\left( 0 \right). \\ \end{aligned}$$

Remark 3.3

Castro and Girardin (2008, p. 161) use their (incorrect) Eq. (13) to conclude that \(\varvec{Q}\left( k \right) = \left[ {\varvec{Q}\left( 1 \right)\varvec{Q}^{ - 1} \left( 0 \right)} \right]^{k} \varvec{Q}\left( 0 \right) = \varvec{Q}\left( { - k} \right)\), \(k \in {\mathbb{Z}}\), which is not true for \(k < 0\). (Note that in general, \(\varvec{Q}\left( k \right) = \varvec{Q}^{\varvec{*}} \left( { - k} \right)\).) Also, it can be shown that Eq. (14) in Castro and Girardin (2008) does not hold for negative powers; in fact, from (3.13) and (3.14) it follows that

$$\varvec{\rho}(k) = \left\{ {\begin{array}{*{20}l} {\left[ {\varvec{\rho}\left( 1 \right)} \right]^{k} , } & {k \ge 0,} \\ {\left[ {\varvec{\rho}\left( { - 1} \right)} \right]^{ - k} ,} & {k < 0,} \\ \end{array} } \right.$$

where \(\varvec{\rho}\left( k \right) = \varvec{Q}\left( k \right)\varvec{Q}^{ - 1} \left( 0 \right)\).
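
A quick numerical sanity check of (3.13), (3.14), and the above expression for \(\varvec{\rho}\left( k \right)\) can be carried out with a stationary bivariate AR(1) process \(\varvec{Y}_{n} = \varvec{\varPhi Y}_{n - 1} +\varvec{\varepsilon}_{n}\), which is wide-sense Markov (cf. Sect. 4). The matrices in the sketch below are arbitrary illustrative choices, not taken from the paper.

```python
# Numerical check of Corollary 3.1 and Remark 3.3 (illustrative values).
import numpy as np

Phi = np.array([[0.5, 0.2], [-0.1, 0.3]])     # stable transition matrix (arbitrary)
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])    # Var(eps_n) (arbitrary)

# Q(0) solves Q(0) = Phi Q(0) Phi^* + Sigma; sum the convergent series.
Q0 = np.zeros((2, 2))
for j in range(200):
    Pj = np.linalg.matrix_power(Phi, j)
    Q0 += Pj @ Sigma @ Pj.T
Q0inv = np.linalg.inv(Q0)

def Q(k):
    """Q(k) = Cov(Y_{n+k}, Y_n) = Phi^k Q(0) for k >= 0, and Q(k) = Q(-k)^* for k < 0."""
    return np.linalg.matrix_power(Phi, k) @ Q0 if k >= 0 else Q(-k).T

for k in range(5):        # (3.13)
    assert np.allclose(Q(k), np.linalg.matrix_power(Q(1) @ Q0inv, k) @ Q0)
for k in range(-4, 0):    # (3.14)
    assert np.allclose(Q(k), np.linalg.matrix_power(Q(1).T @ Q0inv, -k) @ Q0)

rho = lambda k: Q(k) @ Q0inv
for k in range(-4, 5):    # rho(k) = rho(1)^k (k >= 0),  rho(-1)^{-k} (k < 0)
    target = np.linalg.matrix_power(rho(1), k) if k >= 0 else np.linalg.matrix_power(rho(-1), -k)
    assert np.allclose(rho(k), target)
print("Corollary 3.1 and Remark 3.3 hold on this example.")
```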

4 Autoregressive characterizations

In this section we consider the structure of PCWM processes in terms of autoregressive (AR) and periodic autoregressive (PAR) processes. For related results, see Pagano (1978), Castro and Girardin (2002, 2008).

The univariate case. Using their results on reflection coefficients of WM processes, Castro and Girardin (2008, Theorem 3) showed that the class of PCWM processes is exactly the class of PAR processes of order 1 and that the T-dimensional stationary process associated with a PCWM process has an autoregressive representation. We give a direct and more transparent proof, which also reveals an error in their results (see Remark 4.2).

We recall that a process \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) is said to be a PAR process of order one, PAR(1), with period \(T\), if it has a representation of the form

$$X_{n} = a_{n} X_{n - 1} + \varepsilon_{n} ,\quad n \in {\mathbb{Z}},$$
(4.1)

where \(\left\{ {a_{n} , n \in {\mathbb{Z}}} \right\}\) is a nonzero periodic sequence of constants, \(a_{n + T} = a_{n} ,\) and \(\left\{ {\varepsilon_{n} , n \in {\mathbb{Z}}} \right\}\) is a white noise process with periodic variances, \({\text{Var}}\left( {\varepsilon_{n + T} } \right) = {\text{Var}}\left( {\varepsilon_{n} } \right)\), and with \({\text{Cov}}(X_{m} ,\varepsilon_{n} ) = 0\) for \(m < n\) (see Pagano (1978) for the general definition of \({\text{PAR}}(p_{1} , \ldots , p_{d} )\)).

First, we verify that the PAR(1) process (4.1) is a PCWM process: Let \(D\left( {s,r} \right) = \mathop \prod \nolimits_{j = r}^{s} a_{j} ,\) with the convention that \(D\left( {s,r} \right) = 1\) if \(r > s\). Then, for \(n \in {\mathbb{Z}}\) and \(k \ge 1\), we have

$$X_{n + k} = D(n + k,n + 1)X_{n} + \mathop \sum \limits_{j = 1}^{k} D(n + k,n + 1 + j)\varepsilon_{n + j} ,$$

and, therefore,

$${\text{Cov}}\left( {X_{n + k} ,X_{n} } \right) = D\left( {n + k,n + 1} \right){\text{Var}}\left( {X_{n} } \right) = \left( {\mathop \prod \limits_{j = n + 1}^{n + k} a_{j} } \right){\text{Var}}\left( {X_{n} } \right),$$

since \({\text{Cov}}\left( {X_{n} ,\varepsilon_{n + j} } \right) = 0\). It is then easily verified that the covariance function of \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) satisfies Doob’s condition (3.1) and, therefore, \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) is a PCWM process.
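
In detail, for \(m \le n \le u\), the above display gives \(C\left( {u,m} \right) = D\left( {u,m + 1} \right)C\left( {m,m} \right)\), and since \(D\left( {u,n + 1} \right)D\left( {n,m + 1} \right) = D\left( {u,m + 1} \right)\), the verification of (3.1) takes one line:

$$C\left( {n,n} \right)C\left( {m,u} \right) = C\left( {n,n} \right)D\left( {u,m + 1} \right)C\left( {m,m} \right) = D\left( {u,n + 1} \right)C\left( {n,n} \right)D\left( {n,m + 1} \right)C\left( {m,m} \right) = C\left( {n,u} \right)C\left( {m,n} \right),\quad m \le n \le u.$$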

Conversely, suppose that \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) is a second-order PCWM process with period T and covariance function \(C\left( {.,.} \right)\). Let

$$a_{n} = \frac{{C\left( {n,n - 1} \right)}}{{C\left( {n - 1,n - 1} \right)}}, \quad \sigma_{n}^{2} = C\left( {n, n} \right) - \frac{{\left[ {C\left( {n,n - 1} \right)} \right]^{2} }}{{C\left( {n - 1,n - 1} \right)}}, \quad n \in {\mathbb{Z}}.$$
(4.2)

Note that \(\left\{ {a_{n} , n \in {\mathbb{Z}}} \right\}\) and \(\left\{ {\sigma_{n}^{2} , n \in {\mathbb{Z}}} \right\}\) are periodic with period \(T\). Now, let \(\varepsilon_{n} = X_{n} - a_{n} X_{n - 1}\). Then it is easily verified that \(\left\{ {\varepsilon_{n} } \right\}\) is a periodic white noise process, with \({\text{Var}}\left( {\varepsilon_{n} } \right) = \sigma_{n}^{2}\). Also, for \(m < n\) we have

$${\text{Cov}}(X_{m} ,\varepsilon_{n} ) = C\left( {m,n} \right) - \frac{{C\left( {n,n - 1} \right)}}{{C\left( {n - 1,n - 1} \right)}}C\left( {m,n - 1} \right) = C\left( {m,n} \right) - C\left( {m,n} \right) = 0,$$

using Doob’s condition (3.2). Thus, \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) is a PAR(1) process of the form (4.1). See Castro and Girardin (2008, Theorem 3, Eq. 18), but note that their \(w(l + nd)\) is a periodic white noise process whose (periodic) variances are needed in the rest of the proof; see Remark 4.2.

Remark 4.1

Since the sequences \(\left\{ {a_{n} , n \in {\mathbb{Z}}} \right\}\) and \(\left\{ {\sigma_{n}^{2} , n \in {\mathbb{Z}}} \right\}\) defined by (4.2) are periodic with period \(T,\) their values are completely determined by the values of \(a_{1} , \ldots ,a_{T}\) and \(\sigma_{1}^{2} , \ldots ,\sigma_{T}^{2}\). Thus, the PAR(1) representation shows that the covariance function of a PCWM process, \(C\left( {.,.} \right),\) is completely determined by the values of \(C\left( {i, i} \right)\) and \(C\left( {i + 1, i} \right), i = 0, 1, \ldots , T - 1\). (As we shall see in Theorem 4.1, a similar result holds in the multivariate case.)

Let \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\}\) be a PCWM process and let \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\) be the T-dimensional stationary process associated with \(\left\{ {X_{n} , n \in {\mathbb{Z}}} \right\};\) i.e., \(\varvec{Y}_{n} = \left( {X_{nT} ,X_{nT + 1} , \ldots ,X_{nT + T - 1} } \right)^{*} .\) Then using the PAR(1) representation (4.1), with \(\{ a_{n} ,n \in {\mathbb{Z}}\}\) and \(\{ \sigma_{n}^{2} , n \in {\mathbb{Z}}\}\) as defined in (4.2), it is easily shown that \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\) has a multivariate AR(1) representation

$$\varvec{A}\left( 0 \right)\varvec{Y}_{n} + \varvec{A}\left( 1 \right)\varvec{Y}_{n - 1} =\varvec{\varepsilon}_{n} ,$$
(4.3)

where \(\varvec{\varepsilon}_{n} = \left( {\varepsilon_{nT} ,\varepsilon_{nT + 1} , \ldots ,\varepsilon_{nT + T - 1} } \right)^{*}\) is a T-dimensional white noise process with diagonal covariance matrix \(\varvec{\varSigma}\) with \(\varvec{\varSigma}_{ii} = \sigma_{i }^{2} ,0 \le i \le T - 1\); \(\varvec{A}\left( 0 \right)\) is a triangular matrix with only \(2T - 1\) nonzero entries, \(\left[ {\varvec{A}\left( 0 \right)} \right]_{i,i} = 1, 0 \le i \le T - 1\), and \(\left[ {\varvec{A}\left( 0 \right)} \right]_{i,i - 1} = - a_{i}\), \(1 \le i \le T - 1;\) and \(\varvec{A}\left( 1 \right)\) has only one nonzero entry \(\left[ {\varvec{A}\left( 1 \right)} \right]_{0,T - 1} = - a_{T} = - a_{0}\). See Castro and Girardin (2008, Eq. 17).
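
The block structure of (4.3) is easy to check on a simulated path; in the sketch below (illustrative coefficients and innovation scales, not from the paper), a PAR(1) trajectory is generated, stacked into \(\varvec{Y}_{n}\), and the identity \(\varvec{A}\left( 0 \right)\varvec{Y}_{n} + \varvec{A}\left( 1 \right)\varvec{Y}_{n - 1} =\varvec{\varepsilon}_{n}\) is verified exactly.

```python
# Check of the block AR(1) representation (4.3) along a simulated PAR(1) path.
import numpy as np

rng = np.random.default_rng(0)
T = 3
a = np.array([0.9, -0.5, 0.7])        # a_n, period T; note a_T = a_0
sd = np.array([1.0, 1.4, 0.6])        # innovation standard deviations, period T

N = 20 * T                            # simulate X_0, ..., X_{N-1}
eps = sd[np.arange(N) % T] * rng.standard_normal(N)
X = np.zeros(N)
for n in range(1, N):
    X[n] = a[n % T] * X[n - 1] + eps[n]

# Block matrices of (4.3): [A(0)]_{ii} = 1, [A(0)]_{i,i-1} = -a_i, [A(1)]_{0,T-1} = -a_0.
A0 = np.eye(T)
for i in range(1, T):
    A0[i, i - 1] = -a[i]
A1 = np.zeros((T, T))
A1[0, T - 1] = -a[0]

Y = X.reshape(-1, T)                  # row n is Y_n = (X_{nT}, ..., X_{nT+T-1})
E = eps.reshape(-1, T)
for n in range(1, N // T):
    assert np.allclose(A0 @ Y[n] + A1 @ Y[n - 1], E[n])
print("Representation (4.3) holds along the simulated path.")
```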

Remark 4.2

Regarding (4.3), Castro and Girardin (2008, p. 163) gave the expression

$$\varvec{\varSigma}_{00} = C\left( {T - 1,T - 1} \right) - \left[ {C\left( {T,T - 1} \right)} \right]^{2} /C\left( {T - 1,T - 1} \right),$$
(4.4)

but this expression is not correct since, as noted above, \(\varvec{\varSigma}_{00} = \sigma_{0}^{2}\); i.e.,

$$\varvec{\varSigma}_{00} = C\left( {T,T} \right) - \left[ {C\left( {T,T - 1} \right)} \right]^{2} /C\left( {T - 1,T - 1} \right).$$
(4.5)

Clearly (4.4) is different from (4.5), unless \(C\left( {T - 1,T - 1} \right) = C\left( {T,T} \right)\), which need not hold.

The multivariate case. A p-dimensional second-order process \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is said to be a multivariate PAR(1) process with period T if it has a representation of the form

$$\varvec{X}_{n} = \varvec{A}_{n} \varvec{X}_{n - 1} +\varvec{\varepsilon}_{n} ,$$
(4.6)

where \(\left\{ {\varvec{A}_{n} } \right\}\) is a periodic sequence of nonzero transition matrices, \(\varvec{A}_{n + T} = \varvec{A}_{n} ,\) and \(\left\{ {\varvec{\varepsilon}_{n} , n \in {\mathbb{Z}}} \right\}\) is a multivariate periodic white noise process; i.e.,

$$E\left( {\varvec{\varepsilon}_{n} } \right) = 0, E\left( {\varvec{\varepsilon}_{m}\varvec{\varepsilon}_{n}^{\varvec{*}} } \right) = 0,\left( {m \ne n} \right);\quad {\text{Var}}\left( {\varvec{\varepsilon}_{n + T} } \right) = {\text{Var}}\left( {\varvec{\varepsilon}_{n} } \right),\quad m,n \in {\mathbb{Z}},$$

and \({\text{Cov}}(\varvec{X}_{m } ,\varvec{\varepsilon}_{n} ) = 0,\) for \(m < n.\)

Theorem 4.1

  1. (i)

    A p-dimensional second-order process \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is a multivariate PCWM process if and only if \(\{ \varvec{X}_{n} ,n \in {\mathbb{Z}}\}\) is a multivariate PAR(1) process.

  2. (ii)

    The Tp-dimensional stationary process \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\) associated with a multivariate PCWM process \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\},\) defined by \(\varvec{Y}_{n} = \left( {\varvec{X}_{nT}^{*} , \varvec{X}_{nT + 1}^{*} , \ldots ,\varvec{X}_{nT + T - 1}^{*} } \right)^{*}\), has the multivariate AR(1) representation

    $$\varvec{A}\left( 0 \right)\varvec{Y}_{n} + \varvec{A}\left( 1 \right)\varvec{Y}_{n - 1} =\varvec{\varepsilon}\left( n \right),$$

    where \(\varvec{\varepsilon}\left( n \right) = \left( {\varvec{\varepsilon}_{nT}^{*} ,\varvec{\varepsilon}_{nT + 1}^{*} , \ldots ,\varvec{\varepsilon}_{nT + T - 1}^{*} } \right)^{*}\) is a Tp-dimensional white noise process with block diagonal covariance matrix \(\varvec{\varSigma}= {\text{diag}}\left[ {\varvec{\varSigma}_{0} ,\varvec{ \varSigma }_{1} ,\varvec{ } \ldots ,\varvec{\varSigma}_{T - 1} } \right]\), where

    $$\varvec{\varSigma}_{i} = \varvec{C}\left( {i,i} \right) - \varvec{C}\left( {i,i - 1} \right)\left[ {\varvec{C}\left( {i - 1,i - 1} \right)} \right]^{ - 1} \varvec{C}\left( {i - 1,i} \right),$$

    \(\varvec{A}\left( 0 \right)\) is a block triangular matrix with only \(2T - 1\) nonzero \(p \times p\) block entries, \(\left[ {\varvec{A}\left( 0 \right)} \right]_{i,i} = \varvec{I}_{p}\), \(0 \le i \le T - 1\) and \(\left[ {\varvec{A}\left( 0 \right)} \right]_{i,i - 1} = - \varvec{A}_{i}\) for \(1 \le i \le T - 1\), where

    $$\varvec{A}_{i} = \varvec{C}\left( {i,i - 1} \right)\left[ {\varvec{C}\left( {i - 1,i - 1} \right)} \right]^{ - 1} ,$$

    and \(\varvec{A}\left( 1 \right)\) is a block matrix with only one nonzero \(p \times p\) block entry \(\left[ {\varvec{A}\left( 1 \right)} \right]_{0,T - 1} = - \varvec{A}_{0}\), where

    $$\varvec{A}_{0} = \varvec{C}\left( {T,T - 1} \right)\left[ {\varvec{C}\left( {T - 1,T - 1} \right)} \right]^{ - 1} .$$

Proof

  1. (i)

    Let \(\varvec{D}\left( {s,r} \right) = \varvec{A}_{s} \varvec{A}_{s - 1} \ldots \varvec{A}_{r}\), \(r \le s\) with the convention that \(\varvec{D}\left( {s,r} \right) = \varvec{I}\) (identity matrix) if \(r > s\). Suppose that \(\{ \varvec{X}_{n} , n \in {\mathbb{Z}}\}\) is a multivariate PAR(1) process, as given by (4.6). Then for \(n \in {\mathbb{Z}}\) and \(k \ge 1\), we have

    $$\varvec{X}_{n + k} = \varvec{D}\left( {n + k,n + 1} \right)\varvec{X}_{n} + \mathop \sum \limits_{j = 1}^{k} \varvec{D}\left( {n + k,n + 1 + j} \right)\varvec{\varepsilon}_{n + j} .$$

    Therefore,

    $${\text{Cov}}\left( {\varvec{X}_{n + k} ,\varvec{X}_{n} } \right) = \varvec{D}\left( {n + k,n + 1} \right){\text{Var}}\left( {\varvec{X}_{n} } \right),$$

    since \({\text{Cov}}\left( {\varvec{X}_{n} ,\varvec{\varepsilon}_{n + j} } \right) = 0\). It is then easily verified that the covariance function of \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) satisfies condition (3.8) and, therefore, \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is a wide-sense Markov process.

    Conversely, let \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) be a multivariate PCWM process with nonzero covariance function \(\varvec{C}\left( {m,n} \right) = E\left( {\varvec{X}_{m} \varvec{X}_{n}^{\varvec{*}} } \right)\) and let \(\varvec{\varepsilon}_{n} = \varvec{X}_{n} - \varvec{A}_{n} \varvec{X}_{n - 1} ,\) where

    $$\varvec{A}_{n} = \varvec{C}\left( {n,n - 1} \right)\left[ {\varvec{C}\left( {n - 1,n - 1} \right)} \right]^{ - 1} , \quad n \in {\mathbb{Z}}.$$
    (4.7)

    Note that \(\left\{ {\varvec{A}_{n} , n \in {\mathbb{Z}}} \right\}\) is a periodic sequence with period \(T\), \(\varvec{A}_{n + T} = \varvec{A}_{n}\). Now, for \(m < n\), we have

    $$\begin{aligned} {\text{Cov}}\left( {\varvec{\varepsilon}_{m} ,\varvec{\varepsilon}_{n} } \right) &= \varvec{C}\left( {m,n} \right) - \varvec{C}\left( {m,n - 1} \right)\left[ {\varvec{C}\left( {n - 1,n - 1} \right)} \right]^{ - 1} \varvec{C}\left( {n - 1,n} \right) \\ & - \varvec{C}\left( {m,m - 1} \right)\left[ {\varvec{C}\left( {m - 1,m - 1} \right)} \right]^{ - 1} \varvec{C}\left( {m - 1,n} \right) \\ & + \varvec{C}\left( {m,m - 1} \right)\left[ {\varvec{C}\left( {m - 1,m - 1} \right)} \right]^{ - 1} \varvec{C}\left( {m - 1,n - 1} \right)\left[ {\varvec{C}\left( {n - 1,n - 1} \right)} \right]^{ - 1} \varvec{C}\left( {n - 1,n} \right), \\ \end{aligned}$$

    and, using (3.9), it follows that

    $$\begin{aligned} {\text{Cov}}\left( {\varvec{\varepsilon}_{m} ,\varvec{\varepsilon}_{n} } \right) &= \varvec{C}\left( {m,n} \right) - \varvec{C}\left( {m,n} \right) - \varvec{C}\left( {m,m - 1} \right)\left[ {\varvec{C}\left( {m - 1,m - 1} \right)} \right]^{ - 1} \varvec{C}\left( {m - 1,n} \right) \\ & \quad + \;\varvec{C}\left( {m,m - 1} \right)\left[ {\varvec{C}\left( {m - 1,m - 1} \right)} \right]^{ - 1} \varvec{C}\left( {m - 1,n} \right) = 0. \\ \end{aligned}$$

    Also, for \(m < n\), we have

    $$\begin{aligned} {\text{Cov}}\left( {\varvec{X}_{m} ,\varvec{\varepsilon}_{n} } \right) & = \varvec{C}\left( {m,n} \right) - \varvec{C}\left( {m,n - 1} \right)\left[ {\varvec{C}\left( {n - 1,n - 1} \right)} \right]^{ - 1} \varvec{C}\left( {n - 1,n} \right) \\ & = \varvec{C}\left( {m,n} \right) - \varvec{C}\left( {m,n} \right) = 0, \\ \end{aligned}$$

    using (3.9). Finally, for \(n \in {\mathbb{Z}}\)

    $$\begin{aligned} {\text{Var}}\left( {\varvec{\varepsilon}_{n} } \right) & = {\text{Var}}\left( {\varvec{X}_{n} - \varvec{A}_{n} \varvec{X}_{n - 1} } \right) \\ & = \varvec{C}(n,n) + \varvec{A}_{n} \varvec{C}(n - 1,n - 1)\varvec{A}_{n}^{\varvec{*}} - \varvec{C}\left( {n,n - 1} \right)\varvec{A}_{\varvec{n}}^{\varvec{*}} - \varvec{A}_{\varvec{n}} \varvec{C}(n - 1,n), \\ \end{aligned}$$

    which simplifies to

    $${\text{Var}}\left( {\varvec{\varepsilon}_{n} } \right) = \varvec{C}\left( {n,n} \right) - \varvec{C}\left( {n,n - 1} \right)\left[ {\varvec{C}\left( {n - 1,n - 1} \right)} \right]^{ - 1} \varvec{C}\left( {n - 1,n} \right).$$
    (4.8)

    Hence, \(\{\varvec{\varepsilon}_{n} \}\) is a p-dimensional periodic white noise process and \(\left\{ {\varvec{X}_{n} , n \in {\mathbb{Z}}} \right\}\) is a periodic AR(1) process with representation (4.6).

  2. (ii)

    This follows from the multivariate PAR(1) representation (4.6), with values of \(\varvec{A}_{i}\) and \(\varvec{\varSigma}_{i}\) given by (4.7) and (4.8).

5 Results for multivariate PCWM processes

Let \(\left\{ {\varvec{X}_{n} ,n \in {\mathbb{Z}}} \right\}\) be a p-dimensional zero mean second-order PCWM process of period T, with nonzero covariance matrices \(\varvec{C}\left( {m,n} \right) = {\text{Cov}}\left( {\varvec{X}_{m} ,\varvec{X}_{n} } \right)\) and nonsingular variance matrices \(\varvec{C}\left( {n,n} \right)\), \(n \in {\mathbb{Z}}\). Let \(\left\{ {\varvec{Y}_{n} , n \in {\mathbb{Z}}} \right\}\) be the Tp-dimensional stationary process defined by \(\varvec{Y}_{n} = \left( {\varvec{X}_{nT}^{*} ,\varvec{X}_{nT + 1}^{*} , \ldots ,\varvec{X}_{nT + T - 1}^{*} } \right)^{*} , n \in {\mathbb{Z}}\), and let \(\varvec{Q}\left( k \right) = {\text{Cov}}\left( {\varvec{Y}_{n + k} , \varvec{Y}_{n} } \right)\) denote the covariance function of \(\left\{ {\varvec{Y}_{n} } \right\}\). As noted in Remark 4.1, the covariance structure of \(\left\{ {\varvec{X}_{n} ,n \in {\mathbb{Z}}} \right\}\) is completely determined by the values of \(\varvec{C}\left( {i, i} \right)\) and \(\varvec{C}\left( {i + 1, i} \right)\), \(i = 0, 1, \ldots , T - 1\). In the following theorem, \(\varvec{Q}\left( k \right)\) is expressed in terms of \(\varvec{C}\left( {i, i} \right)\) and \(\varvec{C}\left( {i + 1, i} \right)\), \(i = 0, 1, \ldots , T - 1\).

Theorem 5.1

The covariance function of \(\left\{ {\varvec{Y}_{n} ,n \in {\mathbb{Z}}} \right\}\), \(\varvec{Q}\left( k \right) = \left[ {\varvec{Q}_{ij} (k)} \right]_{i,j = 0}^{T - 1}\), is given by

$$\varvec{Q}_{ij} \left( 0 \right) = \left\{ {\begin{array}{*{20}l} {\varvec{D}\left( {i,j + 2} \right)\varvec{C}\left( {j + 1,j} \right),} & {j + 2 \le i \le T - 1,} & {0 \le j \le T - 3,} \\ {\varvec{C}\left( {j + 1,j} \right),} & {i = j + 1,} & {0 \le j \le T - 2,} \\ {\varvec{C}\left( {j,j} \right),} & {i = j,} & {0 \le j \le T - 1,} \\ {\left[ {\varvec{C}\left( {i + 1,i} \right)} \right]^{*} ,} & {j = i + 1,} & {0 \le i \le T - 2,} \\ {\left[ {\varvec{C}\left( {i + 1,i} \right)} \right]^{*} \left[ {\varvec{D}\left( {j,i + 2} \right)} \right]^{*} ,} & {i + 2 \le j \le T - 1,} & {0 \le i \le T - 3,} \\ \end{array} } \right.$$
(5.1)

and for \(k \ge 1,\)

$$\varvec{Q}_{ij} \left( k \right) = \left\{ {\begin{array}{*{20}l} {\varvec{D}\left( {i,0} \right)\left[ {\varvec{D}\left( {T - 1,0} \right)} \right]^{k - 1} \varvec{D}\left( {T - 1,j + 2} \right)\varvec{C}\left( {j + 1,j} \right), \quad 0 \le j \le T - 3, } \\ {\varvec{D}\left( {i,0} \right)\left[ {\varvec{D}\left( {T - 1,0} \right)} \right]^{k - 1} \varvec{C}\left( {T - 1,T - 2} \right), \quad j = T - 2,} \\ {\varvec{D}\left( {i,0} \right)\left[ {\varvec{D}\left( {T - 1,0} \right)} \right]^{k - 1} \varvec{C}\left( {T - 1,T - 1} \right), \quad j = T - 1,} \\ \end{array} } \right.$$
(5.2)

where

$$\varvec{D}\left( {i,j} \right) = \left\{ \begin{array}{ll}{\varvec{A}_{i} \varvec{A}_{i - 1} \ldots \varvec{A}_{j} ,} &\quad i \ge j, \\ {\varvec{I}}, &\quad i < j, \\ \end{array} \right.$$

(\(\varvec{I}\) denotes the identity matrix) and \(\varvec{A}_{i} = \varvec{C}\left( {i,i - 1} \right)\left[ {\varvec{C}\left( {i - 1,i - 1} \right)} \right]^{ - 1} , i = 0,1, \ldots ,T - 1\).

Proof

As in Nematollahi and Soltani (2000), it is easily verified that \(\varvec{Q}_{ij} \left( k \right) = \varvec{C}\left( {kT + i,j} \right).\) Hence, \(\varvec{Q}_{jj} \left( 0 \right) = \varvec{C}\left( {j, j} \right)\) and \(\varvec{Q}_{j + 1,j} \left( 0 \right) = \varvec{C}\left( {j + 1, j} \right).\) For \(i \ge j + 2\) \(\left( {0 \le j \le T - 3} \right)\), by applying (3.8) repeatedly, we have

$$\begin{aligned} \varvec{Q}_{ij} \left( 0 \right) & = \varvec{C}\left( {i,j} \right) = \varvec{C}\left( {i,i - 1} \right)\left[ {\varvec{C}\left( {i - 1,i - 1} \right)} \right]^{ - 1} \varvec{C}\left( {i - 1,j} \right) \\ & = \varvec{C}\left( {i,i - 1} \right)\left[ {\varvec{C}\left( {i - 1,i - 1} \right)} \right]^{ - 1} \varvec{C}\left( {i - 1,i - 2} \right)\left[ {\varvec{C}\left( {i - 2,i - 2} \right)} \right]^{ - 1} \varvec{C}\left( {i - 2,j} \right) \\ & = \cdots = \varvec{C}\left( {i,i - 1} \right)\left[ {\varvec{C}\left( {i - 1,i - 1} \right)} \right]^{ - 1} \cdots \varvec{C}\left( {j + 2,j + 1} \right)\left[ {\varvec{C}\left( {j + 1,j + 1} \right)} \right]^{ - 1} \varvec{C}\left( {j + 1,j} \right) \\ & = \varvec{A}_{i} \varvec{A}_{i - 1} \cdots \varvec{A}_{j + 2} \varvec{C}\left( {j + 1,j} \right) = \varvec{D}\left( {i,j + 2} \right)\varvec{C}\left( {j + 1,j} \right), \\ \end{aligned}$$

and for \(i < j,\) the entries of \(\varvec{Q}\left( 0 \right)\) are given by \(\varvec{Q}_{ij} \left( 0 \right) = \left[ {\varvec{Q}_{ji} \left( 0 \right)} \right]^{*}\), since \(\varvec{ Q}\left( 0 \right)\) is symmetric. For \(k \ge 1\), first we note that

$$\begin{aligned} \varvec{Q}_{ij} \left( k \right) & = \varvec{C}\left( {kT + i,j} \right) = \varvec{C}\left( {kT + i,kT + i - 1} \right)\left[ {\varvec{C}\left( {kT + i - 1,kT + i - 1} \right)} \right]^{ - 1} \varvec{C}\left( {kT + i - 1,j} \right) \\ & = \varvec{C}\left( {kT + i, kT + i - 1} \right)\left[ {\varvec{C}\left( {kT + i - 1,kT + i - 1} \right)} \right]^{ - 1} \\ & \quad \times \;\varvec{C}\left( {kT + i - 1,kT + i - 2} \right)\left[ {\varvec{C}\left( {kT + i - 2,kT + i - 2} \right)} \right]^{ - 1} \varvec{C}\left( {kT + i - 2,j} \right) \\ & = \cdots = \varvec{C}\left( {kT + i,kT + i - 1} \right)\left[ {\varvec{C}\left( {kT + i - 1,kT + i - 1} \right)} \right]^{ - 1} \cdots \\ \varvec{ } & \quad \times \;\varvec{C}\left( {kT,kT - 1} \right)\left[ {\varvec{C}\left( {kT - 1,kT - 1} \right)} \right]^{ - 1} \varvec{C}\left( {kT - 1,j} \right) \\ & = \varvec{C}\left( {i,i - 1} \right)\left[ {\varvec{C}\left( {i - 1,i - 1} \right)} \right]^{ - 1} \cdots \varvec{C}\left( {1,0} \right)\left[ {\varvec{C}\left( {0,0} \right)} \right]^{ - 1} \\ & \quad \times \;\varvec{C}\left( {T,T - 1} \right)\left[ {\varvec{C}\left( {T - 1,T - 1} \right)} \right]^{ - 1} \varvec{C}\left( {kT - 1,j} \right) \\ & = \varvec{A}_{i} \varvec{A}_{i - 1} \cdots \varvec{A}_{0} \varvec{C}\left( {kT - 1,j} \right) = \varvec{D}\left( {i,0} \right)\varvec{C}\left( {kT - 1,j} \right). \\ \end{aligned}$$
(5.3)

Now, by induction we show that

$$\varvec{C}\left( {kT - 1,j} \right) = \left[ {\varvec{D}\left( {T - 1,0} \right)} \right]^{k - 1} \varvec{C}\left( {T - 1,j} \right),\quad k \ge 1.$$
(5.4)

Clearly (5.4) holds for \(k = 1.\) Now suppose (5.4) holds for \(k = m;\) i.e., \(\varvec{C}\left( {mT - 1,j} \right) = \left[ {\varvec{D}\left( {T - 1,0} \right)} \right]^{m - 1} \varvec{C}\left( {T - 1,j} \right).\) Then, for \(k = m + 1,\) we have

$$\begin{aligned} & \varvec{C}\left( {\left( {m + 1} \right)T - 1,j} \right) \\ & \quad = \varvec{C}\left( {(m + 1)T - 1,(m + 1)T - 2} \right)\left[ {\varvec{C}\left( {(m + 1)T - 2,(m + 1)T - 2} \right)} \right]^{ - 1} \varvec{C}\left( {(m + 1)T - 2,j} \right) \\ & \quad = \cdots = \varvec{C}\left( {\left( {m + 1} \right)T - 1,\left( {m + 1} \right)T - 2} \right)\left[ {\varvec{C}\left( {\left( {m + 1} \right)T - 2,\left( {m + 1} \right)T - 2} \right)} \right]^{ - 1} \cdots \\ & \quad \quad \times \;\varvec{C}\left( {mT,mT - 1} \right)\left[ {\varvec{C}\left( {mT - 1,mT - 1} \right)} \right]^{ - 1} \varvec{C}\left( {mT - 1,j} \right) \\ & \quad = \varvec{A}_{T - 1} \varvec{A}_{T - 2} \cdots \varvec{A}_{0} \varvec{C}\left( {mT - 1,j} \right) = \varvec{D}\left( {T - 1,0} \right)\left[ {\varvec{D}\left( {T - 1,0} \right)} \right]^{m - 1} \varvec{C}\left( {T - 1,j} \right) \\ & \quad = \left[ {\varvec{D}\left( {T - 1,0} \right)} \right]^{m} \varvec{C}\left( {T - 1,j} \right), \\ \end{aligned}$$

which completes the proof of (5.4). From (5.3) and (5.4), we have

$$\varvec{Q}_{ij} \left( k \right) = \varvec{D}\left( {i,0} \right)\left[ {\varvec{D}\left( {T - 1,0} \right)} \right]^{k - 1} \varvec{C}\left( {T - 1,j} \right),$$

and (5.2) follows by noting that \(\varvec{C}\left( {T - 1,j} \right) = \varvec{Q}_{T - 1,j} \left( 0 \right)\), which is given in (5.1).
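
As a final illustration, the following numpy sketch (illustrative matrices, not from the paper) generates a bivariate PCWM process through the multivariate PAR(1) recursion of Sect. 4 (so that the generating matrices \(\varvec{A}_{i}\) coincide with \(\varvec{C}\left( {i,i - 1} \right)\left[ {\varvec{C}\left( {i - 1,i - 1} \right)} \right]^{ - 1}\)) and checks that formulas (5.1) and (5.2) reproduce the directly computed blocks \(\varvec{Q}_{ij} \left( k \right) = \varvec{C}\left( {kT + i,j} \right)\).

```python
# Numerical check of Theorem 5.1 (illustrative matrices).
import numpy as np

p, T = 2, 3
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, -0.2], [0.1, 0.5]]),
     np.array([[0.4, 0.0], [0.2, 0.3]])]                     # A_0, A_1, A_2 (period T)
S = [np.eye(p) + 0.2 * np.ones((p, p)) for _ in range(T)]    # Sigma_0, Sigma_1, Sigma_2

# Periodic variances V[i] = C(i, i): fixed point of V[i] = A_i V[i-1] A_i^* + Sigma_i.
V = [np.eye(p) for _ in range(T)]
for _ in range(500):
    for i in range(T):
        V[i] = A[i] @ V[(i - 1) % T] @ A[i].T + S[i]

def cov(n, m):
    """C(n, m) = Cov(X_n, X_m) of the PAR(1)/PCWM process (n, m >= 0)."""
    if n < m:
        return cov(m, n).T
    if n == m:
        return V[n % T]
    return A[n % T] @ cov(n - 1, m)      # C(n, m) = A_n C(n-1, m) for n > m

def D(i, j):
    """D(i, j) = A_i A_{i-1} ... A_j, the identity matrix if i < j."""
    M = np.eye(p)
    for l in range(i, j - 1, -1):
        M = M @ A[l]
    return M

def Q_direct(k):                         # Q_ij(k) = C(kT + i, j)
    return np.block([[cov(k * T + i, j) for j in range(T)] for i in range(T)])

def Q_theorem(k):                        # formulas (5.1) (k = 0) and (5.2) (k >= 1)
    B = [[None] * T for _ in range(T)]
    for i in range(T):
        for j in range(T):
            if k == 0:
                if i >= j + 2:
                    B[i][j] = D(i, j + 2) @ cov(j + 1, j)
                elif i == j + 1:
                    B[i][j] = cov(j + 1, j)
                elif i == j:
                    B[i][j] = cov(j, j)
                elif j == i + 1:
                    B[i][j] = cov(i + 1, i).T
                else:
                    B[i][j] = cov(i + 1, i).T @ D(j, i + 2).T
            else:
                core = D(i, 0) @ np.linalg.matrix_power(D(T - 1, 0), k - 1)
                if j <= T - 3:
                    B[i][j] = core @ D(T - 1, j + 2) @ cov(j + 1, j)
                elif j == T - 2:
                    B[i][j] = core @ cov(T - 1, T - 2)
                else:
                    B[i][j] = core @ cov(T - 1, T - 1)
    return np.block(B)

for k in range(4):
    assert np.allclose(Q_theorem(k), Q_direct(k))
print("Formulas (5.1)-(5.2) match the directly computed block covariances.")
```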