1 Introduction

Let \((\varOmega , {{\mathcal {F}}}, {\mathbb P})\) be a basic probability space equipped with a right-continuous, increasing family of \(\sigma \)-algebras \(({{\mathcal {F}}}_{t}, t\ge 0)\) and let \((Z_{t}, t\ge 0)\) be a standard \(\alpha \)-stable Lévy motion with \(Z_{1} \sim S_{\alpha }(1,\beta , 0)\), where \(\alpha \) is the stability index and \(\beta \in [-1,1]\) is the skewness parameter (we shall briefly recall the relevant definitions in the next section). The so-called \(\alpha \)-stable Ornstein–Uhlenbeck motion \(X=(X_{t}, t\ge 0)\), starting from a point \(x_{0} \in {\mathbb R}\), is defined as an Ornstein–Uhlenbeck process driven by the \(\alpha \)-stable Lévy motion \(Z_t\). It satisfies the following stochastic Langevin equation

$$\begin{aligned} dX_{t}=-\theta X_{t} dt+ \sigma dZ_{t},\quad t \in [0,\infty ), \quad X_{0}=x_{0}, \end{aligned}$$
(1.1)

where \(\theta \), \(\sigma \) are some constants. It is well-known (see Theorem 17.5 in Sato 1999) that if \(\theta >0\), then \(X_{t}\) is ergodic and converges in law to the random variable \(X_{o}=\sigma \int _{0}^{\infty } e^{-\theta s} dZ_{s}\).

From the above definition we see that the \(\alpha \)-stable Ornstein–Uhlenbeck motion \(X_t\) depends on the following parameters: the stability index \(\alpha \), the skewness parameter \(\beta \), the drift parameter \(\theta \), and the dispersion parameter \( \sigma \). In this work we assume that the values of these parameters are unknown but the \(\alpha \)-stable Ornstein–Uhlenbeck motion \((X_{t}, t\ge 0)\) can be observed at discrete times \(t_k\) (for simplicity, we let \(t_k=kh\) for some fixed \(h>0\)). We want to use the available data \(\{X_{t_k}, k= 1, 2, \ldots , n\}\) to estimate the parameters \(\alpha , \beta , \theta \), and \(\sigma \) simultaneously.

The parametric estimation problems for diffusion processes driven by a Lévy process such as a compound Poisson process, gamma process, inverse Gaussian process, variance gamma process, normal inverse Gaussian process or some generalized tempered stable processes have been studied earlier. Let us mention the following works: Brockwell et al. (2007), Masuda (2005), Ogihara and Yoshida (2011), Shimizu (2006), Shimizu and Yoshida (2006), Spiliopoulos (2008), and Valdivieso et al. (2009). These works consider quasi-maximum likelihood, least squares, or trajectory-fitting estimators, and they establish the consistency and asymptotic normality of those estimators. Masuda (2010) proposed a self-weighted least absolute deviation estimator for discretely observed ergodic Ornstein–Uhlenbeck processes driven by symmetric Lévy processes. For some recent developments on the estimation of drift parameters for stochastic processes driven by small Lévy noises, we refer to Long (2009) and Long et al. (2013, 2017) as well as related references therein.

However, none of the aforementioned papers covers the case where the noise is an \(\alpha \)-stable Lévy motion. When the noise is an \(\alpha \)-stable Lévy motion the process does not have a second moment, which makes the parametric estimation problem more difficult, and only a few papers deal with it. Let us first summarize some relevant work. Hu and Long (2007, 2009) proposed trajectory-fitting and least squares estimators for the drift parameter \(\theta \), assuming the other parameters \(\alpha \), \(\beta \), and \(\sigma \) are known, under both continuous and discrete observations. They discovered that the limiting distributions are stable distributions, which differ from the classical setting where the asymptotic distributions are normal. Fasen (2013) extended the results of Hu and Long (2009) to high dimensions.

To deal with discrete time observations, which is the common practice and the main focus of this paper, most of the literature assumes that the time step h depends on n and converges to 0 as n goes to infinity. This means that high frequency data must be available for the estimators to be effective. In some situations, such as in finance, high frequency data collection is possible. But in many other situations high frequency data collection may be impossible or too expensive. To handle this situation, one has to find consistent estimators which allow h to be an arbitrary fixed constant. Along this line, some progress has been made in Hu and Song (2013) and Hu et al. (2015) for Ornstein–Uhlenbeck processes or reflected Ornstein–Uhlenbeck processes driven by Brownian motion or fractional Brownian motions, as well as in Zhang and Zhang (2013) for Ornstein–Uhlenbeck processes driven by symmetric \(\alpha \)-stable motions. The idea is to use the ergodic theorems for the underlying Ornstein–Uhlenbeck processes to construct ergodic type estimators. The strong consistency and the asymptotic normality are proved when the time step h remains constant (as the number of sample points n goes to infinity). However, in the above papers, one can only estimate the drift parameter \(\theta \); there have been no estimators for all the parameters simultaneously. The main goal of the present paper is to fill this gap.

We want to simultaneously estimate all the parameters \(\theta , \alpha , \sigma \) and \(\beta \) in the \(\alpha \)-stable Ornstein–Uhlenbeck motion (1.1). We still use the generalized method of moments via ergodic theory, but since the \(\alpha \)-stable motion has no second or higher moments we shall use the sample characteristic functions. Namely, we use the following ergodic theorem: \(\lim _{n \rightarrow \infty } \frac{1}{n} \sum _{k=1}^{n} f(X_{t_k})= {\mathbb {E}} f({\tilde{X}}_o) \) almost surely, where the distribution of \({\tilde{X}}_o \) is the invariant measure of the \(\alpha \)-stable Ornstein–Uhlenbeck motion \(X_t\). However, this alone cannot be used to estimate all the parameters \(\theta , \alpha , \sigma \) and \(\beta \), since we cannot separate all the parameters in the stationary distribution of \({\tilde{X}}_o\) (see Remark 1). The idea is then to use a more sophisticated ergodic theorem: \(\lim _{n \rightarrow \infty } \frac{1}{n} \sum _{k=1}^{n} f(X_{t_k}, X_{t_{k+1}})= {\mathbb {E}} f({\tilde{X}}_0, {\tilde{X}}_{t_1})\), where \({\tilde{X}}_t\) satisfies (1.1) with initial condition \({\tilde{X}}_0\) having the invariant measure (namely, \({\tilde{X}}_0\) and \({\tilde{X}}_o\) have the same probability law) and being independent of the \(\alpha \)-stable motion \(Z_t\). Note that the explicit forms of the probability density function of \(\tilde{X}_o\) and the joint probability density function of \(({\tilde{X}}_0, {\tilde{X}}_{t_1})\) are unknown except for some very special parameters. However, it is possible to find the explicit forms of the characteristic functions of \({\tilde{X}}_o \) and of \(({\tilde{X}}_0, {\tilde{X}}_{t_1})\). These characteristic functions will be used to construct estimators for \(\theta , \alpha , \sigma \) and \(\beta \).

To validate our approach we have carried out a number of simulations to illustrate our estimators. First, we simulate data from (1.1) with given values of \(\alpha ,\beta , \theta \) and \(\sigma \). Then we apply our estimators to estimate these parameters. The numerical results show that our estimators are accurate and converge fast to the true parameters. Our estimators work for any fixed \(h>0\) (even large h), although we report only \(h=0.5\) (which is already quite large). As discussed in Rosinski (2002) and Zhang (2011), the Euler scheme is seldom used for simulating an Ornstein–Uhlenbeck process driven by a Lévy process. To save computation time we find a way to simulate the \(\alpha \)-stable Ornstein–Uhlenbeck motion \(\{X_{kh}, k=1, \ldots , n\}\) in a straightforward way without any extra computations.

We note that another method of estimating all the parameters for time series models is the ECF (empirical characteristic function) method discussed in Knight and Yu (2002) and Yu (2004). They fit the ECF to the theoretical CF (characteristic function) continuously in frequency by minimizing a distance measure between the joint CF and the joint ECF. Under certain regularity conditions, they established consistency, asymptotic normality, and asymptotic efficiency of the proposed ECF estimators. The i.i.d. case was discussed much earlier by Paulson et al. (1975) and Heathcote (1977), who called it the integrated squared error method.

In this paper we employ the well-known generalized method of moments (GMM) for parameter estimation. GMM refers to a class of estimators constructed from the sample moment counterparts of population moments. It nests the classical method of moments, the least squares method, and the maximum likelihood method. GMM has been extensively studied and widely used in many applications since the seminal work of Hansen (1982). In particular, GMM has been successfully applied to parameter estimation and inference for stochastic models in finance, including foreign exchange markets and asset pricing, in Hansen and Hodrick (1980), Hansen and Singleton (1982), Harvey (1989), Zhou (1994), Brandt (1999), Cochrane (2001), and Singleton (2006). For a comprehensive treatment of GMM, we refer to Hall (2005). For generalizations and improvements of GMM, we refer to Qian and Schmidt (1999), Carrasco and Florens (2000), Duffie and Glynn (2004), Newey and Smith (2004), Smith (2005), Hall et al. (2007), Bravo (2011), and Lynch and Wachter (2013).

The paper is organized as follows. In Sect. 2, we recall some basic results for \(\alpha \)-stable Lévy motions which we need in this paper. In Sect. 3, we construct estimators for all the parameters in the \(\alpha \)-stable Ornstein–Uhlenbeck motion by using ergodic theory and sample characteristic functions. The consistency of the estimators is established. The asymptotic normality of the joint estimators is obtained and the asymptotic covariance matrix is computed. The asymptotic covariance depends on the points we choose in the characteristic functions, so we also design a procedure for selecting the four grid points used in the estimation in a certain optimal way. Section 4 provides a validation of our estimators through numerical simulations. The (true) values of the parameters are given and then used to simulate the \(\alpha \)-stable Ornstein–Uhlenbeck motion \(X_t\). From these simulated values we compute our estimators and compare them with the true parameters. Numerical results show that our estimators converge fast to the true parameters. Finally, all the lemmas with their proofs, the proof of Theorem 1, and the explicit expression of the crucial covariance matrix defined in Sect. 3 are provided in Sect. 5 (“Appendix”).

2 Limiting distributions of \(\alpha \)-stable Ornstein–Uhlenbeck motions

We first recall some basic definitions. A random variable \(\eta \) is said to follow a stable distribution, denoted by \(\eta \sim S_{\alpha }(\sigma , \beta ,\gamma )\), if its characteristic function has the following form:

$$\begin{aligned} \phi _{\eta }(u)={\mathbb {E}}[e^{iu\eta }]=\left\{ \begin{array}{ll} \exp \left\{ -\sigma ^{\alpha }|u|^{\alpha } \left( 1-i\beta \mathrm{sgn}(u) \tan \frac{\alpha \pi }{2}\right) +i\gamma u \right\} , &{} \quad \mathrm{if}\ \alpha \ne 1, \\ \exp \left\{ -\sigma |u|\left( 1+i\beta \frac{2}{\pi }\mathrm{sgn}(u)\log |u| \right) +i\gamma u \right\} , &{} \quad \mathrm{if}\ \alpha =1. \end{array} \right. \end{aligned}$$

In the above definition \(\alpha \in (0,2]\), \(\sigma \in (0,\infty )\), \(\beta \in [-1,1]\), and \(\gamma \in (-\infty , \infty )\) are called the index of stability, the scale, skewness, and location parameters, respectively.

We shall assume \(\gamma =0\) throughout the paper. This means that we consider only strictly \(\alpha \)-stable distributions. If in addition \(\beta =0\), we call \(\eta \) symmetric \(\alpha \)-stable.

Definition 1

An \({\mathcal {F}}_t\)-adapted stochastic process \(\{Z_{t}\}_{t\ge 0}\) is called a standard \(\alpha \)-stable Lévy motion if

(i) \(Z_{0}=0\), a.s.;

(ii) \(Z_{t}-Z_{s} \sim S_{\alpha }((t-s)^{1/\alpha }, \beta , 0)\), \( t>s\ge 0\);

(iii) For any finite time points \(0\le s_{0}<s_{1}<\cdots<s_{m}<\infty \), the random variables \(Z_{s_{0}}, Z_{s_{1}}-Z_{s_{0}}, \ldots , Z_{s_{m}}-Z_{s_{m-1}}\) are independent.

Stochastic analysis with respect to \(\alpha \)-stable motion has been studied by many authors. We refer to Janicki and Weron (1994), Samorodnitsky and Taqqu (1994), Sato (1999), and Zolotarev (1986) for more references.

When Z is an \(\alpha \)-stable Lévy motion, the stochastic Langevin equation (1.1) has a unique solution which is given explicitly by

$$\begin{aligned} X_{t}=e^{-\theta t}x_{0}+\sigma \int _{0}^{t}e^{-\theta (t-s)} dZ_{s}. \end{aligned}$$
(2.1)

It is known that the \(\alpha \)-stable Ornstein–Uhlenbeck motion \(X_t\) has a limiting distribution which coincides with the distribution of \({\tilde{X}}_o =\sigma \int _{0}^{\infty } e^{-\theta s}dZ_{s}\). It is also well-known that \(X_t\) is ergodic. This means that for any function \(f:{\mathbb R}\rightarrow {\mathbb R}\) such that \({\mathbb E} | f({\tilde{X}}_o ) |<\infty \) we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n}\sum _{j=1}^{n} f(X_{t_{j}})={\mathbb {E}}\left[ f({\tilde{X}}_o)\right] \end{aligned}$$
(2.2)

almost surely. The explicit computation of the above right-hand side is usually difficult for a general function f, since the explicit form of the probability density function of \({\tilde{X}}_o\) is not available. But when f has a specific form (namely \(f(x)=e^{iux}\), yielding the characteristic function), the right-hand side is explicit, as given below. The limiting random variable \({\tilde{X}}_o\) is \(\alpha \)-stable with distribution \((\frac{1}{\alpha \theta })^{1/\alpha } S_{\alpha }(\sigma ,\beta ,0)\)\(=S_{\alpha }(\sigma (\frac{1}{\alpha \theta })^{1/\alpha }, \beta , 0)\) (via a time change and self-similarity). So the characteristic function of \({\tilde{X}}_{o}\) in this one-dimensional case is given by

$$\begin{aligned} \phi (u)={\mathbb {E}}[\exp (iu{\tilde{X}}_{o})]= \left\{ \begin{array}{ll} \exp \left\{ -\frac{\sigma ^{\alpha }}{\alpha \theta }|u|^{\alpha } \left( 1-i\beta \mathrm{sgn}(u) \tan \frac{\alpha \pi }{2}\right) \right\} , &{} \quad \mathrm{if}\ \alpha \ne 1, \\ \exp \left\{ -\frac{\sigma }{\theta }|u|\left( 1+i\beta \frac{2}{\pi }\mathrm{sgn}(u)\log |u| \right) \right\} , &{} \quad \mathrm{if}\ \alpha =1. \end{array} \right. \end{aligned}$$
(2.3)

Remark 1

Since a probability distribution is uniquely determined by its characteristic function, we see from the above expression (2.3) that the probability distribution of \(\tilde{X}_o\) depends on \(\sigma \) and \(\theta \) only through the ratio \(\frac{\sigma ^{\alpha }}{\alpha \theta }\); we cannot separate \(\sigma \) and \(\theta \). This further implies that for any measurable function f the expectation \({\mathbb {E}}|f({\tilde{X}}_o)|\), when finite, is also a function of \(\frac{\sigma ^{\alpha }}{\alpha \theta }\) (for given \(\alpha \) and \(\beta \)).

The ergodic theorem (2.2) can then be written as

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n}\sum _{j=1}^{n} \exp (iuX_{t_{j}})=\phi (u), \quad u \in {\mathbb R}, \quad a.s. \end{aligned}$$
(2.4)

This identity will be used to construct statistical estimators of the parameters appearing in (1.1).

One might hope to use the ergodic theorem (2.2) alone to estimate all the parameters: by choosing f differently one can obtain a sufficiently large number of different equations, which might then be solved for all the unknown parameters. However, this is impossible in our situation, since in the stationary distribution, as we can see from its characteristic function (2.3), one can only estimate \(\frac{\sigma ^{\alpha }}{\alpha \theta }\) as a whole. For example, one cannot separate \(\sigma \) and \(\theta \) in the characteristic function \(\phi (u)\) of \({\tilde{X}}_{o}\). This forces us to seek other possibilities. To this end we shall use the ergodic theorem for \(X_{t_{k}}-X_{t_{k-1}}\). More precisely, from Theorem 1.1 of Billingsley (1961), it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n}\sum _{k=1}^{n} \exp [iu(X_{t_{k}}-X_{t_{k-1}})] = {\mathbb E}[e^{iu({\tilde{X}}_{h}-\tilde{X}_{0})}] \quad \hbox {almost surely}, \end{aligned}$$
(2.5)

for an arbitrary fixed \(u\in {\mathbb R}\), where \({\tilde{X}}_t\) satisfies the Langevin equation (1.1) with \(\tilde{X}_0={\tilde{X}}_o\). To make this formula (2.5) useful for the statistical estimation of the parameters, we need to find the explicit form of the characteristic function of \(\tilde{X}_{h}-{\tilde{X}}_{0}\). From (1.1), we have

$$\begin{aligned} {\tilde{X}}_{h}=e^{-\theta h} {\tilde{X}}_{0}+\sigma e^{-\theta h}\int _{0}^{h}e^{\theta s}dZ_{s} \end{aligned}$$

and

$$\begin{aligned} {\tilde{X}}_{h}-{\tilde{X}}_{0}=(e^{-\theta h}-1) {\tilde{X}}_{0}+\sigma e^{-\theta h} \int _{0}^{h}e^{\theta s}dZ_{s}. \end{aligned}$$

Note that \({\tilde{X}}_{0}={\tilde{X}}_o\) and \(\sigma e^{-\theta h} \int _{0}^{h}e^{\theta s}dZ_{s} \sim \sigma \left( \frac{1-e^{-\alpha \theta h}}{\alpha \theta } \right) ^{1/\alpha } Z_{1}\). Note also that \({\tilde{X}}_{0}\) and \(\sigma e^{-\theta h} \int _{0}^{h}e^{\theta s}dZ_{s}\) are independent. Therefore, we find

$$\begin{aligned} \psi (u):= & {} {\mathbb E}[\exp \{iu({\tilde{X}}_{h}-{\tilde{X}}_{0})\}] \nonumber \\= & {} {\mathbb E}[\exp \{iu(e^{-\theta h} -1){\tilde{X}}_{0}\}] {\mathbb E}[\exp \{iu \sigma e^{-\theta h} \int _{0}^{h}e^{\theta s}dZ_{s}\}] \nonumber \\= & {} \left\{ \begin{array}{ll} \exp \left\{ -\frac{\sigma ^{\alpha }|u|^{\alpha }}{\alpha \theta }\left[ (1-e^{-\theta h})^{\alpha } \left( 1+i\beta \mathrm{sgn}(u)\tan \frac{\alpha \pi }{2}\right) \right. \right. \\ \qquad \qquad \left. \left. +(1-e^{-\alpha \theta h})\left( 1-i\beta \mathrm{sgn}(u) \tan \frac{\alpha \pi }{2}\right) \right] \right\} , &{} \quad \mathrm{if}\ \alpha \ne 1\,; \\ \exp \left\{ -\frac{2\sigma (1-e^{-\theta h})}{\theta }|u|\right\} , &{}\quad \mathrm{if}\ \alpha =1\,. \end{array} \right. \end{aligned}$$
(2.6)
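For later numerical work it is convenient to have (2.3) and (2.6) in computable form. Below is a minimal Python sketch for the case \(\alpha \ne 1\); the function names phi_cf and psi_cf are our own, not from any library.

```python
import numpy as np

def phi_cf(u, alpha, theta, sigma, beta):
    """Characteristic function (2.3) of the stationary law, case alpha != 1."""
    c = sigma**alpha * np.abs(u)**alpha / (alpha * theta)
    return np.exp(-c * (1 - 1j * beta * np.sign(u) * np.tan(alpha * np.pi / 2)))

def psi_cf(u, alpha, theta, sigma, beta, h):
    """Characteristic function (2.6) of X~_h - X~_0, case alpha != 1."""
    c = sigma**alpha * np.abs(u)**alpha / (alpha * theta)
    t = 1j * beta * np.sign(u) * np.tan(alpha * np.pi / 2)
    return np.exp(-c * ((1 - np.exp(-theta * h))**alpha * (1 + t)
                        + (1 - np.exp(-alpha * theta * h)) * (1 - t)))
```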

3 Moment estimation of all parameters

In this section, we assume that all the parameters \(\theta \), \(\sigma \), \(\alpha \) and \(\beta \) involved in the \(\alpha \)-stable Ornstein–Uhlenbeck motion \((X_{t}, t \ge 0)\) are unknown and we follow Press (1972) to estimate them based on the discrete time observations \(\{X_{t_1}, \ldots , X_{t_n}\}\), where as in the last section \(t_k=kh\) for some fixed time step h.

As explained in Remark 1 and the paragraph following it, we cannot use (2.4) alone to estimate all the parameters in the \(\alpha \)-stable Ornstein–Uhlenbeck motion \(X_t\) given by (1.1). As indicated in Sect. 2, we shall also use (2.5), which motivates us to match \(\frac{1}{n} \sum _{j=1}^{n} e^{i u(X_{t_{j}}-X_{t_{j-1}})}\) with \(\psi (u)\).

We define the empirical characteristic functions \({\hat{\phi }}_{n}(u)\) and \({\hat{\psi }}_{n}(v)\) as follows:

$$\begin{aligned} {\hat{\phi }}_{n}(u):=\frac{1}{n}\sum _{j=1}^{n} \exp (iuX_{t_{j}}), \quad {\hat{\psi }}_{n}(v):=\frac{1}{n} \sum _{j=1}^{n} \exp {[i v(X_{t_{j}}-X_{t_{j-1}})]}. \end{aligned}$$

Motivated by (2.4) and (2.5), we can estimate all the parameters by matching the empirical characteristic functions \({\hat{\phi }}_{n}(u)\) and \({\hat{\psi }}_{n}(v)\) with the corresponding theoretical characteristic functions \(\phi (u)\) and \(\psi (v)\), respectively:

$$\begin{aligned}&{\hat{\phi }}_{n}(u)=\phi (u)\,; \end{aligned}$$
(3.1)
$$\begin{aligned}&{\hat{\psi }}_{n}(v)=\psi (v), \end{aligned}$$
(3.2)

where \(u, v\) are constants to be appropriately chosen so that estimators for all the parameters can be constructed.

3.1 Methodology of parameter estimation

Now we provide the details to obtain the estimators for the parameters in the order of \(\alpha \), \(\theta \), \(\sigma \), and \(\beta \). We shall first find the moment estimator for \(\alpha \).

3.1.1 Estimator for \(\alpha \)

Choose two arbitrary non-zero values \(u_{1}\) and \(u_{2}\) such that \(|u_{1}|\ne |u_{2}|\). Then, we have

$$\begin{aligned} \log (-\log |\phi (u_{1})|^2)&=\log \left( \frac{2\sigma ^{\alpha }}{\alpha \theta }\right) +\alpha \log |u_{1}|, \end{aligned}$$
(3.3)
$$\begin{aligned} \log (-\log |\phi (u_{2})|^2)&=\log \left( \frac{2\sigma ^{\alpha }}{\alpha \theta }\right) +\alpha \log |u_{2}|, \end{aligned}$$
(3.4)

where \(\phi (u)\) is defined in (2.3). Subtracting Eq. (3.4) from (3.3) and replacing \(\phi (u)\) by its empirical counterpart \({\hat{\phi }}_{n}(u)\) as indicated in (3.1), we obtain the following estimator of \(\alpha \):

$$\begin{aligned} {\hat{\alpha }}_{n}=\frac{\log \left( \frac{\log |{\hat{\phi }}_{n}(u_{2})| }{\log |{\hat{\phi }}_{n}(u_{1})| }\right) }{\log \frac{|u_{2}|}{|u_{1}|}}. \end{aligned}$$
(3.5)

Since for any fixed \(u\in {\mathbb R}\), \({\hat{\phi }}_{n}(u)\) converges to \(\phi (u)\) almost surely, we see that \({\hat{\alpha }}_{n}\) converges to \(\alpha \) almost surely.
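As an illustration, (3.5) translates into a few lines of Python; this is a minimal sketch, assuming the observations \(X_{t_1},\ldots ,X_{t_n}\) are stored in a NumPy array X (the helper name ecf is ours):

```python
import numpy as np

def ecf(X, u):
    """Empirical characteristic function: (1/n) sum_j exp(i u X_{t_j})."""
    return np.mean(np.exp(1j * u * X))

def alpha_hat(X, u1, u2):
    """Estimator (3.5); requires |u1| != |u2| so the denominator is non-zero."""
    num = np.log(np.log(np.abs(ecf(X, u2))) / np.log(np.abs(ecf(X, u1))))
    return num / np.log(np.abs(u2) / np.abs(u1))
```

Note that \(\log |{\hat{\phi }}_{n}(u)|\) is negative, so the ratio inside the outer logarithm is positive and the formula is well defined.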

3.1.2 Estimator for \(\theta \) given \(\alpha \)

To construct an estimator for \(\theta \) (which depends on the estimation of \(\alpha \)), we need to use the characteristic function \(\psi (u)\) of \({\tilde{X}}_{t_1}-{\tilde{X}}_{0}\). It is easy to verify from the expressions (2.3) and (2.6) of \(\phi (u)\) and \(\psi (u)\) that

$$\begin{aligned} \frac{\log |\psi (u)|^2}{\log |\phi (u)|^2} =(1-e^{-\theta h})^{\alpha }+1-e^{-\alpha \theta h}. \end{aligned}$$
(3.6)

For an arbitrary non-zero u, denote

$$\begin{aligned} \delta =\frac{\log | {\psi } (u)|^2}{\log | {\phi } (u)|^2} \end{aligned}$$
(3.7)

and rewrite Eq. (3.6) as

$$\begin{aligned} (1-e^{-\theta h})^{ {\alpha } }+1-e^{- {\alpha } \theta h}=\delta . \end{aligned}$$
(3.8)

This is a nonlinear algebraic equation of \(\theta \), when \(\alpha \) and \(\delta \) are considered as given. To simplify notation, we denote \(\lambda =e^{-\theta h}\) and then \(\theta \) is related to \(\lambda \) via

$$\begin{aligned} \theta =-\log \lambda /h. \end{aligned}$$

With this substitution, the Eq. (3.8) can be written as an equation for \(\lambda \):

$$\begin{aligned} (1-\lambda )^{ {\alpha } }+1-\lambda ^{ {\alpha } }= {\delta }. \end{aligned}$$
(3.9)

Let \( \zeta _\lambda (\alpha , \delta )\) denote the solution of the above equation. Then we can construct an estimator for \(\theta \) by

$$\begin{aligned} {{\hat{\theta }}}_n=-\log \left( {\hat{\lambda }}_n\right) /h,\quad \mathrm{where}\quad {\hat{\lambda }}_n=\zeta _\lambda ({\hat{\alpha }}_n, {\hat{\delta }}_n). \end{aligned}$$
(3.10)

Here \({\hat{\alpha }}_n\) is the estimator for \(\alpha \) defined by (3.5) and

$$\begin{aligned} {\hat{\delta }}_n = \frac{\log | \hat{\psi }_{n}(u_{3})|^2}{\log | \hat{\phi }_{n}(u_{3})|^2} \end{aligned}$$
(3.11)

with \({\hat{\phi }}_n(u_{3})\) and \({\hat{\psi }}_n(u_{3})\) defined by (3.1) and (3.2) with \(u=v=u_{3}\), where \(u_{3}\) is a non-zero constant different from \(u_{1}\) and \(u_{2}\). Since \(\hat{\alpha }_{n} \rightarrow \alpha \) a.s. and \(\hat{\delta }_{n} \rightarrow \delta \) a.s., we have \(\hat{\lambda }_{n} \rightarrow \lambda \) a.s. and \(\hat{\theta }_{n} \rightarrow \theta \) a.s.

Our estimator \({\hat{\theta }}_n\) depends on the function \( \zeta _\lambda (\alpha , \delta )\), which is the solution to (3.9). This is a simple algebraic equation, and there are many numerical methods for solving it. Here we shall use Newton's method. Denote

$$\begin{aligned} g(\lambda )=g(\lambda , {\hat{\alpha }}_n, {\hat{\delta }}_n)=(1-\lambda )^{ \hat{\alpha }_n }+1-\lambda ^{ \hat{\alpha }_n }- \hat{\delta }_n . \end{aligned}$$

For any fixed value of \(\theta \), we can always choose a fixed h small enough (e.g. \(0<h<\ln 2/\theta \)) such that \(\lambda =e^{-\theta h} \in (\frac{1}{2}, 1)\) and \(0<\hat{\delta }_{n}<1\). Note that g is decreasing, with derivative \(g^{'}(\lambda )=-\hat{\alpha }_{n}[\lambda ^{\hat{\alpha }_{n}-1}+(1-\lambda )^{\hat{\alpha }_{n}-1}]<0\) for \(\lambda \in (\frac{1}{2},1)\), and that \(g(\frac{1}{2})=1-\hat{\delta }_{n}>0\) and \(g(1)=-\hat{\delta }_{n}<0\). Hence \(g(\lambda )\) has a unique root in the interval \((\frac{1}{2},1)\); namely, there exists a unique \(\hat{\lambda }_{n} \in (\frac{1}{2}, 1)\) such that \(g(\hat{\lambda }_{n})=0\). Newton's method to approximate \(\hat{\lambda }_{n}\) is as follows. First, we set \(\lambda _{n,0}=\frac{1}{2}.\) Then, we define

$$\begin{aligned} \lambda _{n,m+1}=\lambda _{n,m}-\frac{g(\lambda _{n,m})}{g^{'}(\lambda _{n,m})}, \quad m=0, 1, 2, \ldots \end{aligned}$$
(3.12)

Note that \(g^{''}(\lambda )=\hat{\alpha }_{n}(\hat{\alpha }_{n}-1)\frac{\lambda ^{2-\hat{\alpha }_{n}}-(1-\lambda )^{2-\hat{\alpha }_{n}}}{\lambda ^{2-\hat{\alpha }_{n}}(1-\lambda )^{2-\hat{\alpha }_{n}}}>0\) if \(1<\hat{\alpha }_{n}<2\) and \(\lambda \in (1/2,1)\). In this case, we have global convergence of the Newton iterations \(\{\lambda _{n,m}\}_{m=1}^{\infty }\). In fact, let the approximation error at the (\(m+1\))-th iteration be \({\varepsilon }_{n,m+1}=\lambda _{n,m+1}-\hat{\lambda }_{n}\). By (3.12), we have

$$\begin{aligned} \varepsilon _{n,m+1}=\varepsilon _{n,m}-\frac{g(\lambda _{n,m})}{g^{'}(\lambda _{n,m})}. \end{aligned}$$
(3.13)

Then by Taylor expansion we find that \({\varepsilon }_{n,m+1}=\frac{1}{2} \frac{g^{''}(\xi _{n,m})}{g^{'}(\lambda _{n,m})} {\varepsilon }_{n,m}^2<0\), where \(\xi _{n,m}\) is between \(\lambda _{n,m}\) and \(\hat{\lambda }_{n}\). This implies that \(\lambda _{n,m}<\hat{\lambda }_{n}\) for each \(m\ge 1\). Since g is decreasing, we have \(g(\lambda _{n,m})>g(\hat{\lambda }_{n})=0\). Thus \({\varepsilon }_{n,m+1}>{\varepsilon }_{n,m}\) and \(\lambda _{n,m+1}>\lambda _{n,m}\) for each \(m\ge 1\). Hence, the two sequences \(\{{\varepsilon }_{n,m}\}_{m=1}^{\infty }\) and \(\{\lambda _{n,m}\}_{m=1}^{\infty }\) are increasing and bounded from above. Thus there exist \(\varepsilon ^{*}_{n}\) and \(\lambda ^{*}_{n}\) such that

$$\begin{aligned} \lim _{m \rightarrow \infty } {\varepsilon }_{n,m}=\varepsilon ^{*}_{n}, \quad \lim _{m \rightarrow \infty } \lambda _{n,m}=\lambda ^{*}_{n}. \end{aligned}$$

Thus, by (3.13), it follows that

$$\begin{aligned} \varepsilon ^{*}_{n}=\varepsilon ^{*}_{n}-\frac{g(\lambda ^{*}_{n})}{g^{'}(\lambda ^{*}_{n})}. \end{aligned}$$
(3.14)

This implies that \(g(\lambda ^{*}_{n})=0\) and consequently \(\lambda ^{*}_{n}=\hat{\lambda }_{n}\).

Now, when \(0<\hat{\alpha }_{n}<1\), similar arguments show that Newton's method converges to the unique root \(\hat{\lambda }_{n}\) of \(g(\lambda )\) from any starting point (namely, we have global convergence of Newton's method).
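A minimal Python sketch of this Newton iteration, under the assumptions above (\(\lambda \in (\frac{1}{2},1)\) and \(0<\hat{\delta }_{n}<1\)); the function name and the tolerance are our own choices:

```python
import numpy as np

def theta_hat(alpha_n, delta_n, h, tol=1e-12, max_iter=100):
    """Solve (1 - lam)^a + 1 - lam^a = delta by Newton's method (3.12),
    starting from lam = 1/2, and return theta = -log(lam) / h."""
    lam = 0.5
    for _ in range(max_iter):
        g = (1 - lam)**alpha_n + 1 - lam**alpha_n - delta_n
        dg = -alpha_n * (lam**(alpha_n - 1) + (1 - lam)**(alpha_n - 1))
        step = g / dg
        lam -= step
        if abs(step) < tol:
            break
    return -np.log(lam) / h
```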

3.1.3 Estimator for \(\sigma \) given \(\alpha \) and \(\theta \)

Next we turn to the estimation of \(\sigma \). Let \(\tau =\frac{2 \sigma ^{\alpha }}{\alpha \theta }\); then \(\sigma \) is related to \(\tau \) by

$$\begin{aligned} {\sigma }= & {} \exp \left\{ \frac{\log {\tau } +\log {\alpha } +\log {\theta } -\log 2}{ {\alpha } } \right\} \quad \mathrm{or}\nonumber \\ \log {\sigma }= & {} \frac{\log {\tau } +\log {\alpha } +\log {\theta } -\log 2}{ {\alpha } } . \end{aligned}$$
(3.15)

Thus, the estimation of \(\sigma \) is reduced to the estimation of \(\tau \) since we already have estimators for \(\alpha \) and \(\theta \).

To obtain an estimator for \(\sigma \) (or for \(\log \sigma \)), we may use either of Eqs. (3.3) and (3.4). However, we shall use both equations in the following way, which eliminates the explicit dependence on \(\alpha \). Multiply Eq. (3.3) by \(\log |u_2|\) and Eq. (3.4) by \(\log |u_1|\). Taking the difference yields

$$\begin{aligned} \log \tau = \frac{\log \left( |u_{1}|\right) \log \left( -\log |\phi (u_2)|^2\right) -\log \left( |u_{2}|\right) \log \left( -\log |\phi (u_1)|^2 \right) }{\log \frac{|u_{1}|}{|u_{2}|}} . \end{aligned}$$
(3.16)

From this identity, we construct the estimator for \(\tau \) as follows

$$\begin{aligned} \log \hat{\tau }_{n} = \frac{\log \left( |u_{1}|\right) \log \left( -\log |{\hat{\phi }}_n(u_2)|^2\right) -\log \left( |u_{2}|\right) \log \left( -\log |{\hat{\phi }}_n(u_1)|^2\right) }{\log \frac{|u_{1}|}{|u_{2}|}} , \end{aligned}$$
(3.17)

where \({\hat{\phi }}_n(u)\) is given by (3.1) . Thus, we can construct the estimator for \(\sigma \) by

$$\begin{aligned} \hat{\sigma }_{n}=\exp \left\{ \frac{\log \hat{\tau }_{n}+\log \hat{\alpha }_{n}+\log \hat{\theta }_{n}-\log 2}{\hat{\alpha }_{n}}\right\} . \end{aligned}$$
(3.18)

Based on the almost sure convergence of \({\hat{\phi }}_n(u)\) to \(\phi (u)\), we see easily that \(\hat{\sigma }_{n} \rightarrow \sigma \) almost surely.
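In the same sketch as before, (3.17)–(3.18) read as follows (the helper ecf is the hypothetical empirical characteristic function introduced earlier, repeated here for self-containedness):

```python
import numpy as np

def ecf(X, u):
    """Empirical characteristic function: (1/n) sum_j exp(i u X_{t_j})."""
    return np.mean(np.exp(1j * u * X))

def sigma_hat(X, u1, u2, alpha_n, theta_n):
    """Estimator (3.18), with log(tau_hat) computed from (3.17)."""
    l1 = np.log(-2.0 * np.log(np.abs(ecf(X, u1))))   # log(-log|phi_n(u1)|^2)
    l2 = np.log(-2.0 * np.log(np.abs(ecf(X, u2))))   # log(-log|phi_n(u2)|^2)
    log_tau = (np.log(np.abs(u1)) * l2 - np.log(np.abs(u2)) * l1) \
              / np.log(np.abs(u1) / np.abs(u2))
    return np.exp((log_tau + np.log(alpha_n) + np.log(theta_n) - np.log(2.0))
                  / alpha_n)
```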

3.1.4 Estimator for \(\beta \) given \(\alpha \), \(\theta \), and \(\sigma \)

Finally, we discuss the estimation of the skewness parameter \(\beta \in [-1, 1]\). Note from (2.3) that for \(\alpha \ne 1\), we have

$$\begin{aligned} \arctan \left( \frac{\mathfrak {I}(\phi (u))}{\mathfrak {R}(\phi (u))}\right) =\beta \frac{\sigma ^{\alpha }}{\alpha \theta } \tan \left( \frac{\alpha \pi }{2}\right) |u|^{\alpha } \text{ sgn } (u), \end{aligned}$$
(3.19)

where \(\mathfrak {I}(\phi (u))\) and \(\mathfrak {R}(\phi (u))\) are the imaginary and real parts of the complex valued function \(\phi (u)\), respectively. In order to make sure that the right-hand side is in the range of \(\arctan \), choose \(u=u_4\) in (3.19) such that \(-\frac{\pi }{2}<\frac{\sigma ^{\alpha }}{\alpha \theta } \tan \left( \frac{\alpha \pi }{2}\right) |u_4|^{\alpha } \text{ sgn } (u_4)<\frac{\pi }{2}\) (this suffices since \(|\beta |\le 1\)). Replacing \(\phi (u_{4})\), \(\alpha \), \(\theta \), and \(\sigma \) by \(\hat{\phi }_{n}(u_{4})\), \(\hat{\alpha }_{n}\), \(\hat{\theta }_{n}\), and \(\hat{\sigma }_{n}\), we can construct an estimator of \(\beta \) as follows

$$\begin{aligned} \hat{\beta }_{n}=\frac{\hat{\alpha }_{n} \hat{\theta }_{n} \arctan [(\sum _{j=1}^{n} \sin u_4X_{t_{j}})/(\sum _{j=1}^{n}\cos u_4X_{t_{j}})]}{\hat{\sigma }_{n}^{\hat{\alpha }_{n}} \tan (\hat{\alpha }_{n}\pi /2) |u_4|^{\hat{\alpha }_{n}} \text{ sgn } (u_4)}. \end{aligned}$$
(3.20)

When \(\alpha =1\), we have

$$\begin{aligned} \hat{\beta }_{n}=-\frac{\hat{\theta }_{n} \arctan [(\sum _{j=1}^{n} \sin u_4X_{t_{j}})/(\sum _{j=1}^{n}\cos u_4X_{t_{j}})]}{\hat{\sigma }_{n} \frac{2}{\pi } |u_4|\log |u_4| \text{ sgn } (u_4)}. \end{aligned}$$
(3.21)

By the almost sure convergence of \(\hat{\alpha }_{n}\), \(\hat{\theta }_{n}\), \(\hat{\sigma }_{n}\) and \(\hat{\phi }_{n}(u_4)\), we can easily get the almost sure convergence of \(\hat{\beta }_{n}\) to \(\beta \).
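For \(\alpha \ne 1\), estimator (3.20) can be sketched as follows (again with our own function and variable names):

```python
import numpy as np

def beta_hat(X, u4, alpha_n, theta_n, sigma_n):
    """Estimator (3.20), valid for alpha != 1."""
    arg = np.arctan(np.sum(np.sin(u4 * X)) / np.sum(np.cos(u4 * X)))
    denom = (sigma_n**alpha_n * np.tan(alpha_n * np.pi / 2)
             * np.abs(u4)**alpha_n * np.sign(u4))
    return alpha_n * theta_n * arg / denom
```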

3.2 Joint asymptotic behavior of all the obtained estimators

In this subsection we study the joint behavior of the estimators of all the parameters \(\alpha , \theta , \sigma \), and \(\beta \). We let \(\eta =(\alpha , \theta , \sigma , \beta )^{T}\) and \(\hat{\eta }_{n}=(\hat{\alpha }_{n}, \hat{\theta }_{n}, \hat{\sigma }_{n}, \hat{\beta }_{n})^{T}\). Our main task is to compute the covariance matrix of the limiting distribution of \(\sqrt{n} (\hat{\eta }_n-\eta )\). Since the \(\alpha \)-stable Ornstein–Uhlenbeck motion has no second moment, we shall discuss in detail how to find this asymptotic covariance matrix.

For any nice function f denote

$$\begin{aligned} S_{n}(f)=\frac{1}{n}\sum _{j=1}^{n} f(X_{t_{j}}) \quad \mathrm{and}\quad T_{n}(f)=\frac{1}{n}\sum _{j=1}^{n}f(X_{t_{j}}-X_{t_{j-1}}). \end{aligned}$$
(3.22)

Let \(F_{u}(x)=\cos (ux)\) and \(G_{u}(x)=\sin (ux)\). Then \(\hat{\phi }_{n}(u)=S_{n}(F_{u})+iS_{n}(G_{u})\) and \(|\hat{\phi }_{n}(u)|^2=S_{n}^{2}(F_{u})+S_{n}^{2}(G_{u})\). Let \(V_{n1}=S_{n}(F_{u_{1}})\), \(V_{n2}=S_{n}(G_{u_{1}})\), \(V_{n3}=S_{n}(F_{u_{2}})\), \(V_{n4}=S_{n}(G_{u_{2}})\), \(V_{n5}=S_{n}(F_{u_{3}})\), \(V_{n6}=S_{n}(G_{u_{3}})\), \(V_{n7}=T_{n}(F_{u_{3}})\), \(V_{n8}=T_{n}(G_{u_{3}})\), \(V_{n9}=S_{n}(F_{u_{4}})\), \(V_{n10}=S_{n}(G_{u_{4}})\). We first need to compute the asymptotic covariance matrix associated with

$$\begin{aligned} V_{n}=(V_{n1}, V_{n2}, V_{n3}, V_{n4}, V_{n5}, V_{n6}, V_{n7}, V_{n8}, V_{n9}, V_{n10})^{T}. \end{aligned}$$

Then we shall use this computation to find the asymptotic covariance matrix of \({\hat{\eta }}_n\).

To compute the asymptotic covariance matrix associated with \(V_{n}\), we consider the functionals \(S_n(f)\) and \(T_n(f)\) as special cases of

$$\begin{aligned} R_n(f)=\frac{1}{n} \sum _{j=1}^n f(X_{t_{j-1}}, X_{t_{j}}), \end{aligned}$$

where \(f(x,y)\) is a function of two variables. It is well-known that for two functions \(f(x,y)\) and \(g(x,y)\), the asymptotic covariance \(\mathrm{cov}(\sqrt{n}R_n(f), \sqrt{n}R_n(g))\) of \(\sqrt{n}R_n(f)\) and \( \sqrt{n}R_n(g)\) is given by

$$\begin{aligned} \sigma _{fg}:= & {} \lim _{ n\rightarrow \infty } \mathrm{cov}(\sqrt{n}R_n(f), \sqrt{n}R_n(g))= \mathrm{cov}(f(\tilde{X}_{0}, \tilde{X}_{h}), g(\tilde{X}_{0},\tilde{X}_{h}))\\&+\,\sum _{j=1}^{\infty }[\mathrm{cov}(f(\tilde{X}_{0}, \tilde{X}_{h}), g(\tilde{X}_{jh}, \tilde{X}_{(j+1)h})) +\mathrm{cov}(g(\tilde{X}_{0}, \tilde{X}_{h}), f(\tilde{X}_{jh}, \tilde{X}_{(j+1)h}))]. \end{aligned}$$

The asymptotic covariance matrix of \(V_n\) will then be given by the covariance matrix

$$\begin{aligned} \varSigma _{10}:=\lim _{n \rightarrow \infty } \left( \mathrm{cov}(V_{nk}, V_{nl})\right) _{ 1\le k,l\le 10}=(\sigma _{g_{k}g_{l}})_{1\le k,l\le 10}, \end{aligned}$$
(3.23)

where

$$\begin{aligned} \left\{ \begin{array}{ll} &{} g_{1}(x,y)=F_{u_{1}}(x),\ g_{2}(x,y)=G_{u_{1}}(x), \ g_{3}(x,y)=F_{u_{2}}(x),\\ &{} g_{4}(x,y)=G_{u_{2}}(x), \ g_{5}(x,y)=F_{u_{3}}(x), \ g_{6}(x,y)=G_{u_{3}}(x), \\ &{}g_{7}(x,y)=F_{u_{3}}(y-x),\ g_{8}(x,y)=G_{u_{3}}(y-x), \\ &{}g_{9}(x,y)=F_{u_{4}}(x),\ \ g_{10}(x,y)=G_{u_{4}}(x). \end{array} \right. \end{aligned}$$

Let \(v=(v_{1}, v_{2}, \dots , v_{10})^{T}\), where \(v_{j}={\mathbb E}[g_{j}(\tilde{X}_{0},\tilde{X}_{h})], j=1, 2,\dots , 10\). The explicit expressions of the elements in the covariance matrix \(\varSigma _{10}\) will be provided in “Appendix”. For \(z=(z_{1}, \dots , z_{10})^{T}\), we define the following functions

$$\begin{aligned} \left\{ \begin{array}{ll} &{} {\hat{\gamma }}_{1}(z)=\log \left( -\log (z_{1}^2+z_{2}^2)\right) , \quad {\hat{\gamma }}_{2}(z)=\log \left( -\log (z_{3}^2+z_{4}^2)\right) ,\\ &{} {\hat{\gamma }}_{3}(z)=\frac{\log (z_{7}^2+z_{8}^2)}{\log (z_{5}^2+z_{6}^2)}, \quad {\hat{\gamma }}_{4}(z)=\arctan \left( \frac{z_{10}}{z_{9}}\right) . \end{array} \right. \end{aligned}$$

Then, basic calculation shows that

$$\begin{aligned} \left\{ \begin{array}{ll} &{} \gamma _1(\eta ) :=\hat{\gamma }_{1}(v)=\log \left( \frac{2\sigma ^{\alpha }}{\alpha \theta }\right) +\alpha \log |u_{1}|,\\ &{} \gamma _2(\eta ) :=\hat{\gamma }_{2}(v)= \log \left( \frac{2\sigma ^{\alpha }}{\alpha \theta }\right) +\alpha \log |u_{2}|,\\ &{} \gamma _3(\eta ) :=\hat{\gamma }_{3}(v)=(1-e^{-\theta h})^{ {\alpha } }+1-e^{- {\alpha } \theta h},\\ &{} \gamma _4(\eta ) :=\hat{\gamma }_{4}(v)= \beta \frac{\sigma ^{\alpha }}{\alpha \theta } \tan \left( \frac{\alpha \pi }{2}\right) |u_{4}|^{\alpha } \text{ sgn }(u_{4}). \end{array} \right. \end{aligned}$$

Let \({\hat{\gamma }} (z)=({\hat{\gamma }}_{1}(z), {\hat{\gamma }}_{2}(z), {\hat{\gamma }}_{3}(z), {\hat{\gamma }}_{4}(z))^{T}\) for \(z\in {\mathbb R}^{10}\), \({\hat{\gamma }}^{(1)}(z)=\left( \frac{\partial {\hat{\gamma }}_{j}}{\partial z_{k}}\right) _{1\le j\le 4, 1\le k \le 10}\), and \(\gamma (\eta )=(\gamma _{1}(\eta ), \gamma _{2}(\eta ), \gamma _{3}(\eta ), \gamma _{4}(\eta ))^{T}\). We have

$$\begin{aligned} \frac{\partial {\hat{\gamma }}_{1}}{\partial z_{1}}= & {} \frac{-2z_{1}}{(z_{1}^2+z_{2}^2)\log (z_{1}^2+z_{2}^2)}, \quad \frac{\partial {\hat{\gamma }}_{1}}{\partial z_{2}}=\frac{-2z_{2}}{(z_{1}^2+z_{2}^2)\log (z_{1}^2+z_{2}^2)}, \quad \\ \frac{\partial {\hat{\gamma }}_{1}}{\partial z_{3}}= & {} \cdots =\frac{\partial {\hat{\gamma }}_{1}}{\partial z_{10}}=0;\\ \frac{\partial {\hat{\gamma }}_{2}}{\partial z_{3}}= & {} \frac{-2z_{3}}{(z_{3}^2+z_{4}^2)\log (z_{3}^2+z_{4}^2)}, \quad \frac{\partial {\hat{\gamma }}_{2}}{\partial z_{4}}=\frac{-2z_{4}}{(z_{3}^2+z_{4}^2)\log (z_{3}^2+z_{4}^2)}, \\ \frac{\partial {\hat{\gamma }}_{2}}{\partial z_{1}}= & {} 0, \quad \frac{\partial {\hat{\gamma }}_{2}}{\partial z_{2}}=0, \quad \frac{\partial {\hat{\gamma }}_{2}}{\partial z_{5}}=\cdots =\frac{\partial {\hat{\gamma }}_{2}}{\partial z_{10}}=0;\\ \frac{\partial {\hat{\gamma }}_{3}}{\partial z_{5}}= & {} \frac{-2z_{5}\log (z_{7}^2+z_{8}^2)}{(z_{5}^2+z_{6}^2)\log ^2(z_{5}^2+z_{6}^2)}, \quad \frac{\partial {\hat{\gamma }}_{3}}{\partial z_{6}}=\frac{-2z_{6}\log (z_{7}^2+z_{8}^2)}{(z_{5}^2+z_{6}^2)\log ^2(z_{5}^2+z_{6}^2)},\\ \frac{\partial {\hat{\gamma }}_{3}}{\partial z_{7}}= & {} \frac{2z_{7}}{(z_{7}^2+z_{8}^2)\log (z_{5}^2+z_{6}^2)}, \quad \frac{\partial {\hat{\gamma }}_{3}}{\partial z_{8}}=\frac{2z_{8}}{(z_{7}^2+z_{8}^2)\log (z_{5}^2+z_{6}^2)},\\ \frac{\partial {\hat{\gamma }}_{3}}{\partial z_{1}}= & {} \cdots =\frac{\partial {\hat{\gamma }}_{3}}{\partial z_{4}}=0,\quad \frac{\partial {\hat{\gamma }}_{3}}{\partial z_{9}}=\frac{\partial {\hat{\gamma }}_{3}}{\partial z_{10}}=0;\\ \frac{\partial {\hat{\gamma }}_{4}}{\partial z_{9}}= & {} \frac{-z_{10}}{z_{9}^2+z_{10}^2},\quad \frac{\partial {\hat{\gamma }}_{4}}{\partial z_{10}}=\frac{z_{9}}{z_{9}^2+z_{10}^2},\quad \frac{\partial {\hat{\gamma }}_{4}}{\partial z_{1}}=\cdots =\frac{\partial {\hat{\gamma }}_{4}}{\partial z_{8}}=0. \end{aligned}$$

Let \(\varPhi _{n}(\eta )=(\varPhi _{1,n}(\eta ),\varPhi _{2,n}(\eta ),\varPhi _{3,n}(\eta ),\varPhi _{4,n}(\eta ))^{T}\), where \(\varPhi _{j,n}(\eta )={\hat{\gamma }}_{j}(V_{n})-\gamma _{j}(\eta ), \ j=1,2,3,4\). Then, we know that \(\hat{\eta }_{n}\) is the generalized moment estimator of \(\eta \), which satisfies

$$\begin{aligned} \varPhi _{n}(\hat{\eta }_{n})=0. \end{aligned}$$
(3.24)

Basic calculation gives

$$\begin{aligned}&\frac{\partial \gamma _{1}}{\partial \alpha }=\log \sigma -\frac{1}{\alpha }+\log |u_{1}|, \quad \frac{\partial \gamma _{1}}{\partial \theta }=-\frac{1}{\theta }, \quad \frac{\partial \gamma _{1}}{\partial \sigma }=\frac{\alpha }{\sigma }, \quad \frac{\partial \gamma _{1}}{\partial \beta }=0;\\&\frac{\partial \gamma _{2}}{\partial \alpha }=\log \sigma -\frac{1}{\alpha }+\log |u_{2}|, \quad \frac{\partial \gamma _{2}}{\partial \theta }=-\frac{1}{\theta }, \quad \frac{\partial \gamma _{2}}{\partial \sigma }=\frac{\alpha }{\sigma }, \quad \frac{\partial \gamma _{2}}{\partial \beta }=0;\\&\frac{\partial \gamma _{3}}{\partial \alpha }=(1-e^{-\theta h})^{\alpha }\log (1-e^{-\theta h})+\theta h e^{-\alpha \theta h}, \\&\frac{\partial \gamma _{3}}{\partial \theta }=\alpha h e^{-\theta h}(1-e^{-\theta h})^{\alpha -1}+\alpha h e^{-\alpha \theta h}, \\&\frac{\partial \gamma _{3}}{\partial \sigma }=0, \quad \frac{\partial \gamma _{3}}{\partial \beta }=0;\\&\frac{\partial \gamma _{4}}{\partial \alpha }=\frac{\beta \sigma ^{\alpha } |u_{4}|^{\alpha }\text{ sgn }(u_{4})}{\alpha \theta } \left[ \log (\sigma |u_{4}|)\tan \left( \frac{\alpha \pi }{2}\right) -\alpha ^{-1}\tan \left( \frac{\alpha \pi }{2}\right) + \frac{\pi }{2}\sec ^{2}\left( \frac{\alpha \pi }{2}\right) \right] , \\&\frac{\partial \gamma _{4}}{\partial \theta }=-\beta \frac{\sigma ^{\alpha }}{\alpha \theta ^2} \tan \left( \frac{\alpha \pi }{2}\right) |u_{4}|^{\alpha } \text{ sgn }(u_{4}),\\&\frac{\partial \gamma _{4}}{\partial \sigma }=\beta \frac{\sigma ^{\alpha -1}}{\theta } \tan \left( \frac{\alpha \pi }{2}\right) |u_{4}|^{\alpha } \text{ sgn }(u_{4}),\\&\frac{\partial \gamma _{4}}{\partial \beta }=\frac{\sigma ^{\alpha }}{\alpha \theta } \tan \left( \frac{\alpha \pi }{2}\right) |u_{4}|^{\alpha } \text{ sgn }(u_{4}). \end{aligned}$$

Note that

$$\begin{aligned} \nabla _{\eta }\varPhi _{n}(\eta )=-\nabla _{\eta } \gamma (\eta ), \end{aligned}$$
(3.25)

where

$$\begin{aligned} \nabla _{\eta } \gamma (\eta )&= \begin{pmatrix} \frac{\partial \gamma _{1}(\eta )}{\partial \alpha } &{}\quad \frac{\partial \gamma _{1}(\eta )}{\partial \theta } &{}\quad \frac{\partial \gamma _{1}(\eta )}{\partial \sigma } &{}\quad \frac{\partial \gamma _{1}(\eta )}{\partial \beta } \\ \frac{\partial \gamma _{2}(\eta )}{\partial \alpha } &{}\quad \frac{\partial \gamma _{2}(\eta )}{\partial \theta } &{}\quad \frac{\partial \gamma _{2}(\eta )}{\partial \sigma } &{}\quad \frac{\partial \gamma _{2}(\eta )}{\partial \beta } \\ \frac{\partial \gamma _{3}(\eta )}{\partial \alpha } &{}\quad \frac{\partial \gamma _{3}(\eta )}{\partial \theta } &{}\quad \frac{\partial \gamma _{3}(\eta )}{\partial \sigma } &{}\quad \frac{\partial \gamma _{3}(\eta )}{\partial \beta } \\ \frac{\partial \gamma _{4}(\eta )}{\partial \alpha } &{}\quad \frac{\partial \gamma _{4}(\eta )}{\partial \theta } &{}\quad \frac{\partial \gamma _{4}(\eta )}{\partial \sigma } &{}\quad \frac{\partial \gamma _{4}(\eta )}{\partial \beta } \end{pmatrix}. \end{aligned}$$
(3.26)

For convenience, let \(I(\eta )=\nabla _{\eta } \gamma (\eta )\).

Finally we have the following main result.

Theorem 1

Fix an arbitrary \(h>0\). Denote \(\eta =(\alpha , \theta , \sigma , \beta )^{T}\) and \(\hat{\eta }_{n}=(\hat{\alpha }_{n}, \hat{\theta }_{n}, \hat{\sigma }_{n}, \hat{\beta }_{n})^{T}\), where \(\hat{\alpha }_{n}, \hat{\theta }_{n}, \hat{\sigma }_{n}, \hat{\beta }_{n}\) are given by (3.5), (3.10), (3.18), and (3.20) or (3.21), respectively. Then we have the following statements. (i) The ergodic estimator \(\hat{\eta }_n\) converges to \(\eta \) almost surely as \(n\rightarrow \infty \). (ii) As \(n \rightarrow \infty \), we have the following central limit type theorem:

$$\begin{aligned} \sqrt{n} (\hat{\eta }_{n}-\eta ) \overset{d}{\rightarrow } N(0, \varSigma _4 ), \end{aligned}$$
(3.27)

where

$$\begin{aligned} \varSigma _4 =(I(\eta ))^{-1}{\hat{\gamma }}^{(1)}(v)\varSigma _{10}({\hat{\gamma }}^{(1)}(v))^{T}((I(\eta ))^{-1})^{T}. \end{aligned}$$
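Numerically, once \(\varSigma _{10}\), the Jacobian \({\hat{\gamma }}^{(1)}(v)\), and \(I(\eta )\) have been assembled, \(\varSigma _4\) is just the matrix sandwich above. A sketch (array shapes as indicated; the function and argument names are our own):

```python
import numpy as np

def sigma4(I_eta, J_gamma, Sigma10):
    """Asymptotic covariance of sqrt(n)(eta_hat - eta) in Theorem 1.
    I_eta:   4x4 matrix I(eta) = grad gamma(eta), see (3.26);
    J_gamma: 4x10 Jacobian of gamma_hat evaluated at v;
    Sigma10: 10x10 covariance matrix (3.23)."""
    I_inv = np.linalg.inv(I_eta)
    return I_inv @ J_gamma @ Sigma10 @ J_gamma.T @ I_inv.T
```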

3.3 Optimal selection of the four grid points \(\{u_{1}, u_{2}, u_{3}, u_{4}\}\)

Following some ideas in Zhang and He (2016), we shall discuss how to select the four grid points \(\{u_{1}, u_{2}, u_{3}, u_{4}\}\) in a certain optimal way. We first choose a relatively extensive grid set consisting of K grid points defined by

$$\begin{aligned} \varDelta _{K}=\left\{ \frac{ka}{K}, k=1,2, \ldots , K\right\} , \end{aligned}$$

where a is a fixed positive number and K is a relatively large positive integer. For example, we can set \(a=5\) (or 8, 10, etc.) and \(K=200\) (or 400, 500, etc.). For a finite set A, we use \(\min -\arg \min _{x\in A} f(x)\) to denote the smallest \(x\in A\) that minimizes f(x) (the minimizer need not be unique). We will use the following steps to select the four grid points \(\{u_{1}, u_{2}, u_{3}, u_{4}\}\) optimally.

Step 1. We choose

$$\begin{aligned} \{u_{1}^{*}, u_{2}^{*}, u_{3}^{*}, u_{4}^{*}\}=\{\hat{u}_{1,n}^{*}, \hat{u}_{2,n}^{*}, \hat{u}_{3,n}^{*}, \hat{u}_{4,n}^{*}\} \subset \varDelta _{K} \end{aligned}$$

arbitrarily in an increasing order, i.e. \(u_{1}^{*}<u_{2}^{*}<u_{3}^{*}<u_{4}^{*}\). Then we compute \(\hat{\eta }_{n}=(\hat{\alpha }_{n}, \hat{\theta }_{n}, \hat{\sigma }_{n}, \hat{\beta }_{n})\), \(\varSigma _{4,n}^{*}=\varSigma _{4}(\hat{\eta }_{n}, \{\hat{u}_{1,n}^{*}, \hat{u}_{2,n}^{*}, \hat{u}_{3,n}^{*}, \hat{u}_{4,n}^{*}\})\) (which is the matrix \(\varSigma _{4}\) computed by replacing \(\eta \) with \(\hat{\eta }_{n}\) in Theorem 1) as well as the closeness measure \(m(\varSigma _{4,n}^{*})=\mathrm{tr}(\varSigma _{4,n}^{*})\) (namely the trace of \(\varSigma _{4,n}^{*}\)).

Step 2. Adjust the location of \(\{u_{1}^{*}, u_{2}^{*}, u_{3}^{*}, u_{4}^{*}\}\) to \(\{u_{1}^{**}, u_{2}^{**}, u_{3}^{**}, u_{4}^{**}\}\) by

$$\begin{aligned} u_{1}^{**}&=\hat{u}_{1,n}^{**}=\min -\arg \min _{u \in \{u\in \varDelta _{K}: u<\hat{u}_{2,n}^{*}, u \ne \hat{u}_{1,n}^{*}\}} m(\varSigma _{4}(\hat{\eta }_{n}, \{u, \hat{u}_{2,n}^{*}, \hat{u}_{3,n}^{*}, \hat{u}_{4,n}^{*}\})), \\ u_{2}^{**}&=\hat{u}_{2,n}^{**}=\min -\arg \min _{u \in \{u\in \varDelta _{K}: \hat{u}_{1,n}^{**}<u<\hat{u}_{3,n}^{*}, u \ne \hat{u}_{2,n}^{*}\}} m(\varSigma _{4}(\hat{\eta }_{n}, \{\hat{u}_{1,n}^{**}, u, \hat{u}_{3,n}^{*}, \hat{u}_{4,n}^{*}\})), \\ u_{3}^{**}&=\hat{u}_{3,n}^{**}=\min -\arg \min _{u \in \{u\in \varDelta _{K}: \hat{u}_{2,n}^{**}<u<\hat{u}_{4,n}^{*}, u \ne \hat{u}_{3,n}^{*}\}} m(\varSigma _{4}(\hat{\eta }_{n}, \{\hat{u}_{1,n}^{**},\hat{u}_{2,n}^{**}, u, \hat{u}_{4,n}^{*}\})), \\ u_{4}^{**}&=\hat{u}_{4,n}^{**}=\min -\arg \min _{u \in \{u\in \varDelta _{K}: u>\hat{u}_{3,n}^{**}, u \ne \hat{u}_{4,n}^{*}\}} m(\varSigma _{4}(\hat{\eta }_{n}, \{\hat{u}_{1,n}^{**},\hat{u}_{2,n}^{**}, \hat{u}_{3,n}^{**}, u\})). \end{aligned}$$

Step 3. Compute \(m(\varSigma _{4,n}^{**})\), where

$$\begin{aligned} \varSigma _{4,n}^{**}=\varSigma _{4}(\hat{\eta }_{n}, \{u_{1}^{**}, u_{2}^{**}, u_{3}^{**}, u_{4}^{**}\}). \end{aligned}$$

Then compute

$$\begin{aligned} \hat{\rho }_{n}=\frac{m(\varSigma _{4,n}^{*})-m(\varSigma _{4,n}^{**})}{m(\varSigma _{4,n}^{*})}. \end{aligned}$$

Step 4. If \(\hat{\rho }_{n}> \varepsilon \) (a pre-specified tolerance such as 0.001), then set \(\{u_{1}^{*}, u_{2}^{*}, u_{3}^{*}, u_{4}^{*}\}=\{u_{1}^{**}, u_{2}^{**}, u_{3}^{**}, u_{4}^{**}\}\) and repeat Steps 2–3; otherwise stop and output

$$\begin{aligned} \{u_{1}, u_{2}, u_{3}, u_{4}\}=\{u_{1}^{**}, u_{2}^{**}, u_{3}^{**}, u_{4}^{**}\}. \end{aligned}$$

Thus, we get our optimal selection of four grid points \(\{u_{1}, u_{2}, u_{3}, u_{4}\}\) and the corresponding estimator \(\hat{\eta }_{n}\) in terms of these four points.
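The procedure above is a coordinate-descent search over \(\varDelta _K\). A compact sketch follows, where estimate(X, us) and sigma4_of(eta, us) stand for the estimation routine of Sect. 3.1 and the covariance computation of Theorem 1; both names, and the simplified neighbor bounds, are our own assumptions:

```python
import numpy as np

def select_grid_points(X, estimate, sigma4_of, a=5.0, K=200, eps=1e-3):
    """Coordinate-descent selection of u1 < u2 < u3 < u4 minimizing tr(Sigma_4)."""
    grid = a * np.arange(1, K + 1) / K
    us = list(grid[:4])                            # arbitrary increasing start
    m_old = np.trace(sigma4_of(estimate(X, us), us))
    while True:
        eta = estimate(X, us)
        for i in range(4):                         # Step 2: move one point at a time
            lo = us[i - 1] if i > 0 else 0.0
            hi = us[i + 1] if i < 3 else np.inf
            cand = [u for u in grid if lo < u < hi]
            traces = [np.trace(sigma4_of(eta, us[:i] + [u] + us[i + 1:]))
                      for u in cand]
            us[i] = cand[int(np.argmin(traces))]   # min-argmin: first minimizer
        m_new = np.trace(sigma4_of(estimate(X, us), us))
        if (m_old - m_new) / m_old <= eps:         # Step 4: stop when gain is small
            return us
        m_old = m_new
```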

4 Simulation

In this section we validate the estimators constructed in Sect. 3. We consider the following specific \(\alpha \)-stable Ornstein–Uhlenbeck motion determined by (1.1), which we restate as follows:

$$\begin{aligned} dX_{t}=-\,\theta X_{t}dt+\sigma dZ_{t},\quad X_{0}\quad \hbox {is given}. \end{aligned}$$
(4.1)

First we describe our approach to simulating the above process. There have been numerous schemes for this purpose. However, in all the existing schemes one needs to divide the interval [0, T] into small subintervals \(0=t_0<t_1<\cdots <t_N=T\) such that the partition step size \(t_{k+1}-t_k={\tilde{h}}\) goes to zero. This means that we would need to simulate \(nh/{\tilde{h}}\) random variables. Since we need \(n\rightarrow \infty \) while h is allowed to remain constant, this would require too much computation. For the specific Eq. (4.1), we shall use the following scheme, which may also be useful in other applications; it allows \({\tilde{h}} =h\).

From (4.1) we see easily that

$$\begin{aligned} X_{t}=e^{-\theta (t-s)}X_{s}+\sigma \int _{s}^{t}e^{-\theta (t-r)}dZ_{r}. \end{aligned}$$

Thus

$$\begin{aligned} X_{t_{k+1}}=e^{-\theta h}X_{t_k}+\sigma \int _{kh}^{(k+1)h}e^{-\theta ((k+1)h-r)}dZ_{r}. \end{aligned}$$

Since \(f(r)=\sigma e^{-\theta ((k+1)h-r)}\) is a deterministic function we see that

$$\begin{aligned} \sigma \int _{kh}^{(k+1)h}e^{-\theta ((k+1)h-r)}dZ_{r} \overset{d}{=}\left( \int _{kh}^{(k+1)h }f^{\alpha }(t)dt\right) ^{\frac{1}{\alpha }}DZ_k, \end{aligned}$$

where the \(DZ_k\) are i.i.d. standard \(\alpha \)-stable \(S_{\alpha }(1,\beta ,0)\) random variables. Janicki and Weron (1994) proposed a numerical scheme for simulating independent \(\alpha \)-stable random variables. However, there is an error in Janicki and Weron (1994), which is corrected in Weron and Weron (1995). We shall use the following formula to simulate \(DZ_k\):

$$\begin{aligned} DZ_k=D\sin \left( \alpha U_k+\alpha C\right) \bigg (\frac{\cos (U_k-\alpha (U_k+ C))}{W_k}\bigg )^{\frac{1-\alpha }{\alpha }}\bigg /\cos (U_k)^{\frac{1}{\alpha }}. \end{aligned}$$

Here, \(U_k\) are iid uniformly distributed on \((-\frac{\pi }{2},\frac{\pi }{2})\), \(W_k\) are iid exponentially distributed with mean 1, \(D= \left( 1+\beta ^2\tan ^2\frac{\alpha \pi }{2}\right) ^{\frac{1}{2\alpha }}\) and \(C=\left( \arctan (\beta \tan \frac{\alpha \pi }{2})\right) /\alpha \).

Table 1 True parameter values for the following tables
Table 2 Mean of the estimators \(\hat{\alpha }\), \(\hat{\theta }\), \(\hat{\sigma }\), \(\hat{\beta }\) with \(h=0.5\) through 500 paths at different value of n
Table 3 Standard deviation of the estimators \(\hat{\alpha }\), \(\hat{\theta }\), \(\hat{\sigma }\), \(\hat{\beta }\) with \(h=0.5\) through 500 paths at different value of n

Then, we have the iteration as

$$\begin{aligned} X_{t_{k+1}}=e^{-\theta h}X_{t_k}+\sigma \frac{1}{(\theta \alpha )^{\frac{1}{\alpha }}}(1-e^{-\alpha \theta h})^{\frac{1}{\alpha }}DZ_{k}. \end{aligned}$$
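Putting the pieces together, the process can be simulated exactly at the observation times with one stable variate per step. A minimal Python sketch of the recursion above, using the corrected Chambers–Mallows–Stuck formula quoted from Weron and Weron (1995), for \(\alpha \ne 1\) (function and variable names are ours):

```python
import numpy as np

def simulate_ou(n, h, alpha, theta, sigma, beta, x0=0.0, seed=None):
    """Simulate X_{t_k}, t_k = k*h, k = 1, ..., n, via the exact recursion."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(-np.pi / 2, np.pi / 2, size=n)           # U_k
    W = rng.exponential(1.0, size=n)                         # W_k, mean 1
    D = (1 + beta**2 * np.tan(alpha * np.pi / 2)**2)**(1 / (2 * alpha))
    C = np.arctan(beta * np.tan(alpha * np.pi / 2)) / alpha
    DZ = (D * np.sin(alpha * (U + C))
          * (np.cos(U - alpha * (U + C)) / W)**((1 - alpha) / alpha)
          / np.cos(U)**(1 / alpha))                          # S_alpha(1, beta, 0)
    coef = sigma * ((1 - np.exp(-alpha * theta * h)) / (alpha * theta))**(1 / alpha)
    X = np.empty(n + 1)
    X[0] = x0
    for k in range(n):
        X[k + 1] = np.exp(-theta * h) * X[k] + coef * DZ[k]
    return X[1:]
```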

To be specific, we choose the following baseline parameter values and simulate the process on the interval [0, T] with \(nh=T=10000\). We fix \(h=0.5\). The four grid points \(u_1\), \(u_2\), \(u_3\) and \(u_4\) are selected in the optimal way discussed in detail in Sect. 3.3; here we choose \(a=12\), \(K=120\) and \(\varepsilon =10^{-3}\). The values of the four parameters are given in Table 1; we use two sets of values.

Tables 2 and 3 give the mean and standard deviation of the estimators for the first set of parameter values as n increases. The grid points are chosen in the optimal way, so they differ across sample paths; here we list the set obtained from one sample path, namely \(\{5.0, 5.9, 6.0, 10.8\}\). We can see that as n gets larger, our estimators converge to the true values of the parameters and their standard deviations become smaller.

Tables 4 and 5 give the mean and standard deviation of the estimators for the second set of parameter values as n increases. In this case, \(0<\alpha <1\) and \(\beta <0\). The optimal grid points obtained from one sample path are \(\{0.2, 3.1, 6.1, 9.0\}\); they differ for other paths. We see that the estimators again show good consistency with relatively small standard deviations.

Table 4 Mean of the estimators \(\hat{\alpha }\), \(\hat{\theta }\), \(\hat{\sigma }\), \(\hat{\beta }\) with \(h=0.5\) through 500 paths at different value of n
Table 5 Standard deviation of the estimators \(\hat{\alpha }\), \(\hat{\theta }\), \(\hat{\sigma }\), \(\hat{\beta }\) with \(h=0.5\) through 500 paths at different value of n

5 Appendix

5.1 Lemmas and proofs

In this subsection, we provide all the necessary lemmas with their proofs and the proof of our main result (Theorem 1) presented in Sect. 3.

Let \(U=(U_{1}, U_{2}, \dots , U_{10})^{T} \sim N(0, \varSigma _{10})\). Then, we have the following result:

Lemma 1

We have the CLT

$$\begin{aligned} \sqrt{n}(V_{n}-v) \overset{d}{\rightarrow } U. \end{aligned}$$
(5.1)

Proof

Let \(U=(U_{1}, U_{2}, \dots , U_{10})^{T}\) be a normally distributed random vector with mean 0 and covariance matrix \(\varSigma _{10}\). Then for any non-zero vector \(a=(a_{1}, a_{2}, \dots , a_{10})^{T} \in {\mathbb R}^{10}\), we have \(a^{T}U \sim N(0, a^{T}\varSigma _{10} a)\). By the Cramér–Wold device (Theorem 29.4 of Billingsley 1995), it suffices to prove that

$$\begin{aligned} a^{T}\sqrt{n}(V_{n}-v) \overset{d}{\rightarrow } a^{T}U. \end{aligned}$$

Define \(K=a^{T}(g_{1},g_{2},\dots , g_{10})^{T}\) and \(\bar{K}=K-{\mathbb E}[K(\tilde{X}_{0}, \tilde{X}_{h})]=a^{T}(\bar{g}_{1},\bar{g}_{2},\dots ,\bar{g}_{10})^{T}\). Note that the underlying Ornstein–Uhlenbeck process is stationary and exponentially \(\alpha \)-mixing (see Theorem 2.6 of Masuda 2007). Then by the univariate CLT for stationary processes under an \(\alpha \)-mixing condition (Theorem 18.6.3 of Ibragimov and Linnik 1971), we have

$$\begin{aligned} a^{T}\sqrt{n}(V_{n}-v)=\sqrt{n}R_{n}(\bar{K}) \overset{d}{\rightarrow } N(0, \sigma _{K}^2), \end{aligned}$$
(5.2)

where

$$\begin{aligned} \sigma _{K}^2={\mathbb {E}}_{\mu }[\bar{K}^2(\tilde{X}_{0}, \tilde{X}_{h})]+2\sum _{j=1}^{\infty } {\mathbb {E}}_{\mu }[\bar{K}(\tilde{X}_{0}, \tilde{X}_{h})\bar{K}(\tilde{X}_{jh}, \tilde{X}_{(j+1)h})] =a^{T}\varSigma _{10} a. \end{aligned}$$

Therefore, we have \(a^{T}\sqrt{n}(V_{n}-v)\overset{d}{\rightarrow } a^{T}U\) for any non-zero \(a \in {\mathbb R}^{10}\). The proof is complete. \(\square \)

Lemma 2

We have the following CLT

$$\begin{aligned} \sqrt{n}\varPhi _{n}(\eta ) \overset{d}{\rightarrow } {\hat{\gamma }}^{(1)}(v)U. \end{aligned}$$
(5.3)

Proof

Note that \(\sqrt{n}\varPhi _{n}(\eta )=\sqrt{n}({\hat{\gamma }}(V_{n})-{\hat{\gamma }} (v))\). The result follows directly from Lemma 1 and the delta method (see, e.g., Lemma 5.3.3 of Bickel and Doksum 2001). \(\square \)

Now we are ready to prove our main result (Theorem 1).

Proof of Theorem 1

(i) This is immediate, since each component of \({\hat{\eta }}_n\) converges to the corresponding component of \(\eta \) almost surely as \(n \rightarrow \infty \), as discussed in Sects. 3.1.1–3.1.4.

(ii) By Taylor's formula, we have

    $$\begin{aligned} \varPhi _{n}(\hat{\eta }_{n})-\varPhi _{n}(\eta ) =\int _{0}^{1} \nabla _{\eta }\varPhi _{n}(\eta +s(\hat{\eta }_{n}-\eta ))ds \cdot (\hat{\eta }_{n}-\eta ). \end{aligned}$$
    (5.4)

    Let \(I_{n}(\eta )=-\int _{0}^{1} \nabla _{\eta }\varPhi _{n}(\eta +s(\hat{\eta }_{n}-\eta ))ds\), which is invertible for n large enough. Note that \(\varPhi _{n}(\hat{\eta }_{n})=0\). Then, we have

    $$\begin{aligned} \sqrt{n} (\hat{\eta }_{n}-\eta )=\left( I_{n}(\eta )\right) ^{-1} \cdot \sqrt{n}\varPhi _{n}(\eta ). \end{aligned}$$
    (5.5)

    Note that \(\left( I_{n}(\eta )\right) ^{-1} \rightarrow \left( I(\eta )\right) ^{-1}\) a.s. since \(\hat{\eta }_{n} \rightarrow \eta \) a.s. Therefore by using Lemma 2 and Slutsky’s Theorem, we have

    $$\begin{aligned} \sqrt{n} (\hat{\eta }_{n}-\eta ) \overset{d}{\rightarrow } \left( I(\eta )\right) ^{-1} {\hat{\gamma }}^{(1)}(v) U. \end{aligned}$$

    The proof is complete.

\(\square \)

5.2 Computation of the covariance matrix \(\varSigma _{10}\)

The explicit expressions of the elements in the covariance matrix \(\varSigma _{10}\) are given in this subsection.

By using the characteristic function \(\phi (u)\) given in (2.3), we define

$$\begin{aligned} A_{0}(u)= & {} {\mathbb E}(\cos u\tilde{X}_{0}) \nonumber \\= & {} \exp \bigg \{-\frac{\sigma ^{\alpha }}{\alpha \theta }|u|^{\alpha }\bigg \}\cos \bigg (\frac{\sigma ^{\alpha }}{\alpha \theta }|u|^{\alpha }\beta \mathrm{sign\ }(u)\tan \frac{\alpha \pi }{2}\bigg ). \end{aligned}$$
(5.6)
$$\begin{aligned} B_{0}(u)= & {} {\mathbb E}(\sin u\tilde{X}_{0})\nonumber \\= & {} \exp \bigg \{-\frac{\sigma ^{\alpha }}{\alpha \theta }|u|^{\alpha }\bigg \}\sin \bigg (\frac{\sigma ^{\alpha }}{\alpha \theta }|u|^{\alpha }\beta \mathrm{sign\ }(u)\tan \frac{\alpha \pi }{2}\bigg ). \end{aligned}$$
(5.7)

Computation of \(\sigma _{g_1g_1}\). From the definition of \(g_1\) we have

$$\begin{aligned} \sigma _{g_1g_1}= & {} \mathrm{cov}(\cos u_1\tilde{X}_0,\cos u_1\tilde{X}_0) +2\sum _{j=1}^{\infty }[\mathrm{cov}(\cos u_1\tilde{X}_0,\cos u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}((\cos u_1\tilde{X}_0)^2)-({\mathbb E}(\cos u_1\tilde{X}_0))^2\nonumber \\&+\,2\sum _{j=1}^{\infty }\left\{ {\mathbb E}(\cos u_1\tilde{X}_0\cos u_1\tilde{X}_{jh})-{\mathbb E}(\cos u_1\tilde{X}_0) {\mathbb E} (\cos u_1\tilde{X}_{jh})\right\} . \end{aligned}$$
(5.8)

The first term in (5.8) is given by

$$\begin{aligned} {\mathbb E}((\cos u_1\tilde{X}_0)^2)= & {} {\mathbb E}\bigg (\frac{\cos 2u_1\tilde{X}_0+1}{2}\bigg )=\frac{1}{2}+\frac{1}{2}{\mathbb E}(\cos 2u_1\tilde{X}_0) \nonumber \\= & {} \frac{1}{2}+\frac{1}{2} A_{0}(2u_1). \end{aligned}$$
(5.9)

To compute the second term in (5.8) one needs

$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0)=A_{0}(u_1). \end{aligned}$$
(5.10)

Notice that \({\mathbb E}(\cos u_1\tilde{X}_{jh})={\mathbb E}(\cos u_1\tilde{X}_0)\) and then the second summand in the sum of (5.8) is also given by the above formula. We write

$$\begin{aligned} u\tilde{X}_0+v\tilde{X}_{jh}=(u+ve^{-\theta jh})\tilde{X}_0+v\sigma e^{-\theta jh}\int _{0}^{jh}e^{\theta s}dZ_{s} \end{aligned}$$

and then we see

$$\begin{aligned}&{\mathbb E}\left[ \exp \{iu\tilde{X}_0+iv\tilde{X}_{jh}\}\right] \nonumber \\&\quad = {\mathbb E}\left[ \exp \{i(u+ve^{-\theta jh})\tilde{X}_0\}\right] {\mathbb E}\left[ \exp \{iv\sigma e^{-\theta jh}\int _{0}^{jh}e^{\theta s}dZ_{s}\}\right] \nonumber \\&\quad = \exp \bigg \{-\frac{\sigma ^{\alpha }}{\alpha \theta }\left[ |u+ve^{-\theta jh}|^{\alpha }\left( 1-i\beta \mathrm{sign\ }(u+ve^{-\theta jh})\tan \frac{\alpha \pi }{2}\right) \right. \nonumber \\&\left. \qquad +\,|v|^{\alpha }(1-e^{-\alpha \theta jh})\left( 1-i\beta \mathrm{sign\ }(v)\tan \frac{\alpha \pi }{2}\right) \right] \bigg \}. \end{aligned}$$
(5.11)

Let

$$\begin{aligned}&A_j(u,v)={\mathbb E}(\cos (u\tilde{X}_0+v\tilde{X}_{jh}))\nonumber \\&\quad =\exp \bigg \{-\frac{\sigma ^{\alpha }}{\alpha \theta }\left[ |u+ve^{-\theta jh}|^{\alpha }+|v|^{\alpha }(1-e^{-\alpha \theta jh})\right] \bigg \}\nonumber \\&\qquad \cos \bigg (\frac{\sigma ^{\alpha }}{\alpha \theta }\beta \tan \frac{\alpha \pi }{2}\left[ |u+ve^{-\theta jh}|^{\alpha } \mathrm{sign\ }(u+ve^{-\theta jh})+|v|^{\alpha }(1-e^{-\alpha \theta jh})\mathrm{sign\ }(v)\right] \bigg ),\nonumber \\ \end{aligned}$$
(5.12)
$$\begin{aligned}&B_j(u,v) ={\mathbb E}(\sin (u\tilde{X}_0+v\tilde{X}_{jh}))\nonumber \\&\quad =\exp \bigg \{-\frac{\sigma ^{\alpha }}{\alpha \theta }\left[ |u+ve^{-\theta jh}|^{\alpha }+|v|^{\alpha }(1-e^{-\alpha \theta jh})\right] \bigg \}\nonumber \\&\qquad \sin \bigg (\frac{\sigma ^{\alpha }}{\alpha \theta }\beta \tan \frac{\alpha \pi }{2}\left[ |u+ve^{-\theta jh}|^{\alpha } \mathrm{sign\ }(u+ve^{-\theta jh})+|v|^{\alpha }(1-e^{-\alpha \theta jh})\mathrm{sign\ }(v)\right] \bigg ).\nonumber \\ \end{aligned}$$
(5.13)
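Numerically, \(A_j\) and \(B_j\) require only the two amplitudes \(|u+ve^{-\theta jh}|^{\alpha }\) and \(|v|^{\alpha }(1-e^{-\alpha \theta jh})\). A minimal Python sketch (parameter values again illustrative):

import math

alpha, beta, theta, sigma, h = 1.5, 0.3, 1.0, 1.0, 0.5  # illustrative values
C = sigma ** alpha / (alpha * theta)          # common scale sigma^alpha / (alpha theta)
T = beta * math.tan(math.pi * alpha / 2)      # skewness factor beta tan(alpha pi / 2)

def sgn(x):
    return (x > 0) - (x < 0)

def Aj(u, v, j):
    # E cos(u X~_0 + v X~_{jh}), Eq. (5.12); Aj(u, 0, 0) reduces to A_0(u) in (5.6)
    w = u + v * math.exp(-theta * j * h)
    p1 = abs(w) ** alpha
    p2 = abs(v) ** alpha * (1 - math.exp(-alpha * theta * j * h))
    return math.exp(-C * (p1 + p2)) * math.cos(C * T * (p1 * sgn(w) + p2 * sgn(v)))

def Bj(u, v, j):
    # E sin(u X~_0 + v X~_{jh}), Eq. (5.13)
    w = u + v * math.exp(-theta * j * h)
    p1 = abs(w) ** alpha
    p2 = abs(v) ** alpha * (1 - math.exp(-alpha * theta * j * h))
    return math.exp(-C * (p1 + p2)) * math.sin(C * T * (p1 * sgn(w) + p2 * sgn(v)))

print(Aj(1.0, -1.0, 1), Bj(1.0, -1.0, 1))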

From this computation we obtain the following formula for the first term in each summand of (5.8):

$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0\cos u_1\tilde{X}_{jh})= & {} \frac{{\mathbb E}(\cos u_1(\tilde{X}_0+\tilde{X}_{jh}))+{\mathbb E}(\cos u_1(\tilde{X}_0-\tilde{X}_{jh}))}{2}\nonumber \\= & {} \frac{A_j(u_1,u_1)+A_j(u_1,-u_1)}{2}. \end{aligned}$$
(5.14)

Substituting (5.9)–(5.10), (5.12), and (5.14) into (5.8) yields \(\sigma _{g_1g_1}\).
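In practice the infinite series in (5.8) has to be truncated. Since the \(j\)-th summand decays geometrically in \(j\), a moderate truncation level already suffices. A self-contained Python sketch (the parameters and \(u_1\) are illustrative choices; \(A_j\) is as in (5.12), and \(A_j(u,0,0)=A_0(u)\)):

import math

alpha, beta, theta, sigma, h, u1 = 1.5, 0.3, 1.0, 1.0, 0.5, 1.0  # illustrative
C = sigma ** alpha / (alpha * theta)
T = beta * math.tan(math.pi * alpha / 2)

def sgn(x):
    return (x > 0) - (x < 0)

def Aj(u, v, j):
    # Eq. (5.12); Aj(u, 0, 0) equals A_0(u) from (5.6)
    w = u + v * math.exp(-theta * j * h)
    p1 = abs(w) ** alpha
    p2 = abs(v) ** alpha * (1 - math.exp(-alpha * theta * j * h))
    return math.exp(-C * (p1 + p2)) * math.cos(C * T * (p1 * sgn(w) + p2 * sgn(v)))

def sigma_g1g1(J=200):
    # Eq. (5.8) with the series truncated at J
    A0u = Aj(u1, 0.0, 0)
    var0 = 0.5 + 0.5 * Aj(2 * u1, 0.0, 0) - A0u ** 2          # (5.9) minus (5.10)^2
    tail = sum(0.5 * (Aj(u1, u1, j) + Aj(u1, -u1, j)) - A0u ** 2   # (5.14) minus product
               for j in range(1, J + 1))
    return var0 + 2 * tail

print(sigma_g1g1())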

Computation of \(\sigma _{g_2g_2}\). From the definition of \(g_2\) we have

$$\begin{aligned} \sigma _{g_2g_2}= & {} \mathrm{cov}(\sin u_1\tilde{X}_0,\sin u_1\tilde{X}_0) +2\sum _{j=1}^{\infty }[\mathrm{cov}(\sin u_1\tilde{X}_0,\sin u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}((\sin u_1\tilde{X}_0)^2)-({\mathbb E}(\sin u_1\tilde{X}_0))^2\nonumber \\&+\,2\sum _{j=1}^{\infty }\left\{ {\mathbb E}(\sin u_1\tilde{X}_0\sin u_1\tilde{X}_{jh})-{\mathbb E}(\sin u_1\tilde{X}_0){\mathbb E}(\sin u_1\tilde{X}_{jh})\right\} . \end{aligned}$$
(5.15)

The first term in (5.15) is given by

$$\begin{aligned} {\mathbb E}((\sin u_1\tilde{X}_0)^2)= & {} {\mathbb E}\bigg (\frac{1-\cos 2u_1\tilde{X}_0}{2}\bigg )=\frac{1}{2}-\frac{1}{2}{\mathbb E}(\cos 2u_1\tilde{X}_0) \nonumber \\= & {} \frac{1}{2}-\frac{1}{2}A_{0}(2u_1). \end{aligned}$$
(5.16)

The other terms appearing in (5.15) are given by

$$\begin{aligned} {\mathbb E}(\sin u_1\tilde{X}_0)=B_{0}(u_1) \end{aligned}$$
(5.17)

and

$$\begin{aligned} {\mathbb E}(\sin u_1\tilde{X}_0\sin u_1\tilde{X}_{jh})= & {} \frac{{\mathbb E}(\cos u_1(\tilde{X}_0-\tilde{X}_{jh}))-{\mathbb E}(\cos u_1(\tilde{X}_0+\tilde{X}_{jh}))}{2}\nonumber \\= & {} \frac{A_j(u_1, -u_1)-A_j(u_1,u_1)}{2}. \end{aligned}$$
(5.18)

We can get \(\sigma _{g_2g_2}\) from Eq. (5.15).

The computations of \(\sigma _{g_3g_3}\), \(\sigma _{g_4g_4}\), \(\sigma _{g_5g_5}\), \(\sigma _{g_6g_6}\), \(\sigma _{g_9g_9}\), and \(\sigma _{g_{10}g_{10}}\) are essentially the same as those of \(\sigma _{g_1g_1}\) and \(\sigma _{g_2g_2}\); one simply changes the value of \(u\).

Computation of \(\sigma _{g_7g_7}\). From the definition of \(g_7\) we have

$$\begin{aligned} \sigma _{g_7g_7}= & {} \mathrm{cov}(\cos u_3(\tilde{X}_h-\tilde{X}_0),\cos u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\,2\sum _{j=1}^{\infty }[\mathrm{cov}(\cos u_3(\tilde{X}_h-\tilde{X}_0),\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))]\nonumber \\= & {} {\mathbb E}((\cos u_3(\tilde{X}_h-\tilde{X}_0))^2)-({\mathbb E}\cos u_3(\tilde{X}_h-\tilde{X}_0))^2\nonumber \\&+\,2\sum _{j=1}^{\infty }\left\{ {\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\right. \nonumber \\&\left. -\,{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\right\} . \end{aligned}$$
(5.19)

The first term in (5.19) is given by

$$\begin{aligned} {\mathbb E}((\cos u_3(\tilde{X}_h-\tilde{X}_0))^2)= & {} {\mathbb E}\bigg (\frac{\cos 2u_3(\tilde{X}_h-\tilde{X}_0)+1}{2}\bigg )=\frac{1}{2}+\frac{1}{2}{\mathbb E}(\cos 2u_3(\tilde{X}_h-\tilde{X}_0)) \nonumber \\= & {} \frac{1}{2}+\frac{1}{2}A_{1}(-2u_3, 2u_3). \end{aligned}$$
(5.20)

To compute the second term in (5.19) one needs

$$\begin{aligned}&{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0))=A_{1}(-u_3,u_3). \end{aligned}$$
(5.21)

For any real numbers u and v we have

$$\begin{aligned}&u(\tilde{X}_h-\tilde{X}_0)+v(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}) \nonumber \\&\quad =[u(e^{-\theta h}-1)+v(e^{-\theta (j+1)h}-e^{-\theta jh})]\tilde{X}_{0}\nonumber \\&\qquad +\,\int _{0}^{\infty }u\sigma e^{-\theta h}e^{\theta s}1_{[0,h]}(s)dZ_s +\int _{0}^{\infty }v\sigma e^{-\theta (j+1) h}e^{\theta s}1_{[0,(j+1)h]}(s)dZ_s\nonumber \\&\qquad -\,\int _{0}^{\infty }v\sigma e^{-\theta jh}e^{\theta s}1_{[0,jh]}(s)dZ_s. \end{aligned}$$
(5.22)

Therefore, we have

$$\begin{aligned} w_{j}(u,v):= & {} {\mathbb E}[\exp \{iu(\tilde{X}_h-\tilde{X}_0)+iv(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})\}]\nonumber \\= & {} {\mathbb E}[\exp \{i[u(e^{-\theta h}-1)+v(e^{-\theta (j+1)h}-e^{-\theta jh})]\tilde{X}_{0}\}]\nonumber \\&\quad \times \,{\mathbb E}[\exp \{i(u\sigma e^{-\theta h}\int _{0}^{\infty }e^{\theta s}1_{[0,h]}(s)dZ_s -v\sigma e^{-\theta jh}\int _{0}^{\infty }e^{\theta s}1_{[0,jh]}(s)dZ_s \nonumber \\&+\,v\sigma e^{-\theta (j+1) h}\int _{0}^{\infty }e^{\theta s}1_{[0,(j+1)h]}(s)dZ_s )\}]\nonumber \\= & {} \exp \bigg \{-\frac{\sigma ^{\alpha }}{\alpha \theta }[|u(e^{-\theta h}-1)+v(e^{-\theta (j+1)h}-e^{-\theta jh})|^{\alpha }\nonumber \\&\quad \left( 1-i\beta \mathrm{sign\ }(u(e^{-\theta h}-1)+v(e^{-\theta (j+1)h}-e^{-\theta jh}))\tan \frac{\alpha \pi }{2}\right) \nonumber \\&+\,|ue^{-\theta h}+ve^{-\theta (j+1)h}-ve^{-\theta jh}|^{\alpha }(e^{\alpha \theta h}-1)\nonumber \\&\quad \left( 1-i\beta \mathrm{sign\ }(ue^{-\theta h}+ve^{-\theta (j+1)h}-ve^{-\theta jh})\tan \frac{\alpha \pi }{2}\right) \nonumber \\&+\,|v|^{\alpha }(1-e^{-\theta h})^{\alpha }(1-e^{-\alpha \theta (j-1)h}) (1+i\beta \mathrm{sign\ }(v)\tan \frac{\alpha \pi }{2})\nonumber \\&+\,|v|^{\alpha }(1-e^{-\alpha \theta h}) (1-i\beta \mathrm{sign\ }(v)\tan \frac{\alpha \pi }{2})]\bigg \}. \end{aligned}$$
(5.23)

Then the first term in each summand of (5.19) is given by

$$\begin{aligned}&{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&\quad =\frac{1}{2}\bigg [{\mathbb E}(\cos u_3((\tilde{X}_h-\tilde{X}_0)+(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\nonumber \\&\qquad +\,{\mathbb E}(\cos u_3((\tilde{X}_h-\tilde{X}_0)-(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\bigg ]\nonumber \\&\quad =\frac{1}{2}\mathfrak {R}\left[ w_{j}(u_3,u_3)+w_{j}(u_3,-u_3)\right] . \end{aligned}$$
(5.24)

Then we can get \(\sigma _{g_7g_7}\) from Eq. (5.19).
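As with \(\sigma _{g_1g_1}\), the series in (5.19) is truncated for numerical evaluation. The sketch below assembles \(\sigma _{g_7g_7}\) from (5.20), (5.21), and (5.24); \(A_j\) and \(w_j\) are repeated from the sketches above so that the snippet runs on its own, and all parameter values are illustrative.

import cmath, math

alpha, beta, theta, sigma, h, u3 = 1.5, 0.3, 1.0, 1.0, 0.5, 1.0  # illustrative
C = sigma ** alpha / (alpha * theta)
T = beta * math.tan(math.pi * alpha / 2)
e = math.exp

def sgn(x):
    return (x > 0) - (x < 0)

def Aj(u, v, j):
    # Eq. (5.12)
    w = u + v * e(-theta * j * h)
    p1 = abs(w) ** alpha
    p2 = abs(v) ** alpha * (1 - e(-alpha * theta * j * h))
    return e(-C * (p1 + p2)) * math.cos(C * T * (p1 * sgn(w) + p2 * sgn(v)))

def wj(u, v, j):
    # Eq. (5.23); the derivation assumes j >= 1
    c1 = u * (e(-theta * h) - 1) + v * (e(-theta * (j + 1) * h) - e(-theta * j * h))
    c2 = u * e(-theta * h) + v * (e(-theta * (j + 1) * h) - e(-theta * j * h))
    t1 = abs(c1) ** alpha * (1 - 1j * T * sgn(c1))
    t2 = abs(c2) ** alpha * (e(alpha * theta * h) - 1) * (1 - 1j * T * sgn(c2))
    t3 = (abs(v) ** alpha * (1 - e(-theta * h)) ** alpha
          * (1 - e(-alpha * theta * (j - 1) * h)) * (1 + 1j * T * sgn(v)))
    t4 = abs(v) ** alpha * (1 - e(-alpha * theta * h)) * (1 - 1j * T * sgn(v))
    return cmath.exp(-C * (t1 + t2 + t3 + t4))

def sigma_g7g7(J=200):
    m = Aj(-u3, u3, 1)                                   # (5.21)
    var0 = 0.5 + 0.5 * Aj(-2 * u3, 2 * u3, 1) - m ** 2   # (5.20) minus the square
    tail = sum(0.5 * (wj(u3, u3, j) + wj(u3, -u3, j)).real - m ** 2   # (5.24)
               for j in range(1, J + 1))
    return var0 + 2 * tail

print(sigma_g7g7())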

Computation of \(\sigma _{g_8g_8}\). From the definition of \(g_8\) we have

$$\begin{aligned} \sigma _{g_8g_8}= & {} \mathrm{cov}(\sin u_3(\tilde{X}_h-\tilde{X}_0),\sin u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\,2\sum _{j=1}^{\infty }[\mathrm{cov}(\sin u_3(\tilde{X}_h-\tilde{X}_0),\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))]\nonumber \\= & {} {\mathbb E}((\sin u_3(\tilde{X}_h-\tilde{X}_0))^2)-({\mathbb E}\sin u_3(\tilde{X}_h-\tilde{X}_0))^2\nonumber \\&+\,2\sum _{j=1}^{\infty }\left\{ {\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\right. \nonumber \\&\left. -\,{\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\right\} . \nonumber \\ \end{aligned}$$
(5.25)
$$\begin{aligned} {\mathbb E}((\sin u_3(\tilde{X}_h-\tilde{X}_0))^2)= & {} {\mathbb E}\bigg (\frac{1-\cos 2u_3(\tilde{X}_h-\tilde{X}_0)}{2}\bigg )\nonumber \\= & {} \frac{1}{2}-\frac{1}{2}{\mathbb E}(\cos 2u_3(\tilde{X}_h-\tilde{X}_0)) \nonumber \\= & {} \frac{1}{2}-\frac{1}{2}A_{1}(-2u_3, 2u_3). \end{aligned}$$
(5.26)
$$\begin{aligned} {\mathbb E}\sin u_3(\tilde{X}_h-\tilde{X}_0)= & {} B_{1}(-u_3,u_3). \end{aligned}$$
(5.27)
$$\begin{aligned} {\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))= & {} \frac{1}{2}\left[ {\mathbb E}(\cos (u_3(\tilde{X}_h-\tilde{X}_0)- u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))) \right. \nonumber \\&-\,\left. {\mathbb E}(\cos (u_3(\tilde{X}_h-\tilde{X}_0)+ u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\right] \nonumber \\= & {} \frac{1}{2}\mathfrak {R}\left[ w_j(u_3, -u_3)-w_j(u_3, u_3)\right] . \end{aligned}$$
(5.28)

Then we can get \(\sigma _{g_8g_8}\) from Eq. (5.25).

Computation of \(\sigma _{g_1g_2}\). From the definition of \(g_1\) and \(g_2\) we have

$$\begin{aligned} \sigma _{g_1g_2}= & {} \mathrm{cov}(\cos u_1\tilde{X}_0,\sin u_1\tilde{X}_0)+\sum _{j=1}^{\infty }[\mathrm{cov}(\cos u_1\tilde{X}_0,\sin u_1\tilde{X}_{jh})\nonumber \\&+\,\mathrm{cov}(\sin u_1\tilde{X}_0,\cos u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}(\cos u_1\tilde{X}_0\sin u_1\tilde{X}_0)-{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\sin u_1\tilde{X}_0)\nonumber \\&+\,\sum _{j=1}^{\infty }[{\mathbb E}(\cos u_1\tilde{X}_0\sin u_1\tilde{X}_{jh})-{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\sin u_1\tilde{X}_{jh})\nonumber \\&+\,{\mathbb E}(\sin u_1\tilde{X}_0\cos u_1\tilde{X}_{jh})-{\mathbb E}(\sin u_1\tilde{X}_0){\mathbb E}(\cos u_1\tilde{X}_{jh})], \end{aligned}$$
(5.29)

where

$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0\sin u_1\tilde{X}_0)= & {} \frac{{\mathbb E}(\sin 2u_1\tilde{X}_0)}{2}=\frac{1}{2}B_{0}(2u_1), \end{aligned}$$
(5.30)
$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0)= & {} A_{0}(u_1), \end{aligned}$$
(5.31)
$$\begin{aligned} {\mathbb E}(\sin u_1\tilde{X}_0)= & {} B_{0}(u_1), \end{aligned}$$
(5.32)
$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0\sin u_1\tilde{X}_{jh})= & {} \frac{{\mathbb E}(\sin u_1(\tilde{X}_0+\tilde{X}_{jh}))-{\mathbb E}(\sin u_1(\tilde{X}_0-\tilde{X}_{jh}))}{2}\nonumber \\= & {} \frac{B_{j}(u_1, u_1)-B_j(u_1, -u_1)}{2}, \end{aligned}$$
(5.33)
$$\begin{aligned} {\mathbb E}(\sin u_1\tilde{X}_0\cos u_1\tilde{X}_{jh})= & {} \frac{{\mathbb E}(\sin u_1(\tilde{X}_0+\tilde{X}_{jh}))+{\mathbb E}(\sin u_1(\tilde{X}_0-\tilde{X}_{jh}))}{2}\nonumber \\= & {} \frac{B_{j}(u_1, u_1)+B_j(u_1, -u_1)}{2}. \end{aligned}$$
(5.34)
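Combining (5.33) and (5.34), the \(j\)-th summand of (5.29) collapses to \(B_j(u_1,u_1)-2A_0(u_1)B_0(u_1)\), which gives a compact numerical recipe. A Python sketch under the same illustrative parameter choices as before:

import math

alpha, beta, theta, sigma, h, u1 = 1.5, 0.3, 1.0, 1.0, 0.5, 1.0  # illustrative
C = sigma ** alpha / (alpha * theta)
T = beta * math.tan(math.pi * alpha / 2)

def sgn(x):
    return (x > 0) - (x < 0)

def AB(u, v, j):
    # returns (A_j(u, v), B_j(u, v)) from (5.12)-(5.13); v = 0 gives (A_0(u), B_0(u))
    w = u + v * math.exp(-theta * j * h)
    p1 = abs(w) ** alpha
    p2 = abs(v) ** alpha * (1 - math.exp(-alpha * theta * j * h))
    r = math.exp(-C * (p1 + p2))
    ang = C * T * (p1 * sgn(w) + p2 * sgn(v))
    return r * math.cos(ang), r * math.sin(ang)

def sigma_g1g2(J=200):
    A0u, B0u = AB(u1, 0.0, 0)
    B02u = AB(2 * u1, 0.0, 0)[1]
    cov0 = 0.5 * B02u - A0u * B0u                  # (5.30) minus (5.31) * (5.32)
    tail = sum(AB(u1, u1, j)[1] - 2 * A0u * B0u    # collapsed j-th summand
               for j in range(1, J + 1))
    return cov0 + tail

print(sigma_g1g2())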

Similarly, we can get \(\sigma _{g_{3}g_{4}}\), \(\sigma _{g_{5}g_{6}}\), and \(\sigma _{g_{9}g_{10}}\) by changing \(u_1\) to \(u_2\), \(u_3\), and \(u_4\), respectively.

Computation of \(\sigma _{g_1g_3}\). From the definition of \(g_1\) and \(g_3\) we have

$$\begin{aligned} \sigma _{g_1g_3}= & {} \mathrm{cov}(\cos u_1\tilde{X}_0,\cos u_2\tilde{X}_0)+\sum _{j=1}^{\infty }[\mathrm{cov}(\cos u_1\tilde{X}_0,\cos u_2\tilde{X}_{jh})\nonumber \\&+\,\mathrm{cov}(\cos u_2\tilde{X}_0,\cos u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}(\cos u_1\tilde{X}_0\cos u_2\tilde{X}_0)-{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\cos u_2\tilde{X}_0)\nonumber \\&+\,\sum _{j=1}^{\infty }\bigg [ {\mathbb E}(\cos u_1\tilde{X}_0\cos u_2\tilde{X}_{jh})-{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\cos u_2\tilde{X}_{jh})\nonumber \\&+\,{\mathbb E}(\cos u_2\tilde{X}_0\cos u_1\tilde{X}_{jh})-{\mathbb E}(\cos u_2\tilde{X}_0){\mathbb E}(\cos u_1\tilde{X}_{jh})\bigg ], \end{aligned}$$
(5.35)

where

$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0\cos u_2 \tilde{X}_0)= & {} \frac{1}{2}[{\mathbb E}(\cos (u_1+u_2)\tilde{X}_0)+{\mathbb E}(\cos (u_1-u_2)\tilde{X}_0)]\nonumber \\= & {} \frac{1}{2}\left[ A_{0}(u_1+u_2)+A_{0}(u_1-u_2)\right] , \end{aligned}$$
(5.36)
$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0)= & {} A_{0}(u_1), \quad {\mathbb E}(\cos u_2\tilde{X}_0)=A_{0}(u_2), \end{aligned}$$
(5.37)
$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0\cos u_2\tilde{X}_{jh})= & {} \frac{{\mathbb E}\cos (u_1\tilde{X}_0+u_2\tilde{X}_{jh})+{\mathbb E}\cos (u_1\tilde{X}_0-u_2\tilde{X}_{jh})}{2}\nonumber \\= & {} \frac{A_j(u_1, u_2)+A_j(u_1, -u_2)}{2}, \end{aligned}$$
(5.38)
$$\begin{aligned} {\mathbb E}(\cos u_2\tilde{X}_0\cos u_1\tilde{X}_{jh})= & {} \frac{{\mathbb E}\cos (u_2\tilde{X}_0+u_1\tilde{X}_{jh})+{\mathbb E}\cos (u_2\tilde{X}_0-u_1\tilde{X}_{jh})}{2}\nonumber \\= & {} \frac{A_{j}(u_2,u_1)+A_{j}(u_2,-u_1)}{2}. \end{aligned}$$
(5.39)

Then we can get \(\sigma _{g_1g_3}\) from Eq. (5.35).

Similarly, we can get \(\sigma _{g_1g_5}\), \(\sigma _{g_1g_9}\), \(\sigma _{g_3g_5}\), \(\sigma _{g_3g_9}\), and \(\sigma _{g_5g_9}\).

Computation of \(\sigma _{g_1g_4}\). From the definition of \(g_1\) and \(g_4\) we have

$$\begin{aligned} \sigma _{g_1g_4}= & {} \mathrm{cov}(\cos u_1\tilde{X}_0,\sin u_2\tilde{X}_0)+\sum _{j=1}^{\infty }[\mathrm{cov}(\cos u_1\tilde{X}_0,\sin u_2\tilde{X}_{jh})\nonumber \\&+\,\mathrm{cov}(\sin u_2\tilde{X}_0,\cos u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}(\cos u_1\tilde{X}_0\sin u_2\tilde{X}_0)-{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\sin u_2\tilde{X}_0)\nonumber \\&+\,\sum _{j=1}^{\infty }{\mathbb E}(\cos u_1\tilde{X}_0\sin u_2\tilde{X}_{jh})-{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\sin u_2\tilde{X}_{jh})\nonumber \\&+\,\sum _{j=1}^{\infty }{\mathbb E}(\sin u_2\tilde{X}_0\cos u_1\tilde{X}_{jh})-{\mathbb E}(\sin u_2\tilde{X}_0){\mathbb E}(\cos u_1\tilde{X}_{jh}), \end{aligned}$$
(5.40)

where

$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0\sin u_2 \tilde{X}_0)= & {} \frac{1}{2}[{\mathbb E}(\sin (u_1+u_2)\tilde{X}_0)-{\mathbb E}(\sin (u_1-u_2)\tilde{X}_0)]\nonumber \\= & {} \frac{1}{2}\left[ B_{0}(u_1+u_2)-B_{0}(u_1-u_2)\right] , \nonumber \\ {\mathbb E}(\cos u_1\tilde{X}_0)= & {} A_{0}(u_1), \ {\mathbb E}(\sin u_2\tilde{X}_0)=B_{0}(u_2), \end{aligned}$$
(5.41)
$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0\sin u_2\tilde{X}_{jh})= & {} \frac{{\mathbb E}\sin (u_1\tilde{X}_0+u_2\tilde{X}_{jh})-{\mathbb E}\sin (u_1\tilde{X}_0-u_2\tilde{X}_{jh})}{2}\nonumber \\= & {} \frac{B_{j}(u_1,u_2)-B_j(u_1, -u_2)}{2}, \end{aligned}$$
(5.42)
$$\begin{aligned} {\mathbb E}(\sin u_2\tilde{X}_0\cos u_1\tilde{X}_{jh})= & {} \frac{{\mathbb E}\sin (u_2\tilde{X}_0+u_1\tilde{X}_{jh})+{\mathbb E}\sin (u_2\tilde{X}_0-u_1\tilde{X}_{jh})}{2} \end{aligned}$$
(5.43)
$$\begin{aligned}= & {} \frac{B_{j}(u_2, u_1)+B_{j}(u_2, -u_1)}{2}. \end{aligned}$$
(5.44)

Then we can get \(\sigma _{g_1g_4}\) from Eq. (5.40).

Similarly, we can get \(\sigma _{g_1g_6}\), \(\sigma _{g_1g_{10}}\), \(\sigma _{g_3g_2}\), \(\sigma _{g_3g_6}\), \(\sigma _{g_3g_{10}}\), \(\sigma _{g_5g_2}\), \(\sigma _{g_5g_4}\), \(\sigma _{g_5g_{10}}\), \(\sigma _{g_9g_2}\), \(\sigma _{g_9g_4}\), and \(\sigma _{g_9g_6}\) by changing the values of \(u_1\) and \(u_2\).

Computation of \(\sigma _{g_2g_4}\). From the definition of \(g_2\) and \(g_4\) we have

$$\begin{aligned} \sigma _{g_2g_4}= & {} \mathrm{cov}(\sin u_1\tilde{X}_0,\sin u_2\tilde{X}_0)+\sum _{j=1}^{\infty }[\mathrm{cov}(\sin u_1\tilde{X}_0,\sin u_2\tilde{X}_{jh})\nonumber \\&+\,\mathrm{cov}(\sin u_2\tilde{X}_0,\sin u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}(\sin u_1\tilde{X}_0\sin u_2\tilde{X}_0)-{\mathbb E}(\sin u_1\tilde{X}_0){\mathbb E}(\sin u_2\tilde{X}_0)\nonumber \\&+\,\sum _{j=1}^{\infty }[{\mathbb E}(\sin u_1\tilde{X}_0\sin u_2\tilde{X}_{jh})-{\mathbb E}(\sin u_1\tilde{X}_0){\mathbb E}(\sin u_2\tilde{X}_{jh})]\nonumber \\&+\,\sum _{j=1}^{\infty }[{\mathbb E}(\sin u_2\tilde{X}_0\sin u_1\tilde{X}_{jh})-{\mathbb E}(\sin u_2\tilde{X}_0){\mathbb E}(\sin u_1\tilde{X}_{jh})], \end{aligned}$$
(5.45)

where

$$\begin{aligned} {\mathbb E}(\sin u_1\tilde{X}_0\sin u_2 \tilde{X}_0)= & {} \frac{1}{2}[{\mathbb E}(\cos (u_1-u_2)\tilde{X}_0)-{\mathbb E}(\cos (u_1+u_2)\tilde{X}_0)]\nonumber \\= & {} \frac{1}{2}\left[ A_{0}(u_1-u_2)-A_{0}(u_1+u_2)\right] , \end{aligned}$$
(5.46)
$$\begin{aligned} {\mathbb E}(\sin u_1\tilde{X}_0)= & {} B_{0}(u_1), \quad {\mathbb E}(\sin u_2\tilde{X}_0)=B_{0}(u_2), \end{aligned}$$
(5.47)
$$\begin{aligned} {\mathbb E}(\sin u_1\tilde{X}_0\sin u_2\tilde{X}_{jh})= & {} \frac{{\mathbb E}\cos (u_1\tilde{X}_0-u_2\tilde{X}_{jh})-{\mathbb E}\cos (u_1\tilde{X}_0+u_2\tilde{X}_{jh})}{2}\nonumber \\= & {} \frac{A_j(u_1, -u_2)-A_j(u_1, u_2)}{2}, \end{aligned}$$
(5.48)
$$\begin{aligned} {\mathbb E}(\sin u_2\tilde{X}_0\sin u_1\tilde{X}_{jh})= & {} \frac{{\mathbb E}\cos (u_2\tilde{X}_0-u_1\tilde{X}_{jh})-{\mathbb E}\cos (u_2\tilde{X}_0+u_1\tilde{X}_{jh})}{2}\nonumber \\= & {} \frac{A_{j}(u_2, -u_1)-A_{j}(u_2,u_1)}{2}. \end{aligned}$$
(5.49)

Then we can get \(\sigma _{g_2g_4}\) from Eq. (5.45).

Similarly, we can get \(\sigma _{g_2g_6}\), \(\sigma _{g_2g_{10}}\), \(\sigma _{g_4g_6}\), \(\sigma _{g_4g_{10}}\), and \(\sigma _{g_6g_{10}}\) by changing the values of \(u_1\) and \(u_2\).

Computation of \(\sigma _{g_7g_8}\). From the definition of \(g_7\) and \(g_8\) we have

$$\begin{aligned} \sigma _{g_7g_8}= & {} \mathrm{cov}(\cos u_3(\tilde{X}_h-\tilde{X}_0),\sin u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\,\sum _{j=1}^{\infty }[\mathrm{cov}(\cos u_3(\tilde{X}_h-\tilde{X}_0),\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,\mathrm{cov}(\sin u_3(\tilde{X}_h-\tilde{X}_0),\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))]\nonumber \\= & {} {\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\sin u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&-\,{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\,\sum _{j=1}^{\infty }[{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&-\,{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,{\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&-\,{\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))], \end{aligned}$$
(5.50)

where

$$\begin{aligned} {\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\sin u_3(\tilde{X}_h-\tilde{X}_0))= & {} {\mathbb E}\bigg (\frac{\sin 2u_3(\tilde{X}_h-\tilde{X}_0)}{2}\bigg )\nonumber \\= & {} \frac{1}{2}B_{1}(-2u_3, 2u_3), \end{aligned}$$
(5.51)
$$\begin{aligned} {\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0))= & {} A_{1}(-u_3, u_3), \end{aligned}$$
(5.52)
$$\begin{aligned} {\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0))= & {} B_{1}(-u_3, u_3), \end{aligned}$$
(5.53)
$$\begin{aligned} {\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))= & {} \frac{1}{2}\bigg [{\mathbb E}(\sin (u_3(\tilde{X}_h-\tilde{X}_0)+u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\nonumber \\&-\,{\mathbb E}(\sin (u_3(\tilde{X}_h-\tilde{X}_0)-u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\bigg ]\nonumber \\= & {} \frac{1}{2}\mathfrak {I}\left[ w_{j}(u_3, u_3)-w_{j}(u_3, -u_3)\right] , \nonumber \\ \end{aligned}$$
(5.54)
$$\begin{aligned} {\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))= & {} \frac{1}{2}\bigg [{\mathbb E}(\sin (u_3(\tilde{X}_h-\tilde{X}_0)+u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\nonumber \\&+\,{\mathbb E}(\sin (u_3(\tilde{X}_h-\tilde{X}_0)-u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\bigg ]\nonumber \\= & {} \frac{1}{2}\mathfrak {I}\left[ w_{j}(u_3, u_3)+w_{j}(u_3, -u_3)\right] . \end{aligned}$$
(5.55)

Then we can get \(\sigma _{g_7g_8}\) from Eq. (5.50).

Computation of \(\sigma _{g_1g_7}\). From the definition of \(g_1\) and \(g_7\) we have

$$\begin{aligned} \sigma _{g_1g_7}= & {} \mathrm{cov}(\cos u_1\tilde{X}_0,\cos u_3(\tilde{X}_h-\tilde{X}_0)) \nonumber \\&+\sum _{j=1}^{\infty }[\mathrm{cov}(\cos u_1\tilde{X}_0,\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,\mathrm{cov}(\cos u_3(\tilde{X}_h-\tilde{X}_0),\cos u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}(\cos u_1\tilde{X}_0\cos u_3(\tilde{X}_h-\tilde{X}_0))-{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\,\sum _{j=1}^{\infty }[{\mathbb E}(\cos u_1\tilde{X}_0\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&-\,{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\cos u_1\tilde{X}_{jh})\nonumber \\&-\,{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\cos u_1\tilde{X}_{jh})]. \end{aligned}$$
(5.56)

Note that

$$\begin{aligned}&{\mathbb E}\cos u_3(\tilde{X}_h-\tilde{X}_0)=A_{1}(-u_3, u_3), {\mathbb E}\cos u_1\tilde{X}_0=A_{0}(u_1). \end{aligned}$$
(5.57)

We write

$$\begin{aligned} u\tilde{X}_0+v(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})= & {} (u+ve^{-\theta (j+1)h}-ve^{-\theta jh})\tilde{X}_0\\&+\,v\sigma e^{-\theta (j+1) h}\int _{0}^{(j+1)h}e^{\theta s}dZ_{s}-v\sigma e^{-\theta j h}\int _{0}^{jh}e^{\theta s}dZ_{s}. \end{aligned}$$

Let

$$\begin{aligned} \rho _{j}(u,v):= & {} {\mathbb E}[\exp \{iu\tilde{X}_0+iv(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})\}]\nonumber \\= & {} {\mathbb E}[\exp \{i[u+v(e^{-\theta (j+1)h}-e^{-\theta jh})]\tilde{X}_{0}\}]\nonumber \\&\times \,{\mathbb E}[\exp \{i(v\sigma e^{-\theta (j+1) h}\int _{0}^{\infty }e^{\theta s}1_{[0,(j+1)h]}(s)dZ_s\nonumber \\&-\,v\sigma e^{-\theta jh}\int _{0}^{\infty }e^{\theta s}1_{[0,jh]}(s)dZ_s)\}]\nonumber \\= & {} \exp \bigg \{-\frac{\sigma ^{\alpha }}{\alpha \theta }\left[ |u+v(e^{-\theta (j+1)h}-e^{-\theta jh})|^{\alpha }\right. \nonumber \\&\quad \left( 1-i\beta \mathrm{sign\ }(u+v(e^{-\theta (j+1)h}-e^{-\theta jh}))\tan \frac{\alpha \pi }{2}\right) \nonumber \\&+\,|v|^{\alpha }\left( 1-e^{-\theta h}\right) ^{\alpha }\left( 1-e^{-\alpha \theta jh}\right) \left( 1+i\beta \mathrm{sign\ }(v)\tan \frac{\alpha \pi }{2}\right) \nonumber \\&\left. +|v|^{\alpha }(1-e^{-\alpha \theta h}) \left( 1-i\beta \mathrm{sign\ }(v)\tan \frac{\alpha \pi }{2}\right) \right] \bigg \}. \end{aligned}$$
(5.58)
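A direct transcription of (5.58) into Python reads as follows (illustrative parameters). Note that the derivation remains valid at \(j=0\), where the middle term vanishes and \(\rho _{0}(u,v)={\mathbb E}[\exp \{iu\tilde{X}_0+iv(\tilde{X}_{h}-\tilde{X}_{0})\}]\); we use this below for the lag-zero term of (5.56).

import cmath, math

alpha, beta, theta, sigma, h = 1.5, 0.3, 1.0, 1.0, 0.5  # illustrative
C = sigma ** alpha / (alpha * theta)
T = beta * math.tan(math.pi * alpha / 2)
e = math.exp

def sgn(x):
    return (x > 0) - (x < 0)

def rho(u, v, j):
    # Eq. (5.58); valid for j >= 0
    c1 = u + v * (e(-theta * (j + 1) * h) - e(-theta * j * h))
    t1 = abs(c1) ** alpha * (1 - 1j * T * sgn(c1))
    t2 = (abs(v) ** alpha * (1 - e(-theta * h)) ** alpha
          * (1 - e(-alpha * theta * j * h)) * (1 + 1j * T * sgn(v)))
    t3 = abs(v) ** alpha * (1 - e(-alpha * theta * h)) * (1 - 1j * T * sgn(v))
    return cmath.exp(-C * (t1 + t2 + t3))

print(rho(1.0, 1.0, 1))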

Then

$$\begin{aligned}&{\mathbb E}(\cos u_1\tilde{X}_0\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&\quad =\frac{1}{2}\bigg [{\mathbb E}(\cos (u_1\tilde{X}_0+u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\nonumber \\&\qquad +\,{\mathbb E}(\cos (u_1\tilde{X}_0-u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\bigg ]\nonumber \\&\quad =\frac{1}{2}\mathfrak {R}\left[ \rho _{j}(u_1, u_3)+\rho _{j}(u_1, -u_3)\right] . \end{aligned}$$
(5.59)

We write

$$\begin{aligned} u\tilde{X}_{jh}+v(\tilde{X}_{h}-\tilde{X}_{0})= & {} (ue^{-\theta jh}+v(e^{-\theta h}-1))\tilde{X}_0\\&+\,u\sigma e^{-\theta jh}\int _{0}^{jh}e^{\theta s}dZ_{s}+v\sigma e^{-\theta h}\int _{0}^{h}e^{\theta s}dZ_{s}. \end{aligned}$$

Let

$$\begin{aligned} \kappa _{j}(u,v):= & {} {\mathbb E}[\exp \{iu\tilde{X}_{jh}+iv(\tilde{X}_{h}-\tilde{X}_{0})\}] \nonumber \\= & {} {\mathbb E}[\exp \{i[ue^{-\theta jh}+v(e^{-\theta h}-1)]\tilde{X}_{0}\}]\nonumber \\&\times {\mathbb E}[\exp \{i(u\sigma e^{-\theta jh}\int _{0}^{jh}e^{\theta s}dZ_{s}+v\sigma e^{-\theta h}\int _{0}^{h}e^{\theta s}dZ_{s})\}]\nonumber \\= & {} \exp \bigg \{-\frac{\sigma ^{\alpha }}{\alpha \theta }\Bigg [|ue^{-\theta jh}+v(e^{-\theta h}-1)|^{\alpha }\nonumber \\&\quad \left( 1-i\beta \mathrm{sign\ }(ue^{-\theta jh}+v(e^{-\theta h}-1))\tan \frac{\alpha \pi }{2}\right) \nonumber \\&+\,|ue^{-\theta jh}+ve^{-\theta h}|^{\alpha }(e^{\alpha \theta h}-1)\nonumber \\&\quad \left( 1-i\beta \mathrm{sign\ }(ue^{-\theta jh}+ve^{-\theta h})\tan \frac{\alpha \pi }{2}\right) \nonumber \\&+\,|u|^{\alpha }(1-e^{-\alpha \theta (j-1) h})\left( 1-i\beta \mathrm{sign\ }(u)\tan \frac{\alpha \pi }{2}\right) \Bigg ]\bigg \}. \end{aligned}$$
(5.60)
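Eq. (5.60) can be transcribed in the same manner (the derivation assumes \(j\ge 1\); parameters are illustrative):

import cmath, math

alpha, beta, theta, sigma, h = 1.5, 0.3, 1.0, 1.0, 0.5  # illustrative
C = sigma ** alpha / (alpha * theta)
T = beta * math.tan(math.pi * alpha / 2)
e = math.exp

def sgn(x):
    return (x > 0) - (x < 0)

def kappa(u, v, j):
    # Eq. (5.60); the derivation assumes j >= 1
    c1 = u * e(-theta * j * h) + v * (e(-theta * h) - 1)
    c2 = u * e(-theta * j * h) + v * e(-theta * h)
    t1 = abs(c1) ** alpha * (1 - 1j * T * sgn(c1))
    t2 = abs(c2) ** alpha * (e(alpha * theta * h) - 1) * (1 - 1j * T * sgn(c2))
    t3 = abs(u) ** alpha * (1 - e(-alpha * theta * (j - 1) * h)) * (1 - 1j * T * sgn(u))
    return cmath.exp(-C * (t1 + t2 + t3))

print(kappa(1.0, 1.0, 1))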

Then

$$\begin{aligned} {\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\cos u_1\tilde{X}_{jh})= & {} \frac{1}{2}\bigg [{\mathbb E}(\cos (u_1\tilde{X}_{jh}+u_3(\tilde{X}_h-\tilde{X}_0)))\nonumber \\&+\,{\mathbb E}(\cos (u_1\tilde{X}_{jh}-u_3(\tilde{X}_h-\tilde{X}_0)))\bigg ]\nonumber \\= & {} \frac{1}{2}\mathfrak {R}\left[ \kappa _{j}(u_1,u_3)+\kappa _{j}(u_1,-u_3)\right] . \end{aligned}$$
(5.61)

Then we can get \(\sigma _{g_1g_7}\) from Eq. (5.56). By changing the value of \(u_1\), we can get \(\sigma _{g_3g_7}\), \(\sigma _{g_5g_7}\), and \(\sigma _{g_9g_7}\).
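Numerically, (5.56) is evaluated by truncating the series; the lag-zero expectation is \(\frac{1}{2}\mathfrak {R}[\rho _0(u_1,u_3)+\rho _0(u_1,-u_3)]\), as noted after (5.58). The sketch below repeats \(A_j\), \(\rho _j\), and \(\kappa _j\) so that it is self-contained; all parameter values are illustrative.

import cmath, math

alpha, beta, theta, sigma, h, u1, u3 = 1.5, 0.3, 1.0, 1.0, 0.5, 1.0, 1.0  # illustrative
C = sigma ** alpha / (alpha * theta)
T = beta * math.tan(math.pi * alpha / 2)
e = math.exp

def sgn(x):
    return (x > 0) - (x < 0)

def Aj(u, v, j):                                        # Eq. (5.12)
    w = u + v * e(-theta * j * h)
    p1 = abs(w) ** alpha
    p2 = abs(v) ** alpha * (1 - e(-alpha * theta * j * h))
    return e(-C * (p1 + p2)) * math.cos(C * T * (p1 * sgn(w) + p2 * sgn(v)))

def rho(u, v, j):                                       # Eq. (5.58), j >= 0
    c1 = u + v * (e(-theta * (j + 1) * h) - e(-theta * j * h))
    t1 = abs(c1) ** alpha * (1 - 1j * T * sgn(c1))
    t2 = (abs(v) ** alpha * (1 - e(-theta * h)) ** alpha
          * (1 - e(-alpha * theta * j * h)) * (1 + 1j * T * sgn(v)))
    t3 = abs(v) ** alpha * (1 - e(-alpha * theta * h)) * (1 - 1j * T * sgn(v))
    return cmath.exp(-C * (t1 + t2 + t3))

def kappa(u, v, j):                                     # Eq. (5.60), j >= 1
    c1 = u * e(-theta * j * h) + v * (e(-theta * h) - 1)
    c2 = u * e(-theta * j * h) + v * e(-theta * h)
    t1 = abs(c1) ** alpha * (1 - 1j * T * sgn(c1))
    t2 = abs(c2) ** alpha * (e(alpha * theta * h) - 1) * (1 - 1j * T * sgn(c2))
    t3 = abs(u) ** alpha * (1 - e(-alpha * theta * (j - 1) * h)) * (1 - 1j * T * sgn(u))
    return cmath.exp(-C * (t1 + t2 + t3))

def sigma_g1g7(J=200):
    A0u = Aj(u1, 0.0, 0)                                # (5.57)
    A1u = Aj(-u3, u3, 1)
    lag0 = 0.5 * (rho(u1, u3, 0) + rho(u1, -u3, 0)).real - A0u * A1u
    tail = sum(0.5 * (rho(u1, u3, j) + rho(u1, -u3, j)).real - A0u * A1u        # (5.59)
               + 0.5 * (kappa(u1, u3, j) + kappa(u1, -u3, j)).real - A1u * A0u  # (5.61)
               for j in range(1, J + 1))
    return lag0 + tail

print(sigma_g1g7())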

Computation of \(\sigma _{g_1g_8}\). From the definition of \(g_1\) and \(g_8\) we have

$$\begin{aligned} \sigma _{g_1g_8}= & {} \mathrm{cov}(\cos u_1\tilde{X}_0,\sin u_3(\tilde{X}_h-\tilde{X}_0)) \nonumber \\&+\sum _{j=1}^{\infty }[\mathrm{cov}(\cos u_1\tilde{X}_0,\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,\mathrm{cov}(\sin u_3(\tilde{X}_h-\tilde{X}_0),\cos u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}(\cos u_1\tilde{X}_0\sin u_3(\tilde{X}_h-\tilde{X}_0))-{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\sum _{j=1}^{\infty }[{\mathbb E}(\cos u_1\tilde{X}_0\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&-\,{\mathbb E}(\cos u_1\tilde{X}_0){\mathbb E}(\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,{\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)\cos u_1\tilde{X}_{jh})\nonumber \\&-\,{\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\cos u_1\tilde{X}_{jh})], \end{aligned}$$
(5.62)

where

$$\begin{aligned}&{\mathbb E}\cos u_1\tilde{X}_0=A_{0}(u_1), \ {\mathbb E}\sin u_3(\tilde{X}_h-\tilde{X}_0)=B_{1}(-u_{3},u_3). \end{aligned}$$
(5.63)

By Eqs. (5.58) and (5.60), we get

$$\begin{aligned} {\mathbb E}(\cos u_1\tilde{X}_0\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))= & {} \frac{1}{2}\bigg [{\mathbb E}(\sin (u_1\tilde{X}_0+u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\nonumber \\&-\,{\mathbb E}(\sin (u_1\tilde{X}_0-u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\bigg ]\nonumber \\= & {} \frac{1}{2}\mathfrak {I}\left[ \rho _j(u_1,u_3)-\rho _j(u_1,-u_3)\right] , \end{aligned}$$
(5.64)
$$\begin{aligned} {\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)\cos u_1\tilde{X}_{jh})= & {} \frac{1}{2}\bigg [{\mathbb E}(\sin (u_1\tilde{X}_{jh}+u_3(\tilde{X}_h-\tilde{X}_0)))\nonumber \\&-\,{\mathbb E}(\sin (u_1\tilde{X}_{jh}-u_3(\tilde{X}_h-\tilde{X}_0)))\bigg ]\nonumber \\= & {} \frac{1}{2}\mathfrak {I}\left[ \kappa _j(u_1,u_3)-\kappa _j(u_1, -u_3)\right] . \end{aligned}$$
(5.65)

Then we can get \(\sigma _{g_1g_8}\) from Eq. (5.62). By changing the value of \(u_1\), we can get \(\sigma _{g_3g_8}\), \(\sigma _{g_5g_8}\), and \(\sigma _{g_9g_8}\).

Computation of \(\sigma _{g_2g_7}\). From the definition of \(g_2\) and \(g_7\) we have

$$\begin{aligned} \sigma _{g_2g_7}= & {} \mathrm{cov}(\sin u_1\tilde{X}_0,\cos u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\sum _{j=1}^{\infty }[\mathrm{cov}(\sin u_1\tilde{X}_0,\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})) \nonumber \\&+\,\mathrm{cov}(\cos u_3(\tilde{X}_h-\tilde{X}_0),\sin u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}(\sin u_1\tilde{X}_0\cos u_3(\tilde{X}_h-\tilde{X}_0))-{\mathbb E}(\sin u_1\tilde{X}_0){\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\sum _{j=1}^{\infty }[{\mathbb E}(\sin u_1\tilde{X}_0\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&-\,{\mathbb E}(\sin u_1\tilde{X}_0){\mathbb E}(\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\sin u_1\tilde{X}_{jh})\nonumber \\&-\,{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\sin u_1\tilde{X}_{jh})]. \end{aligned}$$
(5.66)

Note that

$$\begin{aligned}&{\mathbb E}\sin u_1\tilde{X}_0=B_{0}(u_1), \ {\mathbb E}\cos u_3(\tilde{X}_h-\tilde{X}_0)=A_{1}(-u_{3},u_3). \end{aligned}$$
(5.67)

By Eqs. (5.58) and (5.60), we get

$$\begin{aligned}&{\mathbb E}(\sin u_1\tilde{X}_0\cos u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&\quad =\frac{1}{2}\bigg [{\mathbb E}(\sin (u_1\tilde{X}_0+u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\nonumber \\&\qquad +\,{\mathbb E}(\sin (u_1\tilde{X}_0-u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\bigg ]\nonumber \\&\quad =\frac{1}{2}\mathfrak {I}\left[ \rho _j(u_1, u_3)+\rho _j(u_1, -u_3)\right] \end{aligned}$$
(5.68)

and

$$\begin{aligned}&{\mathbb E}(\cos u_3(\tilde{X}_h-\tilde{X}_0)\sin u_1\tilde{X}_{jh})\nonumber \\&\quad =\frac{1}{2}\bigg [{\mathbb E}(\sin (u_1\tilde{X}_{jh}+u_3(\tilde{X}_h-\tilde{X}_0)))\nonumber \\&\qquad +\,{\mathbb E}(\sin (u_1\tilde{X}_{jh}-u_3(\tilde{X}_h-\tilde{X}_0)))\bigg ]\nonumber \\&\quad =\frac{1}{2}\mathfrak {I}\left[ \kappa _j(u_1,u_3)+\kappa _j(u_1,-u_3)\right] . \end{aligned}$$
(5.69)

Then we can get \(\sigma _{g_2g_7}\) from Eq. (5.66). By changing the value of \(u_1\), we can get \(\sigma _{g_4g_7}\), \(\sigma _{g_6g_7}\), and \(\sigma _{g_{10}g_7}\).

Computation of \(\sigma _{g_2g_8}\). From the definition of \(g_2\) and \(g_8\) we have

$$\begin{aligned} \sigma _{g_2g_8}= & {} \mathrm{cov}(\sin u_1\tilde{X}_0,\sin u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\,\sum _{j=1}^{\infty }[\mathrm{cov}(\sin u_1\tilde{X}_0,\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,\mathrm{cov}(\sin u_3(\tilde{X}_h-\tilde{X}_0),\sin u_1\tilde{X}_{jh})]\nonumber \\= & {} {\mathbb E}(\sin u_1\tilde{X}_0\sin u_3(\tilde{X}_h-\tilde{X}_0)) \nonumber \\&-\,{\mathbb E}(\sin u_1\tilde{X}_0){\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0))\nonumber \\&+\,\sum _{j=1}^{\infty }[{\mathbb E}(\sin u_1\tilde{X}_0\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&-\,{\mathbb E}(\sin u_1\tilde{X}_0){\mathbb E}(\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh}))\nonumber \\&+\,{\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)\sin u_1\tilde{X}_{jh})\nonumber \\&-\,{\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)){\mathbb E}(\sin u_1\tilde{X}_{jh})]. \end{aligned}$$
(5.70)

Note that

$$\begin{aligned}&{\mathbb E}\sin u_1\tilde{X}_0=B_{0}(u_1), \ {\mathbb E}\sin u_3(\tilde{X}_h-\tilde{X}_0)=B_{1}(-u_{3},u_3). \end{aligned}$$
(5.71)

By Eqs. (5.58) and (5.60), we find

$$\begin{aligned}&{\mathbb E}(\sin u_1\tilde{X}_0\sin u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})) \nonumber \\&\quad =\frac{1}{2}\bigg [{\mathbb E}(\cos (u_1\tilde{X}_0-u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\nonumber \\&\qquad -\,{\mathbb E}(\cos (u_1\tilde{X}_0+u_3(\tilde{X}_{(j+1)h}-\tilde{X}_{jh})))\bigg ]\nonumber \\&\quad =\frac{1}{2}\mathfrak {R}\left[ \rho _j(u_1, -u_3)-\rho _j(u_1,u_3)\right] . \end{aligned}$$
(5.72)
$$\begin{aligned}&{\mathbb E}(\sin u_3(\tilde{X}_h-\tilde{X}_0)\sin u_1\tilde{X}_{jh}) \nonumber \\&\quad =\frac{1}{2}\left[ {\mathbb E}(\cos (u_1\tilde{X}_{jh}-u_3(\tilde{X}_h-\tilde{X}_0)))\right. \nonumber \\&\qquad -\,\left. {\mathbb E}(\cos (u_1\tilde{X}_{jh}+u_3(\tilde{X}_h-\tilde{X}_0)))\right] \nonumber \\&\quad =\frac{1}{2}\mathfrak {R}\left[ \kappa _j(u_1,-u_3)-\kappa _{j}(u_1,u_3)\right] . \end{aligned}$$
(5.73)

Then we can get \(\sigma _{g_2g_8}\) from Eq. (5.70). Similarly, we can get \(\sigma _{g_4g_8}\), \(\sigma _{g_6g_8}\), and \(\sigma _{g_{10}g_8}\).

Thus, we have obtained the explicit expression of \(\varSigma _{10}=(\sigma _{g_{k}g_{l}})_{1\le k,l\le 10}\).
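Once the individual entries have been coded as in the sketches above, the matrix itself can be assembled by symmetry. A small scaffold (the function name and the entries mapping are ours, purely illustrative):

import numpy as np

def build_sigma10(entries):
    # entries maps (k, l) with 1 <= k <= l <= 10 to the truncated-series value
    # of sigma_{g_k g_l}; the long-run covariance matrix is symmetric.
    S = np.zeros((10, 10))
    for (k, l), val in entries.items():
        S[k - 1, l - 1] = val
        S[l - 1, k - 1] = val
    return S

# usage: build_sigma10({(1, 1): sigma_g1g1(), (1, 2): sigma_g1g2(), ...})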