1 Introduction and Main Results

1.1 Introduction

In many situations, the stochastic processes involved are constrained not to cross a certain boundary, or even to remain between two boundaries. For instance, the reflected Ornstein–Uhlenbeck process behaves like the standard Ornstein–Uhlenbeck process in the interior of its domain. When it reaches the boundary, however, the sample path is pushed back into the interior with minimal force. Processes of this kind have a wide range of applications in queueing systems, financial engineering, and mathematical biology.

Consider the following reflected Ornstein–Uhlenbeck process with one-sided barrier \(b_L\):

$$\begin{aligned} \left\{ \begin{array}{lll} \mathrm{d}X_t=(-\theta X_{t}+\gamma )\mathrm{d}t+\mathrm{d}W_{t}+\mathrm{d}L_t,\\ X_t\ge b_L,~\text {for all } t\ge 0,\\ X_0=x_0\ge b_L, \end{array}\right. \end{aligned}$$
(1.1)

where \(\theta \in (0,+\infty )\) and \(\gamma \) are unknown parameters, and \(W=\{W_{t},t\in [0,\infty )\}\) is a standard Brownian motion. Here, the process \(L=\{L_t, t\ge 0\}\) is the minimal continuous increasing process with \(L_0=0\) that keeps \(X_t\ge b_\mathrm{L}\) for all \(t\ge 0\). The process  L increases only when  X hits the boundary  \(b_\mathrm{L}\), so that \(\int _0^{\infty }I_{\big \{X_t>b_{\mathrm{L}}\big \}}\mathrm{d}L_t=0\). Denote by \(P_{\theta ,\gamma ,x_0}\) the probability distribution of the solution of (1.1) on \(C({{\mathbb {R}}}_{+},{{\mathbb {R}}})\), the space of continuous functions from \(\mathbb {R}^+\) to \(\mathbb {R}\). Unless otherwise specified, we suppress \(\theta , \gamma \) and simply write \(P_{x_0}\).
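For intuition, the dynamics (1.1) can be approximated on a grid by an Euler scheme in which each unreflected step is projected back onto \([b_\mathrm{L},\infty )\) and the push is accumulated into  L. The following is a minimal sketch; the function name and all parameter values are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def simulate_reflected_ou(theta, gamma, b_L, x0, T, dt, rng):
    """Euler scheme for the reflected OU process (1.1).

    Each unreflected Euler step is pushed back onto [b_L, infinity)
    by the minimal amount, and the pushes accumulate into L,
    mimicking the one-sided Skorokhod reflection.
    """
    n = int(round(T / dt))
    dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
    X = np.empty(n + 1)
    L = np.empty(n + 1)
    X[0], L[0] = x0, 0.0
    for i in range(n):
        y = X[i] + (-theta * X[i] + gamma) * dt + dW[i]  # free step
        push = max(0.0, b_L - y)                         # minimal push
        X[i + 1] = y + push
        L[i + 1] = L[i] + push
    return X, L

rng = np.random.default_rng(0)
X, L = simulate_reflected_ou(theta=1.0, gamma=0.5, b_L=0.0,
                             x0=0.3, T=100.0, dt=1e-3, rng=rng)
```

By construction the path stays above \(b_\mathrm{L}\) and  L is nondecreasing, increasing only on the steps where the reflection acts.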

For \(\theta \in (0,+\infty )\), the reflected Ornstein–Uhlenbeck process (1.1) is an ergodic Markov process  [36, 37], and its properties have been extensively studied. Specifically, we refer to  [30, 32, 36] for the analysis of the transition density,  [8, 10, 11] for the study of first passage times,  [36, 37] for the formula of the stationary distribution, and  [26, 31] for limit theorems of the processes \(\{X_t, L_t, t\ge 0\}\).

The reflected Ornstein–Uhlenbeck process (1.1), as an extended Vasicek model, captures the mean-reversion property of short-term interest rates. Indeed,  \(\theta \) indicates the mean reversion rate, whereas  \(\gamma \), together with \(\theta \), determines the long-run average. Estimating these parameters is therefore a crucial step for practical applications. By the Girsanov formula (see Ward and Glynn [36], Bo et al.  [9]), the log-likelihood ratio process can be written as

$$\begin{aligned} \log \left( \frac{\mathrm{d}P_{\theta ,\gamma ,x_0}}{\mathrm{d}P_{0,0,x_0}}\bigg |_{\mathcal {F}_T}\right)= & {} -\theta \int _0^TX_t\mathrm{d}(X_t-L_t)+\gamma \big (X_T-L_T-x_0\big ) \\&-\frac{\theta ^2}{2}\int _0^TX_t^2\mathrm{d}t+\theta \gamma \int _0^TX_t\mathrm{d}t-\frac{\gamma ^2}{2}T, \end{aligned}$$

where \(\mathcal {F}_T=\sigma (W_s,s\le T)\). The maximum likelihood estimators of \(\theta \) and \(\gamma \) are given by

$$\begin{aligned} \widehat{\theta }_{T}=\frac{-T\int _{0}^{T}X_{t}\mathrm{d}(X_t-L_t)+(X_T-L_T-x_0)\int _0^TX_t\mathrm{d}t}{T\int _{0}^{T}X_{t}^2\mathrm{d}t-\big (\int _0^TX_t\mathrm{d}t\big )^2} \end{aligned}$$
(1.2)

and

$$\begin{aligned} \widehat{\gamma }_{T}=\frac{-\int _0^TX_t\mathrm{d}t\int _{0}^{T}X_{t}\mathrm{d}(X_{t}-L_t) +(X_T-L_T-x_0)\int _0^TX_t^2\mathrm{d}t}{T\int _{0}^{T}X_{t}^2\mathrm{d}t-\big (\int _0^TX_t\mathrm{d}t\big )^2}. \end{aligned}$$
(1.3)
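On sampled data, the estimators (1.2) and (1.3) can be evaluated by replacing each integral with a Riemann sum, with the stochastic integral \(\int _0^TX_t\mathrm{d}(X_t-L_t)\) discretized at left endpoints. The sketch below is our own illustration (helper name, seed and parameters are arbitrary choices): it simulates a reflected OU path with \(\theta =1\), \(\gamma =0.5\), \(b_L=0\) and recovers the estimates.

```python
import numpy as np

def mle_theta_gamma(X, L, dt):
    """Discrete-sample version of the estimators (1.2)-(1.3).

    X, L: paths of X_t and L_t sampled on a uniform grid of mesh dt.
    The stochastic integral int_0^T X_t d(X_t - L_t) is approximated
    by a left-endpoint (Ito) Riemann sum.
    """
    T = dt * (len(X) - 1)
    dXL = np.diff(X) - np.diff(L)       # increments of X_t - L_t
    I = np.sum(X[:-1] * dXL)            # ~ int_0^T X_t d(X_t - L_t)
    S1 = np.sum(X[:-1]) * dt            # ~ int_0^T X_t dt
    S2 = np.sum(X[:-1] ** 2) * dt       # ~ int_0^T X_t^2 dt
    end = X[-1] - L[-1] - X[0]          # X_T - L_T - x_0
    denom = T * S2 - S1 ** 2
    return (-T * I + end * S1) / denom, (-S1 * I + end * S2) / denom

# Illustration: Euler path of (1.1) with theta = 1, gamma = 0.5, b_L = 0.
rng = np.random.default_rng(42)
dt, n = 1e-3, 200_000
dW = rng.normal(0.0, np.sqrt(dt), size=n)
X = np.empty(n + 1)
L = np.zeros(n + 1)
X[0] = 0.3
for i in range(n):
    y = X[i] + (-X[i] + 0.5) * dt + dW[i]
    push = max(0.0, -y)                 # reflection at b_L = 0
    X[i + 1] = y + push
    L[i + 1] = L[i] + push
theta_hat, gamma_hat = mle_theta_gamma(X, L, dt)
```

The left-endpoint rule matters: a right-endpoint sum approximates a different (Stratonovich-type) integral and introduces a discretization bias in \(\widehat{\theta }_T\).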

In the case \(\theta >0, \gamma \equiv 0\), Bo et al.  [9] studied the strong consistency and asymptotic normality of  \(\widehat{\theta }_T\), while Zang and Zhang  [38] analyzed the Cramér–Rao lower bound. Moreover, Hu et al.  [25] constructed another estimator based on discrete observations and established its asymptotic normality. For more details, one can refer to Hu and Lee  [24], Lee and Song  [29], and the references therein. On the other hand, Zang and Zhang  [39] considered the asymptotic behavior of the trajectory fitting estimator for nonergodic reflected Ornstein–Uhlenbeck processes (\(\theta <0, \gamma \equiv 0\)).

Compared with the extensive literature on classical Ornstein–Uhlenbeck-type processes   [3,4,5,6,7, 14,15,16,17,18,19,20,21,22,23, 27], large and moderate deviations for estimators in reflected Ornstein–Uhlenbeck processes remain largely unexplored. In this paper, our goal is to fill this gap, refining the known results of Bo et al.  [9] and Zang and Zhang  [38]. We analyze the reflected Ornstein–Uhlenbeck process (1.1) from the viewpoint of regenerative processes, a method quite different from the techniques in existing work.

Generally speaking, moderate deviations fill the gap between the limiting distribution and the large deviations. More precisely, consider the estimation of \(\mathbb {P}\left( \frac{\sqrt{T}}{\lambda _T}\left( \begin{array}{ll}\widehat{\theta }_T-\theta \\ \widehat{\gamma }_T-\gamma \\ \end{array}\right) \in A\right) \), where A is a given domain of deviations and \(\lambda _{T}\) denotes the scale of deviation. When \(\lambda _{T}\equiv 1\), this is exactly the regime of the limiting distribution. When  \(\lambda _{T}\equiv \sqrt{T}\), this corresponds to the large deviations. And when \(\lambda _{T}\) lies between 1 and \(\sqrt{T}\), that is, \(\lambda _T\rightarrow \infty \) and \(\frac{\lambda _T}{\sqrt{T}}\rightarrow 0\) as \(T\rightarrow \infty \), we are in the so-called moderate deviation regime.
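For concreteness, a standard family of scales interpolating between these two regimes is the power family (the exponent \(\alpha \) is a free illustrative choice):

```latex
\lambda_T = T^{\alpha}, \qquad \alpha \in \left(0, \tfrac{1}{2}\right),
\qquad \text{with speed } \lambda_T^2 = T^{2\alpha},
```

which satisfies \(\lambda _T\rightarrow \infty \) and \(\lambda _T/\sqrt{T}=T^{\alpha -1/2}\rightarrow 0\) for every such \(\alpha \).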

1.2 Main Results

Denote the stationary distribution of (1.1) by ( [36])

$$\begin{aligned} \pi (\mathrm{d}x)=\frac{e^{-\theta \big (x-\gamma /\theta \big )^2}}{M}I_{[b_\mathrm{L},\infty )}\mathrm{d}x,\quad M=\int _{b_\mathrm{L}}^{\infty }e^{-\theta (x-\gamma /\theta )^2}\mathrm{d}x. \end{aligned}$$
(1.4)

Now, we state our main results as follows:

Theorem 1.1

Let \(\lambda _T\) be positive numbers satisfying, as \(T\rightarrow \infty \),

$$\begin{aligned} \lambda _T\rightarrow \infty , \quad \frac{\lambda _T}{\sqrt{T}}\rightarrow 0. \end{aligned}$$
(1.5)

Then, the family \(\left\{ \frac{\sqrt{T}}{\lambda _T}\left( \begin{array}{ll}\widehat{\theta }_T-\theta \\ \widehat{\gamma }_T-\gamma \\ \end{array}\right) , T>0\right\} \) satisfies a large deviation principle with speed \(\lambda _T^2\) and rate function

$$\begin{aligned} I(x)=\frac{1}{2}x^{\tau }\Sigma ^{-1}x,\quad x\in \mathbb {R}^2, \end{aligned}$$

where \(\mu _1=\int _{b_\mathrm{L}}^{\infty }x\pi (\mathrm{d}x)\),  \(\mu _2=\int _{b_\mathrm{L}}^{\infty }x^2\pi (\mathrm{d}x)\) and

$$\begin{aligned} \Sigma =\big (\mu _2-\mu _1^2\big )^{-1}\left( \begin{array}{ll}1 &{} \mu _1\\ \mu _1 &{} \mu _2\\ \end{array}\right) . \end{aligned}$$

Explicitly, for any Borel set \(A\in \mathcal {B}(\mathbb {R}^2)\), with interior \(A^o\) and closure \(\bar{A}\),

$$\begin{aligned} -\inf _{x\in A^o}I(x)&\le \liminf _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P\left( \frac{\sqrt{T}}{\lambda _T}\left( \begin{array}{ll}\widehat{\theta }_T-\theta \\ \widehat{\gamma }_T-\gamma \\ \end{array}\right) \in A\right) \\&\le \limsup _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P\left( \frac{\sqrt{T}}{\lambda _T}\left( \begin{array}{ll}\widehat{\theta }_T-\theta \\ \widehat{\gamma }_T-\gamma \\ \end{array}\right) \in A\right) \le -\inf _{x\in \bar{A}}I(x). \end{aligned}$$
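The quantities \(\mu _1, \mu _2\) and \(\Sigma \) in Theorem 1.1 are explicit integrals against the truncated Gaussian law (1.4), and can be evaluated by elementary quadrature. The following sketch is our own illustration (parameter values and the helper name are arbitrary choices):

```python
import numpy as np

# Quadrature evaluation of mu_1, mu_2, Sigma and the rate function I
# of Theorem 1.1, for illustrative parameters.
theta, gamma, b_L = 1.0, 0.5, 0.0
x = np.linspace(b_L, b_L + 12.0, 2_000_001)   # [b_L, infty), truncated far in the tail
dx = x[1] - x[0]
w = np.exp(-theta * (x - gamma / theta) ** 2)  # unnormalized density in (1.4)
M = w.sum() * dx                               # normalizing constant
mu1 = (x * w).sum() * dx / M                   # first stationary moment
mu2 = (x ** 2 * w).sum() * dx / M              # second stationary moment

Sigma = np.array([[1.0, mu1], [mu1, mu2]]) / (mu2 - mu1 ** 2)

def rate_I(v):
    """Good rate function I(x) = x' Sigma^{-1} x / 2 of Theorem 1.1."""
    v = np.asarray(v, dtype=float)
    return 0.5 * v @ np.linalg.solve(Sigma, v)
```

For these parameters, (1.4) is a Gaussian with mean \(\gamma /\theta =0.5\) and variance \(1/(2\theta )=0.5\) truncated to \([0,\infty )\), giving \(\mu _1\approx 0.79\).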

Then, by the contraction principle applied to the coordinate projections, we immediately obtain:

Corollary 1.1

Under condition (1.5), the families

$$\begin{aligned} \left\{ \frac{\sqrt{T}}{\lambda _T}\big (\widehat{\theta }_T-\theta \big ), T>0\right\} ,\quad \left\{ \frac{\sqrt{T}}{\lambda _T}\big (\widehat{\gamma }_T-\gamma \big ), T>0\right\} \end{aligned}$$

satisfy large deviation principles with speed \(\lambda _T^2\) and rate functions

$$\begin{aligned} J_{\theta }(x)=\frac{1}{2}\big (\mu _2-\mu _1^2\big )x^2,\quad J_{\gamma }(x)=\frac{1}{2\mu _2}\big (\mu _2-\mu _1^2\big )x^2, \end{aligned}$$

respectively.

In particular, for any \(x\ge 0\), we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P_{x_0}\left( \frac{\sqrt{T}}{\lambda _T}\big |\widehat{\theta }_T-\theta \big |\ge x\right) =-J_{\theta }(x) \end{aligned}$$

and

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P_{x_0}\left( \frac{\sqrt{T}}{\lambda _T}\big |\widehat{\gamma }_T-\gamma \big |\ge x\right) =-J_{\gamma }(x). \end{aligned}$$

The paper is organized as follows. In Sect. 2, using regenerative process techniques, we first state some properties of the reflected Ornstein–Uhlenbeck process (1.1), and then establish the exponential equivalence of the functionals  \(\int _0^TX_t\mathrm{d}t, \int _0^TX_t^2\mathrm{d}t\) with their asymptotic expectations. The proof of the main result, Theorem 1.1, is postponed to Sect. 3. In Sect. 4, we extend our results to the case of two-sided barriers. The main tools of this paper are regenerative process techniques, the strong Markov property, and moderate deviations for martingales. Throughout this paper,  \(C_0,C_1\) denote positive constants, depending only on  \(b_L, \theta , \gamma \) and the initial point \(x_0\), whose values may differ from line to line.

2 Regenerative Process and Exponential Equivalence

To obtain the moderate deviations for  \(\left( \hat{\theta }_T, \hat{\gamma }_T\right) \), the key point is to show that the functionals  \(\int _0^TX_t\mathrm{d}t\) and  \(\int _0^TX_t^2\mathrm{d}t\) are exponentially equivalent to their asymptotic expectations, respectively. Notice that the existing methods (the Girsanov formula technique [3,4,5,6,7, 14, 15, 19]; multiple Wiener–Itô integrals [21, 27]; the log-Sobolev inequality method [13, 18, 20]) may not work here. Instead, regenerative process techniques will be employed; here we benefit greatly from Banerjee and Mukherjee  [2].

2.1 Regenerative Process View of Functionals

We first briefly recall the definition of a regenerative process  [33, 34].

Definition 2.1

The process \(X=\big \{X_t, t\ge 0\big \}\) is a regenerative process, if there exist random times \(0\le \Theta _0\le \Theta _1\le \cdots \), such that for \(k\ge 1\),

  1. (1)

     \(\big \{X_{\Theta _k+t}, t\ge 0\big \}\) has the same distribution as  \(\big \{X_{\Theta _0+t}, t\ge 0\big \}\).

  2. (2)

     \(\big \{X_{\Theta _k+t}, t\ge 0\big \}\) is independent of  \(\big \{X_{t}, 0\le t\le \Theta _k\big \}\).

In particular, if \(\Theta _0=0\), the process X is called a non-delayed regenerative process. Otherwise,  X is called a delayed regenerative process.

Loosely speaking, a regenerative process starts anew at the regeneration times \(\big \{\Theta _k, k\ge 1\big \}\), independently of the past. Moreover, the regeneration times split the process into renewal cycles that are independent and identically distributed, except possibly for the first cycle.

For the reflected Ornstein–Uhlenbeck process X (1.1), let

$$\begin{aligned} \tau _X(x)=\inf \Big \{t\ge 0: X_t=x\Big \}. \end{aligned}$$
(2.1)

Now, we can define the regeneration times in terms of hitting times as follows:

$$\begin{aligned} \alpha _0=0,\quad \alpha _{2k+1}=\inf \left\{ t\ge \alpha _{2k}: X_t=b_L+1\right\} ,\quad \alpha _{2k+2}=\inf \left\{ t\ge \alpha _{2k+1}: X_t=b_L+2\right\} , \end{aligned}$$
(2.2)
$$\begin{aligned} \Theta _k=\alpha _{2k+2},\quad N_T=\sup \Big \{k\ge -1: \Theta _{k}\le T\Big \}. \end{aligned}$$
(2.3)
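On sampled data, the exact hitting times in (2.2)–(2.3) can only be approximated by level crossings: after each regeneration, the path must first go (weakly) below \(b_L+1\) and then (weakly) above \(b_L+2\); the second crossing is the next \(\Theta _k\). The sketch below (the helper name and the toy path are ours, for illustration only) extracts these times from a discretized trajectory.

```python
import numpy as np

def regeneration_times(X, dt, b_L):
    """Grid approximation of the regeneration times in (2.2)-(2.3).

    Exact hitting of a level is replaced by a crossing: a value
    <= b_L + 1 plays the role of alpha_{2k+1}, and the next value
    >= b_L + 2 plays the role of alpha_{2k+2} = Theta_k.
    """
    thetas = []
    below_found = False          # has the level b_L + 1 been reached?
    for i, val in enumerate(X):
        if not below_found and val <= b_L + 1.0:
            below_found = True
        elif below_found and val >= b_L + 2.0:
            thetas.append(i * dt)
            below_found = False
    return np.array(thetas)

# Toy path with b_L = 0: the crossings of levels 1 and 2 occur at
# indices 1, 3, 4, 5, so Theta_0 = 3.0 and Theta_1 = 5.0 here.
path = np.array([3.0, 0.5, 1.5, 2.5, 0.9, 2.1])
thetas = regeneration_times(path, dt=1.0, b_L=0.0)
```

The crossing approximation converges to the true hitting times as the mesh \(\mathrm{d}t\) shrinks, since the paths of  X are continuous.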

The strong Markov property implies that  X is a regenerative process with regeneration times given by \(\Big \{\Theta _k, k\ge -1\Big \}\). Then, under \(P_{x_0}\) with \(x_0\ge b_L\),

$$\begin{aligned} \left\{ \int _{\Theta _{k-1}}^{\Theta _k}X_t\mathrm{d}t, \Theta _k-\Theta _{k-1}: k\ge 1\right\} ,\quad \left\{ \int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t, \Theta _k-\Theta _{k-1}: k\ge 1\right\} \end{aligned}$$

are both independent and identically distributed sequences. Moreover, we have the following important estimates

$$\begin{aligned} \left| \int _0^TX_t\mathrm{d}t-\sum _{k=1}^{N_\mathrm{T}}\int _{\Theta _{k-1}}^{\Theta _k}X_t\mathrm{d}t\right| \le \left| \int _0^{\Theta _0\wedge T}X_t\mathrm{d}t\right| +\left| \int _{\Theta _{N_\mathrm{T}}}^{T}X_t\mathrm{d}t\right| \end{aligned}$$
(2.4)

and

$$\begin{aligned} \left| \int _0^TX_t^2dt-\sum _{k=1}^{N_T}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t\right| \le \int _0^{\Theta _0}X_t^2\mathrm{d}t +\int _{\Theta _{N_T}}^{\Theta _{N_T+1}}X_t^2\mathrm{d}t, \end{aligned}$$
(2.5)

where the sum is  0 if the upper index is strictly less than the lower index.

The tail asymptotics of the regeneration time \(\Theta _0\), of the renewal rewards \(\int _0^{\Theta _0}X_t\mathrm{d}t\), \(\int _0^{\Theta _0}X_t^2\mathrm{d}t\), and of the boundary terms \(\int _0^{\Theta _0\wedge T}X_t\mathrm{d}t\), \(\int _{\Theta _{N_\mathrm{T}}}^{T}X_t\mathrm{d}t\) can be analyzed as follows:

Lemma 2.1

For all \(x_0\ge b_L\) and  \(\Theta _0\) defined by (2.3), there exist positive constants \(C_0, C_1\), depending only on \(x_0, b_L, \theta \) and \(\gamma \), such that for T large enough

$$\begin{aligned} P_{x_0}\Big (\Theta _0>T\Big )\le C_0e^{-C_1 T}. \end{aligned}$$
(2.6)

Moreover, there exists some  \(\eta >0\) such that \(E_{x_0}e^{\eta \Theta _0}<\infty \).

Proof

Firstly, if \(b_L\le x_0\le b_L+1\), then \(\Theta _0=\tau _X(b_L+2)\). Define the following Ornstein–Uhlenbeck process

$$\begin{aligned} \mathrm{d}Y_t=(-\theta Y_{t}+\gamma )\mathrm{d}t+\mathrm{d}W_{t},\quad Y_0=x_0. \end{aligned}$$
(2.7)

Given \(X_0=Y_0=x_0\), a comparison argument yields \(X_t\ge Y_t\) for all  \(t\ge 0\), and hence \(\tau _X(b_\mathrm{L}+2)\le \tau _Y(b_\mathrm{L}+2)\), where

$$\begin{aligned} \tau _Y(x)=\inf \Big \{t\ge 0: Y_t=x\Big \}. \end{aligned}$$
(2.8)

By Corollary 3.1 in Alili et al. ( [1]), we have for  T large enough

$$\begin{aligned} P_{x_0}\Big (\Theta _0>T\Big )=P_{x_0}\Big (\tau _X(b_\mathrm{L}+2)>T\Big )\le P_{x_0}\Big (\tau _Y(b_L+2)>T\Big )\le C_0e^{-C_1 T},\nonumber \\ \end{aligned}$$
(2.9)

where \(C_0, C_1\) are positive constants depending only on \(x_0, b_L, \theta \) and \(\gamma \).

On the other hand, if \(x_0>b_L+1\), we have \(\tau _X(b_L+1)=\tau _Y(b_L+1)=\alpha _1\), and \(X_t=Y_t\) on the interval \([0,\tau _X(b_L)]\), where  Y is defined by (2.7). By the strong Markov property, it holds that

$$\begin{aligned} P_{x_0}\Big (\Theta _0>T\Big )&\le P_{x_0}\Big (\alpha _1>T/2\Big )+P_{x_0}\Big (\alpha _2-\alpha _{1}>T/2\Big )\\&\le P_{x_0}\Big (\tau _Y(b_L+1)>T/2\Big )+P_{b_L+1}\Big (\tau _X(b_L+2)>T/2\Big )\\&\le P_{x_0}\Big (\tau _Y(b_L+1)>T/2\Big )+P_{b_L+1}\Big (\tau _Y(b_L+2)>T/2\Big ). \end{aligned}$$

Using Corollary 3.1 in Alili et al.  [1] again, we have for  T large enough

$$\begin{aligned} P_{x_0}\Big (\Theta _0>T\Big )\le C_0e^{-C_1 T}, \end{aligned}$$
(2.10)

where \(C_0, C_1\) are positive constants depending only on \(x_0, b_L, \theta \) and \(\gamma \).

Finally, by using Fubini theorem and  (2.6), we can choose some  \(\eta >0\) such that

$$\begin{aligned} E_{x_0}e^{\eta \Theta _0}\le e^{\eta T}+\eta \int _{T}^{\infty }e^{\eta x}P_{x_0}(\Theta _0>x)\mathrm{d}x<\infty , \end{aligned}$$

which concludes the proof of this lemma. \(\square \)

Lemma 2.2

For all \(x_0\ge b_L\), there exist positive constants \(C_0, C_1\), depending only on \(x_0, b_L, \theta \) and \(\gamma \), such that for T large enough

$$\begin{aligned} P_{x_0}\Big (\Big |\int _0^{\Theta _0}X_t\mathrm{d}t\Big |\vee \Big |\int _0^{\Theta _0\wedge T}X_t\mathrm{d}t\Big |>T\Big )\le C_0e^{-C_1 T},\quad P_{x_0}\Big (\int _0^{\Theta _0}X_t^2dt>T\Big )\le C_0e^{-C_1 T}.\nonumber \\ \end{aligned}$$
(2.11)

In particular, there exists some \(\eta >0\) such that

$$\begin{aligned} E_{x_0}\exp \left\{ \eta \left( \left| \int _0^{\Theta _0\wedge T}X_t\mathrm{d}t\right| \vee \left| \int _0^{\Theta _0}X_t\mathrm{d}t\right| \right) \right\}<\infty ,\quad E_{x_0}\exp \left\{ \eta \int _0^{\Theta _0}X_t^2\mathrm{d}t\right\} <\infty .\nonumber \\ \end{aligned}$$
(2.12)

Proof

Firstly, if \(b_L\le x_0\le b_L+1\), then \(\Theta _0=\tau _X(b_L+2)\), and

$$\begin{aligned} \int _0^{\Theta _0}X_t^2dt\le \Big ((b_L+2)^2\vee b_L^2\Big )\tau _X(b_L+2), \end{aligned}$$

which implies by (2.9) that

$$\begin{aligned} P_{x_0}\Big (\int _0^{\Theta _0}X_t^2dt>T\Big )&\le P_{x_0}\Big (\Big ((b_L+2)^2\vee b_L^2\Big )\tau _X(b_L+2)>T\Big )\\&\le P_{x_0}\Big (\Big ((b_L+2)^2\vee b_L^2\Big )\tau _Y(b_L+2)>T\Big )\le C_0e^{-C_1 T}. \end{aligned}$$

On the other hand, suppose \(x_0>b_L+1\). Then, for \(t\in [0,\tau _X(b_L)]\), \(X_t=Y_t\) and \(\alpha _1=\tau _X(b_L+1)=\tau _Y(b_L+1)\), where  Y is defined by (2.7). Then, it holds that

$$\begin{aligned} \int _0^{\Theta _0}X_t^2\mathrm{d}t&=\int _0^{\tau _Y(b_\mathrm{L}+1)}Y_t^2\mathrm{d}t+\int _{\tau _X(b_\mathrm{L}+1)}^{\Theta _0}X_t^2\mathrm{d}t\\&\le \int _0^{\tau _Y(b_\mathrm{L}+1)}Y_t^2\mathrm{d}t+\Big ((b_\mathrm{L}+2)^2\vee b_\mathrm{L}^2\Big )\big (\Theta _0-\tau _X(b_\mathrm{L}+1)\big ). \end{aligned}$$

By using the strong Markov property, we obtain

$$\begin{aligned}&P_{x_0}\Big (\int _0^{\Theta _0}X_t^2\mathrm{d}t\ge T\Big )\\&\quad \le P_{x_0}\Big (\int _0^{\tau _Y(b_L+1)}Y_t^2\mathrm{d}t\ge 2T/3\Big )+P_{x_0}\Big (\Big ((b_L+2)^2\vee b_L^2\Big )\big (\Theta _0-\tau _X(b_L+1)\big )\ge T/3\Big )\\&\quad \le P_{x_0}\Big (\tau _Y(b_L+1)\ge \frac{\theta ^2T}{\theta +2\gamma ^2}\Big ) +P_{x_0}\Big (\frac{\theta +2\gamma ^2}{\theta ^2T}\int _0^{\frac{\theta ^2T}{\theta +2\gamma ^2}}Y_t^2\mathrm{d}t\ge \frac{2(\theta +2\gamma ^2)}{3\theta ^2}\Big )\\&\qquad +P_{b_L+1}\Big (\Big ((b_L+2)^2\vee b_L^2\Big )\tau _X(b_L+2)\ge T/3\Big ). \end{aligned}$$

Since \(\lim _{T\rightarrow \infty }\frac{1}{T}\int _0^TY_t^2\mathrm{d}t=\frac{\theta +2\gamma ^2}{2\theta ^2}\) almost surely, by Lemma 2.3 in Gao and Jiang  [20] and  (2.9), we have for T large enough

$$\begin{aligned} P_{x_0}\Big (\int _0^{\Theta _0}X_t^2\mathrm{d}t>T\Big )\le C_0e^{-C_1 T}. \end{aligned}$$
(2.13)

Finally, by Hölder inequality, Lemma 2.1 and (2.13), we have

$$\begin{aligned} P_{x_0}\Big (\Big |\int _0^{\Theta _0}X_t\mathrm{d}t\Big |>T\Big )&\le P_{x_0}\Big (\Theta _0\int _0^{\Theta _0}X_t^2dt>T^2\Big )\\&\le P_{x_0}\Big (\Theta _0>T\Big )+P_{x_0}\Big (\int _0^{\Theta _0}X_t^2\mathrm{d}t>T\Big )\le C_0e^{-C_1 T} \end{aligned}$$

and

$$\begin{aligned} P_{x_0}\Big (\Big |\int _0^{\Theta _0\wedge T}X_t\mathrm{d}t\Big |>T\Big )&\le P_{x_0}\Big (\Theta _0>T\Big )+P_{x_0}\Big (\left| \int _0^{\Theta _0}X_t\mathrm{d}t\right| >T\Big )\le C_0e^{-C_1 T}, \end{aligned}$$

which completes the proof of this lemma. \(\square \)

2.2 Exponential Equivalence

In this subsection, we show that the functionals  \(\int _0^TX_t\mathrm{d}t\) and \(\int _0^TX_t^2\mathrm{d}t\) are exponentially equivalent to their asymptotic expectations, respectively.

Since the stationary distribution of (1.1) is given by  [36]

$$\begin{aligned} \pi (\mathrm{d}x)=\frac{e^{-\theta \big (x-\gamma /\theta \big )^2}}{M}I_{[b_\mathrm{L},\infty )}\mathrm{d}x,\quad M=\int _{b_\mathrm{L}}^{\infty }e^{-\theta (x-\gamma /\theta )^2}\mathrm{d}x, \end{aligned}$$

using the ergodic theorem (Theorem 1.16 in [28]), we immediately obtain:

Lemma 2.3

As \(T\rightarrow +\infty \), under \(P_{x_0}\) with \(x_0\ge b_\mathrm{L}\), for any \(\beta \in \mathbb {R}\)

$$\begin{aligned} \frac{1}{T}\int _0^TX_t\mathrm{d}t\rightarrow \mu _1,\quad \frac{1}{T}\int _0^TX_t^2\mathrm{d}t\rightarrow \mu _2,\quad \frac{1}{T}\int _0^T\left( \beta -X_t\right) ^2\mathrm{d}t\rightarrow \beta ^2-2\beta \mu _1+\mu _2, \quad a.s., \end{aligned}$$

where \(\mu _1=\int _{b_\mathrm{L}}^{\infty }x\pi (\mathrm{d}x)\),  \(\mu _2=\int _{b_\mathrm{L}}^{\infty }x^2\pi (\mathrm{d}x)\).

Remark 2.1

By Proposition 7.3 in Ross  [33], Lemma 2.2,  (2.4),  (2.5) and the strong Markov property, we have

$$\begin{aligned} \mu _1=\frac{E_{b_\mathrm{L}+2}\int _{\Theta _{0}}^{\Theta _1}X_t\mathrm{d}t}{E_{b_\mathrm{L}+2}\Theta _{0}},\quad \mu _{2}=\frac{E_{b_\mathrm{L}+2}\int _{\Theta _{0}}^{\Theta _1}X_t^2\mathrm{d}t}{E_{b_\mathrm{L}+2}\Theta _{0}}. \end{aligned}$$
(2.14)
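The renewal-reward identity (2.14) can be checked by direct Monte Carlo: simulate i.i.d. regeneration cycles started at \(b_\mathrm{L}+2\) and form the ratio of total reward to total cycle length. The sketch below is our own numerical check, not part of the proof; it uses an Euler discretization with hitting replaced by level crossing, and all parameter values are illustrative.

```python
import numpy as np

def renewal_reward_mu1(theta, gamma, b_L, n_cycles, dt, rng):
    """Monte Carlo illustration of the renewal-reward identity (2.14).

    Each cycle starts afresh at b_L + 2 and ends at the next
    regeneration (hit b_L + 1, then b_L + 2 again).  The estimator
    is the ratio of total reward int X_t dt to total cycle length.
    """
    total_reward = total_time = 0.0
    for _ in range(n_cycles):
        x, t, reward, hit_low = b_L + 2.0, 0.0, 0.0, False
        while True:
            reward += x * dt
            t += dt
            x += (-theta * x + gamma) * dt + rng.normal(0.0, np.sqrt(dt))
            x = max(x, b_L)                  # one-sided reflection
            if not hit_low and x <= b_L + 1.0:
                hit_low = True
            elif hit_low and x >= b_L + 2.0:
                break                        # end of the cycle
        total_reward += reward
        total_time += t
    return total_reward / total_time

rng = np.random.default_rng(7)
mu1_mc = renewal_reward_mu1(theta=1.0, gamma=0.5, b_L=0.0,
                            n_cycles=100, dt=1e-3, rng=rng)
```

For these parameters, quadrature against (1.4) gives \(\mu _1\approx 0.79\), which the ratio estimator should reproduce up to Monte Carlo and discretization error.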

Now, we can state the exponential equivalence results as follows:

Proposition 2.1

For \(\lambda _T\) satisfying (1.5), all \(\delta >0\) and \(x_0\ge b_L\), we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t-\mu _1T\Big |\ge \delta \Big )=-\infty \end{aligned}$$
(2.15)

and

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t^2\mathrm{d}t-\mu _2T\Big |\ge \delta \Big )=-\infty . \end{aligned}$$
(2.16)

In particular, for any \(\beta \in \mathbb {R}\)

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{T}\Big |\int _0^T\left( \beta -X_t\right) ^2\mathrm{d}t-\big (\beta ^2-2\beta \mu _1+\mu _2\big )T\Big |\ge \delta \Big )=-\infty .\nonumber \\ \end{aligned}$$
(2.17)

Proof

To prove (2.16), we apply (2.5) and the strong Markov property to obtain

$$\begin{aligned}&P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t^2\mathrm{d}t-\mu _2T\Big |\ge \delta \Big )\\&\quad \le P_{x_0}\Big (\Big |\sum _{k=1}^{N_\mathrm{T}}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t-\mu _2 T\Big |\ge \delta T/2\Big ) +P_{x_0}\Big (\int _0^{\Theta _0}X_t^2\mathrm{d}t+\int _{\Theta _{N_\mathrm{T}}}^{\Theta _{N_\mathrm{T}+1}}X_t^2\mathrm{d}t\ge \delta T/2\Big )\\&\quad \le P_{b_\mathrm{L}+2}\Big (\Big |\sum _{k=0}^{N_\mathrm{T}-1}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t-\mu _2 T\Big |\ge \delta T/2\Big )\\&\qquad +P_{x_0}\Big (\int _0^{\Theta _0}X_t^2\mathrm{d}t\ge \delta T/4\Big )+P_{b_\mathrm{L}+2}\Big (\int _0^{\Theta _0}X_t^2\mathrm{d}t\ge \delta T/4\Big ). \end{aligned}$$

From (2.11), it follows that

$$\begin{aligned} \begin{aligned}&\lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log \left( P_{x_0}\Big (\int _0^{\Theta _0}X_t^2\mathrm{d}t\ge \delta T/4\Big )\vee P_{b_\mathrm{L}+2}\Big (\int _0^{\Theta _0}X_t^2\mathrm{d}t\ge \delta T/4\Big )\right) \\&\quad \le \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\bigg (\log C_0-C_1\delta T/4\bigg ) =-\infty . \end{aligned} \end{aligned}$$
(2.18)

Now, it is sufficient to show

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P_{b_\mathrm{L}+2}\Big (\Big |\sum _{k=0}^{N_\mathrm{T}-1}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t-\mu _2 T\Big |\ge \delta T/2\Big )=-\infty . \end{aligned}$$

Firstly, we give some estimates for \(N_\mathrm{T}\). In fact, we have for any \(\delta '>0\)

$$\begin{aligned} \begin{aligned}&P_{b_\mathrm{L}+2}\left( N_{T}\ge \left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] \right) \\&\quad =P_{b_\mathrm{L}+2}\left( \sum ^{\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] }_{i=0} (\Theta _{i}-\Theta _{i-1})\le T\right) \\&\quad =P_{b_\mathrm{L}+2}\left( \frac{\sum ^{\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] }_{i=0} (\Theta _{i}-\Theta _{i-1})}{\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] +1} \le \frac{T}{\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] +1}\right) . \end{aligned} \end{aligned}$$

Take T large enough that  \(\frac{T}{\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] +1}<\frac{E_{b_\mathrm{L}+2}\Theta _{0}}{1+\delta '/2}\). Under \(P_{b_\mathrm{L}+2}\), by Lemma 2.1,  \(\left\{ \Theta _{i}-\Theta _{i-1}: i\ge 0\right\} \) is a sequence of independent and identically distributed random variables with a finite exponential moment. Then, by classical large deviation results, we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P_{b_\mathrm{L}+2}\left( N_{\mathrm{T}}\ge \left[ {T(1+\delta ')}/{E_{b_L+2}\Theta _{0}}\right] \right) =-\infty . \end{aligned}$$
(2.19)

Similarly, it holds

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P_{b_\mathrm{L}+2}\left( N_{\mathrm{T}}\le \left[ {T(1-\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] \right) =-\infty . \end{aligned}$$
(2.20)

Secondly, take T large enough such that  \(\frac{T}{\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] }>\frac{E_{b_\mathrm{L}+2}\Theta _{0}}{1+2\delta '}\). By (2.14), we have

$$\begin{aligned}&P_{b_\mathrm{L}+2}\left( \sum _{k=0}^{N_\mathrm{T}-1}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t-\mu _2 T\ge \delta T/4\right) \\&\quad \le P_{b_\mathrm{L}+2}\left( \sum _{k=0}^{N_T-1}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t-\mu _2 T\ge \delta T/4, N_{\mathrm{T}}\le \left[ {T(1+\delta ')}/{E_{b_L+2}\Theta _{0}}\right] \right) \\&\qquad +P_{b_\mathrm{L}+2}\left( N_{\mathrm{T}}>\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] \right) \\&\quad \le P_{b_\mathrm{L}+2}\left( \frac{\sum _{k=0}^{\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] -1} \int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t}{\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] } \ge \frac{1}{1+2\delta '}\left( \frac{\delta E_{b_\mathrm{L}+2}\Theta _{0}}{4}+E_{b_\mathrm{L}+2}\int _{\Theta _{0}}^{\Theta _1}X_t^2\mathrm{d}t\right) \right) \\&\qquad +P_{b_\mathrm{L}+2}\left( N_{\mathrm{T}}>\left[ {T(1+\delta ')}/{E_{b_\mathrm{L}+2}\Theta _{0}}\right] \right) \\ \end{aligned}$$

Now, choose \(\delta '<\frac{\delta E_{b_\mathrm{L}+2}\Theta _{0}}{8E_{b_\mathrm{L}+2}\int _{\Theta _{0}}^{\Theta _1}X_t^2\mathrm{d}t}\), and then

$$\begin{aligned} \frac{1}{1+2\delta '}\left( \frac{\delta E_{b_\mathrm{L}+2}\Theta _{0}}{4}+E_{b_\mathrm{L}+2}\int _{\Theta _{0}}^{\Theta _1}X_t^2\mathrm{d}t\right) >E_{b_\mathrm{L}+2}\int _{\Theta _{0}}^{\Theta _1}X_t^2\mathrm{d}t. \end{aligned}$$
(2.21)

Notice that, under \(P_{b_\mathrm{L}+2}\), by Lemma 2.2,  \(\left\{ \int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t: k\ge 0\right\} \) is a sequence of independent and identically distributed random variables with a finite exponential moment. Together with  (2.19), (2.21) and the large deviation results, we obtain

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P_{b_\mathrm{L}+2}\left( \sum _{k=0}^{N_\mathrm{T}-1}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t-\mu _2 T\ge \delta T/4\right) =-\infty . \end{aligned}$$

Finally, using  (2.20) and following the above procedure, we also have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P_{b_\mathrm{L}+2}\left( \sum _{k=0}^{N_\mathrm{T}-1}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t-\mu _2 T\le -\delta T/4\right) =-\infty . \end{aligned}$$

Therefore,

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2} \log P_{b_\mathrm{L}+2}\left( \Big |\sum _{k=0}^{N_\mathrm{T}-1}\int _{\Theta _{k-1}}^{\Theta _k}X_t^2\mathrm{d}t-\mu _2 T\Big |\ge \delta T/4\right) =-\infty . \end{aligned}$$

Now, we turn to proving  (2.15). Indeed, by (2.4), the Cauchy–Schwarz inequality and the elementary bound \(ab\le \frac{1}{2}(a^2+b^2)\), we can write

$$\begin{aligned} \begin{aligned}&\left| \int _0^TX_t\mathrm{d}t-\sum _{k=1}^{N_\mathrm{T}}\int _{\Theta _{k-1}}^{\Theta _k}X_t\mathrm{d}t\right| \\&\quad \le \Theta _{0}^{1/2}\left| \int _0^{\Theta _0}X^{2}_t\mathrm{d}t\right| ^{1/2} +\left( \Theta _{N_\mathrm{T}+1}-\Theta _{N_\mathrm{T}}\right) ^{1/2}\left| \int _{\Theta _{N_\mathrm{T}}}^{\Theta _{N_\mathrm{T}+1}}X^{2}_t\mathrm{d}t\right| ^{1/2}\\&\quad \le \frac{1}{2}\left( \Theta _{0}+\int _0^{\Theta _0}X^{2}_t\mathrm{d}t +\left( \Theta _{N_\mathrm{T}+1}-\Theta _{N_\mathrm{T}}\right) +\int _{\Theta _{N_\mathrm{T}}}^{\Theta _{N_\mathrm{T}+1}}X^{2}_t\mathrm{d}t\right) . \end{aligned} \end{aligned}$$

Applying Lemma  2.1, we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\log \mathbb P_{x_{0}}\left( \Theta _{0}>T\delta \right) \le \lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\bigg (\log C_0-C_1\delta T\bigg )=-\infty . \end{aligned}$$
(2.22)

Together with (2.18) and strong Markov property, we obtain

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\log P_{x_{0}}\left( \Theta _{N_\mathrm{T}+1}-\Theta _{N_\mathrm{T}}>T\delta \right) =\lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\log P_{b_\mathrm{L}+2}\left( \Theta _{0}>T\delta \right) =-\infty ,\nonumber \\ \end{aligned}$$
(2.23)

and

$$\begin{aligned} \begin{aligned}&\lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\log P_{x_{0}}\left( \int _{\Theta _{N_\mathrm{T}}}^{\Theta _{N_\mathrm{T}+1}}X^{2}_t\mathrm{d}t>T\delta \right) \\&\quad =\lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\log P_{b_\mathrm{L}+2}\left( \int _{0}^{\Theta _0}X^{2}_t\mathrm{d}t>T\delta \right) =-\infty . \end{aligned} \end{aligned}$$
(2.24)

Combining (2.18), (2.22), (2.23) and (2.24), and following similar lines as in the proof of (2.16), we have for any \(\delta >0\),

$$\begin{aligned} \begin{aligned}&\lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\log {\mathbb {P}}_{x_{0}}\left( \frac{1}{T}\left| \int _0^TX_t\mathrm{d}t-\sum _{k=1}^{N_\mathrm{T}}\int _{\Theta _{k-1}}^{\Theta _k}X_t\mathrm{d}t\right|>\delta \right) =-\infty ,\\&\lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\log {\mathbb {P}}_{x_{0}}\left( \frac{1}{T}\left| \sum _{k=1}^{N_\mathrm{T}}\int _{\Theta _{k-1}}^{\Theta _k}X_t\mathrm{d}t-\mu _{1}T\right| >\delta \right) =-\infty ,\\ \end{aligned} \end{aligned}$$

and thus

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda ^{2}_{T}}\log \mathbb {P}_{x_{0}} \left( \frac{1}{T}\left| \int _0^TX_t\mathrm{d}t-\mu _{1}T\right| >\delta \right) =-\infty , \end{aligned}$$

which completes the proof of this proposition. \(\square \)

3 Moderate Deviations for \(\Big (\widehat{\theta }_T, \widehat{\gamma }_{T}\Big )\)

Set

$$\begin{aligned} \widehat{\mu }_T=\frac{1}{T}\int _0^TX_t\mathrm{d}t,\quad \widehat{\sigma }^2_T=\frac{1}{T}\int _0^TX_t^2\mathrm{d}t-\widehat{\mu }^2_T. \end{aligned}$$
(3.1)

For \(\widehat{\theta }_T\) and \(\widehat{\gamma }_T\) defined by (1.2) and (1.3), straightforward calculations yield the following key martingale decomposition:

$$\begin{aligned} \sqrt{T}\left( \begin{array}{ll}\widehat{\theta }_T-\theta \\ \widehat{\gamma }_T-\gamma \\ \end{array}\right) =\frac{M_\mathrm{T}}{\sqrt{T}}+R_T, \end{aligned}$$
(3.2)

with the martingale

$$\begin{aligned} M_\mathrm{T}=(\mu _2-\mu _1^2)^{-1}\left( \begin{array}{ll}\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\\ \int _0^T\left( \mu _2-\mu _1 X_t\right) \mathrm{d}W_t\\ \end{array}\right) \end{aligned}$$
(3.3)

and the remainder term

$$\begin{aligned} R_\mathrm{T}=\frac{1}{\sqrt{T}\widehat{\sigma }_T^2}\left( \begin{array}{ll}W_\mathrm{T}\big (\widehat{\mu }_T-\mu _1\big ) +\big (1-(\mu _2-\mu _1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\\ \widehat{\mu }_TW_\mathrm{T}\big (\widehat{\mu }_T-\mu _1\big ) +\big (\widehat{\mu }_T-\mu _1(\mu _2-\mu _1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\\ \end{array}\right) .\nonumber \\ \end{aligned}$$
(3.4)

The martingale  \(\big \{M_T, T>0\big \}\) contributes the main term in our moderate deviation analysis, while \(R_T\) will be shown to be negligible.

Lemma 3.1

For  \(\lambda _T\) satisfying (1.5) and \(M_T\) defined by (3.3), the families

$$\begin{aligned} \Big \{\frac{M_T}{\sqrt{T}\lambda _T}, T>0\Big \},\quad \Big \{\frac{1}{\sqrt{T}\lambda _T}\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t, T>0\Big \} \end{aligned}$$

satisfy large deviation principles with speed \(\lambda _T^2\) and rate functions

$$\begin{aligned} I(x)=\frac{1}{2}x^{\tau }\Sigma ^{-1}x,\quad J(y)=\frac{y^2}{2\big (\mu _2-\mu _1^2\big )},\quad x\in \mathbb {R}^2, y\in \mathbb {R}, \end{aligned}$$

respectively, where \(\mu _1=\int _{b_\mathrm{L}}^{\infty }x\pi (\mathrm{d}x)\),  \(\mu _2=\int _{b_\mathrm{L}}^{\infty }x^2\pi (\mathrm{d}x)\) and

$$\begin{aligned} \Sigma =\big (\mu _2-\mu _1^2\big )^{-1}\left( \begin{array}{ll}1 &{} \mu _1\\ \mu _1 &{} \mu _2\\ \end{array}\right) . \end{aligned}$$

Proof

Note that \(\big \{M_\mathrm{T}, T>0\big \}\) and \(\Big \{\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t, T>0\Big \}\) are martingales with predictable quadratic variations

$$\begin{aligned} <M>_T=(\mu _2-\mu _1^2)^{-2}\left( \begin{array}{ll}\int _0^T\left( \mu _1-X_t\right) ^2\mathrm{d}t &{} \int _0^T\left( \mu _1-X_t\right) \left( \mu _2-\mu _1 X_t\right) \mathrm{d}t\\ \int _0^T\left( \mu _1-X_t\right) \left( \mu _2-\mu _1 X_t\right) \mathrm{d}t &{} \int _0^T\left( \mu _2-\mu _1 X_t\right) ^2\mathrm{d}t\\ \end{array}\right) \end{aligned}$$

and \(\Big <\int _0^\cdot \left( \mu _1-X_t\right) \mathrm{d}W_t\Big >_T=\int _0^T\left( \mu _1-X_t\right) ^2\mathrm{d}t\). By Proposition  2.1, we obtain, for any  \(\delta >0\),

$$\begin{aligned}&\lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{T}\Big \Vert<M>_T-\Sigma \cdot T\Big \Vert \ge \delta \Big )=-\infty ,\\&\lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{T}\Big |\Big <\int _0^\cdot \left( \mu _1-X_t\right) \mathrm{d}W_t\Big >_T- \big (\mu _2-\mu _1^2\big )T\Big |\ge \delta \Big )=-\infty . \end{aligned}$$

Therefore, Proposition 1 in Dembo [12] yields the conclusion of this lemma. \(\square \)

Lemma 3.2

For the remainder term \(R_T\) defined by (3.4), we have the following results.

  1. (1)

    For any \(\delta >0\),

    $$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\Big |\widehat{\sigma }_T^2-\big (\mu _2-\mu _1^2\big )\Big |\ge \delta \Big )=-\infty . \end{aligned}$$
    (3.5)
  2. (2)

    For any \(\delta >0\),

    $$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{\lambda _T}\Big |R_T\Big |\ge \delta \Big )=-\infty . \end{aligned}$$
    (3.6)

Proof

  1. (1)

Since \(\widehat{\sigma }^2_T=\frac{1}{T}\int _0^TX_t^2\mathrm{d}t-\frac{1}{T^2}\left( \int _0^TX_t\mathrm{d}t\right) ^2\), we have

    $$\begin{aligned}&P_{x_0}\Big (\Big |\widehat{\sigma }_T^2-\big (\mu _2-\mu _1^2\big )\Big |\ge \delta \Big )\\&\quad \le P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t^2\mathrm{d}t-\mu _2T\Big |\ge \delta /2\Big ) \\&\qquad +P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t+\mu _1T\Big |\cdot \frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t-\mu _1T\Big |\ge \delta /2\Big )\\&\quad \le P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t^2\mathrm{d}t-\mu _2T\Big |\ge \delta /2\Big ) +P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t-\mu _1T\Big |\ge |\mu _1|+1\Big )\\&\qquad +P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t+\mu _1T\Big |\cdot \frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t-\mu _1T\Big |\ge \delta /2,\ \frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t-\mu _1T\Big |<|\mu _1|+1\Big )\\&\quad \le P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t^2\mathrm{d}t-\mu _2T\Big |\ge \delta /2\Big ) +P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t-\mu _1T\Big |\ge |\mu _1|+1\Big )\\&\qquad +P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t-\mu _1T\Big |\ge \frac{\delta }{2(3|\mu _1|+1)}\Big ). \end{aligned}$$

    Now, we can complete the proof of (3.5) by using Proposition 2.1.

  2. (2)

    For any \(L>0\),

    $$\begin{aligned}&P_{x_0}\left( \frac{1}{\lambda _T\sqrt{T}}\Big |W_T\big (\widehat{\mu }_T-\mu _1\big ) +\big (1-(\mu _2-\mu _1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\Big |\ge \delta \right) \\&\quad \le P_{x_0}\left( \frac{1}{\lambda _T\sqrt{T}}\Big |W_T\big (\widehat{\mu }_T-\mu _1\big )\Big |\ge \delta /2\right) \\&\qquad +P_{x_0}\left( \frac{1}{\lambda _T\sqrt{T}}\Big |\big (1-(\mu _2-\mu _1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\Big | \ge \delta /2\right) \\&\quad \le P_{x_0}\left( \frac{1}{\lambda _T\sqrt{T}}\Big |W_T\Big |\ge L\right) +P_{x_0}\left( \Big |\widehat{\mu }_T-\mu _1\Big |\ge \frac{\delta }{2L}\right) \\&\qquad +P_{x_0}\left( \frac{1}{\lambda _T\sqrt{T}}\Big |\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\Big | \ge L\right) \\&\qquad +P_{x_0}\left( \Big |1-(\mu _2-\mu _1^2)^{-1}\widehat{\sigma }_T^2\Big |\ge \frac{\delta }{2L}\right) . \end{aligned}$$

    Applying Proposition 2.1, Lemma 3.1,  (3.5) and classical moderate deviations for the Brownian motion, we can obtain that

    $$\begin{aligned}&\limsup _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\left( \frac{1}{\lambda _T\sqrt{T}}\Big |W_T\big (\widehat{\mu }_T-\mu _1\big ) \right. \\&\qquad \left. +\big (1-(\mu _2-\mu _1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\Big |\ge \delta \right) \\&\quad \le -\frac{L^2}{2}\Big (1\wedge \big (\mu _2-\mu _1^2\big )^{-1}\Big ), \end{aligned}$$

    which, by letting \(L\rightarrow \infty \), immediately implies that

    $$\begin{aligned} \begin{aligned}&\lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\left( \frac{1}{\lambda _T\sqrt{T}}\Big |W_T\big (\widehat{\mu }_T-\mu _1\big ) \right. \\&\qquad \left. +\big (1-(\mu _2-\mu _1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\Big |\ge \delta \right) \\&\quad =-\infty . \end{aligned} \end{aligned}$$
    (3.7)

Similarly, we have

$$\begin{aligned} \begin{aligned}&\lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\left( \frac{1}{\lambda _T\sqrt{T}}\Big |\widehat{\mu }_TW_T\big (\widehat{\mu }_T-\mu _1\big )\right. \\&\qquad \left. +\big (\widehat{\mu }_T-\mu _1(\mu _2-\mu _1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \mu _1-X_t\right) \mathrm{d}W_t\Big |\ge \delta \right) \\&\quad =-\infty . \end{aligned} \end{aligned}$$
(3.8)

Together with (3.5), (3.7) and (3.8), this completes the proof of (3.6). \(\square \)

Proof of Theorem 1.1

By (3.2) and Lemma 3.2,  \(\left\{ \frac{\sqrt{T}}{\lambda _T}\left( \begin{array}{ll}\widehat{\theta }_T-\theta \\ \widehat{\gamma }_T-\gamma \\ \end{array}\right) , T>0\right\} \) is exponentially equivalent to \(\Big \{\frac{1}{\lambda _T\sqrt{T}}M_T, T>0\Big \}\) with speed \(\lambda _T^2\). Theorem 1.1 then follows from Lemma 3.1. \(\square \)

4 The Case of Two-Sided Barriers

In this section, we focus on drift parameter estimation for the reflected Ornstein–Uhlenbeck process with two-sided barriers \(b_L\) and \(b_U\) (\(b_U>b_L\)):

$$\begin{aligned} \left\{ \begin{array}{lll} \mathrm{d}X_t=(-\theta X_{t}+\gamma )\mathrm{d}t+\mathrm{d}W_{t}+\mathrm{d}L_t-\mathrm{d}U_t,\\ X_t\in [b_L, b_U],~\text {for all } t\ge 0,\\ X_0=x_0\in [b_\mathrm{L}, b_\mathrm{U}], \end{array}\right. \end{aligned}$$
(4.1)

where \(\theta \in (0,+\infty )\) and \(\gamma \) are unknown, the processes \(L=\{L_t, t\ge 0\}\) and  \(U=\{U_t, t\ge 0\}\) are the minimal continuous increasing processes with \(L_0=U_0=0\), which make the process \(X_t\in [b_L, b_U]\) for all \(t\ge 0\) and satisfy

$$\begin{aligned} \int _0^{\infty }I_{\big \{X_t>b_{L}\big \}}\mathrm{d}L_t=0,\quad \int _0^{\infty }I_{\big \{X_t<b_{U}\big \}}\mathrm{d}U_t=0. \end{aligned}$$

The stationary distribution of (4.1) is given by

$$\begin{aligned} \widetilde{\pi }(\mathrm{d}x)=\frac{e^{-\theta \big (x-\gamma /\theta \big )^2}}{\widetilde{M}}I_{[b_L, b_U]}(x)\mathrm{d}x, \quad \widetilde{M}=\int _{b_L}^{b_U}e^{-\theta (x-\gamma /\theta )^2}\mathrm{d}x. \end{aligned}$$

4.1 Maximum Likelihood Estimators of \(\theta \) and \(\gamma \)

By the Girsanov formula (see Ward and Glynn [36] and Bo et al. [9]), the log-likelihood ratio process can be written as

$$\begin{aligned}&\log \left( \frac{dP_{\theta ,\gamma ,x_0}}{\mathrm{d}P_{0,0,x_0}}\bigg |_{\mathcal {F}_T}\right) \\&\quad =-\theta \int _0^TX_t\mathrm{d}(X_t-L_t+U_t)+\gamma \big (X_T-L_T+U_T-x_0\big )\\&\qquad -\frac{\theta ^2}{2}\int _0^TX_t^2\mathrm{d}t+\theta \gamma \int _0^TX_t\mathrm{d}t-\frac{\gamma ^2}{2}T, \end{aligned}$$

where \(\mathcal {F}_T=\sigma (W_s,s\le T)\). Therefore, the maximum likelihood estimators of \(\theta \) and \(\gamma \) are given by

$$\begin{aligned} \widetilde{\theta }_{T}=\frac{-T\int _{0}^{T}X_{t}\mathrm{d}(X_t-L_t+U_t)+(X_T-L_T+U_T-x_0)\int _0^TX_t\mathrm{d}t}{T\int _{0}^{T}X_{t}^2\mathrm{d}t-\big (\int _0^TX_t\mathrm{d}t\big )^2} \end{aligned}$$

and

$$\begin{aligned} \widetilde{\gamma }_{T}=\frac{-\int _0^TX_t\mathrm{d}t\int _{0}^{T}X_{t}\mathrm{d}(X_{t}-L_t+U_t)+(X_T-L_T+U_T-x_0)\int _0^TX_t^2\mathrm{d}t}{T\int _{0}^{T}X_{t}^2\mathrm{d}t-\big (\int _0^TX_t\mathrm{d}t\big )^2}. \end{aligned}$$
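To get a concrete feel for these estimators, one can simulate (4.1) by a projected Euler scheme and evaluate \(\widetilde{\theta }_T, \widetilde{\gamma }_T\) from the discretized integrals; over one Euler step, the increment of \(X-L+U\) is exactly the unreflected increment, which the sketch below exploits. All numerical values (\(\theta =1\), \(\gamma =0.5\), \(b_L=0\), \(b_U=2\), \(T=500\)) are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustrative parameters: true theta, gamma and the two barriers.
theta, gamma, b_L, b_U = 1.0, 0.5, 0.0, 2.0
T, n = 500.0, 500_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

X = np.empty(n + 1)
X[0] = 1.0
dM = np.empty(n)                       # increments of X - L + U
for k in range(n):
    free = X[k] + (-theta * X[k] + gamma) * dt + dW[k]  # unreflected Euler step
    X[k + 1] = min(max(free, b_L), b_U)                 # project onto [b_L, b_U]
    dM[k] = free - X[k]                # = dX_t - dL_t + dU_t over this step

# Discretizations of the integrals entering the MLEs of Sect. 4.1.
S1 = np.sum(X[:-1] * dM)               # approximates ∫_0^T X_t d(X_t - L_t + U_t)
S2 = np.sum(dM)                        # approximates X_T - L_T + U_T - x_0
I1 = np.sum(X[:-1]) * dt               # approximates ∫_0^T X_t dt
I2 = np.sum(X[:-1] ** 2) * dt          # approximates ∫_0^T X_t^2 dt
D = T * I2 - I1 ** 2

theta_hat = (-T * S1 + S2 * I1) / D    # \widetilde{theta}_T
gamma_hat = (-I1 * S1 + S2 * I2) / D   # \widetilde{gamma}_T
print(theta_hat, gamma_hat)
```

For moderately large T, the computed \(\widetilde{\theta }_T\) and \(\widetilde{\gamma }_T\) should land near the true values, consistent with the deviation results of this section.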

Similar to the one-sided barrier case in Sect. 3, we have the following key martingale decomposition

$$\begin{aligned} \sqrt{T}\left( \begin{array}{ll}\widetilde{\theta }_T-\theta \\ \widetilde{\gamma }_T-\gamma \\ \end{array}\right) =\frac{\widetilde{M}_T}{\sqrt{T}}+\widetilde{R}_T, \end{aligned}$$
(4.2)

where

$$\begin{aligned} \widetilde{M}_T= & {} (\widetilde{\mu }_2-\widetilde{\mu }_1^2)^{-1}\left( \begin{array}{ll}\int _0^T\left( \widetilde{\mu }_1-X_t\right) \mathrm{d}W_t\\ \int _0^T\left( \widetilde{\mu }_2-\widetilde{\mu }_1 X_t\right) \mathrm{d}W_t\\ \end{array}\right) , \end{aligned}$$
(4.3)
$$\begin{aligned} \widetilde{R}_T= & {} \frac{1}{\sqrt{T}\widehat{\sigma }_T^2}\left( \begin{array}{l}W_T\big (\widehat{\mu }_T-\widetilde{\mu }_1\big ) +\big (1-(\widetilde{\mu }_2-\widetilde{\mu }_1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \widetilde{\mu }_1-X_t\right) \mathrm{d}W_t\\ \widehat{\mu }_TW_T\big (\widehat{\mu }_T-\widetilde{\mu }_1\big ) +\big (\widehat{\mu }_T-\widetilde{\mu }_1(\widetilde{\mu }_2-\widetilde{\mu }_1^2)^{-1}\widehat{\sigma }_T^2\big )\int _0^T\left( \widetilde{\mu }_1-X_t\right) \mathrm{d}W_t \end{array}\right) \end{aligned}$$
(4.4)

and

$$\begin{aligned} \widetilde{\mu }_1=\int _{b_L}^{b_U}x\widetilde{\pi }(\mathrm{d}x),~\widetilde{\mu }_2=\int _{b_L}^{b_U}x^2\widetilde{\pi }(\mathrm{d}x), ~\widehat{\mu }_T=\frac{1}{T}\int _0^TX_t\mathrm{d}t,~ \widehat{\sigma }^2_T=\frac{1}{T}\int _0^TX_t^2\mathrm{d}t-\widehat{\mu }^2_T. \end{aligned}$$
(4.5)

4.2 Regenerative Process View of  \(\int _0^TX_t\mathrm{d}t\) and \(\int _0^TX_t^2\mathrm{d}t\)

To analyze the deviation properties of \(\int _0^TX_t\mathrm{d}t\) and \(\int _0^TX_t^2\mathrm{d}t\), we again employ regenerative process techniques. Let \(\tau _X(x)=\inf \Big \{t\ge 0: X_t=x\Big \}\). We define regeneration times in terms of hitting times, slightly differently from the one-sided barrier case:

$$\begin{aligned} \widetilde{\alpha }_0= & {} 0,~\widetilde{\alpha }_{2k+1}=\inf \left\{ t\ge \widetilde{\alpha }_{2k}: X_t=b_L+\frac{b_U-b_L}{4}~\text {or}~X_t=b_L+\frac{3(b_U-b_L)}{4}\right\} ,\\ \widetilde{\alpha }_{2k+2}= & {} \inf \left\{ t\ge \widetilde{\alpha }_{2k+1}: X_t=b_L+\frac{b_U-b_L}{2}\right\} , ~\widetilde{\Theta }_k=\widetilde{\alpha }_{2k+2},\\ \widetilde{N}_T= & {} \sup \Big \{k\ge -1: \widetilde{\Theta }_{k}\le T\Big \}. \end{aligned}$$

By the strong Markov property of the reflected Ornstein–Uhlenbeck process,  X is a regenerative process with regeneration times \(\Big \{\widetilde{\Theta }_k, k\ge 0\Big \}\). Then, under \(P_{x_0}\) with \(x_0\in [b_L,b_U]\),

$$\begin{aligned} \left\{ \int _{\widetilde{\Theta }_{k-1}}^{\widetilde{\Theta }_k}X_t\mathrm{d}t, \widetilde{\Theta }_k-\widetilde{\Theta }_{k-1}: k\ge 1\right\} ,\quad \left\{ \int _{\widetilde{\Theta }_{k-1}}^{\widetilde{\Theta }_k}X_t^2\mathrm{d}t, \widetilde{\Theta }_k-\widetilde{\Theta }_{k-1}: k\ge 1\right\} \end{aligned}$$

are both sequences of independent and identically distributed random vectors. Moreover, we have the following crucial estimates

$$\begin{aligned} \left| \int _0^TX_t\mathrm{d}t-\sum ^{\widetilde{N}_T}_{k=1}\int _{\widetilde{\Theta }_{k-1}}^{\widetilde{\Theta }_k}X_t\mathrm{d}t\right| \le \left| \int _0^{\widetilde{\Theta }_0\wedge T}X_t\mathrm{d}t\right| +\left| \int ^{T}_{\widetilde{\Theta }_{\widetilde{N}_T}}X_t\mathrm{d}t\right| \end{aligned}$$
(4.6)

and

$$\begin{aligned} \left| \int _0^TX_t^2\mathrm{d}t-\sum ^{\widetilde{N}_T}_{k=1}\int _{\widetilde{\Theta }_{k-1}}^{\widetilde{\Theta }_k}X_t^2\mathrm{d}t\right| \le \int _0^{\widetilde{\Theta }_0}X_t^2\mathrm{d}t +\int _{\widetilde{\Theta }_{\widetilde{N}_T}}^{\widetilde{\Theta }_{\widetilde{N}_T+1}}X_t^2\mathrm{d}t. \end{aligned}$$
(4.7)
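The regeneration structure can also be inspected numerically: from a simulated path, one alternately records a hit of the two trigger levels \(b_L+\frac{b_U-b_L}{4}\), \(b_L+\frac{3(b_U-b_L)}{4}\) and a subsequent crossing of the midpoint \(b_L+\frac{b_U-b_L}{2}\). The Python sketch below does this on a discrete time grid, so hitting times are only resolved up to the step size; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustrative parameters: barriers and drift coefficients.
theta, gamma, b_L, b_U = 1.0, 0.5, 0.0, 2.0
dt, n = 1e-3, 500_000
a = b_L + (b_U - b_L) / 4              # lower trigger level
b = b_L + 3 * (b_U - b_L) / 4          # upper trigger level
m = b_L + (b_U - b_L) / 2              # midpoint: regeneration level

# Projected Euler path of the two-sided reflected OU, started at the midpoint.
dW = rng.normal(0.0, np.sqrt(dt), n)
X = np.empty(n + 1)
X[0] = m
for k in range(n):
    free = X[k] + (-theta * X[k] + gamma) * dt + dW[k]
    X[k + 1] = min(max(free, b_L), b_U)

# Alternately wait for a hit of {a, b} (time alpha_{2k+1}), then a crossing
# of m (time alpha_{2k+2} = Theta_k).
regen = []
waiting_outer = True
for k in range(n):
    if waiting_outer:
        if X[k] <= a or X[k] >= b:
            waiting_outer = False
    elif (X[k] - m) * (X[k + 1] - m) <= 0.0:
        regen.append((k + 1) * dt)     # approximate regeneration time Theta_k
        waiting_outer = True

cycles = np.diff(regen)                # cycle lengths Theta_k - Theta_{k-1}
print(len(regen), float(cycles.mean()))
```

The recorded cycle lengths form, up to discretization error, an i.i.d. sample, in line with the regenerative description above.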

Parallel to Lemmas 2.1 and 2.2, we have the following decay of tail probabilities.

Lemma 4.1

For all \(x_0\in [b_L, b_U]\), there exist positive constants \(C_0, C_1\), depending only on \(x_0, b_L, b_U, \theta \) and \(\gamma \), such that for T large enough

$$\begin{aligned} P_{x_0}\Big (\widetilde{\Theta }_0>T\Big )\le C_0e^{-C_1 T} \end{aligned}$$
(4.8)

and

$$\begin{aligned} P_{x_0}\Big (\Big |\int _0^{\widetilde{\Theta }_0}X_t\mathrm{d}t\Big |\vee \Big |\int _0^{\widetilde{\Theta }_0\wedge T}X_t\mathrm{d}t\Big |>T\Big )\le C_0e^{-C_1 T},\quad P_{x_0}\Big (\int _0^{\widetilde{\Theta }_0}X_t^2\mathrm{d}t>T\Big )\le C_0e^{-C_1 T}. \end{aligned}$$
(4.9)

In particular, there exists some \(\eta >0\) such that \(E_{x_0}e^{\eta \widetilde{\Theta }_0}<\infty \), and

$$\begin{aligned} E_{x_0}\exp \left\{ \eta \left( \left| \int _0^{\widetilde{\Theta }_0\wedge T}X_t\mathrm{d}t\right| \vee \left| \int _0^{\widetilde{\Theta }_0}X_t\mathrm{d}t\right| \right) \right\}<\infty ,\quad E_{x_0}\exp \left\{ \eta \int _0^{\widetilde{\Theta }_0}X_t^2\mathrm{d}t\right\} <\infty . \end{aligned}$$

Proof

Firstly, if \(b_L\le x_0\le b_L+\frac{b_U-b_L}{4}\), then \(\widetilde{\Theta }_0=\tau _X(b_L+\frac{b_U-b_L}{2})\). Under \(X_0=Y_0=x_0\), we have \(X_t\ge Y_t\) for  \(t\le \tau _X(b_U)\), and hence \(\tau _X(b_L+\frac{b_U-b_L}{2})\le \tau _Y(b_L+\frac{b_U-b_L}{2})\), where  Y and  \(\tau _Y\) are defined by (2.7) and (2.8). By Corollary 3.1 in Alili et al.  [1], we have for  T large enough

$$\begin{aligned} P_{x_0}\Big (\widetilde{\Theta }_0>T\Big )= & {} P_{x_0}\Big (\tau _X(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{2})> T\Big ) \le P_{x_0}\Big (\tau _Y(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{2})>T\Big ) \nonumber \\\le & {} C_0e^{-C_1 T}, \end{aligned}$$
(4.10)

where \(C_0, C_1\) are positive constants depending only on \(x_0, b_L, b_U, \theta \) and \(\gamma \).

Secondly, if \(b_L+\frac{3(b_U-b_L)}{4}\le x_0\le b_U\), then \(\widetilde{\Theta }_0=\tau _X(b_L+\frac{b_U-b_L}{2})\). Under \(X_0=Y_0=x_0\), we have \(X_t\le Y_t\) for  \(t\le \tau _X(b_L)\), and hence \(\tau _X(b_L+\frac{b_U-b_L}{2})\le \tau _Y(b_L+\frac{b_U-b_L}{2})\). By Corollary 3.1 in Alili et al.  [1], we have for  T large enough

$$\begin{aligned} P_{x_0}\Big (\widetilde{\Theta }_0>T\Big )= & {} P_{x_0}\Big (\tau _X(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{2})> T\Big ) \le P_{x_0}\Big (\tau _Y(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{2})>T\Big ) \nonumber \\\le & {} C_0e^{-C_1 T}. \end{aligned}$$
(4.11)

Thirdly, if \(b_L+\frac{b_U-b_L}{4}\le x_0\le b_L+\frac{3(b_U-b_L)}{4}\), then under \(X_0=Y_0=x_0\) we have \(X_t=Y_t\) on the interval \([0,\tau _X(b_L)\wedge \tau _X(b_U)]\), and hence

$$\begin{aligned}&\tau _X\left( b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{4}\right) \wedge \tau _X\left( b_\mathrm{L}+\frac{3(b_\mathrm{U}-b_\mathrm{L})}{4}\right) \\&\quad =\tau _Y\left( b_L+\frac{b_\mathrm{U}-b_\mathrm{L}}{4}\right) \wedge \tau _Y\left( b_\mathrm{L}+\frac{3(b_\mathrm{U}-b_\mathrm{L})}{4}\right) =\widetilde{\alpha }_1. \end{aligned}$$

Consequently, by the strong Markov property,

$$\begin{aligned}&P_{x_0}\Big (\widetilde{\Theta }_0>T\Big )\\&\quad \le P_{x_0}\Big (\widetilde{\alpha }_1>T/2\Big )+P_{x_0}\Big (\widetilde{\alpha }_2-\widetilde{\alpha }_{1}>T/2\Big )\\&\quad \le P_{x_0}\Big (\tau _X(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{4})\wedge \tau _X(b_\mathrm{L}+\frac{3(b_\mathrm{U}-b_\mathrm{L})}{4})>T/2\Big )\\&\qquad +P_{b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{4}}\Big (\tau _X(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{2})>T/2\Big )\\&\qquad \cdot P_{x_0}\Big (\tau _X(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{4})<\tau _X(b_\mathrm{L}+\frac{3(b_\mathrm{U}-b_\mathrm{L})}{4})\Big )\\&\qquad +P_{b_\mathrm{L}+\frac{3(b_\mathrm{U}-b_\mathrm{L})}{4}}\Big (\tau _X(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{2})>T/2\Big )\\&\qquad \cdot P_{x_0}\Big (\tau _X(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{4})\ge \tau _X(b_\mathrm{L}+\frac{3(b_\mathrm{U}-b_\mathrm{L})}{4})\Big )\\&\quad \le P_{x_0}\Big (\tau _Y(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{4})\wedge \tau _Y(b_\mathrm{L}+\frac{3(b_\mathrm{U}-b_\mathrm{L})}{4})>T/2\Big )\\&\qquad +P_{b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{4}}\Big (\tau _Y(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{2})>T/2\Big ) +P_{b_\mathrm{L}+\frac{3(b_\mathrm{U}-b_\mathrm{L})}{4}}\Big (\tau _Y(b_\mathrm{L}+\frac{b_\mathrm{U}-b_\mathrm{L}}{2})>T/2\Big ). \end{aligned}$$

Using Corollary 3.1 in Alili et al.  [1] again, we have for  T large enough

$$\begin{aligned} P_{x_0}\Big (\widetilde{\Theta }_0>T\Big )\le C_0e^{-C_1 T}. \end{aligned}$$
(4.12)

Therefore, combining (4.10), (4.11) and (4.12) completes the proof of  (4.8).

Finally, since \(\sup _{t\in [0,\infty )}|X_t|\le |b_L|\vee |b_U|\), (4.9) follows from (4.8).

\(\square \)

4.3 Exponential Equivalence and Moderate Deviations

By (4.6), (4.7) and Lemma 4.1, and following the same procedure as in the proof of Proposition 2.1, we obtain the following exponential equivalence results; the proofs are omitted.

Proposition 4.1

Let \(\lambda _T\) be defined by (1.5). For all \(\delta >0\) and \(b_{L}\le x_0\le b_U\), we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t\mathrm{d}t-\widetilde{\mu }_1T\Big |\ge \delta \Big )=-\infty \end{aligned}$$

and

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{T}\Big |\int _0^TX_t^2\mathrm{d}t-\widetilde{\mu }_2T\Big |\ge \delta \Big )=-\infty . \end{aligned}$$

In particular, for any \(\beta \in \mathbb {R}\)

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{T}\Big |\int _0^T\left( \beta -X_t\right) ^2\mathrm{d}t-\big (\beta ^2-2\beta \widetilde{\mu }_1+\widetilde{\mu }_2\big )T\Big |\ge \delta \Big ) =-\infty . \end{aligned}$$
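Proposition 4.1 asserts that the time averages \(\frac{1}{T}\int _0^TX_t\mathrm{d}t\) and \(\frac{1}{T}\int _0^TX_t^2\mathrm{d}t\) concentrate around \(\widetilde{\mu }_1\) and \(\widetilde{\mu }_2\). As a quick numerical sanity check (with hypothetical parameter values), the sketch below compares these averages along a simulated path with the moments of \(\widetilde{\pi }\) computed by quadrature on \([b_L,b_U]\).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical illustrative parameters.
theta, gamma, b_L, b_U = 1.0, 0.5, 0.0, 2.0
T, n = 500.0, 500_000
dt = T / n

# Stationary moments of pi~ by quadrature on [b_L, b_U] (uniform grid, so the
# grid spacing cancels in the ratios).
x = np.linspace(b_L, b_U, 100_001)
w = np.exp(-theta * (x - gamma / theta) ** 2)
mu1 = (x * w).sum() / w.sum()          # \widetilde{mu}_1
mu2 = (x ** 2 * w).sum() / w.sum()     # \widetilde{mu}_2

# Time averages along a projected Euler path of the reflected process.
dW = rng.normal(0.0, np.sqrt(dt), n)
X = np.empty(n + 1)
X[0] = 1.0
for k in range(n):
    free = X[k] + (-theta * X[k] + gamma) * dt + dW[k]
    X[k + 1] = min(max(free, b_L), b_U)

avg1 = X[:-1].sum() * dt / T           # (1/T) ∫_0^T X_t dt
avg2 = (X[:-1] ** 2).sum() * dt / T    # (1/T) ∫_0^T X_t^2 dt
print(avg1, mu1, avg2, mu2)
```

For large T, the two pairs of numbers should be close, consistent with the superexponential concentration in the proposition.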

Now, following the same line of argument as in the proofs of Lemmas 3.1 and 3.2, we have

Lemma 4.2

Let \(\lambda _T\),  \(\widetilde{M}_T, \widetilde{R}_T, \widehat{\mu }_T, \widehat{\sigma }^2_T, \widetilde{\mu }_1, \widetilde{\mu }_2\) be defined by (1.5), (4.3), (4.4) and (4.5).

  1. (1)

    The family \(\Big \{\frac{\widetilde{M}_T}{\sqrt{T}\lambda _T}, T>0\Big \}\) satisfies the large deviation principle with speed \(\lambda _T^2\) and rate function

    $$\begin{aligned} \widetilde{I}(x)=\frac{1}{2}x^{\tau }\widetilde{\Sigma }^{-1}x,\quad x\in \mathbb {R}^2, \end{aligned}$$

    where

    $$\begin{aligned} \widetilde{\Sigma }=\big (\widetilde{\mu }_2-\widetilde{\mu }_1^2\big )^{-1}\left( \begin{array}{ll}1 &{} \widetilde{\mu }_1\\ \widetilde{\mu }_1 &{} \widetilde{\mu }_2 \end{array}\right) . \end{aligned}$$
  2. (2)

    For any \(\delta >0\),

    $$\begin{aligned}&\lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\Big |\widehat{\sigma }_T^2-\big (\widetilde{\mu }_2-\widetilde{\mu }_1^2\big )\Big |\ge \delta \Big ) =-\infty ,\quad \\&\lim _{T\rightarrow \infty }\frac{1}{\lambda _T^2}\log P_{x_0}\Big (\frac{1}{\lambda _T}\Big |\widetilde{R}_T\Big |\ge \delta \Big )=-\infty . \end{aligned}$$

By (4.2), Lemma 4.2 and the same argument as in the proof of Theorem 1.1, we can establish the moderate deviations for  \(\left( \widetilde{\theta }_T-\theta , \widetilde{\gamma }_T-\gamma \right) \).

Theorem 4.1

For \(\lambda _T\) defined by (1.5), the family \(\left\{ \frac{\sqrt{T}}{\lambda _T}\left( \begin{array}{ll}\widetilde{\theta }_T-\theta \\ \widetilde{\gamma }_T-\gamma \\ \end{array}\right) , T>0\right\} \) satisfies the large deviation principle with speed \(\lambda _T^2\) and rate function \(\widetilde{I}(x)\).