1 Introduction

Let \(\Lambda :=(\Omega , \mathcal {F}, \mathcal {P})\) be a filtered probability space equipped with a filtration \((\mathcal {F}_{t})_{t\ge 0}\) satisfying the usual conditions. The reflected Ornstein–Uhlenbeck (ROU) process \(\{X_{t}, t\ge 0\}\), reflected at the boundary \(b\in \mathcal {R}_{+}\cup \{0\}\), is defined on \(\Lambda \) as follows: let \(\{X_{t}, t\ge 0\}\) be the strong solution, whose existence is guaranteed by an extension of the results of Lions and Sznitman [32], to the stochastic differential equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{d}X_{t}=-\alpha X_{t} \mathrm{d}t+\sigma \mathrm{d}W_{t}+\mathrm{d}L_{t},\\ X_{t}\ge b, &{}\text{ for } \text{ all } t\ge 0,\\ X_{0}=x, \end{array}\right. } \end{aligned}$$
(1.1)

where \(b\ge 0\), \(x\in [b, +\infty )\), \(\sigma \in (0, +\infty )\), \(\alpha \in \mathcal {R}\) and \(\{W_{t}, t\ge 0\}\) is a one-dimensional standard Wiener process. \(L=(L_{t})_{t\ge 0}\) is the minimal nondecreasing and nonnegative process that keeps \(X_{t}\ge b\) for all \(t\ge 0\). The process L increases only when X hits the boundary b, so that

$$\begin{aligned} \int _{[0,\infty )}I(X_{t}>b)\mathrm{d}L_{t}=0, \end{aligned}$$
(1.2)

where \(I(\cdot )\) denotes the indicator function. Sometimes, L is called the regulator of the point b (see Harrison [20]), and by virtue of Ata et al. [3], the paths of the regulator are nondecreasing, right continuous with left limits, and possess the support property

$$\begin{aligned} \int _{0}^{t}I(X_{s}=b)\mathrm{d}L_{s}=L_{t}. \end{aligned}$$
(1.3)

It can be shown that (see, e.g. Harrison [20] and Ward [43]) the process L has an explicit expression as

$$\begin{aligned} L_{t}= & {} \max \bigg \{0,~\sup \limits _{s\in [0,~t]}\bigg (-x+\alpha \int _{0}^{s}X_{u}\mathrm{d}u-\sigma W_{s}\bigg ) \bigg \}\nonumber \\= & {} \max \bigg \{0,~\sup \limits _{s\in [0,~t]}\bigg (L_{s}-X_{s}\bigg ) \bigg \}. \end{aligned}$$
(1.4)

Another way to construct the ROU process is via the theory of local time. In fact, the regulator L is closely related to \(\ell =\{\ell _{t}^{b};~b\ge 0\}\), the local time process of the ROU process X at the point b [11], i.e.

$$\begin{aligned} L_{t}=\frac{1}{2}\ell _{t}^{b} =\lim _{\varepsilon \rightarrow 0}\frac{1}{2\varepsilon } \int _{0}^{t}I(0<X_{s}-b<\varepsilon )\mathrm{d}s. \end{aligned}$$
(1.5)
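The construction (1.1)–(1.5) can be illustrated numerically. The following sketch (an Euler scheme with the reflection applied pathwise; all parameter values are arbitrary illustrative choices, not taken from the paper) simulates the ROU process and accumulates the regulator L as the minimal push keeping the path at or above b:

```python
import math
import random

def simulate_rou(alpha, sigma, x0, b, T, dt, seed=0):
    """Euler sketch of (1.1): dX = -alpha*X dt + sigma dW + dL, X >= b.

    At each step the Skorokhod construction pushes the path back to the
    boundary b; the size of the push is accumulated in the regulator L.
    """
    rng = random.Random(seed)
    X, L = x0, 0.0
    path, regulator = [X], [L]
    for _ in range(int(T / dt)):
        dW = rng.gauss(0.0, math.sqrt(dt))
        X = X - alpha * X * dt + sigma * dW
        if X < b:                 # reflection: minimal push back to b
            L += b - X            # increment dL_t of the regulator
            X = b
        path.append(X)
        regulator.append(L)
    return path, regulator

path, reg = simulate_rou(alpha=1.0, sigma=1.0, x0=0.5, b=0.0, T=10.0, dt=0.001)
print(min(path) >= 0.0)                                       # X_t >= b for all t
print(all(reg[i] <= reg[i + 1] for i in range(len(reg) - 1))) # L nondecreasing
```

By construction, L increases only at the steps where the path would otherwise cross the boundary, mirroring (1.2).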

In many cases, stochastic processes are not allowed to cross a certain boundary, or are even supposed to remain between two boundaries. A process with reflection behaves like the standard Ornstein–Uhlenbeck process in the interior of its domain. However, when it reaches the boundary, the sample path returns to the interior in such a way that the “pushing” force is minimal. Such processes, which find applications in queueing systems, financial engineering and mathematical biology, have attracted the attention of scholars around the world.

Many attempts have been made to study the ROU processes, both in theory and in application. For example, Ricciardi and Sacerdote [39] applied the ROU processes in mathematical biology. Krugman [26] confined currency exchange rate dynamics to a target zone between two reflecting barriers. Goldstein and Keirstead [18] explored the term structure of interest rates for short-rate processes with reflecting boundaries. Hanson et al. [19] investigated asset pricing models with truncated price distributions. Linetsky [31] studied the analytical representation of the transition density for reflected diffusions in terms of their Sturm–Liouville spectral expansions. Bo et al. [8, 9] applied the ROU processes to model the dynamics of asset prices in a regulated market and calculated the conditional default probability under incomplete (or partial) market information. Ward and Glynn [40,41,42] showed that the ROU processes serve as a good approximation for a Markovian queue with reneging when the arrival rate is close to or exceeds the processing rate and the reneging rate is small; they also showed that the ROU processes well approximate queues with renewal arrival and service processes in which customers have deadlines constraining their total sojourn time. Customers either renege from the queue when their deadline expires or balk if the conditional expected waiting time given the queue length exceeds their deadline.

In practice, some important aspects of performance of a queueing system (e.g. customers’ waiting times and traffic intensities) may not be directly observable, and therefore, such performance measures and their related model parameters need to be statistically inferred from the available observed data. In the case of Ornstein–Uhlenbeck processes driven by Wiener processes, the statistical inference for these processes has been studied, and a comprehensive survey of various methods was given in Prakasa Rao [37] and Bishwal [7].

From the statistical viewpoint, for the classical Ornstein–Uhlenbeck process

$$\begin{aligned} \mathrm{d}X_{t}=\theta X_{t}\mathrm{d}t+\mathrm{d}W_{t},~t\in [0,~T], \end{aligned}$$
(1.6)

involving the unknown parameter \(\theta \in \mathcal {R}\), the maximum likelihood estimator of \(\theta \), based on the observation of a sample path of the process over the finite interval [0,  T], is given by

$$\begin{aligned} \hat{\theta }_{T}=\frac{\int _{0}^{T}X_{s}\mathrm{d}X_{s}}{\int _{0}^{T}X_{s}^{2}\mathrm{d}s}, \end{aligned}$$

and its behaviour as \(T\rightarrow \infty \) is well known (see, e.g. Bishwal [7], Feigin [17], Kutoyants [28]).
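For illustration, the MLE above can be evaluated on a discretized sample path by replacing the stochastic and time integrals with Riemann–Itô sums. The sketch below (an Euler simulation with arbitrarily chosen parameter values, not from the paper) does this for the case \(\theta <0\):

```python
import math
import random

def simulate_ou(theta, x0, T, dt, seed=1):
    """Euler path of dX = theta*X dt + dW on [0, T] (illustrative sketch)."""
    rng = random.Random(seed)
    X = [x0]
    for _ in range(int(T / dt)):
        X.append(X[-1] + theta * X[-1] * dt + rng.gauss(0.0, math.sqrt(dt)))
    return X

def mle_theta(X, dt):
    """Discretized MLE: sum_i X_i (X_{i+1} - X_i) / sum_i X_i^2 dt."""
    num = sum(X[i] * (X[i + 1] - X[i]) for i in range(len(X) - 1))
    den = sum(x * x * dt for x in X[:-1])
    return num / den

theta = -1.0                        # ergodic case (i) below: theta < 0
X = simulate_ou(theta, x0=0.0, T=200.0, dt=0.01)
theta_hat = mle_theta(X, dt=0.01)
print(abs(theta_hat - theta) < 0.5)
```

Here the fluctuation of \(\hat{\theta }_{T}\) around \(\theta \) is of order \(T^{-1/2}\), consistent with case (i) below.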

  1. (i)

    If the unknown parameter \(\theta <0\), the process X of (1.6) is positive recurrent and ergodic with invariant distribution \(\mathcal {N}(0,~\frac{1}{-2\theta })\), and as \(T\rightarrow \infty \) it holds that

    $$\begin{aligned} \sqrt{T}(\hat{\theta }_{T}-\theta )\overset{\mathcal {D}}{\rightarrow }\mathcal {N}(0,~-2\theta ). \end{aligned}$$

    Here and in the sequel, \(\overset{\mathcal {D}}{\rightarrow }\) denotes convergence in distribution and \(\mathcal {N}\) denotes the normal distribution.

  2. (ii)

    If \(\theta =0\), the process X of (1.6) is null recurrent with limiting distribution

    $$\begin{aligned} T(\hat{\theta }_{T}-\theta )\overset{\mathcal {D}}{\rightarrow }\frac{\int _{0}^{1}\tilde{W}_{t}\mathrm{d}\tilde{W}_{t}}{\int _{0}^{1}\tilde{W}_{t}^{2}\mathrm{d}t}=\frac{\tilde{W}_{1}^{2}-1}{2\int _{0}^{1}\tilde{W}_{t}^{2}\mathrm{d}t}, \end{aligned}$$

    as \(T\rightarrow \infty \), where \(\{\tilde{W}_{t},~t\in [0,~\infty )\}\) is another Wiener process. Observe that this limiting distribution is neither normal nor a mixture of normals.

  3. (iii)

    If \(\theta >0\), the process X of (1.6) is transient (not recurrent); it holds that \(|X_{t}|\rightarrow \infty \) as \(t\rightarrow \infty \) with probability one and

    $$\begin{aligned} \frac{1}{\sqrt{2\theta }}e^{\theta T}(\hat{\theta }_{T}-\theta )\overset{\mathcal {D}}{\rightarrow }\frac{v}{X_{0}+\xi ^{\theta }} \end{aligned}$$

    on \(\{X_{0}+\xi ^{\theta }\ne 0\}\), as \(T\rightarrow \infty \), where \(v\sim \mathcal {N}(0,~1)\) and \(\xi ^{\theta }\sim \mathcal {N}(0,~\frac{1}{2\theta })\) are two independent Gaussian random variables.

    Furthermore, Jiang and Xie [24] studied the asymptotic behaviour of the trajectory fitting estimator for the Ornstein–Uhlenbeck process with linear drift by the method of multiple Wiener–Itô integrals developed by Major [33, 34, 36]. Zang and Zhang [45] used tools of stochastic analysis to study parameter estimation for generalized diffusion processes with a reflecting boundary. The trajectory fitting estimator (TFE) was first proposed by Kutoyants [27] as a numerically attractive alternative to the well-developed maximum likelihood estimator (MLE) for continuous diffusion processes (cf. Dietz and Kutoyants [15, 16], Dietz [14], and Kutoyants [28]). Further, Hu and Long [21] applied the trajectory fitting method combined with the weighted least squares technique to Ornstein–Uhlenbeck processes driven by \(\alpha \)-stable Lévy motions. To introduce the TFE, let

    $$\begin{aligned} A_{t}:=\int _{0}^{t}X_{s}\mathrm{d}s,\\ X_{t}(\alpha ):=x-\alpha A_{t}, \end{aligned}$$

    \(t>0\). Define a distance process by

    $$\begin{aligned} D_{T}(\alpha ):=\int _{0}^{T}(X_{t}-X_{t}(\alpha ))^{2}\mathrm{d}t, \end{aligned}$$

    \(T>0\). An \(\mathcal {F}_{T}\)-measurable statistic \(\hat{\alpha }_{T}\) is called a TFE if

    $$\begin{aligned} \hat{\alpha }_{T}:=\mathop {\mathrm {argmin}}\limits _{\alpha }D_{T}(\alpha ). \end{aligned}$$

    In the present case, \(\hat{\alpha }_{T}\) can be calculated explicitly as

    $$\begin{aligned} \hat{\alpha }_{T}=-\frac{\int _{0}^{T}(X_{t}-X_{0})A_{t}\mathrm{d}t}{\int _{0}^{T}A_{t}^{2}\mathrm{d}t},~~~T>0. \end{aligned}$$
    (1.7)
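As a numerical illustration of (1.7), the sketch below computes the TFE from an Euler-discretized path of (1.1) with \(b=0\) and \(\sigma =1\) in the nonergodic case \(\alpha <0\); the parameter values and step size are arbitrary illustrative choices:

```python
import math
import random

def tfe(X, x0, dt):
    """TFE of (1.7): alpha_hat = - int (X_t - X_0) A_t dt / int A_t^2 dt,
    with A_t = int_0^t X_s ds approximated by Riemann sums."""
    A, As = 0.0, []
    for x in X:
        A += x * dt
        As.append(A)
    num = sum((X[i] - x0) * As[i] * dt for i in range(len(X)))
    den = sum(a * a * dt for a in As)
    return -num / den

# Euler path of (1.1) with b = 0, sigma = 1 and alpha < 0 (nonergodic case)
rng = random.Random(2)
alpha, dt, x0 = -0.3, 0.001, 1.0
X = [x0]
for _ in range(int(20.0 / dt)):
    x = X[-1] - alpha * X[-1] * dt + rng.gauss(0.0, math.sqrt(dt))
    X.append(max(x, 0.0))        # reflection at the boundary b = 0
alpha_hat = tfe(X, x0, dt)
print(abs(alpha_hat - alpha) < 0.1)
```

The exponential growth of \(A_{t}\) in this case makes the estimation error shrink very quickly in \(T\), in line with Theorem 2.2 below.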

It is often the case that the reflecting barrier is assumed to be zero in applications to queueing system, storage model, engineering, finance, etc. This is mainly due to the physical restriction of the state processes such as queue length, inventory level, content process, stock prices and interest rates, which take nonnegative values.

Note that \(\sigma \) in our model is a constant independent of the parameter \(\alpha \), and the quadratic variation process \([X]_{t}\) equals \(\sigma ^{2}t\), \(t\ge 0\). We therefore assume that \(\sigma \) is known and set it equal to one in the setting of continuous observations.

The rest of this paper is organized as follows: In Sect. 2, the Skorohod equation and the integral version of the Toeplitz lemma, which will be useful for our proofs, are formulated. In Sect. 3, the proofs of our main results including the law of iterated logarithm, strong consistency and asymptotic distribution are presented. In Sect. 4, the paper is concluded, and some opportunities for future research are outlined. In particular, we focus on the further discussion of ergodic case, i.e. \(\alpha >0\), in our model and find that this kind of estimator is not strongly consistent in ergodic case.

2 Main Results

2.1 Preliminaries

Now we give the following key lemma, which comes from Karatzas and Shreve [25] and will play an important role in the proof of our main results.

Lemma 2.1

(The Skorohod equation) Let \(z\ge 0\) be a given number and \(y(\cdot )=\{y(t);~0\le t<\infty \}\) a continuous function with \(y(0)=0\). There exists a unique continuous function \(k(\cdot )=\{k(t);~0\le t<\infty \}\) such that

  1. (1)

    \(x(t):=z+y(t)+k(t)\ge 0\), \(0\le t<\infty \),

  2. (2)

    \(k(0)=0\), \(k(\cdot )\) is nondecreasing, and

  3. (3)

    \(k(\cdot )\) is flat off \(\{t\ge 0;~x(t)=0\}\), i.e.

    $$\begin{aligned} \int _{0}^{\infty }I(x(s)>0)\mathrm{d}k(s)=0. \end{aligned}$$

    Then, the function \(k(\cdot )\) is given by

    $$\begin{aligned} k(t)=\max \bigg [0,~\max \limits _{0\le s\le t}\{-(z+y(s))\}\bigg ],~0\le t<\infty . \end{aligned}$$
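A discrete-time version of Lemma 2.1 is immediate to implement and check: given a sampled input path y, the sketch below (with an arbitrary toy input, chosen only for illustration) builds \(k\) via the explicit formula and verifies properties (1) and (2):

```python
import math

def skorokhod(z, y):
    """Discrete Skorokhod map of Lemma 2.1:
    k(t) = max(0, max_{s<=t} {-(z + y(s))}),  x(t) = z + y(t) + k(t)."""
    running = float("-inf")
    k = []
    for yt in y:
        running = max(running, -(z + yt))
        k.append(max(0.0, running))
    return k

# toy continuous input with y(0) = 0 on a uniform grid (illustrative choice)
grid = [i * 0.01 for i in range(1001)]
y = [math.sin(t) - t for t in grid]
z = 0.2
k = skorokhod(z, y)
x = [z + y[i] + k[i] for i in range(len(y))]
print(min(x) >= -1e-12)                                          # property (1)
print(k[0] == 0.0 and all(k[i] <= k[i + 1] for i in range(len(k) - 1)))  # (2)
```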

Another important lemma is the following well-known integral version of the Toeplitz lemma, which comes from Dietz and Kutoyants [15].

Lemma 2.2

(Toeplitz lemma) If \(\varphi _{T}\) is a probability measure defined on \([0,~\infty )\) such that \(\varphi _{T}([0,~K])\rightarrow 0\) as \(T\rightarrow \infty \) for each \(K>0\), then

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\int _{0}^{\infty }f_{t}\varphi _{T}(\mathrm{d}t)=f_{\infty } \end{aligned}$$

for every bounded and measurable function \(f:~[0,~\infty )\rightarrow \mathbb {R}\) for which the limit \(f_{\infty }:=\lim \limits _{t\rightarrow \infty }f_{t}\) exists.
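A concrete instance of Lemma 2.2: take \(\varphi _{T}(\mathrm{d}t)=\frac{3t^{2}}{T^{3}}\mathrm{d}t\) on [0,  T] (the weight that appears in Sect. 4), for which \(\varphi _{T}([0,~K])=(K/T)^{3}\rightarrow 0\). The sketch below checks numerically that the weighted mean approaches \(f_{\infty }\) for one bounded convergent f; the choice of f is arbitrary:

```python
import math

def toeplitz_weighted_mean(f, T, n=100000):
    """Midpoint Riemann sum for int_0^T f(t) * (3 t^2 / T^3) dt,
    i.e. the phi_T-mean of f with phi_T(dt) = 3 t^2 / T^3 dt on [0, T]."""
    dt = T / n
    return sum(f((i + 0.5) * dt) * 3.0 * ((i + 0.5) * dt) ** 2 / T ** 3 * dt
               for i in range(n))

f = lambda t: math.exp(-1.0 / (1.0 + t))   # bounded, with f_infty = 1
print(abs(toeplitz_weighted_mean(f, T=1000.0) - 1.0) < 0.01)
```

The mass of \(\varphi _{T}\) concentrates near T, so only the tail behaviour of f matters, which is exactly the content of the lemma.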

2.2 Our Main Results

Theorem 2.1

Suppose \(\alpha <0\) in (1.1). Then

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\hat{\alpha }_{T}=\alpha ,~\mathrm{a.s.}, \end{aligned}$$

i.e. \(\hat{\alpha }_{T}\) is strongly consistent. If \(\alpha =0\) in (1.1), then \(\lim \limits _{T\rightarrow \infty }\hat{\alpha }_{T}=\alpha \) in probability.

Theorem 2.2

Suppose \(\alpha <0\) in (1.1). Then

$$\begin{aligned} \frac{e^{-\alpha T}}{\sqrt{T}}(\hat{\alpha }_{T}-\alpha )\overset{\mathcal {D}}{\rightarrow }\frac{-2\alpha \nu }{|\eta +\beta |}, \end{aligned}$$
(2.1)

where \(\nu \) is a standard normal random variable independent of \(\eta \) and \(\beta \), \(\eta =x-\check{W}_{\frac{1}{-2\alpha }}\), \(\beta =\max [0,~-x+\max \nolimits _{0\le s\le \frac{1}{-2\alpha }}\check{W}_{s}]\), \(\{\check{W}_{t},~t\ge 0\}\) is another Wiener process and \(-\int _{0}^{t}e^{\alpha u}\mathrm{d}W_{u}=\check{W}_{\frac{1-e^{2\alpha t}}{-2\alpha }}\) for each \(t\ge 0\).

Theorem 2.3

Suppose \(\alpha =0\) and, without loss of generality, \(x=0\) in (1.1). Then

$$\begin{aligned}&2T\hat{\alpha }_{T}\overset{\mathcal {D}}{\rightarrow }- \frac{\left( \int _{0}^{1}(\hat{W}_{t} +\hat{L}_{t} )\mathrm{d}t\right) ^{2}}{\int _{0}^{1}\left( \int _{0}^{t}(\hat{W}_{s} +\hat{L}_{s} )\mathrm{d}s\right) ^{2}\mathrm{d}t}, \end{aligned}$$

where \(\{\hat{W}_{t},~t\ge 0\}\) is another Wiener process and \(\hat{L}_t=\max \{0,\max _{0\le s\le t} (-\hat{W}_s)\}\).

3 Proofs

Throughout this paper, we denote by \(P_{\alpha }^{T}\) the probability measure generated by the process \(\{X_{t}, 0\le t \le T\}\) on the space \((\mathcal {C}[0, T], \mathcal {B}_{T})\), where \(\mathcal {C}[0, T]\) denotes the space of continuous functions endowed with the supremum norm, and \(\mathcal {B}_{T}\) is the corresponding Borel \(\sigma \)-algebra. Let \(E_{\alpha }\) denote expectation with respect to \(P_{\alpha }^{T}\) and \(P_{W}^{T}\) be the probability measure induced by the standard Wiener process.

Proof of Theorem 2.1

(i) If \(\alpha <0\), then the process X of (1.1) is not recurrent.

Applying Itô’s formula to the function \(e^{\alpha t}X_{t}\), we get

$$\begin{aligned} \mathrm{d}e^{\alpha t}X_{t}= & {} \alpha e^{\alpha t}X_{t}\mathrm{d}t+e^{\alpha t}(-\alpha X_{t} \mathrm{d}t+\mathrm{d}W_{t}+\mathrm{d}L_{t})\nonumber \\= & {} e^{\alpha t}\mathrm{d}W_{t}+e^{\alpha t}\mathrm{d}L_{t}. \end{aligned}$$

Integrating both sides from 0 to t, we have

$$\begin{aligned} e^{\alpha t}X_{t}=x+\int _{0}^{t}e^{\alpha s}\mathrm{d}W_{s}+\int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}. \end{aligned}$$
(3.1)

Because the process L increases only when X hits the boundary zero, \(\int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}\) is a continuous nondecreasing process which makes \(x+\int _{0}^{t}e^{\alpha s}\mathrm{d}W_{s}+\int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}\ge 0\) and increases only when \(x+\int _{0}^{t}e^{\alpha s}\mathrm{d}W_{s}+\int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}=0\). By virtue of Lemma 2.1, we have

$$\begin{aligned} \int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}=\max \bigg [0,~\max \limits _{0\le s\le t}\{-(x+\int _{0}^{s}e^{\alpha u}\mathrm{d}W_{u})\}\bigg ]. \end{aligned}$$

For \(\int _{0}^{t}e^{\alpha u}\mathrm{d}W_{u}\), in view of time change for continuous martingale, there exists another Wiener process \(\{\check{W}_{t},~t\ge 0\}\) such that \(-\int _{0}^{t}e^{\alpha u}\mathrm{d}W_{u}=\check{W}_{\frac{1-e^{2\alpha t}}{-2\alpha }}\). Hence, we have

$$\begin{aligned} \int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}=\max \bigg [0,~-x+\max \limits _{0\le s\le \frac{1-e^{2\alpha t}}{-2\alpha }}\check{W}_{s}\bigg ]. \end{aligned}$$

Then, it is straightforward to see that

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }\int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}=\max \bigg [0,~-x+\max \limits _{0\le s\le \frac{1}{-2\alpha }}\check{W}_{s}\bigg ]. \end{aligned}$$
(3.2)

By virtue of Proposition 1.26 in Revuz and Yor [38], we get that

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }\int _{0}^{t}e^{\alpha s}\mathrm{d}W_{s}=:\int _{0}^{\infty }e^{\alpha s}\mathrm{d}W_{s} \end{aligned}$$
(3.3)

from the fact \(\lim \limits _{t\rightarrow \infty }[\int _{0}^{t}e^{\alpha s}\mathrm{d}W_{s}]_{t}=-\frac{1}{2\alpha }<+\infty \), where \([\cdot ]_{t}\) denotes the quadratic variation in [0,  t].

It follows from (3.1) that

$$\begin{aligned} X_{t}=e^{-\alpha t}\bigg (x+\int _{0}^{t}e^{\alpha s}\mathrm{d}W_{s}+\int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}\bigg ). \end{aligned}$$
(3.4)

Thus, together with (3.2) and (3.3), we have

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }e^{\alpha t}X_{t}= & {} \lim \limits _{t\rightarrow \infty }\bigg (x+\int _{0}^{t}e^{\alpha s}\mathrm{d}W_{s}+\int _{0}^{t}e^{\alpha s}\mathrm{d}L_{s}\bigg )\nonumber \\= & {} x- \check{W}_{\frac{1}{-2\alpha }}+\max \bigg [0,~-x+\max \limits _{0\le s\le \frac{1}{-2\alpha }}\check{W}_{s}\bigg ]. \end{aligned}$$
(3.5)

On the other hand, integrating both sides from 0 to T in (3.4), we can conclude that

$$\begin{aligned} \int _{0}^{T}X_{t}\mathrm{d}t= & {} \frac{x}{\alpha }(1-e^{-\alpha T})+\frac{W_{T}-e^{-\alpha T}\int _{0}^{T}e^{\alpha s}\mathrm{d}W_{s}}{\alpha }+\frac{L_{T}-e^{-\alpha T}\int _{0}^{T}e^{\alpha s}\mathrm{d}L_{s}}{\alpha }\nonumber \\= & {} -\frac{e^{-\alpha T}(x+\int _{0}^{T}e^{\alpha s}\mathrm{d}W_{s}+\int _{0}^{T}e^{\alpha s}\mathrm{d}L_{s})}{\alpha }+\frac{x+W_{T}+L_{T}}{\alpha }\nonumber \\= & {} -\frac{X_{T}}{\alpha }+\frac{x+W_{T}+L_{T}}{\alpha }. \end{aligned}$$

Now, we are in a position to study the convergence rate of \(L_{t}\) as \(t\rightarrow \infty \). Let h be a twice continuously differentiable function on \(\mathcal {R}\) with boundary conditions

$$\begin{aligned} h(0)=0,~ h'(0)=1. \end{aligned}$$

By Ito’s formula, we have

$$\begin{aligned} \mathrm{d}h(X_{t})=(-\alpha X_{t}h'(X_{t})+\frac{1}{2}h''(X_{t}))\mathrm{d}t+h'(X_{t})\mathrm{d}W_{t}+h'(X_{t})\mathrm{d}L_{t}. \end{aligned}$$

\(L_{t}\) is a continuous process that increases only when \(X_{t}\) is at the origin. Hence for any continuous function g, one has

$$\begin{aligned} \int _{[0,~T]}g(X_{t})\mathrm{d}L_{t}=g(0)\int _{[0,~T]}I(X_{t}=0)\mathrm{d}L_{t}=g(0)L_{T}, \end{aligned}$$

for any \(T>0\). Then, we have

$$\begin{aligned} \mathrm{d}h(X_{t})=(-\alpha X_{t}h'(X_{t})+\frac{1}{2}h''(X_{t}))\mathrm{d}t+h'(X_{t})\mathrm{d}W_{t}+h'(0)\mathrm{d}L_{t}. \end{aligned}$$
(3.6)

Define the operator \(\mathcal {L}\) as follows

$$\begin{aligned} \mathcal {L}=-\alpha x \frac{\mathrm{d}}{\mathrm{d}x}+\frac{1}{2}\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}. \end{aligned}$$

Consider the ODE

$$\begin{aligned} \mathcal {L}h=0. \end{aligned}$$

It is obvious that

$$\begin{aligned} h'(x)=f(x),~f'(x)-2\alpha x f(x)=0, \end{aligned}$$

By the method of integrating factors, we can solve the above equation:

$$\begin{aligned} f(x)=C_{1}e^{\alpha x^{2}}, \end{aligned}$$

and then the general solution is

$$\begin{aligned} h(x)=C_{2}+\int _{0}^{x}f(u)\mathrm{d}u=C_{2}+C_{1}\int _{0}^{x}e^{\alpha u^{2}}\mathrm{d}u, \end{aligned}$$

so we can conclude that \(C_{1}=1\), \(C_{2}=0\) by \(h(0)=0\), \(h'(0)=1\). Together with (3.6), we have

$$\begin{aligned} L_{t}-h(X_{t})+h(X_{0})=-\int _{0}^{t}h'(X_{s})\mathrm{d}W_{s}, \end{aligned}$$

which is a zero-mean square integrable martingale. It follows from the bounded property of \(h'\) that

$$\begin{aligned} \limsup \limits _{t\rightarrow \infty }\frac{\left[ \int _{0}^{t}h'(X_{s})\mathrm{d}W_{s}\right] _{t}}{t}=\limsup \limits _{t\rightarrow \infty }\frac{\int _{0}^{t}h'^{2}(X_{s})\mathrm{d}s}{t}<\infty , ~\mathrm{a.s.}, \end{aligned}$$

where \([\cdot ]_{t}\) again denotes the quadratic variation on [0,  t]. Then, the strong law of large numbers for continuous local martingales (cf. Mao [36]) yields

$$\begin{aligned} \frac{\int _{0}^{t}h'(X_{s})\mathrm{d}W_{s}}{t}\rightarrow 0 ~\mathrm{a.s.}, \end{aligned}$$

as \(t\rightarrow \infty \).

Since h is bounded,

$$\begin{aligned} e^{\alpha t}(h(X_{t})-h(X_{0}))\rightarrow 0 ~\mathrm{a.s.}, \end{aligned}$$

as \(t\rightarrow \infty \). Thus, we have

$$\begin{aligned} e^{\alpha t}L_{t}\rightarrow 0 ~\mathrm{a.s.} \end{aligned}$$
(3.7)

Further, by (3.5),

$$\begin{aligned} h'^{2}(X_{t})=e^{2\alpha X_t^2}\le \exp \left\{ c_0\alpha e^{-2\alpha t}\right\} \; \mathrm{a.s.} \text { for } t\text { large enough}. \end{aligned}$$

It follows that

$$\begin{aligned} \left[ \int _{0}^{t}h'(X_{s})\mathrm{d}W_{s}\right] _{\infty }=\int _{0}^{\infty }h'^{2}(X_{s})\mathrm{d}s<\infty \;\; \mathrm{a.s.} \end{aligned}$$

Hence, for arbitrary \(\varepsilon >0\),

$$\begin{aligned} \frac{\int _{0}^{t}h'(X_{s})\mathrm{d}W_{s}}{t^{\varepsilon }}\rightarrow 0 ~\mathrm{a.s.}, \end{aligned}$$

as \(t\rightarrow \infty \). It follows that

$$\begin{aligned} \frac{L_{t}}{t^{\varepsilon }}\rightarrow 0 ~\mathrm{a.s.}, \end{aligned}$$
(3.8)

as \(t\rightarrow \infty \). Thus, together with the fact

$$\begin{aligned} \frac{e^{\alpha t}(x+W_{t})}{\alpha }\rightarrow 0 ~\mathrm{a.s.}, \end{aligned}$$

we have

$$\begin{aligned} \frac{e^{\alpha t}(x+W_{t}+L_t)}{\alpha }\rightarrow 0 ~\mathrm{a.s.} \end{aligned}$$

Observe that

$$\begin{aligned} \int _{0}^{T}X_{t}^{2}\mathrm{d}t\rightarrow \infty , \end{aligned}$$

a.s. as \(T\rightarrow \infty \), which follows from a simple comparison result between the ROU process and the regular OU process with the same drift [12]. Then, in view of (3.4), we have

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }e^{\alpha T}\int _{0}^{T}X_{t}\mathrm{d}t= & {} -\frac{x-\check{W}_{\frac{1}{-2\alpha }}+\max \bigg [0,~-x+\max \limits _{0\le s\le \frac{1}{-2\alpha }}\check{W}_{s}\bigg ]}{\alpha }\nonumber \\=: & {} -\frac{\eta +\beta }{\alpha }, \end{aligned}$$
(3.9)

and by L’Hôpital’s rule,

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }e^{2\alpha T}\int _{0}^{T}X_{t}^{2}\mathrm{d}t= & {} \frac{\bigg (x-\check{W}_{\frac{1}{-2\alpha }}+\max \bigg [0,~-x+\max \limits _{0\le s\le \frac{1}{-2\alpha }}\check{W}_{s}\bigg ]\bigg )^{2}}{-2\alpha }\nonumber \\=: & {} -\frac{(\eta +\beta )^{2}}{2\alpha }. \end{aligned}$$
(3.10)

From (1.7) and a simple calculation, we have

$$\begin{aligned} \hat{\alpha }_{T}-\alpha =-\frac{\int _{0}^{T}W_{t}A_{t}\mathrm{d}t}{\int _{0}^{T}A_{t}^{2}\mathrm{d}t}-\frac{\int _{0}^{T}L_{t}A_{t}\mathrm{d}t}{\int _{0}^{T}A_{t}^{2}\mathrm{d}t}=:I+ II. \end{aligned}$$
(3.11)

For I, it is easy to see that

$$\begin{aligned} \int _{0}^{T}A_{t}^{2}\mathrm{d}t\rightarrow \infty ,~\mathrm{a.s.}, \end{aligned}$$

as \(T\rightarrow \infty \). In view of L’Hôpital’s rule and (3.9), we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{\int _{0}^{T}W_{t}A_{t}\mathrm{d}t}{\int _{0}^{T}A_{t}^{2}\mathrm{d}t}=\lim _{T\rightarrow \infty }\frac{W_{T}}{A_{T}}= \lim _{T\rightarrow \infty }\frac{e^{\alpha T}W_{T}}{e^{\alpha T}A_{T}}=0,~\mathrm{a.s.} \end{aligned}$$

For II, in view of L’Hôpital’s rule, (3.7) and (3.9), we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{\int _{0}^{T}L_{t}A_{t}\mathrm{d}t}{\int _{0}^{T}A_{t}^{2}\mathrm{d}t}=\lim _{T\rightarrow \infty }\frac{L_{T}}{A_{T}}= \lim _{T\rightarrow \infty }\frac{e^{\alpha T}L_{T}}{e^{\alpha T}A_{T}}=0,~\mathrm{a.s.} \end{aligned}$$

This completes the desired proof.

(ii) If \(\alpha =0\) and \(x=0\), it is clear that

$$\begin{aligned} X_{t}=W_{t}+L_{t}. \end{aligned}$$

Then

$$\begin{aligned} \int _{0}^{T}A_{t}^{2}\mathrm{d}t= & {} \int _{0}^{T}\bigg (\int _{0}^{t}(W_{s}+L_{s})\mathrm{d}s\bigg )^{2}\mathrm{d}t\nonumber \\= & {} T^4\int _{0}^{1}\left( \int _{0}^{t}\left( \frac{W_{Ts}}{\sqrt{T}}+ \frac{L_{Ts}}{\sqrt{T}}\right) \mathrm{d}s\right) ^{2}\mathrm{d}t \end{aligned}$$

and

$$\begin{aligned} \int _{0}^{T}(W_{t}+L_t)A_{t}\mathrm{d}t= & {} \int _{0}^{T}A_t \mathrm{d}A_t =\frac{1}{2}A_T^2\nonumber \\= & {} \frac{1}{2}\bigg (\int _{0}^{T}(W_{s}+L_{s})\mathrm{d}s\bigg )^{2}\\= & {} \frac{1}{2} T^3 \left( \int _{0}^{1}\left( \frac{W_{Ts}}{\sqrt{T}}+\frac{L_{Ts}}{\sqrt{T}}\right) \mathrm{d}s\right) ^{2}. \end{aligned}$$

By the Skorohod lemma (Lemma 2.1), we have

$$\begin{aligned} L_{t}=\max \bigg \{0,~\max \limits _{0\le s\le t}(-W_{s})\bigg \}. \end{aligned}$$

By the scaling property of Brownian motion, we have

$$\begin{aligned} \left\{ \big (W_t, L_{t}\big ); t\ge 0\right\}&\overset{\mathcal {D}}{=}\left\{ \sqrt{T}\Big (\hat{W}_{t/T}, \max \bigg \{0,~\max \limits _{0\le u\le t}(-\hat{W}_{u/T})\bigg \}\Big );t\ge 0\right\} \\&=:\left\{ \sqrt{T}\Big (\hat{W}_{t/T},\hat{L}_{t/T}\Big );t\ge 0\right\} ,\nonumber \end{aligned}$$
(3.12)

where \(\overset{\mathcal {D}}{=}\) denotes equal in distribution, \(\{\hat{W}_{t},~t\in [0,~\infty )\}\) is another Wiener process and \(\{W_{s},~s\in [0,~\infty )\}\overset{\mathcal {D}}{=}\{\sqrt{T}\hat{W}_{\frac{s}{T}},~s\in [0,~\infty )\}\) for \(T>0\). Therefore, for each T, we have

$$\begin{aligned}&\bigg (\int _{0}^{T}A_{t}^{2}\mathrm{d}t,~\int _{0}^{T}(W_{t}+L_t)A_{t}\mathrm{d}t\bigg )\\&\quad \overset{\mathcal {D}}{=}\bigg (T^{4}\int _{0}^{1}\left( \int _{0}^{t}(\hat{W}_{s} +\hat{L}_{s} )\mathrm{d}s\right) ^{2}\mathrm{d}t, \;\; \frac{1}{2}T^{3} \ \left( \int _{0}^{1}(\hat{W}_{t} +\hat{L}_{t})\mathrm{d}t\right) ^{2} \bigg ). \end{aligned}$$

Therefore, in view of the continuous mapping theorem, we have

$$\begin{aligned} \frac{\int _{0}^{T}(W_{t}+L_t)A_{t}\mathrm{d}t}{\int _{0}^{T}A_{t}^{2}\mathrm{d}t}&\overset{\mathcal {D}}{=}&\frac{1}{2T}\frac{\left( \int _{0}^{1}(\hat{W}_{t} +\hat{L}_{t} )\mathrm{d}t\right) ^{2}}{\int _{0}^{1}\left( \int _{0}^{t}(\hat{W}_{s} +\hat{L}_{s} )\mathrm{d}s\right) ^{2}\mathrm{d}t}\\&\rightarrow 0.\nonumber \end{aligned}$$
(3.13)

Hence, by (3.11), we complete the proof. \(\square \)

Proof of Theorem 2.2

By a direct calculation, we have

$$\begin{aligned} \frac{e^{-\alpha T}}{\sqrt{T}}(\hat{\alpha }_{T}-\alpha )=-\frac{e^{-\alpha T}\int _{0}^{T}W_{t}A_{t}\mathrm{d}t}{\sqrt{T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}-\frac{e^{-\alpha T}\int _{0}^{T}L_{t}A_{t}\mathrm{d}t}{\sqrt{T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}=:\psi _{1}+ \psi _{2}, \end{aligned}$$

where

$$\begin{aligned} \psi _{1}=-\frac{e^{-\alpha T}\int _{0}^{T}W_{t}A_{t}\mathrm{d}t}{\sqrt{T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t} \end{aligned}$$

and

$$\begin{aligned} \psi _{2}=-\frac{e^{-\alpha T}\int _{0}^{T}L_{t}A_{t}\mathrm{d}t}{\sqrt{T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}. \end{aligned}$$

Furthermore, we can decompose \(\psi _{1}\) into

$$\begin{aligned} \psi _{1}=-\frac{e^{-\alpha T}\int _{0}^{T}W_{t}A_{t}\mathrm{d}t}{\sqrt{T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}=: \psi _{11}\times \psi _{12}, \end{aligned}$$

where

$$\begin{aligned} \psi _{11}=-\frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}W_{t}A_{t}\mathrm{d}t \end{aligned}$$

and

$$\begin{aligned} \psi _{12}=\frac{1}{e^{2\alpha T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}. \end{aligned}$$

For \(\psi _{11}\), we have

$$\begin{aligned} \psi _{11}=-\frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}W_{t}A_{t}\mathrm{d}t= & {} \frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}(W_{T}-W_{t})A_{t}\mathrm{d}t-\frac{e^{\alpha T}}{\sqrt{T}}W_{T}\int _{0}^{T}A_{t}\mathrm{d}t\\=: & {} \psi _{111}+ \psi _{112}. \end{aligned}$$

Hence, from (3.5), we get

$$\begin{aligned} |\psi _{111}|\le & {} \frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}|W_{T}-W_{t}|\bigg |\int _{0}^{t}X_{s}\mathrm{d}s\bigg |\mathrm{d}t\nonumber \\\le & {} \frac{1}{-\alpha }\sup _{t\ge 0}|e^{\alpha t}X_{t}|\times \frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}|W_{T}-W_{t}|e^{-\alpha t}\mathrm{d}t\overset{P_{\alpha }^{T}}{\rightarrow }0. \end{aligned}$$
(3.14)

In fact, by Markov’s inequality and Fubini’s theorem, we have for arbitrary \(\varepsilon >0\)

$$\begin{aligned} P_{\alpha }^{T}\bigg (\frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}|W_{T}-W_{t}|e^{-\alpha t}\mathrm{d}t>\varepsilon \bigg )\le & {} \frac{1}{\varepsilon }E_{\alpha }\bigg (\frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}|W_{T}-W_{t}|e^{-\alpha t}\mathrm{d}t\bigg )\\= & {} \frac{1}{\sqrt{T}\varepsilon }e^{\alpha T}\int _{0}^{T}E_{\alpha }|W_{T}-W_{t}|e^{-\alpha t}\mathrm{d}t\\= & {} \frac{1}{\sqrt{T}\varepsilon }\int _{0}^{T}E_{\alpha }|W_{T}-W_{t}|e^{\alpha (T-t)}\mathrm{d}t\\\le & {} \frac{1}{\sqrt{T}\varepsilon }\int _{0}^{T}\sqrt{u}e^{\alpha u}\mathrm{d}u\\\le & {} \frac{1}{\sqrt{T}\varepsilon } \frac{\sqrt{\pi }}{2(-\alpha )^{\frac{3}{2}}}\rightarrow 0, \end{aligned}$$

as \(T\rightarrow \infty \). For the term \(\psi _{112}\), we have

$$\begin{aligned} \psi _{112}=\frac{W_{T}}{\sqrt{T}}\times \frac{-\int _{0}^{T}A_{t}\mathrm{d}t}{e^{-\alpha T}}. \end{aligned}$$
(3.15)

For the first factor, we have

$$\begin{aligned} \frac{\frac{1}{\sqrt{T}}W_{T}}{\eta _{T}+\beta _{T}}=\frac{\frac{1}{\sqrt{T}}(W_{T} -W_{\sqrt{T}})+\frac{1}{\sqrt{T}}W_{\sqrt{T}}}{\eta _{\sqrt{T}} +(\eta _{T}-\eta _{\sqrt{T}})+\beta _{T}}, \end{aligned}$$
(3.16)

here \(\eta _{T}=x+\int _{0}^{T}e^{\alpha t}\mathrm{d}W_{t}\), \(\beta _{T}=\int _{0}^{T}e^{\alpha t}\mathrm{d}L_{t}\). We have the following claims.

(1) The random variable \(\frac{1}{\sqrt{T}}(W_{T}-W_{\sqrt{T}})\) has the normal distribution \(\mathcal {N}(0,~1-\frac{1}{\sqrt{T}})\), which converges weakly to a standard normal random variable \(\nu \) as \(T\rightarrow \infty \).

(2) By the strong law of large numbers, we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{\sqrt{T}}W_{\sqrt{T}}=0,~\mathrm{a.s.} \end{aligned}$$

(3) It is clear that

$$\begin{aligned} \lim _{T\rightarrow \infty }\eta _{\sqrt{T}}=\eta ,~\mathrm{a.s.}, \end{aligned}$$

and it follows from (3.2) that

$$\begin{aligned} \lim _{T\rightarrow \infty }\beta _{T}=\beta ,~\mathrm{a.s.} \end{aligned}$$

(4) \(\frac{1}{\sqrt{T}}(W_{T}-W_{\sqrt{T}})\) is independent of \(\eta \) and \(\beta \).

(5) \(\eta _{T}-\eta _{\sqrt{T}}\) converges to zero in probability as \(T\rightarrow \infty \). Indeed,

$$\begin{aligned} E|\eta _{T}-\eta _{\sqrt{T}}|^{2}=\int _{\sqrt{T}}^{T}e^{2\alpha t}\mathrm{d}t\rightarrow 0 \end{aligned}$$

as \(T\rightarrow \infty \).

From the above claims, we can conclude that

$$\begin{aligned} \psi _{11}\overset{\mathcal {D}}{\rightarrow }\frac{\nu (\eta +\beta )}{\alpha ^{2}} \end{aligned}$$
(3.17)

as \(T\rightarrow \infty \). On the other hand, it follows from (3.5) that

$$\begin{aligned} \lim _{T\rightarrow \infty }\psi _{12}= & {} \lim _{T\rightarrow \infty }\frac{1}{e^{2\alpha T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}\\= & {} \frac{-2\alpha ^{3}}{(\eta +\beta )^{2}}~\mathrm{a.s.}~[P_{\alpha }^{T}]. \end{aligned}$$

Thus, as \(T\rightarrow \infty \),

$$\begin{aligned} \psi _{1}\overset{\mathcal {D}}{\rightarrow }\frac{-2\alpha \nu }{|\eta +\beta |}. \end{aligned}$$
(3.18)

In order to prove our main result, it suffices to study the asymptotic distribution of \(\psi _{2}\). Similarly, we have

$$\begin{aligned} \psi _{2}=-\frac{e^{-\alpha T}\int _{0}^{T}L_{t}A_{t}\mathrm{d}t}{\sqrt{T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}=: \psi _{21}\times \psi _{22}, \end{aligned}$$

where

$$\begin{aligned} \psi _{21}=-\frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}L_{t}A_{t}\mathrm{d}t \end{aligned}$$

and

$$\begin{aligned} \psi _{22}=\frac{1}{e^{2\alpha T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}. \end{aligned}$$

For \(\psi _{21}\), by (3.8), (3.9) and L’Hôpital’s rule, we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\psi _{21}= & {} -\lim _{T\rightarrow \infty }\frac{e^{\alpha T}}{\sqrt{T}}\int _{0}^{T}L_{t}A_{t}\mathrm{d}t\\= & {} -\lim _{T\rightarrow \infty }\frac{L_{T}A_{T}}{e^{-\alpha T}\left( \frac{1}{2\sqrt{T}}-\alpha \sqrt{T}\right) }\\= & {} 0 \;\; \text {in probability}. \end{aligned}$$

By (3.9) and L’Hôpital’s rule, we have

$$\begin{aligned} \lim _{T\rightarrow \infty }\psi _{22}= & {} \lim _{T\rightarrow \infty }\frac{1}{e^{2\alpha T}\int _{0}^{T}A_{t}^{2}\mathrm{d}t}\\= & {} \frac{-2\alpha ^{3}}{\bigg (x-\check{W}_{\frac{1}{-2\alpha }}+\max \bigg [0,~-x+\max \limits _{0\le s\le \frac{1}{-2\alpha }}\check{W}_{s}\bigg ]\bigg )^{2}}=\frac{-2\alpha ^{3}}{(\eta +\beta )^{2}}. \end{aligned}$$

Hence

$$\begin{aligned} \lim _{T\rightarrow \infty }\psi _{2}=0\;\; \text { in probability}. \end{aligned}$$

Then, as \(T\rightarrow \infty \),

$$\begin{aligned} \frac{e^{-\alpha T}}{\sqrt{T}}(\hat{\alpha }_{T}-\alpha )\overset{\mathcal {D}}{\rightarrow }\frac{-2\alpha \nu }{|\eta +\beta |}. \end{aligned}$$

The proof is complete. \(\square \)

Proof of Theorem 2.3

If \(\alpha =0\) and \(x=0\), it is clear that

$$\begin{aligned} X_{t}=W_{t}+L_{t}. \end{aligned}$$

The result then follows from (3.11) and (3.13). \(\square \)
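In the case \(\alpha =0\), \(x=0\), \(b=0\), the Skorokhod construction (1.4) gives the regulator explicitly as \(L_{t}=\max \{0,\sup _{s\le t}(-W_{s})\}\), so that \(X_{t}=W_{t}+L_{t}\ge 0\). The following is a minimal numerical sketch of this construction (the discretization step and all variable names are illustrative, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 100_000

# A discretized standard Wiener path W on [0, n*dt], with W_0 = 0.
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

# Skorokhod reflection at b = 0 with x = 0 and alpha = 0:
# L_t = max(0, sup_{s<=t}(-W_s)),  X_t = W_t + L_t.
L = np.maximum.accumulate(-W)      # running sup of -W; starts at 0, nondecreasing
X = W + L

assert np.all(X >= 0)              # X never leaves [0, infinity)
assert np.all(np.diff(L) >= 0)     # the regulator L is nondecreasing
# Support property (1.3): L increases only while X sits at the boundary.
assert np.all(X[1:][np.diff(L) > 0] == 0.0)
```

The assertions mirror (1.2)–(1.3): the regulator is the minimal nondecreasing process keeping \(X\) nonnegative, and it grows only on the set \(\{X=0\}\).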

4 Concluding Remarks and Future Research

In this paper, we have studied the trajectory fitting estimator for nonergodic reflected Ornstein–Uhlenbeck processes, focusing on the strong consistency and the asymptotic distribution of this estimator. Below we outline the behaviour of the trajectory fitting estimator in the ergodic case.

It is of great interest to investigate the asymptotic behaviour of the trajectory fitting estimator in the ergodic case. If \(\alpha >0\) in our model, the process X of (1.1) is positive recurrent. It can be shown that the process \(\{X(t)\}_{t\ge 0}\) is ergodic and that its unique invariant density is given [23] by

$$\begin{aligned} p(x)=2\sqrt{2\alpha }\phi (\sqrt{2\alpha }x),~x\in [0,~\infty ), \end{aligned}$$

where \(\phi (x)=\frac{1}{\sqrt{2\pi }}e^{-\frac{x^{2}}{2}}\) is the (standard) Gaussian density function. Therefore, the mean ergodic theorem holds [23], i.e.

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }\frac{1}{t}\int _{0}^{t}f(X(s))\mathrm{d}s=\int _{0}^{\infty }f(x)p(x)\mathrm{d}x~\mathrm{a.s.} ~[P_{\alpha }^{T}], \end{aligned}$$
(4.1)

for any \(x\in S:=[0,~\infty )\) and any \(f\in L_{1}(S,~\mathcal {B}(S))\). Taking \(f(x)=x\), we have

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }\frac{A_{t}}{t} =\lim \limits _{t\rightarrow \infty }\frac{1}{t}\int _{0}^{t}X(s)\mathrm{d}s=\int _{0}^{\infty }xp(x)\mathrm{d}x=\frac{1}{\sqrt{\pi \alpha }}~\mathrm{a.s.} ~[P_{\alpha }^{T}]. \end{aligned}$$
(4.2)
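For completeness, the last equality in (4.2) is the mean of the half-normal density p; substituting \(u=\sqrt{2\alpha }x\),

$$\begin{aligned} \int _{0}^{\infty }xp(x)\mathrm{d}x =2\sqrt{2\alpha }\int _{0}^{\infty }x\phi (\sqrt{2\alpha }x)\mathrm{d}x =\frac{2}{\sqrt{2\alpha }}\int _{0}^{\infty }u\phi (u)\mathrm{d}u =\frac{2}{\sqrt{2\alpha }}\cdot \frac{1}{\sqrt{2\pi }} =\frac{1}{\sqrt{\pi \alpha }}, \end{aligned}$$

since \(\int _{0}^{\infty }u\phi (u)\mathrm{d}u=[-\phi (u)]_{0}^{\infty }=\phi (0)=\frac{1}{\sqrt{2\pi }}\).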

It follows from the Toeplitz lemma, the strong law of large numbers and the fact that \(\int _{0}^{T}\frac{t^{2}}{T^{3}/3}\mathrm{d}t=1\) that

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\frac{\int _{0}^{T}W_{t}A_{t}\mathrm{d}t}{\int _{0}^{T}A_{t}^{2}\mathrm{d}t} =\lim \limits _{T\rightarrow \infty }\frac{\int _{0}^{T}\frac{W_{t}}{t}\cdot \frac{A_{t}}{t}\cdot \frac{t^{2}}{T^{3}/3}\mathrm{d}t}{\int _{0}^{T}\frac{A_{t}^{2}}{t^{2}}\cdot \frac{t^{2}}{T^{3}/3}\mathrm{d}t} =0, \end{aligned}$$

and

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\frac{\int _{0}^{T}L_{t}A_{t}\mathrm{d}t}{\int _{0}^{T}A_{t}^{2}\mathrm{d}t} =\lim \limits _{T\rightarrow \infty }\frac{\int _{0}^{T}\frac{L_{t}}{t}\cdot \frac{A_{t}}{t}\cdot \frac{t^{2}}{T^{3}/3}\mathrm{d}t}{\int _{0}^{T}\frac{A_{t}^{2}}{t^{2}}\cdot \frac{t^{2}}{T^{3}/3}\mathrm{d}t} =C\sqrt{\pi \alpha }, \end{aligned}$$

where \(C:=\lim \limits _{t\rightarrow \infty }\frac{L_{t}}{t}\). In fact, from Mandjes and Spreij [35], \(\frac{L_{t}-q_{L}t}{\sqrt{t}}\) converges weakly to \(N(0,~\tau ^{2})\), where \(\tau ^{2}=\int _{0}^{\infty }h'(x)^{2}p(x)\mathrm{d}x<\infty \), \(q_{L}=\frac{1}{2}p(0)=\sqrt{\frac{\alpha }{\pi }}\), and h is a twice continuously differentiable function on \(\mathcal {R}\) with boundary conditions \(h(0)=0\) and \(h'(0)=1\). Thus, we have

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }\frac{L_{t}}{t}=\sqrt{\frac{\alpha }{\pi }}~\mathrm{a.s.} ~[P_{\alpha }^{T}]. \end{aligned}$$
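The two ergodic rates above, \(L_{T}/T\rightarrow \sqrt{\alpha /\pi }\) and \(A_{T}/T\rightarrow 1/\sqrt{\pi \alpha }\), can be checked by simulation. The sketch below uses a reflected (projected) Euler scheme, a standard approximation that is not part of this paper; the step size, horizon and seed are illustrative, and the scheme carries a discretization bias of order \(\sqrt{\Delta t}\) in the regulator:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma = 1.0, 1.0
dt, T = 1e-3, 500.0
n = int(T / dt)
dW = np.sqrt(dt) * rng.standard_normal(n)

x, L, A = 0.0, 0.0, 0.0          # state X_t, regulator L_t, A_t = int_0^t X ds
for k in range(n):
    xi = x - alpha * x * dt + sigma * dW[k]   # unreflected Euler step
    L += max(0.0, -xi)           # minimal push keeping the state nonnegative
    x = max(xi, 0.0)
    A += x * dt

# Theoretical ergodic rates (for alpha = 1 both equal 1/sqrt(pi) ~ 0.564):
# L_T/T -> sqrt(alpha/pi),  A_T/T -> 1/sqrt(pi*alpha).
print(L / T, A / T)
```

For \(\alpha =1\) both empirical rates should lie close to \(1/\sqrt{\pi }\approx 0.564\), up to the statistical and discretization errors noted above.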

It follows from (3.11) that

$$\begin{aligned} \lim _{T\rightarrow \infty } \widehat{\alpha }_T= \alpha -\sqrt{\frac{\alpha }{\pi }}\sqrt{\pi \alpha }=0~\mathrm{a.s.} ~[P_{\alpha }^{T}]. \end{aligned}$$

Thus, we have shown that the trajectory fitting estimator of \(\alpha >0\) in our model is not strongly consistent.

Based on continuous observations of \(\{X_{t},~t\ge 0\}\), the main findings of this paper concern the limiting behaviour of the estimator of the unknown parameter in nonstationary reflected Ornstein–Uhlenbeck processes. Our main results cover both the null recurrent (\(\alpha =0\)) and the transient (\(\alpha <0\)) cases. Further statistical properties of the estimator can be regarded as future research topics, for example, an analogous estimator for our model based on discrete observations, together with its consistency and asymptotic distribution (see, e.g. Hu et al. [23]).

On the other hand, future work may investigate other estimators for other reflected diffusions. For example, Lee et al. [29] proposed a sequential maximum likelihood estimator (SMLE) of the unknown drift of the ROU process without jumps; reflected jump diffusions and Lévy processes have been extensively investigated in the literature (cf. Asmussen et al. [1], Asmussen and Pihlsgård [2], Atar and Budhiraja [4], Avram et al. [5, 6], Bo et al. [8,9,10,11,12], Bo and Yang [13], Xing et al. [44]); other works are concerned with statistical parameter estimation for reflected fractional Brownian motion (cf. Hu and Lee [22], Lee and Song [30]).