Abstract
In this paper, we establish a moderate deviation principle for the Langevin dynamics with strong damping. The weak convergence approach plays an important role in the proof.
1 Introduction
For every \(\varepsilon >0\), consider the following Langevin equation with strong damping
Here B(t) is a d-dimensional standard Wiener process, defined on some complete stochastic basis \((\Omega ,{\mathcal {F}},\{{\mathcal {F}}_t\}_{{t \ge 0}},\mathbb {P})\). The coefficients \(b, \alpha \) and \(\sigma \) satisfy certain regularity conditions (see Sect. 2 for details) such that for any fixed \(\varepsilon >0\), \(T>0\) and \( k \ge 1\), Eq. (1.1) admits a unique solution \(q^\varepsilon \) in \(L^k(\Omega ;C([0,T];\mathbb {R}^d))\). Let \(q_\varepsilon (t):= q^\varepsilon (t/\varepsilon )\), \( t \ge 0\); then Eq. (1.1) becomes
where \(w(t):=\sqrt{\varepsilon } B(t/\varepsilon )\), \(t \ge 0\), is also an \(\mathbb {R}^d\)-valued Wiener process.
In [3], Cerrai and Freidlin established a large deviation principle (LDP for short) for Eq. (1.2) as \(\varepsilon \rightarrow 0+\). More precisely, for any \(T>0\), they proved that the family \(\{q_\varepsilon \}_{\varepsilon >0}\) satisfies the LDP in the space \(C([0,T]; \mathbb {R}^d)\), with the same rate function I and the same speed function \(\varepsilon ^{-1}\) that describe the LDP of the first-order equation
Explicitly, this means that
(1) for any constant \(c>0\), the level set \(\{f; I(f)\le c\}\) is compact in \(C([0,T];\mathbb {R}^d)\);
(2) for any closed subset \(F\subset C([0,T];\mathbb {R}^d)\),
$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0+}\varepsilon \log \mathbb {P}(q_{\varepsilon }\in F)\le -\inf _{f\in F}I(f); \end{aligned}$$
(3) for any open subset \(G\subset C([0,T];\mathbb {R}^d)\),
$$\begin{aligned} \liminf _{\varepsilon \rightarrow 0+}\varepsilon \log \mathbb {P}(q_{\varepsilon }\in G)\ge -\inf _{f\in G}I(f). \end{aligned}$$
The dynamical system (1.3) can be regarded as a random perturbation of the following deterministic differential equation
Roughly speaking, the LDP result in [3] shows that the probability \(\mathbb {P}(\Vert q_{\varepsilon } -q_0\Vert \ge \delta )\) converges to 0 exponentially fast as \(\varepsilon \rightarrow 0\) for any \(\delta >0\), where \(\Vert \cdot \Vert \) is the sup-norm on \(C([0,T];\mathbb {R}^d)\).
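In displayed form, applying the upper bound (2) to the closed set \(F_\delta :=\{f\in C([0,T];\mathbb {R}^d);\ \Vert f-q_0\Vert \ge \delta \}\) gives

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0+}\varepsilon \log \mathbb {P}\left( \Vert q_{\varepsilon }-q_0\Vert \ge \delta \right) \le -\inf _{f\in F_\delta }I(f), \end{aligned}$$

and the right-hand infimum is strictly positive whenever \(q_0\) is the unique zero of I, which yields the exponential decay.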
Like large deviations, moderate deviations arise quite naturally in the theory of statistical inference. The moderate deviation principle (MDP for short) provides the rate of convergence and a useful method for constructing asymptotic confidence intervals (see, e.g., the recent works [6, 8, 9, 11] and references therein). Usually, the quadratic form of the rate function corresponding to the MDP allows for explicit minimization; in particular, it yields an asymptotic evaluation of the exit time (see [10]). Recently, MDP estimates for stochastic (partial) differential equations have been studied as well; see, e.g., [1, 7, 12, 13].
In this paper, we shall investigate the MDP problem for the family \(\{q_\varepsilon \}_{\varepsilon >0 }\) on \( C([0,T];\mathbb {R}^d)\). That is, the asymptotic behavior of the trajectory
Here the deviation scale satisfies
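In the standard moderate-deviation regime, and consistently with the later uses of (1.6) (e.g. \(\sqrt{\varepsilon }h(\varepsilon )\rightarrow 0\) in the proof of Lemma 3.5), this scaling reads

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+} h(\varepsilon )=+\infty \quad \text {and}\quad \lim _{\varepsilon \rightarrow 0+}\sqrt{\varepsilon }\, h(\varepsilon )=0. \end{aligned}$$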
Due to the complexity of \(q_\varepsilon \), we mainly use the weak convergence approach to deal with this problem. Compared with the approximation method used in Gao and Wang [5], our method is simpler, since we only need moment estimates rather than exponential moment estimates of the solution.
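As a purely numerical illustration of the convergence \(q_\varepsilon \rightarrow q_0\) discussed above (not part of the proof), the sketch below assumes the first-order dynamics has the form \(dq_\varepsilon =(b/\alpha )(q_\varepsilon )\,dt+\sqrt{\varepsilon }\,(\sigma /\alpha )(q_\varepsilon )\,dw\), consistent with the coefficients \(b/\alpha \) and \(\sigma /\alpha \) appearing in Sect. 3; the concrete choices \(b(x)=-x\), \(\alpha \equiv 1\), \(\sigma \equiv 1\), \(q(0)=1\) in dimension one are illustrative only.

```python
import numpy as np

def simulate(eps, T=1.0, n=2000, seed=0):
    """Euler-Maruyama scheme for dq = (b/alpha)(q) dt + sqrt(eps) (sigma/alpha)(q) dw,
    with the illustrative (hypothetical) choices b(x) = -x, alpha = sigma = 1, q(0) = 1."""
    rng = np.random.default_rng(seed)
    dt = T / n
    q = np.empty(n + 1)
    q[0] = 1.0
    for i in range(n):
        # drift (b/alpha)(q) = -q; noise intensity sqrt(eps) * (sigma/alpha) = sqrt(eps)
        q[i + 1] = q[i] - q[i] * dt + np.sqrt(eps) * rng.normal(0.0, np.sqrt(dt))
    return q

T, n = 1.0, 2000
t = np.linspace(0.0, T, n + 1)
q0 = np.exp(-t)  # deterministic limit: dq0/dt = -q0, q0(0) = 1
dev = {eps: float(np.abs(simulate(eps) - q0).max()) for eps in (1e-1, 1e-3)}
print(dev)  # sup-norm deviation shrinks as eps -> 0
```

For these choices the deterministic limit is \(q_0(t)=e^{-t}\), and the sup-norm deviation \(\Vert q_\varepsilon - q_0\Vert \) shrinks with \(\varepsilon \), mirroring the exponential decay described above.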
The organization of this paper is as follows. In Sect. 2, we present the framework of the Langevin equation, and then state our main results. Section 3 is devoted to proving the MDP.
2 Framework and Main Results
Let \(|\cdot |\) be the Euclidean norm of a vector in \(\mathbb {R}^d\), \(\langle \cdot , \cdot \rangle \) the inner product in \(\mathbb {R}^d\), and \(\Vert \cdot \Vert _{\mathrm {HS}}\) the Hilbert-Schmidt norm on \(\mathbb {R}^{d\times d}\) (the space of \(d\times d \) matrices). For a function \(b: \mathbb {R}^d\rightarrow \mathbb {R}^d\), \(Db=\left( \frac{\partial }{\partial x_j} b^i\right) _{1\le i,j \le d}\) is the Jacobian matrix of b. Recall that \(\Vert \cdot \Vert \) is the sup-norm on \(C([0,T];\mathbb {R}^d)\). Throughout this paper, \(T > 0\) is a fixed constant, and \(C(\cdot )\) denotes a positive constant depending only on the parameters in brackets and independent of \(\varepsilon \); its value may differ from line to line.
Assume that the coefficients \(b,\alpha \) and \(\sigma \) in (1.2) satisfy the following hypothesis.
Hypothesis 2.1
(a) The mappings \(b: \mathbb {R}^d \rightarrow \mathbb {R}^d \) and \(\sigma : \mathbb {R}^d \rightarrow \mathbb {R}^{d\times d}\) are continuously differentiable, and there exists some constant \(K>0\) such that for all \(x, y\in \mathbb {R}^d\),
$$\begin{aligned} |b(x)-b(y)| \le K|x-y|, \end{aligned}$$(2.1)and
$$\begin{aligned} \Vert \sigma (x)-\sigma (y)\Vert _{\mathrm {HS}} \le K|x-y|,\ \Vert \sigma (x)\Vert _{\mathrm {HS}} \le K. \end{aligned}$$Moreover, the matrix \(\sigma (q)\) is invertible for any \(q \in \mathbb {R}^d\), and \(\sigma ^{-1}: \mathbb {R}^d \rightarrow \mathbb {R}^{d\times d}\) is bounded.
(b) The mapping \(\alpha : \mathbb {R}^d \rightarrow \mathbb {R}\) belongs to \(C_b^1(\mathbb {R}^d)\) and there exist some constants \(0<\alpha _0\le \alpha _1\) and \(K>0\) such that
$$\begin{aligned} \alpha _0=\inf _{x \in \mathbb {R}^d} \alpha (x), \ \alpha _1=\sup _{x \in \mathbb {R}^d}\alpha (x) \text { and } \sup _{x \in \mathbb {R}^d}|\nabla \alpha (x)|\le K. \end{aligned}$$
Notice that:
(1) \(\Vert Db\Vert _{\mathrm {HS}}\le K\), since b is continuously differentiable and satisfies (2.1);
(2) \(\sigma /\alpha \) is Lipschitz continuous and bounded, due to the Lipschitz continuity and boundedness of the functions \(\sigma \) and \(1/\alpha \).
Under Hypothesis 2.1, according to [5, Theorem 2.2], the family \(\left\{ (g_\varepsilon -q_{0})/[\sqrt{\varepsilon }h(\varepsilon )]\right\} _{\varepsilon >0}\) satisfies the LDP on \(C([0,T]; \mathbb {R}^d)\) with speed \(h^2 (\varepsilon )\) and a good rate function I given by
where
and
with the convention \(\inf \emptyset =\infty \). This special kind of LDP is just the MDP for the family \(\{g_\varepsilon \}_{\varepsilon >0}\) (see [4]).
The main goal of this paper is to prove that the family \(\{q_\varepsilon \}_{\varepsilon >0}\) satisfies the same MDP as the family \(\{g_\varepsilon \}_{\varepsilon >0}\) on \( C([0,T];\mathbb {R}^d)\).
Theorem 2.2
Under Hypothesis 2.1, the family \(\{(q_\varepsilon -q_{0})/[\sqrt{\varepsilon }h(\varepsilon )] \}_{\varepsilon >0}\) obeys an LDP on \(C([0,T]; \mathbb {R}^d)\) with the speed function \(h^2(\varepsilon )\) and the rate function I given by (2.2).
3 Proof of MDP
3.1 Weak Convergence Approach in LDP
In this subsection, we recall the general criterion for the LDP established in [2].
Let \((\Omega ,{\mathcal {F}},\mathbb {P})\) be a probability space with an increasing family \(\{{\mathcal {F}}_t\}_{0\le t\le T}\) of the sub-\(\sigma \)-fields of \({\mathcal {F}}\) satisfying the usual conditions. Let \({\mathcal {E}}\) be a Polish space with the Borel \(\sigma \)-field \({\mathcal {B}}({\mathcal {E}})\). The Cameron-Martin space associated with the Wiener process \(\{w(t)\}_{0\le t\le T}\) (defined on the filtered probability space given above) is given by (2.3). See [4]. The space \({\mathcal {H}}\) is a Hilbert space with inner product
Let \({\mathcal {A}}\) denote the class of all \(\{{\mathcal {F}}_t\}_{0\le t \le T}\)-predictable processes belonging to \({\mathcal {H}}\) a.s. Define, for any \(N \in \mathbb {N}\),
Consider the weak convergence topology on \({\mathcal {H}}\), i.e., for any \(h_n, h \in {\mathcal {H}}, n\ge 1\), \(h_n\) converges weakly to h as \(n \rightarrow +\infty \) if
It is easy to check that \(S_N\) is a compact set in \({\mathcal {H}}\) under the weak convergence topology. Define
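In the standard notation of [2], which we assume here, these sets read

$$\begin{aligned} S_N:=\left\{ h\in {\mathcal {H}};\ \int _0^T|\dot{h}(s)|^2ds\le N\right\} \quad \text {and}\quad {\mathcal {A}}_N:=\left\{ u\in {\mathcal {A}};\ u(\omega )\in S_N,\ \mathbb {P}\text {-a.s.}\right\} . \end{aligned}$$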
We present the following result from Budhiraja et al. [2].
Theorem 3.1
([2]) Let \({\mathcal {E}}\) be a Polish space with the Borel \(\sigma \)-field \({\mathcal {B}}({\mathcal {E}})\). For any \(\varepsilon >0\), let \(\Gamma _\varepsilon \) be a measurable mapping from \(C([0,T];\mathbb {R}^d)\) into \({\mathcal {E}}\). Let \(X_\varepsilon (\cdot ):=\Gamma _\varepsilon (w(\cdot ))\). Suppose there exists a measurable mapping \(\Gamma _0:C([0,T];\mathbb {R}^d)\rightarrow {\mathcal {E}}\) such that
(a) for every \(N<+\infty \), the set
$$\begin{aligned} \left\{ \Gamma _0\left( \int _0^{\cdot }\dot{h}(s)ds\right) ;\ h\in S_N\right\} \end{aligned}$$is a compact subset of \({\mathcal {E}}\);
(b) for every \(N<+\infty \) and any family \(\{ h^\varepsilon \}_{\varepsilon >0}\subset {\mathcal {A}}_N\) such that \(h^\varepsilon \) (as \(S_N\)-valued random elements) converges in distribution to \(h \in {\mathcal {A}}_N\) as \(\varepsilon \rightarrow 0\),
$$\begin{aligned} \Gamma _\varepsilon \left( w(\cdot )+\frac{1}{\sqrt{\varepsilon }}\int _0^{\cdot }\dot{h}^\varepsilon (s)ds\right) \ \text {converges to} \ \Gamma _0\left( \int _0^{\cdot }\dot{h}(s)ds\right) \end{aligned}$$in distribution as \(\varepsilon \rightarrow 0\).
Then the family \(\{X_\varepsilon \}_{\varepsilon >0}\) satisfies the LDP on \({\mathcal {E}}\) with the rate function I given by
with the convention \(\inf \emptyset =\infty \).
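For the reader's convenience, this rate function takes the standard variational form of [2]:

$$\begin{aligned} I(f)=\inf \left\{ \frac{1}{2}\int _0^T|\dot{h}(s)|^2ds;\ h\in {\mathcal {H}},\ f=\Gamma _0\left( \int _0^{\cdot }\dot{h}(s)ds\right) \right\} . \end{aligned}$$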
3.2 Reduction to the Bounded Case
Under Hypothesis 2.1, for every fixed \(\varepsilon >0\) and any \(k\ge 1\), Eq. (1.2) admits a unique solution \(q_\varepsilon \) in \(L^k(\Omega ;C([0,T];\mathbb {R}^d))\). According to the proof of Theorem 3.3 in [3], the solution \(q_\varepsilon \) of Eq. (1.2) can be expressed in the following form:
where
with
We denote the solution functional from \(C([0,T];\mathbb {R}^d)\) into \(C([0,T];\mathbb {R}^d)\) by \({\mathcal {G}}_{\varepsilon }\), i.e.,
Let
Then \(X_\varepsilon \) solves the following equation
We shall prove that \(\{X_\varepsilon \}_{\varepsilon >0}\) obeys an LDP on \(C([0,T]; \mathbb {R}^d)\) with speed function \(h^2(\varepsilon )\) and the rate function I given by (2.2).
Since the family \(\{q_\varepsilon \}_{\varepsilon >0}\) satisfies the LDP in the space \(C([0,T]; \mathbb {R}^d)\) with the rate function I and the speed function \(\varepsilon ^{-1}\) under Hypothesis 2.1 (see Cerrai and Freidlin [3]), there exist some positive constants R, C such that
Noticing (1.6), we have
For any fixed constant \(M>R\), define
where g(x) is some infinitely differentiable function on \(\mathbb {R}^d\) such that \(b^M(x)\) is continuously differentiable on \(\mathbb {R}^d\). Then for all \(t\in [0,T]\), we denote
where the expression of \(R^M_\varepsilon (t)\) is similar to Eq. (3.3) with \(b^M, q_\varepsilon ^M\) in place of \(b, q_\varepsilon \).
Notice that \(\Vert q_0\Vert \) is finite by the continuity of b and \(\alpha \). Hence, we can choose M large enough such that \(q_0(t)=q_0^M(t)\) for all \(t\in [0,T]\). Then, for such M, by Eq. (3.7), for all \(\delta >0\), we have
which means that \(X_\varepsilon \) is \(h^2(\varepsilon )\)-exponentially equivalent to \(X_\varepsilon ^M\). Hence, to prove the LDP for \(\{X_\varepsilon \}_{\varepsilon >0}\) on \( C([0,T];\mathbb {R}^d)\), it is enough to prove the LDP for \(\{X_\varepsilon ^M\}_{\varepsilon >0}\), which is the task of the next subsection.
3.3 The LDP for \(\{X_\varepsilon ^M\}_{\varepsilon >0}\)
In this subsection, we prove that for some fixed constant M large enough, \(\{X_\varepsilon ^M\}_{\varepsilon >0}\) obeys an LDP on \(C([0,T]; \mathbb {R}^d)\) with speed function \(h^2(\varepsilon )\) and the rate function I given by (2.2). Without loss of generality, we assume that b is bounded, i.e., \(|b| \le K\) for some positive constant K. Then \(\frac{b}{\alpha }\) is also Lipschitz continuous and bounded, and by the differentiability of \(\frac{b}{\alpha }\), \(D(\frac{b}{\alpha })\) is also bounded. From now on, we drop the M from the notation for the sake of simplicity.
3.3.1 Skeleton Equations
For any \(h\in {\mathcal {H}}\), consider the deterministic equation:
Lemma 3.2
Under Hypothesis 2.1, for any \(h\in {\mathcal {H}}\), Eq. (3.9) admits a unique solution \(g^h\) in \(C([0,T];\mathbb {R}^d)\), denoted by \(g^h(\cdot ){=:}\Gamma _0\left( \int _0^\cdot \dot{h}(s)ds\right) \). Moreover, for any \(N>0\), there exists some positive constant \(C(K,N,T,\alpha _0,\alpha _1)\) such that
Proof
The existence and uniqueness of the solution can be proved similarly to the case of the stochastic differential equation (1.3), but much more simply. Estimate (3.10) follows from the boundedness of the coefficient functions and Gronwall's inequality. We omit the details. \(\square \)
Proposition 3.3
Under Hypothesis 2.1, for every positive number \(N<+\infty \), the family
is compact in \(C([0,T];\mathbb {R}^d)\).
Proof
To prove this proposition, it suffices to show that the mapping \(\Gamma _0\) defined in Lemma 3.2 is continuous from \(S_N\) (equipped with the weak topology) to \(C([0,T];\mathbb {R}^d)\); the compactness of \(K_N\) then follows from the compactness of \(S_N\) and the continuity of \(\Gamma _0\).
Assume that \(h_n\rightarrow h\) weakly in \(S_N\) as \(n\rightarrow \infty \). We consider the following equation
Due to the Cauchy-Schwarz inequality and the boundedness of the functions \(\sigma \) and \(1/\alpha \), we know that for any \(0 \le t_1 \le t_2 \le T\),
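A sketch of this step, assuming \(I_2^n\) denotes the integral term \(\int _0^{\cdot }\frac{\sigma (q_0(s))}{\alpha (q_0(s))}\dot{h}_n(s)ds\) from the equation above: for \(h_n\in S_N\),

$$\begin{aligned} \left| I_2^n(t_2)-I_2^n(t_1)\right| \le \left\| \frac{\sigma }{\alpha }\right\| _{\infty }\left( \int _{t_1}^{t_2}|\dot{h}_n(s)|^2ds\right) ^{1/2}(t_2-t_1)^{1/2}\le C(K,N,\alpha _0)\sqrt{t_2-t_1}. \end{aligned}$$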
Hence, the family of functions \(\{I_2^n\}_{n\ge 1}\) is equicontinuous in \(C([0,T];{\mathbb {R}}^d)\). In particular, taking \(t_1=0\), we obtain that
where \(C( K, N, T,\alpha _0)\) is independent of n. Thus, by the Arzelà-Ascoli theorem, the family \(\{I_2^n\}_{n\ge 1}\) is relatively compact in \(C([0,T];\mathbb {R}^d)\).
On the other hand, for any \(v\in \mathbb {R}^d\), by the boundedness of \(\sigma /\alpha \), we know that the function \( \frac{\sigma (q_0)}{\alpha (q_0)}v\) belongs to \(L^2([0,T];\mathbb {R}^d)\). Since \(\dot{h}_n\rightarrow \dot{h}\) weakly in \(L^2([0,T];\mathbb {R}^d)\) as \(n \rightarrow +\infty \), we know that
Then, by the relative compactness of \(\{I_2^n\}_{n\ge 1}\), we have
Set \(\zeta ^n(t)=\sup _{0\le s\le t}\left| g^{h_n}(s)-g^h(s)\right| \). By the boundedness of \(D(b/\alpha )\), we have
By Gronwall’s inequality and (3.14), we have
which completes the proof. \(\square \)
3.3.2 MDP
For any predictable process \({\dot{u}}\) taking values in \(L^2 ([0,T]; \mathbb {R}^d)\), we denote by \(q_\varepsilon ^u(t)\) the solution of the following equation
As is well known, for any fixed \(\varepsilon >0\), \(T>0\) and \(k\ge 1\), this equation admits a unique solution \(q_\varepsilon ^u\) in \(L^k(\Omega ; C([0,T];\mathbb {R}^d))\) as follows
where \({\mathcal {G}}_\varepsilon \) is defined by (3.4).
Lemma 3.4
Under Hypothesis 2.1, for every fixed \(N\in \mathbb {N}\) and \(\varepsilon >0\), let \(u^\varepsilon \in {\mathcal {A}}_N\) and \(\Gamma _\varepsilon \) be given by (3.5). Then \(X_\varepsilon ^{u^\varepsilon }(\cdot ):=\Gamma _\varepsilon \left( w(\cdot )+h(\varepsilon )\int _0^{\cdot }\dot{u}^\varepsilon (s)ds\right) \) is the unique solution of the following equation
where
with
Furthermore, there exists a positive constant \(\varepsilon _0 >0\) such that for any \(\varepsilon \in (0,\varepsilon _0]\),
Moreover, we have
To prove Lemma 3.4 and our main result, we present the following three lemmas. The first lemma is similar to [3, Lemma 3.1].
Lemma 3.5
Under Hypothesis 2.1, for any \(T>0\), \(k\ge 1\) and \(N>0\), there exists some constant \(\varepsilon _0>0\) such that for any \(u^\varepsilon \in {\mathcal {A}}_N\) and \(\varepsilon \in (0,\varepsilon _0]\), we have
Moreover, we have
Proof
Notice that Eq. (3.15) can be rewritten as the following equation: for all \(t \in [0,T]\),
From the notation given in Eq. (3.17), we have
Integrating with respect to t, we obtain that
By Hypothesis 2.1 and Young’s inequality for integral operators, we have
Since \(\lim _{\varepsilon \rightarrow 0}\sqrt{\varepsilon }h(\varepsilon )=0\), for \(\varepsilon \) small enough, by Gronwall’s inequality,
Hence, by a proof similar to that of [3, Lemma 3.1], we obtain (3.20) and (3.21). \(\square \)
For \(H_\varepsilon ^{2,u^\varepsilon }(t)\), we have the following estimate.
Lemma 3.6
Under Hypothesis 2.1, for any \(T>0\), \(k\ge 1\) and \(N\in \mathbb {N}\), there exists some constant \(\varepsilon _0>0\) such that for any \(u^\varepsilon \in {\mathcal {A}}_N\) and \(\varepsilon \in (0,\varepsilon _0]\), we have
Proof
For any \(t\in [0,T]\) and \(u^\varepsilon \in {\mathcal {A}}_N\), by the boundedness of \(\sigma \) and the Cauchy-Schwarz inequality, we have
Since \(A_\varepsilon ^{u^\varepsilon } (t)=\frac{1}{\varepsilon ^2} \int _0^t \alpha (q_\varepsilon ^{u^\varepsilon }(r))dr \), we have
Hence
and furthermore
which completes the proof. \(\square \)
Lemma 3.7
Under Hypothesis 2.1, for any \(T>0\) and any \(u^\varepsilon \in {\mathcal {A}}_N\), we have
Moreover, we have
Proof
Similarly to the proof of [3, (3.17)], under Hypothesis 2.1, we have
Next, we will estimate \(\mathbb {E}\left\| \frac{I_\varepsilon ^{6,u^\varepsilon }}{\sqrt{\varepsilon } h(\varepsilon )}\right\| \) and \(\mathbb {E}\left\| \frac{I_\varepsilon ^{7,u^\varepsilon }}{\sqrt{\varepsilon } h(\varepsilon )}\right\| \). By Lemma 3.6, we have
By Cauchy-Schwarz inequality, we have
By (3.23), we have for all \(\varepsilon >0\) small enough,
Hence, by (3.20) and Lemma 3.6, we have
This together with (3.27) and (3.28) implies (3.25).
(3.26) can be obtained by applying a similar estimation procedure to
as given above. Hence we omit the proof. \(\square \)
Now we prove Lemma 3.4.
Proof of Lemma 3.4
For any \(\varepsilon >0\) and \(u^\varepsilon \in {\mathcal {A}}_N\), define
Since \(\frac{d\mathbb {Q}^{u^{\varepsilon }}}{d\mathbb {P}}\) is an exponential martingale, \(\mathbb {Q}^{u^\varepsilon }\) is a probability measure on \(\Omega \). By the Girsanov theorem, the process
is an \(\mathbb {R}^d\)-valued Wiener process under the probability measure \(\mathbb {Q}^{u^\varepsilon }\). Rewriting Eq. (3.16) with \(\tilde{w}^{\varepsilon }(t)\), we obtain Eq. (3.6) with \(\tilde{w}^{\varepsilon }(t)\) in place of w(t). Let \(X_\varepsilon ^{u^\varepsilon }\) be the unique solution of Eq. (3.6) with \(\tilde{w}^{\varepsilon }(t)\) on the space \((\Omega ,{\mathcal {F}},\mathbb {Q}^{u^\varepsilon })\). Then \(X_\varepsilon ^{u^\varepsilon }\) satisfies Eq. (3.16), \(\mathbb {Q}^{u^\varepsilon }\)-a.s. By the equivalence of the probability measures, \(X_\varepsilon ^{u^\varepsilon }\) satisfies Eq. (3.16), \(\mathbb {P}\)-a.s.
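Concretely, for \(u^\varepsilon \in {\mathcal {A}}_N\), the Girsanov pair used here takes the standard form (a sketch, consistent with the shift \(h(\varepsilon )\int _0^{\cdot }\dot{u}^\varepsilon (s)ds\) in Lemma 3.4):

$$\begin{aligned} \frac{d\mathbb {Q}^{u^{\varepsilon }}}{d\mathbb {P}}=\exp \left\{ -h(\varepsilon )\int _0^T\left\langle \dot{u}^\varepsilon (s),dw(s)\right\rangle -\frac{h^2(\varepsilon )}{2}\int _0^T\left| \dot{u}^\varepsilon (s)\right| ^2ds\right\} ,\quad \tilde{w}^{\varepsilon }(t)=w(t)+h(\varepsilon )\int _0^t\dot{u}^\varepsilon (s)ds. \end{aligned}$$

Since \(u^\varepsilon \in {\mathcal {A}}_N\) implies \(\int _0^T|\dot{u}^\varepsilon (s)|^2ds\le N\) a.s., Novikov's condition is satisfied, so the density above is indeed an exponential martingale.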
Now we prove (3.18). By (3.26), there exists some constant \(\varepsilon _0>0\) such that for any \(\varepsilon \in (0,\varepsilon _0]\),
Notice that \(b/\alpha \) is Lipschitz continuous and \(\sigma /\alpha \) is bounded; then we have
Hence, by (1.6) and (3.30), for any \(\varepsilon \in (0,\varepsilon _0]\), taking expectations on both sides of (3.31), we have
By Gronwall’s inequality, we get
then by Fubini’s theorem,
First taking the supremum with respect to \(t\in [0,T]\) in (3.31), and then taking expectations on both sides, for any \(\varepsilon \in (0,\varepsilon _0]\), by the Burkholder-Davis-Gundy (BDG) inequality, (1.6), (3.30) and (3.33), we obtain that
which completes the proof. \(\square \)
Proposition 3.8
Under Hypothesis 2.1, for every fixed \(N \in \mathbb {N}\), let \(\{u^\varepsilon \}_{\varepsilon >0}\) be a family of processes in \({\mathcal {A}}_N \) that converges in distribution to some \(u \in {\mathcal {A}}_N \) as \(\varepsilon \rightarrow 0\), as random variables taking values in the space \(S_N\), endowed with the weak topology. Then
in distribution in \(C([0,T]; \mathbb {R}^d)\) as \(\varepsilon \rightarrow 0\).
Proof
By the Skorokhod representation theorem, there exists a probability basis \(({\bar{\Omega }},{\bar{{\mathcal {F}}}},({\bar{{\mathcal {F}}}}_t),{\bar{\mathbb {P}}})\), and on this basis, a Brownian motion \({{\bar{w}}}\) and a family of \({\bar{{\mathcal {F}}}}_t\)-predictable processes \(\{{{\bar{u}}}^\varepsilon \}_{\varepsilon >0}, {{\bar{u}}}\) taking values in \(S_N\), \({\bar{\mathbb {P}}}\)-a.s., such that the joint law of \((u^\varepsilon ,u, w)\) under \(\mathbb {P}\) coincides with that of \(({{\bar{u}}}^\varepsilon , {{\bar{u}}}, {{\bar{w}}})\) under \({\bar{\mathbb {P}}}\) and
Let \({{\bar{X}}}_{\varepsilon }^{ {\bar{u}}^{\varepsilon }}\) be the solution of a similar equation to (3.16) with \(u^\varepsilon \) replaced by \({{\bar{u}}}^\varepsilon \) and w by \({{\bar{w}}}\), and let \({{\bar{X}}}^{{{\bar{u}}}}\) be the solution of a similar equation to (3.9) with h replaced by \( {{\bar{u}}}\). Thus, to prove this proposition, it is sufficient to prove that
From now on, we drop the bars in the notation for the sake of simplicity.
Notice that, for any \( t \in [0,T] \),
We shall prove this proposition in the following four steps.
Step 1: For the first term \(Y_\varepsilon ^{1,u^\varepsilon }\), denote \( x_\varepsilon (t):=\sqrt{\varepsilon } h(\varepsilon )X_\varepsilon ^{u^\varepsilon }(t)\); by Taylor's formula, there exists a random variable \(\eta _\varepsilon \) taking values in (0, 1) such that
For the first term \( y_\varepsilon ^{11}\), by the boundedness of \(D\left( \frac{b}{\alpha }\right) \), we have
Next we deal with the second term \(y_\varepsilon ^{12}\). For each \(R>\Vert q_0\Vert \) and \(\rho \in (0,1)\), set
Then by the continuous differentiability of \(\frac{b}{\alpha }\), we know that for any fixed \(R>0\),
Since \(\sqrt{\varepsilon } h(\varepsilon ) \rightarrow 0\) as \(\varepsilon \rightarrow 0\), there exists some \(\varepsilon _0>0\) small enough such that for all \(0<\varepsilon \le \varepsilon _0 \),
for any \(\rho \in (0,1)\).
Thus, we obtain that for any \(r>0, R>\Vert q_0\Vert \),
By (3.10) and (3.19), letting \(\varepsilon \rightarrow 0\) and then \(\rho \rightarrow 0\) in (3.37), we can prove that
Step 2: For the second term \(Y_\varepsilon ^{2,u^\varepsilon }\) we have
Using the same argument as that in the proof of (3.14), we obtain that
Since \( \left\| Y_\varepsilon ^{2,u^\varepsilon ,1}\right\| \le C(K,N,T,\alpha _0)\), by the dominated convergence theorem, Eq. (3.39) implies that
Due to the Lipschitz continuity of \(\sigma /\alpha \), we have
By (3.18) and Hölder’s inequality, we get
Hence by (1.6), we obtain that
Step 3: For the third term \(Y_\varepsilon ^{3,u^\varepsilon }\), by the BDG inequality and (1.6), we have
Step 4: For the last term \(Y_\varepsilon ^{4,u^\varepsilon }\), by Lemma 3.7, we have
By Eq. (3.35) and (3.36), we obtain that
Using Gronwall’s inequality, we have that
This, together with (3.38), (3.41), (3.42) and (3.43), implies that
which completes the proof. \(\square \)
According to Theorem 3.1, the MDP for \(\{X_\varepsilon ^M\}_{\varepsilon >0}\) follows from Propositions 3.3 and 3.8, which completes the proof of our main result, Theorem 2.2.
References
Budhiraja, A., Dupuis, P., Ganguly, A.: Moderate deviations principles for stochastic differential equations with jumps. Ann. Probab. 44, 1723–1775 (2016)
Budhiraja, A., Dupuis, P., Maroulas, V.: Large deviations for infinite dimensional stochastic dynamical systems. Ann. Probab. 36, 1390–1420 (2008)
Cerrai, S., Freidlin, M.: Large deviations for the Langevin equation with strong damping. J. Stat. Phys. 161(4), 859–875 (2015)
Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications. Applications of Mathematics, 2nd edn. Springer, Berlin (1998)
Gao, F.Q., Wang, S.: Asymptotic behaviors for functionals of random dynamical systems. Stoch. Anal. Appl. 34(2), 258–277 (2016)
Gao, F.Q., Zhao, X.Q.: Delta method in large deviations and moderate deviations for estimators. Ann. Stat. 39, 1211–1240 (2011)
Guillin, A., Liptser, R.: Examples of moderate deviations principle for diffusion processes. Discret. Contin. Dyn. Syst. Ser. B 6, 803–828 (2006)
Hall, P., Schimek, M.: Moderate-deviations-based inference for random degeneration in paired rank lists. J. Am. Stat. Assoc. 107, 661–672 (2012)
Kallenberg, W.: On moderate deviations theory in estimation. Ann. Stat. 11, 498–504 (1983)
Klebaner, F., Liptser, R.: Moderate deviations for randomly perturbed dynamical systems. Stoch. Process. Appl. 80, 157–176 (1999)
Miao, Y., Shen, S.: Moderate deviations principle for autoregressive processes. J. Multivar. Anal. 100, 1952–1961 (2009)
Wang, R., Zhai, J., Zhang, T.: A moderate deviations principle for 2-D stochastic Navier-Stokes equations. J. Differ. Equ. 258, 3363–3390 (2015)
Wang, R., Zhang, T.: Moderate deviations for stochastic reaction-diffusion equations with multiplicative noise. Potential Anal. 42, 99–113 (2015)
Acknowledgements
We thank the anonymous referees for their valuable comments and suggestions, which helped us improve the quality of this paper. Liu W. is supported by the Natural Science Foundation of China (11571262, 11731009).
Cite this article
Cheng, L., Li, R. & Liu, W. Moderate Deviations for the Langevin Equation with Strong Damping. J Stat Phys 170, 845–861 (2018). https://doi.org/10.1007/s10955-018-1958-4