Abstract
We study the Freidlin–Wentzell large deviation principle for the nonlinear one-dimensional stochastic heat equation driven by a Gaussian noise:
where \(\dot{W}\) is white in time and fractional in space with Hurst parameter \(H\in \left( \frac{1}{4},\frac{1}{2}\right) \). Recently, Hu and Wang (Ann Inst Henri Poincaré Probab Stat 58(1):379–423, 2022) studied the well-posedness of this equation without the technical condition \(\sigma (0)=0\), which was previously assumed in Hu et al. (Ann Probab 45(6):4561–4616, 2017). We adopt a new sufficient condition for the weak convergence criterion of the large deviation principle, proposed by Matoussi et al. (Appl Math Optim 83(2):849–879, 2021).
1 Introduction
In this paper, we consider the following nonlinear stochastic heat equation (SHE, for short) driven by a Gaussian noise which is white in time and fractional in space:
Here \({\varepsilon }>0\), \(u^{{\varepsilon }}(0,\cdot )\equiv 1\), W(t, x) is a centered Gaussian field with the covariance given by
for some \(\frac{1}{4}<H<\frac{1}{2}\), and \(\dot{W}(t,x)=\frac{\partial ^2 W}{\partial t\partial x}(t, x)\). The covariance of the noise \(\dot{W}\) is given by
where \(\Lambda \) is a distribution, whose Fourier transform is the measure \(\mu (\textrm{d}\xi )=c_{1,H}|\xi |^{1-2H}\textrm{d}\xi \) with
\(\Lambda \) can formally be written as \(\Lambda (x-y)=H(2H-1)|x-y|^{2H-2}\). In addition, the measure \(\mu \) satisfies the integrability condition \(\int _{\mathbb {R}}\frac{\mu (\textrm{d}\xi )}{1+|\xi |^2}<\infty \). However, when \(H\in (\frac{1}{4},\frac{1}{2})\), the corresponding covariance \(\Lambda \) is not locally integrable and fails to be nonnegative, so it does not satisfy the classical Dalang condition of [11] (where \(\Lambda \) is given by a nonnegative locally integrable function). See [1, 19] for more details. For this reason, the approaches used in [9, 11, 12, 30, 31] cannot handle such a rough covariance.
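Both claims can be checked directly (a routine computation; \(c_{1,H}\) is the constant from (1.3)):

```latex
\int_{\mathbb{R}}\frac{\mu(\mathrm{d}\xi)}{1+|\xi|^{2}}
  = c_{1,H}\int_{\mathbb{R}}\frac{|\xi|^{1-2H}}{1+|\xi|^{2}}\,\mathrm{d}\xi
  < \infty ,
```

since the integrand behaves like \(|\xi |^{1-2H}\) near the origin (integrable, as \(1-2H>-1\)) and like \(|\xi |^{-1-2H}\) at infinity (integrable, as \(1+2H>1\)). On the other hand, \(|\Lambda (x)|\simeq |x|^{2H-2}\) with \(2H-2<-1\) for \(H<\frac{1}{2}\), so \(\Lambda \) is not locally integrable near the origin, and the prefactor \(H(2H-1)<0\) makes it negative there.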
Recently, many authors have studied the existence and uniqueness of solutions to stochastic partial differential equations (SPDEs, for short) driven by Gaussian noises that are rough in space; see, e.g., [19, 20, 22, 24, 35], and [18, 34] for surveys. When the diffusion coefficient is an affine function \(\sigma (x)=ax+b\), Balan et al. [1] proved the existence and uniqueness of the mild solution to SHE (1.1) by Fourier-analytic techniques, and they established the Hölder continuity of the solution in [2].
For a general nonlinear coefficient \(\sigma \) having a Lipschitz derivative and satisfying \(\sigma (0)=0\), the existence and uniqueness of the solution were proved by Hu et al. [19]. Under this condition, the large deviations, the moderate deviations and the transportation inequalities were studied in [10, 21, 23], respectively.
In [22], Hu and Wang removed the technical and unusual condition \(\sigma (0) = 0\) and proved the well-posedness of Eq. (1.1) under Condition \((\textbf{H})\) in Sect. 2.3. Without the condition \(\sigma (0) = 0\), the solution no longer belongs to the space \(\mathcal {Z}^p_T\) (see [19], or (2.14) in Sect. 2.3 of this paper with \(\lambda (x)\equiv 1\)). To remove this restriction, Hu and Wang [22] introduced a weight decaying as the spatial variable x goes to infinity, enlarging the solution space \(\mathcal {Z}^p_{T}\) to a weighted space \(\mathcal {Z}^p_{\lambda ,T}\) for a suitable power-decay function \(\lambda (x)\); see Sect. 2.3 for details.
The aim of this paper is to establish a large deviation principle (LDP, for short) for the solution \(u^{\varepsilon }\) of (1.1) as \({\varepsilon }\rightarrow 0\). A powerful approach to the LDP is the well-known weak convergence method (see, e.g., [3,4,5,6,7, 14]), which is based on a variational representation formula for measurable functionals of Brownian motion. For related LDP results obtained by the weak convergence method, we refer to [25, 33, 39] and the references therein. In particular, Liu et al. [27] and Xiong and Zhai [38] proved LDPs for a large class of SPDEs with locally monotone coefficients driven by Brownian motions and by Lévy noises, respectively. However, the frameworks of [27] and [38] do not apply to SHE (1.1) because of the spatially rough noise with \(H\in (\frac{1}{4}, \frac{1}{2})\).
Compared with the LDP result for SHE (1.1) under the assumption \(\sigma (0)=0\) in Hu et al. [21], our result and its proof are set in the weighted space \(\mathcal {Z}^p_{\lambda ,T}\). The introduction of the weight brings many difficulties; see Hu and Wang [22] for a detailed analysis. In this paper, we adopt a new sufficient condition for the LDP (see Condition 2.2), which was proposed by Matoussi et al. [29]. This method has been successfully applied to the study of LDPs for SPDEs; see, e.g., [13, 16, 17, 26, 36, 37].
The rest of this paper is organized as follows. In Sect. 2, we first present the framework introduced in [22] and recall the weak convergence method given by [6, 29]; then, we formulate the main result of the present paper. In Sect. 3, the associated skeleton equation is studied. Sections 4 and 5 are devoted to verifying the two conditions of the weak convergence criterion. Finally, we collect some useful lemmas in the Appendix.
We adopt the following notation throughout this paper. We always use \(C_\alpha \) to denote a constant depending on the parameter \(\alpha \), which may change from line to line. \(A\lesssim B\) (resp. \(A\gtrsim B\)) means that \(A\le CB\) (resp. \(A\ge CB\)) for some positive generic constant C, whose value may also change from line to line, and \(A\simeq B\) means that both \(A\lesssim B\) and \(A\gtrsim B\) hold.
2 Preliminaries and Main Result
In this section, we first give some preliminaries on SHE (1.1); then, we state the weak convergence criterion for the LDP from [6, 29] and give the main result of this paper.
2.1 Covariance Structure and Stochastic Integration
Recall some notation from [19] and [22]. Denote by \( \mathcal {D}=\mathcal {D}(\mathbb {R}) \) the space of real-valued infinitely differentiable functions with compact support on \(\mathbb {R}\), and by \(\mathcal {D}'\) its dual with respect to \(L^2(\mathbb {R}, \textrm{d}x)\), the Hilbert space of square integrable functions with respect to Lebesgue measure on \(\mathbb {R}\). The Fourier transform of a function \(f\in \mathcal {D}\) is defined as
Let \((\Omega ,\mathcal {F},{\mathbb {P}})\) be a complete probability space and \(\mathcal {D}(\mathbb {R}_{+}\times \mathbb {R})\) be the space of real-valued infinitely differentiable functions with compact support on \(\mathbb {R}_{+}\times \mathbb {R}\). The noise \(\dot{W}\) is a zero-mean Gaussian family \(\{W(\phi ), \phi \in \mathcal {D}(\mathbb {R}_{+}\times \mathbb {R})\}\) with the covariance structure given by
where \(H\in (\frac{1}{4} , \frac{1}{2})\), \(c_{1,H}\) is given in (1.3), and \(\mathcal {F}\phi (s,\xi )\) is the Fourier transform with respect to the spatial variable x of the function \(\phi (s,x)\). Then (2.1) defines a scalar product on \(\mathcal {D}(\mathbb {R}_{+}\times \mathbb {R}) \). Denote by \(\mathfrak H\) the Hilbert space obtained by completing \(\mathcal {D}(\mathbb {R}_{+}\times \mathbb {R}) \) with respect to this scalar product.
Proposition 2.1
([22, Proposition 2.1], [32, Theorem 3.1]) The space \(\mathfrak H\) is a Hilbert space equipped with the scalar product
where
The space \(\mathcal {D}(\mathbb {R}_{+}\times \mathbb {R})\) is dense in \(\mathfrak H\).
For any \(t\ge 0\), let \(\mathcal {F}_t\) be the filtration generated by W, that is,
where \(\mathcal {D}([0,t]\times \mathbb {R})\) is the space of real-valued infinitely differentiable functions on \([0,t]\times \mathbb {R}\).
Definition 2.2
An elementary process g is a process given by
where n and m are finite positive integers, \(0\le a_1<b_1<\cdots<a_n<b_n<\infty \), \(h_j<l_j\) are real numbers, and the \(X_{i,j}\) are \(\mathcal {F}_{a_i}\)-measurable random variables for \(i=1,\dots ,n\), \(j=1,\dots ,m\). The stochastic integral of an elementary process with respect to W is defined as
Hu et al. [19, Proposition 2.3] extended the notion of integral with respect to W to a broad class of adapted processes in the following way.
Proposition 2.3
([19, Proposition 2.3]) Let \(\Lambda _{H}\) be the space of predictable processes g defined on \(\mathbb {R}_{+}\times \mathbb {R}\) such that almost surely \(g\in \mathfrak H\) and \(\mathbb {E}[\Vert g\Vert _{\mathfrak H}^2]<\infty \). Then, we have that:
-
(i)
The space of the elementary processes defined in Definition 2.2 is dense in \(\Lambda _{H}\);
-
(ii)
For any \(g\in \Lambda _{H}\), the stochastic integral \(\int _{\mathbb {R}_{+}}\int _{\mathbb {R}} g(s,x)W(\textrm{d}s,\textrm{d}x)\) is defined as the \(L^2(\Omega )\)-limit of Riemann sums along elementary processes approximating g in \(\Lambda _H\), and we have
$$\begin{aligned} \mathbb {E}\left[ \left( \int _{\mathbb {R}_{+}}\int _{\mathbb {R}} g(s,x)W(\textrm{d}s,\textrm{d}x)\right) ^2\right] =\mathbb {E}\left[ \Vert g\Vert _{\mathfrak H}^2\right] . \end{aligned}$$(2.5)
Let \({\mathcal {H}}\) be the Hilbert space obtained by completing \({\mathcal {D}}(\mathbb {R}) \) with respect to the following scalar product:
By Proposition 2.3, for any orthonormal basis \(\{e_k\}_{k\ge 1}\) of the Hilbert space \(\mathcal H\), the family of processes
is a sequence of independent standard Wiener processes, and the process \( B_t:=\sum _{k\ge 1}B_t^ke_k \) is a cylindrical Brownian motion on \(\mathcal H\). It is well known (see [9] or [12]) that for any \(\mathcal H\)-valued predictable process \(g\in L^2(\Omega \times [0,T];\mathcal H)\), the stochastic integral with respect to the cylindrical Wiener process B can be defined as follows:
Note that the above series converges in \(L^2(\Omega , \mathcal {F},{\mathbb {P}})\) and the sum does not depend on the selected orthonormal basis. Moreover, each summand, in the above series, is a classical Itô integral with respect to a standard Brownian motion.
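Concretely, with \(\{e_k\}_{k\ge 1}\) as above, the series takes the standard form (standard in the literature, e.g., [12]):

```latex
\int_0^t g(s)\,\mathrm{d}B_s
  := \sum_{k\ge 1}\int_0^t \langle g(s), e_k\rangle_{\mathcal{H}}\,\mathrm{d}B_s^k ,
```

each summand being an Itô integral with respect to the standard Brownian motion \(B^k\), and the series converging in \(L^2(\Omega )\) by the Itô isometry, since \(\mathbb {E}\int _0^T\sum _{k\ge 1}\langle g(s),e_k\rangle _{\mathcal H}^2\,\textrm{d}s=\mathbb {E}\int _0^T\Vert g(s)\Vert _{\mathcal H}^2\,\textrm{d}s<\infty \).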
Let \((B,\Vert \cdot \Vert _B)\) be a Banach space and let \(H\in (\frac{1}{4},\frac{1}{2})\) be a fixed number. For any function \(f:\mathbb {R}\rightarrow B\), denote
if the above quantity is finite. When \(B=\mathbb {R}\), we abbreviate the notation \(\mathcal {N}_{\frac{1}{2}-H}^{\mathbb {R}}f\) as \(\mathcal {N}_{\frac{1}{2}-H}f\). As in [19], when \(B=L^p(\Omega )\), we denote \( \mathcal {N}_{\frac{1}{2}-H}^{B}\) by \(\mathcal {N}_{\frac{1}{2}-H,\,p}\), that is,
The following Burkholder–Davis–Gundy inequality is well known (see, e.g., [19]).
Proposition 2.4
([19, Proposition 3.2]) Let W be the Gaussian noise with the covariance (2.1), and let \(f\in \Lambda _H\) be a predictable random field. Then, we have that for any \(p\ge 2\),
where \(c_{H}\) is a constant depending only on H and \(\mathcal {N}_{\frac{1}{2}-H,\,p}f(s,y)\) denotes the application of \(\mathcal {N}_{\frac{1}{2}-H,\,p} \) to the spatial variable y.
2.2 Stochastic Heat Equation
Let \(\mathcal {C}([0, T]\times \mathbb {R})\) be the space of all continuous real-valued functions on \([0,T]\times \mathbb {R}\), equipped with the following metric:
Define the weight function
where \(c_H\) is a constant such that \(\int _{\mathbb {R}} \lambda (x) \textrm{d}x=1\).
For any \(p\ge 2\) and \(\frac{1}{4}<H<\frac{1}{2}\), we introduce a norm \(\Vert \cdot \Vert _{\mathcal {Z}_{\lambda ,T}^p}\) for a random field \(v=\{v(t,x)\}_{(t,x)\in [0,T]\times \mathbb {R}}\) as follows:
where
and
Denote by \(\mathcal {Z}_{\lambda ,T}^p\) the space of all random fields \(v=\{v(t,x)\}_{(t,x)\in [0,T]\times \mathbb {R}}\) such that \(\Vert v\Vert _{\mathcal {Z}_{\lambda ,T}^p}\) is finite.
For the well-posedness of the solution and the large deviation principle, we assume the following conditions.
- (H):
-
Assume that \(\sigma (t, x, u)\in \mathcal {C}^{0,1,1}([0, T]\times \mathbb {R}^2)\) (the space of all continuous functions \(\sigma \), with continuous partial derivatives \(\sigma '_x\), \(\sigma '_u\) and \(\sigma ''_{xu}\)), and there exists a constant \(C>0\) such that
$$\begin{aligned}&\sup _{t\in [0,T],\,x\in \mathbb {R}}\left| \sigma (t, x, u)\right| \le C(1+|u|),\,\,\,\forall u\in \mathbb {R}; \end{aligned}$$(2.17)

$$\begin{aligned}&\sup _{t\in [0,T],\,x\in \mathbb {R}}\left| \sigma (t, x, u)-\sigma (t, x, v)\right| \le C|u-v|,\,\,\,\forall u, v\in \mathbb {R}; \end{aligned}$$(2.18)

$$\begin{aligned}&\sup _{t\in [0,T],\,x\in \mathbb {R},\,u\in \mathbb {R}} |\sigma '_u(t,x,u)| \le C; \end{aligned}$$(2.19)

$$\begin{aligned}&\sup _{t\in [0,T],\,x\in \mathbb {R}} |\sigma '_x(t,x,0)| \le C; \end{aligned}$$(2.20)

$$\begin{aligned}&\sup _{t\in [0,T],\,x\in \mathbb {R},\,u\in \mathbb {R}} |\sigma ''_{xu}(t,x,u)| \le C. \end{aligned}$$(2.21)

Moreover, there exist constants \(p_0>\frac{6}{4H-1}\) and \(C>0\) such that
$$\begin{aligned} \sup _{t\in [0,T],\,x\in \mathbb {R}}\lambda ^{-\frac{1}{p_0}}(x)\left| \sigma '_u(t,x,u_1)-\sigma '_u(t,x,u_2)\right| \le C |u_1-u_2|,\,\,\,\forall u_1,u_2\in \mathbb {R}, \end{aligned}$$(2.22)

where \(\lambda (x)\) is the weight function defined by (2.13).
Remark 1
By the boundedness of \(\lambda (x)\), if (2.22) holds for some \(p_0>\frac{6}{4H-1}\), then it also holds (with a possibly larger constant) for all \(p\ge p_0\).
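A short justification of the remark (our computation, using only that \(\lambda \) is bounded): for \(p\ge p_0\) and all \(x\in \mathbb {R}\),

```latex
\lambda^{-\frac{1}{p}}(x)
  = \lambda^{-\frac{1}{p_0}}(x)\,\lambda^{\frac{1}{p_0}-\frac{1}{p}}(x)
  \le \Big(\sup_{y\in\mathbb{R}}\lambda(y)\Big)^{\frac{1}{p_0}-\frac{1}{p}}
      \lambda^{-\frac{1}{p_0}}(x),
```

because \(\frac{1}{p_0}-\frac{1}{p}\ge 0\); hence (2.22) with exponent \(\frac{1}{p_0}\) implies the same bound with exponent \(\frac{1}{p}\), up to a harmless constant factor.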
Let \(p_{t}(x):=\frac{1}{\sqrt{4\pi t}}\exp \left( -\frac{x^2}{4t}\right) \) be the heat kernel associated with the Laplacian operator \(\Delta \). Recall the following definition of the solution to SHE (1.1) from [22].
Definition 2.5
([22, Definition 1.4]) Given the initial value \(u_0(\cdot )\equiv 1\), a real-valued adapted stochastic process \(u^{\varepsilon }\) is called a mild solution to (1.1), if for all \(t\ge 0\) and \(x\in \mathbb {R}\),
The following theorem follows from [22].
Theorem 2.6
([22, Theorem 1.6]) Assume that \(\sigma \) satisfies the hypothesis \((\textbf{H})\). Then (1.1) admits a unique mild solution in \({\mathcal {C}}([0, T]\times \mathbb {R})\) almost surely.
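As a quick numerical sanity check of the heat kernel \(p_t(x)\) defined above (illustrative only, not part of the paper's argument), it integrates to one and satisfies the semigroup property \(p_t*p_s=p_{t+s}\), which underlies the mild formulation:

```python
import numpy as np

def heat_kernel(t, x):
    """Heat kernel p_t(x) = (4*pi*t)^(-1/2) * exp(-x^2 / (4t)) of the Laplacian."""
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

# Symmetric grid wide enough that the Gaussian tails are negligible.
x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]

# p_t is a probability density: its integral over R equals 1.
mass = heat_kernel(1.0, x).sum() * dx

# Semigroup (Chapman-Kolmogorov) property: p_t * p_s = p_{t+s}.
conv = np.convolve(heat_kernel(1.0, x), heat_kernel(2.0, x), mode="same") * dx
err = np.max(np.abs(conv - heat_kernel(3.0, x)))

print(mass, err)
```

The discrete convolution on a symmetric grid approximates \((p_1*p_2)(x)\), so `err` measures the deviation from \(p_3\) up to quadrature error.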
2.3 A General Criterion for the Large Deviation Principle
Let \(\{u^{\varepsilon }\}_{{\varepsilon }>0} \) be a family of random variables defined on a probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and taking values in a Polish space E.
Definition 2.7
A function \(I: E\rightarrow [0,\infty ]\) is called a rate function on E, if for each \(M<\infty \) the level set \(\{y\in E:I(y)\le M\}\) is a compact subset of E.
Definition 2.8
Let I be a rate function on E. The sequence \(\{u^{\varepsilon }\}_{{\varepsilon }>0}\) is said to satisfy a large deviation principle on E with the rate function I, if the following two conditions hold:
-
(a)
For each closed subset F of E,
$$\begin{aligned} \limsup _{{\varepsilon }\rightarrow 0}{\varepsilon }\log \mathbb {P}(u^{\varepsilon }\in F)\le {-\inf _{y\in F}}I(y); \end{aligned}$$ -
(b)
For each open subset G of E,
$$\begin{aligned} \liminf _{{\varepsilon }\rightarrow 0}{\varepsilon }\log \mathbb {P}(u^{\varepsilon }\in G)\ge -\inf _{y\in G}I(y). \end{aligned}$$
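As a toy illustration of this definition (our example, not from the paper): take \(E=\mathbb {R}\) and \(u^{\varepsilon }=\sqrt{\varepsilon }\,Z\) with \(Z\sim N(0,1)\). The standard Gaussian tail estimate \(\mathbb {P}(Z\ge r)=e^{-r^2/2+o(r^2)}\) as \(r\rightarrow \infty \) gives, for \(a>0\),

```latex
\varepsilon \log \mathbb{P}\big(u^{\varepsilon}\ge a\big)
  = \varepsilon \log \mathbb{P}\big(Z\ge a/\sqrt{\varepsilon}\big)
  \xrightarrow[\varepsilon \to 0]{} -\frac{a^{2}}{2},
```

so \(\{u^{\varepsilon }\}_{{\varepsilon }>0}\) satisfies an LDP on \(\mathbb {R}\) with rate function \(I(y)=y^2/2\).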
Set \(\mathbb {V}=C([0,T];{\mathbb {R}}^{\infty })\) and let \({\mathbb {U}}\) be a Polish space. Let \(\{\Gamma ^{\varepsilon }\}_{\varepsilon >0}\) be a family of measurable maps from \({\mathbb {V}}\) to \({\mathbb {U}}\). We recall a criterion for the LDP of the family \(Z^{\varepsilon }=\Gamma ^{\varepsilon }( W)\) as \(\varepsilon \rightarrow 0\), where, here and throughout this section, W is the Gaussian process identified with a sequence of independent, standard, real-valued Brownian motions via the representation formula (2.7).
Define the following space of stochastic processes:
and denote by \(L^2([0,T];{\mathcal {H}})\) the space of square integrable \({\mathcal {H}}\)-valued functions on [0, T]. For each \(N\ge 1\), let
where
and \(S^N\) is equipped with the topology of the weak convergence in \(L^2([0,T];{\mathcal {H}})\). Set \({\mathbb {S}}=\bigcup _{N\ge 1} S^N\), and
Condition 2.1
There exists a measurable mapping \(\Gamma ^0:{\mathbb {V}}\rightarrow {\mathbb {U}}\) such that the following two items hold:
-
(a)
For every \(N<+\infty \), the set \(K_N=\left\{ \Gamma ^0\left( \int _0^{\cdot } g(s)\textrm{d}s\right) : g\in S^N\right\} \) is a compact subset of \({\mathbb {U}}\);
-
(b)
For every \(N<+\infty \) and \(\{g^{\varepsilon }\}_{{\varepsilon }>0}\subset {\mathcal {U}}^N\), if \(g^{\varepsilon }\) converges to g as \(S^N\)-valued random elements in distribution, then \(\Gamma ^{\varepsilon }\left( W+\frac{1}{\sqrt{\varepsilon }} \int _0^{\cdot } g^{{\varepsilon }}(s)\textrm{d}s\right) \) converges in distribution to \(\Gamma ^0\left( \int _0^{\cdot } g(s)\textrm{d}s\right) \).
Let \(I:{\mathbb {U}}\rightarrow [0,\infty ]\) be defined by
with the convention \(\inf \emptyset =\infty \).
The following result is due to Budhiraja et al. [6].
Theorem 2.9
([6, Theorem 6]) For any \({\varepsilon }>0\), let \(X^{{\varepsilon }}=\Gamma ^{\varepsilon }(W)\) and suppose that Condition 2.1 holds. Then, the family \(\{X^{{\varepsilon }}\}_{{\varepsilon }>0}\) satisfies an LDP with the rate function I defined by (2.27).
Recently, a new sufficient condition (Condition 2.2) for verifying the assumptions in Condition 2.1 has been proposed by Matoussi et al. [29]. This sufficient condition turns out to be more convenient for establishing LDPs for SPDEs.
Condition 2.2
There exists a measurable mapping \(\Gamma ^0:{\mathbb {V}}\rightarrow {\mathbb {U}}\) such that the following two items hold:
-
(a)
For every \(N<+\infty \) and any family \(\{g^n\}_{n\ge 1}\subset S^N\) that converges to some element g in \(S^N\) as \(n\rightarrow \infty \), \(\Gamma ^0\left( \int _0^{\cdot } g^n(s)\textrm{d}s\right) \) converges to \(\Gamma ^0\left( \int _0^{\cdot } g(s)\textrm{d}s\right) \) in the space \({\mathbb {U}}\);
-
(b)
For every \(N<+\infty \), \(\{g^{\varepsilon }\}_{{\varepsilon }>0}\subset {\mathcal {U}}^N\) and \(\delta >0\),
$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0}{\mathbb {P}}\big (\rho (Y^{{\varepsilon }}, Z^{{\varepsilon }})>\delta \big )=0, \end{aligned}$$where \(Y^{{\varepsilon }}=\Gamma ^{\varepsilon }\left( W+\frac{1}{\sqrt{\varepsilon }} \int _0^{\cdot } g^{{\varepsilon }}(s)\textrm{d}s\right) , Z^{{\varepsilon }}=\Gamma ^0\left( \int _0^{\cdot } g^{{\varepsilon }}(s)\textrm{d}s\right) \) and \(\rho (\cdot , \cdot )\) stands for the metric in the space \({\mathbb {U}}\).
2.4 Main Result
To state our main result and give its proof, we need to introduce the map \(\Gamma ^0\) appearing in Condition 2.1 (or Condition 2.2). Given \(g\in \mathbb {S}\), consider the following deterministic integral equation (the skeleton equation):
By Proposition 3.1 below, Eq. (2.28) admits a unique solution \(u^g\in \mathcal {C}([0,T]\times \mathbb {R})\). For any \(g\in \mathbb {S}\), we define
The following is the main result of this paper.
Theorem 2.10
Assume that the hypothesis \((\textbf{H})\) holds. Then, the family \(\{u^{{\varepsilon }}\}_{{\varepsilon }>0}\) in Eq. (1.1) satisfies an LDP in the space \(\mathcal {C}([0, T]\times \mathbb {R})\) with the rate function I given by
Proof
According to Theorem 2.9 and [29, Theorem 3.2], it suffices to show that the conditions (a) and (b) in Condition 2.2 are satisfied. Condition (a) will be proved in Proposition 4.1, and Condition (b) will be verified in Proposition 5.1. The proof is complete. \(\square \)
3 Skeleton Equation
In this section, we study the well-posedness of the skeleton equation (2.28).
For \(p\ge 2\), \(H\in (\frac{1}{4},\frac{1}{2})\), recall the space \(\mathcal Z_{\lambda , T}^p\) with the norm (2.14). The space of all non-random functions in \({\mathcal {Z}}_{\lambda , T}^p\) is denoted by \(Z_{\lambda , T}^p\), with the following norm:
where
and
Proposition 3.1
Assume that \(\sigma \) satisfies the hypothesis \((\textbf{H})\). Then, Eq. (2.28) admits a unique solution \(u^g\) in \(\mathcal {C}([0, T]\times {\mathbb {R}})\). In addition, \(\displaystyle \sup _{g\in S^N}\Vert u^g\Vert _{Z^p_{\lambda ,T}}<\infty \) for any \(N\ge 1\) and any \(p\ge 2\).
Due to the complexity of the space \({\mathcal {H}}\), it is difficult to prove Proposition 3.1 by using Picard's iteration directly. Instead, we use an approximation method, introducing a new Hilbert space \({\mathcal {H}}_{{\varepsilon }}\) as follows.
For every fixed \(\varepsilon >0\), let
For any \(\phi , \psi \in \mathcal {D}(\mathbb {R})\), we define
where \(c_{1,H}\) is given by (1.3). Let \({\mathcal {H}}_{{\varepsilon }}\) be the Hilbert space obtained by completing \({\mathcal {D}}(\mathbb {R}) \) with respect to the scalar product given by (3.5). For any \(0\le {\varepsilon }_1<{\varepsilon }_2\), we have that for any \(\phi \in {\mathcal {H}}_{{\varepsilon }_1},\)
and we have by the dominated convergence theorem,
For any \(g\in \mathbb {S}\), let
Since \(|\xi |^{1-2H}e^{-\varepsilon |\xi |^2}\) belongs to \(L^1({\mathbb {R}})\), \(f_{\varepsilon }\) is bounded. Thanks to this spatial regularity, the existence and uniqueness of the solution \(u^{g}_{\varepsilon }\) to Eq. (3.7) are well known; see, e.g., [28, Section 4].
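Quantitatively (our computation, assuming \(f_{\varepsilon }\) denotes the inverse Fourier transform of \(c_{1,H}|\xi |^{1-2H}e^{-\varepsilon |\xi |^2}\), as the sentence above indicates), the substitution \(u=\varepsilon \xi ^2\) gives

```latex
|f_{\varepsilon}(x)|
  \le c_{1,H}\int_{\mathbb{R}}|\xi|^{1-2H}e^{-\varepsilon|\xi|^{2}}\,\mathrm{d}\xi
  = c_{1,H}\,\Gamma(1-H)\,\varepsilon^{H-1},
```

a bound uniform in x which blows up as \(\varepsilon \rightarrow 0\); this is why the estimates below must be uniform in \(\varepsilon \).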
For any \(t \ge 0, x, y, h\in {\mathbb {R}}\), let
and
The following lemma asserts that the approximate solution \(u_\varepsilon ^g\) is bounded in the space \(Z_{\lambda ,T}^p\) uniformly over \({\varepsilon }>0\).
Lemma 3.1
Let \(H\in (\frac{1}{4},\frac{1}{2})\) and \(N\in {\mathbb {N}}\). Assume that \(\sigma \) satisfies the hypothesis \((\textbf{H})\). Then, for any \(p\ge 2\),
Proof
We use a similar argument to that in [22], replacing the stochastic integral by a deterministic one. We first define the Picard iteration sequence. For any \(t\ge 0, x\in {\mathbb {R}}\), let
and recursively for \(n=0, 1, 2, \cdots \),
Since \({g\in S^N}\), we know that by (3.6),
In Step 1, we prove the convergence of \(u_\varepsilon ^{g, n}(t,\cdot )\) in \(L^p_{\lambda }({\mathbb {R}})\) for any \(t\in [0,T]\). In Steps 2 and 3, we give some quantitative estimates for \(\Vert u_\varepsilon ^{g,n}(t)\Vert _{L^p_{\lambda }(\mathbb {R})}\) and \(\mathcal N^*_{\frac{1}{2}-H,\, p} u_{\varepsilon }^{g,n}(t)\) for each fixed \(\varepsilon >0\). Step 4 is devoted to proving that \(u_\varepsilon ^g\) is bounded in \(\left( Z_{\lambda ,T}^p,\, \Vert \cdot \Vert _{Z_{\lambda ,T}^p}\right) \) uniformly over \(\varepsilon >0\).
Step 1 In this step, we will prove that for any fixed \({\varepsilon }>0\), as n goes to infinity, the sequence \(u_\varepsilon ^{g, n}(t,\cdot )\) converges to \(u_\varepsilon ^{g}(t,\cdot )\) in \(L_{\lambda }^{p}({\mathbb {R}})\).
By the Cauchy–Schwarz inequality, (2.18), the boundedness of \(f_ {{\varepsilon }}\) and Jensen’s inequality with respect to \(p_{t-s}(x-y)\textrm{d}y\), we have that, for any \(t\in [0,T]\) and \(x\in \mathbb {R}\),
Integrating with respect to the spatial variable with the weight \(\lambda (x)\) and invoking (2.18), Jensen’s inequality with respect to \(p_{t-s}(x-y)\textrm{d}y\textrm{d}s\) and an application of Lemma 6.1 yield that for any \(p\ge 2\), \(t\in [0,T]\),
Then, (3.13) implies that
and that \(\{u_{\varepsilon }^{g, n}(t,\cdot )\}_{n\ge 1}\) is a Cauchy sequence in \(L^p_{\lambda }(\mathbb {R})\) for any \(t\in [0,T]\). Hence, for any fixed \(t\in [0,T]\), there exists \(u_{\varepsilon }^{g}(t,\cdot )\in L^p_{\lambda }( \mathbb {R})\) such that
Step 2 In this step, we will give a quantitative estimate for \(\Vert u_\varepsilon ^{g,n}(t,\cdot )\Vert _{L^p_{\lambda }(\mathbb {R})}\) for any fixed \({\varepsilon }>0\). By the Cauchy–Schwarz inequality, (2.6) and (3.5), we have
where
If \(|h|>1\), then we have by (2.17)
If \(|h|\le 1\), then by (2.20) and (2.21) there exists some \(\zeta \in (0,1)\) such that
By a change of variable, (3.16) and (3.17), we have
By a change of variable, Minkowski’s inequality, Lemma 6.1 and Jensen’s inequality with respect to
we have
By (2.18), a change of variable, Minkowski’s inequality, Jensen’s inequality and Lemma 6.1, we have
By (2.17), a change of variable, Minkowski’s inequality, Jensen’s inequality with respect to \((t-s)^{1-H}\left| D_{t-s}(y,h)\right| ^2 |h|^{2H-2}\textrm{d}y\textrm{d}h\) and Lemma 6.5, we have
Thus, (3.15), (3.19), (3.20) and (3.21) yield that
Step 3 This step is devoted to estimating \(\mathcal N^*_{\frac{1}{2}-H,\, p} u_{\varepsilon }^{g,n}(t)\). Using the Cauchy–Schwarz inequality and (3.6), we have
where
Therefore, by (3.3), we have
For the first term of (3.23), by (3.16), (3.17) and a change of variable, we have
By (3.24), a change of variable, Minkowski’s inequality, Lemma 6.5 and Jensen’s inequality with respect to \((t-s)^{1-H}\left| D_{t-s}(z,h)\right| ^2 |h|^{2H-2}\textrm{d}z\textrm{d}h\), we have
For the second term of (3.23), by (2.18) and a change of variable, we have
By (3.26), a change of variable, Minkowski’s inequality, Jensen’s inequality and Lemma 6.5, we have
For the last term of (3.23), by (2.17) and a change of variable, we have
By a change of variable, Minkowski’s inequality and Lemma 6.4, we have
By a change of variable, Minkowski’s inequality and Lemma 6.2, we have
Thus, by (3.23), (3.25), (3.27), (3.28) and (3.29), we have
Step 4 Let
Putting (3.22) and (3.30) together, there exists a constant \(C_{T,p,H,N}>0\) such that
By the extension of Grönwall’s lemma ([11, Lemma 15]), we have
where C is a constant independent of \({\varepsilon }\) and \(g\in S^N\).
For any fixed \({\varepsilon }>0\), by (3.14) and (3.31), we have that, for any \(t\in [0,T]\),
For any fixed t and h, we have by (3.14),
By Fatou’s lemma and (3.31), we have
By (3.32) and (3.33), we obtain that
The proof is complete. \(\square \)
Lemma 3.2
Let \(u_\varepsilon ^g\) be the approximate process defined by (3.7).
-
(i)
If \(p>\frac{6}{4H-1}\), then there exists a constant \(C_{T,p,H,N}>0\) such that
$$\begin{aligned} \sup \limits _{t\in [0,T],\,x\in \mathbb {R}}\lambda ^{\frac{1}{p}}(x)\cdot \mathcal {N}_{\frac{1}{2}-H}u_\varepsilon ^g(t,x) \le C_{T,p,H,N}\left( 1+\Vert u_\varepsilon ^g\Vert _{Z_{\lambda ,T}^p}\right) . \end{aligned}$$(3.34) -
(ii)
If \(p>\frac{3}{H}\) and \(0<\gamma <\frac{H}{2}-\frac{3}{2p}\), then there exists a constant \(C_{T ,p,H,N,\gamma }>0\) such that
$$\begin{aligned} \sup \limits _{\begin{array}{c} t,\,t+h\in [0,T], \\ x\in \mathbb {R} \end{array}}\lambda ^{\frac{1}{p}}(x) \cdot |u_\varepsilon ^g(t+h,x)-u_\varepsilon ^g(t,x)| \le C_{T ,p,H,N,\gamma }|h|^{\gamma }\cdot \left( 1+\Vert u_\varepsilon ^g\Vert _{Z_{\lambda ,T}^p}\right) . \end{aligned}$$(3.35) -
(iii)
If \(p>\frac{3}{H}\) and \(0<\gamma <H-\frac{3}{p}\), then there exists a constant \(C_{T ,p,H,N,\gamma }>0\) such that
$$\begin{aligned} \sup \limits _{\begin{array}{c} t\in [0,T], \\ x,\,y\in \mathbb {R} \end{array}}\frac{|u_\varepsilon ^g(t,x)-u_\varepsilon ^g(t,y)|}{\lambda ^{-\frac{1}{p}}(x)+\lambda ^{-\frac{1}{p}}(y)} \le C_{T,p,H,N,\gamma }|x-y|^{\gamma }\cdot \left( 1+\Vert u_\varepsilon ^g\Vert _{Z_{\lambda ,T}^p}\right) . \end{aligned}$$(3.36)
Proof
(i) By (3.7), we have
Applying (3.5) and the Fubini theorem, we have that, for any \(\alpha \in (0,1)\),
where
Set
Applying a change of variable, we have
Invoking Minkowski’s inequality and Hölder’s inequality with \(\frac{1}{p}+\frac{1}{q}=1\), we have
where in the last step we have used Lemma 6.1 and the following fact:
If \(q(\alpha -\frac{3}{2}+\frac{1}{2q})>-1\), namely if \(\alpha >\frac{3}{2p}\), then
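The equivalence used here is elementary; with \(\frac{1}{p}+\frac{1}{q}=1\),

```latex
q\Big(\alpha-\frac{3}{2}+\frac{1}{2q}\Big)>-1
\iff \alpha>\frac{3}{2}-\frac{3}{2q}
     =\frac{3}{2}\Big(1-\frac{1}{q}\Big)=\frac{3}{2p}.
```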
Thus, to prove part (i) we only need to prove that there exists some positive constant C, independent of \(r\in [0,T]\), such that for \(\frac{3}{2p}<\alpha <1\),
Now, it remains to show (3.41). Recall \(D_t(x,h)\) defined by (3.8). Applying the Cauchy–Schwarz inequality, (2.6), (3.5) and (3.12), we have
Set
By Minkowski’s inequality, we have
By the same technique as that in Step 2 of the proof for Lemma 3.1, we have
By (3.42), (3.43), (3.44), (3.45) and by using the same method as that in the proof of (4.27) in [22], we have
If \(-2\alpha +2H-\frac{3}{2}>-1\) and \(-2\alpha +H-1>-1\), namely if \(\alpha <H-\frac{1}{4}\), then (3.41) follows from (3.46). Combining this condition with \(\alpha >\frac{3}{2p}\) gives the range \(\frac{3}{2p}<\alpha <H-\frac{1}{4}\). Therefore, we have proved that (3.34) holds for any \(p>\frac{6}{4H-1}\).
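The range \(\frac{3}{2p}<\alpha <H-\frac{1}{4}\) is nonempty precisely under the standing assumption on p:

```latex
\frac{3}{2p}<H-\frac{1}{4}
\iff p>\frac{3}{2\left(H-\frac{1}{4}\right)}=\frac{6}{4H-1},
```

which is the same threshold imposed on \(p_0\) in hypothesis \((\textbf{H})\).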
(ii) By (3.7) and (3.37), we have
In the following, we give estimates for \(\mathcal {K}_i(t,h,x)\), \(i=1,2,3\), respectively. By Hölder’s inequality, we have
Applying the Cauchy–Schwarz inequality, (2.6), (3.5) and (3.12), we have
where
By using the same technique as that in Step 3 of the proof for Lemma 3.1, we have
Putting (3.49), (3.50), (3.51) and (3.52) together, we have
If \(-2\alpha -\frac{1}{2}>-1\) and \(-2\alpha +H-1>-1\), namely if \(\alpha <\frac{H}{2}\), then we have
By (3.39), (6.9) and Lemma 6.1, for any \(\gamma \in (0,1)\), we have
Hence, if \(\alpha <\frac{H}{2}\) and \(q(\alpha -1-\gamma )+\frac{1-q}{2}>-1\), namely if \(0<\gamma <\frac{H}{2}-\frac{3}{2p}\), then we have
If \(0<\gamma <\frac{H}{2}-\frac{3}{2p}\), then by putting (3.48), (3.54) and (3.55) together, we have
Applying Hölder’s inequality, (3.39) and Lemma 6.1, we have
By (3.54) and (6.8), for any \(\gamma \in (0,1)\), if \(\alpha <\frac{H}{2}\), then we have
If \(\alpha <\frac{H}{2}\) and \(q(\alpha -1-\gamma )+\frac{1-q}{2}>-1\), namely if \(0<\gamma <\frac{H}{2}-\frac{3}{2p}\), then there exists a positive constant \(C_{T,p,H,N,\gamma }\) such that
By Hölder’s inequality and Lemma 6.1, we have
Let \(\gamma =\alpha -\frac{3}{2p}\). If \(\frac{3}{2p}<\alpha <\frac{H}{2}\), namely if \(0<\gamma <\frac{H}{2}-\frac{3}{2p}\), then (3.54) yields that
If \(p>\frac{3}{H}\) and \(0<\gamma <\frac{H}{2}-\frac{3}{2p}\), then by putting (3.47), (3.56), (3.58) and (3.60) together, we have
(iii) Notice that
For any \(\gamma \in (0,1)\), by using the same method as that in the proof of (4.35) in [22] and (3.54), we have
If \(\alpha <\frac{H}{2}\) and \((\alpha q-\frac{3q}{2}+\frac{1}{2})-\frac{\gamma q}{2}>-1\), namely if \(p>\frac{3}{H}\) and \(0<\gamma <H-\frac{3}{p}\), then we get the desired result (3.36).
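For completeness, the arithmetic behind the stated range of \(\gamma \) (with \(\frac{1}{p}+\frac{1}{q}=1\)): dividing through by q,

```latex
\Big(\alpha q-\frac{3q}{2}+\frac{1}{2}\Big)-\frac{\gamma q}{2}>-1
\iff \gamma<2\alpha-3+\frac{3}{q}=2\alpha-\frac{3}{p},
```

and letting \(\alpha \uparrow \frac{H}{2}\) yields \(\gamma <H-\frac{3}{p}\); requiring \(\gamma >0\) forces \(p>\frac{3}{H}\).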
The proof is complete. \(\square \)
Lemma 3.3
Assume that \(u_{n}\in Z^p_{\lambda ,T}\), \(n\ge 1\), for some \(p\ge 2\). If \(u_n\rightarrow u\) in \((\mathcal {C}([0, T]\times \mathbb {R}),d_{\mathcal {C}})\) as \(n\rightarrow \infty \), then we have the following results:
-
(i)
u is also in \(Z^p_{\lambda ,T}\);
-
(ii)
for any fixed \(t\in [0,T]\),
$$\begin{aligned} \int ^t_0\int _{\mathbb R}\Vert p_{t-s}(x-\cdot )\sigma (s,\cdot ,u(s,\cdot ))\Vert ^2_{\mathcal {H}} \lambda (x) \textrm{d}s\textrm{d}x<\infty . \end{aligned}$$(3.63)Hence, for almost all \(x\in {\mathbb {R}}\),
$$\begin{aligned} \int ^t_0\Vert p_{t-s}(x-\cdot )\sigma (s,\cdot ,u(s,\cdot ))\Vert ^2_{\mathcal {H}} \textrm{d}s<\infty . \end{aligned}$$(3.64)
Proof
(i) This result follows from [22, Lemma 4.6], with \(\Vert \cdot \Vert _{L^p_{\lambda }(\Omega \times \mathbb {R})}\) replaced by \(\Vert \cdot \Vert _{L^p_{\lambda }(\mathbb {R})}\); the proof is omitted here.
(ii) Recall \(D_t(x,h)\) given by (3.8). By (2.6) and a change of variable, we have
By (2.18), (3.18), a change of variable and Lemma 6.1, we have that, for any fixed \(t\in [0,T]\),
By (2.17), a change of variable and Lemma 6.2, we have that, for any fixed \(t\in [0,T]\),
By (3.16), (3.17), (3.18), a change of variable and Lemma 6.1, we have that, for any fixed \(t\in [0,T]\),
Putting (3.65)-(3.68) together, we obtain (3.63). In particular, (3.64) holds for almost all \(x\in {\mathbb {R}}\).
The proof is complete. \(\square \)
Proof of Proposition 3.1
(Existence) By Lemma 3.2 (ii) and (iii), we have that for any \(R>0\) and \(\gamma >0\), there exists \(\theta >0\) such that
Combining this with the fact that \(u^{g}_{{\varepsilon }}(0,\cdot )\equiv 1\), we know that the family \(\{u^{g}_{{\varepsilon }}\}_{{\varepsilon }>0}\) is relatively compact in the space \(({{\mathcal {C}}}([0,T]\times \mathbb {R}), d_{{\mathcal {C}}})\) by the Arzelà–Ascoli theorem. Thus, there is a subsequence \(\varepsilon _n\downarrow 0\) such that \(u^g_{\varepsilon _n}\) converges to a function \( u^g\) in \((\mathcal {C}([0, T]\times \mathbb {R}),d_{\mathcal {C}})\).
By using the Cauchy–Schwarz inequality, (2.18), Lemma 3.3 (ii) and the dominated convergence theorem, we have
By the uniqueness of the limit of \(\{u^g_{\varepsilon _n}\}_{n\ge 1}\), we know that \(u^g\) satisfies Eq. (2.28). By Lemma 3.1 and Lemma 3.3 (i), we have that, for any \(p\ge 2\),
(Uniqueness) Let \(u^g\) and \(v^g\) be two solutions of (2.28). By using the same technique as that in the proof of Lemma 3.2 (i), we have that, for any \(p>\frac{6}{4H-1}\),
Denote that
and
According to (3.2), (3.3) and (3.69), we know that
Recall \(D_{t}(x,h)\) defined by (3.8) and denote that
Since \(\int _0^T \Vert g(s,\cdot )\Vert _{\mathcal H}^2\textrm{d}s<\infty \), by using the Cauchy–Schwarz inequality, (2.6) and a change of variable, we have that, for any \(t\in [0,T]\),
By the integral mean value theorem, we have
where \(p_0\in \left( \frac{6}{4H-1},\infty \right) \) is the constant appearing in (2.22).
By (3.18) and Lemma 6.1, we have
By (3.18), (3.70), (3.71) and Lemma 6.1, we have
Combining (3.74), (3.75) with (3.76), we obtain that
A change of variable, Lemma 6.5 and (2.18) yield that
If \(h>1\), then we have that by (2.19),
If \(h\le 1\), then we have that by (2.21),
Thus, by (3.39), (3.79), (3.80) and Lemma 6.1, we have
By (3.73), (3.77), (3.78) and (3.81), we have
Using a procedure similar to that in the proof of (3.82), we obtain that
Therefore, by (3.82) and (3.83), we have
Notice that \(S_1(t)\) and \(S_2(t)\) are uniformly bounded on [0, T] as in (3.72). By the fractional Grönwall lemma ([15, Lemma 7.1.1]), we have
This, together with the continuities of \(u^g\) and \(v^g\), implies that \(u^g=v^g\).
The proof is complete. \(\square \)
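For the reader's convenience, the fractional Grönwall lemma is used here in its constant-majorant form (a standard corollary of [15, Lemma 7.1.1]; \(E_\beta\) denotes the Mittag-Leffler function): for a nonnegative locally integrable function \(u\) and constants \(a,b\ge 0\), \(\beta >0\),

```latex
% constant-majorant form of the fractional Gronwall lemma [15, Lemma 7.1.1]
u(t) \le a + b\int_0^t (t-s)^{\beta-1}u(s)\,\mathrm{d}s
\quad\Longrightarrow\quad
u(t) \le a\,E_\beta\!\big(b\,\Gamma(\beta)\,t^{\beta}\big).
```

In particular, \(a=0\) forces \(u\equiv 0\), which is how the preceding estimate yields \(u^g=v^g\).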
4 Verification of Condition 2.2 (a)
We now verify Condition 2.2 (a). Recall that \(\Gamma ^{0}\left( \int _0^\cdot g(s)\textrm{d}s\right) =u^g\) for \(g\in \mathbb {S}\), where \(u^g\) is the solution of Eq. (2.28).
Proposition 4.1
Assume that \(\sigma \) satisfies the hypothesis \((\textbf{H})\). For any \(N\ge 1\), let \(g_n, g\in S^N\) be such that \(g_n\rightarrow g\) weakly as \(n\rightarrow \infty \). Let \(u^{g_n}\) denote the solution to Eq. (2.28) replacing g by \(g_n\). Then, as \(n\rightarrow \infty \),
Proof
The proof is divided into two steps. In Step 1, we prove that the family \(\{u^{g_n}\}_{n\ge 1}\) is relatively compact in \(\mathcal {C}([0,T]\times \mathbb {R})\), which implies that there exists a subsequence of \(\{u^{g_n}\}_{n\ge 1}\) (still denoted by \(\{u^{g_n}\}_{n\ge 1}\)) such that \(u^{g_n}\rightarrow u\) as \(n\rightarrow \infty \) in \(\mathcal {C}([0,T]\times \mathbb {R})\) for some function \(u\in \mathcal {C}([0,T]\times \mathbb {R})\). In Step 2, we show that \(u=u^g\).
Step 1 Recall that
Since \(\{g_n\}_{n\ge 1}\subset S^N\), by Proposition 3.1, we know that for any \(p\ge 2\),
As in (3.37), for any \(\alpha \in (0,1)\), we have
where
If \(\alpha <\frac{H}{2}\), then by using the same technique as that in the proof of (3.54), we have
Fix \(\gamma \in (0,1)\). By using the same method as that in the proof of (3.61), if \(\alpha <\frac{H}{2}\), then we have
If \(\alpha <\frac{H}{2}\) and \(q(\alpha -1-\gamma )+\frac{1-q}{2}>-1\), namely if \(p>\frac{3}{H}\) and \(0<\gamma <\frac{H}{2}-\frac{3}{2p}\), then there exists a positive constant \( C_{T,p,H,N,\gamma }\) such that
By using the same method as that in the proof of (3.62), if \(\alpha <\frac{H}{2}\), then we have
If \(\alpha <\frac{H}{2}\) and \((\alpha q-\frac{3q}{2}+\frac{1}{2})-\frac{\gamma q}{2}>-1\), namely if \(p>\frac{3}{H}\) and \(0<\gamma <H-\frac{3}{p}\), then there exists a positive constant \( C_{T,p,H,N,\gamma }\) such that
By (4.5) and (4.7), we have that, for any \(R>0\) and \(\gamma >0\), there exists \(\theta >0\) such that
Combining this with the initial value \(u^{g_n}(0,\cdot )\equiv 1\), we know that \(\{u^{g_n}\}_{n\ge 1}\) is relatively compact on the space \(({{\mathcal {C}}}([0,T]\times \mathbb {R}), d_{{\mathcal {C}}})\) by the Arzelà–Ascoli theorem. Thus, there exists a subsequence of \(\{u^{g_n}\}_{n\ge 1}\) (still denoted by \(\{u^{g_n}\}_{n\ge 1}\)) and \(u\in \mathcal {C}([0,T]\times \mathbb {R})\) such that \(u^{g_n}\rightarrow u\) as \(n\rightarrow \infty \). By Lemma 3.3 (i) and (4.2), we have that, for any \(p\ge 2\),
Step 2 In this step, we prove that \(u=u^g\). Denote that
and
According to (3.2), (3.3) and (3.69), we know that \(\displaystyle \sup _{t\in [0,T]}D_1(t)<\infty \) and \(\displaystyle \sup _{t\in [0,T]}D_2(t)<\infty \). Recall \(D_{t}(x,h)\) defined in (3.8) and denote that
By a technique similar to that in the uniqueness part of the proof of Proposition 3.1, we have
On the other hand, denote that
By Lemma 3.3 (ii), we know that for almost all \(x\in {\mathbb {R}}\),
This, together with the weak convergence of \(g_n\) to g, implies that for almost all \(x\in {\mathbb {R}}\),
Since \(g_n, g\in S^N\), by using the Cauchy–Schwarz inequality, (2.6), and a change of variable, we have that, for any \(p\ge 2\),
By a technique similar to that in Step 2 of the proof of Proposition 3.1, we have
It follows from [8, p. 105, Exercise 8] that \(\{G_n(t,x)\}_{n\ge 1}\) is \(L^1\)-uniformly integrable in \((\mathbb R, \lambda (x)\textrm{d}x)\), namely
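The criterion from [8] being invoked is, in essence, that boundedness in \(L^p\) for some \(p>1\) on the finite measure space \((\mathbb {R},\lambda (x)\textrm{d}x)\) implies \(L^1\)-uniform integrability:

```latex
% L^p-boundedness criterion for uniform integrability on a finite measure space
\sup_{n\ge 1}\int_{\mathbb{R}} |G_n(t,x)|^{p}\,\lambda(x)\,\mathrm{d}x<\infty
\ \ \text{for some } p>1
\quad\Longrightarrow\quad
\lim_{M\to\infty}\,\sup_{n\ge 1}\int_{\{|G_n(t,\cdot)|>M\}} |G_n(t,x)|\,\lambda(x)\,\mathrm{d}x=0.
```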
By (4.12) and the dominated convergence theorem, we have
Using the same technique as that in the proof of (4.16), we can prove that
Since \(D_1(t)\) and \(D_2(t)\) are uniformly bounded on [0, T], they are integrable on [0, T]. Putting (4.9), (4.10), (4.16) and (4.17) together, by the fractional Grönwall lemma ([15, Lemma 7.1.1]), we have
In particular, \(u^{g_n}(t,\cdot )\rightarrow u^g(t,\cdot )\) as \(n\rightarrow \infty \) in the space \( L_{\lambda }^2(\mathbb {R})\) for all \(t\in [0,T]\). Since \(u^{g_n}\) also converges to u as \(n\rightarrow \infty \) in the space \(\left( \mathcal {C}([0,T]\times \mathbb {R}),d_{\mathcal {C}}\right) \), the uniqueness of the limit of \(u^{g_n}\) implies that \(u=u^g\).
The proof is complete. \(\square \)
5 Verification of Condition 2.2 (b)
For any \({\varepsilon }>0\), define the solution functional \(\Gamma ^{{\varepsilon }}: {C([0,T];\mathbb {R}^{\infty })} \rightarrow \mathcal C([0,T]\times \mathbb {R}) \) by
where \(u^{\varepsilon }\) stands for the solution of Eq. (1.1) and W can be regarded as a cylindrical Brownian motion on \(\mathcal H\) by (2.7).
Let \(\{g^{{\varepsilon }}\}_{{\varepsilon }>0}\subset {\mathcal {U}}^N\) be a given family of stochastic processes. By the Girsanov theorem, it is easy to see that \(\tilde{u}^{{\varepsilon }}:=\Gamma ^{{\varepsilon }}\left( W({\cdot })+\frac{1}{\sqrt{\varepsilon }} \int _0^{\cdot }{g}^{{\varepsilon }}(s)\textrm{d}s \right) \) is the unique solution of the equation
with the initial value \(\tilde{u}^{{\varepsilon }}(0,\cdot )\equiv 1\).
Recall the map \( \Gamma ^0\) defined by (2.29). Then \(\bar{u}^{{\varepsilon }}:=\Gamma ^0\left( \int _0^{\cdot } g^{{\varepsilon }}(s)\textrm{d}s\right) \) solves the equation
with the initial value \(\bar{u}^{{\varepsilon }}(0,\cdot )\equiv 1\).
Equivalently, we have
and
Proposition 5.1
Assume that \(\sigma \) satisfies the hypothesis \((\textbf{H})\) for some constant \(p_0>\frac{6}{4H-1}\). For every \(N<+\infty \) and \(\{g^{{\varepsilon }}\}_{{\varepsilon }>0}\subset {\mathcal {U}}^N\), it holds that for any \(\delta >0\),
Before proving Proposition 5.1, we give the following lemmas.
Lemma 5.1
Assume that \(\sigma \) satisfies the hypothesis \((\textbf{H})\) for some constant \(p_0>\frac{6}{4H-1}\). Then, it holds that for any \(p> p_0\),
Proof
We give the details of the proof for \(\tilde{u}^{{\varepsilon }}\); the proof for \(\bar{u}^{{\varepsilon }}\) is similar but simpler and is omitted. The proof here is inspired by that of [22, Lemma 4.5].
Step 1 As in (4.38) of [22], let
Consider
Let
and recursively for \(n=0,1,2,\ldots ,\)
By using [19, Lemma 4.15] and an argument similar to that in Step 1 of the proof of Lemma 3.1, we know that for any fixed \(t\in [0,T]\) and \(\eta >0\), \(\tilde{u}^{{\varepsilon },n}_{\eta }(t,\cdot )\) converges to \(\tilde{u}^{{\varepsilon }}_{\eta }(t,\cdot )\) in \(L^p_{\lambda }(\Omega \times \mathbb {R})\) as n goes to infinity.
By using the same method as that in the proof of Lemma 3.1, we obtain that
Next, we will give some estimates for \(\Vert \Phi _{1,{\eta }}^{{\varepsilon },n}(t,\cdot )\Vert _{L^p_{\lambda }(\Omega \times \mathbb {R})}\) and \(\mathcal {N}^*_{\frac{1}{2}-H,\,p}\Phi ^{{\varepsilon },n}_{1,{\eta }}(t)\).
Step 2 In this step, we estimate \(\Vert \Phi _{1,{\eta }}^{{\varepsilon },n}(t,\cdot )\Vert _{L^p_{\lambda }(\Omega \times \mathbb {R})}\). By the Burkholder–Davis–Gundy inequality, we have that
where \(D_{t-s}(x-y,h)\) is defined by (3.8).
By using the same arguments as those in Step 3 of the proof of Lemma 3.1, we have
and
Therefore, by (5.10), (5.11), (5.12) and (5.13), we have
Step 3 In this step, we deal with \(\mathcal {N}^*_{\frac{1}{2}-H,\,p}\Phi _{1,{\eta }}^{{\varepsilon },n}(t)\). By the Burkholder–Davis–Gundy inequality and the hypothesis \((\textbf{H})\), a calculation similar to that in Step 2 of the proof of Lemma 3.1 implies that
where \(\Box _{t-s}(x-z,y,h)\) is defined by (3.9).
Applying a change of variable, Minkowski’s inequality, Jensen’s inequality and Lemma 6.5, we have
Therefore, by (5.15), (5.16), (5.17) and (5.18), we have
Step 4 By (5.14) and (5.19), we obtain that
For any \(t\ge 0\), let
By (5.8), (5.9) and (5.20), there exists a constant \(C_{T,p,H,N}>0\) such that
By the extension of Grönwall’s lemma [11, Lemma 15], we have
where C is a constant independent of \(\eta \in (0,\infty )\) and \({\varepsilon }\in (0,1)\).
By using the same argument as that in Step 3 of the proof of [22, Lemma 4.5] (or Step 4 in the proof of Lemma 3.1), we have
where C is a constant independent of \({\varepsilon }\in (0,1)\).
By using the same methods as those in the proofs of [22, Lemma 4.7 (ii), (iii)] and Lemma 3.2 (ii), (iii), we have that, for any \(R>0\) and \(\gamma >0\), there exists \(\theta >0\) such that for each \(i=1,2\),
where
By Lemma 6.7, the family \(\{\tilde{u}^{{\varepsilon }}_{\eta }\}_{\eta >0}\) is tight on the space \({{\mathcal {C}}}([0,T]\times \mathbb {R})\). Thus, \(\tilde{u}^{{\varepsilon }}_{\eta }\rightarrow \tilde{u}^{{\varepsilon }}\) almost surely in the space \(\left( \mathcal {C}([0,T]\times \mathbb {R}),d_\mathcal {C}\right) \) as \(\eta \rightarrow 0\). By the same method as that in the proofs of [22, Theorem 1.5] and Proposition 3.1, we know that \(\tilde{u}^{{\varepsilon }}\) is the solution of Eq. (5.4). By (5.21) and [22, Lemma 4.6], we have
The proof is complete. \(\square \)
For any \(u\in \mathcal {Z}^p_{\lambda ,T}\) and \(g\in {\mathcal {U}}^N\), let
By using Lemma 3.2 and Minkowski’s inequality, we have the following results.
Lemma 5.2
Assume that \(\sigma \) satisfies the hypothesis \((\textbf{H})\) for some constant \(p_0>\frac{6}{4H-1}\). Then we have the following results:
(i) For any \(p>p_0\), there exists a constant \( C_{T, p,H,N}>0\) such that
$$\begin{aligned} \left\| \sup _{t\in [0,T],\,x\in \mathbb {R}} \lambda ^{\frac{1}{p}}(x)\mathcal {N}_{\frac{1}{2}-H}Y(t,x)\right\| _{L^p(\Omega )}\le \, C_{T,p,H,N}\left( 1+\left\| u\right\| _{\mathcal {Z}^p_{\lambda ,T}}\right) ; \end{aligned}$$(5.23)
(ii) If \(p>\frac{3}{H}\) and \(0<\gamma <\frac{H}{2}-\frac{3}{2p}\), then there exists a positive constant \(C_{T,p,H,N,\gamma }\) such that
$$\begin{aligned} \begin{aligned}&\left\| \sup \limits _{\begin{array}{c} t,\,t+h\in [0,T], \\ x\in \mathbb {R} \end{array}}\lambda ^{\frac{1}{p}}(x)\big [Y(t+h,x) -Y(t,x) \big ]\right\| _{L^p(\Omega )} \\ {}&\quad \le \,C_{T,p,H,N,\gamma }|h|^\gamma \cdot \left( 1+ \left\| u \right\| _{\mathcal {Z}^p_{\lambda ,T}}\right) ; \end{aligned} \end{aligned}$$(5.24)
(iii) If \(p>\frac{3}{H}\) and \(0<\gamma <H-\frac{3}{p}\), then there exists a positive constant \(C_{T,p,H,N,\gamma }\) such that
$$\begin{aligned} \begin{aligned}&\left\| \sup \limits _{\begin{array}{c} t\in [0,T], \\ x,y\in \mathbb {R} \end{array}}\frac{Y(t,x)- Y(t,y) }{\lambda ^{-\frac{1}{p}}(x)+\lambda ^{-\frac{1}{p}}(y)}\right\| _{L^p(\Omega )} \\ {}&\quad \le \,C_{ T,p,H,N,\gamma }|x-y|^\gamma \cdot \left( 1+ \left\| u\right\| _{\mathcal {Z}^p_{\lambda ,T}}\right) . \end{aligned} \end{aligned}$$(5.25)
Recall \({\tilde{u}}^{{\varepsilon }}\) and \({\bar{u}}^{{\varepsilon }}\) defined by (5.4) and (5.5), respectively. For any \(k\ge 1\) and any \(p\ge p_0\), where \(p_0\) is defined by (2.22), define the stopping time
By [22, Lemma 4.7], Lemmas 5.1 and 5.2, we know that
These, together with Chebyshev's inequality, imply that
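Schematically, the Chebyshev step bounds the exit probability of the stopping time \(\tau _k\) defined above by the moment estimates of Lemmas 5.1 and 5.2: writing \(X(t)\) for the relevant weighted norm of \(\tilde{u}^{{\varepsilon }}\) (resp. \(\bar{u}^{{\varepsilon }}\)) entering the definition of \(\tau _k\),

```latex
% schematic Chebyshev bound; X(t) is a placeholder for the weighted norm
% appearing in the definition of the stopping time
\mathbb{P}(\tau_k\le T)
\ \le\ \mathbb{P}\Big(\sup_{t\in[0,T]} X(t)\ge k\Big)
\ \le\ \frac{1}{k^{p}}\,\mathbb{E}\Big[\sup_{t\in[0,T]} X(t)^{p}\Big]
\ \le\ \frac{C_{T,p,H,N}}{k^{p}},
```

so the right-hand side is uniform in \({\varepsilon }\) and vanishes as \(k\rightarrow \infty \).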
For any \(t\in [0,T]\), let
Obviously, when \(\tau _k> T\), we have \(\tilde{u}^{{\varepsilon }}_k(t,\cdot )=\tilde{u}^{{\varepsilon }}(t,\cdot )\) and \(\bar{u}^{{\varepsilon }}_k(t,\cdot )=\bar{u}^{{\varepsilon }}(t,\cdot )\) for all \(t\in [0,T]\).
Lemma 5.3
Assume that \(\sigma \) satisfies the hypothesis \((\textbf{H})\) for some constant \(p_0>\frac{6}{4H-1}\). Then for any \(p\ge p_0\),
Proof
Let
and
where \(\triangle (t,x,y):=\sigma (t,x,\tilde{u}^{{\varepsilon }}_k(t,y))-\sigma (t,x,\bar{u}^{{\varepsilon }}_k(t,y))\). Then, by (5.4) and (5.5), we have
According to Lemmas 4.5 and 4.6 in [22], we have \(\big \Vert \Phi _1^{{\varepsilon }}\big \Vert _{\mathcal {Z}_{\lambda ,T}^p}<\infty \) for any \(p\ge p_0\). Now, it remains to give an estimate for \(\big \Vert \Phi _2^{{\varepsilon }}\big \Vert _{\mathcal {Z}_{\lambda ,T}^p}\).
By using the same technique as that in the proof of (5.14), we have
Next, we deal with the term \(\mathcal {N}^*_{\frac{1}{2}-H,p} \Phi _2^{{\varepsilon }}(t)\). Since \(g^{{\varepsilon }}\in \mathcal {U}^N\), by the Cauchy–Schwarz inequality and (2.6), we have
where
By a change of variable, we have
where \(p_0\) is the constant given by (2.22).
By using a change of variable, the Minkowski inequality, Jensen’s inequality and Lemma 6.5, we have
and
By (2.18), we have
Hence, by a change of variable, Minkowski’s inequality, Jensen’s inequality and Lemma 6.5, we have
By Lemma 6.5, we have
Putting (5.34), (5.35), (5.36), (5.37) and (5.38) together, we have
It follows from Lemma 5.1 that \(\left\| \tilde{u}^{{\varepsilon }}_k-\bar{u}_k^{{\varepsilon }}\right\| _{\mathcal {Z}_{\lambda ,T}^p}\) is finite. By (5.32), (5.33), (5.39) and the fractional Grönwall lemma ( [15, Lemma 7.1.1]), we have that, for any fixed \(k\ge 1\) and \(p>p_0\),
The proof is complete. \(\square \)
We now give the proof of Proposition 5.1.
Proof of Proposition 5.1
By [22, Lemma 4.7], Lemmas 5.1, 5.2 and 6.7, we know that the probability measures on the space \((\mathcal {C}([0,T]\times \mathbb {R}),\mathcal {B}(\mathcal {C}([0,T]\times \mathbb {R})), d_{\mathcal {C}})\) corresponding to the processes \(\{\tilde{u}^{{\varepsilon }}-\bar{u}^{{\varepsilon }}\}_{{\varepsilon }>0}\) are tight. Thus, there is a subsequence \({\varepsilon }_n\downarrow 0\) such that \(\tilde{u}^{{\varepsilon }_n}-\bar{u}^{{\varepsilon }_n}\) converges weakly to some stochastic process \(Z=\{Z(t,x), t\in [0,T], x\in {\mathbb {R}}\}\) in \((\mathcal {C}([0,T]\times \mathbb {R}),\mathcal {B}(\mathcal {C}([0,T]\times \mathbb {R})), d_{\mathcal {C}})\).
On the other hand, for any \(k\ge 1, p\ge p_0, \gamma >0\),
First letting \({\varepsilon }\rightarrow 0\) and then letting \(k\rightarrow \infty \), by Lemma 5.3 and (5.27), we have
Thus, for any fixed \(t\in [0,T]\), the family \(\{\tilde{u}^{{\varepsilon }}(t, x)(\omega )-\bar{u}^{{\varepsilon }}(t,x)(\omega ); (\omega ,x)\in \Omega \times {\mathbb {R}}\}\) converges in probability on the product probability space \(( \Omega \times {\mathbb {R}}, \mathbb P\otimes \lambda (x)\textrm{d}x )\). This, together with the weak convergence of \(\{\tilde{u}^{{\varepsilon }_n}-\bar{u}^{{\varepsilon }_n}\}_{n\ge 1}\) and the uniqueness of the limit distribution of \(\tilde{u}^{{\varepsilon }_n}-\bar{u}^{{\varepsilon }_n}\), implies that \(Z(t,x)\equiv 0, t\in [0,T], x\in {\mathbb {R}}\), almost surely. Thus, as \({\varepsilon }\rightarrow 0\), \(\tilde{u}^{{\varepsilon }}-\bar{u}^{{\varepsilon }}\) converges weakly to 0 in \((\mathcal {C}([0,T]\times \mathbb {R}),\mathcal {B}(\mathcal {C}([0,T]\times \mathbb {R})), d_{\mathcal {C}})\). Equivalently, the sequence of real-valued random variables \( d_\mathcal {C}(\tilde{u}^{{\varepsilon }},\bar{u}^{{\varepsilon }})\) converges to 0 in distribution as \({\varepsilon }\) goes to 0. This implies that as \({\varepsilon }\rightarrow 0\),
See, e.g., [8, p. 98, Exercise 4]. The proof is complete. \(\square \)
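The elementary fact invoked in the last step of the proof is that convergence in distribution to a deterministic limit upgrades to convergence in probability:

```latex
% convergence in distribution to a constant implies convergence in probability
d_{\mathcal C}\big(\tilde u^{\varepsilon},\bar u^{\varepsilon}\big)
\xrightarrow{\ d\ } 0
\quad\Longrightarrow\quad
\lim_{\varepsilon\to 0}\,
\mathbb{P}\big(d_{\mathcal C}(\tilde u^{\varepsilon},\bar u^{\varepsilon})>\delta\big)=0
\quad\text{for every }\delta>0,
```

since for a constant limit the two modes of convergence coincide.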
Data Availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
References
Balan, R., Jolis, M., Quer-Sardanyons, L.: SPDEs with affine multiplicative fractional noise in space with index \(\frac{1}{4}< H<\frac{1}{2} \). Electron. J. Probab. 2(54), 1–36 (2015)
Balan, R., Jolis, M., Quer-Sardanyons, L.: SPDEs with rough noise in space: Hölder continuity of the solution. Stat. Probab. Lett. 119, 310–316 (2016)
Budhiraja, A., Chen, J., Dupuis, P.: Large deviations for stochastic partial differential equations driven by a Poisson random measure. Stoch. Process. Appl. 123(2), 523–560 (2013)
Budhiraja, A., Dupuis, P.: A variational representation for positive functionals of infinite dimensional Brownian motion. Probab. Math. Statist. 20(1), 39–61 (2000)
Budhiraja, A., Dupuis, P.: Analysis and Approximation of Rare Events: Representations and Weak Convergence Methods. Springer, New York (2019)
Budhiraja, A., Dupuis, P., Maroulas, V.: Large deviations for infinite dimensional stochastic dynamical systems. Ann. Probab. 36(4), 1390–1420 (2008)
Budhiraja, A., Dupuis, P., Maroulas, V.: Variational representations for continuous time processes. Ann. Inst. Henri Poincaré Probab. Stat. 47(3), 725–747 (2011)
Chung, K.L.: A Course in Probability Theory, 3rd edn. Academic Press Inc, San Diego (2001)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions, 2nd edn. Cambridge University Press, Cambridge (2014)
Dai, Y., Li, R.: Transportation inequality for stochastic heat equation with rough dependence in space. Acta Math. Sin (Engl. Ser.) 38(11), 2019–2038 (2022)
Dalang, R.C.: Extending the martingale measure stochastic integral with applications to spatially homogeneous spde’s. Electron. J. Probab. 4(6), 1–29 (1999)
Dalang, R.C., Quer-Sardanyons, L.: Stochastic integrals for spde’s: a comparison. Expo. Math. 29(1), 67–109 (2011)
Dong, Z., Wu, J., Zhang, R., Zhang, T.: Large deviation principles for first-order scalar conservation laws with stochastic forcing. Ann. Appl. Probab. 30(1), 324–367 (2020)
Dupuis, P., Ellis, R.: A Weak Convergence Approach to the Theory of Large Deviations. Wiley, New York (1997)
Henry, D.: Geometric theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, vol. 840. Springer-Verlag, Berlin, New York (1981)
Hong, W., Hu S., Liu, W.: McKean-Vlasov SDEs and SPDEs with locally monotone coefficients. arXiv:2205.04043, (2022)
Hong, W., Li, S., Liu, W.: Large deviation principle for McKean–Vlasov quasilinear stochastic evolution equation. Appl. Math. Optim. 84, S1119–S1147 (2021)
Hu, Y.: Some recent progress on stochastic heat equations. Acta Math. Sci. Ser. B (Engl. Ed.) 39(3), 874–914 (2019)
Hu, Y., Huang, J., Lê, K., Nualart, D., Tindel, S.: Stochastic heat equation with rough dependence in space. Ann. Probab. 45(6), 4561–4616 (2017)
Hu, Y., Huang, J., Lê, K., Nualart, D., Tindel, S.: Parabolic Anderson model with rough dependence in space. Computation and combinatorics in dynamics, stochastics and control, 477-498. Abel Symp., 13, Springer, Cham (2018)
Hu, Y., Nualart, D., Zhang, T.: Large deviations for stochastic heat equation with rough dependence in space. Bernoulli 24(1), 354–385 (2018)
Hu, Y., Wang, X.: Stochastic heat equation with general rough noise. Ann. Inst. Henri Poincaré Probab. Stat. 58(1), 379–423 (2022)
Liu, J.: Moderate deviations for stochastic heat equation with rough dependence in space. Acta Math. Sin. (Engl. Ser.) 35(9), 1491–1510 (2019)
Liu, S., Hu, Y., Wang, X.: Nonlinear stochastic wave equation driven by rough noise. J. Differ. Equ. 331, 99–161 (2022)
Liu, W.: Large deviations for stochastic evolution equations with small multiplicative noise. Appl. Math. Optim. 61(1), 27–56 (2010)
Liu, W., Song, Y., Zhai, J., Zhang, T.: Large and moderate deviation principles for McKean–Vlasov with jumps. Potential Anal., in press
Liu, W., Tao, C., Zhu, J.: Large deviation principle for a class of SPDE with locally monotone coefficients. Sci. China Math. 63(6), 1181–1202 (2020)
Márquez-Carreras, D., Sarrà, M.: Large deviation principle for a stochastic heat equation with spatially correlated noise. Electron. J. Probab. 8(12), 1–39 (2003)
Matoussi, A., Sabbagh, W., Zhang, T.: Large deviation principles of obstacle problems for quasilinear stochastic PDEs. Appl. Math. Optim. 83(2), 849–879 (2021)
Peszat, S., Zabczyk, J.: Stochastic evolution equations with a spatially homogeneous Wiener process. Stoch. Process. Appl. 72, 187–204 (1997)
Peszat, S., Zabczyk, J.: Nonlinear stochastic wave and heat equations. Probab. Theory Relat. Fields 116, 421–443 (2000)
Pipiras, V., Taqqu, M.S.: Integration questions related to fractional Brownian motion. Probab. Theory Relat. Fields 118(2), 251–291 (2000)
Ren, J., Zhang, X.: Freidlin–Wentzell’s large deviations for stochastic evolution equations. J. Funct. Anal. 254, 3148–3172 (2008)
Song, J.: SPDEs with colored Gaussian noise: a survey. Commun. Math. Stat. 6(4), 481–492 (2018)
Song, J., Song, X., Xu, F.: Fractional stochastic wave equation driven by a Gaussian noise rough in space. Bernoulli 26(4), 2699–2726 (2020)
Wang, R., Zhang, S., Zhai, J.: Large deviation principle for stochastic Burgers type equation with reflection. Commun. Pure Appl. Anal. 21(1), 213–238 (2022)
Wu, W., Zhai, J.: Large deviations for stochastic porous media equation on general measure space. J. Differ. Equ. 269, 10002–10036 (2020)
Xiong, J., Zhai, J.: Large deviations for locally monotone stochastic partial differential equations driven by Lévy noise. Bernoulli 24, 2842–2874 (2018)
Xu, T., Zhang, T.: White noise driven SPDEs with reflection: existence, uniqueness and large deviation principles. Stoch. Process. Appl. 119, 3453–3470 (2009)
Acknowledgements
The authors are grateful to the anonymous referees for their constructive comments and corrections which have led to significant improvement of this paper. The research of R. Li is partially supported by Shanghai Sailing Program Grants 21YF1415300 and NNSFC Grant 12101392. The research of R. Wang is partially supported by NNSFC Grants 11871382 and 12071361. The research of B. Zhang is partially supported by NNSFC Grants 11971361 and 11731012.
Author information
Contributions
RL, RW and BZ contributed to writing—reviewing and editing.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
In this section, we give some lemmas related to the heat kernel \(p_t(x)\). Recall \(\lambda (x)\) defined by (2.13), and \(D_{t}(x,h)\) and \(\Box _{t}(x,y,h)\) defined by (3.8) and (3.9), respectively.
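Before stating the lemmas, here is a quick numerical sanity check (a sketch, not part of the proofs) of the semigroup identity \(\int _{\mathbb {R}}p_s(x-y)p_t(y)\,\textrm{d}y=p_{s+t}(x)\), which underlies several of the changes of variable used above. It assumes the normalization \(p_t(x)=(4\pi t)^{-1/2}e^{-x^2/(4t)}\), which may differ from the paper's convention by a constant in the exponent.

```python
import math

def p(t, x):
    # Gaussian heat kernel of u_t = u_xx (one common normalization;
    # the paper's p_t(x) may differ by a constant in the exponent)
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def convolve(s, t, x, h=0.01, L=20.0):
    # Riemann-sum approximation of \int_{-L}^{L} p_s(x - y) p_t(y) dy;
    # the Gaussian tails beyond [-L, L] are negligible
    n = int(2 * L / h)
    return sum(p(s, x - (-L + i * h)) * p(t, -L + i * h) for i in range(n)) * h

# semigroup (Chapman-Kolmogorov) identity: p_s * p_t = p_{s+t}
lhs = convolve(0.3, 0.7, 1.5)
rhs = p(1.0, 1.5)
assert abs(lhs - rhs) < 1e-6
```

The same numerical setup can be reused to spot-check the scaling behaviour \(p_t(x)=t^{-1/2}p_1(t^{-1/2}x)\) behind the time-space changes of variable.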
Lemma 6.1
([22, Lemma 2.5]) For any \(T>0\),
Lemma 6.2
([22, Lemma 2.8]) For any \(H\in (\frac{1}{4},\frac{1}{2})\), there exists some constant \(C_H\) such that
and
Lemma 6.3
([22, Lemma 2.10]) For any \(t>0\), there exists some constant \(C_H\) such that
where \(0<H<\frac{1}{2}\).
Lemma 6.4
([22, Lemma 2.11]) For any \(t>0\), there exists some constant \(C_H\) such that
Lemma 6.5
([22, Lemma 2.12]) For any \(t>0\), there exists some constant \(C_{T,H}\) such that
and
Lemma 6.6
([22, (4.29)], [22, (4.32)]) For some fixed \(\gamma \in (0,1)\) and \(\alpha \in (0,1)\), the following two inequalities hold:
and
Lemma 6.7
([22, Theorem 4.4]) A sequence \(\{\mathbb {P}_n\}^\infty _{n=1}\) of probability measures on \((\mathcal {C}([0,T]\times \mathbb {R}),\mathcal {B}(\mathcal {C}([0,T]\times \mathbb {R})))\) is tight if and only if the following conditions hold:
(i) \(\displaystyle \lim _{\lambda \uparrow \infty } \sup _{n\ge 1} \mathbb {P}_n\left( \{\omega \in \mathcal {C}([0,T]\times \mathbb {R}):|\omega (0,0)|>\lambda \}\right) =0\).
(ii) For any \(R>0\) and \(\gamma >0\),
$$\begin{aligned} \lim _{\theta \downarrow 0}\sup _{n\ge 1}\mathbb {P}_n\left( \left\{ \omega \in \mathcal {C}([0,T]\times \mathbb {R}):m^{T,R}(\omega ,\theta )>\gamma \right\} \right) =0, \end{aligned}$$
where
$$\begin{aligned} m^{T,R}(\omega ,\theta ):=\max \limits _{\begin{array}{c} |t-s|+|x-y|\le \theta , \\ 0\le t,s\le T;\,-R\le x, y\le R \end{array}}\left| \omega (t,x)-\omega (s,y)\right| \end{aligned}$$
is the modulus of continuity on \([0,T]\times [-R,R]\).
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Li, R., Wang, R. & Zhang, B. A Large Deviation Principle for the Stochastic Heat Equation with General Rough Noise. J Theor Probab 37, 251–306 (2024). https://doi.org/10.1007/s10959-022-01228-3
Keywords
- Stochastic heat equation
- Fractional Brownian motion
- Large deviation principle
- Weak convergence approach