Abstract
In this paper, we investigate the long-term dynamics of fractional stochastic delay reaction-diffusion equations on unbounded domains with a polynomial drift term of arbitrary order driven by nonlinear noise. We first define a mean random dynamical system in a Hilbert space for the solutions of the equation and prove the existence and uniqueness of weak pullback mean random attractors. We then establish the existence and regularity of invariant measures of the system under further conditions on the nonlinear delay and diffusion terms. We also prove the tightness of the set of all invariant measures of the equation when the time delay varies in a bounded interval. We finally show that every limit of a sequence of invariant measures of the delay equation must be an invariant measure of the limiting system as delay approaches zero. The uniform tail-estimates and the Ascoli–Arzelà theorem are used to derive the tightness of distribution laws of solutions in order to overcome the non-compactness of Sobolev embeddings on unbounded domains.
1 Introduction
This paper is concerned with the long-term dynamics of the fractional stochastic delay reaction-diffusion equation with a polynomial drift term defined on \({\mathbb {R}}^n\):
$$\begin{aligned} du(t) + \left( (-\triangle )^\alpha u(t) + \lambda u(t) + F(t, x, u(t)) \right) dt = G(t, u(t-\rho ))\, dt + \sigma (t, u(t))\, dW(t), \quad t>\tau , \ x \in {\mathbb {R}}^n, \end{aligned}$$
(1.1)
with initial data
$$\begin{aligned} u(\tau , x) = u^0 (x), \qquad u(\tau +s, x) = \varphi (s, x), \quad s\in (-\rho , 0), \ x \in {\mathbb {R}}^n, \end{aligned}$$
(1.2)
where \((-\triangle )^\alpha \) with \(\alpha \in (0,1)\) is the fractional Laplace operator, \(\lambda \) is a positive constant, \(\rho \in [0,1]\) is a time delay parameter, F, G and \(\sigma \) are nonlinear functions, and W is a two-sided cylindrical Wiener process in a Hilbert space U defined on a complete filtered probability space \(\left( \Omega ,{\mathscr {F}},\{{\mathscr {F}}_t\}_{t\in {\mathbb {R}}},{\mathbb {P}} \right) \).
We will investigate mean random attractors and invariant measures of (1.1)–(1.2) under certain conditions on the nonlinear drift term F, delay term G and diffusion term \(\sigma \). Indeed, in the non-autonomous case, we will prove the existence and uniqueness of weak mean random attractors for the dynamical system associated with (1.1)–(1.2) in \(L^2(\Omega , {\mathscr {F}}; L^2({\mathbb {R}}^n) ) \times L^2 \big ( \Omega , {\mathscr {F}}; L^2((-\rho , 0), L^2({\mathbb {R}}^n) ) \big )\) when F is a polynomial nonlinearity of arbitrary order and G and \(\sigma \) are locally Lipschitz continuous. Notice that the diffusion coefficient \(\sigma \) of the white noise in (1.1) is nonlinear, and hence the theory of pathwise random attractors does not apply to (1.1)–(1.2). That is why we study weak mean random attractors instead of pathwise random attractors in this paper. Nevertheless, we remark that the theory of pathwise random attractors is very effective for dealing with stochastic equations driven by linear white noise; see, e.g., [1,2,3,4,5,6,7,8,9] and the references therein.
On the other hand, because the weak mean random attractors theory is built up on reflexive Banach spaces [10,11,12] and \(C([-\rho , 0]; L^2({\mathbb {R}}^n) ) \) of continuous functions from \([-\rho , 0]\) to \(L^2({\mathbb {R}}^n)\) is not reflexive, we need to choose the Hilbert space \(L^2(\Omega , {\mathscr {F}}; L^2({\mathbb {R}}^n) ) \times L^2 \big ( \Omega , {\mathscr {F}}; L^2((-\rho , 0), L^2({\mathbb {R}}^n) ) \big )\) rather than the space \( L^2 \big ( \Omega , {\mathscr {F}}; C([-\rho , 0], L^2({\mathbb {R}}^n) )\big )\) as a phase space for studying mean random attractors of (1.1)–(1.2), though the space \(C([-\rho , 0]; L^2({\mathbb {R}}^n) ) \) is often chosen as a phase space for pathwise random attractors.
The main goal of this paper is to investigate the existence and the limiting behavior of invariant measures of the autonomous version of (1.1)–(1.2) in the product space \( L^2({\mathbb {R}}^n) \times L^2((-\rho , 0), L^2({\mathbb {R}}^n) )\) when the delay \(\rho \) varies over a bounded interval. The concept of invariant measure is an important tool for understanding the asymptotic behavior of stochastic systems from the viewpoint of statistical dynamics. For instance, the existence of such invariant measures has been studied in [13,14,15,16] for finite-dimensional stochastic delay systems in \({\mathbb {R}}^n\), and in [17,18,19] for infinite-dimensional stochastic delay lattice systems in \(l^2\).
When \(F\equiv 0\), and G and \(\sigma \) are globally Lipschitz continuous, the existence of invariant measures of (1.1)–(1.2) in \(C([-\rho , 0], L^2({\mathbb {R}}^n) )\) was recently investigated in [20]. In the present paper, we will deal with the case where F has a polynomial growth rate of arbitrary order. The polynomial nonlinearity of F introduces an essential difficulty for establishing the tightness of distribution laws of a family of solutions in \( L^2({\mathbb {R}}^n) \times L^2((-\rho , 0), L^2({\mathbb {R}}^n) )\). Indeed, in this case, we have to derive the uniform estimates of solutions in \(L^r(\Omega ,L^r({\mathbb {R}}^n))\) for sufficiently large r (see Lemma 4.6). We will employ the Ito formula for the norm of solutions in the space \(L^r({\mathbb {R}}^n)\) as given in [21] to derive such uniform estimates. Furthermore, we need to establish the regularity of solutions in \(L^2(\Omega , H^\alpha ({\mathbb {R}}^n))\) for initial data in \(L^2(\Omega , L^2 ( {\mathbb {R}}^n)) \times L^2 \big ( \Omega , L^2( (-\rho , 0), L^2 ( {\mathbb {R}}^n)) \big )\) (see Lemma 4.7) as well as the regularity in \(L^{r_0}(\Omega , H^\alpha ( {\mathbb {R}}^n))\) for initial data in \(L^{r_0}(\Omega , H^\alpha ( {\mathbb {R}}^n)) \times L^{r_0} \big ( \Omega , L^{r_0}( (-\rho , 0), H^\alpha ( {\mathbb {R}}^n)) \big )\) for some appropriate \(r_0 >1\) depending on the nonlinear terms in (1.1) (see Lemma 4.9). All these uniform estimates will be used to prove the Hölder continuity of solutions in time in the space \(L^{r_0} (\Omega , L^2( {\mathbb {R}}^n))\) (see Lemma 4.10), which will be further used to obtain the pathwise equicontinuity of solutions in time based on the Kolmogorov theorem.
Note that the stochastic equation (1.1) is defined on the unbounded domain \({\mathbb {R}}^n\), and hence the standard Sobolev embeddings are non-compact. This introduces another major difficulty for proving the tightness of distribution laws of a set of solutions in \( L^2({\mathbb {R}}^n) \times L^2((-\rho , 0), L^2({\mathbb {R}}^n) )\). We will overcome this difficulty by the idea of uniform tail-estimates of solutions outside a sufficiently large ball in \({\mathbb {R}}^n\). More precisely, we will first show the uniform smallness of solutions for large space variables (see Lemma 4.4), and then apply the compactness of Sobolev embeddings in bounded domains as well as the pathwise equicontinuity of solutions to establish the tightness of distribution laws of solutions (see Lemma 4.12). The tightness of distributions of solutions immediately yields the existence of invariant measures of (1.1)–(1.2) in \( L^2({\mathbb {R}}^n) \times L^2((-\rho , 0), L^2({\mathbb {R}}^n) )\) by the Krylov-Bogolyubov method (see Theorem 4.1). For existence of invariant measures of stochastic PDEs without delay in unbounded domains, we refer the reader to [22,23,24,25,26,27,28,29,30] for more details.
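The Krylov-Bogolyubov method constructs an invariant measure as a weak limit of time-averaged distribution laws, and tightness is exactly what guarantees that such limits exist. The following scalar sketch is our own illustration, not from the paper: the drift \(-u-u^3\) mimics the dissipative polynomial structure of (1.1), and the long-run occupation statistics of the paths become independent of the initial data, as an invariant measure would predict.

```python
import numpy as np

# Illustrative sketch (ours): Euler-Maruyama paths of du = (-u - u^3) dt + 0.5 dW.
# Time-averaged occupation statistics approximate the invariant measure,
# so two paths started far apart end up with matching long-run statistics.
rng = np.random.default_rng(1)
dt, n = 1e-3, 500_000

def path(u0):
    """Simulate one Euler-Maruyama path of length n starting from u0."""
    u = np.empty(n)
    u[0] = u0
    noise = rng.normal(0.0, np.sqrt(dt), size=n - 1)
    for i in range(n - 1):
        u[i + 1] = u[i] + dt * (-u[i] - u[i] ** 3) + 0.5 * noise[i]
    return u

burn = n // 3                          # discard the transient
u1, u2 = path(-2.0)[burn:], path(3.0)[burn:]
# long-run statistics are (nearly) independent of the initial condition
assert abs(u1.mean() - u2.mean()) < 0.15
assert abs(u1.var() - u2.var()) < 0.1
```

The delay and the unbounded spatial domain in (1.1) make the actual tightness argument far more delicate, which is precisely what the tail estimates and equicontinuity lemmas cited above address.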
Based on the existence of invariant measures, we will further investigate the limits of a family of invariant measures of (1.1)–(1.2) in \( L^2({\mathbb {R}}^n) \times L^2((-\rho , 0), L^2({\mathbb {R}}^n) )\) as the delay \(\rho \rightarrow \rho _0\in [0,1)\). To that end, we need to establish the regularity of invariant measures of (1.1)–(1.2) in \( H^\alpha ({\mathbb {R}}^n) \times L^\infty ((-\rho , 0), H^\alpha ({\mathbb {R}}^n) )\) (see Theorem 5.1), which means that every invariant measure of (1.1)–(1.2) in \( L^2({\mathbb {R}}^n) \times L^2((-\rho , 0), L^2({\mathbb {R}}^n) )\) is supported by \( H^\alpha ({\mathbb {R}}^n) \times L^\infty ((-\rho , 0), H^\alpha ({\mathbb {R}}^n) )\). We then prove the tightness of the collection of all invariant measures of (1.1)–(1.2) as \(\rho \) varies on the interval [0, 1] by using the regularity of invariant measures as well as the uniform estimates of solutions with respect to \(\rho \in [0,1]\) (see Theorem 6.1). We finally prove that every limit of a family of invariant measures of (1.1)–(1.2) as \(\rho \rightarrow \rho _0 \in [0,1)\) must be an invariant measure of the corresponding limiting system (see Theorem 7.2). For the limits of invariant measures of stochastic PDEs without delay as the noise intensity approaches zero, the reader is referred to [31] and [32] for bounded and unbounded domains, respectively.
This paper is organized as follows. In Sect. 2, we prove the existence and uniqueness of solutions and define a mean random dynamical system. Section 3 is devoted to the existence and uniqueness of weak mean random attractors in \(L^2(\Omega , {\mathscr {F}}; L^2({\mathbb {R}}^n)) \times L^2 \big ( \Omega , {\mathscr {F}}; L^2((-\rho , 0), L^2({\mathbb {R}}^n)) \big )\). In Sect. 4, we derive all necessary uniform estimates of solutions and prove the existence of invariant measures in \( L^2({\mathbb {R}}^n) \times L^2((-\rho , 0), L^2({\mathbb {R}}^n)) \). Sections 5 and 6 are devoted to the regularity of invariant measures and the tightness of the collection of all invariant measures of (1.1)–(1.2) when \(\rho \) varies on [0, 1], respectively. In the last section, we show every limit of a family of invariant measures of (1.1)–(1.2) as \(\rho \rightarrow \rho _0 \in [0,1)\) must be an invariant measure of the limiting system.
Throughout this paper, we write \(H_\rho = L^2({\mathbb {R}}^n) \times L^{2} ( (-\rho , 0), L^2({\mathbb {R}}^n) )\) if \(\rho \in (0,1]\), and \(H_\rho = L^2({\mathbb {R}}^n)\) if \(\rho =0\). For convenience, we also denote \( L^2({\mathbb {R}}^n)\) by H with inner product \((\cdot ,\cdot )\) and norm \(\Vert \cdot \Vert \). If u(t), \(t> \tau -\rho \), is an H-valued stochastic process, then for every \(t\geqslant \tau \), we define \(u_t: (-\rho , 0) \rightarrow L^2({\mathbb {R}}^n)\) by \(u_t (s) =u(t+s)\) for all \(s\in (-\rho , 0)\). Given a Banach space Z, we use \(L^2(\Omega , {\mathscr {F}}; Z)\) for the space of all strongly \({\mathscr {F}}\)-measurable square-integrable Z-valued random variables. The notation \(L^2(\Omega , {\mathscr {F}}_t; Z)\) with \(t\in {\mathbb {R}}\) will be understood similarly. We also use \({\mathcal {L}}_2(U, H)\) for the space of Hilbert-Schmidt operators from a separable Hilbert space U to H with norm \(\Vert \cdot \Vert _{ {\mathcal {L}}_2(U, H) }\).
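The segment \(u_t\) above is the natural state of a delay system: it records the recent history of the solution over a window of length \(\rho \). As a toy numerical illustration (our own, with a made-up scalar trajectory in place of an H-valued process), the segment operator can be realized as follows:

```python
import numpy as np

# Toy illustration (ours) of the segment operator u_t for a delay equation:
# given a trajectory u sampled on a time grid, u_t(s) = u(t + s) for
# s in (-rho, 0).  Here u is a scalar stand-in, not an L^2-valued process.
rho = 0.5                                  # delay
dt = 0.01
t_grid = np.arange(-rho, 2.0 + dt, dt)     # trajectory for t >= tau - rho (tau = 0)
u = np.sin(t_grid)                         # stand-in trajectory

def segment(t, s_grid):
    """Return the segment u_t evaluated at offsets s in (-rho, 0)."""
    return np.interp(t + s_grid, t_grid, u)

s_grid = np.linspace(-rho, 0, 51, endpoint=False)
u_t = segment(1.0, s_grid)                 # the history of u over (0.5, 1.0)
assert np.allclose(u_t, np.sin(1.0 + s_grid), atol=1e-4)
```

For (1.1)–(1.2) the same idea applies pointwise in \(x\), which is why the phase space pairs the current state \(u(t)\) with the history segment \(u_t\).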
2 Mean random dynamical systems
In this section, we prove the existence and uniqueness of solutions of (1.1)–(1.2) and define a mean random dynamical system based on the solution operators. For that purpose, we first discuss the assumptions on the nonlinear functions in (1.1).
(F1). \(F: {\mathbb {R}} \times {\mathbb {R}}^n \times {\mathbb {R}} \rightarrow {\mathbb {R}}\) is continuous and \(F(\cdot , \cdot , 0)\in L^2_{loc}( {\mathbb {R}}, L^2({\mathbb {R}}^n) )\) and
where \(\lambda _1>0\) and \(p>2\) are constants, \(\psi _1 \in L^1_{loc}( {\mathbb {R}}, L^1({\mathbb {R}}^n) ) \), \(\psi _2 \in L^\infty _{loc}( {\mathbb {R}}, L^\infty ({\mathbb {R}}^n) ) \), \(\psi _3 \in L^q_{loc}( {\mathbb {R}}, L^q({\mathbb {R}}^n) )\) and \(\psi _4 \in L^\infty _{loc}( {\mathbb {R}}, L^\infty ({\mathbb {R}}^n) ) \cap L^{ \frac{2q}{2-q} }_{loc}( {\mathbb {R}}, L^{ \frac{2q}{2-q} }({\mathbb {R}}^n) )\) with \(\frac{1}{p} + \frac{1}{q} =1\).
(F2). F(t, x, u) is locally Lipschitz continuous in u uniformly with respect to \(t\in {\mathbb {R}}\) and \(x\in {\mathbb {R}}^n\); that is, for any bounded interval I, there exists a constant \(C_I^F>0\) such that
(G1). \(G: {\mathbb {R}} \times H \rightarrow H\) is continuous such that
where \(a>0\) is a constant and \(h \in L^2_{loc}( {\mathbb {R}}, H )\).
(G2). G(t, u) is locally Lipschitz continuous in \(u\in H\) uniformly with respect to \(t\in {\mathbb {R}}\); that is, for any \(r>0\), there exists a constant \(C_r^G>0\) such that
For the diffusion coefficients of noise, we assume that \(\sigma : {\mathbb {R}} \times H \rightarrow {\mathcal {L}}_2(U, H)\) is continuous and
(\(\Sigma \)1). \(\sigma (t,u)\) is locally Lipschitz continuous in \(u\in H\) uniformly with respect to \( t \in {\mathbb {R}}\); that is, for every \(r > 0\), there exists a constant \(C_r^\sigma >0\) such that
(\(\Sigma \)2). \(\sigma (t, u)\) grows linearly in \(u\in H\) uniformly for \(t \in {\mathbb {R}}\); that is, there exists a constant \(L>0\) such that for all \((t, u) \in {\mathbb {R}} \times H\),
Recall that for \(\alpha \in (0,1)\), the Hilbert space \(H^\alpha ({\mathbb {R}}^n)\) is defined by
$$\begin{aligned} H^\alpha ({\mathbb {R}}^n) = \left\{ u \in L^2({\mathbb {R}}^n): \int _{{\mathbb {R}}^n} \int _{{\mathbb {R}}^n} \frac{ | u(x) - u(y) |^2 }{ |x-y|^{n+2\alpha } }\, dx\, dy < \infty \right\} , \end{aligned}$$
with inner product
$$\begin{aligned} ( u, v )_{H^\alpha ({\mathbb {R}}^n)} = \int _{{\mathbb {R}}^n} u(x) v(x) \, dx + \int _{{\mathbb {R}}^n} \int _{{\mathbb {R}}^n} \frac{ ( u(x) - u(y) ) ( v(x) - v(y) ) }{ |x-y|^{n+2\alpha } } \, dx \, dy, \quad u, v \in H^\alpha ({\mathbb {R}}^n), \end{aligned}$$
and norm \( \Vert u \Vert _{H^\alpha ({\mathbb {R}}^n)} = (u,u)^{\frac{1}{2}} _{H^\alpha ({\mathbb {R}}^n)}\) for \(u \in {H^\alpha ({\mathbb {R}}^n)}\). Note that for all \(u \in {H^\alpha ({\mathbb {R}}^n)}\),
$$\begin{aligned} \Vert (-\triangle )^{\frac{\alpha }{2}} u \Vert ^2 = \frac{ C(n,\alpha ) }{2} \int _{{\mathbb {R}}^n} \int _{{\mathbb {R}}^n} \frac{ | u(x) - u(y) |^2 }{ |x-y|^{n+2\alpha } }\, dx\, dy, \end{aligned}$$
where \(C(n,\alpha ) = \frac{ \alpha 4^\alpha \Gamma ( \frac{n+2\alpha }{2} ) }{ \pi ^{\frac{n}{2}} \Gamma (1-\alpha ) }\) and \((-\triangle )^\alpha \) is the fractional Laplace operator given by (see, e.g., [33]):
$$\begin{aligned} (-\triangle )^{\alpha } u (x) = C(n,\alpha ) \, \mathrm {P.V.} \int _{{\mathbb {R}}^n} \frac{ u(x) - u(y) }{ |x-y|^{n+2\alpha } } \, dy, \quad x \in {\mathbb {R}}^n. \end{aligned}$$
For convenience, we write \(V=H^\alpha ({\mathbb {R}}^n)\) with inner product \((\cdot , \cdot )_V\) and norm \(\Vert \cdot \Vert _V\).
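Numerically, the fractional Laplacian is most conveniently applied through its Fourier symbol \(|\xi |^{2\alpha }\). The following one-dimensional sketch is our own construction, not from the paper: a large periodic box is used as a finite computational surrogate for \({\mathbb {R}}\), and the operator is verified on an exact eigenfunction.

```python
import numpy as np

# Sketch (ours): (-Delta)^alpha acts in Fourier space as multiplication by
# |xi|^(2*alpha).  We approximate it on a large periodic interval as a
# stand-in for R^1; this is a numerical surrogate, not the paper's setting.
alpha = 0.5
L, N = 40.0, 1024                             # box length and grid size
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # Fourier frequencies

def frac_laplacian(u, alpha):
    """Spectral approximation of (-Delta)^alpha u on the periodic grid."""
    return np.real(np.fft.ifft(np.abs(xi) ** (2 * alpha) * np.fft.fft(u)))

# Check against an exact eigenfunction:
# (-Delta)^alpha cos(kx) = k^(2*alpha) cos(kx) for a grid frequency k.
k = 2 * np.pi * 8 / L
u = np.cos(k * x)
assert np.allclose(frac_laplacian(u, alpha), k ** (2 * alpha) * u, atol=1e-8)
```

On \({\mathbb {R}}^n\) proper the same symbol acts via the Fourier transform; the periodic box is adequate only for rapidly decaying functions, which is one numerical echo of why unbounded domains require extra care analytically.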
A solution of problem (1.1)–(1.2) will be understood in the following sense.
Definition 2.1
Suppose \(u^0\in L^2(\Omega , {\mathscr {F}}_\tau ; H)\) and \(\varphi \in L^2 \big ( \Omega , {{\mathscr {F}}_\tau }; L^2 ( (-\rho , 0), H) \big )\). Then an H-valued stochastic process u(t), \(t\ge \tau -\rho \), is called a weak solution of problem (1.1)–(1.2) in the sense of PDEs if
-
(i)
\(u\in L^2 \big ( \Omega ,{\mathscr {F}}_\tau ; L^2 ( (\tau -\rho ,\tau ), H ) \big )\) and \(u_\tau = \varphi \).
-
(ii)
u is pathwise continuous on \([\tau , \infty )\), and \({\mathscr {F}}_t\)-adapted for all \(t\geqslant \tau \), \(u(\tau ) =u^0\), and \( u\in L^2 \big ( \Omega , C ( [\tau , \tau +T], H ) \big ) \cap L^2 \big ( \Omega , L^2 ( \tau , \tau +T; V) \big ) \cap L^p \big ( \Omega , L^p ( \tau , \tau +T; L^p({\mathbb {R}}^n) ) \big ) \) for all \(T>0\).
-
(iii)
For all \(t\geqslant \tau \) and \(\xi \in V \cap L^p({\mathbb {R}}^n)\),
$$\begin{aligned}&(u(t),\xi ) + \int _\tau ^t \big ( (-\triangle )^{\frac{\alpha }{2}} u(s),\ (-\triangle )^{\frac{\alpha }{2}} \xi \big ) ds + \lambda \int _\tau ^t \big ( u (s),\ \xi \big ) ds \\&\quad + \int _ \tau ^t \int _{{\mathbb {R}} ^n} F( s, x, u (s) ) \xi dx ds\\&\quad =\big ( u^0,\ \xi \big ) + \int _\tau ^t \big ( G( s, u(s-\rho ) ),\ \xi \big ) ds + \int _\tau ^t \big ( \xi ,\ \sigma (s, u (s)) dW(s) \big ), \ \ \ {\mathbb {P}} \text {-almost surely}. \end{aligned}$$
Next, we show the existence and uniqueness of solutions of problem (1.1)–(1.2).
Theorem 2.2
Suppose (F1)-(F2), (G1)-(G2) and (\(\Sigma \)1)-(\(\Sigma \)2) hold. Then for any \(u^0\in L^2(\Omega , {\mathscr {F}}_\tau ; H)\) and \(\varphi \in L^2 \big ( \Omega , {{\mathscr {F}}_\tau }; L^2 ( (-\rho , 0),H ) \big )\), problem (1.1)–(1.2) has a unique solution u in the sense of Definition 2.1. Moreover, for any \(T>0\),
where M is a positive constant independent of \(u^0\), \(\varphi \), \(\rho \), \(\tau \) and T.
Proof
We first show the existence of solutions on \([\tau , \tau +\rho ]\). By (G1) we have for any \(\varphi \in L^2 \big ( \Omega , {{\mathscr {F}}_\tau }; L^2( (-\rho , 0), H) \big )\),
In terms of (2.10), (1.1)–(1.2) on \([\tau , \tau +\rho ]\) is equivalent to the following system without delay:
Then by Theorem 6.3 in [25], under conditions (F1)-(F2), (G1)-(G2) and (\(\Sigma \) 1)-(\(\Sigma \)2), problem (2.11) has a unique solution u defined on \([\tau , \tau +\rho ]\) such that \( u \in L^2 \big ( \Omega , C ( [\tau ,\tau +\rho ], H ) \big ) \cap L^2 \big ( \Omega , L^2 ( \tau ,\tau +\rho ; V ) \big ) \cap L^p \big ( \Omega , L^p ( \tau ,\tau +\rho ; L^p({\mathbb {R}}^n) ) \big ) \). Repeating this argument, one can extend the solution u to the interval \([\tau , \infty )\) such that \( u \in L^2 \big ( \Omega , C ( [\tau ,\tau +T], H ) \big ) \cap L^2 \big ( \Omega , L^2 ( \tau ,\tau +T; V ) \big ) \cap L^p \big ( \Omega , L^p ( \tau ,\tau +T; L^p({\mathbb {R}}^n) ) \big ) \) for any \(T>0\).
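The construction above (solve on \([\tau , \tau +\rho ]\), where the delayed argument is already known, then repeat on the next interval) is the classical method of steps. The following Euler–Maruyama sketch is our own scalar caricature, not the paper's equation: F, G and \(\sigma \) below are made-up stand-ins in the spirit of (F1), (G1) and (\(\Sigma \)2), and the history buffer plays the role of \(\varphi \).

```python
import numpy as np

# Method-of-steps sketch (ours, a scalar caricature of (1.1)): on each
# interval of length rho the delayed argument u(t - rho) is already known,
# so the delay SDE reduces to a non-delay SDE advanced by Euler-Maruyama.
rng = np.random.default_rng(0)
lam, rho, dt = 1.0, 0.5, 1e-3
n_delay = int(rho / dt)

F = lambda u: u ** 3            # illustrative polynomial drift (p = 4)
G = lambda v: 0.5 * np.tanh(v)  # illustrative Lipschitz delay term
sigma = lambda u: 0.1 * u       # illustrative linearly growing noise coefficient

n_steps = 4 * n_delay
u = np.empty(n_delay + n_steps + 1)
u[: n_delay + 1] = 1.0          # history phi on [-rho, 0] and u(0)
for n in range(n_delay, n_delay + n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    u[n + 1] = u[n] + dt * (-lam * u[n] - F(u[n]) + G(u[n - n_delay])) \
               + sigma(u[n]) * dW

assert np.all(np.isfinite(u))
```

With an explicit scheme the time step must resolve the stiff polynomial drift; tamed or implicit schemes are the standard remedy for superlinear drifts, but that numerical issue is separate from the well-posedness argument above.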
Next, we derive the uniform estimates of solutions. Applying Ito’s formula to (1.1), we obtain
For the fourth term on the left-hand side of (2.12), by (2.1), we have
For the second term on the right-hand side of (2.12), by (G1), we obtain
By (2.15), we obtain
For the last two terms on the right-hand side of (2.16), by (\(\Sigma \)2) and Burkholder–Davis–Gundy's inequality, we have
where c is the positive constant in Burkholder–Davis–Gundy's inequality.
Then by (2.16)–(2.17), we obtain
From (2.18) and Gronwall’s inequality, it follows that for all \(t\in [\tau , \tau +T]\) with \(T>0\),
which together with (2.15) concludes the proof.\(\square \)
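For the reader's convenience, the version of Gronwall's inequality invoked after (2.18) is the standard integral form, which we paraphrase here (it is not quoted verbatim from the paper):

```latex
\textbf{Gronwall's inequality (integral form).}
Let $y \geqslant 0$ be integrable on $[\tau, \tau+T]$, let $g$ be
nondecreasing, and let $C \geqslant 0$ be a constant. If
\[
  y(t) \;\leqslant\; g(t) + C \int_\tau^t y(s)\, ds
  \qquad \text{for all } t \in [\tau, \tau+T],
\]
then
\[
  y(t) \;\leqslant\; g(t)\, e^{\, C (t-\tau)}
  \qquad \text{for all } t \in [\tau, \tau+T].
\]
```

Applied with \(y(t)\) the expected energy of the solution, this converts the integral inequality (2.18) into the exponential bound used to close the proof.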
Now, for all \(\tau \in {\mathbb {R}}\) and \(t\in {\mathbb {R}}^+\), let \(\Phi (t,\tau )\) be a mapping from \(L^2(\Omega , {\mathscr {F}}_\tau ; H) \times L^2 \left( \Omega , {{\mathscr {F}}_\tau }; L^2((-\rho , 0), H) \right) \) to \(L^2(\Omega , {\mathscr {F}}_{t+\tau }; H) \times L^2 \left( \Omega , {{\mathscr {F}}_{t+\tau }}; L^2( (-\rho , 0), H ) \right) \) given by
for all \((u^0,\varphi ) \in L^2(\Omega , {\mathscr {F}}_\tau ; H) \times L^2 \left( \Omega , {{\mathscr {F}}_\tau }; L^2((-\rho , 0), H) \right) \), where \(u(t; \tau , u^0, \varphi )\) is the solution of (1.1) with initial data \(u^0\) and \(\varphi \), and \(u_{t + \tau }(\theta ; \tau , u^0, \varphi ) = u(t + \tau + \theta ; \tau , u^0, \varphi )\) for \(\theta \in (-\rho , 0)\). Then, we find that \(\Phi \) is a mean random dynamical system on
over the filtration \(\{{\mathscr {F}}_t\}_{t\in {\mathbb {R}}}\).
In what follows, we investigate the existence and uniqueness of weak mean random attractors of (1.1).
3 Weak pullback mean random attractors
In this section, we study weak pullback mean random attractors of (1.1). For simplicity, for every \(\tau \in {\mathbb {R}}\), we set \( {\mathcal {H}}_\tau = L^2(\Omega , {\mathscr {F}}_\tau ; H) \times L^2 \big ( \Omega , {{\mathscr {F}}_\tau }; L^2((-\rho , 0), H) \big ). \) Then \({\mathcal {H}}_\tau \) is a Hilbert space with inner product \( \left( (u^0,\varphi ), (v^0,\psi ) \right) _{{\mathcal {H}}_\tau } = {\mathbb {E}} \left( u^0, v^0 \right) + {\mathbb {E}} \Big ( \int _{-\rho }^{0} \left( \varphi (s), \psi (s) \right) ds \Big ) \) and norm \( \Vert (u^0,\varphi )\Vert _{{\mathcal {H}}_\tau } = \Big ( {\mathbb {E}} ( \Vert u^0 \Vert ^ 2) + \int _{-\rho }^{0} {\mathbb {E}} \left( \Vert \varphi (s)\Vert ^2 \right) ds \Big )^{\frac{1}{2}} \) for \((u^0,\varphi )\) and \((v^0,\psi ) \in {\mathcal {H}}_\tau \).
Assume a in (G1) and L in (\(\Sigma \)2) are sufficiently small in the following sense:
By (3.1), there exists a positive constant \(\mu \) such that
Let \(B=\{ B(\tau ) \subseteq {\mathcal {H}}_\tau : \tau \in {\mathbb {R}} \}\) be a family of nonempty bounded sets such that
where \(\Vert B(\tau ) \Vert _{{\mathcal {H}}_\tau } = \sup _{(u^0,\varphi )\in B(\tau ) } \Vert (u^0, \varphi ) \Vert _{{\mathcal {H}}_\tau }.\) Denote by
We will show (1.1) has a unique weak \({\mathcal {D}}\)-pullback mean random attractor for which we further assume that for every \(\tau \in {\mathbb {R}}\),
where \(\mu \) is the positive constant as in (3.2).
Lemma 3.1
Suppose (F1)–(F2), (G1)–(G2), (\(\Sigma \)1)–(\(\Sigma \)2), (3.1) and (3.4) hold. Then for any \(\tau \in {\mathbb {R}}\) and \(B=\{B(t)\}_{t\in {\mathbb {R}}} \in {\mathcal {D}}\), there exists \(T=T(\tau , B) > \rho \) such that for all \(t\geqslant T\),
where \(\mu \) is the same constant as in (3.2) and \((u^0, \varphi ) \in B(\tau -t)\).
Proof
For any \(t>0\) and \(r\in (\tau -t, \tau ]\), by (2.12) we get
We now estimate the right-hand side of (3.6). For the third term on the right-hand side of (3.6), by (F1), we obtain
For the fourth term on the right-hand side of (3.6), by (G1) we have
For the fifth term on the right-hand side of (3.6), by (\(\Sigma \)2) we get
From (3.6)–(3.9), it follows that for all \(r\in (\tau -t, \tau ]\),
and for \(t\geqslant \rho \),
From (3.11) and (3.12), we obtain that for \(t \geqslant \rho \),
For the first two terms on the right-hand side of (3.13), by \((u^0,\varphi )\in B(\tau -t)\) we obtain
and hence there exists \(T=T(\tau , B) \geqslant \rho \) such that for all \(t\geqslant T\),
as desired. \(\square \)
In order to prove the existence and uniqueness of weak pullback mean random attractors, we need the following result (see Theorem 2.13 in [11]), which is given here for convenience.
Theorem 3.1
([11]) Suppose X is a reflexive Banach space and \(p \in (1, \infty )\). Let \({\mathcal {D}}_0\) be an inclusion-closed collection of some families of nonempty bounded subsets of \(L^p(\Omega , {\mathscr {F}}; X)\) and \(\Phi \) be a mean random dynamical system on \(L^p(\Omega , {\mathscr {F}}; X)\) over \((\Omega , {\mathscr {F}}, \{{\mathscr {F}}_t\}_{t\in {\mathbb {R}}}, {\mathbb {P}})\). If \(\Phi \) has a weakly compact \({\mathcal {D}}_0\)-pullback absorbing set \(K \in {\mathcal {D}}_0\) on \(L^p(\Omega , {\mathscr {F}}; X)\) over \((\Omega , {\mathscr {F}}, \{{\mathscr {F}}_t\}_{t\in {\mathbb {R}}}, {\mathbb {P}})\), then \(\Phi \) has a unique weak \({\mathcal {D}}_0\)-pullback mean random attractor \({\mathcal {A}} \in {\mathcal {D}}_0\) on \(L^p(\Omega , {\mathscr {F}}; X)\) over \((\Omega , {\mathscr {F}}, \{{\mathscr {F}}_t\}_{t\in {\mathbb {R}}}, {\mathbb {P}})\), which is given by, for each \(\tau \in {\mathbb {R}}\),
where the closure is taken with respect to the weak topology of \(L^p(\Omega , {\mathscr {F}}_\tau ; X)\).
We are now in a position to present the main result of this section.
Theorem 3.2
Suppose (F1)–(F2), (G1)–(G2), (\(\Sigma \)1)–(\(\Sigma \)2), (3.1) and (3.4) hold. Then the mean random dynamical system \(\Phi \) associated with (1.1) has a unique weak \({\mathcal {D}}\)-pullback mean random attractor \({\mathcal {A}}=\{{\mathcal {A}}(\tau ): \tau \in {\mathbb {R}} \} \in {\mathcal {D}}\) in \(L^2(\Omega , {\mathscr {F}}; H) \times L^2 \big ( \Omega , {\mathscr {F}}; L^2((-\rho , 0), H) \big )\), that is:
(i) \({\mathcal {A}}(\tau )\) is weakly compact in \(L^2(\Omega , {\mathscr {F}}_{\tau }; H) \times L^2 \big ( \Omega , {\mathscr {F}}_{\tau }; L^2((-\rho , 0), H) \big )\) for all \(\tau \in {\mathbb {R}}\).
(ii) \({\mathcal {A}}\) is a \({\mathcal {D}}\)-pullback weakly attracting set of \(\Phi \).
(iii) \({\mathcal {A}}\) is the minimal element of \({\mathcal {D}}\) with properties (i) and (ii).
Proof
For each \(\tau \in {\mathbb {R}}\), define
where
Then \(K_0(\tau )\) is a bounded closed convex subset of \({\mathcal {H}}_\tau \) and hence is weakly compact in \({\mathcal {H}}_\tau \). By (3.4) we have
which means that \(K=\{ K_0(\tau ): \tau \in {\mathbb {R}}\} \in {\mathcal {D}}.\)
By Lemma 3.1, we see that for every \(\tau \in {\mathbb {R}}\) and \(B= \{ B(t) \}_{t\in {\mathbb {R}}} \in {\mathcal {D}}\), there exists \(T = T (\tau , B) \ge \rho \) such that for all \(t \geqslant T\),
Consequently, \(K_0\) is a weakly compact \({\mathcal {D}}\)-pullback absorbing set of \(\Phi \). Then by Theorem 3.1, \(\Phi \) has a unique weak \({\mathcal {D}}\)-pullback mean random attractor \({\mathcal {A}} \in {\mathcal {D}}\) in \(L^2(\Omega , {\mathscr {F}}; H) \times L^2 \big ( \Omega , {\mathscr {F}}; L^2((-\rho , 0), H) \big )\). \(\square \)
4 Existence of invariant measures
In this section, we investigate invariant measures of the autonomous version of (1.1) when the nonlinear functions F, G and \(\sigma \) are time-independent. More precisely, consider the following stochastic delay equation:
where \(\sigma _{1,k} \in L^2({\mathbb {R}}^n)\), \(\kappa \in L^2({\mathbb {R}}^n)\cap L^\infty ({\mathbb {R}}^n)\), and \(\{W_k\}_{k=1}^\infty \) is a sequence of real-valued mutually independent Wiener processes on the complete filtered probability space \(\left( \Omega ,{\mathscr {F}},\{{\mathscr {F}}_t\}_{t\in {\mathbb {R}}},{\mathbb {P}} \right) \).
The autonomous version of assumption (F1) is given below:
(F\('\) ). \(F: {\mathbb {R}}^n \times {\mathbb {R}} \rightarrow {\mathbb {R}}\) is continuous and \(F(\cdot , 0)\in L^2({\mathbb {R}}^n)\), and for all \(x \in {\mathbb {R}}^n\) and \(u \in {\mathbb {R}}\),
where \(\lambda _1>0\) and \(p>2\) are constants, \(\psi _1 \in L^1({\mathbb {R}}^n)\), \(\psi _2 \in L^\infty ({\mathbb {R}}^n)\), \(\psi _3 \in L^q({\mathbb {R}}^n)\cap L^{ 2 }({\mathbb {R}}^n)\), and \(\psi _4 \in L^\infty ({\mathbb {R}}^n) \cap L^{ \frac{2q}{2-q} }({\mathbb {R}}^n)\) with \(\frac{1}{p} + \frac{1}{q} =1\).
In addition, F(x, u) is locally Lipschitz continuous in \(u\in {\mathbb {R}}\) uniformly with respect to \(x\in {\mathbb {R}}^n\); that is, for any bounded interval I, there exists a constant \(C_I^F>0\) such that
(G\('\) ). \(G: {\mathbb {R}}^n \times {\mathbb {R}} \rightarrow {\mathbb {R}}\) is continuous such that
where \(a>0\) is a constant and \(h\in L^2( {\mathbb {R}}^n )\).
In addition, G(x, u) is Lipschitz continuous in \(u \in {\mathbb {R}}\) uniformly with respect to \(x\in {\mathbb {R}}^n\); that is, there exists a constant \(C^G>0\) such that
For the diffusion coefficients of noise, we now assume:
(\(\Sigma '\) ).
In addition, for each \(k\in {\mathbb {N}}\), we assume that \(\sigma _{2,k}:{\mathbb {R}} \rightarrow {\mathbb {R}}\) is globally Lipschitz continuous; that is, for each \(k\in {\mathbb {N}}\), there exists a positive number \(\alpha _{k}\) such that for all \(s_1, s_2 \in {\mathbb {R}} \),
We further assume that for each \(k\in {\mathbb {N}}\), there exist positive numbers \(\beta _k\) and \(\gamma _k\) such that
where \(\sum \limits _{k=1}^\infty ( \alpha _{k}^2 + \beta _k^2 )< + \infty \).
In order to prove the existence of invariant measures of (4.1), we need to assume \(\psi _4\), a, \(\alpha _k\) and \(\gamma _k\) in (F\('\) ), (G\('\) ) and (\(\Sigma '\) ) are sufficiently small in the sense that there exists a constant \(\theta \geqslant 1\) such that
Note that (4.11) implies the following conditions:
and
These inequalities are useful for deriving uniform estimates of solutions which are needed for proving the tightness of distribution laws of a family of solutions on the space \(H \times L^2 ( (-\rho , 0), H )\).
4.1 Uniform estimates of solutions
We now derive uniform estimates of solutions for proving existence of invariant measures. We start with the estimates in \( L^2(\Omega , {\mathscr {F}}_t; H)\).
Lemma 4.1
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.12) hold. Then for any \(u^0\in L^2(\Omega , {\mathscr {F}}_0; H)\) and \(\varphi \in L^2 \big ( \Omega , {{\mathscr {F}}_0}; L^2 ( (-\rho , 0), H ) \big )\), the solution u of (4.1) satisfies that for all \(t\geqslant 0\),
and for \(t\geqslant 1+ \rho \),
where \(\nu \) and \(M_1\) are positive constants independent of \(\rho \), \(u^0\) and \(\varphi \).
Proof
By (2.12), we have for all \(t\geqslant 0\),
By (4.16) we have for all \(t>0\),
We now estimate the terms on the right-hand side of (4.17). For the first term on the right-hand side of (4.17), by (4.2) we have
For the second term on the right-hand side of (4.17), by (4.6) we have
For the third term on the right-hand side of (4.17), by (4.10) we have
It follows from (4.17)–(4.20) that for \(t\ge 0,\)
By (4.12) we infer that there exists a positive constant \(\nu \) such that
Then by (4.21), we obtain
By (4.23), we get that for \(t\geqslant 0\),
By (4.22) and (4.24), we have, for all \( t \geqslant 0\),
which yields (4.14).
Integrating (4.21) on \([t-1, t]\) for \(t\geqslant 1 + \rho \), we have
Then from (4.25) and (4.26), we get (4.15) immediately. \(\square \)
Remark 4.1
Let \((u^0, \varphi ) \in L^2(\Omega , {\mathscr {F}}_0; H) \times L^2 \big ( \Omega ,{{\mathscr {F}}_0}; L^2 ( (-\rho , 0), H ) \big )\) satisfy
for some \(R>0\). Then by Lemma 4.1, we find that the solution u of (4.1) satisfies, for \(t\geqslant 0\),
and for \(t\geqslant 1 \),
where \({\overline{M}}_1>0\) is a constant depending only on R but not on \((u^0, \varphi )\) or \(\rho \in [0,1]\). Furthermore, there exists \(T \geqslant 2\) depending only on R (but not on \((u^0, \varphi )\) or \(\rho \in [0,1]\)) such that for all \(t\geqslant T\),
and
where \({\widetilde{M}}_1>0\) is a constant independent of R, \((u^0, \varphi )\) and \(\rho \in [0,1]\).
Lemma 4.2
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.12) hold. Then for any \(u^0\in L^2(\Omega , {\mathscr {F}}_0; H)\) and \(\varphi \in L^2 \big ( \Omega , {{\mathscr {F}}_0}; L^2 ( (-\rho , 0), H ) \big )\), the solution u of (4.1) satisfies that for all \(t\geqslant 1+\rho \),
where \(\nu \) and \(M_2\) are positive constants independent of \(u^0\), \(\varphi \) and \(\rho \in [0,1]\).
Proof
The proof is based on Lemma 4.1 and is similar to that of Lemma 3.2 in [20]. So the details are omitted here. \(\square \)
In order to prove the tightness of probability distributions of solutions to (4.1), we need to derive the uniform estimates on the tails of solutions with initial data in \(L^2(\Omega , {\mathscr {F}}_0; H) \times L^2 \big ( \Omega ,{{\mathscr {F}}_0}; L^2 ( (-\rho , 0), H ) \big )\).
Lemma 4.3
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.12) hold. Then for every \(\rho \in [0,1]\) and every compact subset E of \(L^2(\Omega , {\mathscr {F}}_0; H) \times L^2 \big ( \Omega , {{\mathscr {F}}_0}; L^2 ( (-\rho , 0), H ) \big )\), the solution u of (4.1) satisfies
Proof
Let \(\theta \) be a smooth function on \({\mathbb {R}}\) such that
and \(0 \leqslant \theta (s)\leqslant 1\) for all \( s \in {\mathbb {R}} \).
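One standard concrete realization of such a cutoff (our own choice; the paper does not specify \(\theta \) beyond the stated properties) glues the classical \(C^\infty \) bump function \(e^{-1/s}\):

```python
import numpy as np

# Illustrative construction (ours) of a smooth cutoff theta with
# theta = 0 on (-inf, 1], theta = 1 on [2, inf), 0 <= theta <= 1,
# built from the classic C-infinity bump g(s) = exp(-1/s) for s > 0.
def g(s):
    s = np.asarray(s, dtype=float)
    # exp(-1/s) for s > 0, extended by 0 for s <= 0 (still C-infinity)
    return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)

def theta(s):
    s = np.asarray(s, dtype=float)
    # denominator is never zero: g(s-1) and g(2-s) cannot both vanish
    return g(s - 1.0) / (g(s - 1.0) + g(2.0 - s))

s = np.linspace(-1, 4, 11)
assert np.all(theta(s[s <= 1]) == 0.0)
assert np.all(theta(s[s >= 2]) == 1.0)
```

Rescaling, \(\theta _m\) then vanishes on a ball of radius m and equals one outside the ball of radius 2m, which is exactly what the tail estimates below need.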
For given \(m \in {\mathbb {N}}\), set \(\theta _m(x) = \theta \big ( \frac{|x|}{m} \big )\) for \(x \in {\mathbb {R}}^n\). By (4.1) and Ito’s formula, we obtain
and hence for \(t>0\),
For the first term on the right-hand side of (4.28), arguing as in (8.18) in [25], we find that there exists a positive constant \(c_1\) independent of m and \(\rho \) such that
For the second term on the right-hand side of (4.28), by (2.1) we get
For the third term on the right-hand side of (4.28), by (4.6), we deduce
For the last term on the right-hand side of (4.28), by (4.10) we get
From (4.28)–(4.32), it follows that for \(t>0\),
Let \(\nu >0\) be a constant satisfying (4.22). Then by (4.22) and (4.33) we obtain that for all \(\rho \in [0,1]\) and \(t \geqslant \rho \),
Next, we estimate the first term on the right-hand side of (4.34). By (4.12), (4.33) and Theorem 2.2, we obtain that for all \(\rho \in [0,1]\) and \(t\in [0, \rho ]\),
For the second and third terms on the right-hand side of (4.34), by Lemma 4.1, we obtain
Then from (4.34), (4.35) and (4.36), it follows that for all \(\rho \in [0,1]\) and \(t\geqslant \rho \),
For any \(\varepsilon >0\), since E is compact in \(L^2(\Omega , {\mathscr {F}}_0; H) \times L^2 \big ( \Omega , {{\mathscr {F}}_0}; L^2 ( (-\rho , 0), H ) \big )\), E has a finite open cover of balls with radius \(\frac{\sqrt{\varepsilon }}{2}\) which is denoted by \(\big \{ B\big ( (u^i, \varphi ^i), \frac{\sqrt{\varepsilon }}{2} \big ) \big \}_{i=1}^l\). Since \((u^i, \varphi ^i)\in E \subseteq L^2(\Omega , {\mathscr {F}}_0; H) \times L^2 \big ( \Omega , {{\mathscr {F}}_0}; L^2 ( (-\rho , 0), H ) \big )\) for \( i=1,2,\ldots , l\), it follows that there exists \(R_1=R_1(\varepsilon , E)\geqslant 1\) such that for all \(m\geqslant R_1\), \(i=1,2,\ldots ,l\),
Then for all \((u^0, \varphi ) \in E \) and \(m \geqslant R_1\),
From (4.38) and the definition of \(\theta _m\), we obtain that for all \((u^0, \varphi ) \in E \) and \(m \geqslant R_1\),
By (4.37) and (4.39), we infer that there exists \(R_2 = R_2(\varepsilon , E) \geqslant R_1\) such that for all \(m \geqslant R_2\), \((u^0, \varphi ) \in E\) and \(t \geqslant \rho \),
On the other hand, by (4.35) and (4.39), we find that there exists \(R_3 = R_3(\varepsilon , E) \geqslant R_2\) such that for all \(m \geqslant R_3\), \((u^0, \varphi ) \in E\) and \(t \in [0, \rho ] \),
which along with (4.40) shows that
as desired. \(\square \)
Based on Lemma 4.3 we have the following uniform tail-estimates on the segments of solutions.
Lemma 4.4
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.12) hold. Then for every \(\rho \in [0,1]\) and every compact subset E in \(L^2(\Omega , {\mathscr {F}}_0; H) \times L^2 \big ( \Omega , {{\mathscr {F}}_0}; L^2 ( (-\rho , 0), H ) \big )\), the solution u of (4.1) satisfies
Proof
Let \(\theta \) be the smooth function defined in Lemma 4.3 and \(\nu \) be the positive number determined by (4.22). For \(t \geqslant \rho \) and \(t-\rho \leqslant r\leqslant t\), by Itô’s formula, (4.2) and (4.22), we obtain that for all \(t\geqslant \rho \),
Now we estimate the terms on the right-hand side of (4.41). For the second term on the right-hand side of (4.41), we obtain by the arguments of (8.18) in [25] and Remark 4.1 that for all \(t\geqslant \rho \),
where \(c_2>0\) is a constant depending only on E but not on m, \((u^0, \varphi )\) or \(\rho \in [0,1]\).
For the fourth term on the right-hand side of (4.41), by (4.6), we get that for all \(t \geqslant \rho \),
For the fifth term on the right-hand side of (4.41), by (4.10), we have
For the sixth term on the right-hand side of (4.41), by the Burkholder–Davis–Gundy inequality and (4.44), we have
Then from (4.41)–(4.45), it follows that for all \(t\geqslant \rho \),
By (4.39), (4.46) and Lemma 4.3, we find
which concludes the proof. \(\square \)
Remark 4.2
From (4.34) and Remark 4.1, we see that for every \(R>0\) and \(\varepsilon >0\), there exist \(T=T(R,\varepsilon ) \geqslant 2\) and \(K=K(\varepsilon ) \geqslant 1\) such that for all \(t\geqslant T\), \(m\geqslant K\) and \(\rho \in [0,1]\), the solution u of (4.1) satisfies
for any \( (u^0, \varphi )\in L^2(\Omega , {\mathscr {F}}_0; H) \times L^2 \big ( \Omega , {\mathscr {F}}_0; L^2 ( (-\rho , 0), H ) \big )\) such that
Based on (4.47), similar to Lemma 4.4, one can further show that there exist \(T_1 = T_1(R,\varepsilon ) \geqslant T\) and \(K_1 = K_1(\varepsilon ) \geqslant K\) such that for all \(t\geqslant T_1\), \(m\geqslant K_1\) and \(\rho \in [0,1]\),
for any \((u^0, \varphi )\) satisfying (4.48).
In what follows, we derive uniform estimates on the higher-order moments of solutions to (4.1).
Lemma 4.5
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.13) hold. If \((u^0, \varphi ) \in L^{2\theta }(\Omega , {\mathcal {F}}_0; H) \times L^{2\theta }(\Omega , {\mathcal {F}}_0; L^{2\theta }((-\rho , 0), H) )\), then there exists a positive constant \(\mu \) such that the solution u of (4.1) satisfies for any \(t\geqslant 0\),
where \(M_3\) is a positive constant independent of \(u^0, \varphi \) and \(\rho \in [0,1]\).
Proof
The proof is similar to that of Lemma 3.6 in [20]. For the reader’s convenience, we sketch the main idea here.
If \(\theta =1\), then this result is already covered by Lemma 4.1. Next, we assume \(\theta >1\). By (4.13), there exist positive constants \(\mu \) and \(\varepsilon _1\) such that
Given \(m\in {\mathbb {N}}\), let \(\tau _m\) be the stopping time defined by \( \tau _m = \inf \{ t \geqslant 0: \Vert u(t) \Vert > m\}. \) As usual, \(\inf \emptyset =\infty \). Note that the pathwise continuity of u implies \( \lim _{m \rightarrow \infty } \tau _m = \infty \).
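To make the localization step explicit: Itô’s formula is first applied on the random interval \([0, t\wedge \tau _m]\), where all stochastic integrals are well defined, and the truncation is removed afterwards. Schematically (a standard argument, not the exact display from the original):

```latex
% Localization: estimate up to the stopping time \tau_m, then let m -> \infty.
% If a uniform bound
%   \mathbb{E}\,\|u(t\wedge\tau_m)\|^{2\theta} \leqslant C(t)  for all m,
% holds with C(t) independent of m, then, since \tau_m \to \infty a.s.,
\mathbb{E}\,\|u(t)\|^{2\theta}
  \;\leqslant\; \liminf_{m\to\infty} \mathbb{E}\,\|u(t\wedge\tau_m)\|^{2\theta}
  \;\leqslant\; C(t),
% by the pathwise continuity of u and Fatou's lemma.
```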
Applying Itô’s formula to (4.1), we obtain for \(t \geqslant 0\),
For the third term on the right-hand side of (4.50), by Young’s inequality and (4.2), we get
Similar to Lemma 3.6 in [20], by Young’s inequality and (4.6), we get
For the fifth term on the right-hand side of (4.50), we have
For the sixth term on the right-hand side of (4.50), by (4.53), we have
It follows from (4.50)–(4.54) that for \(t \geqslant 0\),
Then by (4.49) and (4.55) we get for \(t \geqslant 0\),
Letting \(m \rightarrow \infty \) in (4.56), by Fatou’s theorem we can obtain the desired estimate. This completes the proof. \(\square \)
4.2 Regularity of solutions
In order to prove the existence of invariant measures of (4.1), we need to derive further regularity of solutions. To that end, we assume:
where \(C>0\) is a constant and \(\phi \in V\).
When proving the Hölder continuity of solutions in Lemma 4.10, the regularity of solutions of (1.1) is needed, for which the coefficients \(\sigma _{1,k}\) and \(\kappa \) should belong to the space V instead of H. Since the nonlinear drift term F has a polynomial growth of arbitrary order, the assumptions (4.58) and (4.60) are further required when establishing the higher-order moment estimates of F in \(L^r (\Omega , L^r ({\mathbb {R}}^n))\) with \(r>0\).
Next, we derive uniform estimates of solutions in \( L^{3p-4} \big ( \Omega , L^{3p-4} ({\mathbb {R}}^n) \big ). \)
Lemma 4.6
Assume (F\('\) ), (G\('\) ), (\(\Sigma '\) ), (4.12) and (4.58)–(4.60) hold. If
with \(R>0\), then the solution u of (4.1) satisfies, for all \(t\geqslant 6\) and \(\rho \in [0,1]\),
where \(C_1\) is a positive constant depending on R and p, but not on \((u^0, \varphi )\) or \(\rho \).
Proof
The proof consists of several steps. We first derive the uniform estimates of solutions in \( L^{p} \big ( \Omega , L^{p} ({\mathbb {R}}^n) \big ) \). The calculations are formal, but can be justified by a limiting procedure like the Galerkin method.
Step (i). We will show that for all \(t\geqslant 2\),
where \(L_1\) is a positive constant depending on R and p, but not on \((u^0, \varphi )\) or \(\rho \).
By Itô’s formula [21], we get for \(t\geqslant r \geqslant 0\),
For the second term on the left-hand side of (4.63), we have
For the third term on the right-hand side of (4.63), by assumption (F\('\) ) and Young’s inequality we get
For the fourth term on the right-hand side of (4.63), by Young’s inequality and (4.6) we get
For the last term on the right-hand side of (4.63), by (4.10) we deduce that
Then from (4.63)–(4.67), it follows that for all \(t \geqslant r \geqslant 0\),
where \(c_1=c_1(p)>0\) is a constant. Then integrating (4.68) with respect to r on \([t-1, t]\) for \(t\geqslant 1\), we get
By (4.69) and Remark 4.1, we find that there exists a positive number \(c_2 \) depending only on R and p but not on \((u^0, \varphi )\) or \(\rho \in [0,1]\) such that for all \(t\geqslant 1\),
By (4.68) and (4.70), we obtain for \(t\geqslant 2\),
where \(c_3>0\) depends only on R and p but not on \((u^0, \varphi )\) or \(\rho \in [0,1]\). Then (4.62) follows from (4.70)–(4.71) immediately.
Step (ii). We now show that for all \(t\geqslant 4\),
where \(L_2\) is a positive constant depending on R and p, but not on \((u^0, \varphi )\) or \(\rho \).
It follows from Itô’s formula [21] that for \(t\geqslant r \geqslant 0\),
For the second term on the left-hand side of (4.73),
For the third term on the right-hand side of (4.73), by assumption (F\('\) ) and Young’s inequality we get
For the fourth term on the right-hand side of (4.73), by Young’s inequality and (4.6) we get
For the last term on the right-hand side of (4.73), by (4.10) we deduce that
Then from (4.73)–(4.77), it follows that for all \(t \geqslant r\geqslant 0,\)
where \(c_4=c_4(p)>0\) is a constant. Then integrating (4.78) with respect to r on \([t-1, t]\) for \(t\geqslant 1\), we get
By (4.71) and (4.79) we find that there exists a positive number \(c_5 \) depending only on R and p but not on \((u^0, \varphi )\) or \(\rho \in [0,1]\) such that for all \(t \geqslant 3\),
By (4.78) and (4.80), we obtain for \(t\geqslant 4\),
where \(c_6>0\) depends only on R and p but not on \((u^0, \varphi )\) or \(\rho \in [0,1]\). By (4.80)–(4.81) we obtain (4.72).
Step (iii). We now prove (4.61). Again, by Itô’s formula [21], for \(t\geqslant r \geqslant 0\),
For the second term on the left-hand side of (4.82),
For the third term on the right-hand side of (4.82), by assumption (F\('\) ) and Young’s inequality we get
For the fourth term on the right-hand side of (4.82), by Young’s inequality and (4.6) we get
For the last term on the right-hand side of (4.82), by (4.10) we deduce that
Then from (4.82)–(4.86), it follows that for all \(t \geqslant r \geqslant 0,\)
where \(c_7=c_7(p)>0\) is a constant. Then integrating (4.87) with respect to r on \([t-1, t]\) for \(t\geqslant 1\), we get
By (4.81) and (4.88), we find that there exists a positive number \(c_8 \) depending only on R and p but not on \((u^0, \varphi )\) or \(\rho \in [0,1]\) such that for all \(t \geqslant 5\),
By (4.87) and (4.89), we obtain for \(t \geqslant 6\),
where \(c_9>0\) depends only on R and p but not on \((u^0, \varphi )\) or \(\rho \in [0,1]\). This concludes the proof. \(\square \)
Remark 4.3
The uniform estimates given by Lemma 4.6 can be further extended under additional assumptions. Suppose \(\psi _1\in L^r ({\mathbb {R}}^n)\) for all \(r\in [1, \infty )\) and
Then by the argument of Lemma 4.6, one can show that for every integer \(k\geqslant 0\), the solution u of (4.1) satisfies, for all \(t \geqslant 2(k+1)\) and \(\rho \in [0,1]\),
where \(L_k\) is a positive constant depending on k, p and R but not on \((u^0, \varphi )\) or \(\rho \) when
In addition, by Remark 4.1 and the proof of Lemma 4.6, we know that there exists \(T>2(k+1)\) depending on R and k but not on \(u^0, \varphi \) or \(\rho \in [0,1]\) such that for \(t\geqslant T\),
where \({\tilde{L}}_k\) is a positive constant depending on k and p but not on R or \(\rho \in [0,1]\).
Lemma 4.7
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ), (4.12) and (4.57)–(4.60) hold. Then for every \(R>0\) and initial data \((u^0, \varphi ) \in L^2(\Omega , {\mathcal {F}}_0; H) \times L^2 \big ( \Omega , {\mathcal {F}}_0; L^2 ( (-\rho , 0), H) \big )\) with
the solution u of (4.1) satisfies, for all \(t\geqslant 3\),
where \(C_2>0\) depends on R but not on \(u^0\), \(\varphi \), or \(\rho \in [0,1]\).
Proof
We formally derive (4.91). By (4.1) and Itô’s formula, we obtain for \(t \geqslant r \geqslant 0\),
For the second term on the right-hand side of (4.92), by (4.4) and (4.57), we get
For the third term on the right-hand side of (4.92), by (4.6) and Remark 4.1, we obtain for \(t\geqslant r \geqslant \rho \),
For the fourth term on the right-hand side of (4.92), by the inequality (4.13) in [20], we have
where the constant \(c_1 >0\) depends only on \(n, \alpha \) and \(\kappa \).
By (4.92)–(4.95), we have for \(t \geqslant r \geqslant \rho \),
For \(t\geqslant 1+\rho \), integrating (4.96) on \([t-1, t]\) with respect to r, we have
By Remark 4.1 and (4.97), we see that there exists \(c_2 >0 \) depending on R but not on \(u^0, \varphi \) or \(\rho \in [0,1]\) such that
Then by (4.96) and (4.98), we have for \(t\geqslant 3\),
where \(c_3>0\) depends on R but not on \(u^0, \varphi \) or \(\rho \in [0,1]\). Then (4.98)–(4.99) and Lemma 4.1 conclude the proof. \(\square \)
Lemma 4.8
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ), (4.12) and (4.57)–(4.60) hold. Then for any \(R>0\) and the initial data \((u^0, \varphi ) \in L^2(\Omega , {\mathcal {F}}_0;H) \times L^2 \big ( \Omega , {\mathcal {F}}_0; L^2 ( (-\rho , 0), H ) \big )\) with \( {\mathbb {E}} \Big ( \Vert u^0\Vert ^2 + \int _{-\rho }^0 \Vert \varphi (s) \Vert ^2 ds \Big ) \leqslant R, \) the solution u of (4.1) satisfies, for all \(t\geqslant 4\),
where \(C_3>0\) depends on R but not on \(u^0, \varphi \) or \(\rho \in [0,1]\).
Proof
The proof is based on Lemma 4.7 and is similar to that of Lemma 4.2 in [20]. So the details are omitted here. \(\square \)
Remark 4.4
Suppose the assumptions of Lemma 4.7 hold and \( {\mathbb {E}} \Big (\Vert u^0\Vert ^2 + \int _{-\rho }^0 \Vert \varphi (s) \Vert ^2 ds \Big ) \leqslant R \) for some \(R>0\). Then by Remark 4.1 and the proof of Lemma 4.7, we find that there exists \(T \geqslant 4\) depending only on R (but not on \(u^0, \varphi \) or \(\rho \in [0,1]\)) such that for all \(t\geqslant T\), the solution u of (4.1) satisfies \( {\mathbb {E}} \Big ( \sup _{t-1 \leqslant r \leqslant t} \Vert u(r) \Vert _V^2 \Big ) \leqslant {\widetilde{C}}_3, \) where \({\widetilde{C}}_3>0\) is a constant independent of R, \(u^0, \varphi \) and \(\rho \in [0,1]\).
Lemma 4.9
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ), (4.11) and (4.57)–(4.60) hold. If \(R>0\) and \((u^0, \varphi ) \in L^{2\theta }(\Omega , {\mathcal {F}}_0; V) \times L^{2\theta } \big ( \Omega , {\mathcal {F}}_0; L^{2\theta } ( (-\rho , 0), V ) \big )\) such that \( {\mathbb {E}} \Big ( \Vert u^0\Vert _V^{2 \theta } + \int _{-\rho }^0 \Vert \varphi (s ) \Vert _V^{2 \theta } ds \Big ) \leqslant R, \) then the solution u of (4.1) satisfies
where \(C_4\) is a positive constant depending on R but not on \(u^0, \varphi \) or \(\rho \in [0,1]\).
Proof
The proof for \(\theta =1\) is easier, so we assume \(\theta >1\) in the sequel. Let \(\mu \) and \(\varepsilon _1\) be positive constants to be specified later. By (4.1) and Itô’s formula, we get for \(t \geqslant 0\),
For the third term on the right-hand side of (4.100), similar to (4.93), we obtain
For the fourth term on the right-hand side of (4.100), by (4.6) we obtain
For the fifth term on the right-hand side of (4.100), by (4.31) in [20], we have
For the sixth term on the right-hand side of (4.100), we have
Then by (4.100)–(4.104), we get for all \(t\geqslant 0\),
By (4.11), there exist positive constants \(\mu \) and \(\varepsilon _1\) such that
By (4.105) and (4.106) we get for all \(t\geqslant 0\),
which together with Lemma 4.5 concludes the proof. \(\square \)
The next lemma is concerned with the pathwise Hölder continuity of solutions.
Lemma 4.10
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.57)–(4.60) hold. Let (4.11) be fulfilled with \(\theta =\frac{3p-4}{2p-2}\). If \(R>0\) and \((u^0, \varphi ) \in L^{2 \theta }(\Omega , {\mathcal {F}}_0; V) \times L^{2 \theta } \big ( \Omega , {\mathcal {F}}_0; L^{2 \theta } ( (-\rho , 0), V ) \big )\) such that
then the solution u of (4.1) satisfies
for all \(\rho \in [0,1]\) and \(t \geqslant r \geqslant 6\), where \(C_5>0\) is a constant depending on R but not on \(u^0, \varphi \) or \(\rho \).
Proof
Let \(A = (-\triangle )^\alpha + \lambda I\), where \(\alpha \in (0,1)\) and \(\lambda \) is the positive constant in (1.1). Then, by (4.1) we find that for \(t > r \geqslant 6\),
which implies that
For the first term on the right-hand side of (4.108), by Theorem 1.4.3 in [35] and Lemmas 4.5 and 4.9, there exists a positive number \(c_1\) depending on \(\theta \) such that for all \(t > r \geqslant 0\),
For the second term on the right-hand side of (4.108), by the contraction property of \(e^{-A t}\), (4.3) and Lemma 4.6, we obtain for all \(t > r \geqslant 6\),
For the third term on the right-hand side of (4.108), by (4.6) and Lemma 4.5, we obtain for all \(t > r \geqslant 1 \),
For the fourth term on the right-hand side of (4.108), by the BDG inequality, (4.10) and Lemma 4.5, we obtain for all \(t > r \geqslant 0\),
Then from (4.108)–(4.112), it follows that there exists \(c_7>0\) depending on R but not on \(u^0, \varphi ,\) \(\rho \), t or r, such that for all \(t>r\geqslant 6\),
This completes the proof. \(\square \)
Remark 4.5
Suppose \(R>0\) and \((u^0, \varphi ) \in L^{2 \theta }(\Omega , {\mathcal {F}}_0; V) \times L^{2 \theta } \big ( \Omega , {\mathcal {F}}_0; L^{2 \theta } ( (-\rho , 0), V ) \big )\) such that
Then by (4.107) and Lemma 4.5 we see that there exists \(T>0\) depending only on R but not on \(\rho \in [0,1]\) such that for all \(t\geqslant T\),
where \({\widetilde{C}}_4\) is a positive constant independent of R, \((u^0, \varphi )\) and \(\rho \in [0,1]\).
Moreover, by Lemma 4.5, Remark 4.3 and (4.113), we find from the proof of Lemma 4.10 that there exists \(T \geqslant 6\) depending only on R but not on \(\rho \in [0,1]\) such that for all \(t, r \geqslant T\),
where \({\tilde{C}}_5\) is a positive constant independent of R, \((u^0, \varphi )\) and \(\rho \in [0,1]\), and \(\theta \) is the same as that in Lemma 4.10.
4.3 Existence of invariant measures
We now prove the existence of invariant measures of (4.1) on \(H \times L^2((-\rho , 0); H)\) for which we need to show the tightness of distributions of solutions.
By Theorem 2.2, we see that for any initial time \(t_0 \geqslant 0\) and any \( (u^0, \varphi ) \in L^2(\Omega , {\mathscr {F}}_{t_0}; H) \times L^2(\Omega , {\mathscr {F}}_{t_0}; L^2((-\rho , 0), H) )\), problem (4.1) has a unique solution \(u(t; t_0, u^0, \varphi )\) defined for \(t \in [t_0 -\rho , \infty )\). The segment of \(u(t; t_0, u^0, \varphi )\) on \((t-\rho , t)\) is written as
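Although the displayed formula is not reproduced here, the segment notation is the standard one for delay equations, and it matches the explicit form used later in Sect. 7:

```latex
u_t(t_0, u^0, \varphi)(s) \;=\; u(t+s;\, t_0, u^0, \varphi),
\qquad s \in (-\rho, 0),\ t \geqslant t_0 .
```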
If \(\psi : H \times L^2((-\rho , 0); H) \rightarrow {\mathbb {R}}\) is a bounded Borel function, then for \(0 \leqslant r \leqslant t\) and \((u^0, \varphi ) \in H \times L^2((-\rho , 0), H)\), we set
In particular, for \(\Gamma \in {\mathcal {B}} \left( H \times L^2((-\rho , 0), H) \right) \), \(0\leqslant r\leqslant t\) and \((u^0, \varphi ) \in H \times L^2((-\rho , 0), H)\), we set
where \(1_\Gamma \) is the characteristic function of \(\Gamma \). We often write \(p_{0,t}\) as \(p_t\).
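In the standard Markov-semigroup notation (a formulation consistent with the definitions above; the elided displays may differ slightly in typography), these two objects read:

```latex
(p_{r,t}\psi)(u^0,\varphi)
  \;=\; \mathbb{E}\Big[ \psi\big( u(t; r, u^0, \varphi),\, u_t(r, u^0, \varphi) \big) \Big],
\qquad
p_{r,t}\big((u^0,\varphi);\,\Gamma\big)
  \;=\; (p_{r,t} 1_\Gamma)(u^0,\varphi).
```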
Let \({\mathscr {P}}\) be the space of all probability measures on \(H \times L^2 ((-\rho , 0), H)\). Recall that a probability measure \(\nu \in {\mathscr {P}}\) is called an invariant measure of (4.1) if for all \(t \geqslant 0\),
for every bounded Borel function \(\psi : H \times L^{2} ( (-\rho ,0), H ) \rightarrow {\mathbb {R}}\).
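The invariance condition elided above is the usual one: the law \(\nu \) is preserved by the transition semigroup,

```latex
\int_{H \times L^2((-\rho,0),H)} (p_t \psi)(u^0,\varphi)\, d\nu(u^0,\varphi)
 \;=\; \int_{H \times L^2((-\rho,0),H)} \psi(u^0,\varphi)\, d\nu(u^0,\varphi),
\qquad t \geqslant 0 .
```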
The following properties of \(\{ p_{r,t} \}_{0 \leqslant r \leqslant t}\) can be proved by standard arguments as in [34].
Lemma 4.11
If (F\('\) ), (G\('\) ) and (\(\Sigma '\) ) hold, then:
(i) The family \(\{ p_{r,t} \}_{0 \leqslant r \leqslant t}\) is Feller, and is homogeneous in time.
(ii) For any \((u^0, \varphi ) \in H \times L^2((-\rho , 0), H)\), the process \(\{ (u(t;0, u^0, \varphi ), u_t (0, u^0, \varphi ) )\}_{t \geqslant 0}\) is an \( H\times L^2((-\rho , 0), H) \)-valued Markov process with transition operators \(\{ p_{r,t} \}_{0 \leqslant r \leqslant t}\). In particular, if \(\psi : H \times L^2 ((-\rho , 0), H) \rightarrow {\mathbb {R}}\) is a bounded Borel function, then for any \(0 \leqslant s \leqslant r \leqslant t\), \({\mathbb {P}}\)-a.s.,
and the Chapman-Kolmogorov equation is valid:
for any \(\Gamma \in {\mathcal {B}} ( H \times L^2((-\rho , 0), H) )\).
We will employ Krylov-Bogolyubov’s method to show the existence of invariant measures of (4.1). To that end, for every \(k \in {\mathbb {N}}\), we set
where \(p(0,0,0; t,\cdot )\) is the distribution law of \((u(t; 0, 0,0), u_t( 0, 0,0))\) corresponding to the solution u(t; 0, 0, 0) of (4.1) with initial condition (0, 0) at initial time 0. We first prove that \(\{ \mu _k \}_{k \in {\mathbb {N}} }\) is tight on \(H \times L^2 ( (-\rho , 0), H )\).
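In Krylov–Bogolyubov’s method, the measures \(\mu _k\) are time averages of the transition probabilities; a standard choice, which we expect matches the elided display, is

```latex
\mu_k \;=\; \frac{1}{k} \int_0^k p\big(0,0,0;\, t, \cdot\,\big)\, dt,
\qquad k \in \mathbb{N}.
```

Any weak limit of such averages is then invariant by the Feller property of the semigroup.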
Lemma 4.12
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.57)–(4.60) hold. Let (4.11) be fulfilled with \(\theta = \frac{3p-4}{2p-2}\). Then \(\{ \mu _k \}_{k \in {\mathbb {N}} } \) is tight on \(H \times L^2 ( (-\rho , 0), H )\).
Proof
The proof is based on the uniform estimates given by Lemmas 4.4, 4.8 and 4.10, and is similar to that in [20] regarding the tightness of distributions of solutions on \(C ( [-\rho , 0]; H )\). Here we sketch the main idea of the proof. For convenience, throughout the proof we write the solution u(t; 0, 0, 0) as u(t).
By Lemma 4.8, for any given \(\epsilon >0\), there exists \(R_1 = R_1(\epsilon )>0\) such that for all \(t\geqslant 4\) and \(\rho \in [0,1]\),
By Lemma 4.10, for all \(t\geqslant 7\) and \(r,s\in [-\rho ,0]\),
where \(c_1>0\) is independent of \(\rho \). By (4.116) and the technique of dyadic division, we infer that for every \(\epsilon >0\), there exists \(R_2=R_2(\epsilon )>0\) such that for all \(t\geqslant 7\) and \(\rho \in [0,1]\),
From Lemma 4.4, it follows that for every \(\epsilon >0\) and \(m\in {\mathbb {N}}\), there exists \(n_m=n(\epsilon ,m) \geqslant 1\) such that \({\mathbb {E}} \Big ( \sup \limits _{t-\rho \leqslant s \leqslant t} \int _{ \vert x \vert \geqslant n_m} \vert u(s,x) \vert ^2 dx \Big ) \leqslant \frac{ \epsilon }{ 2^{2\,m+2} } \) for all \(t\geqslant 1\) and \(\rho \in [0,1]\), and hence for all \(t\geqslant 1,\)
For every \(\epsilon >0,\) denote by
and
By (4.115), (4.117) and (4.118)–(4.122) we find that for all \(t\geqslant 7\) and every \(\rho \in [0,1]\),
Moreover, by (4.119)–(4.122) and the Ascoli-Arzelà theorem, one may verify that the set \(\{ \xi (0) \mid \xi \in {\mathcal {Z}}_{\epsilon } \}\) is compact in H and \({\mathcal {Z}}_{\epsilon }\) is compact in \(C([-\rho ,0]; H)\). Since the embedding \(C([-\rho ,0]; H) \hookrightarrow L^{2} ( (-\rho , 0), H )\) is continuous, we find that \({\mathcal {Z}}_{\epsilon }\) is also compact in \(L^{2} ( (-\rho ,0), H )\). Consequently, the set \(\widetilde{{\mathcal {Z}}}_{\epsilon } = \{ ( \xi (0), \xi ) \mid \xi \in {\mathcal {Z}}_{\epsilon } \}\) is compact in \(H \times L^{2} ( (-\rho , 0), H )\).
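For the reader’s convenience, we recall the form of the Ascoli–Arzelà theorem invoked here: a set \({\mathcal {Z}} \subseteq C([-\rho ,0]; H)\) is precompact provided

```latex
\text{(i)}\ \ \{\xi(s) : \xi \in \mathcal{Z}\}\ \text{is precompact in } H
   \ \text{for every } s \in [-\rho,0];
\qquad
\text{(ii)}\ \ \lim_{\delta \to 0}\ \sup_{\xi \in \mathcal{Z}}\
   \sup_{\substack{r,s \in [-\rho,0] \\ |r-s| \leqslant \delta}}
   \| \xi(r) - \xi(s) \| \;=\; 0 .
```

In the proof above, condition (i) is obtained from the tail- and regularity estimates, and condition (ii) from the uniform Hölder continuity supplied by Lemma 4.10.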
Furthermore, by (4.123), we have that for all \(t\geqslant 7\) and \(\rho \in [0,1]\),
which along with (4.114) implies that for every \(\rho \in [0,1]\),
Thus \(\{ \mu _k \}_{ k \in {\mathbb {N}} }\) is tight on \(H \times L^{2} ( (-\rho , 0), H )\), which completes the proof. \(\square \)
Theorem 4.1
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.57)–(4.60) hold. Let (4.11) be fulfilled with \(\theta = \frac{3p-4}{2p-2}\). Then for any \(\rho \in [0,1]\), the stochastic equation (4.1) has an invariant measure on \(H \times L^{2} ( (-\rho ,0), H )\).
Proof
By Lemma 4.12 we see that \(\{ \mu _k \}_{k \in {\mathbb {N}} } \) is tight on \(H \times L^2 ( (-\rho , 0), H )\), and hence there exists a probability measure \(\mu \) on \(H \times L^2 ( (-\rho , 0), H )\) such that, up to a subsequence, \( \mu _k \rightarrow \mu . \) Then by Lemma 4.11, one can prove \(\mu \) is invariant, which completes the proof. \(\square \)
Given \(\rho \in [0,1]\), let \({\mathcal {S}}^\rho \) be the collection of all invariant measures of (4.1) with delay parameter \(\rho \). Then from Theorem 4.1 we see that \({\mathcal {S}}^\rho \) is nonempty. In the next section, we will prove the set \(\bigcup \limits _{\rho \in [0,1]} {\mathcal {S}}^\rho \) is tight.
5 Regularity of invariant measures
In this section, we establish the regularity of invariant measures of (4.1), which will be useful for proving the tightness of the set of all invariant measures of (4.1) when \(\rho \) varies on the interval [0, 1] in the next section.
Theorem 5.1
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ), (4.12) and (4.57)–(4.60) hold. Then for every \(\rho \in [0,1]\) and \(\mu ^\rho \in {\mathcal {S}}^\rho \), we have \(\mu ^\rho \big ( V \times L^{\infty } ( (-\rho , 0), V ) \big ) = 1\).
Proof
By Remark 4.4, we find that for every \((u^0,\varphi ) \in H \times L^2 ((-\rho ,0), H)\), there exists \(T=T(u^0, \varphi ) \geqslant 4\) (independent of \(\rho \)) such that for all \(t\geqslant T\) and \(\rho \in [0,1]\),
where \(c_1>0\) is independent of \(u^0, \varphi \) and \(\rho \).
Given \(R>0\), denote by
Then \({\tilde{B}}_R\) is a closed subset of \(H \times L^2 ((-\rho , 0), H)\).
By (5.1) we get for all \(t\geqslant T\) and \(\rho \in [0,1]\),
which implies that for all \(t\geqslant T\) and \(\rho \in [0,1]\),
If \(\mu ^\rho \in {\mathcal {S}}^\rho \), then from the invariance of \(\mu ^\rho \), it follows that for any \(s \geqslant 0\),
By (5.2), (5.3) and Fatou’s theorem we get, for all \(\rho \in [0,1]\),
Letting \(R\rightarrow +\infty \) in (5.4), since \(\lim \limits _{R\rightarrow \infty } \mu ^\rho \big ( {\tilde{B}}_R \big ) = \mu ^\rho \big ( V \times L^{\infty } ( (-\rho , 0), V ) \big )\) we obtain for all \(\rho \in [0,1]\),
which concludes the proof. \(\square \)
6 Tightness of the set of invariant measures
In this section, we prove the set of all invariant measures of (4.1) is tight when \(\rho \) varies on [0, 1]. To that end, for every \(\rho \in (0,1]\), define a restriction operator \({\mathcal {T}}_\rho : H \times L^2((-1, 0), H) \rightarrow H \times L^2( (-\rho , 0), H)\) by
where \(\varphi |_{(-\rho , 0)}\) is the restriction of \(\varphi \) to the interval \((-\rho , 0)\).
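Explicitly, the restriction operator acts componentwise, keeping the present state and truncating the history:

```latex
\mathcal{T}_\rho (u^0, \varphi) \;=\; \big( u^0,\ \varphi|_{(-\rho,0)} \big),
\qquad (u^0,\varphi) \in H \times L^2((-1,0),H).
```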
We now prove the tightness of the set of all invariant measures of (4.1) for all \(\rho \in [0,1]\).
Theorem 6.1
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.57)–(4.60) hold. Let (4.11) be fulfilled with \(\theta =\frac{3p-4}{2p-2}\). Then the set \(\bigcup \limits _{\rho \in [0,1]} {\mathcal {S}}^\rho \) is tight in the sense that for every \(\varepsilon >0\), there exists a compact subset \({\mathcal {K}}\) in \(H\times L^{2} ( (-1,0), H )\) such that \( \mu ^\rho \big ( {\mathcal {T}}_\rho ( {\mathcal {K}}) \big ) > 1-\varepsilon \) for all \(\mu ^\rho \in {\mathcal {S}}^\rho \) and \(\rho \in [0,1]\).
Proof
Given \(\rho \in [0,1]\) and \((u^0, \varphi ) \in V \times L^{\infty } ( (-\rho ,0), V ) \subseteq V \times L^{ \frac{3p-4}{p-1} } ( (-\rho ,0), V )\), by Remark 4.2, we find that for every \(\varepsilon >0\) and \(m \in {\mathbb {N}}\), there exist \(T_m=T(\varepsilon , m, u^0, \varphi ) \geqslant 2\) and \(k_m=k(\varepsilon , m) \geqslant 1\) such that for all \(\rho \in [0,1]\), \(t\geqslant T_m\) and \(k\geqslant k_m\),
On the other hand, by Remark 4.4, we see that there exists \({{\widetilde{T}}}_1= {{\widetilde{T}}}_1(u^0, \varphi ) \geqslant 1\) such that for all \(t\geqslant {{\widetilde{T}}}_1\),
where \(c_1>0\) is a constant independent of \((u^0, \varphi )\) and \(\rho \), which implies that for every \(\varepsilon >0\), there exists \(R_1=R_1(\varepsilon )>0\) (independent of \((u^0, \varphi )\) and \(\rho \)) such that for all \(t\geqslant {{\widetilde{T}}}_1\),
By Remark 4.5, there exist \( {{\widetilde{T}}}_2 = {{\widetilde{T}}}_2 (u^0, \varphi ) \geqslant 1 \) and \( R_2=R_2(\varepsilon )>0 \) (independent of \((u^0, \varphi )\) and \(\rho \)) such that for all \(t\geqslant {{\widetilde{T}}}_2\),
For every \(\varepsilon >0\), denote by
It follows from (6.4)–(6.6) and the proof of Lemma 4.12 that the set
is compact in \(H \times L^{2} ( (-1,0), H )\).
In what follows, we will prove for any \(\rho \in [0,1]\) and \(\mu ^\rho \in {\mathcal {S}}^\rho \),
For any \(m\in {\mathbb {N}}\), define
and
Then \( {\mathscr {K}}_\varepsilon = \bigcap \limits _{i=1}^\infty {\mathcal {A}}_i^{\varepsilon }, \) \({\mathcal {A}}_i^{\varepsilon }\) is a closed subset of \(H \times L^2((-1,0),H)\) and \({\mathcal {A}}_i^{\varepsilon } \supseteq {\mathcal {A}}_{i+1}^{\varepsilon }\). Similarly, one can verify that \({\mathcal {T}}_\rho {\mathcal {A}}_i^{\varepsilon }\) is a closed subset of \(H \times L^2((-\rho ,0),H)\) and \( {\mathcal {T}}_\rho ( {\mathcal {A}}_i^{\varepsilon } ) \supseteq {\mathcal {T}}_\rho ( {\mathcal {A}}_{i+1}^{\varepsilon }) \) for \( i \in {\mathbb {N}} \).
We claim:
It is evident that \( \bigcap \limits _{i=1}^\infty {\mathcal {T}}_\rho ({\mathcal {A}}_i^{\varepsilon } ) \supseteq {\mathcal {T}}_\rho ( {\mathscr {K}}_\varepsilon ). \) So it is enough to prove
Let \(z_0 \in C([-\rho , 0], H)\) such that \((z_0(0), z_0) \in \bigcap \limits _{i=1}^\infty {\mathcal {T}}_\rho ({\mathcal {A}}_i^{\varepsilon } )\). Then for every \(i\in {\mathbb {N}}\), we have \((z_0(0), z_0) \in {\mathcal {T}}_\rho (\mathcal {A}_i^{\varepsilon } )\), which implies that there exists \({\widetilde{z}}_i \in C([-1, 0]; H)\) such that
Consequently, we have \( {\widetilde{z}}_i \in {\mathscr {K}}_{1, \varepsilon } \cap {\mathscr {K}}_{2, \varepsilon } \cap ( \bigcap \limits _{m=1}^i {\mathscr {K}}_{3, \varepsilon , m} )\), which together with (6.10) implies
and
Define a continuous function \(z: [-1, 0] \rightarrow H\) by
Then \((z_0(0), z_0) = {\mathcal {T}}_\rho ( z(0), z )\). Moreover, it follows from (6.11)–(6.13) that \( z \in {\mathscr {K}}_{1, \varepsilon } \cap {\mathscr {K}}_{2, \varepsilon } \cap {\mathscr {K}}_{3, \varepsilon } \), and hence \((z_0(0), z_0) \in {\mathcal {T}}_\rho ( {\mathscr {K}}_\varepsilon )\), which gives (6.9).
By (6.8) we infer that for every \(\rho \in [0,1]\) and \(\mu ^\rho \in {\mathcal {S}}^\rho \),
which implies that there exists \(N_0=N_0(\varepsilon , \rho , \mu ^\rho ) \geqslant 1\) such that for any \(i \geqslant N_0\),
Next, we will prove \( \mu ^\rho \left( {\mathcal {T}}_\rho ( {\mathcal {A}}_{N_0}^{\varepsilon }) \right) > 1 - \frac{11}{12} \varepsilon . \)
Since \(\mu ^\rho \) is an invariant measure of (4.1) with \(\rho \in [0,1]\), we get
Then by (6.15) and Theorem 5.1, we have
By (6.16) and Fatou’s theorem, we get
Next, we estimate the term on the right-hand side of (6.17). For any \((u^0, \psi ) \in V \times L^{\infty } ( (-\rho ,0), V )\), note that \(u^\rho _t(0, u^0, \psi )\) is the segment of the solution \(u^\rho ( t; 0, u^0, \psi )\) of (4.1) on the interval \([t-\rho , t]\); that is,
We now consider the segment of \(u^\rho ( t; 0, u^0, \psi )\) on the interval \([t-1, t]\) with \(t\geqslant 1\), which is denoted by \(v^\rho _t(0, u^0, \psi )\); that is,
Then for all \(t\geqslant 1\), we have \(v^\rho _t( 0, u^0, \psi ) \in C([-1, 0]; H)\) and
By (6.18) we see that if \( \big ( u^\rho ( t; 0, u^0, \psi ), u^\rho _t( 0, u^0, \psi ) \big ) \notin {\mathcal {T}}_\rho ({\mathcal {A}}_{N_0}^{\varepsilon }) \) with \(t \ge 1\), then we must have \( \big ( v^\rho _t ( 0, u^0, \psi ) (0), v^\rho _t( 0, u^0, \psi ) \big ) \notin {\mathcal {A}}_{N_0}^{\varepsilon }, \) which shows that for \(t\geqslant 1\),
By (6.2), we obtain
By (6.3), we have
By (6.1), we have
It follows from (6.19)–(6.22) that
which along with (6.17) yields
Then (6.7) follows from (6.14) and (6.23) immediately, which completes the proof. \(\square \)
7 Limits of invariant measures with respect to delay parameter
In this section, we investigate the limiting behavior of invariant measures of (4.1) as \(\rho \rightarrow \rho _0\). We will show any limit of a sequence of invariant measures of (4.1) must be an invariant measure of the limiting system. We start with an abstract result regarding the limits of invariant measures.
Let Z be a separable Hilbert space with norm \(\Vert \cdot \Vert _Z\). Assume that for every \(\rho \in (0,1]\), \(z\in Z\) and \(\varphi \in L^2((-\rho , 0), Z)\), \(\{X^\rho (t;0, (z, \varphi )): t\geqslant 0 \}\) is a stochastic process in \(Z \times L^2((-\rho , 0), Z)\) with initial data \((z, \varphi )\) at \(t=0\). We also assume that for every \(z\in Z\), \(\{X^0(t;0,z): t\geqslant 0 \}\) is a stochastic process in Z with initial condition \(z\) at \(t=0\). Suppose the probability transition operators of \(X^\rho \) are Feller.
For each \(\rho \in (0,1]\) and \(\rho _{1} \in [0, \rho )\), for any \((z, {\varphi }) \in Z\times L^2((-\rho , 0), Z)\), let
where \(\varphi |_{(-\rho _{1}, 0)}\) is the restriction of \(\varphi \) to the interval \((-\rho _{1}, 0)\). Let \(Z_\rho = Z \times L^2((-\rho , 0), Z)\) if \(\rho \in (0,1]\), and \(Z_\rho =Z\) if \(\rho =0\).
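By analogy with the operator \({\mathcal {T}}_\rho \) of Sect. 6, the map elided above restricts the history component to the shorter interval; the \(\rho _1=0\) case below is our reading of the convention \(Z_0 = Z\):

```latex
\mathcal{T}_{\rho \to \rho_1}(z, \varphi)
  \;=\;
  \begin{cases}
    \big( z,\ \varphi|_{(-\rho_1, 0)} \big), & \rho_1 \in (0, \rho),\\[2pt]
    z, & \rho_1 = 0,
  \end{cases}
% consistent with Z_{\rho_1} = Z \times L^2((-\rho_1,0),Z) for \rho_1 > 0
% and Z_0 = Z.
```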
Similar to [18, 31], we assume that \(X^{\rho _n}\) converges to \(X^{\rho }\) as \(\rho _n \rightarrow \rho ^+\) in the following sense: for every compact subset E in \(Z \times L^2( (-1, 0), Z)\), \(t\geqslant 0\) and \(\zeta >0\),
Then we have the following result, analogous to Lemma 7.1 in [18], whose proof is omitted.
Theorem 7.1
Let \(0\leqslant \rho < \rho _n \leqslant 1\) and (7.1) hold. Suppose \(\mu ^{\rho _n}\) is an invariant measure of \(X^{\rho _n}\) in \(Z\times L^2( (-\rho _n, 0), Z)\), and for any \(\epsilon >0\), there exists a compact subset \(K \subseteq Z \times L^2( (-1, 0), Z)\) such that \( \mu ^{\rho _n} \big ( {\mathcal {T}}_{1 \rightarrow \rho _n} (K ) \big ) > 1 - \epsilon , \ \forall \ n=1,2,\ldots . \) Then:
(i) The sequence \(\{ \mu ^{\rho _n}\circ {\mathcal {T}}_{\rho _n \rightarrow \rho }^{-1} \}_{n=1}^{+\infty }\) is tight in \(Z_{\rho }\).
(ii) If \(\rho _n \rightarrow \rho \) and \(\mu \) is a probability measure on \(Z_\rho \) such that \(\mu ^{\rho _n}\circ {\mathcal {T}}_{\rho _n \rightarrow \rho }^{-1} \) converges weakly to \(\mu \) as \(n \rightarrow \infty \), then \(\mu \) must be an invariant measure of \(X^\rho \).
Next, we will apply Theorem 7.1 to the stochastic system (4.1) with \(Z=H=L^2({\mathbb {R}}^n)\). Recall that \(H_\rho = H \times L^{2} ( (-\rho , 0), H )\) if \(\rho \in (0,1]\), and \(H_\rho = H\) if \(\rho =0\). We now write the solution of (4.1) with \(\rho \in (0,1]\) as \(u^\rho \), and reserve u for the solution of (4.1) with \(\rho =0\).
Lemma 7.1
Suppose (F\('\) ), (G\('\) ) and (\(\Sigma '\) ) hold. Then for every \(\rho _1\in [0, 1)\), every compact subset E of \(H \times L^2 ( (-1, 0), H )\), every \(T \geqslant 1\) and every \(\eta >0\),
where \(u^\rho (t) = u^\rho (t;0, {\mathcal {T}}_{1\rightarrow \rho } (u^0, \varphi ) )\), and \(u^{\rho }_t(s) = u^{\rho } (t+s; 0, {\mathcal {T}}_{1\rightarrow \rho } (u^0, \varphi ) )\) for \(s\in (-\rho , 0)\).
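In other words, the lemma asserts convergence in probability of the solution segments as \(\rho \rightarrow \rho _1^+\), uniformly over E; hedging on the exact formulation, it follows the pattern of (7.1) with \(Z = H\):

```latex
% The assertion of Lemma 7.1, read in the pattern of (7.1) with Z = H:
\lim_{\rho \to \rho_1^{+}}\; \sup_{(u^0, \varphi) \in E}\;
\mathbb{P}\Bigl( \bigl\| \mathcal{T}_{\rho \rightarrow \rho_1}
\bigl( u^{\rho}(t),\, u^{\rho}_t \bigr)
- \bigl( u^{\rho_1}(t),\, u^{\rho_1}_t \bigr) \bigr\|_{H_{\rho_1}}
> \eta \Bigr) = 0, \qquad t \in [0, T].
```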
Proof
By Theorem 2.2, we find that for every \(T\geqslant 1\) and every compact subset E in \({\mathcal {H}}_1\), there exists a positive number \(c_1 = c_1(E, T)\) independent of \(\rho \in [0, 1]\) such that for all \((u^0, \varphi ) \in E\) and \(\rho \in [0,1]\),
Applying Itô’s formula to (4.1), we obtain for \(t\in [0,T]\)
We now estimate all terms on the right-hand side of (7.3). For the first term on the right-hand side of (7.3), it follows from (4.4) that
For the second term on the right-hand side of (7.3), we have
For the first term on the right-hand side of (7.5), by (4.7) we have
Since \(C([-1,0]; H)\) is dense in \(L^2((-1,0), H)\), we find that for each \(\varphi \in L^2( (-1,0), H)\), there exists \(\delta _\varphi \in (0, 1-\rho _1)\) such that \( \int _{-\rho _1}^{0} \Vert \varphi (s-h) - \varphi (s) \Vert ^2 ds < \varepsilon \) for any \(h \in (0, \delta _\varphi )\). Since E is compact in \(H \times L^2( (-1,0), H)\), we infer that there exists \(\delta = \delta (E) \in (0, 1-\rho _1)\) such that for all \(h \in (0, \delta )\) and for all \((u^0,\varphi )\in E\),
By (7.6) and (7.7), we obtain that for \(0< \rho - \rho _1 < \delta \),
For the second term on the right-hand side of (7.5), it follows from (4.6) and (7.2) that for \(0< \rho - \rho _1 < \delta \),
For the third term on the right-hand side of (7.5), by (4.7) we have
Next, we consider the second term on the right-hand side of (7.10). Since E is compact in \(H \times L^2( (-1,0), H)\), for every \(\varepsilon >0\), E admits a finite cover by open balls of radius \(\varepsilon \) in \(H \times L^2( (-1,0), H)\), denoted by \(\left\{ B\big ( (u_i, \varphi _i), \varepsilon \big ) \right\} _{i=1}^m\). Then for each \((u^0,\varphi ) \in E\), there exists \(i_0\in \{1, 2, \ldots , m\}\) such that \((u^0,\varphi )\in B\big ( (u_{i_0}, \varphi _{i_0}), \varepsilon \big )\); that is,
Note that
For each \(i = 1, 2, \ldots , m\), we have \(u^{\rho _1}(\cdot \,; 0, {\mathcal {T}}_{1\rightarrow \rho _1} (u_{i}, \varphi _{i})) \in C([0,T], L^2(\Omega , {\mathcal {F}}_0;H))\), so the map \(s \mapsto u^{\rho _1}(s; 0, {\mathcal {T}}_{1\rightarrow \rho _1} (u_{i}, \varphi _{i}))\) from \([0,T]\) to \(L^2(\Omega , {\mathcal {F}}_0;H)\) is uniformly continuous. Hence there exists \(\delta _i =\delta _i(\varepsilon , T, u_{i}, \varphi _{i} )>0\) such that for all \(t_1, t_2 \in [0,T]\) with \(|t_1-t_2|<\delta _i\),
Let \({\tilde{\delta }} = {\mathord {\textrm{min}}}\{\delta _i \mid i=1,2,\ldots ,m \}\). Then for all \(0< \rho -\rho _1 < {\tilde{\delta }}\),
for all \(s \in [0, T-\rho ], \ i=1,2,\ldots ,m\). Then by (7.13) we obtain
On the other hand, by Itô’s formula together with (4.4), (4.7) and (4.9), we obtain
Applying Gronwall’s inequality to (7.15), by (7.11) we have
By (7.16), we have for \(t\in [0,T]\)
In addition, by (7.16) we have
So by (7.12), (7.14), (7.17) and (7.18), we obtain for \(0< \rho -\rho _1 < {\bar{\delta }}\),
which along with (7.10) yields that for \(0< \rho -\rho _1 < {\bar{\delta }}\),
Let \({\hat{\delta }}={\mathord {\textrm{min}}}\{ \delta , {\bar{\delta }} \}\). Then for \(0< \rho -\rho _1 < {\hat{\delta }}\), it follows from (7.5), (7.8), (7.9) and (7.20) that
For the third term on the right-hand side of (7.3), by (4.9), we have
For the fourth term on the right-hand side of (7.3), by (\(\Sigma \)1\('\) ) and the Burkholder–Davis–Gundy inequality we have
Then by (7.3), (7.4), (7.21)–(7.23), we obtain, for \(\varepsilon \in (0,1)\) and \(0< \rho -\rho _1 < {\hat{\delta }}\),
where \(c_2, c_3, c_4\) and \(c_5\) are positive numbers depending only on E and T but not on \(u^0, \varphi \), \(\varepsilon \) or \(\rho \). By (7.24) and Gronwall’s inequality, we obtain that for all \(t\in [0,T]\), \((u^0, \varphi ) \in E\) and \(0< \rho -\rho _1 < {\hat{\delta }}\),
Furthermore, by (7.25) we obtain for all \(t\in [0,T]\), \((u^0, \varphi ) \in E\) and \(0< \rho -\rho _1 < {\hat{\delta }}\),
Since E is compact in \(H\times L^2((-1,0), H)\), there exists \(\delta _0 =\delta _0 (\varepsilon , E)>0\) such that for all \(h\in (0, \delta _0)\),
Let \({\hat{\delta }} _0 ={\mathord {\textrm{min}}}\{ \delta _0, {\hat{\delta }}\}\). By (7.25), (7.26) and (7.27) we get for all \((u^0, \varphi ) \in E\) and \(0< \rho -\rho _1 < {\hat{\delta }}_0\),
It follows from (7.28) that for all \(0< \rho -\rho _1 < {\hat{\delta }}_0\),
By (7.29), we obtain
as desired. \(\square \)
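The Gronwall step invoked at (7.16) and (7.25) controls a quantity satisfying a linear integral inequality by the solution of the corresponding linear ODE. As a numerical sanity check (not part of the proof), the following sketch compares a forward-Euler trajectory of the extremal equation \(y' = a y + b\) with the Gronwall bound \(y(t) \leqslant (y_0 + b/a)e^{at} - b/a\); all names and parameter values are illustrative only.

```python
import math

def euler_trajectory(y0, a, b, T, n):
    """Forward-Euler approximation of y' = a*y + b, the extremal case
    of the integral inequality y(t) <= y0 + int_0^t (a*y(s) + b) ds."""
    h, y = T / n, y0
    for _ in range(n):
        y += h * (a * y + b)
    return y

# Illustrative parameters (not taken from the paper).
y0, a, b, T = 1.0, 2.0, 0.5, 1.0
numeric = euler_trajectory(y0, a, b, T, 100_000)
bound = (y0 + b / a) * math.exp(a * T) - b / a  # Gronwall bound at t = T

# Forward Euler underestimates the convex exponential growth,
# so the trajectory stays below the Gronwall bound.
assert numeric <= bound
```

For a fine step size the two values are close, since the extremal trajectory saturates the inequality.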
We are now ready to present the main result of this section.
Theorem 7.2
Suppose (F\('\) ), (G\('\) ), (\(\Sigma '\) ) and (4.58)–(4.60) hold, and let (4.11) be fulfilled with \(\theta =\frac{3p-4}{2p-2}\). Take \(\rho _0 \in [0,1)\) and \(\rho _n \in (\rho _0, 1]\). If \(\rho _n \rightarrow \rho _0\) and \(\mu ^{\rho _n} \in {\mathcal {S}}^{\rho _n}\), then there exist a subsequence \(\{ \rho _{n_k} \}_{k=1}^{\infty }\) and an invariant measure \(\mu ^{\rho _0} \in {\mathcal {S}}^{\rho _0}\) such that \(\mu ^{\rho _{n_k}}\circ {\mathcal {T}}^{-1}_{\rho _{n_k} \rightarrow \rho _0} \rightarrow \mu ^{\rho _0}\) weakly.
Proof
Note that \(\{ \mu ^{\rho _n} \}_{n=1}^{ \infty }\) is tight by Theorem 6.1. Therefore, there exist a subsequence \(\{ \rho _{n_k} \}_{k=1}^{\infty }\) and a probability measure \(\mu ^*\) such that \(\mu ^{\rho _{n_k}} \circ {\mathcal {T}}^{-1}_{\rho _{n_k} \rightarrow \rho _0} \rightarrow \mu ^*\) weakly. Since \( \rho _{n_k} \rightarrow \rho _0\), by Lemma 7.1 and Theorem 7.1 we infer that \(\mu ^*\) must be an invariant probability measure of (4.1) with \( \rho =\rho _0\), which concludes the proof. \(\square \)
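The extraction of the weakly convergent subsequence above rests on Prokhorov's theorem: on a complete separable metric space, a tight family of Borel probability measures is relatively compact in the topology of weak convergence. In symbols, tightness of a family \(\Lambda \) of probability measures on a space S means:

```latex
% Tightness (hypothesis of Prokhorov's theorem):
\forall\, \epsilon > 0 \ \ \exists\, K \subseteq S \text{ compact}: \quad
\mu(K) > 1 - \epsilon \quad \text{for all } \mu \in \Lambda,
```

and the conclusion is that every sequence in \(\Lambda \) admits a subsequence converging weakly to some probability measure on S.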
Data availability statement
This manuscript has no associated data.
References
Bates, P.W., Lu, K., Wang, B.: Random attractors for stochastic reaction–diffusion equations on unbounded domains. J. Differ. Equ. 246, 845–869 (2009)
Caraballo, T., Han, X., Kloeden, P.: Nonautonomous chemostats with variable delays. SIAM J. Math. Anal. 47(3), 2178–2199 (2015)
Crauel, H., Flandoli, F.: Attractors for random dynamical systems. Probab. Theory Relat. Fields 100, 365–393 (1994)
Flandoli, F., Schmalfuss, B.: Random attractors for the 3D stochastic Navier–Stokes equation with multiplicative noise. Stoch. Stoch. Rep. 59, 21–45 (1996)
Gess, B., Liu, W., Röckner, M.: Random attractors for a class of stochastic partial differential equations driven by general additive noise. J. Differ. Equ. 251, 1225–1253 (2011)
Li, D., Lu, K., Wang, B., Wang, X.: Limiting dynamics for non-autonomous stochastic retarded reaction-diffusion equations on thin domains. Discrete Contin. Dyn. Syst. 39(7), 3717–3747 (2019)
Wang, X., Lu, K., Wang, B.: Random attractors for delay parabolic equations with additive noise and deterministic nonautonomous forcing. SIAM J. Appl. Dyn. Syst. 14(2), 1018–1047 (2015)
Schmalfuss, B.: Backward cocycles and attractors of stochastic differential equations, International Seminar on Applied Mathematics-Nonlinear Dynamics: Attractor Approximation and Global Behavior, Dresden, pp. 185–192 (1992)
Wang, B.: Sufficient and necessary criteria for existence of pullback attractors for non-compact random dynamical systems. J. Differ. Equ. 253, 1544–1583 (2012)
Kloeden, P., Lorenz, T.: Mean-square random dynamical systems. J. Differ. Equ. 253, 1422–1438 (2012)
Wang, B.: Weak pullback attractors for mean random dynamical systems in Bochner spaces. J. Dyn. Differ. Equ. 31, 2177–2204 (2019)
Wang, B.: Weak pullback attractors for stochastic Navier–Stokes equations with nonlinear diffusion terms. Proc. Am. Math. Soc. 147(4), 1627–1638 (2019)
Es-Sarhir, A., van Gaans, O., Scheutzow, M.: Invariant measures for stochastic functional differential equations with superlinear drift term. Differ. Integr. Equ. 23, 189–200 (2010)
Butkovsky, O., Scheutzow, M.: Invariant measures for stochastic functional differential equations. Electron. J. Probab. 22, 1–23 (2017)
Bo, L., Yuan, C.: Stochastic delay differential equations with jump reflection: invariant measure. Stochastics 88(6), 841–863 (2016)
Wu, F., Yin, G., Mei, H.: Stochastic functional differential equations with infinite delay: existence and uniqueness of solutions, solution maps, Markov properties, and ergodicity. J. Differ. Equ. 262, 1226–1252 (2017)
Chen, Z., Li, X., Wang, B.: Invariant measures of stochastic delay lattice systems. Discrete Contin. Dyn. Syst. Ser. B 26, 3235–3269 (2021)
Li, D., Wang, B., Wang, X.: Limiting behavior of invariant measures of stochastic delay lattice systems. J. Dyn. Differ. Equ. 34, 1453–1487 (2022)
Li, D., Wang, B., Wang, X.: Periodic measures of stochastic delay lattice systems. J. Differ. Equ. 272, 74–104 (2021)
Chen, Z., Wang, B.: Invariant measures of fractional stochastic delay reaction–diffusion equations on unbounded domains. Nonlinearity 34, 3969–4016 (2021)
Krylov, N.V.: Itô’s formula for the \(L_p\)-norm of stochastic \(W^1_p\)-valued processes. Probab. Theory Relat. Fields 147, 583–605 (2010)
Kim, J.: On the stochastic Benjamin-Ono equation. J. Differ. Equ. 228, 737–768 (2006)
Kim, J.: Periodic and invariant measures for stochastic wave equations. Electron. J. Differ. Equ. 2004(5), 1–30 (2004)
Kim, J.: Invariant measures for a stochastic nonlinear Schrödinger equation. Indiana Univ. Math. J. 55, 687–717 (2006)
Wang, B.: Dynamics of fractional stochastic reaction–diffusion equations on unbounded domains driven by nonlinear noise. J. Differ. Equ. 268, 1–59 (2019)
Brzezniak, Z., Ondrejat, M., Seidler, J.: Invariant measures for stochastic nonlinear beam and wave equations. J. Differ. Equ. 260, 4157–4179 (2016)
Brzezniak, Z., Motyl, E., Ondrejat, M.: Invariant measure for the stochastic Navier–Stokes equations in unbounded 2D domains. Ann. Probab. 45, 3145–3201 (2017)
Misiats, O., Stanzhytskyi, O., Yip, N.: Existence and uniqueness of invariant measures for stochastic reaction-diffusion equations in unbounded domains. J. Theor. Probab. 29, 996–1026 (2016)
Wang, R., Guo, B., Wang, B.: Well-posedness and dynamics of fractional FitzHugh–Nagumo systems on \({\mathbb{R} }^N\) driven by nonlinear noise. Sci. China Math. 64(11), 2395–2436 (2021)
Liu, Z., Shi, Z.: Ergodicity of stochastic Burgers equation in unbounded domains with space-time white noise, arXiv: 2401.15349 (2024)
Chen, L., Dong, Z., Jiang, J., Zhai, J.: On limiting behavior of stationary measures for stochastic evolution systems with small noise intensity. Sci. China Math. 63(8), 1463–1504 (2020)
Chen, Z., Wang, B.: Limit measures and ergodicity of fractional stochastic reaction-diffusion equations on unbounded domains. Stoch. Dyn. 22(2), 2140012 (2022)
Di Nezza, E., Palatucci, G., Valdinoci, E.: Hitchhiker’s guide to the fractional Sobolev spaces. Bull. Sci. Math. 136, 521–573 (2012)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)
Henry, D.: Geometric Theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, vol. 840. Springer, New York (1981)
Acknowledgements
This work was supported by the NNSF of China (Grant numbers 11471190, 11971260). The authors are grateful to the editor and referees for their very valuable suggestions and comments.
Funding
Open access funding provided by SCELC, Statewide California Electronic Library Consortium.
Contributions
Zhang Chen and Bixiang Wang wrote the main manuscript. Both authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Cite this article
Chen, Z., Wang, B. Long-term dynamics of fractional stochastic delay reaction–diffusion equations on unbounded domains. Stoch PDE: Anal Comp (2024). https://doi.org/10.1007/s40072-024-00334-z