1 Introduction and Main Results

We study the long-time behavior of the equation

$$\begin{aligned} \left\{ \begin{array}{ll} \frac{\partial }{\partial t} u(t,x) = A u(t,x) + f(x,u(t,x)) + \sigma (x,u(t,x)) \dot{W}(t,x),&{}\quad t > 0,\; x \in G;\\ u(0,x) = u_0(x).&{}\\ \end{array}\right. \end{aligned}$$
(1)

Here, \(G \subset \mathbb {R}^{d}\) is a (possibly unbounded) domain, \(A\) is an elliptic operator, \(f\) and \(\sigma \) are measurable real functions, and the Gaussian noise \(\dot{W}(t,x)\) is white in time and colored in space. In particular, we are interested in the existence and uniqueness of invariant measures for Eq. (1).

Equations of this type model the behavior of various dynamical systems in physics and mathematical biology. For instance, this equation describes the well-known Hodgkin–Huxley model in neurophysiology (where \(u\) is the electric potential along nerve cells [22]), as well as the Dawson and Fleming model of population genetics [8] (where \(u(t, \cdot )\) is the mass distribution of the population). Moreover, Eq. (1) with infinite-dimensional noise is an interesting object from the mathematical point of view, since its analysis involves a subtle interplay between PDE and probabilistic techniques.

Reaction–diffusion equations of type (1) have been extensively studied by a variety of authors. The analysis of the long-time behavior of solutions of (1) is a nontrivial question even in the deterministic case \(\sigma (x,u) \equiv 0\). This question was addressed, for example, by Dirr and Yip [13] and references therein. In their work, the authors describe a certain class of nonlinearities \(f(x,u)\), for which the deterministic equation (1) admits a bounded solution (as \(t \rightarrow \infty \)), while for a different class of nonlinearities all solutions of the deterministic equation (1) have linear growth in \(t\) (and hence are not uniformly bounded). The transition between those two classes of nonlinearities is also studied in the paper.

A comprehensive study of stochastic equation (1) has been performed by Da Prato and Zabczyk ([11, 12] and references therein). The ergodic properties of the solutions of (1) are a question of separate interest in these works. This question was addressed from the point of view of the existence of an invariant measure for (1), which is a key step in the study of the ergodic behavior of the underlying physical systems [12, Theorems 3.2.4, 3.2.6]. Based on the pioneering work of Kryloff and Bogoliouboff [18], the authors suggested the following approach to establishing the existence of invariant measures:

  • Establish the compactness and Feller property of the semigroup generated by \(A\);

  • Establish the existence of a solution which is bounded for \(t \in [0, \infty )\) in certain probability sense.

The existence of invariant measures using the aforementioned procedure was established in [3, 17, 21], in particular in the case when \(A = \varDelta \) and \(G\) is a bounded domain.

A different approach to the existence of invariant measures, based on the coupling method, was used by Bogachev and Röckner [2] and Mueller [23]. This method can be applied even for space-white noise, but only in the case when the space dimension \(d\) is one.

The existence and uniqueness of the solutions of stochastic reaction–diffusion equations in bounded domains with Dirichlet boundary condition, as well as the existence of an invariant measure, was studied by Cerrai [5–7] and references therein.

The question of the existence of invariant measures in unbounded domains with \(A = \varDelta \) was studied in [1, 12, 15, 26]. The key condition for the existence of a solution bounded in probability, and hence, the existence of an invariant measure in these works is the following dissipation condition for the nonlinearity \(f\): for some \(k>0\),

$$\begin{aligned} \left\{ \begin{array}{ll} f(u) \ge -k u - c,&{}\quad u \le 0;\\ f(u) \le -k u + c,&{}\quad u \ge 0.\\ \end{array}\right. \end{aligned}$$
(2)
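As a simple illustration (ours, not taken from the references), the role of (2) can already be seen at the ODE level. The sketch below uses the hypothetical choice \(f(u) = -u + \cos u\), which satisfies (2) with \(k = c = 1\), and checks that forward-Euler trajectories from large initial data enter a bounded set; step size and horizon are illustrative.

```python
import numpy as np

# Illustrative sketch (not from the paper): the dissipation condition (2)
# forces bounded trajectories already for the ODE u' = f(u).  Here
# f(u) = -u + cos(u) satisfies (2) with k = c = 1.
def f(u):
    return -u + np.cos(u)

dt, n_steps = 0.01, 5000          # forward Euler up to time t = 50
for u0 in (-50.0, -5.0, 0.0, 5.0, 50.0):
    u = u0
    for _ in range(n_steps):
        u += dt * f(u)
    # (2) with k = c = 1 suggests an absorbing set of radius about c/k = 1
    assert abs(u) <= 1.5
```

(All trajectories settle near the unique root of \(u = \cos u\), \(u^* \approx 0.739\).)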

To the best of our knowledge, the only work in which the existence of an invariant measure in \(\mathbb {R}^d\) is proved for \(f(u)\) not satisfying the dissipativity condition (2) is that of Assing and Manthey [1]. For spatial dimensions three or higher, these authors show the existence of an invariant measure for (1) if \(f(u) \equiv 0\) and \(\sigma (u)\) is a Lipschitz function of \(u\) with a sufficiently small Lipschitz constant. One of the goals of the present work is to extend the results of [1] to incorporate \(f\) which might not satisfy the condition (2).

We establish two types of existence results for invariant measures in unbounded domains. The first makes use of the boundedness and compactness properties of the solution; the required dissipativity comes not from the nonlinear function \(f\) but from the decay of the Green's function in \(\mathbb {R}^d\) for \(d \ge 3\). The second makes use of the exponential stability of the equation; this approach also gives the uniqueness of the invariant measure. Both strategies are similar to [9, 10], while the analytical framework is different.

Before describing our results, we introduce a weighted \(L^2\)-space. Let \(\rho \) be a nonnegative continuous function in \(L^1(\mathbb {R}^d) \cap L^\infty (\mathbb {R}^d)\). Following [26], we call \(\rho \) an admissible weight if for every \(T>0\) there exists \(C(T)>0\) such that

$$\begin{aligned} G(t, \cdot ) * \rho \le C(T) \rho , \ \forall t \in [0,T], \,\text {where}\,\,G(t,x) = \frac{1}{(4 \pi t)^{d/2}}\text {e}^{ - \frac{|x|^2}{4 t}}. \end{aligned}$$

Some examples of admissible weights include \(\rho (x) = \text {exp}(-\gamma |x|)\) for \(\gamma >0\), and \(\rho (x) = (1+|x|^n)^{-1}\) for \(n>d\).
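Admissibility of such weights can also be checked numerically. The sketch below (ours, purely illustrative: grid, \(\gamma = 1\), and the time points are arbitrary choices) verifies the bound for \(\rho (x) = \text {e}^{-|x|}\) in \(d = 1\), using the classical comparison constant \(C(T) = 2\text {e}^{\gamma ^2 T}\) as reference.

```python
import numpy as np

# Numerical sanity check: for rho(x) = exp(-gamma*|x|) in d = 1, the heat
# kernel convolution G(t,.)*rho is dominated by C(T)*rho on (0, T].
gamma = 1.0
x = np.linspace(-20.0, 20.0, 4001)            # uniform grid, dx = 0.01
dx = x[1] - x[0]
rho = np.exp(-gamma * np.abs(x))

def heat_kernel(t):
    """1-D heat kernel G(t,x) = (4 pi t)^(-1/2) exp(-x^2/(4t))."""
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

ratios = {}
for t in (0.1, 0.5, 1.0):
    conv = np.convolve(heat_kernel(t), rho, mode="same") * dx  # G(t,.)*rho
    mid = slice(1000, 3001)                   # x in [-10, 10], away from truncation
    ratios[t] = np.max(conv[mid] / rho[mid])

# Classical comparison bound: G(t,.)*rho <= 2*exp(gamma^2 t)*rho
for t, r in ratios.items():
    assert r <= 2.0 * np.exp(gamma**2 * t)
```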

For an admissible weight \(\rho \), define

$$\begin{aligned} H = L^2_{\rho }(\mathbb {R}^d):=\left\{ w: \mathbb {R}^d \rightarrow \mathbb {R}, \int _{\mathbb {R}^d} |w(x)|^2 \rho (x)\, \text {d}x < \infty \right\} \end{aligned}$$
(3)

and

$$\begin{aligned} \Vert w\Vert ^2_{H}:= \int _{\mathbb {R}^d} |w(x)|^2 \rho (x)\, \text {d}x. \end{aligned}$$

The choice of \(\rho \) is more flexible for the first part, while it is quite specific for the second. The noise process \(W\) is defined and constructed at the beginning of Sects. 2 and 3.

Our first set of results is stated as follows.

Theorem 1

Let \(d\ge 3\). Assume

  1.

    \(\sigma :\mathbb {R}^d \times \mathbb {R}\rightarrow \mathbb {R}\) satisfies \(|\sigma (x,u_1) - \sigma (x,u_2)| \le c|u_1 - u_2|\) and \(|\sigma (x,u)| \le \sigma _0\) for some \(\sigma _0 > 0\).

  2.

    \(f:\mathbb {R}^d \times \mathbb {R}\rightarrow \mathbb {R}\) satisfies \(|f(x,u_1) - f(x,u_2)| \le c|u_1 - u_2|\) and there exists \(\varphi (x) \in L^1(\mathbb {R}^d) \cap L^{\infty }(\mathbb {R}^d)\) such that

    $$\begin{aligned} |f(x,u)|\le \varphi (x), \forall (x,u) \in \mathbb {R}^d \times \mathbb {R}. \end{aligned}$$
    (4)

Let \(u(t,x)\) be a solution of (1) with \(\mathbb {E}\Vert u(0,x)\Vert ^2_{L^2(\mathbb {R}^d)}<\infty \). Then, we have

$$\begin{aligned} \sup _{t \ge 0} \mathbb {E}\Vert u(t,x)\Vert ^2_{H} < \infty . \end{aligned}$$

Our second result deals with a nonlocal stochastic reaction–diffusion equation:

$$\begin{aligned} \left\{ \begin{array}{ll} \frac{\partial }{\partial t} u = A u + f(u) + \sigma (u) \dot{W}, &{}\quad t \ge 0;\\ u(0,x) = u_0(x).&{}\\ \end{array}\right. \end{aligned}$$
(5)

In this case, \(f:L^2(\mathbb {R}^d) \rightarrow L^2(\mathbb {R}^d)\) is, for example, a nonlocal map of the form

$$\begin{aligned} f(u) = \int _{\mathbb {R}^d} g(x,y,u(t,y)) \text {d}y. \end{aligned}$$

Nonlocal deterministic equations of such type are well known in the literature and have a wide range of applications. In particular, this equation is used in modeling phytoplankton growth in the work [14] by Du and Hsu. Nonlocal equations also model distant interactions in epidemic models (see, e.g., Capasso and Fortunato [4]). A comprehensive description of deterministic and stochastic nonlocal reaction–diffusion equations can be found in the monograph [27].

The conditions for the existence of bounded solutions for nonlocal equations of type (5) were obtained by Da Prato and Zabczyk [12, Proposition 6.1.6] in terms of Liapunov functions. These conditions are rather general. In this paper, we establish the existence of a bounded solution for (5) in the case when the conditions of [12, Proposition 6.1.6] are not fulfilled.

A particular example of a nonlocal nonlinearity, which is used in the model of nonlocal consumption of resources, as well as in nonlocal Fisher–KPP equation, is \(f(u) = u (1-\Vert u\Vert )\) if \(\Vert u\Vert \le 1\) and 0 otherwise (see, e.g., [27, 28]). For nonlinearities of this type, we have the following result:

Theorem 2

Let \(d\ge 3\). Assume

  (i)

    \(\forall u,v \in L^2(\mathbb {R}^d)\), \(\Vert f(u)-f(v)\Vert _{L^2(\mathbb {R}^d)} \le C \Vert u-v\Vert _{L^2(\mathbb {R}^d)}\) and \(\Vert \sigma (\cdot ,u(\cdot )) - \sigma (\cdot ,v(\cdot ))\Vert _{L^2(\mathbb {R}^d)} \le C \Vert u-v\Vert _{L^2(\mathbb {R}^d)}\);

  (ii)

    For some \(N > 0\), \(f(u) = 0\) if \(\Vert u\Vert _{L^2(\mathbb {R}^d)} \ge N\).

  (iii)

    There exists \(\psi (x) \in L^2(\mathbb {R}^d)\) such that

    $$\begin{aligned} |\sigma (x,u)|\le \psi (x), \forall (x,u) \in \mathbb {R}^d \times \mathbb {R}. \end{aligned}$$
    (6)

Let \(u(t,x)\) be a solution of (5) with \(u(0,x) = u_0(x)\in L^2(\mathbb {R}^d)\). Then,

$$\begin{aligned} \sup _{t \ge 0} \mathbb {E}\Vert u(t,x)\Vert ^2_{L^2(\mathbb {R}^d)} < \infty . \end{aligned}$$
(7)
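As a sanity check (ours, not part of the proof), the nonlocal nonlinearity \(f(u) = u (1-\Vert u\Vert )\) for \(\Vert u\Vert \le 1\) and \(0\) otherwise, mentioned above, can be verified numerically to satisfy conditions (i) and (ii) with \(N = 1\); the dimension, sample count, and the crude Lipschitz bound \(3\) below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(u):
    """Nonlocal nonlinearity f(u) = u*(1 - ||u||) if ||u|| <= 1, else 0."""
    norm = np.linalg.norm(u)
    return u * (1.0 - norm) if norm <= 1.0 else np.zeros_like(u)

# Condition (ii): f vanishes outside a ball (here N = 1).
u_big = rng.normal(size=10)
u_big *= 2.0 / np.linalg.norm(u_big)          # ||u_big|| = 2 >= N = 1
assert np.all(f(u_big) == 0.0)

# Condition (i): empirical Lipschitz ratios stay below the crude analytic
# constant 3 (1 from the identity part plus 2 from u -> u*||u|| on the ball).
worst = 0.0
for _ in range(2000):
    u, v = rng.normal(size=10), rng.normal(size=10)
    u *= rng.uniform(0.0, 1.5) / np.linalg.norm(u)
    v *= rng.uniform(0.0, 1.5) / np.linalg.norm(v)
    worst = max(worst, np.linalg.norm(f(u) - f(v)) / np.linalg.norm(u - v))
assert worst <= 3.0
```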

Remark 1

Note that (7) implies \(\sup _{t \ge 0} \mathbb {E}\Vert u(t,x)\Vert ^2_{L^2_{\rho } (\mathbb {R}^d)} < \infty \) for any weight \(\rho \in L^\infty (\mathbb {R}^d)\).

Remark 2

For both of the above theorems, the Lipschitz conditions on \(f\) and \(\sigma \) are used mainly for the existence and uniqueness of solutions, while the global bounds and constraints on them are used to prove the uniform boundedness in time.

Remark 3

Compared with the results of [1], we do not require smallness of the Lipschitz constants of \(f\) and \(\sigma \); this requirement is replaced by somewhat more global conditions.

Roughly speaking, in the case \(d \ge 3\), the Laplace operator has sufficiently strong dissipative properties which compensate for the lack of dissipation coming from \(f(u)\). These results, in conjunction with the compactness property of the semigroup for the Laplace operator in some weighted space defined on \(\mathbb {R}^d\), yield the existence of an invariant measure for (1) using the Krylov–Bogoliubov approach [12, Theorem 6.1.2].
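The Krylov–Bogoliubov mechanism can be illustrated in finite dimensions (this sketch is ours, not from [12]; in infinite dimensions one additionally needs the compactness of the semigroup): for the dissipative Ornstein–Uhlenbeck equation \(\text {d}u = -u\,\text {d}t + \sigma \,\text {d}B\), second moments stay bounded, and time averages of the law converge to the invariant measure \(N(0, \sigma ^2/2)\). Step size, horizon, and tolerances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, dt, n_steps, n_paths = 1.0, 0.01, 50_000, 200
u = 5.0 * rng.normal(size=n_paths)            # spread-out initial data

samples = []
for step in range(n_steps):
    # Euler-Maruyama step for du = -u dt + sigma dB
    u += -u * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    if step >= 2000 and step % 50 == 0:       # skip the transient, thin in time
        samples.append(u.copy())
samples = np.concatenate(samples)

# Invariant measure of the OU process: N(0, sigma^2 / 2)
assert abs(samples.mean()) < 0.05
assert abs(samples.var() - sigma**2 / 2.0) < 0.05
```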

In the analysis of the ergodic behavior of dynamical systems, the uniqueness of invariant measures is a key step. As shown in [12, Theorem 3.2.6], the uniqueness of the invariant measure implies that the solution process is ergodic. However, establishing the uniqueness property of the invariant measure is highly nontrivial. One approach, illustrated in [12, Chapter 7], shows that the uniqueness is a consequence of a strong Feller property and irreducibility. Typically, in order to apply this result, one needs to impose rather restrictive conditions both on the diffusion coefficient and on the semigroup \(\{S(t)\}_{t\ge 0}\) generated by the elliptic operator. In particular, the diffusion operator has to be bounded and nondegenerate, while the semigroup has to be square integrable in some Hilbert–Schmidt norm [12, Hypothesis 7.1(iv)]. However, this condition does not hold for the Laplace operator in unbounded domains.

In the second part of our work, we use a different approach to establishing the uniqueness of invariant measures which does not require [12, Hypothesis 7.1(iv)]. This approach, reminiscent of [12, Theorem 6.3.2], is based on the fact that if the semigroup has an exponential contraction property

$$\begin{aligned} \Vert S(t) u\Vert \le M \text {e}^{-\gamma t} \Vert u\Vert , \end{aligned}$$
(8)

for some \(M,\,\,\gamma >0\), then the corresponding dynamical system possesses a unique solution which is stable and uniformly bounded in expectation. This solution is utilized in the proof of the uniqueness of the invariant measure. The condition (8) holds in particular if \(A\) is the Laplace operator \(\varDelta \) in a bounded domain \(G\) with Dirichlet boundary condition. Our result, however, deals with an example when \(G\) is unbounded.

Consider

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial }{\partial t} u(t,x) = A u(t,x) + f(x,u(t,x)) + \sigma (x,u(t,x)) \dot{W}(t,x), t > 0,\,\,\,x \in G;\\ u(0,x) = u_0(x) \end{array}\right. } \end{aligned}$$
(9)

where

  • \(G = \mathbb {R}^d_+:=\{x = (x_1, x_2,\ldots , x_d) \in \mathbb {R}^d, x_d>0\}\);

  • \(\rho (x):= \text {e}^{-|x|^2}\), \(x \in G\);

  • \(H = L^2_{\rho } (G)\);

    $$\begin{aligned} A u := \frac{1}{\rho } \text {div}(\rho \nabla u); \end{aligned}$$
    (10)
  • \(D(A) := H^2_{\rho }(G) \cap H^1_{0,\rho }(G)\);

  • \(f(x,u): G \times \mathbb {R}\rightarrow \mathbb {R}\) and \(\sigma (x,u): G \times \mathbb {R}\rightarrow \mathbb {R}\) satisfy

    $$\begin{aligned} |f(x,u_1)-f(x,u_2)| \le L |u_1-u_2|; \ \ |\sigma (x,u_1)-\sigma (x,u_2)| \le L |u_1-u_2|,\nonumber \\ \end{aligned}$$
    (11)

    with Lipschitz constant \(L\) independent of \(x\);

  • $$\begin{aligned} f(x,0) \in L^{\infty }(G) \text { and } \sigma (x,0)\in L^{\infty }(G). \end{aligned}$$
    (12)

Note that the elliptic operator \(A\) given by (10) appears in quantum mechanics in the analysis of the energy levels of the harmonic oscillator.
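The identification of (10) with the Hermite (harmonic oscillator) operator can be checked symbolically; the following sketch (ours, in a one-dimensional slice) verifies \(\frac{1}{\rho }(\rho u')' = u'' - 2 x u'\) for \(\rho = \text {e}^{-x^2}\).

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')
rho = sp.exp(-x**2)

# A u = (1/rho) d/dx (rho du/dx) for rho = exp(-x^2)
Au = sp.simplify(sp.diff(rho * sp.diff(u(x), x), x) / rho)
hermite_op = sp.diff(u(x), x, 2) - 2 * x * sp.diff(u(x), x)
assert sp.simplify(Au - hermite_op) == 0
```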

Under the assumptions above, the initial value problem (9) is well posed (see Theorem 4, p. 14). Our main result for (9) is the following theorem.

Theorem 3

Assume the Lipschitz constant \(L\) in (11) is sufficiently small [see (43) and (44) below]. Then, Eq. (9) has a unique solution \(u^{*}(t,x)\) which is defined for all \(t \in \mathbb {R}\) and satisfies

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u^{*}(t,x)\Vert _H^2 < \infty . \end{aligned}$$

This solution is exponentially stable (in the sense of Definition 3, p. 14).

In Sect. 4, the above solution will be used to prove the existence and uniqueness of the invariant measure for (9). In fact, it will be shown that \(u^*\) is a stationary random process.

Remark 4

Our approach was motivated by the following simple observation: if \(v(t,x), x \in [0,1], t \in \mathbb {R}\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} v_t(t,x) = v_{xx}(t,x);&{}\\ v(t,0) = v(t,1) = 0, &{}\quad t \in \mathbb {R};\\ v(0,x) = \varphi (x), &{}\quad x \in [0,1], \end{array}\right. } \end{aligned}$$
(13)

then the only exponentially stable solution that satisfies

$$\begin{aligned} \sup _{t \in \mathbb {R}} \Vert v(t,x)\Vert ^2_{L^2([0,1])} < \infty \end{aligned}$$

is \(v\equiv 0\) (with \(\varphi \equiv 0\)). Theorem 3 is an analog of this fact for the nonlinear stochastic reaction–diffusion equation (9).
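This observation admits a quick numerical companion (an illustrative finite-difference check, not part of the argument): for (13) with \(\varphi (x) = \sin (\pi x)\), the \(L^2\) norm decays like \(\text {e}^{-\pi ^2 t}\), so no nonzero solution can stay bounded on all of \(t \in \mathbb {R}\). Grid and time step below are illustrative.

```python
import numpy as np

nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2                     # stable explicit time step (r = 0.4 < 1/2)
v0 = np.sin(np.pi * x)               # eigenfunction initial data
v = v0.copy()

t, t_end = 0.0, 0.1
while t < t_end - 1e-12:
    v[1:-1] += dt * (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    v[0] = v[-1] = 0.0               # Dirichlet boundary condition
    t += dt

decay = np.linalg.norm(v) / np.linalg.norm(v0)   # ||v(t)|| / ||v(0)||
assert abs(decay - np.exp(-np.pi**2 * t)) < 5e-3
```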

Remark 5

In contrast to Theorems 1 and 2, where the condition \(d \ge 3\) is essential, here there is no restriction on the spatial dimension.

The paper is organized as follows. Section 2 deals with the existence of invariant measure for the reaction–diffusion equation (1) with \(A = \varDelta \) in \(\mathbb {R}^d\) and \(d \ge 3\) (Theorems 1, 2). Section 3 is devoted to the proof of Theorem 3 for Eq. (9). The uniqueness of the invariant measure as a consequence of Theorem 3 is established in Sect. 4.

2 Invariant Measure in the Entire Space

In this section, we study the problem (1) with \(A = \varDelta \) and \(G = \mathbb {R}^d\). Let \(\{e_k, k \ge 1\}\) be an orthonormal basis in \(L^2(\mathbb {R}^d)\) such that

$$\begin{aligned} \sup _{k} \Vert e_k(x)\Vert _{L^{\infty }(\mathbb {R}^d)} \le 1. \end{aligned}$$
(14)

We note that such a basis exists. For example, consider

$$\begin{aligned} e^{(k)}_{n}(x):= \left\{ \frac{1}{\sqrt{\pi }} \sin \left( n x\right) \chi _{\left[ 2 \pi k, 2 \pi (k+1)\right] }(x),\ \frac{1}{\sqrt{\pi }} \cos \left( n x \right) \chi _{\left[ 2 \pi k, 2 \pi (k+1)\right] }(x) \right\} , \, n \ge 0, \, k \in \mathbb {Z}, \end{aligned}$$

where \(\chi _{[2 \pi k, 2 \pi (k+1)]}(x)\) is the characteristic function of \([2 \pi k, 2 \pi (k+1)]\). Clearly,

$$\begin{aligned} \sup _{n \ge 0, k \in \mathbb {Z}} \Vert e_n^{(k)}(x)\Vert _{L^{\infty }(\mathbb {R})} \le 1, \end{aligned}$$

and

$$\begin{aligned} \left\{ e_n^{(k)}(x), n \ge 0, k \in \mathbb {Z}\right\} \text { is a basis in } L^2(\mathbb {R}). \end{aligned}$$

The basis in \(\mathbb {R}^d\) for \(d>1\) can be constructed analogously.
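The orthonormality and the sup-norm bound (14) can be confirmed numerically on a single block \([0, 2\pi ]\); the sketch below (ours, with the normalization \(1/\sqrt{\pi }\) and only the modes \(n = 1, 2, 3\) for brevity) computes the Gram matrix by the trapezoidal rule.

```python
import numpy as np

xs = np.linspace(0.0, 2.0 * np.pi, 20001)
dx = xs[1] - xs[0]

family = []
for n in (1, 2, 3):
    family.append(np.sin(n * xs) / np.sqrt(np.pi))
    family.append(np.cos(n * xs) / np.sqrt(np.pi))

gram = np.empty((6, 6))
for i, ei in enumerate(family):
    for j, ej in enumerate(family):
        prod = ei * ej                         # trapezoidal rule on [0, 2*pi]
        gram[i, j] = dx * (prod.sum() - 0.5 * (prod[0] + prod[-1]))

assert np.allclose(gram, np.eye(6), atol=1e-6)         # orthonormality
assert max(np.max(np.abs(e)) for e in family) <= 1.0   # sup-norm bound (14)
```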

We now define the Wiener process \(W(t,x)\) as

$$\begin{aligned} W(t,x) := \sum _{k=1}^{\infty }\sqrt{a_k} \beta _k(t) e_k(x) \end{aligned}$$
(15)

with

$$\begin{aligned} a:= \sum _{k=1}^{\infty } a_k < \infty . \end{aligned}$$

In the above, the \(\beta _k(t)\)s are independent standard one-dimensional Wiener processes on \(t \ge 0\). Let \((\Omega , \mathcal {F}, P)\) be a probability space, and let \(\mathcal {F}_{t}\) be a right-continuous filtration such that \(W(t,x)\) is adapted to \(\mathcal {F}_t\) and \(W(t)-W(s)\) is independent of \(\mathcal {F}_s\) for all \(s<t\). As shown in [11, pp. 88–89], the series (15) converges both in mean square and with probability one.
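A quick Monte Carlo check of the construction (15) (ours, with illustrative choices \(a_k = 2^{-k}\) and a bounded family \(e_k\) that is not necessarily the basis above) confirms the pointwise variance \(\mathrm {Var}\, W(t,x) = t \sum _k a_k e_k(x)^2\):

```python
import numpy as np

rng = np.random.default_rng(2)
t, K, n_samples = 2.0, 30, 100_000
a = 2.0 ** -np.arange(1, K + 1)               # a_k = 2^{-k}, so sum a_k < infinity
x0 = 0.3
e_at_x0 = np.sin(np.arange(1, K + 1) * x0)    # a bounded family, |e_k(x0)| <= 1

# beta_k(t) ~ N(0, t), independent over k; series (15) truncated at K, at x = x0
betas = rng.normal(scale=np.sqrt(t), size=(n_samples, K))
W = betas @ (np.sqrt(a) * e_at_x0)

target = t * np.sum(a * e_at_x0**2)           # theoretical Var W(t, x0)
assert abs(W.var() - target) / target < 0.05
```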

We next proceed with a rigorous definition of a mild solution of (1) [11, 12]:

Definition 1

Let \(H\) be a Hilbert space of functions defined on \(\mathbb {R}^d\). An \(\mathcal {F}_t\)-adapted random process \(u(t,\cdot ) \in H\) is called a mild solution of (1) if it satisfies the following integral relation for \(t \ge 0\):

$$\begin{aligned} u(t,\cdot ) = S(t)u_0(\cdot ) + \int _{0}^{t} S(t-s)f(\cdot , u(s,\cdot )) \text {d}s + \int _{0}^{t} S(t-s) \sigma (u(s,\cdot )) \text {d} W(s, \cdot )\nonumber \\ \end{aligned}$$
(16)

where \(\{S(t), t \ge 0\}\) is the semigroup for the linear heat equation, i.e.,

$$\begin{aligned} S(t) u(x) := \int _{\mathbb {R}^d} G(t,x-y) u(y) \text {d}y. \end{aligned}$$
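The semigroup property \(S(t)S(s) = S(t+s)\), i.e., \(G(t,\cdot ) * G(s,\cdot ) = G(t+s,\cdot )\), can be verified numerically (grid and time points below are arbitrary illustrative choices):

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]

def G(t):
    """1-D heat kernel sampled on the grid x."""
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

t, s = 0.7, 1.3
lhs = np.convolve(G(t), G(s), mode="same") * dx   # (G(t,.) * G(s,.))(x)
assert np.max(np.abs(lhs - G(t + s))) < 1e-6      # semigroup property
```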

It was shown (see, for example, [1, 19, 20]) that if both \(f\) and \(\sigma \) are Lipschitz in \(u\), the initial value problem (1) admits a unique mild solution \(u(t,x)\) in \(H = L^2_{\rho }(\mathbb {R}^d)\). Moreover, as proved in [26, Proposition 2.1], if two nonnegative admissible weights \(\rho (x)\) and \(\zeta (x)\) in \(\mathbb {R}^d\) satisfy

$$\begin{aligned} \int _{\mathbb {R}^d} \frac{\zeta (x)}{\rho (x)} \, \text {d}x < \infty , \end{aligned}$$
(17)

then

$$\begin{aligned} S(t): L^2_{\rho }(\mathbb {R}^d) \rightarrow L^2_{\zeta }(\mathbb {R}^d) \text { is a compact map. } \end{aligned}$$
(18)

Based on this result, the Krylov–Bogoliubov theorem yields the existence of an invariant measure on \(L_\zeta ^2(\mathbb {R}^d)\) provided

$$\begin{aligned} \sup _{t \ge 0} \mathbb {E}\Vert u(t,x)\Vert ^2_{L^2_{\rho }(\mathbb {R}^d)} < \infty . \end{aligned}$$
(19)

([26, Theorem 3.1] and [1, Theorem 2]). Theorems 1 and 2 establish precisely the existence of a solution satisfying this condition.

We now proceed to the proof of Theorem 1.

Proof

Let \(u(t,x)\) be a solution of (1). Applying the elementary inequality \((a+b+c)^2 \le 3 (a^2 + b^2 + c^2)\) to (16), we have

$$\begin{aligned} \Vert u(t,x)\Vert ^2_{H} \le 3 \Big (I_1(t) + I_2(t) + I_3(t)\Big ) \end{aligned}$$

where

$$\begin{aligned} I_1(t)= & {} \int _{\mathbb {R}^d}|S(t) u(0,x)|^2 \rho \text {d}x;\\ I_2(t)= & {} \int _{\mathbb {R}^d}\left| \int _{0}^{t} S(t-s)f(x, u(s,x)) \text {d}s\right| ^2 \rho \text {d}x;\\ I_3(t)= & {} \int _{\mathbb {R}^d}\left| \int _{0}^{t} S(t-s) \sigma (u(s,x)) \text {d}W(s, x)\right| ^2 \rho \text {d}x. \end{aligned}$$

We will show that

$$\begin{aligned} \sup _{t \ge 0} \mathbb {E}I_{i}(t) < \infty , \ \ \ i = 1,2,3. \end{aligned}$$

For \(I_1\), we have by the \(L^2\)-contraction property of \(S(t)\) that

$$\begin{aligned} \sup _{t \ge 0} \mathbb {E}I_{1}(t) \le \Vert \rho \Vert _{\infty } \sup _{t \ge 0} \mathbb {E}\Vert S(t) u(0,x)\Vert ^2_{L^2(\mathbb {R}^d)} \le \Vert \rho \Vert _{\infty } \mathbb {E}\Vert u(0,x)\Vert ^2_{L^2(\mathbb {R}^d)} < \infty . \end{aligned}$$

We next estimate \(I_2\) for \(t \ge 1\) in the following manner:

$$\begin{aligned} I_2(t)= & {} \int _{\mathbb {R}^d}\left| \int _{0}^{t} \int _{\mathbb {R}^d}G(t-s,x-y)f(y, u(s,y)) \text {d}y \text {d}s\right| ^2 \rho \text {d}x\\&\le 2\int _{\mathbb {R}^d}\left| \int _{0}^{t-1} \int _{\mathbb {R}^d}G(t-s,x-y)f(y, u(s,y)) \text {d}y \text {d}s\right| ^2 \rho \text {d}x\\&+\, 2\int _{\mathbb {R}^d}\left| \int _{t-1}^{t} \int _{\mathbb {R}^d}G(t-s,x-y)f(y, u(s,y)) \text {d}y \text {d}s\right| ^2 \rho \text {d}x. \end{aligned}$$

First, using (4), we have

$$\begin{aligned} \int _{\mathbb {R}^d}\left| \int _{t-1}^{t} \int _{\mathbb {R}^d}G(t-s,x-y)f(y, u(s,y)) \text {d}y \text {d}s\right| ^2 \rho \text {d}x \le \Vert \varphi \Vert ^2_{\infty } \Vert \rho \Vert _{L^1(\mathbb {R}^d)}. \end{aligned}$$

Second, consider

$$\begin{aligned}&\int _{\mathbb {R}^d}\left| \int _{0}^{t-1} \int _{\mathbb {R}^d}G(t-s,x-y)f(y, u(s,y)) \text {d}y \text {d}s\right| ^2 \rho \text {d}x\\&\quad \le \int _{\mathbb {R}^d}\left| \int _{0}^{t-1} \int _{\mathbb {R}^d}\frac{1}{(4 \pi (t-s))^{d/2}}\text {e}^{ - \frac{|x-y|^2}{4 (t-s)}} \varphi (y) \text {d}y \text {d}s\right| ^2 \rho \text {d}x\\&\quad \le \Vert \rho \Vert _{L^1(\mathbb {R}^d)} \Vert \varphi \Vert ^2_{L^1(\mathbb {R}^d)} \left| \int _{0}^{t-1}\frac{\text {d}s}{(4 \pi (t-s))^{d/2}}\right| ^2. \end{aligned}$$

Therefore,

$$\begin{aligned} \sup _{t \ge 0} \mathbb {E}I_2(t) \le \Vert \varphi \Vert ^2_{\infty } \Vert \rho \Vert _{L^1(\mathbb {R}^d)} + \frac{1}{(4 \pi )^{d}} \Vert \rho \Vert _{L^1(\mathbb {R}^d)} \Vert \varphi \Vert ^2_{L^1(\mathbb {R}^d)} \left( \int _{1}^{\infty }\frac{d \tau }{\tau ^{d/2}}\right) ^2 < \infty \end{aligned}$$

where the condition \(d\ge 3\) is used in the last step.
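The convergence used in this last step, \(\int _1^{\infty } \tau ^{-d/2}\, \text {d}\tau = \frac{2}{d-2}\) for \(d \ge 3\) (with divergence for \(d = 2\)), can be confirmed symbolically:

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
for d in (3, 4, 5):
    val = sp.integrate(tau ** sp.Rational(-d, 2), (tau, 1, sp.oo))
    assert val == sp.Rational(2, d - 2)       # finite exactly when d >= 3

assert sp.integrate(1 / tau, (tau, 1, sp.oo)) == sp.oo   # d = 2 diverges
```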

It remains to show that \(\displaystyle \sup _{t \ge 0} \mathbb {E}I_3(t) < \infty \). First note that

$$\begin{aligned}&\mathbb {E}\left| \int _{0}^{t} \int _{\mathbb {R}^d}G(t-s,x-y) \sigma (y,u(s,y)) \text {d}y \text {d}W(s,y)\right| ^2\\&\quad = \mathbb {E}\int _{0}^{t} \sum _{k=1}^{\infty } a_k \left( \int _{\mathbb {R}^d} G(t-s,x-y) \sigma (y,u(s,y)) e_k(y) \text {d}y\right) ^2 \text {d}s\\&\quad = \mathbb {E}\int _{0}^{t-1} \sum _{k=1}^{\infty } a_k \left( \int _{\mathbb {R}^d} G(t-s,x-y) \sigma (y,u(s,y)) e_k(y) \text {d}y\right) ^2 \text {d}s\\&\qquad +\, \mathbb {E}\int _{t-1}^{t} \sum _{k=1}^{\infty } a_k \left( \int _{\mathbb {R}^d} G(t-s,x-y) \sigma (y,u(s,y)) e_k(y) \text {d}y\right) ^2 \text {d}s\\&\quad \le \sigma _0^2 \int _{0}^{t-1} \int _{\mathbb {R}^d} G^2(t-s,x-y) \text {d}y \sum _{k=1}^{\infty } a_k \int _{\mathbb {R}^d} e_k^2(y) \text {d}y \\&\qquad +\, \sigma _0^2 \int _{t-1}^{t} \sum _{k=1}^{\infty } a_k \left( \int _{\mathbb {R}^d} G(t-s,x-y) \text {d}y\right) ^2 \text {d}s.\\&\quad \le a \sigma _0^2 \left( \int _{0}^{t-1} \int _{\mathbb {R}^d} G^2(t\!-\!s,y) \text {d}y \text {d}s \!+\! 1 \right) \le a \sigma _0^2 \left( \int _{0}^{t-1} \frac{1}{(t\!-\!s)^{\frac{d}{2}}} \text {d}s \!+\! 1 \right) \!\le \! C \!<\! \infty \end{aligned}$$

Therefore,

$$\begin{aligned} \mathbb {E}I_3(t) \!=\! \int _{\mathbb {R}^d} \mathbb {E}\left| \!\int _{0}^{t} \!\int _{\mathbb {R}^d}G(t\!-\!s,x\!-\!y) \sigma (y,u(s,y)) \text {d}y \text {d}W(s,y) \right| ^2 \rho \text {d}x \le C\Vert \rho \Vert _{L^1(\mathbb {R}^d)} \end{aligned}$$

which is bounded uniformly in \(t\), thus concluding the proof. \(\square \)

We next prove Theorem 2.

Proof

(For simplicity, we omit the \(x\) variable in \(f\) and \(\sigma \).) Let \(\Vert u(0,x) \Vert _{L^2(\mathbb {R}^d)} = Z\) and \(M:=\max \{Z,N\}\) where \(N\) is given by the condition (ii). For given \(t>0\), consider the random variable

$$\begin{aligned} \tau = {\left\{ \begin{array}{ll} \sup \left\{ 0 < s \le t: \Vert u(s,x)\Vert _{L^2(\mathbb {R}^d)} = M + 1\right\} &{}\text { if the given set is nonempty }\\ t, &{}\text { otherwise. } \end{array}\right. } \end{aligned}$$

Introduce

$$\begin{aligned} C :=\left\{ \omega \in \Omega : \Vert u(t,x,\omega )\Vert _{L^2(\mathbb {R}^d)} > M+1\right\} . \end{aligned}$$

It follows from the local Hölder continuity in time of solutions of (1) [25] that \(\Vert u(s,x,\omega )\Vert _{L^2(\mathbb {R}^d)}\) is continuous in \(s\). Therefore, for \(\omega \in \left\{ \tau (\omega ) < t\right\} \bigcap C\), we have

$$\begin{aligned} \Vert u(s,x,\omega )\Vert _{L^2(\mathbb {R}^d)}> M+1, \ \ s \in (\tau , t] \end{aligned}$$

Note that a stochastic integral \(f(t):=\int _{0}^{t}g(s)\text {d}W(s)\) has a.s. continuous paths in \(t\). Thus, if \(\tau \) is a random time, the expression \(f(\tau )\) is well defined [16]. This fact, in conjunction with the uniqueness property of the mild solution, enables us to write

$$\begin{aligned} u(t) = S(t-\tau ) u(\tau ) + \int _{\tau }^{t} S(t-s) f(u(s)) \text {d}s + \int _{\tau }^{t} S(t-s) \sigma (u(s)) \text {d}W(s).\nonumber \\ \end{aligned}$$
(20)

Furthermore,

$$\begin{aligned} \mathbb {E}\Vert u(t,\omega )\Vert _{L^2(\mathbb {R}^d)}^2= & {} \int _{\left\{ \omega : \Vert u\Vert _{L^2(\mathbb {R}^d)} \le M+1\right\} } \Vert u(t,\omega )\Vert _{L^2(\mathbb {R}^d)}^2 dP(\omega ) \\&+\, \int _{C}\Vert u(t,\omega )\Vert _{L^2(\mathbb {R}^d)}^2 dP(\omega )\\&\le \, (M\!+\!1)^2 \!+\! \int _{C} \Vert u(t,\omega )\Vert ^2 dP(\omega )\quad \end{aligned}$$

It follows from the condition (ii) and (20) that for \(\omega \in C\)

$$\begin{aligned} u(t, \omega ) = S(t-\tau ) u(\tau , \omega ) + \int _{\tau }^{t} S(t - s) \sigma (u(s)) \text {d}W(s) \end{aligned}$$

and therefore

$$\begin{aligned}&\int _{C} \Vert u(t,\omega )\Vert ^2_{L^2(\mathbb {R}^d)} d P(\omega ) \le 2 \left[ \int _{C}\Vert S(t-\tau )u(\tau )\Vert _{L^2(\mathbb {R}^d)}^2 dP(\omega )\right. \nonumber \\&\quad \quad \left. +\, \int _{C}\left\| \int _{\tau }^{t} S(t-s) \sigma (u(s)) \text {d}W(s)\right\| _{L^2(\mathbb {R}^d)}^2 dP(\omega ) \right] \nonumber \\&\quad \le 2 \left[ \mathbb {E}\Vert S(t-\tau )u(\tau )\Vert _{L^2(\mathbb {R}^d)}^2 + \mathbb {E}\left\| \int _{\tau }^{t} S(t-s) \sigma (u(s)) \text {d}W(s)\right\| _{L^2(\mathbb {R}^d)}^2\right] \end{aligned}$$
(21)

The first term is bounded by using the contraction property of \(S(t)\) in \(L^2(\mathbb {R}^d)\):

$$\begin{aligned} \mathbb {E}\Vert S(t-\tau )u(\tau )\Vert _{L^2(\mathbb {R}^d)}^2 \le \mathbb {E}\Vert u(\tau )\Vert _{L^2(\mathbb {R}^d)}^2 \le (M+1)^2. \end{aligned}$$

For the second term in (21), we compute,

$$\begin{aligned}&\mathbb {E}\left\| \int _{\tau }^{t} S(t-s) \sigma (u(s)) \text {d}W(s)\right\| _{L^2(\mathbb {R}^d)}^2 \\&\quad \le \mathbb {E}\left( \sup _{0 \le \nu \le t} \left\| \int _{\nu }^{t} S(t-s) \sigma (u(s)) \text {d}W(s)\right\| _{L^2(\mathbb {R}^d)}^2\right) \\&\quad \le 2 \mathbb {E}\left\| \int _{0}^{t} S(t-s) \sigma (u(s)) \text {d}W(s)\right\| ^2_{L^2(\mathbb {R}^d)} \\&\qquad +\, 2 \mathbb {E}\left( \sup _{0 \le \nu \le t} \left\| \int _{0}^{\nu } S(t-s) \sigma (u(s)) \text {d}W(s)\right\| ^2_{L^2(\mathbb {R}^d)}\right) \end{aligned}$$

By the following Doob’s inequality for martingales,

$$\begin{aligned} \mathbb {E}\left( \sup _{0 \le \nu \le t}\left| \int _{0}^{\nu } \sum _{k=1}^{\infty } g_k(s) d \beta _{k}(s)\right| ^2\right) \le 4 \sum _{k=1}^{\infty } \mathbb {E}\int _{0}^{t} |g_k(s)|^2 \text {d}s, \end{aligned}$$

we have

$$\begin{aligned}&\mathbb {E}\left( \sup _{0\le \nu \le t} \left\| \int _{0}^{\nu } S(t-s) \sigma (u(s)) \text {d}W(s)\right\| ^2_{L^2(\mathbb {R}^d)} \right) \nonumber \\&\quad = \mathbb {E}\left( \sup _{0\le \nu \le t} \left( \!\int _{\mathbb {R}^d} \left| \int _{0}^{\nu }\! \sum _{k=1}^{\infty } \sqrt{a_k} \int _{\mathbb {R}^d}\!G(t-s,x-y) \sigma (y,u(s,y,\omega )) e_k(y) \!\text {d}y d \beta _k(s)\right| ^2 \text {d}x\right) \right) \nonumber \\&\quad \le 4 \int _{\mathbb {R}^d}\left( \mathbb {E}\int _{0}^{t} \sum _{k=1}^{\infty } a_k \left( \int _{\mathbb {R}^d}G(t-s,x-y)\sigma (y,u(s,y,\omega )) e_k(y) \text {d}y\right) ^2 \text {d}s \right) \text {d}x\nonumber \\ \end{aligned}$$
(22)

As in the proof of Theorem 1, we split \(\int _{0}^{t} =\int _{0}^{t-1} + \int _{t-1}^{t}\). Then,

$$\begin{aligned}&\int _{\mathbb {R}^d}\left( \mathbb {E}\int _{t-1}^{t} \sum _{k=1}^{\infty } a_k \left( \int _{\mathbb {R}^d}G(t-s,x-y)\sigma (y,u(s,y,\omega )) e_k(y) \text {d}y\right) ^2 \text {d}s \right) \text {d}x\\&\quad \le \!\!\sum _{k=1}^{\infty } a_k \!\int _{t\!-1}^{t} \!\int _{\mathbb {R}^d} \left( \int _{\mathbb {R}^d}\!G\!(t\!-\!s,x\!-\!y) \, \text {d}y\right) \!\left( \int _{\mathbb {R}^d}\!G\!(t\!-\!s,x\!-\!y) \psi ^2(y) e_k^2 (y) \text {d}y\!\right) \!\text {d}x \text {d}s\\&\quad = \sum _{k=1}^{\infty } a_k \int _{t-1}^{t} \int _{\mathbb {R}^d} \int _{\mathbb {R}^d}G(t-s,x-y) \psi ^2(y) e_k^2 (y) \text {d}y \text {d}x \text {d}s\\&\quad = \!\sum _{k=1}^{\infty } a_k \int _{t-1}^{t} \!\int _{\mathbb {R}^d}G(t\!-\!s,x\!-\!y) \text {d}x \!\int _{\mathbb {R}^d}\psi ^2(y) e_k^2 (y) \text {d}y \text {d}s \le \sum _{k=1}^{\infty } a_k \Vert \psi \Vert _{L^2(\mathbb {R}^d)}^2 \!\!<\! \infty . \end{aligned}$$

Next,

$$\begin{aligned}&\int _{\mathbb {R}^d}\left( \mathbb {E}\int _{0}^{t-1} \sum _{k=1}^{\infty } a_k \left( \int _{\mathbb {R}^d}G(t-s,x-y)\sigma (y,u(s,y,\omega )) e_k(y) \text {d}y\right) ^2 \text {d}s \right) \text {d}x\\&\quad \le \sum _{k=1}^{\infty } a_k \int _{0}^{t-1} \int _{\mathbb {R}^d} \left( \int _{\mathbb {R}^d} G^2(t-s,x-y) e_k^2(y) \, \text {d}y \right) \left( \int _{\mathbb {R}^d} \psi ^2(y) \, \text {d}y\right) \text {d}x \text {d}s\\&\quad \le \sum _{k=1}^{\infty } a_k \Vert \psi \Vert ^2_{L^2(\mathbb {R}^d)} \int _{0}^{t-1} \int _{\mathbb {R}^d} G^2(t-s,z) \text {d}z \text {d}s\\&\quad \le \sum _{k=1}^{\infty } a_k \Vert \psi \Vert ^2_{L^2(\mathbb {R}^d)} \int _{0}^{t-1} \frac{1}{(t-s)^{\frac{d}{2}}}\text {d}s < \infty . \end{aligned}$$

The above estimates complete the proof of Theorem 2. \(\square \)

3 Proof of Theorem 3

In this section, we analyze Eq. (9). We follow the notation introduced immediately after (9) on page 5. For the proof, we introduce the following infinite-dimensional Wiener process:

$$\begin{aligned} W(t,x) = \sum _{k=1}^{\infty }\sqrt{a_k} \beta _k(t) e_k(x) \end{aligned}$$
(23)

where \(e_k(x)\)s satisfy (14) and we also require

$$\begin{aligned} a:= \sum _{k=1}^{\infty } a_k < \infty . \end{aligned}$$

In contrast to the previous section, the Wiener process in this section is defined for all \(t \in \mathbb {R}\). It can be constructed as follows:

$$\begin{aligned} \beta _k(t) = \left\{ \begin{array}{ll} \beta _k^{(1)}(t),&{}\quad \text {for}\,\,\,t \ge 0\\ \beta _k^{(2)}(-t),&{}\quad \text {for}\,\,\,t \le 0 \end{array} \right. , \end{aligned}$$

where \(\beta _k^{(1)}\) and \(\beta ^{(2)}_k\) are independent standard one-dimensional Wiener processes. Also, let

$$\begin{aligned} \mathcal {F}_t := \sigma \left\{ \beta _k(v) - \beta _k(u): u \le v \le t, k \ge 1\right\} \end{aligned}$$

be the \(\sigma \)-algebra generated by \(\{\beta _k(v) - \beta _k(u): u \le v \le t, k \ge 1\}\).

Our proof heavily relies on the spectral properties of the operator \(A\) in some weighted space. These are described next.

3.1 Eigenvalue Problem for \(A\)

In the case \(d=1\), consider the weight function \(\rho = \text {e}^{-x^2}\). We then have the following problem for determining the spectrum: find all \(\mu \in \mathbb {R}\) and \(w \in H = L^2_{\rho }(\mathbb {R}^+)\) such that

$$\begin{aligned} \text {e}^{x^2}\frac{d}{d x}\left( \frac{d w}{d x} \text {e}^{-x^2}\right) = \mu w, \quad x > 0; \end{aligned}$$
(24)

satisfying

$$\begin{aligned} \int _{0}^{\infty } w^2 \text {e}^{-x^2} \text {d}x < \infty \end{aligned}$$
(25)

and

$$\begin{aligned} w(0) = 0. \end{aligned}$$
(26)

Problem (24) is the well-known eigenvalue problem for the harmonic oscillator [24, pp. 218–219]. It has a nonzero solution satisfying (25) only for \(\mu = - 2 n, n = 0, 1, 2, \ldots \), and the solutions are the Hermite polynomials \(w_n = H_n(x)\). Moreover, the condition (26) implies that \(n\) must be odd. Therefore, the eigenvalues of (24) are \(\mu = 2 - 4p,\ p = 1, 2, 3,\ldots \)
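This computation can be verified symbolically (an illustrative check with SymPy's physicists' Hermite polynomials):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(1, 8):
    H = sp.hermite(n, x)                       # physicists' Hermite polynomial
    # Hermite ODE: w'' - 2 x w' = -2 n w, i.e. eigenvalue mu = -2n
    residual = sp.diff(H, x, 2) - 2 * x * sp.diff(H, x) + 2 * n * H
    assert sp.expand(residual) == 0
    # boundary condition (26) holds exactly for odd n
    assert (H.subs(x, 0) == 0) == (n % 2 == 1)
```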

If \(d > 1\), the eigenvalue problem reads as

$$\begin{aligned} \Delta w - 2 (\nabla w, x) = \mu w, \end{aligned}$$
(27)

subject to

$$\begin{aligned} \int _{\mathbb {R}^d_+} w^2 \text {e}^{-|x|^2} \text {d}x < \infty \end{aligned}$$
(28)

and

$$\begin{aligned} w(x_1,\ldots ,x_{d-1}, 0) = 0. \end{aligned}$$
(29)

We proceed with looking for the solutions of (27) using separation of variables,

$$\begin{aligned} w(x_1,x_2,\ldots ,x_d) = w_1(x_1) w_2(x_2) \ldots w_d(x_d), \end{aligned}$$

with \(w_i\) solving

$$\begin{aligned} \text {e}^{x_i^2}\frac{\text {d}}{\text {d} x_i}\left( \frac{\text {d} w_i}{\text {d} x_i} \text {e}^{-x_i^2}\right) = \lambda _i w_i, \ i = 1,\ldots ,d \end{aligned}$$
(30)

subject to

$$\begin{aligned} \int _{\mathbb {R}} w_i^2(x) \text {e}^{-x^2} \text {d}x < \infty , \quad i = 1,\ldots , d-1; \end{aligned}$$
(31)

and

$$\begin{aligned} \int _{0}^{\infty } w_d^2(x) \text {e}^{-x^2} \text {d}x < \infty , \ w_d(0) = 0. \end{aligned}$$
(32)

It follows from the condition (31) that for \(i = 1,\ldots ,d-1\), we have

$$\begin{aligned} \lambda _i = -2 p, p = 0, 1, 2,\ldots \end{aligned}$$

while due to (32)

$$\begin{aligned} \lambda _d = -2 - 4p, p=0, 1, 2, \ldots \end{aligned}$$

An arbitrary eigenvalue \(\mu \) of (27) satisfies \(\mu = \lambda _1 + \cdots + \lambda _d\). In particular, the largest eigenvalue of (27) is \(\mu _1 = -2\) (which corresponds to \(\lambda _1 = \cdots = \lambda _{d-1} = 0\), \(\lambda _d = -2\)).
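As an illustration of the additivity \(\mu = \lambda _1 + \cdots + \lambda _d\), a brute-force enumeration over the first few one-dimensional eigenvalues (the truncation at six values per factor is an arbitrary choice of ours) recovers the largest eigenvalue \(-2\) for, e.g., \(d = 3\):

```python
import itertools

d = 3
lam_free = [-2 * p for p in range(6)]      # lambda_i, i < d: full-line spectrum
lam_bdry = [-2 - 4 * p for p in range(6)]  # lambda_d: half-line, Dirichlet at 0

mus = sorted({sum(ls) + ld
              for ls in itertools.product(lam_free, repeat=d - 1)
              for ld in lam_bdry}, reverse=True)

assert mus[0] == -2                        # largest eigenvalue mu_1 = -2
assert all(m <= -2 for m in mus)           # the spectrum lies in (-inf, -2]
```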

With the above, we have the following technical lemmas.

Lemma 1

Let \(S(t):H \rightarrow H\) be the semigroup generated by \(A\), i.e., \(S(t)u_0(x) := u(t,x)\), where \(u(t,x)\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(t,x) = A u(t,x)\\ u(0,x) = u_0(x). \end{array}\right. } \end{aligned}$$
(33)

Then,

$$\begin{aligned} \Vert S(t)u_0\Vert _{H} \le \text {e}^{-2 t}\Vert u_0\Vert _{H} \end{aligned}$$
(34)

Proof

Let \(0 > \mu _1 > \mu _2 \ge \mu _3 \ge \cdots \), with \(\mu _1 = -2\), be the eigenvalues of \(A\), and let \(\{\varphi _k(x), k \ge 1\} \subset H\) be the corresponding orthonormal eigenbasis. We have the following representations for \(u_0 \in H\) and \(u(t,x) \in H\):

$$\begin{aligned} u_0(x) = \sum _{k=1}^{\infty } c^0_k \varphi _k(x) \end{aligned}$$

and

$$\begin{aligned} u(t,x) = \sum _{k=1}^{\infty } c_k(t) \varphi _k(x) \end{aligned}$$

It follows from (33) that

$$\begin{aligned} \sum _{k=1}^{\infty } c_k^{'}(t) \varphi _k(x) = \sum _{k=1}^{\infty } c_k(t) A \varphi _k(x) = \sum _{k=1}^{\infty } \mu _k c_k(t) \varphi _k(x) \end{aligned}$$

Thus,

$$\begin{aligned} c_k(t) = c^{0}_k \text {e}^{\mu _k t} \end{aligned}$$

Hence,

$$\begin{aligned} \Vert u(t,x)\Vert ^2_{H} = \sum _{k=1}^{\infty }c_k^{2}(t) = \text {e}^{-4 t} \sum _{k=1}^{\infty } \text {e}^{(2\mu _k + 4)t} (c^{0}_{k})^{2} \le \text {e}^{-4 t} \Vert u_0(x)\Vert _{H}^2, \end{aligned}$$

since \(2 \mu _k + 4 \le 0\) for all \(k\), concluding the proof. \(\square \)
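In a finite spectral truncation, the contraction (34) amounts to the elementary bound \(\Vert (c_k^0 \text {e}^{\mu _k t})_k\Vert \le \text {e}^{-2t} \Vert (c_k^0)_k\Vert \) whenever \(\mu _k \le -2\). A minimal numerical sketch (the sample eigenvalues and coefficients below are ours, not from the text):

```python
import numpy as np

mu = np.array([-2.0, -6.0, -10.0, -14.0])   # sample eigenvalues, all <= mu_1 = -2
c0 = np.array([1.0, -0.5, 2.0, 0.3])        # coefficients of u_0 in the eigenbasis

for t in (0.1, 0.5, 1.0, 3.0):
    ct = c0 * np.exp(mu * t)                # c_k(t) = c_k^0 e^{mu_k t}
    # norm of the truncated solution decays at least like e^{-2t}
    assert np.linalg.norm(ct) <= np.exp(-2.0 * t) * np.linalg.norm(c0) + 1e-12
```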

Lemma 2

For any \(u \in H\), we have

$$\begin{aligned} \mathbb {E}\left\| \int _{t_0}^{t} S(t-s) \sigma (u(s)) \text {d}W(s) \right\| ^2_{H} \le \sum _{k=1}^{\infty } a_k \int _{t_0}^{t} \text {e}^{-4(t-s)} \mathbb {E}\Vert \sigma (u(s))\Vert _H^2 \text {d}s \end{aligned}$$
(35)

Proof

This is a consequence of the following computation, which uses the series representation (23) of \(W\), the Itô isometry, Lemma 1, and the boundedness of the \(e_k\) from (14):

$$\begin{aligned}&\mathbb {E}\left\| \int _{t_0}^{t} S(t-s) \sigma (u(s)) \text {d}W(s) \right\| ^2_{H} \!=\! \mathbb {E}\left\| \sum _{k=1}^{\infty } \sqrt{a_k} \int _{t_0}^{t} S(t\!-\!s) \sigma (u(s)) e_k(x) d \beta _k(s) \right\| ^2_{H}\\&\quad = \int _G \mathbb {E}\left( \sum _{k=1}^{\infty } \sqrt{a_k} \int _{t_0}^{t} S(t-s) \sigma (u(s)) e_k(x) d \beta _k(s)\right) ^2 \rho (x) \, \text {d}x\\&\quad = \int _G \sum _{k=1}^{\infty } a_k \mathbb {E}\left( \int _{t_0}^{t} S(t-s) \sigma (u(s)) e_k(x) d \beta _k(s)\right) ^2 \rho (x) \, \text {d}x\\&\quad = \sum _{k=1}^{\infty } a_k \int _G \int _{t_0}^{t} \mathbb {E}\left( S(t-s) \sigma (u(s)) e_k(x)\right) ^2 \text {d}s \, \rho (x) \, \text {d}x\\&\quad = \sum _{k=1}^{\infty } a_k \int _{t_0}^{t} \mathbb {E}\left\| S(t-s) \sigma (u(s)) e_k(x)\right\| _H^2 \text {d}s \!\le \! \sum _{k=1}^{\infty } a_k \!\int _{t_0}^{t} \text {e}^{-4(t-s)} \mathbb {E}\Vert \sigma (u(s))\Vert _H^2 \text {d}s. \end{aligned}$$

\(\square \)

We proceed with the existence and uniqueness result for (9). For simplicity, we again omit the \(x\) variable in \(f\) and \(\sigma \).

Theorem 4

Assume that \(f\) and \(\sigma \) satisfy (11) and (12). Then, for given \(u_0(x) \in H\), there exists a unique mild solution of (9) (see Definition 1).

The proof of this result uses fairly standard techniques and is presented in the “Appendix.”

Next, we will construct and analyze solutions of (9) defined for all \(t \in \mathbb {R}\). First we introduce the following definition.

Definition 2

We say that an \(H\)-valued process \(u(t)\) is a mild solution of (9) on \(\mathbb {R}^1\) if

  1. for all \(t \in \mathbb {R}\), \(u(t)\) is \(\mathcal {F}_t\)-measurable;

  2. \(u(t)\) is almost surely continuous in \(t \in \mathbb {R}\) with respect to the \(H\)-norm;

  3. for all \(t \in \mathbb {R}\), \(\mathbb {E}\Vert u(t)\Vert _H^2 < \infty \);

  4. for all \(-\infty< t_0< t < \infty \), with probability 1 we have

    $$\begin{aligned} u(t) = S(t-t_0)u(t_0) + \int _{t_0}^{t} S(t-s) f(u(s)) \text {d}s + \int _{t_0}^{t} S(t-s) \sigma (u(s)) \text {d}W(s)\nonumber \\ \end{aligned}$$
    (36)

We divide the proof of Theorem 3 into a linear and a nonlinear part.

3.2 Proof of Theorem 3: Linear Version

Let \(\mathcal {B}\) be the class of \(H\)-valued, \(\mathcal {F}_t\)-measurable random processes \(\xi (t)\) defined on \(\mathbb {R}^1\) such that

$$\begin{aligned} \sup _{t \in \mathbb {R}^1} \mathbb {E}\Vert \xi (t)\Vert ^2_{H} < \infty \end{aligned}$$
(37)

For \(\varphi (t)\) and \(\alpha (t)\) in \(\mathcal {B}\) consider

$$\begin{aligned} \text {d}u = (A u + \alpha (t)) \text {d}t + \varphi (t) \text {d} W(t) \end{aligned}$$
(38)

Definition 3

A solution \(u^*\) is exponentially stable in mean square if there exist \(K>0\) and \(\gamma >0\) such that for any \(t_0\) and any other solution \(\eta (t)\), with \(\mathcal {F}_{t_0}\)-measurable \(\eta (t_0)\) and \(\mathbb {E}\Vert \eta (t_0)\Vert _{H}^2 < \infty \), we have

$$\begin{aligned} \mathbb {E}\Vert u^*(t) - \eta (t)\Vert _H^2 \le K \text {e}^{-\gamma (t-t_0)} \mathbb {E}\Vert u^{*}(t_0) - \eta (t_0) \Vert ^2_{H} \end{aligned}$$

for \(t \ge t_0\).

Theorem 5

Equation (38) has a unique solution \(u^{*}\) in the sense of Definition 2. This solution belongs to \(\mathcal {B}\) and is exponentially stable in the sense of Definition 3.

Proof

Define

$$\begin{aligned} u^{*}(t):= \int _{-\infty }^{t} S(t-s)\alpha (s) \text {d}s + \int _{-\infty }^{t}S(t-s) \varphi (s) \text {d}W(s) \end{aligned}$$
(39)

We first show that the process given by (39) is well defined, i.e., that the improper integrals converge. Let

$$\begin{aligned} \xi _n (t):= & {} \int _{-n}^{t} S(t-s) \alpha (s) \text {d}s\end{aligned}$$
(40)
$$\begin{aligned} \zeta _n (t):= & {} \int _{-n}^{t} S(t-s) \varphi (s) \text {d}W(s) \end{aligned}$$
(41)

For \(n>m\), we have

$$\begin{aligned}&\mathbb {E}\Vert \xi _n(t) - \xi _m(t)\Vert _{H}^2 \le \mathbb {E}\left( \int _{-n}^{-m} \Vert S(t-s) \alpha (s)\Vert _H \text {d}s\right) ^2 \\&\quad \le \mathbb {E}\left( \int _{-n}^{-m} \text {e}^{-2(t-s)} \Vert \alpha (s)\Vert _H \text {d}s\right) ^2\\&\quad \le \int _{-n}^{-m} \text {e}^{-2(t-s)} \text {d}s \cdot \int _{-n}^{-m} \text {e}^{-2(t-s)} \mathbb {E}\Vert \alpha (s)\Vert _H^2 \text {d}s\\&\quad \le \sup _{t \in \mathbb {R}}\mathbb {E}\Vert \alpha (t)\Vert _H^2 \cdot \left( \int _{-n}^{-m} \text {e}^{-2(t-s)} \text {d}s\right) ^2 \end{aligned}$$

which can be made arbitrarily small as \(n,m \rightarrow \infty \). Thus, for every \(t \in \mathbb {R}\), the sequence (40) is a Cauchy sequence.

Similarly, using Lemma 2, we have

$$\begin{aligned}&\mathbb {E}\Vert \zeta _n(t) - \zeta _m(t)\Vert _H^2 \\&\quad = \mathbb {E}\left\| \int _{-n}^{-m} S(t-s) \varphi (s) \text {d}W(s)\right\| _H^2 \le \sum _{k=1}^{\infty } a_k \int _{-n}^{-m} \text {e}^{-2(t-s)} \, \text {d}s \sup _{t \in \mathbb {R}} \mathbb {E}\Vert \varphi (t)\Vert _H^2 \end{aligned}$$

which is again uniformly small for all large \(n\) and \(m\). Thus, \(\left\{ \zeta _n\right\} _n\) is also a Cauchy sequence, and the process given by (39) is well defined.

We now show that this process is a solution in the sense of Definition 2. First, note that \(u^{*}(t)\) is \(\mathcal {F}_t\)-measurable. Furthermore, the continuity of \(u^*\) in time with probability 1 follows from the factorization formula for stochastic integrals [12, Theorem 5.2.5]. Next, we show that

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u^{*}(t)\Vert _{H}^2 < \infty \end{aligned}$$
(42)

From (39), we have

$$\begin{aligned}&\mathbb {E}\left\| \int _{-\infty }^{t} S(t-s) \alpha (s) \text {d}s\right\| _H^2 \le \mathbb {E}\left( \int _{-\infty }^{t} \Vert S(t-s)\alpha (s)\Vert _H \text {d}s \right) ^2\\&\quad \le \int _{-\infty }^{t} \text {e}^{-2(t-s)} \text {d}s \int _{-\infty }^{t} \text {e}^{-2(t-s)} \text {d}s \sup _{t \in \mathbb {R}} \mathbb {E}\Vert \alpha (t)\Vert _H^2 = \frac{1}{4}\sup _{t \in \mathbb {R}} \mathbb {E}\Vert \alpha (t)\Vert _H^2 \end{aligned}$$

as well as

$$\begin{aligned} \mathbb {E}\left\| \int _{-\infty }^{t} S(t-s) \varphi (s) \text {d}W(s)\right\| _H^2 \le \sum _{k=1}^{\infty } a_k \int _{-\infty }^{t} \text {e}^{-4(t-s)} \text {d}s \, \sup _{t \in \mathbb {R}} \mathbb {E}\Vert \varphi (t)\Vert _H^2 < \infty . \end{aligned}$$

Thus, (42) holds.

Finally, since

$$\begin{aligned} u^{*}(t_0) = \int _{-\infty }^{t_0} S(t_0-s) \alpha (s) \text {d}s + \int _{-\infty }^{t_0} S(t_0-s) \varphi (s) \text {d}W(s) \end{aligned}$$

we compute:

$$\begin{aligned} u^{*}(t)= & {} \int _{-\infty }^{t} S(t-s) \alpha (s) \text {d}s + \int _{-\infty }^{t} S(t-s) \varphi (s) \text {d}W(s)\\= & {} \int _{-\infty }^{t_0} S(t-s) \alpha (s) \text {d}s + \int _{-\infty }^{t_0} S(t-s) \varphi (s) \text {d}W(s)\\&\quad +\, \int _{t_0}^t S(t-s) \alpha (s) \text {d}s + \int _{t_0}^t S(t-s) \varphi (s) \text {d}W(s)\\= & {} \int _{-\infty }^{t_0} S(t-t_0)S(t_0-s) \alpha (s) \text {d}s + \int _{-\infty }^{t_0} S(t-t_0)S(t_0-s) \varphi (s) \text {d}W(s)\\&\quad +\, \int _{t_0}^t S(t-s) \alpha (s) \text {d}s + \int _{t_0}^t S(t-s) \varphi (s) \text {d}W(s)\\= & {} S(t-t_0) u^{*}(t_0) + \int _{t_0}^{t} S(t-s) \alpha (s) \text {d}s + \int _{t_0}^{t} S(t-s) \varphi (s) \text {d}W(s). \end{aligned}$$

Hence, \(u^{*}\) is a solution in the sense of Definition 2.

To show the exponential stability of \(u^{*}\) (in the sense of Definition 3), let \(\eta (t)\) be another solution of (38), such that \(\mathbb {E}\Vert \eta (t_0)\Vert _H^2 < \infty \). Then,

$$\begin{aligned} \eta (t) = S(t-t_0) \eta (t_0) + \int _{t_0}^{t}S(t-s) \alpha (s) \text {d}s + \int _{t_0}^{t} S(t-s) \varphi (s) \text {d}W(s), \end{aligned}$$

and thus

$$\begin{aligned} \mathbb {E}\Vert u^{*}(t) - \eta (t)\Vert _H^2 = \mathbb {E}\Vert S(t-t_0)(u^{*}(t_0) - \eta (t_0))\Vert _H^2 \le \text {e}^{-4(t-t_0)}\mathbb {E}\Vert u^*(t_0) - \eta (t_0)\Vert ^2_H \end{aligned}$$

which implies the stability of \(u^*\).

Finally, we show the uniqueness of \(u^{*}\). Let \(u_0\) be another solution, such that

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_0(t)\Vert _{H}^2 < \infty \end{aligned}$$

Then, \(z(t) = u^{*}(t) - u_0(t)\) satisfies

$$\begin{aligned} \mathbb {E}\Vert z(t)\Vert ^2 \le \text {e}^{-4(t-\tau )} \mathbb {E}\Vert z(\tau )\Vert _H^2 \end{aligned}$$

for arbitrary \(\tau \le t\). Since both solutions are in \(\mathcal {B}\), we have \(\sup _{t \in \mathbb {R}} \mathbb {E}\Vert z(t)\Vert ^2_{H} \le C\) for some \(C>0\), and hence \(\mathbb {E}\Vert z(t)\Vert _H^2 \le \text {e}^{-4(t-\tau )} C\). Letting \(\tau \rightarrow - \infty \), we obtain \(\mathbb {E}\Vert z(t)\Vert _H^2 = 0\) for all \(t \in \mathbb {R}\). Therefore,

$$\begin{aligned} \mathbb {P}\left( u_0(t) \ne u^{*}(t)\right) = 0, \ \forall t \in \mathbb {R}. \end{aligned}$$

Since the processes \(u_0\) and \(u^*\) are continuous in time with probability 1,

$$\begin{aligned} \mathbb {P}\left( \sup _{t \in \mathbb {R}}\Vert u_0(t) - u^*(t)\Vert _H>0\right) = 0. \end{aligned}$$

\(\square \)

Now we are ready to prove Theorem 3.

3.3 Proof of Theorem 3: Nonlinear Version

Proof

Suppose the constant \(L\) in (11) satisfies

$$\begin{aligned}&L^2 + L^2 \sum _{k=1}^{\infty } a_k < 1\end{aligned}$$
(43)
$$\begin{aligned}&\frac{L^2}{2} + L^2 \sum _{k=1}^{\infty } a_k < \frac{2}{3}. \end{aligned}$$
(44)

The idea of the proof is to construct a sequence of approximations which converges to the solution \(u^*(t,x)\). Let \(u_0 \equiv 0\). For \(n \ge 0\), define \(u_{n+1}(t,x)\) as

$$\begin{aligned} d u_{n+1} = (A u_{n+1} + f(x,u_n))\, \text {d}t + \sigma (x,u_n) \text {d}W(t) \end{aligned}$$
(45)

Equation (45) satisfies the conditions of Theorem 5, since

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert f(x,u_n(t,x))\Vert ^2_H \le C \sup _{t \in \mathbb {R}} \mathbb {E}\int _{G}(1+|u_n(t,x)|^2) \text {e}^{-|x|^2} \, \text {d}x < \infty . \end{aligned}$$

for some \(C>0\). The bound for \(\sigma (x, u_n)\) is obtained analogously. Therefore, by Theorem 5, we can find the unique \(u_{n+1}(t,x)\) satisfying

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_{n+1}\Vert ^2_H < \infty . \end{aligned}$$

First, we show that \(\sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n\Vert ^2_H\) admits a bound independent of \(n\). To this end, note that \(u_{n+1}\) has the representation

$$\begin{aligned} u_{n+1}(t) = \int _{-\infty }^{t} S(t-s) f(u_n(s)) \text {d}s + \int _{-\infty }^{t}S(t-s)\sigma (u_n(s)) \text {d}W(s):= I_1 + I_2 \end{aligned}$$

thus

$$\begin{aligned} \mathbb {E}\Vert u_{n+1}(t)\Vert _H^2 \le 2 \mathbb {E}\Vert I_1\Vert _{H}^2 + 2 \mathbb {E}\Vert I_2\Vert _{H}^2. \end{aligned}$$

We now estimate each term separately:

$$\begin{aligned} \mathbb {E}\Vert I_1\Vert _{H}^2= & {} \mathbb {E}\left\| \int _{-\infty }^{t}S(t-s) f(u_n(s)) \text {d}s \right\| _H^2\\&\le 2 \, \mathbb {E}\left\| \int _{-\infty }^{t}S(t-s) f(0) \text {d}s \right\| _H^2\\&\quad +\, 2 \, \mathbb {E}\left\| \int _{-\infty }^{t}S(t-s) [f(u_n(s)) - f(0)] \text {d}s \right\| _H^2\\&\le C_0 \int _{-\infty }^{t} \text {e}^{-2(t-s)} \text {d}s + L^2 \mathbb {E}\int _{-\infty }^{t} \text {e}^{-2(t-s)} \Vert u_n(s)\Vert _H^2 \text {d}s \le C_0\\&\quad +\, \frac{L^2}{2}\sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t)\Vert _H^2 \end{aligned}$$

Applying Lemma 2, we proceed with a similar estimate for \(I_2\):

$$\begin{aligned} \mathbb {E}\Vert I_2\Vert _{H}^2= & {} \mathbb {E}\left\| \int _{-\infty }^{t}S(t-s)\sigma (u_n(s)) \text {d}W(s)\right\| _H^2\\\le & {} 2 \, \mathbb {E}\left\| \int _{-\infty }^{t} S(t-s)\sigma (0) \text {d}W(s)\right\| _H^2\\&+\, 2 \, \mathbb {E}\left\| \int _{-\infty }^{t} S(t-s)[\sigma (u_n(s)) - \sigma (0)] \text {d}W(s)\right\| _H^2\\\le & {} C_1 + 2 L^2 \sum _{k=1}^{\infty } a_k \int _{-\infty }^{t} \text {e}^{-4(t-s)} \mathbb {E}\Vert u_n(s)\Vert ^2_H \, \text {d}s \le C_1 \\&+\, \frac{L^2}{2} \sum _{k=1}^{\infty } a_k \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t)\Vert _H^2 \end{aligned}$$

so that we have

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_{n+1}(t)\Vert _H^2 \le C_2 + L^2\left( 1 + \sum _{k=1}^{\infty } a_k\right) \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t)\Vert _H^2 \end{aligned}$$

where \(C_2 = 2 C_0 + 2 C_1\) does not depend on \(n\). Hence, if \(L^2(1 + \sum _{k=1}^{\infty } a_k) < 1\) [condition (43)], we obtain a bound for \(\sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_{n}(t)\Vert _H^2 \) which is independent of \(n\):

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_{n}(t)\Vert _H^2 \le \frac{C_2}{1 - L^2(1 + \sum _{k=1}^{\infty } a_k)} \end{aligned}$$
(46)

The bound (46) follows from the fact that if a nonnegative numerical sequence \(\{x_n, n\ge 0\}\) satisfies \(x_0 = 0\) and

$$\begin{aligned} x_{n+1} \le a + b x_n \end{aligned}$$

with \(a \ge 0\) and \(0 \le b<1\), then \(x_n \le a(1 + b + \cdots + b^{n-1}) \le \frac{a}{1-b}\) for all \(n\).
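The elementary fact used here is easy to confirm by iterating the worst case \(x_{n+1} = a + b x_n\) starting from \(x_0 = 0\): the iterates increase toward, and never exceed, \(a/(1-b)\). A quick sketch with arbitrary sample values standing in for \(C_2\) and \(L^2(1 + \sum _k a_k)\):

```python
a, b = 3.0, 0.6            # stand-ins for C_2 and L^2 (1 + sum_k a_k) < 1
x, bound = 0.0, a / (1.0 - b)

for _ in range(200):
    x = a + b * x          # worst case: the inequality holds with equality
    assert x <= bound + 1e-12

assert abs(x - bound) < 1e-9   # the iterates saturate the bound a / (1 - b)
```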

Second, we establish that the sequence \(\{u_n\}\) converges. We have

$$\begin{aligned} u_{n+1}(t) - u_{n}(t)= & {} \int _{-\infty }^{t} S(t-s) [f(u_n(s)) - f(u_{n-1}(s))] \text {d}s + \\&+\, \int _{-\infty }^{t}S(t-s)[\sigma (u_n(s)) - \sigma (u_{n-1}(s))] \text {d}W(s):= J_1 + J_2. \end{aligned}$$

Thus,

$$\begin{aligned} \mathbb {E}\Vert u_{n+1}(t) - u_{n}(t)\Vert _H^2 \le 2 \mathbb {E}\, \Vert J_1\Vert ^2_{H} + 2 \mathbb {E}\, \Vert J_2\Vert ^2_H. \end{aligned}$$

Estimating the first term, we have

$$\begin{aligned} \mathbb {E}\Vert J_1\Vert ^2_{H}= & {} \mathbb {E}\left\| \int _{-\infty }^{t}S(t-s) [f(u_n(s)) - f(u_{n-1}(s))] \text {d}s \right\| _H^2\\&\le \frac{L^2}{2} \int _{-\infty }^{t} \text {e}^{-2(t-s)} \mathbb {E}\Vert u_n(s) - u_{n-1}(s)\Vert _H^2 \text {d}s\\&\quad \le \frac{L^2}{4} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t) - u_{n-1}(t)\Vert _H^2. \end{aligned}$$

Using Lemma 2 again, we have

$$\begin{aligned}&\mathbb {E}\Vert J_2\Vert ^2_{H} \le L^2 \sum _{k=1}^{\infty } a_k \int _{-\infty }^{t} \text {e}^{-4(t-s)}\mathbb {E}\Vert u_n(s) - u_{n-1}(s)\Vert _H^2 \text {d}s\\&\quad \le \frac{L^2}{4} \sum _{k=1}^{\infty } a_k \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t) - u_{n-1}(t)\Vert _H^2. \end{aligned}$$

Therefore,

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_{n+1}(t) - u_{n}(t)\Vert _H^2 \le \frac{L^2}{2} \left( 1 + \sum _{k=1}^{\infty } a_k \right) \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t) - u_{n-1}(t)\Vert _H^2. \end{aligned}$$
(47)

where, due to (43),

$$\begin{aligned} \frac{L^2}{2} \left( 1 + \sum _{k=1}^{\infty } a_k\right) < \frac{1}{2}. \end{aligned}$$

Iterating (47), we get

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_{n+1}(t) - u_{n}(t)\Vert _H^2 \le \frac{C}{2^n} \end{aligned}$$

for some positive constant \(C\). Therefore, for all \(n > m \ge 1\),

$$\begin{aligned}&\sup _{t \in \mathbb {R}} \sqrt{\mathbb {E}\Vert u_n(t) - u_{m}(t)\Vert ^2_{H}} = \sup _{t \in \mathbb {R}} \sqrt{\mathbb {E}\left\| \sum _{i=m}^{n-1}(u_{i+1}(t) - u_{i}(t))\right\| ^2_{H}}\\&\quad \le \sum _{i=m}^{n-1}\sqrt{\sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_{i+1}(t) - u_{i}(t)\Vert ^2_{H}} \rightarrow 0, \,\,\text {as}\,\, n,m \rightarrow \infty , \end{aligned}$$

and thus, \(u_n(t)\) is a Cauchy sequence. Consequently, there is a limiting function \(u^{*}(t, \cdot ) \in H\) such that

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t) - u^{*}(t)\Vert ^2_H \rightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

Using (46), it follows from Fatou’s Lemma that

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u^{*}(t)\Vert ^2_{H} \le \frac{C_2}{1 - L^2(1 + \sum _{k=1}^{\infty } a_k)} \end{aligned}$$

The function \(u^{*}(t)\) is \(\mathcal {F}_{t}\)-measurable as a limit of \(\mathcal {F}_t\)-measurable processes.

Third, we show that \(u^{*}\) solves Eq. (9). To this end, we need to pass to the limit in the identity

$$\begin{aligned} u_{n+1}(t)= & {} S(t-t_0) u_{n+1}(t_0) + \int _{t_0}^{t} S(t-s) f(u_n(s)) \text {d}s\nonumber \\&+\, \int _{t_0}^{t} S(t-s)\sigma (u_n(s)) \text {d}W(s) \end{aligned}$$
(48)

Using Markov’s inequality, \(\forall \varepsilon > 0\)

$$\begin{aligned} \sup _{t \in \mathbb {R}} \mathbb {P}\left\{ \Vert u_n(t) - u^{*}(t)\Vert _{H} > \varepsilon \right\} \le \frac{\sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t) - u^{*}(t)\Vert ^2_H}{\varepsilon ^2} \rightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

So \(u_n(t) \rightarrow u^{*}(t)\), \(n \rightarrow \infty \) in probability, uniformly in \(t\). Thus, since \(S(t-t_0)\) is a bounded operator,

$$\begin{aligned} S(t-t_0) u_{n+1}(t_0) \rightarrow S(t-t_0) u^{*}(t_0), \ n \rightarrow \infty . \end{aligned}$$

Next, \(\forall \varepsilon > 0\)

$$\begin{aligned}&\mathbb {P}\left\{ \left\| \int _{t_0}^{t} S(t-s)[f(u_n(s)) - f(u^{*}(s))] \text {d}s\right\| _{H} > \varepsilon \right\} \\&\quad \le \mathbb {P}\left\{ \int _{t_0}^{t}\left\| S(t-s)[f(u_n(s)) - f(u^{*}(s))]\right\| _{H} \text {d}s > \varepsilon \right\} \\&\quad \le \mathbb {P}\left\{ L \int _{t_0}^{t} \text {e}^{-2(t-s)} \left\| u_n(s) - u^{*}(s)\right\| _{H} \text {d}s > \varepsilon \right\} \\&\quad \le \frac{L}{2\varepsilon } \sup _{t \in \mathbb {R}} \sqrt{\mathbb {E}\Vert u_n(t) - u^{*}(t)\Vert _H^2} \rightarrow 0, \ n \rightarrow \infty , \end{aligned}$$

where the last step uses Markov's inequality, \(\int _{t_0}^{t} \text {e}^{-2(t-s)} \text {d}s \le \frac{1}{2}\), and Jensen's inequality \(\mathbb {E}\Vert \cdot \Vert _H \le \sqrt{\mathbb {E}\Vert \cdot \Vert _H^2}\).

So

$$\begin{aligned} \int _{t_0}^{t} S(t-s) f(u_n(s))\, \text {d}s \rightarrow \int _{t_0}^{t} S(t-s) f(u^*(s)) \, \text {d}s \end{aligned}$$

in probability pointwise for every \(t \in \mathbb {R}\) as \(n \rightarrow \infty \). Finally, using Lemma 2,

$$\begin{aligned}&\mathbb {E}\left\| \int _{t_0}^{t} S(t-s)[\sigma (u_n(s)) - \sigma (u^{*}(s))] \, \text {d}W(s)\right\| _H^2\\&\quad \le \sum _{k=1}^{\infty } a_k \int _{t_0}^{t} \text {e}^{-4(t-s)} \mathbb {E}\left\| \sigma (u_n(s)) - \sigma (u^{*}(s))\right\| ^2_{H} \, \text {d}s\\&\quad \le L^2 \sum _{k=1}^{\infty } a_k \int _{t_0}^{t} \text {e}^{-4(t-s)} \mathbb {E}\Vert u_n(s) - u^{*}(s)\Vert _H^2 \, \text {d}s\\&\quad \le \frac{L^2}{4} \sum _{k=1}^{\infty } a_k \sup _{t \in \mathbb {R}} \mathbb {E}\Vert u_n(t) - u^{*}(t)\Vert _H^2 \rightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

It follows from [11, Proposition 4.16] that

$$\begin{aligned} \int _{t_0}^{t} S(t-s)\sigma (u_n(s)) \text {d}W(s) \rightarrow \int _{t_0}^{t} S(t-s)\sigma (u^*(s)) \text {d}W(s) \end{aligned}$$

in probability. Therefore, passing to the limit in (48), we have

$$\begin{aligned} u^*(t)= & {} S(t-t_0) u^*(t_0) + \int _{t_0}^{t} S(t-s) f(u^*(s)) \text {d}s\nonumber \\&+\, \int _{t_0}^{t} S(t-s)\sigma (u^*(s)) \text {d}W(s) \end{aligned}$$
(49)

The process \(u^*\), defined through the integral relation (49), has continuous trajectories with probability 1. Indeed, while the continuity of the first two terms can be checked straightforwardly, the continuity of the third one is a consequence of the factorization formula [12, Theorem 5.2.5].

We now show that \(u^*\) is a stable solution. To this end, let \(\eta (t)\) be another solution of (9) such that \(\eta (t_0)\) is \(\mathcal {F}_{t_0}\) measurable and \(\mathbb {E}\Vert \eta (t_0)\Vert ^2_H < \infty \). We have

$$\begin{aligned}&\mathbb {E}\Vert u^{*}(t) - \eta (t)\Vert _{H}^2 \le 3 \, \mathbb {E}\left\| S(t-t_0)(u^{*}(t_0) - \eta (t_0)) \right\| _H^2\nonumber \\&\quad +\, 3\, \mathbb {E}\left( \int _{t_0}^{t} \left\| S(t-s)[f(u^{*}(s)) - f(\eta (s))] \right\| _{H} \, \text {d}s \right) ^2 \nonumber \\&\quad + \, 3\, \mathbb {E}\left\| \int _{t_0}^{t} S(t-s)(\sigma (u^*(s)) - \sigma (\eta (s))) \text {d}W(s) \right\| _{H}^{2}. \end{aligned}$$
(50)

Estimating each term separately, we have

$$\begin{aligned} \mathbb {E}\left\| S(t-t_0)(u^{*}(t_0) - \eta (t_0)) \right\| _H^2 \le \text {e}^{-4(t-t_0)} \mathbb {E}\Vert u^{*}(t_0) - \eta (t_0)\Vert ^2_H \le \text {e}^{-2(t-t_0)} \mathbb {E}\Vert u^{*}(t_0) - \eta (t_0)\Vert ^2_H \end{aligned}$$

while

$$\begin{aligned} \mathbb {E}\left( \int _{t_0}^{t} \left\| S(t-s)[f(u^{*}(s)) - f(\eta (s))] \right\| _{H} \, \text {d}s \right) ^2 \le \frac{L^2}{2} \int _{t_0}^{t} \text {e}^{-2(t-s)} \mathbb {E}\Vert u^{*}(s) - \eta (s)\Vert ^2_{H} \text {d}s, \end{aligned}$$

and, using Lemma 2,

$$\begin{aligned}&\mathbb {E}\left\| \int _{t_0}^{t} S(t-s)(\sigma (u^*(s)) - \sigma (\eta (s))) \text {d}W(s) \right\| _{H}^{2}\\&\quad \le L^2 \sum _{k=1}^{\infty } a_k \int _{t_0}^{t} \text {e}^{-4(t-s)} \mathbb {E}\Vert u^{*}(s) - \eta (s)\Vert ^2_{H} \text {d}s\\&\quad \le L^2 \sum _{k=1}^{\infty } a_k \int _{t_0}^{t} \text {e}^{-2(t-s)} \mathbb {E}\Vert u^{*}(s) - \eta (s)\Vert ^2_{H} \text {d}s. \end{aligned}$$

Thus, (50) reads as

$$\begin{aligned}&\mathbb {E}\Vert u^{*}(t) - \eta (t)\Vert _{H}^2 \le 3 \text {e}^{-2(t-t_0)} \mathbb {E}\Vert u^{*}(t_0) - \eta (t_0)\Vert ^2_H \nonumber \\&\quad +\, 3 \left( \frac{L^2}{2} + L^2 \sum _{k=1}^{\infty } a_k\right) \int _{t_0}^{t} \text {e}^{-2(t-s)} \mathbb {E}\Vert u^{*}(s) - \eta (s)\Vert ^2_{H} \text {d}s. \end{aligned}$$
(51)

Rewriting (51) as

$$\begin{aligned}&\text {e}^{2t} \mathbb {E}\Vert u^{*}(t) - \eta (t)\Vert _{H}^2 \le 3 \text {e}^{2 t_0} \mathbb {E}\Vert u^{*}(t_0) - \eta (t_0)\Vert ^2_H \nonumber \\&\quad +\, 3 L^2 \left( \frac{1}{2} + \sum _{k=1}^{\infty } a_k\right) \int _{t_0}^{t} \text {e}^{2 s} \mathbb {E}\Vert u^{*}(s) - \eta (s)\Vert ^2_{H} \text {d}s, \end{aligned}$$
(52)

we are now in a position to apply Gronwall's inequality and conclude that

$$\begin{aligned} \mathbb {E}\Vert u^{*}(t) - \eta (t)\Vert _{H}^2 \le 3 \text {e}^{\left( -2 +3\left( \frac{L^2}{2} + L^2 \sum _{k=1}^{\infty } a_k\right) \right) (t- t_0)} \mathbb {E}\Vert u^{*}(t_0) - \eta (t_0)\Vert ^2_H. \end{aligned}$$

Thus, \(u^{*}\) is exponentially stable in the sense of Definition 3, provided (44) holds.
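The Gronwall step can be sanity-checked numerically on the borderline case of (52): \(v(t) = \alpha \text {e}^{\beta (t-t_0)}\) is exactly the solution of \(v(t) = \alpha + \beta \int _{t_0}^{t} v(s)\, \text {d}s\), which is the equality case of the inequality. A sketch with arbitrary sample constants:

```python
import numpy as np

alpha, beta, t0 = 2.0, 0.7, -1.0           # arbitrary sample constants
t = np.linspace(t0, t0 + 3.0, 200001)
h = t[1] - t[0]
v = alpha * np.exp(beta * (t - t0))        # candidate extremal function

# cumulative trapezoidal approximation of int_{t0}^{t} v(s) ds
integral = np.concatenate([[0.0], np.cumsum((v[1:] + v[:-1]) * h / 2.0)])

# v solves the integral equation v(t) = alpha + beta * int_{t0}^{t} v(s) ds
assert np.max(np.abs(v - (alpha + beta * integral))) < 1e-6
```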

The uniqueness of \(u^{*}\) can be shown similarly to the linear case. \(\square \)

4 Uniqueness of the Invariant Measure

In this section, we show that the solution \(u^{*}(t)\) is a stationary process for \(t \in \mathbb {R}\), which defines an invariant measure \(\mu \) for (9). The stability property of \(u^*\) gives the uniqueness of the invariant measure. We follow the overall procedure in [11, Section 11.1] and [12, Theorem 6.3.2].

Following [12], the solution of (9) defines a transition probability semigroup

$$\begin{aligned} \mathbb {P}_t \varphi (u_0):= \mathbb {E}\varphi (u(t,u_0)), \ u_0 \in H, \end{aligned}$$

so that its dual \(\mathbb {P}^{*}_t\) is an operator in the space of probability measures \(\mu \):

$$\begin{aligned} \mathbb {P}^{*}_t \mu (\Gamma ) = \int _{H}\mathbb {P}_t(u_0,\Gamma ) \mu (\text {d} u_0), \ t\ge 0, \ \Gamma \subset H. \end{aligned}$$

Here

$$\begin{aligned} \mathbb {P}_t(u_0,\Gamma ) = \mathbb {E}\chi _{\Gamma }(u(t,u_0)), \end{aligned}$$

and \(\chi _{\Gamma }\) is the characteristic function of the set \(\Gamma \). An invariant measure \(\mu \) is a fixed point of \(\mathbb {P}^{*}_t\), i.e., \(\mathbb {P}^*_t\mu = \mu \) for all \(t\ge 0\).

Throughout this section, \(u(t,t_0,u_0)\) will denote the solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial }{\partial t} u(t,x) = A u(t,x) + f(x,u(t,x)) + \sigma (x,u(t,x)) \dot{W}(t,x),\,\,\, t \ge t_0,\,\,x \in G;\\ u(t_0,x) = u_0(x) \end{array}\right. }\nonumber \\ \end{aligned}$$
(53)

for \(t \ge t_0\). Here, \(t_0\) can be any real number; in particular, we may take \(t_0=0\). Also, for any \(H\)-valued random variable \(X\), we use \(\mathcal {L}(X)\) to denote the law of \(X\), which is the following measure on \(H\):

$$\begin{aligned} \mathcal {L}(X)(\Gamma ):=\mathbb {P}(\omega : X(\omega ) \in \Gamma ), \ \Gamma \subset H. \end{aligned}$$

We now show that \(\mu := \mathcal {L}(u^{*}(t_0))\) is the unique invariant measure for (9). Following [11, Propositions 11.2, 11.4], it is sufficient to show that

$$\begin{aligned} \forall u_0 \in H, \,\,\,\,\,\, P_{t}^{*} \delta _{u_0} = \mathcal {L}(u(t,t_0,u_0)) \rightarrow \mu \text { weakly as } t \rightarrow \infty . \end{aligned}$$
(54)

Since the equation is autonomous and \(t - t_0 = t_0 - (2t_0 - t)\), we have the following property of the solution:

$$\begin{aligned} \mathcal {L}(u(t,t_0,u_0)) = \mathcal {L}(u(t_0, 2 t_0-t, u_0)), \,\,\,\,\,\,\text {for all}\,\,\,t > t_0. \end{aligned}$$
(55)

By the stability property (Definition 3) of the solutions of (9), there exist \(K > 0\) and \(\gamma > 0\) such that

$$\begin{aligned}&\mathbb {E}\Vert u(t_0, 2 t_0 - t, u_0) - u^{*}(t_0)\Vert ^2_H \le K \text {e}^{-\gamma (t - t_0)} \mathbb {E}\Vert u(2 t_0 - t, 2 t_0 - t, u_0)\\&\qquad - u^{*}(2 t_0 - t)\Vert ^2_H \\&\quad = K \text {e}^{-\gamma (t-t_0)} \mathbb {E}\Vert u^{*}(2 t_0 - t) - u_0\Vert _H^2 \rightarrow 0, \ t\rightarrow \infty , \end{aligned}$$

since \(\sup _{t \in \mathbb {R}} \mathbb {E}\Vert u^{*}(2 t_0 - t) - u_0\Vert _H^2 < \infty \). Thus, \(u(t_0, 2 t_0 -t, u_0)\) converges in probability to \(u^{*}(t_0)\), which, in turn, implies the weak convergence (54). The above simultaneously proves the existence and uniqueness of the invariant measure for (9).

The stationarity of \(u^*\) follows from [11, Proposition 11.5].