1 Introduction

The main purpose of the paper is to study the long time behavior of the stochastic damped Schrödinger equation

$$\begin{aligned} du + (\lambda u+i\Delta u +i \alpha |u|^{2\sigma }u)dt = \Phi dW_t \end{aligned}$$
(1.1)

in an unbounded domain. Our main result provides the existence of an invariant measure of the Markov semigroup associated with the Eq. (1.1) driven by an additive noise. In addition, using asymptotic compactness, we prove that the set of invariant measures is closed and convex, leading to the existence of an ergodic measure.

The problem of existence of an invariant measure for stochastic partial differential equations with dissipation in a bounded domain is by now relatively well-understood, with the construction of the invariant measure following the classical Krylov–Bogolyubov procedure. The smoothing properties of the equation and the boundedness of the domain guarantee the necessary compactness. For example, the existence of invariant measures for the reaction–diffusion equations, the Navier–Stokes equations, the complex Ginzburg–Landau equation, and the fractionally dissipated Euler equations was established in [6, 14, 15]. Also, for the primitive equations, the invariant measure was constructed in [16]. For invariant measures in unbounded domains, cf. [4, 12, 26].

Dissipation is not necessary for the construction of invariant measures. For instance, invariant measures were constructed for the stochastic nonlinear damped wave equation [2, 5] and for scalar conservation laws [10].

In the case of nondegenerate noise, a coupling method can be used to establish the existence and uniqueness of the ergodic measure. For instance, for the Schrödinger equation with nondegenerate noise on a bounded domain, Debussche and Odasso established in [9] the existence of a unique ergodic measure (cf. also [7, 10, 17, 24, 27, 28]).

The main goal of this paper is to address the existence of an invariant measure for the stochastic damped Schrödinger equation in an unbounded domain. The main difficulties are the lack of smoothing and compactness properties of the solution operator in finite time. For instance, the coupling method is not expected to work in this situation since Foias–Prodi type estimates, necessary for the approach, are not available.

In order to overcome these difficulties, we establish an asymptotic compactness property of the solution operator (cf. Lemma 3.5). Namely, we prove that for every sequence of solutions resulting from \(H^1\)-bounded initial conditions and for every sequence of times diverging to \(\infty \), there exists a subsequence of solutions and a sequence of times such that marginals of these solutions at these times converge in distribution in \(H^1\). For this purpose we employ the conserved quantities used classically for the deterministic analog of the equations. We also use the energy equation approach introduced in the deterministic setting by Ball [1]. His method was later extended to more general deterministic situations, in particular to establish the existence and regularity of attractors for the damped KdV equation [23, 32] and for the damped Schrödinger equation [18,19,20,21,22]. Two byproducts of the asymptotic compactness property established in this paper are the existence of an invariant measure for the stochastic Schrödinger equation and the compactness of the set of invariant measures. We note that the existence and uniqueness of solutions was established by de Bouard and Debussche in [8].

Previously, in [26], Kim obtained the existence of an invariant measure for the defocusing Schrödinger equation (the case \(\alpha =-1\)) in \(L^2\) for a restricted range of exponents \(\sigma <2/d\), where d is the space dimension. The proof in [26] is based on the existence of two invariants, yielding uniform bounds in \(L^2\) and \(H^1\) along with the Feller property in \(L^2\). Note that the \(L^2\)-Feller property can be obtained only when \(\sigma <2/d\) (cf. [25, p. 257]).

In order to extend the range of exponents to \(\sigma \ge 2/d\), we need to work in the space \(H^1\). However, we face the problem of the lack of an invariant controlling the \(H^2\) norm (yielding the necessary compactness). In order to circumvent this difficulty and obtain the necessary tightness of Krylov–Bogolyubov averages, we need to rely directly on the equation. The main contribution is to develop the stochastic version of the energy equation method due to Ball in the deterministic setting.

Therefore we are able to extend the range of exponents to \(\sigma <2/(d-2)\) in the defocusing case and also establish the existence of an invariant measure in the focusing case (i.e., \(\alpha =1\)) for \(\sigma <2/d\). Note that, in the defocusing case and in \(d=1,2\), there is no restriction on the degree of the polynomial in the nonlinearity.

Also, we emphasize that there are additional advantages when constructing the invariant measure in \(H^1\) as in the present paper. For instance, we have the asymptotic compactness property in \(H^{1}\) (cf. Theorem 6.1 below).

The paper is organized as follows. In Sect. 4, we prove an abstract tightness result that links the evolution of certain scalar quantities to the asymptotic compactness stated above. The main feature of the kth order scalar quantity is that it is equivalent to the \(H^k\) norm, while the drift of its expected square is continuous with respect to the \(H^{k-1}\) norm. We also make an Aldous-type continuity assumption (cf. (iii) in Definition 4.4) which allows us to use the Aldous criterion [3] for convergence of distributions in \(L_\mathrm{loc}^{2}\) to pass to a limiting martingale solution [11, 29, 30]. We note that while the linear part is assumed to be a Schrödinger type operator \(i\Delta \), our criterion can be used for more general linear operators as well after suitable adjustments. In Sect. 5, we apply this asymptotic compactness criterion to the Schrödinger equation by considering the first two classical Schrödinger invariants, and we prove the main tightness lemma. The paper is concluded by showing that the set of invariant measures is closed and convex, which implies the existence of an ergodic measure.

2 Notations

For functions \(u,v\in L^2({\mathbb {R}}^{d})=L^2({\mathbb {R}}^d;{\mathbb {C}})\), denote by \(\Vert u\Vert _{L^2}\) the \(L^2({\mathbb {R}}^d)\) norm of u and by \((u,v)=\int _{{\mathbb {R}}^d}u(x){{\overline{v}}}(x)dx\) the \(L^2\)-inner product of u and v. We fix an orthonormal basis \(\{e_i\}_{i\ge 0}\) of \(L^2({\mathbb {R}}^d)\).

For a Banach space B and with \(T>0\) and \(p\ge 1\), denote by \(L^p([0,T];B)\) the space of functions from [0, T] into B with integrable pth power over [0, T] and by C([0, T]; B) the set of continuous functions from [0, T] into B. Analogously, for \(p>0\), denote by \({\mathbb {L}}^p (\Omega ,B)\) the space of B-valued random variables with a finite pth moment.

Denote by \(\Delta =\sum _i\partial ^2_{i}\) the Laplace operator and by \(H^r ({\mathbb {R}}^d)\) the Sobolev space of functions u satisfying

$$\begin{aligned} \Vert u\Vert _{H^r}^{2}=\int _{{\mathbb {R}}^d} (1+|\xi |^{2})^{r}\, |{\hat{u}}(\xi )|^{2}\, d\xi <\infty \end{aligned}$$
(2.1)

with the inner product denoted by \((u,v)_{H^r}\). Write \({{{\mathcal {B}}}} (H^1({\mathbb {R}}^d))\) for the set of Borel measurable subsets of \(H^1({\mathbb {R}}^d)\). Also, denote by \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\) the space of locally square integrable functions, which, endowed with the usual metric, is a complete metric space.
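A standard choice for the metric on \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\) (not made explicit here; this is the usual Fréchet-space construction) is

$$\begin{aligned} d(u,v)=\sum _{j=1}^{\infty } 2^{-j}\, \frac{\Vert u-v\Vert _{L^2(B(0,j))}}{1+\Vert u-v\Vert _{L^2(B(0,j))}} ,\end{aligned}$$

under which \(u_n\rightarrow u\) if and only if \(u_n\rightarrow u\) in \(L^2(K)\) for every compact \(K\subseteq {\mathbb {R}}^d\).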

For a Hilbert space H, we write \({\mathrm{HS}(L^2,H)}\) for the space of linear operators \(\Phi :L^2({\mathbb {R}}^d) \rightarrow H\) with finite Hilbert–Schmidt norm

$$\begin{aligned} \Vert \Phi \Vert _{\mathrm{HS}(L^2,H)} = \left( \sum _{i=1}^\infty \Vert \Phi e_i\Vert _H^2\right) ^{1/2} .\end{aligned}$$
(2.2)
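As an illustration (not part of the original text), the basis-independence of the Hilbert–Schmidt norm (2.2) can be checked numerically in a finite-dimensional analogue, where \(\Phi \) is a matrix and the sum in (2.2) may run over any orthonormal basis:

```python
import numpy as np

# Finite-dimensional sketch of the Hilbert-Schmidt norm (2.2): for a
# matrix Phi, summing ||Phi e_i||^2 over an orthonormal basis {e_i}
# gives trace(Phi^* Phi), independently of the chosen basis.
rng = np.random.default_rng(0)
n = 4
Phi = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Sum over the standard basis vectors e_i.
hs_standard = sum(np.linalg.norm(Phi[:, i]) ** 2 for i in range(n))

# Sum over a different orthonormal basis (columns of a random orthogonal Q).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
hs_rotated = sum(np.linalg.norm(Phi @ Q[:, i]) ** 2 for i in range(n))

assert np.isclose(hs_standard, np.trace(Phi.conj().T @ Phi).real)
assert np.isclose(hs_standard, hs_rotated)
```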

3 The Schrödinger equation

We fix a filtered probability space \((\Omega ,{\mathbb {F}}, \{{\mathbb {F}}_t\},{\mathbb {P}})\) carrying a countable family of independent Brownian motions \(\{B^i_t\}_{i\in {\mathbb {N}},t\ge 0}\) adapted to the filtration \(\{{\mathbb {F}}_t\}\), and define the Wiener process

$$\begin{aligned} W_t=\sum _{i\in {\mathbb {N}}}e_i B_t^i.\end{aligned}$$
(3.1)

Fix \(\lambda >0\) and \(\alpha \in \{-1,1\}\). In this paper, we investigate the long time behavior of solutions of the stochastic damped nonlinear Schrödinger equation

$$\begin{aligned} du + (\lambda u+i\Delta u + i \alpha |u|^{2\sigma } u )dt = \Phi dW_t, \end{aligned}$$
(3.2)

on the space-time domain \([0,\infty )\times {\mathbb {R}}^d\) with an additive noise, by establishing the existence of an invariant measure and the asymptotic tightness of solutions of the equation. We emphasize that, unlike in [28, Assumption H1], our problem in the whole space \({\mathbb {R}}^d\) does not allow for any compact Sobolev embeddings.

Recall the functionals

$$\begin{aligned} M(v)&=\Vert v\Vert ^2_{L^2}\end{aligned}$$
(3.3)
$$\begin{aligned} H(v)&=\frac{1}{2}\int _{{\mathbb {R}}^d} |\nabla v(x)|^2 dx -\frac{\alpha }{2\sigma +2}\int _{{\mathbb {R}}^d}|v(x)|^{2\sigma +2}dx \end{aligned}$$
(3.4)

which are the classical invariant quantities for the Schrödinger equation. The existence of solutions for the Eq. (3.2) was proven in [8]. In order to be able to apply the existence results in [8], we make the following assumptions.
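As a quick numerical illustration (the profile v below is our own choice, not taken from the paper), the functionals M and H can be evaluated on a grid and checked against closed-form Gaussian integrals:

```python
import numpy as np

# Evaluate M and H from (3.3)-(3.4) for the test profile v(x) = exp(-x^2)
# in dimension d = 1, with the defocusing choice alpha = -1 and sigma = 1.
# The profile and parameters are illustrative only.
alpha, sigma = -1, 1
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
v = np.exp(-x ** 2)
grad_v = np.gradient(v, dx)

M = np.sum(np.abs(v) ** 2) * dx
H = 0.5 * np.sum(np.abs(grad_v) ** 2) * dx \
    - alpha / (2 * sigma + 2) * np.sum(np.abs(v) ** (2 * sigma + 2)) * dx

# Gaussian integrals: M = sqrt(pi/2), int |v'|^2 = sqrt(pi/2),
# int |v|^4 = sqrt(pi)/2, hence H = sqrt(pi/2)/2 + sqrt(pi)/8.
assert np.isclose(M, np.sqrt(np.pi / 2), atol=1e-6)
assert np.isclose(H, np.sqrt(np.pi / 2) / 2 + np.sqrt(np.pi) / 8, atol=1e-4)
```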

Assumption 3.1

(i) If \(\alpha =-1\) and \(d\ge 3\), then we require \(0\le \sigma <2/(d-2)\).

If \(\alpha =-1\) and \(d=1,2\), then we require \(\sigma \ge 0\).

If \(\alpha =1\), then we require \(0\le \sigma <2/d\).

(ii) \(\Phi \in \mathrm{HS}(L^2({\mathbb {R}}^d); H^1({\mathbb {R}}^d))\).

We now recall the existence result from [8, Theorem 3.4, Propositions 3.2 and 3.3].

Theorem 3.2

Under Assumptions 3.1, for every \({\mathbb {F}}_0\)-measurable, \(H^1({\mathbb {R}}^d)\)-valued random variable \(u_0\), there exists an \(H^1({\mathbb {R}}^d)\)-valued continuous solution \(\{u_t\}_{t\ge 0}\) of (3.2) with the initial condition \(u_0\). Additionally, the quantities M and H evolve as

$$\begin{aligned} dM(u_s) +2\lambda M(u_s)ds =2 \sum _{i} \mathrm{Re} \left( u_s,{\Phi e_i}\right) dB^i(s)+ \Vert \Phi \Vert _{\mathrm{HS}(L^2; L^2)}^2ds \end{aligned}$$
(3.5)

and

$$\begin{aligned}&dH(u_s)+2\lambda H(u_s)ds \nonumber \\&\quad = \frac{\alpha \lambda \sigma }{\sigma +1} \int |u(s,x)|^{2\sigma +2}dx\, ds - \sum _i \mathrm{Re}\left( \Delta u(s) +\alpha |u(s)|^{2\sigma } u(s),\Phi e_i \right) dB^i_s\nonumber \\&\qquad +\left( \frac{\Vert \nabla \Phi \Vert ^2_{\mathrm{HS}(L^2; L^2)}}{2} -\frac{\alpha \Vert |u(s)|^\sigma \Phi \Vert ^2_{\mathrm{HS}(L^2; L^2)}}{2}\right) ds \nonumber \\&\qquad -{\sigma \alpha }\sum _i \left( |u(s)|^{2\sigma -2},(\mathrm{Re}({{\overline{u}}}(s) \Phi e_i))^2\right) ds, \end{aligned}$$
(3.6)

where \(|u(s)|^\sigma \Phi \) is the operator that to a function v associates the function \(|u(s)|^\sigma \Phi v\).

Note that the results in [8] are given for \(\lambda =0\) but one can easily pass from \(\lambda =0\) to any \(\lambda > 0\).
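Taking expectations in (3.5) gives the mass balance \(\frac{d}{dt}{\mathbb {E}}[M(u_t)]=-2\lambda {\mathbb {E}}[M(u_t)]+\Vert \Phi \Vert ^2_{\mathrm{HS}(L^2;L^2)}\). A minimal Monte Carlo sketch of this balance for a finite-dimensional toy analogue (the nonlinearity is dropped and all parameter values are our own; this is illustrative only, not a scheme from the paper):

```python
import numpy as np

# Toy analogue of the mass balance (3.5): the damped linear complex SDE
#   du = -(lam + 1j) * u dt + phi dW    (nonlinearity dropped).
# Taking expectations gives  d/dt E[|u|^2] = -2*lam*E[|u|^2] + phi^2,
# so E[|u_T|^2] = e^{-2 lam T} |u_0|^2 + (1 - e^{-2 lam T}) phi^2 / (2 lam).
rng = np.random.default_rng(1)
lam, phi, T, dt, paths = 0.5, 0.3, 2.0, 2e-3, 10000
steps = int(T / dt)

u = np.ones(paths, dtype=complex)  # |u_0|^2 = 1 for every path
for _ in range(steps):
    # Complex noise increment with E|dW|^2 = dt.
    dW = np.sqrt(dt / 2) * (rng.standard_normal(paths)
                            + 1j * rng.standard_normal(paths))
    u = u - (lam + 1j) * u * dt + phi * dW

empirical = np.mean(np.abs(u) ** 2)
exact = np.exp(-2 * lam * T) + (1 - np.exp(-2 * lam * T)) * phi ** 2 / (2 * lam)
assert abs(empirical - exact) < 0.02
```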

3.1 The semigroup

Let \(u_0\in H^1({\mathbb {R}}^d)\) be a deterministic initial condition, and let u be the corresponding solution of (3.2). For all \(B\in {{{\mathcal {B}}}} (H^1({\mathbb {R}}^d) )\) we define the transition probabilities of the equation by

$$\begin{aligned} P_t (u_0,B)={\mathbb {P}}(u_t\in B) .\end{aligned}$$
(3.7)

For any probability measure \(\nu \) on \(H^1({\mathbb {R}}^d)\), we denote by \((\nu P_t)(\cdot )=\int _{H^1({\mathbb {R}}^d)} P_t(v,\cdot )\nu (dv)\) the distribution at time t of the solution of (3.2) with the initial condition having the distribution \(\nu \).

For any function \(\xi \in C_b(H^1({\mathbb {R}}^d);{\mathbb {R}})\) and \(t\ge 0\), denote

$$\begin{aligned} P_t \xi (u_0) ={\mathbb {E}}\left[ \xi (u_t)\right] =\int _{H^1({\mathbb {R}}^d)} \xi (v)P_t(u_0, dv) .\end{aligned}$$
(3.8)

Definition 3.3

Let \(\mu \) be a probability measure on \(H^1({\mathbb {R}}^d)\). We say that \(\mu \) is an invariant measure for \(P_t\) if we have

$$\begin{aligned} \int _{H^1({\mathbb {R}}^d)} \xi (v)\mu ( dv)=\int _{H^1({\mathbb {R}}^d)} P_t\xi (v') \mu ( dv') \end{aligned}$$
(3.9)

for all \(\xi \in C_b(H^1({\mathbb {R}}^d);{\mathbb {R}})\) and \(t\ge 0\).

3.2 Main results concerning the Schrödinger equation

The following statement is the main result of this paper.

Theorem 3.4

Under Assumptions 3.1, there exists an invariant measure for \(P_t\).

The main ingredient in the proof is the next lemma.

Lemma 3.5

Under Assumptions 3.1 the following two tightness assertions hold.

(i) For all sequences of times \(t_n\rightarrow \infty \) and \({\mathbb {F}}_0\)-measurable initial conditions \(u^n_0\in H^1({\mathbb {R}}^d)\) with distributions \(\nu ^n\) satisfying

$$\begin{aligned} \sup _n\int _{H^1({\mathbb {R}}^d)}\Vert v\Vert _{H^1}^{4}\,\nu ^n(dv)\le R \end{aligned}$$

for some \(R>0\), the family of measures

$$\begin{aligned} \bigl \{ (\nu ^n P_{t_n})(\cdot ) : n\in {\mathbb {N}}\bigr \} \end{aligned}$$
(3.10)

on \(H^{1}({{\mathbb {R}}^d})\) is tight.

(ii) For all compact sets \(K\subseteq H^1({\mathbb {R}}^d)\) the family of probabilities

$$\begin{aligned} \left\{ P_{s}(v,\cdot ):s\in [0,1],\,v\in K\right\} \end{aligned}$$
(3.11)

on \(H^1({\mathbb {R}}^d)\) is tight.

Assuming the lemma, we now prove the main theorem. The lemma is then proven in Sect. 5 below.

Proof of Theorem 3.4

An invariant measure is constructed using the classical Krylov–Bogolyubov theorem, which requires the Feller property of the semigroup and the tightness of averaged measures

$$\begin{aligned} \mu _n(\cdot ):=\frac{1}{n}\int _0^n P_t(0,\cdot ) dt.\end{aligned}$$
(3.12)

The Feller property is a consequence of [8, Proposition 3.5]. Thus in order to conclude the proof, we only need to show tightness of the family of measures \(\mu _{n}\).

Let \(\epsilon >0\). Lemma 3.5(i), applied to the family \(\{P_{k}(0,\cdot ):k\in {\mathbb {N}}\}\), gives the existence of a compact set \(K_\epsilon \subseteq H^1({\mathbb {R}}^d)\) such that

$$\begin{aligned} \sup _{k} P_k(0,K_\epsilon ^c) \le \frac{\epsilon }{2} .\end{aligned}$$
(3.13)

We then consider the family of probabilities \(\{P_s(v,\cdot ): s\in [0,1],\, v\in K_\epsilon \}\). By the second part of Lemma 3.5, this family is tight. Therefore, there exists another compact set \(A_\epsilon \subseteq H^1({\mathbb {R}}^d)\) such that

$$\begin{aligned} \sup _{ s\in [0,1], v\in K_\epsilon } P_s(v,A_\epsilon ^c) \le \frac{\epsilon }{2} .\end{aligned}$$
(3.14)

By a direct computation

$$\begin{aligned} \mu _n(A^c_\epsilon )&=\frac{1}{n}\int _0^n P_t(0,A_\epsilon ^c) dt = \frac{1}{n}\sum _{k=0}^{n-1} \int _{k}^{k+1} P_t(0,A_\epsilon ^c) dt \end{aligned}$$
(3.15)
$$\begin{aligned}&=\frac{1}{n}\sum _{k=0}^{n-1} \int _{k}^{k+1} \int _{H^1({\mathbb {R}}^d)}P_{k} (0,dv) P_{t-k}(v,A_\epsilon ^c) dt \end{aligned}$$
(3.16)
$$\begin{aligned}&=\frac{1}{n}\sum _{k=0}^{n-1} \int _{k}^{k+1} \left( \int _{H^1({\mathbb {R}}^d)\cap K_\epsilon ^c}P_{k} (0,dv) P_{t-k}(v,A_\epsilon ^c)\right. \nonumber \\&\qquad \qquad \qquad \qquad \quad \left. +\int _{H^1({\mathbb {R}}^d)\cap K_\epsilon }P_{k} (0,dv) P_{t-k}(v,A_\epsilon ^c) \right) dt \end{aligned}$$
(3.17)

whence

$$\begin{aligned} \mu _n(A^c_\epsilon )&\le \frac{1}{n}\sum _{k=0}^{n-1} \left( P_{k} (0,K_\epsilon ^c) +P_{k} (0,K_\epsilon ) \sup _{ s\in [0,1],\, v\in K_\epsilon } P_s(v,A_\epsilon ^c)\right) \end{aligned}$$
(3.18)
$$\begin{aligned}&\le \frac{1}{n}\sum _{k=0}^{n-1} \left( P_{k} (0,K_\epsilon ^c) +\sup _{ s\in [0,1],\, v\in K_\epsilon } P_s(v,A_\epsilon ^c)\right) \le \epsilon .\end{aligned}$$
(3.19)

We have thus shown that the set of measures \(\{\mu _n\}\) is tight, concluding the proof of the theorem. \(\square \)
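The averaging mechanism behind the Krylov–Bogolyubov measures (3.12) can be illustrated in a finite-state toy setting (the chain below is our own example): the time averages of the transition probabilities started from a fixed state converge to the invariant distribution.

```python
import numpy as np

# Finite-state illustration of the Krylov-Bogolyubov averages (3.12):
# mu_n = (1/n) * sum_{t < n} P_t(0, .) for a 2-state Markov chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

delta0 = np.array([1.0, 0.0])       # start from state 0
n = 5000
mu_n = np.zeros(2)
dist = delta0.copy()
for _ in range(n):
    mu_n += dist / n                # accumulate the time average
    dist = dist @ P                 # one step of the semigroup

# The invariant distribution solves pi = pi P; here pi = (0.8, 0.2).
assert np.allclose(mu_n, [0.8, 0.2], atol=1e-3)
```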

The rest of the paper is devoted to the proof of Lemma 3.5 and to establishing the compactness of the set of invariant measures.

4 An abstract tightness result

In this section, we give certain distributional convergence results that we use below to prove Lemma 3.5.

Lemma 4.1

Let \(k\in {{\mathbb {N}}}_0\), and let \(\xi _n\) and \(\xi \) be \(H^k({\mathbb {R}}^d)\)-valued square integrable random variables such that \(\xi _n \rightarrow \xi \) in distribution in \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\). Assume that \({\mathbb {E}}[\Vert \xi _n\Vert ^2_{H^k}]\rightarrow {\mathbb {E}}[\Vert \xi \Vert ^2_{H^k}]\) as \(n\rightarrow \infty \) and suppose that the family \(\{\Vert \xi _n\Vert ^2_{H^k}:n\in {{\mathbb {N}}}\}\) is uniformly integrable. Then \(\xi _n\) converges to \(\xi \) in distribution in \(H^k({\mathbb {R}}^d)\).

Note that when \(k=0\), we have \(H^0({\mathbb {R}}^d)=L^2({\mathbb {R}}^d)\).

Proof of Lemma 4.1

Let \(\{f_i\}\) be a complete orthonormal system for \(H^k({\mathbb {R}}^d)\) consisting of smooth compactly supported functions. We first claim that

$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _n {\mathbb {E}}\left[ \sum _{i=N}^\infty |(\xi _n,f_i)_{H^k}|^2\right] =0 \end{aligned}$$
(4.1)

which then quickly implies the asserted convergence. Let \(\epsilon >0\). By the uniform integrability assumption, there exists \(R>0\) such that

$$\begin{aligned} \sup _n {\mathbb {E}}\left[ \Vert \xi _n\Vert ^2_{H^k}{{\mathbf {1}}}_{ \{\Vert \xi _n\Vert ^2_{H^k}\ge R\}}\right] \le \epsilon \end{aligned}$$
(4.2)

and, by possibly enlarging R, we may also assume that

$$\begin{aligned} {\mathbb {E}}\left[ \Vert \xi \Vert ^2_{H^k}{{\mathbf {1}}}_{ \{\Vert \xi \Vert ^2_{H^k}\ge R\}}\right] \le \epsilon .\end{aligned}$$
(4.3)

For all \(N\in {{\mathbb {N}}}\), the convergence in distribution in \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\) and the fact that the functions \(f_i\) have compact support imply

$$\begin{aligned} {\mathbb {E}}\left[ \left( \sum _{i=1}^N |(\xi _n,f_i)_{H^k}|^2\right) \wedge R\right] \rightarrow {\mathbb {E}}\left[ \left( \sum _{i=1}^N |(\xi ,f_i)_{H^k}|^2\right) \wedge R\right] \end{aligned}$$
(4.4)

as \(n\rightarrow \infty \). Since

$$\begin{aligned}&\left| {\mathbb {E}}\left[ \sum _{i=1}^N |(\xi _n,f_i)_{H^k}|^2\right] -{\mathbb {E}}\left[ \sum _{i=1}^N |(\xi ,f_i)_{H^k}|^2\right] \right| \nonumber \\&\quad \le \left| {\mathbb {E}}\left[ \left( \sum _{i=1}^N |(\xi _n,f_i)_{H^k}|^2\right) \wedge R\right] -{\mathbb {E}}\left[ \left( \sum _{i=1}^N |(\xi ,f_i)_{H^k}|^2\right) \wedge R\right] \right| \nonumber \\&\qquad + {\mathbb {E}}\left[ \Vert \xi _n\Vert ^2_{H^k}{{\mathbf {1}}}_{ \{\Vert \xi _n\Vert ^2_{H^k}\ge R\}}\right] + {\mathbb {E}}\left[ \Vert \xi \Vert ^2_{H^k}{{\mathbf {1}}}_{ \{\Vert \xi \Vert ^2_{H^k}\ge R\}}\right] \end{aligned}$$
(4.5)

we have that

$$\begin{aligned} \lim _n {\mathbb {E}}\left[ \sum _{i=1}^N |(\xi _n,f_i)_{H^k}|^2\right] ={\mathbb {E}}\left[ \sum _{i=1}^N |(\xi ,f_i)_{H^k}|^2\right] .\end{aligned}$$
(4.6)

This convergence, combined with the assumption \({\mathbb {E}}[\Vert \xi _n\Vert ^2_{H^k}]\rightarrow {\mathbb {E}}[\Vert \xi \Vert ^2_{H^k}]\), implies that for every fixed N,

$$\begin{aligned} {\mathbb {E}}\left[ \sum _{i=N+1}^\infty |(\xi _n,f_i)_{H^k}|^2\right] \rightarrow {\mathbb {E}}\left[ \sum _{i=N+1}^\infty |(\xi ,f_i)_{H^k}|^2\right] .\end{aligned}$$
(4.7)

Since \(\xi \) is \(H^k({\mathbb {R}}^d)\)-square integrable, there is \(N_0\in {{\mathbb {N}}}_0\) such that

$$\begin{aligned} {\mathbb {E}}\left[ \sum _{i=N_0+1}^\infty |(\xi ,f_i)_{H^k}|^2\right] \le \frac{\epsilon }{2} . \end{aligned}$$
(4.8)

Then, using (4.7), there exists \(n_\epsilon \in {{\mathbb {N}}}\) for which

$$\begin{aligned} \sup _{n\ge n_\epsilon } {\mathbb {E}}\left[ \sum _{i=N_0+1}^\infty |(\xi _n,f_i)_{H^k}|^2\right] \le \epsilon .\end{aligned}$$
(4.9)

Each of the finitely many random variables \(\Vert \xi _n\Vert _{H^k}\), \(n=1,\ldots , n_\epsilon -1\), is square integrable. Therefore,

$$\begin{aligned} \lim _{N\rightarrow \infty } {\mathbb {E}}\left[ \sum _{i=N}^\infty |(\xi _n,f_i)_{H^k}|^2\right] =0 \mathrm{,\quad {}} n\le n_{\epsilon }-1 .\end{aligned}$$
(4.10)

By (4.9) and (4.10), there exists \(N_1\ge N_0\) such that

$$\begin{aligned} \sup _{n\in {{\mathbb {N}}}} {\mathbb {E}}\left[ \sum _{i=N_1+1}^\infty |(\xi _n,f_i)_{H^k}|^2\right] \le \epsilon . \end{aligned}$$
(4.11)

Therefore, (4.1) is established.

By [31, Theorem 1.13], the bound (4.1) implies the tightness of the laws of \(\{\xi _n\}\) in \(H^k({\mathbb {R}}^d)\). Note that any limiting measure can only be the distribution of \(\xi \). Thus

$$\begin{aligned} \xi _n\rightarrow \xi \end{aligned}$$
(4.12)

in distribution in \(H^k({\mathbb {R}}^d)\). \(\square \)

We shall work on the space \({{{\mathcal {Z}}}}=C([0,T]; L^{2}_{\mathrm{loc}}({\mathbb {R}}^{d}))\). Denote by z the canonical process on this space and by \({{{\mathcal {D}}}}\) its right-continuous filtration. We state our main theorem for an SPDE of the form

$$\begin{aligned}&du(t) = \bigl ( -i \Delta u(t)+b(u(t)) \bigr )dt +\Phi dW_t \end{aligned}$$
(4.13)

with

$$\begin{aligned}&u(t)\in L^2_{\mathrm{loc}}({\mathbb {R}}^{d}) \end{aligned}$$

where \(b:{\mathbb {C}}\rightarrow {\mathbb {C}}\) is, for simplicity, a finite sum of terms of the form \(u^{m} |u|^{a}\) where \(m\in {{\mathbb {N}}_0}\) and \(a\ge 0\) are such that

$$\begin{aligned} m+a<\frac{2 d}{d-2k} \end{aligned}$$

if \(d>2k\). We need this bound to establish that for smooth compactly supported \(\phi \) the mapping

$$\begin{aligned} v\in L^2_{\mathrm{loc}}({\mathbb {R}}^d)\mapsto \int b(v(x))\phi (x)dx \end{aligned}$$
(4.14)

is continuous on bounded sets of \(H^k\). This can be established using the Gagliardo–Nirenberg and Hölder inequalities. We note that, under Assumptions 3.1, this upper bound for \(m+a\) is satisfied for the Eq. (3.2) with \(k=1\).
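For the record, the exponent bookkeeping can be verified mechanically: assuming the \(H^1\)-subcritical range \(\sigma <2/(d-2)\) for \(d\ge 3\) (the range of Assumption 3.1 in the defocusing case), the nonlinearity \(u|u|^{2\sigma }\) of (3.2), i.e., \(m=1\) and \(a=2\sigma \), satisfies \(m+a<2d/(d-2k)\) with \(k=1\):

```python
# Illustrative arithmetic check that m + a < 2d/(d-2k) holds with k = 1
# for the nonlinearity u|u|^{2 sigma} of (3.2) (m = 1, a = 2*sigma),
# assuming the H^1-subcritical range sigma < 2/(d-2) for d >= 3.
for d in range(3, 20):
    sigma_sup = 2 / (d - 2)          # supremum of admissible exponents
    m, a_sup = 1, 2 * sigma_sup      # m + a approaches (d+2)/(d-2)
    bound = 2 * d / (d - 2)          # the bound 2d/(d-2k) with k = 1
    # Strict inequality: (d+2)/(d-2) < 2d/(d-2) whenever d > 2.
    assert m + a_sup < bound
```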

Definition 4.2

A measure \(\nu \) on \({{{\mathcal {Z}}}}\) is a martingale solution of the Eq. (4.13) if for all smooth compactly supported functions \(\phi \)

$$\begin{aligned} \int _0^T \bigl (|b(z_s)|,|\phi |\bigr ) ds <\infty , \, \nu \text{-a.s. } \end{aligned}$$
(4.15)

and if

$$\begin{aligned} M^\phi _t =(z_t-z_0,\phi ) -\int _0^t (-i\Delta z_s +b(z_s),\phi )ds \end{aligned}$$
(4.16)

and

$$\begin{aligned} (M^\phi _t)^2-\int _0^t \sum _i \left( \Phi e_i,\phi \right) ^2ds \end{aligned}$$
(4.17)

are \(\nu \)-local martingales. We say that \(\nu \) is an \(H^k\)-square integrable martingale solution if

$$\begin{aligned} \sup _{t\in [0,T]} {\mathbb {E}}^\nu \left[ \Vert z_t\Vert _{H^k}^2\right] <\infty .\end{aligned}$$
(4.18)

Remark 4.3

(i) Note that a martingale solution can be obtained from any strong solution of (4.13). Indeed, let u be a solution of (4.13) on the interval [0, T]. Define the measure

$$\begin{aligned} \nu (dz) =\int _\Omega \delta _{\{\{u_{s}(\omega )\}_{s\in [0,T]}\}}(dz){\mathbb {P}}(d\omega ) \end{aligned}$$
(4.19)

meaning the measure on \({{{\mathcal {Z}}}}\) such that for all continuous bounded \(F:{{{\mathcal {Z}}}}\rightarrow {\mathbb {R}}\) we have

$$\begin{aligned} \int _{{{\mathcal {Z}}}}F(z)\nu (dz)= {\mathbb {E}}[F(\{u(s)\}_{s\in [0,T]})] .\end{aligned}$$
(4.20)

In order to facilitate the statement of the main result of this section, we introduce the concept of the \(H^{k}\)-norm evolution property.

Definition 4.4

Let \(k\in {{\mathbb {N}}}\). The Eq. (4.13) has the \(H^k\)-norm evolution property if for \(i=0,\ldots , k\) there exist continuous functions \(F_i:L^2_{\mathrm{loc}}\rightarrow {\mathbb {R}}\), \({{\widetilde{F}}}_i :{\mathbb {R}}\times L^2_{\mathrm{loc}}\rightarrow {\mathbb {R}}\), and \( G_i :{\mathbb {R}}\times L^2_{\mathrm{loc}}\rightarrow {\mathbb {R}}\) satisfying the following conditions.

(i) For all t and r, the functions \(F_i(\cdot )\), \({{\widetilde{F}}}_i(t,\cdot )\), and \(G_i(r,\cdot )\) are continuous in the \(H^{i-1}\)-topology on bounded sets of \(H^{i}\) (for \(i=0\) we require the continuity in \(L^2_{\mathrm{loc}}\) on bounded sets of \(L^2\)).

(ii) For all t and r, the functions \(F_i(\cdot )\), \({{\widetilde{F}}}_i(t,\cdot )\), and \(G_i(r,\cdot )\) have at most polynomial growth in the \(H^i\)-norm.

(iii) For all \(H^k\) square integrable martingale solutions \(\nu \) of (4.13), the conservation equality

$$\begin{aligned} {\mathbb {E}}^\nu \left[ \Vert z_t\Vert ^2_{H^i}\right] -e^{-2\lambda (t-s)}{\mathbb {E}}^\nu \left[ \Vert z_s\Vert ^2_{H^i}\right]= & {} {\mathbb {E}}^\nu \left[ F_i(z_t)-\widetilde{F}_i(t-s,z_s)\right] \nonumber \\&+\int _s^t {\mathbb {E}}^\nu \left[ G_i(t-r,z_r)\right] dr \end{aligned}$$
(4.21)

holds for all \(t\ge s\).
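To illustrate (4.21) in the simplest case, consider the Eq. (3.2) with \(i=0\). Taking expectations in the mass balance (3.5) and applying the variation-of-constants formula gives

$$\begin{aligned} {\mathbb {E}}^\nu \left[ \Vert z_t\Vert ^2_{L^2}\right] -e^{-2\lambda (t-s)}{\mathbb {E}}^\nu \left[ \Vert z_s\Vert ^2_{L^2}\right] =\int _s^t e^{-2\lambda (t-r)}\Vert \Phi \Vert ^2_{\mathrm{HS}(L^2;L^2)}\, dr ,\end{aligned}$$

which is (4.21) at order \(i=0\) with \(F_0=\widetilde{F}_0=0\) and \(G_0(r,v)=e^{-2\lambda r}\Vert \Phi \Vert ^2_{\mathrm{HS}(L^2;L^2)}\).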

We now state a theorem, which, combined with Lemma 4.1, gives us a tightness result needed for the Krylov–Bogolyubov procedure.

Theorem 4.5

Assume that the Eq. (4.13) has the \(H^k\)-norm evolution property, and let \(u^n\) be a sequence of strong solutions of (4.13) satisfying the following conditions:

(a) We have a uniform bound

$$\begin{aligned} \gamma= & {} \sup _{r\ge 0}\sup _{k\ge i\ge 0} \sup _{n,t} {\mathbb {E}}\left[ \Vert b(u^n_t)\Vert _{L^1}^2+| F_i(u^n_t)|^2+|\widetilde{F}_i(r,u^n_t)|^2\right. \nonumber \\&+\left. |G_i(r,u^n_t)|^2+ \Vert u^n_t\Vert ^4_{H^k}\right] <\infty .\end{aligned}$$
(4.22)

(b) For every sequence of stopping times \(T_n\) and positive numbers \(\delta _n\) such that \(\delta _n\rightarrow 0\) as \(n\rightarrow \infty \), we have

$$\begin{aligned} {\mathbb {E}}\left[ \Vert u^n_{T_n+\delta _n}-u^n_{T_n}\Vert ^2_{L^2}\right] \rightarrow 0 \text{ as } n\rightarrow \infty .\end{aligned}$$
(4.23)

(c) There exists a sequence \(t_n\rightarrow \infty \) and an \(H^k\)-valued random variable \(\xi \) such that \(u^n_{t_n}\rightarrow \xi \) in distribution in \(L^2_{\mathrm{loc}}\).

Then \({\mathbb {E}}[\Vert u^n_{t_n}\Vert ^2_{H^k}]\rightarrow {\mathbb {E}}[\Vert \xi \Vert ^2_{H^k}]\) as \(n\rightarrow \infty \).

Remark 4.6

The powers in (4.22) have been chosen so that we can obtain the uniform integrability of the relevant families, after which the de la Vallée Poussin theorem can be applied.
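The quantitative form of this remark is the Chebyshev-type bound \({\mathbb {E}}[X^2{{\mathbf {1}}}_{\{X^2\ge R\}}]\le {\mathbb {E}}[X^4]/R\), which holds pointwise and hence also empirically; a small numerical sanity check (the sample distribution is our own choice):

```python
import numpy as np

# Uniform integrability from a fourth-moment bound:
#   E[X^2 1_{X^2 >= R}] <= E[X^4] / R,
# since X^2 <= X^4 / R pointwise on the event {X^2 >= R}.
rng = np.random.default_rng(2)
X = np.abs(rng.standard_normal(10 ** 6))

fourth = np.mean(X ** 4)
for R in (1.0, 4.0, 16.0):
    tail = np.mean(X ** 2 * (X ** 2 >= R))
    assert tail <= fourth / R
```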

Proof of Theorem 4.5

Since

$$\begin{aligned} \liminf _n {\mathbb {E}}[\Vert u^n_{t_n}\Vert ^2_{H^k}]\ge {\mathbb {E}}[\Vert \xi \Vert ^2_{H^k}] \end{aligned}$$
(4.24)

we only need to prove

$$\begin{aligned} \limsup _n {\mathbb {E}}[\Vert u^n_{t_n}\Vert ^2_{H^k}]\le {\mathbb {E}}[\Vert \xi \Vert ^2_{H^k}] .\end{aligned}$$
(4.25)

We establish (4.25) by induction on k, reasoning by contradiction at each step. For \(k=0\) (cf. Step 1), we use (4.23) and the Aldous criterion to obtain compactness of the measures induced by the processes \(u^n\). Then, using (4.21) for a limiting measure, we obtain a contradiction.

Step 1: First we prove (4.25) for \(k=0\). Suppose that (4.25) does not hold. Then, passing to a subsequence, there exists \(\epsilon >0\) such that

$$\begin{aligned} {\mathbb {E}}[ \Vert u^n_{t_n}\Vert ^2_{L^2}] \ge {\mathbb {E}}[ \Vert \xi \Vert ^2_{L^2}]+\epsilon \mathrm{,\quad {}} n\in {{\mathbb {N}}}.\end{aligned}$$
(4.26)

We now pick \(T>0\) such that \(3 \gamma ^{1/2} e^{-2\lambda T}\le \epsilon \). Note that, by (4.22), the sequence \(\{u^n_{t_n-T}\}\) satisfies

$$\begin{aligned} \sup _{n} {\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^4_{L^2}\right] \le \gamma .\end{aligned}$$
(4.27)

Therefore, passing to a further subsequence, there exists an \(L^2\)-valued random variable \(\xi _{-T}\) such that \(u^n_{t_n-T}\) converges in distribution in \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\) to \(\xi _{-T}\). Define a sequence of measures \(\nu ^n\) on \({{{\mathcal {Z}}}}\) by

$$\begin{aligned} \nu ^n(dz) =\int _\Omega \delta _{\{\{u^n_{t_n-T+r}(\omega )\}_{r\in [0,T]}\}}(dz){\mathbb {P}}(d\omega ).\end{aligned}$$
(4.28)

The assumption (4.23) and the Aldous criterion [3, Theorem 16.10] imply that the sequence \(\{\nu ^n\}_{n=1}^{\infty }\) is tight on \({{{\mathcal {Z}}}}\). Taking a further subsequence, we obtain the existence of \(\nu \) such that

$$\begin{aligned} {\mathbb {E}}^{\nu ^n}\left[ F(z)\right] ={\mathbb {E}}\left[ F(\{u^n_{t_n-T+s}\}_{s\in [0,T]})\right] \rightarrow {\mathbb {E}}^{\nu }\left[ F(z)\right] \text{ as } n\rightarrow \infty \mathrm{,\quad {}} F\in C_{b}({{{\mathcal {Z}}}}) .\nonumber \\ \end{aligned}$$
(4.29)

Identifying the marginals, we easily see that the distribution of \(z_T\) under \(\nu \) is the same as the distribution of \(\xi \). Similarly, the distribution of \(z_0\) under \(\nu \) is the same as the distribution of \(\xi _{-T}\). We write Eq. (4.21) for the measure \(\nu ^n\) between the times \(t_n-T\) and \(t_n\):

$$\begin{aligned}&{\mathbb {E}}\left[ \Vert u^n_{t_n}\Vert ^2_{L^2}\right] -e^{-2\lambda T }{\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert _{L^2}^2\right] ={\mathbb {E}}\left[ F_0(u^n_{t_n})-\widetilde{F}_0(T,u^n_{t_n-T})\right] \nonumber \\&\quad +\int _0^T {\mathbb {E}}\left[ G_0(T-r,u^n_{t_n-T+r})\right] dr.\end{aligned}$$
(4.30)

We claim that, by the assumptions (i), (ii), and (a), we have sufficient integrability and continuity on the right-hand side of the equation to use the convergence of \(\nu ^n\) to \(\nu \) and pass to the limit, obtaining

$$\begin{aligned}&\lim _n \Bigl ( {\mathbb {E}}\left[ \Vert u^n_{t_n}\Vert ^2_{L^2}\right] -e^{-2\lambda T }{\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^2_{L^2}\right] \Bigr ) ={\mathbb {E}}^\nu \left[ F_0(z_T)-{{\widetilde{F}}}_0(T,z_0)\right] \nonumber \\&\quad +\int _0^T {\mathbb {E}}^\nu \left[ G_0(T-r,z_r)\right] dr.\end{aligned}$$
(4.31)

Indeed, the convergence of \(\nu ^n\) to \(\nu \) in \({{{\mathcal {Z}}}}\) implies that for all \(s\in [0,T]\) and every continuous and bounded function \(\psi :L^2_{\mathrm{loc}}({\mathbb {R}}^d)\rightarrow {\mathbb {R}}\) we have

$$\begin{aligned} {\mathbb {E}}\left[ \psi (u^n_{t_n-T+s})\right] \rightarrow {\mathbb {E}}^\nu \left[ \psi (z_s)\right] .\end{aligned}$$
(4.32)

Note that, by the assumption (i), the mappings \(F_0(\cdot )\), \(\widetilde{F}_0(T,\cdot )\), and \(G_0(T-s,\cdot )\) are continuous in \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\) on bounded sets of \(L^2({\mathbb {R}}^d)\). Additionally, the assumption (ii) and the uniform bound (4.22) allow us to truncate \(F_0(\cdot )\), \({{\widetilde{F}}}_0(T,\cdot )\), and \(G_0(T-s,\cdot )\) where they are large in order to obtain

$$\begin{aligned} {\mathbb {E}}\left[ \psi (u^n_{t_n-T+s})\right] \rightarrow {\mathbb {E}}^\nu \left[ \psi (z_s)\right] \end{aligned}$$
(4.33)

for \(\psi =F_0(\cdot )\), \(\psi ={{\widetilde{F}}}_0(T,\cdot )\), and \(\psi =G_0(T-s,\cdot )\). Thus

$$\begin{aligned} \lim _n \Bigl ({\mathbb {E}}\left[ \Vert u^n_{t_n}\Vert ^2_{L^2}\right] -e^{-2\lambda T }{\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^2_{L^2}\right] \Bigr )= & {} {\mathbb {E}}^\nu \left[ F_0(z_T)-\widetilde{F}_0(T,z_0)\right] \nonumber \\&+\int _0^T {\mathbb {E}}^\nu \left[ G_0(T-r,z_r)\right] dr.\nonumber \\ \end{aligned}$$
(4.34)

We shall show in Step 2 that \(\nu \) is an \(L^2\)-square integrable martingale solution of (4.13). Using this result and the conservation equality (4.21), one has

$$\begin{aligned} {\mathbb {E}}^\nu \left[ F_0(z_T)-{{\widetilde{F}}}_0(T,z_0)\right] +\int _0^T {\mathbb {E}}^\nu \left[ G_0(T-r,z_r)\right] dr= & {} {\mathbb {E}}^\nu \left[ \Vert z_T\Vert ^2_{L^2}\right] \nonumber \\&-e^{-2\lambda T}{\mathbb {E}}^\nu \left[ \Vert z_0\Vert ^2_{L^2}\right] .\nonumber \\ \end{aligned}$$
(4.35)

Noting the bound (4.22), we may pass to the limit and obtain

$$\begin{aligned} \lim _n \left( {\mathbb {E}}\left[ \Vert u^n_{t_n}\Vert ^2_{L^2}\right] -e^{-2\lambda T }{\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^2_{L^2}\right] \right)= & {} {\mathbb {E}}^\nu \left[ \Vert z_T\Vert ^2_{L^2}\right] -e^{-2\lambda T}{\mathbb {E}}^\nu \left[ \Vert z_0\Vert ^2_{L^2}\right] \nonumber \\= & {} {\mathbb {E}}\left[ \Vert \xi \Vert ^2_{L^2}\right] -e^{-2\lambda T}{\mathbb {E}}\left[ \Vert \xi _{-T}\Vert ^2_{L^2}\right] . \end{aligned}$$
(4.36)

By the convergence of \(u_{t_n-T}^{n}\) to \(\xi _{-T}\) in distribution in \(L^2_{\mathrm{loc}}\) and Fatou's lemma, we obtain

$$\begin{aligned} {\mathbb {E}}\left[ \Vert \xi _{-T}\Vert ^2_{L^2}\right] \le \liminf _n {\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^2_{L^2}\right] \le \gamma ^{1/2} .\end{aligned}$$
(4.37)

Using (4.26) and (4.36), we obtain

$$\begin{aligned} \epsilon&\le \liminf _n {\mathbb {E}}\left[ \Vert u^n_{t_n}\Vert ^2_{L^2}\right] - {\mathbb {E}}\left[ \Vert \xi \Vert ^2_{L^2}\right] \nonumber \\ {}&\le \limsup _n e^{-2\lambda T }\left( {\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^2_{L^2}\right] -{\mathbb {E}}\left[ \Vert \xi _{-T}\Vert ^2_{L^2}\right] \right) \le \gamma ^{1/2} e^{-2\lambda T } \le \frac{2\epsilon }{3} \end{aligned}$$
(4.38)

which is a contradiction.

Step 2: Now we prove that \(\nu \) is an \(L^2\)-square integrable martingale solution of (4.13). Note that the uniform bound (4.22), the lower semicontinuity of the \(L^2\) norm with respect to the \(L^2_{\mathrm{loc}}\) topology, and the distributional convergence of \(\nu ^n\) to \(\nu \) give that for all \(t\in [0,T]\)

$$\begin{aligned} {\mathbb {E}}^\nu \left[ \Vert z_t\Vert _{L^2}^2\right] \le \gamma .\end{aligned}$$

Additionally, the choice of the power for b implies that the mapping \(z\in {{{\mathcal {Z}}}}\mapsto M_t^{\phi }(z)\) is continuous. Thus, for every bounded continuous function f,

$$\begin{aligned} \lim _n {\mathbb {E}}^{\nu ^n}[f(M_t^\phi )]= {\mathbb {E}}^{\nu }[f(M_t^\phi )] .\end{aligned}$$

For all \(\phi \) smooth, there exists \(K_{\phi ,T}\) depending only on \(\phi \) and T such that

$$\begin{aligned} |M_t^\phi |^2 \le K_{\phi ,T}\left( \Vert z_0\Vert ^2_{L^2}+\Vert z_t\Vert ^2_{L^2}+\int _0^t \Vert z_r\Vert ^2_{L^2}+\Vert b(z_r)\Vert ^2_{L^1} dr\right) \mathrm{,\quad {}} t\in [0,T] .\end{aligned}$$
(4.39)
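In particular, (4.39) combined with the uniform bound (4.22) yields a quantitative second-moment bound for \(M_t^\phi \) under \(\nu \). The following accounting is only a sketch: it assumes that (4.22) bounds each expectation on the right-hand side of (4.39) by \(\gamma \) (including \({\mathbb {E}}^\nu [\Vert b(z_r)\Vert ^2_{L^1}]\), via the growth condition on b) and that \(T\ge 1\):

$$\begin{aligned} {\mathbb {E}}^\nu \left[ |M_t^\phi |^2\right] \le K_{\phi ,T}\left( \gamma +\gamma +\int _0^t 2\gamma \, dr\right) \le K_{\phi ,T}\left( 2\gamma +2T\gamma \right) \le 4T\gamma K_{\phi ,T}\mathrm{,\quad {}} t\in [0,T] .\end{aligned}$$

This is the constant that appears in the truncation estimate used to pass the martingale property to the limit.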

We will use these points to prove that \(M_t^\phi \) is a martingale under \(\nu \). We fix a family of smooth truncation functions \(\Psi _R\) satisfying \(|\Psi _R|\le 2R\) and \(\Psi _R(x)=x\) if \(|x|\le R\). For \(0\le s_1\le s_2\le \ldots \le s_m\le s\le t\le T\), smooth compactly supported functions \(\phi _i\), and a random variable of the form \(F=F(\int \phi _1 z_{s_1} dx,\ldots ,\int \phi _m z_{s_m}dx)\) smooth and bounded by 1, we have the estimates

$$\begin{aligned}&\left| {\mathbb {E}}^\nu \left[ (M^{\phi }_t-M^\phi _s)F\right] \right| \\&\quad \le \left| {\mathbb {E}}^\nu \left[ (\Psi _R(M^{\phi }_t)-\Psi _R(M^\phi _s))F\right] \right| + {\mathbb {E}}^\nu \left[ |\Psi _R(M^{\phi }_t)-M^{\phi }_t|\right] \\&\qquad +{\mathbb {E}}^\nu \left[ |\Psi _R(M^{\phi }_s)-M^{\phi }_s|\right] \\&\quad \le \lim _n \left| {\mathbb {E}}^{\nu ^n} \left[ (\Psi _R(M^{\phi }_t)-\Psi _R(M^\phi _s))F\right] \right| + \frac{1}{R}\left( {\mathbb {E}}^\nu \left[ |M^{\phi }_t|^2\right] +{\mathbb {E}}^\nu \left[ |M^{\phi }_s|^2\right] \right) \\&\quad \le \lim _n \left| {\mathbb {E}}^{\nu ^n} \left[ (M^{\phi }_t-M^\phi _s)F\right] \right| + \frac{4T\gamma K_{\phi ,T}}{R} .\end{aligned}$$

Note that by the martingale property of \(M_t^\phi \) under \(\nu ^n\) we have \({\mathbb {E}}^{\nu ^n} \left[ (M^{\phi }_t-M^\phi _s)F\right] =0\). Thus, taking R to infinity, we get \( {\mathbb {E}}^\nu \left[ (M^{\phi }_t-M^\phi _s)F\right] =0\), which is sufficient to conclude that \(M_t^\phi \) is a martingale under \(\nu \). Due to the smoothness of \(\phi \), the continuity of z in \(L^2_{\mathrm{loc}}\), and (4.39), \(M_t^\phi \) is a continuous and square integrable martingale under \(\nu \). We now proceed to characterize its quadratic variation.

By the definition of martingale solutions, the corresponding driving process under \(\nu ^n\) is a Brownian motion and thus has Gaussian independent increments. By the distributional convergence of \(\nu ^n\) and the continuity of this process with respect to z in the \(L^2_{\mathrm{loc}}\) topology, the Gaussian distribution and the independence of the increments still hold under \(\nu \). Thus the continuous process is a Brownian motion under \(\nu \), which implies that (4.17) holds under \(\nu \).

Step 3: For the induction step, assume that \({\mathbb {E}}[\Vert u^n_{t_n}\Vert ^2_{H^r}]\rightarrow {\mathbb {E}}[\Vert \xi \Vert ^2_{H^r}]\) as \(n\rightarrow \infty \) for \(r=0,\ldots ,k-1\). We need to show that

$$\begin{aligned} {\mathbb {E}}[\Vert u^n_{t_n}\Vert ^2_{H^k}]\rightarrow {\mathbb {E}}[\Vert \xi \Vert ^2_{H^k}].\end{aligned}$$
(4.40)

Note that using Lemma 4.1 at each step of the induction one can also show that

$$\begin{aligned} u^n_{t_n}\rightarrow \xi \end{aligned}$$
(4.41)

in distribution in \(H^r({\mathbb {R}}^d)\) for all \(r\le k-1\).

In order to obtain a contradiction, assume that the convergence we are proving does not hold. This means that, up to a subsequence, there exists \(\epsilon >0\) such that

$$\begin{aligned} {\mathbb {E}}[ \Vert u^n_{t_n}\Vert ^2_{H^k}] \ge {\mathbb {E}}[ \Vert \xi \Vert ^2_{H^k}]+\epsilon \mathrm{,\quad {}} n\in {{\mathbb {N}}}.\end{aligned}$$
(4.42)

Similarly to the previous step, we introduce T such that \(3\gamma ^{1/2} e^{-2\lambda T}\le \epsilon \) and define the measures \(\nu ^n\) on \({{{\mathcal {Z}}}}\). We also prove similarly that there exist an \(H^k\)-valued random variable \(\xi _{-T}\), a distribution \(\nu \) on \({{{\mathcal {Z}}}}\) which is an \(H^k\)-square integrable solution of (4.13), and a subsequence of \(t_n\) (still denoted \(t_n\)) such that \(\nu ^n\rightarrow \nu \) on \({{{\mathcal {Z}}}}\) as \(n\rightarrow \infty \) and \(u^n_{t_n-T}\rightarrow \xi _{-T}\) in distribution in \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\). Note that for all \(s\in [0,T]\) the family \(u^n_{t_n-T+s}\) converges in distribution in \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\) to the distribution of \(z_s\) under \(\nu \). Therefore, using the induction hypothesis on the family \(u^n_{t_n-T+s}\) and \(z_s\), we obtain

$$\begin{aligned} u^n_{t_n-T+s}\rightarrow z_s \end{aligned}$$
(4.43)

in distribution in \(H^r({\mathbb {R}}^d)\) and

$$\begin{aligned} {\mathbb {E}}[\Vert u^n_{t_n-T+s}\Vert ^2_{H^r}]\rightarrow {\mathbb {E}}^\nu [\Vert z_s\Vert ^2_{H^r}] \end{aligned}$$
(4.44)

for \(r\le k-1\) as \(n\rightarrow \infty \).

We first use (4.21) on \(\nu ^n\) for k to obtain

$$\begin{aligned}&{\mathbb {E}}\left[ \Vert u^n_{t_n}\Vert ^2_{H^k}\right] -e^{-2\lambda T}{\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^2_{H^k}\right] ={\mathbb {E}}\left[ F_k( u^n_{t_n})-{{\widetilde{F}}}_k(T, u^n_{t_n-T})\right] \nonumber \\&\quad +\int _0^T {\mathbb {E}}\left[ G_k(T-s, u^n_{t_n-T+s})\right] ds .\end{aligned}$$
(4.45)

We have proven that \(u^n_{t_n-T+s}\) converges in distribution in \(H^{k-1}({\mathbb {R}}^d)\) to the distribution of \(z_s\) under \(\nu \). Similarly to the previous step, we have enough integrability and continuity on the right hand side of the equation to use this convergence and pass to the limit, obtaining

$$\begin{aligned}&\lim _n {\mathbb {E}}\left[ \Vert u^n_{t_n}\Vert ^2_{H^k}\right] -e^{-2\lambda T}{\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^2_{H^k}\right] ={\mathbb {E}}^\nu \left[ F_k(z_T)-\widetilde{F}_k(T,z_0)\right] \nonumber \\&\quad +\int _0^T {\mathbb {E}}^\nu \left[ G_k(T-s,z_s)\right] ds.\end{aligned}$$
(4.46)

Using Fatou’s lemma and (4.22), we have that \(\nu \) is an \(H^k({\mathbb {R}}^d)\)-square integrable solution of (4.13). Thus, by assumption, Definition 4.4 (iii) gives

$$\begin{aligned} {\mathbb {E}}^\nu \left[ \Vert z_T\Vert ^2_{H^k}\right] -e^{-2\lambda T}{\mathbb {E}}^\nu \left[ \Vert z_0\Vert ^2_{H^k}\right]= & {} {\mathbb {E}}^\nu \left[ F_k(z_T)-\widetilde{F}_k(T,z_0)\right] \nonumber \\&+\int _0^T {\mathbb {E}}^\nu \left[ G_k(T-s,z_s)\right] ds, \end{aligned}$$
(4.47)

which implies

$$\begin{aligned} \lim _n \Bigl ({\mathbb {E}}\left[ \Vert u^n_{t_n}\Vert ^2_{H^k}\right] -e^{-2\lambda T}{\mathbb {E}}\left[ \Vert u^n_{t_n-T}\Vert ^2_{H^k}\right] \Bigr )= & {} {\mathbb {E}}^\nu \left[ \Vert z_T\Vert ^2_{H^k}\right] \nonumber \\&-e^{-2\lambda T}{\mathbb {E}}^\nu \left[ \Vert z_0\Vert ^2_{H^k}\right] .\end{aligned}$$
(4.48)

Using the same arguments as in the previous step, we obtain a contradiction. \(\square \)

5 Proofs of tightness for the Schrödinger equation

We now return to the Schrödinger equation (3.2). We fix \(\lambda >0\) and \(\alpha \in \{-1,1\}\); thus all the constants are allowed to depend on \(\lambda ,\alpha \). Also, recall that we impose Assumptions 3.1 on \(\sigma \) and \(\Phi \).

Lemma 5.1

For every \(k\in {{\mathbb {N}}}\) we have

$$\begin{aligned} \sup _{t\ge 0} {\mathbb {E}}[M(u(t))^{k}] \le C_k({\mathbb {E}}[|M(u_0)|^k]+1) \end{aligned}$$
(5.1)

and

$$\begin{aligned} \sup _{t\ge 0} {\mathbb {E}}[H(u(t))^{k}]\le C_k\left( {\mathbb {E}}\left[ |H(u_0)|^k+ {{\mathbf {1}}}_{\{\alpha =1\}}\Vert u(0)\Vert ^{2k+\frac{4k\sigma }{2-\sigma d}}_{L^2}\right] +1\right) \end{aligned}$$
(5.2)

where \(C_k\ge 0\) is a constant.

Proof of Lemma 5.1

Using ideas similar to those in [13], one can show that the local martingale appearing in (3.5) is a martingale. Thus we have

$$\begin{aligned} {\mathbb {E}}[M(u(t))]+2\lambda \int _0^t {\mathbb {E}}[M(u(s))]ds ={\mathbb {E}}[M(u_0)]+ t \Vert \Phi \Vert ^2_{{HS(L^2; L^2)}}.\end{aligned}$$
(5.3)

Solving this ODE for \({\mathbb {E}}[M(u(t))]\), we get

$$\begin{aligned} {\mathbb {E}}[M(u(t))]= & {} e^{-2\lambda t}{\mathbb {E}}[M(u_0)]+\Vert \Phi \Vert ^2_{{HS(L^2; L^2)}}\int _0^t e^{-2\lambda (t-s)}ds\le {\mathbb {E}}[M(u_0)]\nonumber \\&+\frac{1}{2\lambda } \Vert \Phi \Vert ^2_{{HS(L^2; L^2)}} \end{aligned}$$
(5.4)

which proves (5.1) for \(k=1\). For general k we proceed by induction. We assume the existence of \(C_k\) for a given \(k\ge 1\) and apply Ito’s lemma to \(M(u(t))^{k+1}\) to obtain

$$\begin{aligned}&dM^{k+1}(u(t))+2(k+1)\lambda M^{k+1}(u(t))dt \nonumber \\&\quad = (k+1) M^{k}(u(t))\Vert \Phi \Vert _{{HS(L^2; L^2)}}^2 dt\nonumber \\&\qquad + \frac{k(k+1)}{2} M^{k-1}(u(t)) \sum _i Re(u(t),\Phi e_i)^2 dt+{{\widetilde{M}}} \end{aligned}$$
(5.5)

where similarly \({{\widetilde{M}}}\) can be shown to be a martingale. Thus the function \({\mathbb {E}}\left[ M^{k+1}(u(t))\right] \) satisfies the ODE

$$\begin{aligned}&\left( {\mathbb {E}}\left[ M^{k+1}(u(t))\right] \right) '+2(k+1)\lambda {\mathbb {E}}\left[ M^{k+1}(u(t))\right] \nonumber \\&\quad = (k+1){\mathbb {E}}\left[ M^{k}(u(t))\Vert \Phi \Vert _{{HS(L^2; L^2)}}^2\right] \nonumber \\&\qquad + \frac{k(k+1)}{2} {\mathbb {E}}\left[ M^{k-1}(u(t)) \sum _i Re(u(t),\Phi e_i)^2\right] =:g_k(t), \end{aligned}$$
(5.6)

where \(g_k\) is a bounded function of t by the induction assumption. By solving this ODE, we see that the function \({\mathbb {E}}\left[ M^{k+1}(u(t))\right] \) is bounded.
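For completeness, the variation-of-constants formula applied to (5.6) gives the explicit bound (a sketch; \(\sup _{s\ge 0} g_k(s)<\infty \) by the induction assumption):

$$\begin{aligned} {\mathbb {E}}\left[ M^{k+1}(u(t))\right] \le e^{-2(k+1)\lambda t}{\mathbb {E}}\left[ M^{k+1}(u_0)\right] +\frac{\sup _{s\ge 0} g_k(s)}{2(k+1)\lambda }, \end{aligned}$$

so \(C_{k+1}\) may be chosen depending only on \(C_k\), \(\lambda \), and \(\Vert \Phi \Vert _{HS(L^2; L^2)}\).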

To obtain the bounds for H we treat two cases separately.

Case \(\alpha =-1\). In this case the Eq. (3.6) gives that \(f(s):={\mathbb {E}}[H(u_s)]\) satisfies the differential inequality

$$\begin{aligned} f'(s)+2\lambda f(s)&\le \frac{\Vert \nabla \Phi \Vert ^2_{HS(L^2,L^2)}}{2} +{\mathbb {E}}\left[ \frac{\Vert |u(s)|^\sigma \Phi \Vert ^2_{HS(L^2,L^2)}}{2}\right] \\&\quad +\alpha \sigma \sum _i{\mathbb {E}}\left[ (|u(s)|^{2\sigma -2},(Re({{\overline{u}}}(s) \Phi e_i))^2)\right] \\&\le C \left( 1+\sum _i \int |u(s,x)|^{2\sigma }|\Phi e_i(x)|^2 dx\right) \\&\le C\left( 1+ \left( \int |u(s,x)|^{2\sigma +2}dx\right) ^{\frac{2\sigma }{2\sigma +2}} \sum _i\Vert \Phi e_i\Vert _{L^{2\sigma +2}}^2 \right) \\&\le C\left( 1\!+ \!\left( \int |u(s,x)|^{2\sigma +2}dx\right) ^{\frac{2\sigma }{2\sigma +2}} \sum _i\Vert \Phi e_i\Vert _{L^{2}}^{2-\frac{\sigma d}{\sigma +1}} \Vert \nabla \Phi e_i\Vert _{L^{2}}^{\frac{\sigma }{\sigma +1}d} \!\right) \\&\le C\left( \epsilon , \Vert \Phi \Vert ^2_{HS(L^2,H^1)}\right) +\epsilon \int |u(s,x)|^{2\sigma +2}dx \end{aligned}$$

where we have successively used the Hölder, Gagliardo–Nirenberg, and \(\epsilon \)-Young inequalities. Since \(\alpha =-1\), we can absorb this last term into \(\lambda f(s)\) to obtain

$$\begin{aligned} f'(s)+\lambda f(s)\le C \end{aligned}$$

which implies

$$\begin{aligned} f_t\le C(f_0+1). \end{aligned}$$
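The last implication is the standard Grönwall step; spelled out with the integrating factor \(e^{\lambda s}\):

$$\begin{aligned} \bigl (e^{\lambda s} f(s)\bigr )'\le C e^{\lambda s}\quad \Longrightarrow \quad f(t)\le e^{-\lambda t}f(0)+\frac{C}{\lambda }\bigl (1-e^{-\lambda t}\bigr )\le C'\bigl (f(0)+1\bigr ), \end{aligned}$$

with \(C'=\max (1,C/\lambda )\).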

Case \(\alpha =1\). In this case f satisfies the inequality

$$\begin{aligned} f'(s)+2\lambda f(s)\le C+ \frac{\lambda \sigma }{\sigma +1}{\mathbb {E}}\left[ \Vert u(s)\Vert _{L^{2\sigma +2}}^{2\sigma +2}\right] . \end{aligned}$$

Note that by the Gagliardo–Nirenberg inequality and \(\sigma d<2\) we obtain

$$\begin{aligned} \frac{\lambda \sigma }{\sigma +1}\Vert u(s)\Vert _{L^{2\sigma +2}}^{2\sigma +2}\le \lambda H(u(s))+ C\Vert u(s)\Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L^2}. \end{aligned}$$

This shows that

$$\begin{aligned} f_t\le C \left( 1+f_0+\sup _{s\ge 0}{\mathbb {E}}\left[ \Vert u(s)\Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L^2}\right] \right) . \end{aligned}$$

We now use (5.1) to obtain that

$$\begin{aligned} f_t\le C\left( 1+ {\mathbb {E}}\left[ |H(u_0)|+\Vert u(0)\Vert ^{2+\frac{4\sigma }{2-\sigma d}}_{L^2}\right] \right) . \end{aligned}$$

Repeating the same argument for \(H^k(u(t))\) we obtain (5.2). \(\square \)

In order to obtain the tightness of the averaged measures, we use Lemma 4.1 and Theorem 4.5. The last ingredient we need is the following lemma.

Lemma 5.2

Under the assumptions of Lemma 3.5, for all stopping times \(T_n\) and real numbers \(\delta _n\) such that \(\delta _n\rightarrow 0\) as \(n\rightarrow \infty \), we have

$$\begin{aligned} {\mathbb {E}}\left[ \Vert u^n_{T_n+\delta _n}-u^n_{T_n}\Vert ^2_{L^2}\right] \rightarrow 0 \text{ as } n\rightarrow \infty .\end{aligned}$$
(5.7)

Proof of Lemma 5.2

We denote by \(S^\lambda \) the semigroup associated with the linear part of the equation. With this notation, we have

$$\begin{aligned} u^n_{T_n+\delta _n}-u^n_{T_n}&= S^\lambda (\delta _n) u^n_{T_n}- u^n_{T_n} + i\int _{0}^{\delta _n} S^\lambda (\delta _n-s)(|u^n(T_n+s)|^{2\sigma }u^n(T_n+s)) ds\nonumber \\&\quad +\int _{0}^{\delta _n} S^\lambda (\delta _n-s)\Phi dW_{T_n+s} .\end{aligned}$$
(5.8)

The lemma would follow from the following three convergence statements:

$$\begin{aligned}&{\mathbb {E}}\left[ \Vert S^\lambda (\delta _n) u^n_{T_n}- u^n_{T_n}\Vert ^2_{L^2}\right] \rightarrow 0, \end{aligned}$$
(5.9)
$$\begin{aligned}&{\mathbb {E}}\left[ \left\| \int _{0}^{\delta _n} S^\lambda (\delta _n-s)(|u^n(T_n+s)|^{2\sigma }u^n(T_n+s)) ds \right\| ^2_{L^2}\right] \rightarrow 0, \end{aligned}$$
(5.10)
$$\begin{aligned}&{\mathbb {E}}\left[ \left\| \int _{0}^{\delta _n} S^\lambda (\delta _n-s)\Phi dW_{T_n+s}\right\| ^2_{L^2}\right] \rightarrow 0 .\end{aligned}$$
(5.11)

Since \(S^\lambda \) is a strongly continuous semigroup on \(L^2\), the first convergence follows from the uniform bounds of Lemma 5.1 and the dominated convergence theorem. For the second convergence (5.10), we simply write

$$\begin{aligned}&{\mathbb {E}}\left[ \left\| \int _{0}^{\delta _n} S^\lambda (\delta _n-s)(|u^n(T_n+s)|^{2\sigma }u^n(T_n+s)) ds\right\| ^2_{L^2}\right] \nonumber \\&\quad \le \delta _n \int _{0}^{\delta _n} {\mathbb {E}}\left[ \Vert S^\lambda (\delta _n-s)(|u^n(T_n+s)|^{2\sigma }u^n(T_n+s)) \Vert ^2_{L^2}\right] ds .\end{aligned}$$
(5.12)

Given the uniform bounds of Lemma 5.1, the integrand is uniformly bounded and the convergence thus holds. For the third convergence (5.11), we use the Burkholder–Davis–Gundy inequality [7, Lemma 5.24] and obtain

$$\begin{aligned} {\mathbb {E}}\left[ \left\| \int _{0}^{\delta _n} S^\lambda (\delta _n-s)\Phi dW_{T_n+s}\right\| ^2_{L^2}\right] \le \int _0^{\delta _n }\Vert S^\lambda (\delta _n-s)\Phi \Vert _{HS(L^2,L^2)}^2 ds \rightarrow 0\nonumber \\ \end{aligned}$$
(5.13)

as \(n\rightarrow \infty \). \(\square \)
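The final integral in (5.13) can also be estimated explicitly. Assuming the damped semigroup satisfies the contraction bound \(\Vert S^\lambda (t)v\Vert _{L^2}\le e^{-\lambda t}\Vert v\Vert _{L^2}\) (the Schrödinger group is unitary on \(L^2\) and the damping contributes the factor \(e^{-\lambda t}\)), a sketch of the estimate reads

$$\begin{aligned} \int _0^{\delta _n }\Vert S^\lambda (\delta _n-s)\Phi \Vert _{HS(L^2,L^2)}^2 ds \le \Vert \Phi \Vert ^2_{HS(L^2,L^2)}\int _0^{\delta _n } e^{-2\lambda (\delta _n-s)}ds\le \delta _n \Vert \Phi \Vert ^2_{HS(L^2,L^2)}, \end{aligned}$$

which tends to zero since \(\delta _n\rightarrow 0\).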

5.1 Proof of Lemma 3.5

Proof of (i): We show that the assumptions of Theorem 4.5 with \(k=1\) are satisfied for the Eq. (3.2) and that the set \(\{(\nu ^nP_{t_n})(\cdot ):n\in {\mathbb {N}}\}\) is relatively weakly compact over \(H^1({\mathbb {R}}^d)\). We define

$$\begin{aligned} F_0= & {} {{\widetilde{F}}}_0=0 \nonumber \\ G_0(r,v):= & {} e^{-2\lambda r}\Vert \Phi \Vert _{HS(L^2,L^2)}^2, \end{aligned}$$
(5.14)
$$\begin{aligned} F_1(v)= & {} \frac{\alpha }{2\sigma +2} \int |v(x)|^{2\sigma +2} dx \end{aligned}$$
(5.15)
$$\begin{aligned} {{\widetilde{F}}}_1(r,v):= & {} e^{-2\lambda r}\frac{\alpha }{2\sigma +2} \int |v(x)|^{2\sigma +2} dx, \end{aligned}$$
(5.16)
$$\begin{aligned} G_1(r,v):= & {} e^{-2\lambda r}\biggl (\int _{{\mathbb {R}}^d}\frac{\alpha \lambda \sigma }{\sigma +1} |v(x)|^{2\sigma +2} dx +\frac{\Vert \nabla \Phi \Vert ^2_{HS(L^2,L^2)}}{2}\nonumber \\&- \frac{\alpha }{2}\Vert |v|^\sigma \Phi \Vert ^2_{HS(L^2,L^2)}-\alpha \sigma \sum _i Re(|v|^{2\sigma -2 }v^2, (\Phi e_i)^2) \biggr ) .\nonumber \\ \end{aligned}$$
(5.17)

We apply the Gagliardo–Nirenberg interpolation inequality to obtain

$$\begin{aligned} \Vert v\Vert _{L^{2\sigma +2}}^{2\sigma +2} \le C\Vert v\Vert _{H^1}^{d\sigma }\Vert v\Vert ^{\sigma (2-d)+2}_{L^2} \end{aligned}$$
(5.18)

which shows that \(F_1(\cdot )\), \({{\widetilde{F}}}_1(r,\cdot )\), and \(G_1(r,\cdot )\) are continuous in \(L^2({\mathbb {R}}^d)\) on bounded sets of \(H^1({\mathbb {R}}^d)\). They also have at most polynomial growth in \(H^1({\mathbb {R}}^d)\), and given the bounds on \(u^n_0\) and Lemma 5.1, with \(b(u)=|u|^{2\sigma }u\), we have the bound (4.22). Additionally, given Assumption 3.1 on \(\sigma \), we can easily verify that the degree of b satisfies, for \(d> 2\),

$$\begin{aligned} 2\sigma +1 <\frac{d+2}{d-2}\le \frac{2d}{d-2}. \end{aligned}$$
(5.19)

Since \(\nu \) is an \(H^1\)-square integrable martingale solution of (3.2), by [11, Theorem 2.4], we can extend the probability space \(({{{\mathcal {Z}}}},{{{\mathcal {D}}}},\nu )\) to obtain a family of Brownian motions \({{\hat{B}}}^i\) such that the \(H^{-1}({\mathbb {R}}^d)\)-valued continuous martingale \( M_t= z(t)-z(0) +\int _0^t (\lambda z(s)-i\Delta z(s) -i|z(s)|^{2\sigma } z(s) )ds\) can be represented as

$$\begin{aligned} dM_t=\sum _i \Phi e_i d{{\hat{B}}}^i_t.\end{aligned}$$
(5.20)

Similarly to [8, Propositions 3.2 and 3.3], we apply Ito’s lemma to M(z(t)) and H(z(t)) on this probability space to obtain that (4.21) holds for \(i=0,1\) under \(\nu \). This shows that the Eq. (3.2) has the \(H^1\)-norm evolution property.

Next, we consider the sequence of measures \({(\nu ^nP_{t_n})(dv)}\) as measures on the space \(L^2_{\mathrm{loc}}({\mathbb {R}}^d)\). Denote by \(B_k\subseteq {\mathbb {R}}^d\) the ball with radius k, centered at the origin. Given the uniform estimates (5.2), by the compact embedding of the space \(H^1(B_k)\) in \(L^2(B_k)\) and a successive application of Prokhorov’s theorem, we obtain that there exists a subsequence of \(\{t_n,u^n_0\}\), which we still denote \(\{t_n,u^n_0\}\), and a distribution \(\mu \) on \(H^1({\mathbb {R}}^d)\) such that

$$\begin{aligned} {(\nu ^nP_{t_n})(dv)}\rightarrow \mu \text{ in } \text{ distribution } \text{ in } L^2_\mathrm{loc}({\mathbb {R}}^d).\end{aligned}$$
(5.21)

We also note that the solutions \(u^n\) satisfy assumptions (iv) and (v) of Theorem 4.5 as consequences of (5.2) and (5.7) respectively. Thus by Lemma 4.1 and Theorem 4.5, the convergence

$$\begin{aligned} {(\nu ^nP_{t_n})(dv)}\rightarrow \mu \end{aligned}$$
(5.22)

is in fact in distribution in \(H^1({\mathbb {R}}^d)\) which is what we claimed.

Proof of (ii): We choose \((s_n,v^n)\in [0,1]\times K\). By the compactness of the two sets there exist a subsequence of \((s_n,v^n)\), still denoted \((s_n,v^n)\), and \((s,v)\in [0,1]\times K\) such that \((s_n,v^n)\rightarrow (s,v)\). We claim that \(P_{s_n}(v^n,\cdot )\) converges in distribution in \(H^1({\mathbb {R}}^d)\) to \(P_s(v,\cdot )\).

We denote by \(u^n\) and u the solutions of (3.2) with initial data \(v^n\) and v respectively. In order to show this convergence we prove that

$$\begin{aligned} \sup _{t\in [0,1]} \bigl (\Vert u^n_t -u_t\Vert _{H^1}+\Vert u_{s_n}-u_s\Vert _{H^1}\bigr )\rightarrow 0,\,{\mathbb {P}}\text{-a.s }. \end{aligned}$$
(5.23)

The convergence \(\Vert u_{s_n}-u_s\Vert _{H^1}\rightarrow 0\) is a direct consequence of \(u\in C([0,1],H^1),\,{\mathbb {P}}\)-a.s. It is shown in [8] that

(5.24)

Thus, applying [8, Proposition 3.5], we also have \({\mathbb {P}}\)-a.s. \(\Vert u^n-u\Vert _{C([0,1];H^1({\mathbb {R}}^d))}\rightarrow 0\) as \(n\rightarrow \infty \).

We now show that (5.23) implies the convergence \(P_{s_n}(v^n,\cdot ) \rightarrow P_s(v,\cdot )\). We pick \(\xi :H^1({\mathbb {R}}^d)\rightarrow {\mathbb {R}}\) uniformly continuous and bounded. Then

$$\begin{aligned} |P_s\xi (v)-P_{s_n}\xi (v^n)|&\le {\mathbb {E}}[|\xi (u_s)-\xi (u^n_{s_n})|] \nonumber \\&\le {\mathbb {E}}[|\xi (u_s)-\xi (u_{s_n})|]+ {\mathbb {E}}[|\xi (u_{s_n})-\xi (u^n_{s_n})|] .\end{aligned}$$
(5.25)

Note that (5.23) and the uniform continuity of \(\xi \) imply that \({\mathbb {P}}\)-a.s. \(|\xi (u_s)-\xi (u_{s_n})|+|\xi (u_{s_n})-\xi (u^n_{s_n})|\rightarrow 0\) as \(n\rightarrow \infty \). By the dominated convergence theorem, we obtain \(|P_s\xi (v)-P_{s_n}\xi (v^n)|\rightarrow 0\) as \(n\rightarrow \infty \). \(\square \)

6 Compactness of the set of invariant measures

In this section, we establish the existence of an ergodic measure.

Theorem 6.1

Under Assumptions 3.1, the set of \(H^1({\mathbb {R}}^d)\)-valued invariant measures is a convex and compact subset of the space of probability measures on \(H^1({\mathbb {R}}^d)\).

Proof

Note that the convexity is trivial, so we only need to show compactness. Let \(\mu \) be such a measure and u(t) the solution of (3.2) having distribution \(\mu \) at all times. For simplicity of notation, we denote \(M_s=M(u(s))\) and \(H_s=H(u(s))\). Our first objective is to prove the integrability of these semi-martingales.

We fix \(R,R_0>0\) and define \(\tau _{R}:=\inf \{s\ge 0: M_s\ge R\}\). We apply (3.5) on the event \({\{M_0\le R_0\}}\) and obtain

$$\begin{aligned} M_{t\wedge \tau _R} =&M_0 e^{-2\lambda {t\wedge \tau _R}} +\Vert \Phi \Vert _{HS(L^2; L^2)}^2 \int _0^{t\wedge \tau _R} e^{-2\lambda ({t\wedge \tau _R}-s)} ds \end{aligned}$$
(6.1)
$$\begin{aligned}&+2\sum _i \int _0^{t\wedge \tau _R} e^{-2\lambda ({t\wedge \tau _R}-s)}Re(u(s),\Phi e_i) dB^i_s .\end{aligned}$$
(6.2)

Note that, by the localization, the expectation of the stochastic integral vanishes. Therefore,

$$\begin{aligned} {\mathbb {E}}\left[ M_{t\wedge \tau _R}{{\mathbf {1}}}_{\{M_0\le R_0\}}\right]&={\mathbb {E}}\left[ M_0{{\mathbf {1}}}_{\{M_0\le R_0\}} e^{-2\lambda {t\wedge \tau _R}} \right] \nonumber \\&\quad +\Vert \Phi \Vert _{HS(L^2; L^2)}^2{\mathbb {E}}\left[ \int _0^{t\wedge \tau _R} e^{-2\lambda ({t\wedge \tau _R}-s)} {{\mathbf {1}}}_{\{M_0\le R_0\}} ds\right] .\end{aligned}$$
(6.3)

For a fixed \(R_0\), the integrands on the right hand side are uniformly bounded and the integrand on the left hand side is non-negative. We apply the dominated convergence theorem for the right side and Fatou’s lemma for the left to obtain that

$$\begin{aligned} {\mathbb {E}}\left[ M_t{{\mathbf {1}}}_{\{M_0\le R_0\}} \right] \le {\mathbb {E}}\left[ M_0{{\mathbf {1}}}_{\{M_0\le R_0\}} \right] e^{-2\lambda t}+\frac{\Vert \Phi \Vert _{HS(L^2; L^2)}^2}{2\lambda }.\end{aligned}$$
(6.4)

Therefore, for each \({R_0}>0\) we can choose \(t_{R_0}>0\) such that

$$\begin{aligned} {\mathbb {E}}\left[ M_{t_{R_0}}{{\mathbf {1}}}_{\{M_0\le {R_0}\}} \right] \le \frac{\Vert \Phi \Vert _{HS(L^2; L^2)}^2}{\lambda }.\end{aligned}$$
(6.5)

Noting also that the distribution of \(u(t_{R_0})\) is \(\mu \), we obtain that there exist functions \(f_{R_0}\) with \(f_{R_0}(v) \rightarrow 1\) \(\mu \)-a.s. as \({R_0}\rightarrow \infty \) such that

$$\begin{aligned} {\mathbb {E}}\left[ M_{t_{R_0}}{{\mathbf {1}}}_{\{M_0\le {R_0}\}} \right] =\int \Vert v\Vert _{L^2}^2f_{R_0}(v)\mu (dv)\le \frac{\Vert \Phi \Vert _{HS(L^2; L^2)}^2}{\lambda }.\end{aligned}$$
(6.6)

Taking the limit \({R_0}\rightarrow \infty \), we obtain

$$\begin{aligned} \int \Vert v\Vert _{L^2}^2\mu (dv)\le \frac{\Vert \Phi \Vert _{HS(L^2; L^2)}^2}{\lambda } .\end{aligned}$$
(6.7)

Similarly to the proof of Lemma 5.1, we apply Ito’s lemma to \(M^{k+1}(u(t))\), localize with stopping times and prove that there exists \(C_k(\Phi ,\lambda )\) which may a priori depend on \(\mu \) such that

$$\begin{aligned} \int \Vert v\Vert _{L^2}^{2k}\mu (dv)\le C_k(\Phi ,\lambda )<\infty \mathrm{,\quad {}} k=1,2,\ldots .\end{aligned}$$
(6.8)

We also apply the same procedure to \(H_t\) to obtain that there exists \({{\widetilde{C}}}_k (\Phi ,\lambda )\) that may again depend on \(\mu \) such that

$$\begin{aligned} \int \Vert v\Vert _{H^1}^{2k}\mu (dv)\le \widetilde{C}_k(\Phi ,\lambda )<\infty .\end{aligned}$$
(6.9)

Given this integrability, we return to (3.5) and (3.6) to prove that \(C_k(\Phi ,\lambda )\) and \(\widetilde{C}_k (\Phi ,\lambda )\) can be taken independent of \(\mu \). Since \(\mu \) is an invariant measure, we get \(d {\mathbb {E}}[M_t]=d{\mathbb {E}}[H_t]=0\) and

$$\begin{aligned} {\mathbb {E}}[M_t]= \frac{\Vert \Phi \Vert _{HS(L^2; L^2)}^2}{2\lambda }.\end{aligned}$$
(6.10)

Using the same invariance we obtain

$$\begin{aligned} 2\lambda {\mathbb {E}}[M^{k+1}_t]&= \Vert \Phi \Vert _{HS(L^2,L^2)}^2 {\mathbb {E}}[M^{k}_t] +\frac{k}{2} {\mathbb {E}}[M^{k-1}_t\sum _i Re (u(t),\Phi e_i)^2] \end{aligned}$$
(6.11)
$$\begin{aligned}&\le \left( \Vert \Phi \Vert _{HS(L^2,L^2)}^2 + \frac{k}{2}\right) {\mathbb {E}}[M^{k}_t], \end{aligned}$$
(6.12)

which shows by induction that \(C_k(\Phi ,\lambda )\) may be taken independent of \(\mu \). Applying the same procedure to the Eq. (3.6), we obtain that \({{\widetilde{C}}}_k(\Phi ,\lambda )\) can be taken independent of \(\mu \).
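The induction in (6.11)–(6.12) in fact produces an explicit \(\mu \)-independent constant. As a sketch, writing \(m_k:={\mathbb {E}}[M^{k}_t]\) (constant in t by invariance) and using (6.10) for the base case \(m_1=\Vert \Phi \Vert ^2_{HS(L^2; L^2)}/(2\lambda )\):

$$\begin{aligned} m_{k+1}\le \frac{1}{2\lambda }\left( \Vert \Phi \Vert _{HS(L^2,L^2)}^2+\frac{k}{2}\right) m_k \quad \Longrightarrow \quad m_k\le \frac{1}{(2\lambda )^k}\prod _{j=0}^{k-1}\left( \Vert \Phi \Vert ^2_{HS(L^2,L^2)}+\frac{j}{2}\right) =:C_k(\Phi ,\lambda ). \end{aligned}$$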

We now prove the sequential compactness of the set of \(H^1({\mathbb {R}}^d)\)-valued invariant measures. Let \(\mu ^n\) be a sequence of invariant measures of the Eq. (3.2). Without loss of generality, we assume that the \(\sigma \)-algebra \({\mathbb {F}}_0\) is rich enough that there exists a family of \({\mathbb {F}}_0\)-measurable random variables \(u^n_0\) with distribution \(\mu ^n\). The uniform bounds we have proven give us

$$\begin{aligned} \sup _n \int \Vert v\Vert _{L^2}^{2k}\mu ^n (dv)\le C_k(\Phi ,\lambda ) \end{aligned}$$
(6.13)

and

$$\begin{aligned} \sup _n \int \Vert v\Vert _{H^1}^{2k}\mu ^n (dv)\le {{\widetilde{C}}}_k (\Phi ,\lambda ), \end{aligned}$$
(6.14)

which imply

(6.15)

Therefore, Lemma 3.5 and the fact that \(\mu ^n\) is an invariant measure show that the family

$$\begin{aligned} \bigl \{(\mu ^n P_{t_n})(\cdot ):n\in {\mathbb {N}}\bigr \}=\{\mu ^n:n\in {\mathbb {N}}\} \end{aligned}$$
(6.16)

is tight. Noting that the set of invariant measures is closed, we obtain the required compactness. \(\square \)

Corollary 6.2

Under Assumptions 3.1, there exists an ergodic invariant measure.

Proof

By the Krein–Milman theorem, the compactness of the set of invariant measures implies that there exists at least one invariant measure that is an extremal point of this set. Proposition 3.2.7 of [7] then implies that such a measure is ergodic. \(\square \)