1 Introduction

In this work we investigate the long-time behaviour of periodic scalar first-order conservation laws with stochastic forcing. In space dimension one, there is a famous paper by E et al. [28] where the authors prove existence and uniqueness, and also analyse the invariant measure, for the periodic inviscid Burgers equation with stochastic forcing. The analysis relies on the Lax-Oleinik formula, which means that the random unknown \(u\) is derived from a potential \(\psi \) which satisfies a periodic Hamilton–Jacobi equation with stochastic forcing. This approach can be extended to higher dimension [19] and has also been extended to the case of fractional noise by Saussereau and Stoica [25]. Recently, Boritchev [5] has been able to derive sharp estimates on the solutions of the stochastic Burgers equation in the viscous case. They hold independently of the viscosity (see Remark 2 on that point).

Note that in all these articles, the space domain is compact. A recent work by Bakhtin [1] deals with a scalar first-order conservation law with Poisson random forcing set on the whole line. We would also like to mention, for stochastically forced Hamilton–Jacobi equations, the thorough analysis of the invariant measure for such problems by Dirr and Souganidis in [12].

Our purpose is to generalize [28] to higher dimension and, mainly, to relax the hypothesis of uniform convexity which is necessary when the Lax-Oleinik formula is used. The first step was to have a satisfactory framework for existence and uniqueness of solutions. This has been accomplished by several authors and this point is now rather well understood (see [2, 9–11, 14, 20, 21, 26]).

In this article we aim at proving existence and uniqueness of invariant measures for the type of stochastic conservation laws studied in [10]. There, a general existence and uniqueness result was obtained thanks to the kinetic formulation (see [22, 24]) suitably generalized to the stochastic case. This kinetic formulation seems well adapted for this purpose since it allows one to keep track of the dissipation due to the shocks. Note that, as explained below, we need here a notion of solution in \(L^1\), which we develop in this article in the framework of kinetic formulation.

Under a non-degeneracy hypothesis on the flux, we show the existence of an invariant measure in any dimension of space. For technical reasons we need to assume that the flux does not grow faster than a cubic polynomial. Moreover, for sub-quadratic fluxes we show the uniqueness of the invariant measure, see our main result, Theorem 1. An essential tool for the proofs is the use of averaging lemmas for kinetic equations. We use an averaging lemma well adapted to stochastic equations, taken from Bouchut and Desvillettes [6].

More precisely, let \((\Omega ,\mathcal {F},\mathbb {P},(\mathcal {F}_t),(\beta _k(t)))\) be a stochastic basis and let \(T>0\). We study the invariant measure for the first-order scalar conservation law with stochastic forcing

$$\begin{aligned} du+\mathrm{div}(A(u))dt=\Phi dW(t),\quad x\in \mathbb {T}^N, t\in (0,T). \end{aligned}$$
(1)

The equation is periodic in the space variable \(x\): \(x\in \mathbb {T}^N\) where \(\mathbb {T}^N\) is the \(N\)-dimensional torus. The flux function \(A\) in (1) is supposed to be of class \(C^2\): \(A\in C^2(\mathbb {R};\mathbb {R}^N)\) and its derivatives have at most polynomial growth. We assume that the filtration \((\mathcal F_t)_{t\ge 0}\) is complete and that \(W\) is a cylindrical Wiener process: \(W=\sum _{k\ge 1}\beta _k e_k\), where the \(\beta _k\) are independent Brownian motions and \((e_k)_{k\ge 1}\) is a complete orthonormal system in a Hilbert space \(H\). The map \(\Phi :H\rightarrow L^2(\mathbb {T}^N)\) is defined by \(\Phi e_k=g_k\) where \(g_k\) is a regular function on \(\mathbb {T}^N\). More precisely, we assume \(g_k\in C(\mathbb {T}^N)\), with the bounds

$$\begin{aligned}&\mathbf {G}^2(x)=\sum _{k\ge 1}|g_k(x)|^2\le D_0,\end{aligned}$$
(2)
$$\begin{aligned}&\sum _{k\ge 1}|g_k(x)-g_k(y)|^2\le D_1|x-y|^2, \end{aligned}$$
(3)

for all \(x,y\in \mathbb {T}^N\). Note in particular that \(\Phi :H\rightarrow L^2(\mathbb {T}^N)\) is Hilbert-Schmidt since \(\Vert g_k\Vert _{L^2(\mathbb {T}^N)}\le \Vert g_k\Vert _{C(\mathbb {T}^N)}\) and thus

$$\begin{aligned} \sum _{k\ge 1}\Vert g_k\Vert _{L^2(\mathbb {T}^N)}^2\le D_0. \end{aligned}$$
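As an illustration (one admissible choice, not imposed anywhere in the text), for \(N=1\) one may take \(g_k(x)=a_k\sqrt{2}\cos (2\pi kx)\), \(k\ge 1\), with \(\sum _{k\ge 1}k^2a_k^2<+\infty \). Then

$$\begin{aligned} \mathbf {G}^2(x)=\sum _{k\ge 1}2a_k^2\cos ^2(2\pi kx)\le 2\sum _{k\ge 1}a_k^2,\qquad \sum _{k\ge 1}|g_k(x)-g_k(y)|^2\le 8\pi ^2\left( \sum _{k\ge 1}k^2a_k^2\right) |x-y|^2, \end{aligned}$$

so that (2) and (3) hold; these modes also satisfy the cancellation condition (4) introduced below.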

Existence and uniqueness of a solution \(u^\omega (t)\) to (1) satisfying a given initial condition \(u(0)=u_0\in L^\infty (\mathbb {T}^N)\) have been proved in [10]. This result can easily be extended to initial data in \(L^p(\mathbb {T}^N)\) for \(p\) larger than the degree of polynomial growth of \(A\). However, as stated below, the invariant measure is supported by \(L^r(\mathbb {T}^N)\) with \(r\) small, and we do not know whether \(r\) can be taken large enough for the result of [10] to apply to such invariant measures. Thus, we need to develop a theory of existence and uniqueness for initial data in \(L^r(\mathbb {T}^N)\) for small \(r\). In fact, we generalize to the stochastic context the notion of solutions in \(L^1(\mathbb {T}^N)\) developed in [7]. Note however that in [7] second-order, possibly degenerate equations are considered, so that, strictly speaking, what we generalize is the “fully degenerate” case contained in [7]. In any case, what matters technically here is to prove that the kinetic measure decays sufficiently fast at infinity.

Let us now describe our framework more precisely. We assume that the noise is additive and that the functions \(g_k\) satisfy the cancellation condition

$$\begin{aligned} \forall k\ge 1,\quad \int _{\mathbb {T}^N} g_k(x)dx=0. \end{aligned}$$
(4)

The space average of the solution then satisfies \(\bar{u}(t)=\int _{\mathbb {T}^N}u(t,x) dx = \int _{\mathbb {T}^N} u(0,x)dx\) almost surely. We consider initial data with non-random space average \(\bar{u}\); this then remains the case for all times \(t\ge 0\). Changing \(u\) into \(u-\bar{u}\), we see that there is no loss of generality in considering the case \(\bar{u}=0\). Thus, throughout the article, we only consider such initial data.
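Formally, the conservation of the space average is read off by integrating (1) over \(\mathbb {T}^N\): the flux term vanishes by periodicity and, by (4), \(d\bar{u}(t)=\sum _{k\ge 1}\big (\int _{\mathbb {T}^N}g_k(x)dx\big )d\beta _k(t)=0\). The following numerical sketch (purely illustrative and not part of the argument; the scheme, the noise modes and all parameters are illustrative choices) checks this on the one-dimensional stochastic Burgers equation, discretized by a Lax-Friedrichs finite-volume scheme with an additive forcing built from zero-mean cosine modes.

```python
import numpy as np

# Illustrative sketch (not from the paper): stochastic Burgers equation
#   du + d_x(u^2/2) dt = Phi dW   on the torus [0,1),
# with additive noise Phi dW = sum_k g_k dbeta_k built from zero-mean
# cosine modes, so that the cancellation condition (4) holds. The
# conservative finite-volume update then preserves the spatial average.

rng = np.random.default_rng(0)

J = 256                          # number of cells
dx = 1.0 / J
x = (np.arange(J) + 0.5) * dx    # cell centres
dt = 0.2 * dx                    # time step (CFL-type restriction)
T = 1.0
K = 8                            # number of noise modes
a_k = 2.0 ** (-np.arange(1, K + 1))              # illustrative decay of the modes
g = a_k[:, None] * np.sqrt(2.0) * np.cos(2 * np.pi * np.outer(np.arange(1, K + 1), x))

def lax_friedrichs_step(u, dt, dx):
    """One Lax-Friedrichs step for du + d_x(u^2/2) dt = 0 (periodic)."""
    f = 0.5 * u ** 2
    c = dx / dt
    # numerical flux at the right interface of each cell
    flux_right = 0.5 * (f + np.roll(f, -1)) - 0.5 * c * (np.roll(u, -1) - u)
    flux_left = np.roll(flux_right, 1)
    return u - dt / dx * (flux_right - flux_left)

u = np.sin(2 * np.pi * x)        # zero-mean initial datum
mean0 = u.mean()

t = 0.0
while t < T:
    u = lax_friedrichs_step(u, dt, dx)
    dW = rng.normal(0.0, np.sqrt(dt), size=K)    # increments of the beta_k
    u = u + g.T @ dW                              # additive forcing sum_k g_k dbeta_k
    t += dt

print("drift of the spatial average:", abs(u.mean() - mean0))
```

Since the update is in conservative form and each discrete mode \(g_k\) has zero mean on the uniform grid, the printed drift of the spatial average is of the order of machine precision.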

To introduce the non-degeneracy condition under which we will work, let us denote \(a:=A'\) and set

$$\begin{aligned} \iota (\varepsilon )=\sup _{\alpha \in \mathbb {R},\beta \in \mathbb {S}^{N-1}} |\{\xi \in \mathbb {R}; |\alpha +\beta \cdot a(\xi )|<\varepsilon \}|, \end{aligned}$$
(5)

and

$$\begin{aligned} \eta (\varepsilon ):=\int _0^\infty e^{-t}\iota (t\varepsilon ) dt, \end{aligned}$$
(6)

where \(|B|\) denotes the Lebesgue measure of a set \(B\subset \mathbb {R}\). It is known that if \(a\) satisfies the following non-degeneracy condition:

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\iota (\varepsilon )=0, \end{aligned}$$
(7)

then all solutions of the deterministic equation, i.e. with \(\Phi =0\), converge to zero (cf. [27] for example; in that case the condition can be localized; see also [8] for the quasilinear parabolic case). The condition (7) is a non-stationarity condition on \(\xi \mapsto a(\xi )\).
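For instance (a standard illustration, not treated specifically here), for the Burgers flux \(A(\xi )=\xi ^2/2\) in dimension \(N=1\), we have \(a(\xi )=\xi \), \(\beta \in \mathbb {S}^0=\{-1,1\}\), and for every \(\alpha \in \mathbb {R}\)

$$\begin{aligned} |\{\xi \in \mathbb {R};\ |\alpha +\beta \xi |<\varepsilon \}|=2\varepsilon ,\quad \text{ hence }\quad \iota (\varepsilon )=2\varepsilon \rightarrow 0, \end{aligned}$$

so that (7) holds; the stronger polynomial bound (8) below is then satisfied with \(b=1\).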

In this paper, we use the approach developed in [6]. There, an averaging lemma based on the Fourier transform in \(x\) is developed. The non-degeneracy assumption is strengthened into:

$$\begin{aligned} \iota (\varepsilon )\le c_1 \varepsilon ^b, \end{aligned}$$
(8)

for some \(c_1>0\) and \(b>0\). Note that \(b\le 1\), unless \(a\equiv 0\), and that (8) is equivalent to

$$\begin{aligned} \eta (\varepsilon )\le c_1 \varepsilon ^b, \end{aligned}$$
(9)

for possibly a different constant \(c_1\). We will use the averaging lemma (in the form developed in [6], cf. Sects. 3 and 4) to prove the following result.

Theorem 1

(Invariant measure) Let \(A\in C^2(\mathbb {R};\mathbb {R}^N)\) satisfy the non-degeneracy condition (8) where \(\iota \) is defined by (5). Assume conditions (2)–(3) on the noise, the cancellation condition (4) and that \(A\) is at most cubic in the following sense:

$$\begin{aligned} |a'(\xi )|\le c_1(|\xi |+1),\; \xi \in \mathbb {R}. \end{aligned}$$
(10)

Then there exists an invariant measure for (1) in \(L^1(\mathbb {T}^N)\). Moreover, it is supported by \(L^r(\mathbb {T}^N)\) for \(r<2+\frac{b}{2}\) if \(N=1\) and \(r<\frac{N}{N-1}\) if \(N\ge 2\). If the condition (10) is strengthened into the hypothesis that \(A\) is sub-quadratic in the following sense:

$$\begin{aligned} |a'(\xi )|\le c_2,\quad \xi \in \mathbb {R}, \end{aligned}$$
(11)

then the invariant measure is unique.

Thanks to the kinetic formulation, it is easy to see that a stationary solution develops shocks (see Remark 8). Thus the noise does not have a regularizing effect here (see [15] for a review of such effects in SPDEs). Note that the presence of shocks was already observed in [28] and their structure was carefully studied.

In the statement of Theorem 1 and in the sequel, we use the letter \(C\) or \(c\) to denote a constant. Its value may change from one line to another. Sometimes, we specify its dependence on some parameters.

Remark 2

The existence of an invariant measure is closely related to the existence of estimates on the solution to (1) which are uniform in time (we do not specify in which norm in this discussion). Note that such estimates, uniform with respect to time and also with respect to the viscosity parameter (since second-order equations are considered there), have been given by Boritchev [5] for the generalized Burgers equation with a flux \(A\) satisfying a hypothesis of strict convexity and a hypothesis of (strictly) sub-quadratic growth of \(A'\). Compare with (10) here.

To prove Theorem 1, we first describe in Sect. 2 the concept of \(L^1\) solution for (1) used in this article. The well-posedness in \(L^1\) is proved in Appendix 5. Then, in Sect. 3, we prove some bounds on the solution, uniform in time, which give the existence of the invariant measure. These bounds are used again in Sect. 4, together with a quite classical argument of smallness of the noise, to obtain the uniqueness of the invariant measure. The difficulty in the proof of uniqueness is that, contrary to the convex case treated in the references above, we do not have any uniformity with respect to the initial data on the proximity between the deterministic solution and a solution with small noise. A similar phenomenon appears for the 2D stochastic Navier–Stokes equations with large viscosity (see [23]), but in that case the a priori estimates are easier to obtain and the contraction of the trajectories is much stronger.

2 Kinetic solution

Let us give the notion of solution to (1) we need to use in this article. It has been introduced in [10]. We slightly modify the definition to adapt it to the \(L^1\) setting.

Definition 3

(Kinetic measure) We say that a map \(m\) from \(\Omega \) to the set of non-negative Radon measures over \(\mathbb {T}^N\times [0,T]\times \mathbb {R}\) is a kinetic measure if

  1.

    \(m\) is measurable, in the sense that for each \(\phi \in C_c(\mathbb {T}^N\times [0,T]\times \mathbb {R})\), \(\langle m,\phi \rangle :\Omega \rightarrow \mathbb {R}\) is measurable,

  2.

    \(m\) vanishes for large \(\xi \) in the sense that

    $$\begin{aligned} \lim _{n\rightarrow +\infty }\mathbb {E}m(A_n)=0, \end{aligned}$$
    (12)

    where \(A_n=\mathbb {T}^N\times [0,T]\times \{\xi \in \mathbb {R},n\le |\xi |\le n+1\}\),

  3.

    for all \(\phi \in C_c(\mathbb {T}^N\times \mathbb {R})\), the process

    $$\begin{aligned} t\mapsto \int _{\mathbb {T}^N\times [0,t]\times \mathbb {R}} \phi (x,\xi )dm(x,s,\xi ) \end{aligned}$$

    is predictable.

Definition 4

(Solution) Let \(u_{0}\in L^1(\mathbb {T}^N)\). A measurable function \(u:\mathbb {T}^N\times [0,T]\times \Omega \rightarrow \mathbb {R}\) is said to be a solution to (1) with initial datum \(u_0\) if \((u(t))\) is predictable, if

$$\begin{aligned} \mathbb {E}\left( {{\mathrm{ess\,sup}}}_{t\in [0,T]}\Vert u(t)\Vert _{L^1(\mathbb {T}^N)}\right) <+\infty , \end{aligned}$$
(13)

and if there exists a kinetic measure \(m\) such that \(f:=\mathbf {1}_{u>\xi }\) satisfies: for all \(\varphi \in C^1_c(\mathbb {T}^N\times [0,T)\times \mathbb {R})\),

$$\begin{aligned}&\int _0^T\langle f(t),\partial _t \varphi (t)\rangle dt+\langle f_0,\varphi (0)\rangle +\int _0^T \langle f(t),a(\xi )\cdot \nabla \varphi (t)\rangle dt\nonumber \\&\quad =-\sum _{k\ge 1}\int _0^T\int _{\mathbb {T}^N}g_k(x)\varphi (x,t,u(x,t)) dxd\beta _k(t)\nonumber \\&\qquad -\frac{1}{2}\int _0^T\int _{\mathbb {T}^N}\partial _\xi \varphi (x,t,u(x, t))\mathbf {G}^2(x) dx dt+m(\partial _\xi \varphi ), \end{aligned}$$
(14)

where \(f_0(x,\xi )=\mathbf {1}_{u_0(x)>\xi }\).

We have used the brackets \(\langle \cdot ,\cdot \rangle \) to denote the duality between \(C^\infty _c(\mathbb {T}^N\times \mathbb {R})\) and the space of distributions over \(\mathbb {T}^N\times \mathbb {R}\). In what follows, we will denote similarly the integral

$$\begin{aligned} \langle F,G\rangle =\int _{\mathbb {T}^N}\int _\mathbb {R}F(x,\xi )G(x,\xi ) dx d\xi ,\quad F\in L^p(\mathbb {T}^N\times \mathbb {R}), G\in L^q(\mathbb {T}^N\times \mathbb {R}), \end{aligned}$$

where \(1\le p\le +\infty \) and \(q\) is the conjugate exponent of \(p\). In (14) also, we have used the shorthand \(m(\phi )\) for

$$\begin{aligned} m(\phi )=\int _{\mathbb {T}^N\times [0,T]\times \mathbb {R}} \phi (x,t,\xi )dm(x,t,\xi ), \quad \phi \in C_b(\mathbb {T}^N\times [0,T]\times \mathbb {R}). \end{aligned}$$

Equation (15) is naturally associated to the original problem (1). Indeed, if \(u\) is a solution to (1), we introduce a new variable \(\xi \) and consider the equation satisfied by \(f(t,x,\xi )=\mathbf {1}_{u(t,x)>\xi }\). For a smooth function \(\phi \in C^\infty (\mathbb {R})\) with compact support, we set \(\theta (\xi )=\int _{-\infty }^\xi \phi (\zeta )d\zeta \). We have the obvious identity

$$\begin{aligned} (\mathbf {1}_{u>\xi },\phi ) =\int _\mathbb {R}\mathbf {1}_{u>\xi }\theta '(\xi ) d\xi = \int _{-\infty }^u \theta '(\xi ) d\xi =\theta (u) \end{aligned}$$

where \((\cdot ,\cdot )\) is the inner product in \(L^2(\mathbb {R})\), with respect to the variable \(\xi \). We use the same notations for the duality between continuous functions and measures.

By Itô Formula, we deduce

$$\begin{aligned} d(\mathbf {1}_{u>\xi },\phi )= d\theta (u) \displaystyle =\theta '(u)(-a(u)\cdot \nabla u dt+\Phi (u)d W)+\frac{1}{2}\theta ''(u)\mathbf {G}^2(u) dt. \end{aligned}$$

Let us rewrite the three terms of the right-hand side as follows:

$$\begin{aligned} \displaystyle \theta '(u)a(u)\cdot \nabla u&= \displaystyle \mathrm{div}\left( \int ^u_{-\infty } a(\xi )\theta '(\xi )d\xi \right) =\mathrm{div}(a\mathbf {1}_{u>\xi },\phi )\\&= (a\cdot \nabla \mathbf {1}_{u>\xi },\phi ), \\ \theta ''(u)\mathbf {G}^2(u)&= (\mathbf {G}^2(\xi )\delta _{u=\xi },\phi '(\xi ))=- (\partial _\xi (\mathbf {G}^2(\xi )\delta _{u=\xi }),\phi (\xi )),\\ \theta '(u)\Phi (u)d W&= (\delta _{u=\xi }\Phi (\xi )dW,\phi (\xi )). \end{aligned}$$

We deduce

$$\begin{aligned} d(\mathbf {1}_{u>\xi },\phi )=-(a\cdot \nabla \mathbf {1}_{u>\xi },\phi )dt- \frac{1}{2}(\partial _\xi (\mathbf {G}^2\delta _{u=\xi }),\phi )dt+(\delta _{u= \xi },\phi \Phi dW). \end{aligned}$$

We then multiply by a smooth function \(\psi \) of \(x,\, t\) with compact support in \(\mathbb {T}^N\times [0,T)\), integrate in \(x,\, t\) and perform integration by parts to obtain (14) with \(\varphi (x,t,\xi )=\psi (x,t)\phi (\xi )\). Recall that such functions are dense in \(C^1_c(\mathbb {T}^N\times [0,T)\times \mathbb {R})\).

Note however that the kinetic measure is missing in the above equation. The reason is that in fact the computation is not rigorous: the solutions of (1) are not smooth.

To understand why the measure is present, recall that solutions of (1) are not expected to be unique unless an extra condition is imposed. This is the entropy condition, and entropy solutions are obtained as limits of solutions of the equation with an extra viscous term.

Thus, we consider the viscous stochastic equation, with \(\eta >0\):

$$\begin{aligned} \left\{ \begin{array}{l} du^\eta +\mathrm{div}(A(u^\eta ))dt-\eta \Delta u^\eta dt=\Phi (u^\eta )dW(t),\quad t>0,x\in \mathbb {T}^N,\\ u^\eta (x,0)=u_0(x),\quad x\in \mathbb {T}^N. \end{array}\right. \end{aligned}$$

We proceed as above and by Itô Formula, we have, for \(\theta (\xi )=\int _{-\infty }^\xi \phi (\zeta )d\zeta \),

$$\begin{aligned} \displaystyle d(\mathbf {1}_{u^\eta >\xi },\phi )&= d\theta (u^\eta ) \displaystyle =\theta '(u^\eta )(-a(u^\eta )\cdot \nabla u^\eta dt+\eta \Delta u^\eta dt+\Phi (u^\eta ) dW)\\&+\,\frac{1}{2}\theta ''(u^\eta )\mathbf {G}_\eta ^2 dt. \end{aligned}$$

We rewrite the second term as

$$\begin{aligned} \displaystyle \theta '(u^\eta )\eta \Delta u^\eta \displaystyle&= \eta \Delta \theta (u^\eta )-\eta |\nabla u^\eta |^2\theta ''(u^\eta )\\ \displaystyle&= \eta \Delta (\mathbf {1}_{u^\eta >\xi },\theta ') +(\partial _\xi (\eta |\nabla u^\eta |^2\delta _{u^\eta =\xi }),\theta '), \end{aligned}$$

and obtain the viscous kinetic formulation

$$\begin{aligned} \displaystyle d\mathbf {1}_{u^\eta >\xi }+a\cdot \nabla \mathbf {1}_{u^\eta >\xi } dt\displaystyle&= \delta _{u^\eta =\xi }\Phi (\xi ) dW - \partial _\xi \left( \frac{1}{2}\mathbf {G}_\eta ^2\delta _{u^\eta =\xi }\right) dt \\&\displaystyle +\, \eta \Delta \mathbf {1}_{u^\eta >\xi }dt +\partial _\xi m^\eta dt. \end{aligned}$$

with the viscous kinetic measure \( m^\eta = \eta |\nabla u^\eta |^2\delta _{u^\eta =\xi }\). The sequence \((m^\eta )_{\eta >0}\) does not converge to zero: in the limit \(\eta \rightarrow 0\), a measure \(m\) remains.

Note that since \(u^\eta \) is smooth (see [17]), the above computation is rigorous.

Also, it is easy to derive the energy inequality:

$$\begin{aligned} \mathbb {E}\left( \Vert u^\eta (t)\Vert ^p_{L^p(\mathbb {T}^N)}\right) +\eta \mathbb {E}\int _0^T \int _{\mathbb {T}^N} |u^\eta (t,x)|^{p-2}|\nabla u^\eta (t)|^2 dxdt \le C(p,u_0,T). \end{aligned}$$
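For instance, for \(p=2\) this follows from Itô's formula applied to \(\Vert u^\eta (t)\Vert _{L^2(\mathbb {T}^N)}^2\) (a short verification, written for the additive noise \(\Phi \) of (1); the flux term gives no contribution since \(u^\eta a(u^\eta )\cdot \nabla u^\eta \) is an exact divergence):

$$\begin{aligned} d\Vert u^\eta (t)\Vert _{L^2(\mathbb {T}^N)}^2=-2\eta \Vert \nabla u^\eta (t)\Vert _{L^2(\mathbb {T}^N)}^2dt+2(u^\eta (t),\Phi dW(t))_{L^2(\mathbb {T}^N)}+\int _{\mathbb {T}^N}\mathbf {G}^2(x)dx\, dt, \end{aligned}$$

so that, taking expectation and using (2), \(\mathbb {E}\Vert u^\eta (t)\Vert _{L^2(\mathbb {T}^N)}^2+2\eta \mathbb {E}\int _0^t\Vert \nabla u^\eta (s)\Vert _{L^2(\mathbb {T}^N)}^2ds\le \Vert u_0\Vert _{L^2(\mathbb {T}^N)}^2+D_0t\).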

This energy inequality can be rewritten as a bound on the moments of the measures \(m^\eta \) and \(\nu ^\eta =\delta _{u^\eta =\xi }\):

$$\begin{aligned} \displaystyle \mathbb {E}\int _{\mathbb {T}^N}\int _\mathbb {R}|\xi |^p d\nu ^\eta _{x,t}(\xi )dx+ \mathbb {E}\int _{\mathbb {T}^N\times [0,T]\times \mathbb {R}} |\xi |^{p-2}dm^\eta (x,t,\xi )\le C(p,u_0,T). \end{aligned}$$

Using other similar estimates, we have for a subsequence \((\eta _n)\downarrow 0\)

  1.

    \(\nu ^{\eta _n}\rightarrow \nu \) in the sense of Young measures indexed by \(\Omega \times \mathbb {T}^N\times [0,T]\) and \(f^{\eta _n}=\mathbf {1}_{u^{\eta _n}>\xi }\rightharpoonup f\) in \(L^\infty (\Omega \times \mathbb {T}^N\times (0,T)\times \mathbb {R})\) weak-\(*\). Moreover, \(\nu _{x,t}=-\partial _\xi f\),

  2.

    \(m^{\eta _n}\rightharpoonup m\) in \(L^2(\Omega ; \mathcal {M}_b)\) weak-\(*\), where \(\mathcal {M}_b\) denotes the space of bounded Borel measures over \(\mathbb {T}^N\times [0,T]\times \mathbb {R}.\)

Then, since the equation is linear in \(f, \, m,\, \nu \), it is easy to pass to the limit and prove that (14) holds. We say (cf. [10]) that \(f\) is a generalized kinetic solution.

We only know that \(f\in [0,1]\) and that \(\nu _{x,t}\) is a probability measure. To get a kinetic solution, we need \(f\in \{0,1\}\); then, necessarily, \(f=\mathbf {1}_{u>\xi }\) and \(\nu _{x,t}=\delta _{u=\xi }\). To prove this fact, one first proves uniqueness of generalized solutions using a doubling of variables argument and then sees that this implies the result. The difficulty at that point is to get time regularity of \(f\).

This proof is given in [10] (see also [11] where minor errors of [10] have been corrected) in an \(L^p(\mathbb {T}^N)\) framework. It is also shown that \(u\) is an entropy solution.

As a last comment on the notion of kinetic solution, let us remark that it is sometimes problematic to work with the unknown \(\mathbf {1}_{u>\xi }\). Indeed, it is not integrable with respect to \(\xi \). However, switching from \(\mathbf {1}_{u>\xi }\) to \(\chi _u(\xi ):=\mathbf {1}_{u>\xi >0}-\mathbf {1}_{0>\xi >u}=\mathbf {1}_{u >\xi }-\mathbf {1}_{0>\xi }\), it turns out that Eq. (14) is the weak form of the following equation

$$\begin{aligned} (\partial _t+a(\xi )\cdot \nabla )f=\delta _{u=\xi }\dot{W}+\partial _\xi \left( m-\frac{1}{2}\mathbf {G}^2\delta _{u=\xi }\right) ,\quad f=\chi _u. \end{aligned}$$
(15)

Note in particular that \(f\in L^1(\mathbb {T}^N\times [0,T]\times \mathbb {R})\) if \(u\in L^1(\mathbb {T}^N\times [0,T])\). Actually \(u\mapsto f=\chi _u\) is then an isometry: \(\int _\mathbb {R}|\chi _u(\xi )-\chi _v(\xi )|d\xi =|u-v|\) for all \(u,v\in \mathbb {R}\), so that \(\Vert \chi _u-\chi _v\Vert _{L^1_{x,\xi }}=\Vert u-v\Vert _{L^1_x}\). This simple fact is at the basis of the resolution of (1) in \(L^1\). In Appendix 5 we prove the following result which generalizes [10] (cf. Theorem 11, Corollary 12, and Theorem 19 in [10], in particular for a precise definition of the “parabolic approximation”).

Theorem 5

(Resolution of (1) in \(L^1\)) Let \(u_0\in L^1(\mathbb {T}^N)\). There exists a unique measurable \(u:\mathbb {T}^N\times [0,T]\times \Omega \rightarrow \mathbb {R}\) solution to (1) with initial datum \(u_0\) in the sense of Definition 4. Besides, \(u\) has almost surely continuous trajectories in \(L^1(\mathbb {T}^N)\) and \(u\) is the a.s. limit in \(L^1(\mathbb {T}^N\times (0,T))\) of the parabolic approximation to (1). Moreover, given \(u_0^1\) and \(u_0^2\in L^1(\mathbb {T}^N)\), the following holds:

$$\begin{aligned} \Vert u^1(t)-u^2(t)\Vert _{L^1(\mathbb {T}^N)}\le \Vert u_0^1-u_0^2\Vert _{L^1(\mathbb {T}^N)},\; a.s. \end{aligned}$$
(16)

We need this generalization because we are not able to prove that the invariant measure we construct has its support in \(L^p(\mathbb {T}^N)\) for \(p\) sufficiently large to apply the result in [10]. Note that it follows from the proof that if \(u_0\in L^r(\mathbb {T}^N)\) for some \(r>1\) then the solution lives in \(L^r(\mathbb {T}^N)\) and an estimate similar to (13) but in \(L^r(\mathbb {T}^N)\) holds.

By Theorem 5, we can define the transition semigroup in \(L^1(\mathbb {T}^N)\):

$$\begin{aligned} P_t\phi (u_0)=\mathbb {E}(\phi (u(t))),\; \phi \in \mathcal B_b(L^1(\mathbb {T}^N)). \end{aligned}$$

It follows easily from the property of \(L^1\)-contraction (16) that \((P_t)\) is Feller.
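Indeed (a one-line verification), if \(\phi \) is bounded and Lipschitz continuous on \(L^1(\mathbb {T}^N)\), then (16) gives

$$\begin{aligned} |P_t\phi (u_0^1)-P_t\phi (u_0^2)|\le \mathbb {E}\left| \phi (u^1(t))-\phi (u^2(t))\right| \le \mathrm {Lip}(\phi )\Vert u_0^1-u_0^2\Vert _{L^1(\mathbb {T}^N)}, \end{aligned}$$

and for a general \(\phi \in C_b(L^1(\mathbb {T}^N))\) the continuity of \(u_0\mapsto P_t\phi (u_0)\) follows from (16) and dominated convergence.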

Our aim is to construct an invariant measure for \((P_t)\) under assumption (10). Recall that this is a probability measure \(\mu \) on \(L^1(\mathbb {T}^N)\) satisfying:

$$\begin{aligned} P_t^*\mu =\mu , \; t\ge 0. \end{aligned}$$
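Let us also recall, for orientation, the Krylov-Bogoliubov scheme used below (a standard argument, stated here without the details given in Sect. 3): starting for instance from \(u_0=0\), one forms the time-averaged laws

$$\begin{aligned} \mu _T=\frac{1}{T}\int _0^T \mathcal {L}(u(t))\, dt,\quad T>0, \end{aligned}$$

where \(\mathcal {L}(u(t))\) denotes the law of \(u(t)\) on \(L^1(\mathbb {T}^N)\); the uniform bound (36) below and the compactness of the embedding \(W^{s,q}(\mathbb {T}^N)\subset L^1(\mathbb {T}^N)\) yield tightness of \((\mu _T)_{T>0}\), and the Feller property implies that any weak limit point of \((\mu _T)\) as \(T\rightarrow +\infty \) is an invariant measure.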

We prove the first part of Theorem 1 (existence of an invariant measure) in the next section thanks to the Krylov-Bogoliubov theorem. We conclude this section with some remarks.

Since the solution of (1) has continuous trajectories, it is easy to prove that it satisfies the following weak formulation:

$$\begin{aligned}&-\langle f(t),\varphi \rangle +\langle f_0,\varphi \rangle +\int _0^t \langle f(s),a(\xi )\cdot \nabla \varphi \rangle ds\nonumber \\&\quad =-\sum _{k\ge 1}\int _0^t\int _{\mathbb {T}^N}g_k(x)\varphi (x,u(x,s)) dxd\beta _k(s)\nonumber \\&\qquad -\frac{1}{2}\int _0^t\int _{\mathbb {T}^N} \partial _\xi \varphi (x,u(x,s)) \mathbf {G}^2(x) dx ds+m(\partial _\xi \varphi ), \end{aligned}$$
(17)

for all \(\varphi \in C^1_c(\mathbb {T}^N\times \mathbb {R})\).

In fact, we will even use a stronger formulation, based on the following remark.

Remark 6

(Regularity of the parabolic approximation) Let the parabolic approximation to (1) be given by:

$$\begin{aligned} du^\eta +\mathrm{div}(A_\eta (u^\eta ))dt-\eta \Delta u^\eta =\Phi _\eta dW(t),\quad x\in \mathbb {T}^N, t\in (0,T), \end{aligned}$$
(18)

where \(A_\eta \) and \(\Phi _\eta \) are smooth. By Hofmanová [17], if \(u^\eta (0)\) is in \(W^{m,p}(\mathbb {T}^N)\cap W^{1,mp}(\mathbb {T}^N)\), then \(u^\eta \) belongs to \(L^q(\Omega ;C([0,T];W^{m,p}(\mathbb {T}^N)\cap W^{1,mp}(\mathbb {T}^N)))\) for any \(q\ge 1\). Also by Theorem 5, if one approximates the solution of (1) by the solution of (18) with \(u^\eta (0) \rightarrow u_0\) in \(L^1(\mathbb {T}^N)\), then \(u^\eta \rightarrow u\) in \(L^\infty (0,T;L^1(\Omega \times \mathbb {T}^N))\).

Remark 7

(Multiplicative noise) A natural extension of our study is to consider a noise depending on the solution of the form

$$\begin{aligned} \Phi (u)dW=\sum _{k\ge 1} \Phi (u)e_kd\beta _k=\sum _{k\ge 1} g_k(u)d\beta _k. \end{aligned}$$

With such noise, the spatial average satisfies:

$$\begin{aligned} \int _{\mathbb {T}^N} u(x,t)dx = \int _{\mathbb {T}^N} u_0(x)dx +\sum _{k\ge 1}\int _0^t \int _{\mathbb {T}^N} g_k(u(x,s))dx d\beta _k(s). \end{aligned}$$

It seems difficult to provide any bound on this average for a general noise. A first possibility is to consider a noise satisfying

$$\begin{aligned} \int _{\mathbb {T}^N}g_k(u(x))dx=0,\; k\ge 1. \end{aligned}$$

Then the mass is preserved exactly. This would hold for \(g_k(u)=\frac{\partial }{\partial x}h_k(u)\) or \(g_k(u)=h_k(u)-\frac{1}{|\mathbb {T}^N|}\int _{\mathbb {T}^N} h_k(u)dx.\) But such noises are not suited to the framework of kinetic solutions and are not covered by our theory. The first one is considered in [21] but the longtime behaviour is not analysed there. The second noise is non local and does not seem to be justified physically.

It would be also possible to consider noises of the form \(u(1-u)dW\). Then, the solution is easily proved to live in \([0,1]\) and the existence of an invariant measure is easy. The longtime behaviour in this case is expected to be completely different (see [18] for the case of an SDE and [4] for the case of a parabolic SPDE).

Remark 8

(Shocks do not disappear) Once we have proved Theorem 1, it is not difficult to construct a stationary solution of (1) with the invariant law given by this result. It is of course a kinetic solution. Assume that \(N=1\); then the invariant measure is supported by \(L^p(\mathbb {T}^N)\) for some \(p>2\). It is easy to see that a preliminary step of truncation allows one to take the test function \(\varphi (\xi )=\xi \) in (17) and we obtain:

$$\begin{aligned} \mathbb {E}\left( \Vert u(t_0+1)\Vert _{L^2(\mathbb {T}^N)}^2\right) +\mathbb {E}m\left( \mathbb {T}^N \times \mathbb {R}\times [t_0,t_0+1]\right) =\mathbb {E}\left( \Vert u(t_0)\Vert _{L^2(\mathbb {T}^N)}^2\right) +D_0. \end{aligned}$$

Thus, by stationarity,

$$\begin{aligned} \mathbb {E}m\left( \mathbb {T}^N\times \mathbb {R}\times [t_0,t_0+1]\right) =D_0. \end{aligned}$$

As expected, the left-hand side does not depend on \(t_0\). More interestingly, it is not \(0\). This shows that shocks are present in the stationary solutions and that the noise has no regularizing effect. If \(N\ge 2\), one can use the test functions of the proof of Proposition 16 to see that again shocks are present for a stationary solution.

Remark 9

We could also have constructed a random dynamical system associated to (1) and considered invariant measures in the co-cycle sense. This is the approach in [28]. It is not difficult to see that, with the results obtained in the following section, we can prove that for almost all \(\omega \in \Omega \) there exists a unique solution defined on the whole real line and that there exists a random attractor which is reduced to a single random point.

3 Uniform bound and tightness for the stochastically forced equation

3.1 Decomposition of \(u\)

Let \(\alpha , \gamma ,\delta >0\) be some positive parameters. On \(L^2(\mathbb {T}^N)\), we consider the following regularization of the operator \(-a(\xi )\cdot \nabla \):

$$\begin{aligned} A_\gamma :=-a(\xi )\cdot \nabla -B_\gamma ,\quad A_{\gamma ,\delta }=A_\gamma -\delta \mathrm {Id},\quad B_\gamma :=\gamma (-\Delta )^{\alpha }. \end{aligned}$$

Let also \(S_{A_\gamma }(t)\) and \(S_{A_{\gamma ,\delta }}(t)\) be the associated semi-groups on \(L^2(\mathbb {T}^N)\):

$$\begin{aligned} S_{A_\gamma }(t)v(x)=(e^{-tB_\gamma }v)(x-a(\xi )t),\quad S_{A_{\gamma ,\delta }}(t)v(x)=e^{-\delta t}(e^{-tB_\gamma }v)(x-a(\xi )t). \end{aligned}$$

Note that the semi-group \(e^{-tB_\gamma }\) has the following regularizing properties, which we will use later.

Lemma 10

For \(\gamma >0\), \(\alpha \in (0,1]\), let \(B_\gamma =\gamma (-\Delta )^{\alpha }\) on \(\mathbb {T}^N\) and let \(e^{-tB_\gamma }\) denote the associated semi-group. Then there exists a constant \(c_3\) depending on \(N,\, n,\, m,\, \alpha ,\, \beta \) such that

$$\begin{aligned} \Vert (-\Delta )^{\beta /2}e^{-tB_\gamma }\Vert _{L^m\rightarrow L^n}\le \frac{c_3}{(\gamma t)^{\frac{N}{2\alpha }\left( \frac{1}{m}-\frac{1}{n}\right) + \frac{\beta }{2\alpha }}}, \end{aligned}$$

for all \(1\le m\le n\le +\infty \) and \(\beta \ge 0\).

Proof

It is sufficient to consider the case \(\gamma =1\). Note that \(e^{-tB_1}\varphi =K_{t}*\varphi \) for all \(\varphi \in \mathcal {C}^2(\mathbb {T}^N)\), where

$$\begin{aligned} K_t(x)=\sum _{n\in \mathbb {Z}^N}e^{-t|n|^{2\alpha }+2\pi i n\cdot x}. \end{aligned}$$

Let \(\mathcal {F}\) denote the Fourier transform

$$\begin{aligned} \mathcal {F}u(\xi )=\int _{\mathbb {R}^N}u(x)e^{-2\pi ix\cdot \xi }dx, \end{aligned}$$

for \(u\in \mathcal {S}(\mathbb {R}^N)\). Let \(H_t\) denote the inverse Fourier transform of \(\xi \mapsto e^{-t|\xi |^{2\alpha }}\) (this is the “heat” kernel associated to \((-\Delta )^{\alpha }\) on \(\mathbb {R}^N\)) and let \(H_t^\mathrm {per}\) denote the periodic function generated by \(H_t\):

$$\begin{aligned} H_t^\mathrm {per}(x)=\sum _{l\in \mathbb {Z}^N}H_t(x+l),\quad x\in \mathbb {R}^N. \end{aligned}$$

For a fixed \(x\in \mathbb {R}^N\), we apply the Poisson formula

$$\begin{aligned} \frac{1}{(2\pi )^N}\sum _{k\in \mathbb {Z}^N}\mathcal {F}\theta (k)=\sum _{l\in \mathbb {Z}^N}\theta (2\pi l), \quad \theta \in \mathcal {S}'(\mathbb {R}^N) \end{aligned}$$

to the function \(\theta (z)=H_t(x+z/2\pi )\). We obtain \(H_t^\mathrm {per}=K_t\). In particular, we have

$$\begin{aligned} \Vert K_t\Vert _{L^r(\mathbb {T}^N)}\le \Vert H_t\Vert _{L^r(\mathbb {R}^N)},\quad 1\le r\le +\infty . \end{aligned}$$

By the Young inequality, we also have

$$\begin{aligned} \Vert e^{-tB_1}\Vert _{L^m(\mathbb {T}^N)\rightarrow L^n(\mathbb {T}^N)}\le \Vert K_t\Vert _{L^r(\mathbb {T}^N)},\quad \frac{1}{m}+\frac{1}{r}=1+\frac{1}{n}, \end{aligned}$$

and, therefore,

$$\begin{aligned} \Vert e^{-tB_1}\Vert _{L^m(\mathbb {T}^N)\rightarrow L^n(\mathbb {T}^N)}\le \Vert H_t\Vert _{L^r(\mathbb {R}^N)}. \end{aligned}$$

By homogeneity, we have \(H_t(x)=t^{-\frac{N}{2\alpha }}H_1(t^{-\frac{1}{2\alpha }}x)\), hence \(\Vert H_t\Vert _{L^r(\mathbb {R}^N)}=t^{-\frac{N}{2\alpha }\left( 1-\frac{1}{r}\right) }\Vert H_1\Vert _{L^r(\mathbb {R}^N)}=\frac{c}{t^{\frac{N}{2 \alpha }\left( \frac{1}{m}-\frac{1}{n}\right) }}\). The case \(\beta \not =0\) is similar. \(\square \)

Now, we formally decompose the solutions using the following rewriting of (15):

$$\begin{aligned} (\partial _t-A_{\gamma ,\delta })f=(B_\gamma +\delta \text{ Id })f + p+\partial _\xi q,\quad f=\chi _u. \end{aligned}$$
(19)

with \(p= \delta _{u=\xi }\dot{W}\), \(q=m-\frac{1}{2}\mathbf {G}^2\delta _{u=\xi }\).

More precisely, by a preliminary step of regularization, we may use the test function \(S^*_{A_{\gamma ,\delta }}(T-t)\varphi \) with \(\varphi \in C(\mathbb {T}^N)\) in (14). Then, by the commutation identity

$$\begin{aligned} S_{A_{\gamma ,\delta }}(t)\partial _\xi g=\partial _\xi \left( S_{A_{\gamma ,\delta }}(t)g\right) +tXS_{A_{\gamma ,\delta }}(t) g,\quad X=a'(\xi )\cdot \nabla , \end{aligned}$$

we obtain the following decomposition of the solution:

$$\begin{aligned} u=u^0+u^\flat +P+Q, \end{aligned}$$
(20)

where

$$\begin{aligned} u^0(t)&=\int _\mathbb {R}S_{A_{\gamma ,\delta }}(t)f(0,\xi )d\xi , \end{aligned}$$
(21)
$$\begin{aligned} u^\flat (t)&=\int _\mathbb {R}\int _0^t S_{A_{\gamma ,\delta }}(s) (B_\gamma f+\delta f)(t-s,\xi )ds d\xi ,\end{aligned}$$
(22)
$$\begin{aligned} \langle P(t),\varphi \rangle&=\sum _{k\ge 1} \int _{\mathbb {T}^N}\int _0^t \left( S_{A_{\gamma ,\delta }}^*(t-s)\varphi \right) (x,u(s,x)) g_k(x) d\beta _k(s)dx,\end{aligned}$$
(23)
$$\begin{aligned} \mathrm {and}\nonumber \\ \langle Q(t),\varphi \rangle&=\int _{\mathbb {T}^N\times [0,t]\times \mathbb {R}} (t-s)a'(\xi ) \cdot \nabla S_{A_{\gamma ,\delta }}^*(t-s)\varphi dq(x,s,\xi ), \end{aligned}$$
(24)

where \(\varphi \in C(\mathbb {T}^N)\) and \(\langle M,\varphi \rangle \) denotes the duality product between the space of finite Borel measures on \(\mathbb {T}^N\) and \(C(\mathbb {T}^N)\).

We estimate each term separately. In fact, we estimate the parabolic approximation of \(u\). Since it is as smooth as we need (cf. Remark 6), the computations below are easily justified. In particular, for the parabolic approximation, the kinetic measure is given by

$$\begin{aligned} m^\eta =\eta |\nabla u^\eta |^2 \delta _{u^\eta =\xi } \end{aligned}$$

which is smooth in \(t\) and \(x\). To lighten the computations below, we will however not indicate the dependence on \(\eta \), except in Sect. 3.7 where we derive the final estimate on the true solution \(u\).

3.2 Estimate of \(u^0\)

We use the Fourier transform with respect to \(x\in \mathbb {T}^N\), i.e.

$$\begin{aligned} \hat{v}(n)=\int _{\mathbb {T}^N} v(x) e^{-2\pi i n\cdot x}dx,\quad n\in \mathbb {Z}^N,\quad v\in L^1(\mathbb {T}^N). \end{aligned}$$

After Fourier transform, \(S_{A_{\gamma ,\delta }}(t)\) is multiplication by \(e^{-(ia(\xi )\cdot n +\gamma |n|^{2\alpha }+\delta )t}\), hence, for all \(n\in \mathbb {Z}^N\), \(n\ne 0\),

$$\begin{aligned} \hat{u}^0(n,t) =\int _\mathbb {R}e^{-(ia(\xi )\cdot n +\gamma |n|^{2\alpha }+\delta )t}\widehat{f_0}(n,\xi )d\xi . \end{aligned}$$

Let us set, for \(\varphi \,:\, \mathbb {Z}^N\times \mathbb {R}\rightarrow \mathbb {C}\) satisfying \(\varphi (n,\cdot )\in L^2(\mathbb {R})\) for all \(n\in \mathbb {Z}^N\):

$$\begin{aligned} {\mathcal G}\varphi (n,z)=\int _\mathbb {R}e^{-ia(\xi )\cdot z} \varphi (n,\xi )d\xi . \end{aligned}$$
(25)

The operator \(\mathcal {G}\) is well defined, see [6]. Let us also define

$$\begin{aligned} \omega _n=\gamma |n|^{2\alpha -1} +\delta |n|^{-1}. \end{aligned}$$
(26)

Then, for \(T\ge 0\), \(n\ne 0\),

$$\begin{aligned} \int _0^T|\hat{u}^0(n,t)|^2 dt&\displaystyle = \int _0^T e^{-2(\gamma |n|^{2\alpha }+\delta )t} \left| {\mathcal G}\widehat{f}_0 (n,nt)\right| ^2 dt\\&\le \frac{1}{|n|}\int _{\mathbb {R}^+} e^{-2\omega _n s} \left| {\mathcal G}\widehat{f}_0 \left( n,\frac{n}{|n|}s\right) \right| ^2 ds, \end{aligned}$$

thanks to the change of variable \(s=|n| t\). We now use Lemma 2.4 in [6], which gives an estimate of the oscillatory integral (25); we also use the obvious inequality \(e^{-a}\le 1/(1+a^2)\) to deduce:

$$\begin{aligned} \int _0^T|\hat{u}^0(n,t)|^2 dt&\displaystyle \le \frac{1}{|n|}\int _{\mathbb {R}^+} \frac{1}{1+4\omega _n^2s^2} \left| {\mathcal G}\widehat{f}_0 \left( n,\frac{n}{|n|}s\right) \right| ^2 ds\\&\le \frac{1}{|n|}\frac{2\pi }{2\omega _n}\eta (\omega _n) \left\| \widehat{f}_0(n,\cdot )\right\| _{L^2_\xi }^2. \end{aligned}$$

Recall that \(\eta \) was defined in (6) and satisfies the decay condition (9). Consequently, we have the estimate

$$\begin{aligned} \int _0^T|\hat{u}^0(n,t)|^2 dt \le c\frac{1}{|n|}\omega _n^{b-1}\left\| \widehat{f}_0(n,\cdot )\right\| _{L^2_\xi }^2. \end{aligned}$$

Clearly,

$$\begin{aligned} |n|\omega _n^{1-b}\ge \gamma ^{1-b} |n|^{2\alpha (1-b)+b}, \end{aligned}$$

for \(n\not =0\). Summing over \(n\in \mathbb {Z}^N\) and noticing that

$$\begin{aligned} \hat{u}^0(0,t)= \int _{\mathbb {T}^N} u^0(t,x) dx&= \int _\mathbb {R}\int _{\mathbb {T}^N} e^{-\delta t} f(0,x,\xi ) d\xi dx\\&= \int _{\mathbb {T}^N} e^{-\delta t} u_0(x) dx=0, \end{aligned}$$

we obtain:

$$\begin{aligned} \int _0^T\Vert u^0(t)\Vert _{H^{\alpha +\left( \frac{1}{2}-\alpha \right) b}}^2 dt \le c\gamma ^{b-1}\left\| f_0\right\| _{L^2_{x,\xi }}^2. \end{aligned}$$

Since \(\left\| f_0\right\| _{L^2_{x,\xi }}^2= \left\| f_0\right\| _{L^1_{x,\xi }}= \left\| u_0\right\| _{L^1_{x}}\), this gives

$$\begin{aligned} \int _0^T\Vert u^0(t)\Vert _{H^{\alpha +\left( \frac{1}{2}-\alpha \right) b}}^2 dt \le c\gamma ^{b-1}\left\| u_0\right\| _{L^1_{x}}. \end{aligned}$$
(27)
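The Sobolev exponent in (27) can be read off as follows (a short verification): combining the last two bounds,

$$\begin{aligned} |n|^{2\sigma }\int _0^T|\hat{u}^0(n,t)|^2 dt\le c\gamma ^{b-1}|n|^{2\sigma -2\alpha (1-b)-b}\left\| \widehat{f}_0(n,\cdot )\right\| _{L^2_\xi }^2, \end{aligned}$$

and the exponent of \(|n|\) is non-positive precisely when \(\sigma \le \alpha (1-b)+\frac{b}{2}=\alpha +\left( \frac{1}{2}-\alpha \right) b\), which is the regularity appearing in (27).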

3.3 Estimate of \(u^\flat \)

Recall that

$$\begin{aligned} u^\flat (t)=\int _\mathbb {R}\int _0^t S_{A_{\gamma ,\delta }}(s)(B_\gamma f+\delta f)(t-s,\xi )ds d\xi . \end{aligned}$$

We use the same argument as above. We first notice that

$$\begin{aligned} \widehat{u}^\flat (0,t)=\int _{\mathbb {T}^N} u^\flat (x,t)dx=0 \end{aligned}$$

for all \(t\ge 0\), almost surely. This is easily seen using that

$$\begin{aligned} \int _{\mathbb {T}^N}\int _\mathbb {R}f(x,t,\xi )d\xi \, dx=0, \end{aligned}$$

which follows from (4) and (17) with test functions approximating \(\varphi (x,\xi )=1\). Then, we write the Fourier transform of \(u^\flat \) for \(n\ne 0\):

$$\begin{aligned} \hat{u}^\flat (n,t)&\displaystyle =\int _0^t \int _\mathbb {R}(\gamma |n|^{2\alpha }+\delta ) e^{-s(i a(\xi )\cdot n +\gamma |n|^{2\alpha }+\delta )} \widehat{f}(n,\xi ,t-s)d\xi ds\\&\displaystyle = \int _0^t (\gamma |n|^{2\alpha }+\delta ) e^{-s(\gamma |n|^{2\alpha }+\delta )} \mathcal G \widehat{f}(n,ns,t-s)ds, \end{aligned}$$

with, as in (25),

$$\begin{aligned} \mathcal G \widehat{f}(n,z,s)= \int _\mathbb {R}e^{-i a(\xi )\cdot z} \widehat{f}(n,\xi ,s)d\xi . \end{aligned}$$

By Jensen’s inequality, we have

$$\begin{aligned} |\hat{u}^\flat (n,t)|^2 \le \int _0^t (\gamma |n|^{2\alpha }+\delta ) e^{-s(\gamma |n|^{2\alpha }+\delta )} \left| \mathcal G \widehat{f}(n,ns,t-s)\right| ^2 ds. \end{aligned}$$

Then, by simple manipulations and Lemma 2.4 in [6], we deduce the following sequence of inequalities for \(T\ge 0\) (recall that \(\omega _n\) is defined by (26)):

$$\begin{aligned} \int _0^T |\hat{u}^\flat (n,t)|^2 dt&\displaystyle \le \int _0^T \int _0^t (\gamma |n|^{2\alpha }+\delta ) e^{-s(\gamma |n|^{2\alpha }+\delta )} \left| \mathcal G \widehat{f}(n,ns,t-s)\right| ^2 ds dt\\&\displaystyle \le \int _0^T \int _0^T (\gamma |n|^{2\alpha }+\delta ) e^{-s(\gamma |n|^{2\alpha }+\delta )} \left| \mathcal G \widehat{f}(n,ns,t)\right| ^2 ds dt\\&\displaystyle \le \int _0^T \int _{\mathbb {R}^+} \omega _n e^{-\omega _n s} \left| \mathcal G \widehat{f}\left( n,\frac{n}{|n|}s,t\right) \right| ^2 ds dt\\&\displaystyle \le \int _0^T \int _{\mathbb {R}^+} \omega _n \frac{1}{1+\omega _n^2s^2} \left| \mathcal G \widehat{f}\left( n,\frac{n}{|n|}s,t\right) \right| ^2 ds dt\\&\displaystyle \le c\omega _n^b \int _0^T \left| \widehat{f}\left( n,t\right) \right| ^2_{L^2_\xi } dt. \end{aligned}$$

We conclude by summing over \(n\in \mathbb {Z}^N\), \(n\ne 0\):

$$\begin{aligned} \int _0^T\Vert u^\flat (t)\Vert _{H^{\left( \frac{1}{2}-\alpha \right) b}}^2 dt \le c\gamma ^{b}\int _0^T\left\| f(t)\right\| _{L^2_{x,\xi }}^2 dt. \end{aligned}$$

Since \(\left\| f(t)\right\| _{L^2_{x,\xi }}^2=\left\| u(t)\right\| _{L^1_{x}}\), we obtain

$$\begin{aligned} \int _0^T\Vert u^\flat (t)\Vert _{H^{\left( \frac{1}{2}-\alpha \right) b}}^2 dt\le c\gamma ^{b}\int _0^T\left\| u(t)\right\| _{L^1_{x}}dt. \end{aligned}$$
(28)

3.4 Estimate of \(P\)

Recall that \(P\) is a random measure on \(\mathbb {T}^N\) given by:

$$\begin{aligned} \langle P(t),\varphi \rangle =\sum _{k\ge 1} \int _{\mathbb {T}^N}\int _0^t \left( S_{A_{\gamma ,\delta }}^*(t-s)\varphi \right) (x,u(s,x)) g_k(x) d\beta _k(s)dx,\, \varphi \in C(\mathbb {T}^N). \end{aligned}$$

Note that, for each \(k\in \mathbb {N}\),

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {T}^N}\int _0^t\left( S_{A_{\gamma ,\delta }}^*(t-s)\varphi \right) (x,u(s,x)) g_k(x)d\beta _k(s)dx\\&\quad = \int _{\mathbb {T}^N}\int _0^t e^{-\delta (t-s)}\left( e^{-B_\gamma (t-s)} \varphi \right) \left( x+a(u(s,x))(t-s)\right) g_k(x) d\beta _k(s)dx\\&\quad = \int _{\mathbb {T}^N}\int _0^t e^{-\delta (t-s)}\varphi (x) e^{-B_\gamma (t-s)} \left( g_k(\cdot -a(u(\cdot ,s))(t-s))\right) (x) d\beta _k(s)dx. \end{aligned} \end{aligned}$$

By (2), the mapping \(x\mapsto g_k(x-a(u(x,s))(t-s))\) is a bounded function, therefore by Lemma 10 we can write for \(\sigma \ge 0\):

$$\begin{aligned} \Vert e^{-B_\gamma (t-s)}\left( g_k(\cdot -a(u(\cdot ,s))(t-s))\right) \Vert _{H^\sigma (\mathbb {T}^N)}\le c \left( \gamma (t-s) \right) ^{-\frac{\sigma }{2\alpha }} \Vert g_k\Vert _{L^2(\mathbb {T}^N)}. \end{aligned}$$

We deduce that, for \(\sigma \in [0,\alpha )\), this defines a function in \(H^{\sigma }(\mathbb {T}^N)\) and

$$\begin{aligned}&\mathbb {E}\left( \left\| \int _0^t e^{-\delta (t-s)}e^{-B_\gamma (t-s)} \left( g_k(\cdot -a(u(\cdot ,s))(t-s))\right) d\beta _k(s) \right\| ^2_{H^{\sigma }(\mathbb {T}^N)}\right) \nonumber \\&\quad \le c \int _0^t e^{-2\delta (t-s)}\left( \gamma (t-s) \right) ^{-\frac{\sigma }{\alpha }} ds\Vert g_k\Vert _{L^2(\mathbb {T}^N)}^2\nonumber \\&\quad \le c \gamma ^{-\frac{\sigma }{\alpha }}\delta ^{\frac{\sigma }{\alpha }-1} \int _{\mathbb {R}^+}e^{-2s}s^{-\frac{\sigma }{\alpha }}ds \Vert g_k\Vert _{L^2(\mathbb {T}^N)}^2 \end{aligned}$$
(29)

Since the sum over \(k\) of the right hand side of (29) is finite, we may then conclude:

$$\begin{aligned}&\mathbb {E}\left( \left\| \sum _{k\ge 1}\int _0^t e^{-\delta (t-s)}e^{-B_\gamma (t-s)} \left( g_k(\cdot -a(u(\cdot ,s))(t-s))\right) d\beta _k(s) \right\| ^2_{H^{\sigma }(\mathbb {T}^N)}\right) \\&\quad \le c \gamma ^{-\frac{\sigma }{\alpha }}\delta ^{\frac{\sigma }{\alpha }-1} \left( \int _{\mathbb {R}^+}e^{-2s}s^{-\frac{\sigma }{\alpha }}ds\right) D_0. \end{aligned}$$

This shows that in fact \(P(t)\) is more regular than a measure. It is in the space \(L^2(\Omega ;H^\sigma (\mathbb {T}^N))\) for \(\sigma < \alpha \) and

$$\begin{aligned} \mathbb {E}\left( \left\| P(t)\right\| _{H^\sigma (\mathbb {T}^N)}^2 \right) \le c \gamma ^{-\frac{\sigma }{\alpha }}\delta ^{\frac{\sigma }{\alpha }-1} \left( \int _{\mathbb {R}^+}e^{-2s}s^{-\frac{\sigma }{\alpha }}ds\right) D_0. \end{aligned}$$
(30)

3.5 Bound on the kinetic measure

Before we estimate \(Q\), we give an estimate on the kinetic measure.

Lemma 11

Let \(u:\mathbb {T}^N\times [0,T]\times \Omega \rightarrow \mathbb {R}\) be the solution to (1) with initial datum \(u_0\). Then the measure \(q:=m-\frac{1}{2}\mathbf {G}^2\delta _{u=\xi }\) satisfies

$$\begin{aligned} \mathbb {E}\int _{\mathbb {T}^N\times [0,T]\times \mathbb {R}}\theta (\xi )d|q|(x,t,\xi )\le D_0 \mathbb {E}\Vert \theta (u)\Vert _{L^1(\mathbb {T}^N\times [0,T])} + \mathbb {E}\int _{\mathbb {T}^N} \Theta (u_0(x))dx \end{aligned}$$
(31)

for all non-negative \(\theta \in C_c(\mathbb {R})\), where \(\Theta (s)=\int _0^s\int _0^\sigma \theta (r)dr d\sigma \).

Proof

Let \(\theta \in C_c(\mathbb {R})\) be non-negative. Set

$$\begin{aligned} \Theta (s)=\int _0^s\int _0^\sigma \theta (r)dr d\sigma . \end{aligned}$$

We test the kinetic formulation (14) with \(\Theta '(\xi )\), integrate over \(x\in \mathbb {T}^N\), \(\xi \in \mathbb {R}\) and take the expectation:

$$\begin{aligned} \frac{d\;}{dt}\int _{\mathbb {T}^N}\mathbb {E}\Theta (u) dx=\frac{1}{2}\mathbb {E}\int _{\mathbb {T}^N} \mathbf {G}^2\theta (u)dx-\mathbb {E}\int _{\mathbb {T}^N\times \mathbb {R}}\theta (\xi ) dm(x,t,\xi ). \end{aligned}$$

Since \(\Theta (0)=0\) and \(\Theta (u)\ge 0\), we deduce that

$$\begin{aligned} \mathbb {E}|\langle |q|,\theta \rangle |&\le \frac{1}{2}\mathbb {E}\int _{\mathbb {T}^N\times [0,T]} \mathbf {G}^2\theta (u)dx+\mathbb {E}\int _{\mathbb {T}^N\times [0,T]\times \mathbb {R}}\theta (\xi ) dm(x,t,\xi ),\\&\le \mathbb {E}\int _{\mathbb {T}^N\times [0,T]} \mathbf {G}^2\theta (u)dx+ \mathbb {E}\int _{\mathbb {T}^N} \Theta (u_0(x))dx\\&\le D_0 \mathbb {E}\Vert \theta (u)\Vert _{L^1(\mathbb {T}^N\times [0,T])}+ \mathbb {E}\int _{\mathbb {T}^N} \Theta (u_0(x))dx. \end{aligned}$$

\(\square \)

3.6 Estimate of \(Q\)

Recall that

$$\begin{aligned} \langle Q(t),\varphi \rangle =\int _{\mathbb {T}^N\times [0,t]\times \mathbb {R}} (t-s)a'(\xi ) \cdot \nabla S_{A_{\gamma ,\delta }}^*(t-s)\varphi dq(x,s,\xi ). \end{aligned}$$

For \(\varepsilon >0\), we define the measure:

$$\begin{aligned} \langle Q_\varepsilon (t),\varphi \rangle =\int _{\mathbb {T}^N\times [0,(t-\varepsilon )\vee 0]\times \mathbb {R}} (t-s)a'(\xi )\cdot \nabla S_{A_{\gamma ,\delta }}^*(t-s)\varphi dq(x,s,\xi ), \end{aligned}$$

where \(\varphi \in C(\mathbb {T}^N)\). We choose \(\varphi \) depending also on \(\omega \in \Omega \) and \(t\in [0,T]\). We then have, for \(\lambda \in (0,2]\),

$$\begin{aligned}&\mathbb {E}\int _0^T \langle (-\Delta )^{\frac{\lambda }{2}}Q_\varepsilon (t),\varphi (t) \rangle dt\\&\quad = \mathbb {E}\int _\varepsilon ^T \int _{\mathbb {T}^N\times [0,t-\varepsilon ]\times \mathbb {R}} (t-s)a'(\xi )\cdot \nabla (-\Delta )^{\frac{\lambda }{2}} S_{A_{\gamma ,\delta }}^*(t-s)\varphi (t) dq(x,s,\xi ) dt. \end{aligned}$$

By Lemma 10, for \(p\ge 1\) and \(p'\) its conjugate exponent, we have

$$\begin{aligned}&\Vert (-\Delta )^{\frac{\lambda }{2}}\nabla S_{A_{\gamma ,\delta }}^*(t-s) \varphi (t)\Vert _{L^{\infty }(\mathbb {T}^N\times \mathbb {R})}\\&\quad = e^{-\delta (t-s)}\Vert (-\Delta )^{\frac{\lambda }{2}}\nabla e^{-B_\gamma (t-s)}\varphi (t)\Vert _{L^{\infty }(\mathbb {T}^N)}\\&\quad \le c(\gamma (t-s))^{-2+\mu _{N,\alpha ,\lambda ,p}} e^{-\delta (t-s)}\Vert \varphi (t)\Vert _{L^{p'}(\mathbb {T}^N)}, \end{aligned}$$

with

$$\begin{aligned} \mu _{N,\alpha ,\lambda ,p}=2-\frac{N+\lambda +1}{2\alpha }+ \frac{N}{2p\alpha }. \end{aligned}$$
(32)
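The exponent \(-2+\mu _{N,\alpha ,\lambda ,p}\) above is a direct consequence of Lemma 10 (a short verification, treating \(\nabla \) as a derivative of order one, i.e. applying the lemma with \(\beta =\lambda +1\), \(m=p'\), \(n=+\infty \)):

$$\begin{aligned} \frac{N}{2\alpha }\cdot \frac{1}{p'}+\frac{\lambda +1}{2\alpha }=\frac{N+\lambda +1}{2\alpha }-\frac{N}{2p\alpha }=2-\mu _{N,\alpha ,\lambda ,p}. \end{aligned}$$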

Assume \(\mu _{N,\alpha ,\lambda ,p}>0\). By the Fubini Theorem, we have then

$$\begin{aligned}&\mathbb {E}\int _0^T \langle (-\Delta )^{\frac{\lambda }{2}} Q_\varepsilon (t),\varphi (t)\rangle dt\\&\quad \le \frac{\kappa }{\gamma } \Vert \varphi \Vert _{L^\infty (\Omega \times (0,T);L^{p'}(\mathbb {T}^N))} \mathbb {E}\int _{\mathbb {T}^N\times [0,T]\times \mathbb {R}} |a'(\xi )| d|q|(x,s,\xi ), \end{aligned}$$

where

$$\begin{aligned} \kappa =\sup _{s\in [0,T],\; T>0}\int _{s+\varepsilon }^T(\gamma (t-s))^{-1+\mu } e^{-\delta (t-s)}dt \le \gamma ^{-1+\mu }\delta ^{-\mu }\int _{\mathbb {R}^+} \sigma ^{\mu -1 }e^{-\sigma }d\sigma , \end{aligned}$$

with \(\mu =\mu _{N,\alpha ,\lambda ,p}\). It follows that for all \(\varepsilon >0\), \(Q_\varepsilon \in L^1(\Omega \times (0,T);W^{\lambda ,p}(\mathbb {T}^N))\) with

$$\begin{aligned}&\mathbb {E}\Vert Q_\varepsilon \Vert _{L^1((0,T);W^{\lambda ,p}(\mathbb {T}^N))}\\&\quad \le C_{N,\alpha ,\lambda ,p}\gamma ^{-2} \left( \frac{\gamma }{\delta }\right) ^{\mu _{N,\alpha ,\lambda ,p}} \mathbb {E}\int _{\mathbb {T}^N\times [0,T]\times \mathbb {R}} |a'(\xi )| d|q|(x,s,\xi ), \end{aligned}$$

for some constant \(C_{N,\alpha ,\lambda ,p}\). We let \(\varepsilon \rightarrow 0\) and use Lemma 11 to obtain eventually the following estimate on \(Q\):

$$\begin{aligned} \begin{aligned}&\mathbb {E}\Vert Q\Vert _{L^1((0,T);W^{\lambda ,p}(\mathbb {T}^N))}\\&\quad \le C_{N,\alpha ,\lambda ,p}\gamma ^{-2}\left( \frac{\gamma }{\delta }\right) ^{\mu _{N,\alpha ,\lambda ,p}}\left( D_0 \mathbb {E}\Vert a'(u)\Vert _{L^1(\mathbb {T}^N \times (0,T))} + \mathbb {E}\int _{\mathbb {T}^N} \Theta (u_0) dx\right) , \end{aligned} \end{aligned}$$
(33)

where \(\Theta (s)=\int _0^s\int _0^\sigma |a'(r)|drd\sigma \), provided \(\mu _{N,\alpha ,\lambda ,p}>0\).

3.7 Conclusion and proof of the existence part in Theorem 1

Under the growth hypothesis (10) (at most linear growth of \(a'\)), the bound (33) gives

$$\begin{aligned} \begin{aligned}&\mathbb {E}\Vert Q\Vert _{L^1((0,T);W^{\lambda ,p}(\mathbb {T}^N))}\\&\quad \le C_{N,\alpha ,\lambda ,p,D_0}\gamma ^{-2}\left( \frac{\gamma }{\delta }\right) ^{\mu _{N,\alpha ,\lambda ,p}}\left( 1+\mathbb {E}\Vert u\Vert _{L^1(\mathbb {T}^N \times (0,T))} + \mathbb {E}\Vert u_0\Vert _{L^3(\mathbb {T}^N)}^3\right) . \end{aligned} \end{aligned}$$

We choose \(\gamma ,\, \delta >0\) such that:

$$\begin{aligned} C_{N,\alpha ,\lambda ,p,D_0}\gamma ^{-2} \left( \frac{\gamma }{\delta }\right) ^{\mu _{N,\alpha ,\lambda ,p}}\le \frac{1}{4} \end{aligned}$$

and obtain

$$\begin{aligned}&\mathbb {E}\Vert Q\Vert _{L^1(0,T;W^{\lambda ,p}(\mathbb {T}^N))}\nonumber \\&\quad \le \frac{1}{4}\mathbb {E}\Vert u\Vert _{L^1(0,T;L^1(\mathbb {T}^N))} + C(N,\alpha , \lambda ,p,\gamma ,\delta ,D_0)(\mathbb {E}\Vert u_0\Vert _{L^3(\mathbb {T}^N)}^3+1). \end{aligned}$$
(34)

Consider now the estimates (27), (28), (30) and assume that \(H^{\left( \frac{1}{2}-\alpha \right) b}\) and \(H^\sigma \) are both embedded in a given Sobolev space \(W^{s,q}(\mathbb {T}^N)\) with \(s> 0\) (the exponents \(s\), \(q\) will be determined later). Then (27), (28), (30) give

$$\begin{aligned}&\mathbb {E}\left( \Vert u^0+u^\flat +P\Vert _{L^2(0,T;W^{s,q}(\mathbb {T}^N))}^2\right) \nonumber \\&\quad \le C(\delta ,\gamma ,D_0)\left( 1+\mathbb {E}\Vert u\Vert _{L^1(\mathbb {T}^N\times (0,T))}+ \mathbb {E}\Vert u_0\Vert _{L^1(\mathbb {T}^N)}+T\right) . \end{aligned}$$

By the Cauchy-Schwarz inequality and the Young inequality, we obtain

$$\begin{aligned}&\mathbb {E}\left( \Vert u^0+u^\flat +P\Vert _{L^1(0,T;W^{s,q}(\mathbb {T}^N))}\right) \nonumber \\&\quad \le C(\delta ,\gamma ,D_0)\left( 1+\mathbb {E}\Vert u_0\Vert _{L^1(\mathbb {T}^N)}+T\right) + \frac{1}{4}\mathbb {E}\Vert u\Vert _{L^1(0,T;L^1(\mathbb {T}^N))}. \end{aligned}$$
(35)

Assuming also that \(W^{\lambda ,p}(\mathbb {T}^N)\) is embedded in \(W^{s,q}(\mathbb {T}^N)\), we deduce from (20), (34), (35) the estimate

$$\begin{aligned}&\mathbb {E}\Vert u\Vert _{L^1(0,T;W^{s,q}(\mathbb {T}^N))} \\&\quad \le C(N,\alpha ,\lambda ,p,\gamma ,\delta ,D_0) (\mathbb {E}\Vert u_0\Vert _{L^3 (\mathbb {T}^N)}^3+1+T)+\frac{1}{2}\mathbb {E}\Vert u\Vert _{L^1(0,T;L^1(\mathbb {T}^N))}, \end{aligned}$$

and thus

$$\begin{aligned} \mathbb {E}\Vert u\Vert _{L^1(0,T;W^{s,q}(\mathbb {T}^N))} \le C(N,\alpha ,\lambda ,p,\gamma , \delta ,D_0)(\mathbb {E}\Vert u_0\Vert _{L^3(\mathbb {T}^N)}^3+1+T). \end{aligned}$$
(36)

Recall that the estimates above are actually obtained for \(u^\eta \), the solution of (18), which converges to the solution of (1) in \(L^\infty (0,T;L^1(\Omega \times \mathbb {T}^N))\). Thanks to the closedness of balls of \(L^1(\Omega \times (0,T);W^{\lambda ,p}(\mathbb {T}^N))\) for the \(L^\infty (0,T;L^1(\Omega \times \mathbb {T}^N))\) topology, these estimates actually hold for the true solution \(u\). Then, by the Krylov-Bogoliubov theorem and the compactness of the embedding of \(W^{s,q}(\mathbb {T}^N)\) in \(L^1(\mathbb {T}^N)\), existence of an invariant measure follows. Moreover, by the Sobolev embedding (37) below, this invariant measure is supported by \(L^r(\mathbb {T}^N)\) for any \(r\) such that \(\frac{s}{N}-\frac{1}{q}\ge -\frac{1}{r}\).

It remains to determine for which \(s,q\) we do have (36). Let us recall that we have the injection

$$\begin{aligned} W^{r,p}(\mathbb {T}^N)\hookrightarrow W^{s,q}(\mathbb {T}^N) \end{aligned}$$
(37)

if \(r\ge s\) and \(\frac{r}{N}-\frac{1}{p}\ge \frac{s}{N}-\frac{1}{q}\), where \(\frac{s}{N}-\frac{1}{q}\) can be considered as the index of regularity of the Sobolev space \(W^{s,q}(\mathbb {T}^N)\). Then (27), (28) and (30) provide us with the indices

$$\begin{aligned} \mathrm {ind}_0=\frac{\alpha +\left( \frac{1}{2}-\alpha \right) b}{N}-\frac{1}{2},\; \mathrm {ind}_\flat =\frac{\left( \frac{1}{2}-\alpha \right) b}{N}-\frac{1}{2},\; \mathrm {ind}_P=\frac{\alpha -}{N}-\frac{1}{2}, \end{aligned}$$

where \(\alpha -\) denotes any positive number \(\sigma <\alpha \). The condition \(\mu _{N,\alpha ,\lambda ,p}>0\) on the parameter \(\mu _{N,\alpha ,\lambda ,p}\) (defined by (32)) reads

$$\begin{aligned} \mathrm {ind}_Q:=\frac{\lambda }{N}-\frac{1}{p}<\mathrm {ind}_Q^+:= \frac{4\alpha }{N}-\frac{N+1}{N}, \end{aligned}$$

where \(\mathrm {ind}_Q\) is associated to (33). Clearly, we have \(\mathrm {ind}_0>\mathrm {ind}_\flat \). We look for a Sobolev space \(W^{s,q}(\mathbb {T}^N)\) of index lower than the minimum of \(\mathrm {ind}_\flat \), \(\mathrm {ind}_P\), \(\mathrm {ind}_Q^+\), i.e.

$$\begin{aligned} s<\frac{N}{q}+\min \left( \left( \frac{1}{2}-\alpha \right) b-\frac{N}{2}, \alpha -\frac{N}{2},4\alpha -(N+1)\right) , \end{aligned}$$
(38)

and satisfying also

$$\begin{aligned} 0<\,&s,\end{aligned}$$
(39)
$$\begin{aligned} s<\,&\min \left( \left( \frac{1}{2}-\alpha \right) b,\alpha \right) . \end{aligned}$$
(40)

The constraint (40) is contained in (38) as soon as \(q\ge 2\). The constraint (39), together with (38) gives (for the maximal admissible value \(\alpha =\frac{1}{2}\)) the condition \(\frac{1}{N}+\frac{1}{q}>1\), which is compatible with \(q\ge 2\) only if \(N=1\). Therefore we treat separately the cases \(N=1\) and \(N>1\).

Case \(N=1\) : We choose \(q=2\) and then optimize the right-hand side of (38). The maximum is \(\frac{b}{2(b+4)}\), reached for \(\alpha =\frac{b+3}{2(b+4)}\). Then we choose \(p=q\) and \(s=\lambda \in \big (0,\frac{b}{2(b+4)}\big )\). We obtain that the invariant measure is supported by \(L^r(\mathbb {T}^1)\) for any \(r<2+\frac{b}{2}\).
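In more detail (a short verification of this optimization, not spelled out in the text): for \(N=1\) and \(q=2\), the right-hand side of (38) equals \(\min \left( \left( \frac{1}{2}-\alpha \right) b,\ \alpha ,\ 4\alpha -\frac{3}{2}\right) \). The first term is decreasing and the third increasing in \(\alpha \), and they balance when

$$\begin{aligned} \left( \frac{1}{2}-\alpha \right) b=4\alpha -\frac{3}{2}\quad \Longleftrightarrow \quad \alpha =\frac{b+3}{2(b+4)},\qquad \text{ with } \text{ common } \text{ value } \left( \frac{1}{2}-\alpha \right) b=\frac{b}{2(b+4)}\le \alpha . \end{aligned}$$

For the support of the invariant measure, the Sobolev embedding (37) gives \(W^{s,2}(\mathbb {T}^1)\hookrightarrow L^r(\mathbb {T}^1)\) as soon as \(s-\frac{1}{2}\ge -\frac{1}{r}\); letting \(s\uparrow \frac{b}{2(b+4)}\), this allows any \(\frac{1}{r}>\frac{1}{2}-\frac{b}{2(b+4)}=\frac{2}{b+4}\), i.e. any \(r<2+\frac{b}{2}\).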

Case \(N>1\) : take \(q\) satisfying \(\frac{1}{N}+\frac{1}{q}=1+\frac{\eta }{N}\), \(\eta >0\) small. We then have to consider the maximum of

$$\begin{aligned} \min \left( \left( \frac{1}{2}-\alpha \right) b,\alpha ,4\alpha -2+\eta \right) , \end{aligned}$$

for \(\alpha \in [0,\frac{1}{2}]\), which is \(\frac{\eta b}{b+4}\), obtained for \(\alpha =\frac{1}{2}-\frac{\eta }{b+4}\). We choose then \(p=q\), \(\lambda =s\in \left( 0,\frac{\eta b}{b+4}\right) \). We obtain that the invariant measure is supported by \(L^r(\mathbb {T}^N)\) for any \(r<\frac{N}{N-1}\). \(\square \)

4 Uniqueness of the invariant measure, ergodicity

We have proved in (36) that there exist \(q>1\) and \(s>0\) such that:

$$\begin{aligned} \mathbb {E}\Vert u\Vert _{L^1(0,T;W^{s,q}(\mathbb {T}^N))} \le \kappa _0(\mathbb {E}\Vert u_0 \Vert _{L^{3}(\mathbb {T}^N)}^{3}+1+T) \end{aligned}$$
(41)

where \(\kappa _0\) depends on \(\lambda ,p,D_0\). Below, in Sect. 4.1, we prove that this estimate implies that any solution enters a ball of some fixed radius in finite time. Then in the case of subquadratic flux \(A\), we show that if the driving noise is small for some time, then the solution becomes very small, cf. Sect. 4.2. This smallness depends on the size of the initial data. These two facts are then shown to imply the second part of Theorem 1 (Sect. 4.3).

4.1 Time to enter a ball in \(L^1(\mathbb {T}^N)\)

For further purposes, we need to consider two solutions \(u^1\) and \(u^2\) starting from \(u^1_0\), \(u^2_0\) deterministic and in \(L^{3}(\mathbb {T}^N)\). It is easy to generalize (41) to two solutions on the interval \([t,t+T]\) for \(t,\, T\ge 0\). This implies:

$$\begin{aligned}&\mathbb {E}\left( \int _t^{t+T}\Vert u^1(s)\Vert _{L^{1}(\mathbb {T}^N)}+\Vert u^2(s)\Vert _{L^{1} (\mathbb {T}^N)}ds\big | \mathcal F_t\right) \nonumber \\&\quad \le \kappa _1(\Vert u^1(t)\Vert _{L^{3}(\mathbb {T}^N)}^{3}+\Vert u^2(t)\Vert _{L^{3} (\mathbb {T}^N)}^{3}+1+T) \end{aligned}$$
(42)

Moreover, we easily prove for \(i=1,\, 2\), thanks to the Itô formula:

$$\begin{aligned}&\Vert u^i(t)\Vert ^3_{L^3(\mathbb {T}^N)}\nonumber \\&\quad \le \Vert u^i_0\Vert ^3_{L^3(\mathbb {T}^N)} + 3\int _0^t ((u^i)^2(s),\Phi dW(s))_{L^2(\mathbb {T}^N)} +3D_0\int _0^t\Vert u^i(s)\Vert _{L^1(\mathbb {T}^N)}ds. \end{aligned}$$
(43)

It follows that

$$\begin{aligned}&\mathbb {E}\left( \int _t^{t+T}\Vert u^1(s)\Vert _{L^{1}(\mathbb {T}^N)}+\Vert u^2(s)\Vert _{L^{1}(\mathbb {T}^N)} ds\big | \mathcal F_t\right) \\&\quad \le \kappa _1\left[ \Vert u^1_0\Vert ^3_{L^3(\mathbb {T}^N)}+\Vert u^2_0\Vert ^3_{L^3(\mathbb {T}^N)} +3 D_0 \int _0^t \Vert u^1(s)\Vert _{L^1(\mathbb {T}^N)}+\Vert u^2(s)\Vert _{L^1(\mathbb {T}^N)}ds\right. \\&\qquad \left. +\, 3\int _0^t ((u^1(s))^2+(u^2(s))^2,\Phi dW(s))_{L^2(\mathbb {T}^N)} +1+T\right] \end{aligned}$$

We now define recursively the sequences of deterministic times \((t_k)_{k\ge 0}\) and \((r_k)_{k\ge 0}\) by

$$\begin{aligned}&\displaystyle t_0=0,\\&\displaystyle t_{k+1}=t_k+r_k, \end{aligned}$$

where \((r_k)_{k\ge 0}\) will be chosen below. We also define the events:

$$\begin{aligned} A_k=\left\{ \inf _{s\in [t_\ell ,t_{\ell +1}]}\left( \Vert u^1(s)\Vert _{L^{1} (\mathbb {T}^N)}+\Vert u^2(s)\Vert _{L^{1}(\mathbb {T}^N)}\right) \ge 2\kappa _1,\; \ell =0,\dots , k-1\right\} . \end{aligned}$$

Then, for all \(k\ge 0\),

$$\begin{aligned}&\mathbb {P}\left( \inf _{s\in [t_k,t_{k+1}]}\left( \Vert u^1(s)\Vert _{L^{1}(\mathbb {T}^N)}+\Vert u^2 (s)\Vert _{L^{1}(\mathbb {T}^N)}\right) \ge 2\kappa _1 \left| \mathcal F_{t_k}\right) \right. \nonumber \\&\quad \left. \le \mathbb {P}\left( \frac{1}{r_k}\int _{t_k}^{t_k+r_k}\Vert u^1(s)\Vert _{L^{1}(\mathbb {T}^N)}+ \Vert u^2(s)\Vert _{L^{1}(\mathbb {T}^N)} ds\ge 2\kappa _1 \right| \mathcal F_{t_k}\right) \nonumber \\&\quad \le \frac{1}{2r_k}\left[ \Vert u^1_0\Vert ^3_{L^3(\mathbb {T}^N)}+\Vert u^2_0\Vert ^3_{L^3(\mathbb {T}^N)} \right. \nonumber \\&\qquad \left. +\,3 D_0 \int _0^{t_k}\Vert u^1(s)\Vert _{L^1(\mathbb {T}^N)}+\Vert u^2(s)\Vert _{L^1(\mathbb {T}^N)}ds +1\right] \nonumber \\&\qquad +\,\frac{1}{2} + \frac{3}{2r_k}\int _0^{t_k} ((u^1(s))^2+(u^2(s))^2,\Phi dW(s))_{L^2(\mathbb {T}^N)}. \end{aligned}$$
(44)

We will choose \(r_k\) so that the following inequality is satisfied for all \(k\ge 0\):

$$\begin{aligned} \frac{1}{2r_k}(\Vert u^1_0\Vert ^3_{L^3(\mathbb {T}^N)}+\Vert u^2_0\Vert ^3_{L^3(\mathbb {T}^N)}+1)\le \frac{1}{8}. \end{aligned}$$
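For instance (one admissible choice among many), this first constraint holds as soon as

$$\begin{aligned} r_k\ge 4\left( \Vert u^1_0\Vert ^3_{L^3(\mathbb {T}^N)}+\Vert u^2_0\Vert ^3_{L^3(\mathbb {T}^N)}+1\right) ; \end{aligned}$$

a second constraint on \(r_k\), growing with \(k\), will be imposed below.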

Let us then multiply (44) by \(\mathbf {1}_{A_k}\) and take the expectation to obtain

$$\begin{aligned} \displaystyle \mathbb {P}\left( A_{k+1}\right)&\le \displaystyle \frac{5}{8}\mathbb {P}\left( A_{k}\right) + \frac{3D_0}{2r_k}\mathbb {E}\left( \int _0^{t_k} \Vert u^1(s)\Vert _{L^1(\mathbb {T}^N)} + \Vert u^2(s)\Vert _{L^1(\mathbb {T}^N)}ds\mathbf {1}_{A_k}\right) \\&\displaystyle +\,\frac{3}{2r_k}\mathbb {E}\left( \int _0^{t_k} ((u^1(s))^2+(u^2(s))^2,\Phi dW(s))_{L^2(\mathbb {T}^N)}\mathbf {1}_{A_k}\right) . \end{aligned}$$

By the Cauchy-Schwarz inequality, the Itô isometry and the bound (2) on the covariance of the noise, we have:

$$\begin{aligned}&\mathbb {E}\left( \int _0^t ((u^1(s))^2+(u^2(s))^2,\Phi dW(s))_{L^2(\mathbb {T}^N)}\mathbf {1}_{A_k}\right) \\&\quad \le (2D_0)^{1/2} \left( \mathbb {E}\int _0^t\Vert u^1(s)\Vert _{L^2(\mathbb {T}^N)}^4+\Vert u^2 (s)\Vert _{L^2(\mathbb {T}^N)}^4ds\right) ^{1/2}\mathbb {P}(A_k)^{1/2}. \end{aligned}$$

By the Itô formula applied to \(\Vert u^i(t)\Vert _{L^2(\mathbb {T}^N)}^2\) and \(\Vert u^i(t)\Vert _{L^2(\mathbb {T}^N)}^4\), \(i=1,2\), we obtain the bounds:

$$\begin{aligned} \mathbb {E}\left( \Vert u^i(t)\Vert _{L^2(\mathbb {T}^N)}^2\right) \le \mathbb {E}\left( \Vert u^i_0\Vert _{L^2(\mathbb {T}^N)}^2\right) +D_0t \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}\left( \Vert u^i(t)\Vert _{L^2(\mathbb {T}^N)}^4\right) \le \mathbb {E}\left( \Vert u^i_0\Vert _{L^2 (\mathbb {T}^N)}^4+6 D_0\int _0^t\Vert u^i(s)\Vert _{L^2(\mathbb {T}^N)}^2ds\right) . \end{aligned}$$
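Inserting the first bound into the second (a short intermediate step; the initial data being deterministic, the expectations of the initial norms are the norms themselves), we get

$$\begin{aligned} \mathbb {E}\left( \Vert u^i(t)\Vert _{L^2(\mathbb {T}^N)}^4\right) \le \Vert u^i_0\Vert _{L^2(\mathbb {T}^N)}^4+6D_0\Vert u^i_0\Vert _{L^2(\mathbb {T}^N)}^2\,t+3D_0^2t^2. \end{aligned}$$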

Thus, integrating in time, we obtain a bound

$$\begin{aligned} \mathbb {E}\int _0^{t_k} \Vert u^1(s)\Vert ^4_{L^2(\mathbb {T}^N)}+\Vert u^2(s)\Vert ^4_{L^2(\mathbb {T}^N)}ds \le 2[\Vert u^1_0\Vert ^4_{L^2(\mathbb {T}^N)}\!+\!\Vert u^2_0\Vert ^4_{L^2(\mathbb {T}^N)}]t_k \!+\! 24 D_0^2t_k^3. \end{aligned}$$

Similarly, we have the estimate

$$\begin{aligned} \mathbb {E}\left| \int _0^{t_k} \Vert u^1(s)\Vert _{L^1(\mathbb {T}^N)}+\Vert u^2(s)\Vert _{L^1(\mathbb {T}^N)}ds\right| ^2 \le t_k^2[\Vert u^1_0\Vert ^2_{L^2(\mathbb {T}^N)}\!+\!\Vert u^2_0\Vert ^2_{L^2(\mathbb {T}^N)}]\!+\!D_0 t_k^3. \end{aligned}$$
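These moment bounds can be combined with the previous inequality by absorbing, through Young's inequality, each term of the form \(b\,\mathbb {P}(A_k)^{1/2}\) produced by the Cauchy-Schwarz step; for instance,

$$\begin{aligned} b\,\mathbb {P}(A_k)^{1/2}\le \frac{1}{16}\,\mathbb {P}(A_k)+4\,b^2, \end{aligned}$$

so that the two such terms raise the factor \(\frac{5}{8}\) to \(\frac{5}{8}+\frac{2}{16}=\frac{3}{4}\).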

It follows that

$$\begin{aligned} \mathbb {P}\left( A_{k+1}\right) \le \frac{3}{4}\mathbb {P}\left( A_{k}\right) +\frac{C}{r_k^2} \left( [\Vert u^1_0\Vert ^4_{L^2(\mathbb {T}^N)}+\Vert u^2_0\Vert ^4_{L^2(\mathbb {T}^N)}]t_k+t_k^4+1\right) , \end{aligned}$$

where the constant \(C\) can be written explicitly in terms of \(D_0\). We choose \(r_k\) so that:

$$\begin{aligned} \frac{C}{r_k^2}\left( [\Vert u^1_0\Vert ^4_{L^2(\mathbb {T}^N)}+\Vert u^2_0\Vert ^4_{L^2(\mathbb {T}^N)}]t_k + t_k^4 +1 \right) \le \left( \frac{3}{4}\right) ^k, \end{aligned}$$

and obtain: \(\mathbb {P}\left( A_{k+1}\right) \le \frac{3}{4}\mathbb {P}\left( A_{k}\right) + \left( \frac{3}{4}\right) ^k\), which therefore gives

$$\begin{aligned} \mathbb {P}\left( A_{k}\right) \le k\left( \frac{3}{4}\right) ^{k-1}. \end{aligned}$$
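The last bound follows from the recursion by induction on \(k\) (the case \(k=1\) being trivial since \(\mathbb {P}(A_1)\le 1\)): if \(\mathbb {P}(A_k)\le k\left( \frac{3}{4}\right) ^{k-1}\), then

$$\begin{aligned} \mathbb {P}\left( A_{k+1}\right) \le \frac{3}{4}\,k\left( \frac{3}{4}\right) ^{k-1}+\left( \frac{3}{4}\right) ^{k}=(k+1)\left( \frac{3}{4}\right) ^{k}. \end{aligned}$$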

By the Borel–Cantelli lemma, we deduce that

$$\begin{aligned} k_0=\inf \{k\ge 0\; |\; \inf _{s\in [t_k,t_{k+1}]}\left( \Vert u^1(s)\Vert _{L^{1} (\mathbb {T}^N)}+\Vert u^2(s)\Vert _{L^{1}(\mathbb {T}^N)}\right) \le 2\kappa _1\} \end{aligned}$$

is almost surely finite. We then define the stopping time

$$\begin{aligned} \tau ^{u^1_0,u^2_0}=\inf \{t\ge 0\;|\;\Vert u^1(t)\Vert _{L^{1}(\mathbb {T}^N)}+ \Vert u^2(t)\Vert _{L^{1}(\mathbb {T}^N)} \le 2\kappa _1\}. \end{aligned}$$

Clearly \(\tau ^{u^1_0,u^2_0}\le t_{k_0+1}\) so that \(\tau ^{u^1_0,u^2_0}<\infty \) almost surely. It follows that for \(T>0\) the following stopping times are also almost surely finite:

$$\begin{aligned} \tau _\ell =\inf \{t\ge \tau _{\ell -1}+T\;|\; \Vert u^1(t)\Vert _{L^{1}(\mathbb {T}^N)}+ \Vert u^2(t)\Vert _{L^{1}(\mathbb {T}^N)} \le 2\kappa _1 \},\; \tau _0=0. \end{aligned}$$

4.2 The solution is small if the noise is small

Proposition 12

Assume that \(a\) satisfies (11). Then, for any \(\varepsilon >0\), there exist \(T>0\) and \(\eta >0\) such that:

$$\begin{aligned} \frac{1}{T}\int _0^T \Vert u(s)\Vert _{L^1(\mathbb {T}^N)}ds \le \frac{\varepsilon }{2} \end{aligned}$$

if

$$\begin{aligned} \Vert u(0)\Vert _{L^1(\mathbb {T}^N)}\le 2\kappa _1 \text{ and } \sup _{t\in [0,T]} \Vert W(t)\Vert _{W^{1,\infty }(\mathbb {T}^N)}\le \eta . \end{aligned}$$

Proof

We first take \(\tilde{u}_0\in L^2(\mathbb {T}^N)\) such that

$$\begin{aligned} \Vert u_0-\tilde{u}_0\Vert _{L^1(\mathbb {T}^N)}\le \frac{\varepsilon }{8}, \; \Vert \tilde{u}_0\Vert _{L^2(\mathbb {T}^N)}\le C \kappa _1 \varepsilon ^{-N/2}. \end{aligned}$$
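Such a \(\tilde{u}_0\) can be produced by mollification, as explained just below; the \(L^2\) bound then follows from Young's convolution inequality (a minimal sketch, assuming a standard mollifier normalized so that \(\Vert \rho _\varepsilon \Vert _{L^2(\mathbb {T}^N)}\le C\varepsilon ^{-N/2}\)):

$$\begin{aligned} \Vert \rho _\varepsilon *u_0\Vert _{L^2(\mathbb {T}^N)}\le \Vert \rho _\varepsilon \Vert _{L^2(\mathbb {T}^N)}\Vert u_0\Vert _{L^1(\mathbb {T}^N)}\le C\varepsilon ^{-N/2}\cdot 2\kappa _1. \end{aligned}$$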

This can indeed be achieved by taking \(\tilde{u}_0=\rho _\varepsilon * u_0\) for such a regularizing kernel \((\rho _\varepsilon )_{\varepsilon >0}\). By (16), we know that

$$\begin{aligned} \Vert u(t)-\tilde{u}(t)\Vert _{L^1(\mathbb {T}^N)}\le \frac{\varepsilon }{8}, \; t\ge 0, \end{aligned}$$

where \(\tilde{u}\) is the solution starting from \(\tilde{u}_0\). We set \(v=\tilde{u}-W\) so that \(v\) is a kinetic solution to

$$\begin{aligned} \partial _t v+\mathrm{div}A (v+W)=0 \end{aligned}$$

and \(g=\mathbf {1}_{v(x,t)>\xi }-\mathbf {1}_{0>\xi }\) satisfies:

$$\begin{aligned} \partial _t g+a(\xi )\cdot \nabla g= \partial _\xi \mathrm {n} +(a(\xi )-a(\xi +W))\cdot \nabla g -a(\xi +W)\cdot \nabla W \delta _{v=\xi }. \end{aligned}$$
(45)

The kinetic measure \(\mathrm {n}\) is easily related to the kinetic measure in the equation satisfied by \(f=\mathbf {1}_{u>\xi }-\mathbf {1}_{0>\xi }\). Also, since both \(\tilde{u}\) and \(W\) have zero spatial average, so does \(v\).

We again use \(B_\gamma \) and \(A_{\gamma ,\delta }\), introduced in Sect. 3.1. We now take \(\alpha =\frac{1}{2}\). Then (45) can be rewritten as:

$$\begin{aligned} \partial _t g+A_{\gamma ,\delta } g&= (B_\gamma +\delta \text{ Id })g+ \partial _\xi \mathrm {n} +(a(\xi )-a(\xi +W))\nonumber \\&\cdot \nabla g -a(\xi +W)\cdot \nabla W \delta _{v=\xi }. \end{aligned}$$
(46)

By solving (46) and integrating in \(\xi \), we are led to the following decomposition of \(v\):

$$\begin{aligned} v=v^0+v^\flat +v^\#+P^W+N^W, \end{aligned}$$

where

$$\begin{aligned} v^0(t)&:=\int _\mathbb {R}S_{A_{\gamma ,\delta }}(t)g(0,\xi )d\xi ,\\ v^\flat (t)&:=\int _\mathbb {R}\int _0^t S_{A_{\gamma ,\delta }}(s)\, (B_\gamma +\delta \text{ Id })g(t-s,\xi )ds d\xi ,\\ v^\#(t)&:=\int _\mathbb {R}\int _0^t S_{A_{\gamma ,\delta }}(s)(a(\xi )-a(\xi + W))\cdot \nabla g (t-s,\xi )\, ds d\xi ,\\ \langle P^W(t),\varphi \rangle&:=-\int _{\mathbb {T}^N\times [0,t]} \left( a(v+W)\cdot \nabla W \right) (x,s)\, \left( S_{A_\gamma , \delta }^*(t-s)\varphi \right) (x,v(x,s)) \,ds dx,\\ \langle N^W(t),\varphi \rangle&:=\int _{\mathbb {T}^N\times [0,t]\times \mathbb {R}} (t-s)a'(\xi )\cdot \nabla S_{A_{\gamma },\delta }^*(t-s)\varphi d\mathrm {n}(x,s,\xi ), \end{aligned}$$

where \(\langle \cdot ,\cdot \rangle \) is the duality product between the space of finite Borel measures on \(\mathbb {T}^N\) and \(C(\mathbb {T}^N)\).

Reproducing the arguments of Sects. 3.2 and 3.3, we obtain the estimates:

$$\begin{aligned} \int _0^T\Vert v^0(t)\Vert _{H^{\frac{1}{2}}(\mathbb {T}^N)}^2 dt \le c\,\gamma ^{b-1} \left\| u_0\right\| _{L^1(\mathbb {T}^N)} \end{aligned}$$
(47)

and

$$\begin{aligned} \int _0^T\Vert v^\flat (t)\Vert _{L^2(\mathbb {T}^N)}^2 dt \le c\,\gamma ^{b} \int _0^T\left\| v(t)\right\| _{L^1(\mathbb {T}^N)}dt. \end{aligned}$$
(48)

We estimate \(v^\#\) similarly. We take the Fourier transform in \(x\) (denoted by \(\mathcal {F}\) here) and integrate in time. Note that

$$\begin{aligned}&{\mathcal F}\left[ (a(\cdot )-a(\cdot +W))\cdot \nabla g\right] (n,\xi ,t)\\&\quad =2\pi i {\mathcal F}\left[ (a(\cdot )-a(\cdot +W))g\right] (n,\xi ,t) \cdot n+{\mathcal F}\left[ (a'(\cdot +W)\cdot \nabla W)g\right] (n,\xi ,t) \end{aligned}$$

and thus

$$\begin{aligned}&\int _0^T |\hat{v}^\#(n,t)|^2dt \\&\quad \le c \int _0^T \left| \int _\mathbb {R}\int _0^t e^{-i(a(\xi )\cdot n +\gamma |n|+\delta )s} {\mathcal F}\left[ (a(\cdot )-a(\cdot \!+\!W)) g\right] (n,\xi ,t\!-\!s)\cdot n \, ds d\xi \right| ^2 dt\\&\qquad +c \int _0^T \left| \int _\mathbb {R}\int _0^t e^{-i(a(\xi )\cdot n +\gamma |n|+\delta )s} {\mathcal F}\left[ (a'(\cdot \!+\!W)\cdot \nabla W)g \right] (n,\xi ,t\!-\!s) \, ds d\xi \right| ^2 dt. \end{aligned}$$

Denote by \(A_1\) and \(A_2\) the two terms on the right-hand side. \(A_1\) is of the form

$$\begin{aligned} A_1=\int _0^T\left| \int _0^t e^{-(\gamma |n|+\delta ) s} n\cdot {\mathcal G}h(n,ns,t-s)ds\right| ^2dt, \end{aligned}$$

where

$$\begin{aligned} h(n,\xi ,t)=\mathcal {F}\left[ \left( a(\cdot )-a(\cdot +W)\right) g\right] (n,\xi ,t) \end{aligned}$$

and the integral operator \(\mathcal {G}\) has been defined in (25). We proceed as in Sects. 3.2 and 3.3 (in particular, we use Lemma 2.4 of [6] for the estimate of the oscillatory integral \(\mathcal {G}h\)) to obtain

$$\begin{aligned} A_1\le c\gamma ^{-2+b}\int _0^T\left\| h(n,\xi ,t)\right\| ^2_{L^2 (\xi )}dt, \end{aligned}$$

and, similarly,

$$\begin{aligned} A_2\le \frac{c\gamma ^{-2+b}}{|n|}\int _0^T\Vert k(n,\xi ,t)\Vert _{L^2_\xi }^2dt \end{aligned}$$

with \(k(n,\xi ,t)={\mathcal F}\left[ (a'(\cdot +W)\cdot \nabla W)g\right] (n,\xi ,t)\). Summing over \(n\ne 0\), we therefore get

$$\begin{aligned}&\int _0^T \Vert v^{\#,1}(t)\Vert _{L^2(\mathbb {T}^N)}^2dt\\&\quad \le C\gamma ^{-2+b} \int _0^T\Vert \left( a(\cdot )-a(\cdot +W)\right) g\Vert _{L^2_{x,\xi }}^2+\Vert (a'(\cdot +W)\cdot \nabla W)g\Vert _{L^2_{x,\xi }}^2dt, \end{aligned}$$

where \(v^{\#,1}=v^\#-\int _{\mathbb {T}^N}v^\#dx\). We then use the hypothesis (11), i.e. the sublinearity of \(a\), and the “smallness” of the noise (in fact, no smallness assumption on \(\eta \) has been made up to this point) to write

$$\begin{aligned} |a(\xi )-a(\xi +W)|\le c\,|W|\le c\,\eta ,\quad |a'(\xi +W)\cdot \nabla W |\le c\,\eta \end{aligned}$$

and deduce

$$\begin{aligned} \int _0^T \Vert v^{\#,1}(t)\Vert _{L^2(\mathbb {T}^N)}^2dt\le C \eta ^2 \gamma ^{-2+b}\int _0^T \Vert v\Vert _{L^{1}(\mathbb {T}^N)}dt \end{aligned}$$
(49)

since \(\Vert g\Vert _{L^2_{x,\xi }}^2=\Vert v\Vert _{L^{1}(\mathbb {T}^N)}\) (indeed, \(|g|^2=\mathbf {1}_{0\wedge v<\xi <0\vee v}\), whose integral in \(\xi \) is \(|v|\)). The total mass \(\int _{\mathbb {T}^N}v^\#dx\) will be estimated later, once \(P^W\) and \(N^W\) have been estimated. Regarding \(P^W\), we have, by Hypothesis (11),

$$\begin{aligned} \langle P^W(t),\varphi \rangle&\le C_{}\int _0^t(1+\Vert v(s)\Vert _{L^1(\mathbb {T}^N)}+ \Vert W(s)\Vert _{L^1(\mathbb {T}^N)})\nonumber \\&\times \, \Vert \nabla W(s)\Vert _{L^\infty (\mathbb {T}^N)} \Vert S_{A_\gamma ,\delta }^*(t-s)\varphi \Vert _{L^\infty (\mathbb {T}^N)} \,ds. \end{aligned}$$

Therefore \(P^W\) is more regular than a measure and Lemma 10 implies, for \(\eta \le 1\),

$$\begin{aligned} \Vert P^W(t)\Vert _{L^1(\mathbb {T}^N)}\le C\eta \int _0^t(1+\Vert v(s)\Vert _{L^1 (\mathbb {T}^N)})e^{-\delta (t-s)}ds. \end{aligned}$$
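Integrating this bound over \(t\in (0,T)\) and exchanging the order of integration (a short intermediate step, with \(f(s):=1+\Vert v(s)\Vert _{L^1(\mathbb {T}^N)}\)), we use

$$\begin{aligned} \int _0^T\!\!\int _0^t f(s)e^{-\delta (t-s)}\,ds\,dt=\int _0^T f(s)\left( \int _s^T e^{-\delta (t-s)}dt\right) ds\le \frac{1}{\delta }\int _0^T f(s)\,ds. \end{aligned}$$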

In particular, we have

$$\begin{aligned} \int _0^T \Vert P^W(t)\Vert _{L^1(\mathbb {T}^N)}dt \le C\frac{\eta }{\delta } \int _0^T (1+\Vert v(s)\Vert _{L^1(\mathbb {T}^N)}) ds. \end{aligned}$$
(50)

We finally estimate the last term containing the measure \(\mathrm n\). We write, using again Lemma 10 and the hypothesis (11),

$$\begin{aligned} \langle N^W(t),\varphi \rangle&=\int _{\mathbb {T}^N\times [0,t]\times \mathbb {R}} (t-s)a'(\xi )\cdot \nabla S_{A_{\gamma },\delta }^*(t-s)\varphi d\mathrm {n}(x,s,\xi )\\ \displaystyle&\le c\,\Vert \varphi \Vert _{L^\infty (\mathbb {T}^N)} \int _{\mathbb {T}^N\times [0,t]\times \mathbb {R}} e^{-\delta (t-s)} d|\mathrm {n}|(x,s,\xi ). \end{aligned}$$

Again, we may prove that \(N^W\) is more regular than a measure and deduce:

$$\begin{aligned} \int _0^T \Vert N^W(t)\Vert _{L^1(\mathbb {T}^N)} dt\le \frac{C}{\delta }\int _{\mathbb {T}^N\times [0,T]\times \mathbb {R}} d|\mathrm {n}|(x,s,\xi ). \end{aligned}$$
(51)

To complete our estimate, it remains to evaluate the mass of the measure \(\mathrm {n}\). We proceed as in Lemma 11 and test equation (46) against \(\xi \) to obtain:

$$\begin{aligned}&\frac{1}{2}\Vert v(t)\Vert ^2_{L^2(\mathbb {T}^N)} + |\mathrm n| ([0,t]\times \mathbb {T}^N\times \mathbb {R})\\&\quad \le \frac{1}{2}\Vert \tilde{u}_0\Vert ^2_{L^2(\mathbb {T}^N)}+\left| \,\int _{[0,t]\times \mathbb {T}^N \times \mathbb {R}} \xi (a(\xi )-a(\xi +W))\cdot \nabla g \, dx \, d\xi \, ds\right| \\&\qquad +\,\left| \,\int _{[0,t]\times \mathbb {T}^N} v(x,s) a(v(x,s) +W(x,s))\cdot \nabla W(x,s)dxds\right| . \end{aligned}$$

We integrate by parts in the second term. By (11), we obtain after straightforward manipulations:

$$\begin{aligned}&\frac{1}{2}\Vert v(t)\Vert ^2_{L^2(\mathbb {T}^N)} + |\mathrm {n}| ([0,t]\times \mathbb {T}^N\times \mathbb {R}) \le \frac{1}{2} \Vert \tilde{u}_0\Vert ^2_{L^2(\mathbb {T}^N)}\\&\quad + \eta C\int _0^t \left( \Vert v(s)\Vert ^2_{L^2(\mathbb {T}^N)}+1\right) ds. \end{aligned}$$

The Gronwall lemma then gives \(\Vert v(t)\Vert ^2_{L^2(\mathbb {T}^N)}\le e^{\eta C t}(\Vert \tilde{u}_0\Vert ^2_{L^2(\mathbb {T}^N)} +C)\), and therefore

$$\begin{aligned} |\mathrm {n}| ([0,T]\times \mathbb {T}^N\times \mathbb {R}) \le e^{\eta C T}(\Vert \tilde{u}_0\Vert ^2_{L^2(\mathbb {T}^N)} +C). \end{aligned}$$

Since \(\Vert \tilde{u}_0\Vert _{L^2(\mathbb {T}^N)}\le C\kappa _1\varepsilon ^{-N/2}\), it follows by (51) that

$$\begin{aligned} \int _0^T \Vert N^W(t)\Vert _{L^1(\mathbb {T}^N)} dt\le \frac{C}{\delta }e^{\eta C T}(\kappa _1^2\varepsilon ^{-N} +1). \end{aligned}$$
(52)

Finally, since \(v\), \(v^0\) and \(v^\flat \) have zero spatial averages, we have

$$\begin{aligned} \int _{\mathbb {T}^N}\left( v^\# +P^W+N^W\right) dx=0. \end{aligned}$$

Consequently we can estimate the total mass of \(v^\#\) by

$$\begin{aligned} \left| \int _{\mathbb {T}^N}v^\# dx\right| \le \Vert P^W\Vert _{L^1(\mathbb {T}^N)}+ \Vert N^W\Vert _{L^1(\mathbb {T}^N)}. \end{aligned}$$

We may now gather all the estimates obtained, i.e. (47), (48), (49), (50), (52), and deduce:

$$\begin{aligned}&\int _0^T\Vert v^0(t)+v^\flat (t)+v^{\#,1}(t)\Vert _{L^1(\mathbb {T}^N)}dt \\&\quad \le T^{1/2}\left( \int _0^T \Vert v^0(t)\Vert _{L^2(\mathbb {T}^N)}^2 +\Vert v^\flat (t)\Vert _{L^2(\mathbb {T}^N)}^2+\Vert v^{\#,1}(t)\Vert _{L^2(\mathbb {T}^N)}^2 dt \right) ^{1/2}\\&\quad \le CT^{1/2}\left( \gamma ^{b-1}\kappa _1 +(\gamma ^b+\eta ^2 \gamma ^{-2+b})\int _0^T\Vert v(t)\Vert _{L^1(\mathbb {T}^N)}dt\right) ^{1/2}\\&\quad \le \frac{1}{4}\int _0^T\Vert v(t)\Vert _{L^1(\mathbb {T}^N)}dt + CT (\gamma ^b+ \eta ^2\gamma ^{-2+b})+C(T\gamma ^{b-1}\kappa _1)^{1/2} \end{aligned}$$

and

$$\begin{aligned}&\int _0^T\left\| \int _{\mathbb {T}^N}v^\#(x)dx+P^W(t)+N^W(t)\right\| _{L^1(\mathbb {T}^N)}dt\\&\quad \le C\left( \frac{\eta }{\delta }\int _0^T(1+\Vert v(t)\Vert _{L^1(\mathbb {T}^N)})dt+ \frac{1}{\delta }e^{\eta C T}(\kappa _1^2\varepsilon ^{-N} +1)\right) . \end{aligned}$$

Let \(r>0\) be a ratio that will be fixed later. We choose, in this order, \(\gamma \), \(T\), \(\delta \) such that

$$\begin{aligned} C\gamma ^b\le r\varepsilon ,\quad C\left( \frac{\gamma ^{b-1}\kappa _1}{T} \right) ^{1/2} \le r\varepsilon ,\quad \frac{C}{\delta }e^{C T}(\kappa _1^2\varepsilon ^{-N} +1)\le r\varepsilon , \end{aligned}$$

and then \(\eta \) small enough so that

$$\begin{aligned} C\frac{\eta }{\delta }\le \min \left( r\varepsilon ,\frac{1}{4}\right) ,\quad C\eta ^2\gamma ^{-2+b}\le r\varepsilon . \end{aligned}$$
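To see how these conditions combine, note that \(v=(v^0+v^\flat +v^{\#,1})+\left( \int _{\mathbb {T}^N}v^\#dx+P^W+N^W\right) \); adding the two estimates above and using \(C\eta /\delta \le \frac{1}{4}\) to absorb the corresponding term on the right-hand side, we get (a sketch, assuming also \(\eta \le 1\) and, as we may, \(T\ge 1\))

$$\begin{aligned} \int _0^T\Vert v(t)\Vert _{L^1(\mathbb {T}^N)}dt\le \frac{1}{2}\int _0^T\Vert v(t)\Vert _{L^1(\mathbb {T}^N)}dt+5\,r\varepsilon \,T, \quad \text{ i.e. }\quad \frac{1}{T}\int _0^T\Vert v(t)\Vert _{L^1(\mathbb {T}^N)}dt\le 10\,r\varepsilon . \end{aligned}$$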

With those choices, and for \(r=\frac{1}{40}\), we obtain, if furthermore \(\eta \le \frac{\varepsilon }{8}\),

$$\begin{aligned} \frac{1}{T}\int _0^T\Vert v(t)\Vert _{L^1(\mathbb {T}^N)}dt\le \frac{\varepsilon }{4},\quad \frac{1}{T}\int _0^T\Vert \tilde{u}(t)\Vert _{L^1(\mathbb {T}^N)}dt\le \frac{3 \varepsilon }{8} \end{aligned}$$

and

$$\begin{aligned} \frac{1}{T}\int _0^T\Vert u(t)\Vert _{L^1(\mathbb {T}^N)}dt\le \frac{\varepsilon }{2}<\varepsilon . \end{aligned}$$

This is the desired conclusion. \(\square \)

4.3 Conclusion

We assume that (11) holds. Let \(u_0^1,\; u_0^2\) be in \(L^1(\mathbb {T}^N)\). Let \(\varepsilon >0\); we take \(\tilde{u}_0^1,\; \tilde{u}_0^2\) in \(L^3(\mathbb {T}^N)\) such that \(\Vert u_0^i-\tilde{u}_0^i\Vert _{L^1(\mathbb {T}^N)}\le \frac{\varepsilon }{4}\), \(i=1,2\). We denote by \(u^1,\; u^2,\;\tilde{u}^1,\;\tilde{u}^2\) the corresponding solutions. We associate to \(\tilde{u}_0^1,\; \tilde{u}_0^2\) the sequence of stopping times \((\tau _\ell )\) constructed in Sect. 4.1. We choose \(T\) and \(\eta \) as given by Proposition 12 and obtain, thanks to the \(L^1\)-contraction (16),

$$\begin{aligned}&\mathbb {P}\left( \frac{1}{T}\int _{\tau _\ell }^{\tau _\ell +T} \Vert u^1(s)-u^2(s)\Vert _{L^1 (\mathbb {T}^N)}ds \le \varepsilon \; \left| \; \mathcal F_{\tau _\ell }\right. \right) \\&\quad \left. \ge \mathbb {P}\left( \frac{1}{T}\int _{\tau _\ell }^{\tau _\ell +T} \Vert \tilde{u}^1(s)- \tilde{u}^2(s)\Vert _{L^1(\mathbb {T}^N)}ds \ \le \frac{\varepsilon }{2}\; \right| \; \mathcal F_{\tau _\ell }\right) \\&\quad \ge \mathbb {P}\left( \sup _{[\tau _\ell ,\tau _\ell +T]}\Vert W(t)-W(\tau _\ell ) \Vert _{W^{1,\infty }(\mathbb {T}^N)}\le \eta \; \big |\; \mathcal F_{\tau _\ell }\right) . \end{aligned}$$

By the strong Markov property, the right-hand side is non-random and independent of \(\ell \). It is clearly positive; we denote it by \(\lambda \). For \(\ell _0,\, k\in \mathbb {N}\) we then have

$$\begin{aligned} \mathbb {P}\left( \frac{1}{T}\int _{\tau _\ell }^{\tau _\ell +T} \Vert u^1(s)-u^2(s) \Vert _{L^1(\mathbb {T}^N)}ds \ge \varepsilon ,\; \text{ for } \ell =\ell _0, \dots ,\ell _0+k\right) \le (1-\lambda )^k, \end{aligned}$$

and therefore, letting \(k\rightarrow \infty \) and taking the union over \(\ell _0\in \mathbb {N}\),

$$\begin{aligned}&\mathbb {P}\left( \lim _{\ell \rightarrow \infty }\frac{1}{T}\int _{\tau _\ell }^{\tau _\ell +T} \Vert u^1(s)-u^2(s)\Vert _{L^1(\mathbb {T}^N)}ds \ge \varepsilon \right) \nonumber \\&\quad =\mathbb {P}\left( \exists \ell _0\in \mathbb {N}\,;\frac{1}{T}\int _{\tau _\ell }^{\tau _\ell +T} \Vert u^1(s)-u^2(s)\Vert _{L^1(\mathbb {T}^N)}ds \ge \varepsilon ,\; \text{ for } \ell \ge \ell _0\right) \nonumber \\&\quad =0. \end{aligned}$$
(53)

Note that the limit exists since, by (16), \(t\mapsto \Vert u^1(t)-u^2(t)\Vert _{L^1(\mathbb {T}^N)}\) is a.s. non-increasing. This latter property is used again to deduce from (53) that

$$\begin{aligned} \mathbb {P}\left( \lim _{t \rightarrow \infty } \Vert u^1(t)-u^2(t)\Vert _{L^1(\mathbb {T}^N)} \ge \varepsilon \right) =0. \end{aligned}$$

We have thus proved:

$$\begin{aligned} \lim _{t \rightarrow \infty } \Vert u^1(t)-u^2(t)\Vert _{L^1(\mathbb {T}^N)}=0,\; a.s. \end{aligned}$$

This easily implies the second statement (uniqueness part) of Theorem 1.