1 Introduction

In this work, we are concerned with the existence and uniqueness of a strong solution for a stochastic incompressible third grade fluid model in a two-dimensional (2D) or three-dimensional (3D) bounded domain with smooth boundary. More precisely, the evolution equation is given by

$$\begin{aligned}{} & {} dv+\Bigl (-\nu \Delta y+(y\cdot \nabla )v+\sum _{j}v^j\nabla y^j-(\alpha _1+\alpha _2)\text {div}(A^2) -\beta \text {div}(|A|^2A)\Bigr )dt\nonumber \\{} & {} \qquad =( -\nabla {\textbf{P}} +U)dt+ G(\cdot ,y)d{\mathcal {W}}, \end{aligned}$$
(1.1)

where \(v:=v(y)=y-\alpha _1\Delta y, A:=A(y)= \nabla y+\nabla y^T,\) and \({\mathcal {W}}\) is a cylindrical Wiener process with values in a Hilbert space \(H_0\). The constant \(\nu \) represents the fluid viscosity, \(\alpha _1,\alpha _2\), \(\beta \) are the material moduli, and \({\textbf{P}}\) denotes the pressure.

Recently, special attention has been devoted to the study of non-Newtonian viscoelastic fluids of differential type, which include natural biological fluids and geological flows, and which arise in polymer processing, coating, colloidal suspensions and emulsions, ink-jet printing, etc. (see e.g. [18, 25]). It is worth mentioning that several simulation studies based on third grade fluid models have been performed in order to understand and explain the characteristics of several nanofluids (see [24, 26] and the references therein). We recall that nanofluids are engineered colloidal suspensions of nanoparticles (typically made of metals, oxides, carbides, or carbon nanotubes) in a base fluid such as water, ethylene glycol or oil. They exhibit enhanced thermal conductivity compared to the base fluid, which gives them great potential for technological applications, including heat transfer, microelectronics, fuel cells, pharmaceutical processes, hybrid-powered engines, engine cooling/vehicle thermal management, etc. Therefore the mathematical analysis of the third grade fluid equations is relevant to predict and control the behavior of these fluids, in order to design optimal flows that can be successfully applied in industry.

In this work, we study the stochastic evolutionary equation (1.1) supplemented with a homogeneous Navier-slip boundary condition, which allows the slippage of the fluid against the boundary wall (see Sect. 2 for more details). Although most studies on fluid dynamics equations consider the Dirichlet boundary condition, which assumes that the particles adjacent to the boundary surface have the same velocity as the boundary, there are physical reasons to consider slip boundary conditions. Namely, practical studies (see e.g. [25]) show that viscoelastic fluids slip against the boundary, and, on the other hand, mathematical studies show that the Navier boundary conditions are compatible with the vanishing viscosity transition (see [9, 10, 21]). It is worth mentioning that the study of the small viscosity/large Reynolds number regime is crucial to understand turbulent flows. The third grade fluid equation with the Dirichlet boundary condition was studied in [2, 28], where the authors proved the existence and uniqueness of local solutions for initial conditions in \(H^3\), as well as of global-in-time solutions for initial data that are small when compared with the viscosity (see also [3]). Later, in [7, 8], the authors considered the equation with a homogeneous Navier-slip boundary condition and established the well-posedness of a global solution for initial conditions in \(H^2\), without any restriction on the size of the data. Concerning the stochastic third grade fluid equations, the authors in [1] recently studied the existence of weak probabilistic (martingale) solutions with \(H^2\)-initial data in 3D, and the authors in [13] showed the existence of a strong probabilistic (pathwise) solution with \(H^2\)-initial data in 2D. Nevertheless, to tackle relevant problems it is necessary to improve the \(H^2\)-regularity of the solutions with respect to the space variable.

This article is devoted to showing the existence and uniqueness of a local strong solution, both from the PDE and the probabilistic points of view. Namely, the local strong solution will be defined on the original probability space and will satisfy the equation in a pointwise sense (not in the distributional sense) with respect to the space variable, up to a certain stopping time. An important motivation to consider strong solutions is the study of the stochastic optimal control problem constrained by equation (1.1), in 2D as well as in 3D, where \(H^3\)-regularity is a key ingredient to establish the first-order necessary optimality condition (see e.g. [12, 31] for the 2D case and [32] for the 3D case). However, the construction of \(H^3\)-solutions in the presence of a stochastic noise is not an easy task, even in the 2D case. In addition, the presence of strongly nonlinear terms in the equation makes the analysis much more challenging when dealing with 3D physical domains. We should say that the method in [13], based on deterministic compactness results combined with a uniqueness-type argument, is not expected to work in 3D (where global uniqueness is an open problem for the deterministic equation). Here, we establish the existence and uniqueness of a local \(H^3\)-solution in 2D and 3D by following a different strategy, based on the introduction of an appropriate cut-off system. To the best of the author’s knowledge, the problem of the existence and uniqueness of \(H^3\)-solutions for the stochastic third grade fluid equation is addressed here for the first time.

The article is organized as follows: in Sect. 2, we state the third grade fluid model and define the appropriate functional spaces and stochastic setting. Section 3 is devoted to the presentation of some definitions and the main result of this paper. In Sect. 4, we introduce an approximated system, by using an appropriate cut-off function, and we prove the existence of a martingale (probabilistically weak) solution to the approximated problem. The analysis combines stochastic compactness arguments based on the Prokhorov and Skorokhod theorems. Section 5 concerns the introduction of a “modified problem”, for which uniqueness holds globally in time and we are able to construct a probabilistically strong solution by using [22, Thm. 3.14]. Finally, Sect. 6 combines the previous results to prove the main result of this work.

2 Content of the study

Let \((\Omega ,{\mathcal {F}},P)\) be a complete probability space and \({\mathcal {W}}\) be a cylindrical Wiener process defined on \((\Omega ,{\mathcal {F}},P)\), endowed with the right-continuous filtration \(({\mathcal {F}}_t)_{t\in [0,T]}\) generated by \(\{{\mathcal {W}}(t)\}_{t\in [0,T]}\). We assume that \({\mathcal {F}}_0\) contains all the P-null subsets of \(\Omega \) (see Sect. 2.2 for the assumptions on the noise). Our aim is to study the well-posedness of the third grade fluid equations on a bounded and simply connected domain \(D \subset {\mathbb {R}}^d,\; d=2,3,\) with regular (smooth) boundary \(\partial D\), supplemented with a Navier-slip boundary condition, which reads

$$\begin{aligned} {\left\{ \begin{array}{ll} d(v(y))=\big (-\nabla {\textbf{P}}+\nu \Delta y-(y\cdot \nabla )v(y)\\ \quad \quad \qquad \qquad -\displaystyle \sum _{j}v^j(y)\nabla y^j+(\alpha _1+\alpha _2)\text {div}(A^2) \\ \quad \quad \qquad \qquad +\beta \text {div}(|A|^2A)+U\big )dt+ G(\cdot ,y)d{\mathcal {W}} \quad &{}\text {in } D \times (0,T),\\ \text {div}(y)=0 \quad &{}\text {in } D \times (0,T),\\ y\cdot \eta =0, \quad \left( \eta \cdot {\mathbb {D}}(y)\right) \big \vert _{\text {tan}}=0 \quad &{}\text {on } \partial D \times (0,T),\\ y(x,0)=y_0(x) \quad &{}\text {in } D , \end{array}\right. } \end{aligned}$$
(2.1)

where \(y:=(y^1,\dots , y^d)\) is the velocity of the fluid, \({\textbf{P}}\) is the pressure and U corresponds to the external force. The operators v, A, \({\mathbb {D}}\) are defined by \(v(y)=y-\alpha _1 \Delta y:=(y^1-\alpha _1 \Delta y^1,\dots ,y^d-\alpha _1 \Delta y^d)\) and \( A:=A(y)=\nabla y+\nabla y^T=2{\mathbb {D}}(y)\). The vector \(\eta \) denotes the outward normal to the boundary \(\partial D\) and \(u\vert _{\text {tan}}\) represents the tangential component of a vector u defined on the boundary \(\partial D\).

In addition, \(\nu \) denotes the viscosity of the fluid and \(\alpha _1,\alpha _2\), \(\beta \) are material moduli satisfying

$$\begin{aligned} \nu \ge 0, \quad \alpha _1> 0, \quad |\alpha _1+\alpha _2 |\le \sqrt{24\nu \beta }, \quad \beta \ge 0. \end{aligned}$$
(2.2)

It is worth noting that (2.2) allows the motion of the fluid to be compatible with thermodynamic laws (see e.g. [18]). We consider the usual notations for the scalar product \(A\cdot B:=tr(AB^T)\) between two matrices \(A, B \in {\mathcal {M}}_{d\times d},\) and set \(\vert A\vert ^2:=A\cdot A. \) In addition, we recall that

$$\begin{aligned} A^2:=AA=\biggl (\sum _{k=1}^da_{ik}a_{kj}\biggr )_{1\le i,j\le d } \text { for any } A=(a_{ij})_{1\le i,j\le d}\in M_{d\times d}. \end{aligned}$$

The divergence of a matrix \(A\in {\mathcal {M}}_{d\times d}\) is given by

$$\begin{aligned} (\text {div}(A)_i)_{i=1,\ldots ,d} =\left( \sum _{j=1}^d\partial _ja_{ij}\right) _{i=1,\ldots ,d}. \end{aligned}$$
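In particular, the following simple observation (stated here for smooth vector fields y, as an illustrative consequence of the definitions above) is useful when handling the terms involving A:

$$\begin{aligned} \text {div}(\nabla y)=\Delta y \quad \text {and} \quad \text {div}(\nabla y^T)=\nabla (\text {div}(y)), \end{aligned}$$

so that \(\text {div}(A(y))=\Delta y\) whenever \(\text {div}(y)=0\).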

The diffusion coefficient G will be specified in Sect. 2.2.

2.1 The functional setting

We denote by \({\mathcal {D}}(u)=(u,\nabla u)\) the vector of \({\mathbb {R}}^{d^2+d}\) whose components are the components of u and the first-order derivatives of these components. Similarly, \({\mathcal {D}}^k(u)=(u,\nabla u,\ldots ,\nabla ^ku)\) denotes the vector of \({\mathbb {R}}^{d^{k+1}+\cdots +d^2+d}\) whose components are the components of u together with the derivatives of these components up to order k.

We set \(Q= D\times [0,T]\) and \(\Omega _T=\Omega \times [0,T].\) We will denote by C, K generic positive constants, which may vary from line to line.

Let \(m\in {\mathbb {N}}^*\) and \(1\le p< \infty \). We denote by \(W^{m,p}(D)\) the standard Sobolev space of functions whose weak derivatives up to order m belong to the Lebesgue space \(L^p(D)\), and set \(H^m(D)=W^{m,2}(D)\) and \(H^0(D)=L^2(D)\). Following [27, Thm. 1.20 & Thm. 1.21], we have the continuous embeddings:

$$\begin{aligned}&\text {if }p<d,\quad W^{1,p}(D) \hookrightarrow L^a(D),\ \forall a \in [1,p^*],\text { and it is compact if }a \in [1,p^*),\nonumber \\&\text {if }p=d,\quad W^{1,p}(D) \hookrightarrow L^a(D),\ \forall a <+\infty ,\text { and it is compact},\\&\text {if }p>d,\quad W^{1,p}(D) \hookrightarrow C({{\overline{D}}})\text { and it is compact, }\nonumber \end{aligned}$$
(2.3)

where \(p^*=\frac{pd}{d-p}\), defined for \(p<d\), denotes the Sobolev embedding exponent. For instance, with \(d=3\) and \(p=2\) one has \(p^*=6\), so that \(H^1(D)\hookrightarrow L^a(D)\) for all \(a\in [1,6]\), compactly for \(a<6\). Proceeding by induction, one gets the Sobolev embeddings for \(W^{m,p}(D)\) instead of \(W^{1,p}(D)\); we refer to [16, Sections 5.6 & 5.7] for more details. For a Banach space X, we define

$$\begin{aligned} (X)^k:=\{(f_1,\ldots ,f_k): f_l\in X,\quad l=1,\ldots ,k\}\;\text { for positive integer } k. \end{aligned}$$

For the sake of simplicity, we do not distinguish between scalar, vector or matrix-valued notations when it is clear from the context. In particular, \(\Vert \cdot \Vert _X\) should be understood as follows

  • \(\Vert f\Vert _X^2= \Vert f_1\Vert _X^2+\cdots +\Vert f_d\Vert _X^2\) for any \(f=(f_1,\ldots ,f_d) \in (X)^d\).

  • \(\Vert f\Vert _{X}^2= \displaystyle \sum _{i,j=1}^d\Vert f_{ij}\Vert _X^2\) for any \(f\in {\mathcal {M}}_{d\times d}(X)\).

We recall that

$$\begin{aligned} (u,v)= & {} \sum _{i=1}^d\int _Du_iv_idx, \quad \forall u,v \in (L^2(D))^d,\\ (A,B)= & {} \int _D A\cdot Bdx ; \quad \forall A,B \in {\mathcal {M}}_{d\times d}(L^2(D)). \end{aligned}$$

The unknowns in the system (2.1) are the velocity random field and the scalar pressure random field:

$$\begin{aligned} y:\Omega \times D\times [0,T]{} & {} \rightarrow {\mathbb {R}}^d, \;d=2,3\\ (\omega ,x,t){} & {} \mapsto (y^1(\omega ,x,t), \dots , y^d(\omega ,x,t));\\ p:\Omega \times D\times [0,T]{} & {} \rightarrow {\mathbb {R}}\\ (\omega ,x,t){} & {} \mapsto p(\omega ,x,t). \end{aligned}$$

Now, let us introduce the following functional Hilbert spaces:

$$\begin{aligned} H= & {} \{ y \in (L^2(D))^d \,\vert \text { div}(y)=0 \text { in } D \text { and } y\cdot \eta =0 \text { on } \partial D\}, \nonumber \\ V= & {} \{ y \in (H^1(D))^d \,\vert \text { div}(y)=0 \text { in } D \text { and } y\cdot \eta =0 \text { on } \partial D\}, \nonumber \\ W= & {} \{ y \in V\cap (H^2(D))^d\; \vert \, (\eta \cdot {\mathbb {D}}(y))\big \vert _{\text {tan}} =0 \text { on } \partial D\},\quad {\widetilde{W}}=(H^3(D))^d\cap W,\nonumber \\ \end{aligned}$$
(2.4)

and recall the Leray-Helmholtz projector \({\mathbb {P}}: (L^2(D))^d \rightarrow H\), which is a linear bounded operator characterized by the following \(L^2\)-orthogonal decomposition \(v={\mathbb {P}}v+\nabla \varphi ,\; \varphi \in H^1(D). \)

We consider on H the \(L^2\)-inner product \((\cdot ,\cdot )\) and the associated norm \(\Vert \cdot \Vert _{2}\). The spaces V, W and \({\widetilde{W}}\) will be endowed with the following inner products, which are related with the structure of the equation

$$\begin{aligned} (u,z)_V&:=(v(u),z)=(u,z)+2\alpha _1({\mathbb {D}}(u),{\mathbb {D}}(z)),\\ (u,z)_W&:=(u,z)_V+({\mathbb {P}}v(u),{\mathbb {P}}v(z)),\\ (u,z)_{{\widetilde{W}}}&:=(u,z)_V+(\text {curl}v(u),\text {curl}v(z)), \end{aligned}$$

and denote by \(\Vert \cdot \Vert _V,\Vert \cdot \Vert _W\) and \(\Vert \cdot \Vert _{{\widetilde{W}}}\) the corresponding norms. We recall that the norms \(\Vert \cdot \Vert _V\) and \(\Vert \cdot \Vert _{H^1}\) are equivalent due to the Korn inequality. In addition, the norms \(\Vert \cdot \Vert _W\) and \(\Vert \cdot \Vert _{{\widetilde{W}}}\) are equivalent to the classical Sobolev norms \( \Vert \cdot \Vert _{H^2}\) and \(\Vert \cdot \Vert _{H^3}\), respectively, thanks to the Navier boundary condition (2.1)\(_{(3)}\) and the divergence-free property, see [8, Corollary 6].
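For the reader's convenience, we sketch why \(\Vert \cdot \Vert _V\) and \(\Vert \cdot \Vert _{H^1}\) are equivalent (the constant \(c_K=c_K(D,\alpha _1)>0\) below comes from the Korn inequality): since \(\alpha _1>0\),

$$\begin{aligned} c_K\Vert u\Vert _{H^1}^2\le \Vert u\Vert _2^2+2\alpha _1\Vert {\mathbb {D}}(u)\Vert _2^2=\Vert u\Vert _V^2\le \max \{1,2\alpha _1\}\Vert u\Vert _{H^1}^2, \quad \forall u \in V, \end{aligned}$$

where the upper bound uses the pointwise inequality \(\vert {\mathbb {D}}(u)\vert ^2\le \vert \nabla u\vert ^2\).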

The usual norms on the classical Lebesgue and Sobolev spaces \(L^p(D)\) and \(W^{m,p}(D)\) will be denoted by \(\Vert \cdot \Vert _p\) and \(\Vert \cdot \Vert _{W^{m,p}}\), respectively. In addition, given a Banach space X, we will denote by \(X^\prime \) its dual.

\({\mathcal {C}}^{\gamma }([0,T],X)\) stands for the space of \(\gamma \)-Hölder-continuous functions with values in X, where \(\gamma \in ]0,1[\).

For \(T>0\), \(0<s<1\) and \(1\le p <\infty \), let us recall the definition of the fractional Sobolev space

$$\begin{aligned} W^{s,p}(0,T;X):=\{ f \in L^p(0,T;X) \; \vert \; \Vert f\Vert _{W^{s,p}(0,T;X)} <\infty \}, \end{aligned}$$

where \(\Vert f \Vert _{W^{s,p}(0,T;X)}= \Big ( \Vert f\Vert _{L^p(0,T;X)}^p+\displaystyle \int _0^T\int _0^T\dfrac{\Vert f(r)-f(t)\Vert _X^p}{\vert r-t\vert ^{sp+1}}drdt\Big )^{\frac{1}{p}}\).

Since \(L^{\infty }(0,T;{\widetilde{W}})\) is not separable, it is convenient to introduce the following space:

$$\begin{aligned}{} & {} L^p_{w-*}(\Omega ;L^\infty (0,T;{\widetilde{W}}))\\{} & {} \quad =\{ u:\Omega \rightarrow L^\infty (0,T;{\widetilde{W}}) \text { is weakly-* measurable and } {\mathbb {E}}\Vert u\Vert _{L^\infty (0,T;{\widetilde{W}})}^p<\infty \}, \end{aligned}$$

where weakly-* measurable stands for the measurability when \(L^\infty (0,T;{\widetilde{W}})\) is endowed with the \(\sigma \)-algebra generated by the Borel sets of weak-* topology, see e.g. [34, Rmq. 2.1].

It will be convenient to introduce the following trilinear form

$$\begin{aligned} b(y,z,\phi )=((y\cdot \nabla ) z,\phi )=\int _D((y\cdot \nabla ) z)\cdot \phi \, dx, \quad \forall y,z,\phi \in (H^1(D))^d, \end{aligned}$$

which is anti-symmetric in the last two variables, namely

$$\begin{aligned} b(y,z,\phi )=-b(y,\phi ,z),\quad \forall y \in V,\; \forall z,\phi \in (H^1(D))^d. \end{aligned}$$
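This antisymmetry follows from an integration by parts (rigorously, by a density argument): for \(y\in V\), so that \(\text {div}(y)=0\) in D and \(y\cdot \eta =0\) on \(\partial D\), and \(z,\phi \in (H^1(D))^d\),

$$\begin{aligned} b(y,z,\phi )+b(y,\phi ,z)=\int _D(y\cdot \nabla )(z\cdot \phi )\, dx=\int _{\partial D}(y\cdot \eta )\,(z\cdot \phi )\, dS-\int _D\text {div}(y)\,(z\cdot \phi )\, dx=0. \end{aligned}$$

In particular, \(b(y,z,z)=0\) for all \(z\in (H^1(D))^d\).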

Results on the following modified Stokes problem will be very useful for our analysis:

$$\begin{aligned} {\left\{ \begin{array}{ll} h-\alpha _1\Delta h+\nabla p=f, \quad \text {div}(h)=0 \quad &{}\text {in } D,\\ h\cdot \eta =0, \quad \left( \eta \cdot {\mathbb {D}}(h)\right) \big \vert _{\text {tan}}=0 \quad &{}\text {on } \partial D. \end{array}\right. } \end{aligned}$$
(2.5)

The solution h will be denoted by \(h=(I-\alpha _1{\mathbb {P}}\Delta )^{-1}f\). We recall the existence and uniqueness results, as well as the regularity of the solution (h, p). Additional information can be found in [6, Theorem 3] and [11, Lemma 3.2] for the 3D and 2D cases, respectively.

Theorem 1

Suppose that \(f \in (H^m(D))^d,\, m=0,1\). Then there exists a unique (up to a constant for p) solution \((h,p) \in (H^{m+2}(D))^d\times H^{m+1}(D)\) of the Stokes problem (2.5) such that

$$\begin{aligned} \Vert h \Vert _{H^{m+2}}+\Vert p \Vert _{H^{m+1}} \le C(m)\Vert f\Vert _{H^m},\;\text { where } C(m)\text { is a positive constant. } \end{aligned}$$

Furthermore, the following properties hold:

  • (h, p) is the solution of (2.5) in the variational sense, namely

    $$\begin{aligned} (v(h),z)=(h,z)_V:= (h,z)+2\alpha _1({\mathbb {D}}(h),{\mathbb {D}}(z)) =(f,z);\quad \forall z\in V. \end{aligned}$$
    (2.6)
  • The operator \((I-\alpha _1{\mathbb {P}}\Delta )^{-1}:(H^m(D))^d \rightarrow (H^{m+2}(D))^d\) is linear and continuous, thanks to the above estimate. In particular, \((I-\alpha _1{\mathbb {P}}\Delta )^{-1}:(L^2(D))^d \rightarrow W\) is linear and continuous.

Let us notice that the relation (2.6) holds for \(z=e_i, \) where \((e_i)_{i\in {\mathbb {N}}}\) is the orthonormal basis of V satisfying (4.3). We refer to the discussion after [6, Theorem 3] for more details about the variational formulation (2.6).

Despite the specificities of the 2D and 3D frameworks, we aim to present a uniform analysis. To ease the reading, throughout the text we will emphasize the relevant differences between 2D and 3D (see Remarks 4, 5 and 6). Before presenting the stochastic setting and the main results, let us mention some relevant differences between the 2D and 3D cases:

  • In 2D, we have an explicit relation between the normal and tangent vectors to the boundary, \( \eta =(\eta _1,\eta _2)\) and \(\tau =(-\eta _2,\eta _1)\), which is very useful for managing the boundary terms arising from integration by parts. In 3D, no similar explicit relation is available, so dealing with the boundary terms is much more complicated, see e.g. [32, Section 10].

  • In 2D, the \(\text {curl}\) operator is the scalar \(\partial _1u_2-\partial _2u_1\), but in 3D it is a vector field (see e.g. [6, Section 2]), which is more delicate to handle when deriving higher regularity estimates, more precisely \(H^3\)-regularity in our setting. In particular, the management of the nonlinear terms becomes more delicate after applying the \(\text {curl}\) operator to the equation. This is the main reason to use the cut-off (4.1) to construct an \(H^3\)-solution, see also Remark 4.

  • The Sobolev embedding inequalities differ between 2D and 3D, see (2.3).

2.2 The stochastic setting

Let \((\Omega ,{\mathcal {F}},P)\) be a complete probability space endowed with a right-continuous filtration \(({\mathcal {F}}_t)_{t\ge 0}\).

Let us consider a cylindrical Wiener process \({\mathcal {W}}\) defined on \((\Omega ,{\mathcal {F}},P)\), which can be written as

$$\begin{aligned} {\mathcal {W}}(t)= \sum _{{\textbf{k}}\ge 1} e_{\textbf{k}}\beta _{\textbf{k}}(t), \end{aligned}$$

where \((\beta _{\textbf{k}})_{{\textbf{k}}\ge 1}\) is a sequence of mutually independent real-valued standard Wiener processes and \((e_{\textbf{k}})_{{\textbf{k}}\ge 1}\) is a complete orthonormal system in a separable Hilbert space \({\mathbb {H}}\). Notice that the series \({\mathcal {W}}(t)= \sum _{{\textbf{k}}\ge 1} e_{\textbf{k}}\beta _{\textbf{k}}(t)\) does not converge in \({\mathbb {H}}\). In fact, the sample paths of \({\mathcal {W}}\) take values in a larger Hilbert space \(H_0\) such that the embedding \({\mathbb {H}}\hookrightarrow H_0\) is a Hilbert-Schmidt operator. For example, the space \(H_0\) can be defined as follows

$$\begin{aligned} H_0=\bigg \{ u=\sum _{{\textbf{k}}\ge 1}\gamma _{{\textbf{k}}}e_{\textbf{k}}\;\vert \quad \sum _{ {\textbf{k}}\ge 1} \dfrac{\gamma _{\textbf{k}}^2}{{\textbf{k}}^2} <\infty \bigg \}, \end{aligned}$$

endowed with the norm

$$\begin{aligned} \Vert u\Vert _{H_0}^2=\sum _{ {\textbf{k}}\ge 1} \dfrac{\gamma _{\textbf{k}}^2}{{\textbf{k}}^2}, \quad \quad u=\sum _{{\textbf{k}}\ge 1}\gamma _{{\textbf{k}}}e_{\textbf{k}}. \end{aligned}$$

Hence, P-a.s. the trajectories of \({\mathcal {W}}\) belong to the space \(C([0,T],H_0)\) (cf. [14, Chapter 4]).
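Indeed, for this choice of \(H_0\) the embedding \({\mathbb {H}}\hookrightarrow H_0\) is Hilbert-Schmidt, since \(\Vert e_{\textbf{k}}\Vert _{H_0}^2={\textbf{k}}^{-2}\) and

$$\begin{aligned} \sum _{{\textbf{k}}\ge 1}\Vert e_{\textbf{k}}\Vert _{H_0}^2=\sum _{{\textbf{k}}\ge 1}\dfrac{1}{{\textbf{k}}^2}=\dfrac{\pi ^2}{6}<\infty . \end{aligned}$$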

In order to define the stochastic integral in the infinite dimensional framework, let us consider another Hilbert space E and denote by \(L_2({\mathbb {H}},E)\) the space of Hilbert-Schmidt operators from \({\mathbb {H}}\) to E, that is, the subspace of linear operators defined by

$$\begin{aligned} L_2({\mathbb {H}},E):=\bigg \{ G:{\mathbb {H}} \rightarrow E \;\vert \quad \Vert G\Vert ^2_{L_2({\mathbb {H}},E)}:=\sum _{{\textbf{k}}\ge 1}\Vert G e_{\textbf{k}}\Vert _E^2 < \infty \bigg \}. \end{aligned}$$

Given an E-valued predictable process \(G\in L^2(\Omega ;L^2(0,T;L_2({\mathbb {H}},E)))\), and setting \(\sigma _{\textbf{k}}=Ge_{\textbf{k}}\), we may define the Itô stochastic integral by

$$\begin{aligned} \int _0^tGd{\mathcal {W}}=\sum _{{\textbf{k}}\ge 1} \int _0^t\sigma _{\textbf{k}}d\beta _{\textbf{k}}, \quad \forall t\in [0,T]. \end{aligned}$$

Moreover, the following Burkholder–Davis–Gundy inequality holds

$$\begin{aligned} {\mathbb {E}}\biggl [\sup _{s\in [0,T]}\biggl \Vert \sum _{{\textbf{k}}\ge 1}\int _0^s\sigma _{\textbf{k}}d\beta _{\textbf{k}}\biggr \Vert _E^r\biggr ]= & {} {\mathbb {E}}\biggl [\sup _{s\in [0,T]}\biggl \Vert \int _0^sGd{\mathcal {W}}\biggr \Vert _E^r\biggr ]\\\le & {} C_r {\mathbb {E}}\biggl [\int _0^T\Vert G\Vert _{L_2({\mathbb {H}},E)}^2dt\biggr ]^{r/2} \\= & {} C_r{\mathbb {E}}\biggl [\sum _{{\textbf{k}}\ge 1}\int _0^T\Vert \sigma _{\textbf{k}}\Vert _{E}^2dt\biggr ]^{r/2}, \quad \forall r \ge 1. \end{aligned}$$

Let us now specify the assumptions on the noise.

2.2.1 Multiplicative noise

Let us consider a family of Carathéodory functions

$$\begin{aligned} \sigma _{\textbf{k}}:[0,T]\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}^d, \quad (t,\lambda )\mapsto \sigma _{\textbf{k}}(t,\lambda ), \quad {\textbf{k}}\in {\mathbb {N}}, \end{aligned}$$

satisfying \(\sigma _{\textbf{k}}(t,0)=0\), and such that there exists \(L > 0\) with, for a.e. \(t\in (0,T)\) and any \(\lambda ,\mu \in {\mathbb {R}}^d\),

$$\begin{aligned}&\sum _{{\textbf{k}}\ge 1}\big | \sigma _{\textbf{k}}(t,\lambda )-\sigma _{\textbf{k}}(t,\mu )\big |^2 \le L |\lambda -\mu |^2, \end{aligned}$$
(2.7)
$$\begin{aligned}&\vert \nabla \sigma _{\textbf{k}}(\cdot ,\cdot )\vert \le a_k, \quad \sum _{{\textbf{k}}\ge 1} a_k^2 <\infty . \end{aligned}$$
(2.8)

We notice that, taking \(\mu =0\) in (2.7) and using \(\sigma _{\textbf{k}}(t,0)=0\), we obtain the linear growth bound \(\displaystyle \,{\mathbb {G}}^2(t,\lambda ):= \sum _{{\textbf{k}}\ge 1} \sigma _{\textbf{k}}^2(t,\lambda )\le L\,|\lambda |^2. \)

For each \(t\in [0,T]\) and \(y\in V\), we consider the linear mapping \(G(t,y): {\mathbb {H}}\rightarrow (H^1(D))^d\) defined by

$$\begin{aligned} G(t,y)e_{\textbf{k}}= \{ x \mapsto \sigma _{\textbf{k}}\big (t,y(x)\big )\}, \quad {\textbf{k}}\ge 1. \end{aligned}$$

By the above assumptions, G(t, y) is a Hilbert-Schmidt operator for any \(t\in [0,T]\), \(y\in V\), and

$$\begin{aligned} G:[0,T]\times V \rightarrow L_2({\mathbb {H}},(H^1(D))^d). \end{aligned}$$

Remark 1

Notice that \(G:[0,T]\times V \rightarrow L_2({\mathbb {H}},(L^2(D))^d)\) is a Carathéodory function, \(L\)-Lipschitz-continuous in y, uniformly in time. Hence, it is \({\mathcal {B}}([0,T])\otimes {\mathcal {B}}(V) \)-measurable, and the stochastic process \(G(\cdot ,y(\cdot ))\) is predictable for any V-valued predictable process \(y(\cdot )\). Since the embedding \(H^1(D)\hookrightarrow L^2(D)\) is continuous, \(G(\cdot ,y(\cdot ))\) is a predictable process with values in \(L_2({\mathbb {H}},(L^2(D))^d)\) as well as in \(L_2({\mathbb {H}},(H^1(D))^d)\), thanks to Kuratowski’s theorem [33, Th. 1.1 p. 5].

Following Remark 1, if y is a predictable \((H^1(D))^d\)-valued (resp. \((L^2(D))^d\)-valued) process such that

$$\begin{aligned} y\in L^2\big ( \Omega \times ]0,T[,(H^1(D))^d\big )\quad \text { (resp. } y\in L^2\big ( \Omega \times ]0,T[,(L^2(D))^d\big )), \end{aligned}$$

and G satisfies the above assumptions, the stochastic integral

$$\begin{aligned} \int _0^tG(\cdot ,y)d{\mathcal {W}}=\sum _{{\textbf{k}}\ge 1}\int _0^t\sigma _{\textbf{k}}(\cdot ,y)d\beta _{\textbf{k}}\end{aligned}$$

is a well-defined \(({\mathcal {F}}_t)_{t\ge 0}\)-martingale with values in \((H^1(D))^d\) (resp. \((L^2(D))^d\)).

Now, let us recall the following result by F. Flandoli and D. Gatarek [17, Lemma 2.1] about the Sobolev regularity for the stochastic integral.

Lemma 2

Let \(p\ge 2, \eta \in [0,\dfrac{1}{2}[\) be given. Let \(G=\{\sigma _{\textbf{k}}\}_{k\ge 1}\) satisfy, for some \(m\in {\mathbb {R}},\)

$$\begin{aligned} {\mathbb {E}}\Big [\int _0^T\big ( \sum _{{\textbf{k}}\ge 1} \Vert \sigma _{\textbf{k}}\Vert _{2,m}^2\big )^{p/2} dt\Big ] < \infty \quad \big (\Vert \cdot \Vert _{2,m}\text { denotes the norm on } W^{m,2}(D)\big ). \end{aligned}$$

Then

$$\begin{aligned} t\mapsto \int _0^tGd{\mathcal {W}}\in L^p\big (\Omega ;W^{\eta ,p}\big (0,T; W^{m,2}(D)\big )\big ), \end{aligned}$$

and there exists a constant \(c=c(\eta ,p)\) such that

$$\begin{aligned} {\mathbb {E}}\Bigg [\biggl \Vert \int _0^t Gd{\mathcal {W}}\biggr \Vert _{W^{\eta ,p}\big (0,T; W^{m,2}(D)\big )}^p\Bigg ] \le c(\eta ,p) {\mathbb {E}}\Bigg [\int _0^T\bigg ( \sum _{{\textbf{k}}\ge 1} \Vert \sigma _{\textbf{k}}\Vert _{2,m}^2\bigg )^{p/2} dt\Bigg ]. \end{aligned}$$

In the sequel, given a random variable \(\xi \) with values in a Polish space E, we will denote by \({\mathcal {L}}(\xi )\) its law

$$\begin{aligned} {\mathcal {L}}(\xi )(\Gamma )=P(\xi \in \Gamma ) \quad \text {for any Borel subset } \Gamma \text { of } E. \end{aligned}$$


Let us recall the following version of the Skorokhod representation theorem, which will be used later.

Theorem 3

[5, Theorem C.1] Let \((\Omega ,{\mathcal {F}},P)\) be a probability space and \(U_1,U_2\) be two separable metric spaces. Let \(\xi _n:\Omega \rightarrow U_1\times U_2,\, n\in {\mathbb {N}}\), be a family of random variables, such that the sequence of the laws \(({\mathcal {L}}(\xi _n))_{n\in {\mathbb {N}}}\) is weakly convergent on \(U_1\times U_2\).

For \(i=1,2\), let \(\pi _i:U_1\times U_2 \rightarrow U_i\) be the projection onto \(U_i\), i.e.

$$\begin{aligned} U_1\times U_2 \ni \xi =(\xi ^1,\xi ^2) \mapsto \pi _i(\xi )=\xi ^i \in U_i. \end{aligned}$$

Finally let us assume that there exists a random variable \(\rho :\Omega \rightarrow U_1\) such that

$$\begin{aligned} {\mathcal {L}}(\pi _1 \circ \xi _n)={\mathcal {L}}(\rho ),\, \forall n \in {\mathbb {N}}. \end{aligned}$$

Then, there exists a probability space \(({{\bar{\Omega }}}, \bar{{\mathcal {F}}},{{\bar{P}}})\), a family of \(U_1\times U_2\)-valued random variables \(({{\bar{\xi }}}_n)_{n\in {\mathbb {N}}}\) defined on \(({{\bar{\Omega }}}, \bar{{\mathcal {F}}},{{\bar{P}}})\) and a random variable \(\xi _\infty : {{\bar{\Omega }}} \rightarrow U_1\times U_2\) such that

  1. \({\mathcal {L}}({{\bar{\xi }}}_n)={\mathcal {L}}(\xi _n),\, \forall n \in {\mathbb {N}}\);

  2. \({{\bar{\xi }}}_n \rightarrow \xi _\infty \) in \(U_1\times U_2\) \({{\bar{P}}}\)-a.s.;

  3. \(\pi _1\circ {{\bar{\xi }}}_n({{\bar{\omega }}})=\pi _1\circ \xi _\infty ({{\bar{\omega }}})\) for all \({{\bar{\omega }}}\in {{\bar{\Omega }}}\).

3 The main results

First, let us specify the assumptions on the initial data \(y_0\) and the force U.

\({\mathcal {H}}_0\): we consider \(y_0: \Omega \rightarrow {\widetilde{W}}\) and \( U:\Omega \times [0,T] \rightarrow (H^1(D))^d\) such that

  • \(y_0\) is \({\mathcal {F}}_0\)-measurable and U is predictable;

  • \(y_0\) and U satisfy the following regularity assumption

$$\begin{aligned} U\in L^p(\Omega \times (0,T), (H^1(D))^d),\quad y_0\in L^p(\Omega ,{\widetilde{W}}), \end{aligned}$$
(3.1)

where \(p>4\).

Now, we introduce the notion of a local solution.

Definition 1

Let \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\ge 0},P)\) be a stochastic basis and \({\mathcal {W}}\) be an \(({\mathcal {F}}_t)\)-cylindrical Wiener process. We say that a pair \((y,\tau )\) is a local strong (pathwise) solution to (2.1) if and only if:

  • \(\tau \) is an a.s. strictly positive \(({\mathcal {F}}_t)\)-stopping time.

  • The velocity y is a W-valued predictable process satisfying

    $$\begin{aligned} y(\cdot \wedge \tau ) \in L^p(\Omega ;{\mathcal {C}}([0,T],(W^{2,4}(D))^d))\cap L^p_{w-*}(\Omega ;L^\infty (0,T;{\widetilde{W}})). \end{aligned}$$
  • P-a.s. for all \(t\in [0,T]\)

    $$\begin{aligned} (y(t\wedge \tau ),\phi )_V&=(y_0,\phi )_V+\displaystyle \int _0^{t\wedge \tau }\big (\nu \Delta y-(y\cdot \nabla )v(y)\\&\quad -\sum _{j}v^j(y)\nabla y^j +(\alpha _1+\alpha _2)\text {div}[A(y)^2]\\&\quad +\beta \text {div}[|A(y)|^2A(y)]+ U,\phi \big ) dt\\&\quad +\displaystyle \int _0^{t\wedge \tau }(G(\cdot ,y),\phi )d{\mathcal {W}} \text { for all } \phi \in V. \end{aligned}$$

Taking into account the meaning of a local solution, pathwise uniqueness will naturally be understood in the following local sense.

Definition 2

  (i) We say that local pathwise uniqueness holds if, for any given pair \((y^1,\tau ^1), (y^2,\tau ^2)\) of local strong solutions of (2.1) with the same data, we have \(y^1(t)=y^2(t)\) P-a.s. More precisely

    $$\begin{aligned} P\big (y^1(t)=y^2(t);\; \forall t\in [0,\tau ^1\wedge \tau ^2]\big )=1. \end{aligned}$$
  (ii) We say that \(((y^M)_{M\in {\mathbb {N}}}, (\tau _M)_{M\in {\mathbb {N}}},{\textbf{t}})\) is a maximal strong local solution to (2.1) if and only if, for each \(M \in {\mathbb {N}}\), the pair \((y^M,\tau _M)\) is a local strong solution and \((\tau _M)\) is an increasing sequence of stopping times such that

    $$\begin{aligned} {\textbf{t}}:=\displaystyle \lim _{M\rightarrow \infty } \tau _M >0, \quad \text {P-a.s.} \end{aligned}$$

    and P-a.s.

    $$\begin{aligned} \sup _{t\in [0,\tau _M]}\Vert y(t)\Vert _{W^{2,4}} \ge M \text { on }\quad \{{\textbf{t}} <T\}, \quad \forall M\in {\mathbb {N}}. \end{aligned}$$
    (3.2)

Remark 2

Notice that (3.2) means that \([0,{\textbf{t}}]\) is the maximal interval on which the trajectories with \(H^3\)-regularity are defined, since P-a.s.

$$\begin{aligned} \sup _{t\in [0,{\textbf{t}}]}\Vert y(t)\Vert _{H^3}= \infty \quad \text {on}\quad \{{\textbf{t}} <T\} . \end{aligned}$$

We are in position to state our main result.

Theorem 4

There exists a unique maximal strong (pathwise) local solution to (2.1).

Remark 3

Following Definition 1, we ask (2.1) to be satisfied in the strong sense. In other words, the solution is strong from both the probabilistic and the PDE points of view, since it is defined on a given stochastic basis \((\Omega ,{\mathcal {F}},({\mathcal {F}}_ t)_{t\ge 0},P)\) and satisfies the equation pointwise with respect to the space variables (not in the sense of distributions), thanks to the \(H^3\)-regularity of the solution.

Before turning to the proof of Theorem 4, let us describe the different steps of the construction of the local strong solution. Firstly, we introduce an appropriate cut-off system (Sect. 4) with strongly nonlinear terms; the difficulty consists in the use of stochastic compactness arguments to pass to the limit in the associated finite dimensional approximated problem, constructed via the Galerkin method. Secondly, the lack of global-in-time uniqueness for the cut-off system motivates the introduction of a modified problem. For this modified problem, the local solution of the cut-off system can be seen as a global solution and uniqueness holds globally in time. Then, we use the result of Kurtz [22, Theorem 3.14] to get the existence and uniqueness of a probabilistically strong solution of the modified problem. Finally, we define the local solution of (2.1) by using an appropriate sequence of stopping times (Sect. 6).

4 Approximation (cut-off system)

This section is devoted to the study of an appropriate cut-off system. Using the Galerkin method, the cut-off system is approximated by a sequence of finite dimensional problems. Applying the Banach fixed point theorem, we prove the existence and uniqueness of the solution to each finite dimensional problem. Then, a compactness argument based on the Prokhorov and Skorokhod theorems guarantees the existence of a martingale (probabilistically weak) solution, defined on some probability space, for the cut-off system.

Let \(M>0\) and consider a family of smooth cut-off functions \(\theta _M:[0,\infty [ \rightarrow [0,1]\) satisfying

$$\begin{aligned} \theta _M(x)={\left\{ \begin{array}{ll} 1, &{}\quad 0\le x\le M,\\ 0, &{}\quad 2M\le x. \end{array}\right. } \end{aligned}$$
(4.1)
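For intuition, one admissible cut-off can be written down explicitly. The following minimal numerical sketch uses a polynomial ("smoothstep") transition; the name `theta_M` and this particular profile are our own illustrative choices (the profile below is only \(C^1\) rather than \(C^\infty\); the analysis only requires the plateau properties in (4.1) together with smoothness and a bounded derivative).

```python
def theta_M(x: float, M: float) -> float:
    """Illustrative cut-off: equals 1 on [0, M], 0 on [2M, inf).

    The transition on (M, 2M) is the C^1 "smoothstep" polynomial; any
    smooth decreasing bridge with the same plateaus would do.
    """
    if x <= M:
        return 1.0
    if x >= 2 * M:
        return 0.0
    s = (x - M) / M                      # rescale transition zone to (0, 1)
    return 1.0 - (3 * s**2 - 2 * s**3)   # smoothstep: flat at both ends
```

In the paper, \(\theta_M\) is later evaluated at the norm \(\Vert u\Vert_{W^{2,4}}\), so it switches the nonlinear terms off once the solution leaves a ball of radius \(2M\).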

We recall that \(H^3(D) \hookrightarrow W^{2,6}(D)\) and \(H^3(D) \underset{Compact}{\hookrightarrow }\ W^{2,q}(D)\) if \(1\le q<6\) (see [27, Thm. 1.20 & Thm. 1.21]). In fact, \( H^3(D) \hookrightarrow W^{2,a}(D),\ \forall a <+\infty ,\) and compactly, in the 2D case, see (2.3). With a slight abuse of notation, let us denote by \(\theta _M\) the function defined on \(W^{2,q}(D)\) as follows

$$\begin{aligned} \theta _M(u)=\theta _M (\Vert u \Vert _{W^{2,4}}), \quad \forall u \in W^{2,q}(D),\quad 4\le q<6. \end{aligned}$$

In order to construct a local pathwise solution to (2.1), the first step is to consider the following approximated problem

$$\begin{aligned} {\left\{ \begin{array}{ll} d(v(y))=\big \{-\nabla p+\nu \Delta y-\theta _M(y)(y\cdot \nabla )v\\ \quad \quad \quad \quad \qquad -\sum _{j}\theta _M(y)v^j\nabla y^j+(\alpha _1+\alpha _2)\theta _M(y)\text {div}(A^2) \\ \quad \quad \quad \quad \qquad +\beta \theta _M(y)\text {div}(|A|^2A)+U\big \}dt+ \theta _M(y)G(\cdot ,y)d{\mathcal {W}} &{}\quad \text {in } D\times (0,T),\\ \text {div}(y)=0 &{}\quad \text {in } D\times (0,T),\\ y\cdot \eta =0, \quad [\eta \cdot {\mathbb {D}}(y)]\cdot \tau =0 &{}\quad \text {on } \partial D\times (0,T),\\ y(x,0)=y_0(x) &{}\quad \text {in } D. \end{array}\right. } \end{aligned}$$
(4.2)

In the first stage, we construct a martingale solution to (4.2), according to the next definition.

Definition 4.1

We say that (4.2) has a martingale solution, if and only if there exist a probability space \(({\bar{\Omega }}, \bar{{\mathcal {F}}},{\bar{P}}),\) a filtration \((\bar{{\mathcal {F}}}_t)\), a cylindrical Wiener process \(\bar{{\mathcal {W}}} \), \(({{\bar{U}}},{\bar{y}}(0)) \in L^p({{\bar{\Omega }}}\times (0,T), (H^1(D))^d)\times L^p({{\bar{\Omega }}},{\widetilde{W}})\) adapted with respect to \((\bar{{\mathcal {F}}}_t)\) and a predictable process \({{\bar{y}}}: {{\bar{\Omega }}}\times [0,T] \rightarrow W\) with a.e. paths

$$\begin{aligned} {{\bar{y}}}(\omega ,\cdot ) \in {\mathcal {C}}([0,T],(W^{2,4}(D))^d)\cap L^\infty (0,T;{\widetilde{W}}), \end{aligned}$$

such that \({{\bar{y}}}\in L^p_{w-*}({\bar{\Omega }};L^\infty (0,T;{\widetilde{W}}))\) and \({\bar{P}}\)-a.s. in \({\bar{\Omega }}\), for all \(t\in [0,T]\), the following equality holds

$$\begin{aligned} ({{\bar{y}}}(t),\phi )_V&=({\bar{y}}(0),\phi )_V+\displaystyle \int _0^t \big \{\big (\nu \Delta {{\bar{y}}}-\theta _M({{\bar{y}}})({{\bar{y}}}\cdot \nabla )v({{\bar{y}}})\nonumber \\&\quad -\sum _{j}\theta _M({{\bar{y}}})v({{\bar{y}}})^j\nabla {{\bar{y}}}^j+(\alpha _1+\alpha _2)\theta _M({{\bar{y}}}) \text {div}[A({{\bar{y}}})^2],\phi \big ) \nonumber \\&\quad +\big (\beta \theta _M({{\bar{y}}})\text {div}[|A({{\bar{y}}})|^2A({{\bar{y}}})] +{{\bar{U}}},\phi \big )\big \}dt\\&\quad +\displaystyle \int _0^t\theta _M({{\bar{y}}})\big (G(\cdot ,{{\bar{y}}}),\phi \big ) d\bar{{\mathcal {W}}}\quad \text { for all } \phi \in V, \end{aligned}$$

and \({\mathcal {L}}({{\bar{y}}}(0),{{\bar{U}}})={\mathcal {L}}(y_0,U)\).

Now, we are able to present the following result.

Theorem 5

(Existence of a martingale solution) Assume that \({\mathcal {H}}_0\) holds with \(p>4\). Then, there exists a (martingale) solution to (4.2) in the sense of Definition 4.1.

Proof

See Sect. 4.6. \(\square \)

4.1 Approximation

Let \(\{e_i\}_{i\in {\mathbb {N}}} \subset (H^4(D))^d \cap W\) be an orthonormal basis in V (see e.g. [11]) satisfying

$$\begin{aligned} (v,e_i)_{{\widetilde{W}}}=\lambda _i(v,e_i)_V, \quad \forall v \in {\widetilde{W}}, \quad i \in {\mathbb {N}}, \end{aligned}$$
(4.3)

where the sequence \(\{\lambda _i\}\) of the corresponding eigenvalues fulfils the properties: \(\lambda _i >0, \forall i\in {\mathbb {N}},\) and \(\lambda _i \rightarrow \infty \) as \(i \rightarrow \infty \). Note that \(\{{\widetilde{e}}_i=\dfrac{1}{\sqrt{\lambda _i}}e_i\}\) is an orthonormal basis for \({{\widetilde{W}}}\). Let us consider

$$\begin{aligned} y_{n,0}=\sum _{i=1}^n(y_0,e_i)_Ve_i=\sum _{i=1}^n (y_0,{\widetilde{e}}_i)_{{\widetilde{W}}}{\widetilde{e}}_i. \end{aligned}$$

Let \(W_n=span\{e_1, e_2,\ldots , e_n\}\) and set \(y_n=\displaystyle \sum _{ i=1}^nc_i(t)e_i\), then the approximation of (4.2) reads

$$\begin{aligned} \left\{ \begin{aligned}&d(v_n,e_i)\\&\quad =\big (\nu \Delta y_n-\theta _M(y_n)(y_n\cdot \nabla )v_n -\sum _{j}\theta _M(y_n)v_n^j\nabla y^j_n+(\alpha _1+\alpha _2) \theta _M(y_n)\text {div}(A_n^2) \\&\qquad +\beta \theta _M(y_n) \text {div}(|A_n|^2A_n)+U, e_i\big )dt+ \big (\theta _M(y_n)G(\cdot ,y_n),e_i\big ) d{\mathcal {W}}, \forall i=1,\ldots ,n,\\&y_n(0)=y_{n,0}, \end{aligned}\right. \end{aligned}$$
(4.4)

where \(v_n=y_n-\alpha _1\Delta y_n\) and \(A_n:=A(y_n)=\nabla y_n+(\nabla y_n)^T.\) Set \(U:=(H^4(D))^d\cap W\) and denote by \(P_n\) the projection operator from \(U^\prime \) to \(W_n\) defined by \(P_n:U^\prime \rightarrow W_n;\quad u\mapsto P_nu=\sum _{i=1}^n\langle u,e_i\rangle _{U^\prime ,U} e_i. \) In particular, the restriction of \(P_n\) to V, denoted in the same way, is the \((\cdot ,\cdot )_V\)-orthogonal projection from V onto \(W_n\), given by \(P_n:V\rightarrow W_n;\quad u\mapsto P_nu=\displaystyle \sum _{i=1}^n( u,e_i)_{V} e_i. \) Denote by \(P_n^*\) the adjoint of \(P_n\).

Notice that the restricted projection operator \(P_n\) is linear and continuous on \({\widetilde{W}}\). Moreover

$$\begin{aligned} \Vert P_n y_0\Vert _V=\Vert y_n(0)\Vert _V \le \Vert y_0\Vert _V \text { and } \Vert P_n y_0\Vert _{{{\widetilde{W}}}}=\Vert y_n(0)\Vert _{{{\widetilde{W}}}} \le \Vert y_0\Vert _{{{\widetilde{W}}}}. \end{aligned}$$

Thanks to the Lebesgue dominated convergence theorem, we have \( P_n y_0 \rightarrow y_0 \text { in } L^{q}(\Omega ,{\widetilde{W}})\cap L^{q}(\Omega ,V); \quad \forall q\in [1,\infty [\).
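In finite dimensions, the two displayed properties of \(P_n\) (norm non-increase and convergence to the identity as \(n\) grows) can be checked directly. The sketch below uses the standard basis of \({\mathbb {R}}^d\) as a stand-in for \(\{e_i\}\); the helper names `project` and `norm` are purely illustrative.

```python
def project(u, n):
    """P_n u = sum_{i <= n} (u, e_i) e_i, with {e_i} the standard basis of R^d.

    A finite-dimensional stand-in for the Galerkin projection: keep the
    first n coordinates and zero out the rest.
    """
    return [x if i < n else 0.0 for i, x in enumerate(u)]

def norm(u):
    """Euclidean norm, playing the role of the V-norm."""
    return sum(x * x for x in u) ** 0.5

u = [3.0, -4.0, 12.0]
assert norm(project(u, 2)) <= norm(u)   # orthogonal projection never increases the norm
assert project(u, 3) == u               # P_n -> identity once n reaches the dimension
```

The same orthogonality is what gives \(\Vert P_n y_0\Vert _V\le \Vert y_0\Vert _V\) above.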

We will use the Banach fixed point theorem to show the existence of a solution to (4.4) on the whole interval [0, T]. To this end, consider the following mapping

$$\begin{aligned} u \mapsto {\mathcal {S}}u:W_n&\rightarrow W_n,\nonumber \\ ({\mathcal {S}}u,e_i)_V&= (y_0,e_i)_V+\nu \int _0^\cdot (\Delta u,e_i) dt-\int _0^\cdot \theta _M(u)\big ((u\cdot \nabla )v(u),e_i\big )dt\nonumber \\&\quad -\sum _{j}\int _0^\cdot \theta _M(u)\big (v(u)^j\nabla u^j,e_i\big )dt\nonumber \\&\quad +(\alpha _1+\alpha _2)\int _0^\cdot \theta _M(u) \big (\text {div}(A(u)^2),e_i\big )dt\nonumber \\&\quad +\beta \int _0^\cdot \theta _M(u)\big (\text {div}(|A(u)|^2A(u)), e_i\big )dt+\int _0^\cdot (U,e_i)dt \nonumber \\&\quad +\int _0^\cdot \theta _M(u)\big (G(\cdot ,u),e_i \big )d{\mathcal {W}}, \quad i=1,\ldots ,n. \end{aligned}$$
(4.5)

Lemma 6

There exists \(T^* >0\) such that \({\mathcal {S}}\) is a contraction on \({\textbf{X}}=L^2(\Omega ;{\mathcal {C}}([0,T^*],W_n))\).

Proof

Let us recall that \(W^{2,q}(D)\hookrightarrow W^{1,\infty }(D)\cap W^{2,4}(D),\; 4 \le q <6,\) and all norms in \(W_n\) are equivalent, which we will use repeatedly in the following. Let \(u_1,u_2 \in W_n\), then we have

$$\begin{aligned} ({\mathcal {S}}u_1-{\mathcal {S}}u_2,e_i)_V&=\nu \int _0^\cdot (\Delta (u_1-u_2),e_i) dt-\int _0^\cdot \big (\{\theta _M(u_1)(u_1\cdot \nabla )v(u_1)\\&\quad -\theta _M(u_2)(u_2\cdot \nabla )v(u_2)\},e_i\big )dt\\&\quad -\sum _{j}\int _0^\cdot \big ([\theta _M(u_1)v(u_1)^j\nabla u_1^j-\theta _M(u_2)v(u_2)^j\nabla u_2^j],e_i\big )dt\\&\quad +(\alpha _1+\alpha _2)\int _0^\cdot \big (\theta _M(u_1)\text {div} (A(u_1)^2)-\theta _M(u_2)\text {div}(A(u_2)^2),e_i\big )dt\\&\quad +\beta \int _0^\cdot \big (\theta _M(u_1)\text {div}(|A(u_1)|^2A(u_1)) -\theta _M(u_2)\text {div}(|A(u_2)|^2A(u_2)),e_i\big )dt\\&\quad +\int _0^\cdot \big (\theta _M(u_1)G(\cdot ,u_1)-\theta _M(u_2)G(\cdot , u_2),e_i\big )d{\mathcal {W}}, \quad i=1,\ldots ,n. \end{aligned}$$

Itô's formula ensures that

$$\begin{aligned} ({\mathcal {S}}u_1-{\mathcal {S}}u_2,e_i)_V^2&=2\nu \int _0^\cdot ({\mathcal {S}}u_1-{\mathcal {S}}u_2,e_i)_V (\Delta (u_1-u_2),e_i) dt\\&\quad -2\int _0^\cdot ({\mathcal {S}}u_1-{\mathcal {S}}u_2,e_i)_V \big (\{\theta _M(u_1)(u_1\cdot \nabla )v(u_1)\\&\quad -\theta _M(u_2)(u_2 \cdot \nabla )v(u_2)\},e_i\big )dt\\&\quad -2\sum _{j}\int _0^\cdot ({\mathcal {S}}u_1-{\mathcal {S}}u_2,e_i)_V \big ([\theta _M(u_1)v(u_1)^j\nabla u_1^j\\&\quad -\theta _M(u_2)v(u_2)^j \nabla u_2^j],e_i\big )dt\\&\quad +2(\alpha _1+\alpha _2)\int _0^\cdot ({\mathcal {S}}u_1-{\mathcal {S}} u_2,e_i)_V\big (\theta _M(u_1)\text {div}(A(u_1)^2)\\&\quad -\theta _M(u_2)\text {div}(A(u_2)^2),e_i\big )dt\\&\quad +2\beta \int _0^\cdot ({\mathcal {S}}u_1-{\mathcal {S}}u_2,e_i)_V \big (\theta _M(u_1)\text {div}(|A(u_1)|^2A(u_1))\\&\quad -\theta _M(u_2) \text {div}(|A(u_2)|^2A(u_2)),e_i\big )dt\\&\quad +2\int _0^\cdot ({\mathcal {S}}u_1-{\mathcal {S}}u_2,e_i)_V \big (\theta _M(u_1)G(\cdot ,u_1)-\theta _M(u_2)G(\cdot ,u_2),e_i \big )d{\mathcal {W}}\\&\quad +\sum _{{\textbf{k}}\ge 1}\int _0^\cdot (\theta _M(u_1)\sigma _{\textbf{k}}( \cdot ,u_1)-\theta _M(u_2)\sigma _{\textbf{k}}(\cdot ,u_2),e_i)^2 dt, \quad i=1,\ldots ,n. \end{aligned}$$

Summing up from \(i=1\) to n, we deduce

$$\begin{aligned}&\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{W_n}^2\\&\quad :=\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _V^2\\&\quad =2\nu \int _0^\cdot (P_n\Delta (u_1-u_2),{\mathcal {S}}u_1-{\mathcal {S}}u_2) dt\\&\qquad -2\int _0^\cdot \big (P_n\{\theta _M(u_1)(u_1\cdot \nabla )v(u_1)-\theta _M(u_2)(u_2\cdot \nabla )v(u_2)\},{\mathcal {S}}u_1-{\mathcal {S}}u_2\big )dt\\&\qquad -2\sum _{j}\int _0^\cdot \big (P_n[\theta _M(u_1)v(u_1)^j\nabla u_1^j-\theta _M(u_2)v(u_2)^j\nabla u_2^j],{\mathcal {S}}u_1-{\mathcal {S}}u_2\big )dt\\&\qquad +2(\alpha _1+\alpha _2)\int _0^\cdot \big (P_n(\theta _M(u_1)\text {div}(A(u_1)^2)-\theta _M(u_2)\text {div}(A(u_2)^2)),{\mathcal {S}}u_1-{\mathcal {S}}u_2\big )dt\\&\qquad +2\beta \int _0^\cdot \big (P_n(\theta _M(u_1)\text {div}(|A(u_1)|^2A(u_1))-\theta _M(u_2)\text {div}(|A(u_2)|^2A(u_2))),{\mathcal {S}}u_1-{\mathcal {S}}u_2\big )dt\\&\qquad +2\int _0^\cdot \big (P_n(\theta _M(u_1)G(\cdot ,u_1)-\theta _M(u_2)G(\cdot ,u_2)),{\mathcal {S}}u_1-{\mathcal {S}}u_2\big )d{\mathcal {W}}\\&\qquad +\sum _{{\textbf{k}}\ge 1}\sum _{i=1}^n\int _0^\cdot (\theta _M(u_1)\sigma _{\textbf{k}}(\cdot ,u_1)-\theta _M(u_2)\sigma _{\textbf{k}}(\cdot ,u_2),e_i)^2 dt\\&\quad =I_1+I_2+I_3+I_4+I_5+I_6+I_7. \end{aligned}$$

Let us consider \(\delta >0\) and \( T^*>0\) (to be chosen later). We have

$$\begin{aligned} {\mathbb {E}}\sup _{[0,T^*]}\vert I_1 \vert&= 2{\mathbb {E}}\sup _{r\in [0,T^*]}\vert \int _0^r (P_n\Delta (u_1-u_2),{\mathcal {S}}u_1-{\mathcal {S}}u_2) ds \vert \\&\le 2{\mathbb {E}}\int _0^{T^*}\Vert \Delta (u_1-u_2)\Vert _{2}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{2} ds\\ {}&\le \delta {\mathbb {E}}\sup _{[0,T^*]}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{2}^2+C_\delta T^*{\mathbb {E}}\sup _{[0,T^*]}\Vert u_1-u_2\Vert _{H^2}^2\\&\le \delta {\mathbb {E}}\sup _{[0,T^*]}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{W_n}^2+C_\delta (n) T^*{\mathbb {E}}\sup _{[0,T^*]}\Vert u_1-u_2\Vert _{W_n}^2. \end{aligned}$$

In order to estimate \(I_2\), we notice that

$$\begin{aligned}&\big (\{\theta _M(u_1)(u_1\cdot \nabla )v(u_1)-\theta _M(u_2)(u_2\cdot \nabla )v(u_2)\},P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2)\big )\\&\quad =-[\theta _M(u_1)-\theta _M(u_2)]b(u_1,P_n^*({\mathcal {S}} u_1-{\mathcal {S}}u_2),v(u_1))\\&\qquad -\theta _M(u_2)[b(u_1-u_2,P_n^*({\mathcal {S}}u_1 -{\mathcal {S}}u_2),v(u_1))\\&\qquad -b(u_2,P_n^*({\mathcal {S}}u_1 -{\mathcal {S}}u_2),v(u_1)-v(u_2))]\\&\quad \le K(M)\Vert u_1-u_2\Vert _{W^{2,4}}\Vert u_1\Vert _{4} \Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _V\Vert u_1\Vert _{W^{2,4}}\\&\qquad +\Vert u_1-u_2\Vert _{4}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _V\Vert u_1\Vert _{W^{2,4}}\\&\qquad +\Vert u_2\Vert _{4}\Vert {\mathcal {S}}u_1 -{\mathcal {S}}u_2\Vert _V\Vert u_1-u_2\Vert _{W^{2,4}}\\&\quad \le K(M,n)\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{W_n}\Vert u_1-u_2\Vert _{W_n}. \end{aligned}$$

Concerning \(I_3\), we write

$$\begin{aligned}&\sum _{j}\big ([\theta _M(u_1)v(u_1)^j\nabla u_1^j-\theta _M(u_2) v(u_2)^j\nabla u_2^j],P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2)\big )\\&\quad =[\theta _M(u_1)-\theta _M(u_2)]b(P_n^*({\mathcal {S}}u_1 -{\mathcal {S}}u_2),u_1,v(u_1))\\&\qquad +\theta _M(u_2)[b(P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2), u_1,v(u_1)-v(u_2))\\&\qquad +b(P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2),u_1-u_2,v(u_2))]\\&\quad \le K(M,n)\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2 \Vert _{W_n}\Vert u_1-u_2\Vert _{W_n}. \end{aligned}$$

Therefore, we infer that

$$\begin{aligned} {\mathbb {E}}\sup _{[0,T^*]}\vert I_2+I_3\vert \le \delta {\mathbb {E}}\sup _{[0,T^*]}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{W_n}^2+C_\delta K^2(M,n)T^*{\mathbb {E}}\sup _{[0,T^*]} \Vert u_1-u_2\Vert _{W_n}^2. \end{aligned}$$

For \(I_4, I_5\), we have

$$\begin{aligned}&\big (\theta _M(u_1)\text {div}(A(u_1)^2)-\theta _M(u_2)\text {div}(A(u_2)^2),P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2)\big )\\&\quad =\big ([\theta _M(u_1)-\theta _M(u_2)]\text { div} (A(u_1)^2),P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2)\big )\\&\qquad +\theta _M(u_2)\big ( \text { div}([A(u_1)-A(u_2)] A(u_1))\\&\qquad +\text { div}(A(u_2)[A(u_1)-A(u_2)]),P_n^*({\mathcal {S}}u_1 -{\mathcal {S}}u_2))\big )\\&\quad \le \vert \theta _M(u_1)-\theta _M(u_2)\vert \Vert u_1\Vert _{W^{1,\infty }}\Vert u_1\Vert _{H^2}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{2}\\&\qquad +(\Vert u_1\Vert _{W^{1,\infty }}+\Vert u_2\Vert _{W^{1, \infty }})\Vert u_1-u_2\Vert _{H^2}\Vert {\mathcal {S}}u_1 -{\mathcal {S}}u_2\Vert _{2}\\&\qquad +(\Vert u_1\Vert _{H^2}+\Vert u_2\Vert _{H^2})\Vert u_1 -u_2\Vert _{W^{1,\infty }}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{2}\\&\quad \le K(M)\Vert u_1-u_2\Vert _{W^{2,4}} \Vert u_1 \Vert _{W^{1,\infty }}\Vert u_1\Vert _{H^2}\Vert {\mathcal {S}}u_1 -{\mathcal {S}}u_2\Vert _{2}\\&\qquad +(\Vert u_1\Vert _{W^{1,\infty }}+\Vert u_2\Vert _{W^{1, \infty }})\Vert u_1-u_2\Vert _{H^2}\Vert {\mathcal {S}}u_1 -{\mathcal {S}}u_2\Vert _{2}\\&\qquad +(\Vert u_1\Vert _{H^2}+\Vert u_2\Vert _{H^2})\Vert u_1 -u_2\Vert _{W^{1,\infty }}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{2}\\&\quad \le K(M,n)\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2 \Vert _{W_n}\Vert u_1-u_2\Vert _{W_n}. \end{aligned}$$

On the other hand, we notice that

$$\begin{aligned}&\big (\theta _M(u_1)\text {div}(|A(u_1)|^2A(u_1))-\theta _M(u_2) \text {div}(|A(u_2)|^2A(u_2)),P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2)\big )\\&\quad =(\theta _M(u_1)-\theta _M(u_2))\big (\text {div}(|A(u_1)|^2A(u_1) ),P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2)\big )\\&\qquad +\theta _M(u_2)\big (\text {div}(|A(u_1)|^2A(u_1-u_2)), P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2)\big )\\&\qquad +\theta _M(u_2)\big (\text {div}([A(u_1)\cdot A(u_1-u_2)\\&\qquad +A(u_1-u_2)\cdot A(u_2)]A(u_2)),P_n^*({\mathcal {S}}u_1-{\mathcal {S}}u_2)\big )\\&\quad \le K(M)\Vert u_1-u_2\Vert _{W^{2,4}}\Vert u_1\Vert _{W^{1, \infty }}^2\Vert u_1\Vert _{H^2}\Vert {\mathcal {S}}u_1-{\mathcal {S}} u_2\Vert _{2}\\&\qquad +C(\Vert u_2\Vert _{W^{1,\infty }}\Vert u_1\Vert _{H^2} +\Vert u_1\Vert _{W^{1,\infty }}\Vert u_2\Vert _{H^2})\Vert u_1 -u_2\Vert _{W^{1,\infty }}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{2}\\&\qquad +C(\Vert u_1\Vert _{W^{1,\infty }}+\Vert u_2\Vert _{W^{1, \infty }})\Vert u_2\Vert _{W^{1,\infty }}\Vert u_1-u_2\Vert _{H^2} \Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{2}\\&\qquad +C\Vert u_1-u_2\Vert _{W^{1,\infty }}\Vert u_2\Vert _{H^2} \Vert u_2\Vert _{W^{1,\infty }}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{2}\\&\quad \le K(M,n)\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{W_n} \Vert u_1-u_2\Vert _{W_n}. \end{aligned}$$

Therefore

$$\begin{aligned} {\mathbb {E}}\sup _{[0,T^*]}\vert I_4+I_5\vert \le \delta {\mathbb {E}}\sup _{[0,T^*]}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{W_n}^2+C_\delta K^2(M,n)T^*{\mathbb {E}}\sup _{[0,T^*]} \Vert u_1-u_2\Vert _{W_n}^2. \end{aligned}$$

Let \({{\widetilde{\sigma }}}_{\textbf{k}}\) be the solution of (2.5) with RHS \(f_{\textbf{k}}=\theta _M(u_1)\sigma _{\textbf{k}}(\cdot ,u_1)-\theta _M(u_2)\sigma _{\textbf{k}}(\cdot ,u_2), \;{\textbf{k}}\in {\mathbb {N}}^*\). Then, by using the variational formulation (2.6) and the fact that \((e_i)_i\) is an orthonormal basis for V, it follows that

$$\begin{aligned}&\sum _{i=1}^n\int _0^\cdot (\theta _M(u_1)\sigma _{\textbf{k}}(\cdot ,u_1) -\theta _M(u_2)\sigma _{\textbf{k}}(\cdot ,u_2),e_i)^2dt\\&\quad =\sum _{i=1}^n \int _0^\cdot ({{\widetilde{\sigma }}}_{\textbf{k}},e_i)_V^2 dt = \int _0^\cdot \Vert P_n{{\widetilde{\sigma }}}_{\textbf{k}}\Vert _V^2dt \le \int _0^\cdot \Vert {{\widetilde{\sigma }}}_{\textbf{k}}\Vert _V^2dt\\&\quad \le K \int _0^\cdot \Vert \theta _M(u_1)\sigma _{\textbf{k}}(\cdot ,u_1) -\theta _M(u_2)\sigma _{\textbf{k}}(\cdot ,u_2) \Vert _2^2dt. \end{aligned}$$

Taking into account (2.7), we derive

$$\begin{aligned} {\mathbb {E}}\sup _{[0,T^*]}\vert I_7\vert&\le K {\mathbb {E}}\sum _{{\textbf{k}}\ge 1}\int _0^{T^*} \Vert \theta _M(u_1)\sigma _{\textbf{k}}(\cdot ,u_1)-\theta _M(u_2)\sigma _{\textbf{k}}(\cdot ,u_2) \Vert _2^2dt\nonumber \\&\le K {\mathbb {E}}\sum _{{\textbf{k}}\ge 1}\int _0^{T^*} \vert \theta _M(u_1)-\theta _M(u_2)\vert ^2\Vert \sigma _{\textbf{k}}(\cdot ,\theta _{2M}(u_1)u_1) \Vert _2^2dt \nonumber \\&\quad +K {\mathbb {E}}\sum _{{\textbf{k}}\ge 1}\int _0^{T^*} |\theta _M(u_2)|^2\Vert \sigma _{\textbf{k}}(\cdot ,\theta _{2M}(u_1)u_1)-\sigma _{\textbf{k}}(\cdot ,\theta _{2M}(u_2)u_2) \Vert _2^2dt \nonumber \\&\le K(M) {\mathbb {E}}\int _0^{T^*}(\Vert u_1-u_2\Vert _{W^{2,4}}^2+\Vert u_1-u_2\Vert _{2}^2)dt\nonumber \\&\le K(M,n)T^*{\mathbb {E}}\sup _{[0,T^*]}\Vert u_1-u_2\Vert _{W_n}^2, \end{aligned}$$
(4.6)

where we used the fact that all the norms are equivalent on \(W_n\).

Using the Burkholder–Davis–Gundy and the Young inequalities and thanks to (4.6), we deduce the following relation, for any \(\delta >0\)

$$\begin{aligned} {\mathbb {E}}\sup _{[0,T^*]}\vert I_6\vert&= 2{\mathbb {E}}\sup _{r\in [0,T^*]}\vert \int _0^r\big (P_n(\theta _M(u_1)G(\cdot ,u_1)-\theta _M(u_2)G (\cdot ,u_2)),{\mathcal {S}}u_1-{\mathcal {S}}u_2\big )d{\mathcal {W}}\vert \\&\le 2{\mathbb {E}}\big [\sum _{{\textbf{k}}\ge 1}\int _0^{T^*}\Vert \theta _M(u_1) \sigma _{\textbf{k}}(\cdot ,u_1)-\theta _M(u_2)\sigma _{\textbf{k}}(\cdot ,u_2) \Vert _{2}^2\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2 \Vert _2^2ds\big ]^{1/2}\\&\le \delta {\mathbb {E}}\sup _{[0,T^*]} \Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _2^2\\&\quad +C_\delta {\mathbb {E}}\sum _{{\textbf{k}}\ge 1}\int _0^{T^*} \Vert \theta _M(u_1)\sigma _{\textbf{k}}(\cdot ,u_1) -\theta _M(u_2)\sigma _{\textbf{k}}(\cdot ,u_2) \Vert _2^2dt\\&\le \delta {\mathbb {E}}\sup _{[0,T^*]}\Vert {\mathcal {S}}u_1 -{\mathcal {S}}u_2\Vert _2^2+K(M,n)T^*{\mathbb {E}}\sup _{[0,T^*]} \Vert u_1-u_2\Vert _{W_n}^2,\\&\le \delta {\mathbb {E}}\sup _{[0,T^*]}\Vert {\mathcal {S}}u_1 -{\mathcal {S}}u_2\Vert _{W_n}^2+K(M,n)T^*{\mathbb {E}}\sup _{[0,T^*]} \Vert u_1-u_2\Vert _{W_n}^2. \end{aligned}$$

Gathering the previous estimates and choosing an appropriate value for \(\delta \), we deduce the existence of \(K(M,n)>0\) such that

$$\begin{aligned}&{\mathbb {E}}\sup _{[0,T^*]}\Vert {\mathcal {S}}u_1-{\mathcal {S}}u_2\Vert _{W_n}^2 \le K(M,n)T^*{\mathbb {E}}\sup _{[0,T^*]}\Vert u_1-u_2\Vert _{W_n}^2. \end{aligned}$$
(4.7)

The inequality (4.7) shows that \({\mathcal {S}}\) is a contraction on \({\textbf{X}}\) for some deterministic time \(T^*>0\). Hence, there exists a unique \({\mathcal {F}}_t\)-adapted function \(y_n\) defined on \(\Omega \) with values in \( {\mathcal {C}}([0,T^*],W_n)\). Furthermore, \(y_n\) is a predictable stochastic process with values in \(W_n\). \(\square \)

Finally, a standard argument using the decomposition of the interval [0, T] into a finite number of small subintervals (e.g. of length \(\frac{T}{2K(M,n)}\)) and gluing the corresponding solutions yields the next lemma.
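The gluing step can be sketched as follows (a schematic rendering of the standard argument, in our notation): by (4.7), any deterministic \(T^*>0\) with \(K(M,n)\,T^*<1\) makes \({\mathcal {S}}\) a strict contraction,

```latex
\mathbb{E}\sup_{[0,T^*]}\|\mathcal{S}u_1-\mathcal{S}u_2\|_{W_n}^2
  \le K(M,n)\,T^*\,\mathbb{E}\sup_{[0,T^*]}\|u_1-u_2\|_{W_n}^2,
  \qquad K(M,n)\,T^*<1,
```

so Banach's theorem yields a unique fixed point on \([0,T^*]\); solving again on \([T^*,2T^*]\) with initial datum \(y_n(T^*)\), and repeating finitely many times, covers the whole interval \([0,T]\).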

Lemma 7

There exists a unique predictable solution \(y_n\in L^2(\Omega ;{\mathcal {C}}([0,T];W_n)) \) of (4.4).

4.2 A priori estimates

For each \(N\in {\mathbb {N}}\), let us define the following sequence of stopping times

$$\begin{aligned} \tau _N^n:=\inf \{t\ge 0: \Vert y_n(t)\Vert _{V} \ge N \}\wedge T. \end{aligned}$$
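A stopping time of this hitting type is easy to picture on a discrete path. The sketch below is a toy analogue only; the helper `first_exit_time` and the linear toy path are our own illustrative choices.

```python
def first_exit_time(norms, dt, N, T):
    """Discrete analogue of tau_N^n = inf{t >= 0 : ||y_n(t)||_V >= N} ^ T.

    `norms` holds the values ||y_n(k*dt)||_V on a uniform grid of [0, T];
    we return the first grid time at which the threshold N is reached,
    capped at the horizon T (the `^ T` in the definition).
    """
    for k, value in enumerate(norms):
        if value >= N:
            return min(k * dt, T)
    return T

# Toy path whose norm grows by one per unit time on [0, 100].
norms = list(range(101))
assert first_exit_time(norms, 1.0, N=50, T=100.0) == 50.0
assert first_exit_time(norms, 1.0, N=500, T=100.0) == 100.0  # threshold never reached
```

The estimates that follow are first derived on \([0,\tau_N^n]\), where \(\Vert y_n\Vert_V\) is bounded by N, and then extended to \([0,T]\) by letting \(N\rightarrow \infty\).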

Setting

$$\begin{aligned} f_n&=f(y_n)\nonumber \\&= \nu \Delta y_n+\{-(y_n\cdot \nabla )v_n-\sum _{j=1}^dv_n^j\nabla y^j_n+(\alpha _1+\alpha _2)\text {div}(A_n^2) \nonumber \\&\quad +\beta \text {div}(|A_n|^2A_n)\}\theta _M(y_n)+U, \end{aligned}$$
(4.8)

and using (4.4), we infer, for each \(i=1,\ldots ,n\),

$$\begin{aligned} d(y_n,e_i)_V&=(f_n,e_i)dt+\theta _M(y_n)(G(\cdot ,y_n),e_i)d{\mathcal {W}}\nonumber \\&:=(f_n,e_i)dt+\theta _M(y_n)\sum _{{\textbf{k}}\ge 1}(\sigma _{\textbf{k}}(\cdot ,y_n),e_i)d\beta _{\textbf{k}}. \end{aligned}$$
(4.9)

Applying Itô’s formula, we deduce

$$\begin{aligned} d(y_n,e_i)_V^2&=2(y_n,e_i)_V(f_n,e_i)dt+2(y_n,e_i)_V\theta _M(y_n) (G(\cdot ,y_n),e_i)d{\mathcal {W}}\\&\quad +\sum _{k\ge 1} (\sigma _k(\cdot ,y_n),e_i)^2dt. \end{aligned}$$

Summing over \(i=1,\ldots ,n\), we obtain

$$\begin{aligned} \Vert y_n(s)\Vert _V^2-\Vert y_{n,0}\Vert _V^2&=2\int _0^s(f_n,y_n)dt+2\int _0^s\theta _M(y_n) (G(\cdot ,y_n),y_n)d{\mathcal {W}}\\&\quad +\int _0^s (\theta _M(y_n))^2\sum _{i=1}^n\sum _{k\ge 1} (\sigma _k(\cdot ,y_n),e_i)^2dt\\&=J_1+J_2+J_3, \qquad \forall s\in [0,\tau _N^n]. \end{aligned}$$

By using integration by parts and the Navier boundary conditions (2.1)\(_3\), we derive

$$\begin{aligned} J_1&=2\int _0^s(f_n,y_n)dt\\&=-4\nu \int _0^s\Vert D y_n\Vert _{2}^2dt+2\int _0^s(U,y_n)dt\\&\quad -2\int _0^s\theta _M(y_n)[b(y_n,v_n,y_n)+b(y_n,y_n,v_n)] dt\\&\quad +2(\alpha _1+\alpha _2)\int _0^s\theta _M(y_n)(\text {div} (A_n^2),y_n)dt\\&\quad + 2\beta \int _0^s\theta _M(y_n)(\text {div}(|A_n|^2A_n),y_n)dt\\&=-4\nu \int _0^s\Vert D y_n\Vert _{2}^2dt+2\int _0^s(U,y_n)dt -2(\alpha _1+\alpha _2)\int _0^s\theta _M(y_n)(A_n^2,\nabla y_n)dt\\&\quad -\beta \int _0^s\theta _M(y_n)\int _D|A_n|^4dxdt\\&\le -4\nu \int _0^s\Vert D y_n\Vert _{2}^2dt+\int _0^s\Vert U \Vert _2^2dt+\int _0^s\Vert y_n\Vert _2^2dt\\&\quad -\dfrac{\beta }{2}\int _0^s\theta _M(y_n)\int _D|A_n|^4dxdt\\&\quad +C(\alpha _1,\alpha _2,\beta )\int _0^s\Vert y_n \Vert _{H^1}^2dt. \end{aligned}$$

Concerning \(J_3\), let \({{\widetilde{\sigma }}}_{\textbf{k}}^n\) be the solution of (2.5) with RHS \(f=\sigma _{\textbf{k}}(\cdot ,y_n), \;{\textbf{k}}\in {\mathbb {N}}^*\). By using the variational formulation (2.6) and Theorem 1, we get

$$\begin{aligned}&\int _0^s (\theta _M(y_n))^2\sum _{i=1}^n\sum _{{\textbf{k}}\ge 1} (\sigma _{{\textbf{k}}}(\cdot ,y_n),e_i)^2dt\\&=\int _0^s (\theta _M(y_n))^2\sum _{i=1}^n\sum _{{\textbf{k}}\ge 1} ({{\widetilde{\sigma }}}_{\textbf{k}}^n,e_i)_V^2dt= \int _0^s (\theta _M(y_n))^2\sum _{{\textbf{k}}\ge 1} \Vert P_n{{\widetilde{\sigma }}}_k^n\Vert _V^2dt\\&\le \int _0^s \sum _{{\textbf{k}}\ge 1} \Vert {{\widetilde{\sigma }}}_k^n\Vert _{V}^2dt\le C \int _0^s\sum _{{\textbf{k}}\ge 1} \Vert \sigma _{{\textbf{k}}}(\cdot ,y_n)\Vert _2^2dt \le C(L) \int _0^s \Vert y_n\Vert _2^2dt. \end{aligned}$$

Let us estimate the stochastic term \(J_2\). By using the Burkholder–Davis–Gundy and Young inequalities, for any \(\delta >0\) we can write

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0,\tau _N^n]}\vert \int _0^s\theta _M(y_n)(G(\cdot ,y_n),y_n)d{\mathcal {W}}\vert&\le C{\mathbb {E}}\big [\sum _{{\textbf{k}}\ge 1}\int _0^{\tau _N^n}\Vert \theta _M(y_n) \sigma _{\textbf{k}}(\cdot ,y_n)\Vert _{2}^2\Vert y_n\Vert _2^2ds\big ]^{1/2}\\&\le C{\mathbb {E}}\big [\sum _{{\textbf{k}}\ge 1}\int _0^{\tau _N^n}\Vert \sigma _{\textbf{k}}(\cdot , y_n)\Vert _{2}^2\Vert y_n\Vert _2^2ds\big ]^{1/2}\\&\le \delta {\mathbb {E}}\sup _{s\in [0,\tau _N^n]} \Vert y_n\Vert _V^2+C_\delta L \int _0^{\tau _N^n} \Vert y_n \Vert _2^2dt. \end{aligned}$$

Hence, an appropriate choice of \(\delta \) ensures

$$\begin{aligned}&{\mathbb {E}}\sup _{s\in [0,\tau _N^n]} \Vert y_n\Vert _V^2+4\nu {\mathbb {E}}\int _0^{\tau _N^n}\Vert D y_n\Vert _{2}^2dt+\dfrac{\beta }{2}{\mathbb {E}}\int _0^{\tau _N^n}\theta _M(y_n)\int _D|A_n|^4dxdt \nonumber \\&\quad \le {\mathbb {E}}\Vert y_{n,0}\Vert _V^2 +{\mathbb {E}}\int _0^{T}\Vert U\Vert _2^2dt+C(\alpha _1,\alpha _2,\beta ,L){\mathbb {E}}\int _0^{\tau _N^n}\Vert y_n \Vert _{H^1}^2dt. \end{aligned}$$

Then, Gronwall's inequality gives

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0,\tau _N^n]} \Vert y_n\Vert _V^2 \le e^{CT} ({\mathbb {E}}\Vert y_{n,0}\Vert _V^2 +{\mathbb {E}}\int _0^{T}\Vert U\Vert _2^2dt). \end{aligned}$$

Let us fix \(n\in {\mathbb {N}}\). We notice that

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0,\tau _N^n]} \Vert y_n\Vert _V^2 \ge {\mathbb {E}}(\sup _{s\in [0,\tau _N^n]} 1_{\{ \tau _N^n<T\}} \Vert y_n\Vert _V^2) \ge N^2P(\tau _N^n <T), \end{aligned}$$

which implies that \(\tau _N^n \rightarrow T\) in probability, as \(N\rightarrow \infty \). Then there exists a subsequence, denoted in the same way, such that

$$\begin{aligned} \tau _N^n \rightarrow T \quad \text { a.s. as } N \rightarrow \infty . \end{aligned}$$

Since the sequence \(\{\tau _N^n\}_N\) is monotone, the monotone convergence theorem allows us to pass to the limit, as \(N\rightarrow \infty ,\) and deduce that

$$\begin{aligned}{} & {} {\mathbb {E}}\sup _{s\in [0,T]} \Vert y_n\Vert _V^2+4\nu {\mathbb {E}}\int _0^{T}\Vert D y_n\Vert _{2}^2dt+\dfrac{\beta }{2}{\mathbb {E}}\int _0^{T}\theta _M(y_n)\int _D|A_n|^4dxdt\nonumber \\{} & {} \quad \le e^{cT} ({\mathbb {E}}\Vert y_{0}\Vert _V^2 +{\mathbb {E}}\int _0^{T}\Vert U\Vert _2^2dt). \end{aligned}$$
(4.10)

In order to get \({\widetilde{W}}\)-regularity for the solution of (4.4), we define the following sequence of stopping times

$$\begin{aligned} {\textbf{t}}_N^n=\inf \{t\ge 0: \Vert y_n(t)\Vert _{{\widetilde{W}}} \ge N \}\wedge T, \quad N\in {\mathbb {N}}. \end{aligned}$$

Let \({\widetilde{\sigma }}_{{\textbf{k}}}^n,\;{\widetilde{f}}_n\) be the solutions of (2.5) with RHS \(f=\sigma _{{\textbf{k}}}(\cdot ,y_n),\; f=f_n\), respectively. Since \(e_i\in V\), by using the variational formulation (2.6) we write

$$\begin{aligned} ({\widetilde{f}}_n,e_i)_V=(f_n,e_i), \quad ({\widetilde{\sigma }}_{{\textbf{k}}}^n,e_i)_V=(\sigma _{\textbf{k}}(\cdot ,y_n),e_i). \end{aligned}$$

Now, by multiplying (4.9) by \(\lambda _i\) and using (4.3), we write

$$\begin{aligned} d(y_n,e_i)_{{\widetilde{W}}}&=({{\widetilde{f}}}_n,e_i )_{{\widetilde{W}}}dt+\theta _M(y_n)\sum _{{\textbf{k}}\ge 1}({{\widetilde{\sigma }}}_{\textbf{k}}^n,e_i)_{{\widetilde{W}}}d\beta _{\textbf{k}}. \end{aligned}$$

Now, Itô's formula ensures that

$$\begin{aligned} d(y_n,e_i)_{{\widetilde{W}}}^2&=2(y_n,e_i)_{{\widetilde{W}}} ({{\widetilde{f}}}_n,e_i)_{{\widetilde{W}}}dt+2(y_n,e_i)_{{\widetilde{W}}} \theta _M(y_n)\sum _{{\textbf{k}}\ge 1}({{\widetilde{\sigma }}}_{\textbf{k}}^n,e_i)_{{\widetilde{W}}}d\beta _{\textbf{k}}\\&\quad +(\theta _M(y_n))^2\sum _{{\textbf{k}}\ge 1}({{\widetilde{\sigma }}}_{\textbf{k}}^n,e_i)_{{\widetilde{W}}}^2dt. \end{aligned}$$

By multiplying the last equality by \(\dfrac{1}{\lambda _i}\) and summing over \(i=1,\ldots ,n\), we obtain

$$\begin{aligned}&d(\Vert \text {curl} v(y_n)\Vert _{2}^2+\Vert y_n\Vert _V^2) =2 (\text {curl} f_n, \text {curl} v(y_n))dt \nonumber \\&\quad +2(f_n,y_n)dt + 2\theta _M(y_n)(\text {curl}G(\cdot ,y_n),\text {curl}v(y_n)) d{\mathcal {W}}\nonumber \\&\qquad +2\theta _M(y_n)(G(\cdot ,y_n),y_n)d{\mathcal {W}} +(\theta _M(y_n))^2\sum _{{\textbf{k}}\ge 1}\sum _{i=1}^n \dfrac{1}{\lambda _i} ({{\widetilde{\sigma }}}_{\textbf{k}}^n,e_i)_{{\widetilde{W}}}^2dt\nonumber \\&\quad =2 (\text {curl} f_n, \text {curl} v(y_n))dt+2(f_n,y_n)dt + 2\theta _M(y_n)(G(\cdot ,y_n),y_n)d{\mathcal {W}}\nonumber \\&\qquad + 2\theta _M(y_n)(\text {curl}G(\cdot ,y_n),\text {curl}v(y_n)) d{\mathcal {W}}+(\theta _M(y_n))^2\sum _{{\textbf{k}}\ge 1} \Vert P_n{{\widetilde{\sigma }}}_{\textbf{k}}^n\Vert _{{\widetilde{W}}}^2dt\nonumber \\&\quad =A_1+A_2+A_3+A_4+A_5 , \end{aligned}$$
(4.11)

where we used the definition of inner product in \({\widetilde{W}}\) to obtain the last equalities.

Let us estimate the terms \(A_i,\; i=1,\ldots ,5\).

$$\begin{aligned} A_1&=2\theta _M(y_n)\big (-\text {curl}[(y_n\cdot \nabla )v_n] -\sum _{j=1}^d\text {curl}[v_n^j\nabla y^j_n]+(\alpha _1+\alpha _2) \text {curl}[\text {div}(A_n^2)],\text {curl} v(y_n) \big )\\&\quad +2\beta \theta _M(y_n)(\text {curl} [\text {div}(|A_n|^2A_n)] ,\text {curl} v(y_n))+2( \nu \text {curl}\Delta y_n+\text {curl} U, \text {curl} v(y_n))\\&=A_1^1+A_1^2+A_1^3. \end{aligned}$$

By using [8, Section 4], note that

$$\begin{aligned} \vert A_1^1\vert&\le C\theta _M(y_n)\int _D\vert {\mathcal {D}}(y_n) \vert \vert {\mathcal {D}}^3(y_n)\vert \vert {\mathcal {D}}^3(y_n) \vert dx\\&\quad +C\theta _M(y_n)\int _D\vert {\mathcal {D}}^2(y_n)\vert \vert {\mathcal {D}}^2(y_n)\vert \vert {\mathcal {D}}^3(y_n)\vert dx\\&\le C\theta _M(y_n)[ \Vert {\mathcal {D}}(y_n)\Vert _{L^\infty } \Vert y_n\Vert _{H^3}^2+ \Vert {\mathcal {D}}^2(y_n)\Vert _{L^4}^2 \Vert y_n\Vert _{H^3} ]\\&\le K(M)\Vert y_n\Vert _{H^3}^2,\\ \vert A_1^2\vert&\le C\theta _M(y_n)\biggl [\int _D \vert {\mathcal {D}}(y_n)\vert ^2\vert {\mathcal {D}}^3(y_n)\vert ^2dx+\int _D \vert {\mathcal {D}}(y_n)\vert \vert {\mathcal {D}}^2(y_n)\vert ^2\vert {\mathcal {D}}^3(y_n)\vert dx\biggr ]\\&\le C\theta _M(y_n)[ \Vert {\mathcal {D}}(y_n)\Vert _{L^\infty }^2\Vert y_n\Vert _{H^3}^2+ \Vert {\mathcal {D}}(y_n)\Vert _{L^\infty }\Vert {\mathcal {D}}^2(y_n)\Vert _{L^4}^2\Vert y_n\Vert _{H^3} ]\\&\le K(M)\Vert y_n\Vert _{H^3}^2, \end{aligned}$$

where we used the fact that \(\Vert {\mathcal {D}}(y_n)\Vert _{L^\infty }+\Vert {\mathcal {D}}^2(y_n)\Vert _{L^4}\le K(M)\), thanks to the properties of the cut-off function (4.1). On the other hand, we can deduce

$$\begin{aligned} A_1^3 \le -\dfrac{2\nu }{\alpha _1}\Vert \text {curl}v(y_n)\Vert _{2}^2+C\Vert y_n\Vert _V^2+C\Vert \text {curl}(U)\Vert _{2}^2+\delta \Vert \text {curl}v(y_n)\Vert _{2}^2\quad \text {for any}\quad \delta >0. \end{aligned}$$

Setting \(\delta =\dfrac{\nu }{\alpha _1}\), we get

$$\begin{aligned} A_1^3 \le -\dfrac{\nu }{\alpha _1}\Vert \text {curl}v(y_n)\Vert _{2}^2+C\Vert y_n\Vert _V^2+C\Vert \text {curl}(U)\Vert _{2}^2. \end{aligned}$$

Due to the estimate of \(J_1\), we have

$$\begin{aligned} A_2&\le -4\nu \int _0^s\Vert D y_n\Vert _{2}^2dt+\int _0^s\Vert U\Vert _2^2dt+\int _0^s\Vert y_n\Vert _2^2dt-\dfrac{\beta }{2}\int _0^s\theta _M(y_n)\int _D|A_n|^4dxdt\\&\quad +C(\alpha _1,\alpha _2,\beta )\int _0^s\Vert y_n \Vert _{H^1}^2dt. \end{aligned}$$

The term \(A_5\) satisfies

$$\begin{aligned} A_5\le \sum _{{\textbf{k}}\ge 1} \Vert {{\widetilde{\sigma }}}_k^n\Vert _{{\widetilde{W}}}^2\le C\sum _{{\textbf{k}}\ge 1}\Vert \sigma _{\textbf{k}}(\cdot ,y_n)\Vert _{H^1}^2\le C\Vert y_n\Vert _V^2, \end{aligned}$$

where we used Theorem 1 with \(m=1\), (2.7) and (2.8) to deduce the last estimate.

Similarly to the estimate of \(J_2\), for any \(\delta >0\), the stochastic integral \(A_3\) verifies

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0,{\textbf{t}}_N^n]}\left| \int _0^s \theta _M(y_n)(G(\cdot ,y_n),y_n)d{\mathcal {W}}\right| \le \delta {\mathbb {E}}\sup _{s\in [0,{\textbf{t}}_N^n]} \Vert y_n\Vert _V^2+C_\delta K\int _0^{{\textbf{t}}_N^n} \Vert y_n \Vert _2^2dt. \end{aligned}$$

Now, thanks to the Burkholder–Davis–Gundy inequality, for any \(\delta >0\), it follows that

$$\begin{aligned} 2&{\mathbb {E}}\sup _{s\in [0, {\textbf{t}}_N^n]}\biggl \vert \int _0^s \theta _M(y_n)(\text {curl}G(\cdot ,y_n), \text {curl}v(y_n))d{\mathcal {W}}\biggr \vert \\&=2{\mathbb {E}}\sup _{s\in [0, {\textbf{t}}_N^n]}\biggl \vert \sum _{{\textbf{k}}\ge 1}\int _0^s\theta _M(y_n) (\text {curl}\sigma _{\textbf{k}}(\cdot ,y_n), \text {curl}v(y_n))d\beta _{\textbf{k}}\biggr \vert \\&\le C{\mathbb {E}}\biggl [\sum _{{\textbf{k}}\ge 1}\int _0^{{\textbf{t}}_N^n} (\text {curl}\sigma _{\textbf{k}}(\cdot ,y_n), \text {curl}v(y_n))^2ds\biggr ]^{1/2}\\&\le \delta {\mathbb {E}}\sup _{s\in [0, {\textbf{t}}_N^n]}\Vert \text {curl}v(y_n)\Vert _2^2+C_\delta {\mathbb {E}}\int _0^{{\textbf{t}}_N^n}\Vert y_n\Vert _V^2dr, \end{aligned}$$

where we used (2.8) to deduce the last inequality.

Gathering the previous estimates, and choosing an appropriate \(\delta >0\), we deduce

$$\begin{aligned}&{\mathbb {E}}\sup _{s\in [0,{\textbf{t}}_N^n]}[\Vert \text {curl}v(y_n)\Vert _2^2+\Vert y_n\Vert _V^2]+C(\nu , \alpha _1){\mathbb {E}}\int _0^{{\textbf{t}}_N^n}[\Vert D y_n\Vert _{2}^2+\Vert \text {curl}v(y_n)\Vert _{2}^2]dt\\&\qquad +C(\beta ){\mathbb {E}}\int _0^{{\textbf{t}}_N^n}\theta _M(y_n)\int _D|A_n|^4dxdt\\&\quad \le {\mathbb {E}}\Vert y_0\Vert _{{{\widetilde{W}}}}^2+{\mathbb {E}}\int _0^{T}\Vert U\Vert _2^2dt+C{\mathbb {E}}\int _0^T\Vert \text {curl}(U)\Vert _{2}^2dt\\&\qquad +K(L,M,\alpha _1,\alpha _2,\beta ){\mathbb {E}}\int _0^{{\textbf{t}}_N^n}\Vert y_n\Vert _{H^3}^2dt. \end{aligned}$$

Gronwall’s inequality yields

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0,{\textbf{t}}_N^n]}[\Vert \text {curl}v(y_n)\Vert _2^2+\Vert y_n\Vert _V^2]&\le K(L,M,\alpha _1,\alpha _2,\beta ,T)\big ({\mathbb {E}}\Vert y_0\Vert _{{{\widetilde{W}}}}^2\\&\quad +{\mathbb {E}}\int _0^{T}\Vert U\Vert _2^2dt+C{\mathbb {E}}\int _0^T\Vert \text {curl}U\Vert _{2}^2dt\big ). \end{aligned}$$

Let us fix \(n\in {\mathbb {N}}.\) Since

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0,{\textbf{t}}_N^n]} \Vert y_n\Vert _{{\widetilde{W}}}^2 \ge {\mathbb {E}}\biggl (\sup _{s\in [0,{\textbf{t}}_N^n]} 1_{\{ {\textbf{t}}_N^n<T\}} \Vert y_n\Vert _{{\widetilde{W}}}^2\biggr ) \ge N^2P({\textbf{t}}_N^n <T), \end{aligned}$$

we infer that \({\textbf{t}}_N^n \rightarrow T\) in probability, as \(N\rightarrow \infty \). Then there exists a subsequence (still denoted by \(({\textbf{t}}_N^n)\)) such that

$$\begin{aligned} {\textbf{t}}_N^n \rightarrow T \quad \text { a.s. as } N \rightarrow \infty . \end{aligned}$$

Since the sequence \(\{{\textbf{t}}_N^n\}_N\) is monotone, the monotone convergence theorem can be applied to pass to the limit, as \(N\rightarrow \infty \), in order to obtain

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0,T]}[\Vert \text {curl}v(y_n)\Vert _2^2+\Vert y_n\Vert _V^2]&\le K(L,M,\alpha _1,\alpha _2,\beta ,T) \\&\qquad \Bigg ({\mathbb {E}}\Vert y_0\Vert _{{{\widetilde{W}}}}^2 +{\mathbb {E}}\int _0^{T}\Vert U\Vert _2^2dt+C{\mathbb {E}}\int _0^T\Vert \text {curl}U\Vert _{2}^2dt\Bigg ). \end{aligned}$$
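The convergence \({\textbf{t}}_N^n\rightarrow T\) used above rests on the Chebyshev-type bound \(N^2P({\textbf{t}}_N^n<T)\le {\mathbb {E}}\sup _{[0,T]}\Vert y_n\Vert _{{\widetilde{W}}}^2\). The mechanism can be sanity-checked numerically; in the sketch below, Brownian paths play the role of the (much more complicated) Galerkin solutions, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 400, 5_000
dt = T / n_steps

# Toy processes: X_t = |W_t| for Brownian motions W (stand-in for ||y_n||)
W = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
running_sup = np.maximum.accumulate(np.abs(W), axis=1)

E_sup_sq = np.mean(running_sup[:, -1] ** 2)   # plays E sup_{[0,T]} ||y_n||^2

for N in [1.0, 2.0, 4.0]:
    # {t_N < T} = {the running sup exceeds the level N before time T}
    p_hit = np.mean(running_sup[:, -1] > N)
    assert p_hit <= E_sup_sq / N**2 + 1e-12   # Chebyshev; forces p_hit -> 0
```

Since the moment bound is uniform in \(N\), \(P({\textbf{t}}_N<T)\rightarrow 0\) as \(N\rightarrow \infty \), exactly as in the text.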

Therefore, we have the following result:

Lemma 8

Assume that \({\mathcal {H}}_0\) holds. Then there exists a constant

$$\begin{aligned} K:=K(L,M,\alpha _1,\alpha _2,\beta ,T,\Vert y_0\Vert _{L^2(\Omega ;{\widetilde{W}})}, \Vert U\Vert _{L^2(\Omega \times [0,T];H^1(D))}) \end{aligned}$$

such that

$$\begin{aligned}&{\mathbb {E}}\sup _{s\in [0,T]} \Vert y_n\Vert _V^2+4\nu {\mathbb {E}}\int _0^{T}\Vert D y_n\Vert _{2}^2dt+\dfrac{\beta }{2}{\mathbb {E}}\int _0^{T}\theta _M(y_n)\int _D|A_n|^4dxdt\nonumber \\&\quad \le e^{cT} ({\mathbb {E}}\Vert y_{0}\Vert _V^2 +{\mathbb {E}}\int _0^{T}\Vert U\Vert _2^2dt),\nonumber \\&{\mathbb {E}}\sup _{s\in [0,T]}\Vert y_n\Vert _{{\widetilde{W}}}^2:= {\mathbb {E}}\sup _{s\in [0,T]}[\Vert \text {curl }v(y_n)\Vert _2^2+\Vert y_n\Vert _V^2]\le K. \end{aligned}$$
(4.12)

Now, let us notice that for any \(p\ge 1\), the Burkholder–Davis–Gundy inequality yields

$$\begin{aligned}&2{\mathbb {E}}\left[ \sup _{s\in [0, {\textbf{t}}_N^n]}\left| \int _0^s (\text {curl}G(\cdot ,y_n), \text {curl}v(y_n))d{\mathcal {W}}\right| \right] ^p\\&\quad =2{\mathbb {E}}\sup _{s\in [0, {\textbf{t}}_N^n]}\left| \sum _{{\textbf{k}}\ge 1}\int _0^s (\text {curl}\sigma _{\textbf{k}}(\cdot ,y_n), \text {curl}v(y_n))d\beta _{\textbf{k}}\right| ^p\\&\quad \le C_p{\mathbb {E}}\left[ \sum _{{\textbf{k}}\ge 1}\int _0^{{\textbf{t}}_N^n} (\text {curl}\sigma _{\textbf{k}}(\cdot ,y_n), \text {curl}v(y_n))^2ds\right] ^{p/2}\\&\quad \le C_p(L){\mathbb {E}}\left[ \sup _{s\in [0, {\textbf{t}}_N^n]}\Vert \text {curl}v(y_n)\Vert _2^2\int _0^{{\textbf{t}}_N^n}\Vert y_n\Vert _V^2dr\right] ^{p/2}\\&\quad \le \delta {\mathbb {E}}\sup _{s\in [0, {\textbf{t}}_N^n]}\Vert \text {curl}v(y_n)\Vert _2^{2p}+C_\delta (L,T){\mathbb {E}}\int _0^{{\textbf{t}}_N^n}\Vert y_n\Vert _V^{2p}dr, \end{aligned}$$

and

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{s\in [0,{\textbf{t}}_N^n]}\left| \int _0^s\theta _M(y_n)(G(\cdot ,y_n), y_n)d{\mathcal {W}}\right| \right] ^p \\&\quad \le \delta {\mathbb {E}}\sup _{s\in [0,{\textbf{t}}_N^n]} \Vert y_n\Vert _V^{2p}+C_\delta (L,T) K\int _0^{{\textbf{t}}_N^n} \Vert y_n \Vert _2^{2p}dt. \end{aligned}$$

From (4.11), for any \(t\in [0,{\textbf{t}}_N^n]\), the following estimate holds

$$\begin{aligned}&\sup _{s\in [0,{\textbf{t}}_N^n]}[\Vert y_n(t)\Vert _{V}^2 +\Vert \text {curl} v(y_n(t))\Vert _{2}^2] \\&\quad \le \Vert y_0\Vert _{{{\widetilde{W}}}}^2 + K(M)\biggl [\int _0^{{\textbf{t}}_N^n}\Vert y_n\Vert _{H^3}^2 ds+ \int _0^T\Vert U\Vert _2^2ds+ \int _0^T\Vert \text { curl }U\Vert _2^2 ds\biggr ]\\&\qquad +2\sup _{s\in [0, {\textbf{t}}_N^n]} \biggl \vert \int _0^s (\text {curl}G(\cdot ,y_n), \text {curl}v(y_n))d{\mathcal {W}}\biggr \vert +\sup _{s\in [0,{\textbf{t}}_N^n]}\biggl \vert \int _0^s\theta _M(y_n) (G(\cdot ,y_n),y_n)d{\mathcal {W}}\biggr \vert \end{aligned}$$

Taking the \(p\)-th power, taking expectations, choosing \(\delta \) small enough and then applying Gronwall’s inequality, we deduce

Lemma 9

For any \(p\ge 1\), there exists \(K(M,T,p)>0\) such that

$$\begin{aligned} {\mathbb {E}}\sup _{ [0,T]} \Vert y_n \Vert _{{\widetilde{W}}}^{2p} \le K(M,T,p)(1+{\mathbb {E}}\Vert y_0\Vert _{{{\widetilde{W}}}}^{2p}+{\mathbb {E}}\int _0^T\Vert U\Vert _2^{2p}ds+ {\mathbb {E}}\int _0^T\Vert \text { curl }U\Vert _2^{2p} ds). \end{aligned}$$
(4.13)
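The Gronwall step that closes these moment estimates can be illustrated by its discrete analogue: if a nonnegative sequence satisfies \(x_k\le a+ch\sum _{j<k}x_j\), then \(x_k\le a(1+ch)^k\le ae^{ckh}\). A minimal sketch on toy data (the random sequence below is only assumed to satisfy the hypothesis):

```python
import math
import random

def gronwall_bound(a, c, h, k):
    """Exponential bound a * exp(c * k * h) for the discrete Gronwall inequality."""
    return a * math.exp(c * k * h)

random.seed(0)
a, c, h, n = 2.0, 1.5, 0.01, 500

x = []
for k in range(n):
    hypothesis = a + c * h * sum(x)          # x_k <= a + c*h*sum_{j<k} x_j
    x.append(random.uniform(0.0, hypothesis))
    # induction gives x_k <= a*(1+c*h)^k <= a*exp(c*k*h)
    assert x[-1] <= gronwall_bound(a, c, h, k)
```

In the proof, \(a\) collects the data terms (initial condition and forcing) and \(ch\sum _{j<k}x_j\) plays the role of the integral term absorbed by Gronwall’s inequality.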

Remark 4

We wish to draw the reader’s attention to the fact that the cut-off function (4.1) plays a crucial role in obtaining the \(H^3\)-estimate in both the 2D and 3D cases, which leads to bounds depending on M. In the deterministic case, the authors in [8, Section 5] proved the \(H^3\)-regularity by using some interpolation inequalities (available only in 2D) to bound \(A_1^1\) and \(A_1^2\) above (see (4.11)), and then by solving a differential inequality. Unfortunately, it is not clear how to adapt the same arguments here because of the presence of the stochastic integral and the expectation; we refer the interested reader to [8, Section 5].

4.3 Compactness

We will use Lemma 8 and the regularity of the stochastic integral (Lemma 2) to obtain a compactness argument leading to the existence of a martingale solution (see Definition 4.1) to (4.2). To this end, we define the following path space

$$\begin{aligned} {\textbf{Y}}:={\mathcal {C}}([0,T], H_0)\times {\mathcal {C}}([0,T], (W^{2,4}(D))^d)\times L^p(0,T; (H^1(D))^d)\times {\widetilde{W}}. \end{aligned}$$

Denote by \(\mu _{y_n}\) the law of \(y_n\) on \( {\mathcal {C}}([0,T], (W^{2,4}(D))^d)\), \(\mu _{U_n}\) the law of \(P_nU\) on \(L^p(0,T; (H^1(D))^d)\), \(\mu _{y_0^n}\) the law of \(P_ny_0\) on \({\widetilde{W}}\), and \(\mu _{{\mathcal {W}}}\) the law of \({\mathcal {W}}\) on \({\mathcal {C}}([0,T], H_0)\) and their joint law on \({\textbf{Y}}\) by \(\mu _n\).

Lemma 10

The sets \(\{ \mu _{U_n}; n\in {\mathbb {N}}\}\) and \(\{ \mu _{y_0^n}; n\in {\mathbb {N}}\}\) are tight on \(L^p(0,T; (H^1(D))^d)\) and \({\widetilde{W}}\), respectively.

Proof

By using the properties of the projection operator \(P_n\), we know that \(P_nU\) converges strongly to U in \(L^p(\Omega _T; (H^1(D))^d)\). Since \(L^p(0,T; (H^1(D))^d)\) is a separable Banach space and the corresponding sequence of laws converges weakly, Prokhorov’s theorem yields, for any \(\epsilon >0\), a compact set \(K_\epsilon \subset L^p(0,T; (H^1(D))^d)\) such that

$$\begin{aligned} \mu _{U_n}(K_\epsilon )=P(P_nU \in K_\epsilon ) \ge 1-\epsilon \quad \text {for all } n\in {\mathbb {N}}. \end{aligned}$$

A similar argument yields the tightness of \(\{ \mu _{y_0^n}; n\in {\mathbb {N}}\}\), which concludes the proof. \(\square \)

Lemma 11

The sets \(\{ \mu _{y_n}; n\in {\mathbb {N}}\}\) and \(\{\mu _{{\mathcal {W}}}\}\) are, respectively, tight on \({\mathcal {C}}([0,T], (W^{2,4}(D))^d)\) and \({\mathcal {C}}([0,T], H_0)\).

Proof

Recall that \(P_n:(L^2(D))^d \rightarrow W_n\) denotes the projection operator, which is continuous on \((L^2(D))^d\); hence (4.4) can be written as

$$\begin{aligned} \left\{ \begin{aligned} v(y_n(t))&=P_nv(y_0)+\displaystyle \int _0^t\bigg (\nu P_n\Delta y_n-\theta _M(y_n)P_n[(y_n\cdot \nabla )v_n]-\sum _{j}\theta _M(y_n)P_n[v_n^j\nabla y^j_n]\\&\quad +(\alpha _1+\alpha _2)\theta _M(y_n)P_n[\text {div} (A_n^2)]+\beta \theta _M(y_n) P_n[\text {div}(|A_n|^2A_n)]+P_nU\bigg ) ds\\&\quad + \displaystyle \int _0^t\theta _M(y_n)P_nG(\cdot ,y_n) d{\mathcal {W}}:=y_n^{det}(t)+y_n^{sto}(t). \end{aligned}\right. \end{aligned}$$

Let us prove the following estimate:

$$\begin{aligned} {\mathbb {E}}\Vert y_n^{det}\Vert _{{\mathcal {C}}^{\eta }([0,T],(L^2(D))^d)} \le K(M), \quad \forall \eta \in ]0, 1-\frac{1}{p}]. \end{aligned}$$
(4.14)

First, thanks to Lemma 8 and in particular due to the \({\widetilde{W}}\)-estimate for \((y_n)\), we know that \(y_n^{det}\) is a predictable continuous stochastic process with values in \((L^2(D))^d\). Next, by using the Sobolev embedding \(W^{2,4}(D)\hookrightarrow W^{1,\infty }(D)\) and \(\Vert P_nu\Vert _2\le \Vert u\Vert _2, \forall u\in (L^2(D))^d\), we are able to infer

$$\begin{aligned} \Vert P_nv(y_0) \Vert _2^2&\le \Vert v(y_0) \Vert _2^2 \le \Vert y_0 \Vert _W^2;\nonumber \\ \Vert P_n U \Vert _2^2&\le \Vert U \Vert _2^2 \text { and } \Vert P_n\Delta y_n \Vert _2^2\le \Vert \Delta y_n \Vert _2^2 \le \Vert y_n\Vert _W^2, \end{aligned}$$
(4.15)
$$\begin{aligned} \Vert \theta _M(y_n)P_n[(y_n\cdot \nabla )v_n] \Vert _2^2&\le \theta _M(y_n)\Vert y_n\Vert _\infty ^2 \Vert y_n \Vert _{{\widetilde{W}}}^2\nonumber \\&\le \theta _M(y_n)\Vert y_n\Vert _{W^{2,4}}^2 \Vert y_n \Vert _{{\widetilde{W}}}^2 \le 4M^2 \Vert y_n \Vert _{{\widetilde{W}}}^2,\nonumber \\ \Vert \sum _{j}\theta _M(y_n)P_n[v_n^j\nabla y^j_n] \Vert _2^2&\le \theta _M(y_n)\Vert y_n \Vert _{W}^2\Vert y_n\Vert _{W^{1,\infty }}^2\nonumber \\&\le \theta _M(y_n)\Vert y_n \Vert _{{\widetilde{W}}}^2\Vert y_n\Vert _{W^{2,4}}^2 \le 4M^2 \Vert y_n \Vert _{{\widetilde{W}}}^2,\nonumber \\ \Vert \theta _M(y_n)P_n[\text {div} (A_n^2)] \Vert _2^2&\le \theta _M(y_n)\Vert \text {div} (A_n^2) \Vert _2^2\nonumber \\&\le C\theta _M(y_n)\int _D \vert {\mathcal {D}}(y_n)\vert ^2 \vert {\mathcal {D}}^2(y_n)\vert ^2 dx\nonumber \\&\le C\theta _M(y_n)\Vert y_n\Vert _{W^{1,\infty }}^2\Vert y_n \Vert _{W}^2\le C\theta _M(y_n)\Vert y_n\Vert _{W^{2,4}}^2\Vert y_n \Vert _{{\widetilde{W}}}^2 \nonumber \\&\le 4CM^2 \Vert y_n \Vert _{{\widetilde{W}}}^2,\nonumber \\ \Vert \theta _M(y_n)P_n[\text {div}(|A_n|^2A_n)] \Vert _2^2&\le \theta _M(y_n)\Vert \text {div}(|A_n|^2A_n) \Vert _2^2\nonumber \\&\le C \theta _M(y_n)\int _D \vert {\mathcal {D}}(y_n)\vert ^4 \vert {\mathcal {D}}^2(y_n)\vert ^2 dx\nonumber \\&\le C\theta _M(y_n)\Vert y_n\Vert _{W^{1,\infty }}^4\Vert y_n \Vert _{W}^2\nonumber \\&\le C\theta _M(y_n)\Vert y_n\Vert _{W^{2,4}}^4\Vert y_n \Vert _{{\widetilde{W}}}^2 \nonumber \\&\le 16CM^4 \Vert y_n \Vert _{{\widetilde{W}}}^2. \end{aligned}$$
(4.16)

Therefore, there exists \(C>0\) independent of n such that

$$\begin{aligned} {\mathbb {E}}\sup _{t\in [0,T]} \Vert y_n^{det}(t)\Vert _2&\le C+{\mathbb {E}}\Vert y_0 \Vert _W^2+C{\mathbb {E}}\int _0^T(1+M^4) \Vert y_n(s) \Vert _{{\widetilde{W}}}^2ds\nonumber \\&\quad +{\mathbb {E}}\int _0^T\Vert U(s) \Vert _2^2ds \le K(M), \end{aligned}$$
(4.17)

thanks to (3.1) and (4.12). Now, let us show that for \(\eta \in ]0,1-\dfrac{1}{p}],\) we have the following

$$\begin{aligned} {\mathbb {E}} \sup _{s,t \in [0,T], s\ne t}\dfrac{\Vert y_n^{det}(t) -y_n^{det}(s)\Vert _{2}}{\vert t-s\vert ^{\eta }}\le K(M). \end{aligned}$$

Indeed, for \(0< s<t\le T\), we have

$$\begin{aligned}&\Vert y_n^{det}(t) -y_n^{det}(s)\Vert _{2}\\&\quad \le \displaystyle \int _s^t\biggl \Vert \bigg (\nu P_n\Delta y_n-\theta _M(y_n)P_n[(y_n\cdot \nabla )v_n]-\sum _{j}\theta _M(y_n)P_n[v_n^j\nabla y^j_n]\\&\qquad +(\alpha _1+\alpha _2)\theta _M(y_n)P_n[\text {div} (A_n^2)] +\beta \theta _M(y_n) P_n[\text {div}(|A_n|^2A_n)]+P_nU\bigg )\biggr \Vert _{2} dr. \end{aligned}$$

We recall that \(p>4\). Setting \(p=2q\) with \(q>2\), and using Hölder’s inequality together with (4.15)-(4.16), we obtain

$$\begin{aligned}&\Vert y_n^{det}(t) -y_n^{det}(s)\Vert _{2} \le (t-s)^{\frac{p-1}{p}}\bigg (\displaystyle \int _s^t \bigg \Vert \bigg (\nu P_n\Delta y_n-\theta _M(y_n)P_n[(y_n\cdot \nabla )v_n]\nonumber \\&\qquad -\sum _{j}\theta _M(y_n)P_n[v_n^j\nabla y^j_n]+(\alpha _1+\alpha _2)\theta _M(y_n)P_n[\text {div} (A_n^2)] \nonumber \\&\qquad +\beta \theta _M(y_n) P_n[\text {div}(|A_n|^2A_n)]+P_nU\bigg )\bigg \Vert _{2}^{2q} dr\bigg )^{1/2q}\nonumber \\&\quad \le (t-s)^{\frac{p-1}{p}}\bigg (\displaystyle C(1+M^4)^q\int _s^t\Vert y_n \Vert _{{\widetilde{W}}}^{2q}dr+\int _s^t\Vert P_nU\Vert _{2}^{2q} dr\bigg )^{1/2q}\nonumber \\&\quad \le (t-s)^{\frac{p-1}{p}}\bigg (\displaystyle C(1+M^4)^{\frac{p}{2}}\int _s^t\Vert y_n \Vert _{{\widetilde{W}}}^{p}dr+\int _s^t\Vert U\Vert _{2}^{p} dr\bigg )^{1/p}. \end{aligned}$$
(4.18)

Considering (4.18) and applying Hölder’s inequality, we deduce

$$\begin{aligned}&{\mathbb {E}}\sup _{s,t \in [0,T], s\ne t}\dfrac{\Vert y_n^{det}(t) -y_n^{det}(s)\Vert _{2}}{\vert t-s\vert ^{1-\frac{1}{p}}}\nonumber \\&\quad \le \bigg (\displaystyle C(1+M^4)^{\frac{p}{2}}\int _0^T{\mathbb {E}}\Vert y_n \Vert _{{\widetilde{W}}}^{p}dr+\int _0^T{\mathbb {E}}\Vert U\Vert _{2}^{p} dr\bigg )^{1/p}\le K(M), \end{aligned}$$
(4.19)

where we used (3.1) and (4.12). Consequently, the estimates (4.19) and (4.17) yield (4.14).
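The Hölder exponent \(1-\frac{1}{p}\) appearing in (4.18)-(4.19) is exactly the one in the elementary bound \(\vert \int _s^t g\,dr\vert \le (t-s)^{1-1/p}\Vert g\Vert _{L^p(0,T)}\). A quick numerical check on a piecewise-constant toy integrand (illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
p, T, N = 6, 1.0, 50_000
dt = T / N

g = rng.standard_normal(N)                          # piecewise-constant integrand
Lp = (np.sum(np.abs(g) ** p) * dt) ** (1.0 / p)     # ||g||_{L^p(0,T)}
F = np.concatenate([[0.0], np.cumsum(g) * dt])      # F(t_k) = int_0^{t_k} g dr
t = np.linspace(0.0, T, N + 1)

for _ in range(200):
    i, j = sorted(rng.integers(0, N + 1, size=2))
    if i == j:
        continue
    # Hoelder in time: |F(t_j) - F(t_i)| <= (t_j - t_i)^{1 - 1/p} * ||g||_{L^p}
    assert abs(F[j] - F[i]) <= (t[j] - t[i]) ** (1.0 - 1.0 / p) * Lp + 1e-12
```

In the proof, \(g\) collects the drift terms of \(y_n^{det}\), whose \(L^p(0,T)\)-norm is controlled by (4.15)-(4.16) and the moment bound (4.13).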

We recall that (see e.g. [17])

$$\begin{aligned} W^{s,p}(0,T; L^2(D)) \hookrightarrow {\mathcal {C}}^{\eta }([0,T],L^2(D))\quad \text {if} \quad 0<\eta < s-\frac{1}{p}. \end{aligned}$$

Let us take \(s \in \big ]0,\dfrac{1}{2}\big [\) with \(sp>1\) (recall that \(p>4\); see \({\mathcal {H}}_0\)). For \(\eta \in \big ]0, s-\frac{1}{p}\big [\), we can use Lemma 2 and (4.13) to deduce

$$\begin{aligned} {\mathbb {E}}\Vert y_n^{sto}\Vert _{{\mathcal {C}}^{\eta }([0,T],L^2(D))}^p&={\mathbb {E}}\Big \Vert \displaystyle \int _0^\cdot \theta _M(y_n)P_nG(\cdot ,y_n)d{\mathcal {W}} \Big \Vert ^p_{{\mathcal {C}}^{\eta }([0,T],L^2(D))}\\&\le C{\mathbb {E}}\Big [\Big \Vert \int _0^\cdot G(\cdot ,y_n)d{\mathcal {W}}\Big \Vert _{W^{s,p}\big (0,T; L^{2}(D)\big )}^p\Big ] \\&\le c(s,p) {\mathbb {E}}\Big [\int _0^T\big ( \sum _{{\textbf{k}}\ge 1} \Vert \sigma _{\textbf{k}}(\cdot ,y_n)\Vert _{2}^2\big )^{p/2} dt\Big ]\le K(M). \end{aligned}$$

Hence \((v(y_n))_n\) is bounded in \(L^1(\Omega ,{\mathcal {C}}^{\eta }([0,T],(L^2(D))^d)).\) Therefore \((y_n)_n\) is bounded in

$$\begin{aligned} L^1(\Omega ,{\mathcal {C}}^{\eta }([0,T],(H^2(D))^d))\cap L^2(\Omega ,L^\infty (0,T; {\widetilde{W}})),\quad \forall \eta \in \big ]0,s-\frac{1}{p}\big [, \end{aligned}$$

where we used [6, Proposition 3]. We recall that the embedding \({\widetilde{W}} \hookrightarrow W^{2,q}(D)\) is compact for any \( 1\le q<6\). The following compact embedding holds

$$\begin{aligned} {\textbf{Z}}:=L^\infty (0,T; {\widetilde{W}})\cap {\mathcal {C}}^{\eta }([0,T],(H^2(D))^d) \hookrightarrow {\mathcal {C}}([0,T], (W^{2,4}(D))^d). \end{aligned}$$

Indeed, we have \({\widetilde{W}} \underset{compact}{\hookrightarrow } W^{2,4}(D)\hookrightarrow H^2(D)\), see (2.3). Let \( {\textbf{A}}\) be a bounded set of \({\textbf{Z}}\). Following [30, Thm. 5] (the case \(p=\infty \)), it is enough to check the following conditions:

  1. 1.

    \({\textbf{A}}\) is bounded in \(L^\infty (0,T; {\widetilde{W}})\).

  2. 2.

    For \(h>0\), \(\Vert f(\cdot +h)-f(\cdot )\Vert _{L^\infty (0,T-h;(H^2(D))^d)} \rightarrow 0\) as \(h\rightarrow 0\), uniformly for \(f\in {\textbf{A}}\).

First, note that (1) is satisfied by assumption. Concerning the second condition, let \(h>0\) and \(f\in {\textbf{A}}\); using that \(f\in {\mathcal {C}}^{\eta }([0,T],(H^2(D))^d)\), we infer

$$\begin{aligned} \Vert f(\cdot +h)-f(\cdot )\Vert _{L^\infty (0,T-h;(H^2(D))^d)}&=\sup _{r\in [0,T-h]}\Vert f(r+h)-f(r)\Vert _{H^2}\\&\le Ch^{\eta }\rightarrow 0, \text { as } h\rightarrow 0, \end{aligned}$$

where \(C>0\) is independent of f.

Let \(R>0\) and set \( B_{{\textbf{Z}}}(0,R):=\{ v\in {\textbf{Z}} \; \vert \; \Vert v\Vert _{{\textbf{Z}}} \le R\}\). Then \(B_{{\textbf{Z}}}(0,R)\) is a compact subset of \({\mathcal {C}}([0,T], (W^{2,4}(D))^d).\) On the other hand, there exists a constant \(C >0\) (related to the boundedness of \(\{y_n\}_n\) in \(L^1(\Omega ,{\mathcal {C}}([0,T], (W^{2,4}(D))^d))\)), independent of R, such that the following relation holds

$$\begin{aligned} \mu _{y_n}(B_{{\textbf{Z}}}(0,R))&=1-\mu _{y_n}(B_{{\textbf{Z}}}(0,R)^c)=1-\int _{\{ \omega \in \Omega , \Vert y_n\Vert _{{\textbf{Z}}}>R\}}1dP\\&\ge 1-\dfrac{1}{R}\int _{\{ \omega \in \Omega , \Vert y_n\Vert _{{\textbf{Z}}}>R\}}\Vert y_n\Vert _{{\textbf{Z}}}dP\\&\ge 1-\dfrac{1}{R}{\mathbb {E}}\Vert y_n\Vert _{{\textbf{Z}}}=1-\dfrac{C}{R},\quad \text {for any}\; R>0, \quad \text {and any} \;n\in {\mathbb {N}}. \end{aligned}$$

Therefore, for any \(\delta >0\) we can find \(R_\delta >0\) such that

$$\begin{aligned} \mu _{y_n}(B_{{\textbf{Z}}}(0,R_\delta )) \ge 1-\delta , \text { for all } n\in {\mathbb {N}}. \end{aligned}$$

Thus the family of laws \(\{ \mu _{y_n}; n\in {\mathbb {N}}\}\) is tight on \({\mathcal {C}}([0,T], (W^{2,4}(D))^d).\)
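The lower bound on \(\mu _{y_n}(B_{{\textbf{Z}}}(0,R))\) above is Markov’s inequality, \(P(\Vert y_n\Vert _{{\textbf{Z}}}>R)\le \frac{1}{R}{\mathbb {E}}\Vert y_n\Vert _{{\textbf{Z}}}\), which in fact holds exactly for any empirical sample. A sketch with half-normal draws standing in for \(\Vert y_n\Vert _{{\textbf{Z}}}\) (illustration only):

```python
import numpy as np

rng = np.random.default_rng(7)
Z = np.abs(rng.standard_normal(200_000))   # nonnegative sample, stand-in for ||y_n||_Z
C = Z.mean()                               # plays the role of E ||y_n||_Z

for R in [1.0, 2.0, 5.0, 10.0]:
    mass_in_ball = np.mean(Z <= R)         # empirical mu(B_Z(0, R))
    assert mass_in_ball >= 1.0 - C / R - 1e-12   # Markov: tightness as R grows
```

Choosing \(R_\delta =C/\delta \) gives \(\mu (B_{{\textbf{Z}}}(0,R_\delta ))\ge 1-\delta \), as in the proof.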

Since the law \(\mu _{{\mathcal {W}}}\) is a Radon measure on \({\mathcal {C}}([0,T], H_0)\), the second part of Lemma 11 follows. \(\square \)

Remark 5

By using (2.3) and [30, Thm. 5], one can prove, similarly to the above arguments, that \({\textbf{Z}}\) is compactly embedded in \({\mathcal {C}}([0,T], (W^{2,q}(D))^d)\) for \( q<6\) in the 3D case, and that \({\textbf{Z}}\) is compactly embedded in \({\mathcal {C}}([0,T], (W^{2,a}(D))^d)\) for \(a<\infty \) in the 2D case.

As a conclusion, we have the following corollary:

Corollary 12

The set of joint laws \(\{\mu _n; n\in {\mathbb {N}}\}\) is tight on \({\textbf{Y}}\).

4.4 Subsequence extractions

Using Corollary 12 and Prokhorov’s theorem, we can extract a (not relabeled) subsequence of \((\mu _n)\) which converges weakly to some probability measure \(\mu \), i.e.

$$\begin{aligned} \mu _n:=(\mu _{{\mathcal {W}}}, \mu _{y_n}, \mu _{U_n}, \mu _{y_0^n} ) \rightarrow \mu \text { on } {\textbf{Y}}. \end{aligned}$$

Applying the Skorohod Representation Theorem [5, Thm. C.1], we obtain the following result:

Lemma 13

There exists a probability space \(({{\bar{\Omega }}}, \bar{{\mathcal {F}}},{{\bar{P}}})\), and a family of \({\textbf{Y}}\)-valued random variables \(\{ (\bar{{\mathcal {W}}}_n, {{\bar{y}}}_n, {{\bar{U}}}_n, {{\bar{y}}}_0^n ), n \in {\mathbb {N}}\}\) and \(\{( {\mathcal {W}}_\infty , y_\infty , {{\bar{U}}}, {{\bar{y}}}_0)\}\) defined on \(({{\bar{\Omega }}}, \bar{{\mathcal {F}}},{{\bar{P}}})\) such that

  1. 1.

    \(\mu _n={\mathcal {L}}( \bar{{\mathcal {W}}}_n, {{\bar{y}}}_n, {{\bar{U}}}_n, {{\bar{y}}}_0^n ), \forall n \in {\mathbb {N}}\);

  2. 2.

    the law of \(( {\mathcal {W}}_\infty , y_\infty , {{\bar{U}}}, {{\bar{y}}}_0)\) is given by \(\mu \);

  3. 3.

    \(( \bar{{\mathcal {W}}}_n, {{\bar{y}}}_n, {{\bar{U}}}_n, {{\bar{y}}}_0^n )\) converges to \(( {\mathcal {W}}_\infty , y_\infty , {{\bar{U}}}, {{\bar{y}}}_0)\) \({{\bar{P}}}\)-a.s. in \({\textbf{Y}}\);

  4. 4.

    \(\bar{{\mathcal {W}}}_n({{\bar{\omega }}})={\mathcal {W}}_\infty ({{\bar{\omega }}})\) for all \({{\bar{\omega }}}\in {{\bar{\Omega }}}\).

Definition 4.2

For a filtered probability space \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t),P)\), the smallest complete, right-continuous filtration containing \(({\mathcal {F}}_t)\) is called the augmentation of \(({\mathcal {F}}_t)\).

Let us denote by \(({\mathcal {F}}_t^n)\) the augmentation of the filtration

$$\begin{aligned} \sigma ( {{\bar{y}}}_n(s),{\mathcal {W}}_\infty (s), \int _0^s{{\bar{U}}}_n(r)dr)_{0\le s\le t}, \quad t\in [0,T], \end{aligned}$$

and by \(({\mathcal {F}}_t^\infty )\) the augmentation of the filtration

$$\begin{aligned} \sigma (y_\infty (s),{\mathcal {W}}_\infty (s),\int _0^s{{\bar{U}}}(r)dr)_{0\le s\le t},\quad t\in [0,T]. \end{aligned}$$

Since \(( {\mathcal {W}},y_n,P_nU, P_ny_0)\) and \(( \bar{{\mathcal {W}}}_n, {{\bar{y}}}_n, {{\bar{U}}}_n,{{\bar{y}}}_0^n)\) have the same law, by the properties of the image measure and by an adapted stepwise approximation of Itô’s integral, one has that \({{\bar{y}}}_n\) is the solution of (4.4) for the data

$$\begin{aligned} ({{\bar{\Omega }}}, \bar{{\mathcal {F}}},({\mathcal {F}}_t^n),{{\bar{P}}}, {\mathcal {W}}_\infty , {{\bar{U}}}_n, {{\bar{y}}}_0^n). \end{aligned}$$

In other words, the following equation holds \({{\bar{P}}}\)-a.s. in \({{\bar{\Omega }}}\):

$$\begin{aligned} \left\{ \begin{aligned} d(v({{\bar{y}}}_n),e_i)&=\big (\nu \Delta {{\bar{y}}}_n -\theta _M({{\bar{y}}}_n)({{\bar{y}}}_n\cdot \nabla )v({{\bar{y}}}_n)\\&\quad -\sum _{j}\theta _M({{\bar{y}}}_n)v({{\bar{y}}}_n)^j\nabla {{\bar{y}}}^j_n+(\alpha _1+\alpha _2)\theta _M({{\bar{y}}}_n)\text {div}(A({{\bar{y}}}_n)^2)\\&\quad +\beta \theta _M({{\bar{y}}}_n) \text {div}(|A({{\bar{y}}}_n)|^2A({{\bar{y}}}_n))+{{\bar{U}}}_n, e_i\big )dt\\&\quad +\big (\theta _M({{\bar{y}}}_n)G(\cdot ,{{\bar{y}}}_n),e_i\big )d{\mathcal {W}}_\infty , \forall i=1,\ldots ,n,\\ {{\bar{y}}}_n(0)&={{\bar{y}}}_{0}^n, \end{aligned}\right. \end{aligned}$$
(4.20)

As a consequence of Lemma 8 and Lemma 9, we have the following result:

Lemma 14

There exists a constant

$$\begin{aligned} K:=K(L,M,\alpha _1,\alpha _2,\beta ,T,\Vert {{\bar{y}}}_0\Vert _{L^p({{\bar{\Omega }}};{\widetilde{W}})}, \Vert {{\bar{U}}}\Vert _{L^p({{\bar{\Omega }}}\times [0,T];(H^1(D))^d)}) \end{aligned}$$

such that

$$\begin{aligned}&{{\bar{{\mathbb {E}}}}} \sup _{s\in [0,T]} \Vert {{\bar{y}}}_n\Vert _V^2+4\nu {{\bar{{\mathbb {E}}}}} \int _0^{T}\Vert D {{\bar{y}}}_n\Vert _{2}^2dt+\dfrac{\beta }{2}{{\bar{{\mathbb {E}}}}} \int _0^{T}\theta _M({{\bar{y}}}_n)\int _D|{{\bar{A}}}_n|^4dxdt\nonumber \\&\quad \le e^{cT} \bigl ({{\bar{{\mathbb {E}}}}}\Vert {{\bar{y}}}_{0}\Vert _V^2 +{{\bar{{\mathbb {E}}}}}\int _0^{T}\Vert {{\bar{U}}}\Vert _2^2dt\bigr ), \end{aligned}$$
(4.21)
$$\begin{aligned}&{{\bar{{\mathbb {E}}}}}\sup _{s\in [0,T]}\Vert {{\bar{y}}}_n\Vert _{{\widetilde{W}}}^2= {{\bar{{\mathbb {E}}}}}\sup _{s\in [0,T]}[\Vert \textrm{curl }\,v({{\bar{y}}}_n)\Vert _2^2+\Vert {{\bar{y}}}_n\Vert _V^2]\le K,\end{aligned}$$
(4.22)
$$\begin{aligned}&{{\bar{{\mathbb {E}}}}} \sup _{ [0,T]} \Vert {{\bar{y}}}_n \Vert _{{\widetilde{W}}}^{p}\nonumber \\&\quad \le K(M,T,p)\bigl (1+{{\bar{{\mathbb {E}}}}}\Vert {{\bar{y}}}_0\Vert _{{{\widetilde{W}}}}^{p}+{{\bar{{\mathbb {E}}}}}\int _0^T\Vert {{\bar{U}}}\Vert _2^{p}ds+ {{\bar{{\mathbb {E}}}}}\int _0^T\Vert \text { curl } {{\bar{U}}}\Vert _2^{p} ds\bigr ), \quad \forall p >2, \end{aligned}$$
(4.23)

where \({{\bar{{\mathbb {E}}}}}\) means that the expectation is taken on \({{\bar{\Omega }}}\) with respect to the probability measure \({{\bar{P}}}\).

4.5 Identification of the limit and martingale solutions

Thanks to Lemma 14, we have:

Lemma 15

There exist \( {\mathcal {F}}_t^\infty \)-predictable processes \(y_\infty , {{\bar{U}}}\) such that the following convergences hold (up to subsequence), as \(n \rightarrow \infty \):

$$\begin{aligned}&{{\bar{y}}}_n \text { converges strongly to } y_\infty \text { in } L^4({{\bar{\Omega }}};{\mathcal {C}}([0,T],(W^{2,4}(D))^d)) \text { and a.e. in } Q\times {{\bar{\Omega }}}; \end{aligned}$$
(4.24)
$$\begin{aligned}&{{\bar{y}}}_n \text { converges weakly to } y_\infty \text { in } L^4({{\bar{\Omega }}};L^2(0,T;{\widetilde{W}}));\end{aligned}$$
(4.25)
$$\begin{aligned}&{{\bar{y}}}_n \text { converges weakly-* to } y_\infty \text { in } L^4_{w-*}({{\bar{\Omega }}};L^\infty (0,T;{\widetilde{W}})) ;\end{aligned}$$
(4.26)
$$\begin{aligned}&\theta _M({{\bar{y}}}_n) \text { converges to } \theta _M(y_\infty ) \text { in } L^p({{\bar{\Omega }}}\times [0,T]) \quad \forall p \in [1, \infty [;\end{aligned}$$
(4.27)
$$\begin{aligned}&{{\bar{U}}}_n \text { converges to } {{\bar{U}}} \text { in } L^4({{\bar{\Omega }}};L^4(0,T;(H^1(D))^d)) ; \end{aligned}$$
(4.28)
$$\begin{aligned}&{{\bar{y}}}_0^n \text { converges to } {{\bar{y}}}_0= y_\infty (0) \text { in } L^4({{\bar{\Omega }}};(W^{2,4}(D))^d) . \end{aligned}$$
(4.29)

Proof

From Lemma 13, we know that

$$\begin{aligned} {{\bar{y}}}_n \text { converges strongly to } y_\infty \text { in } {\mathcal {C}}([0,T],(W^{2,4}(D))^d)\quad {\bar{P}}\text {-a.s. in } {{\bar{\Omega }}}. \end{aligned}$$

Then Vitali’s theorem yields the first part of (4.24), since \(p>4\). The second part is a consequence of the \({{\bar{P}}}\)-a.s. convergence in \({\mathcal {C}}([0,T],(W^{2,4}(D))^d)\).

By the compactness of the closed balls in the space \(L^4({{\bar{\Omega }}};L^2(0,T;{\widetilde{W}}))\) with respect to the weak topology, there exists \(\Xi \in L^4({{\bar{\Omega }}};L^2(0,T;{\widetilde{W}}))\) such that \({{\bar{y}}}_n \rightharpoonup \Xi \), and the uniqueness of the limit gives \(\Xi =y_\infty \).

Concerning (4.26), the sequence \(({\bar{y}}_n)\) is bounded in \(L^4({{\bar{\Omega }}}, L^\infty (0,T;{\widetilde{W}}))\), thus in

$$\begin{aligned} L_{w-*}^4({{\bar{\Omega }}}, L^\infty (0,T;{\widetilde{W}}))\simeq ( L^{4/3}({{\bar{\Omega }}}, L^1(0,T;{\widetilde{W}}^\prime )))^\prime , \end{aligned}$$

where \(w-*\) stands for the weak-* measurability and \(L^{4}_{w-*}(\Omega ,L^{\infty }(0,T;{\widetilde{W}}))\) is defined as follows:

$$\begin{aligned}{} & {} L^4_{w-*}(\Omega ;L^\infty (0,T;{\widetilde{W}}))\\{} & {} \quad =\{ u:\Omega \rightarrow L^\infty (0,T;{\widetilde{W}}) \text { is weakly-* measurable and } {\mathbb {E}}\Vert u\Vert _{L^\infty (0,T;{\widetilde{W}})}^4<\infty \}, \end{aligned}$$

see [15, Thm. 8.20.3] and [29, Lemma 4.3] for a similar argument. Hence, the Banach–Alaoglu theorem ensures (4.26) and \(y_\infty \in L_{w-*}^4({{\bar{\Omega }}}, L^\infty (0,T;{\widetilde{W}}))\).

Since \({{\bar{y}}}_n\) converges strongly to \(y_\infty \) in \({\mathcal {C}}([0,T],(W^{2,4}(D))^d)\) \({{\bar{P}}}\)-a.s. in \({{\bar{\Omega }}}\), \({{\bar{y}}}_n(t)\) converges to \(y_\infty (t)\) in \((W^{2,4}(D))^d\) \({{\bar{P}}}\)-a.s. in \({{\bar{\Omega }}}\), for any \(t\in [0,T]\). Hence \(\Vert {{\bar{y}}}_n(t)\Vert _{W^{2,4}} \rightarrow \Vert y_\infty (t)\Vert _{W^{2,4}} \) \({{\bar{P}}}\)-a.s. in \({{\bar{\Omega }}}\), for any \(t\in [0,T]\). Since \(0 \le \theta _M(\cdot ) \le 1\), the dominated convergence theorem ensures (4.27).

By combining convergence (3) in Lemma 13 with Vitali’s theorem, we obtain (4.28) and (4.29). The equality \( y_\infty (0)={{\bar{y}}}_0\) is a consequence of (4.24). \(\square \)

We recall that \({\mathcal {L}}(P_nU, P_ny_0)={\mathcal {L}}({{\bar{U}}}_n, {{\bar{y}}}_0^n)\) and \((P_nU,P_ny_0)\) converges strongly to \((U,y_0)\) in the space \(L^4(\Omega ;L^4(0,T;H^1(D)))\times L^4(\Omega ,{\widetilde{W}})\). Therefore, we have

$$\begin{aligned} {\mathcal {L}}({{\bar{U}}})= {\mathcal {L}}(U) \text { and } {\mathcal {L}}({{\bar{y}}}_0)= {\mathcal {L}}(y_0). \end{aligned}$$
(4.30)

Lemma 16

The following convergences hold, as \(n\rightarrow \infty \):

$$\begin{aligned}&\theta _M({{\bar{y}}}_n)({{\bar{y}}}_n\cdot \nabla ){{\bar{v}}}_n \rightarrow \theta _M(y_\infty )(y_\infty \cdot \nabla )v_\infty \text { in } L^2(\Omega _T,V^\prime ), \end{aligned}$$
(4.31)
$$\begin{aligned}&\sum _{j}\theta _M({{\bar{y}}}_n){{\bar{v}}}_n^j\nabla {{\bar{y}}}^j_n \rightarrow \sum _{j}\theta _M( y_\infty ) v_\infty ^j\nabla y^j_\infty \text { in } L^2(\Omega _T,V^\prime ),\end{aligned}$$
(4.32)
$$\begin{aligned}&\theta _M({{\bar{y}}}_n)\text {div}({{\bar{A}}}_n^2) \rightarrow \theta _M(y_\infty )\text {div}(A_\infty ^2) \text { in } L^2(\Omega _T,V^\prime ),\end{aligned}$$
(4.33)
$$\begin{aligned}&\theta _M({{\bar{y}}}_n) \text {div}(|{{\bar{A}}}_n|^2{{\bar{A}}}_n) \rightarrow \theta _M(y_\infty ) \text {div}(|A_\infty |^2A_\infty )\text { in } L^2(\Omega _T,V^\prime ) \end{aligned}$$
(4.34)
$$\begin{aligned}&\theta _M({{\bar{y}}}_n)G(\cdot ,{{\bar{y}}}_n) \rightarrow \theta _M(y_\infty )G(\cdot , y_\infty ) \text { in } L^2\Big ({{\bar{\Omega }}}, L^2\big (0,T;L_2({\mathbb {H}},(L^2(D))^d)\big )\Big ) , \end{aligned}$$
(4.35)

where we use the notations \({{\bar{v}}}_n=v({{\bar{y}}}_n)\) and \(v_\infty =v(y_\infty )\).

Proof

It is worth recalling that for any \(u_1,u_2 \in (W^{2,4}(D))^d\)

$$\begin{aligned} \vert \theta _M(u_1)-\theta _M(u_2)\vert \le K(M)\Vert u_1-u_2 \Vert _{W^{2,4}} \text { and } \theta _M(u_1) \le 1. \end{aligned}$$
(4.36)

Let \(\varphi \in V\). Using (4.36) we write

$$\begin{aligned}&\vert \left( \{\theta _M({{\bar{y}}}_n)({{\bar{y}}}_n\cdot \nabla )v({{\bar{y}}}_n)-\theta _M(y_\infty )(y_\infty \cdot \nabla )v(y_\infty )\},\varphi \right) \vert \\&\quad =\left| -[\theta _M({{\bar{y}}}_n)-\theta _M(y_\infty )] b({{\bar{y}}}_n,\varphi ,v({{\bar{y}}}_n))-\theta _M(y_\infty )\right. \\&\qquad \left. [b({{\bar{y}}}_n-y_\infty ,\varphi ,v({{\bar{y}}}_n))-b(y_\infty , \varphi ,v({{\bar{y}}}_n)-v(y_\infty ))]\right| \\&\quad \le K(M)\Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{2,4}} \Vert {{\bar{y}}}_n\Vert _{4}\Vert \varphi \Vert _V\Vert {{\bar{y}}}_n\Vert _{W^{2,4}} \\&\qquad +\Vert {{\bar{y}}}_n -y_\infty \Vert _{4}\Vert \varphi \Vert _V\Vert {{\bar{y}}}_n \Vert _{W^{2,4}}+\Vert y_\infty \Vert _{4}\Vert \varphi \Vert _V \Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{2,4}} \\&\quad \le K(M) \Vert \varphi \Vert _V \Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{2,4}} \left( 1+\Vert {{\bar{y}}}_n\Vert _{W^{2,4}}^2 +\Vert {{\bar{y}}}_n\Vert _{W^{2,4}}+\Vert y_\infty \Vert _{4} \right) . \end{aligned}$$

This estimate together with the Lemma 14 and convergence (4.24) give

$$\begin{aligned}&{{\bar{{\mathbb {E}}}}}\int _0^T \left( \{\theta _M({{\bar{y}}}_n)({{\bar{y}}}_n\cdot \nabla )v({{\bar{y}}}_n)-\theta _M(y_\infty )(y_\infty \cdot \nabla )v(y_\infty )\},\varphi \right) dt\\&\quad \le K(M, \Vert \varphi \Vert _V){{\bar{{\mathbb {E}}}}}\int _0^T \Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{2,4}} \left( 1+\Vert {{\bar{y}}}_n\Vert _{W^{2,4}}^2+\Vert {{\bar{y}}}_n\Vert _{W^{2,4}}+\Vert y_\infty \Vert _{4} \right) dt\\&\quad \le K(M, \Vert \varphi \Vert _V)\Vert {{\bar{y}}}_n-y_\infty \Vert _{L^4({{\bar{\Omega }}}_T;(W^{2,4}(D))^d)}\rightarrow 0. \end{aligned}$$

In a similar way, we can deduce (4.32). Namely, we have

$$\begin{aligned}&\left| \sum _{j}\big (\theta _M({{\bar{y}}}_n)v({{\bar{y}}}_n)^j\nabla {{\bar{y}}}_n^j-\theta _M(y_\infty )v(y_\infty )^j\nabla y_\infty ^j,\varphi \big )\right| \\&\quad =\vert [\theta _M({{\bar{y}}}_n)-\theta _M(y_\infty )]b (\varphi ,{{\bar{y}}}_n,v({{\bar{y}}}_n))+\theta _M(y_\infty )\\&\qquad [b(\varphi ,{{\bar{y}}}_n,v({{\bar{y}}}_n)-v(y_\infty )) +b(\varphi ,{{\bar{y}}}_n-y_\infty ,v(y_\infty ))]\vert \\&\quad \le K(M)\Vert \varphi \Vert _{V}\Vert {{\bar{y}}}_n -y_\infty \Vert _{W^{2,4}}\left( 1+\Vert {{\bar{y}}}_n\Vert ^2_W+ \Vert y_\infty \Vert _W\right) , \end{aligned}$$

and using again Lemma 14 and convergence (4.24), we deduce

$$\begin{aligned} {{\bar{{\mathbb {E}}}}}\int _0^T \Vert \varphi \Vert _{V}\Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{2,4}}\left( 1+\Vert {{\bar{y}}}_n\Vert ^2_W+ \Vert y_\infty \Vert _W\right) dt \rightarrow 0, \end{aligned}$$

which yields (4.32). Proceeding with the same reasoning, we derive

$$\begin{aligned}&\vert \big (\theta _M({{\bar{y}}}_n)\text {div}(A({{\bar{y}}}_n)^2)-\theta _M( y_\infty )\text {div}(A( y_\infty )^2),\varphi \big )\vert \\&\quad =\vert \big ([\theta _M({{\bar{y}}}_n)-\theta _M( y_\infty )]\text { div}(A({{\bar{y}}}_n)^2),\varphi \big )\\&\qquad +\theta _M( y_\infty )\big ( \text { div}([A({{\bar{y}}}_n)-A( y_\infty )] A({{\bar{y}}}_n))+\text { div}(A( y_\infty )[A({{\bar{y}}}_n)-A( y_\infty )]),\varphi \big )\vert \\&\quad \le \vert \theta _M({{\bar{y}}}_n)-\theta _M( y_\infty )\vert \Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}\Vert {{\bar{y}}}_n\Vert _{H^2}\Vert \varphi \Vert _{2}\\&\qquad +(\Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}+\Vert y_\infty \Vert _{W^{1,\infty }})\Vert {{\bar{y}}}_n- y_\infty \Vert _{H^2}\Vert \varphi \Vert _{2}\\&\qquad +(\Vert {{\bar{y}}}_n\Vert _{H^2}+\Vert y_\infty \Vert _{H^2})\Vert {{\bar{y}}}_n- y_\infty \Vert _{W^{1,\infty }}\Vert \varphi \Vert _{2}\\&\quad \le K(M)\Vert {{\bar{y}}}_n- y_\infty \Vert _{W^{2,4}}\Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}\Vert {{\bar{y}}}_n\Vert _{H^2}\Vert \varphi \Vert _{2}\\&\qquad +(\Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}+\Vert y_\infty \Vert _{W^{1,\infty }})\Vert {{\bar{y}}}_n- y_\infty \Vert _{H^2}\Vert \varphi \Vert _{2}\\&\qquad +(\Vert {{\bar{y}}}_n\Vert _{H^2}+\Vert y_\infty \Vert _{H^2})\Vert {{\bar{y}}}_n- y_\infty \Vert _{W^{1,\infty }}\Vert \varphi \Vert _{2}\\&\quad \le K(M)\Vert \varphi \Vert _V\Vert {{\bar{y}}}_n- y_\infty \Vert _{W^{2,4}}\left( \Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}\Vert {{\bar{y}}}_n\Vert _{H^2}+\Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}\right. \\&\qquad \left. +\Vert y_\infty \Vert _{W^{1,\infty }}+\Vert {{\bar{y}}}_n\Vert _{H^2}+\Vert y_\infty \Vert _{H^2}\right) , \end{aligned}$$

and conclude that

$$\begin{aligned} {{\bar{{\mathbb {E}}}}}\int _0^T \vert \big (\theta _M({{\bar{y}}}_n)\text {div}(A({{\bar{y}}}_n)^2)-\theta _M( y_\infty )\text {div}(A( y_\infty )^2),\varphi \big )\vert dt \rightarrow 0. \end{aligned}$$

Concerning (4.34), we have

$$\begin{aligned}&\vert \big (\theta _M({{\bar{y}}}_n)\text {div}(|A({{\bar{y}}}_n)|^2A({{\bar{y}}}_n))-\theta _M(y_\infty )\text {div}(|A(y_\infty )|^2A(y_\infty )),\varphi \big )\vert \\&\quad =\vert (\theta _M({{\bar{y}}}_n)-\theta _M(y_\infty ))\big (\text {div}(|A({{\bar{y}}}_n)|^2A({{\bar{y}}}_n)),\varphi \big )\\&\qquad +\theta _M(y_\infty )\big (\text {div}(|A({{\bar{y}}}_n)|^2A({{\bar{y}}}_n-y_\infty )),\varphi \big )\\&\qquad +\theta _M(y_\infty )\big (\text {div}([A({{\bar{y}}}_n)\cdot A({{\bar{y}}}_n-y_\infty )+A({{\bar{y}}}_n-y_\infty )\cdot A(y_\infty )]A(y_\infty )),\varphi \big )\vert \\&\quad \le K(M)\Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{2,4}}\Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}^2\Vert {{\bar{y}}}_n\Vert _{H^2}\Vert \varphi \Vert _{2}\\&\qquad +C(\Vert y_\infty \Vert _{W^{1,\infty }}\Vert {{\bar{y}}}_n\Vert _{H^2}+\Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}\Vert y_\infty \Vert _{H^2})\Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{1,\infty }}\Vert \varphi \Vert _{2}\\&\qquad +C(\Vert {{\bar{y}}}_n\Vert _{W^{1,\infty }}+\Vert y_\infty \Vert _{W^{1,\infty }})\Vert y_\infty \Vert _{W^{1,\infty }}\Vert {{\bar{y}}}_n-y_\infty \Vert _{H^2}\Vert \varphi \Vert _{2}\\&\qquad +C\Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{1,\infty }}\Vert y_\infty \Vert _{H^2}\Vert y_\infty \Vert _{W^{1,\infty }}\Vert \varphi \Vert _{2}, \end{aligned}$$

which gives

$$\begin{aligned} {{\bar{{\mathbb {E}}}}}\int _0^T \vert \big (\theta _M({{\bar{y}}}_n)\text {div}(|A({{\bar{y}}}_n)|^2A({{\bar{y}}}_n))-\theta _M(y_\infty )\text {div}(|A(y_\infty )|^2A(y_\infty )),\varphi \big )\vert dt \rightarrow 0. \end{aligned}$$

Finally, the properties (2.7) and (4.36) allow us to write

$$\begin{aligned}&\Vert \theta _M({{\bar{y}}}_n)G(\cdot ,{{\bar{y}}}_n)-\theta _M(y_\infty )G(\cdot , y_\infty )\Vert _{L^2\big ({{\bar{\Omega }}}, L^2\big (0,T;L_2({\mathbb {H}},(L^2(D))^d)\big )\big )}^2\\&\quad = {{\bar{{\mathbb {E}}}}}\sum _{{\textbf{k}}\ge 1}\int _0^{T} \Vert \theta _M({{\bar{y}}}_n)\sigma _{\textbf{k}}(\cdot ,{{\bar{y}}}_n)-\theta _M(y_\infty )\sigma _{\textbf{k}}(\cdot ,y_\infty ) \Vert _2^2dt\\&\quad \le K(M) {{\bar{{\mathbb {E}}}}}\sum _{{\textbf{k}}\ge 1}\int _0^{T} \Big (\vert \theta _M({{\bar{y}}}_n)-\theta _M(y_\infty )\vert ^2\Vert \sigma _{\textbf{k}}(\cdot ,{{\bar{y}}}_n) \Vert _2^2 \\&\qquad +|\theta _M(y_\infty )|^2\Vert \sigma _{\textbf{k}}(\cdot ,{{\bar{y}}}_n)-\sigma _{\textbf{k}}(\cdot ,y_\infty ) \Vert _2^2\Big )dt\\&\quad \le K(M,L) {{\bar{{\mathbb {E}}}}}\int _0^{T}\Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{2,4}}^2\left( 1+\Vert {{\bar{y}}}_n\Vert ^2_2\right) dt . \end{aligned}$$

Using Lemma 14 and (4.24), we obtain

$$\begin{aligned} {{\bar{{\mathbb {E}}}}}\int _0^{T}\Vert {{\bar{y}}}_n-y_\infty \Vert _{W^{2,4}}^2\left( 1+\Vert {{\bar{y}}}_n\Vert ^2_2\right) dt \rightarrow 0, \text { as } n\rightarrow \infty , \end{aligned}$$

which gives (4.35). \(\square \)

The convergence (4.35) implies the following convergence of the stochastic term.

Lemma 17

We have

$$\begin{aligned}&\int _0^\cdot \theta _M({{\bar{y}}}_n)G(\cdot ,{{\bar{y}}}_n)d{\mathcal {W}}_\infty \nonumber \\&\quad \rightarrow \int _0^\cdot \theta _M(y_\infty )G(\cdot ,y_\infty )d{\mathcal {W}}_\infty \text { in } L^2\big ({{\bar{\Omega }}}, {\mathcal {C}}\big ([0,T],(L^2(D))^d\big )\big ), \quad \text {as }n\rightarrow \infty . \end{aligned}$$
(4.37)

Proof

Thanks to the Burkholder–Davis–Gundy inequality and (4.35), we obtain

$$\begin{aligned}&{{\bar{{\mathbb {E}}}}} \sup _{r\in [0,T]}\left\| \int _0^r [\theta _M({{\bar{y}}}_n)G(\cdot ,{{\bar{y}}}_n) -\theta _M(y_\infty )G(\cdot ,y_\infty )]d{\mathcal {W}}_\infty \right\| _{2}^2\\&\quad \le C {{\bar{{\mathbb {E}}}}} \sum _{{\textbf{k}}\ge 1} \int _0^T\Vert \theta _M({{\bar{y}}}_n)\sigma _{\textbf{k}}(\cdot ,{{\bar{y}}}_n) -\theta _M(y_\infty )\sigma _{\textbf{k}}(\cdot ,y_\infty )\Vert _{2}^2 ds\rightarrow 0, \quad \text {as }n\rightarrow \infty . \end{aligned}$$

\(\square \)

4.6 Proof of Theorem 5

Let \(e_i \in W_n\) and \(t\in [0,T]\). From (4.20), we have

$$\begin{aligned} \left\{ \begin{aligned}&(v({{\bar{y}}}_n(t)),e_i)-(v({{\bar{y}}}_{0}^n),e_i)\\&\quad =\displaystyle \int _0^t\big (\nu \Delta {{\bar{y}}}_n-\theta _M({{\bar{y}}}_n)({{\bar{y}}}_n\cdot \nabla )v({{\bar{y}}}_n)\\&\qquad -\sum _{j}\theta _M({{\bar{y}}}_n)v({{\bar{y}}}_n)^j\nabla {{\bar{y}}}^j_n +(\alpha _1+\alpha _2)\theta _M({{\bar{y}}}_n)\text {div}(A({{\bar{y}}}_n)^2) \\&\qquad +\beta \theta _M({{\bar{y}}}_n) \text {div}(|A({{\bar{y}}}_n)|^2A({{\bar{y}}}_n))+{{\bar{U}}}_n, e_i\big )ds+\displaystyle \int _0^t\big (\theta _M({{\bar{y}}}_n) G(\cdot ,{{\bar{y}}}_n),e_i\big )d{\mathcal {W}}_\infty ,\\&{{\bar{y}}}_n(0)={{\bar{y}}}_{0}^n. \end{aligned}\right. \end{aligned}$$
(4.38)

By letting \(n\rightarrow \infty \) in (4.38) and combining Lemmas 15, 16 and 17 with the equality (4.30), we deduce

$$\begin{aligned} \left\{ \begin{aligned}&(v(y_\infty (t)),e_i)-(v({{\bar{y}}}_0),e_i)\\&\quad =\displaystyle \int _0^t\big (\nu \Delta y_\infty -\theta _M(y_\infty )(y_\infty \cdot \nabla )v(y_\infty )-\sum _{j}\theta _M(y_\infty )v(y_\infty )^j\nabla y_{\infty }^j\\&\qquad +(\alpha _1+\alpha _2)\theta _M(y_\infty )\text {div}(A(y_\infty )^2)+\beta \theta _M(y_\infty ) \text {div}(|A(y_\infty )|^2A(y_\infty ))+{{\bar{U}}}, e_i\big )ds \\&\qquad +\displaystyle \int _0^t\big (\theta _M(y_\infty )G(\cdot ,y_\infty ),e_i\big )d{\mathcal {W}}_\infty , \\&y_\infty (0)={{\bar{y}}}_{0}. \end{aligned}\right. \end{aligned}$$
(4.39)

Since \(W\) is a separable Hilbert space, the last equality holds for any \(\phi \in W\). Consequently, \({{\bar{P}}}\)-a.s. and for any \(t\in [0,T]\)

$$\begin{aligned} (y_\infty (t),\phi )_V&=(y_\infty (0),\phi )_V+\displaystyle \int _0^t\big \{\big (\nu \Delta y_\infty -\theta _M(y_\infty )(y_\infty \cdot \nabla )v(y_\infty )\nonumber \\&\quad -\sum _{j}\theta _M(y_\infty )v(y_\infty )^j\nabla y_\infty ^j\nonumber \\&\quad +(\alpha _1+\alpha _2)\theta _M(y_\infty )\text {div}[A(y_\infty )^2],\phi \big )\nonumber \\&\quad +\big (\beta \theta _M(y_\infty )\text {div}[|A(y_\infty )|^2A(y_\infty )]+{{\bar{U}}},\phi \big )\big \}ds\nonumber \\&\quad +\displaystyle \int _0^t\theta _M(y_\infty )\big (G(\cdot ,y_\infty ),\phi \big ) d{{\mathcal {W}}}_\infty \text { for all } \phi \in V, \end{aligned}$$
(4.40)

and \({\mathcal {L}}(y_\infty (0),{{\bar{U}}})={\mathcal {L}}(y_0,U)\).

It is very important to note that, a priori, (4.40) holds \({{\bar{P}}}\)-a.s., for all \(t\in [0,T]\), in \(V^\prime \); but we have proved that \(y_\infty \in L^p({{\bar{\Omega }}}; L^\infty (0,T;{\widetilde{W}}))\), which ensures that the third derivative of \(y_\infty \) belongs to \(L^p({{\bar{\Omega }}}; L^\infty (0,T;(L^2(D))^d))\). Therefore, (4.40) holds in the \(L^2(D)\)-sense (not merely in the distributional sense).

Our aim is to construct a probabilistic strong solution. The idea is to prove a uniqueness result and use the link between probabilistic weak and strong solutions via the Yamada–Watanabe theorem. Unfortunately, (4.40) is a strongly nonlinear system and uniqueness for (4.40) does not hold globally in time. For that reason, we will introduce a modified problem based on (4.40), for which uniqueness holds, and then use the generalization of the Yamada–Watanabe–Engelbert theorem (see [22]) to obtain a probabilistic strong solution for the modified problem. This is the aim of Sect. 5.

5 The strong solution

5.1 Local martingale solution of (2.1)

In order to define a strong local solution to (2.1), we need to construct the solution on the initial probability space. For that, define the following sequence of stopping times

$$\begin{aligned} \tau _M:=\inf \{ t\ge 0: \Vert y_\infty (t)\Vert _{W^{2,4}} \ge M\}\wedge T. \end{aligned}$$

From (4.24), we recall that \(y_\infty \in L^2\big ({{\bar{\Omega }}}; {\mathcal {C}}([0,T];(W^{2,4}(D))^d)\big )\), so \(\tau _M\) is a well-defined stopping time. It is worth noting that, since \(y_\infty \) is bounded in \(L^p({{\bar{\Omega }}};L^\infty (0,T;{\widetilde{W}}))\), \(\tau _M\) is a.s. strictly positive provided M is chosen large enough. Then \((y_\infty ,\tau _M)\) is a local martingale solution of (2.1) such that

$$\begin{aligned} y_\infty (\cdot \wedge \tau _M) \in {\mathcal {C}}([0,T]; (W^{2,4}(D))^d)\quad {{\bar{P}}} \text { a.s.} \end{aligned}$$

and \(y_\infty (\cdot \wedge \tau _M) \in L^p({{\bar{\Omega }}};L^\infty (0,T;{\widetilde{W}})).\) Set \({{\bar{y}}}(t):=y_\infty (t\wedge \tau _M)\) for \(t\in [0,T]\) and note that, since \(y_\infty \) is continuous, one has

$$\begin{aligned} \tau _M=\inf \{ t\ge 0: \Vert {{\bar{y}}}(t)\Vert _{W^{2,4}} \ge M\} \wedge T. \end{aligned}$$
(5.1)

We will refer to \({{\bar{y}}}\) as the solution of the "modified problem". From Theorem 5, \(({{\bar{y}}}, \tau _M)\), where \(\tau _M\) is given by (5.1), satisfies the following equation:

$$\begin{aligned}&({{\bar{y}}}(t),\phi )_V-\displaystyle \int _0^{t\wedge \tau _M}\big \{(\nu \Delta {{\bar{y}}}-({{\bar{y}}}\cdot \nabla )v({{\bar{y}}})-\sum _{j}v({{\bar{y}}})^j\nabla {{\bar{y}}}^j,\phi )\nonumber \\&\quad +((\alpha _1+\alpha _2)\text {div}(A({{\bar{y}}})^2)+\beta \text {div}(|A({{\bar{y}}})|^2A({{\bar{y}}}))+{{\bar{U}}},\phi )\big \}ds\nonumber \\&\quad =({\bar{y}}(0),\phi )_V+\displaystyle \int _0^{t\wedge \tau _M}(G(\cdot ,{{\bar{y}}}),\phi )d\mathcal {{{\bar{W}}}} \quad {{\bar{P}}}\text { a.s. in } {\bar{\Omega }} \text { for all } t\in [0,T]. \end{aligned}$$
(5.2)
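For intuition, the first-exit structure of \(\tau _M\) in (5.1) has a direct discrete analogue. The sketch below is purely illustrative: the norm record is synthetic (a hypothetical growing function, not produced by the fluid model), and the helper `hitting_time` mimics \(\inf \{ t\ge 0: \Vert {{\bar{y}}}(t)\Vert _{W^{2,4}} \ge M\}\wedge T\) on a time grid.

```python
import numpy as np

def hitting_time(norms: np.ndarray, times: np.ndarray, M: float, T: float) -> float:
    """Discrete analogue of tau_M = inf{t >= 0 : ||y(t)|| >= M} ∧ T:
    first grid time at which the recorded norm reaches level M, capped at T."""
    hit = np.nonzero(norms >= M)[0]
    return min(T, float(times[hit[0]])) if hit.size else T

times = np.linspace(0.0, 1.0, 101)    # horizon T = 1
norms = 1.0 + 3.0 * times             # synthetic norm record ||y(t)|| = 1 + 3t
print(hitting_time(norms, times, M=2.5, T=1.0))   # 1 + 3t >= 2.5 first at t = 0.5
print(hitting_time(norms, times, M=10.0, T=1.0))  # level never reached: returns T = 1.0
```

The stopped path \(t\mapsto y(t\wedge \tau _M)\) then never exceeds the level \(M\) (up to the grid resolution), which is the reason the cut-off \(\theta _M\) disappears in (5.2).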

5.2 Local stability for (5.2)

Our aim is to prove the following stability result of (5.2).

Lemma 18

Assume that \(({\mathcal {W}}(t))_{t\ge 0}\) is a cylindrical Wiener process in \(H_0\) with respect to the stochastic basis \((\Omega ,{\mathcal {F}}, ({\mathcal {F}}_t)_{t\ge 0},P)\) and \(y_1,y_2\) are two solutions to (5.2) with respect to the initial conditions \(y_0^1,y_0^2\) and the forces \(U_1,U_2\), respectively, on \((\Omega ,{\mathcal {F}}, ({\mathcal {F}}_t)_{t\ge 0},P)\). Then there exists \(C(M,L,T)>0\) such that

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0, \tau _M^1\wedge \tau _M^2]}\Vert y_1(s)-y_2(s)\Vert _V^2&\le C(M,L,T)\big [{\mathbb {E}}\Vert y_0^1-y_0^2\Vert _{V}^2\nonumber \\&\quad +{\mathbb {E}}\int _0^{\tau _M^1\wedge \tau _M^2}\Vert U_1(s)-U_2(s)\Vert _{2}^2ds\big ]. \end{aligned}$$
(5.3)

Proof

Let \((y_1,\tau _M^1)\) and \((y_2,\tau _M^2)\) be two solutions of (5.2), with initial conditions \(y_0^1,y_0^2\) and forces \(U_1,U_2\), respectively, where \(y_i \in {\mathcal {C}}([0,T]; (W^{2,4}(D))^d)\), \(i=1,2\), P-a.s.

Set \(y=y_1-y_2\), \(y_0=y_0^1-y_0^2\) and \(U=U_1-U_2\); then, for any \(t\in [0,\tau _M^1\wedge \tau _M^2]\), we have

$$\begin{aligned} v(y(t))-v(y_0)&=-\int _0^t\nabla ({\bar{{\textbf{P}}}}_1 -{\bar{{\textbf{P}}}}_2)ds+\int _0^t\big (\nu \Delta y-\big [(y\cdot \nabla )y_1+(y_2\cdot \nabla )y\big ]\big )ds\\&\quad +\int _0^t[\text {div}(N(y_1))-\text {div}(N(y_2))]ds +\int _0^t[\text {div}(S(y_1))-\text {div}(S(y_2))]ds\\&\quad +\int _0^tUds+ \int _0^t[G(\cdot ,y_1)-G(\cdot ,y_2)]d{\mathcal {W}}, \end{aligned}$$

where we used an equivalent form of (5.2), see [7, Appendix], such that

$$\begin{aligned} S(y):=\beta \Big ( |A(y)|^2A(y)\Big ),\quad N(y):=\alpha _1\big ( y \cdot \nabla A(y)+(\nabla y)^TA(y)+A(y)\nabla y\big )+\alpha _2(A(y))^2. \end{aligned}$$

Let \(t\in [0,\tau _M^1\wedge \tau _M^2]\). By applying the operator \((I-\alpha _1{\mathbb {P}}\Delta )^{-1}\) to the last equation and using the Itô formula, one gets

$$\begin{aligned}&d\Vert y_1-y_2\Vert _V^2+4\nu \Vert {\mathbb {D}} y\Vert _2^2dt\\&\quad =-2\int _D\big [(y\cdot \nabla )y_1+(y_2\cdot \nabla )y\big ]y dx dt +2\langle \text { div}(N(y_1)-N(y_2)), y\rangle dt \\&\qquad + 2\langle \text { div}(S(y_1)-S(y_2)),y\rangle dt+2(U_1-U_2, y_1-y_2)dt\\&\qquad +2(G(\cdot ,y_1)-G(\cdot ,y_2), y_1-y_2)d{\mathcal {W}}+\sum _{{\textbf{k}}\ge 1}\Vert {{\tilde{\sigma }}}_{\textbf{k}}^1-{\tilde{\sigma }}_{\textbf{k}}^2\Vert _V^2dt\\&\quad =(I_1+I_2+I_3+I_4)dt+I_5d{\mathcal {W}}+I_6dt, \end{aligned}$$

where \({{\tilde{\sigma }}}_{\textbf{k}}^i\) is the solution of (2.5) with \(f_i=\sigma _{\textbf{k}}(\cdot ,y_i)\), for all \({\textbf{k}}\ge 1\), \(i=1,2\). Notice that, by using [6, Theorem 3] and (2.7), we deduce

$$\begin{aligned} I_6=\sum _{{\textbf{k}}\ge 1}\Vert {{\tilde{\sigma }}}_{\textbf{k}}^1-{\tilde{\sigma }}_{\textbf{k}}^2\Vert _V^2 \le \sum _{{\textbf{k}}\ge 1}\Vert \sigma _{\textbf{k}}(\cdot ,y_1)-\sigma _{\textbf{k}}(\cdot ,y_2)\Vert _2^2 \le L \Vert y_1-y_2\Vert _{2}^2. \end{aligned}$$

We now estimate \(I_i\), \(i=1,\ldots ,4\). Since \(V\hookrightarrow L^4(D)\) and \(\int _D(y_2\cdot \nabla )y\cdot y\,dx=0\) (because \(\text {div}\,y_2=0\)), the first term satisfies

$$\begin{aligned} \vert I_1\vert&=2\left| \int _D(y\cdot \nabla )y_1\cdot y dx\right| \le C\Vert y\Vert _4^2\Vert \nabla y_1\Vert _2\le C\Vert y\Vert _V^2\Vert \nabla y_1\Vert _2 \le C\Vert y\Vert _V^2\Vert y_1\Vert _{H^1}. \end{aligned}$$

After an integration by parts, the term \(I_3\) can be treated using the same arguments as in [8, Sect. 3]: the boundary terms vanish and we have

$$\begin{aligned} I_3&= 2\langle \text {div}(S(y_1)-S(y_2)),y_1-y_2\rangle =-2\int _D(S(y_1)-S(y_2))\cdot \nabla ydx\\&=-\dfrac{\beta }{2}(\int _D(|A(y_1)|^2-|A(y_2)|^2)^2dx +\int _D(|A(y_1)|^2+|A(y_2)|^2)|A(y_1-y_2)|^2dx) \le 0. \end{aligned}$$
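The sign of \(I_3\) rests on the pointwise algebraic identity (with \(A_i:=A(y_i)\), so that \(A_1-A_2=A(y)\))

$$\begin{aligned} \big (|A_1|^2A_1-|A_2|^2A_2\big )\cdot (A_1-A_2)=\tfrac{1}{2}\big (|A_1|^2-|A_2|^2\big )^2+\tfrac{1}{2}\big (|A_1|^2+|A_2|^2\big )\vert A_1-A_2\vert ^2\ge 0, \end{aligned}$$

which can be checked by expanding: both sides equal \(|A_1|^4+|A_2|^4-\big (|A_1|^2+|A_2|^2\big )A_1\cdot A_2\).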

Concerning \(I_4\), one has

$$\begin{aligned} |I_4|= 2\left| \int _D(U_1-U_2)\cdot ydx\right| \le \Vert U_1-U_2\Vert _2^2+\Vert y\Vert _2^2\le \Vert U_1-U_2\Vert _2^2+\Vert y\Vert _V^2. \end{aligned}$$

Let us estimate the term \(I_2\). Integrating by parts and taking into account that the boundary terms vanish (see [8, Sect. 3]), we deduce

$$\begin{aligned} I_2&= 2\langle \text { div}(N(y_1)-N(y_2)), y\rangle =-2 \int _D(N(y_1)-N(y_2))\cdot \nabla ydx\\&=-\alpha _2\int _D\big (A(y_1)^2-A(y_2)^2\big )\cdot A(y)dx-\alpha _1\int _D\big (y_1 \cdot \nabla A(y_1)-y_2 \cdot \nabla A(y_2)\big )\cdot A(y)dx\\&\quad -\alpha _1\int _D((\nabla y_1)^TA(y_1)+A(y_1)\nabla y_1-(\nabla y_2)^TA(y_2)-A(y_2)\nabla y_2)\cdot A(y)dx\\&=-\alpha _2I_2^1-\alpha _1I_2^2-\alpha _1I_2^3. \end{aligned}$$

Since

$$\begin{aligned} I_2^1&= \int _D\big (A(y_1)^2-A(y_2)^2\big )\cdot A(y)dx=\int _D\big (A(y)A(y_1)+A(y_2)A(y)\big )\cdot A(y)dx;\\ I_2^2&=\int _D\big (y_1 \cdot \nabla A(y_1)-y_2 \cdot \nabla A(y_2)\big )\cdot A(y)dx\\ {}&=\int _D\big (y_1 \cdot \nabla A(y_1-y_2)+(y_1-y_2)\cdot \nabla A(y_2)\big )\cdot A(y)dx\\&=\int _D\big (y\cdot \nabla A(y_2)\big )\cdot A(y)dx;\\ I_2^3&=\int _D((\nabla y_1)^TA(y_1)+A(y_1)\nabla y_1-(\nabla y_2)^TA(y_2)-A(y_2)\nabla y_2)\cdot A(y)dx\\ {}&=2\int _D\big ((A(y_1)A(y))\cdot \nabla y_1-(A(y_2)A(y))\cdot \nabla y_2\big )dx\\&=2\int _D\big ((A(y))^2\cdot \nabla y_1+(A(y_2)A(y))\cdot \nabla y\big ) dx; \end{aligned}$$

Hölder's inequality and the embedding \(H^1(D) \hookrightarrow L^4(D)\) yield

$$\begin{aligned} \vert I_2^1\vert&\le \int _D\vert A(y)A(y_1)+A(y_2)A(y)\vert \cdot \vert A(y)\vert dx \le C(\Vert y_1\Vert _{W^{1,\infty }}+\Vert y_2\Vert _{W^{1,\infty }})\Vert \nabla y\Vert _{2}^2;\\ \vert I_2^2\vert&\le \int _D\vert \big (y\cdot \nabla A(y_2)\big )\cdot A(y) \vert dx\le C\Vert y\Vert _4\Vert y_2\Vert _{W^{2,4}}\Vert \nabla y\Vert _{2}\le C\Vert y_2\Vert _{W^{2,4}}\Vert \nabla y\Vert _{2}^2;\\ \vert I_2^3\vert&\le C\int _D\vert (A(y))^2\cdot \nabla y_1+(A(y_2)A(y))\cdot \nabla y\vert dx \le C(\Vert y_1\Vert _{W^{1,\infty }}+\Vert y_2\Vert _{W^{1,\infty }})\Vert \nabla y\Vert _{2}^2. \end{aligned}$$

Then the embedding \(W^{2,4}(D)\hookrightarrow W^{1,\infty }(D)\) gives \(\vert I_2\vert \le C (\Vert y_1\Vert _{W^{2,4}}+\Vert y_2\Vert _{W^{2,4}})\Vert y\Vert _{V}^2.\)

By gathering the previous estimates, there exists \(M_0>0\) such that

$$\begin{aligned}&\Vert y(t)\Vert _V^2+4\nu \int _0^t\Vert {\mathbb {D}}y\Vert _{2}^2ds\nonumber \\&\quad \le \Vert y_0\Vert _{V}^2+M_0\int _0^t(\Vert y_1\Vert _{W^{2,4}}+\Vert y_2\Vert _{W^{2,4}}+1)\Vert y\Vert _{V}^2ds+\int _0^t\Vert U_1-U_2\Vert _{2}^2ds\nonumber \\&\qquad +2\int _0^t(G(\cdot ,y_1)-G(\cdot ,y_2), y_1-y_2)d{\mathcal {W}}. \end{aligned}$$

Thanks to the Burkholder–Davis–Gundy inequality, for any \(\delta >0\), one has

$$\begin{aligned}&2{\mathbb {E}}\sup _{s\in [0, \tau _M^1\wedge \tau _M^2]}\vert \int _0^s (G(\cdot ,y_1)-G(\cdot ,y_2), y)d{\mathcal {W}}\vert \\&\quad =2{\mathbb {E}}\sup _{s\in [0, \tau _M^1\wedge \tau _M^2]}\left| \sum _{{\textbf{k}}\ge 1}\int _0^s (\sigma _{\textbf{k}}(\cdot ,y_1)-\sigma _{\textbf{k}}(\cdot ,y_2),y)d\beta _{\textbf{k}}\right| \\&\quad \le C{\mathbb {E}}[\sum _{{\textbf{k}}\ge 1}\int _0^{\tau _M^1\wedge \tau _M^2} (\sigma _{\textbf{k}}(\cdot ,y_1)-\sigma _{\textbf{k}}(\cdot ,y_2), y)^2ds]^{1/2}\\&\quad \le \delta {\mathbb {E}}\sup _{s\in [0, \tau _M^1\wedge \tau _M^2]}\Vert y\Vert _2^2+C_\delta {\mathbb {E}}\int _0^{\tau _M^1\wedge \tau _M^2}\Vert y\Vert _2^2ds. \end{aligned}$$
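For completeness, the last step follows from the Cauchy–Schwarz inequality, the Lipschitz condition (2.7) and Young's inequality \(ab\le \delta a^2+\tfrac{1}{4\delta }b^2\):

$$\begin{aligned} C{\mathbb {E}}\Big [\sum _{{\textbf{k}}\ge 1}\int _0^{\tau _M^1\wedge \tau _M^2} (\sigma _{\textbf{k}}(\cdot ,y_1)-\sigma _{\textbf{k}}(\cdot ,y_2), y)^2ds\Big ]^{1/2}&\le C{\mathbb {E}}\Big [\sup _{s\in [0, \tau _M^1\wedge \tau _M^2]}\Vert y\Vert _2^2\, L\int _0^{\tau _M^1\wedge \tau _M^2}\Vert y\Vert _2^2ds\Big ]^{1/2}\\&\le \delta {\mathbb {E}}\sup _{s\in [0, \tau _M^1\wedge \tau _M^2]}\Vert y\Vert _2^2+\frac{C^2L}{4\delta }{\mathbb {E}}\int _0^{\tau _M^1\wedge \tau _M^2}\Vert y\Vert _2^2ds. \end{aligned}$$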

Choosing \(\delta \) appropriately and taking into account that \(t\in [0,\tau _M^1\wedge \tau _M^2]\), we obtain

$$\begin{aligned} {\mathbb {E}}\sup _{s\in [0, t\wedge \tau _M^1\wedge \tau _M^2]}\Vert y(s)\Vert _V^2&\le {\mathbb {E}}\Vert y_0\Vert _{V}^2+{\mathbb {E}}\int _0^{t\wedge \tau _M^1\wedge \tau _M^2}\Vert U_1(s)-U_2(s)\Vert _{2}^2ds \nonumber \\&\quad +M_0{\mathbb {E}}\int _0^{t\wedge \tau _M^1\wedge \tau _M^2}(\Vert y_1(s)\Vert _{W^{2,4}}+\Vert y_2(s)\Vert _{W^{2,4}}+1)\Vert y(s)\Vert _{V}^2ds\nonumber \\&\le {\mathbb {E}}\Vert y_0\Vert _{V}^2+{\mathbb {E}}\int _0^{t\wedge \tau _M^1\wedge \tau _M^2}\Vert U_1(s)-U_2(s)\Vert _{2}^2ds\nonumber \\&\quad +M_0(2M+1){\mathbb {E}}\int _0^{t\wedge \tau _M^1\wedge \tau _M^2}\Vert y(s)\Vert _{V}^2ds. \end{aligned}$$
(5.4)

Finally, Gronwall's inequality completes the proof of Lemma 18. \(\square \)
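The Gronwall step admits a simple discrete analogue: if \(u_{n+1}\le (1+ch)u_n+h\,g_n\) with \(g_n\ge 0\), then \(u_N\le e^{cT}\big (u_0+h\sum _n g_n\big )\) for \(T=Nh\). The toy check below is only a sanity illustration of this mechanism, with synthetic constants standing in for \(M_0(2M+1)\) and the forcing term in (5.3):

```python
import math

def iterate_and_bound(u0: float, c: float, g: list, h: float):
    """Run the recursion u_{n+1} = (1 + c*h)*u_n + h*g[n] (the worst case of
    the inequality) and return the final iterate together with the discrete
    Gronwall bound exp(c*T) * (u0 + h*sum(g)), where T = len(g)*h."""
    u = u0
    for gn in g:
        u = (1.0 + c * h) * u + h * gn
    T = len(g) * h
    return u, math.exp(c * T) * (u0 + h * sum(g))

u, bound = iterate_and_bound(u0=0.1, c=2.0, g=[0.5] * 100, h=0.01)
print(u <= bound)  # True: the iterate stays below the Gronwall bound
```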

5.3 Pathwise uniqueness of (5.2)

If \(y_0^1=y_0^2\) and \(U_1=U_2\), it follows from Lemma 18 that the corresponding solutions \(y_1\) and \(y_2\) coincide \({{\bar{P}}}\)-a.s. for any \(t\in [0,\tau _M^1\wedge \tau _M^2]\). Then, from the definition (5.1) of the stopping time, we obtain \(\tau _M^1=\tau _M^2\) \({{\bar{P}}}\)-a.s. Moreover, since \(y_i(t)=y_i(\tau _M^i)\) for any \(\tau _M^i< t\le T\), \(i=1,2\), we conclude that pathwise uniqueness holds for (5.2).

5.4 Strong solution of (5.2)

Let \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\ge 0},P)\) be a stochastic basis and \(({\mathcal {W}}(t))_{t\ge 0}\) be a \(({\mathcal {F}}_t)\)-cylindrical Wiener process with values in \(H_0\). From Sects. 5.1 and 5.3, we obtain the existence of a weak probabilistic solution and pathwise (pointwise) uniqueness for compatible solutions (see [22, Def. 3.1 & Rmk. 3.5]) of the modified problem (5.2). By using [22, Thm. 3.14], we deduce

Lemma 19

Let \(M \in {\mathbb {N}}\) be large enough. There exist a unique strong solution of (5.2) defined on \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\ge 0},P)\), denoted by \(y^M\), and a sequence \((\zeta _M)_M\) of a.s. strictly positive \(({\mathcal {F}}_t)\)-stopping times such that:

  • \(y^M\) is a W-valued predictable process and \( \zeta _M:=\inf \{ t\ge 0: \Vert y^M(t)\Vert _{W^{2,4}} \ge M\}\wedge T. \)

  • \(y^M\) belongs to the space

    $$\begin{aligned} L^p(\Omega ;{\mathcal {C}}([0,T],(W^{2,4}(D))^d))\cap L^p_{w-*}(\Omega ;L^\infty (0,T;{\widetilde{W}})); \end{aligned}$$
  • \(y^M\) satisfies the following equality, P-a.s. for all \(t\in [0,T]\)

    $$\begin{aligned} (y^M(t),\phi )_V&=(y_0,\phi )_V+\displaystyle \int _0^{t\wedge \zeta _M}\big (\nu \Delta y^M-(y^M\cdot \nabla )v(y^M)-\sum _{j}v(y^M)^j\nabla (y^M)^j\nonumber \\&\quad +(\alpha _1+\alpha _2)\text {div}(A(y^M)^2) +\beta \text {div}(|A(y^M)|^2A(y^M))+ U, \phi \big ) ds \\&\quad +\displaystyle \int _0^{t\wedge \zeta _M}(G(\cdot ,y^M),\phi )d{\mathcal {W}}, \quad \text { for all } \phi \in V.\qquad \nonumber \end{aligned}$$
    (5.5)

6 Proof of Theorem 4

Let \(M \in {\mathbb {N}}\) be large enough and note that \((y^M, \zeta _M)\) (see Lemma 19) is a local strong solution to (2.1) in the sense of Definition 1.

6.1 Local pathwise uniqueness

Let \((z_1,\varrho _1)\) and \((z_2,\varrho _2)\) be two local strong solutions to (2.1), in the sense of Definition 1. Define the stopping time

$$\begin{aligned} \theta _S:=\inf \{ t\ge 0: \Vert z_1(t\wedge \varrho _1)\Vert _{W^{2,4}}+\Vert z_2(t\wedge \varrho _2)\Vert _{W^{2,4}} \ge S\}\wedge T; \quad S\in {\mathbb {N}}. \end{aligned}$$

Note that \(\theta _S \rightarrow T\) as \(S\rightarrow \infty \), since \((z_i)_{i=1,2}\) are bounded in \(L^p_{w-*}(\Omega ;L^\infty (0,T;{\widetilde{W}}))\) by a positive constant independent of S. By using the same arguments as in the proof of Lemma 18, we deduce

$$\begin{aligned} P\big (z_1(t)=z_2(t); \quad \forall t \in [0,\varrho _1\wedge \varrho _2\wedge \theta _S]\big )=1. \end{aligned}$$

By letting \(S\rightarrow \infty \), we are able to get the local pathwise uniqueness, in the sense of Definition 2 (i). Namely

$$\begin{aligned} P\big (z_1(t)=z_2(t); \quad \forall t \in [0,\varrho _1\wedge \varrho _2]\big )=1. \end{aligned}$$

6.2 Maximal strong solution

Our aim is to show that the solution can be extended until a maximal time interval. It is worth mentioning that analogous extension results can be found in the literature (see e.g. [4, 19, 20]).

Let \({\mathcal {A}}\) be the set of all stopping times corresponding to a local pathwise solution of (2.1) starting from the initial datum \(y_0\) and in the presence of the external force U. Thanks to Lemma 19, the set \({\mathcal {A}}\) is nonempty. Set \({\textbf{t}}=\sup {\mathcal {A}}\) and choose an increasing sequence \((\zeta _M)_M\subset {\mathcal {A}}\) such that \(\displaystyle \lim _{M\rightarrow \infty } \zeta _M={\textbf{t}}\); recall that \( \zeta _M:=\inf \{ t\ge 0: \Vert y^M(t)\Vert _{W^{2,4}} \ge M\}\wedge T \) and that \(y^M\) satisfies (5.5). Due to pathwise uniqueness, we may define a solution y on \(\displaystyle \bigcup _{M\in {\mathbb {N}}}[0,\zeta _M]\) by setting \(y:=y^M\) on \([0,\zeta _M]\).

For each \(m>0\), consider

$$\begin{aligned} \sigma _m={\textbf{t}}\wedge \inf \{ 0\le t\le T\; \vert \quad \Vert y(t)\Vert _{W^{2,4}} \ge m\}. \end{aligned}$$

Recall that y is continuous with values in \((W^{2,4}(D))^d\), so \(\sigma _m\) is a well-defined stopping time. On the other hand, note that for a.e. \(\omega \in \Omega \) there exists \(m>0\) such that \(\sigma _m >0\), i.e. \(\sigma _m\) is a strictly positive stopping time P-a.s. By the continuity and the uniqueness of the solution, it follows that \((y,\sigma _m)\) is a local strong solution for each \(m>0\).

Let us show that \(\sigma _m <{\textbf{t}}\) on \([{\textbf{t}} <T]\). Assume that \(P(\sigma _m={\textbf{t}}) >0\). Since \((y,\sigma _m)\) is a local strong solution, there exist another stopping time \(\rho >\sigma _m\) and a process \(y^*\) such that \((y^*,\rho )\) is a local strong solution with the same data, which contradicts the maximality of \({\textbf{t}}\). Therefore, \(P({\textbf{t}}=\sigma _m)=0\). In conclusion, \((\sigma _m)_m\) is an increasing sequence of stopping times which converges to \({\textbf{t}}\). Additionally, on the set \([{\textbf{t}} <T]\), one has

$$\begin{aligned} \sup _{t\in [0,\sigma _m]}\Vert y(t)\Vert _{W^{2,4}} \ge m \end{aligned}$$

and \( \displaystyle \sup _{t\in [0,{\textbf{t}})}\Vert y(t)\Vert _{W^{2,4}} =\infty \text { on } [{\textbf{t}} <T].\)

Remark 6

Thanks to Remark 5, we obtain that \( y^M \in L^p(\Omega ;{\mathcal {C}}([0,T], (W^{2,q}(D))^d))\) for \( q<6\) in the 3D case. Therefore, one can replace \(\zeta _M\) (see Lemma 19) by the following stopping time

$$\begin{aligned} \widetilde{\zeta _M}:=\inf \{ t\ge 0: \Vert y^M(t)\Vert _{W^{2,q}} \ge M\}\wedge T. \end{aligned}$$

\(\bullet \) In the 2D case, we obtain that \( y^M \in L^p(\Omega ;{\mathcal {C}}([0,T], (W^{2,a}(D))^d))\) for arbitrarily large finite \(a\), and the stopping time \(\zeta _M\) (see Lemma 19) can be replaced by

$$\begin{aligned} \widetilde{\widetilde{\zeta _M}}:=\inf \{ t\ge 0: \Vert y^M(t)\Vert _{W^{2,a}} \ge M\}\wedge T, \quad \text { for large } a<\infty . \end{aligned}$$

In other words, the life span of the trajectories of the solution to (2.1) is larger in the 2D case than in the 3D case.

Remark 7

  • An important multiplicative noise that can be considered corresponds to the following linear noise

    $$\begin{aligned} G(\cdot ,y)d{\mathcal {W}}_t= H(u)d{{\textbf {B}}}_t:=(u-\alpha _1\Delta u)d{{\textbf {B}}}_t, \end{aligned}$$

    where \(({{\textbf {B}}}_t)_{t\ge 0}\) is a one-dimensional \({\mathbb {R}}\)-valued Brownian motion. Notice that \(H:{\widetilde{W}}\rightarrow L_2({\mathbb {R}}, (H^1(D))^d)\) and

    $$\begin{aligned} \Vert H(u)\Vert _{ L_2({\mathbb {R}}, (H^1(D))^d)}^2\equiv \Vert u-\alpha _1\Delta u\Vert _{(H^1(D))^d}^2\le C\Vert u\Vert _{{\widetilde{W}}}^2. \end{aligned}$$

    By performing minor modifications, we are able to prove Theorem 4 by replacing \(G(\cdot ,u)d{\mathcal {W}}\) by \(H(u)d{{\textbf {B}}}_t\).

  • We wish to draw the reader’s attention to the fact that the same analysis can be applied to an additive noise case, with \(G \in L^p\big (\Omega ; {\mathcal {C}}([0,T],L_2({\mathbb {H}},V))\big )\). One example is the following: let \(\sigma _{\textbf{k}}:[0,T] \rightarrow V\) be such that \(\displaystyle \sup _{t\in [0,T]}\sum _{{\textbf{k}}\ge 1}\Vert \sigma _{\textbf{k}}(t)\Vert _V^2 <\infty \); then we can define \(G:[0,T]\rightarrow L_2({\mathbb {H}},V)\) by \(Ge_{\textbf{k}}=\sigma _{\textbf{k}}\), \({\textbf{k}}\in {\mathbb {N}}\). The noise can be understood in the following sense

    $$\begin{aligned} \int _0^TGd{\mathcal {W}}=\sum _{{\textbf{k}}\ge 1}\int _0^T \sigma _{\textbf{k}}d\beta _{\textbf{k}}\end{aligned}$$

    and \(\displaystyle \int _0^T\Vert G(t)\Vert _{L_2({\mathbb {H}},V)}^2dt=\sum _{{\textbf{k}}\ge 1}\int _0^T\Vert \sigma _{\textbf{k}}(t)\Vert _V^2dt\).

  • If one sets \(\beta =0\) in (2.1), then similar estimates can be obtained and the same result holds for the second grade fluid model, following the same analysis.
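The series representation of the additive noise in Remark 7 can be illustrated by a truncated Monte Carlo simulation. The coefficients below are hypothetical scalar stand-ins \(c_k\) for the \(V\)-valued \(\sigma _{\textbf{k}}\); the empirical variance of \(\sum _{k}\int _0^T c_k\,d\beta _k\) should match \(\sum _k c_k^2\,T\), a scalar shadow of the Itô isometry \({\mathbb {E}}\Vert \int _0^TG\,d{\mathcal {W}}\Vert ^2=\int _0^T\Vert G(t)\Vert _{L_2({\mathbb {H}},V)}^2dt\):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_modes, n_paths = 1.0, 50, 8, 10000
dt = T / n_steps
c = 2.0 ** -np.arange(1, n_modes + 1)   # hypothetical coefficients; sum(c**2) < inf

# Brownian increments d(beta_k) for n_modes independent motions, many sample paths
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_modes, n_steps))
# truncated stochastic integral sum_k int_0^T c_k d(beta_k) = sum_k c_k * beta_k(T)
I = np.einsum("k,pkt->p", c, dB)

print(abs(I.mean()))                 # close to 0: the integral is centred
print(I.var(), (c ** 2).sum() * T)   # empirical variance vs. Ito-isometry prediction
```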