1 Introduction

We consider the stochastic Landau-Lifshitz-Bloch equation, which models the phenomenon of ferromagnetism for temperatures \(\mathbb{T}\) both below and above the Curie temperature \(\mathbb{T}_{c}\). The magnetization vector field \(m = \left ( m^{1}, m^{2}, m^{3} \right ) : \mathbb{R}^{d} \to \mathbb{R}^{3}\), \(d=1,2,3\) satisfies the Landau-Lifshitz-Bloch (LLB) equation

$$\begin{aligned} \frac{\partial m}{\partial t} = \lambda \left ( m \times H_{ \text{eff}}(m) \right ) + L_{1} \frac{ \left ( m \cdot H_{\text{eff}}(m)\right ) m }{\left | m \right |^{2}} - L_{2} \frac{ m \times \left ( m \times H_{\text{eff}}(m) \right ) }{\left | m \right |^{2} }. \end{aligned}$$
(1.1)

with initial data \(m(0)=m_{0}\). Here, \(L_{1}\), \(L_{2}\) are the longitudinal and transverse kinetic coefficients, and \(\lambda \) is the gyromagnetic ratio. The effective field \(H_{\text{eff}}(m)\) is the negative of the derivative of the total energy \(\mathcal{E}(m)\) and is given by

$$ H_{\text{eff}}(m) = - \frac{\partial \mathcal{E}}{\partial m}. $$
(1.2)

Here,

$$ \mathcal{E}(m) = - \mathbb{H} \cdot m + \frac{1}{2} \left | \nabla m \right |^{2} + \frac{1}{2} m \cdot \left ( \nabla \times m \right ) + \frac{1}{8\chi _{||}} \left ( 1 + \left | m \right |^{2} \right )^{2}. $$
(1.3)

Here ℍ denotes the applied external field, and \(\chi _{||}\) is a positive constant that depends on the material. The second term on the right-hand side of (1.3) is the contribution of the exchange energy, and the third term is the helicity (chiral interaction) term, which arises from the Dzyaloshinskii-Moriya interaction. Note that we have not considered the anisotropy term here. For calculations regarding the stochastic LLB equation with anisotropy, we refer the interested reader to [9]. For more background on the physics of the model, we refer the reader to [68]. For more on magnetism, one can refer to the works of Brown [1, 2]. In this work, we assume the external field ℍ to be 0.

For temperatures \(\mathbb{T}\) above the Curie temperature \(\mathbb{T}_{c}\), we have \(L_{1} = L_{2}\). Simplifying equation (1.1) accordingly, we obtain the following form of the LLB equation.

$$\begin{aligned} \partial _{t} m = H_{\text{eff}}(m) + \lambda \, m \times H_{ \text{eff}}(m), \end{aligned}$$
(1.4)

with

$$ H_{\text{eff}}(m) = \lambda \, \Delta m + \nabla \times m - \frac{1}{2\chi _{||}} \left ( 1 + \left | m \right |^{2}\right ) m . $$
(1.5)
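
For completeness, we record the elementary reduction from (1.1) to (1.4). By the vector triple product identity \(m \times (m \times H_{\text{eff}}) = (m\cdot H_{\text{eff}})\,m - |m|^{2} H_{\text{eff}}\), the last two terms of (1.1) with \(L_{1}=L_{2}=L\) combine as

$$ L_{1} \frac{ \left ( m \cdot H_{\text{eff}}(m)\right ) m }{\left | m \right |^{2}} - L_{2} \frac{ m \times \left ( m \times H_{\text{eff}}(m) \right ) }{\left | m \right |^{2} } = L \, \frac{ \left ( m \cdot H_{\text{eff}}(m)\right ) m - \left [ \left ( m \cdot H_{\text{eff}}(m)\right ) m - \left | m \right |^{2} H_{\text{eff}}(m) \right ] }{\left | m \right |^{2}} = L \, H_{\text{eff}}(m), $$

so that (1.1) becomes \(\partial _{t} m = L\, H_{\text{eff}}(m) + \lambda \, m \times H_{\text{eff}}(m)\), which is (1.4) once the coefficient \(L\) is normalized to 1.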

Le in [11] showed the existence of a solution for the above LLB equation (without the helicity term) for dimensions \(d=1,2,3\). Pu and Yang in [12] showed the global existence of smooth solutions for the LLB equation with helicity.

It turns out that the LLB equation is not sufficient; for example, it is unable to describe the dispersion of individual trajectories at higher temperatures. The authors in [5, 8] discussed and formulated a stochastic form of the LLB equation. We add noise to the effective field, resulting in the form of the stochastic LLB equation that we now describe. Let \(\mathcal{O}\subset \mathbb{R}^{d}\), \(d=1,2\), be a bounded domain with a smooth boundary \(\partial \mathcal{O}\), and let \(0 < T < \infty \) be fixed. Let \(\left (\Omega , \mathcal{F} , \mathbb{P}\right )\) denote a probability space with \(\sigma \)-algebra ℱ, and let \(\mathbb{F}=\left \{\mathcal{F}_{t}\right \}_{t\in [0,T]}\) denote a filtration on this probability space. Let \(W\) denote a real-valued Wiener process on this probability space. For \(t\in [0,T]\), we consider the following stochastic LLB equation driven by \(W\).

$$\begin{aligned} m(t) = & \, m_{0} + \int _{0}^{t} \big[ \Delta m(s) + m(s) \times \Delta m(s) - \left ( 1 + \left | m(s) \right |_{\mathbb{R}^{3}}^{2} \right ) m(s) \\ & + \nabla \times m(s) + m(s) \times \left ( \nabla \times m(s) \right ) \big] \, ds + \int _{0}^{t} \left [ m(s) \times h \right ] \circ dW(s), \end{aligned}$$
(1.6)

Homogeneous Neumann boundary conditions are assumed, and the stochastic differential \(\circ \, dW\) is understood in the Stratonovich sense. Here, \(h:\mathcal{O}\to \mathbb{R}^{3}\) is given. Also, we have set all the coefficients to 1 for simplicity.
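
For readers more accustomed to the Itô formulation, we note the standard Stratonovich-to-Itô conversion for this noise term: since the diffusion coefficient \(m \mapsto m \times h\) is linear in \(m\) and \(h\) does not depend on time, one has

$$ \int _{0}^{t} \left [ m(s) \times h \right ] \circ dW(s) = \int _{0}^{t} \left [ m(s) \times h \right ] \, dW(s) + \frac{1}{2} \int _{0}^{t} \left [ \left ( m(s) \times h \right ) \times h \right ] \, ds, $$

so the Stratonovich equation (1.6) differs from its Itô counterpart only by the additional drift term \(\frac{1}{2}\left ( m \times h \right ) \times h\).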

Brzezniak, Goldys and Le in [4] proved the existence of a weak martingale solution to the stochastic LLB equation for dimensions \(d=1,2,3\). For \(d=1,2\), they showed that the solution is pathwise unique. Further, they showed the existence of invariant measures. Gokhale and Manna in [9] established Wong-Zakai type approximations for the stochastic LLB equation (with non-zero anisotropy) for dimensions \(d=1,2\). In a way, the present work can be considered an extension of [9], as we consider the stochastic LLB equation with helicity.

Our aim in this work is to prove Wong-Zakai type approximations for the stochastic LLB equation. Let \(\{W^{n}\}_{n\in{\mathbb{N}}}\) be a sequence of processes on \(\left (\Omega , \mathcal{F} , \mathbb{P}\right )\) with continuously differentiable paths. Later, we will assume that these processes \(W^{n}\) approximate the process \(W\) in \(C([0,T]:\mathbb{R})\). For \(W^{n}\), the corresponding equation is given by

$$\begin{aligned} m^{n}(t) = & \, m^{n}_{0} + \int _{0}^{t} \big[ \Delta m^{n}(s) + m^{n}(s) \times \Delta m^{n}(s) - \left ( 1 + \left | m^{n}(s) \right |_{ \mathbb{R}^{3}}^{2} \right ) m^{n}(s) \\ & + \nabla \times m^{n}(s) + m^{n}(s) \times \left ( \nabla \times m^{n}(s) \right ) \big] \, ds \\ & + \int _{0}^{t} \left [ m^{n}(s) \times h \right ] \, dW^{n}(s), \end{aligned}$$
(1.7)

We want to show that whenever \(W^{n}\) approximates the given process \(W\), the corresponding solution \(m^{n}\) approximates \(m\) in an appropriate sense.
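
As a concrete illustration (not taken from the paper), one classical choice of \(W^{n}\) is the piecewise-linear interpolation of \(W\) on a grid of mesh \(T/n\); its paths are only piecewise smooth rather than \(C^{1}\), but it converges to \(W\) uniformly on \([0,T]\) almost surely, which is the mode of convergence used below. A minimal numerical sketch (grid sizes and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 2**14                        # fine grid on which the "true" path W is sampled
t_fine = np.linspace(0.0, T, N + 1)
dW = rng.normal(0.0, np.sqrt(T / N), N)  # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))

def wong_zakai(n):
    """Piecewise-linear interpolation of W on the coarse grid {k T / n : k = 0, ..., n}."""
    t_coarse = np.linspace(0.0, T, n + 1)
    W_coarse = np.interp(t_coarse, t_fine, W)     # sample W at the coarse nodes
    return np.interp(t_fine, t_coarse, W_coarse)  # linear interpolation back to the fine grid

for n in [4, 16, 64, 256]:
    err = np.max(np.abs(wong_zakai(n) - W))       # sup-norm distance to W on [0, T]
    print(n, err)                                 # decreases as n grows
```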

We use an indirect method to prove the result. We first convert the given stochastic partial differential equation (1.6), using a Doss-Sussmann type transform (2.11), into a deterministic partial differential equation with random coefficients (2.12). We then show that the solution to this transformed equation depends continuously on the driving Wiener process \(W\). Using the properties of the transformation (2.11), we conclude the same for the original equation (1.6): its solution depends continuously on the driving Wiener process \(W\).

We conclude the section by fixing some notation that will be used throughout the work. Firstly, for \(k\in \mathbb{Z}\) and \(p\geq 1\), \(W^{k,p}(\mathcal{O}:\mathbb{R}^{3})\) denotes the Sobolev space of functions defined on the bounded domain \(\mathcal{O}\subset \mathbb{R}^{d}\), \(d=1,2\), and will be denoted by just \(W^{k,p}\) for brevity. We will write \(H^{k}\) when \(p=2\). Similarly, \(L^{p}\) will be used to denote the space \(L^{p}(\mathcal{O}:\mathbb{R}^{3})\). The given data \(h\) will be assumed to be in \(W^{2,\infty}\). We will henceforth assume that \(\left (\Omega , \mathcal{F} , \mathbb{F} , \mathbb{P}\right )\) is a filtered probability space (with filtration \(\mathbb{F}=\left \{\mathcal{F}_{t}\right \}_{t\in [0,T]}\)) satisfying the usual hypotheses. Further, \(C\) will be used to denote a generic (positive) constant, whose value can change from line to line.

2 The Transformation

In this section, we define the aforementioned (Doss-Sussmann type) transform. For \(p \geq 1\), let us first define the mapping \(G:L^{p} \to L^{p}\) by

$$ G(v) = v \times h. $$
(2.1)

For \(h\in W^{2,\infty}\), the above mapping is well defined. We further define the exponential operator \(e^{tG} : L^{2} \to L^{2}\) by

$$ e^{tG}v = \sum _{n=0}^{\infty}\frac{t^{n}}{n!}G^{n}(v). $$
(2.2)

2.1 Some Technical Results

We record the following properties of the operator \(G\) and of the exponential operator defined in (2.1) and (2.2), respectively. We skip the proofs and refer the reader to, for example, Sect. 3 in [10]; see also [3, 9].

Lemma 2.1

Let \(t\in \mathbb{R}\), \(v,v_{1},v_{2}\in L^{2}\), and \(n\in \mathbb{N}\), and let the given function \(h\) be such that \(\left | h(x) \right |_{\mathbb{R}^{3}} = 1\) for a.a. \(x\in \mathcal{O}\). Then the following hold.

  1. (1)
    $$ e^{tG}v = v + \sin{(t)}\,Gv + \left ( 1 - \cos (t) \right ) G^{2} v, $$
    (2.3)
  2. (2)
    $$ e^{-tG}e^{tG}v = v, $$
    (2.4)
  3. (3)
    $$ e^{tG}\left ( v_{1} \times v_{2} \right ) = e^{tG}v_{1} \times e^{tG}v_{2}, $$
    (2.5)
  4. (4)
    $$\begin{aligned} G^{2n + 1 }(v) = (-1)^{n} G(v), \end{aligned}$$
    (2.6)
  5. (5)
    $$\begin{aligned} G^{2n}(v) = (-1)^{n+1} G^{2}(v), \end{aligned}$$
    (2.7)
  6. (6)
    $$ \left | e^{tG} v(x) \right |_{\mathbb{R}^{3}}^{2} = \left | v(x) \right |_{\mathbb{R}^{3}}^{2},\ \textit{for Leb. a.e.}\ x\in \mathcal{O}. $$
    (2.8)
  7. (7)

    Let \(f\) be a scalar-valued function. Then

    $$ e^{tG}(fv) = f e^{tG}v. $$
    (2.9)

In particular, splitting the series (2.2) into even and odd powers of \(G\) and using properties (2.6) and (2.7), together with the assumption \(\left | h(x) \right |_{\mathbb{R}^{3}} = 1\), we recover the closed form

$$ e^{tG}v = v + \sin{t}Gv + ( 1 - \cos{t} ) G^{2}(v). $$
(2.10)
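
A quick numerical sanity check of the identities of Lemma 2.1, pointwise in \(x\) and for a unit vector \(h\), can be useful; the following sketch (with arbitrary illustrative vectors) verifies (2.3)-(2.8):

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.normal(size=3); h /= np.linalg.norm(h)   # |h| = 1
v, v1, v2 = rng.normal(size=(3, 3))
t = 0.7

G = lambda u: np.cross(u, h)                     # G u = u x h, cf. (2.1)
def exp_tG(t, u):
    # closed form (2.3)/(2.10): e^{tG} u = u + sin(t) Gu + (1 - cos(t)) G^2 u
    Gu = G(u)
    return u + np.sin(t) * Gu + (1.0 - np.cos(t)) * G(Gu)

print(np.allclose(exp_tG(-t, exp_tG(t, v)), v))                      # (2.4)
print(np.allclose(exp_tG(t, np.cross(v1, v2)),
                  np.cross(exp_tG(t, v1), exp_tG(t, v2))))           # (2.5)
print(np.allclose(G(G(G(v))), -G(v)))                                # (2.6) with n = 1
print(np.allclose(G(G(G(G(v)))), -G(G(v))))                          # (2.7) with n = 2: G^4 = -G^2
print(np.isclose(np.dot(exp_tG(t, v), exp_tG(t, v)), np.dot(v, v)))  # (2.8)
```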

Let \(M\) be a given process. We define a new process \(m\) from \(M\) as follows.

$$ m(t) = e^{W(t)G}M(t) $$
(2.11)

Following the proof of Lemma 4.1 in [10] (see also the Appendix in [9]), one can show that the new process \(m\) is a solution to the stochastic LLB equation (1.6) if and only if the process \(M\) is a solution to the following (transformed) equation.

$$\begin{aligned} dM(t) = \left [ \Delta M(t) + M(t) \times \Delta M(t) - \left ( 1 + \left | M(t) \right |_{\mathbb{R}^{3}}^{2}\right ) M(t) + F(M,W)(t) \right ] \, dt. \end{aligned}$$
(2.12)

Here

$$\begin{aligned} F(M,W) =& F_{1}(M,W) + M \times F_{1}(M,W) - F_{3}(M,W) + \mathscr{F}_{1}(M,W) + M \times \mathscr{F}_{1}(M,W) \\ =& e^{-WG} \bigg[ \sin (W) \mathscr{C}M + \left ( 1 - \cos (W) \right )\left ( G\mathscr{C}M - \mathscr{C}GM \right ) \bigg] \\ & + M \times \left [ e^{-WG} \bigg( \sin (W) \mathscr{C}M + \left ( 1 - \cos (W) \right )\left ( G\mathscr{C}M - \mathscr{C}GM \right ) \bigg) \right ] \\ & -\bigg( M + (\sin (W))^{2} \left | GM \right |_{\mathbb{R}^{3}}^{2} M + (1 - \cos (W))^{2} \left | G^{2}M \right |_{\mathbb{R}^{3}}^{2} M \\ & + 2\langle M , (1 - \cos (W))G^{2}M \rangle _{\mathbb{R}^{3}} M \bigg) \\ & + e^{-WG} \left [ \nabla \times e^{WG} M + e^{WG} M \times \left ( \nabla \times e^{WG} M \right ) \right ] . \end{aligned}$$
(2.13)

Here \(\mathscr{C}\) is given by

$$ \mathscr{C}(u) = u \times \Delta h + 2 \nabla u \times \nabla h. $$
(2.14)

Therefore

$$\begin{aligned} G\mathscr{C}M = \left ( M \times \Delta h \right ) \times h + 2 \left (\nabla M \times \nabla h \right ) \times h, \end{aligned}$$
(2.15)
$$\begin{aligned} \mathscr{C}GM = \left ( M \times h \right )\times \Delta h + 2\left ( \nabla M \times h \right ) \times \nabla h + 2\left ( M \times \nabla h \right ) \times \nabla h. \end{aligned}$$
(2.16)

Here and in what follows, we use the notation

$$\begin{aligned} F_{1}(M,W) = e^{-WG} \bigg[ \sin (W) \mathscr{C}M + \left ( 1 - \cos (W) \right )\left ( G\mathscr{C}M - \mathscr{C}GM \right ) \bigg], \end{aligned}$$
(2.17)

and

$$\begin{aligned} F_{2}(M,W) = M \times F_{1}(M,W). \end{aligned}$$
(2.18)

Further,

$$\begin{aligned} \mathscr{F}_{1}(M,W) = e^{-WG} \left ( \nabla \times e^{WG} M \right ), \end{aligned}$$
(2.19)

and we have

$$\begin{aligned} \mathscr{F}_{2}(M,W) = M \times \mathscr{F}_{1}(M,W). \end{aligned}$$
(2.20)

Remark 2.2

Equivalence of \(M\) and \(m\)

Observing the way the transformation (2.11) has been defined, we can prove that if \(M\) is a solution of (2.12), then the process \(m\) defined from \(M\) using (2.11) is a solution of (1.6). For a rigorous proof, we refer the reader to [9, 10].
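
To make the role of the transformation concrete in the simplest possible setting, the following sketch (not from the paper; the smooth path \(W\), the vector \(h\) and the initial value \(m_{0}\) are arbitrary illustrative choices) checks numerically that for a smooth path \(W\) with \(W(0)=0\), the noise-only equation \(dm/dt = (m \times h)\,\dot{W}(t)\) is solved exactly by \(m(t) = e^{W(t)G} m_{0}\); this is precisely the mechanism by which the transform (2.11) absorbs the (smooth or Stratonovich) noise.

```python
import numpy as np

h = np.array([0.0, 0.0, 1.0])                 # |h| = 1
m0 = np.array([1.0, 2.0, -0.5])
W  = lambda t: np.sin(5.0 * t)                # a smooth path with W(0) = 0
dW = lambda t: 5.0 * np.cos(5.0 * t)

def exp_tG(t, v):
    # closed form (2.10): e^{tG} v = v + sin(t) (v x h) + (1 - cos(t)) ((v x h) x h)
    Gv = np.cross(v, h)
    return v + np.sin(t) * Gv + (1.0 - np.cos(t)) * np.cross(Gv, h)

rhs = lambda t, m: np.cross(m, h) * dW(t)     # right-hand side of dm/dt = (m x h) W'(t)

T, N = 1.0, 20000
dt = T / N
m = m0.copy()
for i in range(N):                            # classical RK4 time stepping
    t = i * dt
    k1 = rhs(t, m)
    k2 = rhs(t + dt / 2, m + dt / 2 * k1)
    k3 = rhs(t + dt / 2, m + dt / 2 * k2)
    k4 = rhs(t + dt, m + dt * k3)
    m = m + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.linalg.norm(m - exp_tG(W(T), m0)))   # small (RK4 error): the closed form solves the equation
```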

Remark 2.3

Some observations

We state some (formal) observations about the operators mentioned above.

  1. (1)

    Firstly, the exponential operator \(e^{WG}:L^{2} \to L^{2}\) is linear. That is,

    $$ e^{WG}(v_{1}+v_{2}) = e^{WG}(v_{1}) + e^{WG}(v_{2}), v_{1},v_{2}\in L^{2}. $$
    (2.21)
  2. (2)

    Both \(F_{1}\) and \(\mathscr{F}_{1}\) are linear in \(M\) (the first variable).

  3. (3)

    Both \(F_{1}\) and \(\mathscr{F}_{1}\) are Lipschitz continuous in the second variable. To see this, recall the equality (2.10). The mappings \(t\mapsto \sin{t}\) and \(t\mapsto \cos{t}\) are Lipschitz continuous. Hence, being a combination of these with bounded (on \(L^{2}\)) operator coefficients \(G\) and \(G^{2}\), the mapping \(t\mapsto e^{tG}(\cdot )\) is also Lipschitz continuous.

  4. (4)

    Both \(F_{1}\) and \(\mathscr{F}_{1}\) exhibit the following growth estimates. There exists a constant \(C>0\) such that

    $$ \left | F_{1}(v) \right |_{L^{2}} \leq C \left |v\right |_{L^{2}},\ v \in L^{2}, $$
    (2.22)

    and

    $$ \left | F_{1}(v) \right |_{H^{1}} \leq C \left |v\right |_{H^{1}},\ v \in H^{1}. $$
    (2.23)

    Similarly,

    $$\begin{aligned} \left | \mathscr{F}_{1}(v) \right |_{L^{2}} \leq C \left |v\right |_{L^{2}}, \ v\in L^{2}, \end{aligned}$$
    (2.24)

    and

    $$\begin{aligned} \left | \mathscr{F}_{1}(v) \right |_{H^{1}} \leq C \left |v\right |_{H^{1}}, \ v\in H^{1}. \end{aligned}$$
    (2.25)

We now state a couple of growth estimates for the operator \(G\) and, consequently, for the operator \(e^{tG}\).

Proposition 2.4

The map \(G:L^{2} \to L^{2}\) is linear. Further,

$$ \left | Gv \right |_{L^{2}} \leq C \left | v \right |_{L^{2}},\ v\in L^{2}, $$
(2.26)

and

$$ \left | Gv \right |_{H^{1}} \leq C \left | v \right |_{H^{1}},\ v\in H^{1}. $$
(2.27)

Moreover (similarly)

$$ \left | G^{2}v \right |_{L^{2}} \leq C \left | v \right |_{L^{2}},\ v \in L^{2}, $$
(2.28)

and

$$ \left | G^{2}v \right |_{H^{1}} \leq C \left | v \right |_{H^{1}},\ v \in H^{1}. $$
(2.29)

Lemma 2.5

There exists a constant \(C>0\) that can depend on \(h\) such that

$$ \left | e^{tG} v \right |_{L^{2}} \leq C \left | v \right |_{L^{2}}, \ v\in L^{2}. $$
(2.30)
$$ \left | e^{tG} v \right |_{H^{1}} \leq C \left | v \right |_{H^{1}}, \ v\in H^{1}. $$
(2.31)

In particular, we also have

$$ \left | \nabla \times \left ( e^{tG} v \right ) \right |_{L^{2}} \leq C \left | v \right |_{H^{1}},\ v\in H^{1}. $$
(2.32)

Both Proposition 2.4 and Lemma 2.5 can be proven as consequences of Lemma 2.1.

3 Existence of a Solution for the Transformed Equation

Definition 3.1

Weak solution of (2.12)

For any function \(W\in C([0,T]:\mathbb{R})\), the problem (2.12) is said to admit a weak solution \(M\) with initial condition \(M(0) = M_{0}\) if the following hold.

  1. (1)

    The process \(M\) takes values in the space \(C([0,T]:L^{2})\).

  2. (2)

    There exists a constant \(C>0\) such that

    $$ \sup _{t\in [0,T]} \left | M(t) \right |_{H^{1}} \leq C, $$
    (3.1)

    and

    $$ \int _{0}^{T} \left | M(t) \right |_{H^{2}}^{2} \, dt \leq C. $$
    (3.2)
  3. (3)

    For every \(V\in W^{1,4}\) and every \(t\in [0,T]\), \(M\) satisfies the following equality.

    $$\begin{aligned} \left \langle M(t) , V \right \rangle _{L^{2}} = & \left \langle M_{0} , V \right \rangle - \int _{0}^{t} \left \langle \nabla M(s) + M(s) \times \nabla M(s) , \nabla V \right \rangle \, ds \\ &+ \int _{0}^{t} \bigg\langle \left ( 1 + \left | M(s) \right |_{\mathbb{R}^{3}}^{2} \right ) M(s) \\ & + \nabla \times M(s) + M(s) \times \left ( \nabla \times M(s) \right ) + F(M(s),W(s)) , V \bigg\rangle \, ds. \end{aligned}$$
    (3.3)

Theorem 3.2

Existence and uniqueness of solution for (2.12)

Let \(h\in W^{2,\infty}\) be such that \(\left | h(x) \right |_{\mathbb{R}^{3}} = 1\) for Leb. a.a. \(x\in \mathcal{O}\). Further, let \(W\in C([0,T]:\mathbb{R})\) and \(M_{0}\in H^{1}\). Then the problem (2.12) admits a unique weak solution as given in Definition 3.1.

Proof of Theorem 3.2

We prove the existence of a weak solution by Faedo-Galerkin type arguments. We give a sketch of the proof. First, let \(\left \{ e_{i} \right \}_{i\in \mathbb{N}}\) denote an orthonormal basis of \(L^{2}\), consisting of the eigenfunctions of the Neumann Laplacian operator \(\Delta \). Let \(H_{n}\) denote the span of \(e_{1},\dots ,e_{n}\) and let \(P_{n}\) denote the orthogonal projection operator onto \(H_{n}\). Let us also define a cut-off function: let \(n\in \mathbb{N}\) and let \(\Psi _{n}: \mathbb{R} \to [0,1]\) denote a smooth function such that

$$\begin{aligned} \Psi _{n}(x) = \textstyle\begin{cases} 1,&\ \text{if}\ \left |x\right | \leq n, \\ 0,&\ \text{if}\ \left |x\right | \geq 2n, \end{cases}\displaystyle \end{aligned}$$
(3.4)
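
One possible concrete realization of such a cut-off (an illustrative construction, not necessarily the one intended by the authors) is the standard smooth step built from \(\exp (-1/x)\):

```python
import numpy as np

def smooth_step(s):
    """C-infinity function equal to 1 for s <= 0 and to 0 for s >= 1."""
    g = lambda x: np.where(x > 0, np.exp(-1.0 / np.maximum(x, 1e-12)), 0.0)
    return g(1.0 - s) / (g(1.0 - s) + g(s))

def Psi(x, n):
    """Smooth cut-off Psi_n: equals 1 for |x| <= n, 0 for |x| >= 2n, values in [0, 1] in between."""
    return smooth_step((np.abs(x) - n) / n)
```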

We now approximate the equation (2.12) in \(H_{n}\).

$$\begin{aligned} dM_{n}(t) = P_{n} \left [ \Delta M_{n}(t) + M_{n}(t) \times \Delta M_{n}(t) - \left ( 1 + \left | M_{n}(t) \right |_{\mathbb{R}^{3}}^{2}\right ) M_{n}(t) + F_{n}(M_{n},W)(t) \right ] \, dt, \end{aligned}$$
(3.5)

with initial data \(M_{n}(0) = P_{n} (M_{0})\). Here,

$$ F_{n}(M_{n},W) = F_{1}(M_{n},W) + F_{2}(M_{n},W) + \Psi _{n}(\left | M_{n} \right |_{L^{4}}) \left [\mathscr{F}_{1}(M_{n},W) + \mathscr{F}_{2}(M_{n},W) \right ]. $$
(3.6)
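
For intuition, here is a minimal numerical sketch of a Faedo-Galerkin scheme of the type (3.5) in dimension \(d=1\) on \((0,\pi )\). For brevity only the terms \(\Delta M_{n} - (1+|M_{n}|^{2})M_{n}\) of the drift are kept; the cross-product and \(F_{n}\) terms would be projected onto \(H_{n}\) in exactly the same way. The number of modes, grid size, time step and initial datum are illustrative choices, not taken from the paper.

```python
import numpy as np

# Basis: eigenfunctions of the Neumann Laplacian on (0, pi),
# e_0 = 1/sqrt(pi), e_k(x) = sqrt(2/pi) cos(k x), with eigenvalues -k^2.
n_modes, n_grid, T, dt = 8, 256, 0.5, 1e-4
x = np.linspace(0.0, np.pi, n_grid)
w = np.full(n_grid, np.pi / (n_grid - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid quadrature weights

def basis(k, x):
    return np.full_like(x, 1.0 / np.sqrt(np.pi)) if k == 0 else np.sqrt(2.0 / np.pi) * np.cos(k * x)

E = np.array([basis(k, x) for k in range(n_modes)])    # basis functions sampled on the grid
lam = -np.arange(n_modes) ** 2                         # Laplacian eigenvalues

def project(f):
    """L^2 projection of an R^3-valued field f (shape (3, n_grid)) onto H_n, returned as coefficients (n_modes, 3)."""
    return (E * w) @ f.T

M0 = np.vstack([np.sin(x), np.cos(x), np.zeros_like(x)])   # initial datum M_0
c = project(M0)                                            # coefficients of M_n(0) = P_n M_0

for _ in range(int(T / dt)):
    M = c.T @ E                                            # reconstruct M_n on the grid, shape (3, n_grid)
    nonlinear = -(1.0 + np.sum(M**2, axis=0)) * M          # -(1 + |M_n|^2) M_n
    c += dt * (lam[:, None] * c + project(nonlinear))      # Galerkin ODE system, explicit Euler step
```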

The following proposition lists some properties of the operator \(F_{n}\).

Proposition 3.3

Let \(\varepsilon >0\). Then there exists a constant \(C>0\), which can depend on \(\varepsilon \), such that

$$\begin{aligned} \left | \left \langle F_{n}(M_{n},W) , M_{n} \right \rangle _{L^{2}} \right | \leq & \varepsilon \left | M_{n} \right |_{H^{1}}^{2} + C( \varepsilon ) \left | M_{n} \right |_{L^{2}}^{2}, \end{aligned}$$
(3.7)

and

$$\begin{aligned} \left | \left \langle F_{n}(M_{n},W) , - \Delta M_{n} \right \rangle _{L^{2}} \right | \leq & \varepsilon \left | M_{n} \right |_{H^{2}}^{2} + C( \varepsilon ) \left | M_{n} \right |_{H^{1}}^{2}. \end{aligned}$$
(3.8)

Lemma 3.4

There exists a constant \(C>0\) such that the following inequalities hold for each \(n\in \mathbb{N}\).

$$ \left | M_{n} \right |_{L^{\infty}(0,T:L^{2})}\leq C, $$
(3.9)
$$ \left | M_{n} \right |_{L^{2}(0,T:H^{1})}\leq C, $$
(3.10)
$$ \left | M_{n} \right |_{L^{4}(0,T:L^{4})}\leq C. $$
(3.11)

Proof of Lemma 3.4

The main idea is to take the \(L^{2}\) inner product of both sides of the equality (3.5) with \(M_{n}\) and then simplify. The resulting equality is

$$\begin{aligned} \left \langle dM_{n}(t) , M_{n}(t) \right \rangle _{L^{2}} = & \left \langle P_{n} \left [ \Delta M_{n}(t) + M_{n}(t) \times \Delta M_{n}(t)\right.\right. \\ &- \left.\left.\left ( 1 + \left | M_{n}(t) \right |_{\mathbb{R}^{3}}^{2}\right ) M_{n}(t) \right ] , M_{n}(t) \right \rangle _{L^{2}} \, dt \\ & + \left \langle P_{n} \left [ F_{n}(M_{n},W)(t) \right ] , M_{n}(t) \right \rangle _{L^{2}} \, dt. \end{aligned}$$
(3.12)

Using the self-adjointness of the projection operator \(P_{n}\), we have the following equality (see [4, 11] for example)

$$\begin{aligned} &\left \langle P_{n} \left [ \Delta M_{n}(t) + M_{n}(t) \times \Delta M_{n}(t) - \left ( 1 + \left | M_{n}(t) \right |_{\mathbb{R}^{3}}^{2}\right ) M_{n}(t) \right ] , M_{n}(t) \right \rangle _{L^{2}} \\ &\quad = -\left | \nabla M_{n}(t) \right |_{L^{2}}^{2} \\ &\qquad {} - \left | M_{n}(t) \right |_{L^{2}}^{2} - \left | M_{n}(t) \right |_{L^{4}}^{4}. \end{aligned}$$
(3.13)

Further, using Proposition 3.3, we can write the following inequality. Let \(\varepsilon >0\). Then there exists a constant \(C>0\) such that

$$\begin{aligned} \frac{1}{2} \left | M_{n}(t) \right |_{L^{2}}^{2} + \int _{0}^{t} \left | M_{n}(s) \right |_{H^{1}}^{2} \, ds + \int _{0}^{t} \left | M_{n}(s) \right |_{L^{4}}^{4} \, ds \leq & \frac{1}{2} \left | M_{n}(0) \right |_{L^{2}}^{2} + \varepsilon \int _{0}^{t} \left | M_{n}(s) \right |_{H^{1}}^{2} \, ds \\ & + C \int _{0}^{t} \left | M_{n}(s) \right |_{L^{2}}^{2} \, ds. \end{aligned}$$
(3.14)

Firstly, \(\varepsilon > 0\) can be chosen small enough so that the coefficient \(1-\varepsilon \) remains positive. Then, all three terms on the left-hand side of the above inequality are non-negative, and hence any of them can be neglected without changing the inequality. In particular, keeping the first term and neglecting the second and third terms (for now), we have

$$\begin{aligned} \frac{1}{2} \left | M_{n}(t) \right |_{L^{2}}^{2} \leq & \frac{1}{2} \left | M_{n}(0) \right |_{L^{2}}^{2} + C \int _{0}^{t} \left | M_{n}(s) \right |_{L^{2}}^{2} \, ds. \end{aligned}$$

Taking the supremum over \([0,T]\) of both sides, followed by using the Gronwall inequality gives the desired inequality (3.9). Similarly, neglecting the first and third terms on the left-hand side of (3.14) and using (3.9) gives the bound (3.10). The bound (3.11) can be obtained similarly by neglecting the first and second terms on the left-hand side of (3.14). □
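
For reference, the form of the Gronwall inequality used here (and repeatedly in the estimates below) is the standard integral version: if \(f\) is a non-negative, bounded, measurable function and \(\psi \geq 0\) is integrable on \([0,T]\) with

$$ f(t) \leq a + \int _{0}^{t} \psi (s) f(s) \, ds, \quad t\in [0,T], $$

then \(f(t) \leq a \exp \left ( \int _{0}^{t} \psi (s) \, ds \right )\) for all \(t\in [0,T]\). In the present proof it is applied with \(\psi \equiv C\).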

Lemma 3.5

There exists a constant \(C>0\) such that the following inequalities hold for each \(n\in \mathbb{N}\).

$$ \left | M_{n} \right |_{L^{\infty}(0,T:H^{1})}\leq C, $$
(3.15)

and

$$ \left | M_{n} \right |_{L^{2}(0,T:H^{2})}\leq C. $$
(3.16)

Proof of Lemma 3.5

The structure of the proof of Lemma 3.5 is the same as that of the proof of Lemma 3.4. Here, we take the \(L^{2}\) inner product of (3.5) with \(-\Delta M_{n}\) to obtain

$$\begin{aligned} \left \langle dM_{n}(t) , -\Delta M_{n}(t) \right \rangle _{L^{2}} = & \big\langle P_{n} \big[ \Delta M_{n}(t) + M_{n}(t) \times \Delta M_{n}(t) \\ & - \left ( 1 + \left | M_{n}(t) \right |_{\mathbb{R}^{3}}^{2} \right ) M_{n}(t) \big] , -\Delta M_{n}(t) \big\rangle _{L^{2}} \, dt \\ & + \left \langle P_{n} \left [ F_{n}(M_{n},W)(t) \right ] , -\Delta M_{n}(t) \right \rangle _{L^{2}} \, dt . \end{aligned}$$
(3.17)

Using the self-adjointness of the projection operator \(P_{n}\) and the fact that \(\Delta v\in H_{n}\) for \(v\in H_{n}\), we have

$$\begin{aligned} \left \langle P_{n} \left [ \Delta M_{n}(t) + M_{n}(t) \times \Delta M_{n}(t) - \left ( 1 + \left | M_{n}(t) \right |_{\mathbb{R}^{3}}^{2}\right ) M_{n}(t) \right ] , -\Delta M_{n}(t) \right \rangle _{L^{2}} \leq - \left | \Delta M_{n}(t) \right |_{L^{2}}^{2}. \end{aligned}$$
(3.18)

Using the above estimate and Proposition 3.3, we obtain

$$\begin{aligned} \frac{1}{2} \left | \nabla M_{n}(t) \right |_{L^{2}}^{2} + \int _{0}^{t} \left | \Delta M_{n}(s) \right |_{L^{2}}^{2} \, ds \leq & \, \frac{1}{2} \left | \nabla M_{n}(0) \right |_{L^{2}}^{2} + \varepsilon \int _{0}^{t} \left | M_{n}(s) \right |_{H^{2}}^{2} \, ds + C \int _{0}^{t} \left | M_{n}(s) \right |_{H^{1}}^{2} \, ds. \end{aligned}$$
(3.19)

We choose \(\varepsilon \) small enough so that the coefficient \(1-\varepsilon \) stays positive. The resulting inequality is

$$\begin{aligned} \frac{1}{2} \left | M_{n}(t) \right |_{H^{1}}^{2} + \left (1 - \varepsilon \right ) \int _{0}^{t} \left | M_{n}(s) \right |_{H^{2}}^{2} \, ds \leq & \frac{1}{2} \left | M_{n}(0) \right |_{H^{1}}^{2} + C \\ & + C \int _{0}^{t} \left | M_{n}(s) \right |_{H^{1}}^{2} \, ds. \end{aligned}$$
(3.20)

Remark 3.6

Note that we have replaced the term containing \(\left | \Delta M_{n} \right |_{L^{2}}^{2}\) by one containing \(\left | M_{n} \right |_{H^{2}}^{2}\). This can be done by adding \(\int _{0}^{t} \left | M_{n} \right |_{H^{1}}^{2} \, ds\) to both sides, thus completing the \(H^{2}\) norm. Adding the said \(H^{1}\) norm term just adds 1 to the constant \(C\) already existing on the right-hand side. Also, we have replaced the seminorm \(\left |\nabla M_{n}\right |_{L^{2}}\) by the full norm \(\left | M_{n}\right |_{H^{1}}\). We use the same logic, adding \(\left |M_{n}\right |_{L^{2}}\) on both sides to complete the norm, and use (3.9) to bound the added term by a constant.

For \(\varepsilon >0\) small enough, both the terms on the left-hand side of (3.20) are non-negative, and hence can be neglected as and when required, without changing the inequality. Not considering the second term on the left-hand side of (3.20) for now, taking the supremum of both sides over \([0,T]\) gives

$$\begin{aligned} \sup _{t\in [0,T]} \left | M_{n}(t) \right |_{H^{1}}^{2} \leq C + C \int _{0}^{T} \left | M_{n}(s) \right |_{H^{1}}^{2} \, ds. \end{aligned}$$
(3.21)

Applying the Gronwall inequality gives (3.15). Further, neglecting the first term on the left-hand side of (3.20), taking the supremum over \([0,T]\) of both sides and then using the bound (3.15) gives the bound (3.16). □

Lemma 3.7

There exists a constant \(C>0\) such that for every \(n\in \mathbb{N}\), the following inequality holds.

$$ \left | M_{n} \right |_{H^{1}\left (0,T:L^{\frac{3}{2}}\right )} \leq C. $$
(3.22)

Proof of Lemma 3.7

The proof follows from the bounds already established in Lemma 3.4 and Lemma 3.5, together with the equality (3.5). □

As a corollary of the above result, we have

Corollary 3.8

There exists a constant \(C>0\) such that the following holds uniformly in \(n\in \mathbb{N}\).

$$ \left | M_{n} \right |_{H^{1}(0,T:\left ( H^{1} \right )^{\prime })} \leq C. $$
(3.23)

Proof

The following embedding is continuous.

$$ H^{1} \hookrightarrow L^{3}. $$
(3.24)

Therefore

$$ L^{\frac{3}{2}}\hookrightarrow \left ( H^{1} \right )^{\prime }. $$
(3.25)

The corollary follows as a result of the above embedding and Lemma 3.7. □

Using the bounds established in Lemmata 3.4, 3.5 and 3.7, we prove that the sequence \(\left \{ M_{n} \right \}_{n\in \mathbb{N}}\) admits a subsequence (not relabelled) that converges to a process \(M\) in each of the spaces \(C([0,T]:(H^{1})^{\prime })\), \(L^{4}(0,T:L^{4})\) and \(L^{2}(0,T:H^{1})\). We then show that the process \(M\) satisfies the same bounds as the processes \(M_{n}\). In particular, we can show term-by-term convergence of the terms on the right-hand side of (3.5) to the corresponding terms of (2.12) (with the test function \(V\) as given in (3.3)). We then conclude that the process \(M\) satisfies the equality (3.3), and hence is a weak solution in the sense of Definition 3.1. This concludes the proof of the existence of a weak solution. The arguments for uniqueness are standard, and hence are skipped; the interested reader can refer to [9] for similar arguments.  □

Remark 3.9

Existence and uniqueness for \(M^{n}\)

A definition of a solution \(M^{n}\) of the transformed problem (2.12) driven by \(W^{n}\) can be given in the same spirit as Definition 3.1. In fact, the proof of Theorem 3.2 works for the driving process \(W^{n}\) (with corresponding solution \(M^{n}\)) as well, since the result holds true for any continuous function \(W\). The only difference will be due to the term \(F(M^{n},W^{n})\), which involves the approximation \(W^{n}\). This may give us bounds with some constants \(C_{n}\) that depend on \(n\). But since the sequence \(\left \{ W^{n} \right \}_{n\in \mathbb{N}}\) is uniformly bounded, we can choose a constant \(C\) large enough so that \(\sup _{n\in \mathbb{N}} C_{n} \leq C\).

Remark 3.10

Equivalence of (2.12) and (1.6)

Owing to the definition of the transformation (2.11), one can readily see (at least formally) that whenever \(M\) is a solution to (2.12), the corresponding (transformed) \(m\) is a solution to (1.6). One can argue similarly for the case when the driving process is the approximation \(W^{n}\) (corresponding solutions then are \(M^{n}\) and \(m^{n}\)). For a rigorous proof, one can follow the proof of Lemma 7.1 in [9] (see also Lemma 4.1 in [10]).

4 Robustness: Continuous Dependence of \(m\) on \(W\)

Brief idea of the section

Let \(\{W^{n}\}_{n\in \mathbb{N}}\) be a sequence of processes, having continuously differentiable paths, that approximate \(W\) in \(C([0,T]:\mathbb{R})\). In the main theorem (Theorem 4.8), we show that the solution \(m\) of (1.6) depends continuously on the driving process \(W\). Towards that, we first show that the solution \(M\) of the transformed equation (2.12) depends continuously on the driving process \(W\) (Theorem 4.1). We then use the properties of the transformation (2.11) to show that \(m\) depends continuously on \(W\) (Theorem 4.8).

4.1 Robustness: Continuous Dependence of \(M\) on \(W\)

Theorem 4.1

Robustness for \(M\)

Let \(\{W^{n}\}_{n\in \mathbb{N}}\) be a sequence of processes with continuously differentiable paths that approximate the process \(W\). That is,

$$ W^{n}\to W\ \textit{in }C([0,T]:\mathbb{R}). $$
(4.1)

Let \(M^{n}\) be the solution of (2.12) corresponding to \(W^{n}\). Then

$$ M^{n}\to M\ \textit{in}\ L^{\infty}(0,T:L^{2})\cap L^{2}(0,T:H^{1}). $$
(4.2)

Before giving a proof of the theorem, we give some technical results that will be used.

Proposition 4.2

For \(v\in L^{2}\) (respectively \(v\in H^{1}\) for (4.4)), there exists a constant \(C>0\), independent of \(n\), such that the following hold for each \(n\in \mathbb{N}\).

$$\begin{aligned} \left | e^{W^{n}G}v - e^{WG}v \right |_{L^{2}} \leq C \left | W^{n} - W \right | \left | v \right |_{L^{2}}, \end{aligned}$$
(4.3)

and

$$\begin{aligned} \left | \nabla \times \left ( e^{W^{n}G}v \right ) - \nabla \times \left ( e^{WG}v \right ) \right |_{L^{2}} \leq C \left | W^{n} - W \right | \left | v \right |_{H^{1}}. \end{aligned}$$
(4.4)

Proof of Proposition 4.2

The proof of the first inequality follows from the properties of the exponential operator, in particular Lemma 2.1, the expression (2.10), the linearity of the operator \(G\) and the bounds on \(G\) from Proposition 2.4. We give brief calculations for the second inequality.

$$\begin{aligned} \left | \nabla \times \left ( e^{W^{n}G}v \right ) - \nabla \times \left ( e^{WG}v \right ) \right |_{L^{2}} \leq & \, \bigg| \nabla \big( v - v + \sin{W^{n}}\, Gv - \sin{W}\, Gv \\ & + \left ( 1 - \cos{W^{n}} \right ) G^{2}v - \left ( 1 - \cos{W} \right ) G^{2}v \big) \bigg|_{L^{2}} \\ \leq & \, \bigl| \left [ \sin{W^{n}} - \sin{W} \right ] \nabla Gv \bigr|_{L^{2}} + \left | \left [ \cos{W} - \cos{W^{n}} \right ] \nabla G^{2}v \right |_{L^{2}} \\ \leq & \, C \left | W^{n} - W \right | \left | v \right |_{H^{1}}. \end{aligned}$$
(4.5)

 □

Proposition 4.3

There exists a constant \(C>0\) such that the following holds uniformly in \(n\in \mathbb{N}\).

$$\begin{aligned} \left |F_{1}(M,W^{n})\right |_{L^{2}} \leq C \left | M \right |_{H^{1}}. \end{aligned}$$
(4.6)
$$\begin{aligned} \left | \mathscr{F}_{1}(M,W) \right |_{L^{2}} \leq C \left | M \right |_{H^{1}}. \end{aligned}$$
(4.7)

Proof of Proposition 4.3

For a proof of the first inequality (4.6), we refer the reader to [9] and give a brief calculation for the second inequality. Recalling the definition of \(\mathscr{F}_{1}\) from (2.19)

$$\begin{aligned} \left | \mathscr{F}_{1}(M,W) \right |_{L^{2}} = & \left |e^{-WG} \left ( \nabla \times \left ( e^{WG} M \right ) \right ) \right |_{L^{2}} \\ \leq & C \left | \nabla \times \left ( e^{WG} M \right ) \right |_{L^{2}} \\ \leq & C \left | e^{WG} M \right |_{H^{1}} \\ \leq & C \left | M \right |_{H^{1}}. \end{aligned}$$
(4.8)

 □

Lemma 4.4

There exists a constant \(C>0\) such that the following inequalities hold.

$$\begin{aligned} \left |F_{1}(M,W^{n}) - F_{1}(M,W)\right |_{L^{2}} \leq C \left | W - W^{n} \right | \left | M \right |_{H^{1}}. \end{aligned}$$
(4.9)
$$\begin{aligned} \left |\mathscr{F}_{1}(M,W^{n}) - \mathscr{F}_{1}(M,W)\right |_{L^{2}} \leq C \left | W - W^{n} \right | \left | M \right |_{H^{1}}. \end{aligned}$$
(4.10)

Proof of Lemma 4.4

The inequality (4.9) can be proven using the Lipschitz continuity of \(F_{1}\) in the second variable and the bound (4.6). We give some calculations for the proof of (4.10). Let us recall from Remark 2.3 that the operator \(\mathscr{F}_{1}\) is linear in the first variable and Lipschitz continuous in the second variable.

$$\begin{aligned} \left |\mathscr{F}_{1}(M,W^{n}) - \mathscr{F}_{1}(M,W)\right |_{L^{2}} = & \left | \nabla \times \left ( e^{W^{n}G} M \right ) - \nabla \times \left ( e^{WG} M \right ) \right |_{L^{2}} \\ \leq & \left | \left [ \sin{W^{n}} - \sin{W} \right ] \nabla G M \right |_{L^{2}} \\ & + \left | \left [ \cos{W} - \cos{W^{n}} \right ] \nabla G^{2} M \right |_{L^{2}} \\ \leq & C \left | W^{n} - W \right | \left | M \right |_{H^{1}}. \end{aligned}$$
(4.11)

 □

Proposition 4.5

Let \(\varepsilon >0\). Then there exists a constant \(C>0\) such that

$$\begin{aligned} \left |\int _{0}^{t} \langle F_{2}(M^{n},W^{n}) - F_{2}(M,W^{n}) , M^{n} - M \rangle _{L^{2}} \, ds\right | \leq & \frac{\varepsilon}{2}\int _{0}^{t} \left |M^{n} - M\right |_{H^{1}}^{2} \, ds \\ &+ \frac{C^{2}}{2\varepsilon}\int _{0}^{t}\left |M\right |_{H^{2}}^{2} \left |M^{n} - M\right |_{L^{2}}^{2} \, ds. \end{aligned}$$
(4.12)
$$\begin{aligned} \left |\int _{0}^{t} \langle \mathscr{F}_{2}(M^{n},W^{n}) - \mathscr{F}_{2}(M,W^{n}) , M^{n} - M \rangle _{L^{2}} \, ds\right | \leq & \frac{\varepsilon}{2}\int _{0}^{t} \left |M^{n} - M\right |_{H^{1}}^{2} \, ds \\ &+ \frac{C^{2}}{2\varepsilon}\int _{0}^{t}\left |M\right |_{H^{2}}^{2} \left |M^{n} - M\right |_{L^{2}}^{2} \, ds. \end{aligned}$$
(4.13)

Proof of Proposition 4.5

The inequality (4.12) can be shown using a combination of (4.9) and the definition (2.18) of \(F_{2}\). For the second inequality (4.13), we recall from (2.20) that

$$ \mathscr{F}_{2}(M,W) = M \times \mathscr{F_{1}}(M,W), $$
(4.14)

Now,

$$\begin{aligned} \mathscr{F}_{2}(M^{n},W^{n}) - \mathscr{F}_{2}(M,W^{n}) = & M^{n} \times \mathscr{F}_{1}(M^{n},W^{n}) - M \times \mathscr{F}_{1}(M,W^{n}) \\ = & M^{n} \times \mathscr{F}_{1}(M^{n},W^{n}) - M \times \mathscr{F}_{1}(M^{n},W^{n}) \\ & + M \times \mathscr{F}_{1}(M^{n},W^{n}) - M \times \mathscr{F}_{1}(M,W^{n}) \\ = & \left ( M^{n} - M \right ) \times \mathscr{F}_{1}(M^{n},W^{n}) + M \times \mathscr{F}_{1}(M^{n} - M,W^{n}) \end{aligned}$$
(4.15)

Therefore, since \(\left \langle \left ( M^{n} - M \right ) \times \mathscr{F}_{1}(M^{n},W^{n}) , M^{n} - M \right \rangle _{L^{2}} = 0\),

$$\begin{aligned} &\left |\int _{0}^{t} \langle \mathscr{F}_{2}(M^{n},W^{n}) - \mathscr{F}_{2}(M,W^{n}) , M^{n} - M \rangle _{L^{2}} \, ds\right | \\ &\quad \leq \left |\int _{0}^{t} \langle M \times \mathscr{F}_{1}( M^{n} - M ,W^{n}) , M^{n} - M \rangle _{L^{2}} \, ds\right | \\ &\quad \leq \int _{0}^{t} \left | M \right |_{L^{\infty}} \left | \mathscr{F}_{1}( M^{n} - M , W^{n}) \right |_{L^{2}} \left | M^{n} - M \right |_{L^{2}} \, ds \\ &\quad \leq C \int _{0}^{t} \left | M \right |_{H^{2}} \left | M^{n} - M \right |_{H^{1}} \left | M^{n} - M \right |_{L^{2}} \, ds \\ &\quad \leq \frac{\varepsilon}{2} \int _{0}^{t} \left | M^{n} - M \right |_{H^{1}}^{2} \, ds \\ &\qquad {} + C(\varepsilon ) \int _{0}^{t} \left | M \right |_{H^{2}}^{2} \left | M^{n} - M \right |_{L^{2}}^{2} \, ds. \end{aligned}$$
(4.16)

 □

Proposition 4.6

There exists a constant \(C>0\) such that

$$\begin{aligned} \left |\int _{0}^{t} \langle F_{2}(M,W^{n}) - F_{2}(M,W) , M^{n} - M \rangle _{L^{2}} \, ds\right | \leq & C \int _{0}^{t} \left | W^{n} - W \right |^{2} \, ds \\ & + \frac{1}{2}\int _{0}^{t} \left | M \right |_{H^{2}}^{2} \left | M^{n} - M \right |_{L^{2}}^{2} \, ds. \end{aligned}$$
(4.17)

and

$$\begin{aligned} \left |\int _{0}^{t} \langle \mathscr{F}_{2}(M,W^{n}) - \mathscr{F}_{2}(M,W) , M^{n} - M \rangle _{L^{2}} \, ds\right | \leq & C \int _{0}^{t} \left | W^{n} - W \right |^{2} \, ds \\ & + \frac{1}{2}\int _{0}^{t} \left | M \right |_{H^{2}}^{2} \left | M^{n} - M \right |_{L^{2}}^{2} \, ds. \end{aligned}$$
(4.18)

Proof of Proposition 4.6

We skip the proof for (4.17) and refer the reader to [9]. For the second inequality (4.18), we have

$$\begin{aligned} \mathscr{F}_{2}(M,W^{n}) - \mathscr{F}_{2}(M,W) = M \times \left ( \mathscr{F}_{1}(M,W^{n}) - \mathscr{F}_{1}(M,W) \right ). \end{aligned}$$
(4.19)

Therefore

$$\begin{aligned} & \left | \int _{0}^{t} \langle \mathscr{F}_{2}(M,W^{n}) - \mathscr{F}_{2}(M,W) , M^{n} - M \rangle _{L^{2}} \, ds \right | \\ = & \left | \int _{0}^{t} \langle M \times \left ( \mathscr{F}_{1}(M,W^{n}) - \mathscr{F}_{1}(M,W) \right ) , M^{n} - M \rangle _{L^{2}} \, ds \right | \\ \leq & \int _{0}^{t} \left | M \right |_{L^{\infty}} \left | \mathscr{F}_{1}(M,W^{n}) - \mathscr{F}_{1}(M,W) \right |_{L^{2}} \left | M^{n} - M \right |_{L^{2}} \, ds \\ \leq & C \int _{0}^{t} \left | M \right |_{H^{2}} \left | M \right |_{H^{1}} \left | W^{n} - W \right | \left | M^{n} - M \right |_{L^{2}} \, ds \\ \leq & \frac{C}{2} \sup _{t\in [0,T]} \left | M \right |_{H^{1}}^{2} \int _{0}^{t} \left | W^{n} - W \right |^{2} \, ds + \frac{1}{2} \int _{0}^{t} \left | M \right |_{H^{2}}^{2} \left | M^{n} - M \right |_{L^{2}}^{2} \, ds \\ \leq & C \int _{0}^{t} \left | W^{n} - W \right |^{2} \, ds + \frac{1}{2}\int _{0}^{t} \left | M \right |_{H^{2}}^{2} \left | M^{n} - M \right |_{L^{2}}^{2} \, ds. \end{aligned}$$
(4.20)

 □

Lemma 4.7

Let \(\varepsilon >0\). Then there exists a constant \(C>0\) (which can depend on \(\varepsilon \)) such that

$$\begin{aligned} \bigg|\int _{0}^{t}& \langle F_{2}(M^{n},W^{n}) - F_{2}(M,W) , M^{n} - M \rangle _{L^{2}} \, ds\bigg| \\ \leq & \frac{\varepsilon}{2}\int _{0}^{t} \left |M^{n} - M\right |_{H^{1}}^{2} \, ds + C\int _{0}^{t}\left |M\right |_{H^{2}}^{2}\left |M^{n} - M \right |_{L^{2}}^{2} \, ds + C \int _{0}^{t} \left | W^{n} - W \right |^{2} \, ds . \end{aligned}$$
(4.21)

and

$$\begin{aligned} \bigg|\int _{0}^{t}& \langle \mathscr{F}_{2}(M^{n},W^{n}) - \mathscr{F}_{2}(M,W) , M^{n} - M \rangle _{L^{2}} \, ds\bigg| \\ \leq & \, \varepsilon \int _{0}^{t} \left |M^{n} - M\right |_{H^{1}}^{2} \, ds + C\int _{0}^{t}\left |M\right |_{H^{2}}^{2}\left |M^{n} - M \right |_{L^{2}}^{2} \, ds + C \int _{0}^{t} \left | W^{n} - W \right |^{2} \, ds . \end{aligned}$$
(4.22)

Proof of Lemma 4.7

The inequality (4.21) follows from (4.12) and (4.17). For the second inequality, we have

$$\begin{aligned} \mathscr{F}_{2}(M^{n},W^{n}) - \mathscr{F}_{2}(M,W) = & \mathscr{F}_{2}(M^{n},W^{n}) - \mathscr{F}_{2}(M,W^{n}) + \mathscr{F}_{2}(M,W^{n}) - \mathscr{F}_{2}(M,W). \end{aligned}$$

The inequality (4.22) now follows from (4.13) and (4.18). Alternatively, expanding the difference directly,

$$\begin{aligned} \mathscr{F}_{2}(M^{n},W^{n}) - \mathscr{F}_{2}(M,W) = & M^{n} \times \mathscr{F}_{1}(M^{n},W^{n}) - M \times \mathscr{F}_{1}(M,W) \\ = & M^{n} \times \mathscr{F}_{1}(M^{n},W^{n}) - M \times \mathscr{F}_{1}(M^{n},W^{n}) \\ & + M \times \mathscr{F}_{1}(M^{n},W^{n}) - M \times \mathscr{F}_{1}(M,W^{n}) \\ & + M \times \mathscr{F}_{1}(M,W^{n}) - M \times \mathscr{F}_{1}(M,W) \\ = & \left ( M^{n} - M \right ) \times \mathscr{F}_{1}(M^{n},W^{n}) \\ & + M \times \mathscr{F}_{1}(M^{n} - M,W^{n}) \\ & + M \times \left ( \mathscr{F}_{1}(M,W^{n}) - \mathscr{F}_{1}(M,W) \right ). \end{aligned}$$
(4.23)

Therefore,

$$\begin{aligned} &\left | \int _{0}^{t} \langle \mathscr{F}_{2}(M^{n},W^{n}) - \mathscr{F}_{2}(M,W) , M^{n} - M \rangle _{L^{2}} \, ds \right | \\ \leq & \left | \int _{0}^{t} \langle M \times \mathscr{F}_{1}(M^{n} - M,W^{n}) , M^{n} - M \rangle _{L^{2}} \, ds \right | \\ & + \left | \int _{0}^{t} \langle M \times \left ( \mathscr{F}_{1}(M,W^{n}) - \mathscr{F}_{1}(M,W) \right ) , M^{n} - M \rangle _{L^{2}} \, ds \right | \end{aligned}$$
(4.24)

 □

Proof of Theorem 4.1

We take the difference \(M^{n} - M\) and consider the equation satisfied by this difference. For each \(n\in \mathbb{N}\), \(M^{n}\) satisfies the following equation.

$$\begin{aligned} dM^{n}(t) = \left [ \Delta M^{n}(t) + M^{n}(t) \times \Delta M^{n}(t) - \left ( 1 + \left | M^{n}(t) \right |_{\mathbb{R}^{3}}^{2}\right ) M^{n}(t) + F(M^{n},W^{n})(t) \right ] \, dt. \end{aligned}$$
(4.25)

Similarly, \(M\) satisfies the following equation.

$$\begin{aligned} dM(t) = \left [ \Delta M(t) + M(t) \times \Delta M(t) - \left ( 1 + \left | M(t) \right |_{\mathbb{R}^{3}}^{2}\right ) M(t) + F(M,W)(t) \right ] \, dt. \end{aligned}$$
(4.26)

Therefore the difference \(M^{n} - M\) satisfies the following equation.

$$\begin{aligned} &M^{n}(t) - M(t) - M^{n}(0) + M(0) \\ &\quad = \int _{0}^{t}\left [ \Delta \left (M^{n}(s) - M(s)\right )\right ]\, ds \\ &\qquad {} + \int _{0}^{t} \left [ M^{n}(s) \times \Delta M^{n}(s) - M(s) \times \Delta M(s) \right ] \, ds \\ &\qquad {}- \int _{0}^{t}\left [ \left ( 1 + \left | M^{n}(s) \right |_{ \mathbb{R}^{3}}^{2}\right ) M^{n}(s) - \left ( 1 + \left | M(s) \right |_{\mathbb{R}^{3}}^{2}\right ) M(s)\right ] \, ds \\ &\qquad {}+ \int _{0}^{t} \left [ F(M^{n}(s),W^{n}) - F(M(s),W) \right ] \, ds. \end{aligned}$$
(4.27)

The idea now is to take the \(L^{2}\) inner product of both sides of the above equality with \(M^{n} - M\), simplify, and then apply the Gronwall inequality. Doing so, we get the following equality.

$$\begin{aligned} \frac{1}{2}\big|M^{n}(t) & - M(t)\big|_{L^{2}}^{2} - \frac{1}{2}\big|M^{n}(0) - M(0)\big|_{L^{2}}^{2} \\ =& \int _{0}^{t} \langle \Delta \left (M^{n}(s) - M(s)\right ) , M^{n}(s) - M(s) \rangle _{L^{2}}\, ds \\ & + \int _{0}^{t} \langle M^{n}(s) \times \Delta M^{n}(s) - M(s) \times \Delta M(s) , M^{n}(s) - M(s) \rangle _{L^{2}} \, ds \\ &- \int _{0}^{t} \langle \left ( 1 + \left | M^{n}(s) \right |_{ \mathbb{R}^{3}}^{2}\right ) M^{n}(s) - \left ( 1 + \left | M(s) \right |_{\mathbb{R}^{3}}^{2}\right ) M(s) , M^{n}(s) - M(s) \rangle _{L^{2}} \, ds \\ &+ \int _{0}^{t} \langle F(M^{n}(s),W^{n}) - F(M(s),W) , M^{n}(s) - M(s) \rangle _{L^{2}} \, ds \\ = & \sum _{i=1}^{4} C_{i} I_{i}(t). \end{aligned}$$
(4.28)

The calculations for the first three terms (\(I_{1}\), \(I_{2}\), \(I_{3}\)) are similar to the calculations for the corresponding terms in the proofs of pathwise uniqueness in [4, 11] and Sect. 6 in [9], since these terms do not directly depend on the function \(W^{n}\) or \(W\). The difference lies in the term \(F\), which can be handled using Lemma 4.4 and Lemma 4.7.

Simplifying (4.28) and using the above-mentioned estimates, we obtain a constant \(C>0\) and an integrable function \(\phi _{C}\) such that

$$\begin{aligned} \sup _{s\in [0,t]} & \left | M^{n}(s) - M(s) \right |_{L^{2}}^{2} + \frac{1}{2}\int _{0}^{t}\left | M^{n}(s) - M(s) \right |_{H^{1}}^{2} \, ds \\ \leq & \, C_{1} \left | M^{n}(0) - M(0) \right |_{L^{2}}^{2} + C_{2} \int _{0}^{T} \left |W^{n}(s) - W(s)\right |^{2} \, ds \\ & + C \int _{0}^{t} \left ( 1 + \phi _{C}(s) \right ) \sup _{r\in [0,s]} \left | M^{n}(r) - M(r) \right |_{L^{2}}^{2} \, ds , \quad t\in [0,T]. \end{aligned}$$
(4.29)

Since \(\phi _{C}\) is integrable, applying the Gronwall inequality and using the fact that all the processes \(M^{n}\), \(n\in \mathbb{N}\), have the same initial data \(M_{0}\), along with the assumed convergence of \(W^{n}\) to \(W\) in (4.1), gives the required convergence in (4.2). □

4.2 Robustness: Continuous Dependence of \(m\) on \(W\)

Our aim for this section, which is also the main aim of the work, is to show that the solution \(m\) of (1.6) depends continuously on the driving process \(W\). The following theorem, which is the main result of this work, makes this statement rigorous.

Theorem 4.8

Let \(\{W^{n}\}_{n\in \mathbb{N}}\) be a sequence of processes with continuously differentiable paths that approximate the Wiener process \(W\), ℙ-a.s. That is, let

$$ W^{n}(\omega ) \to W(\omega )\ \textit{in}\ C([0,T]:\mathbb{R})\ \textit{for}\ \mathbb{P}-\textit{a.s.}\ \omega \in \Omega . $$

Then the solution \(m\) of (1.6) depends continuously on the Wiener process \(W\). That is, if \(m^{n}\) denotes the solution for (1.7) corresponding to \(W^{n}\) (and \(m\) is the solution for (1.6) corresponding to \(W\)) then

$$ m^{n}(\omega ) \to m(\omega )\ \textit{in}\ L^{\infty}(0,T:L^{2})\cap L^{2}(0,T:H^{1}) \ \textit{for}\ \mathbb{P}-\textit{a.s.}\ \omega \in \Omega . $$
(4.30)

To prove the result, we will use the convergence of the corresponding \(M^{n}\) to \(M\). For that, we use the following lemma. We skip the proof for the lemma, as it follows from the properties of the exponential operator given in Sect. 2.1 and the Lipschitz continuity of the sin and cos functions. For a detailed proof, the reader can refer to [3, 9].

Lemma 4.9

Let \(t_{1},t_{2}\in \mathbb{R}\) and \(v_{1},v_{2}\in L^{\infty}(0,T:L^{2})\cap L^{2}(0,T:H^{1})\). Then there exists a constant \(C>0\) such that

$$ \left |e^{t_{1}G} v_{1} - e^{t_{2}G}v_{2}\right |^{2}_{L^{2}} \leq C \left | t_{1} - t_{2} \right |^{2} \left | v_{1}\right |^{2}_{L^{2}} + C \left | v_{1} - v_{2} \right |^{2}_{L^{2}}. $$
(4.31)
$$ \left |e^{t_{1}G} v_{1} - e^{t_{2}G}v_{2}\right |^{2}_{H^{1}} \leq C \left | t_{1} - t_{2} \right |^{2} \left | v_{1}\right |^{2}_{H^{1}} + C \left | v_{1} - v_{2} \right |^{2}_{H^{1}}. $$
(4.32)

This lemma essentially shows that if the sequence \(M^{n}\) converges to \(M\) in \(L^{\infty}(0,T:L^{2})\cap L^{2}(0,T:H^{1})\) and \(W^{n}\) converges to \(W\) in \(C([0,T]:\mathbb{R})\), then \(m^{n}\), defined by the transformation (2.11), converges to \(m\) in \(L^{\infty}(0,T:L^{2})\cap L^{2}(0,T:H^{1})\). To see this, replace \(v_{1}\) by \(M^{n}\), \(v_{2}\) by \(M\), and \(t_{1}\) and \(t_{2}\) by \(W^{n}(t)\) and \(W(t)\) in the above result to get

$$\begin{aligned} \left |e^{W^{n}(t)G} M^{n} - e^{W(t)G}M\right |_{L^{2}}^{2} \leq C\left | W^{n}(t) - W(t) \right |^{2} \left | M^{n}(t) \right |_{L^{2}}^{2} + C \left | M^{n}(t) - M(t) \right |_{L^{2}}^{2}. \end{aligned}$$
(4.33)

Therefore

$$\begin{aligned} \sup _{t\in [0,T]}\left |e^{W^{n}(t)G} M^{n} - e^{W(t)G}M\right |_{L^{2}}^{2} \leq & C\sup _{t\in [0,T]}\left | W^{n}(t) - W(t) \right |^{2} \sup _{t \in [0,T]}\left | M^{n}(t) \right |_{L^{2}}^{2} \\ &+ C \sup _{t\in [0,T]}\left | M^{n}(t) - M(t) \right |_{L^{2}}^{2}. \end{aligned}$$
(4.34)

Similarly, we have the following from the second inequality in the lemma (after integrating).

$$\begin{aligned} \int _{0}^{T} \left |e^{W^{n}(t)G}M^{n} - e^{W(t)G}M\right |_{H^{1}}^{2} \, dt \leq & C\sup _{t\in [0,T]} \left | W^{n}(t) - W(t) \right |^{2} \int _{0}^{T} \left | M^{n}(t) \right |_{H^{1}}^{2} \, dt \\ &+ C \int _{0}^{T} \left | M^{n}(t) - M(t) \right |_{H^{1}}^{2} \, dt. \end{aligned}$$
(4.35)

Therefore adding the two inequalities (4.34), (4.35) gives

$$\begin{aligned} &\sup _{t\in [0,T]} \left | m^{n}(t) - m(t) \right |_{L^{2}}^{2} + \int _{0}^{T} \left | m^{n}(t) - m(t) \right |_{H^{1}}^{2} \, dt \\ &\quad \leq C\sup _{t\in [0,T]}\left | W^{n}(t) - W(t) \right |^{2} \sup _{t \in [0,T]}\left | M^{n}(t) \right |_{L^{2}}^{2} \\ &\qquad {}+ C \sup _{t\in [0,T]}\left | M^{n}(t) - M(t) \right |_{L^{2}}^{2} \\ &\qquad {}+ C\sup _{t\in [0,T]} \left | W^{n}(t) - W(t) \right |^{2} \int _{0}^{T} \left | M^{n}(t) \right |_{H^{1}}^{2} \, dt \\ &\qquad {} + C \int _{0}^{T} \left | M^{n}(t) - M(t) \right |_{H^{1}}^{2} \, dt. \end{aligned}$$
(4.36)

Since \(M^{n}\) converges to \(M\) in \(L^{\infty}(0,T:L^{2})\cap L^{2}(0,T:H^{1})\) and \(W^{n}\) converges to \(W\) in \(C([0,T]:\mathbb{R})\) (and, by Remark 3.9, the norms of \(M^{n}\) appearing on the right-hand side are bounded uniformly in \(n\)), the right-hand side of the above inequality goes to 0 as \(n\) goes to infinity. Therefore the left-hand side also goes to 0 as \(n\) goes to infinity. Hence

$$ m^{n} \to m\ \text{in}\ L^{\infty}(0,T:L^{2})\cap L^{2}(0,T:H^{1}). $$
(4.37)

This concludes the proof of Theorem 4.8, thereby establishing the continuous dependence of \(m\) on \(W\). □