1 Introduction

A stochastic spatial epidemic model has been described by N’zi et al. [10] to study the outbreak of infectious diseases in a bounded domain. Such a model takes into account heterogeneity, spatial connectivity and the movement of individuals, all of which play an important role in the spread of infectious diseases. It is based on the compartmental SIR model of Kermack and McKendrick [6]. Let us summarize the results of N’zi et al. [10] in the one-dimensional case.

Consider a deterministic and a stochastic SIR model on a grid \(\textrm{D}_\varepsilon \) of the torus \(\mathbb {T}^1= [0,1)\) with migration between neighboring sites (two neighboring sites are at distance \(\varepsilon \) apart, \( \varepsilon ^{-1}\in \mathbb {N}^*\)). Let \(S_\varepsilon (t,x_i)\) (resp. \(I_\varepsilon (t,x_i)\), resp. \(R_\varepsilon (t,x_i)\)) be the proportion of the total population which is both susceptible (resp. infectious, resp. removed) and located at site \(x_i\) at time t. The dynamics of susceptible, infected and removed individuals at each site can be expressed as

$$\begin{aligned} \left\{ \begin{aligned} \dfrac{d\,S_{\varepsilon }}{dt}(t,x_i)&= \mu _S\,\Delta _{\varepsilon } S_{\varepsilon }(t,x_i)- \dfrac{\beta (x_i)\, S_{\varepsilon }(t,x_i)I_{\varepsilon }(t,x_i)}{S_{\varepsilon }(t,x_i)+I_{\varepsilon }(t,x_i)+R_{\varepsilon }(t,x_i)}, \\ \dfrac{d\,I_{\varepsilon }}{dt}(t,x_i)&= \mu _I\,\Delta _{\varepsilon } I_{\varepsilon }(t,x_i) + \dfrac{\beta (x_i)\, S_{\varepsilon }(t,x_i)I_{\varepsilon }(t,x_i)}{S_{\varepsilon }(t,x_i)+I_{\varepsilon }(t,x_i)+R_{\varepsilon }(t,x_i)}-\alpha (x_i) \,I_{\varepsilon }(t,x_i), \\ \dfrac{d\,R_{\varepsilon }}{dt}(t,x_i)&= \mu _R\,\Delta _{\varepsilon } R_{\varepsilon }(t,x_i)+\alpha (x_i) \,I_{\varepsilon }(t,x_i), \; \; (t,x_i) \in (0,T)\times \textrm{D}_\varepsilon , \\&\hspace{-1.5cm} S_{\varepsilon }(0,x_i), I_{\varepsilon }(0,x_i), R_{\varepsilon }(0,x_i)\ge 0, \; 0<S_{\varepsilon }(0,x_i)+ I_{\varepsilon }(0,x_i)+ R_{\varepsilon }(0,x_i) \le M,\\&\hspace{-1.5cm} \text {for some} \; M < \infty , \end{aligned} \right. \end{aligned}$$
(1)

where \(\Delta _{\varepsilon }\) is the discrete Laplace operator, defined by

$$\begin{aligned} \Delta _{\varepsilon }f(x_i) := \varepsilon ^{-2}\big [f(x_i+\varepsilon )-2f(x_i)+f(x_i-\varepsilon ) \big ]. \end{aligned}$$
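For readers who wish to experiment numerically, the discrete Laplacian on the torus can be sketched as follows (a minimal illustration of ours, not taken from [10]; the grid size and test function below are arbitrary choices):

```python
import numpy as np

def discrete_laplacian(f, eps):
    # Delta_eps f(x_i) = eps^{-2} [ f(x_i + eps) - 2 f(x_i) + f(x_i - eps) ],
    # with periodic (torus) boundary conditions implemented via np.roll.
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / eps**2
```

On the grid function \(x\mapsto \cos (2\pi x)\), this operator acts as multiplication by \(-2\varepsilon ^{-2}\big (1-\cos (2\pi \varepsilon )\big )\), in agreement with the spectral computations of Sect. 2.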

The rates \( \beta : [0,1] \longrightarrow \mathbb {R}_+\) and \( \alpha :[0,1] \longrightarrow \mathbb {R}_+\) are continuous periodic functions, and \( \mu _S\), \(\mu _I\) and \(\mu _R\) are positive diffusion coefficients for the susceptible, infectious and removed subpopulations, respectively.

In what follows, we use the notations \( S_{\varepsilon }(t) := \left( \begin{array}{cl} S_{\varepsilon }(t,x_1)\\ \vdots \\ S_{\varepsilon }(t,x_{\ell }) \end{array} \right) \), \( I_{\varepsilon }(t) := \left( \begin{array}{cl} I_{\varepsilon }(t,x_1)\\ \vdots \\ I_{\varepsilon }(t,x_{\ell }) \end{array} \right) \), \( R_{\varepsilon }(t) := \left( \begin{array}{cl} R_{\varepsilon }(t,x_1)\\ \vdots \\ R_{\varepsilon }(t,x_{\ell }) \end{array} \right) \), and \(\displaystyle \quad Z_{\varepsilon }(t)= \big ( S_{\varepsilon }(t)\, ,\, I_{\varepsilon }(t)\, ,\, R_{\varepsilon }(t) \big )^{\texttt{T}}\). Here \(\ell = \varepsilon ^{-1}\).

Note that (1) is the discrete space approximation of the following system of PDEs

$$\begin{aligned} \left\{ \begin{aligned} \dfrac{\partial \,{\textbf{s}}}{\partial t}(t,x)=&\mu _S \,\Delta {\textbf{s}}(t,x)-\dfrac{\beta (x)\, {\textbf{s}}(t,x){\textbf{i}}(t,x) }{{\textbf{s}}(t,x)+{\textbf{i}}(t,x)+{\textbf{r}}(t,x)},\\ \dfrac{\partial \,{\textbf{i}}}{\partial t}(t,x)=&\mu _I\, \Delta {\textbf{i}}(t,x)+\dfrac{\beta (x)\, {\textbf{s}}(t,x){\textbf{i}}(t,x) }{{\textbf{s}}(t,x)+{\textbf{i}}(t,x)+{\textbf{r}}(t,x)} - \alpha (x)\, {\textbf{i}}(t,x),\\ \dfrac{\partial \,{\textbf{r}}}{\partial t}(t,x)=&\mu _R\, \Delta {\textbf{r}}(t,x)+\alpha (x)\, {\textbf{i}}(t,x), \quad (t,x) \in (0,T)\times D, \\&\hspace{-1.5cm} {\textbf{s}}(0,x), {\textbf{i}}(0,x), {\textbf{r}}(0,x)\ge 0 , \; 0< {\textbf{s}}(0,x)+{\textbf{i}}(0,x)+{\textbf{r}}(0,x)\le M, \end{aligned} \right. \end{aligned}$$
(2)

where \( \displaystyle \Delta = \dfrac{\partial ^2}{\partial x^2}\). In the sequel, we set \({\textbf{X}}:=({\textbf{s}} \, ,\, {\textbf{i}} \, ,\, {\textbf{r}})^{\texttt{T}}\).

Let \({\textbf{N}}\) be the total population size. The stochastic version of (1) is given by the following system

$$\begin{aligned} \hspace{-0.3cm}\left\{ \begin{aligned} S_{{\textbf{N}},\varepsilon }(t,x_i)&= S_{{\textbf{N}},\varepsilon }(0,x_i) - \frac{1}{{\textbf{N}}}\textrm{P}_{x_i}^{inf}\left( {\textbf{N}}\int _0^t \dfrac{\beta (x_i) S_{{\textbf{N}},\varepsilon }(r,x_i)I_{{\textbf{N}},\varepsilon }(r,x_i)}{S_{{\textbf{N}},\varepsilon }(r,x_i)+I_{{\textbf{N}},\varepsilon }(r,x_i)+R_{{\textbf{N}},\varepsilon }(r,x_i)}dr \right) \\&\quad - \sum _{y_i\sim x_i}\frac{1}{{\textbf{N}}}\textrm{P}_{S,x_i,y_i}^{mig}\left( {\textbf{N}}\int _0^t \frac{\mu _S }{\varepsilon ^2}S_{{\textbf{N}},\varepsilon }(r,x_i)dr \right) \\&\quad + \sum _{y_i\sim x_i}\frac{1}{{\textbf{N}}}\textrm{P}_{S,y_i,x_i}^{mig}\left( {\textbf{N}} \int _0^t \frac{\mu _S }{\varepsilon ^2}S_{{\textbf{N}},\varepsilon }(r,y_i)dr \right) , \\ I_{{\textbf{N}},\varepsilon }(t,x_i)&= I_{{\textbf{N}},\varepsilon }(0,x_i) + \frac{1}{{\textbf{N}}}\textrm{P}_{x_i}^{inf}\left( {\textbf{N}}\int _0^t \dfrac{\beta (x_i) S_{{\textbf{N}},\varepsilon }(r,x_i)I_{{\textbf{N}},\varepsilon }(r,x_i)}{S_{{\textbf{N}},\varepsilon }(r,x_i)+I_{{\textbf{N}},\varepsilon }(r,x_i)+R_{{\textbf{N}},\varepsilon }(r,x_i)}dr \right) \\&\quad - \frac{1}{{\textbf{N}}}\textrm{P}_{x_i}^{rec}\left( {\textbf{N}}\int _0^t \alpha (x_i)I_{{\textbf{N}},\varepsilon }(r,x_i)dr \right) - \sum _{y_i\sim x_i}\frac{1}{{\textbf{N}}}\textrm{P}_{I,x_i,y_i}^{mig}\left( {\textbf{N}}\int _0^t\frac{\mu _I }{\varepsilon ^2}I_{{\textbf{N}},\varepsilon }(r,x_i)dr \right) \\&\quad + \sum _{y_i\sim x_i}\frac{1}{{\textbf{N}}}\textrm{P}_{I,y_i,x_i}^{mig} \left( {\textbf{N}}\int _0^t\frac{\mu _I }{\varepsilon ^2}I_{{\textbf{N}},\varepsilon }(r,y_i)dr \right) , \\ R_{{\textbf{N}},\varepsilon }(t,x_i)&= R_{{\textbf{N}},\varepsilon }(0,x_i) + \frac{1}{{\textbf{N}}}\textrm{P}_{x_i}^{rec}\left( {\textbf{N}}\int _0^t \alpha (x_i)I_{{\textbf{N}},\varepsilon }(r,x_i)dr \right) \\&\quad -\sum _{y_i\sim x_i}\frac{1}{{\textbf{N}}}\textrm{P}_{R,x_i,y_i}^{mig}\left( {\textbf{N}}\int _0^t 
\frac{\mu _R}{\varepsilon ^2}R_{{\textbf{N}},\varepsilon }(r,x_i)dr \right) \\&\quad + \frac{1}{{\textbf{N}}}\sum _{y_i\sim x_i}\textrm{P}_{R,y_i,x_i}^{mig}\left( {\textbf{N}}\int _0^t \frac{\mu _R }{\varepsilon ^2}R_{{\textbf{N}},\varepsilon }(r,y_i)dr \right) ,\\&(t,x_i) \in [0,T]\times \textrm{D}_\varepsilon \end{aligned} \right. \nonumber \\ \end{aligned}$$
(3)

where the \(\textrm{P}_j\)’s are mutually independent standard Poisson processes. For each site, these processes count the numbers of new infections, recoveries and migrations between sites. The notation \( y_i \sim x_i \) means that \( y_i \in \{x_i+\varepsilon \, , \, x_i-\varepsilon \}\).
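The Poisson-driven dynamics (3) can be simulated approximately by tau-leaping: over a small time step, each Poisson clock fires a Poisson number of times with its rate frozen at the current state. The following sketch is our own illustration (the step size, rates, and the use of one clock per site and direction are assumptions; for very small \({\textbf{N}}\) the scheme may produce slightly negative values):

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_step(S, I, R, beta, alpha, mu, eps, N, dt):
    """One tau-leap step for the stochastic system (3).

    Each jump count is drawn as Poisson(N * rate * dt) and changes the
    proportions by 1/N per jump; mu = (mu_S, mu_I, mu_R).
    """
    A = S + I + R
    inf = rng.poisson(N * dt * beta * S * I / A) / N   # new infections per site
    rec = rng.poisson(N * dt * alpha * I) / N          # recoveries per site

    def migrate(f, m):
        # independent clocks for jumps to the right and to the left neighbour
        out_r = rng.poisson(N * dt * m / eps**2 * f) / N
        out_l = rng.poisson(N * dt * m / eps**2 * f) / N
        # what leaves site i to the right arrives at site i+1, and conversely
        return -out_r - out_l + np.roll(out_r, 1) + np.roll(out_l, -1)

    S = S - inf + migrate(S, mu[0])
    I = I + inf - rec + migrate(I, mu[1])
    R = R + rec + migrate(R, mu[2])
    return S, I, R
```

By construction every jump conserves the total mass \(\sum _i (S+I+R)(x_i)\), which provides a useful sanity check on any implementation.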

Let \( S_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} S_{{\textbf{N}},\varepsilon }(t,x_1)\\ \vdots \\ S_{{\textbf{N}},\varepsilon }(t,x_{\ell }) \end{array} \right) \) , \( I_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} I_{{\textbf{N}},\varepsilon }(t,x_1)\\ \vdots \\ I_{{\textbf{N}},\varepsilon }(t,x_{\ell }) \end{array} \right) \),

\( R_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} R_{{\textbf{N}},\varepsilon }(t,x_1)\\ \vdots \\ R_{{\textbf{N}},\varepsilon }(t,x_{\ell }) \end{array} \right) \),

\(\displaystyle \; Z_{{\textbf{N}},\varepsilon }(t):= \big ( S_{{\textbf{N}},\varepsilon }(t),\, I_{{\textbf{N}},\varepsilon }(t), \, R_{{\textbf{N}},\varepsilon }(t) \big )^{\texttt{T}}\) and \(\displaystyle b_{\varepsilon }\big (t,Z_{{\textbf{N}},\varepsilon }(t)\big ) := \sum _{j=1}^{K}h_j \beta _j( Z_{{\textbf{N}}, \varepsilon }(t))\) (K being the number of Poisson processes in the system), where the vectors \(h_j \in \{ -1 , 0 , 1\}^{3\ell }\) denote the respective jump directions with jump rates \(\beta _j\). The SDE (3) can be rewritten as follows

$$\begin{aligned} Z_{{\textbf{N}},\varepsilon }(t) = Z_{{\textbf{N}},\varepsilon }(0)+\int _0^t b_{\varepsilon }\big (r,Z_{{\textbf{N}},\varepsilon }(r)\big )dr +\dfrac{1}{{\textbf{N}}}\sum _{j=1}^{K}h_j \textrm{P}_j\left( {\textbf{N}}\int _{0}^{t}\beta _j\left( Z_{{\textbf{N}},\varepsilon }(r)\right) dr\right) . \end{aligned}$$
(4)

Similarly, the system (1) can be written as

$$\begin{aligned} \dfrac{dZ_{\varepsilon }(t)}{dt}= b_{\varepsilon }(t,Z_{\varepsilon }(t)). \end{aligned}$$
(5)
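As a concrete illustration of (5), the drift of system (1) can be sketched as follows (our own minimal implementation, not from [10]; note that an explicit Euler integrator built on this right-hand side requires \(dt \lesssim \varepsilon ^2/(2\max _J \mu _J)\) for stability):

```python
import numpy as np

def lap(f, eps):
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / eps**2

def b_eps(S, I, R, beta, alpha, mu, eps):
    """Drift of system (1): discrete diffusion plus infection/recovery reactions."""
    A = S + I + R
    inf = beta * S * I / A        # beta(x_i) S I / (S + I + R)
    rec = alpha * I               # alpha(x_i) I
    dS = mu[0] * lap(S, eps) - inf
    dI = mu[1] * lap(I, eps) + inf - rec
    dR = mu[2] * lap(R, eps) + rec
    return dS, dI, dR
```

Since the discrete Laplacian of any grid function sums to zero on the torus, and the reaction terms cancel pairwise, \(\sum _i (dS+dI+dR)(x_i)=0\): the total population is conserved by (1).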

The authors show the consistency of the two models by a law of large numbers. More precisely, the following two results were proved in N’zi et al. [10].

Theorem 1.1

(Law of Large Numbers: \({\mathbf {{\varvec{N}}}}\rightarrow \infty \), \(\varepsilon \) being fixed) 

Let \( Z_{{\textbf{N}},\varepsilon } \) denote the solution of (4) and \( Z_{\varepsilon }\) the solution of (5).

Let us fix an arbitrary \(T > 0\) and assume that \( Z_{{\textbf{N}},\varepsilon }(0) \longrightarrow Z_{\varepsilon }(0) \), as \( {\textbf{N}}\rightarrow + \infty \).

Then \( \displaystyle \underset{0\le t\le T}{\sup }\Big \Vert Z_{{\textbf{N}},\varepsilon }(t)-Z_{\varepsilon }(t) \Big \Vert \longrightarrow 0 \) a.s., as \( {\textbf{N}}\rightarrow + \infty \).

Moreover, for each \( x_i \in \textrm{D}_{\varepsilon }\), let \(V_i :=[x_i-\varepsilon /2, x_i+\varepsilon /2 )\) denote the cell centered at the site \(x_i\). We define

$$\begin{aligned} {\mathcal {S}}_{\varepsilon }(t,x)&:= \sum _{i=1}^{\varepsilon ^{-1}} S_{\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x), \quad {\mathcal {I}}_{\varepsilon }(t,x) := \sum _{i=1}^{\varepsilon ^{-1}} I_{\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x), \quad {\mathcal {R}}_{\varepsilon }(t,x) := \sum _{i=1}^{\varepsilon ^{-1}} R_{\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x),\\ \beta (x)&:= \sum _{i=1}^{\varepsilon ^{-1}}\beta (x_i)\mathbbm {1}_{V_i}(x), \quad \alpha (x) := \sum _{i=1}^{\varepsilon ^{-1}}\alpha (x_i)\mathbbm {1}_{V_i}(x), \end{aligned}$$

and we set

$$\begin{aligned} {\textbf{X}}_{\varepsilon } :=({\mathcal {S}}_{\varepsilon }\, ,\, {\mathcal {I}}_{\varepsilon }\, ,\, {\mathcal {R}}_{\varepsilon })^{\texttt{T}}. \end{aligned}$$
(6)

We introduce the canonical projection \(P_\varepsilon : L^2(\mathbb {T}^1) \longrightarrow \texttt{H}_\varepsilon \) defined by

$$\begin{aligned} f \longmapsto P_\varepsilon f(x)= \varepsilon ^{-1}\int _{V_i}f(y)dy, \; \text {if}\; \; x\in V_i . \end{aligned}$$
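A numerical sketch of \(P_\varepsilon \) (our own illustration; the midpoint-rule approximation of the cell average and the convention \(x_i = i\varepsilon \) are assumptions):

```python
import numpy as np

def project(f, l, n_sub=200):
    """Approximate P_eps f: average of f over each cell V_i = [x_i - eps/2, x_i + eps/2).

    Here eps = 1/l and we take x_i = i * eps; the cell integral
    eps^{-1} * int_{V_i} f(y) dy is approximated by a midpoint rule.
    """
    eps = 1.0 / l
    out = np.empty(l)
    for i in range(l):
        y = (i * eps - eps / 2.0) + (np.arange(n_sub) + 0.5) * eps / n_sub
        out[i] = f(np.mod(y, 1.0)).mean()   # wrap around the torus
    return out
```

By construction, \(P_\varepsilon \) maps constants to constants and preserves the total integral, \(\varepsilon \sum _i P_\varepsilon f(x_i) \approx \int _{\mathbb {T}^1} f(y)\,dy\).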

Throughout this paper, we assume that the initial condition satisfies

Assumption 1.1

\({\textbf{s}}(0,.)\), \({\textbf{i}}(0,.)\), \({\textbf{r}}(0,.)\) \(\in C^1(\mathbb {T}^1)\), \(\forall x\in \mathbb {T}^1\), \({\mathcal {S}}_{\varepsilon }(0,x)=P_\varepsilon {\textbf{s}}(0,x)\), \({\mathcal {I}}_{\varepsilon }(0,x)=P_\varepsilon {\textbf{i}}(0,x)\), \({\mathcal {R}}_{\varepsilon }(0,x)=P_\varepsilon {\textbf{r}}(0,x)\), and \(\displaystyle \int _{\mathbb {T}^1} \left( {\textbf{s}}(0,x)+{\textbf{i}}(0,x)+{\textbf{r}}(0,x)\right) dx=1. \)

Assumption 1.2

There exists a constant \(c>0\) such that \(\displaystyle \inf _{x \in \mathbb {T}^1}{\textbf{s}}(0,x)\ge c\).

We use the notation \( \Vert f \Vert _{\infty } := \underset{x\in [0, 1]}{\sup }\vert f(x) \vert \) for the supremum norm of f on [0, 1], and define \( \Big \Vert \big (f , g, h \big )^{\! \!\texttt{T}} \Big \Vert _{\infty } := \big \Vert f \big \Vert _{\infty } + \big \Vert g \big \Vert _{\infty }+\big \Vert h \big \Vert _{\infty }.\)

We have the

Theorem 1.2

For all \(T > 0 \), \(\displaystyle \underset{0\le t\le T}{\sup }\Big \Vert {\textbf{X}}_{\varepsilon }(t)- {\textbf{X}}(t)\Big \Vert _{\infty } \longrightarrow 0 \), as   \( \varepsilon \rightarrow 0 . \)

Next, defining \(\displaystyle {\mathcal {S}}_{{\textbf{N}},\varepsilon }(t,x) := \sum _{i=1}^{\varepsilon ^{-1}} S_{{\textbf{N}},\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x), \; \; {\mathcal {I}}_{{\textbf{N}},\varepsilon }(t,x) := \sum _{i=1}^{\varepsilon ^{-1}} I_{{\textbf{N}},\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x),\)

\(\displaystyle {\mathcal {R}}_{{\textbf{N}},\varepsilon }(t,x) := \sum _{i=1}^{\varepsilon ^{-1}} R_{{\textbf{N}},\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x),\) and setting \({\textbf{X}}_{{\textbf{N}},\varepsilon }:=\big ({\mathcal {S}}_{{\textbf{N}},\varepsilon }\, ,\,{\mathcal {I}}_{{\textbf{N}},\varepsilon }\, ,\, {\mathcal {R}}_{{\textbf{N}},\varepsilon }\big )^{\texttt{T}}\), the following theorem is proved in N’zi et al. [10].

Theorem 1.3

Let us assume that \((\varepsilon ,{\textbf{N}})\rightarrow (0,\infty )\) in such a way that

(i) \(\dfrac{{\textbf{N}}}{\log (1/\varepsilon )}\longrightarrow \infty \) as \({\textbf{N}} \rightarrow \infty \) and \( \varepsilon \rightarrow 0 \);

(ii) \(\Big \Vert {\textbf{X}}_{{\textbf{N}},\varepsilon }(0)- {\textbf{X}}(0) \Big \Vert _{\infty } \longrightarrow 0 \) in probability.

Then for all \( T > 0 \), \( \underset{0\le t\le T}{\sup }\Big \Vert {\textbf{X}}_{{\textbf{N}},\varepsilon }(t)- {\textbf{X}}(t) \Big \Vert _{\infty } \longrightarrow 0 \) in probability.

This paper is devoted to the study of the deviations of the stochastic model from the deterministic one as the mesh size of the grid goes to zero. We focus our attention on periodic boundary conditions on the unit interval [0, 1], which we identify with \(\mathbb {T}^1\). Let us mention that Blount [2] and Kotelenez [7] described a similar spatial model for chemical reactions. There the process has a single component, which is compared with the corresponding deterministic model; they proved a functional central limit theorem under some restrictions on the respective speeds of convergence of the initial number of particles in each cell and of the number of cells.

The rest of this paper is organized as follows. In Sect. 2, we give some notations and preliminaries which will be useful in the sequel. In Sect. 3, we establish a functional central limit theorem, the main result of this paper, by letting the mesh size \(\varepsilon \) of the grid go to zero. The fluctuation limit is a distribution-valued generalized Ornstein–Uhlenbeck Gaussian process, which can be represented as the solution of a linear stochastic partial differential equation whose driving terms are Gaussian martingales.

2 Notations and preliminaries

In this section, we give some notations and collect some standard facts on the Sobolev spaces \(\textrm{H}^{\gamma }(\mathbb {T}^1)\), \(\gamma \in \mathbb {R}\). First of all, let us describe some properties of the discrete Laplace operator. Let \( \texttt{H}_{\varepsilon } \subset L^2(\mathbb {T}^1)\) denote the space of real-valued step functions that are constant on each cell \( V_i\). For \(f\in \texttt{H}_{\varepsilon }\), we define

$$\begin{aligned} \nabla _{\!\!\varepsilon }^+f(x_i) :=\frac{f(x_i +\varepsilon )-f(x_i)}{\varepsilon }\; \; \text {and}\; \; \nabla _{\!\!\varepsilon }^-f(x_i) :=\frac{f(x_i)- f(x_i-\varepsilon )}{\varepsilon }. \end{aligned}$$

For \( f, g \in L^2(\mathbb {T}^1)\), \(\displaystyle \langle \; f , g \; \rangle := \int _{\mathbb {T}^1} f(x)g(x)dx \) denotes the scalar product in \(\textrm{L}^2(\mathbb {T}^1).\)

It is not hard to see that

$$\begin{aligned} \langle \; \nabla _{\!\!\varepsilon }^+f, g \; \rangle = -\langle \; f , \nabla _{\!\!\varepsilon }^-g \; \rangle \; \; \text {and} \; \; \Delta _{\varepsilon }f=\nabla _{\!\!\varepsilon }^-\nabla _{\!\!\varepsilon }^+f= \nabla _{\!\!\varepsilon }^+\nabla _{\!\!\varepsilon }^-f. \end{aligned}$$
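These identities are easy to confirm numerically. The following sketch (our own check, with an arbitrary odd grid size) sets up the discrete gradients, Laplacian and scalar product; the discrete integration by parts and the factorization \(\Delta _\varepsilon = \nabla _{\!\!\varepsilon }^-\nabla _{\!\!\varepsilon }^+ = \nabla _{\!\!\varepsilon }^+\nabla _{\!\!\varepsilon }^-\) then hold exactly (up to rounding) on random step functions:

```python
import numpy as np

l = 25                 # eps^{-1}
eps = 1.0 / l

def grad_plus(f):      # forward difference,  nabla_eps^+
    return (np.roll(f, -1) - f) / eps

def grad_minus(f):     # backward difference, nabla_eps^-
    return (f - np.roll(f, 1)) / eps

def lap(f):            # discrete Laplacian,  Delta_eps
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / eps**2

def inner(u, v):       # <u, v> for step functions in H_eps
    return eps * np.dot(u, v)
```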

For m even and \( x\in \mathbb {R} \) we define

$$\begin{aligned} \varphi _{m}(x):=\left\{ \begin{array}{ll} 1, &{} \text{ for } \; \; m = 0, \\ \sqrt{2}\cos ( m\pi x), &{} \text {for} \; \; m\ne 0 \; \text {and even}, \end{array} \right. \qquad \psi _{m}(x):=\left\{ \begin{array}{ll} 0, &{} \text{ for } \; \; m = 0, \\ \sqrt{2}\sin (m\pi x), &{} \text {for} \; \; m\ne 0 \; \text {and even}. \end{array} \right. \end{aligned}$$

\(\left\{ 1\, , \, \varphi _m \, , \, \psi _m \, , \, \; m=2k \, , \, k\ge 1 \right\} \) is a complete orthonormal system (CONS) of eigenvectors of \( \Delta \) in \(L^2(\mathbb {T}^1)\) with eigenvalues \(\displaystyle -\lambda _m =-\pi ^2 m^2.\) Consequently, the semigroup \(\textsf {T}(t)\!:= \!\exp (\Delta \,t)\) acting on \(L^2(\mathbb {T}^1)\) generated by \( \Delta \) can be represented as

$$\begin{aligned} \textsf {T}(t)f\!=\langle \, f, 1 \, \rangle +\!\sum _{k\ge 1}\exp (-\lambda _{2k} t )\Big [\langle \; f , \varphi _{2k} \; \rangle \varphi _{2k}+\langle \; f , \psi _{2k}\;\rangle \psi _{2k} \Big ] ,\; \; f\in L^2(\mathbb {T}^1). \end{aligned}$$

Assume that \( \varepsilon ^{-1}\) is an odd integer. For \( m \in \left\{ 0, 2, \ldots , \varepsilon ^{-1}-1 \right\} \), we define \(\displaystyle \varphi _{m}^{\varepsilon }(x):=\sqrt{2}\cos (\pi m j\varepsilon )\) if \(x\in V_j\), and \(\displaystyle \psi _{m}^{\varepsilon }(x):=\sqrt{2}\sin (\pi m j\varepsilon )\) if \(x\in V_j\). The functions \(\{\, \varphi _{m}^{\varepsilon },\, \psi _{m}^{\varepsilon },\; m \in \{0,2,\ldots ,\varepsilon ^{-1}-1\} \,\} \) form an orthonormal basis of \( \texttt{H}_{\varepsilon }\) as a subspace of \(L^2\left( \mathbb {T}^1\right) \). These vectors are eigenfunctions of \( \Delta _{\varepsilon } \) with associated eigenvalues \(\displaystyle -\lambda _{m}^{\varepsilon }=-2\varepsilon ^{-2}\big ( 1 - \cos (m\pi \varepsilon ) \big ).\) Note that \( \displaystyle \lambda _{m}^{\varepsilon } \longrightarrow \lambda _{m}\) as \(\varepsilon \rightarrow 0\). Basic computations show that there exists a constant c such that for each m and \(\varepsilon \), \(\varepsilon ^{-2}\big ( 1 - \cos (\pi m \varepsilon ) \big ) > c\, m^2.\) Let us set \( n_\varepsilon :=\frac{\varepsilon ^{-1}-1}{2}.\) The operator \( \Delta _{\varepsilon } \) generates a contraction semigroup \( \textsf {T}_{\!\varepsilon }(t) :=\exp (\Delta _{\varepsilon } t) \), whose action on each \(f\in \texttt{H}_{\varepsilon }\) is given by

$$\begin{aligned} \textsf {T}_{\!\varepsilon }(t)f= \sum _{k=0}^{n_\varepsilon }\exp (-\lambda _{2k}^{\varepsilon } t )\Big [\langle \; f , \varphi _{2k}^{\varepsilon } \; \rangle \varphi _{2k}^{\varepsilon }+\langle \; f , \psi _{2k}^{\varepsilon }\;\rangle \psi _{2k}^{\varepsilon } \Big ]. \end{aligned}$$
(7)
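The eigenvalue relation \(\Delta _\varepsilon \varphi _m^\varepsilon = -\lambda _m^\varepsilon \varphi _m^\varepsilon \) and the convergence \(\lambda _m^\varepsilon \rightarrow \lambda _m = \pi ^2 m^2\) can be checked directly (our own sketch; the grid sizes and the mode m are arbitrary):

```python
import numpy as np

def lap(f, eps):
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / eps**2

def phi(m, l):
    """Discrete eigenfunction phi_m^eps sampled on the grid j = 0, ..., l-1."""
    eps = 1.0 / l
    return np.sqrt(2.0) * np.cos(np.pi * m * np.arange(l) * eps)

def lam_eps(m, l):
    """Discrete eigenvalue lambda_m^eps = 2 eps^{-2} (1 - cos(m pi eps))."""
    eps = 1.0 / l
    return 2.0 * l**2 * (1.0 - np.cos(m * np.pi * eps))
```

As \(\varepsilon \rightarrow 0\), a Taylor expansion of the cosine shows that \(\lambda _m^\varepsilon \) approaches \(\lambda _m\) at rate \(O((m\varepsilon )^2)\).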

Note that both \( \Delta _{\varepsilon }\) and \(\textsf {T}_{\!\varepsilon }(t)\) are self-adjoint and that \(\textsf {T}_{\!\varepsilon }(t)\Delta _{\varepsilon }\varphi =\Delta _{\varepsilon }\textsf {T}_{\!\varepsilon }(t)\varphi .\)

For any \( J\in \{S, I, R\}\), the semigroup generated by \( \mu _{\!_J}\Delta \) is \(\textsf {T}(\mu _{\!_J}t)\). In the sequel, we will use the notation \( \textsf {T}_{\!\!_J}(t):=\textsf {T}(\mu _{\!_J}t)\) and similarly, in the discrete case, we will use the notation \(\textsf {T}_{\!\varepsilon ,J}(t):=\textsf {T}_{\varepsilon }(\mu _{\!_J}t)\). Also, for any \(J\in \{S, I, R\}\), we set \( \lambda _{m,J}:=\mu _{\!_J}\lambda _{m}\) and \( \lambda _{m,J}^{\varepsilon }:=\mu _{\!_J}\lambda _{m}^{\varepsilon }. \) For \( \gamma \in \mathbb {R}_+\), we define the Hilbert space \(\textrm{H}^{\gamma }(\mathbb {T}^1)\) as follows.

$$\begin{aligned} \textrm{H}^{\gamma }(\mathbb {T}^1):=\big \{\, f \in L^2\left( \mathbb {T}^1\right) , \Vert f \Vert _{_{\textrm{H}^{\gamma }}}^2 := \sum _{m \; \text {even}}\big [\langle \;f , \varphi _m\;\rangle ^2+\langle \;f , \psi _m\;\rangle ^2\big ](1+\lambda _m)^{\gamma } < \infty \,\big \}. \end{aligned}$$

We shall use the notations \(\textrm{H}^{\gamma } := \textrm{H}^{\gamma }(\mathbb {T}^1)\) and \(L^2 := L^2(\mathbb {T}^1)\).

Note that \( \displaystyle \Vert \varphi \Vert _{_{\textrm{H}^{\gamma }}}= \Vert ( {\textbf{I}}-\Delta )^{\gamma /2}\varphi \Vert _{_{L^2}} \), where \( {\textbf{I}} \) is the identity operator on \(L^2\left( \mathbb {T}^1\right) .\) For any three-dimensional vector-valued function \( \Phi =(\Phi _1,\Phi _2,\Phi _3)^{\texttt{T}}\), we use the notation \(\Vert \Phi \Vert _{_{\textrm{H}^{\gamma }}} := \Big (\Vert \Phi _1 \Vert _{_{\textrm{H}^{\gamma }}}^2 +\Vert \Phi _2 \Vert _{_{\textrm{H}^{\gamma }}}^2 +\Vert \Phi _3 \Vert _{_{\textrm{H}^{\gamma }}}^2\Big )^{1/2} \).

For \( \gamma \in \mathbb {R}\), we also define

$$\begin{aligned} \Vert f \Vert _{_{\textrm{H}^{\gamma ,\varepsilon }}} := \Bigg [\sum _{m \; \text {even}} \big (\langle \;f , \varphi _m^{\varepsilon }\;\rangle ^2+\langle \;f , \psi _m^{\varepsilon }\;\rangle ^2\big )(1+\lambda _m^{\varepsilon })^{\gamma }\Bigg ]^{1/2}, \; f \in \texttt{H}_{\varepsilon }. \end{aligned}$$

For \(f, g \in \texttt{H}_{\varepsilon }\), we have

$$\begin{aligned} \big \vert \langle f , g\rangle \big \vert \le \Vert f\Vert _{_{\textrm{H}^{-\gamma ,\varepsilon }}}\Vert g\Vert _{_{\textrm{H}^{\gamma ,\varepsilon }}}\; , \quad \gamma \ge 0. \end{aligned}$$
(8)
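Inequality (8) is a weighted Cauchy–Schwarz bound in the discrete eigenbasis. The following sketch (our own check, for an arbitrary odd grid size) builds the even-mode eigenbasis, computes \(\Vert \cdot \Vert _{\textrm{H}^{\gamma ,\varepsilon }}\) from the coefficients, and allows one to verify both Parseval’s identity and (8) on random step functions:

```python
import numpy as np

l = 25                                   # eps^{-1}, odd
eps = 1.0 / l
j = np.arange(l)

def inner(u, v):
    return eps * np.dot(u, v)

# orthonormal eigenbasis of Delta_eps in H_eps (even modes m = 0, 2, ..., l-1)
basis, lams = [np.ones(l)], [0.0]
for m in range(2, l, 2):
    lam = 2.0 / eps**2 * (1.0 - np.cos(m * np.pi * eps))
    basis.append(np.sqrt(2.0) * np.cos(np.pi * m * j * eps)); lams.append(lam)
    basis.append(np.sqrt(2.0) * np.sin(np.pi * m * j * eps)); lams.append(lam)

def h_norm(f, gamma):
    """|| f ||_{H^{gamma, eps}} computed from the eigenbasis coefficients."""
    return np.sqrt(sum((1.0 + lam)**gamma * inner(f, e)**2
                       for e, lam in zip(basis, lams)))
```

Since the basis has exactly \(l\) elements, it is complete in \(\texttt{H}_\varepsilon \), so \(\Vert f\Vert _{\textrm{H}^{0,\varepsilon }}\) coincides with the \(L^2\) norm.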

Elementary calculations show that for \(\gamma >0\) there exist positive constants \(c_1(\gamma )\) and \(c_2(\gamma )\) such that for all \(\varepsilon >0\) and \( f\in \texttt{H}_{\varepsilon }\),

$$\begin{aligned} c_1(\gamma )\Vert f\Vert _{_{\textrm{H}^{-\gamma ,\varepsilon }}}\le \Vert f\Vert _{_{\textrm{H}^{-\gamma }}}\le c_2(\gamma )\Vert f\Vert _{_{\textrm{H}^{-\gamma ,\varepsilon }}}. \end{aligned}$$
(9)

\( f^{\prime }:=\dfrac{\partial f}{\partial x }\) will denote the derivative of f.

In the sequel, we may use the same notation for different constants; we write C for a generic positive constant. These constants may depend upon some parameters of the model; as long as they are independent of \( \varepsilon \) and \( {\textbf{N}}\), we will not necessarily mention this dependence explicitly. However, we write \(C(\gamma , T)\) for a constant which depends on \(\gamma \) and T (and possibly on some unimportant constants). The exact value may change from line to line.

Let us now consider the deviation of the stochastic model around its deterministic law of large numbers limit. To this end we introduce the rescaled difference between \(Z_{{\textbf{N}},\varepsilon }\) and \(Z_{\varepsilon }\), namely

$$\begin{aligned} \Psi _{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} U_{{\textbf{N}},\varepsilon }(t)\\ V_{{\textbf{N}},\varepsilon }(t)\\ W_{{\textbf{N}},\varepsilon }(t) \end{array} \right) , \end{aligned}$$

where

$$\begin{aligned}{} & {} U_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} \sqrt{{\textbf{N}}}\Big (S_{{\textbf{N}},\varepsilon }(t, x_1)- S_\varepsilon (t,x_1)\Big )\\ \vdots \\ \sqrt{{\textbf{N}}}\Big (S_{{\textbf{N}},\varepsilon }(t, x_{\ell })- S_\varepsilon (t,x_{\ell })\Big ) \end{array} \right) ,\\{} & {} \quad V_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} \sqrt{{\textbf{N}}}\Big (I_{{\textbf{N}},\varepsilon }(t, x_1)- I_\varepsilon (t,x_1)\Big )\\ \vdots \\ \sqrt{{\textbf{N}}}\Big (I_{{\textbf{N}},\varepsilon }(t, x_{\ell })- I_\varepsilon (t,x_{\ell })\Big ) \end{array} \right) \end{aligned}$$

and

\(\hspace{3cm} W_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} \sqrt{{\textbf{N}}}\Big (R_{{\textbf{N}},\varepsilon }(t, x_1)- R_\varepsilon (t,x_1)\Big )\\ \vdots \\ \sqrt{{\textbf{N}}}\Big (R_{{\textbf{N}},\varepsilon }(t, x_{\ell })- R_\varepsilon (t,x_{\ell })\Big ) \end{array} \right) .\)

In the sequel, we denote weak convergence by \("\Longrightarrow "\). Fixing the mesh size \(\varepsilon \) of the grid and letting \({\textbf{N}}\) go to infinity, we obtain the following theorem.

Theorem 2.1

(Central Limit Theorem: \({\mathbf {{\varvec{N}}}}\rightarrow \infty \), \(\varepsilon \) being fixed)

Assume that \(\sqrt{{\textbf{N}}}\big ( Z_{{\textbf{N}},\varepsilon }(0)- Z_{\varepsilon }(0)\big ) \longrightarrow 0 \), as \( {\textbf{N}}\rightarrow \infty \).

Then, as \( {\textbf{N}} \rightarrow + \infty \), \(\big \{ \Psi _{{\textbf{N}},\varepsilon } (t) , \; t\ge 0 \big \} \Longrightarrow \big \{ \Psi _{\varepsilon }(t) , \; t\ge 0 \big \} \) for the topology of locally uniform convergence, where the limit process \( \Psi _{\varepsilon }(t) := \left( \begin{array}{cl} U_{\varepsilon }(t)\\ V_{\varepsilon }(t)\\ W_{\varepsilon }(t) \end{array} \right) \) satisfies

$$\begin{aligned} \Psi _{\varepsilon }(t) = \int _0^t \nabla _{\!\!z} b_{\varepsilon }\big (r,Z_{\varepsilon }(r)\big ) \Psi _{\varepsilon }(r)dr + \sum _{j=1}^{K}h_j\int _0^t \sqrt{\beta _j\big (Z_{\varepsilon }(r)\big )}\,dB_j(r) , \quad t \ge 0, \end{aligned}$$
(10)

and \( \displaystyle \{ B_1(t), B_2(t), \cdots , B_K(t) \}\) are mutually independent standard Brownian motions.

More precisely, by setting \( A_{\varepsilon }=S_{\varepsilon } +I_{\varepsilon }+R_{\varepsilon }\), for any site \(x_i\), the limit \((U_{\varepsilon }, V_{\varepsilon }, W_{\varepsilon })^{\texttt{T}}\) satisfies the following system

$$\begin{aligned} U_{\varepsilon }(t,x_i)= & {} \mu _{S} \int _0^t \Delta _{\varepsilon }U_{\varepsilon }(r,x_i)dr \\{} & {} - \int _0^t \beta (x_i) \dfrac{I_{\varepsilon }(r,x_i)\big (I_{\varepsilon }(r,x_i)+R_{\varepsilon }(r,x_i)\big )V_{\varepsilon }(r,x_i)}{A_{\varepsilon }^2(r,x_i)} dr \\{} & {} - \int _0^t\beta (x_i) \dfrac{S_{\varepsilon }(r,x_i)\big (S_{\varepsilon }(r,x_i)+R_{\varepsilon }(r,x_i)\big )U_{\varepsilon }(r,x_i)}{A_{\varepsilon }^2(r,x_i)} dr\\{} & {} +\int _0^t \sqrt{\beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}}\; \; dB_{x_i}^{inf}(r) \\{} & {} -\sum _{y_i \sim x_i}\int _0^t\sqrt{\dfrac{\mu _{S}}{\varepsilon ^2} S_{\varepsilon }(r,x_i)}\; \; dB_{x_iy_i}^S(r) +\sum _{y_i \sim x_i}\int _0^t\sqrt{\dfrac{\mu _{S}}{\varepsilon ^2} S_{\varepsilon }(r,y_i)}\; \; dB_{y_ix_i}^S(r), \\ V_{\varepsilon }(t,x_i)= & {} \mu _{I} \int _0^t \Delta _{\varepsilon }V_{\varepsilon }(r,x_i)dr +\int _0^t \beta (x_i) \dfrac{I_{\varepsilon }(r,x_i)\big (I_{\varepsilon }(r,x_i)+R_{\varepsilon }(r,x_i)\big )V_{\varepsilon }(r,x_i)}{A_{\varepsilon }^2(r,x_i)} dr \\{} & {} +\int _0^t\beta (x_i) \dfrac{S_{\varepsilon }(r,x_i)\big (S_{\varepsilon }(r,x_i)+R_{\varepsilon }(r,x_i)\big )U_{\varepsilon }(r,x_i)}{A_{\varepsilon }^2(r,x_i)}dr-\int _0^t \alpha (x_i)V_{\varepsilon }(r,x_i)dr\\{} & {} -\int _0^t\sqrt{\beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}}\; \; dB_{x_i}^{inf}(r)+\int _0^t\sqrt{\alpha (x_i) I_{\varepsilon }(r,x_i)}\; \; dB_{x_i}^{rec}(r) \\{} & {} - \sum _{y_i \sim x_i}\int _0^t\sqrt{\dfrac{\mu _{I}}{\varepsilon ^2} I_{\varepsilon }(r,x_i)}\,dB_{x_iy_i}^I(r)+ \sum _{y_i \sim x_i}\int _0^t\sqrt{\dfrac{\mu _{I}}{\varepsilon ^2} I_{\varepsilon }(r,y_i)}\; \; dB_{y_ix_i}^I(r) , \\ W_{\varepsilon }(t,x_i)= & {} \mu _{R} \int _0^t \Delta _{\varepsilon }W_{\varepsilon }(r,x_i)dr+ \int _0^t \alpha (x_i) V_{\varepsilon }(r,x_i)dr-\int _0^t\sqrt{\alpha (x_i) I_{\varepsilon }(r,x_i)}\; \; dB_{x_i}^{rec}(r) \\{} & {} - \sum _{y_i \sim x_i}\int _0^t\sqrt{\dfrac{\mu _{R}}{\varepsilon ^2} R_{\varepsilon }(r,x_i)}\; \;dB_{x_iy_i}^R(r)+ \sum _{y_i \sim x_i}\int _0^t\sqrt{\dfrac{\mu _{R}}{\varepsilon ^2} R_{\varepsilon }(r,y_i)}\; \; dB_{y_ix_i}^R(r), \end{aligned}$$

where \(\{ B_{x_i}^{inf}: x_i\in \textrm{D}_\varepsilon \}\) , \(\{ B_{x_i}^{rec} : x_i\in \textrm{D}_\varepsilon \}\) , \(\{ B_{x_i y_i}^S : y_i \sim x_i\in \textrm{D}_\varepsilon \}\) , \(\{ B_{x_i y_i}^I : y_i \sim x_i\in \textrm{D}_\varepsilon \}\) and \(\{ B_{x_i y_i}^R : y_i \sim x_i\in \textrm{D}_\varepsilon \}\) are families of independent Brownian motions.

Theorem 2.1 is a special case of Theorem 3.5 of Kurtz [9] (see also Theorem 2.3.2 in Britton and Pardoux [3]). We therefore do not give the proof here, and refer the reader to those references. \(\square \)

Let \( {\textbf{X}}=({\textbf{s}} \, ,\, {\textbf{i}} \, ,\,{\textbf{r}})^{\texttt{T}}\) satisfy the system (2) on [0, 1]. Thanks to Proposition 1.1 of Taylor [11] (Chapter 15, Section 1), we have the following lemma.

Lemma 2.1

Let \(\gamma \ge 0 \) and assume that the initial data \( {\textbf{X}}(0)\) belong to \((\textrm{H}^{\gamma })^3\). Then the parabolic system (2) has a unique solution \( {\textbf{X}} \in C\big ([0,T]; (\textrm{H}^{\gamma })^3\big ) \).

The rest of this section is devoted to the proof of some estimates for the solution of the system of equations (1). We first note that \(S_\varepsilon (t,x_i)\ge 0\), \(I_\varepsilon (t,x_i)\ge 0\), \(R_\varepsilon (t,x_i)\ge 0\) for all \(t\ge 0\), \(x_i\in \textrm{D}_\varepsilon \) and \(\varepsilon >0\). Moreover, for any \(T>0\), there exists a constant \(C_T\) such that

$$\begin{aligned} \sup _{0\le t\le T}\left( \Vert S_\varepsilon (t)\Vert _\infty \vee \Vert I_\varepsilon (t)\Vert _\infty \vee \Vert R_\varepsilon (t)\Vert _\infty \right) \le C_T,\ \forall \varepsilon >0\,.\end{aligned}$$
(11)

Indeed, we first note that \(\Vert S_\varepsilon (t)\Vert _\infty \le M\), since \(S_\varepsilon \) is bounded above by the solution of the ODE

$$\begin{aligned} \frac{dX_\varepsilon }{dt}(t,x_i)=\mu _S\Delta _\varepsilon X_\varepsilon (t,x_i),\quad X_\varepsilon (0,x_i)=M\,.\end{aligned}$$
(12)

Next, \(I_\varepsilon (t,x_i)\) is bounded above by the solution of the ODE (with \({\bar{\beta }}:=\sup _x\beta (x)\))

$$\begin{aligned}\frac{dY_\varepsilon }{dt}(t,x_i)=\mu _I\Delta _\varepsilon Y_\varepsilon (t,x_i)+{\bar{\beta }}Y_\varepsilon (t,x_i),\quad Y_\varepsilon (0,x_i)=M\,.\end{aligned}$$

The corresponding bound for \(R_\varepsilon \) now follows easily.

Let us set \({\mathcal {A}}_{\varepsilon } :={\mathcal {S}}_{\varepsilon }+{\mathcal {I}}_{\varepsilon }+{\mathcal {R}}_{\varepsilon }\). We have the

Lemma 2.2

For any \(T>0\), there exists a positive constant \(c_T\) such that

$$\begin{aligned}{\mathcal {A}}_\varepsilon (t,x)\ge c_T,\quad \text { for any }\varepsilon >0,\ 0\le t\le T,\ x\in \mathbb {T}^1\,. \end{aligned}$$

Proof

We consider the ODE

$$\begin{aligned} \frac{d{\mathcal {S}}_\varepsilon }{dt} (t,x)=\mu _S\Delta _\varepsilon {\mathcal {S}}_\varepsilon (t,x) -\frac{\beta (x){\mathcal {S}}_\varepsilon (t,x){\mathcal {I}}_\varepsilon (t,x)}{ {\mathcal {S}}_\varepsilon (t,x)+{\mathcal {I}}_\varepsilon (t,x)+{\mathcal {R}}_\varepsilon (t,x)}.\end{aligned}$$

Since \({\mathcal {S}}_\varepsilon (t,x)+{\mathcal {R}}_\varepsilon (t,x)\ge 0\) and \({\mathcal {I}}_\varepsilon (t,x)\ge 0\), it is plain that

$$\begin{aligned} 0\le \frac{\beta (x){\mathcal {I}}_\varepsilon (t,x)}{ {\mathcal {S}}_\varepsilon (t,x)+{\mathcal {I}}_\varepsilon (t,x)+{\mathcal {R}}_\varepsilon (t,x)}\le {\overline{\beta }}, \quad \text {where}\; \; {\overline{\beta }}:=\sup _{x\in \mathbb {T}^1}\vert \beta (x)\vert .\end{aligned}$$

Define \(\overline{{\mathcal {S}}}_\varepsilon (t,x)=e^{{\overline{\beta }}t}{\mathcal {S}}_\varepsilon (t,x)\). We have

$$\begin{aligned} \frac{d\overline{{\mathcal {S}}}_\varepsilon }{dt}(t,x)=\mu _S\Delta _\varepsilon \overline{{\mathcal {S}}}_\varepsilon (t,x)+\left( {\overline{\beta }}-\frac{\beta (x){\mathcal {I}}_\varepsilon (t,x)}{{\mathcal {S}}_\varepsilon (t,x)+{\mathcal {I}}_\varepsilon (t,x)+{\mathcal {R}}_\varepsilon (t,x)}\right) \overline{{\mathcal {S}}}_\varepsilon (t,x). \end{aligned}$$

Combining this with the last inequality, we deduce that

$$\begin{aligned}\overline{{\mathcal {S}}}_\varepsilon (t,x)\ge [e^{t\mu _S\Delta _\varepsilon } \overline{{\mathcal {S}}}_\varepsilon (0,\cdot )](x)\ge c,\end{aligned}$$

from Assumption 1.2.

Going back to \({\mathcal {S}}_\varepsilon \), we have thus proved that

$$\begin{aligned} {\mathcal {S}}_\varepsilon (t,x)\ge ce^{-{\overline{\beta }}t}\,.\end{aligned}$$

In other words, for any \(T>0\), there exists a constant \(c_T:=ce^{-{\overline{\beta }}T}\) such that

$$\begin{aligned}{\mathcal {S}}_\varepsilon (t,x)\ge c_T,\quad \text { for any }\varepsilon >0,\ 0\le t\le T,\ x\in \mathbb {T}^1\,.\end{aligned}$$

Since \({\mathcal {I}}_\varepsilon (t,x)+{\mathcal {R}}_\varepsilon (t,x)\ge 0\), \({\mathcal {A}}_\varepsilon (t,x)\) satisfies the same lower bound. \(\square \)
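The comparison argument above can be illustrated numerically: integrating the lattice system (1) with an explicit Euler scheme, the iterates stay above \(c\,e^{-{\overline{\beta }}t}\). The following sketch uses illustrative coefficients, grid size and initial data, none of which are taken from the paper.

```python
import numpy as np

# Explicit-Euler sketch of the lattice SIR system (1) on the torus,
# checking the lower bound S_eps(t,x) >= c * exp(-beta_bar * t).
# All numerical choices below are illustrative assumptions.
n = 20                     # number of sites, eps = 1/n
eps = 1.0 / n
x = np.arange(n) * eps
mu_S, mu_I, mu_R = 0.01, 0.01, 0.01
beta = 0.8 + 0.2 * np.cos(2 * np.pi * x)    # smooth, positive beta(x)
alpha = 0.3 + 0.1 * np.sin(2 * np.pi * x)   # smooth, positive alpha(x)
beta_bar = np.max(np.abs(beta))

def lap(u):
    """Discrete Laplacian Delta_eps with periodic boundary."""
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / eps**2

S = 0.6 + 0.1 * np.sin(2 * np.pi * x)  # S(0,.) bounded below by c = 0.5
I = 0.2 * np.ones(n)
R = 0.1 * np.ones(n)
c = S.min()

dt, T = 1e-3, 1.0
ok = True
for k in range(int(T / dt)):
    A = S + I + R
    inf = beta * S * I / A                 # infection term of (1)
    S = S + dt * (mu_S * lap(S) - inf)
    I = I + dt * (mu_I * lap(I) + inf - alpha * I)
    R = R + dt * (mu_R * lap(R) + alpha * I)
    t = (k + 1) * dt
    ok = ok and np.all(S >= c * np.exp(-beta_bar * t) - 1e-9)

print(ok)
```

The bound holds with a wide margin here, since the actual infection rate \(\beta (x){\mathcal {I}}_\varepsilon /{\mathcal {A}}_\varepsilon \) stays well below \({\overline{\beta }}\).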

Lemma 2.3

For any \(T>0\), there exists a constant C such that for each \(\varepsilon >0\)

$$\begin{aligned}{} & {} \sup _{0\le t\le T}\Bigg ( \big \Vert {\mathcal {S}}_{\varepsilon }(t)\big \Vert _{L^2}^2+\big \Vert {\mathcal {I}}_{\varepsilon }(t)\big \Vert _{L^2}^2+\big \Vert {\mathcal {R}}_{\varepsilon }(t)\big \Vert _{L^2}^2\Bigg )\nonumber \\{} & {} \quad +2\int _0^T \Bigg (\mu _S\big \Vert \nabla _{\varepsilon }^+ {\mathcal {S}}_{\varepsilon }(r) \big \Vert _{L^2}^2 +\mu _I\big \Vert \nabla _{\varepsilon }^+ {\mathcal {I}}_{\varepsilon }(r) \big \Vert _{L^2}^2 +\mu _R\big \Vert \nabla _{\varepsilon }^+ {\mathcal {R}}_{\varepsilon }(r) \big \Vert _{L^2}^2\Bigg )dr\le C .\nonumber \\ \end{aligned}$$
(13)

Proof

For all \((t,x)\in [0,T]\times \mathbb {T}^1 \), we have

\( \displaystyle \hspace{1cm} \dfrac{d\,{\mathcal {S}}_{\varepsilon }}{dt}(t,x) = \mu _S\,\Delta _{\varepsilon } {\mathcal {S}}_{\varepsilon }(t,x)- \dfrac{\beta (x)\, {\mathcal {S}}_{\varepsilon }(t,x){\mathcal {I}}_{\varepsilon }(t,x)}{{\mathcal {A}}_{\varepsilon }(t,x)}, \)

which implies

$$\begin{aligned} 2\big \langle \,{\mathcal {S}}_{\varepsilon }(t)\,,\, \dfrac{d\,{\mathcal {S}}_{\varepsilon }}{dt}(t)\,\big \rangle= & {} 2\mu _S\big \langle \,\Delta _{\varepsilon } {\mathcal {S}}_{\varepsilon }(t)\, , \, {\mathcal {S}}_{\varepsilon }(t)\,\big \rangle - 2\big \langle \,\dfrac{\beta (.)\, {\mathcal {S}}_{\varepsilon }(t){\mathcal {I}}_{\varepsilon }(t)}{{\mathcal {A}}_{\varepsilon }(t)}\, ,\,{\mathcal {S}}_{\varepsilon }(t) \,\big \rangle \\= & {} -2\mu _S\big \langle \,\nabla _{\varepsilon }^+ {\mathcal {S}}_{\varepsilon }(t)\, , \, \nabla _{\varepsilon }^+ {\mathcal {S}}_{\varepsilon }(t)\,\big \rangle - 2\big \langle \,\dfrac{\beta (.)\, {\mathcal {S}}_{\varepsilon }(t){\mathcal {I}}_{\varepsilon }(t)}{{\mathcal {A}}_{\varepsilon }(t)}\, ,\,{\mathcal {S}}_{\varepsilon }(t) \,\big \rangle . \end{aligned}$$

Then, \(\forall \, t \in [0,T]\),

$$\begin{aligned} \big \Vert {\mathcal {S}}_{\varepsilon }(t)\big \Vert _{L^2}^2+2\mu _S\int _0^t\big \Vert \nabla _{\varepsilon }^+ {\mathcal {S}}_{\varepsilon }(r) \big \Vert _{L^2}^2 dr= & {} \big \Vert {\mathcal {S}}_{\varepsilon }(0)\big \Vert _{L^2}^2- 2\int _0^t\big \langle \,\dfrac{\beta (.)\, {\mathcal {S}}_{\varepsilon }(r){\mathcal {I}}_{\varepsilon }(r)}{{\mathcal {A}}_{\varepsilon }(r)}\, ,\,{\mathcal {S}}_{\varepsilon }(r) \,\big \rangle dr. \end{aligned}$$

In the same way, we obtain

$$\begin{aligned}{} & {} \big \Vert {\mathcal {I}}_{\varepsilon }(t)\big \Vert _{L^2}^2+2\mu _I\int _0^t\big \Vert \nabla _{\varepsilon }^+ {\mathcal {I}}_{\varepsilon }(r) \big \Vert _{L^2}^2 dr\\{} & {} \quad = \big \Vert {\mathcal {I}}_{\varepsilon }(0)\big \Vert _{L^2}^2+ 2\int _0^t\big \langle \,\dfrac{\beta (.)\, {\mathcal {S}}_{\varepsilon }(r){\mathcal {I}}_{\varepsilon }(r)}{{\mathcal {A}}_{\varepsilon }(r)}\, ,\,{\mathcal {I}}_{\varepsilon }(r) \,\big \rangle dr \\{} & {} \quad - 2\int _0^t\big \langle \,\alpha (.){\mathcal {I}}_{\varepsilon }(r)\, , \, {\mathcal {I}}_{\varepsilon }(r) \,\big \rangle dr\, , \end{aligned}$$

and

$$\begin{aligned} \big \Vert {\mathcal {R}}_{\varepsilon }(t)\big \Vert _{L^2}^2+2\mu _R\int _0^t\big \Vert \nabla _{\varepsilon }^+ {\mathcal {R}}_{\varepsilon }(r) \big \Vert _{L^2}^2 dr= & {} \big \Vert {\mathcal {R}}_{\varepsilon }(0)\big \Vert _{L^2}^2+ 2\int _0^t\big \langle \,\alpha (.){\mathcal {I}}_{\varepsilon }(r)\, , \, {\mathcal {R}}_{\varepsilon }(r) \,\big \rangle dr. \end{aligned}$$

Then, we deduce that

$$\begin{aligned}{} & {} \big \Vert {\mathcal {S}}_{\varepsilon }(t)\big \Vert _{L^2}^2+\big \Vert {\mathcal {I}}_{\varepsilon }(t)\big \Vert _{L^2}^2+\big \Vert {\mathcal {R}}_{\varepsilon }(t)\big \Vert _{L^2}^2+2\int _0^t \\{} & {} \quad \Bigg (\mu _S\big \Vert \nabla _{\varepsilon }^+ {\mathcal {S}}_{\varepsilon }(r) \big \Vert _{L^2}^2 +\mu _I\big \Vert \nabla _{\varepsilon }^+ {\mathcal {I}}_{\varepsilon }(r) \big \Vert _{L^2}^2 +\mu _R\big \Vert \nabla _{\varepsilon }^+ {\mathcal {R}}_{\varepsilon }(r) \big \Vert _{L^2}^2\Bigg )dr\\{} & {} \quad \le \big \Vert {\mathcal {S}}_{\varepsilon }(0)\big \Vert _{L^2}^2+\big \Vert {\mathcal {I}}_{\varepsilon }(0)\big \Vert _{L^2}^2+\big \Vert {\mathcal {R}}_{\varepsilon }(0)\big \Vert _{L^2}^2\\{} & {} \qquad + \int _0^t\Bigg ((2{\overline{\beta }}+{\overline{\alpha }}) \big \Vert {\mathcal {I}}_{\varepsilon }(r)\big \Vert _{L^2}^2+ {\overline{\alpha }}\big \Vert {\mathcal {R}}_{\varepsilon }(r)\big \Vert _{L^2}^2\Bigg ) dr, \end{aligned}$$

where \({\overline{\alpha }}=\underset{x\in \mathbb {T}^1}{\sup }\vert \alpha (x) \vert \).

It then follows from Gronwall’s lemma that

$$\begin{aligned}{} & {} \big \Vert {\mathcal {S}}_{\varepsilon }(t)\big \Vert _{L^2}^2+\big \Vert {\mathcal {I}}_{\varepsilon }(t)\big \Vert _{L^2}^2+\big \Vert {\mathcal {R}}_{\varepsilon }(t)\big \Vert _{L^2}^2\\{} & {} \qquad +2\int _0^t \Bigg (\mu _S\big \Vert \nabla _{\varepsilon }^+ {\mathcal {S}}_{\varepsilon }(r) \big \Vert _{L^2}^2 +\mu _I\big \Vert \nabla _{\varepsilon }^+ {\mathcal {I}}_{\varepsilon }(r) \big \Vert _{L^2}^2 +\mu _R\big \Vert \nabla _{\varepsilon }^+ {\mathcal {R}}_{\varepsilon }(r) \big \Vert _{L^2}^2\Bigg )dr \\{} & {} \quad \le \Big (\big \Vert {\mathcal {S}}_{\varepsilon }(0)\big \Vert _{L^2}^2+\big \Vert {\mathcal {I}}_{\varepsilon }(0)\big \Vert _{L^2}^2+\big \Vert {\mathcal {R}}_{\varepsilon }(0)\big \Vert _{L^2}^2\Big )e^{(2{\overline{\beta }}+{\overline{\alpha }})T} \\{} & {} \quad \le C({\overline{\alpha }}, {\overline{\beta }}, T) \,, \end{aligned}$$

where the last inequality uses the uniform bound \(0< S_{\varepsilon }(0,\cdot )+ I_{\varepsilon }(0,\cdot )+ R_{\varepsilon }(0,\cdot ) \le M\) on the initial data.

\(\square \)
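The three energy identities above all rest on the discrete summation-by-parts identity \(\langle \Delta _\varepsilon u, u\rangle =-\Vert \nabla _\varepsilon ^+ u\Vert _{L^2}^2\) on the periodic grid. A minimal numerical sketch (grid size and data are illustrative assumptions):

```python
import numpy as np

# Sketch: discrete summation by parts on the periodic lattice,
#   <Delta_eps u, u> = -||grad_eps^+ u||^2,
# which drives the energy estimate (13).  Inner products carry the
# lattice weight eps, matching the L^2(T^1) norm of step functions.
n = 64
eps = 1.0 / n
rng = np.random.default_rng(0)
u = rng.standard_normal(n)

grad = (np.roll(u, -1) - u) / eps                        # grad_eps^+
lapu = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / eps**2  # Delta_eps

lhs = eps * np.sum(lapu * u)     # <Delta_eps u, u>
rhs = -eps * np.sum(grad**2)     # -||grad_eps^+ u||^2
print(abs(lhs - rhs) <= 1e-9 * abs(rhs))
```

The identity is exact (a telescoping sum over the torus), so the two sides agree to rounding error.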

We now add the following assumption.

Assumption 2.1

The functions \(\beta \), \(\alpha \) satisfy \(\alpha \in C^1(\mathbb {T}^1)\) and \(\beta \in C^2(\mathbb {T}^1)\).

Let \(f_\varepsilon (t,x):=\beta (x)\frac{{\mathcal {S}}_{\varepsilon }(t,x)\big [{\mathcal {S}}_{\varepsilon }(t,x)+{\mathcal {R}}_{\varepsilon }(t,x)\big ]}{{\mathcal {A}}_{\varepsilon }^2(t,x)}, \text {and } g_\varepsilon (t,x) := \beta (x)\frac{{\mathcal {I}}_{\varepsilon }(t,x)\big [{\mathcal {I}}_{\varepsilon }(t,x)+{\mathcal {R}}_{\varepsilon }(t,x)\big ]}{{\mathcal {A}}_{\varepsilon }^2(t,x)}.\)

Lemma 2.4

For any \(T>0\), there exists a positive constant C such that for all \(\varepsilon >0\),

$$\begin{aligned} \int _{0}^{T} \left( \big \Vert \nabla _{\varepsilon }^+ f_\varepsilon (t) \big \Vert _{L^2}^2 + \big \Vert \nabla _{\varepsilon }^+ g_\varepsilon (t) \big \Vert _{L^2}^2\right) dt \le C.\end{aligned}$$
(14)

Proof

For all \(x\in \mathbb {T}^1\) and \(t\ge 0\), we have

$$\begin{aligned}{} & {} \nabla _{\varepsilon }^+ f_\varepsilon (t,x)\nonumber \\{} & {} \quad = -\dfrac{\beta (x+\varepsilon ){\mathcal {S}}_{\varepsilon }(t, x+\varepsilon )\big [{\mathcal {S}}_{\varepsilon }(t, x+\varepsilon )+{\mathcal {R}}_{\varepsilon }(t, x+\varepsilon )\big ]\big [{\mathcal {A}}_{\varepsilon }(t, x+\varepsilon )+{\mathcal {A}}_{\varepsilon }(t, x)\big ]\nabla _{\varepsilon }^+{\mathcal {A}}_{\varepsilon }(t, x)}{{\mathcal {A}}_{\varepsilon }^2(t,x){\mathcal {A}}_{\varepsilon }^2(t, x+\varepsilon )}\nonumber \\{} & {} \qquad + \dfrac{\beta (x+\varepsilon ){\mathcal {S}}_{\varepsilon }(t, x+\varepsilon )}{{\mathcal {A}}_{\varepsilon }^2(t, x)}\nabla _{\varepsilon }^+\big ({\mathcal {S}}_{\varepsilon }(t, x)+{\mathcal {R}}_{\varepsilon }(t, x)\big )\nonumber \\{} & {} \qquad + \dfrac{\beta (x+\varepsilon )\big [{\mathcal {S}}_{\varepsilon }(t, x)+{\mathcal {R}}_{\varepsilon }(t, x)\big ]}{{\mathcal {A}}_{\varepsilon }^2(t, x)}\nabla _{\varepsilon }^+{\mathcal {S}}_{\varepsilon }(t, x) \nonumber \\{} & {} \qquad + \dfrac{{\mathcal {S}}_{\varepsilon }(t, x)\big [{\mathcal {S}}_{\varepsilon }(t, x)+{\mathcal {R}}_{\varepsilon }(t, x)\big ]}{{\mathcal {A}}_{\varepsilon }^2(t, x)}\nabla _{\varepsilon }^+\beta (x), \end{aligned}$$
(15)

from which we obtain

$$\begin{aligned} \int _{0}^{T}\int _{\mathbb {T}^1}\Big \vert \nabla _{\varepsilon }^+ f_\varepsilon (t, x)\Big \vert ^2\, dx\, dt \le C \int _{0}^{T}\int _{\mathbb {T}^1}\Big ( \big \vert \nabla _{\varepsilon }^+\beta (x)\big \vert ^2+\big \vert \nabla _{\varepsilon }^+{\mathcal {S}}_{\varepsilon }(t,x)\big \vert ^2 +\big \vert \nabla _{\varepsilon }^+{\mathcal {R}}_{\varepsilon }(t,x)\big \vert ^2 +\big \vert \nabla _{\varepsilon }^+{\mathcal {A}}_{\varepsilon }(t,x)\big \vert ^2 \Big )\, dx\, dt , \end{aligned}$$

where we have used Assumption 2.1, inequality (11) and Lemma 2.2. The result now follows from Lemma 2.3.

\(\square \)
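The expansion (15) is an exact discrete identity, obtained by telescoping the difference of the quotient \(\beta S(S+R)/A^2\) one factor at a time. The following sketch checks it pointwise on random positive lattice data (grid size and fields are illustrative assumptions):

```python
import numpy as np

# Sketch: the discrete "quotient rule" (15) for
#   f_eps = beta * S * (S + R) / A^2,  A = S + I + R,
# checked pointwise on random positive lattice data.
n = 32
eps = 1.0 / n
rng = np.random.default_rng(1)
x = np.arange(n) * eps
beta = 0.5 + 0.3 * np.cos(2 * np.pi * x)
S, I, R = (rng.uniform(0.1, 1.0, n) for _ in range(3))
A = S + I + R

def shift(u):  # u(x + eps) on the torus
    return np.roll(u, -1)

def grad(u):   # forward difference grad_eps^+
    return (shift(u) - u) / eps

f = beta * S * (S + R) / A**2
lhs = grad(f)

# the four terms of (15)
t1 = -(shift(beta) * shift(S) * shift(S + R) * (shift(A) + A) * grad(A)
       / (A**2 * shift(A)**2))
t2 = shift(beta) * shift(S) / A**2 * grad(S + R)
t3 = shift(beta) * (S + R) / A**2 * grad(S)
t4 = S * (S + R) / A**2 * grad(beta)
rhs = t1 + t2 + t3 + t4

print(np.allclose(lhs, rhs))
```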

Lemma 2.5

For any \(T>0\), there exists a positive constant C such that

$$\begin{aligned}&\sup _{0\le t\le T}\left( \Vert \nabla ^+_\varepsilon S_\varepsilon (t)\Vert _\infty \vee \Vert \nabla ^+_\varepsilon I_\varepsilon (t)\Vert _\infty \vee \Vert \nabla ^+_\varepsilon R_\varepsilon (t)\Vert _\infty \vee \Vert \nabla ^+_\varepsilon f_\varepsilon (t)\Vert _\infty \vee \Vert \nabla ^+_\varepsilon g_\varepsilon (t)\Vert _\infty \right) \le C, \end{aligned}$$
(16)
$$\begin{aligned}&\quad \int _{0}^{T} \left( \big \Vert \Delta _{\varepsilon } {\mathcal {S}}_{\varepsilon }(t) \big \Vert _{L^2}^2 +\big \Vert \Delta _{\varepsilon } {\mathcal {I}}_{\varepsilon }(t) \big \Vert _{L^2}^2+ \big \Vert \Delta _{\varepsilon } {\mathcal {R}}_{\varepsilon }(t) \big \Vert _{L^2}^2\right) dt \le C, \end{aligned}$$
(17)
$$\begin{aligned}&\quad \int _{0}^{T} \left( \big \Vert \Delta _{\varepsilon } f_\varepsilon (t) \big \Vert _{L^2}^2 +\big \Vert \Delta _{\varepsilon } g_{\varepsilon }(t) \big \Vert _{L^2}^2\right) dt \le C, \end{aligned}$$
(18)

and

$$\begin{aligned} \sup _{0\le t\le T} \left( \big \Vert f_\varepsilon (t) \big \Vert _{\textrm{H}^{1,\varepsilon }}\vee \big \Vert g_\varepsilon (t) \big \Vert _{\textrm{H}^{1,\varepsilon }}\right) \le C . \end{aligned}$$
(19)

Proof

We first establish (16). Applying the operator \(\nabla ^+_\varepsilon \) to the first equation in (1), we get

$$\begin{aligned} \frac{d\nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon }{dt}(t,x)=\mu _S\Delta _\varepsilon \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (t,x) -\nabla ^+_\varepsilon \left( \frac{\beta {\mathcal {S}}_\varepsilon {\mathcal {I}}_\varepsilon }{{\mathcal {A}}_\varepsilon }\right) (t,x) \end{aligned}$$
(20)

The last term on the right hand side above can be written out explicitly by a computation similar to the one leading to (15). Combining that formula with Assumption 2.1, inequality (11) and Lemma 2.2, we deduce that

$$\begin{aligned} \Big \Vert \nabla ^+_\varepsilon \left( \frac{\beta {\mathcal {S}}_\varepsilon {\mathcal {I}}_\varepsilon }{{\mathcal {A}}_\varepsilon }\right) (t)\Big \Vert _\infty \le C\left( \Big \Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (t)\Big \Vert _\infty +\Big \Vert \nabla ^+_\varepsilon {\mathcal {I}}_\varepsilon (t)\Big \Vert _\infty +\Big \Vert \nabla ^+_\varepsilon {\mathcal {R}}_\varepsilon (t)\Big \Vert _\infty \right) \,.\end{aligned}$$

From the Duhamel formula,

$$\begin{aligned}\nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (t)=e^{t\mu _S\Delta _\varepsilon }\nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (0) +\int _0^t e^{(t-s)\mu _S\Delta _\varepsilon }\nabla ^+_\varepsilon \left( \frac{\beta {\mathcal {S}}_\varepsilon {\mathcal {I}}_\varepsilon }{{\mathcal {A}}_\varepsilon }\right) (s)ds\,.\end{aligned}$$

Since the semigroup \(e^{t\mu _S\Delta _\varepsilon }\) is contracting in \(L^\infty \), we deduce that

$$\begin{aligned}\Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (t)\Vert _\infty\le & {} \Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (0)\Vert _\infty +C\int _0^t\left( \Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (s)\Vert _\infty +\Vert \nabla ^+_\varepsilon {\mathcal {I}}_\varepsilon (s)\Vert _\infty \right. \\{} & {} \left. +\Vert \nabla ^+_\varepsilon {\mathcal {R}}_\varepsilon (s)\Vert _\infty \right) ds\,.\end{aligned}$$
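The \(L^\infty \)-contraction of \(e^{t\mu _S\Delta _\varepsilon }\) used above reflects the fact that the discrete heat semigroup is a stochastic matrix (nonnegative entries, rows summing to one). A minimal numerical sketch, with an illustrative grid and coefficients:

```python
import numpy as np

# Sketch: exp(t*mu*Delta_eps) is a contraction in L^infty.
# We build exp(t*mu*L) by diagonalizing the symmetric periodic
# Laplacian L and check the stochastic-matrix properties.
n = 16
eps = 1.0 / n
mu, t = 0.05, 0.3

L = (np.roll(np.eye(n), -1, axis=1) - 2 * np.eye(n)
     + np.roll(np.eye(n), 1, axis=1)) / eps**2
w, V = np.linalg.eigh(L)                    # L symmetric: real spectrum
P = V @ np.diag(np.exp(t * mu * w)) @ V.T   # P = exp(t*mu*L)

rng = np.random.default_rng(2)
u = rng.standard_normal(n)
ok = (np.all(P >= -1e-12)                      # nonnegative entries
      and abs(P.sum(axis=1) - 1).max() < 1e-9  # rows sum to 1
      and np.max(np.abs(P @ u)) <= np.max(np.abs(u)) + 1e-12)
print(ok)
```

Since `L` has zero row sums and nonnegative off-diagonal entries, its exponential is a Markov transition matrix, which is exactly why \(\Vert e^{t\mu \Delta _\varepsilon }u\Vert _\infty \le \Vert u\Vert _\infty \).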

Applying similar arguments to the two other equations in (1), we obtain

$$\begin{aligned}&\Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (t)\Vert _\infty +\Vert \nabla ^+_\varepsilon {\mathcal {I}}_\varepsilon (t)\Vert _\infty +\Vert \nabla ^+_\varepsilon {\mathcal {R}}_\varepsilon (t)\Vert _\infty \\&\quad \le \Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (0)\Vert _\infty +\Vert \nabla ^+_\varepsilon {\mathcal {I}}_\varepsilon (0)\Vert _\infty +\Vert \nabla ^+_\varepsilon {\mathcal {R}}_\varepsilon (0)\Vert _\infty \\&\qquad +C\int _0^t\left( \Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (s)\Vert _\infty +\Vert \nabla ^+_\varepsilon {\mathcal {I}}_\varepsilon (s)\Vert _\infty +\Vert \nabla ^+_\varepsilon {\mathcal {R}}_\varepsilon (s)\Vert _\infty \right) ds\,. \end{aligned}$$

(16) now follows from Gronwall’s Lemma and Assumption 1.1.

We now multiply (20) by \(\nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (t,x)\) and integrate on \([0,t]\times \mathbb {T}^1\), yielding

$$\begin{aligned} \Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (t)\Vert _{L^2}^2+2\mu _S\int _0^t\Vert \Delta _\varepsilon {\mathcal {S}}_\varepsilon (s)\Vert _{L^2}^2ds&=\Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (0)\Vert _{L^2}^2+2\int _0^t\left( \frac{\beta {\mathcal {S}}_\varepsilon {\mathcal {I}}_\varepsilon }{{\mathcal {A}}_\varepsilon }(s),\Delta _\varepsilon {\mathcal {S}}_\varepsilon (s)\right) ds\\&\le \Vert \nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (0)\Vert _{L^2}^2+Ct+\mu _S\int _0^t\Vert \Delta _\varepsilon {\mathcal {S}}_\varepsilon (s)\Vert _{L^2}^2ds\,, \end{aligned}$$

which yields one third of (17). The rest of (17) is proved by similar computations applied to the equations for \(\nabla ^+_\varepsilon {\mathcal {I}}_\varepsilon \) and \(\nabla ^+_\varepsilon {\mathcal {R}}_\varepsilon \). Next (18) follows from (17), (16), Assumption 2.1, (11) and Lemma 2.2.

Since

$$\begin{aligned} \big \Vert f_{\varepsilon }(t) \big \Vert _{\textrm{H}^{1,\varepsilon }}^2 \le C\bigg (\big \Vert f_{\varepsilon }(t) \big \Vert _{L^2}^2 +\big \Vert \nabla _{\!\!\varepsilon }^+ f_{\varepsilon }(t)\big \Vert _{L^2}^2\bigg ) , \end{aligned}$$

the estimate (19) follows from (16), Assumption 2.1, inequality (11), Lemma 2.2 and the fact that the norm in \(L^2(\mathbb {T}^1)\) is bounded by the norm in \(L^\infty (\mathbb {T}^1)\).

\(\square \)

Lemma 2.6

For any \(T>0\), as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} f_\varepsilon \longrightarrow f , \qquad g_\varepsilon \longrightarrow g, \qquad \nabla _{\varepsilon }^+ f_\varepsilon \longrightarrow \nabla f \qquad \text {and}\qquad \nabla _{\varepsilon }^+ g_\varepsilon \longrightarrow \nabla g \qquad \text {in } C\left( [0, T] ; L^2(\mathbb {T}^1)\right) , \end{aligned}$$

where \(\displaystyle f(t,x)=\beta (x)\dfrac{{\textbf{s}}(t,x)\left[ {\textbf{s}}(t,x)+{\textbf{r}}(t,x)\right] }{{\textbf{a}}^2(t,x)}\) and \( \displaystyle g(t,x)=\beta (x)\dfrac{{\textbf{i}}(t,x)\left[ {\textbf{i}}(t,x)+{\textbf{r}}(t,x)\right] }{{\textbf{a}}^2(t,x)}\), for all \(t\in [0, T]\), \( x\in \mathbb {T}^1\).

Moreover f, \(g \in L^2\left( 0, T ; \textrm{H}^1\right) \).

Proof

Let d be the function defined on \(\{(s, i, r)\in (\mathbb {R}_+)^3 : s+i+r>0\}\) by \(d(s,i,r)=\dfrac{s(s+r)}{(s+i+r)^2}\), so that, for all \( t\in [0, T]\), \(x\in \mathbb {T}^1\) and \(\varepsilon >0\),

$$\begin{aligned} \displaystyle f_\varepsilon (t,x) =\beta (x)\, d\big ({\mathcal {S}}_{\varepsilon }(t,x), {\mathcal {I}}_{\varepsilon }(t,x), {\mathcal {R}}_{\varepsilon }(t,x)\big ) \quad \text {and} \quad f(t,x) =\beta (x)\, d\big ({\textbf{s}}(t,x), {\textbf{i}}(t,x), {\textbf{r}}(t,x)\big ). \end{aligned}$$

Furthermore, we know that \({\mathcal {S}}_{\varepsilon } \longrightarrow {\textbf{s}}\), \({\mathcal {I}}_{\varepsilon }\longrightarrow {\textbf{i}}\) and \({\mathcal {R}}_{\varepsilon }\longrightarrow {\textbf{r}}\) uniformly on \([0, T] \times \mathbb {T}^1\). Since d is continuous on \(\{(s, i, r)\in (\mathbb {R}_+)^3 : s+i+r>0\}\) and \(\beta \) is continuous, we deduce that \(f_\varepsilon \longrightarrow f\) uniformly on \([0, T] \times \mathbb {T}^1\), and in particular in \(C\left( [0, T] ; L^2(\mathbb {T}^1)\right) \).

From (20) and the similar equations for \(\displaystyle \nabla _{\!\!\varepsilon }^+ {\mathcal {I}}_{\varepsilon }\) and \(\displaystyle \nabla _{\!\!\varepsilon }^+ {\mathcal {R}}_{\varepsilon }\), combined with the expression (15) for \(\nabla _{\varepsilon }^+ f_\varepsilon \), we obtain \(\nabla _{\varepsilon }^+ f_\varepsilon \longrightarrow \nabla f\) by an argument similar to the previous one.

The proofs of \(g_\varepsilon \longrightarrow g\) and \(\nabla _{\varepsilon }^+ g_\varepsilon \longrightarrow \nabla g\) are obtained in the same way.

\(\square \)

In the sequel, we will write "\(f_\varepsilon (t) \longrightarrow f(t) \) in \(\textrm{H}^{1}\)" to mean that "\(f_\varepsilon (t) \longrightarrow f(t) \) in \(L^2(\mathbb {T}^1)\) and \(\nabla _{\!\!\varepsilon }^+f_\varepsilon (t) \longrightarrow \nabla f(t) \) in \(L^2(\mathbb {T}^1)\)".

We have the following compactness result.

Lemma 2.7

(Theorem 1.69 of Bahouri et al. [1], page 47) 

For any compact subset E of \(\mathbb {R}^d\) and \(s_1<s_2\), the embedding of \(\textrm{H}^{s_2}\left( E\right) \) into \(\textrm{H}^{s_1}\left( E\right) \) is a compact linear operator.
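To make the scale of spaces concrete, the norms \(\Vert u\Vert _{\textrm{H}^{s}}\) on \(\mathbb {T}^1\) can be computed from Fourier coefficients. The sketch below (sample function and truncation level are illustrative assumptions) checks the monotonicity \(\Vert u\Vert _{\textrm{H}^{s_1}}\le \Vert u\Vert _{\textrm{H}^{s_2}}\) for \(s_1<s_2\), the boundedness underlying the compact embedding of Lemma 2.7:

```python
import numpy as np

# Sketch: Sobolev norms on T^1 through Fourier coefficients,
#   ||u||_{H^s}^2 = sum_m (1 + 4 pi^2 m^2)^s |u_hat(m)|^2,
# illustrating ||u||_{H^{s1}} <= ||u||_{H^{s2}} for s1 < s2.
n = 128
x = np.arange(n) / n
u = np.exp(np.sin(2 * np.pi * x))     # a smooth periodic sample function
uh = np.fft.fft(u) / n                # Fourier coefficients u_hat(m)
m = np.fft.fftfreq(n, d=1.0 / n)      # integer frequencies

def h_norm(s):
    return np.sqrt(np.sum((1 + 4 * np.pi**2 * m**2) ** s * np.abs(uh) ** 2))

print(h_norm(1.0) <= h_norm(2.0) <= h_norm(3.0))
```

The norms are increasing in s because the Fourier weights \((1+4\pi ^2m^2)^s\) are; compactness of the embedding is the additional (non-numerical) statement of the lemma.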

In the next section, we study the behavior of the process \(\{\Psi _{\varepsilon }, 0<\varepsilon < 1 \}\) as \(\varepsilon \) goes to zero.

3 Functional central limit theorem

Let us define

$$\begin{aligned} {\mathscr {U}}_{\varepsilon }(t,x)&= \dfrac{1}{\varepsilon ^{1/2}}\sum _{i=1}^{\varepsilon ^{-1}} U_{\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x), \qquad {\mathscr {V}}_{\varepsilon }(t,x) =\dfrac{1}{\varepsilon ^{1/2}}\sum _{i=1}^{\varepsilon ^{-1}}V_{\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x), \\ {\mathscr {W}}_{\varepsilon }(t,x)&= \dfrac{1}{\varepsilon ^{1/2}}\sum _{i=1}^{\varepsilon ^{-1}}W_{\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x).\end{aligned}$$

Moreover, we set

$$\begin{aligned} {\mathscr {M}}_{\varepsilon }^S(t,x)&= \int _0^t \varepsilon ^{-1/2} \sum _{i=1}^{\varepsilon ^{-1}}\sqrt{\beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}}\,\mathbbm {1}_{V_i}(x)\, dB_{x_i}^{inf}(r) \\&\quad + \sqrt{\mu _S}\int _0^t \varepsilon ^{-1/2} \sum \limits _{\begin{array}{c} i\, , \, j \\ x_i \sim x_j \end{array}} \sqrt{ S_{\varepsilon }(r,x_i)}\,\dfrac{ \mathbbm {1}_{V_j}(x)- \mathbbm {1}_{V_i}(x)}{\varepsilon }\, dB_{x_i x_j}^S(r), \\ {\mathscr {M}}_{\varepsilon }^I(t,x)&= -\int _0^t \varepsilon ^{-1/2} \sum _{i=1}^{\varepsilon ^{-1}}\sqrt{\beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}}\,\mathbbm {1}_{V_i}(x)\, dB_{x_i}^{inf}(r) \\&\quad +\int _0^t\varepsilon ^{-1/2}\sum _{i=1}^{\varepsilon ^{-1}}\sqrt{\alpha (x_i) I_{\varepsilon }(r,x_i)}\,\mathbbm {1}_{V_i}(x)\, dB_{x_i}^{rec}(r) \\&\quad + \sqrt{\mu _I}\int _0^t \varepsilon ^{-1/2} \sum \limits _{\begin{array}{c} i\, , \, j \\ x_i \sim x_j \end{array}} \sqrt{ I_{\varepsilon }(r,x_i)}\,\dfrac{\mathbbm {1}_{V_j}(x)- \mathbbm {1}_{V_i}(x)}{\varepsilon }\, dB_{x_i x_j}^I(r),\\ {\mathscr {M}}_{\varepsilon }^R(t,x)&= -\int _0^t\varepsilon ^{-1/2}\sum _{i=1}^{\varepsilon ^{-1}}\sqrt{\alpha (x_i) I_{\varepsilon }(r,x_i)}\,\mathbbm {1}_{V_i}(x)\, dB_{x_i}^{rec}(r) \\&\quad + \sqrt{\mu _R}\int _0^t \varepsilon ^{-1/2} \sum \limits _{\begin{array}{c} i\, , \, j \\ x_i \sim x_j \end{array}} \sqrt{ R_{\varepsilon }(r,x_i)}\,\dfrac{\mathbbm {1}_{V_j}(x)- \mathbbm {1}_{V_i}(x)}{\varepsilon }\, dB_{x_i x_j}^R(r). \end{aligned}$$

\(({\mathscr {U}}_{\varepsilon }, {\mathscr {V}}_{\varepsilon }, {\mathscr {W}}_{\varepsilon })\) satisfies the following system

$$\begin{aligned} \left\{ \begin{aligned} {\mathscr {U}}_{\varepsilon }(t)&= \int _0^t \mu _S \Delta _{\varepsilon }{\mathscr {U}}_{\varepsilon }(r) dr - \int _0^t \beta (.) \dfrac{{\mathcal {I}}_{\varepsilon }(r)\big ({\mathcal {I}}_{\varepsilon }(r)+{\mathcal {R}}_{\varepsilon }(r)\big ){\mathscr {V}}_{\varepsilon }(r)}{{\mathcal {A}}_{\varepsilon }^2(r)} dr\\&\hspace{0.5cm} - \int _0^t \beta (.) \dfrac{{\mathcal {S}}_{\varepsilon }(r)\big ({\mathcal {S}}_{\varepsilon }(r)+{\mathcal {R}}_{\varepsilon }(r)\big ){\mathscr {U}}_{\varepsilon }(r)}{{\mathcal {A}}_{\varepsilon }^2(r)} dr + {\mathscr {M}}_{\varepsilon }^{S}(t),\\ {\mathscr {V}}_{\varepsilon }(t)&= \int _0^t \mu _{I}\Delta _{\varepsilon } {\mathscr {V}}_{\varepsilon }(r) dr + \int _0^t \beta (.) \dfrac{{\mathcal {I}}_{\varepsilon }(r)\big ({\mathcal {I}}_{\varepsilon }(r)+{\mathcal {R}}_{\varepsilon }(r)\big ){\mathscr {V}}_{\varepsilon }(r)}{{\mathcal {A}}_{\varepsilon }^2(r)} dr\\&\quad + \int _0^t \beta (.) \dfrac{{\mathcal {S}}_{\varepsilon }(r)\big ({\mathcal {S}}_{\varepsilon }(r)+{\mathcal {R}}_{\varepsilon }(r)\big ){\mathscr {U}}_{\varepsilon }(r)}{{\mathcal {A}}_{\varepsilon }^2(r)} dr-\int _0^t \alpha (.) {\mathscr {V}}_{\varepsilon }(r) dr+ {\mathscr {M}}_{\varepsilon }^{I}(t), \\ {\mathscr {W}}_{\varepsilon }(t)&= \int _0^t \mu _R\Delta _{\varepsilon } {\mathscr {W}}_{\varepsilon }(r) dr + \int _0^t \alpha (.) {\mathscr {V}}_{\varepsilon }(r) dr + {\mathscr {M}}_{\varepsilon }^{R}(t). \end{aligned} \right. \nonumber \\ \end{aligned}$$
(21)

For \( \gamma \in \mathbb {R}_+\), we denote by \(C\big ([0,T] ; \textrm{H}^{-\gamma }\big )\) the complete separable metric space of continuous functions defined on [0, T] with values in \(\textrm{H}^{-\gamma }\). For any \( \varepsilon >0\), \({\mathscr {U}}_{\varepsilon }\), \({\mathscr {V}}_{\varepsilon }\) and \( {\mathscr {W}}_{\varepsilon }\) can be viewed as continuous processes taking values in some Hilbert space \( \textrm{H}^{-\gamma }\). Hence we will study the weak convergence of the process \(({\mathscr {U}}_{\varepsilon }, {\mathscr {V}}_{\varepsilon }, {\mathscr {W}}_{\varepsilon })\) in \(C\big ([0,T] ; (\textrm{H}^{-\gamma })^3\big ).\)

In the sequel we will need to control the stochastic convolution integrals \(\displaystyle \int _0^t \textsf {T}_{\!\varepsilon ,J}(t-r)d{\mathscr {M}}_{\varepsilon }^J(r)\), with \(J\in \{S, I, R\}\). To that end, we shall need a maximal inequality, a special case of Theorem 2.1 of Kotelenez [8], which we now recall.

Lemma 3.1

(Kotelenez [8]) Let \(( \textrm{H}\,; \Vert . \Vert _{\textrm{H}}) \) be a separable Hilbert space, \({\mathcal {M}}\) an \(\textrm{H}\)-valued locally square integrable càdlàg martingale and \(\textsf {T}(t)\) a strongly continuous semigroup of bounded linear operators on \( \textrm{H}\). Then, there is a finite constant c depending only on the Hilbert norm \( \Vert . \Vert _{\textrm{H}} \) such that for all \( T \ge 0 \)

$$\begin{aligned} \mathbb {E}\bigg (\underset{0\le t \le T}{\sup }\Big \Vert \int _0^t \textsf {T}(t-r)d{\mathcal {M}}(r)\Big \Vert _{\textrm{H}}^2 \bigg )\le c\,e^{4\sigma T}\mathbb {E}\bigg ( \Big \Vert {\mathcal {M}}(T)\Big \Vert _{\textrm{H}}^2\bigg ), \end{aligned}$$
(22)

where \(\sigma \) is a real number such that \( \big \Vert \textsf {T}(t)\big \Vert _{{\mathcal {L}}(\textrm{H})} \le e^{\sigma t}\).

We want to take the limit as \(\varepsilon \rightarrow 0\) in the system of SDEs (21) satisfied by \(({\mathscr {U}}_{\varepsilon }, {\mathscr {V}}_{\varepsilon }, {\mathscr {W}}_{\varepsilon })\). To this end we will split our system into two subsystems.

First, we consider the following linear system

$$\begin{aligned} \left\{ \begin{aligned} d u_{\varepsilon } (t)&= \mu _S\,\Delta _{\varepsilon }u_{\varepsilon }(t) dt+d{\mathscr {M}}^S_{\varepsilon }(t),\\ dv_{\varepsilon }(t)&=\mu _I\,\Delta _{\varepsilon } v_{\varepsilon }dt+d{\mathscr {M}}^I_{\varepsilon }(t), \\ dw_{\varepsilon }(t)&=\mu _R\,\Delta _{\varepsilon } w_{\varepsilon }(t)dt+d{\mathscr {M}}^R_{\varepsilon }(t),\\ u_{\varepsilon }(0)&=v_{\varepsilon }(0)=w_{\varepsilon }(0)=0. \end{aligned} \right. \end{aligned}$$
(23)

Next, we shall consider the second system

$$\begin{aligned} \left\{ \begin{aligned} \dfrac{d{\overline{u}}_{\varepsilon }}{dt}(t)&= \mu _S\,\Delta _{\varepsilon } {\overline{u}}_{\varepsilon }(t)-f_{\varepsilon }(t){\overline{u}}_{\varepsilon }(t)-g_{\varepsilon }(t){\overline{v}}_{\varepsilon }(t)-f_{\varepsilon }(t)u_{\varepsilon }(t)-g_{\varepsilon }(t)v_{\varepsilon }(t),\\ \dfrac{d{\overline{v}}_{\varepsilon }}{dt}(t)&= \mu _I\,\Delta _{\varepsilon } {\overline{v}}_{\varepsilon }(t)+ f_{\varepsilon }(t){\overline{u}}_{\varepsilon }(t)+(g_{\varepsilon }(t)-\alpha ){\overline{v}}_{\varepsilon }(t)+f_{\varepsilon }(t)u_{\varepsilon }(t)\\&\quad +(g_{\varepsilon }(t)-\alpha )v_{\varepsilon }(t), \\ \dfrac{d{\overline{w}}_{\varepsilon }}{dt}(t)&= \mu _R\,\Delta _{\varepsilon } {\overline{w}}_{\varepsilon }+ \alpha \left( v_{\varepsilon }+{\overline{v}}_{\varepsilon }\right) , \\ {\overline{u}}_{\varepsilon }(0)&={\overline{v}}_{\varepsilon }(0)={\overline{w}}_{\varepsilon }(0)=0, \end{aligned} \right. \end{aligned}$$
(24)

and finally, we note that

$$\begin{aligned} {\mathscr {U}}_{\varepsilon }=u_{\varepsilon }+{\overline{u}}_{\varepsilon }, \quad {\mathscr {V}}_{\varepsilon }=v_{\varepsilon }+{\overline{v}}_{\varepsilon }, \quad {\mathscr {W}}_{\varepsilon }=w_{\varepsilon }+{\overline{w}}_{\varepsilon }. \end{aligned}$$

Then the convergence of \({\mathscr {Y}}_\varepsilon :=\left( {\mathscr {U}}_{\varepsilon }, {\mathscr {V}}_{\varepsilon }, {\mathscr {W}}_{\varepsilon }\right) \) will follow from both the convergence of \(\left( u_{\varepsilon }, v_{\varepsilon }, w_{\varepsilon }\right) \) and of \(\left( {\overline{u}}_{\varepsilon }, {\overline{v}}_{\varepsilon }, {\overline{w}}_{\varepsilon }\right) \).

Let us first look at the convergence of \(\left( u_{\varepsilon }, v_{\varepsilon }, w_{\varepsilon }\right) \).

Let \( {\mathscr {M}}_{\varepsilon }= \big ({\mathscr {M}}_{\varepsilon }^S, {\mathscr {M}}_{\varepsilon }^I, {\mathscr {M}}_{\varepsilon }^R \big )^{\texttt{T}}\).

Recall that we denote by \("\Longrightarrow "\) weak convergence.

Proposition 3.1

For any \( \gamma > 3/2\), \( {\mathscr {M}}_{\varepsilon }\Longrightarrow {\mathscr {M}}:= \big ({\mathscr {M}}^S, {\mathscr {M}}^I,{\mathscr {M}}^R\big )^{\texttt{T}}\) in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big )\) as \(\varepsilon \rightarrow 0\), where \({\mathscr {M}}\) is the Gaussian martingale defined, for all \( \varphi \in \textrm{H}^{\gamma }\), by

$$\begin{aligned} \langle \, {\mathscr {M}}^S(t),\varphi \,\rangle= & {} -\int _0^t \int _{\mathbb {T}^1} \varphi (x) \sqrt{\dfrac{\beta (x){\textbf{s}}(r,x){\textbf{i}}(r,x)}{{\textbf{a}}(r,x)}}\; {\dot{W}}_1 (dr,dx) \\{} & {} - \sqrt{2\mu _S}\int _0^t\int _{\mathbb {T}^1} \varphi ^{\prime }(x)\sqrt{{\textbf{s}}(r,x)}\; {\dot{W}}_2(dr,dx), \\ \langle \, {\mathscr {M}}^I(t),\varphi \,\rangle= & {} \int _0^t\int _{\mathbb {T}^1}\varphi (x) \sqrt{\dfrac{\beta (x){\textbf{s}}(r,x){\textbf{i}}(r,x)}{{\textbf{a}}(r,x)}}\; {\dot{W}}_1 (dr,dx)\\{} & {} +\int _0^t\int _{\mathbb {T}^1}\varphi (x)\sqrt{\alpha (x){\textbf{i}}(r,x)}{\dot{W}}_3(dr,dx) \\{} & {} -\sqrt{2\mu _I}\int _0^t \int _{\mathbb {T}^1}\varphi ^{\prime }(x) \sqrt{{\textbf{i}}(r,x)}\; {\dot{W}}_4(dr,dx), \\ \langle \, {\mathscr {M}}^R(t),\varphi \,\rangle= & {} -\int _0^t\int _{\mathbb {T}^1}\varphi (x)\sqrt{\alpha (x){\textbf{i}}(r,x)}{\dot{W}}_3 (dr,dx)\\{} & {} -\sqrt{2\mu _R}\int _0^t\int _{\mathbb {T}^1}\varphi ^{\prime }(x)\sqrt{{\textbf{r}}(r,x)}\; {\dot{W}}_5(dr,dx), \end{aligned}$$

and \(\dot{W_1}\), \(\dot{W_2}\), \(\dot{W_3}\), \(\dot{W_4}\) and \(\dot{W_5}\) are standard space-time white noises which are mutually independent.

Proof

First, we are going to show that there exists a positive constant C, independent of \(\varepsilon \), such that

$$\begin{aligned} \underset{0<\varepsilon <1}{\sup } \mathbb {E}\left( \underset{0\le t\le T}{\sup }\big \Vert {\mathscr {M}}_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\right)\le & {} C. \end{aligned}$$
(25)

Recall that \( \displaystyle \big \Vert {\mathscr {M}}_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2:= \big \Vert {\mathscr {M}}_{\varepsilon }^S(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2+\big \Vert {\mathscr {M}}_{\varepsilon }^I(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2+\big \Vert {\mathscr {M}}_{\varepsilon }^R(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\)  .

Applying Doob’s inequality to the martingale \({\mathscr {M}}_{\varepsilon }^S\), we have

$$\begin{aligned}{} & {} \mathbb {E}\left( \underset{0\le t\le T}{\sup }\big \Vert {\mathscr {M}}_{\varepsilon }^S(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\right) \le 4 \mathbb {E}\bigg (\big \Vert {\mathscr {M}}_{\varepsilon }^S(T)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg ) \\{} & {} \quad =4\sum _{m\; \text {even}}\mathbb {E}\Big (\langle {\mathscr {M}}_{\varepsilon }^S(T) , {\textbf{f}}_{m}\; \rangle ^2 \Big )(1+\lambda _m)^{-\gamma }, \; \; \text {with}\; {\textbf{f}}_{m}\in \{\varphi _{m}, \psi _{m}\} \\{} & {} \quad = \dfrac{4}{\varepsilon }\int _0^T \sum _{m\; \text {even}}\sum _{i=1}^{\varepsilon ^{-1}} \dfrac{\beta (x_i)S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\bigg ( \int _{V_i}{\textbf{f}}_{m}(x)\,dx \bigg )^2 (1+\lambda _{m})^{-\gamma }dr \\{} & {} \qquad +\dfrac{4\mu _S}{\varepsilon } \int _0^T \sum _{m \; \text {even}} \sum _{i=1}^{\varepsilon ^{-1}}S_{\varepsilon }(r, x_i)\Bigg [\bigg (\int _{V_i} \nabla _{\varepsilon }^{+}{\textbf{f}}_{m}(x)\, dx \bigg )^2\\{} & {} \qquad +\bigg (\int _{V_i} \nabla _{\varepsilon }^{-}{\textbf{f}}_{m}(x)\, dx \bigg )^2\Bigg ](1+\lambda _{m})^{-\gamma } dr. \end{aligned}$$

Since \( \dfrac{S_{\varepsilon }(r, x_i)I_{\varepsilon }(r, x_i)}{A_{\varepsilon }(r, x_i)} \le M \) (indeed \(\dfrac{I_{\varepsilon }(r, x_i)}{A_{\varepsilon }(r, x_i)}\le 1\) and \(S_{\varepsilon }(r, x_i)\le M\), see (11) and the line which follows)   and   \(\big \vert \nabla _{\varepsilon }^{\pm }{\textbf{f}}_{m}(x) \big \vert ^2 \le 2\pi ^2 m^2, \) we obtain

$$\begin{aligned} \mathbb {E}\left( \underset{0\le t\le T}{\sup }\big \Vert {\mathscr {M}}_{\varepsilon }^S(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\right)\le & {} C({\overline{\beta }}, \mu _S, T)\bigg (\sum _{m\; \text {even}}\dfrac{1}{m^{2\gamma }} + \sum _{m\; \text {even}}\dfrac{1}{ m^{2(\gamma -1)}}\bigg ). \end{aligned}$$

Since \( \displaystyle \sum _{m\; \text {even}}\dfrac{1}{m^{2(\gamma -1)}} < \infty \) iff \(\gamma >3/2\), we then have

$$\begin{aligned} \underset{0<\varepsilon <1}{\sup } \mathbb {E}\left( \underset{0\le t\le T}{\sup }\big \Vert {\mathscr {M}}_{\varepsilon }^S(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\right)\le & {} C({\overline{\beta }}, \mu _S,T), \quad \text {for all }{ \gamma >3/2} . \end{aligned}$$
(26)

Similar inequalities hold for the martingales \({\mathscr {M}}_{\varepsilon }^I\) and \({\mathscr {M}}_{\varepsilon }^R\). Hence we obtain

$$\begin{aligned} \underset{0<\varepsilon <1}{\sup } \mathbb {E}\left( \underset{0\le t\le T}{\sup }\big \Vert {\mathscr {M}}_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\right)\le & {} C. \end{aligned}$$
(27)

Inequality (27) and standard tightness criteria for martingales (see e.g. the proof of Theorem 3.1) imply that the martingale \( {\mathscr {M}}_{\varepsilon }\) is tight in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big )\), for \(\gamma >3/2\).

In what follows \({\textbf {<}}{} {\textbf {<}}{\mathscr {M}}_\varepsilon ^{S,\gamma _0}{} {\textbf {>}}{} {\textbf {>}}_t\) denotes the operator–valued increasing process associated to the \(L^2(\mathbb {T}^1)\)–valued martingale \({\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t)\), whose trace is the increasing process associated to the real valued submartingale \(\Vert {\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t)\Vert _{L^2(\mathbb {T}^1)}^2\). Let \( \varphi \in \textrm{H}^{\gamma }\). We set \({\mathscr {M}}_{\varepsilon }^{S,\varphi }=\langle {\mathscr {M}}_{\varepsilon }^S , \varphi \rangle \). \({\mathscr {M}}_{\varepsilon }^{I,\varphi }\) and \({\mathscr {M}}_{\varepsilon }^{R,\varphi }\) are defined in the same way. \(\forall \, t\in [0,T]\), we have

$$\begin{aligned} {\textbf {<}}{} {\textbf {<}} {\mathscr {M}}_{\varepsilon }^{S,\varphi } {\textbf {>}}{} {\textbf {>}}_t= & {} \dfrac{1}{\varepsilon }\int _0^t \sum _{i=1}^{\varepsilon ^{-1}} \beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\bigg ( \int _{V_i}\varphi (x)\,dx \bigg )^2 dr \\{} & {} + \dfrac{\mu _S}{\varepsilon }\int _0^t \sum _{i=1}^{\varepsilon ^{-1}}S_{\varepsilon }(r, x_i)\Bigg [\Big ( \int _{V_i} \nabla _{\varepsilon }^{+}\varphi (x)\, dx \Big )^2+\Big ( \int _{V_i} \nabla _{\varepsilon }^{-}\varphi (x)\, dx \Big )^2\Bigg ] dr. \end{aligned}$$

We have

$$\begin{aligned}{} & {} \dfrac{1}{\varepsilon }\int _0^t \sum _{i=1}^{\varepsilon ^{-1}} \beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\bigg ( \int _{V_i}\varphi (x)\,dx \bigg )^2 dr \\{} & {} \quad = \dfrac{1}{\varepsilon }\int _0^t \sum _{i=1}^{\varepsilon ^{-1}} \beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\bigg ( \int _{V_i}\varphi (x)\,dx \bigg )\bigg [ \int _{V_i}\Big (\varphi (x)-\varphi (x_i)\Big )\,dx \bigg ] dr \\{} & {} \qquad + \int _0^t \int _{\mathbb {T}^1} \sum _{i=1}^{\varepsilon ^{-1}} \beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\varphi (x)\varphi (x_i)\mathbbm {1}_{V_{x_i}}(x) dxdr . \end{aligned}$$

On the one hand we have

$$\begin{aligned}{} & {} \Bigg \vert \dfrac{1}{\varepsilon } \sum _{i=1}^{\varepsilon ^{-1}} \beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\bigg ( \int _{V_i}\varphi (x)\,dx \bigg )\bigg [\int _{V_i}\Big (\varphi (x)-\varphi (x_i)\Big )\,dx \bigg ] \Bigg \vert \\{} & {} \quad \le C\varepsilon \big \Vert \varphi \big \Vert _{\textrm{H}^{\gamma }}\, \int _{\mathbb {T}^1} \dfrac{\beta _{\varepsilon }(x){\mathcal {S}}_{\varepsilon }(r,x){\mathcal {I}}_{\varepsilon }(r,x)\vert \varphi (x)\vert }{{\mathcal {A}}_{\varepsilon }(r,x)} dx \longrightarrow 0, \end{aligned}$$

because the quantity \( \displaystyle \int _{\mathbb {T}^1} \dfrac{\beta _{\varepsilon }(x){\mathcal {S}}_{\varepsilon }(r,x){\mathcal {I}}_{\varepsilon }(r,x)\vert \varphi (x)\vert }{{\mathcal {A}}_{\varepsilon }(r,x)} dx\) is bounded uniformly in \( \varepsilon \). Hence \(\displaystyle \dfrac{1}{\varepsilon }\int _0^t \sum _{i=1}^{\varepsilon ^{-1}} \beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\bigg ( \int _{V_i}\varphi (x)\,dx \bigg )\bigg [ \int _{V_i}\Big (\varphi (x)-\varphi (x_i)\Big )\,dx \bigg ] \longrightarrow 0 \), as \(\varepsilon \rightarrow 0\).
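The factor \(\varepsilon \) in the bound above comes from the regularity of the test function: since \(\gamma >3/2\), the Sobolev embedding \(\textrm{H}^{\gamma }(\mathbb {T}^1)\hookrightarrow C^1(\mathbb {T}^1)\) gives, for \(x\in V_i\),

```latex
\big\vert \varphi(x)-\varphi(x_i)\big\vert
\;\le\; \Vert \varphi^{\prime}\Vert_{\infty}\,\vert x-x_i\vert
\;\le\; C\,\Vert \varphi \Vert_{\mathrm{H}^{\gamma}}\,\varepsilon ,
```

so that \(\big \vert \int _{V_i}\big (\varphi (x)-\varphi (x_i)\big )\,dx\big \vert \le C\varepsilon ^2\Vert \varphi \Vert _{\textrm{H}^{\gamma }}\).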

On the other hand, the fact that \(\displaystyle \underset{0\le t\le T}{\sup }\big \Vert {\textbf{X}}_{\varepsilon }(t)-{\textbf{X}}(t)\big \Vert _{\infty }\longrightarrow 0\), as \(\varepsilon \rightarrow 0\), leads to

$$\begin{aligned}{} & {} \Bigg \vert \int _{\mathbb {T}^1} \sum _{i=1}^{\varepsilon ^{-1}} \beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\varphi (x)\varphi (x_i)\mathbbm {1}_{V_{x_i}}(x) dx\\{} & {} \quad -\int _{\mathbb {T}^1} \beta (x) \dfrac{{\textbf{s}}(r,x){\textbf{i}}(r,x)}{{\textbf{a}}(r,x)}\varphi ^2(x) dx\Bigg \vert \longrightarrow 0. \end{aligned}$$

This shows that

$$\begin{aligned}{} & {} \dfrac{1}{\varepsilon }\int _0^t \sum _{i=1}^{\varepsilon ^{-1}}\beta (x_i) \dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\bigg ( \int _{V_i}\varphi (x)\,dx \bigg )^2 dr\\{} & {} \quad \longrightarrow \int _0^t \int _{\mathbb {T}^1} \beta (x) \dfrac{{\textbf{s}}(r,x){\textbf{i}}(r,x)}{{\textbf{a}}(r,x)}\varphi ^2(x) dxdr, \end{aligned}$$

as \( \varepsilon \rightarrow 0. \) A similar computation shows that

$$\begin{aligned}{} & {} \dfrac{\mu _S}{\varepsilon }\int _0^t \sum _{i=1}^{\varepsilon ^{-1}}S_{\varepsilon }(r, x_i)\Bigg [\Big ( \int _{V_i} \nabla _{\varepsilon }^{+}\varphi (x)\, dx \Big )^2+\Big ( \int _{V_i} \nabla _{\varepsilon }^{-}\varphi (x)\, dx \Big )^2\Bigg ] dr\\{} & {} \quad \longrightarrow 2\,\mu _S \int _0^t \int _{\mathbb {T}^1} {\textbf{s}}(r, x)\big (\varphi ^{\prime }(x)\big )^2 dx dr, \end{aligned}$$

from which we deduce that

$$\begin{aligned}{} & {} {\textbf {<}}{} {\textbf {<}} {\mathscr {M}}_{\varepsilon }^{S,\varphi } {\textbf {>}}{} {\textbf {>}}_t \overset{\varepsilon \rightarrow 0}{\longrightarrow }\\{} & {} \quad \int _0^t \int _{\mathbb {T}^1} \beta (x) \dfrac{{\textbf{s}}(r,x){\textbf{i}}(r,x)}{{\textbf{a}}(r,x)}\varphi ^2(x) dxdr \\{} & {} \quad + 2\,\mu _S \int _0^t \int _{\mathbb {T}^1}{\textbf{s}}(r,x)\big (\varphi ^{\prime }(x)\big )^2dx dr. \end{aligned}$$

Hence, if \({\dot{W}}_1 \) and \({\dot{W}}_2 \) are mutually independent space-time white noises, then the limit of the martingale \({\mathscr {M}}_{\varepsilon }^{S,\varphi }(t) \), which is a centered Gaussian martingale, can be identified with

$$\begin{aligned}{} & {} -\int _0^t\int _{\mathbb {T}^1}\varphi (x)\sqrt{\dfrac{\beta (x){\textbf{s}}(r,x){\textbf{i}}(r,x)}{{\textbf{a}}(r,x)}}\; {\dot{W}}_1 (dr,dx)\\{} & {} \quad -\sqrt{2\mu _S}\int _0^t\int _{\mathbb {T}^1}\varphi ^{\prime }(x)\sqrt{{\textbf{s}}(r,x)}\; {\dot{W}}_2(dr,dx). \end{aligned}$$

In the same way

$$\begin{aligned} {\mathscr {M}}_{\varepsilon }^{I,\varphi }(t)&\Longrightarrow \int _0^t\int _{\mathbb {T}^1}\varphi (x) \sqrt{\dfrac{\beta (x){\textbf{s}}(r,x){\textbf{i}}(r,x)}{{\textbf{a}}(r,x)}}\; {\dot{W}}_1 (dr,dx)\\&\quad +\int _0^t\int _{\mathbb {T}^1}\varphi (x)\sqrt{\alpha (x){\textbf{i}}(r,x)}{\dot{W}}_3(dr,dx) \\&\quad -\sqrt{2\mu _I}\int _0^t \int _{\mathbb {T}^1}\varphi ^{\prime }(x) \sqrt{{\textbf{i}}(r,x)}\; {\dot{W}}_4(dr,dx) \end{aligned}$$

and

$$\begin{aligned} {\mathscr {M}}_{\varepsilon }^{R,\varphi }(t)&\Longrightarrow -\int _0^t\int _{\mathbb {T}^1}\varphi (x)\sqrt{\alpha (x){\textbf{i}}(r,x)}{\dot{W}}_3 (dr,dx)\\&\quad -\sqrt{2\mu _R}\int _0^t\int _{\mathbb {T}^1}\varphi ^{\prime }(x)\sqrt{{\textbf{r}}(r,x)}\; {\dot{W}}_5(dr,dx), \end{aligned}$$

where \(\dot{W}_3\), \(\dot{W}_4\) and \(\dot{W}_5\) are also mutually independent space-time white noises, independent of \(\dot{W}_1\) and \(\dot{W}_2\). \(\square \)

Let us set \(\Im _{\varepsilon }= \big (u_{\varepsilon }\, , \, v_{\varepsilon }\, , \, w_{\varepsilon } \big )^{\texttt{T}}.\)

We need to check the tightness of the sequence of processes \(\{\Im _{\varepsilon }(t)\, , \, t\in [0, T]\, , 0<\varepsilon < 1 \}\).

Theorem 3.1

For any \(\gamma >3/2 \), the process \(\{\Im _{\varepsilon }(t)\, , \, t\in [0, T]\, , 0<\varepsilon < 1 \}\) is tight in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big ).\)

Proof

We denote by \( {\mathcal {G}}^T_{\varepsilon }\) the collection of \( {\mathcal {F}}_t^{\varepsilon }\)-stopping times \({\overline{\tau }}\) such that \({\overline{\tau }}\le T.\) Following Aldous’ tightness criterion (see Joffe and Metivier [5]), in order to show that the process \(\{\Im _{\varepsilon }(t)\, , \, t\in [0, T]\, , 0<\varepsilon < 1 \}\) is tight in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big )\), it suffices to establish the following two conditions:

[T]:

for \(\dfrac{3}{2}<\gamma _0< \gamma \) and \(M> 0 \), there exists a constant C, independent of \(\varepsilon \) and t, such that \(\displaystyle \mathbb {P}\Big (\big \Vert \Im _{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}\ge M \Big )\le \dfrac{C}{M^2}, \) for all \( t\in [0,T]\),

[A]:

\(\displaystyle \underset{\theta \rightarrow 0}{\lim }\lim \limits _{\begin{array}{c} \varepsilon \rightarrow 0 \end{array}}\sup _{{\overline{\tau }}\in {\mathcal {G}}_{\varepsilon }^{T-\theta }}\mathbb {E}\bigg (\Big \Vert \Im _{\varepsilon }({\overline{\tau }}+\theta ) - \Im _{\varepsilon }({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg ) = 0 . \)

Let \(\dfrac{3}{2}\!< \! \gamma _0\!< \!\gamma \). Let us set \( \displaystyle u_{\varepsilon }^{\gamma _0}(t,x)=({\textbf{I}}-\Delta _{\varepsilon })^{-\gamma _0/2}u_{\varepsilon }(t,x)\). \( \forall \, t\in [0,T]\), we have

$$\begin{aligned} \big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2=\big \langle \, u_{\varepsilon }^{\gamma _0}(t), u_{\varepsilon }^{\gamma _0}(t) \,\big \rangle . \end{aligned}$$

If we define \({\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t):=({\textbf{I}}-\Delta _{\varepsilon })^{-\gamma _0/2}{\mathscr {M}}_\varepsilon ^S(t)\), then, since \(\gamma _0>3/2\), it follows from (26) that \({\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t)\) is bounded, uniformly in \(\varepsilon \), as an \(L^2(\mathbb {T}^1)\)-valued martingale. Applying the Itô formula to \(\vert u_{\varepsilon }^{\gamma _0}(t,x) \vert ^2\) and integrating over \(\mathbb {T}^1\) leads to

$$\begin{aligned} \big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2= & {} -2\mu _S\int _0^t \big \langle \, \nabla _{\!\!\varepsilon }^{+} u_{\varepsilon }^{\gamma _0}(r), \nabla _{\!\!\varepsilon }^{+} u_{\varepsilon }^{\gamma _0}(r)\, \big \rangle dr + 2\int _0^t\big \langle \, u_{\varepsilon }^{\gamma _0}(r),d{\mathscr {M}}_\varepsilon ^{S,\gamma _0}(r) \,\big \rangle \\{} & {} + \int _{\mathbb {T}^1}{} {\textbf {<}}{} {\textbf {<}} {\mathscr {M}}_\varepsilon ^{S,\gamma _0}(.,x) {\textbf {>}}{} {\textbf {>}}_t dx. \end{aligned}$$

Letting \(t=T\) and taking the expectation, we deduce that

$$\begin{aligned} \mathbb {E}(\big \Vert u_{\varepsilon }(T)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2)+2\mu _S\mathbb {E}\int _0^T\big \Vert \nabla _{\!\!\varepsilon }^+u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2dt =\mathbb {E}\left( \Vert {\mathscr {M}}_\varepsilon ^{S,\gamma _0}(T)\Vert ^2_{L^2}\right) \,. \end{aligned}$$

Next we want to take the supremum over [0, T] in the previous identity. To that end, we use the Burkholder-Davis-Gundy inequality, which implies that

$$\begin{aligned} \mathbb {E}\left[ \sup _{0\le t\le T}\left| \int _0^t\big \langle \, u_{\varepsilon }^{\gamma _0}(r),d{\mathscr {M}}_\varepsilon ^{S,\gamma _0}(r) \,\big \rangle \right| \right]&\le 3\mathbb {E}\sqrt{{\textbf {<}}{} {\textbf {<}}\int _0^\cdot \big \langle \,u_{\varepsilon }^{\gamma _0}(r),d{\mathscr {M}}_\varepsilon ^{S,\gamma _0}(r) \,\big \rangle {\textbf {>}}{} {\textbf {>}}_T}\\&\le 3\mathbb {E}\left( \sup _{0\le t\le T}\Vert u_{\varepsilon }^{\gamma _0}(t)\Vert _{L^2}\sqrt{Tr{\textbf {<}}{} {\textbf {<}}{\mathscr {M}}_\varepsilon ^{S,\gamma _0}{} {\textbf {>}}{} {\textbf {>}}_T}\right) \\&\le \frac{1}{2}\mathbb {E}\left( \sup _{0\le t\le T}\Vert u_{\varepsilon }^{\gamma _0}(t)\Vert _{L^2}^2\right) +\frac{9}{2}\mathbb {E}(\Vert {\mathscr {M}}_\varepsilon ^{S,\gamma _0}(T)\Vert _{L^2}^2). \end{aligned}$$
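The last bound is the elementary inequality \(ab\le \tfrac{1}{2}(a^2+b^2)\), combined with the identity \(\mathbb {E}\big (\mathrm{Tr}\,\langle \!\langle {\mathscr {M}}_\varepsilon ^{S,\gamma _0}\rangle \!\rangle _T\big )=\mathbb {E}\big (\Vert {\mathscr {M}}_\varepsilon ^{S,\gamma _0}(T)\Vert _{L^2}^2\big )\):

```latex
3\,a\sqrt{Y}\;=\;a\cdot 3\sqrt{Y}\;\le\;\frac{a^{2}}{2}+\frac{9\,Y}{2},
\qquad
a=\sup_{0\le t\le T}\big\Vert u_{\varepsilon}^{\gamma_0}(t)\big\Vert_{L^{2}},
\quad
Y=\mathrm{Tr}\,\langle\!\langle \mathscr{M}_{\varepsilon}^{S,\gamma_0}\rangle\!\rangle_{T}.
```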

We then obtain, thanks to (26),

$$\begin{aligned} \mathbb {E}\left( \sup _{0\le t\le T}\big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2\right)&\le 11\ \mathbb {E}\left( \sup _{0\le t\le T}\big \Vert {\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t)\big \Vert _{L^2}^2\right) \\&\le 44\ C({\overline{\beta }}, \mu _S,T)\,. \end{aligned}$$

We also obtain similar inequalities for \(v_\varepsilon \) and \(w_\varepsilon \). Hence there exists a constant C such that for all \(\varepsilon >0\),

$$\begin{aligned} \mathbb {E}&\left( \sup _{0\le t\le T}\big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2 +\sup _{0\le t\le T}\big \Vert v_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2+\sup _{0\le t\le T}\big \Vert w_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2\right) \nonumber \\&\quad + 2\mathbb {E}\int _{0}^{T}\left[ \mu _S\big \Vert \nabla _{\!\!\varepsilon }^+u_{\varepsilon }(r)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2 + \mu _I \big \Vert \nabla _{\!\!\varepsilon }^+v_{\varepsilon }(r)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2+\mu _R\big \Vert \nabla _{\!\!\varepsilon }^+w_{\varepsilon }(r)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2\right] dr\le C\,. \end{aligned}$$
(28)

Then [T] follows from Markov’s inequality.

Let \(\theta >0\) and \( {\overline{\tau }} \in {\mathcal {G}}^{T-\theta }_{\varepsilon } \). We have

$$\begin{aligned} u_{\varepsilon }({\overline{\tau }}+\theta )-u_{\varepsilon }({\overline{\tau }})= & {} \big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ]u_{\varepsilon }({\overline{\tau }})+ \int _{{\overline{\tau }}}^{{\overline{\tau }}+\theta }\textsf {T}_{\!\varepsilon ,S}({\overline{\tau }}+\theta -r)d{\mathscr {M}}_{\varepsilon }^S(r). \end{aligned}$$

So,

$$\begin{aligned} \mathbb {E}\Bigg (\Big \Vert u_{\varepsilon }({\overline{\tau }} + \theta )- u_{\varepsilon }({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma }}}^2\Bigg )\le & {} 2\mathbb {E}\Bigg (\Big \Vert \big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ]u_{\varepsilon }({\overline{\tau }}) \Big \Vert _{_{\textrm{H}^{-\gamma }}}^2 \Bigg )\\+ & {} 2\mathbb {E}\Bigg ( \Big \Vert \int _{{\overline{\tau }}}^{{\overline{\tau }}+\theta } \textsf {T}_{\!\varepsilon ,S}({\overline{\tau }}+\theta -r) d{\mathscr {M}}_{\varepsilon }^S(r) \Big \Vert _{_{\textrm{H}^{-\gamma }}}^2\Bigg ). \end{aligned}$$

Let us deal with each term separately. First using the inequality (9), there is a constant \( C(\gamma )\) such that

$$\begin{aligned} \mathbb {E}\bigg (\Big \Vert \big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg )\le & {} C(\gamma ) \mathbb {E}\bigg (\Big \Vert \big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ]u_{\varepsilon }({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2\bigg ). \end{aligned}$$

Let \( 3/2<\gamma ^{\prime }<\gamma \), and let c be a positive constant. We have

$$\begin{aligned} \Big \Vert \big [\textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2= & {} \sum _{\lambda _m^\varepsilon \ge c} \big \langle \,\big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ]u_{\varepsilon }({\overline{\tau }}), {{\textbf {f}}}_{m}^{\varepsilon } \,\big \rangle ^2\big ( 1+\lambda _{m}^{\varepsilon }\big )^{-\gamma } \\{} & {} + \sum _{\lambda _m^\varepsilon < c} \big \langle \,\big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ]u_{\varepsilon }({\overline{\tau }}), {{\textbf {f}}}_{m}^{\varepsilon } \,\big \rangle ^2\big ( 1+\lambda _{m}^{\varepsilon }\big )^{-\gamma } , \end{aligned}$$

and

$$\begin{aligned}{} & {} \sum _{\lambda _m^\varepsilon \ge c} \big \langle \,\big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ]u_{\varepsilon }({\overline{\tau }}), {{\textbf {f}}}_{m}^{\varepsilon } \,\big \rangle ^2\big ( 1+\lambda _{m}^{\varepsilon }\big )^{-\gamma } \\{} & {} \quad \le ( 1+c)^{\gamma ^{\prime }-\gamma }\sum _{\lambda _m^\varepsilon \ge c} \big \langle \,\big [\textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }}), {{\textbf {f}}}_{m}^{\varepsilon } \,\big \rangle ^2\big (1+\lambda _{m}^{\varepsilon }\big )^{-\gamma ^{\prime }} \\{} & {} \quad \le (1+c)^{\gamma ^{\prime }-\gamma } \Big \Vert \big [\textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }})\Big \Vert _{\textrm{H}^{-\gamma ^{\prime }}}^2. \end{aligned}$$

Then

$$\begin{aligned} \mathbb {E}\bigg (\Big \Vert \big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ]u_{\varepsilon }({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg )\le & {} C(\gamma )(1+c)^{\gamma ^{\prime }-\gamma } \mathbb {E}\left( \Big \Vert \big [\textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }})\Big \Vert _{\textrm{H}^{-\gamma ^{\prime }}}^2\right) \\{} & {} \quad + C(\gamma ) \sum _{\lambda _m^{\varepsilon }<c} \big (e^{-\lambda _{m}^{\varepsilon }\theta }-1\big )^2\mathbb {E}\Big ( \langle \; u_{\varepsilon }({\overline{\tau }}), {\textbf{f}}_m^{\varepsilon }\; \rangle ^2 \Big )\big ( 1+\lambda _{m}^{\varepsilon }\big )^{-\gamma }. \end{aligned}$$

On the one hand, since \(\mathbb {E}\left( \Big \Vert \big [\textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }})\Big \Vert _{\textrm{H}^{-\gamma ^{\prime }}}^2\right) \le C\), for any \(\eta >0\) we can choose c large enough such that \(\displaystyle C(\gamma )(1+c)^{\gamma ^{\prime }-\gamma } \mathbb {E}\left( \Big \Vert \big [\textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }})\Big \Vert _{\textrm{H}^{-\gamma ^{\prime }}}^2\right) \le \eta /2.\) On the other hand, we have

$$\begin{aligned}{} & {} \sum _{\lambda _m^{\varepsilon }<c} \big (e^{-\lambda _{m}^{\varepsilon }\theta }-1\big )^2\mathbb {E}\Big ( \langle \;u_{\varepsilon }({\overline{\tau }}), {\textbf{f}}_m^{\varepsilon }\; \rangle ^2 \Big )\big ( 1+\lambda _{m}^{\varepsilon }\big )^{-\gamma } \\{} & {} \quad \le \underset{\lambda _m^{\varepsilon }<c}{\sup }\big (1-e^{-\lambda _m^{\varepsilon }\theta }\big )^2\sum _{\lambda _m^{\varepsilon }<c} \mathbb {E}\Big (\langle \;u_{\varepsilon }({\overline{\tau }}), {\textbf{f}}_m^{\varepsilon }\; \rangle ^2 \Big )\big (1+\lambda _{m}^{\varepsilon }\big )^{-\gamma } \\{} & {} \quad \le \underset{\lambda _m^{\varepsilon }<c}{\sup }\big (1-e^{-\lambda _m^{\varepsilon }\theta }\big )^2\mathbb {E}\bigg (\big \Vert u_{\varepsilon }({\overline{\tau }})\big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2\bigg ) \\{} & {} \quad \le C(\gamma )\underset{\lambda _m^{\varepsilon }<c}{\sup }\big (1-e^{-\lambda _m^{\varepsilon }\theta }\big )^2 \mathbb {E}\bigg (\big \Vert u_{\varepsilon }({\overline{\tau }})\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg ). \end{aligned}$$

Since

$$\begin{aligned} \mathbb {E}\bigg (\big \Vert u_{\varepsilon }({\overline{\tau }})\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg )\le & {} \mathbb {E}\bigg (\sup _{0\le t\le T}\big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg ) \nonumber \\\le & {} C , \end{aligned}$$

then, for the previous choice of c and any \(\eta >0\), since \(1-e^{-\lambda \theta }\le \lambda \theta \) implies \(\underset{\lambda _m^{\varepsilon }<c}{\sup }\big (1-e^{-\lambda _m^{\varepsilon }\theta }\big )^2\le c^2\theta ^2\), we can choose \(\theta \) small enough such that \(C(\gamma )\underset{\lambda _m^{\varepsilon }<c}{\sup }\big (1-e^{-\lambda _m^{\varepsilon }\theta }\big )^2 \mathbb {E}\bigg (\big \Vert u_{\varepsilon }({\overline{\tau }})\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg )\le \eta /2\). Hence

$$\begin{aligned} \underset{\theta \rightarrow 0}{\lim }\lim \limits _{\begin{array}{c} \varepsilon \rightarrow 0 \end{array}}\sup _{{\overline{\tau }}\in {\mathcal {G}}_{\varepsilon }^{T-\theta }}\mathbb {E}\bigg (\Big \Vert \big [ \textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ]u_{\varepsilon }({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg ) = 0\,. \end{aligned}$$
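The convergence in \(\theta \) above rests on the elementary bound \(\sup _{\lambda <c}(1-e^{-\lambda \theta })^2\le (c\theta )^2\). The following sketch (illustrative values of c and \(\theta \) only) checks this numerically.

```python
import math

# Illustrative check (not part of the proof) of the bound
#   sup_{0 < lam <= c} (1 - exp(-lam * theta))^2 <= (c * theta)^2,
# which tends to 0 as theta -> 0; the values of c and theta are arbitrary.
def sup_term(c, theta, n=1000):
    """Approximate the sup over lam in (0, c] on a uniform grid of n points."""
    return max((1.0 - math.exp(-(c * k / n) * theta)) ** 2 for k in range(1, n + 1))

thetas = (0.1, 0.01, 0.001)
vals = [sup_term(50.0, t) for t in thetas]
```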

Secondly, using the equivalence of the norms \(\Vert \cdot \Vert _{_{\textrm{H}^{-\gamma }}}\) and \(\Vert \cdot \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}\), and the fact that \(\textsf {T}_{\!\varepsilon ,S}\) is a contraction semigroup on \( \texttt{H}_{\varepsilon }\), we have

$$\begin{aligned}{} & {} \mathbb {E}\Bigg (\Big \Vert \int _{{\overline{\tau }}}^{{\overline{\tau }}+\theta } \textsf {T}_{\!\varepsilon ,S}({\overline{\tau }}+\theta -r) d{\mathscr {M}}_{\varepsilon }^S(r) \Big \Vert _{_{\textrm{H}^{-\gamma }}}^2 \Bigg )\\{} & {} \quad = \mathbb {E}\Bigg ( \Big \Vert \int _0^{\theta } \textsf {T}_{\!\varepsilon ,S}(\theta -r) d{\mathscr {M}}_{\varepsilon }^S(r+{\overline{\tau }}) \Big \Vert _{_{\textrm{H}^{-\gamma }}}^2\Bigg ) \\{} & {} \quad \le C(\gamma )\mathbb {E}\Bigg ( \Big \Vert {\mathscr {M}}_{\varepsilon }^S({\overline{\tau }} + \theta )- {\mathscr {M}}_{\varepsilon }^S({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2\Bigg )\\{} & {} \quad \le 2 C(\gamma ) \mathbb {E}\Bigg (\Big \Vert \int _{{\overline{\tau }}}^{{\overline{\tau }}+\theta } \sum _{i=1}^{\varepsilon ^{-1}}\dfrac{\sqrt{\beta (x_i)}}{\varepsilon }\sqrt{\dfrac{ S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}}\mathbbm {1}_{V_i}(.)dB_{x_i}(r)\Big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2 \Bigg ) \\{} & {} \qquad + 2C(\gamma )\mathbb {E}\Bigg ( \Big \Vert \int _{{\overline{\tau }}}^{{\overline{\tau }}+\theta } \sum \limits _{\begin{array}{c} i\, , \, j \\ x_i \sim x_j \end{array}} \dfrac{\sqrt{\mu _S S_{\varepsilon }(r,x_i)}}{\varepsilon }\Big ( \mathbbm {1}_{V_j}(.)- \mathbbm {1}_{V_i}(.)\Big ) dB_{x_i x_j}^S(r) \Big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2 \Bigg ) \\{} & {} \quad \le \dfrac{2C(\gamma ){\overline{\beta }}}{\varepsilon }\mathbb {E}\Bigg (\int _{{\overline{\tau }}}^{{\overline{\tau }}+\theta } \sum _m\sum _{i=1}^{\varepsilon ^{-1}} \dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\Big ( \int _{V_i}{\textbf{f}}_{m}^{\varepsilon }(x)\,dx \Big )^2 (1+\lambda _{m}^{\varepsilon })^{-\gamma } dr\Bigg ) \\{} & {} \qquad +\dfrac{2C(\gamma )\mu _S}{\varepsilon }\mathbb {E}\Bigg (\int _{{\overline{\tau }}}^{{\overline{\tau }}+\theta } \sum _m \sum _{i=1}^{\varepsilon ^{-1}} S_{\varepsilon }(r, x_i) \Big ( \int _{V_i} \nabla _{\varepsilon }^{\pm } {\textbf{f}}_{m}^{\varepsilon }(x)\, dx \Big )^2(1+\lambda _{m}^{\varepsilon })^{-\gamma } dr\Bigg ) \\{} & {} \quad \le C({\overline{\beta }} , \mu _S)\, \theta \longrightarrow 0, \qquad \text {as} \; \theta \rightarrow 0 . \end{aligned}$$

Hence condition [A] holds.

In the same way, we prove similar estimates for \(v_{\varepsilon }\) and \(w_{\varepsilon }\). Hence the process \( \big \{ \Im _\varepsilon (t),\; t\in [0,T], 0<\varepsilon <1 \big \}\) is tight in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big )\), for any \(\gamma >3/2\). \(\square \)

Lemma 3.2

For \(3/2<\gamma <2\), the process \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) converges in law in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T; (\textrm{H}^{-1})^3\big )\).

Proof

On the one hand, by Theorem 3.1, the process \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) is tight in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big ) \); hence, along a subsequence, it converges in law in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big ) \). On the other hand, the sequence \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) is bounded in \( L^2\big (0,T\; ; \; (\textrm{H}^{1-\gamma })^3\big )\). Indeed, for all \(\varepsilon \), we have

$$\begin{aligned}{} & {} \mathbb {E}\bigg (\int _0^T\big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{1-\gamma }}}^2dt\bigg )\\{} & {} \quad \le C(\gamma )\mathbb {E}\bigg (\int _0^T\big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{1-\gamma ,\varepsilon }}}^2dt\bigg )\; \; \quad (\text {by using the inequality (9)}) \\{} & {} \quad =C(\gamma ) \sum _m \mathbb {E}\Big (\int _0^T\langle \,u_{\varepsilon }(t) , {{\textbf {f}}}_{m}^{\varepsilon }\,\rangle ^2 dt\Big ) (1+\lambda _{m,S}^{\varepsilon })^{1-\gamma } \\{} & {} \quad = C(\gamma ) \sum _m \mathbb {E}\Big (\int _0^T\langle \,u_{\varepsilon }(t) , {{\textbf {f}}}_{m}^{\varepsilon }\,\rangle ^2 dt\Big ) (1+\lambda _{m,S}^{\varepsilon })^{-\gamma } \\{} & {} \qquad + C(\gamma ) \sum _m \mathbb {E}\Big (\int _0^T\langle \,u_{\varepsilon }(t) , {{\textbf {f}}}_{m}^{\varepsilon }\,\rangle ^2 dt \Big )\lambda _{m,S}^{\varepsilon }(1+\lambda _{m,S}^{\varepsilon })^{-\gamma } \\{} & {} \quad = C(\gamma )\left\{ \mathbb {E}\Big (\int _0^T \big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2 dt \Big )+ \mathbb {E}\Big (\int _0^T \big \Vert \nabla _{\!\!\varepsilon }^+u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2 dt \Big )\right\} \\{} & {} \quad \le C(\gamma )\left\{ \mathbb {E}\Big (\int _0^T \big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2 dt \Big )+ \mathbb {E}\Big (\int _0^T \big \Vert \nabla _{\!\!\varepsilon }^+u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2 dt \Big )\right\} , \end{aligned}$$

where the third equality follows from the identity

$$\begin{aligned} \big \Vert \nabla _{\!\!\varepsilon }^+u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2= & {} \sum _m \langle \,u_{\varepsilon }(t) , {{\textbf {f}}}_{m}^{\varepsilon }\,\rangle ^2 \lambda _{m,S}^{\varepsilon }(1+\lambda _{m,S}^{\varepsilon })^{-\gamma } \end{aligned}$$

(see Lemma A.2(i) in the Appendix below).

The inequality (28) ensures that \( \displaystyle \mathbb {E}\int _0^T \Big [\big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2 + \big \Vert \nabla _{\!\!\varepsilon }^+u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2 \Big ]dt\) is bounded by a constant independent of \(\varepsilon \). It then follows that

$$\begin{aligned} \underset{0<\varepsilon <1}{\sup }\mathbb {E}\Big (\int _0^T\big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{1-\gamma }}}^2 dt\Big )\le C(\gamma ). \end{aligned}$$

We have similar estimates for \( v_{\varepsilon }\) and \(w_{\varepsilon }\). Thus

$$\begin{aligned} \underset{0<\varepsilon <1}{\sup }\mathbb {E}\bigg (\int _0^T\big \Vert \Im _{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{1-\gamma }}}^2 dt\bigg )\le & {} C. \end{aligned}$$

From the sequence \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) we can thus extract a subsequence which converges in law in \( L^2\big (0,T\; ; \;(\textrm{H}^{1-\gamma })^3\big )\) endowed with the weak topology. Furthermore, since the embedding of \(\textrm{H}^{1-\gamma }\) into \(\textrm{H}^{-1}\) is compact (recall that \(\gamma <2\)) and we have the convergence in \(C\left( [0, T]\, ; (\textrm{H}^{-\gamma })^3\right) \), the extracted subsequence in fact converges in law in \( L^2\big (0,T\; ; \; (\textrm{H}^{-1})^3\big )\). Hence there exists a subsequence which converges in law in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T\; ; \; (\textrm{H}^{-1})^3\big )\).

We note that the limit \( \displaystyle \Im :=(u, v, w)^{\texttt{T}}\) of any convergent subsequence satisfies the following system of stochastic PDEs

$$\begin{aligned} \left\{ \begin{aligned} d u(t)&= \mu _S\,\Delta u(t) dt+d{\mathscr {M}}^S(t),\\ dv(t)&=\mu _I\,\Delta v(t) dt+d{\mathscr {M}}^I(t), \\ dw(t)&=\mu _R\,\Delta w(t)dt+d{\mathscr {M}}^R(t), \end{aligned} \right. \end{aligned}$$
(29)

and the solution of that system is unique. Hence the whole sequence \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) converges in law in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T\; ; \; (\textrm{H}^{-1})^3\big )\). \(\square \)
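For intuition only, one can simulate a lattice analogue of a system like (29). The sketch below is a minimal explicit Euler discretization of a single linear equation \(du=\mu \,\Delta u\,dt+\text {noise}\) on the torus, with the limiting martingale noise replaced by additive space-time white noise for simplicity; the grid size, time step and coefficients are arbitrary choices, not values from the paper.

```python
import numpy as np

# Minimal explicit Euler sketch (illustration only) of one linear equation of
# the type appearing in (29) on the torus [0,1): du = mu * Laplacian(u) dt + noise.
# The martingale term is replaced by additive space-time white noise, and every
# numerical parameter below is an arbitrary choice.
def simulate(n_sites=64, n_steps=200, mu=0.1, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    eps = 1.0 / n_sites                 # lattice spacing epsilon
    dt = 0.4 * eps**2 / mu              # explicit-Euler stable time step
    u = np.zeros(n_sites)
    for _ in range(n_steps):
        # discrete Laplacian with periodic (torus) boundary conditions
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / eps**2
        # space-time white noise on the lattice: variance dt/eps per site
        u = u + mu * lap * dt + sigma * np.sqrt(dt / eps) * rng.standard_normal(n_sites)
    return u

u = simulate()
```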

Lemma 3.3

As \(\varepsilon \rightarrow 0\), \(f_\varepsilon u_\varepsilon \Longrightarrow fu\), and \(g_\varepsilon v_\varepsilon \Longrightarrow gv\) in \(L^2\Big (0,T ; \textrm{H}^{-1}\Big )\).

Proof

The convergence \(f_\varepsilon u_\varepsilon \Longrightarrow fu\) follows from the fact that \(u_\varepsilon \Longrightarrow u\) in \(L^2\big (0,T\; ; \; \textrm{H}^{-1}\big )\) and \(f_\varepsilon \longrightarrow f\) in \(C\big ([0,T]\; ; \; \textrm{H}^{1}\big )\). The proof of the convergence \(g_\varepsilon v_\varepsilon \Longrightarrow gv\) is similar. \(\square \)

We are now interested in the convergence of the process \({\overline{\Im }}_\varepsilon := \left( {\overline{u}}_\varepsilon , {\overline{v}}_\varepsilon , {\overline{w}}_\varepsilon \right) \).

Lemma 3.4

For any \(T>0\), there exists a positive constant C such that

$$\begin{aligned}{} & {} \sup _{0\le t\le T}\Big (\big \Vert {\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2+\big \Vert {\overline{v}}_\varepsilon (t)\big \Vert _{L^2}^2 +\big \Vert {\overline{w}}_\varepsilon (t)\big \Vert _{L^2}^2\Big )\nonumber \\{} & {} \quad +C \int _{0}^{T}\Big (\big \Vert \nabla _{\varepsilon }^{+}{\overline{u}}_{\varepsilon }(s)\big \Vert _{L^2}^2 +\big \Vert \nabla _{\varepsilon }^{+}{\overline{v}}_{\varepsilon }(s)\big \Vert _{L^2}^2+\big \Vert \nabla _{\varepsilon }^{+}{\overline{w}}_{\varepsilon }(s)\big \Vert _{L^2}^2\Big ) ds \le C\eta _{_T}e^{CT} ,\nonumber \\ \end{aligned}$$
(30)

where \( \displaystyle \eta _{_T} :=\int _{0}^{T}\Big (\big \Vert u_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 + \big \Vert v_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2\Big ) ds.\)

Proof

For all \(t\in [0 , T],\) we have

$$\begin{aligned} \int _{0}^{t} \langle \dfrac{d{\overline{u}}_\varepsilon }{ds}(s) \; , \; {\overline{u}}_\varepsilon (s) \rangle ds= & {} \mu _S \int _{0}^{t} \langle \Delta _{\varepsilon } {\overline{u}}_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds- \int _{0}^{t}\langle f_{\varepsilon }(s){\overline{u}}_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds \\{} & {} \quad -\,\int _{0}^{t}\langle g_{\varepsilon }(s){\overline{v}}_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds-\int _{0}^{t}\langle f_{\varepsilon }(s)u_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds \\{} & {} \quad - \int _{0}^{t}\langle g_{\varepsilon }(s)v_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds . \end{aligned}$$

Then

$$\begin{aligned}{} & {} \big \Vert {\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2 +2\mu _S \int _{0}^{t} \big \Vert \nabla _{\varepsilon }^{+}{\overline{u}}_{\varepsilon }(s)\big \Vert _{L^2}^2 ds \\{} & {} \quad \,=\, - 2\int _{0}^{t}\langle f_{\varepsilon }(s){\overline{u}}_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds- 2\int _{0}^{t}\langle g_{\varepsilon }(s){\overline{v}}_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds \\{} & {} \qquad -\, 2\int _{0}^{t}\langle f_{\varepsilon }(s)u_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds- 2\int _{0}^{t}\langle g_{\varepsilon }(s)v_{\varepsilon }(s)\; , \; {\overline{u}}_\varepsilon (s) \rangle ds. \end{aligned}$$

Since \(\displaystyle f_\varepsilon (t) u_\varepsilon (t) \in \textrm{H}^{-1}\) and \(\displaystyle g_\varepsilon (t) v_\varepsilon (t) \in \textrm{H}^{-1}\), we obtain

$$\begin{aligned}{} & {} \big \Vert {\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2 +2\mu _S \int _{0}^{t} \big \Vert \nabla _{\varepsilon }^{+}{\overline{u}}_{\varepsilon }(s)\big \Vert _{L^2}^2 ds \\{} & {} \quad \le 2\sup _{0\le s\le T} \Vert f_\varepsilon (s)\Vert _{\infty } \int _{0}^{t}\big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2 ds \\{} & {} \qquad +\int _{0}^{t}\Big (\sup _{0\le s\le T} \Vert g_\varepsilon (s)\Vert _{\infty }^2\big \Vert {\overline{v}}_\varepsilon (s)\big \Vert _{L^2}^2+ \big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2 \Big ) ds \\{} & {} \qquad + 2\sup _{0\le s\le T}\big \Vert f_\varepsilon (s)\big \Vert _{\textrm{H}^{1,\varepsilon }}\int _{0}^{t}\left[ \big \Vert u_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}\big (\big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}+ \big \Vert \nabla _{\!\!\varepsilon }^+{\overline{u}}_\varepsilon (s)\big \Vert _{L^2} \big ) \right] ds \\{} & {} \qquad +2\sup _{0\le s\le T}\big \Vert g_\varepsilon (s)\big \Vert _{\textrm{H}^{1,\varepsilon }}\int _{0}^{t}\left[ \big \Vert v_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}\big (\big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}+ \big \Vert \nabla _{\!\!\varepsilon }^+{\overline{u}}_\varepsilon (s)\big \Vert _{L^2} \big ) \right] ds. \end{aligned}$$

Let \(\delta \) be some constant such that \(0<\delta <\dfrac{\mu _S}{C}.\) We have

$$\begin{aligned}{} & {} \big \Vert {\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2 +2\mu _S \int _{0}^{t} \big \Vert \nabla _{\varepsilon }^{+}{\overline{u}}_{\varepsilon }(s)\big \Vert _{L^2}^2 ds \\{} & {} \quad \le C\int _{0}^{t}\big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2 ds +\int _{0}^{t}\Big (C\big \Vert {\overline{v}}_\varepsilon (s)\big \Vert _{L^2}^2+ \big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2 \Big ) ds \\{} & {} \qquad + C\int _{0}^{t}\left[ 2\delta \big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2+2\delta \big \Vert \nabla _{\!\!\varepsilon }^+{\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2+\dfrac{2}{\delta } \big \Vert u_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 +\dfrac{2}{\delta } \big \Vert v_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 \right] ds. \end{aligned}$$
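The cross terms involving \(\Vert u_\varepsilon \Vert _{\textrm{H}^{-1}}\) and \(\Vert v_\varepsilon \Vert _{\textrm{H}^{-1}}\) above are split using the weighted Young inequality

```latex
2ab \;\le\; \delta\,a^{2}+\frac{1}{\delta}\,b^{2},
\qquad a,b\ge 0,\ \delta>0,
```

applied with \(a=\Vert {\overline{u}}_\varepsilon (s)\Vert _{L^2}\) or \(a=\Vert \nabla _{\!\!\varepsilon }^+{\overline{u}}_\varepsilon (s)\Vert _{L^2}\), and \(b=\Vert u_\varepsilon (s)\Vert _{\textrm{H}^{-1}}\) (resp. \(b=\Vert v_\varepsilon (s)\Vert _{\textrm{H}^{-1}}\)).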

Then

$$\begin{aligned}{} & {} \big \Vert {\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2 +2(\mu _S-C\delta ) \int _{0}^{t} \big \Vert \nabla _{\varepsilon }^{+}{\overline{u}}_{\varepsilon }(s)\big \Vert _{L^2}^2 ds \nonumber \\{} & {} \quad \le (C+2C\delta )\int _{0}^{t}\big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2 ds +C\int _{0}^{t}\big \Vert {\overline{v}}_\varepsilon (s)\big \Vert _{L^2}^2\,ds\nonumber \\{} & {} \qquad +\dfrac{2C}{\delta }\int _{0}^{t}\Big ( \big \Vert u_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 + \big \Vert v_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 \Big ) ds. \end{aligned}$$
(31)

In the same way, we prove that

$$\begin{aligned}{} & {} \big \Vert {\overline{v}}_\varepsilon (t)\big \Vert _{L^2}^2 +2(\mu _I-C\delta ) \int _{0}^{t} \big \Vert \nabla _{\varepsilon }^{+}{\overline{v}}_{\varepsilon }(s)\big \Vert _{L^2}^2 ds \nonumber \\{} & {} \quad \le (C+2C\delta )\int _{0}^{t}\big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2 ds +C\int _{0}^{t}\big \Vert {\overline{v}}_\varepsilon (s)\big \Vert _{L^2}^2\,ds\nonumber \\{} & {} \qquad +\dfrac{2C}{\delta }\int _{0}^{t}\Big ( \big \Vert u_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 + \big \Vert v_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 \Big ) ds, \end{aligned}$$
(32)

and

$$\begin{aligned}{} & {} \big \Vert {\overline{w}}_\varepsilon (t)\big \Vert _{L^2}^2 +2\mu _R\int _{0}^{t} \big \Vert \nabla _{\varepsilon }^{+}{\overline{w}}_{\varepsilon }(s)\big \Vert _{L^2}^2 ds\nonumber \\{} & {} \quad \le \dfrac{C}{\delta }\int _{0}^{t}\big \Vert v_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 ds +C\int _{0}^{t}\big \Vert {\overline{w}}_\varepsilon (s)\big \Vert _{L^2}^2\,ds +C\int _{0}^{t}\big \Vert {\overline{v}}_\varepsilon (s)\big \Vert _{L^2}^2 ds. \end{aligned}$$
(33)

Adding the inequalities (31), (32) and (33), we obtain

$$\begin{aligned}{} & {} \big \Vert {\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2+\big \Vert {\overline{v}}_\varepsilon (t)\big \Vert _{L^2}^2 +\big \Vert {\overline{w}}_\varepsilon (t)\big \Vert _{L^2}^2+C \int _{0}^{t}\Big (\big \Vert \nabla _{\varepsilon }^{+}{\overline{u}}_{\varepsilon }(s)\big \Vert _{L^2}^2 +\big \Vert \nabla _{\varepsilon }^{+}{\overline{v}}_{\varepsilon }(s)\big \Vert _{L^2}^2\\{} & {} \quad +\big \Vert \nabla _{\varepsilon }^{+}{\overline{w}}_{\varepsilon }(s)\big \Vert _{L^2}^2\Big ) ds\\{} & {} \le C\int _{0}^{t}\Big (\big \Vert {\overline{u}}_\varepsilon (s)\big \Vert _{L^2}^2+\big \Vert {\overline{v}}_\varepsilon (s)\big \Vert _{L^2}^2 +\big \Vert {\overline{w}}_\varepsilon (s)\big \Vert _{L^2}^2 \Big )ds \\{} & {} \qquad +C\int _{0}^{t}\Big (\big \Vert u_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 + \big \Vert v_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2\Big ) ds. \end{aligned}$$
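The next step uses the integral form of Gronwall's lemma, which we recall here only for the reader's convenience: if \(\phi \ge 0\) satisfies

$$\begin{aligned} \phi (t) \le A + C\int _0^t \phi (s)\,ds, \quad 0\le t\le T, \quad \text {then}\quad \phi (t)\le A\,e^{Ct}, \quad 0\le t\le T. \end{aligned}$$

It is applied with \(\phi (t)=\big \Vert {\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2+\big \Vert {\overline{v}}_\varepsilon (t)\big \Vert _{L^2}^2 +\big \Vert {\overline{w}}_\varepsilon (t)\big \Vert _{L^2}^2\) and \(A=C\int _0^T\big (\Vert u_\varepsilon (s)\Vert _{\textrm{H}^{-1}}^2 + \Vert v_\varepsilon (s)\Vert _{\textrm{H}^{-1}}^2\big )ds\), which is bounded by \(C\eta _T\) thanks to the earlier estimates on \((u_\varepsilon ,v_\varepsilon )\); the gradient terms on the left are then controlled by reinserting the resulting bound.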

Hence, applying Gronwall's lemma, we obtain

$$\begin{aligned}{} & {} \sup _{0\le t\le T}\Big (\big \Vert {\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2+\big \Vert {\overline{v}}_\varepsilon (t) \big \Vert _{L^2}^2 +\big \Vert {\overline{w}}_\varepsilon (t)\big \Vert _{L^2}^2\Big ) \\{} & {} \quad +C \int _{0}^{T}\Big (\big \Vert \nabla _{\varepsilon }^{+}{\overline{u}}_{\varepsilon }(s)\big \Vert _{L^2}^2 +\big \Vert \nabla _{\varepsilon }^{+}{\overline{v}}_{\varepsilon }(s)\big \Vert _{L^2}^2+\big \Vert \nabla _{\varepsilon }^{+}{\overline{w}}_{\varepsilon }(s)\big \Vert _{L^2}^2\Big ) ds \le C\eta _Te^{CT}. \end{aligned}$$

\(\square \)

We want to deduce, from the fact that the pair \((u_\varepsilon ,v_\varepsilon )\) converges in law towards \((u,v)\) in \(L^2(0,T; (\textrm{H}^{-1})^2)\), the convergence in law of \(({\overline{u}}_\varepsilon ,{\overline{v}}_\varepsilon ,{\overline{w}}_\varepsilon )\).

Lemma 3.5

As \(\varepsilon \rightarrow 0\), \(\{\, \left( {\overline{u}}_\varepsilon (t),{\overline{v}}_\varepsilon (t), {\overline{w}}_\varepsilon (t)\right) , 0\le t\le T\, \}\Rightarrow \{\, ({\overline{u}}(t),{\overline{v}}(t), {\overline{w}}(t)), 0\le t\le T\, \}\) in \(L^2\big (0,T; (L^2)^3\big )\cap C([0,T];(\textrm{H}^{-1})^3)\), where the limit \(\{\,\left( {\overline{u}}(t),{\overline{v}}(t), {\overline{w}}(t)\right) , 0\le t\le T\,\}\) is the unique solution of the following system of parabolic PDEs:

$$\begin{aligned} \left\{ \begin{aligned} \dfrac{d{\overline{u}}}{dt}(t)&= \mu _S\,\Delta {\overline{u}} (t)-f(t){\overline{u}}(t)-g(t){\overline{v}}(t)-f(t)u(t)-g(t)v(t),\\ \dfrac{d{\overline{v}}}{dt}(t)&= \mu _I\,\Delta {\overline{v}}(t)+ f(t){\overline{u}}(t)+g(t){\overline{v}}(t)+f(t)u(t)\\&\quad +g(t)v(t)-\alpha \left( v(t)+{\overline{v}}(t)\right) , \\ \dfrac{d{\overline{w}}}{dt}(t)&= \mu _R\,\Delta {\overline{w}}(t)+ \alpha \left( v(t)+{\overline{v}}(t)\right) , \\ {\overline{u}}(0)&={\overline{v}}(0)={\overline{w}}(0)=0. \end{aligned} \right. \end{aligned}$$
(34)

Proof

Let

$$\begin{aligned} {\overline{\Im }}_\varepsilon (t)&=\begin{pmatrix}{\overline{u}}_\varepsilon (t)\\ {\overline{v}}_\varepsilon (t)\\ {\overline{w}}_\varepsilon (t)\end{pmatrix},\quad F_\varepsilon (t)=\begin{pmatrix} -f_\varepsilon (t)u_\varepsilon (t)-g_\varepsilon (t)v_\varepsilon (t)\\ f_\varepsilon (t)u_\varepsilon (t)+(g_\varepsilon (t)-\alpha ) v_\varepsilon (t)\\ \alpha v_\varepsilon (t)\end{pmatrix},\\ \Lambda _\varepsilon (t)&= \begin{pmatrix} \mu _S\Delta _\varepsilon -f_\varepsilon (t) &{} -g_\varepsilon (t) &{} 0\\ f_\varepsilon (t) &{} \mu _I\Delta _\varepsilon +g_\varepsilon (t)-\alpha &{} 0\\ 0 &{} 0 &{} \mu _R\Delta _\varepsilon +\alpha \end{pmatrix}. \end{aligned}$$

Note that both \({\overline{\Im }}_\varepsilon \) and \(F_\varepsilon \) belong to \(L^2(0,T ; (\texttt{H}_{\varepsilon })^3)\). We have the following system of ODEs

$$\begin{aligned} \frac{d{\overline{\Im }}_\varepsilon }{dt}(t)=\Lambda _\varepsilon (t){\overline{\Im }}_\varepsilon (t)+F_\varepsilon (t),\quad {\overline{\Im }}_\varepsilon (0)=0\,. \end{aligned}$$
(35)
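As a purely illustrative aside (not part of the proof), the finite-dimensional linear system (35) can be integrated numerically on the grid. The sketch below uses constant toy coefficients \(f\), \(g\), \(\alpha \) in place of the time- and space-dependent \(f_\varepsilon (t)\), \(g_\varepsilon (t)\), \(\alpha (x)\) of the paper, and an explicit Euler step; the three rows mirror the matrix \(\Lambda _\varepsilon \) exactly as printed above.

```python
import numpy as np

def discrete_laplacian(h, eps):
    """Periodic second difference (h(x+eps) - 2h(x) + h(x-eps)) / eps^2 on the torus."""
    return (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / eps**2

def euler_step(u, v, w, forcing, eps, dt, mu_S, mu_I, mu_R, f, g, alpha):
    """One explicit Euler step of d(Im)/dt = Lambda_eps * Im + F_eps, cf. (35).

    f, g, alpha are toy constants standing in for f_eps(t), g_eps(t), alpha(x);
    forcing = (F_u, F_v, F_w) plays the role of F_eps(t).
    """
    F_u, F_v, F_w = forcing
    du = mu_S * discrete_laplacian(u, eps) - f * u - g * v + F_u
    dv = mu_I * discrete_laplacian(v, eps) + f * u + (g - alpha) * v + F_v
    dw = mu_R * discrete_laplacian(w, eps) + alpha * w + F_w
    return u + dt * du, v + dt * dv, w + dt * dw
```

Note that `discrete_laplacian` annihilates constants and has zero mean on the torus, which is the discrete analogue of the summation-by-parts identities used repeatedly in the estimates above.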

Lemma 3.3 tells us that, as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} F_\varepsilon \Longrightarrow F \ \text { in } L^2(0,T ; (\textrm{H}^{-1})^3), \end{aligned}$$

where

$$\begin{aligned} F(t)=\begin{pmatrix} -f(t)u(t)-g(t)v(t)\\ f(t)u(t)+g(t)v(t)-\alpha v(t)\\ \alpha v(t)\end{pmatrix}\,. \end{aligned}$$
(36)

We apply the well-known theorem due to Skorohod, which asserts that, after a suitable change of probability space, we may assume that \(F_\varepsilon \rightarrow F\) a.s. strongly in \(L^2((0,T); (\textrm{H}^{-1})^3)\). The estimates established above imply that both \({\overline{\Im }}_\varepsilon \) and \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \) are bounded in \((L^2((0,T)\times \mathbb {T}^1))^3\). Hence, along a subsequence, \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) and \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rightarrow {\overline{G}}\) weakly in \((L^2((0,T)\times \mathbb {T}^1))^3\). Moreover, it follows from a duality argument that \( {\overline{G}}=\nabla {\overline{\Im }}\), and taking the weak limit in (35), we deduce that \({\overline{\Im }}\) is the unique solution of the system of parabolic PDEs
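The duality argument can be spelled out as follows (a standard computation, included for completeness). Writing \(\nabla _\varepsilon ^-\) for the backward difference, for every smooth test function \(\varphi \) on \((0,T)\times \mathbb {T}^1\),

$$\begin{aligned} \int _0^T\!\!\int _{\mathbb {T}^1} \nabla _\varepsilon ^+ {\overline{\Im }}_\varepsilon \, \varphi \,dx\,dt = -\int _0^T\!\!\int _{\mathbb {T}^1} {\overline{\Im }}_\varepsilon \, \nabla _\varepsilon ^- \varphi \,dx\,dt \longrightarrow -\int _0^T\!\!\int _{\mathbb {T}^1} {\overline{\Im }}\, \nabla \varphi \,dx\,dt, \end{aligned}$$

since \(\nabla _\varepsilon ^-\varphi \rightarrow \nabla \varphi \) uniformly while \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) weakly in \(L^2\); the left-hand side converges to \(\int _0^T\!\!\int _{\mathbb {T}^1} {\overline{G}}\,\varphi \,dx\,dt\), whence \({\overline{G}}=\nabla {\overline{\Im }}\) in the sense of distributions.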

$$\begin{aligned} \frac{d {\overline{\Im }}}{dt}(t)=\Lambda (t){\overline{\Im }}(t)+F(t),\quad {\overline{\Im }}(0)=0\,, \end{aligned}$$

with

$$\begin{aligned} \Lambda (t)= \begin{pmatrix} \mu _S\Delta -f(t) &{} -g(t) &{}0 \\ f(t) &{} \mu _I\Delta +g(t)-\alpha &{} 0 \\ 0&{}0 &{} \mu _R\Delta +\alpha \end{pmatrix} . \end{aligned}$$
(37)

Hence all converging subsequences have the same limit, and the whole sequence converges.

We now show that the pair \(\langle {\overline{\Im }}_\varepsilon ,\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rangle \) converges strongly in \((L^2((0,T)\times \mathbb {T}^1))^6\). We first note that both \({\overline{\Im }}_\varepsilon \) and \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \) are bounded in \((L^2((0,T)\times \mathbb {T}^1))^3\), and that \(\frac{d}{dt}{\overline{\Im }}_\varepsilon \) is bounded in \(L^2((0,T);(\textrm{H}^{-1}(\mathbb {T}^1))^3)\). From these estimates, we deduce with the help of Theorem 5.4 in Droniou et al. [4] that \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) strongly in \((L^2((0,T)\times \mathbb {T}^1))^3\). Next we deduce from (35) that

$$\begin{aligned} \frac{1}{2}\frac{d\big \Vert {\overline{\Im }}_\varepsilon (t)\big \Vert _{L^2}^2}{dt}=\langle \Lambda _\varepsilon {\overline{\Im }}_\varepsilon (t),{\overline{\Im }}_\varepsilon (t)\rangle +\langle F_\varepsilon (t),{\overline{\Im }}_\varepsilon (t)\rangle ,\end{aligned}$$
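Here the discrete summation-by-parts identity is used to produce the gradient terms in the next display: for any grid function \(h\),

$$\begin{aligned} \big \langle \Delta _\varepsilon h, h\big \rangle _{L^2} = -\big \Vert \nabla _\varepsilon ^+ h\big \Vert _{L^2}^2, \end{aligned}$$

which follows from writing \(\Delta _\varepsilon = -(\nabla _\varepsilon ^+)^*\nabla _\varepsilon ^+\) on the discrete torus; applied to each diffusion term of \(\Lambda _\varepsilon \), it yields the three gradient integrals on the left-hand side below.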

hence

$$\begin{aligned}&\frac{1}{2}\big \Vert {\overline{\Im }}_\varepsilon (T)\big \Vert _{L^2}^2+\int _0^T\left[ \mu _S\big \Vert \nabla _\varepsilon ^+{\overline{u}}_\varepsilon (t)\big \Vert _{L^2}^2+\mu _I\big \Vert \nabla _\varepsilon ^+{\overline{v}}_\varepsilon (t) \big \Vert _{L^2}^2+\mu _R\big \Vert \nabla _\varepsilon ^+{\overline{w}}_\varepsilon (t)\big \Vert _{L^2}^2\right] dt \nonumber \\&\quad =\int _0^T\Big [\langle f_\varepsilon (t){\overline{u}}_\varepsilon (t)+g_\varepsilon (t){\overline{v}}_\varepsilon (t),{\overline{v}}_\varepsilon (t)\nonumber \\&\qquad -{\overline{u}}_\varepsilon (t)\rangle +\big \Vert \sqrt{\alpha }\;{\overline{w}}_\varepsilon (t)\big \Vert _{L^2}^2-\big \Vert \sqrt{\alpha }\;{\overline{v}}_\varepsilon (t)\big \Vert _{L^2}^2+\langle F_\varepsilon (t),{\overline{\Im }}_\varepsilon (t)\rangle \Big ]dt\,. \end{aligned}$$
(38)

We have an analogous identity for the limiting quantities, namely:

$$\begin{aligned}&\frac{1}{2}\big \Vert {\overline{\Im }}(T)\big \Vert _{L^2}^2+\int _0^T \left[ \mu _S\big \Vert \nabla {\overline{u}}(t)\big \Vert _{L^2}^2+\mu _I\big \Vert \nabla {\overline{v}}(t)\big \Vert _{L^2}^2+\mu _R\big \Vert \nabla {\overline{w}}(t)\big \Vert _{L^2}^2\right] dt \nonumber \\&=\int _0^T\Big [\langle f(t){\overline{u}}(t)+g(t){\overline{v}}(t),{\overline{v}}(t)-{\overline{u}}(t)\rangle \nonumber \\&\quad +\big \Vert \sqrt{\alpha }\;{\overline{w}}(t)\big \Vert _{L^2}^2-\big \Vert \sqrt{\alpha }\;{\overline{v}}(t)\big \Vert _{L^2}^2+\langle F(t),{\overline{\Im }}(t)\rangle \Big ]dt\,. \end{aligned}$$
(39)

It follows from the strong convergence of \(F_\varepsilon \) to F in \(L^2(0,T;(\textrm{H}^{-1})^3)\), the strong convergence of \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) in \((L^2((0,T)\times \mathbb {T}^1))^3\) and the weak convergence of \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \) to \(\nabla {\overline{\Im }}\) in \((L^2((0,T)\times \mathbb {T}^1))^3\) that the right hand side of (38) converges to the right hand side of (39). Hence the left hand side of (38) converges to the left hand side of (39). Consequently

$$\begin{aligned} \frac{1}{2}\big \Vert {\overline{\Im }}_\varepsilon (T)-{\overline{\Im }}(T)\big \Vert _{L^2}^2+ & {} \int _0^T\left[ \mu _S\big \Vert \nabla _\varepsilon ^+{\overline{u}}_\varepsilon (t)-\nabla {\overline{u}}(t)\big \Vert _{L^2}^2+\mu _I\big \Vert \nabla _\varepsilon ^+{\overline{v}}_\varepsilon (t)-\nabla {\overline{v}}(t)\big \Vert _{L^2}^2 \right. \nonumber \\{} & {} \quad +\left. \mu _R\big \Vert \nabla _\varepsilon ^+{\overline{w}}_\varepsilon (t)-\nabla {\overline{w}}(t)\big \Vert _{L^2}^2\right] dt\rightarrow 0\,. \end{aligned}$$
(40)

This last result follows from the convergence of the left hand side of (38) to that of (39), and the facts that

$$\begin{aligned} \langle {\overline{\Im }}_\varepsilon (T),{\overline{\Im }}(T)\rangle\rightarrow & {} \big \Vert {\overline{\Im }}(T)\big \Vert _{L^2}^2, \end{aligned}$$

and

$$\begin{aligned}{} & {} \int _0^T\left[ \mu _S\langle \nabla _\varepsilon ^+{\overline{u}}_\varepsilon (t),\nabla {\overline{u}}(t)\rangle +\mu _I\langle \nabla _\varepsilon ^+{\overline{v}}_\varepsilon (t),\nabla {\overline{v}}(t)\rangle +\mu _R\langle \nabla _\varepsilon ^+{\overline{w}}_\varepsilon (t),\nabla {\overline{w}}(t)\rangle \right] dt \\{} & {} \quad \rightarrow \int _0^T\left[ \mu _S\big \Vert \nabla {\overline{u}}(t)\big \Vert _{L^2}^2+\mu _I\big \Vert \nabla {\overline{v}}(t)\big \Vert _{L^2}^2+\mu _R\big \Vert \nabla {\overline{w}}(t)\big \Vert _{L^2}^2\right] dt\,. \end{aligned}$$

The second convergence follows from the fact that \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rightarrow \nabla {\overline{\Im }}\) in \((L^2((0,T)\times \mathbb {T}^1))^3\) weakly. Concerning the first one, we deduce from the equations and the above statements that \({\overline{\Im }}_\varepsilon (T)\rightarrow {\overline{\Im }}(T)\) weakly in \((\textrm{H}^{-1})^3\). But since that sequence is bounded in \((L^2(\mathbb {T}^1))^3\), it also converges weakly in \((L^2(\mathbb {T}^1))^3\).

The fact that \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rightarrow \nabla {\overline{\Im }}\) strongly in \((L^2((0,T)\times \mathbb {T}^1))^3\) clearly follows from (40).

The above arguments imply that a.s.

$$\begin{aligned} \langle {\overline{\Im }}_\varepsilon ,\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rangle \rightarrow \langle {\overline{\Im }},\nabla {\overline{\Im }}\rangle \ \text { strongly in }(L^2((0,T)\times \mathbb {T}^1))^6\,. \end{aligned}$$

Now the convergence \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) in \(C([0,T];(\textrm{H}^{-1})^3)\) follows readily from the equation. \(\square \)

Lemma 3.2 says that \(\Im _\varepsilon \Rightarrow \Im \) in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T; (\textrm{H}^{-1})^3\big )\), and in Lemma 3.5 we have used the Skorohod theorem to deduce that \({\overline{\Im }}_\varepsilon \Rightarrow {\overline{\Im }}\) in \(L^2\big (0,T;(L^2)^3\big )\cap C([0,T];(\textrm{H}^{-1})^3)\). Hence the same Skorohod theorem allows us to take the limit in the sum \(\Im _\varepsilon +{\overline{\Im }}_\varepsilon \), which yields the following result.

Theorem 3.2

(Functional central limit theorem) For \(3/2<\gamma <2\), as \(\varepsilon \rightarrow 0\), \(\{{\mathscr {Y}}_{\varepsilon }(t)\; , \; 0\le t\le T \}_{0<\varepsilon <1} \Longrightarrow \{{\mathscr {Y}}(t)\; , \; 0\le t\le T \}\) in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T; (\textrm{H}^{-1})^3\big )\), where the limit \({\mathscr {Y}}\) is the solution of the following system of SPDEs: for all \(\varphi \in \textrm{H}^{1}\),

$$\begin{aligned} \left\{ \begin{aligned}&\big \langle \, {\mathscr {U}}(t), \varphi \ \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}}\\&\quad =\mu _S\int _0^t\!\! \big \langle \; {\mathscr {U}}(r)\; ,\; \Delta \varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr +\!\! \int _0^t\!\! \big \langle \, {\mathscr {V}}(r) , \,\beta (.) \dfrac{{\textbf{i}}(r)\big ({\textbf{i}}(r)+{\textbf{r}}(r)\big )}{{\textbf{a}}^2(r)}\varphi \, \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr\\&\qquad + \int _0^t \big \langle \; {\mathscr {U}}(r), \, \beta (.) \dfrac{{\textbf{s}}(r)\big ({\textbf{s}}(r)+{\textbf{r}}(r)\big )}{{\textbf{a}}^2(r)}\varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr + \big \langle \, {\mathscr {M}}^S(t)\, , \, \varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}}, \\&\big \langle \; {\mathscr {V}}(t), \varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} \\&\quad = \mu _I\int _0^t \!\!\big \langle \, {\mathscr {V}}(r)\; ,\; \Delta \varphi \, \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr - \int _0^t\big \langle \; {\mathscr {V}}(r), \, \beta (.) \dfrac{{\textbf{i}}(r)\big ({\textbf{i}}(r)+{\textbf{r}}(r)\big )}{{\textbf{a}}^2(r)}\varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr\\&\qquad - \int _0^t \big \langle \; {\mathscr {U}}(r), \, \beta (.) 
\dfrac{{\textbf{s}}(r)\big ({\textbf{s}}(r)+{\textbf{r}}(r)\big )}{{\textbf{a}}^2(r)}\varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr + \int _0^t\big \langle \; {\mathscr {V}}(r), \alpha (.)\varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr\\&\qquad +\big \langle \, {\mathscr {M}}^I(t)\, , \, \varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}}, \\&\big \langle \; {\mathscr {W}}(t), \varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} \\&\quad = \mu _R \int _0^t \big \langle \; {\mathscr {W}}(r)\,,\,\Delta \varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr - \int _0^t\big \langle \; {\mathscr {V}}(r), \alpha (.)\varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} dr + \big \langle \; {\mathscr {M}}^R(t)\, , \, \varphi \; \big \rangle _{\!{\textrm{H}^{-1},\textrm{H}^{1}}} \, . \end{aligned} \right. \nonumber \\ \end{aligned}$$
(41)

Final remarks: \(\bullet \) Our functional central limit theorem is established in dimension 1. The difficulty in higher dimensions is the following: \(\gamma >3/2\) has to be replaced by \(\gamma >1+d/2\). Then in Lemma 3.2 we have convergence in \(L^2(0,T;(H^{1-\gamma })^3)\cap C([0,T];(H^{-\gamma })^3)\). Note that \(1-\gamma <-d/2\). Already in dimension 2, we have \(1-\gamma <-1\), and there is a serious difficulty with the analog of Lemma 3.5.

\(\bullet \) In this work, we have first let \({\textbf{N}}\rightarrow \infty \) with \(\varepsilon >0\) fixed, and then let \(\varepsilon \rightarrow 0\). The case where \({\textbf{N}}\rightarrow \infty \) and \(\varepsilon \rightarrow 0\) simultaneously, under a constraint on the relative speeds of convergence (which does not allow \({\textbf{N}}\) to converge too slowly to \(\infty \) while \(\varepsilon \rightarrow 0\)), will be the subject of future work.