Abstract
A stochastic SIR epidemic model taking into account the heterogeneity of the spatial environment is constructed. The deterministic model is given by a partial differential equation and the stochastic one by a space-time jump Markov process. The consistency of the two models is given by a law of large numbers. In this paper, we study the deviation of the spatial stochastic model from the deterministic model by a functional central limit theorem. The limit is a distribution-valued Ornstein–Uhlenbeck Gaussian process, which is the mild solution of a stochastic partial differential equation.
1 Introduction
A stochastic spatial model of epidemics was described by N’zi et al. [10] to study the outbreak of infectious diseases in a bounded domain. Such a model takes into account heterogeneity, spatial connectivity and the movement of individuals, which play an important role in the spread of infectious diseases. It is based on the compartmental SIR model of Kermack and McKendrick [6]. Let us summarize the results of N’zi et al. [10] in the case of one-dimensional space.
Consider a deterministic and a stochastic SIR model on a grid \(\textrm{D}_\varepsilon \) of the torus \(\mathbb {T}^1= [0,1)\) with migration between neighboring sites (two neighboring sites are at distance \(\varepsilon \) apart, \( \varepsilon ^{-1}\in \mathbb {N}^*\)). Let \(S_\varepsilon (t,x_i)\) (resp. \(I_\varepsilon (t,x_i)\), resp. \(R_\varepsilon (t,x_i)\)) be the proportion of the total population which is both susceptible (resp. infectious, resp. removed) and located at site \(x_i\) at time t. The dynamics of susceptible, infected and removed individuals at each site can be expressed as
\(\Delta _{\varepsilon }\) is the discrete Laplace operator, defined as the central second difference on the grid: \(\displaystyle \Delta _{\varepsilon }f(x_i) = \varepsilon ^{-2}\big ( f(x_i+\varepsilon )-2f(x_i)+f(x_i-\varepsilon ) \big ).\)
The rates \( \beta : [0,1] \longrightarrow \mathbb {R}_+\) and \( \alpha :[0,1] \longrightarrow \mathbb {R}_+\) are continuous periodic functions, and \( \mu _S\), \(\mu _I\) and \(\mu _R\) are positive diffusion coefficients for the susceptible, infectious and removed subpopulations, respectively.
In what follows, we use the notations \( S_{\varepsilon }(t) := \left( \begin{array}{cl} S_{\varepsilon }(t,x_1)\\ \vdots \\ S_{\varepsilon }(t,x_{\ell }) \end{array} \right) \), \( I_{\varepsilon }(t) := \left( \begin{array}{cl} I_{\varepsilon }(t,x_1)\\ \vdots \\ I_{\varepsilon }(t,x_{\ell }) \end{array} \right) \), \( R_{\varepsilon }(t) := \left( \begin{array}{cl} R_{\varepsilon }(t,x_1)\\ \vdots \\ R_{\varepsilon }(t,x_{\ell }) \end{array} \right) \), and \(\displaystyle \quad Z_{\varepsilon }(t)= \big ( S_{\varepsilon }(t)\, ,\, I_{\varepsilon }(t)\, ,\, R_{\varepsilon }(t) \big )^{\texttt{T}}\). Here \(\ell = \varepsilon ^{-1}\).
Note that (1) is the discrete space approximation of the following system of PDEs
where \( \displaystyle \Delta = \dfrac{\partial ^2}{\partial x^2}\). In the sequel, we set \({\textbf{X}}:=({\textbf{s}} \, ,\, {\textbf{i}} \, ,\, {\textbf{r}})^{\texttt{T}}\).
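As a quick numerical illustration (not part of the original analysis), the discrete Laplacian on the periodic grid, here taken as the standard central second difference, converges to \(\Delta = \partial ^2/\partial x^2\) on smooth periodic functions:

```python
import numpy as np

def discrete_laplacian(f_vals, eps):
    # Central second difference with periodic (torus) boundary:
    # (f(x + eps) - 2 f(x) + f(x - eps)) / eps^2
    return (np.roll(f_vals, -1) - 2.0 * f_vals + np.roll(f_vals, 1)) / eps**2

eps = 1e-3
x = np.arange(0.0, 1.0, eps)              # grid on the torus [0, 1)
f = np.sin(2.0 * np.pi * x)
lap = discrete_laplacian(f, eps)
exact = -(2.0 * np.pi) ** 2 * np.sin(2.0 * np.pi * x)
err = np.max(np.abs(lap - exact))         # O(eps^2) discretization error
print(err)
```

The error decreases like \(\varepsilon ^2\), consistent with (1) being the discrete-space approximation of (2).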
Let \({\textbf{N}}\) be the total population size. The stochastic version of (1) is given by the following system
where all the \(\textrm{P}_j\)’s are standard Poisson processes, which are mutually independent. For each given site, these processes count the number of new infectious, recoveries and the migrations between sites. \( y_i \sim x_i \) means that \( y_i \in \{x_i+\varepsilon \, , \, x_i-\varepsilon \}\).
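To fix ideas, here is a minimal Gillespie-type simulation sketch of such a space-time jump Markov process. It is an illustration only, not the authors' code: it tracks counts per site (the proportions of the text are counts divided by \({\textbf{N}}\)) and assumes the standard per-site rates suggested by the text, namely infection at rate \(\beta (x_i)S_iI_i/A_i\), recovery at rate \(\alpha (x_i)I_i\), and migration of each individual to a neighbouring site at rate \(\mu _J\varepsilon ^{-2}\); all function names and parameter values are hypothetical.

```python
import numpy as np

def simulate(N=500, eps=0.2, T=1.0, mu=(0.01, 0.01, 0.01), seed=0,
             beta=lambda x: 1.5, alpha=lambda x: 1.0):
    """Gillespie-type jump chain; S, I, R hold individual counts per site."""
    rng = np.random.default_rng(seed)
    ell = round(1 / eps)
    xs = np.arange(ell) * eps
    b = np.array([beta(x) for x in xs])
    a = np.array([alpha(x) for x in xs])
    I = np.full(ell, max(1, N // (10 * ell)))
    S = np.full(ell, N // ell) - I
    R = np.zeros(ell, dtype=int)
    t = 0.0
    while t < T:
        A = np.maximum(S + I + R, 1)
        inf = b * S * I / A                      # new infections per site
        rec = a * I                              # recoveries per site
        mig = [2.0 * m * c / eps**2 for m, c in zip(mu, (S, I, R))]
        rates = np.concatenate([inf, rec] + mig)
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)        # exponential waiting time
        j = rng.choice(rates.size, p=rates / total)
        k, i = divmod(j, ell)
        if k == 0:                               # infection at site x_i
            S[i] -= 1; I[i] += 1
        elif k == 1:                             # recovery at site x_i
            I[i] -= 1; R[i] += 1
        else:                                    # migration to a neighbour
            comp = (S, I, R)[k - 2]
            comp[i] -= 1
            comp[(i + rng.choice((-1, 1))) % ell] += 1
    return S, I, R

S, I, R = simulate()
print(S, I, R, (S + I + R).sum())
```

Each event changes one count by \(\pm 1\), so the total population is conserved; only its spatial distribution and compartment split evolve.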
Let \( S_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} S_{{\textbf{N}},\varepsilon }(t,x_1)\\ \vdots \\ S_{{\textbf{N}},\varepsilon }(t,x_{\ell }) \end{array} \right) \) , \( I_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} I_{{\textbf{N}},\varepsilon }(t,x_1)\\ \vdots \\ I_{{\textbf{N}},\varepsilon }(t,x_{\ell }) \end{array} \right) \),
\( R_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} R_{{\textbf{N}},\varepsilon }(t,x_1)\\ \vdots \\ R_{{\textbf{N}},\varepsilon }(t,x_{\ell }) \end{array} \right) \),
\(\displaystyle \; Z_{{\textbf{N}},\varepsilon }(t):= \big ( S_{{\textbf{N}},\varepsilon }(t),\, I_{{\textbf{N}},\varepsilon }(t), \, R_{{\textbf{N}},\varepsilon }(t) \big )^{\texttt{T}}\) and \(\displaystyle b_{\varepsilon }\big (t,Z_{{\textbf{N}},\varepsilon }(t)\big ) := \sum _{j=1}^{K}h_j \beta _j( Z_{{\textbf{N}}, \varepsilon }(t))\) (K being the number of Poisson’s processes in the system), where the vectors \(h_j \in \{ -1 , 0 , 1\}^{3\ell }\) denote the respective jump directions with jump rates \(\beta _j\). The SDE (3) can be rewritten as follows
Also the system (1) can be written as follows
The authors show the consistency of the two models by a law of large numbers. More precisely, the following two results were proved in N’zi et al. [10].
Theorem 1.1
(Law of Large Numbers: \({\mathbf {{\varvec{N}}}}\rightarrow \infty \), \(\varepsilon \) being fixed)
Let \( Z_{{\textbf{N}},\varepsilon } \) denote the solution of (4) and \( Z_{\varepsilon }\) the solution of (5).
Let us fix an arbitrary \(T > 0\) and assume that \( Z_{{\textbf{N}},\varepsilon }(0) \longrightarrow Z_{\varepsilon }(0) \), as \( {\textbf{N}}\rightarrow + \infty \).
Then \( \displaystyle \underset{0\le t\le T}{\sup }\Big \Vert Z_{{\textbf{N}},\varepsilon }(t)-Z_{\varepsilon }(t) \Big \Vert \longrightarrow 0 \; \text {a.s.} \; , \; \; as \; \; {\textbf{N}}\rightarrow + \infty \) .
Moreover, for all \( x_i \in D_{\varepsilon }\), let \(V_i :=[x_i-\varepsilon /2, x_i+\varepsilon /2 )\) denote the cell centered at the site \(x_i\). We define
, and we set
We introduce the canonical projection \(P_\varepsilon : L^2(\mathbb {T}^1) \longrightarrow \texttt{H}_\varepsilon \) defined by
Throughout this paper, we assume that the initial condition satisfies
Assumption 1.1
\({\textbf{s}}(0,.)\), \({\textbf{i}}(0,.)\), \({\textbf{r}}(0,.)\) \(\in C^1(\mathbb {T}^1)\), \(\forall x\in \mathbb {T}^1\), \({\mathcal {S}}_{\varepsilon }(0,x)=P_\varepsilon {\textbf{s}}(0,x)\), \({\mathcal {I}}_{\varepsilon }(0,x)=P_\varepsilon {\textbf{i}}(0,x)\), \({\mathcal {R}}_{\varepsilon }(0,x)=P_\varepsilon {\textbf{r}}(0,x)\), and \(\displaystyle \int _{\mathbb {T}^1} \left( {\textbf{s}}(0,x)+{\textbf{i}}(0,x)+{\textbf{r}}(0,x)\right) dx=1. \)
Assumption 1.2
There exists a constant \(c>0\) such that \(\displaystyle \inf _{x \in \mathbb {T}^1}{\textbf{s}}(0,x)\ge c\).
We use the notation \( \Vert f \Vert _{\infty } := \underset{x\in [0, 1]}{\sup }\vert f(x) \vert \) to denote the supremum norm of f on [0, 1] and define \( \Big \Vert \big (f , g, h \big )^{\! \!\texttt{T}} \Big \Vert _{\infty } := \big \Vert f \big \Vert _{\infty } + \big \Vert g \big \Vert _{\infty }+\big \Vert h \big \Vert _{\infty }.\)
We have the following theorem.
Theorem 1.2
For all \(T > 0 \), \(\displaystyle \underset{0\le t\le T}{\sup }\Big \Vert {\textbf{X}}_{\varepsilon }(t)- {\textbf{X}}(t)\Big \Vert _{\infty } \longrightarrow 0 \), as \( \varepsilon \rightarrow 0 . \)
Next, defining \(\displaystyle {\mathcal {S}}_{{\textbf{N}},\varepsilon }(t,x) := \sum _{i=1}^{\varepsilon ^{-1}} S_{{\textbf{N}},\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x), \; \; {\mathcal {I}}_{{\textbf{N}},\varepsilon }(t,x) := \sum _{i=1}^{\varepsilon ^{-1}} I_{{\textbf{N}},\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x),\)
\(\displaystyle {\mathcal {R}}_{{\textbf{N}},\varepsilon }(t,x) := \sum _{i=1}^{\varepsilon ^{-1}} R_{{\textbf{N}},\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x),\) and setting \({\textbf{X}}_{{\textbf{N}},\varepsilon }:=\big ({\mathcal {S}}_{{\textbf{N}},\varepsilon }\, ,\,{\mathcal {I}}_{{\textbf{N}},\varepsilon }\, ,\, {\mathcal {R}}_{{\textbf{N}},\varepsilon }\big )^{\texttt{T}}\), the following theorem is proved in N’zi et al. [10].
Theorem 1.3
Let us assume that \((\varepsilon ,{\textbf{N}})\rightarrow (0,\infty )\), in such a way that
(i) \(\dfrac{{\textbf{N}}}{\log (1/\varepsilon )}\longrightarrow \infty \) as \({\textbf{N}} \rightarrow \infty \) and \( \varepsilon \rightarrow 0 \);
(ii) \(\Big \Vert {\textbf{X}}_{{\textbf{N}},\varepsilon }(0)- {\textbf{X}}(0) \Big \Vert _{\infty } \longrightarrow 0 \) in probability.
Then for all \( T > 0 \), \( \underset{0\le t\le T}{\sup }\Big \Vert {\textbf{X}}_{{\textbf{N}},\varepsilon }(t)- {\textbf{X}}(t) \Big \Vert _{\infty } \longrightarrow 0 \) in probability .
We devote this paper to studying the deviation of the stochastic model from the deterministic one as the mesh size of the grid goes to zero. In this work, we focus our attention on periodic boundary conditions on the unit interval [0, 1], which we denote by \(\mathbb {T}^1\). Let us mention that Blount [2] and Kotelenez [7] described similar spatial models for chemical reactions. The resulting process has one component and is compared with the corresponding deterministic model. They proved a functional central limit theorem under some restrictions on the respective speeds of convergence of the initial number of particles in each cell and the number of cells.
The rest of this paper is organized as follows. In Sect. 2, we give some notations and preliminaries which will be useful in the sequel. In Sect. 3, we establish a functional central limit theorem, the main result of this paper, by letting the mesh size \(\varepsilon \) of the grid go to zero. The fluctuation limit is a distribution-valued generalized Ornstein–Uhlenbeck Gaussian process and can be represented as the solution of a linear stochastic partial differential equation, whose driving terms are Gaussian martingales.
2 Notations and preliminaries
In this section, we give some notations and collect some standard facts on the Sobolev spaces \(\textrm{H}^{\gamma }(\mathbb {T}^1)\), \(\gamma \in \mathbb {R}\). First of all, let us describe some of the properties of the (discrete)-Laplace operator. Let \( \texttt{H}_{\varepsilon } \subset L^2(\mathbb {T}^1)\) denote the space of real valued step functions that are constant on each cell \( V_i\). For \(f\in \texttt{H}_{\varepsilon }\), let us define
For \( f, g \in L^2(\mathbb {T}^1)\), \(\displaystyle \langle \; f , g \; \rangle := \int _{\mathbb {T}^1} f(x)g(x)dx \) denotes the scalar product in \(\textrm{L}^2(\mathbb {T}^1).\)
It is not hard to see that
For m even and \( x\in \mathbb {R} \) we define \(\displaystyle \varphi _m(x)=\sqrt{2}\cos (\pi m x)\) and \(\displaystyle \psi _m(x)=\sqrt{2}\sin (\pi m x)\).
\(\left\{ 1\, , \, \varphi _m \, , \, \psi _m \, , \, \; m=2k \, , \, k\ge 1 \right\} \) is a complete orthonormal system (CONS) of eigenvectors of \( \Delta \) in \(L^2(\mathbb {T}^1)\) with eigenvalues \(\displaystyle -\lambda _m =-\pi ^2 m^2.\) Consequently, the semigroup \(\textsf {T}(t)\!:= \!\exp (\Delta \,t)\) acting on \(L^2(\mathbb {T}^1)\) generated by \( \Delta \) can be represented as
Assume that \( \varepsilon ^{-1}\) is an odd integer. For \( m \in \left\{ 0 ,2 , \cdots , \varepsilon ^{-1}-1 \right\} \), we define \(\displaystyle \varphi _{m}^{\varepsilon }(x)=\sqrt{2}\cos (\pi m j\varepsilon )\), if \(x\in V_j\) and \(\displaystyle \psi _{m}^{\varepsilon }(x)=\sqrt{2}\sin (\pi m j\varepsilon )\), if \(x\in V_j\). \(\{\, \varphi _{m}^{\varepsilon }, \, \psi _{m}^{\varepsilon }, m \,\} \) form an orthonormal basis of \( \texttt{H}_{\varepsilon }\) as a subspace of \(L^2\left( \mathbb {T}^1\right) \). These vectors are eigenfunctions of \( \Delta _{\varepsilon } \) with the associated eigenvalues \(\displaystyle -\lambda _{m}^{\varepsilon }=-2\varepsilon ^{-2}\big ( 1 - \cos (m\pi \varepsilon ) \big ).\) Note that \( \displaystyle \lambda _{m}^{\varepsilon } \longrightarrow \lambda _{m}\), as \(\varepsilon \rightarrow 0\). Basic computations show that there exists a constant c, such that for each m and \(\varepsilon \), \(\varepsilon ^{-2}\big ( 1 - \cos (\pi m \varepsilon ) \big ) > c\, m^2.\) Let us set \( n_\varepsilon =\frac{\varepsilon ^{-1}-1}{2}.\) \( \Delta _{\varepsilon } \) generates a contraction semigroup \( \textsf {T}_{\!\varepsilon }(t) :=\exp (\Delta _{\varepsilon } t) \) whose action on each \(f\in \texttt{H}_{\varepsilon }\) is given by
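These explicit eigenvalue formulas are easy to check numerically; the following sketch (an illustration, not from the paper) verifies both the convergence \(\lambda _{m}^{\varepsilon }\rightarrow \lambda _{m}\) and a lower bound of the form \(\varepsilon ^{-2}\big ( 1 - \cos (\pi m \varepsilon ) \big ) > c\, m^2\):

```python
import numpy as np

def lam_eps(m, eps):
    # Discrete eigenvalue: 2 eps^{-2} (1 - cos(m pi eps))
    return 2.0 * (1.0 - np.cos(np.pi * m * eps)) / eps**2

m = 4
lam = np.pi**2 * m**2                      # continuous eigenvalue pi^2 m^2
for eps in (1/9, 1/99, 1/999):             # eps^{-1} odd, as assumed in the text
    print(eps, lam_eps(m, eps))
print(lam)

# Lower bound eps^{-2}(1 - cos(pi m eps)) > c m^2, illustrated here with c = 1:
ms = np.arange(1, 99)
print(bool(np.all(lam_eps(ms, 1/99) / 2.0 > ms**2)))
```

In fact \(1-\cos \theta \ge 2\theta ^2/\pi ^2\) on \([0,\pi ]\), which gives the bound with any \(c<2\).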
Note that both \( \Delta _{\varepsilon }\) and \(\textsf {T}_{\!\varepsilon }(t)\) are self-adjoint and that \(\textsf {T}_{\!\varepsilon }(t)\Delta _{\varepsilon }\varphi =\Delta _{\varepsilon }\textsf {T}_{\!\varepsilon }(t)\varphi .\)
For any \( J\in \{S, I, R\}\), the semigroup generated by \( \mu _{\!_J}\Delta \) is \(\textsf {T}(\mu _{\!_J}t)\). In the sequel, we will use the notation \( \textsf {T}_{\!\!_J}(t):=\textsf {T}(\mu _{\!_J}t)\) and similarly, in the discrete case, we will use the notation \(\textsf {T}_{\!\varepsilon ,J}(t):=\textsf {T}_{\varepsilon }(\mu _{\!_J}t)\). Also, for any \(J\in \{S, I, R\}\), we set \( \lambda _{m,J}:=\mu _{\!_J}\lambda _{m}\) and \( \lambda _{m,J}^{\varepsilon }:=\mu _{\!_J}\lambda _{m}^{\varepsilon }. \) For \( \gamma \in \mathbb {R}_+\), we define the Hilbert space \(\textrm{H}^{\gamma }(\mathbb {T}^1)\) as follows.
We shall use the notations \(\textrm{H}^{\gamma } := \textrm{H}^{\gamma }(\mathbb {T}^1)\) and \(L^2 := L^2(\mathbb {T}^1)\).
Note that \( \displaystyle \Vert \varphi \Vert _{_{\textrm{H}^{\gamma }}}= \Vert ( {\textbf{I}}-\Delta )^{\gamma /2}\varphi \Vert _{_{L^2}} \), where \( {\textbf{I}} \) is the identity operator on \(L^2\left( \mathbb {T}^1\right) .\) For any three-dimensional vector-valued function \( \Phi =(\Phi _1,\Phi _2,\Phi _3)^{\texttt{T}}\), we use the notation \(\Vert \Phi \Vert _{_{\textrm{H}^{\gamma }}} := \Big (\Vert \Phi _1 \Vert _{_{\textrm{H}^{\gamma }}}^2 +\Vert \Phi _2 \Vert _{_{\textrm{H}^{\gamma }}}^2 +\Vert \Phi _3 \Vert _{_{\textrm{H}^{\gamma }}}^2\Big )^{1/2} \).
For \( \gamma \in \mathbb {R}\), we also define
For f , \(g \in \texttt{H}_{\varepsilon }\), we have
Elementary calculation shows that for \( f\in \texttt{H}_{\varepsilon }\), and \(\gamma >0\) there exist positive constants \(c_1(\gamma )\) and \(c_2(\gamma )\) such that for all \(\varepsilon >0\)
\( f^{\prime }:=\dfrac{\partial f}{\partial x }\) will denote the derivative of f.
In the sequel we may use the same notation for different constants (we use the generic notation C for a positive constant). These constants may depend on some parameters of the model; as long as those parameters are independent of \( \varepsilon \) and \( {\textbf{N}}\), we will not necessarily mention this dependence explicitly. However, we use \(C(\gamma , T)\) to denote a constant which depends on \(\gamma \) and T (and possibly on some unimportant constants). The exact value of such constants may change from line to line.
Let us now consider the deviation of the stochastic model around its deterministic law of large numbers limit. To this end we introduce the rescaled difference between \(Z_{{\textbf{N}},\varepsilon }(t) \) and \(Z_{\varepsilon }(t)\), namely
where
and
\(\hspace{3cm} W_{{\textbf{N}},\varepsilon }(t) := \left( \begin{array}{cl} \sqrt{{\textbf{N}}}\Big (R_{{\textbf{N}},\varepsilon }(t, x_1)- R_\varepsilon (t,x_1)\Big )\\ \vdots \\ \sqrt{{\textbf{N}}}\Big (R_{{\textbf{N}},\varepsilon }(t, x_{\ell })- R_\varepsilon (t,x_{\ell })\Big ) \end{array} \right) .\)
In the sequel, we denote by \("\Longrightarrow "\) weak convergence. By fixing the mesh size \(\varepsilon \) of the grid and letting \({\textbf{N}}\) go to infinity, we obtain the following theorem.
Theorem 2.1
(Central Limit Theorem : \({\mathbf {{\varvec{N}}}}\rightarrow \infty \), \(\varepsilon \) being fixed)
Assume that \(\sqrt{{\textbf{N}}}\big ( Z_{{\textbf{N}},\varepsilon }(0)- Z_{\varepsilon }(0)\big ) \longrightarrow 0 \), as \( {\textbf{N}}\rightarrow \infty \).
Then, as \( {\textbf{N}} \rightarrow + \infty \) , \(\big \{\begin{array}{rl} \Psi _{{\textbf{N}},\varepsilon } (t) , \; t\ge 0 \end{array} \big \} \Longrightarrow \big \{\begin{array}{rl} \Psi _{\varepsilon }(t) , \; t\ge 0 \end{array}\big \} , \) for the topology of locally uniform convergence, where the limit process \( \Psi _{\varepsilon }(t) := \left( \begin{array}{cl} U_{\varepsilon }(t)\\ V_{\varepsilon }(t)\\ W_{\varepsilon }(t) \end{array} \right) \) satisfies
and \( \displaystyle \{ B_1(t), B_2(t), \cdots , B_K(t) \}\) are mutually independent standard Brownian motions.
More precisely, by setting \( A_{\varepsilon }=S_{\varepsilon } +I_{\varepsilon }+R_{\varepsilon }\), for any site \(x_i\), the limit \((U_{\varepsilon }, V_{\varepsilon }, W_{\varepsilon })^{\texttt{T}}\) satisfies the following system
where \(\{ B_{x_i}^{inf}: x_i\in \textrm{D}_\varepsilon \}\) , \(\{ B_{x_i}^{rec} : x_i\in \textrm{D}_\varepsilon \}\) , \(\{ B_{x_i y_i}^S : y_i \sim x_i\in \textrm{D}_\varepsilon \}\) , \(\{ B_{x_i y_i}^I : y_i \sim x_i\in \textrm{D}_\varepsilon \}\) and \(\{ B_{x_i y_i}^R : y_i \sim x_i\in \textrm{D}_\varepsilon \}\) are families of independent Brownian motions.
Theorem 2.1 is a special case of Theorem 3.5 of Kurtz [9] (see also Theorem 2.3.2 in Britton and Pardoux [3]). We therefore do not give the proof here, and refer the reader to those references for a complete proof. \(\square \)
Let \( {\textbf{X}}=({\textbf{s}} \, ,\, {\textbf{i}} \, ,\,{\textbf{r}})^{\texttt{T}}\) satisfy the system (2) on [0, 1]. Thanks to Proposition 1.1 of Taylor [11] (Chapter 15, Section 1), we have the following lemma.
Lemma 2.1
Let \(\gamma \ge 0 \) and assume that the initial data \( {\textbf{X}}(0)\) belong to \((\textrm{H}^{\gamma })^3\). Then the parabolic system (2) has a unique solution \( {\textbf{X}} \in C\big ([0,T]; (\textrm{H}^{\gamma })^3\big ) \).
The rest of this section is devoted to the proof of some estimates for the solution of the system of equations (1). We first note that \(S_\varepsilon (t,x_i)\ge 0\), \(I_\varepsilon (t,x_i)\ge 0\), \(R_\varepsilon (t,x_i)\ge 0\) for all \(t\ge 0\), \(x_i\in D_\varepsilon \) and \(\varepsilon >0\). Moreover, for any \(T>0\), there exists a constant \(C_T\) such that
Indeed we first note that \(\Vert S_\varepsilon (t)\Vert _\infty \le M\), since \(S_\varepsilon \) is upper bounded by the solution of the ODE
Next \(I_\varepsilon (t,x_i)\) is upper bounded by the solution of the ODE (with \({\bar{\beta }}:=\sup _x\beta (x)\))
The result for \(R_\varepsilon \) is now easy.
Let us set \({\mathcal {A}}_{\varepsilon } :={\mathcal {S}}_{\varepsilon }+{\mathcal {I}}_{\varepsilon }+{\mathcal {R}}_{\varepsilon }\). We have the following lemma.
Lemma 2.2
For any \(T>0\), there exists a positive constant \(c_T\) such that
Proof
We consider the ODE
Since \({\mathcal {S}}_\varepsilon (t,x)+{\mathcal {R}}_\varepsilon (t,x)\ge 0\) and \({\mathcal {I}}_\varepsilon (t,x)\ge 0\), it is plain that
Define \(\overline{{\mathcal {S}}}_\varepsilon (t,x)=e^{{\overline{\beta }}t}{\mathcal {S}}_\varepsilon (t,x)\). We have
Combining this with the last inequality, we deduce that
from Assumption 1.2.
Going back to \({\mathcal {S}}_\varepsilon \), we note that we have proved that
In other words, for any \(T>0\), there exists a constant \(c_T:=ce^{-{\overline{\beta }}T}\) which is such that
And since \(I_\varepsilon (t,x_i)+R_\varepsilon (t,x_i)\ge 0\), \({\mathcal {A}}_\varepsilon (t,x)\) satisfies the same lower bound. \(\square \)
Lemma 2.3
For any \(T>0\), there exists a constant C such that for each \(\varepsilon >0\)
Proof
For all \((t,x)\in [0,T]\times [0, 1] \), we have
\( \displaystyle \hspace{1cm} \dfrac{d\,{\mathcal {S}}_{\varepsilon }}{dt}(t,x) = \mu _S\,\Delta _{\varepsilon } {\mathcal {S}}_{\varepsilon }(t,x)- \dfrac{\beta (x)\, {\mathcal {S}}_{\varepsilon }(t,x){\mathcal {I}}_{\varepsilon }(t,x)}{{\mathcal {A}}_{\varepsilon }(t,x)}, \)
which implies
Then, \(\forall \, t \in [0,T]\),
In the same way, we obtain
and
Then, we deduce that
where \({\overline{\alpha }}=\underset{x\in \mathbb {T}^1}{\sup }\vert \alpha (x) \vert \).
It then follows from Gronwall’s lemma that
\(\square \)
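Gronwall's lemma, used in the proof above, states in integral form that \(u(t)\le a + b\int _0^t u(s)\,ds\) implies \(u(t)\le a\,e^{bt}\). A small numerical sanity check (an illustration with arbitrary constants, not part of the proof):

```python
import numpy as np

# Saturate the Gronwall hypothesis with equality via left-endpoint quadrature;
# the resulting u must stay below the bound a * exp(b t).
a, b, h, T = 1.0, 2.0, 1e-4, 1.0
t = np.arange(0.0, T, h)
u = np.empty_like(t)
integral = 0.0
for n in range(t.size):
    u[n] = a + b * integral
    integral += h * u[n]
print(u[-1], a * np.exp(b * t[-1]))      # both close to a * e^{bT}
```

The left-endpoint rule slightly underestimates the integral, so the discrete iterate sits just below the exponential bound, as the lemma predicts.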
We now add the following assumption.
Assumption 2.1
The functions \(\beta \), \(\alpha \) satisfy \(\alpha \in C^1(\mathbb {T}^1)\) and \(\beta \in C^2(\mathbb {T}^1)\).
Let \(f_\varepsilon (t,x):=\beta (x)\frac{{\mathcal {S}}_{\varepsilon }(t,x)\big [{\mathcal {S}}_{\varepsilon }(t,x)+{\mathcal {R}}_{\varepsilon }(t,x)\big ]}{{\mathcal {A}}_{\varepsilon }^2(t,x)}, \text {and } g_\varepsilon (t,x) := \beta (x)\frac{{\mathcal {I}}_{\varepsilon }(t,x)\big [{\mathcal {I}}_{\varepsilon }(t,x)+{\mathcal {R}}_{\varepsilon }(t,x)\big ]}{{\mathcal {A}}_{\varepsilon }^2(t,x)}.\)
Lemma 2.4
For any \(T>0\), there exists a positive constant C such that for all \(\varepsilon >0\),
Proof
\(\forall x\in \mathbb {T}^1\), \(\forall t\ge 0\) we have
from which we obtain
where we have used Assumption 2.1, inequality (11) and Lemma 2.2. The result now follows from Lemma 2.3.
\(\square \)
Lemma 2.5
For any \(T>0\), there exists a positive constant C such that
and
Proof
We first establish (16). Applying the operator \(\nabla ^+_\varepsilon \) to the first equation in (1), we get
The last term on the above right hand side is easily made explicit thanks to a computation similar to that done in (15). Combining that formula with Assumption 2.1, inequality (11) and Lemma 2.2, we deduce that
From the Duhamel formula,
Since the semigroup \(e^{t\mu _S\Delta _\varepsilon }\) is contracting in \(L^\infty \), we deduce that
Applying similar arguments to the two other equations in (1), we obtain
(16) now follows from Gronwall’s Lemma and Assumption 1.1.
We now multiply (20) by \(\nabla ^+_\varepsilon {\mathcal {S}}_\varepsilon (t,x)\) and integrate on \([0,t]\times \mathbb {T}^1\), yielding
which yields one third of (17). The rest of (17) is proved by similar computations applied to the equations for \(\nabla ^+_\varepsilon {\mathcal {I}}_\varepsilon \) and \(\nabla ^+_\varepsilon {\mathcal {R}}_\varepsilon \). Next (18) follows from (17), (16), Assumption 2.1, (11) and Lemma 2.2.
Since
the estimate (19) follows from (16), Assumption 2.1, inequality (11), Lemma 2.2 and the fact that the norm in \(L^2(\mathbb {T}^1)\) is bounded by the norm in \(L^\infty (\mathbb {T}^1)\).
\(\square \)
Lemma 2.6
For any \(T>0\), as \(\varepsilon \rightarrow 0\)
where \(\displaystyle f(t,x)=\dfrac{{\textbf{s}}(t,x)\left[ {\textbf{s}}(t,x)+{\textbf{r}}(t,x)\right] }{{\textbf{a}}^2(t,x)}\) and \( \displaystyle g(t,x)=\dfrac{{\textbf{i}}(t,x)\left[ {\textbf{i}}(t,x)+{\textbf{r}}(t,x)\right] }{{\textbf{a}}^2(t,x)}, \forall t\in [0, T], \; x\in \mathbb {T}^1.\)
Moreover f, \(g \in L^2\left( 0, T ; \textrm{H}^1\right) \).
Proof
Let d be the function such that, for all \( t\in [0, T]\), \(x\in \mathbb {T}^1\) and \(\varepsilon >0\),
Furthermore, we know that \({\mathcal {S}}_{\varepsilon } \longrightarrow {\textbf{s}}\), \({\mathcal {I}}_{\varepsilon }\longrightarrow {\textbf{i}}\) and \({\mathcal {R}}_{\varepsilon }\longrightarrow {\textbf{r}}\) uniformly on \([0, T] \times \mathbb {T}^1\). Since d is continuous on \(\{(s, i, r)\in (\mathbb {R}_+)^3 : s+i+r>0\}\), we deduce that \(f_\varepsilon \longrightarrow f\) uniformly on \([0, T] \times \mathbb {T}^1\), and in particular in \(C\left( [0, T] ; L^2(\mathbb {T}^1)\right) \).
From (20) and similar equations for \(\displaystyle \nabla _{\!\!\varepsilon }^+ {\mathcal {I}}_{\varepsilon }(t,x)\) and \(\displaystyle \nabla _{\!\!\varepsilon }^+ {\mathcal {R}}_{\varepsilon }(t,x)\), we obtain the convergence of \(\nabla _{\varepsilon }^+ f_\varepsilon \longrightarrow \nabla f\) by an argument similar to the previous one.
The proofs of \(g_\varepsilon \longrightarrow g\) and \(\nabla _{\varepsilon }^+ g_\varepsilon \longrightarrow \nabla g\) are obtained in the same way.
\(\square \)
In the sequel, we will write "\(f_\varepsilon (t) \longrightarrow f(t) \) in \(\textrm{H}^{1}\)" to mean that "\(f_\varepsilon (t) \longrightarrow f(t) \) in \(L^2(\mathbb {T}^1)\) and \(\nabla _{\!\!\varepsilon }^+f_\varepsilon (t) \longrightarrow \nabla f(t) \) in \(L^2(\mathbb {T}^1)\)".
We have the following compactness result.
Lemma 2.7
(Theorem 1.69 of Bahouri et al. [1], page 47)
For any compact subset E of \(\mathbb {R}^d\) and \(s_1<s_2\), the embedding of \(\textrm{H}^{s_2}\left( E\right) \) into \(\textrm{H}^{s_1}\left( E\right) \) is a compact linear operator.
In the next section, we study the behavior of the process \(\{\Psi _{\varepsilon }, 0<\varepsilon < 1 \}\) as \(\varepsilon \) goes to zero.
3 Functional central limit theorem
Let us define \(\displaystyle {\mathscr {U}}_{\varepsilon }(t,x) = \dfrac{1}{\varepsilon ^{1/2}}\sum _{i=1}^{\varepsilon ^{-1}} U_{\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x), \) \(\displaystyle {\mathscr {V}}_{\varepsilon }(t,x) =\dfrac{1}{\varepsilon ^{1/2}}\sum _{i=1}^{\varepsilon ^{-1}}V_{\varepsilon }(t,x_i)\mathbbm {1}_{V_i}(x),\)
Moreover, we set
\(({\mathscr {U}}_{\varepsilon }, {\mathscr {V}}_{\varepsilon }, {\mathscr {W}}_{\varepsilon })\) satisfies the following system
For \( \gamma \in \mathbb {R}_+\), we denote by \(C\big ([0,T] ; \textrm{H}^{-\gamma }\big )\) the complete separable metric space of continuous functions defined on [0, T] with values in \(\textrm{H}^{-\gamma }\). For any \( \varepsilon >0\), \({\mathscr {U}}_{\varepsilon }\), \({\mathscr {V}}_{\varepsilon }\) and \( {\mathscr {W}}_{\varepsilon }\) can be viewed as continuous processes taking values in some Hilbert space \( \textrm{H}^{-\gamma }\). Hence we will study the weak convergence of the process \(({\mathscr {U}}_{\varepsilon }, {\mathscr {V}}_{\varepsilon }, {\mathscr {W}}_{\varepsilon })\) in \(C\big ([0,T] ; (\textrm{H}^{-\gamma })^3\big ).\)
In the sequel we will need to control the stochastic convolution integrals \(\displaystyle \int _0^t \textsf {T}_{\!\varepsilon ,J}(t-r)d{\mathscr {M}}_{\varepsilon }^J(r)\), with \(J\in \{S, I, R\}\). To this end, we shall need a maximal inequality which is a special case of Theorem 2.1 of Kotelenez [8], which we first recall.
Lemma 3.1
(Kotelenez [8]) Let \(( \textrm{H}\,; \Vert . \Vert _{\textrm{H}}) \) be a separable Hilbert space, \({\mathcal {M}}\) an \(\textrm{H}\)-valued locally square integrable càdlàg martingale and \(\textsf {T}(t)\) a contraction semigroup operator of \( {\mathcal {L}}(\textrm{H})\). Then, there is a finite constant c depending only on the Hilbert norm \( \Vert . \Vert _{\textrm{H}} \) such that for all \( T \ge 0 \)
where \(\sigma \) is a real number such that \( \big \Vert \textsf {T}(t)\big \Vert _{{\mathcal {L}}(\textrm{H})} \le e^{\sigma t}\).
We want to take the limit as \(\varepsilon \rightarrow 0\) in the system of SDEs (21) satisfied by \({\mathscr {Y}}_{\varepsilon }\). To this end we will split our system into two subsystems.
First, we consider the following linear system
Next, we shall consider the second system
and finally, we note that
Then the convergence of \({\mathscr {Y}}_\varepsilon :=\left( {\mathscr {U}}_{\varepsilon }, {\mathscr {V}}_{\varepsilon }, {\mathscr {W}}_{\varepsilon }\right) \) will follow from both the convergence of \(\left( u_{\varepsilon }, v_{\varepsilon }, w_{\varepsilon }\right) \) and of \(\left( {\overline{u}}_{\varepsilon }, {\overline{v}}_{\varepsilon }, {\overline{w}}_{\varepsilon }\right) \).
Let us first look at the convergence of \(\left( u_{\varepsilon }, v_{\varepsilon }, w_{\varepsilon }\right) \).
Let \( {\mathscr {M}}_{\varepsilon }= \big ({\mathscr {M}}_{\varepsilon }^S, {\mathscr {M}}_{\varepsilon }^I, {\mathscr {M}}_{\varepsilon }^R \big )^{\texttt{T}}\).
Recall that we denote by \("\Longrightarrow "\) the weak convergence.
Proposition 3.1
For any \( \gamma > 3/2\), the Gaussian martingale \( {\mathscr {M}}_{\varepsilon }\Longrightarrow {\mathscr {M}}:= \big ({\mathscr {M}}^S, {\mathscr {M}}^I,{\mathscr {M}}^R\big )^{\texttt{T}}\) in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big )\) as \(\varepsilon \rightarrow 0\), where for all \( \varphi \in \textrm{H}^{\gamma }\)
and \(\dot{W_1}\), \(\dot{W_2}\), \(\dot{W_3}\), \(\dot{W_4}\) and \(\dot{W_5}\) are standard space-time white noises which are mutually independent.
Proof
First, we are going to show that there exists a positive constant C, independent of \(\varepsilon \), such that
Recall that \( \displaystyle \big \Vert {\mathscr {M}}_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2:= \big \Vert {\mathscr {M}}_{\varepsilon }^S(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2+\big \Vert {\mathscr {M}}_{\varepsilon }^I(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2+\big \Vert {\mathscr {M}}_{\varepsilon }^R(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\) .
Applying Doob’s inequality to the martingale \({\mathscr {M}}_{\varepsilon }^S\), we have
Since \( \dfrac{S_{\varepsilon }(r, x_i)I_{\varepsilon }(r, x_i)}{A_{\varepsilon }(r, x_i)} \le M \) (indeed \(\dfrac{I_{\varepsilon }(r, x_i)}{A_{\varepsilon }(r, x_i)}\le 1\) and \(S_{\varepsilon }(r, x_i)\le M\), see (11) and the line which follows it) and \(\big \vert \nabla _{\varepsilon }^{\pm }{\textbf{f}}_{m}(x) \big \vert ^2 \le 2\pi ^2 m^2, \) we obtain
Since \( \displaystyle \sum _{m\; \text {even}}\dfrac{1}{m^{2(\gamma -1)}} < \infty \) iff \(\gamma >3/2\), we then have
Similar inequalities hold for the martingales \({\mathscr {M}}_{\varepsilon }^I\) and \({\mathscr {M}}_{\varepsilon }^R\). Hence we obtain
Inequality (27) and standard tightness criteria for martingales (see e.g. the proof of Theorem 3.1) imply that the martingale \( {\mathscr {M}}_{\varepsilon }\) is tight in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big )\), with \(\gamma >3/2\).
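The threshold \(\gamma >3/2\) comes from the p-series \(\displaystyle \sum _{m\; \text {even}} m^{-2(\gamma -1)}\), which converges iff \(2(\gamma -1)>1\). For \(\gamma =2\) the sum has the closed form \(\sum _{k\ge 1}(2k)^{-2}=\pi ^2/24\), which gives a quick numerical check (an illustration only):

```python
import math

# gamma = 2 gives exponent 2(gamma - 1) = 2; summing over even m = 2k:
# sum_k (2k)^{-2} = (1/4) * pi^2/6 = pi^2/24.
partial = sum(1.0 / m**2 for m in range(2, 200001, 2))
print(partial, math.pi**2 / 24)
```

The partial sum agrees with \(\pi ^2/24\) to within the \(O(1/M)\) tail of the series.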
In what follows \({\textbf {<}}{} {\textbf {<}}{\mathscr {M}}_\varepsilon ^{S,\gamma _0}{} {\textbf {>}}{} {\textbf {>}}_t\) denotes the operator–valued increasing process associated to the \(L^2(\mathbb {T}^1)\)–valued martingale \({\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t)\), whose trace is the increasing process associated to the real valued submartingale \(\Vert {\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t)\Vert _{L^2(\mathbb {T}^1)}^2\). Let \( \varphi \in \textrm{H}^{\gamma }\). We set \({\mathscr {M}}_{\varepsilon }^{S,\varphi }=\langle {\mathscr {M}}_{\varepsilon }^S , \varphi \rangle \). \({\mathscr {M}}_{\varepsilon }^{I,\varphi }\) and \({\mathscr {M}}_{\varepsilon }^{R,\varphi }\) are defined in the same way. \(\forall \, t\in [0,T]\), we have
We have
On the one hand we have
because the quantity \( \displaystyle \int _{\mathbb {T}^1} \dfrac{\beta _{\varepsilon }(x){\mathcal {S}}_{\varepsilon }(r,x){\mathcal {I}}_{\varepsilon }(r,x)\vert \varphi (x)\vert }{{\mathcal {A}}_{\varepsilon }(r,x)} dx\) is bounded uniformly in \( \varepsilon \). Hence \(\displaystyle \dfrac{1}{\varepsilon }\int _0^t \sum _{i=1}^{\varepsilon ^{-1}} \beta (x_i)\dfrac{S_{\varepsilon }(r,x_i)I_{\varepsilon }(r,x_i)}{A_{\varepsilon }(r,x_i)}\bigg ( \int _{V_i}\varphi (x)\,dx \bigg )\bigg [ \int _{V_i}\Big (\varphi (x)-\varphi (x_i)\Big )\,dx \bigg ] \longrightarrow 0 \), as \(\varepsilon \rightarrow 0\).
On the other hand, the fact that \(\displaystyle \underset{0\le t\le T}{\sup }\big \Vert {\textbf{X}}_{\varepsilon }(t)-{\textbf{X}}(t)\big \Vert _{\infty }\longrightarrow 0\), as \(\varepsilon \rightarrow 0\), leads to
This shows that
as \( \varepsilon \rightarrow 0. \) Similar computation shows that
from which we deduce that
Hence, if \({\dot{W}}_1 \), \({\dot{W}}_2 \) and \({\dot{W}}_3 \) are mutually independent space-time white noises, the limit of the centered Gaussian martingale \({\mathscr {M}}_{\varepsilon }^{S,\varphi }(t) \) can be identified with
In the same way
and
where \(\dot{W_3}\), \(\dot{W_4}\) and \(\dot{W_5}\) are also space-time white noises which are mutually independent, and independent from \(\dot{W_1}\), \(\dot{W_2}\). \(\square \)
Let us set \(\Im _{\varepsilon }= \big (u_{\varepsilon }\, , \, v_{\varepsilon }\, , \, w_{\varepsilon } \big )^{\texttt{T}}.\)
We need to check tightness of the family of processes \(\{\Im _{\varepsilon }(t)\, , \, t\in [0, T]\, , 0<\varepsilon < 1 \}\).
Theorem 3.1
For any \(\gamma >3/2 \), the process \(\{\Im _{\varepsilon }(t)\, , \, t\in [0, T]\, , 0<\varepsilon < 1 \}\) is tight in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big ).\)
Proof
We denote by \( {\mathcal {G}}^T_{\varepsilon }\) the collection of \( {\mathcal {F}}_t^{\varepsilon }\)-stopping times \({\overline{\tau }}\) such that \({\overline{\tau }}\le T.\) Following Aldous’ tightness criterion (see Joffe and Metivier [5]), in order to show that the process \(\{\Im _{\varepsilon }(t)\, , \, t\in [0, T]\, , 0<\varepsilon < 1 \}\) is tight in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big )\), it suffices to establish the following two conditions:
- [T]: for \(\dfrac{3}{2}<\gamma _0< \gamma \) and \(M> 0 \), there exists C such that \(\displaystyle \mathbb {P}\Big (\big \Vert \Im _{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}\ge M \Big )\le C \) for all \( t\in [0,T]\);
- [A]: \(\displaystyle \underset{\theta \rightarrow 0}{\lim }\lim \limits _{\varepsilon \rightarrow 0}\sup _{{\overline{\tau }}\in {\mathcal {G}}_{\varepsilon }^{T-\theta }}\mathbb {E}\bigg (\Big \Vert \Im _{\varepsilon }({\overline{\tau }}+\theta ) - \Im _{\varepsilon }({\overline{\tau }})\Big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg ) = 0 . \)
Let \(\dfrac{3}{2}\!< \! \gamma _0\!< \!\gamma \), and set \( \displaystyle u_{\varepsilon }^{\gamma _0}(t,x)=({\textbf{I}}-\Delta _{\varepsilon })^{-\gamma _0/2}u_{\varepsilon }(t,x)\). For all \( t\in [0,T]\), we have
If we define \({\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t):=({\textbf{I}}-\Delta _{\varepsilon })^{-\gamma _0/2}{\mathscr {M}}_\varepsilon ^S(t)\), since \(\gamma _0>3/2\), it follows from (3.1) that \({\mathscr {M}}_\varepsilon ^{S,\gamma _0}(t)\) is bounded as \(\varepsilon \rightarrow 0\), as an \(L^2(\mathbb {T}^1)\)–valued martingale. Applying the Itô formula to \(\vert u_{\varepsilon }^{\gamma _0}(t,x) \vert ^2\) and integrating over \(\mathbb {T}^1\) leads to
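For orientation, the Itô formula for the squared \(L^2\)-norm has the following schematic form, with \({\mathscr {M}}\) denoting the martingale part of \(u_{\varepsilon }^{\gamma _0}\) and \([{\mathscr {M}}]\) its quadratic variation (the precise drift terms are those of the evolution equation for \(u_\varepsilon \)):

\( \displaystyle \big \Vert u_{\varepsilon }^{\gamma _0}(t)\big \Vert _{L^2}^2=\big \Vert u_{\varepsilon }^{\gamma _0}(0)\big \Vert _{L^2}^2+2\int _0^t\big \langle u_{\varepsilon }^{\gamma _0}(s^-)\, ,\, du_{\varepsilon }^{\gamma _0}(s)\big \rangle +\big [{\mathscr {M}}\big ]_t. \)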
Letting \(t=T\) and taking the expectation, we deduce that
Next we want to take the supremum over [0, T] in the previous identity. To that end, we use the Burkholder–Davis–Gundy inequality, which implies that
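We recall the form of the Burkholder–Davis–Gundy inequality used here: for a càdlàg local martingale \(M\) with \(M_0=0\), there is a universal constant \(C\) such that

\( \displaystyle \mathbb {E}\Big (\underset{0\le t\le T}{\sup }\vert M_t\vert \Big )\le C\, \mathbb {E}\Big (\big [M\big ]_T^{1/2}\Big ), \)

applied componentwise to the martingale terms.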
We then obtain, thanks to (26),
We also obtain similar inequalities for \(v_\varepsilon \) and \(w_\varepsilon \). Hence there exists a constant C such that for all \(\varepsilon >0\),
Then [T] follows by using Markov’s inequality.
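Indeed, by Markov's inequality and the uniform bound just obtained, for every \(M>0\),

\( \displaystyle \mathbb {P}\Big (\big \Vert \Im _{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}\ge M \Big )\le \dfrac{1}{M^2}\,\mathbb {E}\Big (\big \Vert \Im _{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma _0}}}^2\Big )\le \dfrac{C}{M^2}, \)

uniformly in \(t\in [0,T]\) and in \(\varepsilon \), which is condition [T].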
Let \(\theta >0\) and \( {\overline{\tau }} \in {\mathcal {G}}^{T-\theta }_{\varepsilon } \). We have
So,
Let us deal with each term separately. First using the inequality (9), there is a constant \( C(\gamma )\) such that
Let \( 3/2<\gamma ^{\prime }<\gamma \), and let c be a positive constant. We have
and
Then
On the one hand, since \(\mathbb {E}\left( \Big \Vert \big [\textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }})\Big \Vert _{\textrm{H}^{-\gamma ^{\prime }}}^2\right) \le C\), we can choose c large enough such that \(\displaystyle C(\gamma )(1+c)^{\gamma ^{\prime }-\gamma } \mathbb {E}\left( \Big \Vert \big [\textsf {T}_{\!\varepsilon ,S}(\theta )-{\textbf{I}}\big ] u_{\varepsilon }({\overline{\tau }})\Big \Vert _{\textrm{H}^{-\gamma ^{\prime }}}^2\right) \le \varepsilon /2.\) On the other hand, we have
Since
then for the previous choice of c, we can choose \(\theta \) small enough such that \(C(\gamma )\underset{\lambda _m^{\varepsilon }<c}{\sup }\big (1-e^{-\lambda _m^{\varepsilon }\theta }\big )^2 \mathbb {E}\bigg (\big \Vert u_{\varepsilon }({\overline{\tau }})\big \Vert _{_{\textrm{H}^{-\gamma }}}^2\bigg )\le \varepsilon /2\). Hence
Secondly, using the equivalence of the norms \(\Vert . \Vert _{_{\textrm{H}^{-\gamma }}}\) and \(\Vert . \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}\), and the fact that \(\textsf {T}_{\!\varepsilon ,S}\) is a contraction semigroup on \( \texttt{H}_{\varepsilon }\), we have
Hence the condition [A] is proved.
In the same way, we prove similar estimates for \(v_{\varepsilon }\) and \(w_{\varepsilon }\). Hence the process \( \big \{ \Im _\varepsilon (t),\; t\in [0,T], 0<\varepsilon <1 \big \}\) is tight in \(C\big ([0,T]; (\textrm{H}^{-\gamma })^3\big )\) for \(\gamma >3/2\). \(\square \)
Lemma 3.2
For \(3/2<\gamma <2\), the process \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) converges in law in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T; (\textrm{H}^{-1})^3\big )\).
Proof
On the one hand, by Theorem 3.1 the process \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) is tight in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big ) \), so along a subsequence it converges in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big ) \). On the other hand, the sequence \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) is bounded in \( L^2\big (0,T\; ; \; (\textrm{H}^{1-\gamma })^3\big )\). Indeed, for all \(\varepsilon \), we have
where the third equality follows from the fact that
The inequality (28) ensures that \( \displaystyle \mathbb {E}\int _0^T \Big [\big \Vert u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2 + \big \Vert \nabla _{\!\!\varepsilon }^+u_{\varepsilon }(t)\big \Vert _{_{\textrm{H}^{-\gamma }}}^2 \Big ]dt\) is bounded by a constant independent of \(\varepsilon \). It then follows that
We have similar estimates for \( v_{\varepsilon }\) and \(w_{\varepsilon }\). Thus
This implies that, from the sequence \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\), we can extract a subsequence which converges in law in \( L^2\big (0,T\; ; \;(\textrm{H}^{1-\gamma })^3\big )\) endowed with the weak topology. Furthermore, since the embedding of \(\textrm{H}^{1-\gamma }\) into \(\textrm{H}^{-1}\) is compact and we have convergence in \(C\left( [0, T]\, ; (\textrm{H}^{-\gamma })^3\right) \), the extracted sequence converges in fact in \( L^2\big (0,T\; ; \; (\textrm{H}^{-1})^3\big )\). Hence, we deduce that there exists a subsequence which converges in law in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T\; ; \; (\textrm{H}^{-1})^3\big )\).
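The compactness of the embedding invoked here is the standard Rellich-type fact on the torus: for \(s>t\), the embedding \(\textrm{H}^{s}(\mathbb {T}^1)\hookrightarrow \textrm{H}^{t}(\mathbb {T}^1)\) is compact, since in Fourier variables

\( \displaystyle \big \Vert h\big \Vert _{\textrm{H}^{t}}^2=\sum _m \big (1+4\pi ^2m^2\big )^{t}\,\vert {\hat{h}}(m)\vert ^2 \)

and the weights \(\big (1+4\pi ^2m^2\big )^{t-s}\) tend to 0 as \(\vert m\vert \rightarrow \infty \). It applies with \(s=1-\gamma \) and \(t=-1\), because \(1-\gamma >-1\) when \(\gamma <2\).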
We note that the limit \( \displaystyle \Im :=(u, v, w)^{\texttt{T}}\) of any convergent subsequence satisfies the following system of stochastic PDEs
and the solution of that system is unique. Hence the whole process \( \{ \, \Im _{\varepsilon }(t), \; t\in [0,T], \; 0<\varepsilon <1 \, \}\) converges in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T\; ; \; (\textrm{H}^{-1})^3\big )\). \(\square \)
Lemma 3.3
As \(\varepsilon \rightarrow 0\), \(f_\varepsilon u_\varepsilon \Longrightarrow fu\), and \(g_\varepsilon v_\varepsilon \Longrightarrow gv\) in \(L^2\Big (0,T ; \textrm{H}^{-1}\Big )\).
Proof
The convergence \(f_\varepsilon u_\varepsilon \Longrightarrow fu\) follows from the fact that \(u_\varepsilon \Longrightarrow u\) in \(L^2\big (0,T\; ; \; \textrm{H}^{-1}\big )\) and \(f_\varepsilon \longrightarrow f\) in \(C\big ([0,T]\; ; \; \textrm{H}^{1}\big )\). The proof of the convergence \(g_\varepsilon v_\varepsilon \Longrightarrow gv\) is similar. \(\square \)
We are now interested in the convergence of the process \({\overline{\Im }}_\varepsilon := \left( {\overline{u}}_\varepsilon , {\overline{v}}_\varepsilon , {\overline{w}}_\varepsilon \right) \).
Lemma 3.4
For any \(T>0\), there exists a positive constant C such that
where \( \displaystyle \eta _{_T} :=\int _{0}^{T}\Big (\big \Vert u_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2 + \big \Vert v_\varepsilon (s)\big \Vert _{\textrm{H}^{-1}}^2\Big ) ds.\)
Proof
For all \(t\in [0 , T],\) we have
Then
Since \(\displaystyle f_\varepsilon (t) u_\varepsilon (t) \in \textrm{H}^{-1}\) and \(\displaystyle g_\varepsilon (t) v_\varepsilon (t) \in \textrm{H}^{-1}\), we have
Let \(\delta \) be some constant such that \(0<\delta <\dfrac{\mu _S}{C}.\) We have
Then
In the same way, we prove that
and
By adding the inequalities (31) , (32) and (33), we obtain
Hence applying Gronwall’s Lemma we obtain
\(\square \)
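For clarity, the version of Gronwall's Lemma used in the proof above is the following: if \(\phi \ge 0\) satisfies \(\displaystyle \phi (t)\le a+b\int _0^t\phi (s)\,ds\) for all \(t\in [0,T]\), with constants \(a, b\ge 0\), then

\( \displaystyle \phi (t)\le a\, e^{bt}, \quad t\in [0,T], \)

which here produces the constant C and the factor \(\eta _{_T}\) appearing in the statement of Lemma 3.4.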
We now deduce the convergence in law of \(({\overline{u}}_\varepsilon ,{\overline{v}}_\varepsilon ,{\overline{w}}_\varepsilon )\) from the fact that the pair \((u_\varepsilon ,v_\varepsilon )\) converges in law towards (u, v) in \(L^2(0,T; (\textrm{H}^{-1})^2)\).
Lemma 3.5
The process \(\{\, \left( {\overline{u}}_\varepsilon (t),{\overline{v}}_\varepsilon (t), {\overline{w}}_\varepsilon (t)\right) , 0\le t\le T, \; \; 0<\varepsilon <1\, \}\Rightarrow \{\, ({\overline{u}}(t),{\overline{v}}(t), {\overline{w}}(t)), 0\le t\le T\, \}\) in \(L^2\big (0,T; (L^2)^3\big )\cap C([0,T];(\textrm{H}^{-1})^3)\), where the limit \(\{\,\left( {\overline{u}}(t),{\overline{v}}(t), {\overline{w}}(t)\right) , 0\le t\le T\,\}\) is the unique solution of the following system of parabolic PDEs
Proof
Let
Note that both \({\overline{\Im }}_\varepsilon \) and \(F_\varepsilon \) belong to \(L^2(0,T ; (\texttt{H}_{\varepsilon })^3)\). We have the following system of ODEs
Lemma 3.3 tells us that, as \(\varepsilon \rightarrow 0\),
where
We apply the well-known theorem due to Skorohod, which asserts that, after redefining the probability space, we may assume that \(F_\varepsilon \rightarrow F\) a.s. strongly in \(L^2((0,T); (\textrm{H}^{-1})^3)\). Our assumptions imply that both \({\overline{\Im }}_\varepsilon \) and \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \) are bounded in \((L^2((0,T)\times \mathbb {T}^1))^3\). Hence, along a subsequence, \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) and \(\displaystyle \nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rightarrow {\overline{G}}\) weakly in \((L^2((0,T)\times \mathbb {T}^1))^3\). Moreover, it follows from a duality argument that \( {\overline{G}}=\nabla {\overline{\Im }}\), and taking the weak limit in (35), we deduce that \({\overline{\Im }}\) is the unique solution of the system of parabolic PDEs
with
Hence all converging subsequences have the same limit, and the whole sequence converges.
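The duality argument mentioned above is simply discrete integration by parts: assuming the usual summation-by-parts convention for the discrete gradients \(\nabla _\varepsilon ^{\pm }\), for every \(\varphi \in C^1(\mathbb {T}^1)\),

\( \displaystyle \big \langle \nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \, ,\, \varphi \big \rangle =-\big \langle {\overline{\Im }}_\varepsilon \, ,\, \nabla _\varepsilon ^-\varphi \big \rangle \longrightarrow -\big \langle {\overline{\Im }}\, ,\, \nabla \varphi \big \rangle =\big \langle \nabla {\overline{\Im }}\, ,\, \varphi \big \rangle , \)

since \(\nabla _\varepsilon ^-\varphi \rightarrow \nabla \varphi \) uniformly; comparing with the weak limit \({\overline{G}}\) identifies \({\overline{G}}=\nabla {\overline{\Im }}\).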
We now show that the pair \(\langle {\overline{\Im }}_\varepsilon ,\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rangle \) converges strongly in \((L^2((0,T)\times \mathbb {T}^1))^6\). We first note that both \({\overline{\Im }}_\varepsilon \) and \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \) are bounded in \((L^2((0,T)\times \mathbb {T}^1))^3\), but also \(\frac{d}{dt}{\overline{\Im }}_\varepsilon \) is bounded in \(L^2((0,T);(\textrm{H}^{-1}(\mathbb {T}^1))^3)\). From these estimates, we deduce with the help of Theorem 5.4 in Droniou et al. [4] that \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) strongly in \((L^2((0,T)\times \mathbb {T}^1))^3\). Next we deduce from (35) that
hence
We have an analogous identity for the limiting quantities, namely:
It follows from the strong convergence of \(F_\varepsilon \) to F in \(L^2(0,T;(\textrm{H}^{-1})^3)\), the strong convergence of \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) in \((L^2((0,T)\times \mathbb {T}^1))^3\) and the weak convergence of \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \) to \(\nabla {\overline{\Im }}\) in \((L^2((0,T)\times \mathbb {T}^1))^3\) that the right hand side of (38) converges to the right hand side of (39). Hence the left hand side of (38) converges to the left hand side of (39). Consequently
This last result follows from the convergence of the left hand side of (38) to that of (39), and the facts that
and
The second convergence follows from the fact that \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rightarrow \nabla {\overline{\Im }}\) in \((L^2((0,T)\times \mathbb {T}^1))^3\) weakly. Concerning the first one, we deduce from the equations and the above statements that \({\overline{\Im }}_\varepsilon (T)\rightarrow {\overline{\Im }}(T)\) weakly in \((\textrm{H}^{-1})^3\). But since that sequence is bounded in \((L^2(\mathbb {T}^1))^3\), it also converges weakly in \((L^2(\mathbb {T}^1))^3\).
The fact that \(\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \rightarrow \nabla {\overline{\Im }}\) strongly in \((L^2((0,T)\times \mathbb {T}^1))^3\) clearly follows from (40).
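Behind this last step lies the elementary Hilbert-space fact that weak convergence together with convergence of norms implies strong convergence:

\( \displaystyle \big \Vert x_\varepsilon -x\big \Vert ^2=\big \Vert x_\varepsilon \big \Vert ^2-2\big \langle x_\varepsilon \, ,\, x\big \rangle +\big \Vert x\big \Vert ^2\longrightarrow \big \Vert x\big \Vert ^2-2\big \Vert x\big \Vert ^2+\big \Vert x\big \Vert ^2=0, \)

applied with \(x_\varepsilon =\nabla _\varepsilon ^+{\overline{\Im }}_\varepsilon \) in \((L^2((0,T)\times \mathbb {T}^1))^3\).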
The above arguments imply that a.s.
Now the convergence \({\overline{\Im }}_\varepsilon \rightarrow {\overline{\Im }}\) in \(C([0,T];(H^{-1})^3)\) follows readily from the equation. \(\square \)
By Lemma 3.2, \(\Im _\varepsilon \Rightarrow \Im \) in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T; (\textrm{H}^{-1})^3\big )\), and in Lemma 3.5 we used the Skorohod theorem to deduce that \({\overline{\Im }}_\varepsilon \Rightarrow {\overline{\Im }}\) in \(L^2\big (0,T;(L^2)^3\big )\cap C([0,T];(\textrm{H}^{-1})^3)\). Hence the same Skorohod theorem allows us to take the limit in the sum \(\Im _\varepsilon +{\overline{\Im }}_\varepsilon \), which yields the following result.
Theorem 3.2
(Functional central limit theorem) For \(3/2<\gamma <2\), as \(\varepsilon \rightarrow 0\), \(\{{\mathscr {Y}}_{\varepsilon }(t)\; , \; 0\le t\le T \}_{0<\varepsilon <1} \Longrightarrow \{{\mathscr {Y}}(t)\; , \; 0\le t\le T \}\) in \(C\big ([0,T]\, ; \, (\textrm{H}^{-\gamma })^3\big )\cap L^2\big (0,T; (\textrm{H}^{-1})^3\big )\), where the limit \({\mathscr {Y}}\) is the solution of the following system of SPDEs: for all \(\varphi \in \textrm{H}^{1}\)
Final remarks: \(\bullet \) Our functional central limit theorem is established in dimension 1. The difficulty in higher dimension is the following. \(\gamma >3/2\) has to be replaced by \(\gamma >1+d/2\). Then in Lemma 3.2 we have convergence in \(L^2(0,T;(H^{1-\gamma })^3)\cap C([0,T];(H^{-\gamma })^3)\). Note that \(1-\gamma <-d/2\). Already in dimension 2, we have \(1-\gamma <-1\), and there is a serious difficulty with the analog of Lemma 3.5.
\(\bullet \) In this work, we have first let \({\textbf{N}}\rightarrow \infty \), while \(\varepsilon >0\) is fixed, and then let \(\varepsilon \rightarrow 0\). The case where \({\textbf{N}}\rightarrow +\infty \) and \(\varepsilon \rightarrow 0\) together, with some constraint on the relative speeds of convergence (which does not allow \({\textbf{N}}\) to converge too slowly to \(\infty \) while \(\varepsilon \rightarrow 0\)) will be the subject of future work.
References
Bahouri, H., Chemin, J., Danchin, R.: Fourier Analysis and Nonlinear Partial Differential Equations. Springer, Berlin (2011)
Blount, D.J.: Limit theorems for a sequence of nonlinear reaction–diffusion systems. Stoch. Process. Appl. 45(2), 193–207 (1993)
Britton, T., Pardoux, E.: Stochastic Epidemic Models with Inference. Lecture Notes in Mathematics 2255. Springer (2019)
Droniou, J., Eymard, R., Gallouët, T., Herbin, R.: The Gradient Discretisation Method. Springer, New York (2018)
Joffe, A., Metivier, M.: Weak convergence of sequences of semimartingales with applications to multitype branching processes. Adv. Appl. Probab. 18(1), 20–65 (1986)
Kermack, W.O., McKendrick, A.G.: A contribution to the mathematical theory of epidemics. Proc. Roy. Soc. A 115, 700–721 (1927). Reprinted in Bull. Math. Biol. 53, 33–55 (1991)
Kotelenez, P.: Gaussian approximation to the nonlinear reaction–diffusion equation. Report 146, Universität Bremen Forschungsschwerpunkt Dynamische Systemes (1986)
Kotelenez, P.: A stopped Doob inequality for stochastic convolution integrals and stochastic evolution equations. Stoch. Anal. Appl. 2(3), 245–265 (1984)
Kurtz, T.G.: Limit theorems for sequences of jump Markov processes approximating ordinary differential processes. J. Appl. Probab. 8, 344–356 (1971)
N’zi, M., Pardoux, E., Yeo, T.: A SIR model on a refining spatial grid I. Law of large numbers. Appl. Math. Optim. 83, 1153–1189 (2021)
Taylor, M.E.: Partial Differential Equations III: Nonlinear Equations. Springer, Berlin (1991)
Acknowledgements
The author Ténan Yeo would like to thank the Marseille Mathematics Institute (I2M) for funding his stay in Marseille, during which part of this work was carried out.
Funding
The authors were supported by their respective university.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest and no competing interests relevant to the content of this article.
Appendix A
Lemma A.1
Let \((h_\varepsilon )_{0<\varepsilon <1}\) be a sequence with \(h_\varepsilon \in \texttt{H}_{\varepsilon }\). If \((h_\varepsilon )_{0<\varepsilon <1}\) is bounded in \(\textrm{H}^{1,\varepsilon }\), then it is relatively compact in \(L^2\), and the limit of any convergent subsequence belongs to \(\textrm{H}^{1}\).
Proof
Since the sequence \((h_\varepsilon )\) is bounded in \(L^2\) and \(\big \Vert \nabla _{\!\!\varepsilon }^+h_\varepsilon \big \Vert _{L^2}\le C\big \Vert h_\varepsilon \big \Vert _{_{\textrm{H}^{1}}}\), the relative compactness follows from the compactness theorem of Kolmogorov in \(L^2\).
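For completeness, the compactness theorem of Kolmogorov (also called the Fréchet–Kolmogorov criterion) in \(L^2(\mathbb {T}^1)\) states that a family \({\mathcal {F}}\subset L^2(\mathbb {T}^1)\) is relatively compact if and only if it is bounded and equicontinuous in mean, i.e.

\( \displaystyle \underset{h\in {\mathcal {F}}}{\sup }\big \Vert h(\cdot +\tau )-h(\cdot )\big \Vert _{L^2}\longrightarrow 0\quad \text {as } \tau \rightarrow 0; \)

the uniform bound on \(\nabla _{\!\!\varepsilon }^+h_\varepsilon \) provides precisely this control of the translates.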
The fact that the limit of any convergent subsequence belongs to \(\textrm{H}^{1}\) follows from the discrete integration by parts
and letting \(\varepsilon \) go to zero in this equation. \(\square \)
Lemma A.2
For all \(\displaystyle u_{\varepsilon } \in \texttt{H}_{\varepsilon } \)
\( \displaystyle \big \Vert \nabla _{\!\!\varepsilon }^- u_{\varepsilon }\big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2 = \big \Vert \nabla _{\!\!\varepsilon }^+ u_{\varepsilon }\big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2= \sum _m \left( \langle \,u_{\varepsilon } , \varphi _{m}^{\varepsilon }\,\rangle ^2 + \langle \,u_{\varepsilon } , \psi _{m}^{\varepsilon }\,\rangle ^2 \right) \lambda _{m}^{\varepsilon }(1+\lambda _{m}^{\varepsilon })^{-\gamma }. \)
Proof
We have
where \(a_{m,\varepsilon }=\varepsilon ^{-1}\sin (\pi m\varepsilon )\) and \(b_{m,\varepsilon }=\varepsilon ^{-1} (\cos (\pi m\varepsilon )-1)\).
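A direct computation from these definitions, consistent (up to the normalization chosen for the eigenfunctions \(\varphi _m^{\varepsilon }, \psi _m^{\varepsilon }\)) with the eigenvalues \(\lambda _m^{\varepsilon }\) of \(-\Delta _\varepsilon \), gives

\( \displaystyle a_{m,\varepsilon }^2+b_{m,\varepsilon }^2=\varepsilon ^{-2}\Big [\sin ^2(\pi m\varepsilon )+\big (\cos (\pi m\varepsilon )-1\big )^2\Big ]=2\varepsilon ^{-2}\big (1-\cos (\pi m\varepsilon )\big )\longrightarrow \pi ^2m^2, \)

as \(\varepsilon \rightarrow 0\), for each fixed m.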
We have
Let \(\displaystyle u_{\varepsilon } \in \texttt{H}_{\varepsilon } \). We have
The proof of \(\displaystyle \big \Vert \nabla _{\!\!\varepsilon }^- u_{\varepsilon }\big \Vert _{_{\textrm{H}^{-\gamma , \varepsilon }}}^2 = \sum _m \left( \langle \,u_{\varepsilon } , \varphi _{m}^{\varepsilon }\,\rangle ^2 + \langle \,u_{\varepsilon } , \psi _{m}^{\varepsilon }\,\rangle ^2 \right) \lambda _{m}^{\varepsilon }(1+\lambda _{m}^{\varepsilon })^{-\gamma } \) is similar by noting that
\(\square \)
Cite this article
Gallouët, T., Pardoux, É. & Yeo, T. A SIR epidemic model on a refining spatial grid II-central limit theorem. Stoch PDE: Anal Comp (2024). https://doi.org/10.1007/s40072-024-00333-0