1 Introduction

The continuum limit is the deterministic or stochastic differential equation obtained as the approximation of the difference equations modeled by interacting particle systems when the network size goes to infinity. This limit has been used to understand the dynamics of large networks, for example, the existence and stability of solutions. Many studies have investigated different models and their continuum limits, such as the hydrodynamic and high-density types. The main difference between the two limits is the parameter to be rescaled. Specifically, the hydrodynamic limit rescales time and space, while the high-density limit rescales time, space, and the initial quantity of particles per site [8]. Although the high-density limit has interesting characteristics, studies investigating this limit are fewer than those investigating the hydrodynamic limit.

To the best of our knowledge, [18, 19] were the first studies to investigate the high-density limit for a Markov chain approximating ordinary differential equations. Arnold and Theodosopulu considered the stochastic model as a space–time Markov process related to chemical reactions and diffusion [1]. Given a network of size N, their model is given by a simple random walk with jump rate proportional to \(N^{2}\) that moves to neighboring sites without self-loops, and the number of particles in each site is unlimited. This process is called the zero-range process. Moreover, they showed a high-density limit, or law of large numbers, for this model and revealed that the continuum limit is governed by a standard heat equation:

$$\begin{aligned} \frac{d}{dt}u(t,x)=\Delta u(t,x). \end{aligned}$$
(1.1)

In [4,5,6, 15, 17], the central limit theorem and the law of large numbers were investigated in various settings, specifically with different boundary conditions and reaction terms. Such studies have been conducted for equations such as (1.1), for parabolic partial differential equations [16], and for Lotka–Volterra systems [26].

Recently, discretized models of nonlocal diffusion or interaction, which represent phenomena that cannot be modeled by local interactions alone, have attracted attention. Moreover, the approximation theory revealing their continuum limits has been tackled by many researchers. For instance, [11] considered a random graph \(\Gamma ^{N}=(V(\Gamma ^{N}),E(\Gamma ^{N}))\), where \(V(\Gamma ^{N})\) and \(E(\Gamma ^{N})\) denote the sets of nodes and edges, respectively. The discrete model on \(\Gamma ^{N}\) is given, for \({i}\in {V(\Gamma ^{N})}\), by

$$\begin{aligned} \frac{d}{dt}u^{N}(t,i)=\frac{1}{\mathrm{deg}_{\Gamma ^{N}}(i)}{\sum _{j:{\{i,j\}}\in {E(\Gamma ^{N})}}(u^{N}(t,j)-u^{N}(t,i))}+R(u^{N}(t,i)), \end{aligned}$$
(1.2)

and it was shown that the continuum limit of (1.2) is governed by

$$\begin{aligned} \frac{d}{dt}u(t,x)={\int _{0}^{1}W(x,y)\left( u(t,y)-u(t,x)\right) dy}+R(u(t,x)), \end{aligned}$$

where \(W:[0,1]^{2}\rightarrow [0,1]\) is the graphon, that is, a measurable function representing the limiting behavior of \(\Gamma ^{N}\) (see [21]).
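As a concrete illustration of (1.2), the dynamics can be integrated numerically on a sampled graph. The following minimal Python sketch is our own illustration: the Erdős–Rényi sampling, the linear reaction \(R(u)=ru\), and all parameter values are assumptions, not taken from [11].

```python
import numpy as np

# Sketch: forward-Euler integration of the graph dynamics (1.2) on an
# Erdos-Renyi graph G(N, p), whose limiting graphon is W(x, y) = p.
rng = np.random.default_rng(0)
N, p, r, dt, T = 200, 0.3, -0.5, 0.01, 2.0   # illustrative parameters

A = (rng.random((N, N)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no loops

deg = np.maximum(A.sum(axis=1), 1.0)         # degrees, guarded against isolated nodes
u = np.sin(np.pi * np.arange(N) / N)         # initial data u0(i/N)

for _ in range(int(T / dt)):
    coupling = A @ u / deg - u               # (1/deg_i) * sum over edges of (u_j - u_i)
    u = u + dt * (coupling + r * u)          # Euler step with reaction R(u) = r*u

print(u[:5])                                 # approximation of u^N(T, i), i = 0,...,4
```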

In [27], we attempted to determine the high-density limit of the nonlocal diffusion model, and two mathematical models of nonlocal diffusion on homogeneous networks were considered. First, the deterministic model u(t, x) was given by the following integro-differential equation proposed in [9]:

$$\begin{aligned} \frac{d}{dt}u(t,x)={\int _{0}^{1}u(t,y)dy}-u(t,x)+R(u(t,x)), \end{aligned}$$

where \(R(\cdot )\) is the reaction term given by a first-degree polynomial. Second, the stochastic model \(X^{N}(t,x)\) was scaled by \(l>0\), a parameter proportional to the initial average number of particles per site. It was constructed using a random walk on the discrete torus \({\mathbb {Z}}/N{\mathbb {Z}}\) that can jump to any site with a rate proportional to \(N^{-1}\). Adopting the methods of [3], we compared these models and proved that the law of large numbers and a weak convergence theorem hold for our particle model. To show weak convergence in the \(L^{2}\) framework, we considered the rescaled difference \(\sqrt{l}(X^{N}(t)-u(t))\), and only proved that it weakly converges to a stochastic process that is a mild solution to a stochastic differential equation on the Skorokhod space. In other words, we could not show that it weakly converges to the Ornstein–Uhlenbeck process. For this reason, we called the result a weak convergence theorem rather than a central limit theorem.

The aims of this study are as follows: first, we extend the analysis to a nonlocal diffusion model for inhomogeneous networks. To include the structure of networks, we use the measurable function \(J(x,y):[0,1]\times [0,1]\rightarrow {\mathbb {R}}\) as the distribution of jumps from location x to y. Therefore, the deterministic model is governed by

$$\begin{aligned} \frac{d}{dt}u(t,x)={\int _{0}^{1}J(x,y)\bigl (u(t,y)-u(t,x)\bigr )dy}+R(u(t,x)), \end{aligned}$$
(1.3)

where \(R(\cdot )\) is the same as above. By an adequate discretization of J, a stochastic model on an inhomogeneous network is also constructed. Second, we try to reveal more detailed properties of the limits by considering whether the central limit theorem holds for our models. The methods in this study benefited from [3, 4], and there are two points of difference between those works and this work. First, in those works, the two limit theorems were considered on a Sobolev space because of the regularity argument related to the Laplacian operator. However, we consider nonlocal integral operators, and thus we must construct a different function space. To this end, we consider the reproducing kernel Hilbert space (RKHS) \({\mathscr {H}}_{J}\) based on the network structure. We denote the dual space of \({\mathscr {H}}_{J}\) associated with the usual inner product by \({\mathscr {H}}_{-J}\). Second, to establish the two limit theorems (in particular, the central limit theorem), we obtain the convergence rate of the semigroups. In [3, 4], the author utilized spectral decomposition by the eigenfunctions and eigenvalues of the Laplacian and discrete Laplacian operators and their convergence. However, in this study, because explicit forms of the eigenfunctions and eigenvalues of the integral and summation operators are unavailable, we use the \(\frac{1}{2}\)-Hölder regularity of functions in the RKHS \({\mathscr {H}}_{J}\), which is obtained by assuming the Lipschitz continuity of the kernel J. Using these spaces and this technique, we can prove the central limit theorem as well as the law of large numbers. Specifically, we prove that the difference \(X^{N}(t)-u(t)\) converges to 0 in probability on \({\mathscr {H}}_{-J}\) and that the rescaled difference \(\sqrt{Nl}(X^{N}(t)-u(t))\) converges to the Ornstein–Uhlenbeck process U(t) in distribution on \(\mathbb {D}_{{\mathscr {H}}_{-J}}[0,T]\) equipped with the Skorokhod topology. Furthermore, assuming \({\int _{0}^{1}J(x,y)dy}=1\) for any \({x}\in {[0,1]}\), we can prove that U(t) has a continuous sample path. However, because the \(L^{2}\) space cannot be constructed as \({\mathscr {H}}_{J}\) by its definition (see Remark 2.1), the central limit theorem remains an open problem in the \(L^{2}\)-framework.

The remainder of this paper is organized as follows. In Sect. 2, we provide the definition and some properties of the RKHS \({\mathscr {H}}_{J}\) with kernel J. In Sect. 3, we introduce the deterministic model of nonlocal diffusion on inhomogeneous networks, which is governed by Equation (1.3) with periodic boundary conditions. In Sect. 4, we construct a stochastic model of nonlocal diffusion on a discrete torus \({\mathbb {Z}}/N{\mathbb {Z}}\). Finally, in Sects. 5 and 6, we prove the law of large numbers and the central limit theorem.

2 RKHS

In this section, we provide a definition and some well-known results for RKHSs and construct such a space using the eigenfunctions of the integral operator.

Definition 2.1

Let H be a Hilbert space of real-valued functions on [0, 1] with an inner product \(\langle \cdot , \cdot \rangle _{H}\).

  1. (i)

    A function \(J:{[0,1]^{2}}\rightarrow {\mathbb {R}}\) is a reproducing kernel of H if we have \({J(\cdot ,x)}\in {H}\) for all \({x}\in {[0,1]}\), and the reproducing property,

    $$\begin{aligned} f(x)=\langle f, J(\cdot ,x) \rangle _{H}, \end{aligned}$$

    holds for all \({f}\in {H}\) and all \({x}\in {[0,1]}\).

  2. (ii)

    The space H is an RKHS over [0, 1] if for all \({x}\in {[0,1]}\) the Dirac functional \(\delta _{x}:H\rightarrow {\mathbb {R}}\) defined by

    $$\begin{aligned} \delta _{x}(f):=f(x),\quad {f}\in {H}, \end{aligned}$$

    is continuous.

Remark 2.1

Note that \(L^{2}([0,1])\) is not an RKHS because the Dirac functional \(\delta _{x}\) is not continuous on \(L^{2}([0,1])\).

Next, we define the integral operator \(T_{J}\) on \(L^{2}([0,1])\) by

$$\begin{aligned} T_{J}f(x)={\int _{0}^{1}J(x,y)f(y)dy},\quad {f}\in {L^{2}([0,1])}. \end{aligned}$$

The following are fundamental results for constructing the separable RKHS associated with \(T_{J}\) with kernel J:

Theorem 2.1

(Theorems 4.20 and 4.21 in [25]) Let \(J:{[0,1]}^{2}\rightarrow {\mathbb {R}}\) be positive definite; that is, for all \({n}\in {{\mathbb {N}}}\), \({\alpha _{1},\ldots ,\alpha _{n}}\in {{\mathbb {R}}}\), and all \({x_{1},\ldots ,x_{n}}\in {[0,1]}\),

$$\begin{aligned} {\sum _{i,j=1}^{n}\alpha _{i}\alpha _{j}J(x_{i},x_{j})}\ge {0}. \end{aligned}$$

Then, there exists a unique RKHS H whose reproducing kernel is J. Conversely, every RKHS H has a unique reproducing kernel k.

Theorem 2.2

(Theorem 3.a.1 in [13]) Let \({J}\in {L^{\infty }([0,1]^{2})}\) be a kernel such that \(T_{J}:{L^{2}([0,1])}\rightarrow {L^{2}([0,1])}\) is positive. Then, the eigenvalues \(\{a_{n}\}\) of \(T_{J}\) are absolutely summable. The eigenfunctions \({e_{n}}\in {L^{2}([0,1])}\) of \(T_{J}\), which form a complete orthonormal basis of \(L^{2}([0,1])\), belong to \(L^{\infty }([0,1])\) with \(\sup _{n}\Vert e_{n}\Vert _{\infty }<\infty \), and

$$\begin{aligned} J(x,y)={\sum _{n}a_{n}e_{n}(x)e_{n}(y)}, \end{aligned}$$
(2.1)

where the series converges absolutely and uniformly. Representation (2.1) is a Mercer expansion.

Theorem 2.3

(Lemma 4.33 in [25]) Let J be a continuous kernel. Then, the RKHS with kernel J is separable.

Utilizing the above results, the RKHS with integral kernel J, satisfying the assumptions in Theorems 2.1 and 2.2, is given by

$$\begin{aligned} {\mathscr {H}}_{J}=\Bigl \{{f}\in {L^{2}([0,1])} \Bigm | {\sum _{n}\langle f, e_{n} \rangle ^{2}a_{n}^{-1}}<\infty \Bigr \}, \end{aligned}$$
(2.2)

where the inner product of \({\mathscr {H}}_{J}\) is

$$\begin{aligned} \langle f, g \rangle _{{\mathscr {H}}_{J}}={\sum _{n}\langle f, e_{n} \rangle \langle g, e_{n} \rangle a_{n}^{-1}} \end{aligned}$$

for \({f,g}\in {{\mathscr {H}}_{J}}\) (for details, see Chapter 2 in [24]). We check the reproducing property. For \({f}\in {{\mathscr {H}}_{J}}\), we have

$$\begin{aligned} \langle J(x,\cdot ), f \rangle _{{\mathscr {H}}_{J}}={\sum _{n}\langle J(x,\cdot ), e_{n} \rangle \langle f, e_{n} \rangle a_{n}^{-1}}={\sum _{n}\langle f, e_{n} \rangle a_{n}e_{n}(x)a_{n}^{-1}}=f(x). \end{aligned}$$

Furthermore, for \({f}\in {L^{2}([0,1])}\), let

$$\begin{aligned} \Vert f\Vert _{{\mathscr {H}}_{-J}}^{2}={\sum _{n}\langle f, e_{n} \rangle ^{2}a_{n}}, \end{aligned}$$

and let \({\mathscr {H}}_{-J}\) be the completion of \(L^{2}([0,1])\) in the norm, \(\Vert \cdot \Vert _{{\mathscr {H}}_{-J}}\). Note that \({\mathscr {H}}_{-J}\) is the dual space of \({\mathscr {H}}_{J}\) with respect to the usual inner product.
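For orientation, a direct computation from these definitions (a consistency check, not a result taken from [24]) identifies the orthonormal systems of both spaces and the duality pairing:

$$\begin{aligned} \Vert e_{n}\Vert _{{\mathscr {H}}_{J}}^{2}=a_{n}^{-1},\qquad \Vert e_{n}\Vert _{{\mathscr {H}}_{-J}}^{2}=a_{n},\qquad {|\langle f, g \rangle |}\le {\Vert f\Vert _{{\mathscr {H}}_{-J}}\Vert g\Vert _{{\mathscr {H}}_{J}}}, \end{aligned}$$

so that \(\{a_{n}^{\frac{1}{2}}e_{n}\}\) and \(\{a_{n}^{-\frac{1}{2}}e_{n}\}\) are complete orthonormal systems of \({\mathscr {H}}_{J}\) and \({\mathscr {H}}_{-J}\), respectively; the last bound follows from the Cauchy–Schwarz inequality applied to \(\langle f, g \rangle ={\sum _{n}\bigl (\langle f, e_{n} \rangle a_{n}^{\frac{1}{2}}\bigr )\bigl (\langle g, e_{n} \rangle a_{n}^{-\frac{1}{2}}\bigr )}\). These facts are used repeatedly below (e.g., in Lemmas 5.1 and 6.6).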

Remark 2.2

Note that the functions in the RKHS \({\mathscr {H}}_{J}\) have the \(\frac{1}{2}\)-Hölder regularity whenever the kernel J is Lipschitz continuous.

Example 2.1

(Brownian motion kernel) We define the kernel \(J:{[0,1]}\times {[0,1]}\rightarrow [0,1]\) as

$$\begin{aligned} J(x,y)=\min \{x,y\},\quad {x,y}\in {[0,1]}. \end{aligned}$$

Note that this kernel is symmetric and positive semi-definite. Then, the eigenfunctions and eigenvalues of \(T_{J}\) are given by

$$\begin{aligned} e_{n}(x)=\sqrt{2}\sin \left( (2n-1)\frac{\pi x}{2}\right) \quad \text{ and }\quad a_{n}=\frac{4}{(2n-1)^{2}\pi ^{2}},\quad {n}\ge {1}, \end{aligned}$$

respectively. Furthermore, this is the Mercer expansion for J; that is, J is represented by

$$\begin{aligned} J(x,y)={\sum _{n=1}^{\infty }a_{n}e_{n}(x)e_{n}(y)}. \end{aligned}$$
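As a consistency check, the eigenrelation \(T_{J}e_{n}=a_{n}e_{n}\) can be verified directly; writing \(\lambda _{n}=(2n-1)\pi /2\) and using \(\cos \lambda _{n}=0\),

$$\begin{aligned} T_{J}e_{n}(x)={\int _{0}^{x}y\,e_{n}(y)dy}+x{\int _{x}^{1}e_{n}(y)dy}=\sqrt{2}\,\frac{\sin (\lambda _{n}x)}{\lambda _{n}^{2}}=a_{n}e_{n}(x), \end{aligned}$$

since the boundary terms \(\mp \sqrt{2}\,x\cos (\lambda _{n}x)/\lambda _{n}\) produced by the two integrals cancel.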

3 Deterministic Model

Let \(R(x)=b(x)-d(x)=b_{1}x+b_{0}-d_{1}x\) with \({b_{0},b_{1},d_{1}}\ge {0}\), and assume that there exists \(\rho >0\) such that \(R(x)<0\) for \(x>\rho \). We consider the following nonlocal reaction–diffusion equation:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{d}{dt}u(t,x)={\int _{0}^{1}J(x,y)\bigl (u(t,y)-u(t,x)\bigr )dy}+R(u(t,x)), \\ u(t,0)=u(t,1), \\ {0}\le {u(0,x)}={u_{0}(x)}\in {C([0,1])}, \end{array} \right. \end{aligned}$$
(3.1)

where the kernel J is a nonnegative, symmetric, positive-definite, and continuous function that can be interpreted as the distribution of jumps from location x to y. Thus, we can regard Equation (3.1) as representing a nonlocal diffusion on inhomogeneous networks. By standard semigroup theory (e.g., see Chapter 6 in [22]), there exists a unique mild solution \({u(t,x)}\in {C\bigl ([0,T];C([0,1])\bigr )}\) such that \({0}\le {u(t,x)}<\rho \) for \({x}\in {[0,1]}\) and \({t}\in {[0,T]}\).
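Here and in the sequel, we write \(I_{J}f(x)={\int _{0}^{1}J(x,y)\bigl (f(y)-f(x)\bigr )dy}\) for the nonlocal diffusion operator in (3.1); it is bounded on \(L^{2}([0,1])\) and hence generates the semigroup \(e^{I_{J}t}\). The mild solution of (3.1) is then the solution of the variation-of-constants formula

$$\begin{aligned} u(t)=e^{I_{J}t}u_{0}+{\int _{0}^{t}e^{I_{J}(t-s)}R(u(s))ds}. \end{aligned}$$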

4 Stochastic Model

Let \(H^{N}\) denote the space of real-valued step functions on [0, 1], which are constant on \([kN^{-1},(k+1)N^{-1})\), \({0}\le {k}\le {N-1}\). For \({f}\in {H^{N}}\), we define operator \(I_{J}^{N}\) as

$$\begin{aligned} I_{J}^{N}f(x)=\frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(x,iN^{-1})f(iN^{-1})}-\frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(x,iN^{-1})}f(x), \end{aligned}$$

where

$$\begin{aligned}&J^{N}_{k,i}=N^{2}{\int _{kN^{-1}}^{(k+1)N^{-1}}{\int _{iN^{-1}}^{(i+1)N^{-1}}J(x,y)dy}dx},\\&J^{N}(x,y)=\sum _{k=0}^{N-1}\sum _{i=0}^{N-1}J^{N}_{k,i}\mathbbm {1}_{[kN^{-1},(k+1)N^{-1})}(x)\mathbbm {1}_{[iN^{-1},(i+1)N^{-1})}(y). \end{aligned}$$

Note that this operator is a spatially discretized version of the first and second terms of Equation (3.1). To extend the domain of \(I_{J}^{N}\) to \(L^{2}([0,1])\), we define the projection mapping \(P^{N}\) from \(L^{2}([0,1])\) to \(H^{N}\) as

$$\begin{aligned} P^{N}f(x)=N{\int _{kN^{-1}}^{(k+1)N^{-1}}f(y)dy} \quad \text{ for } \quad {x}\in {[kN^{-1},(k+1)N^{-1})}, \quad {0}\le {k}\le {N-1}, \end{aligned}$$

and we consider operator \(I_{J}^{N}P^{N}\) instead of \(I_{J}^{N}\).
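As an illustration of this discretization, the following minimal Python sketch (our own; the kernel choice and the quadrature rule are assumptions for the example) computes the cell averages \(J^{N}_{k,i}\) and applies \(I_{J}^{N}P^{N}\) to a given f.

```python
import numpy as np

# Sketch: build the piecewise-constant kernel J^N by cell averages of J and
# apply the discrete operator I_J^N P^N via the formulas of Sect. 4.
def discretize_kernel(J, N, quad=8):
    # J^N_{k,i} = N^2 * integral of J over [k/N,(k+1)/N) x [i/N,(i+1)/N),
    # approximated here by a quad x quad midpoint rule on each cell pair.
    s = (np.arange(quad) + 0.5) / (quad * N)        # offsets within one cell
    x = np.arange(N)[:, None] / N + s[None, :]      # (N, quad) sample points
    return J(x[:, None, :, None], x[None, :, None, :]).mean(axis=(2, 3))

def apply_IJN(JN, fN):
    # I_J^N f = (1/N) sum_i J^N(., i/N) f(i/N) - (1/N) (sum_i J^N(., i/N)) f(.)
    N = len(fN)
    return (JN @ fN) / N - JN.sum(axis=1) / N * fN

N = 100
J = lambda x, y: np.minimum(x, y)                   # Brownian motion kernel of Example 2.1
JN = discretize_kernel(J, N)                        # (N, N) matrix of J^N_{k,i}
fN = np.cos(2 * np.pi * np.arange(N) / N)           # stand-in for P^N f, f(x) = cos(2 pi x)
print(apply_IJN(JN, fN)[:3])
```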

Let \(n_{k}(t)\) be the number of particles in the kth site on \({{\mathbb {Z}}}/N{{\mathbb {Z}}}\) at time t, and let \(\vec {n}(t) = (n_{0}(t),\ldots ,n_{N-1}(t))\) be the state of a multidimensional Markov chain at time t. Now, we define the stochastic analog as

$$\begin{aligned} X^{N}(t,r) = n_{k}(t)l^{-1}\quad \mathrm{for}\quad {r}\in {[kN^{-1},(k+1)N^{-1})}, \end{aligned}$$

which has the transition rates,

$$\begin{aligned} \left\{ \begin{array}{l} {\vec {n}}\rightarrow {\vec {n}+e_{i}-e_{k}}\quad \mathrm{at}\quad \mathrm{rate}\quad n_{k}J^{N}(kN^{-1},iN^{-1})N^{-1},\\ {\vec {n}}\rightarrow {\vec {n}+e_{k}}\quad \mathrm{at}\quad \mathrm{rate}\quad lb(n_{k}l^{-1}),\\ {\vec {n}}\rightarrow {\vec {n}-e_{k}}\quad \mathrm{at}\quad \mathrm{rate}\quad ld(n_{k}l^{-1}), \end{array} \right. \end{aligned}$$

for \({i}\in {\{0,1,\ldots ,k-1,k+1,\ldots ,N-1\}}\), where \(e_{k}\) is the kth unit vector on \({\mathbb {R}}^{N}\). Here \(n_{N}=n_{0}\) and \(n_{-1}=n_{N-1}\). We take \(\vec {n}(t)\) to be a càdlàg process defined on some probability space.
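Because these transition rates completely specify the Markov chain, the process can be simulated exactly by the Gillespie algorithm. The following minimal Python sketch (our own; the kernel, its midpoint discretization, and all parameter values are illustrative assumptions) generates one trajectory of \(X^{N}\).

```python
import numpy as np

# Sketch: exact (Gillespie) simulation of the particle system with the
# transition rates of Sect. 4, where b(x) = b1*x + b0 and d(x) = d1*x.
rng = np.random.default_rng(1)
N, l, T = 50, 200, 0.5                      # sites, density parameter, horizon
b1, b0, d1 = 0.2, 0.1, 0.5                  # reaction coefficients (d1 > b1)
x = (np.arange(N) + 0.5) / N
JN = np.minimum.outer(x, x)                 # midpoint stand-in for J^N
np.fill_diagonal(JN, 0.0)                   # no self-jumps

n = rng.poisson(l * (1 + 0.5 * np.sin(2 * np.pi * x))).astype(float)
t = 0.0
while t < T:
    jump = n[:, None] * JN / N              # rate n_k * J^N(kN^-1, iN^-1) * N^-1
    birth = l * (b1 * n / l + b0)           # rate l * b(n_k / l)
    death = l * (d1 * n / l)                # rate l * d(n_k / l)
    rates = np.concatenate([jump.ravel(), birth, death])
    total = rates.sum()
    t += rng.exponential(1.0 / total)       # exponential waiting time
    e = rng.choice(rates.size, p=rates / total)
    if e < N * N:                           # a particle jumps from site k to i
        k, i = divmod(e, N)
        n[k] -= 1.0
        n[i] += 1.0
    elif e < N * N + N:                     # a birth at site k
        n[e - N * N] += 1.0
    else:                                   # a death at site k
        n[e - N * N - N] -= 1.0

XN_T = n / l                                # the rescaled state X^N(T, .)
print(XN_T[:5])
```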

Let \({\mathscr {F}}_{t}^{N}\) be the completion of the \(\sigma \)-algebra \(\sigma (\vec {n}(s);{s}\le {t})\), and let \(\varLambda \vec {n}(t)=\vec {n}(t)-\vec {n}(t-)\) be the jump at time t. Suppose that \(\tau \) is an \({\mathscr {F}}_{t}^{N}\)-stopping time such that

$$\begin{aligned} {\sup _{{0}\le {t}\le {T}}\sup _{k} \mathbbm {1}_{\{\tau >0\}}n_{k}({t}\wedge {\tau })}\le {M(T,N,l)}<\infty , \end{aligned}$$

for all \(T>0\). Then, as in Lemma 2.2 of [3], we obtain the following:

Lemma 4.1

The following are \({\mathscr {F}}_{t}^{N}\)-martingales:

  1. (a)

    \(\displaystyle n_{k}({t}\wedge {\tau })-n_{k}(0)-{\int _{0}^{{t}\wedge {\tau }}\frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(iN^{-1},kN^{-1})(n_{i}(s)-n_{k}(s))}ds}-\int _{0}^{{t}\wedge {\tau }} lR(n_{k}(s)l^{-1})ds\).

  2. (b)

    \(\displaystyle {\sum _{{s}\le {{t}\wedge {\tau }}}(\varLambda n_{k}(s))^{2}}-{\int _{0}^{{t}\wedge {\tau }}\frac{1}{N}\sum _{\begin{array}{c} i=0\\ i\ne {k} \end{array}}^{N-1}J^{N}(iN^{-1},kN^{-1})(n_{i}(s)+n_{k}(s))ds}-\int _{0}^{{t}\wedge {\tau }} l|R|(n_{k}(s)l^{-1})ds\), where \(|R|(x)=b(x)+d(x)\).

  3. (c)

    \(\displaystyle {\sum _{{s}\le {{t}\wedge {\tau }}}(\varLambda n_{k}(s))(\varLambda n_{i}(s))}+{\int _{0}^{{t}\wedge {\tau }}\frac{1}{N}J^{N}(iN^{-1},kN^{-1})(n_{i}(s)+n_{k}(s))ds}\)  for all \({i}\ne {k}\).

By Lemma 4.1 (a) and the definition of \(X^{N}(t)\), we can write

$$\begin{aligned} X^{N}(t)=X^{N}(0)+{\int _{0}^{t}I_{J}^{N}X^{N}(s)ds}+{\int _{0}^{t}R(X^{N}(s))ds}+Z^{N}(t), \end{aligned}$$

where \(Z^{N}({t}\wedge {\tau })\) is an \(H^{N}\)-valued martingale. Furthermore, by utilizing Lemma 4.1, we can estimate the quadratic variation of the martingale \(Z^{N}\).

Lemma 4.2

For \({\xi }\in {H^{N}}\),

$$\begin{aligned}&{\sum _{{s}\le {t}}(\varLambda \langle Z^{N}({s}\wedge {\tau }), \xi \rangle )^{2}}\\&-\frac{1}{Nl} \int _{0}^{{t}\wedge {\tau }}\Biggl \{\langle X^{N}(s), \frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(\cdot ,iN^{-1})\left( \xi (\cdot )-\xi (iN^{-1})\right) ^{2}} \rangle +\langle |R|(X^{N}(s)), \xi ^{2} \rangle \Biggr \}ds \end{aligned}$$

is a mean-zero martingale.

Proof

Note that the first term in the above formula simply counts the number of events between each pair of sites. Hence, we obtain terms that only include the transition rate \(J^{N}(\cdot ,\cdot )\) without the square of \(J^{N}(\cdot ,\cdot )\). The proof follows by a calculation similar to that of Lemma 3.2 in [27], and hence we omit it. \(\square \)

5 Law of Large Numbers

In this section, we consider the behavior of the difference between \(X^{N}\) and u. First, we set \({\overline{J}}=\Vert J(\cdot ,\cdot )\Vert _{L^{\infty }([0,1]^{2})}<\infty \). Under the assumptions on the kernel J stated in Sect. 2, we can construct the separable RKHS \({\mathscr {H}}_{J}\) of the form (2.2). Furthermore, suppose that there is a constant a such that

$$\begin{aligned} 0<{a}\le {\int _{0}^{1}J(x,y)dy},\quad {x}\in {[0,1]}. \end{aligned}$$

By definition, we immediately obtain

$$\begin{aligned} 0<{a}\le {\frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(x,iN^{-1})}} \end{aligned}$$
(5.1)

for \({x}\in {[kN^{-1},(k+1)N^{-1})}\), and \({0}\le {k}\le {N-1}\). The following is the main result of this section:

Theorem 5.1

Assume that

  1. (i)

    \(\Vert X^{N}(0)-u(0)\Vert _{{\mathscr {H}}_{-J}}\rightarrow 0\) in probability as \(N\rightarrow \infty \),

  2. (ii)

    \(l(N)\rightarrow \infty \) as \(N\rightarrow \infty \).

Then, for \(T>0\),

$$\begin{aligned} \sup _{{0}\le {t}\le {T}}\Vert X^{N}(t)-u(t)\Vert _{{\mathscr {H}}_{-J}}\rightarrow 0 \quad \text{ in } \text{ probability }. \end{aligned}$$

To prove this theorem, we first give some auxiliary results:

Lemma 5.1

  1. (i)

For every positive function \({f}\in {{\mathscr {H}}_{-J}}\), there exists \(C_{1}=C_{1}(J)>0\) such that

    $$\begin{aligned} {\langle f, 1 \rangle }\le {C_{1}\Vert f\Vert _{{\mathscr {H}}_{-J}}}. \end{aligned}$$
  2. (ii)

For \({f}\in {{\mathscr {H}}_{-J}}\), there exists \(C_{2}=C_{2}(J,T)\), independent of N, such that

    $$\begin{aligned} {\Vert e^{I_{J}^{N}t}f\Vert _{{\mathscr {H}}_{-J}}}\le {C_{2}\Vert f\Vert _{{\mathscr {H}}_{-J}}}. \end{aligned}$$

Proof

For a positive function \({f}\in {{\mathscr {H}}_{-J}}\), by (5.1), we can see that

$$\begin{aligned} {\langle f, 1 \rangle }&\le {\frac{1}{a}\frac{1}{N}{\sum _{k=0}^{N-1}\left( \frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(kN^{-1},iN^{-1})}\right) P^{N}f(kN^{-1})}}=\frac{1}{a}\langle T_{J}f, 1 \rangle \\&\le \frac{1}{a}\Vert T_{J}f\Vert _{{\mathscr {H}}_{J}}\cdot \Vert 1\Vert _{{\mathscr {H}}_{-J}}\\&= C_{1}\Vert f\Vert _{{\mathscr {H}}_{-J}}. \end{aligned}$$

Next, we consider part (ii). Because both \(I_{J}^{N}\) and \(T_{J}\) are self-adjoint operators, we obtain

$$\begin{aligned} {\frac{d}{dt}\left\| e^{I_{J}^{N}t}f\right\| _{{\mathscr {H}}_{-J}}^{2}}&=2{\int _{0}^{1}I_{J}^{N}\left( T_{J}e^{I_{J}^{N}t}f(x)\right) \cdot e^{I_{J}^{N}t}f(x)dx}\\&\le 2{\overline{J}}\Vert e^{I_{J}^{N}t}f\Vert _{{\mathscr {H}}_{-J}}^{2}+2\Vert T_{J}e^{I_{J}^{N}t}f\Vert _{2}^{2}. \end{aligned}$$

Note that for \({f}\in {{\mathscr {H}}_{-J}}\),

$$\begin{aligned} \Vert T_{J}f\Vert _{2}^{2}={\int _{0}^{1}\langle J(x,\cdot ), f(\cdot ) \rangle ^{2}dx}\le {\int _{0}^{1}\Vert J(x,\cdot )\Vert _{{\mathscr {H}}_{J}}^{2}\Vert f\Vert _{{\mathscr {H}}_{-J}}^{2}dx}\le {{\overline{J}}\Vert f\Vert _{{\mathscr {H}}_{-J}}^{2}}; \end{aligned}$$

hence,

$$\begin{aligned} {\frac{d}{dt}\left\| e^{I_{J}^{N}t}f\right\| _{{\mathscr {H}}_{-J}}^{2}} \le {4{\overline{J}}\Vert e^{I_{J}^{N}t}f\Vert _{{\mathscr {H}}_{-J}}^{2}}. \end{aligned}$$

Therefore, applying Gronwall’s inequality gives

$$\begin{aligned} {\Vert e^{I_{J}^{N}t}f\Vert _{{\mathscr {H}}_{-J}}^{2}}\le {e^{4{\overline{J}}T}\Vert f\Vert _{{\mathscr {H}}_{-J}}^{2}}. \end{aligned}$$

\(\square \)

Lemma 5.2

For a continuous function f,

$$\begin{aligned} {\Vert I_{J}f-I_{J}^{N}f\Vert _{2}}\rightarrow 0\text{ as } N\rightarrow \infty . \end{aligned}$$

Proof

Utilizing Hölder’s and Young’s inequalities, we have

$$\begin{aligned} \Vert I_{J}f-I_{J}^{N}f\Vert _{2}^{2} \le&2{\overline{J}}\biggl \{{\int _{0}^{1}(f(y)-P^{N}f(y))^{2}dy}+{\int _{0}^{1}(f(x)-P^{N}f(x))^{2}dx}\biggr \}\\&+8\Vert f\Vert _{\infty }^{2}{\int _{0}^{1}{\int _{0}^{1}(J(x,y)-J^{N}(x,y))^{2}dx}dy}, \end{aligned}$$

where \(P^{N}\) is the projection into \(H^{N}\) defined in Sect. 4. By the continuity of f and J, we obtain

$$\begin{aligned} {\Vert f-P^{N}f\Vert _{2}^{2}}\rightarrow 0 \quad \mathrm{and}\quad {\Vert J(x,y)-J^{N}(x,y)\Vert _{2}^{2}}\rightarrow 0\quad \text{ as }\quad N\rightarrow \infty , \end{aligned}$$

finishing the proof. \(\square \)

Proof of Theorem 5.1

We fix \({\varepsilon }\in {(0,1]}\) and set the stopping time \(\tau \) as

$$\begin{aligned} \tau =\inf \bigl \{{t}\ge {0} \mid \Vert X^{N}(t)-u(t)\Vert _{{\mathscr {H}}_{-J}}\ge {\varepsilon }\bigr \}. \end{aligned}$$

Recall that \({0}\le {u(t,x)}<\rho \) for all \({t}\in {[0,T]}\) and \({x}\in {[0,1]}\). If \(t<\tau \), it is easy to prove that \(\left\| X^{N}(t)\right\| _{{\mathscr {H}}_{-J}}<{\overline{J}}\rho +1\). Furthermore,

$$\begin{aligned} {\left\| X^{N}(\tau )\right\| _{{\mathscr {H}}_{-J}}}\le {\left\| X^{N}(\tau -)\right\| _{{\mathscr {H}}_{-J}}+\left\| \varLambda X^{N}(\tau )\right\| _{{\mathscr {H}}_{-J}}}<{\overline{J}}(\rho +1)+1 \end{aligned}$$

for sufficiently large N. Thus, we get

$$\begin{aligned} \left\| X^{N}({t}\wedge {\tau })\right\| _{{\mathscr {H}}_{-J}}<{\overline{J}}(\rho +1)+1. \end{aligned}$$
(5.2)

Using the semigroups \(e^{I_{J}t}\) and \(e^{I_{J}^{N}t}\), we obtain

$$\begin{aligned} X^{N}({t}\wedge {\tau })-u({t}\wedge {\tau })&=e^{I_{J}^{N}({t}\wedge {\tau })}(X^{N}(0)-u(0))+{\int _{0}^{{t}\wedge {\tau }}e^{I_{J}^{N}({t}\wedge {\tau }-s)}dZ^{N}(s)}\\&\quad +\bigl (e^{I_{J}^{N}({t}\wedge {\tau })}-e^{I_{J}({t}\wedge {\tau })}\bigr )u(0)+{\int _{0}^{{t}\wedge {\tau }}\left\{ e^{I_{J}^{N}({t}\wedge {\tau }-s)}-e^{I_{J}({t}\wedge {\tau }-s)}\right\} b_{0}ds}\\&\quad +(b_{1}-d_{1}){\int _{0}^{{t}\wedge {\tau }}e^{I_{J}^{N}({t}\wedge {\tau }-s)}(X^{N}(s)-u(s))ds}\\&\quad +(b_{1}-d_{1}){\int _{0}^{{t}\wedge {\tau }}\left\{ e^{I_{J}^{N}({t}\wedge {\tau }-s)}-e^{I_{J}({t}\wedge {\tau }-s)}\right\} u(s)ds}; \end{aligned}$$

hence, by Lemma 5.1 and Gronwall’s inequality, we have

$$\begin{aligned} {\Vert X^{N}({t}\wedge {\tau })-u({t}\wedge {\tau })\Vert _{{\mathscr {H}}_{-J}}}\le {\Vert E_{N}({t}\wedge {\tau })\Vert _{{\mathscr {H}}_{-J}}e^{C|b_{1}-d_{1}|T}}, \end{aligned}$$

where

$$\begin{aligned} E_{N}({t}\wedge {\tau })&=e^{I_{J}^{N}({t}\wedge {\tau })}(X^{N}(0)-u(0))+{\int _{0}^{{t}\wedge {\tau }}e^{I_{J}^{N}({t}\wedge {\tau }-s)}dZ^{N}(s)}\\&\quad +\bigl (e^{I_{J}^{N}({t}\wedge {\tau })}-e^{I_{J}({t}\wedge {\tau })}\bigr )u(0)+{\int _{0}^{{t}\wedge {\tau }}\left\{ e^{I_{J}^{N}({t}\wedge {\tau }-s)}-e^{I_{J}({t}\wedge {\tau }-s)}\right\} b_{0}ds}\\&\quad +(b_{1}-d_{1}){\int _{0}^{{t}\wedge {\tau }}\left\{ e^{I_{J}^{N}({t}\wedge {\tau }-s)}-e^{I_{J}({t}\wedge {\tau }-s)}\right\} u(s)ds}. \end{aligned}$$

Based on assumption (i), Lemma 5.2, and the Trotter–Kato theorem, the first, third, and last terms of \(E_{N}\) converge to zero in the \({\mathscr {H}}_{-J}\) norm as \(N\rightarrow \infty \). Furthermore, because \(e^{I_{J}^{N}({t}\wedge {\tau }-s)}b_{0}=e^{I_{J}({t}\wedge {\tau }-s)}b_{0}\), the fourth term is zero. Thus, it suffices to show that the stochastic integral term converges to zero in probability; that is,

$$\begin{aligned} \sup _{{0}\le {t}\le {T}}\left\| {\int _{0}^{{t}\wedge {\tau }}e^{I_{J}^{N}({t}\wedge {\tau }-s)}dZ^{N}(s)}\right\| _{{\mathscr {H}}_{-J}}\rightarrow 0 \quad \text{ in } \text{ probability } \text{ as } \quad N\rightarrow \infty . \end{aligned}$$

By the maximal inequality related to the stochastic convolution-type integral (or the stochastic *-integral) of Theorem 1 in [14], for \(\delta >0\),

$$\begin{aligned} P\left( {\sup _{{0}\le {t}\le {T}}\left\| {\int _{0}^{{t}\wedge {\tau }}e^{I_{J}^{N}({t}\wedge {\tau }-s)}dZ^{N}(s)}\right\| _{{\mathscr {H}}_{-J}}}\ge {2\delta }\right) <\frac{C_{2}^{2}}{\delta ^{2}}{\mathbb {E}}\left[ \Vert Z^{N}(T)\Vert _{{\mathscr {H}}_{-J}}^{2}\right] , \end{aligned}$$

where \(C_{2}\) is a constant given by Lemma 5.1 (ii). From Lemmas 4.2 and 5.1 (i), we have

$$\begin{aligned}&{\mathbb {E}}\left[ \Vert Z^{N}(T)\Vert _{{\mathscr {H}}_{-J}}^{2}\right] ={\sum _{n}{\mathbb {E}}\left[ \langle Z^{N}(T), e_{n} \rangle ^{2}\right] a_{n}}\\&=\frac{1}{Nl}\sum _{n}a_{n}{\mathbb {E}}\Biggl [\int _{0}^{{T}\wedge {\tau }}\Biggl \{\langle X^{N}(s), \frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(\cdot ,iN^{-1})\left( e_{n}(\cdot )-e_{n}(iN^{-1})\right) ^{2}} \rangle \\&\quad \quad +\langle |R|(X^{N}(s)), e_{n}^{2} \rangle \Biggr \}ds\Biggr ]\\&\le \frac{1}{Nl}{\sum _{n}a_{n}{\mathbb {E}}\left[ \bigl \{4{\overline{J}}\Vert e_{n}\Vert _{\infty }^{2}+(b_{1}+d_{1})\Vert e_{n}\Vert _{\infty }^{2}\bigr \}{\int _{0}^{{T}\wedge {\tau }}\left\| X^{N}(s)\right\| _{{\mathscr {H}}_{-J}}ds}+b_{0}({T}\wedge {\tau })\right] }\\&\le \frac{1}{Nl}C(\rho ,J,R,T), \end{aligned}$$

where we have used \({\sum _{n}a_{n}}<\infty \) in the last inequality. This completes the proof. \(\square \)

Remark 5.1

Note that Theorem 5.1 holds when the reaction term R is Lipschitz continuous and \(R(0)=b_{0}>0\). Indeed, by an estimate similar to that in the proof of Theorem 5.1, we have

$$\begin{aligned} {\Vert X^{N}({t}\wedge {\tau })-u({t}\wedge {\tau })\Vert _{{\mathscr {H}}_{-J}}}\le {\Vert E_{N}({t}\wedge {\tau })\Vert _{{\mathscr {H}}_{-J}}e^{C(J,K)T}}, \end{aligned}$$

where K is a Lipschitz constant of R, and

$$\begin{aligned} E_{N}({t}\wedge {\tau })&=e^{I_{J}^{N}({t}\wedge {\tau })}(X^{N}(0)-u(0))+{\int _{0}^{{t}\wedge {\tau }}e^{I_{J}^{N}({t}\wedge {\tau }-s)}dZ^{N}(s)}\\&\quad +\bigl (e^{I_{J}^{N}({t}\wedge {\tau })}-e^{I_{J}t}\bigr )u(0)+{\int _{0}^{{t}\wedge {\tau }}\left\{ e^{I_{J}^{N}({t}\wedge {\tau }-s)}-e^{I_{J}({t}\wedge {\tau }-s)}\right\} R(u(s))ds}. \end{aligned}$$

Because R has linear growth, we observe that

$$\begin{aligned} {\left\| {\int _{0}^{{t}\wedge {\tau }}\left\{ e^{I_{J}^{N}({t}\wedge {\tau }-s)}-e^{I_{J}({t}\wedge {\tau }-s)}\right\} R(u(s))ds}\right\| _{{\mathscr {H}}_{-J}}^{2}}\le {C(J,K,T)N^{-2}}\rightarrow 0 \end{aligned}$$

as \(N\rightarrow \infty \). The remaining terms go to zero by an argument similar to the one in Theorem 5.1, and hence we obtain the proof.

6 Central Limit Theorem

Let \(M^{N}(t)=\sqrt{Nl}Z^{N}({t}\wedge {\tau })\) and \(U^{N}(t)=\sqrt{Nl}(X^{N}(t)-u(t))\), where \(\tau \) is the stopping time defined in the proof of Theorem 5.1. We define two operators \({A}_{J}\) and \(A_{J}^{N}\) by \(A_{J}f=I_{J}f+(b_{1}-d_{1})f\) and \(A_{J}^{N}g=I_{J}^{N}g+(b_{1}-d_{1})g\) for \({f}\in {L^{2}([0,1])}\) and \({g}\in {H^{N}}\), respectively. Recall that to guarantee the existence of the dynamics, we assume that \(R(x)<0\) for large x. This condition is equivalent to the dissipativity condition \(b_{1}<d_{1}\). From this, we can construct the semigroups \(e^{A_{J}t}\) and \(e^{A_{J}^{N}t}\). In this section, we assume further that the kernel J and the initial data u(0) are globally Lipschitz continuous on \([0,1]^{2}\) and [0, 1], respectively. The main result of this section is as follows:

Theorem 6.1

In addition to the assumptions of Theorem 5.1, we assume

  1. (a)

    \(l/N\rightarrow 0\),

  2. (b)

    \(U^{N}(0)\rightarrow U_{0}\) in distribution on \({\mathscr {H}}_{-J}\).

Then, there exists a unique \({\mathscr {H}}_{-J}\)-valued Gaussian process M(t) such that \(U^{N}(t) \rightarrow U(t)\) in distribution on \({\mathbb {D}}_{{\mathscr {H}}_{-J}}[0,\infty )\), where

$$\begin{aligned} U(t)=e^{A_{J}t}U_{0}+{\int _{0}^{t}e^{A_{J}(t-s)}dM(s)} \end{aligned}$$
(6.1)

is the mild solution of the stochastic differential equation

$$\begin{aligned} dU(t)=A_{J}U(t)dt+dM(t),\quad U(0)=U_{0}. \end{aligned}$$

Remark 6.1

If we assume that the initial states \(X^{N}(0)\) of the particles are i.i.d. across sites, then by the standard central limit theorem, assumption (b) holds automatically.
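For instance (a sketch under the additional assumption of independent Poisson initial data; this specific choice is ours and is not required by the theorem), suppose that the \(n_{k}(0)\) are independent Poisson random variables with means \(lu(0,kN^{-1})\). Then, for \({\varphi }\in {{\mathscr {H}}_{J}}\), up to a deterministic discretization error that one can check vanishes in \({\mathscr {H}}_{-J}\) under assumption (a), \(\langle U^{N}(0), \varphi \rangle \) is the normalized sum

$$\begin{aligned} \frac{1}{\sqrt{Nl}}{\sum _{k=0}^{N-1}\bigl (n_{k}(0)-lu(0,kN^{-1})\bigr )\varphi (kN^{-1})}\quad \text{ with }\quad \mathrm{Var}=\frac{1}{N}{\sum _{k=0}^{N-1}u(0,kN^{-1})\varphi (kN^{-1})^{2}}\rightarrow \langle u(0), \varphi ^{2} \rangle , \end{aligned}$$

so the Lindeberg central limit theorem yields a centered Gaussian limit \(U_{0}\) with covariance \(\langle u(0)\varphi , \psi \rangle \).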

We give a sketch of the proof. Using the semigroups gives

$$\begin{aligned} U^{N}(t)&=e^{A_{J}t}U^{N}(0)+{\int _{0}^{t}e^{A_{J}^{N}(t-s)}dM^{N}(s)}+\bigl (e^{A_{J}^{N}t}-e^{A_{J}t}\bigr )U^{N}(0)\nonumber \\&\quad +\sqrt{Nl}\bigl (e^{A_{J}^{N}t}-e^{A_{J}t}\bigr )u(0)+{\int _{0}^{t}e^{A_{J}^{N}(t-s)}d(\sqrt{Nl}Z^{N}(s)-M^{N}(s))}, \end{aligned}$$
(6.2)

and we show that the first and second terms on the RHS of the above equality converge to the corresponding terms on the RHS of (6.1) and that the other terms converge to 0 as \(N\rightarrow \infty \). It is relatively easy to see the convergence of the last three terms by the semigroup convergence result and the law of large numbers. In particular, we must consider the limiting behavior of the stochastic fluctuation \(M^{N}(t)\). Recall that the projection \(P^{N}\) onto \(H^{N}\) is defined by

$$\begin{aligned} P^{N}f(x)=N{\int _{kN^{-1}}^{(k+1)N^{-1}}f(y)dy}, \end{aligned}$$

for any measurable functions f and \({x}\in {[kN^{-1},(k+1)N^{-1})}\) and \({0}\le {k}\le {N-1}\). For simplicity, we abbreviate \(P^{N}f\) as \(f^{N}\) for the remainder of this paper. Let \(\lambda ^{N}(X^{N})\) be the waiting time parameter, and let \(\sigma ^{N}(X^{N},dw)\) be the jump distribution function of \(X^{N}\). Then, from Lemma 2.6 in [19], for \({\varphi }\in {{\mathscr {H}}_{J}}\), the variance of \(\langle M^{N}(t), \varphi \rangle \) is given by

$$\begin{aligned} Var(\langle M^{N}(t), \varphi \rangle )&=(Nl)\int _{0}^{t}{\mathbb {E}}\Biggl [\lambda ^{N}(X^{N}(s))\cdot \int _{H^{N}}\bigl \{\langle w-X^{N}(s)+Z^{N}(s), \varphi \rangle ^{2}\\&\quad -\langle Z^{N}(s), \varphi \rangle ^{2}-2\langle w-X^{N}(s), \varphi \rangle \langle Z^{N}(s), \varphi \rangle \bigr \}\sigma ^{N}(X^{N}(s),dw)\Biggr ]ds\\&=(Nl){\int _{0}^{t}{\mathbb {E}}\Biggl [\lambda ^{N}(X^{N}(s))\cdot {\int _{H^{N}}\langle w-X^{N}(s), \varphi \rangle ^{2}\sigma ^{N}(X^{N}(s),dw)}\Biggr ]ds}\\&=\int _{0}^{t}{\mathbb {E}}\Biggl [\frac{1}{N}\sum _{k=0}^{N-1}X^{N}(s,kN^{-1})\frac{1}{N}\sum _{\begin{array}{c} i=0\\ i\ne {k} \end{array}}^{N-1}J^{N}(kN^{-1},iN^{-1})\\&\quad \times \Bigl \{\varphi ^{N}(kN^{-1})-\varphi ^{N}(iN^{-1})\Bigr \}^{2}\\&\quad +\frac{1}{N}{\sum _{k=0}^{N-1}|R|(X^{N}(s,kN^{-1}))\varphi ^{N}(kN^{-1})^{2}}\Biggr ]ds\\&={\int _{0}^{t}{\mathbb {E}}\left[ \langle X^{N}(s), {\int _{0}^{1}J^{N}(\cdot ,y)(\varphi ^{N}(\cdot )-\varphi ^{N}(y))^{2}dy} \rangle +\langle |R|(X^{N}(s)), (\varphi ^{N})^{2} \rangle \right] ds}. \end{aligned}$$

Lemma 6.1

There exists a compact set \({K}\subset {{\mathscr {H}}_{-J}}\) such that \({P({M^{N}(t)}\in {K^{\varepsilon }})}\ge {1-\varepsilon }\), where \(K^{\varepsilon }=\bigl \{{f}\in {{\mathscr {H}}_{-J}} \mid \Vert f-g\Vert _{{\mathscr {H}}_{-J}}<\varepsilon \text{ for } \text{ some } {g}\in {K}\bigr \}\).

Proof

We obtain the proof using a parallel argument to that presented in Lemma 4.3 of [3] (see Appendix in [27]). \(\square \)

Lemma 6.2

There is a unique (in distribution) \({\mathscr {H}}_{-J}\)-valued Gaussian process \(M(\cdot )\), which has a characteristic functional

$$\begin{aligned}&{\mathbb {E}}\left[ \exp \left( i\langle M(t), \varphi \rangle \right) \right] \\&\quad =\exp \left( -\frac{1}{2}{\int _{0}^{t}\langle u(s), {\int _{0}^{1}J(\cdot ,y)(\varphi (\cdot )-\varphi (y))^{2}dy} \rangle +\langle |R|(u(s)), \varphi ^{2} \rangle ds}\right) \end{aligned}$$

for \({\varphi }\in {{\mathscr {H}}_{J}}\).

Proof

Consider the quadratic functional \({\mathscr {E}}_{t}\) defined by

$$\begin{aligned} {\mathscr {E}}_{t}(\varphi ,\psi )=\langle \varphi T_{J}u(t), \psi \rangle -2\langle u(t)T_{J}\varphi , \psi \rangle +\langle u(t)\varphi {\int _{0}^{1}J(\cdot ,y)dy}, \psi \rangle +\langle |R|(u(t))\varphi , \psi \rangle , \end{aligned}$$

for \({\varphi ,\psi }\in {{\mathscr {H}}_{J}}\). By the Riesz representation theorem, for \({\varphi ^{*}}\in {{\mathscr {H}}_{-J}}\) and \({\psi }\in {{\mathscr {H}}_{J}}\), there exists a linear operator \(L:{\mathscr {H}}_{-J}\rightarrow {\mathscr {H}}_{J}\) such that \(\langle \varphi ^{*}, \psi \rangle =\langle \psi , L(\varphi ^{*}) \rangle _{{\mathscr {H}}_{J}}\). We define an operator A(t) on \({\mathscr {H}}_{-J}\) as

$$\begin{aligned} \langle A(t)\varphi ^{*}, \psi ^{*} \rangle _{{\mathscr {H}}_{-J}}={\int _{0}^{t}{\mathscr {E}}_{s}(L(\varphi ^{*}),L(\psi ^{*}))ds}. \end{aligned}$$

Note that \(\{a_{n}^{-\frac{1}{2}}e_{n}\}\) is the CONS in \({\mathscr {H}}_{-J}\), and we have

$$\begin{aligned} {\sum _{n}\langle A(t)e_{n}, e_{n} \rangle _{{\mathscr {H}}_{-J}}a_{n}^{-1}}={\sum _{n}{\int _{0}^{t}{\mathscr {E}}_{s}(a_{n}^{\frac{1}{2}}e_{n},a_{n}^{\frac{1}{2}}e_{n})ds}}<\infty , \end{aligned}$$

by a similar calculation to that for \(Z^{N}\) in the proof of Theorem 5.1. This means that A(t) is in the trace class. Thus, A(t) is a self-adjoint compact operator on \({\mathscr {H}}_{-J}\). Moreover, by the structure of \({\mathscr {E}}_{t}\), \(\langle A(t)\varphi ^{*}, \varphi ^{*} \rangle _{{\mathscr {H}}_{-J}}\) is a positive-definite quadratic function of \(\varphi ^{*}\) for every t and is continuous and increasing in t for every \(\varphi ^{*}\). Therefore, the proof is obtained based on the proof of Theorem 4.1 in [10] using Kolmogorov’s extension theorem. \(\square \)

Next, we show that the Gaussian process \(M(\cdot )\) constructed in Lemma 6.2 is the limiting process of martingale \(M^{N}(\cdot )=\sqrt{Nl}Z^{N}(\cdot )\).

Lemma 6.3

For a function \({\varphi }\in {{\mathscr {H}}_{J}}\),

$$\begin{aligned}&{\mathbb {E}}\left[ \exp \left( i\langle M^{N}(t), \varphi \rangle \right) \right] \\&\quad \rightarrow \exp \left( -\frac{1}{2}{\int _{0}^{t}\langle u(s), {\int _{0}^{1}J(\cdot ,y)(\varphi (\cdot )-\varphi (y))^{2}dy} \rangle +\langle |R|(u(s)), \varphi ^{2} \rangle ds}\right) . \end{aligned}$$

Proof

The proof is essentially the same as that of Lemma 4.4 in [3], but we must modify some calculations owing to the change from the Laplacian to an integral operator. Let \(f(w)=e^{iw}\), \(g(w)=\langle u(w), {\int _{0}^{1}J(\cdot ,y)(\varphi (\cdot )-\varphi (y))^{2}dy} \rangle +\langle |R|(u(w)), \varphi ^{2} \rangle \), and \(m^{N}(t)=\langle M^{N}(t), \varphi \rangle \). As in the proof of Lemma 4.4 in [3], it suffices to show that

$$\begin{aligned} {\mathbb {E}}\bigl [f(m^{N}(t)) \mid {\mathscr {F}}_{s}^{N}\bigr ]=f(m^{N}(s))-\frac{1}{2}{\int _{s}^{t}{\mathbb {E}}\bigl [f(m^{N}(w))g(w) \mid {\mathscr {F}}_{s}^{N}\bigr ]dw}+\varepsilon _{N}(t,s), \end{aligned}$$

where \({|\varepsilon _{N}(t,s)|}\le {{\mathbb {E}}[\alpha _{N}(T) \mid {\mathscr {F}}_{s}^{N}]}\), \({\alpha _{N}(T)}\ge {0}\), and \({\mathbb {E}}[\alpha _{N}(T)]\rightarrow 0\) as \(N\rightarrow \infty \). Let \([m^{N}](t)={\sum _{{w}\le {t}}(\varLambda m^{N}(w))^{2}}\) be the quadratic variation of \(m^{N}(t)\), and let

$$\begin{aligned} g^{N}(w)&=\Biggl \{\langle X^{N}({w}\wedge {\tau }), \frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(\cdot ,iN^{-1})\left( \varphi ^{N}(\cdot )-\varphi ^{N}(iN^{-1})\right) ^{2}} \rangle \\&\quad +\langle |R|(X^{N}({w}\wedge {\tau })), (\varphi ^{N})^{2} \rangle \Biggr \}\mathbbm {1}_{\{\tau >0\}}. \end{aligned}$$

Note that \([m^{N}](t)-{\int _{0}^{{t}\wedge {\tau }}g^{N}(w)dw}\) is a mean zero martingale from Lemma 4.2. Using the Itô formula (see [23]), we have

$$\begin{aligned} {\mathbb {E}}\bigl [f(m^{N}(t)) \mid {\mathscr {F}}_{s}^{N}\bigr ]&=f(m^{N}(s))+{\mathbb {E}}\Bigl [{\sum _{s<{w}\le {t}}\bigl \{f(m^{N}(w))-f(m^{N}(w-))-if(m^{N}(w-))\varLambda m^{N}(w)\bigr \}} \mid {\mathscr {F}}_{s}^{N}\Bigr ]\\&=f(m^{N}(s))-\frac{1}{2}{\int _{s}^{t}{\mathbb {E}}\bigl [f(m^{N}(w))g(w) \mid {\mathscr {F}}_{s}^{N}\bigr ]dw}+\varepsilon _{N}(t,s), \end{aligned}$$

where \(\varepsilon _{N}(t,s)=\varepsilon _{1}(t,s)+\varepsilon _{2}(t,s)\) and

$$\begin{aligned} \varepsilon _{1}(t,s)&={\mathbb {E}}\Bigl [{\sum _{s<{w}\le {t}}\Bigl \{f(m^{N}(w))-f(m^{N}(w-))-if(m^{N}(w-))\varLambda m^{N}(w)+\frac{1}{2}f(m^{N}(w-))\bigl (\varLambda m^{N}(w)\bigr )^{2}\Bigr \}} \mid {\mathscr {F}}_{s}^{N}\Bigr ],\\ \varepsilon _{2}(t,s)&=\frac{1}{2}{\int _{s}^{t}{\mathbb {E}}\bigl [f(m^{N}(w))\bigl (g(w)-g^{N}(w)\bigr ) \mid {\mathscr {F}}_{s}^{N}\bigr ]dw}. \end{aligned}$$

Hence, we have to show that \(\varepsilon _{1}(t,s)\) and \(\varepsilon _{2}(t,s)\) have upper bounds whose expectations converge to 0. Roughly, the jump size of \(m^{N}\) is of order \(1/\sqrt{Nl}\), and hence the error induced by the jumps vanishes as N goes to infinity. Thus, the statement for \(\varepsilon _{1}(t,s)\) holds. Furthermore, the law of large numbers in Theorem 5.1 yields the statement for \(\varepsilon _{2}(t,s)\).

Let us consider the proof in detail. First, we show that the integral of \(g^{N}(w)\) over [0, T] is uniformly bounded in N. Recall that \(\varphi ^{N}=P^{N}\varphi \). By the reproducing property and the Cauchy–Schwarz inequality, for every \({\varphi }\in {{\mathscr {H}}_{J}}\), we have

$$\begin{aligned} {\Vert \varphi \Vert _{\infty }}\le {\sup _{{x}\in {[0,1]}}\sqrt{J(x,x)}\,\Vert \varphi \Vert _{{\mathscr {H}}_{J}}}. \end{aligned}$$

Therefore, we obtain

$$\begin{aligned}&{\int _{0}^{{T}\wedge {\tau }}\langle X^{N}(w), {\int _{0}^{1}J^{N}(\cdot ,y)(\varphi ^{N}(\cdot )-\varphi ^{N}(y))^{2}dy} \rangle dw} \le 4{\overline{J}}\Vert \varphi \Vert _{\infty }^{2}{\int _{0}^{{T}\wedge {\tau }}\langle X^{N}(w), 1 \rangle dw}\\&\quad \le C(J)\Vert \varphi \Vert _{{\mathscr {H}}_{J}}^{2}{\int _{0}^{{T}\wedge {\tau }}\left\| X^{N}(w)\right\| _{{\mathscr {H}}_{-J}}dw} \end{aligned}$$

and

$$\begin{aligned} {\int _{0}^{{T}\wedge {\tau }}\langle |R|(X^{N}(w)), \varphi ^{2} \rangle dw}\le {C(J)(b_{1}+d_{1})\Vert \varphi \Vert _{{\mathscr {H}}_{J}}^{2}{\int _{0}^{{T}\wedge {\tau }}\left\| X^{N}(w)\right\| _{{\mathscr {H}}_{-J}}dw}+b_{0}\Vert \varphi \Vert _{2}^{2}T}. \end{aligned}$$

Thus, we obtain the boundedness of the integral of \(g^{N}\) from (5.2). Next, we consider each term of \(\varepsilon _{N}(t,s)\). Applying the Taylor expansion to \(e^{ia}\) gives the following inequality:

$$\begin{aligned} {\left| e^{ia}-1-ia+\frac{a^{2}}{2}\right| }\le {\frac{|a|^{3}}{6}}. \end{aligned}$$

Hence, by the fact that \({|\varLambda m^{N}(w)|}\le {C(\varphi ,J)/\sqrt{Nl}}\), we obtain

$$\begin{aligned} {|\varepsilon _{1}(t,s)|}\le {\frac{1}{6}{\mathbb {E}}\Bigl [{\sum _{0<{w}\le {T}}\left| \varLambda m^{N}(w)\right| ^{3}} \mid {\mathscr {F}}_{s}^{N}\Bigr ]}\le {\frac{C(\varphi ,J)}{6\sqrt{Nl}}{\mathbb {E}}\Bigl [{\sum _{0<{w}\le {T}}\bigl (\varLambda m^{N}(w)\bigr )^{2}} \mid {\mathscr {F}}_{s}^{N}\Bigr ]}. \end{aligned}$$

It remains to bound \(\varepsilon _{2}\). Because \(|f''|=1\), we have

$$\begin{aligned} {|\varepsilon _{2}(t,s)|}\le {\frac{1}{2}{\mathbb {E}}\Bigl [{\int _{0}^{T}|g(w)-g^{N}(w)|dw} \mid {\mathscr {F}}_{s}^{N}\Bigr ]}. \end{aligned}$$

We set \(g(w)-g^{N}(w)=G_{1}^{N}(w)+G_{2}^{N}(w)\) where

$$\begin{aligned} G_{1}^{N}(w)=\langle |R|(u(w)), \varphi ^{2} \rangle -\langle |R|(X^{N}({w}\wedge {\tau })), (\varphi ^{N})^{2} \rangle \mathbbm {1}_{\{\tau >0\}} \end{aligned}$$

and

$$\begin{aligned} G_{2}^{N}(w)=&\langle u(w), {\int _{0}^{1}J(\cdot ,y)(\varphi (\cdot )-\varphi (y))^{2}dy} \rangle \\&-\Biggl \{\langle X^{N}({w}\wedge {\tau }), \frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(\cdot ,iN^{-1})\left( \varphi ^{N}(\cdot )-\varphi ^{N}(iN^{-1})\right) ^{2}} \rangle \Biggr \}\mathbbm {1}_{\{\tau >0\}}. \end{aligned}$$

Now, we analyze two terms separately. For \(G_{1}^{N}(w)\),

$$\begin{aligned} G_{1}^{N}(w)&=\langle |R|(u(w)), \varphi ^{2}-(\varphi ^{N})^{2} \rangle +\langle |R|(u(w))-|R|(X^{N}({w}\wedge {\tau }))\mathbbm {1}_{\{\tau >0\}}, (\varphi ^{N})^{2} \rangle . \end{aligned}$$

By the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned}&\langle |R|(u(w)), \varphi ^{2}-(\varphi ^{N})^{2} \rangle \\&\quad \le \left\{ (b_{1}+d_{1})\sup _{{0}\le {w}\le {T}}\Vert u(w)\Vert _{\infty }+b_{0}\right\} \left\{ {\int _{0}^{1}(\varphi +\varphi ^{N})^{2}(x)dx}\right\} ^{1/2}\left\{ {\int _{0}^{1}(\varphi -\varphi ^{N})^{2}(x)dx}\right\} ^{1/2}\\&\quad \le 2{\overline{J}}\left\{ (b_{1}+d_{1})\sup _{{0}\le {w}\le {T}}\Vert u(w)\Vert _{\infty }+b_{0}\right\} \Vert \varphi \Vert _{{\mathscr {H}}_{J}}\Vert \varphi -\varphi ^{N}\Vert _{2}, \end{aligned}$$

and

$$\begin{aligned}&\langle |R|(u(w))-|R|(X^{N}({w}\wedge {\tau }))\mathbbm {1}_{\{\tau>0\}}, (\varphi ^{N})^{2} \rangle \\&\quad =\langle |R|(u(w))-|R|(u({w}\wedge {\tau }))\mathbbm {1}_{\{\tau>0\}}, (\varphi ^{N})^{2} \rangle +\langle |R|(u({w}\wedge {\tau }))\\&\qquad -|R|(X^{N}({w}\wedge {\tau })), (\varphi ^{N})^{2} \rangle \mathbbm {1}_{\{\tau>0\}}\\&\quad \le {\overline{J}}\left\{ (b_{1}+d_{1})\sup _{{0}\le {w}\le {T}}\Vert u(w)\Vert _{\infty }+b_{0}\right\} \Vert \varphi \Vert _{{\mathscr {H}}_{J}}^{2}\mathbbm {1}_{\{{\tau }\le {T}\}}\\&\qquad + {\overline{J}}(b_{1}+d_{1})\Vert \varphi \Vert _{{\mathscr {H}}_{J}}^{2}\sup _{{0}\le {w}\le {T}}\Vert X^{N}({w}\wedge {\tau })-u({w}\wedge {\tau })\Vert _{{\mathscr {H}}_{-J}}\mathbbm {1}_{\{\tau >0\}}. \end{aligned}$$

Note that for any \({\phi }\in {L^{2}([0,1])}\), we have

$$\begin{aligned} \lim _{N\rightarrow \infty }\Vert \phi -\phi ^{N}\Vert _{2}=0. \end{aligned}$$
(6.3)

Therefore, (6.3) and Theorem 5.1 imply that \({\mathbb {E}}\left[ {\int _{0}^{T}|G_{1}^{N}(w)|dw}\right] \rightarrow 0\) as \(N\rightarrow \infty \). For \(G_{2}^{N}(w)\),

$$\begin{aligned} G_{2}^{N}(w)=&\langle u(w), {\int _{0}^{1}J(\cdot ,y)(\varphi (\cdot )-\varphi (y))^{2}dy}-\frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(\cdot ,iN^{-1})\left( \varphi ^{N}(\cdot )-\varphi ^{N}(iN^{-1})\right) ^{2}} \rangle \\&+\langle u(w)-X^{N}({w}\wedge {\tau })\mathbbm {1}_{\{\tau >0\}}, \frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(\cdot ,iN^{-1})\left( \varphi ^{N}(\cdot )-\varphi ^{N}(iN^{-1})\right) ^{2}} \rangle . \end{aligned}$$

The expectation of the time integral of the second term converges to 0 by an argument similar to the above. Thus, it remains to show that the first term converges to 0 as \(N\rightarrow \infty \). Indeed, we have

$$\begin{aligned}&\langle u(w), {\int _{0}^{1}J(\cdot ,y)(\varphi (\cdot )-\varphi (y))^{2}dy}-\frac{1}{N}{\sum _{i=0}^{N-1}J^{N}(\cdot ,iN^{-1})\left( \varphi ^{N}(\cdot )-\varphi ^{N}(iN^{-1})\right) ^{2}} \rangle \\&=:G_{2,1}^{N}(w)+G_{2,2}^{N}(w), \end{aligned}$$

where

$$\begin{aligned} G_{2,1}^{N}(w)&=\langle u(w), {\int _{0}^{1}\left\{ J(\cdot ,y)-J^{N}(\cdot ,y)\right\} (\varphi (\cdot )-\varphi (y))^{2}dy} \rangle ,\\ G_{2,2}^{N}(w)&=\langle u(w), {\int _{0}^{1}J^{N}(\cdot ,y)\Bigl \{(\varphi (\cdot )-\varphi (y))^{2}-(\varphi ^{N}(\cdot )-\varphi ^{N}(y))^{2}\Bigr \}dy} \rangle . \end{aligned}$$

It is relatively easy to see that \({\int _{0}^{T}|G_{2,1}^{N}(w)|dw}\rightarrow 0\) as \(N\rightarrow \infty \) by using (6.3) for the kernel J and the \(L^{\infty }\) estimate of \(\varphi \). For \(G_{2,2}^{N}\), because we have

$$\begin{aligned}&(\varphi (x)-\varphi (y))^{2}-(\varphi ^{N}(x)-\varphi ^{N}(y))^{2}\\&=\{(\varphi (x)-\varphi (y))-(\varphi ^{N}(x)-\varphi ^{N}(y))\}\{(\varphi (x)-\varphi (y))+(\varphi ^{N}(x)-\varphi ^{N}(y))\}, \end{aligned}$$

utilizing (6.3) and the \(L^{\infty }\) estimate of \(\varphi \) yields \({\int _{0}^{T}|G_{2,2}^{N}(w)|dw}\rightarrow 0\) as \(N\rightarrow \infty \). Hence, we finally see that

$$\begin{aligned} {{\mathbb {E}}\left[ {\int _{0}^{T}|g(w)-g^{N}(w)|dw}\right] }\le {{\mathbb {E}}\left[ {\int _{0}^{T}|G_{1}^{N}(w)+G_{2}^{N}(w)|dw}\right] }\rightarrow 0. \end{aligned}$$

Choosing the bound \(\alpha _{N}(T)\) for \(\varepsilon _{N}(t,s)\) as

$$\begin{aligned} \alpha _{N}(T)=\frac{1}{6}{\sum _{0<{w}\le {T}}\left| \varLambda m^{N}(w)\right| ^{3}}+\frac{1}{2}{\int _{0}^{T}|g(w)-g^{N}(w)|dw}, \end{aligned}$$

we then complete the proof. \(\square \)

This lemma implies that \(M^{N}(t)\) converges to M(t) in distribution on \({\mathscr {H}}_{-J}\) for any \({t}\in {[0,T]}\). We will now see that the convergence holds on the Skorokhod space \({\mathbb {D}}_{{\mathscr {H}}_{-J}}[0,\infty )\) and that the limit is exactly the process M of Lemma 6.2. The following is a well-known result on the convergence of stochastic processes on the Skorokhod space, which can be found in [7]:

Theorem 6.2

(Theorem 8.6 in [7]) Let \((E,r)\) be a complete and separable metric space, and let \(\{Y^{N}\}\) be a family of processes with sample paths in \({\mathbb {D}}_{E}[0,\infty )\). Then, \(\{Y^{N}\}\) is relatively compact if and only if the following two conditions hold:

  1. (a)

    For every \(\varepsilon >0\) and \({t}\ge {0}\), there exists a compact set \({K_{\varepsilon }}\subset {E}\) such that

$$\begin{aligned} {\inf _{N}P\left( {Y^{N}(t)}\in {K_{\varepsilon }}\right) }\ge {1-\varepsilon }. \end{aligned}$$
  2. (b)

    For each \(T>0\), there exists \(\beta >0\) and a family \(\{\gamma _{N}(s)\}\), \(s>0\), of nonnegative random variables satisfying

$$\begin{aligned} {{\mathbb {E}}\bigl [r\bigl (Y^{N}(t+u),Y^{N}(t)\bigr )^{\beta } \mid {\mathscr {F}}_{t}^{N}\bigr ]}\le {{\mathbb {E}}\bigl [\gamma _{N}(s) \mid {\mathscr {F}}_{t}^{N}\bigr ]},\quad {0}\le {u}\le {s},\quad {0}\le {t}\le {T}, \end{aligned}$$

and \(\lim _{s\rightarrow 0}\sup _{N}{\mathbb {E}}[\gamma _{N}(s)]=0\).

Condition (a) follows from Lemma 6.1, and it remains to show that condition (b) holds for \(({\mathscr {H}}_{-J},\Vert \cdot \Vert _{{\mathscr {H}}_{-J}})\).

Lemma 6.4

For \({0}\le {s}\le {t}\le {T}\),

$$\begin{aligned} {{\mathbb {E}}\bigl [\Vert M^{N}(t)-M^{N}(s)\Vert _{{\mathscr {H}}_{-J}}^{2} \mid {\mathscr {F}}_{s}^{N}\bigr ]}\le {C(\rho ,J,R)(t-s)}. \end{aligned}$$

Proof

Because \(M^{N}(\cdot )\) is a martingale, we have

$$\begin{aligned} {\mathbb {E}}\bigl [\Vert M^{N}(t)-M^{N}(s)\Vert _{{\mathscr {H}}_{-J}}^{2} \mid {\mathscr {F}}_{s}^{N}\bigr ]={\sum _{n}\Bigl \{{\mathbb {E}}\bigl [\langle M^{N}(t), e_{n} \rangle ^{2} \mid {\mathscr {F}}_{s}^{N}\bigr ]-\langle M^{N}(s), e_{n} \rangle ^{2}\Bigr \}a_{n}}. \end{aligned}$$

It suffices to show that the expected values on the right-hand side of the above equation are uniformly bounded in n. As in the proof of Theorem 5.1, we obtain

$$\begin{aligned} {\mathbb {E}}[\langle M^{N}(t), e_{n} \rangle ^{2}]&\le {\mathbb {E}}\left[ C(J,R){\int _{0}^{{t}\wedge {\tau }}\left\| X^{N}(w)\right\| _{{\mathscr {H}}_{-J}}dw}+b_{0}({t}\wedge {\tau })\right] . \end{aligned}$$

Therefore, for any \({s,t}\in {[0,T]}\) with \({s}\le {t}\), applying the same estimate on \([{s}\wedge {\tau },{t}\wedge {\tau }]\) together with (5.2) gives

$$\begin{aligned} {{\mathbb {E}}\bigl [\Vert M^{N}(t)-M^{N}(s)\Vert _{{\mathscr {H}}_{-J}}^{2} \mid {\mathscr {F}}_{s}^{N}\bigr ]}\le {C(\rho ,J,R)(t-s){\sum _{n}a_{n}}}, \end{aligned}$$

finishing the proof. \(\square \)

Lemma 6.5

The stochastic processes \(M^{N}\) converge to M in distribution on \({\mathbb {D}}_{{\mathscr {H}}_{-J}}[0,\infty )\), where M is constructed by Lemma 6.2.

Proof

Using Lemmas 6.1, 6.4, and Theorem 6.2, \(\{M^{N}\}\) is relatively compact in \({\mathbb {D}}_{{\mathscr {H}}_{-J}}[0,\infty )\). Moreover, by Lemma 6.3, every subsequential limit coincides with M; hence, the whole sequence converges to M. This completes the proof. \(\square \)

To complete the proof, we observe the convergence of each term in (6.2).

Lemma 6.6

  1. (i)

    For \(\alpha \)-Hölder continuous function f with \({\alpha }\in {(0,1]}\),

    $$\begin{aligned} {\sup _{{0}\le {t}\le {T}}\Vert (e^{A_{J}t}-e^{A_{J}^{N}t})f\Vert _{2}}\le {C(J,T,f)N^{-\alpha }}. \end{aligned}$$
  2. (ii)

    Suppose \(g^{N}\rightarrow g\) in distribution on \({\mathscr {H}}_{-J}\). Then,

    $$\begin{aligned} \sup _{{0}\le {t}\le {T}}\Vert (e^{A_{J}t}-e^{A_{J}^{N}t})g^{N}\Vert _{{\mathscr {H}}_{-J}}\rightarrow 0\text{ in } \text{ probability } \text{ as } N\rightarrow \infty . \end{aligned}$$

Proof

For part (i), by the definitions of the two operators \(A_{J}\) and \(A_{J}^{N}\) and by a calculation similar to that in the proof of Lemma 5.2, we observe that

$$\begin{aligned} {\Vert A_{J}f-A_{J}^{N}f\Vert _{2}}\le {C(J)\Vert f-P^{N}f\Vert _{2}+C(f)\left( {\int _{0}^{1}{\int _{0}^{1}(J(x,y)-J^{N}(x,y))^{2}dy}dx}\right) ^{\frac{1}{2}}}. \end{aligned}$$

By the regularity of f and J, and Lemma 4.1 in [12], we have

$$\begin{aligned} {\Vert A_{J}f-A_{J}^{N}f\Vert _{2}}\le {C(J,f)N^{-\alpha }}. \end{aligned}$$

Note that for any \(j>0\),

$$\begin{aligned} {\Vert (A_{J})^{j}f-(A_{J}^{N})^{j}f\Vert _{2}}\le {C(J,f)^{j}N^{-\alpha }}. \end{aligned}$$

Then, using the series expansion of exponential of operators gives

$$\begin{aligned} {\sup _{{0}\le {t}\le {T}}\Vert (e^{A_{J}t}-e^{A_{J}^{N}t})f\Vert _{2}}\le {e^{C(J,f)T}N^{-\alpha }}. \end{aligned}$$

Next, we consider part (ii). The proof is based on Lemma 4.19 in [4]. By the tightness of \(g^{N}\), it suffices to show that for each compact set \({K}\subset {{\mathscr {H}}_{-J}}\),

$$\begin{aligned} \sup _{{0}\le {t}\le {T}}\sup _{{f}\in {K}}\Vert (e^{A_{J}t}-e^{A_{J}^{N}t})P_{m}f\Vert _{{\mathscr {H}}_{-J}}\rightarrow 0\text{ as } N\rightarrow \infty , \end{aligned}$$

where \(P_{m}f={\sum _{{n}\le {m}}\langle f, e_{n} \rangle e_{n}}\) for fixed m. Recall that the functions in the RKHS \({\mathscr {H}}_{J}\) have the \(\frac{1}{2}\)-Hölder regularity when the kernel J is Lipschitz continuous. Indeed, for \({g}\in {{\mathscr {H}}_{J}}\), using the reproducing property gives

$$\begin{aligned} {|g(x)-g(y)|} =\left| \langle J(x,\cdot )-J(y,\cdot ), g \rangle _{{\mathscr {H}}_{J}}\right|&\le \Vert J(x,\cdot )-J(y,\cdot )\Vert _{{\mathscr {H}}_{J}}\Vert g\Vert _{{\mathscr {H}}_{J}}\\&= \sqrt{J(x,x)-2J(x,y)+J(y,y)}\Vert g\Vert _{{\mathscr {H}}_{J}}\\&\le C(J)\Vert g\Vert _{{\mathscr {H}}_{J}}|x-y|^{\frac{1}{2}}, \end{aligned}$$

where we have used the global Lipschitz continuity of J in the last inequality. Hence, we have

$$\begin{aligned} \Vert (e^{A_{J}t}-e^{A_{J}^{N}t})P_{m}f\Vert _{{\mathscr {H}}_{-J}}&= \sup _{\Vert g\Vert _{{\mathscr {H}}_{J}}=1}\left| \langle (e^{A_{J}t}-e^{A_{J}^{N}t})P_{m}f, g \rangle \right| \\&= \sup _{\Vert g\Vert _{{\mathscr {H}}_{J}}=1}\left| {\sum _{{n}\le {m}}\langle f, e_{n} \rangle \langle e_{n}, (e^{A_{J}t}-e^{A_{J}^{N}t})g \rangle }\right| \\&\le \sup _{\Vert g\Vert _{{\mathscr {H}}_{J}}=1}{\sum _{{n}\le {m}}\Vert f\Vert _{{\mathscr {H}}_{-J}}\Vert e_{n}\Vert _{{\mathscr {H}}_{J}}\Vert (e^{A_{J}t}-e^{A_{J}^{N}t})g\Vert _{2}}\\&\le C(J,T)\Vert f\Vert _{{\mathscr {H}}_{-J}}N^{-\frac{1}{2}}{\sum _{{n}\le {m}}a_{n}^{-\frac{1}{2}}}\\&\rightarrow 0 \end{aligned}$$

as \(N\rightarrow \infty \), where the last inequality follows from part (i). This completes the proof. \(\square \)

Remark 6.2

For the local diffusion model considered in [3, 4], the convergence in (i) cannot be obtained directly from the definition of the exponential of operators because the \(L^{2}\)-norm of \(\Delta _{N}\) may depend on N. Therefore, the authors used the spectral decomposition and higher regularity.

Lemma 6.7

$$\begin{aligned} {\int _{0}^{t}e^{A_{J}^{N}(t-s)}dM^{N}(s)}\rightarrow {\int _{0}^{t}e^{A_{J}(t-s)}dM(s)} \end{aligned}$$

in distribution on \({\mathbb {D}}_{{\mathscr {H}}_{-J}}[0,\infty )\).

Proof

Here, we have

$$\begin{aligned}&{\int _{0}^{t}e^{A_{J}^{N}(t-s)}dM^{N}(s)}-{\int _{0}^{t}e^{A_{J}(t-s)}dM(s)}\nonumber \\&={\int _{0}^{t}\left( e^{A_{J}^{N}(t-s)}-e^{A_{J}(t-s)}\right) dM^{N}(s)}+{\int _{0}^{t}e^{A_{J}(t-s)}d\left( M^{N}(s)-M(s)\right) }. \end{aligned}$$
(6.4)

First, we examine the second term. By integration by parts and the fact that \(M^{N}(0)=0=M(0)\), we have

$$\begin{aligned} {\int _{0}^{t}e^{A_{J}(t-s)}d\left( M^{N}(s)-M(s)\right) }=M^{N}(t)-M(t)+{\int _{0}^{t}e^{A_{J}(t-s)}A_{J}\left( M^{N}(s)-M(s)\right) ds}. \end{aligned}$$

Set the mapping \(h:{\mathbb {D}}_{{\mathscr {H}}_{-J}}[0,\infty )\rightarrow {\mathbb {D}}_{{\mathscr {H}}_{-J}}[0,\infty )\) by

$$\begin{aligned} h(Y(t))=Y(t)+{\int _{0}^{t}e^{A_{J}(t-s)}A_{J}Y(s)ds}, \end{aligned}$$

then it is easy to see that h is continuous. Thus, by the continuous mapping theorem (cf. Theorem 5.1 in [2] and Proposition 2.5 in [20]), the weak convergence of \(M^{N}\) implies that the second term on the RHS of (6.4) converges to 0 in distribution on \({\mathbb {D}}_{{\mathscr {H}}_{-J}}[0,\infty )\). It remains to estimate the first term in (6.4); we can show that

$$\begin{aligned} \sup _{{0}\le {t}\le {T}}\left\| {\int _{0}^{t}\left( e^{A_{J}^{N}(t-s)}-e^{A_{J}(t-s)}\right) dM^{N}(s)}\right\| _{{\mathscr {H}}_{-J}}\rightarrow 0 \quad \text{ in } \text{ probability } \text{ as } N\rightarrow \infty . \end{aligned}$$
(6.5)

Indeed, by integration by parts, we have

$$\begin{aligned} {\int _{0}^{t}\left( e^{A_{J}^{N}(t-s)}-e^{A_{J}(t-s)}\right) dM^{N}(s)}={\int _{0}^{t}\left( e^{A_{J}^{N}(t-s)}A_{J}^{N}-e^{A_{J}(t-s)}A_{J}\right) M^{N}(s)ds}. \end{aligned}$$

Hence, (6.5) follows from Lemma 6.6 (ii) and Doob’s maximal inequality. This completes the proof. \(\square \)

Proof of Theorem 6.1

The proof follows from Lemmas 6.6 and 6.7. \(\square \)

For kernels J satisfying an additional condition, we obtain the continuity of the sample paths of U.

Proposition 6.1

  1. (i)

    \(U_{0}\) is independent of M.

  2. (ii)

    Suppose that \({\int _{0}^{1}J(x,y)dy}=1\) for all \({x}\in {[0,1]}\); then, U(t) has a continuous sample path.

Proof

The proof of (i) can be obtained using a parallel argument to that presented in Lemma 4.15 of [3]. For part (ii), we show that for any \({0}\le {s}\le {t}\le {T}\), there exist \(C>0\), \(p>0\), and \(\beta >1\) such that

$$\begin{aligned} {{\mathbb {E}}\left[ \Vert V(t)-V(s)\Vert _{{\mathscr {H}}_{-J}}^{p}\right] }\le {C(t-s)^{\beta }}, \end{aligned}$$

where \(V(t)={\int _{0}^{t}e^{A_{J}(t-w)}dM(w)}\). We take \(p=4\). By definition of \(\Vert \cdot \Vert _{{\mathscr {H}}_{-J}}\),

$$\begin{aligned} \Vert V(t)-V(s)\Vert _{{\mathscr {H}}_{-J}}^{4}&=\left( {\sum _{n}\langle {\int _{s}^{t}e^{A_{J}(t-w)}dM(w)}, e_{n} \rangle ^{2}a_{n}}\right) ^{2}\\&\le C{\sum _{n}\left\{ {\int _{s}^{t}\exp \Bigl (\bigl (a_{n}-1+b_{1}-d_{1}\bigr )(t-w)\Bigr )d\langle M(w), e_{n} \rangle }\right\} ^{4}|a_{n}|}. \end{aligned}$$

where we have used the Cauchy–Schwarz inequality and the fact that \(\{a_{n}\}\) are absolutely summable. Because M(t) is a Gaussian process, we can obtain the expression

$$\begin{aligned} \langle M(w), e_{n} \rangle ={\int _{0}^{w}(g(e_{n},\sigma ))^{1/2}dB(\sigma )}, \end{aligned}$$

where \(g(e_{n},\sigma )=\langle |R|(u(\sigma )), e_{n}^{2} \rangle +\langle u(\sigma ), {\int _{0}^{1}J(\cdot ,y)(e_{n}(\cdot )-e_{n}(y))^{2}dy} \rangle \), and \(B(\sigma )\) is the standard Brownian motion. Therefore, using the Burkholder–Davis–Gundy inequality, we obtain:

$$\begin{aligned}&{\mathbb {E}}\left[ \left\{ {\int _{s}^{t}\exp \Bigl (\bigl (a_{n}-1+b_{1}-d_{1}\bigr )(t-w)\Bigr )d\langle M(w), e_{n} \rangle }\right\} ^{4}\right] \\&= {\mathbb {E}}\left[ \left\{ {\int _{s}^{t}\exp \Bigl (\bigl (a_{n}-1+b_{1}-d_{1}\bigr )(t-w)\Bigr )(g(e_{n},w))^{1/2}dB(w)}\right\} ^{4}\right] \\&\le \left( {\int _{s}^{t}\exp \Bigl (2\bigl (a_{n}-1+b_{1}-d_{1}\bigr )(t-w)\Bigr )g(e_{n},w)dw}\right) ^{2}\\&\le C(\rho ,J,R)(t-s)^{2}, \end{aligned}$$

where the constant \(C(\rho ,J,R)\) is independent of n. Utilizing the absolute summability of \(\{a_{n}\}\) again, we conclude that there is a constant \(C_{2}=C_{2}(\rho ,J,R)\) such that

$$\begin{aligned} {{\mathbb {E}}\left[ \Vert V(t)-V(s)\Vert _{{\mathscr {H}}_{-J}}^{4}\right] }\le {C_{2}(t-s)^{2}}. \end{aligned}$$

Hence, this completes the proof by applying Kolmogorov’s continuity theorem (see Proposition 10.3 in [7]). \(\square \)