1 Introduction

Let \(J:\mathbb {R}^N\rightarrow \mathbb {R}\) be a nonnegative function such that \(\int _{\mathbb {R}^N}J(x)\,dx=1\). It is known that the nonlocal dispersal equation

$$\begin{aligned} \begin{aligned} u_t(x,t)=J*u(x,t)-u(x,t)=\int _{\mathbb {R}^N}J(x-y)u(y,t)\,dy-u(x,t), \end{aligned} \end{aligned}$$
(1.1)

and its variations have been widely used to model diffusion processes (see e.g. [1, 11, 13]). As stated in [9, 15], if u(x,t) is thought of as a density at position x at time t and the probability distribution that individuals jump from y to x is given by \(J(x-y)\), then \(\int _{\mathbb {R}^N}J(x-y)u(y,t)\,dy\) is the rate at which individuals arrive at position x from all other places, while \(u(x,t)=\int _{\mathbb {R}^N}J(y-x)u(x,t)\,dy\) is the rate at which they leave position x for all other places. In the absence of external sources, this balance immediately yields equation (1.1) for u. For recent references on nonlocal dispersal equations, see [2,3,4, 10, 15, 18, 19, 23] and the references therein.
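
To make this balance of arrival and departure rates concrete, the following Python sketch, which is only an illustration and not part of the analysis, integrates (1.1) in one space dimension by explicit Euler time stepping on a truncated grid; the particular kernel, initial datum, grid and time step are ad hoc choices.

import numpy as np

# truncated spatial grid; the actual problem is posed on all of R
L, n = 20.0, 801
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# a kernel satisfying (A1) in one dimension: J(z) = (1 - |z|)_+ on B(0, 1), unit integral
J = np.where(np.abs(x) < 1.0, 1.0 - np.abs(x), 0.0)
J /= h * J.sum()

u = np.exp(-4 * x**2)                         # smooth bump as a stand-in for u_0
dt, T = 0.01, 1.0
for _ in range(int(T / dt)):
    Ju = h * np.convolve(J, u, mode="same")   # arrival rate (J*u)(x)
    u = u + dt * (Ju - u)                     # minus the departure rate u(x)

print("total mass (conserved up to truncation error):", h * u.sum())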

In this paper, we consider the nonlocal dispersal Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} u_{t}(x,t)=J*u(x,t)-u(x,t) &{}\text { in }\mathbb {R}^{N}\times (0,\infty ),\\ u(x,0)=u_{0}(x)&{}\text { in }\mathbb {R}^{N}, \end{array}\right. } \end{aligned}$$
(1.2)

where the spatial dimension \(N>1\) and J(x), \(u_0(x)\) satisfy the following assumptions.

(A1):

\(J:\mathbb {R}^N\rightarrow \mathbb {R}\) is nonnegative, radial, continuous with unit integral, J is strictly positive in B(0, 1) and vanishes in \(\mathbb {R}^N\backslash B(0,1)\).

(A2):

The function \(u_0\in C^\infty _c(\mathbb {R}^N)\) is nontrivial.

In this case, (1.2) admits a unique global solution u(x,t) such that

$$\begin{aligned} u\in C([0,\infty ); C(\mathbb {R}^N)\cap L^\infty (\mathbb {R}^N))\cap C^1((0,\infty ); C(\mathbb {R}^N)\cap L^\infty (\mathbb {R}^N)), \end{aligned}$$

see e.g. [5]. On the other hand, we know that problem (1.2) is a nonlocal version of the classical heat equation

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t=\Delta _N u=\sum _{i=1}^N \frac{\partial ^2u }{\partial x_i^2} &{}\text { in } \mathbb {R}^N\times (0,\infty ),\\ u(x,0)=u_{0}(x) &{}\text { in }\mathbb {R}^N. \end{array}\right. } \end{aligned}$$
(1.3)

Since the operator \(\Delta _N\) in (1.3) involves all the second partial derivatives of u, we call it an N-dimensional diffusion operator. It is well known that the unique bounded solution \(U_N(x,t)\) of (1.3) is given by

$$\begin{aligned} U_N(x,t)=\frac{1}{(4\pi t)^{N/2}}\int _{\mathbb {R}^N} e^{-\frac{|x-y|^2}{4t}}u_0(y)\,dy \end{aligned}$$

for \(x\in \mathbb {R}^N\) and \(t>0\). Moreover, the solution \(U_N(x,t)\) can be approximated by the solution u(x,t) of (1.2) with suitably rescaled kernel functions. It is therefore an interesting topic to study the approximation of the heat equation by the corresponding nonlocal dispersal equation. The investigation of this approximation problem goes back to the seminal works of Cortazar, Elgueta and Rossi [5] and Cortazar, Elgueta, Rossi and Wolanski [6]. The periodic boundary problem and the eigenvalue problem were studied by Shen and Xie [17]. We refer to Du and Ni [7], Rossi et al. [12, 16], and Sun et al. [8, 20,21,22] for related investigations of approximation problems.
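
As a quick check of this representation formula, the following Python sketch, again only an illustration, evaluates \(U_N(x,t)\) for \(N=2\) by direct quadrature on a truncated grid; the Gaussian datum used here is merely a convenient stand-in for an initial value satisfying (A2), and for this datum the exact value at the origin is \(1/(1+4t)\).

import numpy as np

N = 2
g = np.linspace(-6.0, 6.0, 241)
y1, y2 = np.meshgrid(g, g, indexing="ij")
hy = g[1] - g[0]
u0 = np.exp(-(y1**2 + y2**2))                 # Gaussian stand-in for the initial datum

def U(x, t):
    # heat-kernel representation of the bounded solution of (1.3)
    kernel = np.exp(-((x[0] - y1)**2 + (x[1] - y2)**2) / (4.0 * t))
    return np.sum(kernel * u0) * hy**2 / (4.0 * np.pi * t) ** (N / 2)

for t in (0.1, 0.5, 1.0):
    print(t, U((0.0, 0.0), t), 1.0 / (1.0 + 4.0 * t))   # quadrature versus exact value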

In order to reveal the precise effect of nonlocal property on the dispersal equation, we consider the following nonlocal dispersal equation

$$\begin{aligned} {\left\{ \begin{array}{ll} u^\varepsilon _{t}(x,t)=\frac{1}{\varepsilon ^{\beta }}{\int _{\mathbb {R}^{N}}}J_\alpha ^{\varepsilon }(x-y)[u^{\varepsilon }(y,t)-u^{\varepsilon }(x,t)] \,dy&{}\text { in }\mathbb {R}^N\times (0,\infty ),\\ u^{\varepsilon }(x,0)=u_{0}(x) &{}\text { in }\mathbb {R}^N, \end{array}\right. } \end{aligned}$$
(1.4)

where \(\varepsilon >0\) is a small parameter, \(\alpha ,\beta \) are given positive constants, and the kernel function is given by

$$\begin{aligned} \begin{aligned} J_\alpha ^\varepsilon (\xi )=\frac{1}{d\varepsilon ^{\alpha }}J\left( \frac{\xi _1}{\varepsilon ^{\alpha _1}},\frac{\xi _2}{\varepsilon ^{\alpha _2}},\ldots ,\frac{\xi _N}{\varepsilon ^{\alpha _N}}\right) , \end{aligned} \end{aligned}$$
(1.5)

where the constants \(\alpha _1,\alpha _2,\ldots ,\alpha _N\) are all positive and satisfy

$$\begin{aligned} \alpha =\alpha _1+\alpha _2+\cdots +\alpha _N, \end{aligned}$$

and

$$\begin{aligned} d=\frac{1}{2N}\int _{\mathbb {R}^N}J(y)|y|^2\,dy. \end{aligned}$$

Since J(x) is compactly supported, we see from (1.5) that the nonlocal effect becomes weak when \(\varepsilon \) is small. Note that in (1.5), if

$$\alpha _1=\alpha _2=\cdots =\alpha _N,$$

we call the scaled kernel function \(J_\alpha ^\varepsilon (\xi )\) synchronous, since in this case all spatial variables share the same nonlocal property. Otherwise, the scaled kernel function \(J_\alpha ^\varepsilon (\xi )\) is called asynchronous, and different directions exhibit different nonlocal properties. Our aim is to investigate the effect of synchronous and asynchronous kernel functions on the limiting behavior of solutions of (1.4).
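
The difference between the two scalings can be read off from the moments of \(J_\alpha ^\varepsilon \): its total mass is always \(1/d\), while its second moment in the \(x_i\)-direction is proportional to \(\varepsilon ^{2\alpha _i}\). The following Python sketch, only an illustration with an ad hoc kernel, exponents and quadrature grids, checks this numerically in dimension two for a synchronous and an asynchronous choice of exponents.

import numpy as np

def J(z1, z2):
    # a radial kernel satisfying (A1) in dimension two, up to normalization
    return np.maximum(1.0 - np.hypot(z1, z2), 0.0)

# normalize J to unit integral and compute d = (1/(2N)) \int J(y)|y|^2 dy with N = 2
g = np.linspace(-1.0, 1.0, 601)
y1, y2 = np.meshgrid(g, g, indexing="ij")
h = g[1] - g[0]
Z = h**2 * np.sum(J(y1, y2))
d = h**2 * np.sum(J(y1, y2) * (y1**2 + y2**2)) / Z / 4.0

def moments(eps, a1, a2):
    # mass and directional second moments of the rescaled kernel J_alpha^eps from (1.5)
    s1 = np.linspace(-eps**a1, eps**a1, 401)
    s2 = np.linspace(-eps**a2, eps**a2, 401)
    x1, x2 = np.meshgrid(s1, s2, indexing="ij")
    h1, h2 = s1[1] - s1[0], s2[1] - s2[0]
    K = J(x1 / eps**a1, x2 / eps**a2) / Z / (d * eps**(a1 + a2))
    mass = h1 * h2 * np.sum(K)                 # should be close to 1/d
    m1 = h1 * h2 * np.sum(K * x1**2)           # proportional to eps**(2*a1)
    m2 = h1 * h2 * np.sum(K * x2**2)           # proportional to eps**(2*a2)
    return mass, m1, m2

print("1/d =", 1.0 / d)
for eps in (0.2, 0.1):
    print("eps =", eps, " synchronous :", moments(eps, 0.5, 0.5))
    print("eps =", eps, " asynchronous:", moments(eps, 0.5, 1.0))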

In the rest of the paper, we always assume that (A1)-(A2) hold. Without loss of generality, we may assume that

$$0<\alpha _1\le \alpha _2\le \cdots \le \alpha _N.$$

In this situation, we have \(2\alpha _1=\min \{2\alpha _i: 1\le i\le N\}\).

We state the main results of this paper in the following two theorems. Our first result concerns the asymptotic behavior of the solution of (1.4) when \(\beta =\min \{2\alpha _i: 1\le i\le N\}\).

Theorem 1.1

Suppose that \(\beta =2\alpha _1=\min \{2\alpha _i: 1\le i\le N\}\) and that there exists \(k\in [1,N)\) such that \(\alpha _1=\cdots =\alpha _k<\alpha _{k+1}\). Let \(u^{\varepsilon }(x,t)\) be the unique solution of (1.4) for \(\varepsilon >0\). Then we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}\sup \limits _{t\in [0,T]}\Vert u^{\varepsilon }(\cdot ,t)-\tilde{U}_k(\cdot ,t)\Vert _{L^\infty (\mathbb {R}^N)}=0, \end{aligned}$$

where \(\tilde{U}_k(x,t)\) stands for the unique solution of the k-dimensional diffusion equation

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t=\Delta _k u &{}\text { in } \quad \mathbb {R}^k\times (0,\infty ),\\ u(x,0)=u_{0k}(x) &{}\text { in }\quad \mathbb {R}^k, \end{array}\right. } \end{aligned}$$
(1.6)

where \(u_{0k}(x)=u_0(x_1,x_2,\ldots ,x_k,X)\) for any given \(X\in \mathbb {R}^{N-k}\).

The conclusion of Theorem 1.1 reveals the different effects of asynchronous kernel functions on the solution \(u^{\varepsilon }(x,t)\) of the nonlocal dispersal equation (1.4). In the synchronous case \(\alpha _1=\alpha _2=\cdots =\alpha _N\), it follows from [5, 6, 17, 21] that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}u^{\varepsilon }(x,t)=U_N(x,t)=\frac{1}{(4\pi t)^{N/2}}\int _{\mathbb {R}^N} e^{-\frac{|x-y|^2}{4t}}u_0(y)\,dy \end{aligned}$$

for any \((x,t)\in \mathbb {R}^N\times (0,\infty )\). This implies that the nonlocal dispersal equation behaves like the Laplace diffusion when the nonlocal effect is small. In the current paper, we establish the approximation results for (1.4) by adapting the arguments and techniques developed in [5, 6]. In the asynchronous case, however, it follows from Theorem 1.1 that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}u^{\varepsilon }(x,t)\!=\!\tilde{U}_k(x,t)\!=\!\frac{1}{(4\pi t)^{k/2}}\!\int _{\mathbb {R}^k} e^{-\frac{\sum _{i\!=\!1}^k(x_i-y_i)^2}{4t}}u_{0k}(y_1,y_2,\ldots ,y_k,X)\,dy_1dy_2\ldots dy_k \end{aligned}$$

for any \((x,t)\in \mathbb {R}^N\times (0,\infty )\). In this situation, the nonlocal dispersal behaves like the Laplace diffusion in the lower-dimensional space \(\mathbb {R}^k\). In the special case \(k=1\), we find that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}u^{\varepsilon }(x,t)=\tilde{U}_1(x_1,X_{N-1},t)=\frac{1}{(4\pi t)^{1/2}}\int _{\mathbb {R}} e^{-\frac{|x_1-y_1|^2}{4t}}u_{01}(y_1,X_{N-1})\,dy_1 \end{aligned}$$

for any \((x,t)\in \mathbb {R}^N\times (0,\infty )\), where \(x=(x_1,X_{N-1})\). We can see that \(\tilde{U}_1(x,t)=\tilde{U}_1(x_1,X_{N-1},t)\) is the unique solution of the 1-dimensional diffusion equation

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x_1,X_{N-1},t)=u_{x_1x_1}(x_1,X_{N-1},t) &{}\text { in } \quad \mathbb {R}\times (0,\infty ),\\ u(x_1,X_{N-1},0)=u_0(x_1,X_{N-1}) &{}\text { in }\quad \mathbb {R} \end{array}\right. } \end{aligned}$$

for any given \(X_{N-1}\in \mathbb {R}^{N-1}\).
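
The mechanism behind this dimension reduction can already be observed at the level of the operator \(L_{\varepsilon }(v)(x)=\varepsilon ^{-\beta }\int _{\mathbb {R}^N}J_\alpha ^{\varepsilon }(x-y)[v(y)-v(x)]\,dy\) used in Sect. 3: with \(\beta =2\alpha _1\), an asynchronous scaling \(\alpha _1<\alpha _2\) makes \(L_{\varepsilon }\phi \) approach \(\phi _{x_1x_1}\) only, while the synchronous scaling recovers the full Laplacian. The following Python sketch, only an illustration with an ad hoc test function, kernel and quadrature parameters, evaluates \(L_{\varepsilon }\phi \) at a fixed point in dimension two for both scalings.

import numpy as np

# smooth test function and its pure second derivatives
phi = lambda x1, x2: np.exp(-(x1**2 + x2**2))
phi_11 = lambda x1, x2: (4 * x1**2 - 2) * np.exp(-(x1**2 + x2**2))
phi_22 = lambda x1, x2: (4 * x2**2 - 2) * np.exp(-(x1**2 + x2**2))

# kernel satisfying (A1) in dimension two, normalized, with d = (1/4) \int J(y)|y|^2 dy
g = np.linspace(-1.0, 1.0, 401)
y1, y2 = np.meshgrid(g, g, indexing="ij")
h = g[1] - g[0]
J = np.maximum(1.0 - np.hypot(y1, y2), 0.0)
J /= h**2 * J.sum()
d = h**2 * np.sum(J * (y1**2 + y2**2)) / 4.0

def L_eps(x, eps, a1, a2, beta):
    # eps^{-beta} \int J_alpha^eps(x - y)[phi(y) - phi(x)] dy after the change of variables y -> x - eps^a * z
    vals = phi(x[0] - eps**a1 * y1, x[1] - eps**a2 * y2) - phi(x[0], x[1])
    return h**2 * np.sum(J * vals) / (d * eps**beta)

x = (0.3, 0.2)
print("targets:  phi_x1x1 =", phi_11(*x), "  Laplacian =", phi_11(*x) + phi_22(*x))
for eps in (0.4, 0.2, 0.1, 0.05):
    sync = L_eps(x, eps, a1=1.0, a2=1.0, beta=2.0)     # synchronous, beta = 2*alpha_1
    asyn = L_eps(x, eps, a1=1.0, a2=1.5, beta=2.0)     # asynchronous, alpha_2 > alpha_1
    print(eps, "synchronous ->", sync, "  asynchronous ->", asyn)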

Remark 1.2

The interesting result appears when the kernel function admits asynchronous scalings, so that different directions carry nonuniform nonlocal properties. Thanks to (1.6), it is easily seen that the initial nonlocal dispersal takes place in the whole space \(\mathbb {R}^N\), but the limiting behavior is that of a k-dimensional diffusion as \(\varepsilon \rightarrow 0\). Consequently, the nonlocal dispersal equation (1.4) can behave like a lower-dimensional diffusion equation when the nonlocal property is asynchronous. In fact, the nonlocal dispersal in (1.4) can behave like the k-dimensional diffusion operator

$$\begin{aligned} \Delta _k=\sum _{i=1}^k \frac{\partial ^2 }{\partial x_i^2} \end{aligned}$$

for \(k=1,2,3,\ldots ,N\). The techniques and ideas developed in this paper can be modified to treat the more general nonlocal dispersal problem

$$\begin{aligned} {\left\{ \begin{array}{ll} u^\varepsilon _{t}(x,t)=\frac{1}{\varepsilon ^{\beta }}{\int _{\mathbb {R}^{N}}}J_\alpha ^{\varepsilon }(x-y)[u^{\varepsilon }(y,t)-u^{\varepsilon }(x,t)]\,dy+f(x,t) &{}\text { in }\quad \mathbb {R}^N\times (0,\infty ),\\ u^{\varepsilon }(x,0)=u_{0}(x) &{}\text { in }\quad \mathbb {R}^N. \end{array}\right. } \end{aligned}$$

The next result provides the limiting behavior of solutions to (1.4) when \(\beta \not =2\alpha _1=\min \{2\alpha _i: 1\le i\le N\}\).

Theorem 1.3

Let \(u^{\varepsilon }(x,t)\) be the unique solution of (1.4) for \(\varepsilon >0\).

  1. (i)

    If \(\beta <2\alpha _1=\min \{2\alpha _i: 1\le i\le N\}\), we have

    $$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}u^{\varepsilon }(x,t)=u_0(x) \text { uniformly in }\mathbb {R}^N\times [0,T]. \end{aligned}$$
    (1.7)
  2. (ii)

    If \(\alpha _1=\alpha _2=\cdots =\alpha _N\) and \(\beta \in (2\alpha _1,3\alpha _1)\), we have

    $$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}u^{\varepsilon }(x,t)=0 \text { uniformly in }\mathbb {R}^N\times [T_0,T] \end{aligned}$$
    (1.8)

    for any \(T_0\in (0,T)\).

Remark 1.4

By (1.7), the solution tends to the initial value \(u_0(x)\) as \(\varepsilon \rightarrow 0\) when \(\beta <2\alpha _1\). It then follows that the nonlocal dispersal equation (1.4) behaves like the ODE

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x,t)=0,\;\;t>0,\\ u(x,0)=u_0(x), \end{array}\right. } \end{aligned}$$

where x plays only the role of a parameter. Thus there is a quenching phenomenon for the dispersal in (1.4) as \(\varepsilon \rightarrow 0\). On the other hand, if \(\beta \in (2\alpha _1,3\alpha _1)\), it follows from (1.8) that the solution of (1.4) converges to the trivial solution for any \(t>0\). Indeed, the assumption \(\beta \in (2\alpha _1,3\alpha _1)\) corresponds to a large dispersal rate in (1.4), see e.g. [19], and the solution therefore quenches in the whole space \(\mathbb {R}^N\). It is interesting to point out that whether the scaled kernel function is synchronous or asynchronous does not affect the limiting behavior of solutions if either \(\beta <2\alpha _1\) or \(\beta \in (2\alpha _1,3\alpha _1)\).
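
Both regimes can be illustrated by a quick numerical experiment. The following Python sketch, only an illustration whose kernel, initial datum, exponents and discretization are ad hoc choices, solves a one-dimensional analogue of (1.4) for a fixed small \(\varepsilon \) and two values of \(\beta \) with \(\alpha _1=1\): for \(\beta <2\alpha _1\) the computed solution stays close to \(u_0\) at time T, while for \(\beta \in (2\alpha _1,3\alpha _1)\) its sup-norm has already decayed substantially; both effects become more pronounced as \(\varepsilon \) decreases.

import numpy as np

L, n = 16.0, 1601
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
u0 = np.exp(-8 * x**2)                        # smooth bump standing in for u_0

# one-dimensional kernel satisfying (A1): J(z) = (1 - |z|)_+ with unit integral, d = (1/2) \int J(z) z^2 dz
zg = np.linspace(-1.0, 1.0, 401)
hz = zg[1] - zg[0]
Jz = 1.0 - np.abs(zg)
Jz /= hz * Jz.sum()
d = 0.5 * hz * np.sum(Jz * zg**2)

def solve(eps, alpha1, beta, T):
    # explicit Euler for u_t = eps^{-beta} \int J_alpha^eps(x-y)[u(y)-u(x)] dy on a truncated grid
    m = int(np.ceil(eps**alpha1 / h))
    s = np.arange(-m, m + 1) * h              # grid covering the support of J_alpha^eps
    Jeps = np.interp(s / eps**alpha1, zg, Jz, left=0.0, right=0.0) / (d * eps**alpha1)
    mass = h * Jeps.sum()                     # approximates \int J_alpha^eps = 1/d
    dt = 0.5 * d * eps**beta                  # crude explicit-Euler stability restriction
    u = u0.copy()
    for _ in range(int(round(T / dt))):
        Ju = h * np.convolve(u, Jeps, mode="same")
        u = u + dt * (Ju - mass * u) / eps**beta
    return u

alpha1, T = 1.0, 1.0
u = solve(0.05, alpha1, beta=0.25, T=T)       # beta < 2*alpha_1: solution stays near u_0
print("beta=0.25:", "sup|u - u0| =", np.max(np.abs(u - u0)))
u = solve(0.10, alpha1, beta=2.50, T=T)       # beta in (2*alpha_1, 3*alpha_1): solution flattens out
print("beta=2.50:", "sup|u|      =", np.max(np.abs(u)))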

The rest of the paper is organized as follows. In Sect. 2 we give some preliminaries. Section 3 is devoted to the study of the effect of synchronous and asynchronous kernel functions on the solution of (1.4). In Sect. 4, we investigate the limiting behavior of (1.4) when \(\beta \not =2\alpha _{1}\) and prove Theorem 1.3.

2 Preliminaries

In this section, we present some basic results on the existence and uniqueness of solutions to nonlocal dispersal equations. To do this, we consider the following nonlocal dispersal equation

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x,t)=\int _{\mathbb {R}^{N}}J(x-y)u(y,t)\,dy-u(x,t)+f(x,t)&{}\text { in }\mathbb {R}^{N}\times (0,\infty ),\\ u(x,0)=u_{0}(x)&{}\text { in }\mathbb {R}^{N}, \end{array}\right. } \end{aligned}$$
(2.1)

where \(f\in C(\mathbb {R}^N)\cap L^\infty (\mathbb {R}^N)\) is a given function.

Existence and uniqueness of solutions to (2.1) follow from classical semigroup theory (see, e.g., the book of Pazy [14]). Let \(X=C(\mathbb {R}^N)\cap L^\infty (\mathbb {R}^N)\), and let \(\mathcal {G}: X\rightarrow X\) be defined by

$$ \mathcal {G} u(x)=\int _{\mathbb {R}^N} J(x-y)u(y)\,dy-u(x). $$

Then \(\mathcal {G}:X\rightarrow X\) is a bounded linear operator. Hence for any \(u_0\in X\), (2.1) has a unique solution \(u(t,x;u_0)\) with \(u(0,x;u_0)=u_0(x)\) (see Theorem 1.2 in Chapter 1 of [14]). In fact, when \(f\equiv 0\),

$$ u(t,\cdot ;u_0)=e^{\mathcal {G} t} u_0(\cdot ). $$
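
In a discretized setting this semigroup representation can be realized quite literally. The following Python sketch, only an illustration with an ad hoc grid and kernel, assembles the matrix of the discretized operator \(\mathcal {G}\) for the homogeneous problem (\(f\equiv 0\)) and applies the matrix exponential from scipy, cross-checking it against explicit Euler time stepping of the same system.

import numpy as np
from scipy.linalg import expm

# truncated grid and the matrix of the discretized operator (G u)_i = sum_j h J(x_i - x_j) u_j - u_i
n = 201
x = np.linspace(-10.0, 10.0, n)
h = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
K = np.maximum(1.0 - np.abs(X1 - X2), 0.0)    # J(z) = (1 - |z|)_+ has unit integral
G = h * K - np.eye(n)

u0 = np.exp(-x**2)
t = 1.0
u_semigroup = expm(t * G) @ u0                # u(t, .) = e^{G t} u_0

# cross-check against explicit Euler time stepping of u' = G u
u, dt = u0.copy(), 1e-3
for _ in range(int(t / dt)):
    u = u + dt * (G @ u)
print("max discrepancy between e^{Gt}u0 and Euler:", np.max(np.abs(u - u_semigroup)))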

Now we give the definition of sub-super solutions to (2.1) and the corresponding comparison principle.

Definition 2.1

A bounded function \(u\in C^1([0,T); C(\mathbb {R}^{N}))\) is a super-solution to (2.1) if

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x,t)\ge \int _{\mathbb {R}^{N}}J(x-y)u(y,t)\,dy-u(x,t)+f(x,t) &{}\text { in }\mathbb {R}^{N}\times (0,\infty ),\\ u(x,0)\ge u_{0}(x)&{}\text { in }\mathbb {R}^{N}. \end{array}\right. } \end{aligned}$$

The sub-solution is defined analogously by reversing the inequalities.

Theorem 2.2

Assume that u(x,t) and v(x,t) are a pair of super- and sub-solutions to (2.1), respectively. Then \(u(x,t)\ge v(x,t)\) for \((x,t)\in \mathbb {R}^{N}\times [0,\infty )\).

Proof

This follows from [5, Corollary 2.2]. \(\square \)

Remark 2.3

We can see that u(x,t) is a solution to (2.1) with the initial value \(u_0(x)\) if and only if

$$\begin{aligned} u(x,t)=e^{-t}u_0(x)+\int _0^t\int _{\mathbb {R}^N}J(x-y)e^{-(t-s)}u(y,s)\,dyds+\int _0^te^{-(t-s)}f(x,s)\,ds \end{aligned}$$

for \((x,t)\in \mathbb {R}^N\times (0,\infty )\).
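
In the same discretized setting as above, this integral identity can be verified directly. The following Python sketch, only an illustration with \(f\equiv 0\) and ad hoc discretization choices, computes the semigroup solution on a time grid and checks that it satisfies the identity up to quadrature error.

import numpy as np
from scipy.linalg import expm

n, nt, T = 201, 201, 1.0
x = np.linspace(-10.0, 10.0, n)
h = x[1] - x[0]
s = np.linspace(0.0, T, nt)
ds = s[1] - s[0]

X1, X2 = np.meshgrid(x, x, indexing="ij")
K = h * np.maximum(1.0 - np.abs(X1 - X2), 0.0)     # quadrature of \int J(x - y)u(y) dy
G = K - np.eye(n)
u0 = np.exp(-x**2)

# semigroup solution u(., s_k) on the time grid (here f = 0)
E = expm(ds * G)
U = [u0]
for _ in range(nt - 1):
    U.append(E @ U[-1])
U = np.array(U)

# right-hand side of the integral identity at t = T, time integral by the trapezoid rule
vals = np.exp(-(T - s))[:, None] * (U @ K.T)
rhs = np.exp(-T) * u0 + ds * (vals[1:] + vals[:-1]).sum(axis=0) / 2.0
print("maximal defect in the integral identity:", np.max(np.abs(U[-1] - rhs)))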

We then have the following result on the existence and uniqueness of solutions to the nonlocal problem (2.1).

Theorem 2.4

For every \(u_0\in C(\mathbb {R}^N)\cap L^\infty (\mathbb {R}^N)\), there exists a unique solution u(x,t) to (2.1) such that

$$\begin{aligned} u\in C([0,\infty ); C(\mathbb {R}^N)\cap L^\infty (\mathbb {R}^N))\cap C^1((0,\infty ); C(\mathbb {R}^N)\cap L^\infty (\mathbb {R}^N)), \end{aligned}$$

and there exists \(C>0\) such that

$$\begin{aligned} -C\le u(x,t)\le C \end{aligned}$$

for \((x,t)\in \mathbb {R}^N\times [0,T]\).

Proof

The proof follows from the semigroup argument and the comparison principle; we omit the details here. \(\square \)

3 Effect of Asynchronous Kernel Functions

In this section, we assume that \(\beta =2\alpha _1=\min \{2\alpha _i: 1\le i\le N\}\) and prove our main result Theorem 1.1.

Proof of Theorem 1.1

The proof is divided into the following two steps.

Step 1. In this step, we consider the case

$$\alpha _1<\alpha _2=\min \{\alpha _i: 2\le i\le N\}.$$

In this situation, we have \(\beta =2\alpha _1<2\alpha _2\). Let \(\tilde{U}_1(x,t)=\tilde{U}_1(x_1,X_{N-1},t)\) be the unique solution of the 1-dimensional diffusion equation

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x_1,X_{N-1},t)=u_{x_1x_1}(x_1,X_{N-1},t) &{}\text { in }\quad \mathbb {R}\times (0,\infty ),\\ u(x_1,X_{N-1},0)=u_0(x_1,X_{N-1}) &{}\text { in }\quad \mathbb {R}, \end{array}\right. } \end{aligned}$$

where \(X_{N-1}\in \mathbb {R}^{N-1}\) is given. Letting

$$\begin{aligned} L_{\varepsilon }(v)=\frac{1}{\varepsilon ^{\beta }}{\int _{\mathbb {R}^{N}}}J_\alpha ^{\varepsilon }(x-y)[v(y,t)-v(x,t)]\,dy, \end{aligned}$$

we obtain that \(\tilde{U}_1(x,t)\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} u_{t}(x,t)=L_{\varepsilon }(u)(x,t)+F_{\varepsilon }(x,t),&{}x\in \mathbb {R}^{N},t\in (0,T],\\ u(x,0)=u_{0}(x),&{}x\in \mathbb {R}^{N}, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} F_{\varepsilon }(x,t)=-L_{\varepsilon }(\tilde{U}_1)(x,t)+ \frac{\partial ^2 \tilde{U}_1(x,t)}{\partial x_1^2}. \end{aligned}$$

The existence and uniqueness of the solution \(u^{\varepsilon }(x,t)\) to (1.4) follow from Theorem 2.4. Denoting \(w^{\varepsilon }(x,t)=\tilde{U}_1(x,t)-u^{\varepsilon }(x,t)\), we get

$$\begin{aligned} {\left\{ \begin{array}{ll} w^{\varepsilon }_t(x,t)=L_{\varepsilon }(w^{\varepsilon })(x,t)+F_{\varepsilon }(x,t),&{}x\in \mathbb {R}^{N},t\in (0,T],\\ w^{\varepsilon }(x,0)=0,&{}x\in \mathbb {R}^{N}. \end{array}\right. } \end{aligned}$$

Note that \(\tilde{U}_1\in C^{\infty ,1}(\mathbb {R}^{N}\times (0,T])\). We claim that there exist \(C>0\) and \(\eta \in (0,\min \{\alpha _1/2,2(\alpha _2-\alpha _1)\})\) such that

$$\begin{aligned} \max _{t\in [0,T]}\Vert F_{\varepsilon }(\cdot ,t)\Vert _{L^\infty (\mathbb {R}^{N})} \le C\varepsilon ^{\eta }. \end{aligned}$$
(3.1)

In fact, we know that

$$\begin{aligned} \begin{aligned}&\frac{\partial ^2\tilde{U}_1(x,t)}{\partial x_1^2}-L_{\varepsilon }(\tilde{U}_1)(x,t)\\&\quad =\frac{\partial ^2\tilde{U}_1(x,t)}{\partial x_1^2}-\frac{1}{\varepsilon ^{\beta }}\int _{\mathbb {R}^{N}}J_\alpha ^{\varepsilon }(x-y)\left[ \tilde{U}_1(y,t)-\tilde{U}_1(x,t)\right] \,dy\\&\quad =\frac{\partial ^2\tilde{U}_1(x,t)}{\partial x_1^2}-\frac{1}{d\varepsilon ^{\alpha +\beta }}\int _{\mathbb {R}^{N}}J\left( \frac{x_1-y_1}{\varepsilon ^{\alpha _1}},\frac{x_2-y_2}{\varepsilon ^{\alpha _2}},\ldots ,\frac{x_N-y_N}{\varepsilon ^{\alpha _N}}\right) \left[ \tilde{U}_1(y,t)-\tilde{U}_1(x,t)\right] \,dy\\&\quad =\frac{\partial ^2\tilde{U}_1(x,t)}{\partial x_1^2}-\frac{1}{d\varepsilon ^{\beta }}\int _{B(0,1)}J(y)\left[ \tilde{U}_1(x_1-\varepsilon ^{\alpha _1}y_1,x_2-\varepsilon ^{\alpha _2}y_2,\ldots ,x_N-\varepsilon ^{\alpha _N}y_N,t)-\tilde{U}_1(x,t)\right] \,dy, \end{aligned} \end{aligned}$$

here \(x=(x_1,x_2,\ldots ,x_N)\), \(y=(y_1,y_2,\ldots ,y_N)\) and \(dy=dy_1dy_2\ldots dy_N\).

On the other hand, since

$$\begin{aligned} \beta =2\alpha _1,\,\,\alpha _1<\min \{\alpha _i: 2\le i\le N\}, \end{aligned}$$

we have

$$\begin{aligned} 2\alpha _i-\beta \ge 2\alpha _2-\beta =2(\alpha _2-\alpha _1)>0\,\,\text { for }i\ge 2. \end{aligned}$$

It then follows that

$$\begin{aligned} \begin{aligned}&\frac{1}{d\varepsilon ^{\beta }}\int _{B(0,1)}J(y) \left[ \tilde{U}_1(x_1-\varepsilon ^{\alpha _1}y_1,x_2-\varepsilon ^{\alpha _2}y_2,\ldots ,x_N-\varepsilon ^{\alpha _N}y_N,t)-\tilde{U}_1(x,t)\right] \,dy\\&\quad =-\frac{1}{d}\sum \limits _{i=1}^{N}\left( \varepsilon ^{\alpha _i-\beta }\int _{\mathbb {R}^{N}}J(y)y_{i}\,dy\frac{\partial \tilde{U}_1(x,t)}{\partial x_i}\right) \\&\quad \quad +\frac{1}{2d}\sum \limits _{i,j=1}^{N} \left( \varepsilon ^{\alpha _i+\alpha _j-\beta }\int _{\mathbb {R}^{N}} J(y)y_{i}y_{j}\,dy\frac{\partial ^2 \tilde{U}_1(x,t)}{\partial x_i\partial x_j}\right) +O(\varepsilon ^{\alpha _1/2}). \end{aligned} \end{aligned}$$

Observe that

$$\begin{aligned} \int _{\mathbb {R}^{N}} J(y)y_{i}\,dy=0 \end{aligned}$$

for \(i=1,2\ldots ,N\) and

$$\begin{aligned} \int _{\mathbb {R}^{N}}J(y)y_{i}y_{j}\,dy=0 \end{aligned}$$

for \(i,j=1,2\ldots ,N\) and \(i\ne j\). Consequently, we have

$$\begin{aligned} \begin{aligned}&\frac{\partial ^2\tilde{U}_1(x,t)}{\partial x_1^2}-L_{\varepsilon }(\tilde{U}_1)(x,t)\\&\quad =-\sum \limits _{i=2}^{N} \left( \varepsilon ^{2\alpha _i-\beta }\frac{\partial ^2 \tilde{U}_1(x,t)}{\partial x_i^2}\right) +O(\varepsilon ^{\alpha _1/2})\\&\quad =O(\varepsilon ^{\eta }). \end{aligned} \end{aligned}$$
(3.2)

It follows from (3.2) that (3.1) is true.
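
The moment identities used above, together with the normalization \(\int _{\mathbb {R}^N}J(y)y_i^2\,dy=2d\) (a consequence of the radial symmetry of J and the definition of d), which turns the second-order term into exactly \(\sum _{i}\varepsilon ^{2\alpha _i-\beta }\partial ^2_{x_i}\tilde{U}_1\), can be checked numerically. The following Python sketch, only an illustration with an ad hoc radial kernel and quadrature grid, verifies them in dimension two.

import numpy as np

# radial kernel satisfying (A1) in dimension N = 2: J(y) = c (1 - |y|)_+, normalized to unit integral
g = np.linspace(-1.0, 1.0, 801)
y1, y2 = np.meshgrid(g, g, indexing="ij")
h = g[1] - g[0]
J = np.maximum(1.0 - np.hypot(y1, y2), 0.0)
J /= h**2 * J.sum()

d = h**2 * np.sum(J * (y1**2 + y2**2)) / (2 * 2)          # d = (1/(2N)) \int J(y)|y|^2 dy

print("first moments  :", h**2 * np.sum(J * y1), h**2 * np.sum(J * y2))          # both ~ 0
print("mixed moment   :", h**2 * np.sum(J * y1 * y2))                            # ~ 0
print("second moments :", h**2 * np.sum(J * y1**2), h**2 * np.sum(J * y2**2))    # both ~ 2d
print("2d             :", 2 * d)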

Now, denoting

$$\begin{aligned} \overline{w}(x,t)=C\varepsilon ^{\eta }t, \end{aligned}$$

we have

$$\begin{aligned} \overline{w}_{t}(x,t)-L_{\varepsilon }(\overline{w})(x,t)=C\varepsilon ^{\eta } \ge F_{\varepsilon }(x,t)=w^{\varepsilon }_{t}(x,t)-L_{\varepsilon } (w^{\varepsilon })(x,t) \end{aligned}$$
(3.3)

for \(x\in \mathbb {R}^{N}\) and \(t>0\). Moreover, it is clear that

$$\begin{aligned} \overline{w}(x,0)= w^{\varepsilon }(x,0)=0. \end{aligned}$$
(3.4)

Thanks to (3.3)–(3.4), from the comparison principle we know that

$$\begin{aligned} w^{\varepsilon }(x,t)\le \overline{w}(x,t)=C\varepsilon ^{\eta }t. \end{aligned}$$

In a similar way, we can show that

$$\begin{aligned} w^{\varepsilon }(x,t)\ge \underline{w}(x,t)=-C\varepsilon ^{\eta }t. \end{aligned}$$

Thus

$$\begin{aligned} \sup \limits _{t\in [0,T]}\Vert u^{\varepsilon }(\cdot ,t)-\tilde{U}_1(\cdot ,t)\Vert _{L^\infty (\mathbb {R}^N)}\le C\varepsilon ^{\eta }T\rightarrow 0\quad \text {as}\quad \varepsilon \rightarrow 0. \end{aligned}$$

Step 2. In this step, we consider the case

$$\alpha _1=\alpha _2=\cdots =\alpha _k<\min \{\alpha _i: k+1\le i\le N\}$$

for some \(1<k<N\).

Since

$$\begin{aligned} \alpha _k<\alpha _{k+1}\le \alpha _{k+2}\le \cdots \le \alpha _{N}, \end{aligned}$$

by a similar argument as in the proof of (3.1), we can show that there exist \(C_1>0\) and \(\eta _1>0\) such that

$$\begin{aligned} -C_1\varepsilon ^{\eta _1}\le \sum _{i=1}^k\frac{\partial ^2 \tilde{U}_k(x,t)}{\partial x_i^2}-L_{\varepsilon }(\tilde{U}_k)(x,t)\le C_1\varepsilon ^{\eta _1} \end{aligned}$$

for \((x,t)\in \mathbb {R}^N\times [0,T]\), where \(\tilde{U}_k(x,t)\) is the unique solution of (1.6). Hence, arguing as in Step 1, we can show that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}\sup \limits _{t\in [0,T]}\Vert u^{\varepsilon }(\cdot ,t)-\tilde{U}_k(\cdot ,t)\Vert _{L^\infty (\mathbb {R}^N)}=0. \end{aligned}$$

\(\square \)

4 Quenching Phenomena for Dispersal and Solution

In this section, we consider the limiting behavior of solutions to (1.4) when \(\beta \not =2\alpha _1=\min \{2\alpha _i: 1\le i\le N\}\). We first consider the case \(\beta <2\alpha _1\). It turns out that there is a quenching phenomenon of the nonlocal dispersal when \(\varepsilon \rightarrow 0\).

Lemma 4.1

Suppose that \(\beta <2\alpha _1=\min \{2\alpha _i: 1\le i\le N\}\). Let \(u^{\varepsilon }(x,t)\) be the unique solution of (1.4) for \(\varepsilon >0\). Then we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}u^{\varepsilon }(x,t)=u_0(x) \text { uniformly in }\mathbb {R}^N\times [0,T]. \end{aligned}$$
(4.1)

Proof

Fix \(\gamma >0\). Let \(u^\gamma (x,t)\) be the unique solution of the ODE

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x,t)=\gamma ,\;\;t>0,\\ u(x,0)=u_0(x) \end{array}\right. } \end{aligned}$$

for any given \(x\in \mathbb {R}^N\). It is apparent that \(u^\gamma (x,t)=\gamma t+u_0(x)\) and

$$\begin{aligned} \lim _{\gamma \rightarrow 0+}u^\gamma (x,t)=u_0(x)\text { uniformly in }\mathbb {R}^N\times [0,T]. \end{aligned}$$
(4.2)

Note that \(u_0(x)\) is smooth. This, together with the fact that \(\beta <2\alpha _1\), implies that there exists \(\varepsilon _0>0\) such that

$$\begin{aligned} \frac{1}{d\varepsilon ^{\beta }}\int _{B(0,1)}J(y) [u_0(x_1-\varepsilon ^{\alpha _1}y_1,x_2-\varepsilon ^{\alpha _2}y_2,\ldots ,x_N-\varepsilon ^{\alpha _N}y_N)-u_0(x)]\,dy \le \gamma \end{aligned}$$

for \(\varepsilon \le \varepsilon _0\). Therefore,

$$\begin{aligned} \begin{aligned}&u^\gamma _t(x,t)-\frac{1}{\varepsilon ^{\beta }}{\int _{\mathbb {R}^{N}}}J_\alpha ^{\varepsilon }(x-y)[u^\gamma (y,t)-u^\gamma (x,t)]\,dy\\&\quad =\gamma -\frac{1}{d\varepsilon ^{\alpha +\beta }}\int _{\mathbb {R}^{N}}J\left( \frac{x_1-y_1}{\varepsilon ^{\alpha _1}},\frac{x_2-y_2}{\varepsilon ^{\alpha _2}},\ldots ,\frac{x_N-y_N}{\varepsilon ^{\alpha _N}}\right) [u_0(y)-u_0(x)]\,dy\\&\quad =\gamma -\frac{1}{d\varepsilon ^{\beta }}\int _{B(0,1)}J(y) [u_0(x_1-\varepsilon ^{\alpha _1}y_1,x_2-\varepsilon ^{\alpha _2}y_2,\ldots ,x_N-\varepsilon ^{\alpha _N}y_N)-u_0(x)]\,dy\\&\quad \ge 0 \end{aligned} \end{aligned}$$

for \(\varepsilon \le \varepsilon _0\). The comparison principle then yields

$$\begin{aligned} u^\gamma (x,t)\ge u^\varepsilon (x,t) \end{aligned}$$

for \((x,t)\in \mathbb {R}^{N}\times [0,T]\) and \(\varepsilon \le \varepsilon _0\). Thus we have

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0+}u^\varepsilon (x,t)\le u^\gamma (x,t). \end{aligned}$$

This along with (4.2) implies that

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0+}u^\varepsilon (x,t)\le u_0(x). \end{aligned}$$
(4.3)

On the other hand, let \(u_\gamma (x,t)\) be the unique solution of the ODE

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x,t)=-\gamma ,\;\;t>0,\\ u(x,0)=u_0(x) \end{array}\right. } \end{aligned}$$

for \(\gamma >0\). In a similar way, we can show that

$$\begin{aligned} \liminf _{\varepsilon \rightarrow 0+}u^\varepsilon (x,t)\ge u_0(x). \end{aligned}$$
(4.4)

Then (4.1) follows by (4.3) and (4.4). The proof is thus completed. \(\square \)

Our main result Theorem 1.3 now follows from Lemma 4.1 and the following lemma.

Lemma 4.2

Suppose that \(\alpha _1=\alpha _2=\cdots =\alpha _N\) and \(\beta \in (2\alpha _1,3\alpha _1)\). Let \(u^{\varepsilon }(x,t)\) be the unique solution of (1.4) for \(\varepsilon >0\). Then

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}u^{\varepsilon }(x,t)=0 \text { uniformly in }\mathbb {R}^N\times [T_0,T] \end{aligned}$$
(4.5)

for any given \(T_0\in (0,T)\).

Proof

Let U(xt) be the unique solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x,t)=\Delta _N u,\;\;x\in \mathbb {R}^N,\;t>0,\\ u(x,0)=u_0(x),\;\;x\in \mathbb {R}^N, \end{array}\right. } \end{aligned}$$

and \(\tilde{u}^\varepsilon (x,t)\) be the unique solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t(x,t)=\varepsilon ^{2\alpha _1-\beta }\Delta _N u,\;\;x\in \mathbb {R}^N,\;t>0,\\ u(x,0)=u_0(x),\;\;x\in \mathbb {R}^N, \end{array}\right. } \end{aligned}$$

respectively. In this situation, we have

$$\begin{aligned} \tilde{u}^\varepsilon (x,t)=U(x,\varepsilon ^{2\alpha _1-\beta }t) =\frac{1}{(4\pi t \varepsilon ^{2\alpha _1-\beta })^{N/2}}\int _{\mathbb {R}^N} e^{-\frac{|y|^2}{4t \varepsilon ^{2\alpha _1-\beta }}}u_0(x-y)\,dy. \end{aligned}$$

Since \(\beta >2\alpha _1\), we have \(\varepsilon ^{2\alpha _1-\beta }\rightarrow \infty \) as \(\varepsilon \rightarrow 0\), and hence by standard heat kernel estimates we get

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+}\tilde{u}^\varepsilon (x,t)=0\text { uniformly in }\mathbb {R}^N\times [T_0,T]. \end{aligned}$$
(4.6)

Since \(u_0\) is compactly supported and smooth, it then follows that

$$\begin{aligned} \Vert U\Vert _{C^4(\mathbb {R}^N\times [0,\infty ))}\le C \end{aligned}$$
(4.7)

for some constant C, independent of \(\varepsilon \).

On the other hand, let \(v^{\varepsilon }(x,t)=u^{\varepsilon }(x,\varepsilon ^{\beta -2\alpha _1}t)\). Since \(v^{\varepsilon }_t(x,t)=\varepsilon ^{\beta -2\alpha _1}u^{\varepsilon }_t(x,\varepsilon ^{\beta -2\alpha _1}t)\), we see that \(v^{\varepsilon }(x,t)\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} v^\varepsilon _{t}(x,t)=\frac{1}{\varepsilon ^{2\alpha _1}}{\int _{\mathbb {R}^{N}}}J_\alpha ^{\varepsilon }(x-y)[v^{\varepsilon }(y,t)-v^{\varepsilon }(x,t)] \,dy&{}\text { in }\mathbb {R}^N\times (0,\infty ),\\ v^{\varepsilon }(x,0)=u_{0}(x) &{}\text { in }\mathbb {R}^N. \end{array}\right. } \end{aligned}$$

Thanks to (4.7), by the same argument as in Step 1 of the proof of Theorem 1.1, we know that there exist \(C>0\), \(\eta >0\) and \(\varepsilon _0>0\) such that

$$\begin{aligned} \Vert v^{\varepsilon }(\cdot ,t)- {U}(\cdot ,t)\Vert _{L^\infty (\mathbb {R}^N)}\le C\varepsilon ^{\alpha _1+\eta }t \end{aligned}$$

for \(\varepsilon \le \varepsilon _0\). Hence,

$$\begin{aligned} \Vert u^{\varepsilon }(\cdot ,t)- \tilde{u}^\varepsilon (\cdot ,t)\Vert _{L^\infty (\mathbb {R}^N)}\le C\varepsilon ^{\eta +3\alpha _1-\beta }t. \end{aligned}$$

This along with (4.6) implies that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0+} u^\varepsilon (x,t)=0\text { uniformly in }\mathbb {R}^N\times [T_0,T]. \end{aligned}$$

\(\square \)