1 Introduction

Dispersal is an important feature of the life histories of many organisms and has thus been a central topic in ecology. In 1951, random diffusion was introduced to model dispersal strategies [52], and this direction has since been studied extensively; see the books [14, 48]. However, in many ecological situations (e.g. [10,11,12, 51]), dispersal is better described as a long-range process rather than a local one, and integral operators appear as a natural choice. A commonly used form that incorporates such long-range dispersal is the following nonlocal diffusion operator:

$$\begin{aligned} \mathcal {L}u :=\int _\Omega k(x,y)u(y)dy-a(x)u(x), \end{aligned}$$

where the dispersal kernel \(k(x,y)\ge 0\) describes the probability of jumping from location y to location x. This nonlocal diffusion operator appears commonly in different types of ecological models. See [4, 23, 30, 32, 40, 42, 43, 45, 49] and the references therein.
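For intuition, the action of \(\mathcal {L}\) is easy to explore numerically. The sketch below is our own illustration, not part of the model derivation: the Gaussian-type kernel, the domain \(\Omega =(0,1)\) and all parameters are assumptions. It uses the normalization \(a(x)=\int _\Omega k(y,x)dy\) adopted in (N) below, under which the operator redistributes individuals without changing the total population.

```python
import numpy as np

# Hypothetical 1-D discretization of L[u](x) = ∫_Ω k(x,y)u(y)dy - a(x)u(x)
# on Ω = (0,1), with a(x) = ∫_Ω k(y,x)dy (the normalization used in (N)).
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# An assumed symmetric kernel, strictly positive on the diagonal; any kernel
# satisfying (C2)-(C3) would serve equally well for this illustration.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)
a = K.sum(axis=0) * h          # a(x) = ∫ k(y,x) dy by quadrature

def L(u):
    """Apply the discretized nonlocal operator to a density u."""
    return K @ u * h - a * u

u = 1.0 + np.sin(2 * np.pi * x)     # a test density
# With this normalization the operator only redistributes mass:
print(abs(np.sum(L(u)) * h))        # ≈ 0 up to round-off
```

With this quadrature the discrete operator conserves mass exactly, mirroring the no-flux interpretation of the continuous operator.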

It is also worth mentioning that nonlocal operators have been used to model many applied situations beyond ecology, for example in image processing [18, 28], particle systems [9], coagulation models [17], nonlocal anisotropic models for phase transition [2, 3], and mathematical finance via optimal control theory [8, 24]. We refer to the book [5] and the references therein for more details.

The purpose of this paper is to understand the roles played by spatial heterogeneity and nonlocal dispersal in the ecology of competing species by classifying the global dynamics of the following model

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t= d \mathcal {K}[u] +u(m(x)- u- c v) &{}\text {in } \Omega \times [0,\infty ),\\ v_t= D \mathcal {P}[v] +v(M(x)-b u- v) &{}\text {in } \Omega \times [0,\infty ),\\ u(x,0)=u_0(x),~v(x,0)=v_0(x) &{}\text {in } \Omega , \end{array}\right. } \end{aligned}$$
(1.1)

where \(\Omega \) is a bounded domain in \(\mathbb {R}^n\), \(n\ge 1\), and \(\mathcal {K}\), \(\mathcal {P}\) represent nonlocal operators. In this model, \(u(x,t)\), \(v(x,t)\) are the population densities of the two competing species, and \(d, D>0\) are their respective dispersal rates. The functions m(x), M(x) represent their intrinsic growth rates, and the constants \(b, c>0\) are the interspecific competition coefficients.

This paper continues the studies in [6, 34], where a simplified type of nonlocal operator is considered.

1.1 Background and motivations

The model (1.1) is of Lotka–Volterra type, which can be traced back to the works of Lotka and Volterra [39, 53]. Such models are widely used to describe the dynamics of biological systems in which two species interact, with predator–prey and competition being two typical situations, and they play an important role in mathematical biology. To keep the presentation concise, we restrict our discussion to models related to the model (1.1) only.

Let us begin with the simple Lotka–Volterra ODE model (which can be considered as a special case of (1.1) with \(d=D=0\) and \(M=m\), where \(u_0,v_0\) are positive constants)

$$\begin{aligned} {\left\{ \begin{array}{ll} u^\prime (t)= u(m- u- c v) &{}\text {in } [0,\infty ),\\ v^\prime (t)= v(m-b u- v) &{}\text {in } [0,\infty ),\\ u(0)=u_0,~v(0)=v_0. \end{array}\right. } \end{aligned}$$
(1.2)

The following results about the global dynamics of (1.2) are well known:

  1. (i)

    If \(b,c< 1\) then \((\frac{1-c}{1-bc}m, \frac{1-b}{1-bc}m)\) is the global attractor;

  2. (ii)

    If \(b\le 1\le c\) (or \(c\le 1\le b\)) and \((b-1)^2+(c-1)^2\ne 0\), then (0, m) (or (m, 0)) is the global attractor;

  3. (iii)

    If \(b=c=1\), for any initial data \((u_0,v_0)\), there exists \(s\in [0,1]\) such that the solution of (1.2) converges to \((sm,(1-s)m)\);

  4. (iv)

    If \(b,c>1\), the solution \((u,v)\) converges to (m, 0), (0, m) or \((\frac{1-c}{1-bc}m, \frac{1-b}{1-bc}m)\) according to whether \(v_0<\frac{1-b}{1-c}u_0\), \(v_0>\frac{1-b}{1-c}u_0\) or \(v_0=\frac{1-b}{1-c}u_0\), respectively.
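These classical facts are easy to check numerically. The sketch below (a hypothetical illustration; the parameter values are our own choices, and forward Euler is used only for simplicity) integrates (1.2) and recovers the attractors of cases (i) and (ii).

```python
import numpy as np

def simulate(m, b, c, u0, v0, dt=1e-3, T=200.0):
    """Forward-Euler integration of the ODE system (1.2); a simple sketch,
    not a production integrator."""
    u, v = u0, v0
    for _ in range(int(T / dt)):
        u, v = u + dt * u * (m - u - c * v), v + dt * v * (m - b * u - v)
    return u, v

m = 1.0
# Case (i): b, c < 1, so ((1-c)m/(1-bc), (1-b)m/(1-bc)) attracts.
ui, vi = simulate(m, b=0.5, c=0.25, u0=0.1, v0=2.0)
# Case (ii): b < 1 <= c, so (0, m) attracts.
uii, vii = simulate(m, b=0.5, c=1.5, u0=1.0, v0=0.1)
print(np.round([ui, vi], 4), np.round([uii, vii], 4))
```

For b = 0.5, c = 0.25 the coexistence state is (0.75/0.875, 0.5/0.875), and the computed values match it to integrator accuracy.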

Considering the importance of dispersal strategies for species, the natural next step is to take the diffusion of the species into consideration. If each individual moves randomly, this leads to the following model

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t= d \Delta u +u(m- u- c v) &{}\text {in } \Omega \times [0,\infty ),\\ v_t= D \Delta v +v(m-b u- v) &{}\text {in } \Omega \times [0,\infty ),\\ \frac{\partial u}{\partial \gamma }= \frac{\partial v}{\partial \gamma }=0 &{}\text {on } \partial \Omega \times [0,\infty ),\\ u(x,0)=u_0(x),~v(x,0)=v_0(x) &{}\text {in } \Omega , \end{array}\right. } \end{aligned}$$
(1.3)

where \(\gamma \) denotes the unit outer normal vector on \(\partial \Omega \). It turns out that in the first three cases, systems (1.2) and (1.3) behave very similarly, while case (iv) is more delicate. More specifically, in cases (i), (ii) and (iii), the globally stable equilibrium of (1.2) given above is also globally stable as a solution of (1.3) [1, 16]. In other words, the global dynamics of the PDE model (1.3) is independent of the initial distributions of the two species. However, in case (iv), different and interesting phenomena arise from the interaction between random diffusion and the shape of the habitat. If \(\Omega \) is convex, there are no stable equilibria other than (m, 0) and (0, m) [29]. But if \(\Omega \) is not convex, the system (1.3) may have a stable spatially inhomogeneous equilibrium, which corresponds to the habitat segregation phenomenon [25, 41, 44].

Later, to understand the effect of migration and spatial heterogeneity of resources, the global dynamics of the following model

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t= d \Delta u+ u(m(x)- u- c v) &{}\text {in } \Omega \times [0,\infty ),\\ v_t= D \Delta v +v(m(x)-b u- v) &{}\text {in } \Omega \times [0,\infty ),\\ \frac{\partial u}{ \partial \gamma }= \frac{\partial v}{ \partial \gamma }=0 &{}\text {on } \partial \Omega \times [0,\infty ),\\ u(x,0)=u_0(x),~v(x,0)=v_0(x) &{}\text {in } \Omega , \end{array}\right. } \end{aligned}$$
(1.4)

where m(x) is nonconstant, has been studied extensively over the last two decades. See [13, 19, 31, 36, 37] and the references therein. For \(0<b,c<1\), an insightful conjecture was proposed and partially verified in [37]:

Conjecture

The locally stable steady state is globally asymptotically stable.

Recently, this conjecture has been completely resolved in [19] provided that \(0<bc\le 1\). Indeed, the appearance of spatial heterogeneity greatly increases the complexity of the global dynamics of the system (1.4). For example, when \(0<b,c<1\), both coexistence and extinction phenomena occur in (1.4), depending on the choice of competition coefficients b, c and diffusion coefficients d, D. As discussed above, this is dramatically different from both the ODE system (1.2) and the PDE system (1.3), where the distribution of resources is assumed to be constant. Another observation is also worth mentioning. If, in addition, we set \(d=D=0\), then (1.4) becomes a system of two ordinary differential equations, whose solutions converge to

$$\begin{aligned} \left( {1-c\over 1-bc}m_+(x), {1-b\over 1-bc}m_+(x)\right) \ \ \text {for every}\ x\in \Omega , \end{aligned}$$

where \(m_+(x)=\max \{ m(x), 0\}\), for all positive continuous initial data. Thus, the introduction of migration is also crucial. Moreover, when \(bc>1\), except for the very special situations mentioned in [19], the global dynamics of the system (1.4) is far from understood. In particular, to the best of our knowledge, there has been no progress on case (iv), i.e. \(b,c>1\).
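This pointwise limit is also easy to verify numerically. The sketch below (with arbitrarily chosen coefficients of our own) treats two representative values of m(x), one positive and one negative, so that the truncation by \(m_+\) is visible.

```python
import numpy as np

def ode_limit(m, b, c, u0=0.3, v0=0.7, dt=1e-3, T=400.0):
    """Forward-Euler solution of the kinetic system obtained from (1.4)
    with d = D = 0, at a single fixed point x where m(x) = m."""
    u, v = u0, v0
    for _ in range(int(T / dt)):
        u, v = u + dt * u * (m - u - c * v), v + dt * v * (m - b * u - v)
    return u, v

b, c = 0.5, 0.25
# Where m(x) > 0: the coexistence state of case (i), with m in place of m_+.
up, vp = ode_limit(1.0, b, c)
# Where m(x) <= 0: both densities die out, matching m_+(x) = 0.
un, vn = ode_limit(-1.0, b, c)
print(np.round([up, vp], 4), np.round([un, vn], 4))
```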

Given the importance of nonlocal dispersal, it is natural to consider the system (1.4) with random diffusion replaced by nonlocal versions. To date, studies of the corresponding nonlocal models are quite limited. See [6, 20, 34] and the references therein.

1.2 Statements of main results

For clarity, in the statements of the main results we focus only on no-flux boundary conditions for both local and nonlocal operators. Indeed, these results can be extended to models with homogeneous Dirichlet boundary conditions and space-periodic boundary conditions. See Sect. 6 for further discussions.

1.2.1 Main results: nonlocal dispersal strategies

Denote \(\mathbb {X}= C(\bar{\Omega })\), \(\mathbb {X}_+ =\{ u\in \mathbb {X} \ | \ u\ge 0\}\), \(\mathbb {X}_{++}=\mathbb {X}_+\setminus \{ 0\}\). For \(\phi \in \mathbb {X}\), define

(N) \(\displaystyle \mathcal {K} [\phi ] = \int _{\Omega }k(x,y)\phi (y)dy- \int _{\Omega }k(y,x) dy\, \phi (x), \quad \mathcal {P} [\phi ] = \int _{\Omega }p(x,y)\phi (y)dy- \int _{\Omega }p(y,x) dy\, \phi (x),\) where the kernels k(x,y), p(x,y) describe the rate at which organisms move from point y to point x. Nonlocal operators in hostile surroundings or periodic environments will be discussed in Sect. 6. See [23] for the derivation of different types of nonlocal operators.

Throughout this paper, unless designated otherwise, we assume that

(C1) :

\( m(x), M(x)\in \mathbb {X}\) are nonconstant.

(C2) :

k(x,y), \(p(x,y)\in C(\mathbb {R}^n\times \mathbb {R}^n)\) are nonnegative and \(k(x,x), \ p(x,x)>0\) in \(\mathbb {R}^n\). Moreover, \(\int _{\mathbb {R}^n} k(x,y)dy= \int _{\mathbb {R}^n} k(y,x)dy=1\) and \(\int _{\mathbb {R}^n} p(x,y)dy= \int _{\mathbb {R}^n} p(y,x)dy=1\).

(C3) :

k(x,y), p(x,y) are symmetric, i.e., \(k(x,y)=k(y,x)\), \(p(x,y)=p(y,x)\).

Remark 1.1

The assumption \(k(x,x), \ p(x,x)>0\) corresponds to the strict ellipticity condition for differential operators, which guarantees the strong maximum principle.

To better demonstrate our main results and techniques, some explanations are in order. Let (U(x), V(x)) denote a nonnegative steady state of (1.1); then there are exactly three possibilities:

  • \((U,V)= (0,0)\) is called a trivial steady state;

  • \((U,V)=(u_d, 0)\) or \((U,V)=(0, v_D)\) is called a semi-trivial steady state, where \(u_d\), \(v_D\) are the positive solutions to the single-species models

    $$\begin{aligned} d \mathcal {K}[U] +U(m(x)- U)=0, \end{aligned}$$
    (1.5)

    and

    $$\begin{aligned} D \mathcal {P}[V] +V(M(x)- V)=0 \end{aligned}$$
    (1.6)

    respectively.

  • \(U>0,\ V>0\), and we call \((U,V)\) a coexistence/positive steady state.

The first main result in this paper gives a complete classification of the global dynamics to the competition system (1.1) provided that at least one semi-trivial steady state is locally unstable.

Theorem 1.1

Assume that (C1)–(C3) hold and \(0<bc\le 1\). Also assume that (1.1) admits two semi-trivial steady states \((u_d, 0)\) and \((0,v_D)\). Then for the global dynamics of the system (1.1) with nonlocal operators defined in (N), we have the following statements:

  1. (i)

    If both \((u_d, 0)\) and \((0,v_D)\) are locally unstable, then the system (1.1) admits a unique positive steady state, which is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\);

  2. (ii)

    If \((u_d, 0)\) is locally unstable and \((0,v_D)\) is locally stable or neutrally stable, then \((0,v_D)\) is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\);

  3. (iii)

    If \((u_d, 0)\) is locally stable or neutrally stable and \((0,v_D)\) is locally unstable, then \((u_d, 0)\) is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\).

For competition models with local dispersal, it is known that to determine the global dynamics, it suffices to show that every positive steady state is locally stable. See [22] and the references therein, where the compactness of solution orbits is a necessary ingredient. This compactness fails in the nonlocal model (1.1) due to lack of regularity.

Moreover, in handling the local model (1.4), the key contribution of [19] is the discovery of an intrinsic relation between a positive steady state and a principal eigenfunction of the linearized problem at this steady state. In nonlocal models, however, there are difficulties in determining local stability by linearized analysis, since a principal eigenvalue might not exist. For single-species models or semi-trivial steady states of competition models, it is known that this issue can be resolved by perturbation arguments and spectral analysis; see [7, 23] and so on. Unfortunately, to the best of our knowledge, there has been no progress in the study of linearized problems at positive steady states. Hence, we have to avoid analyzing the local stability of positive steady states.

Fortunately, two-species competition models with nonlocal dispersals still have the following solution structure:

  • if one semi-trivial steady state is locally stable while the other is locally unstable, and there is no positive steady state, then the stable one is globally asymptotically stable;

  • if both semi-trivial steady states are locally unstable, then there exists at least one stable positive steady state and, moreover, uniqueness of the positive steady state implies its global asymptotic stability.

Thus, to prove Theorem 1.1, we turn our attention back to this solution structure and verify either the nonexistence or the uniqueness of positive steady states directly, based on characteristics of nonlocal operators and arguments by contradiction.

The second main result concerns the global dynamics of the competition system (1.1) when both semi-trivial steady states are stable.

Theorem 1.2

Assume that (C1)–(C3) hold and \(0<bc\le 1\). Also assume that (1.1) admits two semi-trivial steady states \((u_d, 0)\) and \((0,v_D)\). For the system (1.1) with nonlocal operators defined in (N), if both \((u_d, 0)\) and \((0,v_D)\) are locally stable or neutrally stable, then \(bc=1\), \(bu_d = v_D\), and the system (1.1) has a continuum of steady states \(\{(su_d, (1-s)v_D),\ 0\le s\le 1\}\), which are locally stable except for \(s=0\) and \(s=1\). Moreover, the solution of (1.1) with \((u_0,v_0)\in \mathbb {X}_+ \times \mathbb {X}_+ \setminus \{ 0\}\) approaches a steady state in \(\{(su_d, (1-s)v_D),\ 0\le s\le 1\}\) in \(\mathbb {X}\times \mathbb {X}\).

Notice that the solution orbits of the system (1.1) are uniformly bounded but not precompact in \(\mathbb {X}\times \mathbb {X}\) due to lack of regularity. Thus, when there are infinitely many steady states, it is highly nontrivial to establish the global convergence of solutions of the system (1.1) in \(\mathbb {X}\times \mathbb {X}\). Indeed, the approach developed in the proof of Theorem 1.2, which relies on energy estimates and repeated applications of the comparison principle, is original and quite involved. Roughly speaking, the key part of the proof consists of the following steps:

  • Prove that there exists \(T>0\) such that the solution (u(x,t), v(x,t)) of (1.1) satisfies either \(u(x,t)>0\), \(0<v(x,t)<v_D(x)\) or \(0<u(x,t)<u_d(x)\), \(v(x,t)>0\) in \(\bar{\Omega }\) for \(t\ge T\).

  • Make use of energy estimates to prove that a subsequence of \((u(\cdot , t), v(\cdot , t))\) converges in \(L^2(\Omega ) \times L^2(\Omega )\) to a steady state in \(\{(su_d, (1-s)v_D),\ 0\le s\le 1\}\).

  • Improve the convergence of a subsequence to the convergence of \((u(\cdot , t), v(\cdot , t))\) in \(L^2(\Omega ) \times L^2(\Omega )\) as \(t\rightarrow \infty \).

  • Improve the convergence of \((u(\cdot , t), v(\cdot , t))\) in \(L^2(\Omega ) \times L^2(\Omega )\) as \(t\rightarrow \infty \) to that in \(\mathbb {X}\times \mathbb {X}\), which is clearly optimal for the system (1.1).
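As a sanity check on this convergence picture, the following sketch (our own numerical illustration; the kernel, growth rate and all parameters are assumptions) simulates the degenerate case in the symmetric situation \(b=c=1\), \(M=m\), \(D=d\), \(\mathcal {P}=\mathcal {K}\), where \(bc=1\) and \(bu_d=v_D\) hold automatically. The solution indeed approaches a point on the continuum \(\{(su_d,(1-s)v_D)\}\).

```python
import numpy as np

# Numerical sketch of the degenerate case of Theorem 1.2 under the extra
# simplifying assumptions b = c = 1, M = m, D = d, P = K (our own choices).
n = 100
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.08)   # assumed kernel
a = K.sum(axis=0) * h
m = 1.2 + np.cos(2 * np.pi * x)                        # positive, nonconstant
d, dt = 0.5, 0.01

def nonloc(w):
    """Discretized operator from (N): integral part minus a(x)*w."""
    return K @ w * h - a * w

# Single-species steady state u_d of (1.5), obtained by running the flow.
ud = np.full(n, 1.0)
for _ in range(100000):
    ud = ud + dt * (d * nonloc(ud) + ud * (m - ud))

# Competition flow (1.1); the initial data segregate the two species.
u = np.where(x < 0.5, 1.0, 0.01)
v = np.where(x < 0.5, 0.01, 1.0)
for _ in range(100000):
    u, v = (u + dt * (d * nonloc(u) + u * (m - u - v)),
            v + dt * (d * nonloc(v) + v * (m - u - v)))

# On the continuum {(s*u_d, (1-s)*u_d)}: the total density matches u_d and
# the fraction s = u/(u+v) is (nearly) constant in x.
s = u / (u + v)
print(float(np.max(np.abs(u + v - ud))), float(s.max() - s.min()))
```

Here the total density w = u + v satisfies the single-species equation exactly (since b = c = 1 and M = m), which is why w relaxes quickly to \(u_d\) while the neutral fraction s equilibrates slowly, in line with the step-by-step scheme above.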

Our arguments thoroughly exploit the structure of monotone systems and the characteristics of nonlocal operators. We strongly believe that this approach can be generalized to handle monotone systems without compactness of solution orbits. We will turn to this topic in future work.

1.2.2 Main results: mixed dispersal strategies, location-dependent competition coefficients and self-regulations

In many species, dispersal includes both local migration and a small proportion of long-distance migration; see [50] and the references therein. For example, in genetic models with partial panmixia, the diffusion term is a combination of local and nonlocal dispersal, where the nonlocal term approximates long-distance migration; see [35, 38, 46, 47] for the modeling and related studies. Moreover, in [26, 27], to understand the competitive advantage among different types of dispersal strategies, the authors study a competition system where one species moves purely by random walk while the other adopts a nonlocal dispersal strategy.

These works motivate our study of competing species with mixed dispersal strategies as well as location-dependent competition coefficients and self-regulations. To be more precise, we will study models with no-flux boundary conditions

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t= d \left\{ \alpha \mathcal {K} [u]+(1-\alpha ) \Delta u \right\} +u(m(x)-b_1(x)u- c_1(x)v) &{}\text {in } \Omega \times [0,\infty ),\\ v_t= D \left\{ \beta \mathcal {P} [v]+(1-\beta ) \Delta v \right\} +v(M(x)-b_2(x)u- c_2(x)v) &{}\text {in } \Omega \times [0,\infty ),\\ (1-\alpha )\partial u/\partial \gamma =(1-\beta ) \partial v/\partial \gamma =0 &{}\text {on } \partial \Omega ,\\ u(x,0)=u_0(x),~v(x,0)=v_0(x) &{}\text {in } \Omega , \end{array}\right. } \end{aligned}$$
(1.7)

where \(\mathcal {K}\), \(\mathcal {P}\) are defined in (N), \(b_1,\ c_2\) represent self-regulations, and \(0\le \alpha ,\beta \le 1\). Moreover, assume that

(C4) :

\(b_1(x), c_1(x), b_2(x), c_2(x) \in \mathbb {X}\), \(b_1(x), c_1(x), b_2(x), c_2(x) >0\) in \(\bar{\Omega }\).

Equipped with the techniques developed in the study of the system (1.1), we manage to derive the third main result in this paper, which completely classifies the global dynamics of the system (1.7) provided that

$$\begin{aligned} \max _{\bar{\Omega }} b_2(x) \cdot \max _{\bar{\Omega }} c_1 (x)\le \min _{\bar{\Omega }} b_1(x) \cdot \min _{\bar{\Omega }} c_2 (x). \end{aligned}$$
(1.8)

Theorem 1.3

Assume that (C1)–(C4) hold and (1.8) is valid. Also assume that (1.7) admits two semi-trivial steady states \((\hat{u}_d, 0)\) and \((0,\hat{v}_D)\). Then there are exactly four cases:

  1. (i)

    If both \((\hat{u}_d, 0)\) and \((0,\hat{v}_D)\) are locally unstable, then the system (1.7) admits a unique positive steady state, which is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\);

  2. (ii)

    If \((\hat{u}_d, 0)\) is locally unstable and \((0,\hat{v}_D)\) is locally stable or neutrally stable, then \((0,\hat{v}_D)\) is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\);

  3. (iii)

    If \((\hat{u}_d, 0)\) is locally stable or neutrally stable and \((0,\hat{v}_D)\) is locally unstable, then \((\hat{u}_d, 0)\) is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\);

  4. (iv)

    If both \((\hat{u}_d, 0)\) and \((0,\hat{v}_D)\) are locally stable or neutrally stable, then \(b_1(x), c_1(x), b_2(x), c_2(x)\) must be constants, \(b_2c_1=b_1c_2\), \(b_2\hat{u}_d = c_2\hat{v}_D\), and the system (1.7) has a continuum of steady states \(\{(s\hat{u}_d, (1-s)\hat{v}_D),\ 0\le s\le 1\}\). Moreover, the solution of (1.7) with \((u_0,v_0)\in \mathbb {X}_+ \times \mathbb {X}_+ \setminus \{ 0\}\) approaches a steady state in \(\{(s\hat{u}_d, (1-s)\hat{v}_D),\ 0\le s\le 1\}\) in \(\mathbb {X} \times \mathbb {X}\).

We remark that in [20], the global dynamics of the system (1.7) with \(\alpha =\beta =1\) and \(\mathcal {K}=\mathcal {P}\) can be determined only under certain restrictive conditions imposed simultaneously on the diffusion rates, intrinsic growth rates, competition coefficients and self-regulations. In Theorem 1.3, by contrast, we classify the global dynamics of the solutions to the system (1.7) regardless of the intrinsic growth rates and competition coefficients.

For the proof of Theorem 1.3(i)–(iii), if \(\alpha ,\beta \in [0,1)\), i.e., local dispersal is at least partially adopted by both species, the method in [19] can be applied since solution orbits still enjoy compactness. The situation is different if at least one of \(\alpha , \beta \) equals 1. However, the approach developed in the proof of Theorem 1.1 can handle \(\alpha ,\beta \in [0,1]\) all at once.

In the proof of Theorem 1.3(iv), extra care is needed when either \(\alpha =1\) or \(\beta =1\). The proof of this case mainly follows the approach used to handle the case \(\alpha =\beta =1\) in Theorem 1.2. However, some modifications are necessary due to the essential difference between local and nonlocal diffusion; we will emphasize the different parts and the corresponding adjustments in the proof. Moreover, when \(\alpha ,\beta \in [0,1)\), thanks to the compactness of solution orbits, the convergence of solutions is known [21].

Finally, we emphasize that, compared with local models, lack of regularity is the key issue in the study of models with nonlocal dispersal. The approaches and techniques developed in this paper to overcome the difficulties caused by this issue are the main contributions of our work.

This paper is organized as follows. Section 2 provides some background properties and a general result concerning the global dynamics of two-species competition models, regardless of whether the dispersal kernels are symmetric. Sections 3 and 4 are devoted to the proofs of Theorems 1.1 and 1.2, respectively. The proof of Theorem 1.3 is given in Sect. 5. Finally, other types of nonlocal dispersal strategies are discussed in Sect. 6.

2 Preliminaries

In this section, we collect some background results and describe the scheme of the proofs of the main results. It is worth pointing out that assumption (C3) is not imposed anywhere in this section, i.e., the nonlocal operators may be nonsymmetric.

2.1 Single-species model

For the convenience of readers, we include a general result concerning single-species models with nonlocal operators. To be specific, we consider the following more general problem, which covers (1.5) and (1.6):

$$\begin{aligned} u_t(x,t) =\mathcal {L}[u] + f(x,u) \doteq d \int _{\Omega }k(x,y)u(y,t)dy +f(x,u), \end{aligned}$$
(2.1)

where k(x,y) satisfies (C2) and f(x,u) satisfies

(f1) :

\(f\in C(\bar{\Omega }\times \mathbb {R}^+, \mathbb {R})\), \(f\) is continuously differentiable in \(u\), and \(f(x,0)=0\);

(f2) :

For \(u>0\), f(x,u)/u is strictly decreasing in u;

(f3) :

There exists \(C_1>0\) such that \(d \int _{\Omega }k(x,y)dy +f(x,C_1)/C_1\le 0\) for all \(x\in \Omega \).

To study the existence of a positive steady state of (2.1), it is natural to consider the local stability of the trivial solution \(u\equiv 0\), which is determined by the sign of

$$\begin{aligned} \lambda ^*=\sup \left\{ \text {Re}\, \lambda \, |\, \lambda \in \sigma (\mathcal {L}+f_u(x,0)) \right\} , \end{aligned}$$

where we regard \(\mathcal {L}+f_u(x,0)\) as an operator from \(\mathbb {X}\) to \(\mathbb {X}\). Also, if \(\lambda \) is an eigenvalue of this operator with a continuous and positive eigenfunction, we call \(\lambda \) a principal eigenvalue.

Theorem 2.1

Under the assumptions (C2) and (f1)–(f3), the problem (2.1) admits a unique positive steady state in \(\mathbb {X}\) if and only if \(\lambda ^*>0\). Moreover, the unique positive steady state, whenever it exists, is globally asymptotically stable relative to \( \mathbb {X}_{++}\); otherwise, \(u\equiv 0\) is globally asymptotically stable relative to \(\mathbb {X}_{++}\).

Theorem 2.1 was obtained in [7] for symmetric operators in the one-dimensional case and partially obtained in [15] for nonsymmetric operators of a special type. More precisely, in [15], the author only derives pointwise convergence to the unique positive steady state in \(L^{\infty }(\Omega )\). Since the spectrum of the operator \(\mathcal {L}+f_u(x,0)\) has been thoroughly studied in [33], the arguments in [7] and [15, Section 6] can be applied. Moreover, thanks to Dini's theorem, the pointwise convergence can be improved to the desired convergence in \(\mathbb {X}\). The details are omitted.
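The dichotomy of Theorem 2.1 can be observed numerically. The sketch below (our own illustration; the kernel, the sign-changing growth rate and all parameters are assumptions) discretizes the logistic single-species model in the form (2.1), with the decay term \(-d\,a(x)u\) absorbed into f, reads off the sign of \(\lambda ^*\) from the linearization at \(u\equiv 0\), and observes persistence.

```python
import numpy as np

# Numerical illustration of Theorem 2.1 for a nonlocal logistic model
# (assumed kernel and growth rate; not part of the cited proofs).
n = 150
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)
a = K.sum(axis=0) * h
m = np.cos(2 * np.pi * x)            # nonconstant, sign-changing growth rate
d = 0.1

# Discretized linearization at u = 0:  d*K[.] + m(x)*(.)
A = d * (K * h - np.diag(a)) + np.diag(m)
lam_star = np.max(np.linalg.eigvals(A).real)

# Forward-Euler evolution of u_t = d*K[u] + u*(m - u) from a positive state.
u = np.full(n, 0.5); dt = 1e-2
for _ in range(50000):
    u = u + dt * (d * (K @ u * h - a * u) + u * (m - u))

# Theorem 2.1: the species persists (unique positive steady state) iff
# lam_star > 0; here lam_star > 0 and u stays strictly positive.
print(lam_star > 0, u.min() > 0)
```

Note that the positive steady state remains strictly positive even where m(x) < 0, sustained by the nonlocal influx, which is the role of the strong maximum principle mentioned in Remark 1.1.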

2.2 Competition models

From now on, for convenience, we rewrite the nonlocal operators defined in (N) as follows

$$\begin{aligned} \mathcal {K}[u]= & {} \int _{\Omega }k(x,y)u(y)dy- a_d(x) u(x), \end{aligned}$$
(2.2)
$$\begin{aligned} \mathcal {P}[v]= & {} \int _{\Omega }p(x,y)v(y)dy- a_D(x) v(x), \end{aligned}$$
(2.3)

where \(a_d(x)=\int _{\Omega }k(y,x) dy,\ a_D(x)=\int _{\Omega }p(y,x) dy\).

For clarity, we will focus on competition model (1.1) and always assume that there exist two semi-trivial steady states \((u_d, 0)\) and \((0,v_D)\).

First of all, the linearized operator of (1.1) at \((u_d, 0)\) is

$$\begin{aligned} \mathcal {L}_{(u_d,0)} {\phi \atopwithdelims ()\psi }={d\mathcal {K}[\phi ]+[m(x)-2u_d]\phi -cu_d\psi \atopwithdelims ()D\mathcal {P}[\psi ]+[M(x)-bu_d]\psi }. \end{aligned}$$
(2.4)

Also, the linearized operator of (1.1) at \((0, v_D)\) is

$$\begin{aligned} \mathcal {L}_{(0, v_D)} {\phi \atopwithdelims ()\psi }={ d \mathcal {K}[\phi ] +[m (x)-cv_D]\phi \atopwithdelims ()D\mathcal {P}[\psi ] +[M(x)-2v_D]\psi -bv_D\phi }. \end{aligned}$$
(2.5)

Denote

$$\begin{aligned}&\mu _{(u_d,0)}=\sup \left\{ \text {Re}\, \lambda \, |\, \lambda \in \sigma (D\mathcal {P}+[M(x)-bu_d]) \right\} \\&\nu _{(0, v_D)}= \sup \left\{ \text {Re}\, \lambda \, |\, \lambda \in \sigma ( d \mathcal {K} +[m (x)-cv_D]) \right\} .\nonumber \end{aligned}$$
(2.6)

It is known that the signs of \(\mu _{(u_d,0)}\) and \(\nu _{(0, v_D)}\) determine the local stability or instability of \((u_d,0)\) and \((0, v_D)\), respectively. This is stated explicitly as follows; the proof is standard and thus omitted.

Lemma 2.2

Assume that the assumptions (C1), (C2) hold. Then

  1. (i)

    \((u_d,0)\) is locally unstable if \(\mu _{(u_d,0)}>0\); \((u_d,0)\) is locally stable if \(\mu _{(u_d,0)}<0\); \((u_d,0)\) is neutrally stable if \(\mu _{(u_d,0)}=0\).

  2. (ii)

    \((0, v_D)\) is locally unstable if \(\nu _{(0, v_D)}>0\); \((0, v_D)\) is locally stable if \(\nu _{(0, v_D)}<0\); \((0, v_D)\) is neutrally stable if \(\nu _{(0, v_D)}=0\).

We remark that, as explained in Sect. 2.1, in general \(\mu _{(u_d,0)}\) and \(\nu _{(0, v_D)}\) might not be principal eigenvalues of the corresponding linearized operators. See [33] and its references for more discussions.

Next, we include some definitions and basic properties that will be useful in the proofs of the main results.

Definition 2.1

Define the competitive order in \(\mathbb {X} \times \mathbb {X}\): \((u_1, v_1)\le _c(<_c)(u_2,v_2)\) if \(u_1 \le (<) u_2\) and \( v_1\ge (>) v_2\).

Definition 2.2

We say \((u,v)\in \mathbb {X}\times \mathbb {X}\) is a lower (upper) solution of the system (1.1) if

$$\begin{aligned} {\left\{ \begin{array}{ll} 0\le (\ge ) d\mathcal {K}[u]+u(m(x)-u-cv) &{} \text {in}\; \Omega ,\\ 0\ge (\le ) D\mathcal {P}[v]+v(M(x)-bu-v) &{} \text {in}\; \Omega . \end{array}\right. } \end{aligned}$$

Lemma 2.3

Assume that \((\tilde{u}, \tilde{v})\) and \((\underline{u},\underline{v})\) are upper and lower solutions of the system (1.1) respectively with \(\tilde{u}, \underline{u}, \tilde{v}, \underline{v}>0\). Then

  1. (i)

    The solution of (1.1) with initial value \((\tilde{u}, \tilde{v})\) is decreasing in t under the competitive order.

  2. (ii)

    The solution of (1.1) with initial value \((\underline{u},\underline{v})\) is increasing in t under the competitive order.

Lemma 2.4

Assume that the assumptions (C1), (C2) hold. Also assume that system (1.1) admits two semi-trivial steady states \((u_d, 0)\) and \((0,v_D)\).

  1. (i)

    If \(\mu _{(u_d,0)}>0\), then there exists \(\varepsilon _1>0\) such that for any \(0<\varepsilon \le \varepsilon _1\) and \(0<\delta \le \varepsilon _1\), there exists an upper solution \((\tilde{u}, \tilde{v})\) of (1.1) satisfying

    $$\begin{aligned} \tilde{u}=(1+\delta )u_d(x),\ 0<\tilde{v}<\varepsilon . \end{aligned}$$
  2. (ii)

    If \(\nu _{(0, v_D)}>0\), then there exists \(\varepsilon _2>0\) such that for any \(0<\varepsilon \le \varepsilon _2\) and \(0<\delta \le \varepsilon _2\), there exists a lower solution \((\underline{u}, \underline{v})\) of (1.1) satisfying

    $$\begin{aligned} 0<\underline{u}<\varepsilon ,\ \underline{v}=(1+\delta )v_D(x). \end{aligned}$$

The proof of Lemma 2.4 is similar to that of [6, Lemmas 2.3 and 2.5] and thus the details are omitted.

The following result explains how to characterize the global dynamics of the competition model (1.1) with two semi-trivial steady states.

Theorem 2.5

Assume that the assumptions (C1), (C2) hold. Also assume that system (1.1) admits two semi-trivial steady states \((u_d, 0)\) and \((0,v_D)\). We have the following three possibilities:

  1. (i)

    If both \(\mu _{(u_d,0)}\) and \(\nu _{(0, v_D)}\), defined in (2.6), are positive, then the system (1.1) has at least one positive steady state in \(L^{\infty }(\Omega )\times L^{\infty }(\Omega )\). If, in addition, the system (1.1) has a unique positive steady state in \(\mathbb {X}\times \mathbb {X}\), then it is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\).

  2. (ii)

    If \(\mu _{(u_d,0)}\) defined in (2.6) is positive and no positive steady states of the system (1.1) exist, then the semi-trivial steady state \((0,v_D)\) is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\).

  3. (iii)

    If \(\nu _{(0, v_D)}\) defined in (2.6) is positive and the system (1.1) does not admit positive steady states, then the semi-trivial steady state \((u_d,0)\) is globally asymptotically stable relative to \(\mathbb {X}_{++} \times \mathbb {X}_{++}\).

Proof

The arguments are almost the same as that of [6, Theorem 2.1], where a simplified nonlocal operator is considered.\(\square \)

It is routine to verify that Theorem 2.5 also holds for the system (1.7). Indeed, one sees from the proof of Theorem 2.5 that for models with only nonlocal dispersal, \(\mu _{(u_d,0)}\) and \(\nu _{(0, v_D)}\) might not be principal eigenvalues; thus the constructions of upper/lower solutions rely on the principal eigenfunctions of suitably perturbed eigenvalue problems, which do admit principal eigenvalues. However, when local diffusion is incorporated, the existence of principal eigenvalues is always guaranteed, which makes the arguments standard.

It is worth pointing out that the proof of Theorem 2.5(i) relies on the upper/lower solution method, which only yields the existence of a positive steady state, denoted by (u,v), in \(L^{\infty }(\Omega )\times L^{\infty }(\Omega )\). However, in view of the assumptions (C1), (C2), the optimal regularity should be \((u,v)\in \mathbb {X}\times \mathbb {X} \). A natural question is when this is true. The following lemma provides a partial answer, which is important for this paper.

Lemma 2.6

Assume that the assumptions (C1), (C2) hold. If \(bc\le 1\), then any positive steady state of (1.1) in \(L^{\infty }(\Omega )\times L^{\infty }(\Omega )\) belongs to \(\mathbb {X}\times \mathbb {X}\).

Proof

The proof follows that of [20, Lemma 4.1]. Note that [20, Lemma 4.1] assumes \(bc<1\); however, the case \(bc=1\) can be handled similarly.\(\square \)

3 Proof of Theorem 1.1

To better present the proof of Theorem 1.1, we first analyze some properties of local stability and of positive steady states of (1.1).

The following result is about the classification of local stability.

Proposition 3.1

Assume that (C1)–(C3) hold and \(0<bc\le 1\). Then exactly one of the following four alternatives holds.

(i) \(\mu _{(u_d,0)}>0\), \(\nu _{(0, v_D)}>0\);

(ii) \(\mu _{(u_d,0)}>0\), \(\nu _{(0, v_D)}\le 0\);

(iii) \(\mu _{(u_d,0)}\le 0\), \(\nu _{(0, v_D)}>0\);

(iv) \(\mu _{(u_d,0)}= \nu _{(0, v_D)}=0\).

Moreover, (iv) holds if and only if \(bc=1\) and \(bu_d= v_D\).

Proof

It suffices to show that if \(\mu _{(u_d,0)}\le 0\) and \(\nu _{(0, v_D)}\le 0\), that is, if none of (i)–(iii) is valid, then \(\mu _{(u_d,0)}= \nu _{(0, v_D)}=0\), and furthermore \(bc=1\) and \(b u_d= v_D\).

Note that

$$\begin{aligned} \mu _{(u_d,0)}= \sup _{0 \ne \psi \in L^2} \frac{\int _{\Omega } \left( D\psi \mathcal {P}[\psi ]+[M(x)-bu_d]\psi ^2 \right) dx }{\int _{\Omega } \psi ^2 dx}\le 0. \end{aligned}$$

Thus one sees that

$$\begin{aligned} \frac{\int _{\Omega } \left( Dv_D \mathcal {P}[v_D]+[M(x)-bu_d]v_D^2 \right) dx}{\int _{\Omega } v_D^2 dx}\le \mu _{(u_d,0)}\le 0, \end{aligned}$$

and thus, due to (1.6), it follows that

$$\begin{aligned} \int _{\Omega } \left( v_D^3 -bu_dv_D^2 \right) dx\le 0. \end{aligned}$$
(3.1)

Similarly, \(\nu _{(0, v_D)}\le 0\) and (1.5) give that

$$\begin{aligned} \int _{\Omega } \left( u_d^3 -cv_Du_d^2 \right) dx\le 0. \end{aligned}$$
(3.2)

Now by multiplying (3.2) by \(b^3\) and using the condition \(0<bc\le 1\), we have

$$\begin{aligned} \int _{\Omega } \left( (bu_d)^3 - v_D(bu_d)^2 \right) dx&\le \int _{\Omega } \left( (bu_d)^3 - bcv_D(bu_d)^2 \right) dx\\&= \int _{\Omega } b^3 \left( u_d^3 - cv_D u_d^2 \right) dx \le 0, \end{aligned}$$

which, together with (3.1), implies that

$$\begin{aligned} \int _{\Omega }(bu_d-v_D)^2(bu_d+v_D)dx\le 0. \end{aligned}$$
(3.3)

Therefore, all the previous inequalities must be equalities. Hence \(\mu _{(u_d,0)}= \nu _{(0, v_D)}=0\), \(bc=1\) and \(bu_d=v_D\).

    \(\square \)
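As a sanity check on the algebra in the proof above, the pointwise identity behind (3.3) — that the integrand of (3.1) plus the \(b^3\)-scaled, \(bc\le 1\)-bounded integrand of (3.2) factors as \((bu_d-v_D)^2(bu_d+v_D)\) — can be verified symbolically. A SymPy sketch, with \(U, V\) standing for \(u_d(x)\), \(v_D(x)\) at a fixed point:

```python
import sympy as sp

# U, V play the roles of u_d(x) and v_D(x) at a fixed point x; b > 0.
U, V, b = sp.symbols('U V b', positive=True)

expr_31 = V**3 - b*U*V**2               # integrand of (3.1)
expr_32_scaled = (b*U)**3 - V*(b*U)**2  # b^3-scaled integrand of (3.2) after using bc <= 1

# Their sum factors as (b u_d - v_D)^2 (b u_d + v_D), which yields (3.3).
factored = (b*U - V)**2 * (b*U + V)
assert sp.simplify(expr_31 + expr_32_scaled - factored) == 0
```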

The next result indicates that whenever (1.1) admits two ordered positive steady states, it admits infinitely many positive steady states. Our arguments rely on exploiting the characteristics of nonlocal operators, as well as some integral relations inspired by [19].

Proposition 3.2

Assume that (C1)–(C3) hold and \(0<bc\le 1\). Then (1.1) admits two strictly ordered continuous positive steady states \((u,v)\) and \((u^*,v^*)\) (that is, without loss of generality, \(u>u^*\), \(v<v^*\)) if and only if \(bc=1\) and \(b u_d= v_D\). Moreover, in this case, all the positive steady states of (1.1) consist of \((su_d, (1-s)v_D)\), \(0<s<1\).

Proof

If \(bc=1\), \(b u_d= v_D\), it is routine to check that all the positive steady states of (1.1) consist of \((su_d, (1-s)v_D)\), \(0<s<1\), which implies (1.1) admits two strictly ordered continuous positive steady states.

Now suppose that (1.1) admits two strictly ordered positive steady states \((u,v)\) and \((u^*,v^*)\); without loss of generality, \(u>u^*\), \(v<v^*\). We will show that \(bc=1\) and \(b u_d= v_D\).

First, set \(w= u- u^*>0\) and \(z=v-v^*<0\) and it is standard to check that

$$\begin{aligned} {\left\{ \begin{array}{ll} d\mathcal {K}[w] +(m-u-cv)w-u^*w-cu^*z=0,\\ D\mathcal {P}[z] + (M-bu-v) z- bv^*w -v^* z=0. \end{array}\right. } \end{aligned}$$
(3.4)

Combining (3.4) with the equation satisfied by u, one has

$$\begin{aligned} d\left( u \mathcal {K}[w] - w\mathcal {K}[u] \right) =u u^* (w+cz). \end{aligned}$$

This yields that

$$\begin{aligned} d\int _{\Omega }\left( -u \mathcal {K}[u^*] + u^* \mathcal {K}[u] \right) \frac{w^2}{u u^* } dx = \int _{\Omega }(w+cz)w^2 dx. \end{aligned}$$
(3.5)

We claim that \(\int _{\Omega }(w+cz)w^2 dx\le 0\).

To prove this claim, let us calculate the left hand side of (3.5). Applying the assumption (C3), we have that

$$\begin{aligned}&d\int _{\Omega }\left( -u \mathcal {K}[u^*] + u^* \mathcal {K}[u] \right) \frac{w^2}{u u^* } dx \nonumber \\&\quad = d\int _{\Omega }\int _{\Omega } k(x,y)\left[ u^*(x)u(y)-u(x)u^*(y) \right] \frac{(u(x)-u^*(x))^2}{u(x) u^*(x) } dy dx\nonumber \\&\quad = d\int _{\Omega }\int _{\Omega } k(x,y)\left[ u^*(x)u(y)-u(x)u^*(y) \right] \left( \frac{u(x)}{u^*(x) }+\frac{u^*(x)}{u(x) } \right) dy dx, \end{aligned}$$
(3.6)

where \(\int _{\Omega }\int _{\Omega } k(x,y)\left[ u^*(x)u(y)-u(x)u^*(y) \right] dydx =0\) is used. By exchanging x and y, we have

$$\begin{aligned}&d\int _{\Omega }\left( -u \mathcal {K}[u^*] + u^* \mathcal {K}[u] \right) \frac{w^2}{u u^* } dx \nonumber \\&\quad = d\int _{\Omega }\int _{\Omega } k(y,x)\left[ u^*(y)u(x)-u(y)u^*(x) \right] \left( \frac{u(y)}{u^*(y) }+\frac{u^*(y)}{u(y) } \right) dy dx. \end{aligned}$$
(3.7)

Due to (3.6) and (3.7), one sees that

$$\begin{aligned}&d\int _{\Omega }\left( -u \mathcal {K}[u^*] + u^* \mathcal {K}[u] \right) \frac{w^2}{u u^* } dx \\&\quad = {d\over 2}\int _{\Omega }\int _{\Omega } k(x,y)\left[ u^*(x)u(y)-u(x)u^*(y) \right] \left( \frac{u(x)}{u^*(x) }+\frac{u^*(x)}{u(x) } -\frac{u(y)}{u^*(y) }-\frac{u^*(y)}{u(y) } \right) dy dx\\&\quad = {d\over 2}\int _{\Omega }\int _{\Omega } k(x,y)\left[ u^*(x)u(y)-u(x)u^*(y) \right] ^2\left( \frac{1}{u(x) u(y) } -\frac{1}{u^*(x)u^*(y) } \right) dy dx\\&\quad \le 0 \end{aligned}$$

since \(u>u^*\). The claim is proved, i.e., \(\int _{\Omega }(w+cz)w^2 dx\le 0\).
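The symmetrization step above boils down to the pointwise algebraic identity \((BC-AD)\left( \tfrac{A}{B}+\tfrac{B}{A}-\tfrac{C}{D}-\tfrac{D}{C}\right) =(BC-AD)^2\left( \tfrac{1}{AC}-\tfrac{1}{BD}\right) \) with \(A=u(x)\), \(B=u^*(x)\), \(C=u(y)\), \(D=u^*(y)\); a quick symbolic check (SymPy sketch):

```python
import sympy as sp

# A = u(x), B = u*(x), C = u(y), D = u*(y) at a fixed pair (x, y).
A, B, C, D = sp.symbols('A B C D', positive=True)

lhs = (B*C - A*D) * (A/B + B/A - C/D - D/C)
rhs = (B*C - A*D)**2 * (1/(A*C) - 1/(B*D))
assert sp.simplify(lhs - rhs) == 0
```

The sign of the double integral then follows, since \(u>u^*\) makes the factor \(1/(u(x)u(y)) - 1/(u^*(x)u^*(y))\) negative.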

Similarly, using (3.4) and the equation satisfied by v, we have

$$\begin{aligned} D\left( v \mathcal {P}[z] - z\mathcal {P}[v] \right) =v v^* (bw+z), \end{aligned}$$

which gives that

$$\begin{aligned} D\int _{\Omega }\left( -v \mathcal {P}[v^*] + v^*\mathcal {P}[v] \right) \frac{z^2}{v v^* } dx = \int _{\Omega }(bw+z)z^2 dx. \end{aligned}$$

Similar to the proof of the previous claim, we obtain

$$\begin{aligned}&\int _{\Omega }(bw+z)z^2 dx\nonumber \\&\quad =D\int _{\Omega }\left( -v \mathcal {P}[v^*] + v^*\mathcal {P}[v] \right) \frac{z^2}{v v^* } dx\nonumber \\&\quad = {D\over 2}\int _{\Omega }\int _{\Omega } p(x,y)\left[ v^*(x)v(y)-v(x)v^*(y) \right] ^2\left( \frac{1}{v(x) v(y) } -\frac{1}{v^*(x)v^*(y) } \right) dy dx\nonumber \\&\quad \ge 0 \end{aligned}$$
(3.8)

since \(v<v^*\).

Now we have derived two important inequalities:

$$\begin{aligned} \int _{\Omega }(w+cz)w^2 dx\le 0,\ \ \ \int _{\Omega }(bw+z)z^2 dx\ge 0. \end{aligned}$$
(3.9)

Multiplying the second one by \(c^3\) and subtracting the first one, it follows that

$$\begin{aligned} 0&\le \int _{\Omega }(cbw+cz)(cz)^2 dx -\int _{\Omega }(w+cz)w^2 dx\\&\le \int _{\Omega }(w+cz)(cz)^2 dx -\int _{\Omega }(w+cz)w^2 dx\\&= \int _{\Omega }(w+cz)^2(cz-w) dx, \end{aligned}$$
(3.10)

where \(bc\le 1\) is used in the second inequality. Since \(w= u- u^*>0\) and \(z=v-v^*<0\), the factor \(cz-w\) is negative in \(\bar{\Omega }\); hence (3.10) forces \(w+cz =0\) in \(\bar{\Omega }\), and all the previous inequalities must be equalities. In particular, \(bc=1\) and \(bw+z =0\) (i.e., \(w+cz =0\)) in \(\bar{\Omega }\).
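The final equality in (3.10) is also a pointwise identity: \((w+cz)(cz)^2-(w+cz)w^2=(w+cz)^2(cz-w)\). A one-line symbolic check (SymPy sketch):

```python
import sympy as sp

# W, Z stand for w(x), z(x) at a fixed point; c is the competition coefficient.
W, Z, c = sp.symbols('W Z c', real=True)

lhs = (W + c*Z) * (c*Z)**2 - (W + c*Z) * W**2
rhs = (W + c*Z)**2 * (c*Z - W)
assert sp.expand(lhs - rhs) == 0
```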

Moreover, note that \(w+cz =0\) is equivalent to \(u+cv =u^* +cv^*\). Denote \(R(x)=u+cv =u^* +cv^*\) for convenience. According to the equation satisfied by u, \(u^*\), one sees that both u and \(u^*\) are solutions of the same linear equation

$$\begin{aligned} d \mathcal {K}[U] + (m(x)-R(x)) U=0. \end{aligned}$$

Since both u and \(u^*\) are positive functions in \(\mathbb {X}\), u and \(u^*\) can be regarded as principal eigenfunctions of the nonlocal eigenvalue problem

$$\begin{aligned} d \mathcal {K}[\phi ] +(m(x)-R(x))\phi =\lambda \phi \end{aligned}$$

with the principal eigenvalue being zero. It is proved in [33] that the principal eigenvalue is algebraically simple whenever it exists, which implies that \(u^*= \alpha u\), where \(0<\alpha <1\). Similarly, it can be verified that \(v^*= \beta v\), where \(\beta >1\). Then using \(u+cv =u^* +cv^*\) again, we have

$$\begin{aligned} u=c{\beta -1\over 1-\alpha } v. \end{aligned}$$

Substituting this relation into the system satisfied by \((u,v)\), we obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} d \mathcal {K}[ v] + v\left( m(x)-c{\beta -\alpha \over 1-\alpha } v \right) =0, \\ D \mathcal {P}[v] +v\left( M(x)- {\beta -\alpha \over 1-\alpha } v \right) =0, \end{array}\right. } \end{aligned}$$

where \(bc=1\) is used. The uniqueness of positive steady state to single-species models (1.5) and (1.6) implies that

$$\begin{aligned} u_d= c{\beta -\alpha \over 1-\alpha } v,\ \ \ v_D = {\beta -\alpha \over 1-\alpha } v. \end{aligned}$$

Therefore, \(bc=1\), \(b u_d= v_D\) and all the positive steady states of (1.1) consist of \((su_d, (1-s)v_D)\), \(0<s<1\).\(\square \)
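The continuum of steady states described above can also be observed numerically. The following sketch (Python/NumPy; the kernel, the growth rate m, and all numerical parameters are hypothetical choices, not taken from the paper) treats a one-dimensional symmetric case with \(b=c=1\), \(D=d\), \(\mathcal {P}=\mathcal {K}\) and \(M=m\), so that \(v_D=u_d\) and \(bu_d=v_D\) holds automatically; it approximates \(u_d\) by time-stepping the single-species model and then checks that \((su_d, (1-s)v_D)\) makes the right-hand side of (1.1) vanish for several values of s:

```python
import numpy as np

# Discretize Omega = [0, 1] with a uniform grid and quadrature.
N = 200
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Symmetric dispersal kernel k(x, y) and a(x) = int_Omega k(x, y) dy.
k = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)
a = k.sum(axis=1) * dx

d = 0.1  # dispersal rate (hypothetical)

def K(u):
    """Discrete version of K[u] = int k(., y) u(y) dy - a(.) u."""
    return k @ u * dx - a * u

m = 1.0 + 0.5 * np.cos(2.0 * np.pi * x)  # heterogeneous growth rate (hypothetical)

# Approximate u_d by time-stepping u_t = d K[u] + u (m - u) to its steady state.
u = np.full(N, 0.5)
for _ in range(100000):
    u = u + 2e-3 * (d * K(u) + u * (m - u))
u_d = u
v_D = u_d.copy()  # b = c = 1, D = d, P = K, M = m imply v_D = u_d

# (s u_d, (1 - s) v_D) should annihilate the right-hand side of (1.1).
residuals = []
for s in (0.25, 0.5, 0.75):
    U, V = s * u_d, (1.0 - s) * v_D
    res_u = d * K(U) + U * (m - U - V)  # c = 1
    res_v = d * K(V) + V * (m - U - V)  # b = 1, M = m, D = d
    residuals.append(max(np.abs(res_u).max(), np.abs(res_v).max()))
```

In this symmetric setting the residual of \((su_d, (1-s)v_D)\) is exactly s times the single-species residual of \(u_d\), so it is negligible for every s once the time-stepping has converged.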

Now we complete the proof of Theorem 1.1 on the basis of Propositions 3.1 and 3.2.

Proof of Theorem 1.1

(i) According to Lemma 2.2, in this case, \(\mu _{(u_d,0)}>0\), \(\nu _{(0, v_D)}>0\). Thus thanks to Theorem 2.5 and Lemma 2.6, one sees that the system (1.1) admits a positive steady state \((u,v)\in \mathbb {X}\times \mathbb {X}\).

Again due to Theorem 2.5, it suffices to verify the uniqueness of positive steady states. Suppose that this is not true, and let \((u^*, v^*)\) denote a positive steady state of (1.1) different from \((u,v)\). By Lemma 2.4, there exist an upper solution \((\tilde{u}_0, \tilde{v}_0)\) and a lower solution \((\underline{u}_0, \underline{v}_0)\) of (1.1) such that

$$\begin{aligned} (\underline{u}_0, \underline{v}_0)<_c (u,v),\ (u^*, v^*) <_c (\tilde{u}_0, \tilde{v}_0). \end{aligned}$$

Then, according to Lemma 2.3, one sees that the solution of (1.1) with initial value \((\underline{u}_0, \underline{v}_0)\) increases to a positive steady state of (1.1) in \(L^{\infty }(\Omega )\times L^{\infty }(\Omega )\), denoted by \((u_1,v_1)\), while the solution of (1.1) with initial value \((\tilde{u}_0, \tilde{v}_0)\) decreases to a positive steady state of (1.1) in \(L^{\infty }(\Omega )\times L^{\infty }(\Omega )\), denoted by \((u_2,v_2)\). Thanks to Lemma 2.6, one has \((u_1,v_1), (u_2,v_2) \in \mathbb {X}\times \mathbb {X}.\) Moreover, by the comparison principle, it is routine to show that

$$\begin{aligned} (u_1,v_1) \le _c (u,v),\ (u^*, v^*) \le _c (u_2,v_2). \end{aligned}$$

Therefore, Propositions 3.1 and 3.2 indicate that

$$\begin{aligned} (u_1,v_1) = (u,v)= (u^*, v^*) = (u_2,v_2). \end{aligned}$$

This is a contradiction.

(ii) According to Theorem 2.5, to prove that \((0,v_D)\) is globally asymptotically stable, it suffices to show that (1.1) admits no positive steady state. Suppose, to the contrary, that (1.1) admits a positive steady state \((u,v)\), i.e., \((u,v)\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} d \mathcal {K}[u] +u(m(x)-u- cv)=0, \\ D \mathcal {P}[v] +v(M(x)-bu- v)=0. \end{array}\right. } \end{aligned}$$

Denote \((u^*, v^*)= (0,v_D)\) and set \(w= u- u^*= u>0\), \(z=v-v^*<0\). Similar to the computation of (3.8), one has

$$\begin{aligned}&\int _{\Omega }(bu+z)z^2 dx = \int _{\Omega }(bw+z)z^2 dx\\&\quad = {D\over 2}\int _{\Omega }\int _{\Omega } p(x,y)\left[ v^*(x)v(y)-v(x)v^*(y) \right] ^2\left( \frac{1}{v(x) v(y) } -\frac{1}{v^*(x)v^*(y) } \right) dy dx \ge 0. \end{aligned}$$

However,

$$\begin{aligned} 0\ge \nu _{(0, v_D)}&= \sup _{0 \ne \phi \in L^2} \frac{\int _{\Omega } \left( d\phi \mathcal {K}[\phi ]+[m(x)-cv_D]\phi ^2 \right) dx }{\int _{\Omega } \phi ^2 dx}\\&\ge \frac{\int _{\Omega } \left( d u \mathcal {K}[ u ]+[m(x)-cv_D]u^2 \right) dx }{\int _{\Omega } u^2 dx}\\&= \frac{\int _{\Omega } \left( -[m(x)-u-cv]u^2+[m(x)-cv_D]u^2 \right) dx }{\int _{\Omega } u^2 dx}\\&= \frac{\int _{\Omega } (u+cz)u^2 dx }{\int _{\Omega } u^2 dx}. \end{aligned}$$

Putting together the above two inequalities:

$$\begin{aligned} \int _{\Omega }(bu+z)z^2 dx\ge 0,\ \ \ \int _{\Omega } (u+cz)u^2 dx \le 0. \end{aligned}$$
(3.11)

Similar to (3.10), we obtain

$$\begin{aligned} \int _{\Omega } (u+cz)^2(cz-u)dx\ge 0, \end{aligned}$$

where \(0<bc\le 1\) is used. Since \(u>0\) and \(z<0\), the factor \(cz-u\) is negative in \(\bar{\Omega }\); hence \(u+cz =0 \) in \(\bar{\Omega }\) and all the previous inequalities must be equalities. In particular, \(bc=1\) and \(bu+z =0\). Note that \(bu+z =0\) means \(bu+v = v_D\). Then, based on the equations satisfied by v and \(v_D\) respectively, it is routine to show that \(v= \alpha v_D\), where \(0<\alpha <1\). Thus, \(u= c(1-\alpha )v_D\). Plugging \(v= \alpha v_D\) and \(u= c(1-\alpha )v_D\) into the equation satisfied by u, we have

$$\begin{aligned} d c(1-\alpha ) \mathcal {K}[v_D] +c(1-\alpha )v_D (m(x)-c v_D)=0, \end{aligned}$$

which indicates that \(u_d = c v_D\), i.e., \(bu_d= v_D\). This yields a contradiction due to Proposition 3.1.

The proof of (iii) is similar to that of case (ii); the details are omitted.\(\square \)

4 Proof of Theorem 1.2

Throughout this section, let \((u(x,t), v(x,t))\) denote a solution of the system (1.1). First of all, thanks to Proposition 3.1, if both \((u_d, 0)\) and \((0,v_D)\) are locally stable or neutrally stable, then \(bc=1\), \(bu_d = v_D\), and thus it is routine to verify that (1.1) has a continuum of steady states \(\{(su_d, (1-s)v_D),\ 0\le s\le 1\}\). Moreover, fix \(0<s<1\). For any \(\epsilon >0\), choose \(\tau >0\) such that

$$\begin{aligned} \tau< \min \left\{ {1\over 2}s, \ {1\over 2} (1-s) \right\} ,\ \tau \max _{\bar{\Omega }}u_d<{\epsilon \over 2},\ \tau \max _{\bar{\Omega }}v_D< { \epsilon \over 2}. \end{aligned}$$

Set

$$\begin{aligned} \delta = \min \left\{ \tau \min _{\bar{\Omega }}u_d ,\ \tau \min _{\bar{\Omega }}v_D \right\} . \end{aligned}$$

Then we claim that for any \((u_0,v_0) \in \mathbb {X} \times \mathbb {X}\), if

$$\begin{aligned} \Vert u_0 -su_d \Vert _{\mathbb {X}} + \Vert v_0 - (1-s)v_D \Vert _{\mathbb {X}} < \delta , \end{aligned}$$
(4.1)

then the solution of (1.1) satisfies

$$\begin{aligned} \Vert u(\cdot , t) -su_d \Vert _{\mathbb {X}} + \Vert v(\cdot , t) - (1-s)v_D \Vert _{\mathbb {X}} < \epsilon \end{aligned}$$

for any \(t>0\). Notice that (4.1) indicates that

$$\begin{aligned} u_0<su_d+ \delta \le su_d + \tau \min _{\bar{\Omega }}u_d \le (s+\tau ) u_d <u_d\ \ \text {in} \ \bar{\Omega }\end{aligned}$$

and

$$\begin{aligned} v_0> (1-s)v_D- \delta \ge (1-s)v_D -\tau \min _{\bar{\Omega }}v_D \ge (1-s-\tau ) v_D>0 \ \ \text {in} \ \bar{\Omega }. \end{aligned}$$

By the comparison principle, it follows that for \(t>0\)

$$\begin{aligned} u(x,t) < (s+\tau ) u_d, \ v(x,t)>(1-s-\tau ) v_D \ \ \text {in} \ \bar{\Omega }. \end{aligned}$$

Similarly, we can derive that

$$\begin{aligned} u(x,t) > (s-\tau ) u_d, \ v(x,t)<(1-s+\tau ) v_D \ \ \text {in} \ \bar{\Omega }. \end{aligned}$$

Hence according to the choice of \(\tau \), one sees that

$$\begin{aligned} \Vert u(\cdot , t) -su_d \Vert _{\mathbb {X}} + \Vert v(\cdot , t) - (1-s)v_D \Vert _{\mathbb {X}} \le \Vert \tau u_d \Vert _{\mathbb {X}} + \Vert \tau v_D \Vert _{\mathbb {X}} < \epsilon . \end{aligned}$$

The claim is proved and thus \((su_d, (1-s)v_D)\) is locally stable for any \(0<s<1\).

Now it remains to demonstrate the global convergence of solutions of the system (1.1) for any nonnegative initial data \((u_0, v_0) \not \equiv (0,0)\). The proof of this part is quite involved.

Let us add some explanations here for the convenience of readers. If either \(u_0 \equiv 0\) or \(v_0 \equiv 0\), then (1.1) reduces to a single-species model, and the corresponding solution \((u(x,t), v(x,t))\) converges to \((0, v_D)\) or \((u_d,0)\), respectively, in \(\mathbb {X} \times \mathbb {X}\). Hence it suffices to consider initial data \((u_0, v_0) \in \mathbb {X}_{++} \times \mathbb {X}_{++}\). By the comparison principle, we have \(u(x,t)>0\) and \(v(x,t)>0\) in \(\bar{\Omega }\) for \(t>0\). Therefore, for the rest of the proof, assume that \(u_0 > 0\), \(v_0 > 0\) in \(\bar{\Omega }\) and consider three cases separately:

Case I: \(u(x,t)\) does not weakly converge to zero in \(L^2(\Omega )\);

Case II: \(v(x,t)\) does not weakly converge to zero in \(L^2(\Omega )\);

Case III: both \(u(x,t)\) and \(v(x,t)\) weakly converge to zero in \(L^2(\Omega )\).

The following property indicates how to initiate the proofs of Cases I and II.

Proposition 4.1

Assume that (C1)–(C3) hold.

(i) If Case I holds, then there exists \(T_1>0\) such that \(v(x,t)<v_D(x)\) in \(\bar{\Omega }\) for \(t\ge T_1\).

(ii) If Case II holds, then there exists \(T_2>0\) such that \(u(x,t)<u_d(x)\) in \(\bar{\Omega }\) for \(t\ge T_2\).

We prepare a lemma first, which is crucial in the proof of Proposition 4.1.

Lemma 4.2

Let \(\Omega \) denote a bounded domain in \(\mathbb {R}^n\). Assume that \(u(\cdot ,t)\in L^{\infty }(\Omega )\), \(t \ge 0\) satisfies

$$\begin{aligned} u_t(x,t) \ge \delta \int _{\Omega \cap B_r(x)} u(y,t) dy,\ \text { and } u(x,t)\ge 0 \text { for } t\ge 0, \end{aligned}$$

where \(r>0, \delta >0\). Then for any \(t_0\ge 0\), \(0<t<1\), there exist \(\alpha >0\) and \(A_0=A_0(\Omega )\) such that

$$\begin{aligned} u(x,t_0+t) \ge A_0 t^{\alpha } \int _{\Omega } u(y,t_0) dy\ \ \text {in}\ \Omega . \end{aligned}$$

Proof

Without loss of generality, assume that \(t_0=0\). Note that the conclusion is obvious if \(u(x,0)\equiv 0\). Now suppose that \(u(x,0)\not \equiv 0\) and let \(a= \int _{\Omega } u(x,0)dx >0\). Since \(\Omega \) is bounded, there exist \(x_j \in \mathbb {R}^n\), \(1\le j\le J\), such that

$$\begin{aligned} \Omega \subset \subset \bigcup _{1\le j\le J} B_j, \text { with } B_j \triangleq B_{r/4} (x_j)= \{ x\in \mathbb {R}^n, \ |x-x_j| < r/4 \}. \end{aligned}$$

Without loss of generality (after discarding those balls which do not intersect \(\Omega \) and relabeling), assume that \(\sigma = \min \{ |B_j\bigcap \Omega |,\ 1\le j\le J \}>0\), \(\int _{\Omega \bigcap B_1} u(y,t) dy \ge a/J\) for \(t\ge 0\) (by the pigeonhole principle this holds at \(t=0\) for some ball, and it persists since \(u_t\ge 0\)), and \(B_{j+1} \bigcap B_j \ne \varnothing \), \(1\le j\le J-1\).

Now first for any \(x\in B_1\),

$$\begin{aligned} u_t(x,t) \ge \delta \int _{\Omega \bigcap B_r(x)} u(y,t) dy \ge \delta \int _{\Omega \bigcap B_1} u(y,t) dy \ge \delta {a\over J}. \end{aligned}$$

Thus for \(x\in B_1\), \(t>0\),

$$\begin{aligned} u(x,t) \ge \delta {a\over J} t. \end{aligned}$$
(4.2)

Secondly, for any \(x\in B_2\), it follows that

$$\begin{aligned} u_t(x,t) \ge \delta \int _{\Omega \bigcap B_r(x)} u(y,t) dy \ge \delta \int _{\Omega \bigcap B_1} u(y,t) dy \ge \delta {a\over J}. \end{aligned}$$

Thus for \(x\in B_2\), \(t>0\),

$$\begin{aligned} u(x,t) \ge \delta {a\over J} t. \end{aligned}$$
(4.3)

Next, for any \(x\in B_3\), by (4.3), one sees that

$$\begin{aligned} u_t(x,t)&\ge \delta \int _{\Omega \bigcap B_r(x)} u(y,t) dy \ge \delta \int _{\Omega \bigcap B_2} u(y,t) dy \\&\ge \delta \sigma \delta {a\over J} t = \sigma \delta ^2 {a\over J} t. \end{aligned}$$

Hence for \(x\in B_3\), \(t>0\),

$$\begin{aligned} u(x,t) \ge \sigma \delta ^2 {a\over J} {t^2\over 2}. \end{aligned}$$

This step can be repeated and we have, for \(x\in B_j\), \(3\le j\le J\), \(t>0\),

$$\begin{aligned} u(x,t) \ge \sigma ^{j-2} \delta ^{j-1} {a\over J} {t^{j-1}\over (j-1)!}. \end{aligned}$$

Therefore, together with (4.2) and (4.3), one sees that, for \(x\in \bar{\Omega }\), \(0<t<1\)

$$\begin{aligned} u(x,t) \ge A_0 t^{J-1} \int _{\Omega } u(y,0) dy, \end{aligned}$$

where

$$\begin{aligned} A_0= \min \left\{ 1,\sigma , \sigma ^{J-2} \right\} \min \left\{ \delta , \delta ^{J-1} \right\} {1\over J!}. \end{aligned}$$

The lemma is proved by choosing \(\alpha =J-1.\) \(\square \)
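The chaining mechanism in the proof can be illustrated numerically: although the initial datum vanishes on most of \(\Omega \), the inequality \(u_t \ge \delta \int _{\Omega \cap B_r(x)} u(y,t) dy\) propagates positivity through the chain of overlapping balls, so \(u(\cdot ,t)\) becomes strictly positive on all of \(\Omega \) after any short time. A sketch in Python/NumPy (all parameter values hypothetical), time-stepping the borderline case of equality:

```python
import numpy as np

# Omega = [0, 1]; r, delta, and the initial datum are hypothetical choices.
N = 400
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
r, delta = 0.15, 1.0

# Matrix of the indicator of |x - y| <= r, so that
# (near @ u) * dx approximates int_{Omega ∩ B_r(x)} u(y) dy.
near = (np.abs(x[:, None] - x[None, :]) <= r).astype(float)

# Initial datum vanishing outside [0, 0.1].
u = np.where(x <= 0.1, 1.0, 0.0)

# Forward Euler for the borderline case u_t = delta * int_{Omega ∩ B_r(x)} u dy.
dt = 1e-3
for _ in range(500):  # evolve up to t = 0.5
    u = u + dt * delta * (near @ u) * dx
```

Here u starts as the indicator of \([0, 0.1]\), yet at \(t=0.5\) its minimum over the grid is strictly positive, in line with the uniform lower bound asserted by the lemma.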

Proof of Proposition 4.1

Assume that Case I happens, i.e., \(u(\cdot , t)\) does not weakly converge to zero in \(L^2(\Omega )\) as \(t \rightarrow \infty \). Then there exist a constant \(a_0>0\) and a sequence \(\{ t_j \}_{j\ge 1}\) with \(t_j \rightarrow \infty \) as \(j\rightarrow \infty \) such that

$$\begin{aligned} \int _{\Omega } u(x,t_j)dx > a_0\ \ \text {for all } j\ge 1. \end{aligned}$$
(4.4)

First of all, we will derive a uniform lower bound for u in certain time intervals. According to assumption (C2), there exist \(r_1>0\), \(\delta _1>0\) such that \(k(x,y)\ge \delta _1\) if \(|x-y|\le r_1\). Then one sees that

$$\begin{aligned} u_t= & {} d\int _{\Omega } k(x,y) u(y,t) dy +u(m(x)- d a_d(x) -u-cv)\nonumber \\\ge & {} d \delta _1 \int _{\Omega \bigcap B_{r_1}(x)} u(y,t) dy -A_1 u, \end{aligned}$$
(4.5)

where

$$\begin{aligned} A_1 = \sup _{x\in \Omega ,\ t>0}\left| m(x)-d a_d(x) -u(x,t)-cv(x,t)\right| . \end{aligned}$$
(4.6)

Let \(U=e^{A_1 t} u\) and it follows that

$$\begin{aligned} U_t \ge d \delta _1 \int _{\Omega \bigcap B_{r_1}(x)} U(y,t) dy. \end{aligned}$$

Thus Lemma 4.2 can be applied to deduce that there exist \(\alpha >0\), \(A_0=A_0(\Omega )\) such that

$$\begin{aligned} U \left( x,t_j+{1\over 2}\right) \ge A_0 \left( {1\over 2}\right) ^{\alpha } \int _{\Omega } U(y,t_j) dy\ \ \text {in}\ \bar{\Omega }, \ j\ge 1, \end{aligned}$$

which, by (4.4), implies a crucial estimate:

$$\begin{aligned} u \left( x,t_j+{1\over 2}\right) \ge A_0 e^{-{1\over 2}A_1} \left( {1\over 2}\right) ^{\alpha } \int _{\Omega } u(y,t_j) dy \ge A_0 e^{-{1\over 2}A_1} \left( {1\over 2}\right) ^{\alpha } a_0 \doteq A_2\ \ \text {in}\ \bar{\Omega }, \ j\ge 1. \end{aligned}$$
(4.7)

Thus, thanks to (4.7), we have the following estimate for \(t> t_j+{1\over 2}\)

$$\begin{aligned}&d\int _{\Omega } k(x,y) u(y,t) dy\\&\quad = d\int _{\Omega } k(x,y) \int _{t_j+{1\over 2}}^t u_{\tau }(y,\tau ) d\tau dy +d\int _{\Omega } k(x,y) u\left( y,t_j+{1\over 2}\right) dy\\&\quad \ge \, -d \left\| \int _{\Omega } k(\cdot ,y) dy\right\| _{L^{\infty }(\Omega )}\Vert u_t(\cdot , t) \Vert _{L^{\infty }(\Omega )} \left( t-t_j- {1\over 2} \right) + d A_2\int _{\Omega } k(x,y) dy. \end{aligned}$$

It is easy to see that \(\min _{x\in \bar{\Omega }} \int _{\Omega } k(x,y) dy >0\), since \(\int _{\Omega } k(x,y) dy \in \mathbb {X}\) and \(k(x,y)\ge \delta _1\) whenever \(|x-y|\le r_1\). Denote

$$\begin{aligned} \delta _2 =d A_2\min _{x\in \bar{\Omega }} \int _{\Omega } k(x,y) dy . \end{aligned}$$

Also, it is easy to verify that \(\Vert u_t(\cdot , t) \Vert _{L^{\infty }(\Omega )}\) has an upper bound independent of \(t\ge 0\). Hence, there exists \(\epsilon _1>0\) such that for \(j\ge 1\) and \(0<t-t_j-{1\over 2}<\epsilon _1\), it holds that

$$\begin{aligned} d\int _{\Omega } k(x,y) u(y,t) dy \ge \delta _2/ 2\ \ \text {in}\ \bar{\Omega }, \end{aligned}$$

which yields that for \(t\in [t_j+{1\over 2}, t_j+{1\over 2} +\epsilon _1]\), \(x\in \bar{\Omega }\), \(j\ge 1\),

$$\begin{aligned} u_t(x,t) = d\int _{\Omega } k(x,y) u(y,t) dy +u(m(x)- d a_d(x) -u-cv) \ge \delta _2/ 2 -A_1 u(x,t), \end{aligned}$$

where \(A_1\) is determined in (4.6). Direct computation gives that

$$\begin{aligned} u(x,t) \ge {\delta _2 \over 2A_1} \left( 1- e^{-A_1 (t-t_j - {1\over 2} )} \right) \ \ \text {for}\ x\in \bar{\Omega },\ t\in \left[ t_j+{1\over 2}, t_j+{1\over 2} +\epsilon _1\right] , \ j\ge 1. \end{aligned}$$

Therefore, we reach the conclusion that

$$\begin{aligned} u(x,t) \ge A_3\ \ \text {for}\ x\in \bar{\Omega },\ t\in \left[ t_j+{1\over 2}+ {\epsilon _1\over 2}, t_j+{1\over 2} +\epsilon _1\right] , \ j\ge 1, \end{aligned}$$
(4.8)

where

$$\begin{aligned} A_3 = {\delta _2 \over 2A_1} \left( 1- e^{-A_1 \epsilon _1/2}\right) >0. \end{aligned}$$

Now we are ready to derive the desired estimates for \(v(x,t)\). Note that for the single-species model (1.6), for any given initial data in \(\mathbb {X}_{++}\), the corresponding solution satisfies \(V(\cdot , t) \rightarrow v_D\) in \(\mathbb {X}\) as \(t\rightarrow \infty .\) Thus, thanks to the comparison principle, it is routine to verify that there exists a sequence \(\{ h_j\}_{j\ge 1}\) with \(h_j>0\) and \(\lim _{j\rightarrow \infty }h_j =0\) such that

$$\begin{aligned} v(x,t) \le (1+h_j) v_D(x) \ \ \text {in} \ \bar{\Omega }\ \ \text {for} \ t\ge t_j. \end{aligned}$$
(4.9)

Notice that, to complete the proof, by the comparison principle it suffices to show the existence of \(T_1\) such that \(v(x,t)<v_D(x)\) in \(\bar{\Omega }\) at \(t = T_1\). Indeed, we will prove that \(v(x,t_j+{1\over 2}+\epsilon _1)<v_D(x)\) in \(\bar{\Omega }\) for j large.

First, we show that if j is sufficiently large, then for each \(x\in \bar{\Omega }\), there exists \(s= s(x)\in [t_j+{1\over 2}+ {\epsilon _1\over 2}, t_j+{1\over 2}+\epsilon _1]\) such that \(v(x,s)<v_D(x)\).

Fix \(x\in \bar{\Omega }\) and suppose that

$$\begin{aligned} v(x,t) \ge v_D(x)\ \ \ \text {for}\ t\in \left[ t_j+{1\over 2}+ {\epsilon _1\over 2}, t_j+{1\over 2}+\epsilon _1\right] , \end{aligned}$$
(4.10)

which, by (4.8) and (4.9), yields that for \(t\in [t_j+{1\over 2}+ {\epsilon _1\over 2}, t_j+{1\over 2}+\epsilon _1]\)

$$\begin{aligned} v_t(x, t)&= D\int _{\Omega } p(x,y) v(y, t) dy +v(x, t)(M(x)- D a_D(x) -b u(x, t)-v(x, t)) \\&\le D\int _{\Omega } p(x,y) v(y, t) dy +v(x, t)(M(x)- D a_D(x) -b u(x, t)-v_D(x))\\&\le D\int _{\Omega } p(x,y) (1+h_j) v_D(y) dy +v_D(x)(M(x)- D a_D(x) -v_D(x))\\&\quad + (v(x,t)-v_D(x)) (M(x)- D a_D(x) -b u(x, t) -v_D(x)) - b u(x,t)v_D(x)\\&\le O(h_j) - b A_3 v_D(x). \end{aligned}$$
(4.11)

Thus there exists \(K_1>0\), independent of x, such that for \(j\ge K_1\),

$$\begin{aligned} v_t(x, t) \le - A_4, \end{aligned}$$

where

$$\begin{aligned} A_4 = {1\over 2} b A_3 \min _{ \bar{\Omega }} v_D >0. \end{aligned}$$

Hence

$$\begin{aligned} v\left( x,t_j+{1\over 2}+\epsilon _1\right) \le v\left( x, t_j+{1\over 2}+ {\epsilon _1\over 2}\right) -{1\over 2} \epsilon _1 A_4\le (1+h_j) v_D(x)-{1\over 2} \epsilon _1 A_4, \end{aligned}$$

which implies that there exists \(K_2 \ge K_1\), independent of x, such that for \(j\ge K_2\),

$$\begin{aligned} v\left( x,t_j+{1\over 2}+\epsilon _1\right) <v_D(x). \end{aligned}$$

This contradicts (4.10).

Therefore, if \(j\ge K_2\), there exists \(s= s(x)\in [t_j+{1\over 2}+ {\epsilon _1\over 2}, t_j+{1\over 2}+\epsilon _1]\) such that \(v(x,s)<v_D(x)\). Note that s depends on the choice of x; in fact, we need to find a time which is independent of \(x\in \bar{\Omega }\).

To be more specific, we will show that if j is large enough, \(v(x,t)<v_D(x)\) for \(t\in [s(x), t_j+{1\over 2}+\epsilon _1].\) Otherwise, there exists \(\tilde{t} = \tilde{t} (x)\in (s(x), t_j+{1\over 2}+\epsilon _1]\) such that \(v(x,\tilde{t})=v_D(x)\) and \(v(x, t)<v_D(x)\) for \(t\in (s(x), \tilde{t})\). Then, by (4.8) and (4.9), it follows that

$$\begin{aligned} 0\le v_t(x,\tilde{t})&= D\int _{\Omega } p(x,y) v(y,\tilde{t}) dy +v(x,\tilde{t})(M(x)- D a_D(x) -b u(x,\tilde{t})-v(x,\tilde{t})) \\&\le D\int _{\Omega } p(x,y) (1+h_j) v_D(y) dy +v_D(x)(M(x)- D a_D(x) -b A_3-v_D(x))\\&= h_j D\int _{\Omega } p(x,y) v_D(y) dy - b A_3 v_D(x). \end{aligned}$$
(4.12)

Then there exists \(K_3\ge K_2\), independent of x, such that for \(j\ge K_3\), \(v_t(x,\tilde{t})<0\), which is a contradiction. Hence, in particular, \(v(x,t_j+{1\over 2}+\epsilon _1)<v_D(x)\) for \(j\ge K_3\).

The proof of (i) is complete and (ii) can be proved in the same way.\(\square \)

Now, we continue the proof for Case I. With the help of Proposition 4.1(i), without loss of generality, we could assume that \(u_0>0\), \(0<v_0< v_D\) in \(\bar{\Omega }\). Define

$$\begin{aligned} \theta (t) = \sup \{ \theta \ | \ u(x,t)>\theta u_d(x), v(x,t)<(1-\theta ) v_D(x) \ \text {in}\ \bar{\Omega }\}. \end{aligned}$$

It is obvious that \(0< \theta (0) <1\) and that \(\theta (t)\) is increasing in t due to the comparison principle. Denote

$$\begin{aligned} 0<\theta _* = \lim _{t\rightarrow \infty } \theta (t) \le 1. \end{aligned}$$

Assume that \(\theta _* = 1\). For \(v(x,t)\), since \(v(x,t) \le (1-\theta (t) ) v_D(x)\) in \(\bar{\Omega }\), it is obvious that

$$\begin{aligned} v(\cdot , t) \rightarrow 0\ \text {in}\ \mathbb {X} \ \ \text {as}\ t \rightarrow \infty . \end{aligned}$$

For \(u(x,t)\), comparing with the solution \(U(x,t)\) of the single-species model (1.5) with initial data \(u_0\in \mathbb {X}_{++}\), one sees that \(u(x,t) \le U(x,t).\) Thus it follows from Theorem 2.1 and the definition of \(\theta (t)\) that

$$\begin{aligned} u(\cdot , t) \rightarrow u_d(\cdot )\ \text {in}\ \mathbb {X} \ \ \text {as}\ t \rightarrow \infty . \end{aligned}$$

It remains to consider \(0<\theta _* < 1\). For clarity, the proof of this situation will be divided into three steps.

Step 1. We claim that there exists a subsequence of \((u(\cdot , t), v(\cdot , t))\) which converges to \((\alpha _1 u_d, (1-\alpha _1 ) v_D)\) in \(L^2(\Omega ) \times L^2(\Omega )\), where \(\alpha _1\in [0,1]\).

Fix \(0<s_1<\theta (0)\), let \((u^*, v^*) = (s_1 u_d, (1-s_1) v_D )\) and set

$$\begin{aligned} w(x,t) = u(x,t)- u^*(x),\ z(x,t) =v(x,t)- v^*(x). \end{aligned}$$

Recall that (uv) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t= d \mathcal {K}[u] +u(m(x)- u- c v) &{}\text {in } \Omega \times [0,\infty ),\\ v_t= D \mathcal {P}[v] +v(M(x)-b u- v) &{}\text {in } \Omega \times [0,\infty ), \end{array}\right. } \end{aligned}$$

and \((u^*, v^*)\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} d \mathcal {K}[u^*] +u^*(m(x)-u^*- cv^*)=0, \\ D \mathcal {P}[v^*] +v^*(M(x)-bu^*- v^*)=0. \end{array}\right. } \end{aligned}$$

Thus using the equations satisfied by u and \(u^*\), one has

$$\begin{aligned} d\left( u^* \mathcal {K}[u] - u \mathcal {K}[u^*] \right) = u^* u_t + uu^* (w+cz). \end{aligned}$$

This yields that

$$\begin{aligned} d\int _{\Omega }\left( -u \mathcal {K}[u^*] + u^* \mathcal {K}[u] \right) \frac{w^2}{u u^* } dx = \int _{\Omega } \left( {u_t\over u}w^2 + (w+cz)w^2 \right) dx. \end{aligned}$$

Applying the same estimates as those used for the left-hand side of (3.5) in the proof of Proposition 3.2, we have

$$\begin{aligned} \int _{\Omega } \left( {u_t\over u}w^2 + (w+cz)w^2 \right) dx \le 0. \end{aligned}$$
(4.13)

Similarly, using the equations satisfied by v and \(v^*\), we obtain

$$\begin{aligned} \int _{\Omega } \left( {v_t\over v}z^2 + (bw+ z)z^2 \right) dx \ge 0. \end{aligned}$$
(4.14)

Then (4.13), (4.14) and \(bc=1\) imply that

$$\begin{aligned}&c^3 \int _{\Omega } {v_t\over v} z^2dx - \int _{\Omega } {u_t\over u} w^2 dx \ge \int _{\Omega } \left[ -c^3(bw+ z)z^2 + (w+cz)w^2 \right] dx\nonumber \\&\quad = \int _{\Omega } (w+cz)^2 (w - cz) dx. \end{aligned}$$
(4.15)

Note that

$$\begin{aligned} w- cz = u- u^* -c (v-v^*) \ge (\theta (t) -s_1) u_d + c (\theta (t) -s_1)v_D =2(\theta (t) -s_1) u_d, \end{aligned}$$

since \(bc =1\) and \(bu_d = v_D\). Denote \(C_0 = 2(\theta (0) -s_1) \min _{ \bar{\Omega }} u_d \). Hence (4.15) implies

$$\begin{aligned}&\int _{\Omega } (w+cz)^2 dx\\&\quad \le \, {1\over C_0} \left( c^3 \int _{\Omega } {v_t\over v} z^2dx - \int _{\Omega } {u_t\over u} w^2 dx \right) \\&\quad = {1\over C_0} \left( c^3 \int _{\Omega } \left( vv_t -2v^* v_t + (v^*)^2{v_t\over v} \right) dx - \int _{\Omega } \left( uu_t -2u^* u_t + (u^*)^2{u_t\over u} \right) dx \right) . \end{aligned}$$

Thus for any \(T>0\),

$$\begin{aligned}&\int _0^{T } \int _{\Omega } (w+cz)^2 dx dt\\&\quad \le \, { c^3 \over C_0}\int _{\Omega } \left( { 1\over 2}v^2(x,T) -2v^*(x) v(x,T) + (v^*(x))^2\ln v (x,T) \right) dx\\&\qquad -\, { c^3 \over C_0}\int _{\Omega } \left( {1\over 2} v_0^2(x) -2 v^*(x) v_0(x) +(v^*(x))^2 \ln v_0(x)\right) dx\\&\qquad -\, {1\over C_0}\int _{\Omega } \left( {1\over 2 } u^2 (x,T) -2u^*(x) u(x,T) + (u^*(x))^2 \ln u (x,T) \right) dx\\&\qquad +\, {1\over C_0}\int _{\Omega } \left( {1\over 2 } u_0^2 (x) -2u^*(x) u_0(x) + (u^*(x))^2 \ln u_0 (x) \right) dx. \end{aligned}$$

Notice that in this case, \(u(x,t)\ge \theta (0) u_d\) and \(\theta (0)>0\), hence

$$\begin{aligned} \int _0^{\infty } \int _{\Omega } (w+cz)^2 dx dt <\infty . \end{aligned}$$
(4.16)

Moreover, it is routine to verify that \(\int _{\Omega } (w+cz)^2 dx\) is uniformly continuous in t. This, together with (4.16), yields that

$$\begin{aligned} \lim _{t\rightarrow \infty }\int _{\Omega } (w+cz)^2 dx =0. \end{aligned}$$
(4.17)

Again since \(bc =1\) and \(bu_d = v_D\), \(w+cz = u+cv - s_1u_d- c(1-s_1) v_D = u+cv -u_d\). Hence (4.17) tells us that

$$\begin{aligned} u(\cdot , t)+cv(\cdot , t) \rightarrow u_d(\cdot ) \ \text {in}\ L^2(\Omega )\ \text {as} \ t\rightarrow \infty . \end{aligned}$$
(4.18)

Next, we estimate \(\int _{\Omega } u_t^2 dx\) as follows:

$$\begin{aligned} \int _{\Omega } u_t^2 dx&= \int _{\Omega } \left( d \mathcal {K}[u] u_t +u(m(x)- u- c v)u_t \right) dx\\&= {d\over dt} \int _{\Omega } \left( {1\over 2} d \mathcal {K}[u] u +{1\over 2} (m-u_d) u^2 \right) dx + \int _{\Omega } (u_d - u- cv) uu_t dx\\&\le {d\over dt} \int _{\Omega } \left( {1\over 2} d \mathcal {K}[u] u +{1\over 2} (m-u_d) u^2 \right) dx + {1\over 2}\int _{\Omega } (u_d - u- cv)^2 u^2 dx +{1\over 2}\int _{\Omega } u_t^2 dx, \end{aligned}$$

which gives that

$$\begin{aligned}&\int _0^{\infty }\int _{\Omega } u_t^2 dx dt \\&\quad \le \, \int _0^{\infty } {d\over dt} \int _{\Omega } \left( d \mathcal {K}[u] u + (m-u_d) u^2 \right) dx dt + \int _0^{\infty } \int _{\Omega } (u_d - u- cv)^2 u^2 dx dt \\&\quad < \infty \end{aligned}$$

thanks to (4.16). Moreover, \(\int _{\Omega } u_t^2 dx\) is uniformly continuous in t. Thus we obtain that

$$\begin{aligned} \lim _{t\rightarrow \infty }\int _{\Omega } u_t^2 dx =0. \end{aligned}$$
(4.19)

Furthermore, by the equation satisfied by u:

$$\begin{aligned} u_t =d\int _{\Omega } k(x,y) u(y,t) dy +u(m(x)- d a_d(x) -u-cv), \end{aligned}$$

one has

$$\begin{aligned} u(x,t) = \frac{u_t - d\int _{\Omega } k(x,y) u(y,t) dy}{m(x)- d a_d(x) -u_d} - \frac{u(u_d-u-cv)}{m(x)- d a_d(x) -u_d}, \end{aligned}$$
(4.20)

where, by the equation satisfied by \(u_d\),

$$\begin{aligned} m(x)- d a_d(x) -u_d= - \frac{d\int _{\Omega } k(x,y) u_d(y)dy}{u_d(x)}<0\ \ \text {in}\ \bar{\Omega }. \end{aligned}$$

Also, notice that for \(\phi \in \mathbb {X}\), the mapping \( \phi \mapsto \int _{\Omega } k(x,y) \phi (y) dy \) is compact from \(\mathbb {X}\) to \(\mathbb {X}\). Thus, there exist a subsequence \(\{u(\cdot ,t_j) \}\), \(j\ge 1\), and \(\Phi \in \mathbb {X}\) such that, as \(j\rightarrow \infty \),

$$\begin{aligned} \int _{\Omega } k(x,y) u(y,t_j) dy \rightarrow \Phi \ \ \text {in}\ \ \mathbb {X}. \end{aligned}$$
(4.21)

This, together with (4.18), (4.19) and (4.20), implies that

$$\begin{aligned} u(\cdot , t_j) \rightarrow \frac{ - d \Phi (\cdot ) }{m(\cdot )- d a_d(\cdot ) -u_d(\cdot )}\ \ \text {in}\ L^2(\Omega )\ \ \text {as} \ j\rightarrow \infty . \end{aligned}$$
(4.22)
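The compactness of the kernel map \(\phi \mapsto \int _{\Omega } k(x,y)\phi (y) dy\), used here to extract the convergent subsequence, can be glimpsed numerically: for a smooth kernel, the singular values of the discretized integral operator decay rapidly, so the operator is uniformly approximated by finite-rank maps. The following NumPy sketch uses an illustrative Gaussian kernel on \(\Omega =[0,1]\); the kernel, grid size and width are assumptions, not data from the paper.

```python
import numpy as np

# Midpoint rule for (T phi)(x) = ∫_0^1 k(x,y) phi(y) dy with an illustrative
# smooth symmetric kernel; the Gaussian kernel and grid size are assumptions.
N = 200
x = (np.arange(N) + 0.5) / N
h = 1.0 / N
T = np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2) * h

# Rapid singular-value decay: T is uniformly approximated by finite-rank maps,
# the discrete counterpart of compactness of the integral operator.
s = np.linalg.svd(T, compute_uv=False)
print((s[:12] / s[0]).round(10))
```

The printed ratios drop quickly toward rounding level, which is the discrete shadow of the compactness invoked above.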

Denote

$$\begin{aligned} \tilde{u}= \frac{ - d \Phi }{m - d a_d -u_d } \in \mathbb {X}. \end{aligned}$$

By (4.21) and (4.22), we have

$$\begin{aligned} d\mathcal {K}[\tilde{u} ] +\tilde{u}(m-u_d)=0, \end{aligned}$$

which implies that there exists \( \alpha _1 \ge 0\) such that \(\tilde{u} = \alpha _1 u_d\), since both \(\tilde{u}\) and \(u_d\) can be regarded as eigenfunctions corresponding to the principal eigenvalue zero of the eigenvalue problem \(d\mathcal {K}[\phi ] + (m-u_d) \phi =\mu \phi .\) Thus (4.22) becomes

$$\begin{aligned} u(\cdot , t_j) \rightarrow \alpha _1 u_d \ \ \text {in}\ L^2(\Omega )\ \ \text {as} \ j\rightarrow \infty . \end{aligned}$$

Finally, using \(bc =1\), \(bu_d = v_D\) and (4.18), it is routine to check that \( \alpha _1 \in [0,1]\) and \(v(\cdot , t_j) \rightarrow (1- \alpha _1) v_D\) in \(L^2(\Omega )\) as \(j\rightarrow \infty \).

The claim is proved.
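In the claim just proved, \(\tilde{u} = \alpha _1 u_d\) because both functions are positive eigenfunctions of \(d\mathcal {K}[\phi ] + (m-u_d)\phi \) for the principal eigenvalue zero. This characterization can be checked numerically. The sketch below (the kernel, growth rate \(m\), rate \(d\), and the choice \(a_d(x)=\int _{\Omega } k(x,y)dy\) are illustrative assumptions, not the paper's data) computes \(u_d\) by time-stepping the single-species equation and then verifies that the discretized operator has top eigenvalue numerically zero, with eigenvector proportional to \(u_d\).

```python
import numpy as np

# Illustrative discretization on Omega = [0,1]; all parameter choices are assumptions.
N = 64
x = (np.arange(N) + 0.5) / N
h = 1.0 / N
K = np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2)   # symmetric kernel k(x,y)
a = K.sum(axis=1) * h                                  # a_d(x) = ∫ k(x,y) dy
m = 1.0 + 0.3 * np.cos(2 * np.pi * x)
d = 0.5

# u_d: positive steady state of d K[u] + u(m - u) = 0, by forward Euler.
u_d = np.full(N, 0.5)
for _ in range(60000):
    u_d += 0.01 * (d * (K @ u_d * h - a * u_d) + u_d * (m - u_d))

# Discretization of  phi -> d K[phi] + (m - u_d) phi  (a symmetric matrix here).
A = d * (K * h - np.diag(a)) + np.diag(m - u_d)
mu, vecs = np.linalg.eigh(A)

top = mu[-1]                                           # principal (largest) eigenvalue
phi = vecs[:, -1]
align = abs(phi @ u_d) / (np.linalg.norm(phi) * np.linalg.norm(u_d))
print(top, align)
```

The top eigenvalue comes out numerically zero and its eigenvector is, up to sign, a multiple of \(u_d\), matching the simplicity of the principal eigenvalue used to conclude \(\tilde{u} = \alpha _1 u_d\).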

Step 2. In this step, we will prove that \((u(\cdot , t), v(\cdot , t))\) converges in \(L^2(\Omega ) \times L^2(\Omega )\) as \(t\rightarrow \infty \). Based on the proof in Step 1, it suffices to show that \(\theta _* = \alpha _1\). Obviously, \(\theta _* \le \alpha _1\). Now suppose that \(\theta _* < \alpha _1\); a contradiction will be derived.

According to the definition of \(\theta _*\), for any \(\delta >0\), there exists \(t_{\delta }>0\) such that for \(t \ge t_{\delta }\),

$$\begin{aligned} u(x,t) > (\theta _* -\delta ) u_d(x),\ v(x,t) < (1-\theta _* +\delta ) v_D(x)\ \ \text {in} \ \bar{\Omega }. \end{aligned}$$
(4.23)

We claim that there exist \( \epsilon _0>0\), \( \delta _0>0\) and \( j_0\ge 1\) such that for \(j\ge j_0\),

$$\begin{aligned} {u(x,t_j+ \epsilon _0 ) > (\theta _* + \delta _0) u_d(x),\ v(x,t_j+ \epsilon _0) < (1-\theta _* - \delta _0 ) v_D(x)\ \ \text {in} \ \bar{\Omega }.} \end{aligned}$$
(4.24)

Since \(u(\cdot , t_j) \rightarrow \alpha _1 u_d(\cdot )\) in \(L^2(\Omega )\) as \(j\rightarrow \infty \), it is standard to check that

$$\begin{aligned} d \int _{\Omega } k(x,y) u(y,t_j) dy \rightarrow d \int _{\Omega } k(x,y) \alpha _1 u_d(y) dy \ \ \text {in}\ \ \mathbb {X}\ \text {as}\ j\rightarrow \infty . \end{aligned}$$

Thus \(\theta _* < \alpha _1\) implies that there exist \(\ell _1>0\) and \(j_1\ge 1\) such that for \(j\ge j_1\),

$$\begin{aligned} d \int _{\Omega } k(x,y) u(y,t_j) dy > d \int _{\Omega } k(x,y) \theta _* u_d(y) dy+3\ell _1 \ \ \text {in}\ \ \bar{\Omega }. \end{aligned}$$
(4.25)

Also, note that \(\Vert u_t(\cdot , t)\Vert _{L^{\infty }(\Omega )}\) is uniformly bounded in t due to the boundedness of solutions. It follows from (4.25) that there exists \(\epsilon _1>0\), independent of \(j\ge j_1\), such that for \(t\in [t_j,t_j+\epsilon _1]\), \(j\ge j_1\),

$$\begin{aligned} d \int _{\Omega } k(x,y) u(y,t) dy > d \int _{\Omega } k(x,y) \theta _* u_d(y) dy+ 2 \ell _1 \ \ \text {in}\ \ \bar{\Omega }. \end{aligned}$$
(4.26)

Note that \(\epsilon _1\) may be taken smaller if necessary.

Moreover, there exists \(\delta _1>0\) such that for any \(0<\delta <\delta _1\), \(x\in \bar{\Omega }\)

$$\begin{aligned} \ell _1 + u(m-d a_d -u -cv) > \theta _* u_d \left( m- d a_d - \theta _* u_d -c(1-\theta _* )v_D \right) , \end{aligned}$$
(4.27)

as long as \(u\in [(\theta _* -\delta ) u_d(x), (\theta _* +\delta ) u_d(x)]\), \(v\in (0, (1-\theta _* +\delta ) v_D(x)]\).

Fix \(x\in \bar{\Omega }\) and \(0<\delta <\delta _1\). Suppose that for some \(j\ge j_1\) with \(t_j\ge t_{\delta }\), we have \(u(x,t) \le (\theta _* +\delta ) u_d(x)\) for all \(t\in [t_j,t_j+\epsilon _1]\). Then by (4.23), (4.26) and (4.27),

$$\begin{aligned} u_t(x,t)= & {} d\int _{\Omega } k(x,y) u(y,t) dy +u(m(x)- d a_d(x) -u-cv)\nonumber \\&>\,&d \int _{\Omega } k(x,y) \theta _* u_d(y) dy+ 2 \ell _1 \nonumber \\&\quad +\theta _* u_d \left( m-d a_d - \theta _* u_d -c(1-\theta _* )v_D \right) -\ell _1 \nonumber \\&=\,&\ell _1>0, \end{aligned}$$
(4.28)

which yields that

$$\begin{aligned} u(x, t_j+\epsilon _1)> u(x,t_j) +\ell _1 \epsilon _1 > (\theta _* -\delta ) u_d(x) +\ell _1 \epsilon _1 \ge (\theta _* +\delta ) u_d(x) \end{aligned}$$

provided that

$$\begin{aligned} \delta \le {\ell _1\epsilon _1\over 2 \max _{\bar{\Omega }} u_d}. \end{aligned}$$

This is impossible. Therefore, given

$$\begin{aligned} x\in \bar{\Omega },\ \ 0<\delta < \min \left\{ \delta _1, {\ell _1\epsilon _1\over 2 \max _{\bar{\Omega }} u_d} \right\} , \end{aligned}$$

if \(j\ge j_1\) and \(t_j\ge t_{\delta }\), then there exists \(\hat{t}_j =\hat{t}_j(x) \in [t_j, t_j+\epsilon _1]\) such that \(u(x,\hat{t}_j) > (\theta _* +\delta ) u_d(x)\). Note that \(\hat{t}_j \) indeed depends on x.

Next, we will show that \(u(x, t) > (\theta _* +\delta ) u_d(x)\) for every \(x\in \bar{\Omega }\) and all \(t\in [\hat{t}_j(x), t_j+\epsilon _1]\). Otherwise, if there exist \(x^*\in \bar{\Omega }\) and \(t^*_j\in (\hat{t}_j(x^*), t_j+\epsilon _1]\) such that \(u(x^*, t^*_j) = (\theta _* +\delta ) u_d(x^*)\) and \(u(x^*, t ) > (\theta _* +\delta ) u_d(x^*)\) for \(t\in (\hat{t}_j(x^*), t^*_j) \), then due to (4.23), (4.26) and (4.27), we derive that

$$\begin{aligned} 0\ge & {} u_t(x^*,t^*_j )\nonumber \\&=\,&d\int _{\Omega } k(x,y) u(y,t) dy +u(m(x)- d a_d(x) -u-cv)\Big |_{(x,t)=(x^*, t^*_j)}\nonumber \\&>\,&d \int _{\Omega } k(x,y) \theta _* u_d(y) dy+ 2 \ell _1 \nonumber \\&+\, u(m(x)- d a_d(x) -u-c(1-\theta _* +\delta ) v_D) \Big |_{(x,t)=(x^*, t^*_j)}\nonumber \\&>\,&d \int _{\Omega } k(x,y) \theta _* u_d(y) dy+ 2 \ell _1 +\theta _* u_d \left( m- d a_d - \theta _* u_d -c(1-\theta _* )v_D \right) -\ell _1\nonumber \\&=\,&\ell _1>0, \end{aligned}$$
(4.29)

which is a contradiction.

Thus we have proved that there exist \(\ell _1>0\), \(j_1\ge 1\), \(\epsilon _1>0\) and \(\delta _1>0\) such that

$$\begin{aligned} u(x, t_j+\epsilon _1) > (\theta _* +\delta ) u_d(x)\ \ \text {in} \ \bar{\Omega }, \end{aligned}$$

provided that

$$\begin{aligned} 0<\delta < \min \left\{ \delta _1, {\ell _1\epsilon _1\over 2 \max _{\bar{\Omega }} u_d} \right\} ,\ j\ge j_1,\ t_j\ge t_{\delta }. \end{aligned}$$

Similarly, there exist \(\ell _2>0\), \(j_2\ge 1\), \(\epsilon _2>0\) and \(\delta _2>0\) such that

$$\begin{aligned} v(x,t_j+\epsilon _2) < (1-\theta _* - \delta ) v_D(x)\ \ \text {in} \ \bar{\Omega }, \end{aligned}$$

provided that

$$\begin{aligned} 0<\delta < \min \left\{ \delta _2, {\ell _2\epsilon _2\over 2 \max _{\bar{\Omega }} v_D} \right\} ,\ j\ge j_2,\ t_j\ge t_{\delta }. \end{aligned}$$

Again, \(\epsilon _2\) may be taken smaller if necessary.

In summary, choose \( \epsilon _0 = \min \{\epsilon _1, \epsilon _2 \}\) and fix

$$\begin{aligned} 0< \delta _0 <\min \left\{ \delta _1, {\ell _1 \epsilon _0 \over 2 \max _{\bar{\Omega }} u_d}, \delta _2, {\ell _2 \epsilon _0\over 2 \max _{ \bar{\Omega }} v_D} \right\} \end{aligned}$$

and choose \( j_0\) large enough that \(t_j\ge t_{ \delta _0}\) for \(j\ge j_0\). Then for \(j\ge j_0\),

$$\begin{aligned} {u(x,t_j+ \epsilon _0 ) > (\theta _* + \delta _0) u_d(x),\ v(x,t_j+ \epsilon _0 ) < (1-\theta _* - \delta _0 ) v_D(x)\ \ \text {in} \ \bar{\Omega }}, \end{aligned}$$

i.e., (4.24) is proved. This contradicts the definition of \(\theta _*\).

Therefore, \(\theta _* = \alpha _1\) and it follows that \((u(\cdot , t), v(\cdot , t))\) converges to \(( \alpha _1 u_d, (1- \alpha _1 ) v_D)\) in \(L^2(\Omega ) \times L^2(\Omega )\) as \(t\rightarrow \infty \).
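The convergence established in Steps 1 and 2 can be illustrated numerically. Under the degenerate choice \(b=c=1\), \(M=m\), \(D=d\) and \(\mathcal P = \mathcal K\) (so that \(v_D = u_d\) and \(bu_d = v_D\) hold automatically), a discretized solution should approach a point \((\alpha _1 u_d, (1-\alpha _1) v_D)\) on the segment of steady states. All discretization choices below (kernel, grid, growth rate, time step) are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Illustrative degenerate case on Omega = [0,1]: kernel, m, d and the choice
# b = c = 1, M = m, D = d (so v_D = u_d and b u_d = v_D) are assumptions.
N = 64
x = (np.arange(N) + 0.5) / N
h = 1.0 / N
K = np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2)   # symmetric kernel k(x,y)
a = K.sum(axis=1) * h                                  # a_d(x) = ∫ k(x,y) dy
m = 1.0 + 0.3 * np.cos(2 * np.pi * x)
d, dt = 0.5, 0.01

def Kop(u):
    # d * K[u] = d * (∫ k(x,y) u(y) dy - a_d(x) u(x))
    return d * (K @ u * h - a * u)

# Single-species steady state u_d (= v_D here), by forward Euler.
u_d = np.full(N, 0.5)
for _ in range(60000):
    u_d += dt * (Kop(u_d) + u_d * (m - u_d))

# Degenerate system: u_t = dK[u] + u(m-u-v), v_t = dK[v] + v(m-u-v).
u = 0.4 * u_d * (1 + 0.2 * np.sin(2 * np.pi * x))
v = 0.3 * u_d
for _ in range(100000):
    fu = Kop(u) + u * (m - u - v)
    fv = Kop(v) + v * (m - u - v)
    u, v = u + dt * fu, v + dt * fv

# Expected picture: u + cv -> u_d, and u/u_d tends to a constant alpha_1 in (0,1).
ratio = u / u_d
print(np.max(np.abs(u + v - u_d)), ratio.mean(), ratio.std())
```

Note that the sum \(u+v\) obeys exactly the single-species dynamics here, which is why it locks onto \(u_d\), while the ratio \(u/u_d\) flattens out to the constant \(\alpha _1\) selected by the initial data.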

Step 3. We will improve the \(L^2(\Omega )\times L^2(\Omega )-\)convergence to \(\mathbb {X}\times \mathbb {X}-\)convergence in this step. Define

$$\begin{aligned} \eta (t) = \inf \{ \eta \ | \ u(x,t)<\eta u_d(x), v(x,t) > (1-\eta ) v_D(x) \ \text {in}\ \bar{\Omega }\}. \end{aligned}$$

Obviously, \(\eta (t)\) is decreasing in t by the comparison principle. Denote \( \eta ^* = \lim _{t\rightarrow \infty } \eta (t). \)

Notice that \(\theta _* =\alpha _1<1\) immediately yields that \(v(x,t)\) does not weakly converge to zero in \(L^2(\Omega )\). Due to Proposition 4.1(ii), there exists \(T_2>0\) such that \(u(x,t)<u_d(x)\) in \(\bar{\Omega }\) for \(t\ge T_2\). Hence, for Case I, without loss of generality, assume that

$$\begin{aligned} 0<u_0<u_d,\ 0<v_0<v_D\ \ \text {in}\ \bar{\Omega }. \end{aligned}$$

This indicates that \(0<\theta (0), \eta (0)<1\).

According to the definitions of \(\theta _*\), \(\eta ^*\) and \(\alpha _1\), it is obvious that

$$\begin{aligned} 0<\theta (0) \le \theta _* \le \alpha _1 \le \eta ^* \le \eta (0)<1. \end{aligned}$$

By Steps 1 and 2, \(\theta (0)>0\) and \(\theta _* <1\) yield that \(\theta _* = \alpha _1\). Similarly, it can be proved that \(\eta (0)<1\) and \(\eta ^* >0\) imply that \(\eta ^* = \alpha _1\). Hence \(\theta _* = \eta ^* = \alpha _1\in (0,1)\) and thus \((u(\cdot , t), v(\cdot , t))\) converges to \(( \alpha _1 u_d, (1- \alpha _1 ) v_D)\) in \(\mathbb {X} \times \mathbb {X}\) as \(t\rightarrow \infty \). Therefore, the proof of Case I is complete. Obviously, Case II can be proved in the same way.

Finally, let us handle Case III, in which both \(u(x,t)\) and \(v(x,t)\) weakly converge to zero in \(L^2(\Omega )\). Indeed, we will show that Case III cannot happen. We prepare the following proposition first.

Proposition 4.3

Assume that (C1)–(C3) hold.

  1. (i)

If \(u(\cdot ,t)\) weakly converges to zero in \(L^2(\Omega )\) as \(t\rightarrow \infty \), then there exists \(T_3>0\) such that \(u(x,t)<u_d(x)\) in \(\bar{\Omega }\) for \(t\ge T_3\).

  2. (ii)

If \(v(\cdot ,t)\) weakly converges to zero in \(L^2(\Omega )\) as \(t\rightarrow \infty \), then there exists \(T_4>0\) such that \(v(x,t)<v_D(x)\) in \(\bar{\Omega }\) for \(t\ge T_4\).

Proof

(i) Choose

$$\begin{aligned} 0<\ell \le {1\over 2} \min \left\{ \min _{ \bar{\Omega }} \frac{d \int _{\Omega } k(x,y) u_d(y) dy}{ u_d(x)},\ \min _{\bar{\Omega }} u_d \right\} . \end{aligned}$$

It follows from the equations satisfied by u and \(u_d\) that

$$\begin{aligned} u_t(x,t)= & {} d\int _{\Omega } k(x,y) u(y,t) dy +u(m(x)- d a_d(x) -u-cv)\nonumber \\= & {} d\int _{\Omega } k(x,y) u(y,t) dy + u\left( u_d - \frac{d \int _{\Omega } k(x,y) u_d(y) dy}{ u_d(x)} -u-cv \right) \nonumber \\\le & {} d\int _{\Omega } k(x,y) u(y,t) dy + u(u_d - 2\ell -u). \end{aligned}$$
(4.30)

Note that if \(u(\cdot ,t)\) weakly converges to zero in \(L^2(\Omega )\) as \(t\rightarrow \infty \), then it is routine to check that, as \(t\rightarrow \infty \),

$$\begin{aligned} d\int _{\Omega } k(\cdot ,y) u(y,t) dy \rightarrow 0\ \ \text {in} \ L^{\infty }( \Omega ). \end{aligned}$$

Denote \(\epsilon = {\ell \over 4} \min _{\bar{\Omega }} u_d\). There exists \(T_0>0\) such that for \(t\ge T_0\),

$$\begin{aligned} d\int _{\Omega } k(x ,y) u(y,t) dy \le \epsilon \ \ \text {in} \ \bar{\Omega }. \end{aligned}$$
(4.31)

Denote \(C_1 = \Vert u(x,t)\Vert _{L^{\infty }(\bar{\Omega }\times [0,\infty ))}<\infty \). We claim that \(u(x,t) < u_d(x)- \ell \) in \(\bar{\Omega }\) for \(t\ge T_0+ C_1/\epsilon \).

Suppose that the claim is not true, i.e., there exist \(\hat{x}\in \bar{\Omega }\) and \(\hat{t}\ge T_0+ C_1/\epsilon \) such that \(u(\hat{x},\hat{t}) \ge u_d(\hat{x})- \ell \).

First, fix \(x\in \bar{\Omega }\); we show that if \(u(x,t_0) < u_d(x)- \ell \) for some \(t_0\ge T_0\), then \(u(x,t) < u_d(x)- \ell \) for all \(t\ge t_0\). Otherwise, if there exists \(t_1>t_0\) such that \(u(x,t_1) = u_d(x)- \ell \) and \(u(x,t) < u_d(x)- \ell \) for \(t_0<t<t_1\), then by (4.30) and (4.31),

$$\begin{aligned} 0\le u_t(x,t_1)\le & {} d\int _{\Omega } k(x,y) u(y,t_1) dy + u(x,t_1)(u_d(x) - 2\ell -u(x,t_1)) \\&\le \,&\epsilon -\ell (u_d(x)- \ell ) \le {\ell \over 4} \min _{\bar{\Omega }} u_d- {\ell \over 2} \min _{\bar{\Omega }} u_d = -\epsilon <0, \end{aligned}$$

which is impossible.

Now one sees that for any \(t\in [T_0, T_0+ C_1/\epsilon ] \), \(u(\hat{x}, t) \ge u_d(\hat{x})- \ell \). Then by (4.30) and (4.31), when \(t\in [T_0, T_0+ C_1/\epsilon ] \),

$$\begin{aligned} u_t(\hat{x},t)\le & {} d\int _{\Omega } k(\hat{x},y) u(y,t) dy + u(\hat{x},t)(u_d(\hat{x}) - 2\ell -u(\hat{x}, t)) \\&\le \,&{\ell \over 4} \min _{\bar{\Omega }} u_d -\ell u(\hat{x},t)\le {\ell \over 4} \min _{\bar{\Omega }} u_d -\ell (u_d(\hat{x})- \ell ) \le -\epsilon . \end{aligned}$$

This gives that

$$\begin{aligned} u(\hat{x}, T_0+ C_1/\epsilon ) \le u(\hat{x}, T_0 ) -\epsilon C_1/\epsilon \le 0, \end{aligned}$$

which contradicts the positivity of u. The claim is proved and (i) follows.

Obviously, (ii) can be proved similarly.\(\square \)

Thanks to Proposition 4.3, without loss of generality, assume that

$$\begin{aligned} 0<u_0<u_d,\ 0<v_0<v_D\ \ \text {in}\ \bar{\Omega }. \end{aligned}$$

Then Steps 1, 2 and 3 in the proof of Case I can be repeated. Thus the solution (u, v) of (1.1) approaches a steady state in \(\{(su_d, (1-s)v_D),\ 0\le s\le 1\}\) in \(\mathbb {X} \times \mathbb {X}\). This is impossible, since both \(u(\cdot ,t)\) and \(v(\cdot ,t)\) weakly converge to zero in \(L^2(\Omega )\) as \(t\rightarrow \infty \). Therefore, Case III cannot happen.

5 Models with mixed dispersal strategies

This section is devoted to the proof of Theorem 1.3, which is about the system (1.7). The general approaches in handling Theorem 1.3(i)–(iii) are similar to those for Theorem 1.1, so we only emphasize the places which are different. If both equations in (1.7) have local dispersals, then the solution orbits are precompact and thus Theorem 1.3(iv) has been established in [21]. However, if local dispersal is only incorporated into one equation in (1.7), additional techniques and adjustments are needed on the basis of the proof of Theorem 1.2.

First of all, consider the linearized operators of (1.7) at \((\hat{u}_d, 0)\) and \((0, \hat{v}_D)\). If \(\beta =1\), \(\mu _{(\hat{u}_d,0)}\) is defined in the same way as in (2.6). If \(0\le \beta <1\), \(\mu _{(\hat{u}_d,0)}\) denotes the principal eigenvalue of the eigenvalue problem

$$\begin{aligned} {\left\{ \begin{array}{ll} D \left\{ \beta \mathcal {P}[\psi ] +(1-\beta ) \Delta \psi \right\} +[M(x)-b_2\hat{u}_d]\psi =\mu \psi &{} \text {in}\ \bar{\Omega },\\ \partial \psi /\partial \gamma =0 &{} \text {on}\ \partial \Omega . \end{array}\right. } \end{aligned}$$

The definition for \(\nu _{(0, \hat{v}_D)}\) is similar. Propositions 3.1 and 3.2 still hold for the system (1.7).
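For \(0\le \beta <1\), the principal eigenvalue \(\mu _{(\hat{u}_d,0)}\) can be approximated by discretizing the mixed operator: a kernel matrix for the nonlocal part plus a finite-difference Laplacian whose rows sum to zero, which encodes the homogeneous Neumann condition. The sketch below is an illustrative one-dimensional setup; the Gaussian kernel, the form \(\mathcal P[\psi ]=\int _{\Omega } k\psi \,dy - a\psi \), the constants \(D,\beta \) and the potential \(q\) (a stand-in for \(M(x)-b_2\hat{u}_d\)) are all assumptions. Since the kernel is symmetric, the matrix is symmetric, and the principal eigenvalue is its largest one, carried by a positive eigenvector.

```python
import numpy as np

# Illustrative 1-D discretization of D(beta*P + (1-beta)*Delta) + q(x) with
# homogeneous Neumann conditions; kernel, D, beta and q are assumptions.
N = 100
x = (np.arange(N) + 0.5) / N
h = 1.0 / N

Kmat = np.exp(-((x[:, None] - x[None, :]) / 0.15) ** 2)
P = Kmat * h - np.diag(Kmat.sum(axis=1) * h)          # P[psi] = ∫k psi dy - a psi

L = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L -= np.diag(L.sum(axis=1))                            # Neumann: rows sum to zero
L /= h ** 2

D, beta = 0.8, 0.4
q = 0.5 + 0.4 * np.sin(2 * np.pi * x)                  # stand-in for M - b2*u_d-hat
A = D * (beta * P + (1 - beta) * L) + np.diag(q)

mu = np.linalg.eigvalsh(A)                             # symmetric: real spectrum
print("principal eigenvalue:", mu[-1])
```

Two quick sanity checks on the discretization: since \(P\) and \(L\) are negative semidefinite and annihilate constants, the principal eigenvalue lies between the mean and the maximum of \(q\), and adding a constant to \(q\) shifts it by exactly that constant.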

Proposition 5.1

Assume that (C1)–(C4) hold and (1.8) is valid. Then there are exactly four alternatives, as follows.

  1. (i)

    \(\mu _{(\hat{u}_d,0)}>0\), \(\nu _{(0, \hat{v}_D)}>0\);

  2. (ii)

    \(\mu _{(\hat{u}_d,0)}>0\), \(\nu _{(0, \hat{v}_D)}\le 0\);

  3. (iii)

    \(\mu _{(\hat{u}_d,0)}\le 0\), \(\nu _{(0, \hat{v}_D)}>0\);

  4. (iv)

    \(\mu _{(\hat{u}_d,0)}= \nu _{(0, \hat{v}_D)}=0\).

Moreover, (iv) holds if and only if \(b_1(x), c_1(x), b_2(x), c_2(x)\) are constants, \(b_2c_1=b_1 c_2\) and \(b_2\hat{u}_d= c_2 \hat{v}_D\).

The proof of Proposition 5.1 is the same as that of Proposition 3.1 and thus we omit the details.

Proposition 5.2

Assume that (C1)–(C4) hold and (1.8) is valid. Then the system (1.7) admits two strictly ordered continuous positive steady states (u, v) and \((u^*,v^*)\) (that is, without loss of generality, \(u>u^*\), \(v<v^*\)) if and only if \(b_2c_1=b_1 c_2\), \(b_2 \hat{u}_d=c_2 \hat{v}_D\). Moreover, all the positive steady states of (1.7) consist of \((s\hat{u}_d, (1-s)\hat{v}_D)\), \(0<s<1\).

Proof

Set \(w = u- u^*>0\) and \(z= v-v^*<0\). Following the proof of Proposition 3.2, we only explain how to obtain the following two important inequalities:

$$\begin{aligned} \int _{\Omega }\left( b_1(x)w+c_1(x)z\right) w^2 dx\le 0,\ \ \ \int _{\Omega }\left( b_2(x)w+c_2(x)z\right) z^2 dx\ge 0. \end{aligned}$$
(5.1)

For this purpose, first, similar to (3.5), it is routine to check that

$$\begin{aligned}&d\int _{\Omega }\left( -u \left\{ \alpha \mathcal {K}[ u^*]+(1-\alpha ) \Delta u^* \right\} + u^* \left\{ \alpha \mathcal {K}[u]+(1-\alpha ) \Delta u \right\} \right) \frac{w^2}{u u^*} dx \nonumber \\&\quad = \int _{\Omega }(b_1(x)w+c_1(x)z)w^2 dx. \end{aligned}$$
(5.2)

Then due to (C3), the left hand side of (5.2) is calculated as follows

$$\begin{aligned}&d\int _{\Omega }\left( -u \left\{ \alpha \mathcal {K}[ u^*]+(1-\alpha ) \Delta u^* \right\} + u^* \left\{ \alpha \mathcal {K}[u]+(1-\alpha ) \Delta u \right\} \right) \frac{w^2}{u u^*} dx \\&\quad =\, d\alpha \int _{\Omega }\int _{\Omega } k(x,y)\left[ u^*(x)u(y)-u(x)u^*(y) \right] \frac{(u(x)- u^*(x))^2}{u(x) u^*(x) } dy dx\\&\qquad +\, d(1-\alpha ) \int _{\Omega } \left( -u \Delta u^*+ u^* \Delta u\right) \frac{(u(x)- u^*(x))^2}{u(x) u^*(x) } dx\\&\quad =\, {d\alpha \over 2}\int _{\Omega }\int _{\Omega } k(x,y)\left[ u^*(x)u(y)-u(x) u^*(y) \right] ^2\left( \frac{1}{u(x) u(y) } -\frac{1}{ u^*(x) u^*(y) } \right) dy dx\\&\qquad +\, d(1-\alpha ) \int _{\Omega } | u(x) \nabla u^*(x) - u^*(x) \nabla u(x)|^2\left( \frac{1}{u^2(x) } -\frac{1}{ (u^*)^2(x) } \right) dx\\&\quad \le \, 0. \end{aligned}$$

Thus

$$\begin{aligned} \int _{\Omega }\left( b_1(x)w+c_1(x)z\right) w^2 dx\le 0, \end{aligned}$$

while the other inequality in (5.1) can be handled similarly.

Obviously, since \(w>0\) and \(z<0\), (5.1) implies that

$$\begin{aligned} \int _{\Omega }\left( \left[ \min _{\bar{\Omega }}b_1\right] w+\left[ \max _{\bar{\Omega }}c_1\right] z\right) w^2 dx\le 0,\ \ \ \int _{\Omega }\left( \left[ \max _{\bar{\Omega }}b_2\right] w+\left[ \min _{\bar{\Omega }}c_2\right] z\right) z^2 dx\ge 0. \end{aligned}$$
(5.3)

Now the arguments after (3.9) can be applied to show that \(b_1(x), c_1(x), b_2(x), c_2(x)\) must be constants, \(b_2c_1=b_1c_2\) and \(b_2\hat{u}_d = c_2\hat{v}_D\).\(\square \)
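The sign of the nonlocal term above rests on symmetrizing in x and y (using \(k(x,y)=k(y,x)\)) together with a pointwise algebraic identity: for positive numbers \(A=u(x)\), \(B=u(y)\), \(A^*=u^*(x)\), \(B^*=u^*(y)\),

$$\begin{aligned} (A^*B-AB^*)\left( \frac{(A-A^*)^2}{AA^*}-\frac{(B-B^*)^2}{BB^*}\right) =(A^*B-AB^*)^2\left( \frac{1}{AB}-\frac{1}{A^*B^*}\right) , \end{aligned}$$

and the right-hand side is nonpositive whenever \(A>A^*\) and \(B>B^*\). The following sketch (random positive samples; purely illustrative) confirms both facts numerically.

```python
import numpy as np

# Random positive samples standing in for u(x), u(y), u*(x), u*(y).
rng = np.random.default_rng(0)
A, B, As, Bs = rng.uniform(0.5, 2.0, size=(4, 10000))

lhs = (As * B - A * Bs) * ((A - As) ** 2 / (A * As) - (B - Bs) ** 2 / (B * Bs))
rhs = (As * B - A * Bs) ** 2 * (1.0 / (A * B) - 1.0 / (As * Bs))
print(np.max(np.abs(lhs - rhs)))       # the identity holds up to rounding error

# If u > u* pointwise, then 1/(u(x)u(y)) < 1/(u*(x)u*(y)), so the form is <= 0.
A2, B2 = As + 0.5, Bs + 0.5
rhs2 = (As * B2 - A2 * Bs) ** 2 * (1.0 / (A2 * B2) - 1.0 / (As * Bs))
print(np.max(rhs2))                    # nonpositive
```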

Now we are ready to continue the proof of Theorem 1.3. First, Theorem 1.3(i)–(iii) can be handled by the same approach employed in the proof of Theorem 1.1. Secondly, according to Proposition 5.1(iv), when both \((\hat{u}_d, 0)\) and \((0,\hat{v}_D)\) are locally stable or neutrally stable, \(b_1(x), c_1(x), b_2(x), c_2(x)\) must be constants, \(b_2c_1=b_1c_2\), \(b_2\hat{u}_d = c_2\hat{v}_D\) and the system (1.7) has a continuum of steady states \(\{(s\hat{u}_d, (1-s)\hat{v}_D),\ 0\le s\le 1\}\). It remains to verify the global convergence of solutions of (1.7).

For clarity, we divide it into three cases.

Case 1 \(\alpha = \beta =1\). This corresponds to the system (1.1) and has been proved in Theorem 1.2 already.

Case 2 \(0\le \alpha ,\beta <1\), i.e., local dispersals are incorporated into both equations of the system (1.7). Then the solution orbit \(\{ (u(\cdot , t), v(\cdot , t)) \ |\ t\ge 0 \}\) is precompact in \(L^{\infty }(\Omega ) \times L^{\infty }(\Omega )\). Moreover, it is standard to verify that (0, 0) is locally unstable due to the existence of \(\hat{u}_d\) and \(\hat{v}_D\). Therefore, the conclusion follows from the arguments in the proof of [21, Theorem 3].

Case 3 \(\alpha =1\), \(0\le \beta <1\) or \(0\le \alpha <1\), \(\beta =1\), i.e. local dispersal is only incorporated into one equation of the system (1.7). We only prove the case \(\alpha =1\), \(0\le \beta <1\), since the other one can be handled in the same way.

The rest of this section is devoted to the proof of Case 3. We will mainly follow the scheme of the proof of Theorem 1.2. However, the introduction of local dispersal to only one equation causes extra obstacles and some new ideas are needed to overcome these difficulties. For clarity, we focus on the following system

$$\begin{aligned} {\left\{ \begin{array}{ll} u_t= d \mathcal {K} [u] +u(m(x)-b_1 u- c_1 v) &{}\text {in } \Omega \times [0,\infty ),\\ v_t= D \mathcal {P}_{\beta }[v] +v(M(x)-b_2 u- c_2 v) &{}\text {in } \Omega \times [0,\infty ),\\ \partial v/\partial \gamma =0 &{}\text {on } \partial \Omega ,\\ u(x,0)=u_0(x),~v(x,0)=v_0(x) &{}\text {in } \Omega , \end{array}\right. } \end{aligned}$$
(5.4)

where \(\mathcal {P}_{\beta }[v]= \beta \mathcal {P} [v]+(1-\beta ) \Delta v \). Also, let \((u(x,t), v(x,t))\) denote the corresponding solution.

First of all, assume that \(u(x,t)\) does not weakly converge to zero in \(L^2(\Omega )\) and prepare the following proposition for system (5.4), which is parallel to Proposition 4.1(i). However, the proof has to be modified, since v now satisfies an equation with local dispersal. To be more specific, the inequalities (4.11) and (4.12) do not hold when local dispersal is incorporated.

Proposition 5.3

Assume that (C1)–(C3) hold. If \(u(x,t)\) does not weakly converge to zero in \(L^2(\Omega )\), then there exists \(T_1>0\) such that \(v(x,t)<\hat{v}_D(x)\) in \(\bar{\Omega }\) for \(t\ge T_1\).

Proof

Since \(u(x,t)\) does not weakly converge to zero in \(L^2(\Omega )\) as \(t \rightarrow \infty \) and u satisfies an equation with nonlocal dispersal only, the arguments in deriving (4.8) can be applied word for word to show that there exist constants \(B_1>0\), \(\varepsilon _1>0\) and a sequence \(\{ \tau _j \}_{j\ge 1}\) with \(\tau _j \rightarrow \infty \) as \(j\rightarrow \infty \) such that

$$\begin{aligned} u(x,t) \ge B_1\ \ \text {for}\ x\in \bar{\Omega },\ t\in \left[ \tau _j+{1\over 2}+ { \varepsilon _1\over 2}, \tau _j+{1\over 2} + \varepsilon _1\right] , \ j\ge 1. \end{aligned}$$
(5.5)

Moreover, the comparison principle implies that there exists a sequence \(\{ h_j \}_{j\ge 1}\) with \(h_j>0\) and \(\lim _{j\rightarrow \infty } h_j =0\) such that

$$\begin{aligned} v(x,t) \le (1+ h_j) \hat{v}_D(x) \ \ \text {in} \ \bar{\Omega }\ \ \text {for} \ t\ge \tau _j. \end{aligned}$$
(5.6)

Define

$$\begin{aligned} V_1(x,t) = \left( 1- \sigma \left( t-\tau _j - {1\over 2} - { \varepsilon _1\over 2}\right) \right) (1+ h_j) \hat{v}_D(x), \end{aligned}$$

where \(\sigma >0\) is to be determined later.

For \(x\in \bar{\Omega },\ t\in [\tau _j+{1\over 2}+ { \varepsilon _1\over 2}, \tau _j+{1\over 2} + \varepsilon _1], \ j\ge 1\), direct computation gives that

$$\begin{aligned}&D \mathcal {P}_{\beta }[V_1] +V_1(M(x)-b_2 u- c_2 V_1)-\frac{\partial V_1}{\partial t}\\&\quad \le \left( 1- \sigma \left( t-\tau _j - {1\over 2} - { \varepsilon _1\over 2}\right) \right) (1+ h_j)\left( D \mathcal {P}_{\beta }[\hat{v}_D] +\hat{v}_D(M(x)-b_2 B_1- c_2 V_1) \right) \\&\qquad + \sigma (1+ h_j) \hat{v}_D \\&\quad \le \left( 1- O( \sigma \varepsilon _1) \right) (1+ h_j)\hat{v}_D \left( -b_2 B_1 + O( h_j) + O( \sigma \varepsilon _1) + O(\sigma h_j \varepsilon _1)\right) + \sigma (1+ h_j) \hat{v}_D \\&\quad <0 \end{aligned}$$

if \(\sigma \) is chosen to be small enough and j is large enough. Note that \(\sigma \) is fixed now and we still have the freedom for the choice of j. Moreover, it is obvious that (5.6) implies that

$$\begin{aligned} v\left( x, \tau _j+{1\over 2}+ { \varepsilon _1\over 2}\right) \le V_1\left( x, \tau _j+{1\over 2}+ { \varepsilon _1\over 2}\right) \ \ \text {in}\ \bar{\Omega }. \end{aligned}$$

Then, thanks to the comparison principle, it follows that

$$\begin{aligned} v\left( x, \tau _j+{1\over 2}+ \varepsilon _1 \right) \le V_1\left( x, \tau _j+{1\over 2}+ \varepsilon _1 \right) \ \ \text {in}\ \bar{\Omega }. \end{aligned}$$

Furthermore, it is routine to check that

$$\begin{aligned}&V_1\left( x, \tau _j+{1\over 2}+ \varepsilon _1 \right) = \left( 1- \sigma {\varepsilon _1\over 2} \right) (1+ h_j) \hat{v}_D(x)\\&\quad \le \, \left( 1- \sigma {\varepsilon _1\over 2} + h_j -\sigma {\varepsilon _1\over 2} h_j \right) \hat{v}_D(x) \le \left( 1- {1\over 2} \sigma {\varepsilon _1\over 2} \right) \hat{v}_D(x)\ \ \text {in} \ \bar{\Omega }\end{aligned}$$

for j sufficiently large.

The proof is complete.\(\square \)

Now, thanks to Proposition 5.3, without loss of generality, we may assume that \(u_0>0\), \(0<v_0< \hat{v}_D\) in \(\bar{\Omega }\) and define

$$\begin{aligned} \hat{\theta }(t) = \sup \{ \theta \ | \ u(x,t)>\theta \hat{u}_d(x), v(x,t)<(1-\theta ) \hat{v}_D(x) \ \text {in}\ \bar{\Omega }\}. \end{aligned}$$

Moreover, \(0<\hat{\theta }(0) <1\) and \(\hat{\theta }(t)\) is increasing in t by the comparison principle. Denote

$$\begin{aligned} 0< \hat{\theta }_* = \lim _{t\rightarrow \infty } \hat{\theta }(t) \le 1. \end{aligned}$$

As explained before Step 1 in Sect. 4, when \(\hat{\theta }_* = 1\), one has

$$\begin{aligned} \lim _{t\rightarrow \infty }(u(\cdot , t), v(\cdot , t)) =( \hat{u}_d, 0)\ \ \text {in}\ \mathbb {X} \times \mathbb {X}. \end{aligned}$$

Let us restrict to the case that \(0< \hat{\theta }_* < 1\). To make the arguments transparent, we discuss it step by step.

Step 1’. In view of how (5.1) is verified, and arguing similarly to Step 1 in Sect. 4, we obtain that there exist a sequence \(\{\tau _j\}_{j\ge 1} \) with \(\tau _j \rightarrow \infty \) as \(j\rightarrow \infty \) and \(\hat{\alpha }_1\in [0,1]\) such that

$$\begin{aligned} \lim _{j\rightarrow \infty }(u(\cdot , \tau _j), v(\cdot , \tau _j)) =(\hat{\alpha }_1 \hat{u}_d, (1-\hat{\alpha }_1 ) \hat{v}_D)\ \ \text {in}\ L^2(\Omega ) \times L^2(\Omega ). \end{aligned}$$

Step 2’. Similar to Step 2 in Sect. 4, to prove that \((u(\cdot , t), v(\cdot , t))\) converges in \(L^2(\Omega ) \times L^2(\Omega )\), one needs to show that \(\hat{\theta }_* =\hat{\alpha }_1\). Obviously, \(\hat{\theta }_* \le \hat{\alpha }_1\). Now suppose that \(\hat{\theta }_* < \hat{\alpha }_1\); a contradiction will be derived.

According to the definition of \(\hat{\theta }_*\), for any \(\delta >0\), there exists \(\tau _{\delta }>0\) such that for \(t \ge \tau _{\delta }\),

$$\begin{aligned} u(x,t) > (\hat{\theta }_* -\delta ) \hat{u}_d(x),\ v(x,t) < (1-\hat{\theta }_* +\delta )\hat{v}_D(x)\ \ \text { in} \ \bar{\Omega }. \end{aligned}$$
(5.7)

We claim that there exist \(\hat{\varepsilon } >0\), \(\hat{\delta }>0\) and \(\hat{j}\ge 1\) such that for \(j\ge \hat{j}\),

$$\begin{aligned} u(x,\tau _j+ \hat{\varepsilon } ) > (\hat{\theta }_* +\hat{\delta }) \hat{u}_d(x),\ v(x,\tau _j+ \hat{\varepsilon }) < (1-\hat{\theta }_* - \hat{\delta }) \hat{v}_D(x)\ \ \text {in} \ \bar{\Omega }. \end{aligned}$$
(5.8)

Obviously, for u(xt) in (5.4), the same arguments in Step 2 in Sect. 4 can be applied to show that there exist \(\ell _1>0\), \(j_1\ge 1\), \(\varepsilon _1>0\) and \(\delta _1>0\) such that

$$\begin{aligned} u(x, \tau _j+\varepsilon _1) > (\hat{\theta }_* +\delta ) \hat{u}_d(x)\ \ \text {in} \ \bar{\Omega }, \end{aligned}$$

provided that

$$\begin{aligned} 0<\delta < \min \left\{ \delta _1, {\ell _1\varepsilon _1\over 2 \max _{\bar{\Omega }}\hat{u}_d} \right\} ,\ j\ge j_1,\ \tau _j\ge \tau _{\delta }. \end{aligned}$$

Here fix \(\delta = \delta _2>0\) satisfying the above inequality.

However, the arguments for \(u(x,t)\) cannot be applied to handle \(v(x,t)\), since (4.25), (4.26), (4.28) and (4.29) are not valid when local dispersal is incorporated. The idea in the proof of Proposition 5.3 is borrowed here, and we include the details for the reader's convenience.

Notice that \(\Vert u_t(\cdot , t) \Vert _{L^{\infty }(\Omega )}\) has an upper bound independent of \(t\ge 0\); thus there exists \(\varepsilon _2>0\) such that for \(t\in [\tau _j+\varepsilon _1, \tau _j+\varepsilon _1+\varepsilon _2]\) and \(\tau _j\ge \tau _{\delta _2}\),

$$\begin{aligned} u(x, t) > \left( \hat{\theta }_* +{\delta _2 \over 2}\right) \hat{u}_d(x)\ \ \text {in} \ \bar{\Omega }. \end{aligned}$$

Define

$$\begin{aligned} V_2(x,t) = \left( 1- \sigma _1(t-\tau _j - \varepsilon _1 )\right) (1-\hat{\theta }_* +\delta ) \hat{v}_D(x), \end{aligned}$$

where \(\sigma _1\) and \(\delta \) are to be determined later. For \(x\in \bar{\Omega },\ t\in [\tau _j+\varepsilon _1, \tau _j+\varepsilon _1+\varepsilon _2], \ \tau _j\ge \tau _{\delta }\), direct computation gives that

$$\begin{aligned}&D \mathcal {P}_{\beta }[V_2] +V_2(M(x)-b_2 u- c_2 V_2)-\frac{\partial V_2}{\partial t}\\&\quad \le \, \left( 1- \sigma _1(t-\tau _j - \varepsilon _1 )\right) (1-\hat{\theta }_* +\delta )\left( D \mathcal {P}_{\beta }[\hat{v}_D] +\hat{v}_D\left( M(x)-b_2 \left( \hat{\theta }_* +{\delta _2\over 2}\right) \hat{u}_d- c_2 V_2\right) \right) \\&\qquad +\, \sigma _1 (1-\hat{\theta }_* +\delta ) \hat{v}_D \\&\quad \le \, \left( 1- O( \sigma _1 \varepsilon _2) \right) (1-\hat{\theta }_* +\delta )c_2\hat{v}_D \left( - {\delta _2\over 2} - \delta + \sigma _1(t-\tau _j - \varepsilon _1 )(1-\hat{\theta }_* +\delta )\right) \hat{v}_D \\&\qquad +\, \sigma _1 (1-\hat{\theta }_* +\delta ) \hat{v}_D \\&\quad <\,0 \end{aligned}$$

provided that \(\sigma _1>0\) is sufficiently small and fixed. Moreover, (5.7) indicates that for \(\tau _j\ge \tau _{\delta }\),

$$\begin{aligned} v(x, \tau _j+\varepsilon _1) \le V_2(x,\tau _j+\varepsilon _1)\ \ \text {in}\ \bar{\Omega }. \end{aligned}$$

Then, by the comparison principle, we have

$$\begin{aligned} v(x, \tau _j+\varepsilon _1+ \varepsilon _2)\le & {} V_2(x,\tau _j+\varepsilon _1+ \varepsilon _2)\\&=\,&\left( 1- \hat{\theta }_* - \sigma _1 \varepsilon _2 (1-\hat{\theta }_* ) +\delta - \sigma _1 \varepsilon _2\delta \right) \hat{v}_D(x)\\&\le \,&\left( 1- \hat{\theta }_* - {1\over 2}\sigma _1 \varepsilon _2 (1-\hat{\theta }_* ) \right) \hat{v}_D(x) \end{aligned}$$

by choosing \(\delta \le {1\over 2}\sigma _1 \varepsilon _2 (1-\hat{\theta }_* )\).

In summary, set

$$\begin{aligned} \hat{\varepsilon } = \varepsilon _1 + \varepsilon _2,\ \hat{\delta }= \min \left\{ {\delta _2 \over 2},{1\over 2}\sigma _1 \varepsilon _2 (1-\hat{\theta }_* ) \right\} \end{aligned}$$

and choose \(\hat{j}\) such that \(\tau _j\ge \tau _{\hat{\delta }}\) for \(j\ge \hat{j}\). The claim is proved, which contradicts the definition of \(\hat{\theta }_*\). Therefore, \(\hat{\theta }_* =\hat{\alpha }_1\) and thus

$$\begin{aligned} \lim _{t\rightarrow \infty }(u(\cdot , t), v(\cdot , t)) =(\hat{\alpha }_1 \hat{u}_d, (1-\hat{\alpha }_1 ) \hat{v}_D)\ \ \text {in}\ L^2(\Omega ) \times L^2(\Omega ). \end{aligned}$$
(5.9)

Step 3’. Similar to Step 3 in Sect. 4, we improve the \(L^2(\Omega )\times L^2(\Omega )-\)convergence to \(\mathbb {X}\times \mathbb {X}-\)convergence here. Define

$$\begin{aligned} \hat{\eta }(t) = \inf \{ \eta \ | \ u(x,t)<\eta \hat{u}_d(x), v(x,t) > (1-\eta ) \hat{v}_D(x) \ \text {in}\ \bar{\Omega }\}. \end{aligned}$$

Denote \(\hat{\eta }^* = \lim _{t\rightarrow \infty } \hat{\eta }(t),\) where \(\hat{\eta }(t)\) is decreasing in t by the comparison principle.

Since \(\{ v(\cdot , t) \ |\ t\ge 0 \}\) is precompact in \( \mathbb {X}\), it follows immediately from (5.9) that

$$\begin{aligned} \lim _{t\rightarrow \infty } v(\cdot , t) = (1-\hat{\alpha }_1 ) \hat{v}_D \ \ \text {in}\ \mathbb {X} . \end{aligned}$$
(5.10)

Recall that \(\hat{\theta }_* =\hat{\alpha }_1<1\); hence \(v(x,t)\) has a positive lower bound when t is large. Then the arguments after (4.8) in the proof of Proposition 4.1 can be borrowed to show that \(u(x,t)<\hat{u}_d(x)\) in \(\bar{\Omega }\) for large time. Therefore, without loss of generality, we assume that

$$\begin{aligned} 0<u_0<\hat{u}_d,\ 0<v_0<\hat{v}_D\ \ \text {in}\ \bar{\Omega }. \end{aligned}$$

Then \(0<\hat{\theta }(0), \hat{\eta }(0)<1\).

According to the definitions of \(\hat{\theta }_*\), \(\hat{\eta }^*\) and \(\hat{\alpha }_1\), it is obvious that

$$\begin{aligned} 0<\hat{\theta }(0) \le \hat{\theta }_* \le \hat{\alpha }_1 \le \hat{\eta }^*\le \hat{\eta }(0)<1. \end{aligned}$$

To obtain the \(\mathbb {X}\times \mathbb {X}-\)convergence of (u, v), it suffices to show \(\hat{\theta }_* = \hat{\eta }^*\). Since \(\hat{\theta }_* =\hat{\alpha }_1\) has been proved in Step 2’, it only remains to demonstrate that \(\hat{\alpha }_1 = \hat{\eta }^*\), as follows.

Suppose that \(\hat{\alpha }_1 < \hat{\eta }^*\). Then (5.10) implies that there exists \(T_1>0\) such that for \(t\ge T_1\),

$$\begin{aligned} v(x,t) > \left( 1- \hat{\eta }^* + \frac{\hat{\eta }^* -\hat{\alpha }_1}{2}\right) \hat{v}_D(x)\ \ \text {in} \ \bar{\Omega }. \end{aligned}$$

Then for \(t\ge T_1\),

$$\begin{aligned} u_t&= d \mathcal {K} [u] +u(m(x)-b_1 u- c_1 v)\\ &\le d \mathcal {K} [u] +u \left( m(x)-b_1 u- c_1\left( 1- \hat{\eta }^* + \frac{\hat{\eta }^* -\hat{\alpha }_1}{2}\right) \hat{v}_D(x) \right) . \end{aligned}$$

Recall that \(b_1 \hat{u}_d =c_1 \hat{v}_D\). Then it is easy to check that \( \displaystyle \left( \hat{\eta }^*- \frac{\hat{\eta }^* -\hat{\alpha }_1}{2}\right) \hat{u}_d\) satisfies

$$\begin{aligned} d \mathcal {K} [u] +u \left( m(x)-b_1 u- c_1\left( 1- \hat{\eta }^* + \frac{\hat{\eta }^* -\hat{\alpha }_1}{2}\right) \hat{v}_D(x) \right) =0. \end{aligned}$$
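For completeness, the computation behind this claim can be sketched as follows, assuming, as in the earlier sections, that \(\hat{u}_d\) denotes the positive steady state of the single-species equation \(u_t = d\mathcal {K}[u] + u(m(x)-b_1 u)\). Write \(\lambda = \hat{\eta }^* - (\hat{\eta }^* -\hat{\alpha }_1)/2\), so that \(1-\lambda = 1- \hat{\eta }^* + (\hat{\eta }^* -\hat{\alpha }_1)/2\). The relation \(b_1 \hat{u}_d =c_1 \hat{v}_D\) then gives

$$\begin{aligned} m(x)-b_1 \lambda \hat{u}_d - c_1(1-\lambda )\hat{v}_D = m(x)-b_1 \lambda \hat{u}_d - b_1(1-\lambda )\hat{u}_d = m(x)- b_1 \hat{u}_d, \end{aligned}$$

and hence, by the linearity of \(\mathcal {K}\),

$$\begin{aligned} d\mathcal {K}[\lambda \hat{u}_d] + \lambda \hat{u}_d \big ( m(x)-b_1 \lambda \hat{u}_d - c_1(1-\lambda )\hat{v}_D \big ) = \lambda \big ( d\mathcal {K}[\hat{u}_d] + \hat{u}_d ( m(x)- b_1 \hat{u}_d ) \big ) =0. \end{aligned}$$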

Then it follows from Theorem 2.1 and the comparison principle that there exist \(T_2\ge T_1>0\) and \(0< \tilde{\delta }< (\hat{\eta }^* -\hat{\alpha }_1) /2 \) such that

$$\begin{aligned} u(x, T_2) < ( \hat{\eta }^* - \tilde{\delta }) \hat{u}_d(x)\ \ \text {in} \ \bar{\Omega }. \end{aligned}$$

The above two inequalities contradict the definition of \( \hat{\eta }^*\). Hence \(\hat{\alpha }_1 = \hat{\eta }^*\).

So far, we have proved the convergence of (u(x, t), v(x, t)) in \(\mathbb {X}\times \mathbb {X}\) when u(x, t) does not converge weakly to zero in \(L^2(\Omega )\).

At the end, assume that u(x, t) converges weakly to zero in \(L^2(\Omega )\) as \(t\rightarrow \infty \). It follows from Proposition 4.3 that \(u(x,t)<\hat{u}_d(x)\) in \(\bar{\Omega }\) for large time. Without loss of generality, assume that

$$\begin{aligned} 0<u_0 <\hat{u}_d,\ v_0 >0\ \ \text {in}\ \bar{\Omega }. \end{aligned}$$

Thus \(\hat{\eta }^* \le \hat{\eta }(0) <1\).

By arguments similar to those in Step 1 of Sect. 4, we obtain that there exist a subsequence \(\{s_j\}_{j\ge 1} \) with \(s_j \rightarrow \infty \) as \(j\rightarrow \infty \) and \(\hat{\alpha }_1\in [0,1]\) such that

$$\begin{aligned} \lim _{j\rightarrow \infty }(u(\cdot , s_j), v(\cdot , s_j)) =(\hat{\alpha }_1 \hat{u}_d, (1-\hat{\alpha }_1 ) \hat{v}_D)\ \ \text {in}\ L^2(\Omega ) \times L^2(\Omega ). \end{aligned}$$

This implies that \(\hat{\alpha }_1 =0\), since u(x, t) converges weakly to zero in \(L^2(\Omega )\) as \(t\rightarrow \infty \). Then it follows that

$$\begin{aligned} \lim _{t\rightarrow \infty }(u(\cdot , t), v(\cdot , t)) =(0 , \hat{v}_D)\ \ \text {in}\ L^2(\Omega ) \times L^2(\Omega ). \end{aligned}$$

Moreover, since \(\{ v(\cdot , t) \ |\ t\ge 0 \}\) is precompact in \( \mathbb {X}\), one has

$$\begin{aligned} \lim _{t\rightarrow \infty } v(\cdot , t) = \hat{v}_D \ \ \text {in}\ \mathbb {X} . \end{aligned}$$

Then it is standard to show that

$$\begin{aligned} \lim _{t\rightarrow \infty } u(\cdot , t) = 0 \ \ \text {in}\ \mathbb {X} . \end{aligned}$$

This completes the proof of Theorem 1.3.

6 Other types of nonlocal dispersal strategies

Theorems 1.1, 1.2 and 1.3 concern environments with no-flux boundary conditions. In this section, we briefly explain how to extend these results to nonlocal operators in hostile surroundings or in periodic environments.

  • Hostile surroundings For \(\phi \in \mathbb {X}\), the nonlocal operator in hostile surroundings is defined as follows:

    $$\begin{aligned} \mathbf{(D) } \ \displaystyle \mathcal {K}[\phi ] = \int _{\Omega }k(x,y)\phi (y)dy- \phi (x). \end{aligned}$$
  • Periodic environments First set \(\mathbb {X}_p = \{ \phi \in C(\mathbb {R}^n ) \ | \ \phi (x+ l_j e_j) = \phi (x) ,\ 1\le j\le n \},\) where \(l_j>0\), \(e_j= (e_{j1}, e_{j2}, \ldots , e_{jn})\) with \(e_{ji}=1\) if \(j=i\) and \(e_{ji} =0\) if \(j\ne i\). For \(k(x,y): \mathbb {R}^n \times \mathbb {R}^n \rightarrow \mathbb {R}_+\), assume that

    $$\begin{aligned} \mathbf{(C }_p\mathbf{) } \ k(x+ l_j e_j, y) = k(x,y- l_j e_j),\ 1\le j\le n . \end{aligned}$$

    Now, for \(\phi \in \mathbb {X}_p\) and k(x, y) satisfying (C1), (C2) and \(\mathbf{(C }_p\mathbf{) }\), the nonlocal operator in periodic environments is defined as follows:

    $$\begin{aligned} \mathbf{(P) } \ \displaystyle \mathcal {K}[\phi ] = \int _{\mathbb {R}^n}k(x,y)\phi (y)dy- \phi (x). \end{aligned}$$

    Denote \(\Omega _p = [0,l_1]\times [0,l_2]\times \cdots \times [0,l_n]\). Then

    $$\begin{aligned} \mathcal {K}[\phi ]&= \int _{\mathbb {R}^n}k(x,y)\phi (y)dy- \phi (x)\nonumber \\ &= \int _{\Omega _p} \sum _{m\in \mathbb {Z}^n} k\Big (x, y+ \sum _{j=1}^n m_j l_j e_j\Big ) \phi (y) dy - \phi (x). \end{aligned}$$
    (6.1)
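As a purely numerical illustration (not part of the analysis), the two operators (D) and (P) can be discretized on a one-dimensional grid. The Gaussian kernel, the cell length \(L=l_1=1\), and the truncation of the sum of translates below are assumptions made only for this sketch; the paper itself requires only (C1), (C2) and, for (P), the periodicity condition \(\mathbf{(C}_p\mathbf{)}\).

```python
import numpy as np

def kernel(x, y, s=0.2):
    # A symmetric dispersal kernel k(x, y) >= 0 (Gaussian, chosen for illustration).
    return np.exp(-((x - y) ** 2) / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

def nonlocal_hostile(phi, x):
    # (D): K[phi](x) = int_Omega k(x, y) phi(y) dy - phi(x),
    # approximated by a Riemann sum on a uniform grid over Omega = [0, 1).
    h = x[1] - x[0]
    K = kernel(x[:, None], x[None, :])
    return K @ phi * h - phi

def nonlocal_periodic(phi, x, L=1.0, m_max=5):
    # (P) via the unfolding (6.1): fold the kernel over the periodic
    # translates y + m*L of the cell [0, L), truncating |m| <= m_max.
    h = x[1] - x[0]
    Kfold = sum(kernel(x[:, None], x[None, :] + m * L)
                for m in range(-m_max, m_max + 1))
    return Kfold @ phi * h - phi
```

For a constant function the periodic operator nearly vanishes (the folded kernel integrates to one over the cell), while the hostile operator is strongly negative near the boundary, reflecting the loss of mass to the hostile surroundings.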

Recall that when studying the nonlocal operators defined in (N), we in fact work with the operators defined in (2.2) and (2.3). Therefore, it is easy to see that Theorems 1.1 and 1.2 still hold for system (1.1) with nonlocal operators in hostile surroundings or in periodic environments.

Finally, when local dispersals are incorporated, homogeneous Dirichlet boundary conditions should be imposed for hostile surroundings. The proof of this case is almost the same as that of Theorem 1.3; the only difference lies in the verification of (5.1), where the Hopf boundary lemma is needed for Dirichlet boundary conditions. Moreover, for periodic environments, it is natural to impose periodic boundary conditions when local dispersals are incorporated. Due to (6.1), the proof of this case follows that of Theorem 1.3 word for word.