1 Introduction

In this paper, we investigate the spreading behavior of two competing species described by the following free boundary problem in \(\mathbb {R}^N\) (\(N\ge 1\)) with spherical symmetry:

$$\begin{aligned} \left\{ \begin{array}{l} u_t=d\Delta u+ru(1-u-kv) \text{ for } 0<r<s_1(t),\ t>0,\\ v_t=\Delta v+v(1-v-hu) \text{ for } 0<r<s_2(t),\ t>0,\\ u_r(0,t)=v_r(0,t)=0 \text{ for } t>0,\\ u\equiv 0\;\text{ for } \text{ all }\ r\ge s_1(t)\ \text{ and }\ t>0,\; v\equiv 0\; \text{ for } \text{ all }\ r\ge s_2(t)\ \text{ and }\ t>0,\\ s_{1}'(t)=-\mu _1 u_r(s_1(t),t) \text{ for } t>0,\; s_{2}'(t)=-\mu _2 v_r(s_2(t),t) \text{ for } t>0,\\ s_1(0)=s_1^0,\ s_2(0)=s_2^0,\ u(r,0)=u_0(r),\ v(r,0)=v_0(r)\ \text{ for }\ r\in [0,\infty ), \end{array}\right. \end{aligned}$$
(P)

where u(r, t) and v(r, t) represent the population densities of the two competing species at spatial location \(r\, (=|x|)\) and time t; \(\Delta \varphi :=\varphi _{rr}+\frac{(N-1)}{r}\varphi _r\) is the usual Laplace operator acting on spherically symmetric functions. All the parameters are assumed to be positive, and without loss of generality, we have used a simplified version of the Lotka–Volterra competition model, which can be obtained from the general model by a standard change of variables (see, for example, [13]). The initial data \((u_0,v_0,s_1^0,s_2^0)\) satisfies

$$\begin{aligned} \left\{ \begin{array}{l} s_1^0>0,\ s_2^0>0,\ u_0\in C^2([0,s_1^0]),\ v_0\in C^2([0,s_2^0]),\ u_0'(0)=v_0'(0)=0,\\ u_0(r)>0\ \text{ for }\ r\in [0,s_1^0),\ u_0(r)=0\ \text{ for }\ r\ge s_1^0,\\ v_0(r)>0\ \text{ for }\ r\in [0,s_2^0),\ v_0(r)=0\ \text{ for }\ r\ge s_2^0. \end{array}\right. \end{aligned}$$
(1.1)
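For the reader's convenience, here is a sketch of one standard rescaling behind this reduction; the symbols \(a_i, b_i, c_i, d_i\) below belong to the general competition model and are not used elsewhere in the paper. Starting from

$$\begin{aligned} U_t=d_1\Delta U+U(a_1-b_1U-c_1V),\qquad V_t=d_2\Delta V+V(a_2-b_2V-c_2U), \end{aligned}$$

the substitutions \(\tilde{t}=a_2t\), \(\tilde{x}=\sqrt{a_2/d_2}\,x\), \(u=(b_1/a_1)U\), \(v=(b_2/a_2)V\) turn the reaction–diffusion part into that of (P) with

$$\begin{aligned} d=\frac{d_1}{d_2},\quad r=\frac{a_1}{a_2},\quad k=\frac{c_1a_2}{a_1b_2},\quad h=\frac{c_2a_1}{a_2b_1}, \end{aligned}$$

the free boundary conditions and initial data being rescaled accordingly.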

In this model, both species invade the environment through their own free boundaries: the species u has a spreading front at \(r=s_1(t)\), while v’s spreading front is at \(r=s_2(t)\). For the mathematical treatment, we have extended u(r, t) from its population range \(r\in [0, s_1(t)]\) to \(r>s_1(t)\) by 0, and extended v(r, t) from \(r\in [0, s_2(t)]\) to \(r>s_2(t)\) by 0.

The global existence and uniqueness of the solution to problem (P) under (1.1) can be established by the approach in [15] with suitable changes. In fact, the local existence and uniqueness proof covers a rather general class of such free boundary systems. The assumption in (P) that u and v have independent free boundaries causes some extra difficulties, but these can be handled by following the approach in [15] with suitable modifications and corrections. The details are given in the Appendix at the end of the paper.

We say \((u,v,s_1, s_2)\) is a (global classical) solution of (P) if

$$\begin{aligned} (u,v,s_1,s_2)\in C^{2,1}(D^1)\times C^{2,1}(D^2)\times C^1([0,+\infty ))\times C^1([0,+\infty )), \end{aligned}$$

where

$$\begin{aligned} D^1:=\{(r,t): r\in [0, s_1(t)], \; t>0\},\;\; D^2:=\{(r,t): r\in [0, s_2(t)], \; t>0\}, \end{aligned}$$

and all the equations in (P) are satisfied pointwise. By the Hopf boundary lemma, it is easily seen that \(s_i'(t)>0\) for \(i=1,2\) and \(t>0\). Hence

$$\begin{aligned} s_{i,\infty }:=\lim _{t\rightarrow \infty } s_i(t) \end{aligned}$$

is well-defined.

We are interested in the long-time behavior of (P). In order to gain a good understanding, we focus on some interesting special cases. Our first assumption is that

$$\begin{aligned} 0<k<1<h. \end{aligned}$$
(1.2)

It is well known that under this assumption, when the corresponding competition system is posed on a fixed bounded domain \(\Omega \) with no-flux boundary conditions, the unique solution \((\tilde{u}(x,t), \tilde{v}(x,t))\) converges to (1, 0) as \(t\rightarrow \infty \) uniformly for \(x\in \overline{\Omega }\). So in the long run, the species u drives v to extinction and wins the competition. For this reason, (1.2) is referred to as the case where u is the superior and v the inferior competitor; it is also known as a weak–strong competition case. A symmetric situation is \(0<h<1<k\).
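In fact, this outcome is already visible in the spatially homogeneous kinetics: under (1.2) the ODE system \(u'=ru(1-u-kv)\), \(v'=v(1-v-hu)\) has no positive coexistence equilibrium, and (1, 0) attracts every solution with \(u(0)>0\). The following minimal numerical sketch illustrates this; the parameter values are purely illustrative and are not taken from this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters with 0 < k < 1 < h (weak-strong competition);
# none of these numerical values come from the paper.
r, k, h = 1.5, 0.4, 2.0

def kinetics(t, y):
    u, v = y
    return [r * u * (1 - u - k * v), v * (1 - v - h * u)]

# Start with u rare and v well established; u still takes over eventually.
sol = solve_ivp(kinetics, [0.0, 200.0], [0.05, 0.9], rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])  # approximately (1, 0): u excludes v
```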

The case \(h,k\in (0,1)\) is called the weak competition case (see [33]), while the case \(h,k\in (1,+\infty )\) is known as the strong competition case. In these cases, rather different long-time dynamical behaviors are expected.

In this paper, we will focus on problem (P) for the weak–strong competition case (1.2), and demonstrate a rather interesting phenomenon, where u and v both survive in the competition, but they spread into the new territory with different speeds, and their population masses tend to segregate, with the population mass of v shifting to infinity as \(t\rightarrow \infty \).

For (P) with space dimension \(N=1\), such a phenomenon was discussed in [15], though less precisely than here. It is shown in Theorem 5 of [15] that under (1.2) and some additional conditions, both species can spread successfully, in the sense that

  1. (i)

    \(s_{1,\infty }=s_{2,\infty }=\infty \),

  2. (ii)

    there exists \(\delta >0\) such that for all \(t>0\),

    $$\begin{aligned} u(x,t)\ge \delta \hbox { for }x\in I_u(t),\; v(x,t)\ge \delta \hbox { for }x\in I_v(t), \end{aligned}$$

    where \(I_u(t)\) and \(I_v(t)\) are intervals of length at least \(\delta \) that vary continuously in t.

At the end of the paper [15], the question of determining the spreading speeds for both species was raised as an open problem for future investigation.

In this paper, we will determine, for such a case, \( \lim _{t\rightarrow \infty }\frac{s_i(t)}{t},\; i=1,2\); so in particular, the open problem of [15] on the spreading speeds is resolved here. Moreover, we also obtain a much better understanding of the long-time behavior of \(u(\cdot , t)\) and \(v(\cdot , t)\), for all dimensions \(N\ge 1\). See Theorem 1 and Corollary 1 below for details.

A crucial new ingredient (namely \(c^*_{\mu _1}\) below) in our approach here comes from recent research on another closely related problem (proposed in [7]):

$$\begin{aligned} \left\{ \begin{array}{l} u_t=d \Delta u+ru(1-u-kv),\quad 0\le r<h(t),\ t>0,\\ v_t=\Delta v+v(1-v-hu),\quad 0\le r<\infty ,\ t>0,\\ u_r(0,t)=v_r(0,t)=0,\ u(r,t)=0,\quad h(t)\le r<\infty ,\ t>0,\\ h'(t)=-\mu _1 u_r(h(t),t),\quad t>0,\\ h(0)=h_0, u(r,0)=\hat{u}_0(r),\quad 0\le r\le h_0,\\ v(r,0)=\hat{v}_0(r),\quad 0\le r<\infty , \end{array}\right. \end{aligned}$$
(Q)

where \(0<k<1<h\) and

$$\begin{aligned} \left\{ \begin{array}{l} \hat{u}_0\in C^2([0,h_0]),\ \hat{u}_0'(0)=\hat{u}_0(h_0)=0,\ \hat{u}_0>0\ \text{ in } [0,h_0),\\ \hat{v}_0\in C^2([0,\infty ))\cap L^{\infty }((0,\infty )),\ \hat{v}_0'(0)=0,\ \hat{v}_0\ge (\not \equiv )0\ \text{ in } [0,\infty ). \end{array}\right. \end{aligned}$$
(1.3)

In problem (Q) the inferior competitor v is assumed to be a native species already established in the environment, while the superior competitor u is invading the environment via the free boundary \(r=h(t)\). Theorem 4.3 in [7] gives a spreading-vanishing dichotomy for (Q): Either

  • (Spreading of u) \(\lim _{t\rightarrow \infty } h(t)=\infty \) and

    $$\begin{aligned} \lim _{t\rightarrow \infty } (u(r, t), v(r,t))=(1,0) \text{ locally } \text{ uniformly } \text{ for } \,r\in [0,\infty ), \hbox { or} \end{aligned}$$
  • (Vanishing of u) \(\lim _{t\rightarrow \infty } h(t)<\infty \) and

    $$\begin{aligned} \lim _{t\rightarrow \infty } (u(r, t), v(r,t))=(0,1) \text{ locally } \text{ uniformly } \text{ for } \,r\in [0,\infty ). \end{aligned}$$

Sharp criteria for spreading and vanishing of u are also given in [7]. When spreading of u happens, an interesting question is whether there exists an asymptotic spreading speed, namely whether \(\lim _{t\rightarrow \infty }\frac{h(t)}{t}\) exists. This kind of question, similar to the one raised in [15] mentioned above, turns out to be rather difficult to answer for systems of equations with free boundaries. Recently, Du, Wang and Zhou [13] successfully established the spreading speed for (Q) by making use of the following so-called semi-wave system:

$$\begin{aligned} \left\{ \begin{array}{l} cU'+d U''+rU(1-U-kV)=0,\quad -\infty<\xi<0,\\ cV'+V''+V(1-V-hU)=0,\quad -\infty<\xi<\infty ,\\ U(-\infty )=1,\quad U(0)=0,\quad U'(\xi )<0 =U(-\xi ),\ \xi <0,\\ V(-\infty )=0,\quad V(+\infty )=1,\quad V'(\xi )>0,\ \xi \in \mathbb {R}. \end{array}\right. \end{aligned}$$
(1.4)

It was shown that (1.4) has a unique solution if \(c\in [0, c_0)\), and it has no solution if \(c\ge c_0\), where

$$\begin{aligned} c_0\in [\,2\sqrt{rd(1-k)},2\sqrt{rd}\,] \end{aligned}$$

is the minimal speed for the traveling wave solution studied in [20]. More precisely, the following result holds:

Theorem A

(Theorem 1.3 of [13]) Assume that \(0<k<1<h\). Then for each \(c\in [0,c_0)\), (1.4) has a unique solution \((U_c,V_c)\in [C(\mathbb {R})\cap C^2((-\infty ,0])]\times C^2(\mathbb {R})\), and it has no solution for \(c\ge c_0\). Moreover,

  1. (i)

    if \(0\le c_1< c_2<c_0\), then

    $$\begin{aligned} U'_{c_1}(0)<U'_{c_2}(0),\; U_{c_1}(\xi )>U_{c_2}(\xi ) \text{ for } \xi <0,\; V_{c_2}(\xi )>V_{c_1}(\xi ) \text{ for } \xi \in \mathbb {R}; \end{aligned}$$
  2. (ii)

    the mapping \(c\mapsto (U_c,V_c)\) is continuous from \([0,c_0)\) to \(C^2_{loc}((-\infty ,0])\times C^2_{loc}(\mathbb {R})\) with

    $$\begin{aligned} \lim _{c\rightarrow c_0}(U_c,V_c)=(0,1)\quad \text{ in }\, C^2_{loc}((-\infty ,0])\times C^2_{loc}(\mathbb {R}); \end{aligned}$$
  3. (iii)

    for each \(\mu _1>0\), there exists a unique \(c=c^*_{\mu _1}\in (0,c_0)\) such that

$$\begin{aligned} \mu _1 U'_{c^*_{\mu _1}}(0)=-c^*_{\mu _1}\quad \text{ and } c^*_{\mu _1}\nearrow c_0\hbox { as }{\mu _1}\nearrow \infty . \end{aligned}$$

The spreading speed for (Q) is established as follows.

Theorem B

(Theorem  1.1 of [13]) Assume that \(0<k<1<h\). Let (uvh) be the solution of (Q) with (1.3) and

$$\begin{aligned} \liminf _{r\rightarrow \infty } \hat{v}_0(r)>0. \end{aligned}$$
(1.5)

If \(h_{\infty }:=\lim _{t\rightarrow \infty }h(t)=\infty \), then

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{h(t)}{t}=c^*_{\mu _1}, \end{aligned}$$

where \(c^*_{\mu _1}\) is given in Theorem A.

It turns out that \(c^*_{\mu _1}\) also plays an important role in determining the long-time dynamics of (P). In order to describe the second crucial number for the dynamics of (P) (namely \(s^*_{\mu _2}\) below), let us recall that, in the absence of the species u, problem (P) reduces to a single species model studied by Du and Guo [2], who generalized the model proposed by Du and Lin [6] from one space dimension to higher dimensions with spherical symmetry. In such a case, a spreading-vanishing dichotomy holds for v, and when spreading happens, the spreading speed of v is related to the following problem

$$\begin{aligned} \left\{ \begin{array}{l} dq''+sq'+q(a-bq)=0\quad \text{ in }\, (-\infty ,0),\\ q(0)=0,\quad q(-\infty )=a/b,\quad q(\xi )>0\quad \text{ in } \,(-\infty ,0). \end{array}\right. \end{aligned}$$
(1.6)

More precisely, by Proposition 2.1 in [1] (see also Proposition 1.8 and Theorem 6.2 of [8]), the following result holds:

Theorem C

For fixed \(a,b,d,\mu _2>0\), there exists a unique \(s=s^*(a,b,d,\mu _2)\in (0, 2\sqrt{ad})\) and a unique solution \(q^*\) to (1.6) with \(s=s^*(a,b,d,\mu _2)\) such that \((q^*)'(0)=-s^*(a,b,d,\mu _2)/\mu _2\). Moreover, \((q^*)'(\xi )<0\) for all \(\xi \le 0\).
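Although Theorem C is purely analytic, the pair \((s^*,q^*)\) is straightforward to approximate numerically. The sketch below is only an illustration (the function names are ours and nothing here is part of the paper): it computes the semi-wave slope \(q_s'(0)\) of (1.6) by integrating along the unstable manifold of the equilibrium \(q=a/b\), and then solves \(\mu _2 q_s'(0)=-s\) for \(s=s^*(a,b,d,\mu _2)\) by bisection.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def qprime_at_front(s, a=1.0, b=1.0, d=1.0):
    """Slope q_s'(0) for (1.6): leave q = a/b along its unstable manifold and
    integrate until q first hits 0; by translation invariance that crossing
    can be placed at xi = 0."""
    lam = (-s + np.sqrt(s * s + 4.0 * a * d)) / (2.0 * d)  # unstable eigenvalue
    eps = 1e-8
    def rhs(xi, y):
        q, p = y
        return [p, -(s * p + q * (a - b * q)) / d]
    hit = lambda xi, y: y[0]                 # event: q = 0
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(rhs, [0.0, 1e4], [a / b - eps, -eps * lam],
                    events=hit, rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][1]             # q' at the crossing

def s_star(mu2, a=1.0, b=1.0, d=1.0):
    """Unique s in (0, 2*sqrt(a*d)) with mu2 * q_s'(0) = -s, as in Theorem C."""
    f = lambda s: mu2 * qprime_at_front(s, a, b, d) + s
    return brentq(f, 1e-6, 2.0 * np.sqrt(a * d) - 1e-6)

print(s_star(1.0))  # s^*(1, 1, 1, 1), a value in (0, 2)
```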

Hereafter, we shall denote \(s^*_{\mu _2}:=s^*(1,1,1,\mu _2)\). It turns out that the long-time behavior of (P) depends crucially on whether \(c^*_{\mu _1}<s^*_{\mu _2}\) or \(c^*_{\mu _1}>s^*_{\mu _2}\). As demonstrated in Theorems 1 and 2 below, in the former case, it is possible for both species to spread successfully, while in the latter case, at least one species has to vanish eventually.

Let us note that while the existence and uniqueness of \(s^*_{\mu _2}\) is relatively easy to establish (and has been used in [15] and other papers to estimate the spreading speeds for various systems), this is not the case for \(c^*_{\mu _1}\), whose construction takes up more than half of the length of [13]. The main advance of the present paper over [15] is achieved by making use of \(c^*_{\mu _1}\).

Theorem 1

Suppose (1.2) holds and

$$\begin{aligned} c^*_{\mu _1}<s^*_{\mu _2}. \end{aligned}$$
(1.7)

Then one can choose initial functions \(u_0\) and \(v_0\) properly such that the unique solution \((u,v, s_1, s_2)\) of (P) satisfies

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{s_1(t)}{t}=c_{\mu _1}^*,\;\; \lim _{t\rightarrow \infty }\frac{s_2(t)}{t}=s^*_{\mu _2}, \end{aligned}$$

and for every small \(\epsilon >0\),

$$\begin{aligned} \lim _{t\rightarrow \infty } (u(r, t), v(r,t))=(1, 0) \text{ uniformly } \text{ for } \,r\in [0,(c_{\mu _1}^*-\epsilon )t], \end{aligned}$$
(1.8)
$$\begin{aligned} \lim _{t\rightarrow \infty }v(r,t)=1 \text{ uniformly } \text{ for } \,r\in [(c^*_{\mu _1}+\epsilon )t, (s^*_{\mu _2}-\epsilon )t]. \end{aligned}$$
(1.9)

Before giving some explanations regarding the condition (1.7) and the choices of \(u_0\) and \(v_0\) in the above theorem, let us first note that the above conclusions indicate that the species u spreads at the asymptotic speed \(c^*_{\mu _1}\), while v spreads at the faster asymptotic speed \(s^*_{\mu _2}\). Moreover, (1.8) and (1.9) imply that the population mass of u roughly concentrates on the expanding ball \(\{r<c^*_{\mu _1}t\}\), while that of v concentrates on the expanding spherical shell \(\{c^*_{\mu _1}t<r< s^*_{\mu _2}t\}\), which shifts to infinity as \(t\rightarrow \infty \). We also note that, apart from a relatively thin coexistence shell around \(r=c^*_{\mu _1}t\), the population masses of u and v are largely segregated for all large time. Clearly this gives a more precise description of the spreading of u and v than that in Theorem 5 of [15] (for \(N=1\)) mentioned above.

We now look at some simple sufficient conditions for (1.7). We note that \(c^*_{\mu _1}\) is independent of \(\mu _2\) and the initial functions. From the proof of Lemma 2.9 in [13], we see that \(c^*_{\mu _1}\rightarrow 0\) as \(\mu _1\rightarrow 0\). Therefore when all the other parameters are fixed,

$$\begin{aligned}&(1.7) \hbox { holds for all small } \mu _1>0. \end{aligned}$$

A second sufficient condition can be found by using Theorem A (iii), which implies \(c^*_{\mu _1}< c_0\le 2\sqrt{rd}\) for all \(\mu _1>0\). It follows that

$$\begin{aligned}&(1.7) \text{ holds } \text{ for } \text{ all } \mu _1>0 \text{ provided } \text{ that } 2\sqrt{rd}\le s^*_{\mu _2}. \end{aligned}$$

Note that \(2\sqrt{rd}\le s^*_{\mu _2}\) holds if \(\sqrt{rd}<1\) and \(\mu _2\gg 1\) since \(s^*_{\mu _2}\rightarrow 2\) as \(\mu _2\rightarrow \infty \).

For the conditions in Theorem 1 on the initial functions \(u_0\) and \(v_0\), the simplest ones are given in the corollary below.

Corollary 1

Assume (1.2) and (1.7). Then there exists a large positive constant \(C_0\) depending on \(s^0_1\) such that the conclusions of Theorem 1 hold if

  1. (i)

    \(\Vert u_0\Vert _{L^{\infty }([0,s^0_1])}\le 1\) with \(s^0_1\ge R^*\sqrt{d/[r(1-k)]}\),

  2. (ii)

    for some \(x_0\ge C_0\) and \( L\ge C_0\), \(v_0(r)\ge 1\) for \(r\in [x_0,x_0+L]\).

Here \(R^*\) is uniquely determined by \(\lambda _1(R^*)=1\), where \(\lambda _1(R)\) is the principal eigenvalue of

$$\begin{aligned} -\Delta \phi = \lambda \phi \quad \text{ in }\, B_R,\quad \phi =0\quad \text{ on }\, \partial B_R. \end{aligned}$$
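Since the principal Dirichlet eigenfunction of the ball is radial and given by a Bessel function, \(\lambda _1(B_R)=(j_{N/2-1,1}/R)^2\), where \(j_{\nu ,1}\) denotes the first positive zero of \(J_\nu \); hence \(R^*=j_{N/2-1,1}\). A small numerical sketch (not part of the paper) recovering \(R^*\) in low dimensions:

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def R_star(N):
    """R* = first positive zero of J_{N/2-1}, since
    lambda_1(B_R) = (j_{N/2-1,1} / R)^2 and lambda_1(R*) = 1."""
    nu = N / 2.0 - 1.0
    xs = np.linspace(1e-6, 20.0, 4000)
    vals = jv(nu, xs)
    for i in range(len(xs) - 1):
        if vals[i] * vals[i + 1] < 0:        # bracket the first sign change
            return brentq(lambda x: jv(nu, x), xs[i], xs[i + 1])
    raise RuntimeError("no zero found on the grid")

for N in (1, 2, 3):
    print(N, R_star(N))  # ~1.5708 (= pi/2), ~2.4048, ~3.1416 (= pi)
```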

Roughly speaking, conditions (i) and (ii) above (together with (1.2) and (1.7)) guarantee that u does not vanish yet it cannot spread too fast initially, and the initial population of v is relatively well-established in some part of the environment where u is absent, so with its fast spreading speed v can outrun the superior but slower competitor u. In Sect. 2, weaker sufficient conditions on \(u_0\) and \(v_0\) will be given (see (B1) and (B2) there).

Next we describe the long-time behavior of (P) for the case

$$\begin{aligned} c^*_{\mu _1}>s^*_{\mu _2}. \end{aligned}$$
(1.10)

We will show that, in this case, no matter how the initial functions \(u_0\) and \(v_0\) are chosen, at least one of u and v will vanish eventually. As in [15], we say u (respectively v) vanishes eventually if

$$\begin{aligned}&s_{1,\infty }<+\infty \text{ and } \lim _{t\rightarrow +\infty }\Vert u(\cdot , t)\Vert _{L^\infty ([0, s_1(t)])}=0\\&(\text{ respectively, } \; s_{2,\infty }<+\infty \text{ and } \lim _{t\rightarrow +\infty }\Vert v(\cdot , t)\Vert _{L^\infty ([0, s_2(t)])}=0); \end{aligned}$$

and we say u (respectively v) spreads successfully if

$$\begin{aligned}&s_{1,\infty }=\infty \text{ and } \text{ there } \text{ exists } \delta>0 \text{ such } \text{ that, }\\&u(x,t)\ge \delta \, \text{ for } \, x\in I_u(t)\, \text{ and } \,t>0, \end{aligned}$$

where \(I_u(t)\) is an interval of length at least \(\delta \) that varies continuously in t (respectively,

$$\begin{aligned}&s_{2,\infty }=\infty \, \text{ and } \text{ there } \text{ exists } \, \delta>0\, \text{ such } \text{ that }\\&v(x,t)\ge \delta \, \text{ for }\, x\in I_v(t)\, \text{ and }\, t>0, \end{aligned}$$

where \(I_v(t)\) is an interval of length at least \(\delta \) that varies continuously in t).

For fixed \(d,r,h,k>0\) satisfying (1.2), we define

$$\begin{aligned} \mathcal {B}=\mathcal {B}(d,r,h,k):=\Big \{(\mu _1,\mu _2)\in \mathbb {R}_+\times \mathbb {R}_+: \ c^*_{\mu _1}>s^*_{\mu _2}\Big \}. \end{aligned}$$

Note that \(\mathcal {B}\ne \emptyset \) since \(s^*_{\mu _2}\rightarrow 0\) as \(\mu _2\rightarrow 0\) and \(c^*_{\mu _1}>0\) is independent of \(\mu _2\).

We have the following result.

Theorem 2

Assume that (1.2) holds. If \((\mu _1,\mu _2)\in \mathcal {B}\), then at least one of the species u and v vanishes eventually. More precisely, depending on the choice of \(u_0\) and \(v_0\), exactly one of the following occurs for the unique solution \((u,v,s_1,s_2)\) of (P):

  1. (i)

    Both species u and v vanish eventually.

  2. (ii)

    The species u vanishes eventually and v spreads successfully.

  3. (iii)

    The species u spreads successfully and v vanishes eventually.

Note that \((\mu _1,\mu _2)\in \mathcal {B}\) if and only if (1.10) holds. Theorem 2 can be proved along the lines of the proof of [15, Corollary 1] with some suitable changes. When \(N=1\), Theorem 2 slightly improves the conclusion of Corollary 1 in [15], since it is easily seen that \(\mathcal {A}\subset \mathcal {B}\) (due to \(s^*(r(1-k),r,d,\mu _1)\le c^*_{\mu _1}\)), where \(\mathcal {A}:=\Big \{(\mu _1,\mu _2)\in \mathbb {R}_+\times \mathbb {R}_+: \ s^*(r(1-k),r,d,\mu _1)>s^*_{\mu _2}\Big \}\) is given in [15].

Remark 1.1

We note that by suitably choosing the initial functions \(u_0\) and \(v_0\) and the parameters \(\mu _1\) and \(\mu _2\), all three possibilities in Theorem 2 can occur. For example, for given \(u_0\) and \(v_0\) with \(s_1^0<R^*\sqrt{\frac{d}{r}}\) and \(s_2^0< R^*\), scenario (i) occurs as long as both \(\mu _1\) and \(\mu _2\) are small enough and \((\mu _1, \mu _2)\in \mathcal {B}\) (which can be proved by using the argument in [6, Lemma 3.8]). If we next modify \(v_0\) so that \(s_2^0\ge R^*\), then u still vanishes eventually but v spreads successfully, which leads to scenario (ii). For scenario (iii) to occur, we can take \(s_1^0\ge R^*\sqrt{\frac{d}{r(1-k)}}\) and \(\mu _2\) small enough.

Our results here suggest that in the weak–strong competition case, coexistence of the two species over a common (either moving or stationary) spatial region can hardly happen. This contrasts sharply with the weak competition case (\(h,k\in (0,1)\)), where coexistence often occurs; see, for example, [31, 33].

Before ending this section, we mention some further references that form part of the background of this research. Since the work [6], there have been tremendous efforts towards developing analytical tools to deal with more general single species models with free boundaries; see [1, 3,4,5, 8, 9, 11, 12, 17, 19, 21, 22, 24, 25, 27, 35] and references therein. Related works for two species models can be found in, for example, [7, 13, 14, 26, 28,29,30, 32,33,34]. The issue of the spreading speed for single species models in homogeneous environment has been well studied, and we refer to [11, 12] for some sharp estimates. Some of the theory on single species models can be used to estimate the spreading speed for two species models; however, generally speaking, only rough upper and lower bounds can be obtained via this approach.

The rest of this paper is organized as follows. In Sect. 2, we shall prove our main result, Theorem 1, based on the comparison principle and on the construction of various auxiliary functions as comparison solutions to (P). Section 3 is an appendix, where we prove the local and global existence and uniqueness of solutions to a wide class of problems including (P) as a special case, and we also sketch the proof of Theorem 2.

2 Proof of Theorem 1

We start by establishing several technical lemmas.

Lemma 2.1

Let \(\mu _2>0\) and \(s^*_{\mu _2}\) be given in Theorem C. Then for each \(s\in (0,s^*_{\mu _2})\), there exists a unique \(z=z(s)>0\) such that the solution \(q_s\) of the initial value problem

$$\begin{aligned} \left\{ \begin{array}{l} q''+sq'+q(1-q)=0\quad \text{ in }\, (-\infty ,0),\\ q(0)=0,\quad q'(0)=-s^*_{\mu _2}/{\mu _2} \end{array}\right. \end{aligned}$$

satisfies \(q_s'(-z(s))=0\) and \(q_s'(\xi )<0\) for \(\xi \in (-z(s),0)\). Moreover, \(q_s(-z(s))\) is continuous in s and

$$\begin{aligned} z(s)\nearrow \infty ,\quad q_s(-z(s))\nearrow 1\quad \text{ as }\, s\nearrow s^*_{\mu _2}. \end{aligned}$$

Proof

The conclusions follow directly from Proposition 2.4 in [18]. \(\square \)
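For intuition, \(z(s)\) and \(q_s(-z(s))\) can be computed directly from the initial value problem by integrating backwards in \(\xi \) until \(q_s'\) first vanishes. The sketch below is illustrative only: it takes \(s^*_{\mu _2}\) as an input (which can itself be approximated, e.g. as in the sketch following Theorem C) and assumes \(s\in (0,s^*_{\mu _2})\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def z_and_peak(s, s_star_mu2, mu2):
    """Integrate q'' + s q' + q(1-q) = 0 backwards from xi = 0 with q(0) = 0,
    q'(0) = -s_star_mu2/mu2, stopping at the first point -z(s) where q' = 0;
    returns (z(s), q_s(-z(s))) as in Lemma 2.1 (requires 0 < s < s_star_mu2)."""
    def rhs(xi, y):
        q, p = y
        return [p, -(s * p + q * (1.0 - q))]
    stop = lambda xi, y: y[1]                # event: q' = 0
    stop.terminal, stop.direction = True, 0
    sol = solve_ivp(rhs, [0.0, -200.0], [0.0, -s_star_mu2 / mu2],
                    events=stop, rtol=1e-10, atol=1e-12, max_step=0.1)
    xi_star = sol.t_events[0][0]
    return -xi_star, sol.y_events[0][0][0]
```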

Lemma 2.2

Let \((u,v,s_1,s_2)\) be a solution of (P) with \(s_{1,\infty }=s_{2,\infty }=\infty \). Suppose that

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{s_1(t)}{t}<c_1<c_2<\liminf _{t\rightarrow \infty }\frac{s_2(t)}{t} \end{aligned}$$

for some positive constants \(c_1\) and \(c_2\). Then for any \(\varepsilon >0\), there exists \(T>0\) such that

$$\begin{aligned}&v(r,t)<1+\varepsilon \quad \text{ for } \text{ all }\, t\ge T\, \text{ and }\, r\in [0,\infty ), \end{aligned}$$
(2.1)
$$\begin{aligned}&v(r,t)>1-\varepsilon \quad \text{ for } \text{ all }\, t\ge T\, \text{ and }\, r\in [c_1 t,c_2 t]. \end{aligned}$$
(2.2)

Proof

Let \(\bar{w}\) be the solution of \(w'(t)=w(1-w)\) with initial data \(w(0)=\Vert v_0\Vert _{L^{\infty }}.\) By the standard comparison principle, \(v(x,t)\le \bar{w}(t)\) for all \(t\ge 0\). Since \(\bar{w}\rightarrow 1\) as \(t\rightarrow \infty \), there exists \(T>0\) such that (2.1) holds.

Before proving (2.2), we first show \(\limsup _{t\rightarrow \infty }s_2(t)/t\le s^*_{\mu _2}\) by simple comparison. Indeed, it is easy to check that \((v,s_2)\) forms a subsolution of

$$\begin{aligned} \left\{ \begin{array}{l} \bar{w}_t=\Delta \bar{w}+\bar{w}(1-\bar{w}),\ 0<r<\bar{\eta }(t),\ t>0,\\ \bar{w}_r(0,t)=0,\ \bar{w}(\bar{\eta }(t),t)=0,\ t>0,\\ \bar{\eta }'(t)=-\mu _2 \bar{w}_r(\bar{\eta }(t),t),\ t>0,\\ \bar{\eta }(0)=s_2^0,\ \bar{w}(r,0)=v_0(r),\ r\in [0,s_2^0]. \end{array}\right. \end{aligned}$$

By the comparison principle (Lemma 2.6 of [2]), \(\bar{\eta }(t)\ge s_{2}(t)\) for all t, which implies that \(\bar{\eta }(\infty )=\infty \). It then follows from Corollary 3.7 of [2] that \(\bar{\eta }(t)/t\rightarrow s^*_{\mu _2}\) as \(t\rightarrow \infty \). Consequently, we have

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{s_2(t)}{t}\le \lim _{t\rightarrow \infty }\frac{\bar{\eta }(t)}{t}=s^{*}_{\mu _2}. \end{aligned}$$

It follows that \(c_2<s^{*}_{\mu _2}\).

We now prove (2.2) by using a contradiction argument. Assume that the conclusion does not hold. Then there exist small \(\epsilon _0>0\), \(t_k \uparrow \infty \) and \(x_k \in [c_1 t_k, c_2 t_k]\) such that

$$\begin{aligned} v(x_k, t_k)\le 1-\epsilon _0 \quad \text{ for } \text{ all } \,k\in \mathbb {N}. \end{aligned}$$
(2.3)

Up to passing to a subsequence we may assume that \(p_k:=x_k/t_k \rightarrow p_0\) for some \(p_0 \in [c_1,c_2]\) as \(k\rightarrow \infty \).

We want to show that

$$\begin{aligned} \limsup _{k\rightarrow \infty } v(x_k, t_k)>1-\epsilon _0, \end{aligned}$$
(2.4)

which would give the desired contradiction (with (2.3)). To do so, we define

$$\begin{aligned} w_k(R,t)= v(R + p_k t, t). \end{aligned}$$

Then \(w_k\) satisfies

$$\begin{aligned} w_t = w_{RR} + \Big [\frac{N-1}{R+p_k t} + p_k\Big ] w_R + w ( 1-w)\quad \mathrm{for}\ -p_k t<R<s_2(t)-p_k t,\ t\ge t_1. \end{aligned}$$

Recall that \(0<c_1<c_2<s_{\mu _2}^*<2\), \(x_k=p_kt_k\) and \(p_k\rightarrow p_0\in [c_1, c_2]\subset (0,2)\). Hence there exists large positive L such that for all \(L_1\), \(L_2\in [L, \infty )\), the problem

$$\begin{aligned} z_{RR}+p_0z_R+z(1-z)=0 \text{ in } (-L_2, L_1),\; z(-L_2)=z(L_1)=0 \end{aligned}$$
(2.5)

has a unique positive solution z(R) and \(z(0)>1-\epsilon _0\).

Fix \(L_1\ge L\), \(p\in (p_0, 2)\) and define

$$\begin{aligned} \phi (R)=e^{-\frac{p}{2} R}\cos \frac{\pi R}{2L_1}. \end{aligned}$$

It is easily checked that

$$\begin{aligned} -\phi ''-p\phi '=\left[ \frac{p^2}{4}+\frac{\pi ^2}{4L_1^2}\right] \phi \quad \text{ for } R\in [-L_1, L_1],\; \phi (\pm L_1)=0. \end{aligned}$$
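This identity can also be confirmed by a short symbolic computation (purely a sanity check, not part of the argument):

```python
import sympy as sp

R, p, L1 = sp.symbols('R p L_1', positive=True)
phi = sp.exp(-p * R / 2) * sp.cos(sp.pi * R / (2 * L1))

# -phi'' - p*phi' should equal (p^2/4 + pi^2/(4 L1^2)) * phi identically.
lhs = -sp.diff(phi, R, 2) - p * sp.diff(phi, R)
rhs = (p**2 / 4 + sp.pi**2 / (4 * L1**2)) * phi
print(sp.simplify(lhs - rhs))  # 0
```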

Moreover, there exists a unique \(R_0\in (-L_1, 0)\) such that

$$\begin{aligned} \phi '(R)<0 \text{ for } R\in (R_0, L_1],\; \phi '(R_0)=0. \end{aligned}$$

We may assume that \(L_1\) is large enough such that

$$\begin{aligned} \tilde{p}:=\frac{p^2}{4}+\frac{\pi ^2}{4L_1^2}<1. \end{aligned}$$

We then choose \(L_2>L\) such that

$$\begin{aligned} \tilde{L}:=L_2+R_0>0\, \text{ and } \; \frac{\pi ^2}{4\tilde{L}^2}<\tilde{p}. \end{aligned}$$

Set

$$\begin{aligned} \tilde{\phi }(R):=\left\{ \begin{array}{ll} \phi (R), &{} R\in [R_0, L_1],\\ \phi (R_0)\cos \frac{\pi (R-R_0)}{2\tilde{L}},&{} R\in [-L_2, R_0). \end{array} \right. \end{aligned}$$

Then clearly

$$\begin{aligned} -\tilde{\phi }''=\frac{\pi ^2}{4\tilde{L}^2}\tilde{\phi },\quad \tilde{\phi }'>0\ \text{ in } (-L_2, R_0),\; \tilde{\phi }(-L_2)=0=\tilde{\phi }'(R_0). \end{aligned}$$

Since

$$\begin{aligned} \frac{N-1}{R+p_kt}+p_k<p \end{aligned}$$

for all large k and large t, we further obtain, for such k and t, say \(k\ge k_0\) and \(t\ge T_1\),

$$\begin{aligned} -\tilde{\phi }''-\left[ \frac{N-1}{R+p_kt}+p_k\right] \tilde{\phi }'\le -\tilde{\phi }''-p\chi _{[R_0,L_1]}\tilde{\phi }'\le \tilde{p}\,\tilde{\phi } \text{ for } R\in (-L_2, L_1). \end{aligned}$$
(2.6)

The above differential inequality should be understood in the weak sense since \(\tilde{\phi }''\) may have a jump at \(R=R_0\).

We now fix \(T_0\ge T_1\) and observe that

$$\begin{aligned} v(R, T_0)>0 \text{ for } R\in [0, s_2(T_0)),\; c_2<\liminf _{t\rightarrow \infty }\frac{s_2(t)}{t}. \end{aligned}$$

Hence if \(T_0\) is large enough then for \(R\in [-L_2, L_1]\) and \(t\ge T_0\) we have

$$\begin{aligned} 0<-L_2+c_1t\le R+p_kt\le L_1+c_2t<s_2(t) \text{ for } \text{ all } k\ge 1. \end{aligned}$$

It follows that

$$\begin{aligned} w_k(R, T_0)=v(R+p_kT_0,T_0)\ge \sigma _0:=\min _{R\in [0, L_1+c_2T_0]}v(R,T_0)>0 \quad \text{ for } R\in [-L_2, L_1],\; k\ge 1. \end{aligned}$$

Let \(z_k(R,t)\) be the unique solution of

$$\begin{aligned} z_t = z_{RR} + \Big [\frac{N-1}{R+p_k t} + p_k\Big ]z_R + z(1-z),\quad z(L_1, t) = z(-L_2,t) =0, \end{aligned}$$

with initial condition

$$\begin{aligned} z_k(R, T_0)= w_k(R, T_0) \quad \text{ for } R\in [-L_2, L_1]. \end{aligned}$$

The comparison principle yields

$$\begin{aligned} w_k(R,t)\ge z_k(R, t) \text{ for } R\in [-L_2, L_1],\; t\ge T_0,\; k\ge 1 \end{aligned}$$

since \(w_k(R, t)>0=z_k(R,t)\) for \(R\in \{-L_2, L_1\}\) and \(t> T_0,\; k\ge 1\).

On the other hand, if we choose \(\delta >0\) sufficiently small, then \(\underline{z}(R):=\delta \tilde{\phi }(R)\le \sigma _0\) for \(R\in [-L_2, L_1]\) and due to (2.6), \(\underline{z}(R)\) satisfies

$$\begin{aligned} -\underline{z}''-\left[ \frac{N-1}{R+p_kt}+p_k\right] \underline{z}'\le \underline{z}(1-\underline{z}) \text{ for } R\in (-L_2, L_1),\; t\ge T_0,\; k\ge k_0. \end{aligned}$$

We thus obtain

$$\begin{aligned} z_k(R,t)\ge \underline{z}(R) \text{ for } R\in [-L_2, L_1],\; t\ge T_0,\; k\ge k_0. \end{aligned}$$

We claim that

$$\begin{aligned} \lim _{k\rightarrow \infty } z_k(0, t_k)=z(0)>1-\epsilon _0, \end{aligned}$$
(2.7)

where z(R) is the unique positive solution of (2.5), which then gives

$$\begin{aligned} \limsup _{k\rightarrow \infty }v(x_k,t_k)=\limsup _{k\rightarrow \infty }w_k(0,t_k)\ge \limsup _{k\rightarrow \infty }z_k(0,t_k)>1-\epsilon _0, \end{aligned}$$

and so (2.4) holds.

It remains to prove (2.7). Set

$$\begin{aligned} Z_k(R,t):=z_k(R, t_k+t). \end{aligned}$$

Then \(Z_k\) satisfies

$$\begin{aligned} (Z_k)_t&=(Z_k)_{RR}+\left[ \frac{N-1}{R+p_k(t_k+t)}+p_k\right] (Z_k)_R\\&\quad +Z_k(1-Z_k) \text{ for } R\in (-L_2, L_1),\; t\ge T_0-t_k, \end{aligned}$$

and

$$\begin{aligned} Z_k(-L_2,t)=Z_k(L_1,t)=0, \; Z_k(R,t)\ge \underline{z}(R) \text{ for } R\in [-L_2,L_1], t\ge T_0-t_k,\ k\ge k_0. \end{aligned}$$

By a simple comparison argument involving a suitable ODE problem we easily obtain

$$\begin{aligned} Z_k(R,t)\le M:=\max \{ \Vert v(\cdot , T_0)\Vert _{L^\infty }, 1\} \text{ for } R\in [-L_2,L_1], t\ge T_0-t_k,\ k\ge 1. \end{aligned}$$

Since \( \frac{N-1}{R+p_k(t_k+t)}+p_k\rightarrow p_0\) uniformly as \(k\rightarrow \infty \), we may apply the parabolic \(L^p\) estimate to the equations satisfied by \(Z_k\) to conclude that, for any \(p>1\) and \(T>0\), there exists \(C_1>0\) such that, for all large \(k\ge k_0\), say \(k\ge k_1\),

$$\begin{aligned} \Vert Z_k\Vert _{W^{2,1}_p([-L_2, L_1]\times [-T,T])}\le C_1. \end{aligned}$$

It then follows from the Sobolev embedding theorem that, for every \(\alpha \in (0,1)\) and all \(k\ge k_1\),

$$\begin{aligned} \Vert Z_k\Vert _{C^{1+\alpha , (1+\alpha )/2}([-L_2, L_1]\times [-T,T])}\le C_2 \end{aligned}$$

for some constant \(C_2\) depending on \(C_1\) and \(\alpha \). Let \(\tilde{\alpha }\in (0, \alpha )\). Then by compact embedding and a standard diagonal argument, we can find a subsequence of \(\{Z_k\}\), still denoted by \(\{Z_k\}\) for convenience, such that

$$\begin{aligned} Z_k(R,t)\rightarrow Z(R,t) \text{ as } k\rightarrow \infty \text{ in } C_{loc}^{1+\tilde{\alpha }, (1+\tilde{\alpha })/2}([-L_2,L_1]\times \mathbb R). \end{aligned}$$

From the equations satisfied by \(Z_k\) we obtain

$$\begin{aligned} Z_t=Z_{RR}+p_0Z_R+Z(1-Z) \text{ for } R\in (-L_2, L_1),\; t\in \mathbb R, \end{aligned}$$

and

$$\begin{aligned} Z(-L_2, t)=Z(L_1,t)=0,\; M\ge Z(R, t)\ge \underline{z}(R) \text{ for } R\in [-L_2, L_1],\; t\in \mathbb R. \end{aligned}$$

We show that \(Z(R,t)\equiv z(R)\). Indeed, if we denote by \(\underline{Z}\) the unique solution of

$$\begin{aligned} z_t=z_{RR}+p_0z_R+z(1-z) \text{ for } R\in (-L_2, L_1), \; t>0 \end{aligned}$$

with boundary conditions \(z(-L_2,t)=z(L_1,t)=0\) and initial condition \(z(R,0)=\underline{z}(R)\), and by \(\overline{Z}\) the unique solution of the same problem with the initial condition replaced by \(z(R,0)=M\), then clearly

$$\begin{aligned} \lim _{t\rightarrow \infty } \underline{Z}(R,t)=\lim _{t\rightarrow \infty }\overline{Z}(R,t)=z(R). \end{aligned}$$

On the other hand, for any \(s>0\), by the comparison principle we have

$$\begin{aligned} \underline{Z}(R, t+s)\le Z(R,t)\le \overline{Z}(R, t+s) \text{ for } R\in [-L_2, L_1],\; t\ge -s. \end{aligned}$$

Letting \(s\rightarrow \infty \) we obtain \(z(R)\le Z(R,t)\le z(R)\). We have thus proved \(Z(R,t)\equiv z(R)\) and hence

$$\begin{aligned} z_k(0, t_k)=Z_k(0,0)\rightarrow Z(0,0)=z(0) \text{ as } k\rightarrow \infty . \end{aligned}$$

This proves (2.7) and the proof of Lemma 2.2 is complete.\(\square \)
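As an aside, the limiting profile z(R) of (2.5), which both \(\underline{Z}\) and \(\overline{Z}\) approach, is easy to visualize by time-marching the corresponding parabolic problem to its steady state. The discretization below is a rough illustrative sketch (explicit Euler in time, centred differences in space); the parameter values are arbitrary and not from the paper.

```python
import numpy as np

def steady_state(p0=1.0, L1=30.0, L2=30.0, n=301, T=200.0):
    """Time-march z_t = z_RR + p0*z_R + z(1-z) on [-L2, L1] with zero Dirichlet
    data until approximately stationary; the result approximates the unique
    positive solution z(R) of (2.5) when 0 < p0 < 2 and L1, L2 are large."""
    R = np.linspace(-L2, L1, n)
    hgrid = R[1] - R[0]
    dt = 0.2 * hgrid * hgrid                 # small enough for explicit stability
    z = np.full(n, 0.5)
    z[0] = z[-1] = 0.0
    for _ in range(int(T / dt)):
        zRR = (z[2:] - 2.0 * z[1:-1] + z[:-2]) / hgrid**2
        zR = (z[2:] - z[:-2]) / (2.0 * hgrid)
        z[1:-1] += dt * (zRR + p0 * zR + z[1:-1] * (1.0 - z[1:-1]))
    return R, z

R, z = steady_state()
print(z[np.argmin(np.abs(R))])  # value at R ~ 0, close to 1 for large L1, L2
```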

We now start to construct some auxiliary functions by modifying the unique solution (UV) of (1.4) with \(c=c_{\mu _1}^*\). Firstly, for any given small \(\varepsilon \in (0,1)\) we consider the following perturbed problem

$$\begin{aligned} \left\{ \begin{array}{l} {cU'+d U''+rU(1+\varepsilon -U-kV)=0} \text{ for } -\infty<\xi<0,\\ {cV'+V''+V(1-\varepsilon -V-hU)=0} \text{ for } -\infty<\xi<\infty ,\\ U(-\infty )=1+\varepsilon ,\; U(0)=0,\; U'(0)=-c/\mu _1,\; U'(\xi )<0=U(-\xi ) \text{ for } \xi <0,\\ V(-\infty )=0,\quad V(+\infty )=1-\varepsilon ,\quad V'(\xi )>0 \text{ for } \xi \in \mathbb {R}. \end{array}\right. \end{aligned}$$
(2.8)

Writing \(U=(1+\varepsilon )\widehat{U}\) and \(V=(1-\varepsilon )\widehat{V}\), we see that \((\widehat{U}, \widehat{V})\) satisfies (1.4) with k and h replaced by some \(\hat{k}_\varepsilon \) and \(\hat{h}_\varepsilon \) with \(0<\hat{k}_\varepsilon<1<\hat{h}_\varepsilon \), and \(\hat{k}_\varepsilon \rightarrow k\), \(\hat{h}_\varepsilon \rightarrow h\) as \(\varepsilon \rightarrow 0\). Hence by Theorem A, there exists a unique \(c=c^{\varepsilon }_{\mu _1}>0\) such that (2.8) with \(c=c^{\varepsilon }_{\mu _1}\) admits a unique solution \((U_{\varepsilon },V_{\varepsilon })\). As in [13], \((U_\varepsilon , V_\varepsilon )\) and \(c^\varepsilon _{\mu _1}\) depend continuously on \(\varepsilon \); in particular, \(c^{\varepsilon }_{\mu _1}\rightarrow c^{*}_{\mu _1}\) as \(\varepsilon \rightarrow 0\). Moreover, as in the proof of Lemma 2.5 in [13], we have the asymptotic expansion

$$\begin{aligned} V_{\varepsilon }(\xi )=Ce^{\mu \xi }(1+o(1)), \quad V_{\varepsilon }'(\xi )=C\mu e^{\mu \xi }(1+o(1))\ \text{ as }\, \xi \rightarrow -\infty \end{aligned}$$
(2.9)

for some \(C>0\), where

$$\begin{aligned} \mu =\mu (\varepsilon ):=\frac{-c^{\varepsilon }_{\mu _1}+\sqrt{(c^{\varepsilon }_{\mu _1})^2+4[h(1+\varepsilon )-1+\varepsilon ]}}{2}>0. \end{aligned}$$
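For completeness, we recall where this exponent comes from: since \(U_{\varepsilon }(\xi )\rightarrow 1+\varepsilon \) as \(\xi \rightarrow -\infty \), the second equation of (2.8) linearizes at \(V=0\) to

$$\begin{aligned} V''+c^{\varepsilon }_{\mu _1}V'-\big [h(1+\varepsilon )-1+\varepsilon \big ]V=0, \end{aligned}$$

and \(\mu \) is the unique positive root of the characteristic equation \(\mu ^2+c^{\varepsilon }_{\mu _1}\mu -[h(1+\varepsilon )-1+\varepsilon ]=0\), corresponding to the decaying behavior \(V_{\varepsilon }\sim Ce^{\mu \xi }\) as \(\xi \rightarrow -\infty \).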

Next we modify \((U_{\varepsilon }(\xi ),V_{\varepsilon }(\xi ))\) to obtain the required auxiliary functions. The modification of \(V_{\varepsilon }\) is rather involved, and for simplicity, we do that for \(\xi \ge 0\) and \(\xi \le 0\) separately.

We first consider the case \(\xi \ge 0\). For fixed \(\varepsilon \in (0,1)\) sufficiently small, we define

$$\begin{aligned} Q^{\varepsilon }_+(\xi )={\left\{ \begin{array}{ll} V_{\varepsilon }(\xi ) &{} \text{ for } 0\le \xi \le \xi _0,\\ V_{\varepsilon }(\xi )-\delta (\xi -\xi _0)^2V_{\varepsilon }(\xi _0)&{} \text{ for } \xi _0\le \xi \le \xi _0+1, \end{array}\right. } \end{aligned}$$
(2.10)

where \(\xi _0=\xi _0(\varepsilon )>0\) is determined later and

$$\begin{aligned} \delta =\delta (\varepsilon ):=\frac{\varepsilon }{4+2c^{\varepsilon }_{\mu _1}}\in (0,1). \end{aligned}$$
(2.11)

It is straightforward to see that \(Q^{\varepsilon }_+\in C^1([0,\xi _0+1])\). The following result will be useful later.

Lemma 2.3

For any small \(\varepsilon >0\), there exist \(\xi _0=\xi _0(\varepsilon )>0\) and \(\xi _1=\xi _1(\varepsilon )\in (\xi _0,\xi _0+1)\) such that \(\lim _{\varepsilon \rightarrow 0}\xi _0(\varepsilon )=\infty \) and

$$\begin{aligned}&(Q_+^{\varepsilon })'(\xi _1)=0, (Q_+^{\varepsilon })'(\xi )>0 \text{ for } \xi \in [0,\xi _1),\nonumber \\&c^{\varepsilon }_{\mu _1}(Q^{\varepsilon }_{+})'+(Q^{\varepsilon }_{+})''+Q^{\varepsilon }_+(1-Q^{\varepsilon }_{+})\ge 0\quad \text{ for } \xi \in [0,\xi _1]\backslash \{\xi _0\}. \end{aligned}$$
(2.12)

Moreover, there exists \(s^\varepsilon \in (0,s^*_{\mu _2})\) such that \(s^\varepsilon \rightarrow s^*_{\mu _2}\) as \(\varepsilon \rightarrow 0^+\) and

$$\begin{aligned} Q^{\varepsilon }_+(\xi _1)=q_{s^\varepsilon }(-z(s^\varepsilon )), \end{aligned}$$
(2.13)

where \(z(s^\varepsilon )\) and \(q_{s^\varepsilon }\) are defined in Lemma 2.1 with \(s=s^\varepsilon \).

Proof

For convenience of notation we will write \(Q^{\varepsilon }_+=Q\). Since \(V_{\varepsilon }(\infty )= 1-\varepsilon \), \(V_{\varepsilon }'(\infty )=0\) and \(V_\varepsilon '>0\), we can choose \(\xi _0=\xi _0(\varepsilon )\gg 1\) such that \(\lim _{\varepsilon \rightarrow 0}\xi _0(\varepsilon )=\infty \) and

$$\begin{aligned}&1-2\varepsilon \le V_{\varepsilon }(\xi )\le 1-\varepsilon ,\quad V_{\varepsilon }'(\xi +1)-2\delta V_{\varepsilon }(\xi )<0 \quad \text{ for } \text{ all } \,\xi \ge \xi _0. \end{aligned}$$

In particular, we have

$$\begin{aligned} Q'(\xi _0+1)=V_{\varepsilon }'(\xi _0+1)-2\delta V_{\varepsilon }(\xi _0)<0. \end{aligned}$$

Note that \(Q'(\xi _0)=V_{\varepsilon }'(\xi _0)>0\). By the continuity of \(Q'\), we can find \(\xi _1=\xi _1(\xi _0)\in (\xi _0,\xi _0+1)\) such that

$$\begin{aligned} Q'(\xi _1)=0,\quad Q'(\xi )>0\ \text{ for } \text{ all } \,\xi \in [\xi _0,\xi _1). \end{aligned}$$
(2.14)

Hence we have \(Q'>0\) in \([0,\xi _1)\) since \(Q'=V_{\varepsilon }'>0\) in \([0,\xi _0]\).

We now prove (2.12). For \(\xi \in [0,\xi _0)\), we have \(Q=V_{\varepsilon }\). Using \(U_{\varepsilon }(\xi )\equiv 0\) for \(\xi \ge 0\) and the second equation of (2.8), it is straightforward to see that the inequality in (2.12) holds for \(\xi \in [0,\xi _0)\). For \(\xi \in (\xi _0,\xi _1]\), direct computation gives us

$$\begin{aligned} Q'(\xi )=V'_{\varepsilon }(\xi )-2 \delta (\xi -\xi _0)V_{\varepsilon }(\xi _0),\quad Q''(\xi )=V''_{\varepsilon }(\xi )-2 \delta V_{\varepsilon }(\xi _0). \end{aligned}$$

Hence

$$\begin{aligned}&c^{\varepsilon }_{\mu _1}Q'+Q''+Q(1-Q)\\&\quad = c^{\varepsilon }_{\mu _1}V'_{\varepsilon }(\xi )-2 c^{\varepsilon }_{\mu _1} \delta (\xi -\xi _0)V_{\varepsilon }(\xi _0) +V''_{\varepsilon }(\xi )-2\delta V_{\varepsilon }(\xi _0)\\&\qquad +\big [V_{\varepsilon }(\xi )-\delta (\xi -\xi _0)^2V_{\varepsilon }(\xi _0)\big ]\big [1-V_{\varepsilon }(\xi )+\delta (\xi -\xi _0)^2V_{\varepsilon }(\xi _0)\big ] \end{aligned}$$

Using \(0\le \xi -\xi _0\le \xi _1-\xi _0<1\), \(1>V_{\varepsilon }(\xi )>V_{\varepsilon }(\xi _0)\) for \(\xi \in [\xi _0,\xi _1]\), the identity

$$\begin{aligned} c^{\varepsilon }_{\mu _1}V'_{\varepsilon }+V''_{\varepsilon }=-(1-\varepsilon -V_{\varepsilon })V_{\varepsilon }, \end{aligned}$$

and (2.11), we deduce

$$\begin{aligned}&c^{\varepsilon }_{\mu _1}Q'+Q''+Q(1-Q)\\&\quad \ge \varepsilon V_{\varepsilon }(\xi ) -2c^{\varepsilon }_{\mu _1} \delta V_{\varepsilon }(\xi _0)-2\delta V_{\varepsilon }(\xi _0) -\delta V_{\varepsilon }(\xi _0)(1-V_{\varepsilon }(\xi ))-\delta ^2 V_{\varepsilon }^2(\xi _0)\\&\quad \ge V_{\varepsilon }(\xi _0)(\varepsilon -2c^{\varepsilon }_{\mu _1}\delta -4\delta )\\&\quad = 0\qquad \text{ for } \xi \in [\xi _0,\xi _1]. \end{aligned}$$

To complete the proof, it remains to show the existence of \(s^{\varepsilon }\). Note that

$$\begin{aligned} Q(\xi _0)=V_{\varepsilon }(\xi _0)\in [1-2\varepsilon ,1-\varepsilon ]. \end{aligned}$$

By (2.14), we have

$$\begin{aligned} 1-2\varepsilon \le Q(\xi _0) \le Q(\xi _1)\le V_{\varepsilon }(\xi _1) \le 1-\varepsilon . \end{aligned}$$
(2.15)

By Lemma 2.1, \(q_s(-z(s))\) is a continuous and increasing function of s for \(s\in (0, s^*_{\mu _2})\), and \(q_s(-z(s))\rightarrow 1\) as \(s\rightarrow s^*_{\mu _2}\). Therefore, in view of (2.15), for each small \(\varepsilon >0\) there exists \(s^\epsilon \in (0, s^*_{\mu _2})\) such that

$$\begin{aligned} Q(\xi _1)=q_{s^\varepsilon }(-z(s^\varepsilon )). \end{aligned}$$

Moreover, \(s^{\varepsilon }\rightarrow s^*_{\mu _2}\) as \(\varepsilon \rightarrow 0\). Thus (2.13) holds. The proof of Lemma 2.3 is now complete.\(\square \)

We now consider the case \(\xi \le 0\). We define

$$\begin{aligned} Q^{\varepsilon }_{-}(\xi ):={\left\{ \begin{array}{ll} V_{\varepsilon }(\xi ) &{} \text{ for } \xi _2\le \xi \le 0,\\ V_{\varepsilon }(\xi )+\gamma (\xi -\xi _2)V_{\varepsilon }(\xi _2)&{} \text{ for } -\infty <\xi \le \xi _{2}, \end{array}\right. } \end{aligned}$$
(2.16)

where

$$\begin{aligned} \gamma (\xi )=\gamma (\xi ;\lambda ):=-\big (e^{\lambda \xi }+e^{-\lambda \xi }-2\big ), \end{aligned}$$

with \(\lambda >0\) and \(\xi _2<0\) to be determined below.

Lemma 2.4

Let \(\varepsilon >0\) be sufficiently small and \((U_\varepsilon ,V_\varepsilon )\) be the solution of (2.8) with \(c=c^{\varepsilon }_{\mu _1}\). Then there exist \(\lambda =\lambda (\varepsilon )>0\) sufficiently small and \(\xi _2=\xi _2(\varepsilon )<0\) such that \(V_{\varepsilon }(\xi _2)=Q^{\varepsilon }_{-}(\xi _2)<\varepsilon \) and

$$\begin{aligned} Q^{\varepsilon }_{-}\in C^1((-\infty ,0])\cap C^2((-\infty ,0]\backslash \{\xi _2\}),\ (Q^{\varepsilon }_{-})'(\xi )>0\ \text{ for } \text{ all }\, \xi <0. \end{aligned}$$
(2.17)

Moreover, there exists a unique \(\xi _3\in (-\infty ,\xi _2)\) depending on \(\xi _2\) and \(\lambda \) such that \(Q^{\varepsilon }_{-}(\xi _3)=0\) and the following inequality holds:

$$\begin{aligned} c^{\varepsilon }_{\mu _1} (Q^{\varepsilon }_{-})'+(Q^{\varepsilon }_{-})''+Q^{\varepsilon }_{-} (1-Q^{\varepsilon }_{-}-hU_{\varepsilon })\ge 0\ \text{ for }\, \xi \in (\xi _3,0)\backslash \{\xi _2\}. \end{aligned}$$
(2.18)

Proof

We write \(Q^{\varepsilon }_-=Q\) for convenience of notation. Using \(\gamma '(0)=0\), it is straightforward to see that

$$\begin{aligned} Q\in C^1((-\infty ,0])\cap C^2((-\infty ,0]\backslash \{\xi _2\}) \end{aligned}$$

for any choice of \(\xi _2<0\). Since \(V'_{\varepsilon }>0\) in \(\mathbb {R}\) and \(\gamma '(\xi )>0\) for \(\xi <0\), we have

$$\begin{aligned} Q'(\xi )={\left\{ \begin{array}{ll} V'_{\varepsilon }(\xi )>0 &{} \text{ for } \xi _2\le \xi \le 0,\\ V'_{\varepsilon }(\xi )+\gamma '(\xi -\xi _2)V_{\varepsilon }(\xi _2)>0 &{} \text{ for } \xi \le \xi _2. \end{array}\right. } \end{aligned}$$

Hence (2.17) holds for any choice of \(\xi _2<0\).

For any given \(\lambda >0\), we take \(K_{\lambda }>0\) such that

$$\begin{aligned} e^{ K_{\lambda }\lambda }+e^{- K_{\lambda }\lambda }-2>e^{-K_{\lambda }\mu }, \end{aligned}$$
(2.19)

where \(\mu >0\) is given in (2.9). By (2.9), we have

$$\begin{aligned} \frac{V_{\varepsilon }(\xi _2-K_{\lambda })}{V_{\varepsilon }(\xi _2)}\rightarrow e^{-K_{\lambda }\mu }\quad \text{ as } \,\xi _2\rightarrow -\infty . \end{aligned}$$

Combining this with (2.19) and \((U_{\varepsilon },V_{\varepsilon })(-\infty )=(1+\varepsilon ,0)\), we can take \(\xi _2=\xi _2(\lambda )\) close to \(-\infty \) such that

$$\begin{aligned}&Q(\xi _2-K_{\lambda })= V_{\varepsilon }(\xi _2)\left[ \frac{V_{\varepsilon }(\xi _2-K_{\lambda })}{V_{\varepsilon }(\xi _2)} -(e^{ K_{\lambda }\lambda }+e^{- K_{\lambda }\lambda }-2)\right]<0,\nonumber \\&V_{\varepsilon }(\xi _2)<\min \left\{ \varepsilon ,\frac{\varepsilon }{-\gamma (-K_{\lambda })}\right\} ,\quad U_{\varepsilon }(\xi _2)>1. \end{aligned}$$
(2.20)

On the other hand, since \(Q(\xi _2)=V_{\varepsilon }(\xi _2)>0\), we can apply the intermediate value theorem to obtain \(\xi _3\in (\xi _2-K_{\lambda },\xi _2)\) such that \(Q(\xi _3)=0\). Such \(\xi _3\) is unique because of the monotonicity of Q.

Next we show that, if \(\lambda >0\) has been chosen small enough, with the above determined \(\xi _2\) and \(\xi _3\), (2.18) holds. To do this, we consider (2.18) for \(\xi \in (\xi _3,\xi _2)\) and \(\xi \in [\xi _2,0)\) separately.

For \(\xi \in (\xi _3,\xi _2)\), we write \(V_{\varepsilon }=V_{\varepsilon }(\xi )\), \(\gamma =\gamma (\xi -\xi _2)\) and obtain

$$\begin{aligned}&c^{\varepsilon }_{\mu _1}Q'+Q''+Q(1-Q-hU_{\varepsilon })\\&\quad =c^{\varepsilon }_{\mu _1}V'_{\varepsilon }+V''_{\varepsilon }+V_{\varepsilon }(\xi _2)\big [c^{\varepsilon }_{\mu _1}\gamma '+\gamma ''\big ] +\big [V_{\varepsilon }+\gamma V_{\varepsilon }(\xi _2)\big ]\big [1-V_{\varepsilon }-\gamma V_{\varepsilon }(\xi _2)-hU_{\varepsilon }\big ]\\&\quad =-V_{\varepsilon }(1-\varepsilon -V_{\varepsilon }-hU_{\varepsilon })+V_{\varepsilon }(\xi _2)\big [c^{\varepsilon }_{\mu _1}\gamma '+\gamma ''\big ]\\&\quad +\big [V_{\varepsilon }+\gamma V_{\varepsilon }(\xi _2)\big ]\big [1-V_{\varepsilon }-\gamma V_{\varepsilon }(\xi _2)-hU_{\varepsilon }\big ]\\&\quad \ge \varepsilon V_{\varepsilon }+V_{\varepsilon }(\xi _2)\big [c^{\varepsilon }_{\mu _1}\gamma '+\gamma ''\big ] -\gamma V_{\varepsilon }(\xi _2) \big [hU_{\varepsilon }+\gamma V_{\varepsilon }(\xi _2)-1\big ]. \end{aligned}$$

By (2.20), for \(\xi \in (\xi _3,\xi _2)\),

$$\begin{aligned} U_{\varepsilon }\ge 1,\; 0>\gamma V_{\varepsilon }(\xi _2)=\gamma (\xi -\xi _2)V_{\varepsilon }(\xi _2)\ge \gamma (-K_{\lambda })V_{\varepsilon }(\xi _2)>-\varepsilon . \end{aligned}$$

It follows that

$$\begin{aligned} \begin{array}{ll} &{}c^{\varepsilon }_{\mu _1}Q'+Q''+Q(1-Q-hU_{\varepsilon })\\ &{}\quad \ge \varepsilon V_{\varepsilon }+V_{\varepsilon }(\xi _2)\big [c^{\varepsilon }_{\mu _1}\gamma '+\gamma ''\big ]-\gamma V_{\varepsilon }(\xi _2)\big [h-\varepsilon -1\big ] \text{ for } \xi _3<\xi <\xi _2. \end{array} \end{aligned}$$
(2.21)

Using (2.9), we see that the right side of (2.21) is nonnegative if the following inequality holds:

$$\begin{aligned} \varepsilon e^{\mu (\xi -\xi _2)}+c^{\varepsilon }_{\mu _1}\gamma '+\gamma ''-[h-\varepsilon -1]\gamma >0 \text{ for } \xi _3<\xi <\xi _2. \end{aligned}$$
(2.22)

We shall show that (2.22) indeed holds provided that \(\lambda >0\) has been chosen small enough. To check this, for \(t=\xi _2-\xi \ge 0\) we define

$$\begin{aligned} \begin{array}{ll} F(t):=&{}\varepsilon e^{-\mu t} -c^{\varepsilon }_{\mu _1}\lambda (e^{-\lambda t}-e^{\lambda t})-\lambda ^2(e^{-\lambda t}+e^{\lambda t}) \\ &{}\quad +[h-\varepsilon -1](e^{-\lambda t}+e^{\lambda t}-2). \end{array} \end{aligned}$$
(2.23)

By Lemma 2.5 below, we can take small \(\lambda \) depending only on \(\varepsilon \) such that \(F(t)>0\) for all \(t\ge 0\). This implies (2.22), and so (2.18) holds for \(\xi \in (\xi _3,\xi _2]\).

For \(\xi \in (\xi _2,0)\), we have \(Q(\xi )=V_{\varepsilon }(\xi )\). From (2.8), it is straightforward to see that (2.18) holds for \(\xi \in (\xi _2, 0)\). This completes the proof.\(\square \)

Lemma 2.5

Let \(\varepsilon >0\) and \(F:[0,\infty )\rightarrow \mathbb {R}\) be defined by (2.23). Then \(F(t)>0\) for all \(t\ge 0\) as long as \(\lambda >0\) is small enough.

Proof

The argument is similar to [13, Lemma 3.3]. Let \(\kappa :=h-\varepsilon -1\). Note that \(\kappa >0\) since \(h>1\). By direct computations,

$$\begin{aligned}&F'(t)=-\varepsilon \mu e^{-\mu t}+\lambda ^2c^{\varepsilon }_{\mu _1}(e^{\lambda t} +e^{-\lambda t})+(\kappa \lambda -\lambda ^3)(e^{\lambda t}-e^{-\lambda t}),\\&F''(t)=\varepsilon \mu ^2 e^{-\mu t}+\lambda ^3c^{\varepsilon }_{\mu _1}(e^{\lambda t} -e^{-\lambda t})+(\kappa \lambda ^2-\lambda ^4)(e^{\lambda t}+e^{-\lambda t}), \end{aligned}$$

where \(\mu >0\) is given in (2.9). By taking

$$\begin{aligned} \lambda \in \left( 0,\min \left\{ \sqrt{\frac{\varepsilon }{2}},\ \sqrt{\frac{\varepsilon \mu }{2c^{\varepsilon }_{\mu _1}}},\ \sqrt{\kappa }\right\} \right) , \end{aligned}$$

we have \(F(0)>0\), \(F'(0)<0\), \(F'(\infty )=\infty \) and \(F''(t)>0\) for \(t\ge 0\). It follows that F has a unique minimum point \(t=t_{\lambda }\). Consequently, to finish the proof of Lemma 2.5, it suffices to show the following:

$$\begin{aligned} F(t_{\lambda })\ge 0\, \hbox {as long as}\, \lambda >0 \hbox { is small}. \end{aligned}$$
(2.24)

By direct calculation, \(F'(t_{\lambda })=0\) implies that

$$\begin{aligned} \varepsilon \mu e^{-\mu t_{\lambda }}=\lambda ^2c^{\varepsilon }_{\mu _1}(e^{\lambda t_{\lambda }} +e^{-\lambda t_{\lambda }})+(\kappa \lambda -\lambda ^3)(e^{\lambda t_{\lambda }}-e^{-\lambda t_{\lambda }}). \end{aligned}$$
(2.25)

From (2.25), we easily deduce \(t_\lambda \rightarrow \infty \) as \(\lambda \rightarrow 0\); otherwise, the left-hand side of (2.25) would be bounded below by a positive constant while the right-hand side converges to 0 as \(\lambda \rightarrow 0\) along some sequence. Multiplying both sides of (2.25) by \(t_\lambda \), we obtain, by a similar consideration, that \(\lambda t_\lambda \) is bounded from above by a positive constant as \(\lambda \rightarrow 0\). It then follows that

$$\begin{aligned} \kappa \lambda t_\lambda e^{\lambda t_\lambda }\rightarrow 0 \text{ as } \lambda \rightarrow 0, \end{aligned}$$

which implies \(\lambda t_\lambda \rightarrow 0\) as \(\lambda \rightarrow 0\). We thus obtain

$$\begin{aligned} t_{\lambda }\rightarrow \infty ,\quad \lambda t_{\lambda }\rightarrow 0\quad \text{ as }\quad \lambda \rightarrow 0^+. \end{aligned}$$
(2.26)

It follows that

$$\begin{aligned} \lim _{\lambda \rightarrow 0^+}\frac{e^{\lambda t_{\lambda }}-e^{-\lambda t_{\lambda }}}{ 2\lambda t_{\lambda } } =\lim _{\lambda \rightarrow 0^+}\frac{e^{\lambda t_{\lambda }}+e^{-\lambda t_{\lambda }}}{ 2 } =1. \end{aligned}$$
(2.27)

We now prove (2.24). Substituting (2.25) into F, we have

$$\begin{aligned} F(t_{\lambda })= & {} (e^{\lambda t_{\lambda }}-e^{-\lambda t_{\lambda }}) \left( c^{\varepsilon }_{\mu _1} \lambda -\frac{\lambda ^3}{\mu }+\frac{\kappa }{\mu }\lambda \right) \\&+\,\lambda ^2(e^{\lambda t_{\lambda }}+e^{-\lambda t_{\lambda }})\left( \frac{c^{\varepsilon }_{\mu _1}}{\mu }-1\right) + \kappa (e^{\lambda t_{\lambda }}+e^{-\lambda t_{\lambda }}-2) \end{aligned}$$

Using (2.26) and (2.27), for small \(\lambda >0\),

$$\begin{aligned} F(t_{\lambda })\ge & {} 2\lambda t_{\lambda }[1+o(1)]\left( c^{\varepsilon }_{\mu _1}\lambda -\frac{\lambda ^3}{\mu }+\frac{\kappa }{\mu }\lambda \right) + \lambda ^2[2+o(1)]\left( \frac{c^{\varepsilon }_{\mu _1}}{\mu }-1\right) \\= & {} 2 \lambda ^2t_\lambda \left[ c_{\mu _1}^{\varepsilon }+\frac{\kappa }{\mu }+o(1)\right] \\> & {} 0. \end{aligned}$$

This completes the proof.\(\square \)
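The positivity of F for small \(\lambda \) can also be checked numerically for concrete parameter values. In the sketch below all numbers are illustrative and are not taken from the paper: \(\varepsilon \), \(c^{\varepsilon }_{\mu _1}\) and h are chosen arbitrarily, \(\mu \) is computed from (2.9), and \(\kappa =h-\varepsilon -1\) as in the proof.

```python
import numpy as np

# Illustrative values only; none of these numbers come from the paper.
eps, c, h, lam = 0.1, 1.0, 2.0, 0.01
kappa = h - eps - 1.0
mu = (-c + np.sqrt(c * c + 4.0 * (h * (1.0 + eps) - 1.0 + eps))) / 2.0

t = np.linspace(0.0, 50.0, 5001)
F = (eps * np.exp(-mu * t)
     - c * lam * (np.exp(-lam * t) - np.exp(lam * t))
     - lam**2 * (np.exp(-lam * t) + np.exp(lam * t))
     + kappa * (np.exp(-lam * t) + np.exp(lam * t) - 2.0))
print(F.min())  # positive (about 5e-3 here), consistent with Lemma 2.5
```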

Combining (2.10) and (2.16) we now define

$$\begin{aligned} \widehat{Q}^{\varepsilon }(\xi ):={\left\{ \begin{array}{ll} Q^\varepsilon _{+}(\xi ) &{} \text{ for } \xi \in [0,\xi _1],\\ Q^\varepsilon _{-}(\xi ) &{} \text{ for } \xi \in [\xi _3,0],\\ 0 &{} \text{ for } \xi \in (-\infty ,\xi _3], \end{array}\right. } \end{aligned}$$
(2.28)

where \(\xi _1>0\) and \(\xi _3<0\) are given in Lemmas 2.3 and 2.4, respectively. We then define

$$\begin{aligned} W_{\varepsilon }(r):={\left\{ \begin{array}{ll} 0 &{} \text{ for } r\in [\xi _1+z(s^{\varepsilon }),\infty ),\\ q_{s^\varepsilon }(r-\xi _1-z(s^{\varepsilon })) &{} \text{ for } r\in [\xi _1, \xi _1+z(s^{\varepsilon })],\\ \widehat{Q}^{\varepsilon }(r) &{} \text{ for } r\in (-\infty ,\xi _1], \end{array}\right. } \end{aligned}$$

with \(s^{\varepsilon }\in (0, s^*_{\mu _2})\) given in Lemma 2.3. Then clearly \(W_{\varepsilon }\in C(\mathbb {R})\) has compact support \([\xi _3,\xi _1+z(s^{\varepsilon })]\).
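Indeed, the continuity of \(W_{\varepsilon }\) follows from the matching at the two junction points: by (2.13) and Lemma 2.1,

$$\begin{aligned} W_{\varepsilon }(\xi _1^-)=\widehat{Q}^{\varepsilon }(\xi _1)=Q^{\varepsilon }_+(\xi _1)=q_{s^\varepsilon }(-z(s^{\varepsilon }))=W_{\varepsilon }(\xi _1^+),\qquad W_{\varepsilon }\big ((\xi _1+z(s^{\varepsilon }))^-\big )=q_{s^\varepsilon }(0)=0, \end{aligned}$$

and \(W_{\varepsilon }>0\) exactly on \((\xi _3,\xi _1+z(s^{\varepsilon }))\).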

We are now ready to describe the conditions in Theorem 1 on the initial functions \(u_0\) and \(v_0\). Since \(s^{\varepsilon }\rightarrow s^*_{\mu _2}>c^*_{\mu _1}\) and \(\xi _0(\varepsilon )\rightarrow \infty \) as \(\varepsilon \rightarrow 0\), where \(\xi _0(\varepsilon )\) is defined in Lemma 2.3, we can fix \(\varepsilon _0>0\) small so that

$$\begin{aligned} s^{\varepsilon }-c^{\varepsilon }_{\mu _1}>\frac{N-1}{\xi _0(\varepsilon )}>0 \text{ for } \text{ all } \varepsilon \in (0,\varepsilon _0], \end{aligned}$$
(2.29)

where N is the space dimension. Our first condition is

(B1): For some \(z_0>0\) and small \(\varepsilon _0>0\) as above,

$$\begin{aligned} u_0(r)\le U_{\varepsilon _0}(r-z_0), \ v_0(r)\ge W_{\varepsilon _0}(r-z_0) \text{ for } r\ge 0. \end{aligned}$$

We note that (B1) implies

$$\begin{aligned} s_1^0\le z_0 \text{ and } s_2^0\ge \xi _1(\varepsilon _0)+z(s^{\varepsilon _0})+z_0. \end{aligned}$$

Our second condition is

(B2): \(s_1^{0}\ge R^*\sqrt{\frac{d}{r(1-k)}}\), where \(R^*>0\) is defined in Corollary 1.

Since \(\limsup _{t\rightarrow \infty } v(r,t)\le 1\) uniformly in \(r\in [0, s_2(t)]\), it is easy to see that (B2) guarantees \(s_{1,\infty }=\lim _{t\rightarrow \infty } s_1(t)=\infty \) (see also the proof of Theorem 2).

We are now ready to prove Theorem 1, which we restate as

Theorem 3

Suppose that (1.2), (1.7), (B1) and (B2) hold. Then the solution \((u,v,s_1,s_2)\) of (P) satisfies

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{s_1(t)}{t}=c^*_{\mu _1},\quad \lim _{t\rightarrow \infty }\frac{s_2(t)}{t}=s^*_{\mu _2}, \end{aligned}$$

and for every small \(\epsilon >0\), (1.8), (1.9) hold.

Before giving the proof of Theorem 3, let us first observe how Corollary 1 follows easily from Theorem 3. It suffices to show that assumptions (i) and (ii) in Corollary 1 imply (B1) and (B2). Recall that

$$\begin{aligned} U_{\varepsilon _0}(-\infty )=1+\varepsilon _0 \text{ and } U_{\varepsilon _0}(\xi )=0 \text{ for } \xi \ge 0. \end{aligned}$$

Therefore, for fixed \(s_1^0\ge R^*\sqrt{\frac{d}{r(1-k)}}\), there exists \(C_1>0\) large such that

$$\begin{aligned} U_{\varepsilon _0}(r-z_0)\ge 1 \text{ for } r\in [0, s_1^0],\; z_0\ge C_1. \end{aligned}$$

Hence for any given \(u_0\) satisfying (i) of Corollary 1, (B2) and the first inequality in (B1) are satisfied if we take \(z_0\ge C_1\).

From the definition of \(W_{\varepsilon _0}\) we see that

$$\begin{aligned} W_{\varepsilon _0}(\xi )\le 1 \text{ for } \,\xi \in \mathbb {R}^1, \text{ and } W_{\varepsilon _0}(\xi )=0 \text{ for } \xi \not \in [\xi _3, \xi _1+ z(s^{\varepsilon _0})]. \end{aligned}$$

If we take

$$\begin{aligned} C_0:=\max \big \{ C_1, \;\xi _1+z(s^{\varepsilon _0})-\xi _3\big \}, \end{aligned}$$

and for \(x_0\ge C_0\) and \(L\ge C_0\), we let \(z_0:=x_0-\xi _3\), then

$$\begin{aligned} z_0\ge x_0\ge C_1,\; [\xi _3, \xi _1+ z(s^{\varepsilon _0})]\subset [x_0-z_0, x_0+L-z_0], \end{aligned}$$

and hence \(W_{\varepsilon _0}(r-z_0)=0\) for \(r\not \in [x_0, x_0+L]\). Thus when (ii) in Corollary 1 holds, we have

$$\begin{aligned} v_0(r)\ge W_{\varepsilon _0}(r-z_0) \text{ for } r\ge 0, \end{aligned}$$

which is the second inequality in (B1). This proves what we wanted.

Proof of Theorem 3

We break the rather long proof into 4 steps.

Step 1: We show

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{s_1(t)}{t}\le c^{\varepsilon _0}_{\mu _1},\quad \liminf _{t\rightarrow \infty }\frac{s_2(t)}{t}\ge s^{\varepsilon _0}-\sigma _{\varepsilon _0}, \end{aligned}$$
(2.30)

where

$$\begin{aligned} \sigma _\varepsilon :=\frac{N-1}{\xi _0(\varepsilon )} \text{ for } \varepsilon \in (0,\varepsilon _0]. \end{aligned}$$

By (2.29), we have

$$\begin{aligned} c^{\varepsilon }_{\mu _1}<s^{\varepsilon }-\sigma _{\varepsilon } \text{ for } \varepsilon \in (0,\varepsilon _0]. \end{aligned}$$
(2.31)

We prove (2.30) by constructing suitable functions \((\overline{U}(r,t), \underline{V}(r,t), l(t), g(t))\) which satisfy certain differential inequalities that enable us to use a comparison argument to relate them to \((u(r,t),v(r,t), s_1(t), s_2(t))\). Set

$$\begin{aligned}&l(t):=c^{\varepsilon _0}_{\mu _1}t+z_0,\quad g(t):=\left( s^{\varepsilon _0}-\sigma _{\varepsilon _0}\right) t+\xi _1+z(s^{{\varepsilon _0}})+z_0,\\&\overline{U}(r,t):=U_{{\varepsilon _0}}(r-l(t)) \text{ for } r\in [0,l(t)],\ t\ge 0,\\&\underline{V}(r,t):={\left\{ \begin{array}{ll} \widehat{Q}^{\varepsilon _0}(r-l(t)) &{} \text{ for } r\in [0,l(t)+\xi _1],\ t\ge 0,\\ \widehat{Q}^{\varepsilon _0}(\xi _1) &{} \text{ for } r\in [l(t)+\xi _1,g(t)-z(s^{{\varepsilon _0}})],\ t\ge 0,\\ q_{s^{\varepsilon _0}}(r-g(t)) &{} \text{ for } r\in [g(t)-z(s^{{\varepsilon _0}}), g(t)],\ t\ge 0, \end{array}\right. } \end{aligned}$$

where \(\widehat{Q}^{\varepsilon _0}\) is defined in (2.28) and \(\xi _1=\xi _1(\varepsilon _0)\) is given in Lemma 2.3. We note that

$$\begin{aligned} \xi _1(\varepsilon _0)>\xi _0(\varepsilon _0)=\frac{N-1}{\sigma _{\varepsilon _0}}. \end{aligned}$$
(2.32)

By the assumption (B1), we have

$$\begin{aligned} u(r,0)\le \overline{U}(r,0) \text{ for } r\in [0,s^0_{1}];\ v(r,0)\ge \underline{V}(r,0) \text{ for } r\in [0,s^0_{2}]. \end{aligned}$$
(2.33)

We now show the wanted differential inequality for \(\overline{U}\):

$$\begin{aligned} \overline{U}_t-d\overline{U}_{rr}-\frac{d(N-1)}{r}\overline{U}_{r}-r\overline{U}(1-\overline{U}-k\underline{V})\ge 0 \text{ for } 0\le r\le l(t),\ t>0. \end{aligned}$$
(2.34)

Using \(\overline{U}_{r}=U_{\varepsilon _0}'<0\), direct computation gives us

$$\begin{aligned} \begin{array}{rl} J(r,t):=&{}\overline{U}_t-d\overline{U}_{rr}-\frac{d(N-1)}{r}\overline{U}_{r}-r\overline{U}(1-\overline{U}-k\underline{V})\\ \ge &{} -c^{\varepsilon _0}_{\mu _1} U_{{\varepsilon _0}}'-d U_{{\varepsilon _0}}''-rU_{{\varepsilon _0}}(1-U_{{\varepsilon _0}}-k\underline{V})\\ =&{} rU_{{\varepsilon _0}}(1+{\varepsilon _0}-U_{{\varepsilon _0}}-kV_{{\varepsilon _0}})-rU_{{\varepsilon _0}}(1-U_{{\varepsilon _0}}-k\underline{V})\\ =&{} rU_{{\varepsilon _0}}({\varepsilon _0}-kV_{{\varepsilon _0}}+k\underline{V}). \end{array} \end{aligned}$$
(2.35)

When \(\overline{U}>0\), we have \(r\le l(t)\) and so we can divide into two cases: when \(r-l(t)\in (\xi _2,0)\), we have

$$\begin{aligned} \underline{V}(r,t)=\widehat{Q}^{{\varepsilon _0}}_-(r-l(t))=V_{{\varepsilon _0}}(r-l(t)). \end{aligned}$$

Hence from (2.35) we see that \(J(r,t)>0\). When \(r-l(t)\in (-l(t),\xi _2)\), by Lemma 2.4 with \(\varepsilon =\varepsilon _0\), we have

$$\begin{aligned} kV_{{\varepsilon _0}}(r-l(t))<V_{{\varepsilon _0}}(r-l(t))<V_{{\varepsilon _0}}(\xi _2)<{\varepsilon _0}. \end{aligned}$$

Again we obtain from (2.35) that \(J(r,t)>0\). Hence (2.34) holds.

We next establish the desired differential inequality for \(\underline{V}\):

$$\begin{aligned} \underline{V}_t-\underline{V}_{rr}-\frac{N-1}{r}\underline{V}_{r}-\underline{V}(1-\underline{V}-h\overline{U})\le 0 \text{ for } 0\le r\le g(t),\ t>0. \end{aligned}$$
(2.36)

We divide the proof into three parts.

(i) For \(r\in [0,l(t)+\xi _1]\), using \(\underline{V}_r(r,t)=(\widehat{Q}^{\varepsilon _0})'(r-l(t))\ge 0\) together with Lemmas 2.3 and 2.4 (applied with \(\varepsilon =\varepsilon _0\)), we obtain

$$\begin{aligned} \underline{V}_t-\underline{V}_{rr}-\frac{N-1}{r}\underline{V}_{r}-\underline{V}(1-\underline{V}-h\overline{U}) \le -\frac{N-1}{r}\underline{V}_r\le 0. \end{aligned}$$

(ii) For \(r\in [l(t)+\xi _1, g(t)-z(s^{{\varepsilon _0}})]\), we have \(\overline{U}\equiv 0\) and \(\underline{V}\equiv \widehat{Q}^{\varepsilon _0}(\xi _1)<1\). So clearly (2.36) holds.

(iii) For \(r\in (g(t)-z(s^{{\varepsilon _0}}), g(t))\), we observe that \(r\ge g(t)-z(s^{{\varepsilon _0}})\ge \xi _1\). Also, by (2.32), we have

$$\begin{aligned} \sigma _{\varepsilon _0}-\frac{N-1}{r}\ge \sigma _{\varepsilon _0}-\frac{N-1}{\xi _1}>0. \end{aligned}$$

Together with the fact that \((q_{s^{\varepsilon _0}})'(r-g(t))<0\) for \(r\in (g(t)-z(s^{{\varepsilon _0}}), g(t))\) and \(t>0\), we have

$$\begin{aligned}&\underline{V}_t-\underline{V}_{rr}-\frac{N-1}{r}\underline{V}_{r}-\underline{V}(1-\underline{V}-h\overline{U})\\&\quad = -g'(t)q_{s^{\varepsilon _0}}'-q_{s^{\varepsilon _0}}''-\frac{N-1}{r}q_{s^{\varepsilon _0}}' -q_{s^{\varepsilon _0}}(1-q_{s^{\varepsilon _0}})\\&\quad =\left( \sigma _{\varepsilon _0}-\frac{N-1}{r}\right) q_{s^{\varepsilon _0}}'(r-g(t))\\&\quad \le 0. \end{aligned}$$
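In the first equality above we used \(\overline{U}\equiv 0\) on this range (since \(r\ge g(t)-z(s^{{\varepsilon _0}})\ge l(t)+\xi _1\), cf. (2.31)); the last equality follows from \(g'(t)=s^{\varepsilon _0}-\sigma _{\varepsilon _0}\) together with the equation satisfied by the semi-wave profile \(q_{s^{\varepsilon _0}}\), recorded here for convenience:

$$\begin{aligned} q_{s^{\varepsilon _0}}''+s^{\varepsilon _0}q_{s^{\varepsilon _0}}'+q_{s^{\varepsilon _0}}(1-q_{s^{\varepsilon _0}})=0. \end{aligned}$$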

We have thus proved (2.36).

In order to use the comparison principle to compare \((u,v,s_1, s_2)\) with \((\overline{U},\underline{V}, l, g)\), we note that on the boundary \(r=0\),

$$\begin{aligned} \overline{U}_r(0,t)<0,\quad \underline{V}_r(0,t)>0 \text{ for } t>0. \end{aligned}$$
(2.37)

Regarding the free boundary conditions, we have

$$\begin{aligned}&l'(t)=c_{\mu _1}^{\varepsilon _0}=-\mu _1U_{\varepsilon _0}'(0)=-\mu _1\overline{U}_r(l(t),t) \text{ for } t>0, \end{aligned}$$
(2.38)
$$\begin{aligned}&g'(t)=s^{\varepsilon _0}-\sigma _{\varepsilon _0}<s^*_{\mu _2}=-\mu _2 (q_{s^{\varepsilon _0}})'(0)=-\mu _2\underline{V}_r(g(t),t) \text{ for } t>0. \end{aligned}$$
(2.39)

By (2.33), (2.34), (2.36)-(2.39), we can apply the comparison principle ([33, Lemma 3.1] with minor modifications) to deduce that

$$\begin{aligned}&s_1(t)\le l(t),\; u(r,t)\le \overline{U}(r,t) \text{ for } r\in [0, s_1(t)),\; t>0;\\&s_2(t)\ge g(t),\; v(r,t)\ge \underline{V}(r,t) \text{ for } r\in [0, g(t)],\; t>0. \end{aligned}$$

In particular,

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{s_1(t)}{t}\le \lim _{t\rightarrow \infty }\frac{l(t)}{t}= c^{\varepsilon _0}_{\mu _1},\quad \liminf _{t\rightarrow \infty }\frac{s_2(t)}{t}\ge \lim _{t\rightarrow \infty }\frac{g(t)}{t}=s^{\varepsilon _0}-\sigma _{\varepsilon _0}. \end{aligned}$$

We have thus proved (2.30).

Step 2: We refine the definitions of \((\overline{U}(r,t), \underline{V}(r,t), l(t), g(t))\) in Step 1 to obtain the improved estimates

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{s_1(t)}{t}\le c^{*}_{\mu _1},\quad \liminf _{t\rightarrow \infty }\frac{s_2(t)}{t}\ge s^{*}_{\mu _2}. \end{aligned}$$
(2.40)

For any given \(\varepsilon \in (0,\varepsilon _0)\), we redefine \((l,g,\overline{U},\underline{V})\) as

$$\begin{aligned}&l(t)=c^{\varepsilon }_{\mu _1}(t-T_{\varepsilon })+z_1,\quad g(t)=(s^{\varepsilon }-\sigma _{\varepsilon })(t-T_{\varepsilon })+\xi _1+z(s^{{\varepsilon }})+z_1,\\&\overline{U}(r,t)=U_{\varepsilon }(r-l(t)) \text{ for } r\in [0,l(t)],\ t\ge 0,\\&\underline{V}(r,t)={\left\{ \begin{array}{ll} \widehat{Q}^{\varepsilon }(r-l(t)) &{} \text{ for } r\in [0,l(t)+\xi _1],\ t\ge 0,\\ \widehat{Q}^{\varepsilon }(\xi _1) &{} \text{ for } r\in [l(t)+\xi _1,g(t)-z(s^{\varepsilon })],\ t\ge 0,\\ q_{s^\varepsilon }(r-g(t)) &{} \text{ for } \, r\in [g(t)-z(s^{\varepsilon }), g(t)],\ t\ge 0,\\ 0 &{} \text{ for } r\in [g(t),\infty ),\ t\ge 0, \end{array}\right. } \end{aligned}$$

where \(\xi _1=\xi _1(\varepsilon )\) is given in Lemma 2.3 and \(z_1, T_{\varepsilon }\gg 1\) are to be determined later.

We want to show that there exist \(z_1\gg 1\) and \(T_{\varepsilon }\gg 1\) such that

$$\begin{aligned} u(r,T_{\varepsilon })\le \overline{U}(r,T_{\varepsilon }) \text{ for } r\in [0,s_1(T_{\varepsilon })];\ v(r,T_{\varepsilon })\ge \underline{V}(r,T_{\varepsilon }) \text{ for } r\in [0,s_2(T_{\varepsilon })]. \end{aligned}$$
(2.41)

Since \(\limsup _{t\rightarrow \infty }u(r,t)\le 1\) uniformly in r, there exists \(T_{1,\varepsilon }\) such that

$$\begin{aligned} u(r,t)\le 1+\varepsilon /2\quad \text{ for }\,r\in [0,s_1(t)]\, \text{ and }\, t\ge T_{1,\varepsilon }. \end{aligned}$$
(2.42)

By (2.31) and Lemma 2.2, we can find \(0<\nu \ll 1\) and then \(T_{2,\varepsilon }\gg 1\) such that

$$\begin{aligned}&c_1:=c^{\varepsilon _0}_{\mu _1}+\nu <s^{\varepsilon _0}-\sigma _{\varepsilon _0}-\nu =:c_2, \end{aligned}$$
(2.43)
$$\begin{aligned}&v(r,t)\ge 1-\varepsilon \quad \text{ for }\quad r\in [c_1 t,c_2 t]\quad \hbox { and} \quad t\ge T_{2,\varepsilon }. \end{aligned}$$
(2.44)

We now prove (2.41) by making use of (2.42) and (2.44). By the definition of \(\underline{V}(r,t)\), we see that \(\Vert \underline{V}(\cdot ,t)\Vert _{L^{\infty }}<1-\varepsilon \) for all \(t>0\). Also, note that \(\underline{V}(\cdot ,T_{\varepsilon })=W_{\varepsilon }(\cdot -z_1)\) has compact support \([\xi _3+z_1,\xi _1+z(s^\varepsilon )+z_1]\), whose length \(\xi _1-\xi _3+z(s^{\varepsilon })\) is independent of the choice of \(T_{\varepsilon }\).

Next, we show the following claim: there exist \(z_1\gg 1\) and \(T_{\varepsilon }\gg 1\) such that

$$\begin{aligned}{}[\xi _3+z_1,\xi _1+z(s^\varepsilon )+z_1]\subset [c_1T_\varepsilon ,c_2T_\varepsilon ]. \end{aligned}$$

Since \(U_{\varepsilon }(-\infty )=1+\varepsilon \), we can find \(T_{3,\varepsilon }\gg 1\) such that

$$\begin{aligned} U_{\varepsilon }(r)>1+\frac{\varepsilon }{2} \text{ for } r\le -T_{3,\varepsilon }. \end{aligned}$$

By (2.30), we can find \(T_{4,\varepsilon }\gg 1\) so that

$$\begin{aligned} s_1(t)\le \left( c_{\mu _1}^{\varepsilon _0}+\frac{\nu }{2}\right) t=\left( c_1-\frac{\nu }{2}\right) t \text{ for } t\ge T_{4,\varepsilon }. \end{aligned}$$

We now take \(z_1:=c_1T_\varepsilon -\xi _3\) with \(T_{\varepsilon }>\max \{T_{1,\varepsilon },T_{2,\varepsilon }, T_{4,\varepsilon }\}\) chosen such that

$$\begin{aligned}&s_1(T_\varepsilon )-c_1T_\varepsilon +\xi _3<-\frac{\nu }{2}T_\varepsilon +\xi _3(\varepsilon )<-T_{3,\varepsilon },\\&\xi _1+z(s^\varepsilon )+c_1T_\varepsilon -\xi _3<c_2T_\varepsilon . \end{aligned}$$

It follows that

$$\begin{aligned}{}[\xi _3+z_1, \xi _1+z(s^\varepsilon )+z_1]=[c_1T_\varepsilon , \xi _1+z(s^\varepsilon )+c_1T_\varepsilon -\xi _3]\subset [c_1T_\varepsilon , c_2T_\varepsilon ], \end{aligned}$$

and

$$\begin{aligned} \overline{U}(r,T_{\varepsilon })=U_\varepsilon (r-z_1)={U_\varepsilon (r-c_1T_\varepsilon +\xi _3)}\ge 1+\frac{\varepsilon }{2} \text{ for } r\le s_1(T_\varepsilon ). \end{aligned}$$

Thus we may use (2.42) and (2.44) to obtain

$$\begin{aligned}&u(r,T_{\varepsilon })\le 1+\frac{\varepsilon }{2}\le \overline{U}(r,T_{\varepsilon }) \text{ for } r\in [0,s_1(T_{\varepsilon })],\\&\underline{V}(r,T_{\varepsilon })\le v(r,T_\varepsilon ) \text{ for } r\in [0, s_2(T_\varepsilon )], \end{aligned}$$

where the second inequality holds because \(\underline{V}(\cdot ,T_{\varepsilon })\) vanishes outside \([c_1T_\varepsilon , c_2T_\varepsilon ]\), while on \([c_1T_\varepsilon , c_2T_\varepsilon ]\) we have \(\underline{V}(\cdot ,T_{\varepsilon })<1-\varepsilon \le v(\cdot ,T_\varepsilon )\) by (2.44).

We have thus proved (2.41).

It is also easily seen that, with \(t>0\) replaced by \(t>T_{\varepsilon }\) and \(\varepsilon _0\) replaced by \(\varepsilon \), the inequalities (2.34) and (2.36)–(2.39) still hold. Thus we are able to use the comparison principle as before to deduce

$$\begin{aligned}&s_1(t)\le l(t),\; u(r,t)\le \overline{U}(r,t) \text{ for } r\in [0, s_1(t)),\; t>T_\varepsilon ;\\&s_2(t)\ge g(t),\; v(r,t)\ge \underline{V}(r,t) \text{ for } r\in [0, g(t)],\; t>T_\varepsilon . \end{aligned}$$

In particular,

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{s_1(t)}{t}\le \lim _{t\rightarrow \infty }\frac{l(t)}{t}= c^{\varepsilon }_{\mu _1},\quad \liminf _{t\rightarrow \infty }\frac{s_2(t)}{t}\ge \lim _{t\rightarrow \infty }\frac{g(t)}{t}=s^{\varepsilon }-\sigma _{\varepsilon }. \end{aligned}$$

Since \(\varepsilon \in (0,\varepsilon _0)\) is arbitrary, letting \(\varepsilon \rightarrow 0\) and using \(\lim _{\varepsilon \rightarrow 0} c^{\varepsilon }_{\mu _1}=c^*_{\mu _1}\) and \(\lim _{\varepsilon \rightarrow 0}(s^{\varepsilon }-\sigma _{\varepsilon })= s^*_{\mu _2}\), we obtain (2.40).

Step 3: We prove the following conclusions, valid for any small \(\epsilon >0\):

$$\begin{aligned}&\lim _{t\rightarrow \infty }\frac{s_1(t)}{t}=c^*_{\mu _1},\; \lim _{t\rightarrow \infty } \Big [\max _{r\in [0, (c^*_{\mu _1}-\epsilon )t]} |u(r,t)-1|\Big ]= 0. \end{aligned}$$
(2.45)
$$\begin{aligned}&\lim _{t\rightarrow \infty }\frac{s_2(t)}{t}=s^*_{\mu _2},\; \lim _{t\rightarrow \infty }\Big [\max _{r\in [(c^*_{\mu _1}+\epsilon )t, (s^*_{\mu _2}-\epsilon )t]} |v(r,t)-1|\Big ]=0. \end{aligned}$$
(2.46)

We note that for \(r\in [l(t)+\xi _1, g(t)-z(s^\varepsilon )]\) and \(t>0\),

$$\begin{aligned} \underline{V}(r,t)=\widehat{Q}^\varepsilon (\xi _1)&=V_\varepsilon (\xi _1)-\delta (\xi _1-\xi _0)^2V_\varepsilon (\xi _0)\\&\ge V_\varepsilon (\xi _0)-\delta V_\varepsilon (\xi _0)=(1-\delta )V_\varepsilon (\xi _0)\\&\ge \left( 1-\frac{\varepsilon }{4}\right) (1-2\varepsilon ). \end{aligned}$$

Thus for any given \(\epsilon >0\) we can choose \(\varepsilon ^*>0\) small enough so that for all \(\varepsilon \in (0, \varepsilon ^*]\),

$$\begin{aligned} \underline{V}(r,t)\ge 1-\epsilon \text{ for } r\in [l(t)+\xi _1, g(t)-z(s^\varepsilon )],\; t>0. \end{aligned}$$
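For this particular inequality a sufficient (though by no means optimal) choice is, say, \(\varepsilon ^*\le \min \{\varepsilon _0, 4\epsilon /9\}\), since for \(\varepsilon \in (0,\varepsilon ^*]\)

$$\begin{aligned} \Big (1-\frac{\varepsilon }{4}\Big )(1-2\varepsilon )=1-\frac{9}{4}\varepsilon +\frac{\varepsilon ^2}{2}\ge 1-\frac{9}{4}\varepsilon \ge 1-\epsilon . \end{aligned}$$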

In view of

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} c^\varepsilon _{\mu _1}=c^*_{\mu _1},\; \lim _{\varepsilon \rightarrow 0} s^\varepsilon =s^*_{\mu _2}, \end{aligned}$$

and the inequality (2.31), by further shrinking \(\varepsilon ^*\) we may also assume that for all \(\varepsilon \in (0, \varepsilon ^*]\),

$$\begin{aligned} c_{\mu _1}^\varepsilon <c_{\mu _1}^*+\frac{\epsilon }{2},\; s^\varepsilon -\sigma _\varepsilon >s^*_{\mu _2}-\frac{\epsilon }{2}. \end{aligned}$$

Hence for every \(\varepsilon \in (0,\varepsilon ^*]\) we can find \(\tilde{T}_\varepsilon \ge T_\varepsilon \) such that

$$\begin{aligned}{}[(c^*_{\mu _1}+\epsilon )t, (s^*_{\mu _2}-\epsilon )t]\subset [l(t)+\xi _1, g(t)-z(s^\varepsilon )] \text{ for } t\ge \tilde{T}_\varepsilon . \end{aligned}$$

It follows that

$$\begin{aligned} v(r,t)\ge \underline{V}(r,t)\ge 1-\epsilon \text{ for } r\in [(c^*_{\mu _1}+\epsilon )t, (s^*_{\mu _2}-\epsilon )t],\; t\ge \tilde{T}_\varepsilon , \;\varepsilon \in (0,\varepsilon ^*], \end{aligned}$$

which implies

$$\begin{aligned} \liminf _{t\rightarrow \infty }\Big [\min _{r\in [(c^*_{\mu _1}+\epsilon )t, (s^*_{\mu _2}-\epsilon )t]} v(r,t)\Big ]\ge 1. \end{aligned}$$
(2.47)

Next we obtain bounds for \((u,v,s_1,s_2)\) from the other side.

By comparison with an ODE upper solution,

$$\begin{aligned} \limsup _{t\rightarrow \infty } v(r,t)\le 1 \text{ uniformly } \text{ for } r\in [0,\infty ), \end{aligned}$$

which, combined with (2.47), yields

$$\begin{aligned} \lim _{t\rightarrow \infty }\Big [\max _{r\in [(c^*_{\mu _1}+\epsilon )t, (s^*_{\mu _2}-\epsilon )t]} |v(r,t)-1|\Big ]=0. \end{aligned}$$

This proves the second identity in (2.46).
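We briefly recall the ODE comparison used above (a minimal sketch). Since \(u,v\ge 0\), v satisfies \(v_t\le \Delta v+v(1-v)\) and vanishes on its free boundary, so the comparison principle gives \(v(r,t)\le \overline{v}(t)\), where \(\overline{v}\) solves the logistic ODE

$$\begin{aligned} \overline{v}\,'=\overline{v}(1-\overline{v}),\quad \overline{v}(0)=\max \big \{1,\Vert v_0\Vert _{L^\infty }\big \},\qquad \text{ so that }\quad \overline{v}(t)=\frac{\overline{v}(0)e^{t}}{1-\overline{v}(0)+\overline{v}(0)e^{t}}\rightarrow 1 \text{ as } t\rightarrow \infty . \end{aligned}$$

This yields \(\limsup _{t\rightarrow \infty }v(r,t)\le 1\) uniformly in r; the same argument applied to u gives the corresponding bound used later in this step.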

As seen in the proof of Lemma 2.2, we have

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{s_2(t)}{t}\le s^{*}_{\mu _2}. \end{aligned}$$

Combining this with (2.40), we obtain the first identity in (2.46), namely

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{s_2(t)}{t}=s^*_{\mu _2}. \end{aligned}$$

We next prove (2.45). Consider the problem (Q) with initial data in (1.3) chosen as follows: \( \hat{u}_0=u_0,\; h_0=s^0_1\) and \(\hat{v}_0\in C^2([0,\infty ))\cap L^{\infty }((0,\infty ))\) satisfies (1.5) and

$$\begin{aligned} \hat{v}_0(r)\ge v_0(r)\quad \text{ for } \,r\in [0,s^0_2]. \end{aligned}$$
(2.48)

We denote its unique solution by \((\hat{u}, \hat{v}, h)\). Then by (B2) and Theorem 4.4 in [7], we have \(h_{\infty }=\infty \). Moreover, it follows from Theorem B that

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{h(t)}{t}=c^*_{\mu _1}. \end{aligned}$$

Due to (2.48), we can apply the comparison principle ([33, Lemma 3.1] with minor modifications) to derive

$$\begin{aligned} s_1(t)\ge h(t),\quad u(r,t)\ge \hat{u}(r,t) \text{ for } \quad r\in [0, h(t)], t\ge 0, \end{aligned}$$

which in particular implies

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{s_1(t)}{t}\ge c^{*}_{\mu _1}. \end{aligned}$$
(2.49)

Moreover, by the definition of \(\psi _\delta \) and \(\underline{h}(t)\) in [13], and the estimate

$$\begin{aligned} \hat{u}(r,t)\ge \psi _\delta (\underline{h}(t-T)-r) \text{ for } t>T, \; r\in [0, \underline{h}(t-T)], \end{aligned}$$

we easily obtain the following conclusion:

For any given small \(\epsilon >0\), there exists \(\delta ^*>0\) small such that for every \(\delta \in (0, \delta ^*]\), there exists \(T^*_\delta >0\) large so that

$$\begin{aligned} \hat{u}(r,t)\ge \psi _\delta (\underline{h}(t-T)-r)\ge 1-\epsilon \text{ for } r\in [0, (c^*_{\mu _1}-\epsilon )t],\; t\ge T^*_\delta . \end{aligned}$$

It follows that

$$\begin{aligned} \liminf _{t\rightarrow \infty } \Big [\min _{r\in [0, (c^*_{\mu _1}-\epsilon )t]}\hat{u}(r,t)\Big ]\ge 1. \end{aligned}$$

Since \(u\ge \hat{u}\) on \([0, h(t)]\) and \(h(t)/t\rightarrow c^*_{\mu _1}\), this yields

$$\begin{aligned} \liminf _{t\rightarrow \infty } \Big [\min _{r\in [0, (c^*_{\mu _1}-\epsilon )t]} u(r,t)\Big ]\ge 1. \end{aligned}$$

By comparison with an ODE upper solution, it is easily seen that

$$\begin{aligned} \limsup _{t\rightarrow \infty }u(r,t)\le 1 \text{ uniformly } \text{ for } r\in [0,\infty ). \end{aligned}$$

We thus obtain, for any small \(\epsilon >0\),

$$\begin{aligned} \lim _{t\rightarrow \infty } \Big [\max _{r\in [0, (c^*_{\mu _1}-\epsilon )t]} |u(r,t)-1|\Big ]= 0. \end{aligned}$$

This proves the second identity in (2.45).

Combining (2.49) and (2.40), we obtain the first identity in (2.45):

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{s_1(t)}{t}=c^*_{\mu _1}. \end{aligned}$$

Step 4: We complete the proof of Theorem 3 by finally showing that, for any small \(\epsilon >0\),

$$\begin{aligned} \lim _{t\rightarrow \infty }\Big [\max _{r\in [0, (c^*_{\mu _1}-\epsilon )t]}v(r,t)\Big ]=0. \end{aligned}$$
(2.50)

We prove this by making use of (2.45). Suppose by way of contradiction that (2.50) does not hold. Then for some \(\epsilon _0>0\) small there exist \(\delta _0>0\) and a sequence \(\{(r_k, t_k)\}_{k=1}^\infty \) such that

$$\begin{aligned} \lim _{k\rightarrow \infty }t_k=\infty ,\; r_k\in [0, (c^*_{\mu _1}-\epsilon _0)t_k],\; v(r_k, t_k)\ge \delta _0 \text{ for } \text{ all } k\ge 1. \end{aligned}$$

By passing to a subsequence, we have either (i) \(r_k\rightarrow r^*\in [0, \infty )\) or (ii) \(r_k\rightarrow \infty \) as \(k\rightarrow \infty \).

In case (i) we define

$$\begin{aligned} v_k(r,t):=v(r, t+t_k),\; u_k(r,t):=u(r, t+t_k) \text{ for } k\ge 1. \end{aligned}$$

Then

$$\begin{aligned} \partial _t v_k=\Delta v_k+v_k(1-v_k-hu_k) \text{ for } r\in [0, s_2(t+t_k)),\; t\ge -t_k. \end{aligned}$$

By (2.45), we have \(u_k\rightarrow 1\) in \(L^\infty _{loc}([0,\infty )\times \mathbb {R}^1)\). Since \(v_k(1-v_k-hu_k)\) has an \(L^\infty \) bound that is independent of k, by standard parabolic regularity and a compactness consideration, we may assume, by passing to a subsequence involving a diagonal process, that

$$\begin{aligned} v_k(r,t)\rightarrow v^*(r,t) \text{ in } C_{loc}^{1+\alpha , \frac{1+\alpha }{2}}([0,\infty )\times \mathbb {R}^1),\; \alpha \in (0,1), \end{aligned}$$

and \(v^*\in W^{2,1}_{p, loc}([0,\infty )\times \mathbb {R}^1)\) \((p>1)\) is a solution of

$$\begin{aligned} \left\{ \begin{array}{ll} v^*_t=\Delta v^*+v^*(1-h-v^*) &{} \text{ for } r\in [0,\infty ),\; t\in \mathbb {R}^1,\\ v^*_r(0,t)=0 &{} \text{ for } t\in \mathbb {R}^1. \end{array}\right. \end{aligned}$$

Moreover, \(v^*(r^*,0)\ge \delta _0\), and since \(\limsup _{t\rightarrow \infty } v(r,t)\le 1\) uniformly in \(r\), we have \(v^*(r,t)\le 1\).

Fix \(R>0\) and let \(\hat{v}(r,t)\) be the unique solution of

$$\begin{aligned} \left\{ \begin{array}{ll} \hat{v}_t=\Delta \hat{v}+\hat{v}(1-h-\hat{v}) &{} \text{ for } r\in [0, R),\; t>0,\\ \hat{v}_r(0,t)=0,\; \hat{v}(R,t)=1 &{} \text{ for } t>0,\\ \hat{v}(r,0)=1 &{} \text{ for } r\in [0,R]. \end{array}\right. \end{aligned}$$

By the comparison principle we have, for any \(s>0\),

$$\begin{aligned} 0\le v^*(r,t)\le \hat{v}(r, t+s) \text{ for } r\in [0, R],\; t\ge -s. \end{aligned}$$

On the other hand, by the well known properties of logistic type equations, we have

$$\begin{aligned} \hat{v}(r,t)\rightarrow V_R(r) \text{ as }\, t\rightarrow \infty \hbox { uniformly for }r\in [0, R], \end{aligned}$$

where \(V_R(r)\) is the unique solution to

$$\begin{aligned} \Delta V_R+V_R(1-h-V_R)=0 \text{ in } [0, R];\; V_R'(0)=0,\; V_R(R)=1. \end{aligned}$$

It follows that

$$\begin{aligned} \delta _0\le v^*(r^*, 0)\le \lim _{s\rightarrow \infty } \hat{v}(r^*, s)=V_R(r^*). \end{aligned}$$
(2.51)

By Lemma 2.1 in [10], we have \(V_R\le V_{R'}\) in \([0, R']\) if \(0<R'<R\). Hence \(V_\infty (r):=\lim _{R\rightarrow \infty } V_R(r)\) exists, and it is easily seen that \(V_\infty \) is a nonnegative solution of

$$\begin{aligned} \Delta V_\infty +V_\infty (1-h-V_\infty )=0 \text{ in } [0,\infty ),\quad V_\infty '(0)=0, \end{aligned}$$

that is, a nonnegative radially symmetric solution in \(\mathbb {R}^N\).

Since \(1-h<0\), by Theorem 2.1 in [10], we have \(V_\infty \equiv 0\). Hence \(\lim _{R\rightarrow \infty } V_R(r)=0\) for every \(r\ge 0\). We may now let \(R\rightarrow \infty \) in (2.51) to obtain \( \delta _0\le 0\). Thus we reach a contradiction in case (i).

In case (ii), \(r_k\rightarrow \infty \) as \(k\rightarrow \infty \), and we define

$$\begin{aligned} v_k(r,t):=v(r+r_k, t+t_k),\; u_k(r,t):=u(r+r_k, t+t_k) \text{ for } k\ge 1. \end{aligned}$$

Since \(r_k\le (c^*_{\mu _1}-\epsilon _0)t_k\), by (2.45) we see that \(u_k(r,t)\rightarrow 1\) in \(L^\infty _{loc}(\mathbb {R}^1\times \mathbb {R}^1)\). Then similarly, by passing to a subsequence, \(v_k(r,t)\rightarrow \tilde{v}^*(r,t)\) in \(C_{loc}^{1+\alpha , \frac{1+\alpha }{2}}(\mathbb {R}^1\times \mathbb {R}^1),\; \alpha \in (0,1)\), and \(\tilde{v}^*\in W^{2,1}_{p, loc}(\mathbb {R}^1\times \mathbb {R}^1)\) \((p>1)\) is a solution of

$$\begin{aligned} \tilde{v}^*_t=\tilde{v}^*_{rr}+\tilde{v}^*(1-h-\tilde{v}^*) \text{ for } (r,t)\in \mathbb {R}^2. \end{aligned}$$

Moreover, \(\tilde{v}^*(0,0)\ge \delta _0\) and \(\tilde{v}^*(r,t)\le 1\). We may now compare \(\tilde{v}^*\) with the one-dimensional version of \(\hat{v}(r, t)\) used in case (i) to obtain a contradiction. We omit the details as they are just obvious modifications of the arguments in case (i).

As we arrive at a contradiction in both cases (i) and (ii), (2.50) must hold. The proof is now complete.\(\square \)

3 Appendix

This section is divided into three subsections. In Sect. 3.1, we establish the local existence and uniqueness of solutions for a rather general system that includes (P) as a special case. In Sect. 3.2, we prove global existence under some additional assumptions on the general system of Sect. 3.1; the resulting class of systems is still much more general than (P). In the final subsection, we give the proof of Theorem 2.

3.1 Local existence and uniqueness

In this subsection, for possible future applications, we show the local existence and uniqueness of the solution to a more general system than (P). Our approach follows that in [15] with suitable changes, and in particular, we will fill in a gap in the argument of [15].

More precisely, we consider the following problem:

$$\begin{aligned} \left\{ \begin{array}{l} u_t=d_1\Delta u+f(r,t,u,v) \text{ for } 0<r<{s}_1(t),\ t>0,\\ v_t=d_2\Delta v+g(r,t,u,v) \text{ for } 0<r<{s}_2(t),\ t>0,\\ u_r(0,t)=v_r(0,t)=0 \text{ for } t>0,\\ u\equiv 0 \text{ for } r\ge {s}_1(t)\ \text{ and }\ t>0;\; v\equiv 0 \text{ for } r\ge {s}_2(t)\ \text{ and }\ t>0,\\ s_{1}'(t)=-\mu _1 u_r(s_1(t),t),\; s_{2}'(t)=-\mu _2 v_r(s_2(t),t) \text{ for } t>0,\\ ({s}_1(0),{s}_2(0))=(s_1^0,s_2^0),\; (u,v)(r,0)=(u_0,v_0)(r)\ \text{ for }\ r\in [0,\infty ), \end{array}\right. \end{aligned}$$
(3.1)

where \(r=|x|\), \(\Delta \varphi :=\varphi _{rr}+\frac{(N-1)}{r}\varphi _r\), and the initial data satisfies (1.1). We assume that the nonlinear terms f and g satisfy

$$\begin{aligned} \left\{ \begin{array}{ll} \text{(i) }\, &{}f\hbox { and }g\hbox { are continuous in }r,t,u,v\in [0,\infty ),\\ \text{(ii) }\,&{} f(r,t, 0,v)=0=g(r,t, u,0)\hbox { for }r,t, u,v\ge 0,\\ \text{(iii) }\,&{} f\hbox { and }g\hbox { are locally Lipschitz continuous in }r,u,v\in [0,\infty ), \\ \;\;\;\;\; \; &{}\text{ uniformly } \text{ for }\, t \text{ in } \text{ bounded } \text{ subsets } \text{ of } \,[0,\infty ). \end{array}\right. \end{aligned}$$
(H1)

We have the following local existence and uniqueness result for (3.1).

Theorem 4

Assume (H1) holds and \(\alpha \in (0,1)\). Suppose for some \(M>0\),

$$\begin{aligned} \Vert u_0\Vert _{C^2([0,s_1^0])}+\Vert v_0\Vert _{C^2([0,s_2^0])}+ s_1^0+s_2^0\le M. \end{aligned}$$

Then there exist \(T\in (0,1)\) and \(\widehat{M}>0\) depending only on \(\alpha \), M and the local Lipschitz constants of f and g such that problem (3.1) has a unique solution

$$\begin{aligned} (u,v,s_1,s_2)\in C^{1+\alpha ,(1+\alpha )/2}({D^1_{T}})\times C^{1+\alpha ,(1+\alpha )/2}({D^2_{T}}) \times C^{1+\alpha /2}([0,{T}]) \times C^{1+\alpha /2}([0,{T}]) \end{aligned}$$

satisfying

$$\begin{aligned} \Vert u\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^1_T)}+\Vert v\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^2_T)} +\sum _{i=1}^2\Vert s_i\Vert _{C^{1+\alpha /2}([0,T])}\le \widehat{M}, \end{aligned}$$
(3.2)

where \(D^i_{T}:=\{(r,t):0\le r \le s_i(t),\ 0\le t\le T\}\) for \(i=1,2\).

Proof

Firstly, for given \(T\in (0,1)\), we introduce the function spaces

$$\begin{aligned} \Sigma _T^i:=\big \{s\in C^1([0,T]):\, s(0)=s_i^0,\ s'(0)=s^*_i,\ 0\le s'(t)\le s^*_i+1\ \text{ for } t\in [0,T]\big \},\quad i=1,2, \end{aligned}$$

where

$$\begin{aligned} s^*_1:=-\mu _1 u_0'(s_1^0),\; s^*_2:=-\mu _2 v_0'(s_2^0). \end{aligned}$$

Clearly \(s(t)\ge s_i^0\) for \(t\in [0,T]\) if \(s\in \Sigma _T^i\).

For given \((\hat{s}_1,\hat{s}_2)\in \Sigma _T^1\times \Sigma _T^2\), we introduce two corresponding function spaces

$$\begin{aligned} X^1_{T}&=X_T^1(\hat{s}_1,\hat{s}_2):=\big \{u\in C([0,\infty )\times [0,T]): u\equiv 0\ \text{ for }\ r\ge \hat{s}_1(t),\ t\in [0,T],\\&\quad u(r,0)\equiv u_0(r),\; \Vert u-u_0\Vert _{L^\infty ([0,\infty )\times [0,T])}\le 1\big \};\\&X^2_{T}=X_T^2(\hat{s}_1,\hat{s}_2):=\big \{v\in C([0,\infty )\times [0,T]): \ v\equiv 0\ \text{ for }\ r\ge \hat{s}_2(t),\ t\in [0,T],\\&\quad v(r,0)\equiv v_0(r),\;\Vert v-v_0\Vert _{L^\infty ([0,\infty )\times [0,T])}\le 1\big \}. \end{aligned}$$

We note that \(X^1_T\) and \(X^2_T\) are closed subsets of \(C([0,\infty )\times [0,T])\) under the \(L^\infty ([0,\infty )\times [0,T])\) norm.

Given \((\hat{s}_1,\hat{s}_2)\in \Sigma _T^1\times \Sigma _T^2\) and \((\hat{u}, \hat{v})\in X_T^1\times X_T^2\), we consider the following problem

$$\begin{aligned} \left\{ \begin{array}{l} u_t=d_1\Delta u+f(r,t,\hat{u},\hat{v}) \text{ for } 0<r<\hat{s}_1(t),\ 0<t<T,\\ v_t=d_2\Delta v+g(r,t,\hat{u},\hat{v}) \text{ for } 0<r<\hat{s}_2(t),\ 0<t<T,\\ u_r(0,t)=v_r(0,t)=0 \text{ for } 0<t<T,\\ u\equiv 0\;\text{ for }\ r\ge \hat{s}_1(t)\ \text{ and }\ t>0;\; v\equiv 0\;\text{ for }\ r\ge \hat{s}_2(t)\ \text{ and }\ t>0,\\ (\hat{s}_1,\hat{s}_2)(0)=(s_1^0,s_2^0),\ (u,v)(r,0)=(u_0,v_0)(r)\ \text{ for }\ r\in [0,\infty ). \end{array}\right. \end{aligned}$$
(3.3)

To solve (3.3) for u, we straighten the boundary \(r=\hat{s}_1(t)\) by the transformation \(R:={r}/{\hat{s}_1(t)}\) and define

$$\begin{aligned} U(R,t):=u(r,t),\quad V(R,t):=v(r,t),\quad \hat{U}(R,t):=\hat{u}(r,t),\quad \hat{V}(R,t):=\hat{v}(r,t). \end{aligned}$$

Then U satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle U_t=\frac{d_1\Delta U}{(\hat{s}_1(t))^2}+\frac{\hat{s}'_1(t)R}{\hat{s}_1(t)}U_R+\tilde{f}(R,t) &{} \text{ for } R\in (0,1),\, t\in (0,T),\\ \displaystyle U_R(0,t)=U(1,t)=0&{} \text{ for } t\in (0,T),\\ \displaystyle U(R,0)=U^0(R):=u_0(s_1^0R)&{} \text{ for } R\in [0, 1], \end{array}\right. \end{aligned}$$
(3.4)

where

$$\begin{aligned} \Delta U:=U_{RR}+\frac{N-1}{R}U_R,\; \tilde{f}(R,t):=f(\hat{s}_1(t)R,t,\hat{U},\hat{V}). \end{aligned}$$
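For the reader's convenience, we recall the computation behind (3.4): writing \(u(r,t)=U(R,t)\) with \(R=r/\hat{s}_1(t)\), the chain rule gives

$$\begin{aligned} u_t=U_t-\frac{\hat{s}_1'(t)R}{\hat{s}_1(t)}U_R,\qquad u_r=\frac{1}{\hat{s}_1(t)}U_R,\qquad u_{rr}=\frac{1}{(\hat{s}_1(t))^2}U_{RR}, \end{aligned}$$

so that \(u_t=d_1\big (u_{rr}+\frac{N-1}{r}u_r\big )+f(r,t,\hat{u},\hat{v})\) becomes the first equation in (3.4).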

Since

$$\begin{aligned}&s_1^0\le \hat{s}_1(t)\le s_1^0+s_1^*+1 \text{ for } t\in [0,T], \text{ and } \\&\big \Vert {\hat{s}'_1}/{\hat{s}_1}\big \Vert _{L^{\infty }([0,T])}+\Vert \tilde{f}\Vert _{L^\infty ([0,\infty )\times [0,T])}<\infty , \end{aligned}$$

one can apply the standard parabolic \(L^p\) theory and the Sobolev embedding theorem (see [16, 23]) to deduce that (3.4) has a unique solution \( U\in {C^{1+\alpha ,(1+\alpha )/2}([0,1]\times [0,T])}\) with

$$\begin{aligned} \Vert {U}\Vert _{C^{1+\alpha ,(1+\alpha )/2}([0,1]\times [0,T])}\le C_1(\Vert \tilde{f}\Vert _\infty +\Vert u_0\Vert _{C^2}) \end{aligned}$$

for some \(C_1\) depending only on \(\alpha \in (0,1)\) and M. It follows that \(u(r,t)=U(\frac{r}{\hat{s}_1(t)}, t)\) satisfies

$$\begin{aligned} \Vert {u}\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^1_T)}\le \tilde{C}_1 (\Vert \tilde{f}\Vert _\infty +\Vert u_0\Vert _{C^2}) \end{aligned}$$
(3.5)

where \(\tilde{C}_1\) depends only on \(\alpha \) and M, and

$$\begin{aligned} D^1_T:=\{(r,t): r\in [0, \hat{s}_1(t)),\; t\in [0,T]\}. \end{aligned}$$

Similarly we can solve (3.3) to find a unique \(v\in C^{1+\alpha ,(1+\alpha )/2}(D_T^2)\) satisfying

$$\begin{aligned} \Vert {v}\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^2_T)}\le \tilde{C}_2 (\Vert \tilde{g}\Vert _\infty +\Vert v_0\Vert _{C^2}), \end{aligned}$$
(3.6)

where \(\tilde{C}_2\) depends only on \(\alpha \) and M, and

$$\begin{aligned}&{\tilde{g}(R,t)=g(\hat{s}_2(t)R,t,\hat{U},\hat{V}),}\\&D^2_T:=\{(r,t): r\in [0, \hat{s}_2(t)),\; t\in [0,T]\}. \end{aligned}$$

We now define a mapping \(\mathcal {G}\) over \(X^1_{T}\times X^2_{T}\) by

$$\begin{aligned} \mathcal {G}(\hat{u},\hat{v}):=(u,v), \end{aligned}$$

and show that \(\mathcal {G}\) has a unique fixed point in \(X^1_{T}\times X^2_{T}\) as long as \(T\in (0,1)\) is sufficiently small, by using the contraction mapping theorem.

For \(R\in [0,1]\) and \(t\in [0,T]\),

$$\begin{aligned} |U(R,t)-U(R,0)|\le & {} T^{(1+\alpha )/2}\Vert U\Vert _{C^{1+\alpha , (1+\alpha )/2}([0,1]\times [0,T])}\\\le & {} C_1T^{(1+\alpha )/2}(\Vert \tilde{f}\Vert _\infty +\Vert u_0\Vert _{C^2}). \end{aligned}$$

It follows that

$$\begin{aligned} \Vert u-u_0\Vert _{L^\infty ([0,\infty )\times [0,T])}= & {} \Vert U-U^0\Vert _{C([0,1]\times [0,T])}\\\le & {} C_1T^{(1+\alpha )/2}(\Vert \tilde{f}\Vert _\infty +\Vert u_0\Vert _{C^2}). \end{aligned}$$

Similarly,

$$\begin{aligned} \Vert v-v_0\Vert _{L^\infty ([0,\infty )\times [0,T])}\le C_2 T^{(1+\alpha )/2}(\Vert \tilde{g}\Vert _\infty +\Vert v_0\Vert _{C^2}). \end{aligned}$$

This implies that \(\mathcal {G}\) maps \(X^1_{T}\times X^2_{T}\) into itself for small \(T\in (0,1)\).
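For definiteness, one sufficient (not necessary) choice is any \(T\in (0,1)\) with

$$\begin{aligned} T^{(1+\alpha )/2}\max \big \{C_1(\Vert \tilde{f}\Vert _\infty +\Vert u_0\Vert _{C^2}),\ C_2 (\Vert \tilde{g}\Vert _\infty +\Vert v_0\Vert _{C^2})\big \}\le 1, \end{aligned}$$

where \(\Vert \tilde{f}\Vert _\infty \) and \(\Vert \tilde{g}\Vert _\infty \) may be replaced by their (finite) suprema over \((\hat{u},\hat{v})\in X^1_T\times X^2_T\), so that the choice of T does not depend on the particular \((\hat{u},\hat{v})\).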

To see that \(\mathcal {G}\) is a contraction mapping, we choose any \((\hat{u}_i,\hat{v}_i)\in X_T^1\times X_T^2\), \(i=1,2\), and set

$$\begin{aligned} \tilde{u}:=\hat{u}_1-\hat{u}_2,\quad \tilde{v}:=\hat{v}_1-\hat{v}_2. \end{aligned}$$

Then \((\tilde{u},\tilde{v})\) satisfies

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \tilde{u}_t= d_1\Delta \tilde{u}+f(r,t,\hat{u}_1,\hat{v}_1)-f(r,t,\hat{u}_2,\hat{v}_2) \text{ for } 0<r<\hat{s}_1(t),\ 0<t<T,\\ \displaystyle \tilde{v}_t=d_2\Delta \tilde{v}+g(r,t,\hat{u}_1,\hat{v}_1)-g(r,t,\hat{u}_2,\hat{v}_2) \text{ for } 0<r<\hat{s}_2(t),\ 0<t<T,\\ \displaystyle \tilde{u}_r(0,t)=\tilde{v}_r(0,t)=0 \text{ for } 0<t<T,\\ \displaystyle \tilde{u}\equiv 0\quad \text{ for } \ r\ge \hat{s}_1(t),\ 0<t<T;\quad \tilde{v}\equiv 0\quad \text{ for }\ r\ge \hat{s}_2(t),\ 0<t<T,\\ \displaystyle (\tilde{u},\tilde{v})(r,0)=(0,0)\ \text{ for }\ r\in [0,\infty ). \end{array}\right. \end{aligned}$$

By the Lipschitz continuity of f and g, there exists \(C_0>0\) such that for \(r\in [0,\max \{\hat{s}_1(T),\hat{s}_2(T)\}]\) and \(t\in [0, T]\),

$$\begin{aligned}&|f(r,t,\hat{u}_1,\hat{v}_1)-f(r,t,\hat{u}_2,\hat{v}_2)|\le C_0(|\hat{u}_1-\hat{u}_2|+|\hat{v}_1-\hat{v}_2|),\\&|g(r,t,\hat{u}_1,\hat{v}_1)-g(r,t,\hat{u}_2,\hat{v}_2)|\le C_0(|\hat{u}_1-\hat{u}_2|+|\hat{v}_1-\hat{v}_2|). \end{aligned}$$

We may then repeat the arguments leading to (3.5) and (3.6) to obtain

$$\begin{aligned}&\Vert \tilde{u}\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^1_T)}+\Vert \tilde{v}\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^2_T)}\\&\quad \le C_2(\Vert \hat{u}_1-\hat{u}_2\Vert _{L^\infty ([0,\infty )\times [0,T])}+\Vert \hat{v}_1-\hat{v}_2\Vert _{L^\infty ([0,\infty )\times [0,T])}), \end{aligned}$$

for some \(C_{2}=C_2(\alpha , M, C_0)\).

If we define

$$\begin{aligned} \tilde{U}(R, t):=\tilde{u}(\hat{s}_1(t)R, t),\; \tilde{V}(R, t):=\tilde{v}(\hat{s}_2(t)R, t), \end{aligned}$$

then

$$\begin{aligned}&\Vert \tilde{U}\Vert _{C^{1+\alpha ,(1+\alpha )/2}([0,1]\times [0,T])}+\Vert \tilde{V}\Vert _{C^{1+\alpha ,(1+\alpha )/2}([0,1]\times [0,T])}\\&\quad \le C\left( \Vert \tilde{u}\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^1_T)}+\Vert \tilde{v}\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^2_T)}\right) \end{aligned}$$

for some \(C=C(M)\). Hence from the above estimate for \(\tilde{u}\) and \(\tilde{v}\) we obtain

$$\begin{aligned}&\Vert \tilde{U}\Vert _{C^{1+\alpha ,(1+\alpha )/2}([0,1]\times [0,T])}+\Vert \tilde{V}\Vert _{C^{1+\alpha ,(1+\alpha )/2}([0,1]\times [0,T])}\\&\quad \le C'_2(\Vert \hat{u}_1-\hat{u}_2\Vert _{L^\infty ([0,\infty )\times [0,T])}+\Vert \hat{v}_1-\hat{v}_2\Vert _{L^\infty ([0,\infty )\times [0,T])}), \end{aligned}$$

for some \(C'_{2}=C_2'(\alpha , M, C_0)\). Since \(\tilde{U}(R,0)=\tilde{V}(R,0)\equiv 0\), it follows that

$$\begin{aligned}&\Vert \tilde{U}\Vert _{C([0,1]\times [0,T])}+\Vert \tilde{V}\Vert _{C([0,1]\times [0,T])}\\&\quad \le C_2' T^{(1+\alpha )/2}(\Vert \hat{u}_1-\hat{u}_2\Vert _{L^\infty ([0,\infty )\times [0,T])}+\Vert \hat{v}_1-\hat{v}_2\Vert _{L^\infty ([0,\infty )\times [0,T])}), \end{aligned}$$

and hence

$$\begin{aligned}&\Vert \tilde{u}\Vert _{L^\infty ([0,\infty )\times [0,T])}+\Vert \tilde{v}\Vert _{L^\infty ([0,\infty )\times [0,T])}\\&\quad \le C_2'T^{(1+\alpha )/2}(\Vert \hat{u}_1-\hat{u}_2\Vert _{L^\infty ([0,\infty )\times [0,T])}+\Vert \hat{v}_1-\hat{v}_2\Vert _{L^\infty ([0,\infty )\times [0,T])}). \end{aligned}$$

This implies that \(\mathcal {G}\) is a contraction mapping as long as \(T\in (0,1)\) is sufficiently small. By the contraction mapping theorem, \(\mathcal {G}\) has a unique fixed point in \(X_T^1\times X_T^2\), which we denote by \((\hat{u},\hat{v})\). Furthermore, from (3.5) and (3.6), we have

$$\begin{aligned} \Vert \hat{u}\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^1_T)}+\Vert \hat{v}\Vert _{C^{1+\alpha ,(1+\alpha )/2}(D^2_T)}\le \widehat{C}_1, \end{aligned}$$
(3.7)

for some \(\widehat{C}_1=\widehat{C}_1(\alpha , M, C_0)\).

For such \((\hat{u},\hat{v})\), we introduce the mapping

$$\begin{aligned} \mathcal {F}(\hat{s}_1,\hat{s}_2)=\mathcal {F}(\hat{s}_1,\hat{s}_2; \hat{u},\hat{v}):= (\bar{s}_1,\bar{s}_2) \end{aligned}$$

with

$$\begin{aligned}&\bar{s}_1(t)=s_1^0-\mu _1\int _0^t {\hat{u}_r}(\hat{s}_1(\tau ),\tau )d\tau ,\quad t\in [0,T];\\&\bar{s}_2(t)=s_2^0-\mu _2\int _0^t {\hat{v}_r}(\hat{s}_2(\tau ),\tau )d\tau ,\quad t\in [0,T]. \end{aligned}$$

Clearly

$$\begin{aligned} \bar{s}_1'(t)=-\mu _1\hat{u}_r(\hat{s}_1(t),t)\ge 0,\; \bar{s}_2'(t)=-\mu _2\hat{v}_r(\hat{s}_2(t),t)\ge 0 \text{ for } t\in [0,T]. \end{aligned}$$
(3.8)

We shall again apply the contraction mapping theorem to deduce that \(\mathcal {F}\) defined on \(\Sigma _T^1\times \Sigma _T^2\) has a unique fixed point. By (3.7) and (3.8), we see that \(\bar{s}'_i\in C^{{\alpha }/{2}}([0,T])\) with

$$\begin{aligned} \sum _{i=1}^{2}\Vert \bar{s}'_i\Vert _{C^{{\alpha }/{2}}([0,T])}\le (\mu _1+\mu _2)\widehat{C}_1. \end{aligned}$$
(3.9)

It follows that

$$\begin{aligned} \sum _{i=1}^{2}\Vert \bar{s}'_i-s^*_i\Vert _{C([0,T])}\le (\mu _1+\mu _2)\widehat{C}_1T^{{\alpha }/{2}}. \end{aligned}$$

Hence \(\mathcal {F}\) maps \(\Sigma _T^1\times \Sigma _T^2\) into itself as long as \(T\in (0,1)\) is sufficiently small.
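Explicitly, in view of (3.8) and the last estimate, it suffices to take \(T\in (0,1)\) with

$$\begin{aligned} (\mu _1+\mu _2)\widehat{C}_1T^{{\alpha }/{2}}\le 1, \end{aligned}$$

since then \(0\le \bar{s}_i'(t)\le s_i^*+1\) on [0, T], while \(\bar{s}_i(0)=s_i^0\) and \(\bar{s}_i'(0)=s_i^*\) hold by the definition of \(\mathcal {F}\) and the initial data.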

To show that \(\mathcal {F}\) is a contraction mapping, we let \((\hat{u}^{s},\hat{v}^{s})\) and \((\hat{u}^{\sigma },\hat{v}^{\sigma })\) be the fixed points of \(\mathcal {G}\) associated with \((\hat{s}_1,\hat{s}_2)\) and with \((\hat{\sigma }_1,\hat{\sigma }_2)\in \Sigma _T^1\times \Sigma _T^2\), respectively; for \(i=1,2\), we denote the sets \(D_T^i\) associated with \((\hat{s}_1,\hat{s}_2)\) and \((\hat{\sigma }_1,\hat{\sigma }_2)\) by, respectively,

$$\begin{aligned} D_{T,s}^i \text{ and } D_{T,\sigma }^i. \end{aligned}$$

Let us straighten the boundaries \(r=\hat{s}_1(t)\) and \(r=\hat{\sigma }_1(t)\) in turn. To do so for \(r=\hat{s}_1(t)\), we define

$$\begin{aligned} U^{s}(R,t):=\hat{u}^{s}(r,t), \quad V^{s}(R,t):=\hat{v}^{s}(r,t),\quad R=\frac{r}{\hat{s}_1(t)}; \end{aligned}$$

then \(U^{s}\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle U^s_t=\frac{d_1\Delta U^s}{(\hat{s}_1(t))^2}+\frac{\hat{s}'_1(t)R}{\hat{s}_1(t)}U^s_R+\tilde{f}^s(R,t) &{} \text{ for } R\in (0,1),\, t\in (0,T),\\ \displaystyle U^s_R(0,t)=U^s(1,t)=0 &{} \text{ for } t\in (0,T),\\ \displaystyle U^s(R,0)=u_0(s_1^0R)&{} \text{ for } R\in [0, 1], \end{array}\right. \end{aligned}$$
(3.10)

where

$$\begin{aligned} \tilde{f}^s(R,t):=f(\hat{s}_1(t)R,t, U^s, V^s). \end{aligned}$$

Similarly we set

$$\begin{aligned} U^{\sigma }(R,t):=\hat{u}^{\sigma }(r,t), \quad V^{\sigma }(R,t):=\hat{v}^{\sigma }(r,t),\quad R=\frac{r}{\hat{\sigma }_1(t)}, \end{aligned}$$

and find that (3.10) holds with \((U^s, V^s, \hat{s}_1(t))\) replaced by \((U^\sigma , V^\sigma ,\hat{\sigma }_1(t))\) everywhere.

Next we introduce

$$\begin{aligned}&\eta (t):=\hat{s}_1(t)/\hat{s}_2(t),\; \xi (t):=\hat{\sigma }_1(t)/\hat{\sigma }_2(t),\\&P(R,t):=U^{s}(R,t)-U^{\sigma }(R,t),\quad Q(R,t):=V^{s}(R,t)-V^{\sigma }(R,t). \end{aligned}$$

By some simple computations, P satisfies

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle P_t=\frac{d_1\Delta P}{(\hat{s}_1(t))^2}+\frac{\hat{s}'_1(t)R P_{R}}{\hat{s}_1(t)}+d_1B_1(t)\Delta U^{\sigma }+R B_2(t)U^{\sigma }_{R} +F(R,t)\\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \text{ for } R\in [0,1],\, t\in [0,T],\\ \displaystyle P_R(0,t)=P(1,t)=0 \text{ for } t\in [0,T],\\ \displaystyle P(R,0)=0 \text{ for } R\in [0, 1], \end{array}\right. \end{aligned}$$
(3.11)

where

$$\begin{aligned}&B_1(t):=\frac{1}{(\hat{s}_1(t))^2}-\frac{1}{(\hat{\sigma }_1(t))^2},\quad B_2(t):=\frac{\hat{s}_1'(t)}{\hat{s}_1(t)}-\frac{\hat{\sigma }_1'(t)}{\hat{\sigma }_1(t)},\\&F(R,t):=f(R\hat{s}_1(t),t, U^{s},V^{s})-f(R\hat{\sigma }_1(t),t, U^\sigma ,V^\sigma ). \end{aligned}$$

In view of (3.8),

$$\begin{aligned} \bar{s}'_1(t)=-\mu _1\frac{U^s_R(1,t)}{\hat{s}_1(t)},\quad \bar{\sigma }'_1(t)=-\mu _1\frac{U^{\sigma }_R(1,t)}{\hat{\sigma }_1(t)}, \end{aligned}$$

and hence

$$\begin{aligned} \bar{s}'_1(t)-\bar{\sigma }'_1(t) =\frac{\mu _1}{\hat{s}_1(t)}\big [U_R^\sigma (1,t)-U_R^s(1,t)\big ] +\frac{\mu _1 U_R^\sigma (1,t)}{\hat{s}_1(t)\hat{\sigma }_1(t)} \big [\hat{s}_1(t)-\hat{\sigma }_1(t)\big ]. \end{aligned}$$

From now on, we will depart from the approach of [15] and fill in a gap which occurs in the argument there towards the proof that \(\mathcal {F}\) is a contraction mapping.

It follows from the above identity that

$$\begin{aligned} \Vert \bar{s}'_1-\bar{\sigma }'_1\Vert _{C^{\frac{1+\alpha }{2}}([0,T])}\le C\Big (\Vert U_R^s(1,\cdot )-U_R^\sigma (1,\cdot )\Vert _{C^{\frac{1+\alpha }{2}}([0,T])}+ \Vert \hat{s}_1-\hat{\sigma }_1\Vert _{C^{\frac{1+\alpha }{2}}([0,T])}\Big ), \end{aligned}$$

where C depends on \(\mu _1\) and the upper bounds of \(\Vert \hat{s}_1\Vert _{C^{(1+\alpha )/2}([0,T])}\), \(\Vert \hat{\sigma }_1\Vert _{C^{(1+\alpha )/2}([0,T])}\) and \(\Vert U_R^\sigma (1,\cdot )\Vert _{C^{(1+\alpha )/2}([0,T])}\). Hence \(C=C(\alpha , M,C_0)\).

Since \(T\le 1\), clearly

$$\begin{aligned} \Vert \hat{s}_1-\hat{\sigma }_1\Vert _{C^{\frac{1+\alpha }{2}}([0,T])}\le \Vert \hat{s}_1'-\hat{\sigma }_1'\Vert _{C([0,T])}. \end{aligned}$$

We also have

$$\begin{aligned} \Vert U_R^s(1,\cdot )-U_R^\sigma (1,\cdot )\Vert _{C^{\frac{1+\alpha }{2}}([0,T])}\le \Vert P\Vert _{C^{1+\alpha ,(1+\alpha )/2}([0,1]\times [0,T])}. \end{aligned}$$

We thus obtain

$$\begin{aligned} \Vert \bar{s}'_1-\bar{\sigma }'_1\Vert _{C^{\frac{1+\alpha }{2}}([0,T])}\le C \Big (\Vert P\Vert _{C^{1+\alpha ,(1+\alpha )/2}([0,1]\times [0,T])}+\Vert \hat{s}_1'-\hat{\sigma }_1'\Vert _{C([0,T])}\Big ). \end{aligned}$$
(3.12)

Applying the \(L^p\) estimate and the Sobolev embedding theorem to the problem (3.11), we obtain, for some \(p>1\),

$$\begin{aligned} \begin{array}{l} \Vert P\Vert _{C^{1+\alpha , (1+\alpha )/2}([0,1]\times [0,T])}\\ \quad \le M_4\Big (\Vert B_1\Vert _{C([0,T])}\Vert \Delta U^{\sigma }\Vert _{L^p([0,1]\times [0,T])}\\ \qquad +\Vert B_2\Vert _{C([0,T])}\Vert U_{R}^\sigma \Vert _{L^p([0,1]\times [0,T])}+\Vert F\Vert _{L^p([0,1]\times [0,T])} \Big ) \end{array} \end{aligned}$$

for some \(M_4>0\) depending only on \(\alpha \) and M. Since the \(W^{2,1}_p([0,1]\times [0,T])\) bound for \(U^\sigma \) (together with the equation satisfied by \(U^\sigma \)) controls \(\Vert \Delta U^{\sigma }\Vert _{L^p([0,1]\times [0,T])}\) and \(\Vert U_{R}^\sigma \Vert _{L^p([0,1]\times [0,T])}\), we obtain

$$\begin{aligned} \begin{aligned}&\Vert P\Vert _{C^{1+\alpha , (1+\alpha )/2}([0,1]\times [0,T])} \\&\quad \le M_5\Big (\Vert B_1\Vert _{C([0,T])}+\Vert B_2\Vert _{C([0,T])}+\Vert F\Vert _{L^p([0,1]\times [0,T])} \Big ) \end{aligned} \end{aligned}$$
(3.13)

for some \(M_5>0\) depending only on \(\alpha \), M and the Lipschitz constant of f.

By the definitions of \(B_1(t)\), \(B_2(t)\) and F(R, t), we have

$$\begin{aligned}&\Vert B_1\Vert _{C([0,T])}\le C\Vert \hat{s}_1-\hat{\sigma }_1\Vert _{C([0,T])}, \end{aligned}$$
(3.14)
$$\begin{aligned}&\Vert B_2\Vert _{C([0,T])}\le C\Big (\Vert \hat{s}_1-\hat{\sigma }_1\Vert _{C([0,T])} +\Vert \hat{s}_1'-\hat{\sigma }_1'\Vert _{C([0,T])}\Big ), \end{aligned}$$
(3.15)

and

$$\begin{aligned} \Vert F\Vert _{L^p([0,1]\times [0,T])}\le C\Big ( \Vert P\Vert _{C([0,1]\times [0,T])}+\Vert Q\Vert _{C([0, 1]\times [0,T])}+\Vert \hat{s}_1-\hat{\sigma }_1\Vert _{C([0,T])}\Big ), \end{aligned}$$
(3.16)

for some \(C>0\) depending only on M and the Lipschitz constant of f.

We next estimate \(\Vert P\Vert _{C([0,1]\times [0,T])}\) and \(\Vert Q\Vert _{C([0, 1]\times [0,T])}\) by using the estimate in Lemma 2.2 of [15], namely

$$\begin{aligned} \Vert \hat{u}^s-\hat{u}^\sigma \Vert _{C(\Gamma _T^1)}+\Vert \hat{v}^s-\hat{v}^\sigma \Vert _{C(\Gamma _T^2)}\le C\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}, \end{aligned}$$

where

$$\begin{aligned} \Gamma _T^i:=\big \{(r,t): 0\le r\le \min \{\hat{s}_i(t),\hat{\sigma }_i(t)\},\ 0\le t\le T\big \},\; i=1,2. \end{aligned}$$

Fix \(t\in [0,T]\); by symmetry we may assume \(\hat{s}_1(t)\le \hat{\sigma }_1(t)\). Then for any \(R\in [0,1]\), we have

$$\begin{aligned} |P(R,t)|= & {} |\hat{u}^s(R\hat{s}_1(t),t)-\hat{u}^\sigma (R\hat{\sigma }_1(t), t)|\\\le & {} |\hat{u}^s(R\hat{s}_1(t), t)-\hat{u}^\sigma (R \hat{s}_1(t),t)|+ |\hat{u}^\sigma (R\hat{s}_1(t), t)-\hat{u}^\sigma (R \hat{\sigma }_1(t),t)|\\\le & {} C\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}+\Vert \hat{u}^\sigma \Vert _{C^{1+\alpha ,(1+\alpha )/2}({D_{T,\sigma }^1})}\Vert \hat{s}_1-\hat{\sigma }_1\Vert _{C([0,T])}\\\le & {} \tilde{C}\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}. \end{aligned}$$

It follows that

$$\begin{aligned} \Vert P\Vert _{C([0,1]\times [0,T])}\le \tilde{C}\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}. \end{aligned}$$

For any \(R\in [0,1]\) and \(t\in [0,T]\), we have

$$\begin{aligned} |Q(R,t)|= & {} |\hat{v}^s(R\hat{s}_1(t),t)-\hat{v}^\sigma (R\hat{\sigma }_1(t), t)|\\= & {} |\hat{v}^s(R\eta (t)\hat{s}_2(t), t)-\hat{v}^\sigma (R\xi (t) \hat{\sigma }_2(t),t)|. \end{aligned}$$

We now consider all the possible cases:

(i) If \(R\eta (t)\ge 1\) and \(R\xi (t)\ge 1\), then we immediately obtain

$$\begin{aligned} |Q(R,t)|=0. \end{aligned}$$

(ii) If \(R\eta (t)<1\) and \(R\xi (t)<1\), assuming without loss of generality \(\hat{s}_2(t)\le \hat{\sigma }_2(t)\), then

$$\begin{aligned} |Q(R,t)|= & {} |\hat{v}^s(R\eta (t)\hat{s}_2(t), t)-\hat{v}^\sigma (R\xi (t) \hat{\sigma }_2(t),t)|\\\le & {} |\hat{v}^s(R\eta (t)\hat{s}_2(t),t)-\hat{v}^\sigma (R\eta (t) \hat{s}_2(t),t)|\\&\; +|\hat{v}^\sigma (R\eta (t)\hat{s}_2(t), t)-\hat{v}^\sigma (R\xi (t) \hat{\sigma }_2(t),t)|\\\le & {} C\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}\\&\; +\Vert \hat{v}^\sigma \Vert _{C^{1+\alpha ,(1+\alpha )/2}({D_{T,\sigma }^2})}\Vert \hat{s}_1-\hat{\sigma }_1\Vert _{C([0,T])}\\\le & {} \tilde{C}\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}. \end{aligned}$$

(iii) If \(R\eta (t)<1\le R\xi (t)\) and \(R\eta (t)\hat{s}_2(t)\le \hat{\sigma }_2(t)\), then

$$\begin{aligned} |Q(R,t)|= & {} |\hat{v}^s(R\eta (t)\hat{s}_2(t), t)-\hat{v}^\sigma (R\xi (t) \hat{\sigma }_2(t),t)|\\= & {} |\hat{v}^s(R\eta (t) \hat{s}_2(t),t)|\\\le & {} |\hat{v}^s(R\eta (t)\hat{s}_2(t), t)-\hat{v}^\sigma (R\eta (t)\hat{s}_2(t),t)|\\&\; +|\hat{v}^\sigma (R\eta (t)\hat{s}_2(t),t)-\hat{v}^\sigma (\hat{\sigma }_2(t),t)|\\\le & {} C\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}\\&\;+\Vert \hat{v}^\sigma \Vert _{C^{1+\alpha ,(1+\alpha )/2}({D_{T,\sigma }^2})}|R\eta (t)\hat{s}_2(t)-\hat{\sigma }_2(t)|. \end{aligned}$$

From \(R\eta (t)<1\le R\xi (t)\) and \(R\eta (t)\hat{s}_2(t)\le \hat{\sigma }_2(t)\) we obtain

$$\begin{aligned} |R\eta (t)\hat{s}_2(t)-\hat{\sigma }_2(t)|\le & {} \left| \frac{\eta (t)}{\xi (t)}\hat{s}_2(t)-\hat{\sigma }_2(t)\right| \\\le & {} \Vert 1/\xi \Vert _{C([0,T])}|\eta (t)\hat{s}_2(t)-\xi (t)\hat{\sigma }_2(t)|\\= & {} \Vert 1/\xi \Vert _{C([0,T])}|\hat{s}_1(t)-\hat{\sigma }_1(t)|. \end{aligned}$$

Thus in this case we also have

$$\begin{aligned} |Q(R,t)|\le \tilde{C}\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}. \end{aligned}$$

(iv) If \(R\eta (t)<1\le R\xi (t)\) and \(R\eta (t)\hat{s}_2(t)>\hat{\sigma }_2(t)\), then

$$\begin{aligned} |R\eta (t)\hat{s}_2(t)-\hat{s}_2(t)|\le & {} |\hat{\sigma }_2(t)-\hat{s}_2(t)| \text{ and } \\ |Q(R,t)|= & {} |\hat{v}^s(R\eta (t) \hat{s}_2(t),t)|\\= & {} |\hat{v}^s(R\eta (t)\hat{s}_2(t), t)-\hat{v}^s (\hat{s}_2(t),t)|\\\le & {} \Vert \hat{v}^s\Vert _{C^{1+\alpha ,(1+\alpha )/2}({D_{T,s}^2})}|R\eta (t)\hat{s}_2(t)-\hat{s}_2(t)|\\\le & {} \Vert \hat{v}^s\Vert _{C^{1+\alpha ,(1+\alpha )/2}({D_{T,s}^2})}|\hat{s}_2(t)-\hat{\sigma }_2(t)|\\\le & {} \tilde{C}\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}. \end{aligned}$$

(v) If \(R\eta (t)\ge 1> R\xi (t)\), we are in a symmetric situation to cases (iii) and (iv) above, so we similarly obtain

$$\begin{aligned} |Q(R,t)|\le \tilde{C}\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}. \end{aligned}$$
(3.17)

Thus (3.17) holds in all cases. It follows that

$$\begin{aligned} \Vert Q\Vert _{C([0,1]\times [0,T])}\le \tilde{C}\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}. \end{aligned}$$

We thus obtain from (3.16) that

$$\begin{aligned} \Vert F\Vert _{L^p([0,1]\times [0,T])}\le \tilde{C}\sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}. \end{aligned}$$
(3.18)

We may now substitute (3.14), (3.15) and (3.18) into (3.13) to obtain

$$\begin{aligned} \Vert P\Vert _{C^{1+\alpha , (1+\alpha )/2}([0,1]\times [0,T])}\le \tilde{C}\left( \Vert \hat{s}_1'-\hat{\sigma }_1'\Vert _{C([0,T])}+ \sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}\right) . \end{aligned}$$

It thus follows from (3.12) that

$$\begin{aligned} \Vert \bar{s}'_1-\bar{\sigma }'_1\Vert _{C^{\frac{1+\alpha }{2}}([0,T])}\le \tilde{C}' \left( \Vert \hat{s}_1'-\hat{\sigma }_1'\Vert _{C([0,T])}+ \sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}\right) . \end{aligned}$$

Since \(\bar{s}'_1(0)-\bar{\sigma }_1'(0)=0\), this implies

$$\begin{aligned} \Vert \bar{s}'_1-\bar{\sigma }'_1\Vert _{C([0,T])}\le T^{\frac{1+\alpha }{2}}\tilde{C}' \left( \Vert \hat{s}_1'-\hat{\sigma }_1'\Vert _{C([0,T])}+ \sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}\right) . \end{aligned}$$

Hence for \(T>0\) sufficiently small we have

$$\begin{aligned} \Vert \bar{s}'_1-\bar{\sigma }'_1\Vert _{C([0,T])}\le T^{\frac{1+\alpha }{2}}\hat{C}_1 \sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])} \end{aligned}$$
(3.19)

with \(\hat{C}_1>0\) depending only on \(\alpha , M\) and the Lipschitz constant of f.

In a similar manner, we can straighten \(r=\hat{s}_2(t)\) and \(r=\hat{\sigma }_2(t)\) to obtain

$$\begin{aligned} \Vert \bar{s}'_2-\bar{\sigma }'_2\Vert _{C([0,T])}\le T^{\frac{1+\alpha }{2}}\hat{C}_2 \sum _{i=1}^2\Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])} \end{aligned}$$
(3.20)

with \(\hat{C}_2>0\) depending only on \(\alpha , M\) and the Lipschitz constant of g.

Finally, using \(\hat{s}_i(0)=\hat{\sigma }_i(0)=s_i^0\), \(i=1,2\), we see that

$$\begin{aligned} \Vert \hat{s}_i-\hat{\sigma }_i\Vert _{C([0,T])}\le T \Vert \hat{s}'_i-\hat{\sigma }'_i\Vert _{C([0,T])}, \ i=1,2. \end{aligned}$$
(3.21)

Combining (3.19), (3.20) and (3.21), we see that \(\mathcal {F}\) is a contraction mapping as long as \(T>0\) is sufficiently small. Hence \(\mathcal {F}\) has a unique fixed point \((s_1,s_2)\in \Sigma _T^1\times \Sigma _T^2\) for such T.

Let \((u,v)\) be the unique fixed point of \(\mathcal {G}\) in \(X_T^1(s_1, s_2)\times X_T^2(s_1, s_2)\); then it is easily seen that \((u,v,s_1,s_2)\) is the unique solution of (3.1). Furthermore, (3.2) holds because of (3.7) and (3.9). We have now completed the proof of Theorem 4.\(\square \)

By Theorem 4 and the Schauder estimate, we see that the solution of (P) defined for \(t\in [0,T]\) is actually a classical solution.

3.2 Global existence

In this subsection, we show that the unique local solution of (3.1) can be extended to all positive time if the following extra assumption is imposed:

(H2):

There exists a positive constant K such that \(f(r,t,u,v)\le K(u+v)\) and \(g(r,t,u,v)\le K(u+v)\) for \(r,t, u,v\ge 0\).

Theorem 5

Under the assumptions of Theorem 4 and (H2), problem (3.1) has a unique solution defined for all \(t>0\).

Proof

The proof is similar to that of [7, Theorem 2.4]. For the reader's convenience, we present a brief proof. Let \([0,T_*)\) be the maximal time interval on which the unique solution of (3.1) exists. By Theorem 4, \(T_*>0\). By the strong maximum principle, we see that \(u(r,t)>0\) in \([0,s_1(t))\times [0,T_*)\) and \(v(r,t)>0\) in \([0,s_2(t))\times [0,T_*)\). We will show that \(T_*=\infty \). Arguing by contradiction, we assume that \(T_*<\infty \). Consider the following ODEs

$$\begin{aligned}&dU/dt=K(U+V),\quad t>0,\quad U(0)=\Vert u_0\Vert _{L^{\infty }([0,s_1^0])},\\&dV/dt=K(U+V),\quad t>0,\quad V(0)=\Vert v_0\Vert _{L^{\infty }([0,s_2^0])}. \end{aligned}$$

Take \(M^*>T_*\). Clearly,

$$\begin{aligned} 0<U(t)+V(t)<\Big (\Vert u_0\Vert _{L^{\infty }([0,s_1^0])}+\Vert v_0\Vert _{L^{\infty }([0,s_2^0])}\Big )e^{2KM^*}=:C_1,\quad t\in [0,T_*). \end{aligned}$$
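Indeed, \(W:=U+V\) satisfies \(W'=2KW\), so

$$\begin{aligned} U(t)+V(t)=\big (\Vert u_0\Vert _{L^{\infty }([0,s_1^0])}+\Vert v_0\Vert _{L^{\infty }([0,s_2^0])}\big )e^{2Kt}<C_1 \quad \text{ for } t\in [0,T_*)\subset [0,M^*). \end{aligned}$$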

By (H2), we can compare (uv) with (UV) to obtain

$$\begin{aligned} \Vert u\Vert _{L^{\infty }([0,s_1(t)]\times [0,T_*))}+\Vert v\Vert _{L^{\infty }([0,s_2(t)]\times [0,T_*))}\le C_1. \end{aligned}$$

Next, we can argue as in [6, Lemma 2.2] to derive

$$\begin{aligned} 0<s_i'(t)\le C_2,\quad t\in (0,T_*),\ i=1,2 \end{aligned}$$

for some \(C_2\) independent of \(T_*\). Furthermore, we have

$$\begin{aligned} s_i^0\le s_i(t)\le s_i^0+C_2t\le s_i^0+C_2M^*,\quad t\in [0,T_*),\ i=1,2. \end{aligned}$$

Taking \(\epsilon \in (0,T_*)\), by standard parabolic regularity, there exists \(C_3>0\) depending only on K, \(M^*\), \(C_1\) and \(C_2\) such that

$$\begin{aligned} \Vert u(\cdot ,t)\Vert _{C^2([0,s_1(t)])}+\Vert v(\cdot ,t)\Vert _{C^2([0,s_2(t)])}\le C_3,\quad t\in [\epsilon ,T_*). \end{aligned}$$

By Theorem 4, there exists \(\tau >0\) depending only on K, \(M^*\) and \(C_i\) (\(i=1,2,3\)) such that the solution of problem (3.1) with initial time \(T_*-\tau /2\) can be extended uniquely to the time \(T_*+\tau /2\), which contradicts the definition of \(T_*\). This completes the proof of Theorem 5. \(\square \)

3.3 Proof of Theorem 2

Proof of Theorem 2

Define

$$\begin{aligned} s_*=R^*\sqrt{\frac{d}{r}},\quad s^*=R^*\sqrt{\frac{d}{r}}\frac{1}{\sqrt{1-k}},\quad s^{**}=R^*. \end{aligned}$$

First, following the same lines as in [15, Theorem 2] with some minor changes, we can prove the following three results:

(i) If \(s_{1,\infty }\le s_*\), then u vanishes eventually. In this case, v spreads successfully (resp. vanishes eventually) if \(s_{2,\infty }>s^{**}\) (resp. \(s_{2,\infty }\le s^{**}\)).

(ii) If \(s_*<s_{1,\infty }\le s^*\), then u vanishes eventually, and v spreads successfully.

(iii) If \(s_{1,\infty }> s^*\), then u spreads successfully.

Next, we shall show

(iv) If \(s_{1,\infty }> s^*\) and \((\mu _1,\mu _2)\in \mathcal {B}\), then u spreads successfully and v vanishes eventually.

By a simple comparison consideration we see that

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{s_2(t)}{t}\le s^*_{\mu _2}. \end{aligned}$$
(3.22)
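This follows, for instance, by comparing \((v,s_2)\) with the solution \((\overline{v},\overline{s}_2)\) of the corresponding one-species free boundary problem with the same \(\mu _2\) and the same initial data \((v_0,s_2^0)\) (a sketch relying on the one-species results recalled earlier):

$$\begin{aligned} \overline{v}_t=\Delta \overline{v}+\overline{v}(1-\overline{v}) \text{ for } 0<r<\overline{s}_2(t),\ t>0,\qquad \overline{s}_2'(t)=-\mu _2 \overline{v}_r(\overline{s}_2(t),t) \text{ for } t>0. \end{aligned}$$

Since \(v_t\le \Delta v+v(1-v)\), the comparison principle gives \(v\le \overline{v}\) and \(s_2(t)\le \overline{s}_2(t)\), while \(\limsup _{t\rightarrow \infty }\overline{s}_2(t)/t\le s^*_{\mu _2}\) by the known one-species spreading estimates.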

By (iii), we see that u spreads successfully and so \(s_{1,\infty }=\infty \). It follows that there exists \(T\gg 1\) such that

$$\begin{aligned} s_1(T)\ge R^*\sqrt{\frac{d}{r}}\frac{1}{\sqrt{1-k}}. \end{aligned}$$

This allows us to use a similar argument to that leading to (2.49) but taking T as the initial time to obtain

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{s_1(t)}{t}\ge c^*_{\mu _1}. \end{aligned}$$
(3.23)

To show that \(s_{2,\infty }<\infty \) we argue by contradiction and assume \(s_{2,\infty }=\infty \). Since \((\mu _1,\mu _2)\in \mathcal {B}\), from (3.23) and (3.22) we can find \(\tau \gg 1\) and \(\hat{c}\) such that

$$\begin{aligned} s^*_{\mu _2}<\hat{c}<c^*_{\mu _1},\quad s_2(t)<\hat{c}t<s_1(t)\quad \text{ for } \text{ all }\, t\ge \tau . \end{aligned}$$

Then by the same process used in deriving the second identity in (2.45), we have

$$\begin{aligned} \lim _{t\rightarrow \infty } \Big [\max _{r\in [0, \hat{c}t]} |u(r,t)-1|\Big ]= 0. \end{aligned}$$
(3.24)

Also, since \(h>1\), \(s_2(t)<\hat{c}t\) and, by (3.24), \(hu(r,t)>1\) for \(r\in [0,\hat{c}t]\) and all large t, there exists \(\hat{\tau }> \tau \) such that

$$\begin{aligned} v_t=\Delta v+v(1-v-hu)\le \Delta v,\quad 0<r<s_2(t),\ t\ge \hat{\tau }, \end{aligned}$$

which implies \(s_{2,\infty }<\infty \) by a simple comparison argument (cf. [15, Theorem 3]), contradicting our assumption that \(s_{2,\infty }=\infty \). Hence \(s_{2,\infty }<\infty \). Finally, using \(s_{2,\infty }<\infty \) we can show \(\lim _{t\rightarrow \infty }\Vert v(\cdot ,t)\Vert _{C([0,s_2(t)])}=0\) (cf. [15, Lemma 3.4]). Hence v vanishes eventually, and (iv) follows.

The conclusions of Theorem 2 follow easily from (i)–(iv). \(\square \)