1 Introduction and Main Results

In this paper, we study spreading properties for the solutions of the following non-autonomous and non-local one-dimensional equation

$$\begin{aligned} \partial _t u(t,x)= \int _{\mathbb {R}} K(y) \left[ u(t,x-y) -u(t,x) \right] \textrm{d}y + { u(t,x)f \left( t, u(t,x)\right) }, \end{aligned}$$
(1.1)

posed for time \(t\ge 0\) and \(x\in \mathbb {R}\). This evolution problem is supplemented with an appropriate initial data that will be discussed below. Here, \(K=K(y)\) is a nonnegative and thin-tailed dispersal kernel (see Assumption 1.3). Let us set \(F(t,u):= uf(t,u)\). The function \(F=F(t,u)\) stands for the nonlinear growth term, which depends on time t and will be assumed in this note to be of Fisher–KPP type (see Assumption 1.5). The above problem typically describes the spatial invasion of a population (see, for instance, (Berestycki et al. 2016; Lutscher et al. 2005) and the references cited therein) with the following features:

  1. (1)

    individuals exhibit long distance dispersal according to the kernel K; in other words, the quantity \(K(x-y)\) corresponds to the probability for individuals to jump from y to x;

  2. (2)

    time-varying birth and death processes modelled by the nonlinear Fisher–KPP-type function \(F(t,u)\). The time variations may stand for seasonality and/or external events (see Jin and Lewis (2012)). A simple numerical illustration of this setting is sketched right after this list.
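The following minimal numerical sketch (not part of the paper, with purely illustrative parameters) shows one way to simulate (1.1): it assumes a Gaussian dispersal kernel, the logistic nonlinearity \(F(t,u)=\mu (t)u(1-u)\) of (1.12) below and a compactly supported initial data, discretizes the convolution on a uniform grid and advances the solution by forward Euler in time.

```python
# Minimal simulation sketch for (1.1); hypothetical parameters, not from the paper.
import numpy as np

dx, dt, T = 0.1, 0.01, 40.0
x = np.arange(-200.0, 200.0, dx)                 # truncated spatial domain
y = np.arange(-10.0, 10.0 + dx, dx)              # truncated kernel support
K = np.exp(-y**2 / 2.0) / np.sqrt(2.0 * np.pi)   # thin-tailed (Gaussian) kernel
Kbar = K.sum() * dx                              # approximately 1
mu = lambda t: 2.0 + np.sin(t)                   # bounded, uniformly continuous, least mean 2 > Kbar

u = np.where(np.abs(x) <= 1.0, 1.0, 0.0)         # compactly supported initial data
for n in range(int(T / dt)):
    t = n * dt
    conv = np.convolve(u, K, mode="same") * dx   # int_R K(y) u(t, x - y) dy
    u = np.clip(u + dt * (conv - Kbar * u + mu(t) * u * (1.0 - u)), 0.0, 1.0)

# Rightmost point where u exceeds 1/2: a rough proxy for the front position; its
# ratio with T gives an empirical spreading speed.
print("empirical speed ~", x[u > 0.5].max() / T)
```

The empirical speed obtained in this way is only indicative (it includes the initial transient and discretization errors), but it can be compared with the theoretical speed \(c_r^*\) introduced in Proposition 1.8 below.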

When local diffusion is considered, the Fisher–KPP equation posed in a time homogeneous medium reads as

$$\begin{aligned} \partial _t u(t,x)= \partial _{xx} u(t,x) + F(u(t,x)). \end{aligned}$$
(1.2)

As mentioned above, this problem arises as a basic model in many different fields, in biology and ecology in particular. It can be used, for instance, to describe the spatio-temporal evolution of an invading species into an empty environment. The above equation (1.2) was introduced separately by Fisher (1937) and Kolmogorov et al. (1937), when the nonlinear function F satisfies the Fisher–KPP conditions. Recall that a typical example of such Fisher–KPP nonlinearity is given by the logistic function \(F(u)= u(1-u)\).

There is a large amount of literature related to (1.2) and its generalizations. To study propagation phenomena generated by reaction–diffusion equations, in addition to the existence of travelling wave solutions, the asymptotic speed of spread (or spreading speed) was introduced and studied in Aronson and Weinberger (1978). Roughly speaking, if \(u_0\) is a non-trivial and nonnegative initial data with compact support, then the solution of (1.2) associated with this initial data \(u_0\) spreads with the speed \(c^*>0\) (the minimal wave speed of the travelling waves) in the sense that

$$\begin{aligned} \lim _{t\rightarrow \infty } \sup _{|x|\le c t} | u(t,x)-1|=0, \; \forall \; c \in [0,c^*) \; \text { and } \; \lim _{t\rightarrow \infty } \sup _{|x|\ge c t} u(t,x)=0, \; \forall \; c>c^*. \end{aligned}$$

This concept of spreading speed has been further developed by several researchers in the last decades from different viewpoints, including PDE arguments, dynamical systems theory, probability theory, mathematical biology, etc. Spreading speeds of KPP-type reaction–diffusion equations in homogeneous and periodic media have been extensively studied (see Berestycki et al. (2005); Fang et al. (2017); Liang et al. (2006); Liang and Zhao (2007); Weinberger (1982, 2002) and the references cited therein). There is also an extensive literature on spreading phenomena for reaction–diffusion systems. We refer, for instance, to (Ambrosio et al. 2021; Ducrot et al. 2019; Girardin and Lam 2019) and the references cited therein.

Recently, spreading properties for KPP-type reaction–diffusion equations in more general environments have attracted a lot of attention, see (Berestycki et al. 2008; Berestycki and Nadin 2019; Nadin and Rossi 2012; Shen 2010) and the references cited therein. In particular, Nadin and Rossi (2012) studied spreading properties for KPP equation with local diffusion and general time heterogeneities. Furthermore, they obtained a definite spreading speed when the coefficients share some averaging properties.

The spreading properties of non-local diffusion equations such as (1.1) have attracted a lot of interest in the last decades. Since the semiflow generated by non-local diffusion equations does not enjoy any regularization effect, this brings additional difficulties. Fisher–KPP equations or monostable problems in homogeneous environments have been studied from various points of view: wave front propagation (see Coville and Dupaigne (2005); Schumacher (1980) and the references cited therein), hair trigger effect and spreading speed (see Alfaro 2017; Cabré and Roquejoffre 2013; Diekmann 1979; Finkelshtein and Tkachov 2018; Lutscher et al. 2005; Xu et al. 2021 and the references cited therein). For thin-tailed kernels, we refer, for instance, to Lutscher et al. (2005) and to the recent work Xu et al. (2021), where a new sub-solution has been constructed to provide a lower bound for the spreading speed. Note also that the aforementioned work deals with possibly non-symmetric kernels, for which the propagation speeds on the left- and the right-hand side of the domain can be different. For fat-tailed dispersal kernels, the propagating behaviour of the solutions can be very different from the one observed with thin-tailed kernels. Acceleration may occur. We refer to Finkelshtein and Tkachov (2019); Garnier (2011) for fat-tailed kernels and to Cabré and Roquejoffre (2013) for fractional Laplace-type dispersion.

Recently, wave propagation and spreading speeds for non-local diffusion problems with time and/or space heterogeneities have been considered. Existence and non-existence of generalized travelling wave solutions have been discussed in Ducrot and Jin (2022); Jin and Zhao (2009); Lim and Zlatoš (2016); Shen and Shen (2016) and the references cited therein. For spreading speed results, we refer the reader to Jin and Lewis (2012); Jin and Zhao (2009); Liang and Zhou (2020); Shen and Zhang (2010) and the references cited therein. We also refer to Bao et al. (2018); Xu et al. (2020); Zhang and Zhao (2020) for the analysis of the spreading speed for systems with non-local diffusion.

As far as monotone problems are concerned, one may apply the well-developed monotone semiflow method to study the spreading speed for non-local diffusion problems. We refer the reader to Liang and Zhao (2007); Weinberger (1982) and to Jin and Lewis (2012); Jin and Zhao (2009) for time periodic systems.

In this work, we provide a new approach, based on the construction of suitable propagating paths (namely, functions \(t\mapsto X(t)\) with \(\liminf _{t\rightarrow \infty } u(t,X(t))>0\)) coupled with what we call a persistence lemma (see Lemma 2.6) for uniformly continuous solutions, to obtain a lower estimate of the propagation set. This lemma roughly states that controlling the solution from below at \(x=0\) and at \(x=X(t)\) for \(t\gg 1\) implies a control of the solution \(u=u(t,x)\) from below on the whole interval \(x\in [0,kX(t)]\) for some \(k\in (0,1)\) and \(t\gg 1\). The proof of this lemma does not make use of the properties of the tail of the kernel, so that we expect our key persistence lemma to also apply to the study of acceleration phenomena for fat-tailed dispersal kernels. However, the uniform continuity of the solutions is important for our proof, and this property remains delicate to check. Here, we are able to prove such a property for some specific initial data and logistic-type nonlinearities. Note that in Li et al. (2010) the authors consider this regularity problem. They show that when the nonlinear term satisfies \(F_u(u) < {\overline{K}}\) for any \(u\ge 0\), where \({\overline{K}}= \int _{\mathbb {R}}K(y)\textrm{d}y\), then the solutions of the homogeneous problem inherit the Lipschitz continuity property from their initial data, with a control of the Lipschitz constant for all times \(t\gg 1\). In this note, we prove the uniform continuity of some solutions when the above condition fails (see Assumption 1.5 (f4)). This point is studied in Sect. 3.1, where we provide a class of initial data for which the solutions (of the non-local logistic equation) are uniformly continuous on \([0,\infty )\times {\mathbb {R}}\).

Now to state our results, we first introduce some notations and present our main assumptions. Let us define the important notion of the least mean value for a bounded function.

Definition 1.1

Throughout this work, for any given function \(h\in L^\infty (0, \infty ;\mathbb {R})\), we define

$$\begin{aligned} \lfloor h \rfloor := \lim \limits _{T\rightarrow +\infty }\inf _{s> 0} \frac{1}{T} \int _{0}^{T} h(t+s) \textrm{d}t. \end{aligned}$$
(1.3)

In that case, the quantity \(\lfloor h \rfloor \) is called the least mean of the function h (over \((0,\infty )\)).

Assume that h admits a mean value \(\langle h \rangle \), that is, that there exists

$$\begin{aligned} \langle h \rangle := \lim \limits _{T\rightarrow +\infty } \frac{1}{T} \int _{0}^{T} h(t+s) \textrm{d}t, \text { uniformly with respect to } s\ge 0. \end{aligned}$$
(1.4)

Then, \(\lfloor h \rfloor = \langle h \rangle \). In particular, time periodic, almost periodic and uniquely ergodic coefficients admit a mean value. For the definition and properties of almost periodic and uniquely ergodic functions, we refer the reader to Matano and Nara (2011). An equivalent and useful characterization of the least mean defined above is given in the next lemma.

Lemma 1.2

(See Nadin and Rossi (2012, 2015)) Let \(h\in L^\infty (0,\infty ;\mathbb {R})\) be given. Then, one has

$$\begin{aligned} \lfloor h \rfloor =\sup _{a\in W^{1, \infty }(0, \infty ) }\inf _{t>0} \left( a' + h\right) (t). \end{aligned}$$
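For illustration, the least mean (1.3) can be approximated numerically by computing sliding averages over windows of increasing length T. The short Python sketch below (a hypothetical example, not taken from the paper) does so for the almost periodic function \(h(t)= 2+\sin t+\frac{1}{2} \sin (\sqrt{2}\, t)\), whose least mean coincides with its mean value 2.

```python
# Numerical approximation of the least mean (1.3) of a bounded function h.
import numpy as np

def least_mean(h, T, s_max, dt=0.01):
    """Approximate inf_{0 < s < s_max} (1/T) int_0^T h(t+s) dt for one fixed T."""
    t = np.arange(0.0, T + s_max, dt)
    H = np.concatenate(([0.0], np.cumsum(h(t)) * dt))   # primitive of h on the grid
    nT = int(round(T / dt))
    s = np.arange(int(round(s_max / dt)))                # admissible shifts s
    return ((H[s + nT] - H[s]) / T).min()                # smallest sliding average

h = lambda t: 2.0 + np.sin(t) + 0.5 * np.sin(np.sqrt(2.0) * t)   # almost periodic

for T in (10.0, 100.0, 1000.0):
    print(T, least_mean(h, T, s_max=200.0))
# As T grows, the printed values approach 2, in agreement with the fact that the
# least mean and the mean value coincide for this h.
```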

We are now able to present the main assumptions that will be needed in this note. First we assume that the kernel \(K=K(y)\) enjoys the following set of properties:

Assumption 1.3

(Kernel \(K\!=\!K(y)\)) We assume that the kernel \(K\!:\! \mathbb {R}\!\rightarrow \! [0,\infty )\) satisfies the following set of assumptions:

  1. (i)

    The function \(y\mapsto K(y)\) is nonnegative, continuous and integrable;

  2. (ii)

    There exists \(\alpha >0\) such that

    $$\begin{aligned} \int _\mathbb {R}K(y)e^{\alpha y}dy<\infty . \end{aligned}$$
  3. (iii)

    We also assume that \(K(0)>0\).

Remark 1.4

Note that here we do not impose that the kernel function is symmetric. We focus on the propagation to the right-hand side of the spatial domain. Thus in (ii), we only assume the kernel is thin-tailed on the right-hand side.

Since K is continuous and \(K(0)>0\), there exist \(\delta >0\) and a function \(k:\mathbb {R}\rightarrow [0,\infty )\), continuous, even and compactly supported, such that

$$\begin{aligned} \begin{aligned}&\textrm{supp}\;k=[-\delta ,\delta ],\;k(y)>0,\;\forall y\in (-\delta ,\delta ),\\&k(y)\le K(y) \text { and } k(y)=k(-y), \; \forall y\in \mathbb {R}. \end{aligned} \end{aligned}$$
(1.5)

This property will allow us to control the solution on bounded sets, around \(x=0\).

Now, we discuss our Fisher–KPP assumptions for the nonlinear term \(F(t,u)=uf(t,u)\).

Assumption 1.5

(KPP nonlinearity) Assume that the function \(f:[0, \infty )\times [0,1]\rightarrow \mathbb {R}\) satisfies the following set of hypotheses:

  1. (f1)

    \(f(\cdot , u)\in L^\infty (0, \infty ;\mathbb {R})\), for all \(u\in [0,1]\), and f is Lipschitz continuous with respect to \(u\in [0,1]\), uniformly with respect to \(t\ge 0\);

  2. (f2)

    We assume that \(f(t,1)=0\) for a.e. \(t\ge 0 \). Setting \(\mu (t):=f(t,0)\), we assume that \(\mu (\cdot )\) is bounded and uniformly continuous. Also, we require that

    $$\begin{aligned} h(u):=\inf _{t\ge 0} f(t,u)>0 \hbox { for all}\ u\in [0,1); \end{aligned}$$
  3. (f3)

    For almost every \(t\ge 0\), the function \(u\mapsto f(t,u)\) is non-increasing on [0, 1];

  4. (f4)

    Set \({\overline{K}}:= \int _\mathbb {R}K(y)\textrm{d}y\). The least mean of the function \(\mu \) satisfies

    $$\begin{aligned} \lfloor \mu \rfloor >{\overline{K}}. \end{aligned}$$

Remark 1.6

Here, we assume that the steady states are \(p^-= 0\) and \(p^+ =1\). These assumptions can be relaxed by a change of variables, so as to take into account time-dependent steady states \(p^-=p^-(t)\) and \(p^+= p^+(t)\). Indeed, as soon as \(p^+(t)-p^-(t)\) is bounded and \(\inf _{t\ge 0} \left( p^+(t) -p^-(t)\right) >0\), one can set

$$\begin{aligned} {\tilde{u}}(t,x):= \frac{u(t,x) -p^-(t)}{p^+(t) -p^-(t)}. \end{aligned}$$

This change of variables reduces the equation with heterogeneous steady states to an equation with steady states 0 and 1.

Remark 1.7

From the above assumption, one can note that

$$\begin{aligned} \inf _{t\ge 0}\mu (t)=h(0)>0. \end{aligned}$$

Next this assumption also implies that there exists some constant \(C>0\) such that for all \(u\in [0,1]\) and \(t\ge 0\) one has

$$\begin{aligned} \mu (t)\ge f(t,u)\ge \mu (t) - Cu \ge \mu (t)(1- Hu), \end{aligned}$$
(1.6)

where we have set \(H:= \sup \limits _{t\ge 0} \frac{C}{\mu (t)}=\frac{C}{h(0)}\).
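For completeness, a short justification of (1.6) (not spelled out in the above remark) reads as follows. The monotonicity (f3) and the Lipschitz continuity (f1), with some Lipschitz constant \(C>0\), yield

$$\begin{aligned} f(t,u)\le f(t,0)=\mu (t) \quad \text { and } \quad f(t,u)\ge f(t,0)-Cu=\mu (t)-Cu, \end{aligned}$$

while \(\mu (t)-Cu=\mu (t)\left( 1-\frac{C}{\mu (t)}u\right) \ge \mu (t)\left( 1-Hu\right) \), since \(\mu (t)\ge h(0)\) for all \(t\ge 0\).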

Let us now define some notations related to the speed function that will be used in the following. We define \(\sigma (K)\), the abscissa of convergence of K, by

$$\begin{aligned} \sigma \left( K\right) :=\sup \left\{ \gamma >0:\;\int _\mathbb {R}K(y)e^{\gamma y}{\text {d}}y<\infty \right\} . \end{aligned}$$

Assumption 1.3 (ii) yields that \(\sigma (K)\in (0,\infty ]\). We set

$$\begin{aligned} L(\lambda ):= \int _{\mathbb {R}} K( y) [e^{\lambda y} -1 ] \textrm{d}y, \; \lambda \in \left[ 0, \sigma (K) \right) , \end{aligned}$$
(1.7)

as well as, for \(\lambda \in (0, \sigma (K))\) and \(t\ge 0\),

$$\begin{aligned} c(\lambda )(t):= \lambda ^{-1}L(\lambda )+\lambda ^{-1}\mu (t). \end{aligned}$$
(1.8)

For a given function \(a\in W^{1, \infty }(0,\infty )\), denote by \(c_{\lambda ,a}\) the function given by

$$\begin{aligned} c_{\lambda , a}(t):= c(\lambda )(t) + a'(t), \; \lambda \in (0, \sigma (K)),\;t\ge 0. \end{aligned}$$
(1.9)

Obviously, it follows from Definition 1.1 that \( \lfloor c_{\lambda , a}(\cdot ) \rfloor = \lfloor c(\lambda )(\cdot ) \rfloor \) for each \(\lambda \in (0, \sigma (K))\). Next note that

$$\begin{aligned} \lfloor c(\lambda )(\cdot ) \rfloor = \lambda ^{-1} L(\lambda ) + \lambda ^{-1} \lfloor \mu \rfloor . \end{aligned}$$
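As an illustration of these quantities, the following Python sketch (a hypothetical example, not from the paper) evaluates \(\lambda \mapsto \lfloor c(\lambda )(\cdot )\rfloor \) for a standard Gaussian kernel, for which \({\overline{K}}=1\) and \(\sigma (K)=+\infty \), and a heterogeneity with least mean \(\lfloor \mu \rfloor =2>{\overline{K}}\) (for instance \(\mu (t)=2+\sin t\)); it then locates numerically the quantities \(\lambda _r^*\) and \(c_r^*\) introduced in Proposition 1.8 below and checks formula (1.10).

```python
# Numerical sketch of L(lambda), floor(c(lambda)), lambda_r^* and c_r^*;
# hypothetical kernel and heterogeneity, not taken from the paper.
import numpy as np

y = np.linspace(-20.0, 20.0, 40001)              # quadrature grid for the kernel
dy = y[1] - y[0]
K = np.exp(-y**2 / 2.0) / np.sqrt(2.0 * np.pi)   # Gaussian kernel, Kbar = 1
mu_least_mean = 2.0                              # least mean of mu(t) = 2 + sin t

def L(lam):
    # L(lambda) = int_R K(y) (exp(lambda y) - 1) dy, see (1.7)
    return (K * (np.exp(lam * y) - 1.0)).sum() * dy

def c_least_mean(lam):
    # floor(c(lambda)) = (L(lambda) + floor(mu)) / lambda
    return (L(lam) + mu_least_mean) / lam

lams = np.linspace(0.05, 3.0, 2000)
vals = np.array([c_least_mean(l) for l in lams])
i = vals.argmin()
lam_star, c_star = lams[i], vals[i]
print("lambda_r^* ~", lam_star, "  c_r^* ~", c_star)

# Consistency check with (1.10): c_r^* = int_R K(y) exp(lambda_r^* y) y dy.
print("rhs of (1.10) ~", (K * np.exp(lam_star * y) * y).sum() * dy)
```

For this kernel, \(L(\lambda )=e^{\lambda ^2/2}-1\), so both printed speeds agree, up to discretization errors, with the minimum of \(\lambda \mapsto (e^{\lambda ^2/2}+1)/\lambda \).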

Now, we state some properties of \(\lfloor c(\lambda )(\cdot ) \rfloor \) in the following proposition.

Proposition 1.8

Let Assumptions 1.3 and 1.5 be satisfied. Then, the following properties hold:

  1. (i)

    The map \(\lambda \mapsto \left\lfloor c(\lambda )(\cdot ) \right\rfloor \) is of class \(C^1\) from \((0, \sigma (K)) \) into \(\mathbb {R}\).

  2. (ii)

    Set \( \displaystyle c^*_r:= \inf _{\lambda \in (0, \sigma (K))} \lfloor c(\lambda )(\cdot ) \rfloor .\) There exists \(\lambda _r^*\in (0, \sigma (K)]\) such that

    $$\begin{aligned} \lim _{\lambda \rightarrow (\lambda _r^*)^-}\lfloor c(\lambda ) (\cdot ) \rfloor = c_r^*. \end{aligned}$$

    Moreover, one has \(c_r^*>0\) and the map \(\lambda \mapsto \lfloor c(\lambda )(\cdot ) \rfloor \) is decreasing on \((0, \lambda _r^*)\).

  3. (iii)

    Assume that \(\lambda _r^* < \sigma (K)\). One has

    $$\begin{aligned} c^*_r =\int _{\mathbb {R}} K(y) e^{\lambda _r^* y}y \textrm{d}y. \end{aligned}$$
    (1.10)

Proposition 1.8 has mostly been proved in Ducrot and Jin (2022, Proposition 2.8) for a more general kernel which may depend on t.

Here, we only explain that \( c_r^* > 0\). To see this, note that for \(\lambda \in (0,\sigma (K) )\) one has

$$\begin{aligned} \lambda c(\lambda )(t)=\int _{\mathbb {R}} K(y) e^{\lambda y}\textrm{d}y + \mu (t)- {\overline{K}},\;\forall t\ge 0. \end{aligned}$$

Next due to Assumption 1.5 (f4) and Lemma 1.2, there exists some function \(a\in W^{1,\infty }(0,\infty )\) such that \(\mu (t)-{\overline{K}}+a'(t)\ge 0\) for all \(t\ge 0\). This yields for all \(\lambda \in (0,\sigma (K) )\) and \(t\ge 0\),

$$\begin{aligned} \lambda c(\lambda )(t)+a'(t) = \int _{\mathbb {R}} K(y) e^{\lambda y}\textrm{d}y+\mu (t)-{\overline{K}} +a'(t) \ge \int _{\mathbb {R}} K(y) e^{\lambda y}\textrm{d}y>0, \end{aligned}$$

which, recalling that \(\lfloor a'\rfloor =0\), rewrites as \(c_r^*>0\). The result follows.

Remark 1.9

Let us point out that the assumption \(\lambda _r^* < \sigma (K)\) needed for (iii) to hold is satisfied, for instance, if we have

$$\begin{aligned} \limsup _{\lambda \rightarrow \sigma (K)^{-}} \frac{ L(\lambda ) }{\lambda } = +\infty . \end{aligned}$$
(1.11)

Indeed, one can observe that

$$\begin{aligned} \lfloor c(\lambda )(\cdot )\rfloor \sim \frac{\lfloor \mu \rfloor }{\lambda }\rightarrow +\infty \hbox { as}\ \lambda \rightarrow 0^+. \end{aligned}$$

In addition, if (1.11) holds, then \(\lfloor c(\lambda )(\cdot ) \rfloor \) is unbounded as \(\lambda \rightarrow \sigma (K)^-\), so that the decreasing property of the map \(\lambda \mapsto \lfloor c(\lambda ) (\cdot ) \rfloor \) on \((0, \lambda _r^*)\), as stated in Proposition 1.8 (ii), ensures that \(\lambda _r^* < \sigma (K)\).

To state our spreading result, we impose in the following that the condition discussed in the previous remark is satisfied, that is, \(\lambda _r^*\) is strictly smaller than the abscissa of convergence.

Assumption 1.10

In addition to Assumption 1.3, we assume that \(\lambda _r^* < \sigma (K)\).

Using the above properties for the speed function \(c(\lambda )(\cdot )\) and its least mean value, we are now able to state our main results.

Theorem 1.11

(Upper bounds) Let Assumptions 1.3, 1.5 and 1.10 be satisfied. Let \(u=u(t,x)\) denote the solution of (1.1) equipped with a continuous initial data \(u_0\), with \( 0\le u_0(\cdot )\le 1\) and \(u_0(\cdot ) \not \equiv 0\).

Then, the following upper estimates for the propagation set hold: If \(u_0(x)= O(e^{-\lambda x})\) as \(x\rightarrow \infty \) for some \(\lambda >0\), then one has

$$\begin{aligned} \lim _{t\rightarrow \infty } \sup _{x\ge \int _{0}^{t} c^{+}(\lambda )(s)\textrm{d}s + \eta t }u(t,x)=0,\;\forall \eta >0, \end{aligned}$$

where the function \(c^+(\lambda )(\cdot )\) is defined by

$$\begin{aligned} c^+(\lambda )(\cdot ):={\left\{ \begin{array}{ll} c(\lambda _r^*)(\cdot ) &{}\hbox { if}\ \lambda \ge \lambda _r^*,\\ c(\lambda )(\cdot ) &{}\hbox { if}\ \lambda \in (0,\lambda _r^*). \end{array}\right. } \end{aligned}$$

For the lower estimates of the propagation set, we first state our result for a specific function \(f=f(t,u)\) of the form \(f(t,u)=\mu (t)(1-u).\) In other words, we are considering the following non-autonomous logistic equation

$$\begin{aligned} \partial _{t} u(t,x) = \int _\mathbb {R}K(y) \left[ u(t,x-y) -u(t, x)\right] \textrm{d}y + \mu (t) u(t,x)\left( 1 - u(t,x) \right) .\nonumber \\ \end{aligned}$$
(1.12)

To enter the framework of Assumption 1.5, we assume that the function \(\mu \) satisfies the following conditions:

$$\begin{aligned} \begin{aligned}&t\mapsto \mu (t) \text { is uniformly continuous and bounded with } \inf _{t\ge 0} \mu (t)>0, \\&\text { and the least mean of } \mu (\cdot ) \text { satisfies } \lfloor \mu \rfloor >{\overline{K}}. \end{aligned} \end{aligned}$$
(1.13)

For this problem, our lower estimate of the propagation set reads as follows.

Theorem 1.12

(Lower bounds) Let Assumptions 1.3, 1.10 be satisfied and assume furthermore that \(\mu \) satisfies (1.13). Let \(u=u(t,x)\) denote the solution of (1.12) equipped with a continuous initial data \(u_0\), with \( 0\le u_0(\cdot )\le 1\) and \(u_0 (\cdot )\not \equiv 0\). Then, the following propagation occurs:

  1. (i)

    (Fast exponential decay case) If \(u_0(x)=O(e^{-\lambda x})\) as \(x\rightarrow \infty \) for some \(\lambda \ge \lambda _r^*\), then one has

    $$\begin{aligned} \lim _{t\rightarrow \infty } \sup _{x\in [0, ct]}\left| 1-u(t,x)\right| =0,\;\forall c\in (0,c_r^*); \end{aligned}$$
  2. (ii)

    (Slow exponential decay case) If \(\displaystyle \liminf _{x\rightarrow \infty } e^{\lambda x}u_0(x) > 0 \) for some \(\lambda \in (0,\lambda _r^*)\), then it holds that

    $$\begin{aligned} \lim _{t\rightarrow \infty } \sup _{x\in [0, ct]}\left| 1-u(t,x)\right| =0,\;\forall c\in \left( 0,\lfloor c(\lambda )\rfloor \right) . \end{aligned}$$

Next, as a consequence of the comparison principle, one obtains the following lower estimates of the propagation set to the right-hand side for more general nonlinearities satisfying Assumption 1.5.

Corollary 1.13

(Inner propagation) Let Assumptions 1.3, 1.5 and 1.10 be satisfied. Let \(u=u(t,x)\) denote the solution of (1.1) supplemented with a continuous initial data \(u_0\), with \( 0\le u_0(\cdot )\le 1\) and \(u_0 (\cdot )\not \equiv 0\). Then, the following propagation result holds true:

  1. (i)

    (Fast exponential decay case) If \(u_0(x)=O(e^{-\lambda x})\) as \(x\rightarrow \infty \) for some \(\lambda \ge \lambda _r^*\), then one has

    $$\begin{aligned} {\liminf _{t\rightarrow \infty } }\inf _{x\in [0, ct]} u(t,x)>0,\;\forall c\in (0,c_r^*); \end{aligned}$$
  2. (ii)

    (Slow exponential decay case) If \(\displaystyle \liminf _{x\rightarrow \infty } e^{\lambda x}u_0(x) > 0 \) for some \(\lambda \in (0,\lambda _r^*)\), then one has

    $$\begin{aligned} {\liminf _{t\rightarrow \infty } } \inf _{x\in [0, ct]} u(t,x)>0,\;\forall c\in \left( 0,\lfloor c(\lambda )\rfloor \right) . \end{aligned}$$

Remark 1.14

When the coefficients are periodic functions with period T, from Jin and Zhao (2009) one can note that \(\frac{1}{T} \int _{0}^{T} c^{+} (\lambda )(s)\textrm{d}s\) is the exact spreading speed for (1.1). In the periodic situation, our results are thus sharp, that is,

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{1}{t} \int _{0}^{t} c^{+} (\lambda ) (s) \textrm{d}s = \lfloor c^{+}(\lambda ) \rfloor = \frac{1}{T} \int _{0}^{T} c^{+} (\lambda )(s)\textrm{d}s. \end{aligned}$$

The two values \( \lim _{t\rightarrow \infty } \frac{1}{t} \int _{0}^{t} c^{+} (\lambda ) (s) \textrm{d}s \) and \(\lfloor c^{+}(\lambda ) \rfloor \) also coincide when \(c^+(\lambda ) (t) \) is a time almost periodic or uniquely ergodic function. However, for some more general heterogeneities, for instance non-recurrent environments (see examples in Berestycki and Nadin (2019)), the above two values can be different. In that case our results are not optimal. This may be caused either by our choice of the way of averaging, or by the structure of the time heterogeneities, which may not lead to an exact speed. For the second situation, we refer the reader to the examples constructed in Berestycki and Nadin (2019, Section 13), which consider spreading properties for reaction–diffusion equations. From such examples with general time heterogeneities, one can see that the level sets of \(u(t,\cdot )\) can oscillate between two speeds instead of moving with a single speed. This is a quite interesting phenomenon.

If one has \(\displaystyle \lfloor c^{+}(\lambda ) \rfloor <\liminf \limits _{t\rightarrow \infty } \frac{1}{t} \int _{0}^{t} c^{+}(\lambda ) (s) \textrm{d}s\), then the behaviour of \(u(t,\beta t)\) for \(t\gg 1\) with

$$\begin{aligned} \lfloor c^{+}(\lambda ) \rfloor< \beta < \liminf \limits _{t\rightarrow \infty } \frac{1}{t} \int _{0}^{t} c^{+}(\lambda ) (s) \textrm{d}s, \end{aligned}$$

is unknown. A similar open problem arises for the Fisher–KPP equation with local diffusion (Nadin and Rossi 2012).

In the above result, we only consider the propagation to the right-hand side of the real line and obtain a propagation result on some interval of the form [0, ct] for suitable speed c and for \(t\gg 1\). Note that the kernel is not assumed to be even, so that the propagation behaviours on the right- and the left-hand sides can be different. For instance, different spreading speeds may arise at right- and left-hand sides when the kernel is thin-tailed on both sides. To study the propagation behaviour of the left-hand side, it is sufficient to change x to \(-x\) in the above results.

The results stated in this section, and more precisely the lower bounds for the propagation, follow from the derivation of suitable regularity estimates for the solution. Here, we show that the solutions of (1.12) with suitable initial data are uniformly continuous. Next, Theorem 1.12 follows from the application of a general persistence lemma (see Lemma 2.6) for uniformly continuous solutions. This key lemma roughly ensures that if a uniformly continuous solution \(u=u(t,x)\) admits a propagating path \(t\mapsto X(t)\), then [0, kX(t)] with any \(k\in (0,1)\) is a propagating interval; that is, u stays uniformly far from 0 on this interval for large times. The idea of the proof of this lemma comes from the uniform persistence theory for dynamical systems, for which we refer the reader to Hale and Waltman (1989); Magal and Zhao (2005); Smith and Thieme (2011); Zhao (2003) and references cited therein.

This paper is organized as follows. In Sect. 2, we recall comparison principles and derive our general key persistence lemma. Section 3 is devoted to the derivation of some regularity estimates for the solutions of (1.12) with suitable initial data. With all these materials, we complete the proofs of the theorems and of the corollary.

2 Preliminary and Key Lemma

This section is devoted to the statement of the comparison principle and a key lemma that will be used to prove the inner propagation theorem, namely Theorem 1.12.

2.1 Comparison Principle and Strong Maximum Principle

We start this section by recalling the following more general comparison principle.

Proposition 2.1

(Comparison principle; see Ducrot and Jin 2022, Proposition 3.1) Let \(t_0\in \mathbb {R}\) and \(T>0\) be given. Let \(K:\mathbb {R}\rightarrow [0,\infty )\) be an integrable kernel, and let \(F=F(t,u)\) be a function defined in \([t_0,t_0+T]\times [0,1]\) which is Lipschitz continuous with respect to \(u\in [0,1]\), uniformly with respect to t. Let \({\underline{u}}\) and \({{\overline{u}}}\) be two uniformly continuous functions defined from \([t_0,t_0+T]\times \mathbb {R}\) into the interval [0, 1] such that for each \(x\in \mathbb {R}\), the maps \({\underline{u}} (\cdot , x)\) and \({{\overline{u}}} (\cdot , x)\) both belong to \(W^{1,1} (t_0,t_0+T)\), satisfying \({\underline{u}}(t_0,\cdot )\le {{\overline{u}}}(t_0,\cdot )\), and for all \(x\in \mathbb {R}\) and for almost every \(t\in (t_0,t_0+T)\),

$$\begin{aligned} \begin{aligned}&\partial _t {\overline{u}}(t,x) \ge \int _{\mathbb {R}} K(y) \left[ {\overline{u}} (t,x-y) -{\overline{u}}(t,x)\right] \textrm{d}y +F(t,{\overline{u}}(t,x)),\\&\partial _t {\underline{u}}(t,x) \le \int _{\mathbb {R}} K(y) \left[ {\underline{u}} (t,x-y) -{\underline{u}}(t,x)\right] \textrm{d}y +F(t,{\underline{u}}(t,x)). \end{aligned} \end{aligned}$$

Then, \({\underline{u}}\le {{\overline{u}}}\) on \([t_0,t_0+T]\times \mathbb {R}\).

We also need the following comparison principle on a moving domain. (This can be proved similarly to Lemma 5.4 in Abi Rizk et al. (2021) and Lemma 4.7 in Zhang and Zhao (2020).)

Proposition 2.2

Assume that \(K: \mathbb {R}\rightarrow [0,\infty )\) is integrable. Let \(t_0>0\) and \(T>0\) be given. Let \(b=b(t,x)\) be a uniformly bounded function from \([t_0, t_0 +T] \times \mathbb {R}\) to \(\mathbb {R}\). Assume that \(u=u(t,x)\) is a uniformly continuous function defined from \([t_0, t_0 +T ] \times \mathbb {R}\) into the interval [0, 1] such that for each \(x\in \mathbb {R}\), \(u(\cdot , x) \in W^{1,1} (t_0, t_0 +T) \). Assume that X and Y are continuous functions on \([t_0, t_0 +T]\) with \(X< Y\). If u satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u \ge \int _{\mathbb {R}} K(y) \left[ u(t,x-y) -u(t,x)\right] \textrm{d}y +b(t,x) u, &{}\!\forall t\in [t_0, t_0+T], x\in (X(t), Y(t)), \\ u(t,x)\ge 0, &{}\!\forall t\in (t_0, t_0+T], x \in \mathbb {R}{\setminus } (X(t),Y(t) ), \\ u(t_0,x) \ge 0, &{}\! \forall x\in (X(t_0), Y(t_0)), \end{array}\right. } \end{aligned}$$

then

$$\begin{aligned} u(t,x)\ge 0 \text { for all } t\in [t_0, t_0 +T], x\in [X(t), Y(t)]. \end{aligned}$$

We continue this section with the following strong maximum principle. We refer the reader to Kao et al. (2010) for the proof of the following proposition.

Proposition 2.3

(Strong maximum principle) Let Assumptions 1.3, 1.5 be satisfied. Let \(u=u(t,x)\) be the solution of (1.1) supplemented with some continuous initial data \(u_0\), such that \(0 \le u_0 \le 1\) and \(u_0 \not \equiv 0\). Then, \(u(t,x)>0\) for all \(t >0, x\in \mathbb {R}\).

2.2 Key Lemma

In this section, we derive an important lemma that will be used in the next section to prove our main inner propagation result, namely Theorem 1.12. Throughout this subsection, we only require Assumption 1.3 (i), (iii) and Assumption 1.5 to be satisfied.

Definition 2.4

(Limit orbits set) Let \(u=u(t,x)\) be a uniformly continuous function from \([0,\infty )\times \mathbb {R}\) into [0, 1], solution of (1.1). We define \(\omega (u)\), the set of the limit orbits, as the set of the bounded and uniformly continuous functions \({{\tilde{u}}}:\mathbb {R}^2\rightarrow \mathbb {R}\) for which there exist sequences \((x_n)_n\subset \mathbb {R}\) and \((t_n)_n \subset [0, \infty )\) such that \(t_n\rightarrow \infty \) as \(n\rightarrow \infty \) and

$$\begin{aligned} {{\tilde{u}}}(t,x)=\lim _{n\rightarrow \infty } u(t+t_n,x+x_n), \end{aligned}$$

uniformly for (t, x) in bounded sets of \(\mathbb {R}^2\).

Let us observe that, since u is assumed to be bounded and uniformly continuous on \([0,\infty )\times \mathbb {R}\), the Arzelà–Ascoli theorem ensures that \(\omega (u)\) is not empty. Indeed, for each sequence \((t_n)_n \subset [0,\infty )\) with \(t_n\rightarrow \infty \) and \((x_n)\subset \mathbb {R}\), the sequence of functions \((t,x)\mapsto u(t+t_n,x+x_n)\) is equi-continuous and thus has a converging subsequence with respect to the local uniform topology. In addition, \(\omega (u)\) is a compact set with respect to the compact open topology, that is, with respect to the local uniform topology.

Before going to our key lemma, we claim that the set \(\omega (u)\) enjoys the following property:

Claim 2.5

Let \(u= u(t,x)\) be a uniformly continuous solution of (1.1). Let \({{\tilde{u}}}\in \omega (u)\) be given. Then, one has:

$$\begin{aligned} \text {either } {{\tilde{u}}}(t,x)>0\text { for all }(t,x)\in \mathbb {R}^2\text { or }{\tilde{u}}(t,x)\equiv 0\text { on }\mathbb {R}^2. \end{aligned}$$

Proof

Note that due to Assumption 1.5 (see Remark 1.7), the function u satisfies the following differential inequality for all \(t\ge 0\) and \(x\in \mathbb {R}\)

$$\begin{aligned} \partial _t u(t,x)\ge K*u(t,\cdot )(x)-{\overline{K}}u(t,x)+u(t,x)(\mu (t)-Cu(t,x)). \end{aligned}$$

Since the function \(\mu (\cdot )\) is bounded, for each \({{\tilde{u}}}\in \omega (u)\), there exists \({\tilde{\mu }}= {\tilde{\mu }}(t)\in L^\infty (\mathbb {R})\), a weak star limit of some shifted function \(\mu (t_n+\cdot )\), for some suitable time sequence \((t_n)\), such that \({{\tilde{u}}}\) satisfies

$$\begin{aligned} \begin{aligned} \partial _t {\tilde{u}}(t,x)&\ge K*{\tilde{u}}(t,\cdot )(x) - {\overline{K}} {\tilde{u}} (t,x) +{\tilde{u}}(t,x)({\tilde{\mu }}(t) -C{\tilde{u}}(t,x)) \\&\ge K*{\tilde{u}}(t,\cdot )(x)+ \left( -{\overline{K}} +\inf _{t\in \mathbb {R}} {\tilde{\mu }}(t) - C \right) {\tilde{u}} (t,x),\;\forall (t,x)\in \mathbb {R}^2. \end{aligned} \end{aligned}$$

Herein, \(\partial _{t} {\tilde{u}}\) is a weak star limit of \(\partial _{t} u(\cdot +t_n, \cdot +x_n)\) for some suitable subsequence of \((x_n)_n\) and \((t_n)_n\). This is due to \(\partial _{t} u \in L^\infty ([0, \infty )\times \mathbb {R})\).

Next the claim follows from the same arguments as for the proof of the strong maximum principle, see Kao et al. (2010). \(\square \)

Using the above definition and its properties, we are now able to state and prove the following key lemma.

Lemma 2.6

Let \(u=u(t,x):[0,\infty )\times \mathbb {R}\rightarrow [0,1]\) be a uniformly continuous solution of (1.1). Let \(t \mapsto X(t)\) from \([0, \infty )\) to \([0, \infty )\) be a given continuous function. Let the following set of hypotheses be satisfied:

  1. (H1)

    Assume that \(\liminf \limits _{t\rightarrow \infty } u(t,0) >0;\)

  2. (H2)

    There exists some constant \({\tilde{\varepsilon }}_0>0\) such that for all \({{\tilde{u}}}\in \omega (u)\setminus \{0\}\), one has

    $$\begin{aligned} \liminf \limits _{t\rightarrow \infty } {\tilde{u}}(t,0) >{\tilde{\varepsilon }}_0; \end{aligned}$$
  3. (H3)

    The map \(t\mapsto X(t)\) is a propagating path for u, in the sense that

    $$\begin{aligned} \liminf \limits _{t\rightarrow \infty } u(t, X(t)) >0. \end{aligned}$$

Then for any \(k\in (0, 1)\), one has

$$\begin{aligned} \liminf \limits _{t\rightarrow \infty } \inf _{ 0 \le x\le k X(t)} u(t,x)>0. \end{aligned}$$

Remark 2.7

The above result holds without assuming that the convolution kernel is exponentially bounded. We expect this key lemma to also be useful to study the spatial propagation for Fisher–KPP equations with fat-tailed dispersal kernels, for which the solutions may accelerate, see (Cabré and Roquejoffre 2013; Finkelshtein and Tkachov 2019; Garnier 2011).

To prove the above lemma, we make use of ideas coming from uniform persistence theory, see Hale and Waltman 1989; Magal and Zhao 2005; Smith and Thieme 2011. These ideas are somehow close to those developed in Ducrot et al. (2021, 2019).

Proof

To prove the lemma, we argue by contradiction by assuming that there exists \(k \in (0, 1)\), a sequence \((t_n)_n \subset [0,\infty )\) with \(t_n \rightarrow \infty \) and a sequence \((k_n)\) with \( 0 \le k_n \le k\) such that

$$\begin{aligned} u(t_n, k_n X(t_n)) \rightarrow 0 \;\text { as } \; n\rightarrow \infty . \end{aligned}$$
(2.1)

First we claim that one has

$$\begin{aligned} \lim _{n\rightarrow \infty } k_n X(t_n)=\infty . \end{aligned}$$
(2.2)

To prove this claim, we argue by contradiction by assuming that \(\{ k_n X(t_n)\}\) has a bounded subsequence. Hence, there exists \(x_\infty \in \mathbb {R}\) such that possibly along a subsequence still denoted with the index n, one has \(k_nX(t_n) \rightarrow x_\infty \) as \(n\rightarrow \infty \).

Now, let us consider the sequence of functions \(u_n(t,x):= u(t+t_n, x)\). Since \(u=u(t,x)\) is uniformly continuous, possibly up to a subsequence still denoted with the same index n, there exists \(u_\infty \in \omega (u)\) such that

$$\begin{aligned} u_n(t,x) \rightarrow u_\infty (t,x) \text { locally uniformly for }(t,x) \in \mathbb {R}^2. \end{aligned}$$

Next since \(k_nX(t_n)\rightarrow x_\infty \), (2.1) ensures that

$$\begin{aligned} u_\infty (0, x_\infty )= \lim \limits _{n\rightarrow \infty } u(t_n,k_n X(t_n)) =0. \end{aligned}$$

Since \(u_\infty \in \omega (u)\), Claim 2.5 ensures that \(u_\infty (t,x)\equiv 0\). On the other hand, (H1) ensures that for all \(t\in \mathbb {R}\), one has

$$\begin{aligned} u_\infty (t,0)\ge \liminf _{t\rightarrow \infty } u(t,0)>0, \end{aligned}$$

a contradiction, so that (2.2) holds.

Now due to (2.2), there exists N such that

$$\begin{aligned} X(0)<k_nX(t_n),\;\forall n\ge N. \end{aligned}$$

Hence, due to \(k_n<1\) we have

$$\begin{aligned} X(0)< k_n X(t_n) < X(t_n), \;\forall n\ge N. \end{aligned}$$

And since \(t\mapsto X(t)\) is continuous, then for each \(n\ge N\) there exists \(t_n'\in ( 0, t_n )\) such that \(t_n'\rightarrow \infty \) and

$$\begin{aligned} X(t_n') = k_n X(t_n),\;\forall n\ge N. \end{aligned}$$

From the above definition of \(t_n'\), one has

$$\begin{aligned} u(t_n', k_n X(t_n))= u(t_n', X(t_n')),\;\forall n\ge N, \end{aligned}$$

so that (H3) ensures that for all n large enough, there exists \(\varepsilon >0\) such that

$$\begin{aligned} u(t_n', k_n X(t_n))= u(t_n', X(t_n')) \ge \varepsilon . \end{aligned}$$

Recall Assumption (H2). Now for all n large enough, we define

$$\begin{aligned} t_n'':=\inf \left\{ t \le t_n; \; \forall s\in (t,t_n), \; u(s, k_n X(t_n)) \le \frac{ \min \{{\tilde{\varepsilon }}_0, \varepsilon \} }{2} \right\} \in (t_n', t_n). \end{aligned}$$

Since \(u(t_n, k_n X(t_n)) \rightarrow 0\) as \(n\rightarrow \infty \), then one may assume that for all n large enough one has

$$\begin{aligned} {\left\{ \begin{array}{ll} u(t_n'', k_n X( t_n)) = \frac{\min \{{\tilde{\varepsilon }}_0, \varepsilon \}}{2}, \\ u(t, k_n X( t_n)) \le \frac{\min \{{\tilde{\varepsilon }}_0, \varepsilon \}}{2}, \; \forall t\in [t_n'', t_n], \\ u(t_n, k_n X(t_n)) \le \frac{1}{n}. \end{array}\right. } \end{aligned}$$

Next we claim that \(t_n- t_n'' \rightarrow \infty \) as \(n\rightarrow \infty \). Indeed, if (a subsequence of) \(t_n -t_n''\) converges to \(\sigma \in \mathbb {R}\), define the sequence of functions \({\tilde{u}}_n(t,x):= u(t+t_n'', x+ k_n X(t_n))\), which converges, possibly along a subsequence, locally uniformly to some function \({\tilde{u}}_\infty ={\tilde{u}}_\infty (t,x)\in \omega (u)\) that satisfies

$$\begin{aligned} {\tilde{u}}_\infty (0,0)=\frac{\min \{{\tilde{\varepsilon }}_0, \varepsilon \}}{2}>0, \end{aligned}$$

and

$$\begin{aligned} {\tilde{u}}_\infty (\sigma , 0) = \lim \limits _{n \rightarrow \infty } {\tilde{u}}_n(t_n -t_n'', 0 ) = \lim \limits _{n \rightarrow \infty } u(t_n, k_n X(t_n) ) =0. \end{aligned}$$

Since \({{\tilde{u}}}_\infty \in \omega (u)\), then the two above values of \({{\tilde{u}}}_\infty \) contradict the dichotomy stated in Claim 2.5 and this proves that \( t_n- t_n'' \rightarrow \infty \) as \(n\rightarrow \infty \).

As a consequence, one obtains that the function \({{\tilde{u}}}_\infty \in \omega (u)\) satisfies

$$\begin{aligned} {\tilde{u}}_\infty (0,0)=\frac{\min \{{\tilde{\varepsilon }}_0, \varepsilon \}}{2}>0, \end{aligned}$$

together with

$$\begin{aligned} {\tilde{u}}_\infty (t,0) \le \frac{\min \{{\tilde{\varepsilon }}_0, \varepsilon \}}{2}, \; \forall t\ge 0. \end{aligned}$$
(2.3)

Due to Claim 2.5, the above equality yields \({\tilde{u}}_\infty \in \omega (u) {\setminus } \{0\}\) and (2.3) contradicts (H2). The proof is completed. \(\square \)

3 Proof of Spreading Properties

In this section, we shall make use of the key lemma (see Lemma 2.6) to prove Theorem 1.12. To do this, we first derive some important regularity properties of the solutions of the logistic equation (1.12) associated with suitable initial data. Next we prove Theorem 1.11 by constructing suitable exponentially decaying super-solutions for (1.1). Finally, we turn to the proof of Theorem 1.12. As already mentioned we crucially make use of Lemma 2.6 and construct a suitable propagating path \(t\mapsto X(t)\) that depends on the decay rate of the initial data \(u_0=u_0(x)\) for \(x\gg 1\). As a corollary, we conclude the propagation results for (1.1).
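Before going further, let us sketch the computation behind such exponentially decaying super-solutions (a formal sketch under Assumptions 1.3 and 1.5, not a substitute for the detailed proof). For \(\lambda \in (0, \sigma (K))\), consider \({\overline{u}}(t,x):=\min \left\{ 1, e^{-\lambda \left( x-\int _0^t c(\lambda )(s)\textrm{d}s\right) }\right\} \). Wherever \({\overline{u}}(t,x)<1\), one has \({\overline{u}}(t,x-y)\le e^{\lambda y}\,{\overline{u}}(t,x)\) for all \(y\in \mathbb {R}\), so that, recalling (1.7), (1.8) and (1.6),

$$\begin{aligned} \partial _t {\overline{u}}(t,x)&=\lambda c(\lambda )(t)\,{\overline{u}}(t,x)=\left[ L(\lambda )+\mu (t)\right] {\overline{u}}(t,x)\\&\ge \int _{\mathbb {R}} K(y)\left[ {\overline{u}}(t,x-y)-{\overline{u}}(t,x)\right] \textrm{d}y+\mu (t){\overline{u}}(t,x)\\&\ge \int _{\mathbb {R}} K(y)\left[ {\overline{u}}(t,x-y)-{\overline{u}}(t,x)\right] \textrm{d}y+{\overline{u}}(t,x)f\left( t,{\overline{u}}(t,x)\right) , \end{aligned}$$

so that \({\overline{u}}\) is formally a super-solution of (1.1); the decay rate \(\lambda \) or \(\lambda _r^*\) encoded in \(c^+(\lambda )\) is then chosen according to the decay of the initial data.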

3.1 Uniform Continuity Estimate

This subsection is devoted to giving some regularity estimates for the solutions of the following logistic equation (recalling (1.12)) when endowed with suitable initial data,

$$\begin{aligned} \partial _{t} u(t,x) = \int _\mathbb {R}K(y) u(t,x-y) \textrm{d}y - {\overline{K}} u(t,x) + \mu (t) u(t,x)\left( 1 - u(t,x) \right) . \end{aligned}$$

Here, we focus on two types of initial data that will be used to prove Theorem 1.12: initial data with a compact support and initial data with support on a right semi-infinite interval and with some prescribed exponential decay on this right-hand side (that is for \(x\gg 1\)).

Our first lemma is concerned with the compactly supported case.

Lemma 3.1

Let Assumption 1.3 and condition (1.13) be satisfied. Let \(u=u(t,x)\) be the solution of (1.12) equipped with the initial data \(v_0=v_0(x)\), where \(v_0\) is Lipschitz continuous on \(\mathbb {R}\) and satisfies \(0<v_0(x)<1\) for all \(x \in (0, A)\), for some constant \(A>0\), while \(v_0=0\) outside of (0, A). Then, the function \((t,x)\mapsto u(t,x)\) is uniformly continuous on \([0,\infty )\times \mathbb {R}\).

Proof

First, since \( 0 \le u \le 1\), the right-hand side of (1.12) is bounded (the convolution terms are bounded by \(2{\overline{K}}\) and the reaction term by \(\Vert \mu \Vert _\infty \)), so that one has

$$\begin{aligned} \Vert \partial _t u\Vert _{L^\infty (\mathbb {R}^+\times \mathbb {R})} \le M:=2 {\overline{K}} +\Vert \mu \Vert _\infty . \end{aligned}$$
(3.1)

As a consequence, the map \((t,x)\mapsto u(t,x)\) is Lipschitz continuous in the variable \(t\in [0,\infty )\), uniformly with respect to \(x\in \mathbb {R}\), that is,

$$\begin{aligned} |u(t,x)-u(s,x)|\le M|t-s|,\;\forall (t,s)\in [0,\infty )^2,\;\forall x\in \mathbb {R}. \end{aligned}$$
(3.2)

Next we investigate the regularity with respect to the spatial variable \(x\in \mathbb {R}\). To do so, we claim that the following holds true:

Claim 3.2

For all \(h>0\) sufficiently small, there exists \(0<\sigma (h)<1\) such that \(\sigma (h)\rightarrow 1\) as \(h\rightarrow 0\) and

$$\begin{aligned} u(\sqrt{h}, x) \ge \sigma (h) v_0(x-h),\;\forall x\in \mathbb {R}. \end{aligned}$$

Proof of Claim 3.2

Let us first observe that, since \(u(t,\cdot )>0\) for all \(t>0\), it is sufficient to look at \(x-h\in [0,A]\), that is, \(h\le x\le A+h\).

Next to prove this claim, note that one has for all \(h>0\) and \(x\in \mathbb {R}\):

$$\begin{aligned} \begin{aligned} u(\sqrt{h},x)&= v_0(x) + \int _0^{\sqrt{h}} \partial _t u(l,x)\textrm{d}l\\&= v_0(x) + \int _0^{\sqrt{h}}\left\{ \int _{\mathbb {R}} K(y) \left[ u(l,x-y) -u(l,x) \right] \textrm{d}y + \mu (l) u(l,x) \left( 1- u(l,x) \right) \right\} \textrm{d}l \end{aligned} \end{aligned}$$

Now coupling (3.2) and \(0\le u \le 1\), one gets for all \(h>0\) small enough and uniformly for \(x\in \mathbb {R}\)

$$\begin{aligned} u(\sqrt{h},x) \ge v_0(x) + \int _0^{\sqrt{h}}\left\{ \int _{\mathbb {R}} K(y) v_0(x-y)\textrm{d}y -{\overline{K}} v_0(x) \right\} \textrm{d}l+o(\sqrt{h}), \end{aligned}$$

that is,

$$\begin{aligned} u(\sqrt{h},x) \ge v_0(x) \bigg ( 1 - {\overline{K}} \sqrt{h} \bigg ) + \sqrt{h} \bigg ( \int _{\mathbb {R}} K(y) v_0(x-y)\textrm{d}y + o(1) \bigg ). \end{aligned}$$

Now, observe that Assumption 1.3 (see (i) and (iii)) ensures that there exists \(\varepsilon >0\) such that

$$\begin{aligned} \min _{x\in [0,A]}\int _{\mathbb {R}} K(y) v_0(x-y)\textrm{d}y\ge 2\varepsilon , \end{aligned}$$

so that for \(h>0\) small enough one has

$$\begin{aligned} \min _{x\in [h,A+h]}\int _{\mathbb {R}} K(y) v_0(x-y)\textrm{d}y\ge \varepsilon . \end{aligned}$$

Now to prove the claim, it is sufficient to reach, for all \(h>0\) small enough and \( x\in [h,A+h]\),

$$\begin{aligned} v_0(x) \bigg ( 1 - {\overline{K}} \sqrt{h} \bigg ) + \sqrt{h} \left( o(1) + \varepsilon \right) \ge \sigma (h) v_0(x-h). \end{aligned}$$
(3.3)

Now, set \(\sigma (h) = 1 - 2{\overline{K}} \sqrt{h}\) and let us show that Claim 3.2 follows.

Since \(v_0\) is Lipschitz continuous, then there exists some constant \(L>0\) such that

$$\begin{aligned} |v_0(x)-v_0(x-h)|\le L h, \;\forall x\in \mathbb {R}. \end{aligned}$$

Hence, to reach (3.3) it is sufficient to show that, for all \(x\in [h, A+h]\) and all \(h>0\) small enough,

$$\begin{aligned} {{\overline{K}}} \sqrt{h} v_0(x-h) + \sqrt{h} \left( o(1) + \varepsilon \right) \ge L h \bigg ( 1 - {\overline{K}} \sqrt{h} \bigg ). \end{aligned}$$
(3.4)

Dividing by \(\sqrt{h}\), the above inequality holds whenever

$$\begin{aligned} {{\overline{K}}} v_0(x-h) + \left( o(1) + \varepsilon \right) \ge L \sqrt{h}\bigg ( 1 - {\overline{K}} \sqrt{h} \bigg ), \end{aligned}$$
(3.5)

which holds true for all \(h>0\) small enough. So the claim is proved. \(\square \)

Now, we come back to the proof of Lemma 3.1. For each \(h>0\) small enough, let us introduce the following function

$$\begin{aligned} b_h (t)= b_h(0) \exp \left\{ \int _{0}^{t}\left[ \mu (s+\sqrt{h}) - \mu (s) \right] \textrm{d}s \right\} , \; \text { for all } t\ge 0, \end{aligned}$$
(3.6)

where \(b_h(0)\) is some constant depending on h and that satisfies the following three conditions:

$$\begin{aligned} 0<b_h(0) \le \sigma (h)< 1, \end{aligned}$$

\(b_h(0) \rightarrow 1\) as \(h\rightarrow 0\) and for all \(h>0\) small enough

$$\begin{aligned} b_h(0)\le \inf _{t\ge 0} \frac{\mu (t)}{\mu (t+ \sqrt{h})} \exp \left\{ \int _{0}^{t} \left[ \mu (s)- \mu (s+ \sqrt{h}) \right] \textrm{d}s \right\} . \end{aligned}$$

For the latter condition, one can observe that it is feasible since one has

$$\begin{aligned} \begin{aligned} \left| \int _{0}^{t} \left[ \mu (s+ \sqrt{h}) - \mu (s) \right] \textrm{d}s \right|&= \left| \int _{\sqrt{h}}^{t+\sqrt{h}} \mu (s) \textrm{d}s - \int _{0}^{t}\mu (s) \textrm{d}s \right| \\&= \left| \int _{t}^{t+\sqrt{h}} \mu (s) \textrm{d}s - \int _{0}^{\sqrt{h}} \mu (s) \textrm{d}s\right| \\&\le 2\Vert \mu \Vert _\infty \sqrt{h}. \end{aligned} \end{aligned}$$

As a consequence, recalling from (1.13) that \(\mu (\cdot )\) is uniformly continuous, we end up with

$$\begin{aligned} \frac{\mu (t)}{\mu (t+ \sqrt{h})} \exp \left\{ \int _{0}^{t} \left[ \mu (s)- \mu (s+ \sqrt{h}) \right] \textrm{d}s \right\} \rightarrow 1, \text { as } h \rightarrow 0, \text { uniformly for } t\ge 0. \end{aligned}$$

Hence, \(b_h(0)\) is well defined and \(b_h(t) \rightarrow 1\) as \(h\rightarrow 0\) uniformly for \(t\ge 0\).

Now, setting \(w_h=w_h(t,x)\) the function given by

$$\begin{aligned} w_h(t,x):= u(t+ \sqrt{h}, x) - b_h(t) u(t,x-h), \end{aligned}$$

one obtains that it satisfies the following equation

$$\begin{aligned} \begin{aligned} \partial _t w_h (t,x)&= K*w_h(t,x) - {\overline{K}} w_h(t,x) \\&\quad +\mu (t+\sqrt{h}) \left[ w_h(t,x) + b_h(t) u(t,x-h)\right] \left[ 1- \left( w_h(t,x) + b_h(t) u(t,x-h) \right) \right] \\&\quad - \mu (t) b_h(t) u(t,x-h) \left[ 1- u(t,x-h) \right] - b'_h(t) u(t, x-h) \\&= K*w_h(t,x) - {\overline{K}} w_h(t,x) + \mu (t+\sqrt{h}) w_h(t,x) \\&\quad \bigg (1- w_h (t,x) -2b_h(t) u(t,x-h) \bigg ) \\&\quad + b_h(t) u(t,x-h) \left( \mu (t+ \sqrt{h})- \mu (t)- \frac{b_h'(t)}{b_h(t)} \right) \\&\quad + b_h(t) u^2(t,x-h) \left( \mu (t) - b_h (t) \mu (t+\sqrt{h}) \right) . \end{aligned} \end{aligned}$$

It follows from the definition of \(b_h(t)\) (see (3.6)) that \(w_h(t,x)\) satisfies

$$\begin{aligned} \partial _t w_h(t,x) \ge&\; K*w_h (t,x) -{\overline{K}} w_h(t,x) \\&+ w_h(t,x) \mu (t+\sqrt{h}) \bigg ( 1- w_h (t,x) - 2b_h(t) u(t,x-h) \bigg ). \end{aligned}$$

Claim 3.2 together with \(b_h(0) <\sigma (h)\) ensures that \(w_h(0, \cdot ) \ge 0\). Then, the comparison principle applies and implies that \(w_h(t,x) \ge 0\) for all \(t\ge 0, x\in \mathbb {R}\), that rewrites as \(u(t+ \sqrt{h}, x) \ge b_h(t) u(t,x-h)\) for all \(t\ge 0, x\in \mathbb {R}\), for \(h>0\) small enough. Recalling (3.2), for \( h>0\) sufficiently small, one has for all \(t\ge 0\) and \(x\in \mathbb {R}\),

$$\begin{aligned} \begin{aligned} u(t,x-h) - u(t,x) \le&\left( \frac{1}{b_h(t)} - 1 \right) u(t + \sqrt{h}, x) + M \sqrt{h} \\ \le&\left( \frac{1}{b_h(t)} - 1 \right) + M \sqrt{h}. \end{aligned} \end{aligned}$$
(3.7)

Since for \(h>0\) small enough one has

$$\begin{aligned} \min _{x\in [-h,A-h]}\int _{\mathbb {R}} K(y) v_0(x-y)\textrm{d}y\ge \varepsilon , \end{aligned}$$

then one can similarly prove that for sufficiently small \(h>0\), there exists \(\sigma (h)= 1- 2{\overline{K}} \sqrt{h}\) such that

$$\begin{aligned} u( \sqrt{h}, x) \ge \sigma (h) v_0 ( x+h), \; \forall x\in \mathbb {R}. \end{aligned}$$

This rewrites as

$$\begin{aligned} u( \sqrt{h}, x-h) \ge \sigma (h) v_0 ( x), \; \forall x\in \mathbb {R}. \end{aligned}$$

Then as above one can choose a suitable function \(b_h(t)\) and obtain that

$$\begin{aligned} u(t+ \sqrt{h}, x-h) \ge b_h(t) u(t,x), \; \forall t\ge 0, x\in \mathbb {R}. \end{aligned}$$

Recalling (3.2), for \(h>0\) sufficiently small, one obtains for all \(t\ge 0\) and \(x\in \mathbb {R}\),

$$\begin{aligned} \begin{aligned} u(t,x) -u(t,x-h)&\le \left( \frac{1}{b_h(t)} -1 \right) u(t+ \sqrt{h}, x-h) + M \sqrt{h} \\&\le \left( \frac{1}{b_h(t)} - 1 \right) + M \sqrt{h}. \end{aligned} \end{aligned}$$
(3.8)

Since estimates (3.7) and (3.8) are uniform with respect to the spatial variable \(x\in \mathbb {R}\), one also obtains similar estimates for \(u(t,x) -u(t,x+h) \) and \(u(t,x+h) -u(t,x)\). From these estimates, we conclude that \(u=u(t,x)\) is uniformly continuous on \([0,\infty )\times \mathbb {R}\), which completes the proof of the lemma. \(\square \)

In the following, we derive regularity estimates for the solutions to (1.12) arising from an initial data with a prescribed exponential decay rate on the right, that is, for \(x\gg 1\). To do this, we show that such solutions to (1.12) decay with the same rate as the initial data, at least for short times.

Let us introduce some function spaces. Recalling that \(\lambda _r^*\) is defined in Proposition 1.8, for \(\lambda \in ( 0, \lambda _r^*) \) let us define the space \(BC_\lambda (\mathbb {R})\) by

$$\begin{aligned} BC_\lambda (\mathbb {R}):= \left\{ \phi \in C(\mathbb {R}): \; \sup _{x\in \mathbb {R}} e^{\lambda x} |\phi (x)| < \infty \; \right\} , \end{aligned}$$

equipped with the weighted norm

$$\begin{aligned} \Vert \phi \Vert _{BC_\lambda }:=\sup _{x\in \mathbb {R}} e^{\lambda x} |\phi (x)|. \end{aligned}$$

Recall that \(BC_\lambda (\mathbb {R})\) is a Banach space when endowed with the above norm.

Define also the subset E by

$$\begin{aligned} E:= \left\{ \phi \in BC_\lambda (\mathbb {R}): 0\le \phi \le 1 \right\} , \end{aligned}$$
(3.9)

and let us observe that it is a closed subset of \(BC_\lambda (\mathbb {R})\).

Using these notations, we turn to the proof of the following lemma.

Lemma 3.3

Let Assumptions 1.3 and 1.10 and condition (1.13) be satisfied. Let \(\lambda \in (0,\lambda _r^*)\) and \(u_0\in E\) be given. Then, the solution of (1.12) with initial data \(u_0\), denoted by \(u=u(t,x)\), satisfies

$$\begin{aligned} \lim \limits _{t\rightarrow 0^+} \sup _{x\in \mathbb {R}} e^{\lambda x} |u(t,x) - u_0(x)|=0. \end{aligned}$$

Proof

Fix \(\alpha >{\overline{K}}+ 2\Vert \mu \Vert _\infty \). Let us introduce for each \(\phi \in E\) and \(t \ge 0\), the operator given by

$$\begin{aligned} Q_t[\phi ](\cdot ):= \alpha \phi (\cdot ) + \int _\mathbb {R}K(y)\phi (\cdot -y)\textrm{d}y -{\overline{K}} \phi (\cdot )+ \mu (t) \phi (\cdot )\left( 1- \phi (\cdot ) \right) . \end{aligned}$$

Note that one has

$$\begin{aligned} \left\| \int _\mathbb {R}K(y)\phi (\cdot - y)\textrm{d}y \right\| _{BC_\lambda }&= \sup _{x\in \mathbb {R}} \left| \int _\mathbb {R}K(y) e^{\lambda y} e^{\lambda (x -y) } \phi (x-y) \textrm{d}y \right| \\&\le \left| \int _\mathbb {R}K(y) e^{\lambda y} \textrm{d}y \right| \left\| \phi \right\| _{BC_\lambda }. \end{aligned}$$

Let us observe that \(\left| \int _\mathbb {R}K(y) e^{\lambda y} \textrm{d}y \right| < \infty \) due to \( 0<\lambda< \lambda _r^* < \sigma (K) \). Since \( 0\le \phi \le 1\), then one has

$$\begin{aligned} \Vert Q_t[\phi ](\cdot ) \Vert _{BC_\lambda } \le \left( \alpha + \left| \int _\mathbb {R}K(y) e^{\lambda y} \textrm{d}y \right| + {\overline{K}} + \Vert \mu \Vert _\infty \right) \Vert \phi \Vert _{BC_\lambda }<\infty . \end{aligned}$$

Thus for each \(\phi (\cdot ) \in E\), for all \(t\ge 0\), \( Q_t [\phi ]( \cdot ) \in BC_\lambda (\mathbb {R})\).

Next let us observe that \(Q_t[\phi ]\) is non-decreasing with respect to \(\phi \in E\). Indeed, if \( \phi , \psi \in E\) satisfy \(\phi (x) \ge \psi (x) \) for all \(x\in \mathbb {R}\), then for each given \(t\ge 0\) and \(x\in \mathbb {R}\),

$$\begin{aligned} \begin{aligned} Q_t[\phi ](x) -Q_t[\psi ](x)&= \alpha (\phi (x)-\psi (x)) + \int _\mathbb {R}K(y)[\phi (x-y) \\&\quad - \psi (x-y)] \textrm{d}y - {\overline{K}} (\phi -\psi )(x) \\&\quad + \mu (t) \phi (x)( 1 -\phi (x)) -\mu (t) \psi (x)( 1 -\psi (x) ) \\&\ge \left( \alpha - {\overline{K}} - 2 \Vert \mu \Vert _\infty \right) \left( \phi (x)-\psi (x) \right) \\&\ge 0. \end{aligned} \end{aligned}$$

The last inequality comes from \(\alpha >{\overline{K}}+ 2 \Vert \mu \Vert _\infty \). Hence, for any \(t\ge 0\), the map \(\phi \mapsto Q_t[\phi ]\) is non-decreasing on E.

For each given \(u_0 \in E\) and any fixed \(h>0\), we define the following space

$$\begin{aligned} W:= \left\{ t\mapsto u(t, \cdot ) \in C([0,h], BC_\lambda (\mathbb {R})): \; 0\le u \le 1, u(0, x)=u_0(x) \right\} . \end{aligned}$$

Let us rewrite (1.12) as

$$\begin{aligned} \partial _t u(t,x) + \alpha u(t,x) = Q_t[u(t,\cdot )](x), \end{aligned}$$

then one has

$$\begin{aligned} u(t,\cdot )= e^{-\alpha t} u_0(\cdot ) + \int _{0}^{t} e^{\alpha (s-t)} Q_s[u(s,\cdot )](\cdot ) \textrm{d}s =: T[u](t,\cdot ). \end{aligned}$$

Next we show that for each \(u \in W\), one has \( T[u] \in W\). Let \(u\in W\) be given. First, we show that \(Q_t[u(t,\cdot )](\cdot ) \in BC_\lambda (\mathbb {R})\) uniformly for \(t\in [0, h]\). Since \(t\mapsto u(t, \cdot ) \in C([0,h], BC_\lambda (\mathbb {R}) )\), one has

$$\begin{aligned} \sup _{t\in [0,h]} \Vert u(t,\cdot )\Vert _{BC_\lambda } <\infty . \end{aligned}$$

Thus,

$$\begin{aligned} \sup _{ t\in [0,h] } \Vert Q_t[u(t,\cdot )](\cdot ) \Vert _{BC_\lambda } \le \left( \alpha + \left| \int _\mathbb {R}K(y) e^{\lambda y} \textrm{d}y \right| + {\overline{K}} + \Vert \mu \Vert _\infty \right) \sup _{ t\in [0,h] } \Vert u(t,\cdot )\Vert _{BC_\lambda } <\infty . \end{aligned}$$

Moreover, one can observe that for each \(t\in [0, h]\),

$$\begin{aligned} \Vert T[u](t,\cdot ) \Vert _{ BC_\lambda } \le \Vert u_0 \Vert _{BC_\lambda } + \frac{1}{\alpha } \sup _{ t\in [0,h] } \Vert Q_t[u(t,\cdot )] \Vert _{BC_\lambda } < \infty . \end{aligned}$$

That is, \(T[u](t,\cdot ) \in BC_\lambda (\mathbb {R})\), for each \(t\in [0, h]\).

Then, we show that \(t\mapsto T[u](t,\cdot )\) is continuous. To see this, fix \(t_0 \in [0,h]\) and observe that one has

$$\begin{aligned} \begin{aligned} \left\| T[u](t,\cdot ) - T[u](t_0, \cdot ) \right\| _{BC_\lambda }&\le \left| e^{-\alpha t} - e^{-\alpha t_0}\right| \Vert u_0\Vert _{BC_\lambda } \\&\quad + \sup _{x\in \mathbb {R}} e^{\lambda x}\left| \int _{0}^{t_0} \left[ e^{\alpha (s-t)} -e^{\alpha (s-t_0)} \right] Q_s[u(s,\cdot )](x) \textrm{d}s \right| \\&\quad + \sup _{x\in \mathbb {R}} e^{\lambda x} \left| \int _{t_0}^{t} e^{\alpha ( s-t)} Q_s[u(s,\cdot )](x) \textrm{d}s \right| \\&\le \left| e^{-\alpha t} - e^{-\alpha t_0}\right| \Vert u_0\Vert _{BC_\lambda } \\&\quad + \left| e^{-\alpha t} - e^{-\alpha t_0}\right| \sup _{ s\in [0,h] } \Vert Q_s[u(s,\cdot )] \Vert _{BC_\lambda } \int _{0}^{t_0} e^{\alpha s} \textrm{d}s \\&\quad + \sup _{ s\in [0,h] } \Vert Q_s[u(s,\cdot )]\Vert _{BC_\lambda } \left| \frac{1- e^{ \alpha (t_0-t)}}{\alpha } \right| . \end{aligned} \end{aligned}$$

Hence, \(t\mapsto T[u](t, \cdot ) \in C([0, h], BC_\lambda (\mathbb {R}))\) and \(T[u](0,\cdot )= u_0(\cdot )\).

Also, note that since, for each \(t\in [0, h] \), the map \(Q_t\) is non-decreasing on E and \(u(t, \cdot )\in E\), we get

$$\begin{aligned} 0\le T [u](t,\cdot ) \le e^{-\alpha t} + \frac{1}{\alpha } (1- e^{-\alpha t}) \alpha \le 1, \;\; \forall t\in [0, h]. \end{aligned}$$

Hence, for each \(u \in W\), one has \(T[u] \in W\).

For each \(u, v\in W\) and a given \(\gamma >0\) large enough, we introduce a metric on W defined by

$$\begin{aligned} d(u,v):= \sup _{t\in [0, h]} \sup _{x\in \mathbb {R}} e^{\lambda x} |u(t,x) - v(t,x) | e^{-\gamma t}. \end{aligned}$$

Note that

$$\begin{aligned} \begin{aligned}&d(T [u], T[v] ) = \sup _{t\in [0, h]} \sup _{x\in \mathbb {R}} e^{\lambda x } \left| \int _{0}^{t} e^{\alpha (s-t)} \left( Q[u] (s,x) -Q[v](s,x)\right) \textrm{d}s \right| e^{-\gamma t}\\&\le \sup _{t\in [0, h]} \sup _{x\in \mathbb {R}} \left| \int _{0}^{t} e^{(\alpha + \gamma ) (s-t)} \left[ \alpha + \int _\mathbb {R}K(y) e^{\lambda y} \textrm{d}y + {\overline{K}} + 3 \Vert \mu \Vert _\infty \right] e^{-\gamma s} e^{\lambda x} |u(s,x)-v(s,x)| \textrm{d}s \right| \\&\le \left[ \alpha + \int _\mathbb {R}K(y) e^{\lambda y} \textrm{d}y + {\overline{K}} + 3 \Vert \mu \Vert _\infty \right] \sup _{t\in [0,h]} \int _{0}^{t} e^{(\alpha +\gamma ) (s-t)} \textrm{d}s \cdot d(u,v) \\&\le \frac{\alpha + \int _\mathbb {R}K(y) e^{\lambda y} \textrm{d}y + {\overline{K}} + 3 \Vert \mu \Vert _\infty }{\alpha + \gamma } \cdot d(u, v). \end{aligned} \end{aligned}$$

Hence, T is a contraction map on W endowed with the metric \(d=d(u,v)\), as long as \(\gamma >0\) is sufficiently large so that

$$\begin{aligned} \frac{\alpha + \int _\mathbb {R}K(y) e^{\lambda y} \textrm{d}y + {\overline{K}} + 3 \Vert \mu \Vert _\infty }{\alpha + \gamma }<1. \end{aligned}$$

Finally, since (W, d) is a complete metric space, the Banach fixed point theorem ensures that T has a unique fixed point in W, which is the solution of (1.12) with \(u(0, \cdot )= u_0(\cdot )\). Since \(t\mapsto u(t,\cdot ) \in C([0,h], BC_\lambda (\mathbb {R}))\), one obtains

$$\begin{aligned} \lim \limits _{t\rightarrow 0^+} \sup _{x\in \mathbb {R}} e^{\lambda x} |u(t,x) - u_0(x)|=0, \end{aligned}$$

that completes the proof of the lemma. \(\square \)

Lemma 3.4

Let Assumptions 1.3 and 1.10 and (1.13) be satisfied. Let \(u=u(t,x)\) be the solution of (1.12) supplemented with an initial data \(v_0\) satisfying the following properties: \(v_0\) is Lipschitz continuous on \(\mathbb {R}\), and there are \(A>0\) large enough, \(\alpha >0\), \(p\in (0, 1)\) and \(\lambda \in (0, \lambda _r^*)\) such that

$$\begin{aligned} v_0(x)= {\left\{ \begin{array}{ll} \text {increasing function}, \; &{} x\in [0, \alpha ], \\ \beta := p e^{- \lambda A}, \; &{} x\in [\alpha , A], \\ p e^{-\lambda x}, \; &{} x \in [A, \infty ), \\ 0, \; &{} x\in (-\infty , 0]. \end{array}\right. } \end{aligned}$$
(3.10)

Then, the function \(u=u(t,x)\) is uniformly continuous on \([0,\infty )\times \mathbb {R}\).
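Before turning to the proof, and only for concreteness, the following short numerical sketch builds one admissible profile of the form (3.10). The helper make_v0, the linear ramp on \([0,\alpha ]\) and the numerical values of p, \(\lambda \), \(\alpha \) and A are illustrative choices and are not imposed by the lemma.

```python
import numpy as np

def make_v0(p=0.5, lam=0.2, alpha=1.0, A=10.0):
    """One admissible profile of type (3.10): zero on (-inf, 0], a linear ramp on
    [0, alpha], the plateau beta = p*exp(-lam*A) on [alpha, A], and the slow
    exponential tail p*exp(-lam*x) on [A, infinity). All values are illustrative;
    lam is assumed to lie in (0, lambda_r^*)."""
    beta = p * np.exp(-lam * A)
    def v0(x):
        x = np.asarray(x, dtype=float)
        ramp = beta * x / alpha              # increasing and Lipschitz on [0, alpha]
        tail = p * np.exp(-lam * x)          # slow exponential decay beyond A
        return np.where(x <= 0.0, 0.0,
               np.where(x <= alpha, ramp,
               np.where(x <= A, beta, tail)))
    return v0

v0 = make_v0()
print(np.round(v0(np.array([-1.0, 0.5, 5.0, 10.0, 15.0])), 4))
```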

Proof

As in the proof of Lemma 3.1, \(u=u(t,x)\) also satisfies (3.2).

Now, from the definition of \(v_0\), for \(h>0\) small enough and for the given \(\lambda \in (0, \lambda _r^*)\), one can observe that

$$\begin{aligned} v_0(x) \ge e^{-\lambda h} v_0 (x-h), \; \forall x\in \mathbb {R}. \end{aligned}$$
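For the reader's convenience, let us briefly check this inequality region by region. For \(x\ge A+h\), it is in fact an identity:

$$\begin{aligned} v_0(x) = p e^{-\lambda x} = e^{-\lambda h}\, p e^{-\lambda (x-h)} = e^{-\lambda h} v_0(x-h). \end{aligned}$$

For \(x\le A\), one has \(v_0(x-h)\le v_0(x)\) since \(v_0\) is non-decreasing on \((-\infty , A]\), while for \(x\in (A, A+h)\) one has \(v_0(x)= p e^{-\lambda x} \ge e^{-\lambda h} \beta \ge e^{-\lambda h} v_0(x-h)\).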

Let us show that the function \(v^h(t,x):= e^{-\lambda h} u(t,x-h) \) (with \(v^h(0,x) = e^{-\lambda h} v_0(x-h)\)) is a sub-solution of (1.12). To see this, note that \(v^h(t,x)\) satisfies

$$\begin{aligned} \begin{aligned} \partial _t v^h(t,x)&= \int _\mathbb {R}K(y) v^h(t,x-y) \textrm{d}y \!-\!{\overline{K}} v^h(t,x) \!+\! \mu (t) v^h(t,x) \left( 1 - e^{\lambda h} v^{h}(t,x) \right) \\&\le \int _\mathbb {R}K(y) v^h(t,x-y) \textrm{d}y -{\overline{K}} v^h(t,x) + \mu (t) v^h(t,x) \left( 1 - v^{h}(t,x) \right) . \end{aligned} \end{aligned}$$

Hence, \(v^h (t,x)\) is a sub-solution of (1.12).

Since \(v^h(0, \cdot ) \le v_0(\cdot )\), the comparison principle implies that

$$\begin{aligned} u(t,x) \ge e^{-\lambda h} u(t,x-h), \; \forall t\ge 0, x\in \mathbb {R}. \end{aligned}$$

Similarly to (3.7), since \(u(t,x) \ge e^{-\lambda h} u(t,x-h)\) and \(0\le u\le 1\), one also has, for all \(h>0\) sufficiently small,

$$\begin{aligned} u(t,x-h) - u(t,x) \le \left( 1 -e^{-\lambda h} \right) u(t,x-h) \le 1 -e^{-\lambda h}, \; \forall t\ge 0, x\in \mathbb {R}, \end{aligned}$$
(3.11)

and changing x to \(x+h\) yields for all \(h >0\) sufficiently small,

$$\begin{aligned} u(t,x) - u(t,x+h) \le \left( 1 -e^{-\lambda h} \right) u(t,x) \le 1 -e^{-\lambda h}, \; \forall t\ge 0, x\in \mathbb {R}. \end{aligned}$$
(3.12)

Next, we show that there exists \(\alpha (h) \in (0,1)\) with \(\alpha (h) \rightarrow 1\) as \(h \rightarrow 0\) such that, for all \(h>0\) small enough,

$$\begin{aligned} u(\sqrt{h}, x) \ge \alpha (h) v_0(x+h), \; \forall x\in \mathbb {R}. \end{aligned}$$

Since \(v_0(x+h)=0\) for \(x\le -h\), it suffices to consider the above inequality for \(x \ge -h\). As in the proof of Lemma 3.1, note that for all \(h>0\) sufficiently small and uniformly for \(x\in \mathbb {R}\), one has

$$\begin{aligned} u(\sqrt{h},x) \ge v_0(x) \bigg ( 1 - {\overline{K}} \sqrt{h} \bigg ) + \sqrt{h} \bigg ( \int _{\mathbb {R}} K(y) v_0(x-y)\textrm{d}y + o(1) \bigg ). \end{aligned}$$

One may now observe that there exists \(\varepsilon >0\) such that, for all \( -h \le x \le 2A\),

$$\begin{aligned} \int _{\mathbb {R}} K(y) v_0(x-y)\textrm{d}y \ge \varepsilon >0. \end{aligned}$$

As in the proof of Claim 3.2, set \(\alpha _1 (h)= 1-2{\overline{K}}\sqrt{h}\). Then, one has

$$\begin{aligned} u(\sqrt{h}, x) \ge \alpha _1 (h) v_0(x+h), \;\forall x\le 2A. \end{aligned}$$

Let us now prove that there exists \(\alpha _2(h) \in (0,1)\) with \(\alpha _2(h) \rightarrow 1\) as \(h\rightarrow 0\) such that \(u(\sqrt{h}, x) \ge \alpha _2(h) v_0(x+h)\) for \(x\ge 2A\). From Lemma 3.3, one has

$$\begin{aligned} \lim \limits _{h\rightarrow 0^+} \sup _{x\ge 2A} e^{\lambda x} |u(\sqrt{h}, x) - p e^{-\lambda x}|=0. \end{aligned}$$

Set

$$\begin{aligned} \gamma (h):= \sup _{x\ge 2A} e^{\lambda x} |u(\sqrt{h}, x) - p e^{-\lambda x}|, \end{aligned}$$

and observe that, for h sufficiently small, for all \(x\ge 2A\), one has

$$\begin{aligned} \begin{aligned} \left( 1- \frac{\gamma (h)}{p} \right) v_0(x) =-\gamma (h) e^{-\lambda {x}} + p e^{-\lambda x}&\le u(\sqrt{h}, x) \\&\le \gamma (h) e^{-\lambda {x}} + p e^{-\lambda x} \\&= \left( \frac{\gamma (h)}{p} + 1 \right) v_0(x). \end{aligned} \end{aligned}$$

Hence, one can set \(\alpha _2(h):= 1- \frac{\gamma (h)}{p}\) to obtain, for h small enough, \(0<\alpha _2(h)<1\), \(\alpha _2(h) \rightarrow 1\) as \(h\rightarrow 0\) and

$$\begin{aligned} u(\sqrt{h},x) \ge \alpha _2(h) v_0(x), \; \forall x\ge 2A. \end{aligned}$$

Then, since \(v_0\) is non-increasing for \(x\ge A\), one has

$$\begin{aligned} u(\sqrt{h}, x) \ge \alpha _2(h) v_0(x) \ge \alpha _2 (h) v_0(x+h), \; \forall x\ge 2A. \end{aligned}$$

Now, set \(\alpha (h):= \min \{\alpha _1(h), \alpha _2(h)\}\). We get

$$\begin{aligned} u(\sqrt{h}, x) \ge \alpha (h) v_0(x+h), \; \forall x\in \mathbb {R}. \end{aligned}$$

As in the proof of Lemma 3.1, one can also construct a function \({\tilde{b}}_h(t)\rightarrow 1\) as \(h\rightarrow 0\) uniformly for \(t\ge 0\) with \(0<{\tilde{b}}_h(0)<\alpha (h) \) and such that for all \(h>0\) small enough one has

$$\begin{aligned} u(t+ \sqrt{h},x) \ge {\tilde{b}}_h(t) u(t,x+h), \; \forall t\ge 0, x\in \mathbb {R}. \end{aligned}$$

With such a choice, for all \(h >0\) small enough, for all \(t\ge 0\) and \(x\in \mathbb {R}\), one obtains that

$$\begin{aligned} u(t,x+h) -u(t,x) \le \left( \frac{1}{{\tilde{b}}_h(t)} -1 \right) u(t+ \sqrt{h}, x) + M \sqrt{h} \le \left( \frac{1}{{\tilde{b}}_h(t)} - 1 \right) + M \sqrt{h}. \end{aligned}$$
(3.13)

Similarly, for all \(t\ge 0\) and \(x\in \mathbb {R}\), one has

$$\begin{aligned} u(t,x) -u(t,x-h) \le \left( \frac{1}{{\tilde{b}}_h(t)} -1 \right) u(t+ \sqrt{h}, x-h) + M \sqrt{h} \le \left( \frac{1}{{\tilde{b}}_h(t)} - 1 \right) + M \sqrt{h}. \end{aligned}$$
(3.14)

Combined with (3.11) and (3.12), this ensures that u is uniformly continuous on \([0,\infty )\times \mathbb {R}\) and completes the proof of the lemma. \(\square \)

Remark 3.5

Here, we point out that problem (1.1) is invariant with respect to spatial translations, so that a spatial shift of the initial data \(v_0(\cdot )\) induces the same spatial shift of the solution and does not affect the uniform continuity on \([0,\infty )\times \mathbb {R}\).

3.2 Proof of Theorem 1.11

In this subsection, we construct a suitable exponentially decaying super-solution and prove Theorem 1.11.

Proof of Theorem 1.11

For each given \(\lambda >0\) and sufficiently large \(A>0\), let us first construct the following function

$$\begin{aligned} {\overline{u}}(t,x):= {\left\{ \begin{array}{ll} A e^{-\lambda _r^* \left( x- \int _{0}^{t} c(\lambda _r^*)(s) \textrm{d}s \right) }, &{}\text { if } \lambda \ge \lambda _r^*, \\ A e^{-\lambda \left( x- \int _{0}^{t} c(\lambda )(s) \textrm{d}s \right) }, &{} \text { if } 0<\lambda < \lambda _r^*. \end{array}\right. } \end{aligned}$$

Here, we choose \(A>0\) large enough so that \({\overline{u}}(0,\cdot ) \ge u_0 (\cdot )\) and recall that the speed function \(t\mapsto c(\lambda )(t)\) is defined in (1.8).

Since \(f(t,u) \le \mu (t)\) for all \(t\ge 0\) and \(u\in [0, 1]\), one readily obtains that \({\overline{u}}\) is a super-solution of (1.1). Hence, the comparison principle implies that

$$\begin{aligned} \lim \limits _{t\rightarrow \infty } \sup _{x\ge \int _{0}^{t} c^{+} (\lambda )(s) \textrm{d}s + \eta t} u(t,x) \le \lim \limits _{t\rightarrow \infty } \sup _{x\ge \int _{0}^{t} c^{+} (\lambda )(s) \textrm{d}s + \eta t} {\overline{u}} (t,x) =0, \;\; \forall \eta >0. \end{aligned}$$

This completes the proof of the upper estimate as stated in Theorem 1.11. \(\square \)

3.3 Proof of Theorem 1.12

In this subsection, we first discuss some properties of the solution of the following autonomous Fisher–KPP equation:

$$\begin{aligned} \partial _t u(t,x) = \int _\mathbb {R}k(y) u(t,x-y) \textrm{d}y - {\bar{k}} u(t,x) + u(t,x)(m- b u(t,x)), \; t\ge 0, x\in \mathbb {R}. \end{aligned}$$
(3.15)

Here, \(k(\cdot )\) is a given symmetric kernel as defined in Remark 1.4, \({\bar{k}}= \int _\mathbb {R}k(y) \textrm{d}y>0\), while m and b are given positive constants.

Define

$$\begin{aligned} c_0:= \inf _{\lambda >0} \frac{ \int _\mathbb {R}k(y) e^{\lambda y} \textrm{d}y - {\bar{k}} + m}{\lambda }. \end{aligned}$$
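As an illustration, the infimum defining \(c_0\) can also be evaluated numerically. The sketch below does so for a Gaussian kernel with \(m=1\); the kernel, the value of m and the helper candidate_speed are illustrative assumptions only (any symmetric thin-tailed kernel would do, and b does not enter the value of \(c_0\)).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Illustrative data: a symmetric Gaussian kernel k and a growth rate m (assumptions).
k = lambda y: np.exp(-y**2 / 2.0) / np.sqrt(2.0 * np.pi)
k_bar = quad(k, -np.inf, np.inf)[0]      # total mass of the kernel (= 1 here)
m = 1.0

def candidate_speed(lam):
    # The quantity minimized in the definition of c_0:
    # ( int_R k(y) e^{lam y} dy - k_bar + m ) / lam
    moment = quad(lambda y: k(y) * np.exp(lam * y), -np.inf, np.inf)[0]
    return (moment - k_bar + m) / lam

res = minimize_scalar(candidate_speed, bounds=(1e-3, 3.0), method="bounded")
print(f"minimizing lambda ~ {res.x:.4f}, c_0 ~ {res.fun:.4f}")
# For this Gaussian kernel one finds lambda ~ 1 and c_0 ~ e^{1/2} ~ 1.6487.
```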

Note that \(c_0 >0\) since \(k(\cdot )\) is a symmetric function (see also Xu et al. (2021), where the sign of the (right and left) wave speed is investigated). Our first important result reads as follows.

Lemma 3.6

Let \(u=u(t,x)\) be the solution of (3.15) supplemented with a continuous, compactly supported initial data \(u_0\) such that \( 0 \le u_0(\cdot ) \le \frac{m}{b} \) and \(u_0 \not \equiv 0\). Let us furthermore assume that u is uniformly continuous on \([0,\infty )\times \mathbb {R}\). Then, one has

$$\begin{aligned} \lim \limits _{t\rightarrow \infty } \sup _{ |x| \le ct} \left| \frac{m}{b}-u(t,x)\right| =0, \; \forall c\in [0,c_0). \end{aligned}$$

Remark 3.7

For a kernel function with \(\textrm{supp} (k) = \mathbb {R}\) and without the uniform continuity assumption, the above propagating behaviour is already known; we refer to Lutscher et al. (2005, Theorem 3.2). For the reader's convenience, we give a short proof of Lemma 3.6, with the help of Theorem 3.3 in Xu et al. (2021) and the additional regularity assumption on the solution.

Proof

Let \(c\in [0, c_0)\) be given and fixed. To prove the lemma, let us argue by contradiction by assuming that there exists a sequence \((t_n, x_n)\) with \(t_n \rightarrow \infty \) and \(|x_n| \le ct_n\) such that

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty } u(t_n,x_n) < \frac{m}{b}. \end{aligned}$$

Denote, for \(n\ge 0\), the sequence of functions \(u_n\) by \(u_n(t,x):= u(t+t_n, x+x_n)\). Since \(u=u(t,x)\) is uniformly continuous on \([0,\infty )\times \mathbb {R}\) and \( 0\le u \le \frac{m}{b}\), the Arzelà–Ascoli theorem applies and ensures that, as \(n\rightarrow \infty \) and possibly along a subsequence, one has \(u_n(t,x)\rightarrow u_\infty (t,x)\) locally uniformly for \((t,x) \in \mathbb {R}^2\), for some function \(u_\infty =u_\infty (t,x)\) defined in \(\mathbb {R}^2\) and such that \(u_\infty (0, 0) <\frac{m}{b}\).

Now, fix \(c'\in (c,c_0)\). Recall that Theorem 3.3 in Xu et al. (2021) ensures that there exists some constant \(q_{c'} \in \left( 0, \frac{m}{b}\right] \) such that

$$\begin{aligned} \liminf _{t\rightarrow \infty }\inf _{ |x| \le c't} u(t,x) \ge q_{c'}. \end{aligned}$$

Hence, there exists \(T>0\) such that

$$\begin{aligned} \inf _{ |x| \le c't} u(t,x) \ge q_{c'}/2,\;\forall t\ge T. \end{aligned}$$

This implies that for all \(n\ge 0\) and \(t\in \mathbb {R}\) such that \(t+t_n\ge T\) one has

$$\begin{aligned} \inf _{|x+x_n|\le c'(t+t_n)} u(t+t_n, x+x_n) \ge q_{c'}/2. \end{aligned}$$

Since one has \( |x_n| \le c t_n \) for all \(n\ge 0\), this implies that for all \(n\ge 0\) and \(t\in \mathbb {R}\) with \(t+t_n\ge T\):

$$\begin{aligned} \inf _{|x|\le (c'-c)t_n + c't} u(t+t_n, x+x_n) \ge q_{c'}/2. \end{aligned}$$

Finally, since \(c'>c\) and \(t_n \rightarrow \infty \) as \(n \rightarrow \infty \), one has \(u_{\infty }(t,x)\ge q_{c'}/2>0\) for all \((t,x) \in \mathbb {R}^2\).

Next, we consider \(U=U(t)\) with \( U(0)= q_{c'}/2>0\) the solution of the ODE

$$\begin{aligned} U'(t) = U(t) \left( m- b U(t)\right) , \forall t\ge 0. \end{aligned}$$
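This logistic ODE can be solved explicitly, namely

$$\begin{aligned} U(t)= \frac{\frac{m}{b}\, U(0)}{U(0) + \left( \frac{m}{b} - U(0)\right) e^{-mt}}, \quad \forall t \ge 0. \end{aligned}$$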

Since \(u_\infty (s,x) \ge q_{c'}/2 \) for all \((s, x) \in \mathbb {R}^2\), the comparison principle implies that

$$\begin{aligned} u_\infty (t+s, x) \ge U(t), \forall t\ge 0, s\in \mathbb {R}, x\in \mathbb {R}, \end{aligned}$$

so that, taking \(s=-t\) and \(x=0\),

$$\begin{aligned} u_\infty (0, 0) \ge U(t), \forall t\ge 0. \end{aligned}$$

On the other hand, since \(U(0)>0\), one gets \(U(t) \rightarrow \frac{m}{b} \) as \(t\rightarrow \infty \). Hence, this yields \(u_\infty (0,0) \ge \frac{m}{b}\), a contradiction with \(u_\infty (0,0) < \frac{m}{b}\), which completes the proof. \(\square \)

Now, we apply the key lemma to prove our inner propagation result Theorem 1.12.

Proof of Theorem 1.12 (i)

Here, we assume that the initial data \(u_0\) has a fast decay rate and we aim at proving that

$$\begin{aligned} \lim \limits _{t\rightarrow \infty } \sup _{x\in [0, ct] } | 1 - u(t,x)| =0, \; \forall c\in (0, c_r^*). \end{aligned}$$

One can construct an initial data \(v_0\) as in Lemma 3.1, by choosing suitable parameters and a spatial shift (see Remark 3.5), such that \(v_0(x) \le u_0(x)\) for all \(x\in \mathbb {R}\). Let \(v(t,x)\) be the solution of (1.12) with initial data \(v_0\); Lemma 3.1 ensures that \(v(t,x)\) is uniformly continuous for all \(t\ge 0, x\in \mathbb {R}\). Since \(v_0 (\cdot ) \le u_0(\cdot ) \), the comparison principle implies that \(v(t,x)\le u(t, x)\) for all \(t\ge 0, x\in \mathbb {R}\). Since \(u(t,x)\le 1\), it suffices to prove that

$$\begin{aligned} \lim _{ t \rightarrow \infty } \inf _{ x\in [0, ct] } v(t,x) = 1, \; \forall c \in (0, c_r^*). \end{aligned}$$

First, let us prove that

$$\begin{aligned} \liminf _{ t \rightarrow \infty } \inf _{ x \in [0, ct] } v(t,x)>0, \; \forall c \in (0, c_r^*). \end{aligned}$$

To do this, for all \(B, R>0\), \(\gamma \in \mathbb {R}\), we define \(c_{R,B}(\gamma )\) by

$$\begin{aligned} c_{R,B}(\gamma ):=\frac{2R}{\pi }\int _{-B}^B K(z)\textrm{e}^{\gamma z}\sin (\frac{\pi z}{2R})\textrm{d}z. \end{aligned}$$
(3.16)

Note that \( \gamma \mapsto c_{R,B}(\gamma ) \) is continuous and, recalling (1.10), one has

$$\begin{aligned} \lim \limits _{\gamma \rightarrow \lambda _r^*} \lim \limits _{\begin{array}{c} R\rightarrow \infty \\ B\rightarrow \infty \end{array} } c_{R,B}(\gamma ) = c_r^*. \end{aligned}$$
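Let us mention that the inner limit can be seen as follows: since \(\left| \frac{2R}{\pi }\sin \left( \frac{\pi z}{2R}\right) \right| \le |z|\) and \(\frac{2R}{\pi }\sin \left( \frac{\pi z}{2R}\right) \rightarrow z\) as \(R\rightarrow \infty \), the dominated convergence theorem yields, for each fixed \(\gamma \) with \(\int _\mathbb {R}K(z) (1+|z|) e^{\gamma z} \textrm{d}z<\infty \),

$$\begin{aligned} \lim \limits _{\begin{array}{c} R\rightarrow \infty \\ B\rightarrow \infty \end{array} } c_{R,B}(\gamma ) = \int _{\mathbb {R}} K(z)\, z\, e^{\gamma z}\textrm{d}z, \end{aligned}$$

and the limit as \(\gamma \rightarrow \lambda _r^*\) is then identified with \(c_r^*\) through (1.10).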

So, for each \(c' \in (c, c_r^*)\), one can choose \(\gamma \) close enough to \(\lambda _r^*\) such that, for \(R, B>0\) large enough,

$$\begin{aligned} c' \le c_{R,B}(\gamma ). \end{aligned}$$

Then, for all \( \frac{c}{c'}< k < 1\), since \(\frac{c}{k}< c' \le c_{R,B}(\gamma )\), one has

$$\begin{aligned} \frac{c t}{k} \le X(t):= c_{R, B}(\gamma ) t. \end{aligned}$$

Now, we apply Lemma 2.6 to show that

$$\begin{aligned} \liminf _{ t \rightarrow +\infty } \inf _{0 \le x \le kX(t)} v(t,x)>0. \end{aligned}$$

Note that \(t\mapsto X(t)\) is continuous for \(t\ge 0\), and Lemma 3.1 ensures that \(v=v(t,x)\) is uniformly continuous for all \(t\ge 0, x\in \mathbb {R}\). We only need to check that \(v=v(t,x)\) satisfies conditions (H1)–(H3) in Lemma 2.6.

To show (H1), recalling (1.5) and (1.6), one may observe that \(v=v(t,x)\) satisfies

$$\begin{aligned} \partial _t v(t,x) \ge \int _{\mathbb {R}} k(y) v(t,x-y) \textrm{d}y -{\overline{K}} v(t,x) + v(t,x) \left( \mu (t) -C v(t,x) \right) . \end{aligned}$$

Recalling Assumption 1.5 (f4) and Lemma 1.2, there exists \(a\in W^{1,\infty }(0,\infty )\) such that \(\mu (t) -{\overline{K}} + a'(t)\ge 0 \) for all \(t\ge 0\). Set \(w(t,x):=e^{a(t)} v(t,x)\) so that w satisfies

$$\begin{aligned} \begin{aligned} \partial _t w(t,x)&\ge \int _\mathbb {R}k(y) w(t,x-y) \textrm{d}y - {\bar{k}} w(t,x) \\&\quad + w(t,x) \left( {\bar{k}} + \mu (t)-{\overline{K}} + a'(t) -C e^{-a(t)} w(t,x) \right) \\&\ge \int _\mathbb {R}k(y) w(t,x-y) \textrm{d}y - {\bar{k}} w(t,x) + w(t,x) \left( m -C e^{\Vert a\Vert _\infty } w(t,x) \right) , \end{aligned} \end{aligned}$$

where \(m:=\inf \limits _{t\ge 0} \left( {\bar{k}} +\mu (t) -{\overline{K}} + a'(t)\right) \ge {\bar{k}}>0\). Now, we consider \({\underline{w}}={\underline{w}}(t,x)\) the solution of the following equation

$$\begin{aligned} \partial _t {\underline{w}}(t,x)= k* {\underline{w}} (t,x) -{\bar{k}} {\underline{w}}(t,x) + {\underline{w}}(t,x) \left( m - C e^{\Vert a\Vert _\infty } {\underline{w}} (t,x) \right) , \end{aligned}$$
(3.17)

supplemented with the initial data \({\underline{w}} (0,x) = e^{-\Vert a\Vert _\infty } v_0(x)\). Note that one has \({\underline{w}}(0,x) \le w(0,x)\) for all \(x\in \mathbb {R}\), so that the comparison principle implies that

$$\begin{aligned} w(t,x)= e^{a(t)}v(t,x) \ge {\underline{w}}(t,x), \;\; \forall t\ge 0, x\in \mathbb {R}. \end{aligned}$$

Lemma 3.6 implies that there exists \({\tilde{c}}>0\) such that

$$\begin{aligned} \lim _{t\rightarrow \infty } \sup _{ |x| \le ct } \left| {\underline{w}}(t,x) - \frac{m}{Ce^{\Vert a\Vert _\infty }} \right| =0, \; \forall c\in (0, {\tilde{c}}). \end{aligned}$$
(3.18)

Since \(a\in W^{1, \infty }(0, \infty ) \), we end up with

$$\begin{aligned} \liminf \limits _{t\rightarrow \infty } v(t,0) \ge \lim \limits _{t\rightarrow \infty } e^{-\Vert a\Vert _\infty } {\underline{w}}(t,0) = \frac{m}{Ce^{ 2 \Vert a\Vert _\infty } } >0, \end{aligned}$$

and (H1) is fulfilled.

Next we verify assumption (H2). Recall that for all \({\tilde{v}} \in \omega (v) {\setminus } \{0\}\), there exist \((t_n)\) with \(t_n\rightarrow \infty \) and \((x_n)\) such that \({\tilde{v}}(t,x)= \lim \limits _{n\rightarrow \infty } v(t+t_n, x+x_n)\) where this limit holds locally uniformly for \((t,x)\in \mathbb {R}^2\). As in the proof of Claim 2.5, such a function \({\tilde{v}}\) satisfies

$$\begin{aligned} \partial _t {\tilde{v}}(t,x)\ge \int _\mathbb {R}k(y) {\tilde{v}}(t,x-y) \textrm{d}y + {\tilde{v}} (t,x) ({\tilde{\mu }}(t)- {\overline{K}}-C {\tilde{v}}(t,x)), \; \forall (t,x) \in \mathbb {R}^2, \end{aligned}$$

where k(y) is defined in (1.5) and \({\tilde{\mu }}={\tilde{\mu }}(t)\in L^\infty (\mathbb {R})\) is a weak-star limit of the shifted functions \(\mu (t_n+\cdot )\). Similarly to Definition 1.3 and Lemma 1.2, one can define the least mean of \({\tilde{\mu }}\) over \(\mathbb {R}\) as

$$\begin{aligned} \lfloor {\tilde{\mu }} \rfloor = \lim _{ T \rightarrow \infty } \inf _{s\in \mathbb {R}} \frac{1}{T} \int _{0}^{T} {\tilde{\mu }} (t+s ) \textrm{d}t. \end{aligned}$$

Also, the least mean of \({\tilde{\mu }} \) satisfies

$$\begin{aligned} \lfloor {\tilde{\mu }} \rfloor = \sup _{a\in W^{1, \infty }(\mathbb {R}) } \inf _{ t\in \mathbb {R}} (a'+{\tilde{\mu }}) (t). \end{aligned}$$
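For instance, if \({\tilde{\mu }}\) happens to be T-periodic for some \(T>0\), both formulas above reduce to the usual average,

$$\begin{aligned} \lfloor {\tilde{\mu }} \rfloor = \frac{1}{T} \int _{0}^{T} {\tilde{\mu }}(t) \textrm{d}t, \end{aligned}$$

since in that case the sliding averages \(\frac{1}{T'} \int _{0}^{T'} {\tilde{\mu }}(t+s) \textrm{d}t\) converge to this value as \(T'\rightarrow \infty \), uniformly with respect to \(s\in \mathbb {R}\).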

Assumption 1.5 (f4) implies that \(\lfloor {\tilde{\mu }} \rfloor \ge {\overline{K}}\) and the same argument as above yields

$$\begin{aligned} \liminf \limits _{t\rightarrow \infty } {\tilde{v}}(t,0) \ge \frac{m}{Ce^{2\Vert b\Vert _\infty } }>0, \end{aligned}$$

where \(b\in W^{1,\infty }(\mathbb {R})\) is such that \( {\tilde{\mu }}(t)- {\overline{K}} + b'(t) \ge 0\) for all \(t\in \mathbb {R}\). Hence, condition (H2) is satisfied.

Before proving (H3), we state a lemma related to a compactly supported sub-solution of (1.1). Since (1.12) is a special case of (1.1), one can construct a similar sub-solution of (1.12). The following lemma can be proved similarly to Lemma 6.1 in Ducrot and Jin (2022), so the proof is omitted.

Lemma 3.8

Let Assumptions 1.3, 1.5 and 1.10 be satisfied. Let \(\gamma \in (0,\lambda _r^*)\) be given. Then, there exist \(B_0>0\) large enough and \(\theta _0>0\) such that for all \(B>B_0\) there exists \(R_0=R_0(B)>0\) large enough enjoying the following properties: for all \(B>B_0\) and \(R>\max (R_0(B),B)\), there exists some function \(a\in W^{1,\infty }(0,\infty )\) such that the function

$$\begin{aligned} u_{R,B}(t,x)={\left\{ \begin{array}{ll} e^{a(t)} e^{-\gamma x}\cos (\frac{\pi x}{2R}) &{}\text { if }t\ge 0\text { and }x\in [-R,R],\\ 0 &{}\text { else}, \end{array}\right. } \end{aligned}$$

satisfies, for all \(\theta \le \theta _0\), for all \(x\in [-R,R]\) and for any \(t\ge 0\),

$$\begin{aligned} \partial _t u_{R,B}(t,x) -c_{R,B}(\gamma ) \partial _x u_{R,B}(t,x)\le \int _{\mathbb {R}} K(x-y)u_{R,B}(t,y)\textrm{d}y+\left( \mu (t)-\theta -{{\overline{K}}} \right) u_{R,B}(t,x). \end{aligned}$$

Herein the speed \(c_{R,B}(\gamma )\) is defined in (3.16). Furthermore, let

$$\begin{aligned} {\underline{u}}(t,x):= \eta u_{R,B} (t,x -X(t)), \end{aligned}$$

where \(X(t)= c_{R, B}(\gamma ) t\) and \(\eta >0\) is small enough; then \({\underline{u}}(t,x)\) is a sub-solution of (1.1).

Now, with the help of Lemma 3.8 and the comparison principle, one can choose \(\eta >0\) small enough such that \({\underline{u}}(0,x) \le v_0(x)\), and therefore one has

$$\begin{aligned} \liminf \limits _{t\rightarrow \infty } v(t, X(t)) \ge \liminf \limits _{ t\rightarrow \infty } {\underline{u}}(t, X(t))= \liminf \limits _{ t \rightarrow \infty } \eta u_{R, B}(t, 0)>0, \end{aligned}$$

which ensures that (H3) is satisfied.

As a conclusion, all the conditions of Lemma 2.6 are satisfied and this yields

$$\begin{aligned} \liminf \limits _{ t \rightarrow \infty } \inf _{0 \le x \le kX(t)} v(t,x)>0, \end{aligned}$$

so that

$$\begin{aligned} \liminf \limits _{ t \rightarrow \infty } \inf _{ 0 \le x \le c t} v(t,x)>0, \; \forall c\in (0, c_r^*). \end{aligned}$$
(3.19)

Finally, let us prove that

$$\begin{aligned} \liminf \limits _{ t \rightarrow \infty } \inf _{ 0 \le x \le c t} v(t,x)=1, \; \forall c\in (0, c_r^*). \end{aligned}$$

To do this, note that combining (3.18) and (3.19) yields

$$\begin{aligned} \liminf \limits _{ t \rightarrow \infty } \inf _{ -c_1 t \le x \le c t } v(t,x)>0, \forall 0<c_1<{\tilde{c}}, \forall c\in (0, c^{*}_r). \end{aligned}$$

By an analysis similar to the proof of Lemma 3.6, one can show that the above limit is equal to 1. Hence, the proof is completed. \(\square \)

Next, we prove Theorem 1.12 (ii). First, we state a lemma about a sub-solution of (1.1). One can also construct a similar sub-solution for (1.12).

Lemma 3.9

Let Assumptions 1.3, 1.5 and 1.10 be satisfied. For each given \(\lambda \in (0, \lambda _r^*)\), define

$$\begin{aligned} \varphi (t,x) = e^{-\lambda (x+a(t))} - e^{-\lambda a(t)+B_0(t)+B_1} e^{-(\lambda + h)x}, \;\; t\ge 0, x\in \mathbb {R}, \end{aligned}$$
(3.20)

where \(a, B_0\in W^{1,\infty }(0,\infty )\), \(B_1>0\) and \(0<h<\min \left\{ \lambda , \sigma (K) -\lambda \right\} \). Then,

$$\begin{aligned} {\underline{\phi }}(t,x):=\max \left\{ 0, \; \varphi \left( t,x-\int _{0}^{t}c_{\lambda , a} (s) \textrm{d}s \right) \right\} \end{aligned}$$

is a sub-solution of (1.1).

Remark 3.10

Note that \(\varphi (t,x)\) is positive when

$$\begin{aligned} x> \frac{\Vert B_0 \Vert _\infty + B_1}{h}. \end{aligned}$$
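Indeed, factoring out the first exponential one gets

$$\begin{aligned} \varphi (t,x) = e^{-\lambda (x+a(t))} \left( 1 - e^{B_0(t)+B_1 - h x} \right) , \end{aligned}$$

so that \(\varphi (t,x)>0\) if and only if \(x> \frac{B_0(t)+B_1}{h}\), which holds for every \(t\ge 0\) as soon as \(x> \frac{\Vert B_0\Vert _\infty + B_1}{h}\).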

We point out that this lemma can be proved similarly to Ducrot and Jin (2022, Theorem 2.9), so we omit the proof.

Proof of Theorem 1.12(ii)

As in the proof of Theorem 1.12 (i), we can construct \(v_0(x)\) as in Lemma 3.4, by choosing suitable parameters and a spatial shift (see Remark 3.5), such that \(v_0(x) \le u_0(x)\) for all \(x\in \mathbb {R}\). Let \(v(t,x)\) be the solution of (1.12) equipped with initial data \(v_0\). Lemma 3.4 ensures that \(v(t,x)\) is uniformly continuous for all \(t\ge 0, x\in \mathbb {R}\).

Recalling (1.8) and (1.9), for each given \(\lambda \in (0, \lambda _r^*)\) and for all \(c< c'< \lfloor c(\lambda ) \rfloor \), one can choose a suitable function \( a \in W^{1,\infty }(0,+\infty )\) such that

$$\begin{aligned} c' < c_{\lambda , a} (t), \; \forall t\ge 0. \end{aligned}$$

Then, we define

$$\begin{aligned} X(t):= \int _{0}^{t}c_{\lambda ,a}(s) \textrm{d}s + P, \end{aligned}$$

where \(P> \frac{\Vert B_0\Vert _\infty +B_1}{h}>0\), and \(B_0(\cdot )\), \(B_1\) and h are given in Lemma 3.9. Note that for all \(\frac{c}{c'}< k < 1\),

$$\begin{aligned} c t \le k X(t). \end{aligned}$$

Next, it suffices to apply the key Lemma 2.6 to show that

$$\begin{aligned} \liminf _{ t \rightarrow \infty } \inf _{ 0 \le x\le k X(t)} v(t, x)>0. \end{aligned}$$

Note that, since the initial data \(v_0\) decays exponentially on the right-hand side, that is for \(x\gg 1\), one can construct a compactly supported initial data \({\underline{v}}_0\) as in Lemma 3.1 such that \({\underline{v}}_0 \le v_0\). Then, the comparison principle implies that (H1) and (H2) hold. To verify condition (H3), by Lemma 3.9 and the comparison principle, one has

$$\begin{aligned} \liminf _{ t \rightarrow \infty } v(t, X(t)) \ge \liminf \limits _{t\rightarrow \infty } {\underline{\phi }}(t,X(t))= \liminf \limits _{t\rightarrow \infty } \varphi (t,P)>0. \end{aligned}$$

So (H3) is satisfied. Hence, the key Lemma 2.6 ensures that

$$\begin{aligned} \liminf _{ t \rightarrow \infty } \inf _{ 0 \le x\le k X(t)} v(t, x)>0. \end{aligned}$$

Then, one has

$$\begin{aligned} \liminf _{ t \rightarrow \infty } \inf _{ 0 \le x\le ct } v(t, x)>0, \; \forall 0<c<\lfloor c(\lambda ) \rfloor . \end{aligned}$$

Similarly to the proof of Theorem 1.12 (i), one can show that

$$\begin{aligned} \lim _{ t \rightarrow \infty } \sup _{ x\in [0, ct] } |u(t, x) - 1 |=0, \; \forall 0<c<\lfloor c(\lambda ) \rfloor . \end{aligned}$$

The proof is completed. \(\square \)

Finally, we prove Corollary 1.13.

Proof of Corollary 1.13

Recalling \(H>0\) given in Remark 1.7, let us consider

$$\begin{aligned} \partial _{t} v(t,x)= \int _\mathbb {R}K(y) v(t,x-y) \textrm{d}y - {\overline{K}} v(t,x) + \mu (t) v(t,x)\left( 1 - Hv(t,x) \right) , \quad t\ge 0, x\in \mathbb {R}. \end{aligned}$$
(3.21)

By the same analysis, one can obtain a result for (3.21) similar to Theorem 1.12. For the reader's convenience, we state it in the following.

Let \(v=v(t,x)\) be the solution of (3.21) equipped with a continuous initial data \(u_0\), with \( 0\le u_0\le 1\) and \(u_0 \not \equiv 0\). Then, the following inner spreading occurs:

  1. (i)

    (fast exponential decay) If \(u_0(x)=O(e^{-\lambda x})\) as \(x\rightarrow \infty \) for some \(\lambda \ge \lambda _r^*\), then one has

    $$\begin{aligned} \lim _{t\rightarrow \infty } \sup _{x\in [0, ct]} \left| v(t,x)- \frac{1}{H} \right| =0,\;\forall c\in (0,c_r^*); \end{aligned}$$
  2. (ii)

    (slow exponential decay) If \(\displaystyle \liminf _{x\rightarrow \infty } e^{\lambda x}u_0(x) > 0 \) for some \(\lambda \in (0,\lambda _r^*)\), then

    $$\begin{aligned} \lim _{t\rightarrow \infty } \sup _{x\in [0, ct]} \left| v(t,x)- \frac{1}{H} \right| = 0,\;\forall c\in \left( 0,\lfloor c(\lambda )\rfloor \right) . \end{aligned}$$

Let \(u(t,x)\) denote the solution of (1.1) equipped with initial data \(u_0\). Recalling (1.6), \(v(t,x)\) is a sub-solution of (1.1). Then, the comparison principle implies that \(u(t,x) \ge v(t,x)\) for all \(t\ge 0, x\in \mathbb {R}\). Hence, the conclusion follows. \(\square \)