1 Introduction

The influence of environmental heterogeneity on population dynamics has been studied extensively. For example, environmental heterogeneity can increase the total population size of a single species [21]. For two competing species in heterogeneous environments that are identical except for their dispersal rates, the slower diffuser wins [7], whereas the two species can coexist in homogeneous environments. The global dynamics for the weak competition case were investigated in [19, 21] and completely classified in [12]. The heterogeneity of the environment can also induce complex patterns in predator–prey interaction models; see [9, 10, 20] and the references therein.

In heterogeneous environments, the population may tend to move up or down along the environmental gradient, which is modeled by an “advection” term [2]. That is, the random diffusion term \(d\Delta u\) is replaced by

$$\begin{aligned} d\Delta u-{\alpha \nabla \cdot \left( u\nabla m(x) \right) }, \end{aligned}$$
(1.1)

where d is the diffusion rate, \(\alpha \) is the advection rate, and m(x) represents the heterogeneity of the environment. The effect of advection of type (1.1) on population dynamics has been studied extensively for single species and for two competing species; see, e.g., [1–6, 16, 17, 22, 45]. Another kind of advection arises for species in streams, where the random diffusion term \(du_{xx}\) is replaced by

$$\begin{aligned} d u_{xx}-\alpha u_x, \end{aligned}$$
(1.2)

where \(\alpha u_x\) represents the unidirectional flow from the upstream end to the downstream end. It has been shown that, if two competing species in streams are identical except for their dispersal rates, then the faster diffuser wins [23, 26, 27, 44]. In [24, 33, 37, 39, 42], the authors studied the effect of advection of type (1.2) on the persistence of the predator and the prey. One can also refer to [13–15, 18, 28–30, 32, 36] and the references therein for population dynamics in streams.

As is well known, periodic solutions occur commonly for predator–prey models [31], and Hopf bifurcation is a mechanism to induce these periodic solutions. For diffusive predator–prey models in homogeneous environments, Hopf bifurcations can be investigated following the framework of [11, 43], see also [8, 35, 40, 41] and references therein. A natural question is how advection affects Hopf bifurcations for predator–prey models in heterogeneous environments.

In this paper, we give an initial exploration of this question and investigate the effect of advection of type (1.1) on Hopf bifurcations for the following predator–prey model:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle u_t= \nabla \cdot \left[ d_1 \nabla u -\alpha _1 u \nabla m \right] + u\left( m(x) - u \right) - \frac{uv}{1+u},&{}x \in \Omega ,\;t> 0,\\ \displaystyle v_t = d_2 \Delta v -rv + \frac{luv}{1+u}, &{}x \in \Omega ,\;t> 0,\\ d_1 \partial _n u -\alpha _1 u \partial _n m =0,\;\;\partial _n v = 0, &{} x \in \partial \Omega ,\;t > 0,\\ u(0,x) = {u_0}(x) \ge 0,v(0,x) = {v_0}(x) \ge 0,&{}x \in \Omega . \end{array}\right. } \end{aligned}$$
(1.3)

Here \(\Omega \) is a bounded domain in \(\mathbb R^N\;(1\le N\le 3)\) with a smooth boundary \(\partial \Omega \); n is the outward unit normal vector on \(\partial \Omega \), and no-flux boundary conditions are imposed; u(x, t) and v(x, t) denote the population densities of the prey and predator at location x and time t, respectively; \(d_1,d_2>0\) are the diffusion rates; \(\alpha _1\ge 0\) is the advection rate; \(l>0\) is the conversion rate; \(r>0\) is the death rate of the predator; and the function \(u/(1 + u)\) denotes the Holling type-II functional response of the predator to the prey density. The function m(x) represents the intrinsic growth rate of the prey, which depends on the spatial environment.

Throughout the paper, we impose the following assumption:

\((\textbf{H}_1)\):

\(m(x)\in C^2 (\overline{\Omega })\), \(m(x)\ge (\not \equiv )0\) in \(\overline{\Omega }\), and m(x) is non-constant.

\((\textbf{H}_2)\):

\(\displaystyle \frac{d_2}{d_1}=\theta >0\) and \(\displaystyle \frac{\alpha _1}{d_1}=\alpha \ge 0\).

Here \((\textbf{H}_2)\) is a technical condition; it means that the diffusion rate of the predator and the advection rate of the prey are both proportional to the diffusion rate of the prey. Letting \(\tilde{u}=e^{-\alpha m(x)} u,\tilde{t}=d_1 t\), denoting \(\lambda = 1/ d_1\), and dropping the tildes, model (1.3) can be transformed into the following model:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle u_t= e^{-\alpha m(x)} \nabla \cdot \left[ e^{\alpha m(x)}\nabla u \right] + \lambda u\left( m(x) - e^{\alpha m(x)} u - \frac{v}{1+e^{\alpha m(x)} u}\right) ,&{}x \in \Omega ,\;t> 0,\\ \displaystyle v_t = \theta \Delta v + \lambda v\left( -r + \frac{l e^{\alpha m(x)} u}{1+e^{\alpha m(x)} u}\right) ,&{}x \in \Omega ,\;t> 0,\\ \partial _n u = \partial _n v = 0, &{} x \in \partial \Omega ,\;t>0,\\ u(0,x) = {u_0}(x) \ge 0,v(0,x) = {v_0}(x) \ge 0,&{}x \in \Omega , \end{array}\right. }\nonumber \\ \end{aligned}$$
(1.4)

where \(\theta \) and \(\alpha \) are defined in assumption \((\textbf{H}_2)\).
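For the reader's convenience, we sketch the computation behind this transformation. Writing \(u=e^{\alpha m(x)} \tilde{u}\) and using \(\alpha _1=\alpha d_1\) from \((\textbf{H}_2)\), we have

$$\begin{aligned} d_1 \nabla u -\alpha _1 u \nabla m= d_1\left[ e^{\alpha m(x)}\nabla \tilde{u}+\alpha e^{\alpha m(x)} \tilde{u}\nabla m\right] -\alpha d_1 e^{\alpha m(x)} \tilde{u}\nabla m= d_1 e^{\alpha m(x)}\nabla \tilde{u}, \end{aligned}$$

so that \(\nabla \cdot \left[ d_1 \nabla u -\alpha _1 u \nabla m \right] =d_1 \nabla \cdot \left[ e^{\alpha m(x)}\nabla \tilde{u} \right] \) and \(d_1 \partial _n u -\alpha _1 u \partial _n m =d_1 e^{\alpha m(x)}\partial _n \tilde{u}\). Dividing the first equation of (1.3) by \(d_1 e^{\alpha m(x)}\), the second by \(d_1\), and rescaling time by \(\tilde{t}=d_1 t\) then yields (1.4).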

We remark that model (1.3) with \(\alpha _1=0\) (respectively, model (1.4) with \(\alpha =0\)) was investigated in [25, 38], where it was shown that the heterogeneity of the environment can influence the local dynamics and that multiple positive steady states can bifurcate from the semi-trivial steady state, using \(d_1,d_2\) (respectively, \(\lambda \)) as bifurcation parameters. In this paper, we consider model (1.4) in the case \(\alpha \ne 0\) and \(0<\lambda \ll 1\). We show that when \(0<\lambda \ll 1\), the local dynamics of model (1.4) is similar to that of the following “weighted” ODE system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle u_t\int _\Omega e^{\alpha m(x)}dx=u\left( \int _{\Omega }e^{\alpha m(x)}m(x)dx-u\int _{\Omega }e^{2\alpha m(x)}dx\right) -v \int _{\Omega }\frac{e^{\alpha m(x)} u}{1+e^{\alpha m(x)} u}dx,\\ \displaystyle v_t =-r v+\frac{l v}{|\Omega |}\int _{\Omega }\frac{ e^{\alpha m(x)} u}{1+e^{\alpha m(x)} u}dx.\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(1.5)

A direct computation implies that model (1.5) admits a unique positive equilibrium \((c_{0l},q_{0l})\) if and only if \(l>\tilde{l}\), where \((c_{0l},q_{0l})\) and \(\tilde{l}\) are defined in Lemma 2.1. From the proof of Lemma 3.4, one can obtain the local dynamics of model (1.5) as follows:

  1. (i)

    If \({\mathcal {T}}(\alpha )<0\), then the positive equilibrium \((c_{0l},q_{0l})\) of model (1.5) is stable for \(l>\tilde{l}\);

  2. (ii)

    If \({\mathcal {T}}(\alpha )>0\), then there exists \(l_0>\tilde{l}\) such that \((c_{0l},q_{0l})\) is stable for \(\tilde{l}<l<l_0\) and unstable for \(l>l_0\), and model (1.5) undergoes a Hopf bifurcation when \(l=l_0\).

Here \({\mathcal {T}}(\alpha )\) and \(l_0\) are defined in Lemma 3.3. Similarly, model (1.4) admits a unique positive steady state \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) for \((l,\lambda )\in [\tilde{l}+\epsilon ,1/\epsilon ]\times (0,\delta _\epsilon ]\) with \(0<\epsilon \ll 1\), where \(\delta _\epsilon \) depends on \(\epsilon \) (Theorem 2.5), and exhibits local dynamics similar to those of model (1.5) when \(l\in [\tilde{l}+\epsilon ,1/\epsilon ]\) and \(0<\lambda \ll 1\) (Theorem 3.10); see also Fig. 1. Moreover, we show that the sign of \({\mathcal {T}}(\alpha )\) is key to guaranteeing the existence of a Hopf bifurcation curve for model (1.4). We obtain that if \(\int _{\Omega }(m(x)-1)dx<0\) and \(\{x\in \Omega :m(x)>1\}\ne \emptyset \), then there exists \(\alpha _*>0\) such that \({\mathcal {T}}(\alpha _*)=0\), \({\mathcal {T}}(\alpha )<0\) for \(0\le \alpha <\alpha _*\), and \({\mathcal {T}}(\alpha )>0\) for \(\alpha >\alpha _*\) (Theorem 4.2). Therefore, the advection rate affects the occurrence of Hopf bifurcations (Proposition 4.3). Moreover, we find that the advection rate can also affect the Hopf bifurcation values (Proposition 4.4).

Fig. 1

Local dynamics of model (1.4) for \((l,\lambda )\in [\tilde{l}+\epsilon ,1/\epsilon ]\times (0,\tilde{\lambda }_j(\alpha ,\epsilon ))\) with \(0<\epsilon \ll 1\). Here \(\tilde{\lambda }_j(\alpha ,\epsilon )\) means that \(\tilde{\lambda }_j\) depends on \(\alpha \) and \(\epsilon \) for \(j=1,2\). (Left): \({\mathcal {T}}(\alpha )<0\); (Right): \({\mathcal {T}}(\alpha )>0\)
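The sign structure of \({\mathcal {T}}(\alpha )=\int _\Omega e^{\alpha m(x)}(m(x)-1)dx\) described above (Theorem 4.2) is easy to check numerically. The following minimal sketch uses the illustrative choices \(\Omega =(0,1)\) and \(m(x)=0.4+x\) (these choices are ours and are not taken from the analysis in this paper); they satisfy \(\int _\Omega (m-1)dx<0\) and \(\{x:m(x)>1\}\ne \emptyset \), and a root search locates \(\alpha _*\):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative choices (ours, not from the paper): Omega = (0,1), m(x) = 0.4 + x,
# so that int_Omega (m - 1) dx = -0.1 < 0 while {x : m(x) > 1} = (0.6, 1].
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
m = 0.4 + x

def integ(f):
    # composite trapezoidal rule on the uniform grid x
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

def T(alpha):
    # T(alpha) = int_Omega e^{alpha m(x)} (m(x) - 1) dx
    return integ(np.exp(alpha * m) * (m - 1.0))

print(T(0.0))                      # negative: equals int_Omega (m - 1) dx = -0.1
print(T(20.0))                     # positive: the set {m > 1} dominates for large alpha
alpha_star = brentq(T, 0.0, 20.0)  # the sign-change point alpha_* of Theorem 4.2
print(alpha_star)
```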

For convenience, we list some notation for later use. We denote

$$\begin{aligned} X=\left\{ \left. u\in H^2(\Omega ) \right| \partial _n u = 0 \right\} \;\;\text {and} \;\; Y=L^2(\Omega ). \end{aligned}$$

Denote the complexification of a real linear space Z by

$$\begin{aligned} Z_{\mathbb C}:= Z\oplus \textrm{i} Z=\{x_1 + \textrm{i}x_2 |x_1, x_2 \in Z\}, \end{aligned}$$

and the kernel and range of a linear operator T by \({\mathcal {N}} (T)\) and \({\mathcal {R}} (T)\), respectively. On \( Y_{\mathbb C}\), we choose the standard inner product \(\langle u, v \rangle =\int _\Omega {\overline{u}(x)v(x)}dx,\) and the norm is defined by \(\Vert u\Vert _2={\langle u, u \rangle }^{\frac{1}{2}}.\)

The rest of the paper is organized as follows. In Sect. 2, we show the existence and uniqueness of the positive steady state for a range of parameters, see the rectangular region in Fig. 1. In Sect. 3, we obtain the local dynamics of model (1.4) when \((l,\lambda )\) is in the above rectangular region. In Sect. 4, we show the effect of advection on Hopf bifurcations.

2 Positive steady states

In this section, we consider the positive steady states of model (1.4), which satisfy the following system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle -\nabla \cdot \left[ e^{\alpha m(x)}\nabla u \right] = \lambda e^{\alpha m(x)} u\left( m(x) - e^{\alpha m(x)} u - \frac{v}{1+e^{\alpha m(x)} u}\right) ,&{}x \in \Omega ,\\ \displaystyle -\theta \Delta v = \lambda v\left( -r + \frac{l e^{\alpha m(x)} u}{1+e^{\alpha m(x)} u}\right) ,&{}x \in \Omega , \\ \partial _n u = \partial _n v = 0, &{} x \in \partial \Omega . \end{array}\right. } \end{aligned}$$
(2.1)

Denote

$$\begin{aligned} L:=\nabla \cdot \left[ e^{\alpha m(x)}\nabla \right] , \end{aligned}$$
(2.2)

and we have the following decompositions:

$$\begin{aligned} \begin{aligned}&X={\mathcal {N}} (\Delta )\oplus X_1={\mathcal {N}} (L)\oplus X_1,\\&Y={\mathcal {N}} (\Delta )\oplus Y_1={\mathcal {N}} (L)\oplus Y_1, \end{aligned} \end{aligned}$$
(2.3)

where

$$\begin{aligned} \begin{aligned}&X_1=\left\{ y\in X:\int _\Omega {y(x)}dx=0\right\} ,\\&Y_1={\mathcal {R}}(\Delta )={\mathcal {R}}(L)=\left\{ y\in Y:\int _\Omega {y(x)}dx=0\right\} . \end{aligned} \end{aligned}$$
(2.4)

Let

$$\begin{aligned} \begin{aligned}&u= c+ \xi ,\;\;\text {where}\;\;c=\displaystyle \frac{1}{|\Omega |}\int _{\Omega }udx\in \mathbb R,\;\xi \in X_1,\\&v=q+ \eta ,\;\;\text {where}\;\;q=\displaystyle \frac{1}{|\Omega |}\int _{\Omega }vdx\in \mathbb R,\;\eta \in X_1. \end{aligned} \end{aligned}$$
(2.5)

Then, substituting (2.5) into (2.1), we see that (u, v) (defined in (2.5)) is a solution of (2.1) if and only if \((c,q,\xi ,\eta )\in \mathbb R^2\times X_1^2\) solves

$$\begin{aligned} F(c,q,\xi ,\eta ,l,\lambda )=(f_1,f_2,f_3,f_4)^T= 0, \end{aligned}$$
(2.6)

where \(F(c,q,\xi ,\eta ,l,\lambda ):\mathbb R^2\times X_1^2\times \mathbb R^2\rightarrow \left( \mathbb R\times Y_1\right) ^2\), and

$$\begin{aligned} {\left\{ \begin{array}{ll} f_1(c,q,\xi ,\eta ,l,\lambda ):=&{}\displaystyle \int _\Omega { e^{\alpha m(x)}( c+ \xi )\left( m(x) - e^{\alpha m(x)}( c+ \xi )- \frac{q+ \eta }{1+e^{\alpha m(x)}( c+ \xi )}\right) } dx,\\ f_2(c,q,\xi ,\eta ,l,\lambda ):=&{}L \xi -\displaystyle \frac{\lambda }{{\left| \Omega \right| }} f_1\\ &{}+\displaystyle \lambda e^{\alpha m(x)}( c+ \xi )\left( m(x) - e^{\alpha m(x)}( c+ \xi )- \frac{q+ \eta }{1+e^{\alpha m(x)}( c+ \xi )}\right) ,\\ f_3(c,q,\xi ,\eta ,l,\lambda ):=&{}\displaystyle \int _\Omega { (q+ \eta )\left( -r + \frac{l e^{\alpha m(x)} ( c+ \xi )}{1+e^{\alpha m(x)}( c+ \xi )}\right) }dx ,\\ f_4(c,q,\xi ,\eta ,l,\lambda ):=&{}\displaystyle \theta \Delta \eta +\lambda (q+ \eta )\left( -r + \frac{l e^{\alpha m(x)} ( c+ \xi )}{1+e^{\alpha m(x)} ( c+ \xi )}\right) -\frac{\lambda }{{\left| \Omega \right| }} f_3. \end{array}\right. }\nonumber \\ \end{aligned}$$
(2.7)

We first solve \(F(c,q,\xi ,\eta ,l,\lambda )=0\) for \(\lambda =0\).

Lemma 2.1

Suppose that \(\lambda =0\), and let

$$\begin{aligned} \tilde{c}=\displaystyle \frac{\int _\Omega e^{\alpha m(x)}m(x) dx}{\int _\Omega e^{2\alpha m(x)}dx}\;\;\text {and}\;\; \tilde{l}=\displaystyle \frac{r |\Omega |}{\int _\Omega \frac{ \tilde{c} e^{\alpha m(x)} }{1+\tilde{c} e^{\alpha m(x)} }dx}. \end{aligned}$$
(2.8)

Then, for any \(l>0\), \( F(c,q,\xi ,\eta ,l,\lambda )= 0\) has three solutions: (0, 0, 0, 0), \((\tilde{c},0,0,0)\) and \((c_{0l},q_{0l},0,0)\), where \((c_{0l},q_{0l})\) satisfies

$$\begin{aligned} \begin{aligned}&\int _\Omega \frac{c_{0l}e^{\alpha m(x)}}{1+ c_{0l} e^{\alpha m(x)} }dx=\frac{r}{l} |\Omega |,\\&q _{0l}=\frac{l c_{0l}}{r |\Omega |} \int _\Omega e^{\alpha m(x)} \left( m(x)- c_{0l} e^{\alpha m(x)} \right) dx. \end{aligned} \end{aligned}$$
(2.9)

Moreover, \(c_{0l},q_{0l}>0\) if and only if \(l>\tilde{l}\).

Proof

Substituting \(\lambda =0\) into \(f_2=0\) and \(f_4=0\), respectively, we have \( \xi = 0\) and \( \eta = 0\). Then substituting \(\xi =\eta =0\) into \(f_1=0\) and \(f_3=0\), respectively, we have

$$\begin{aligned} \begin{aligned}&c \int _\Omega { e^{\alpha m(x)} \left( m(x) - c e^{\alpha m(x)}- \frac{q}{1+ce^{\alpha m(x)} }\right) } dx=0,\\&\int _\Omega { q\left( -r + \frac{l ce^{\alpha m(x)} }{1+ce^{\alpha m(x)} }\right) }dx=0. \end{aligned} \end{aligned}$$
(2.10)

Therefore, (2.10) has three solutions: (0, 0), \((\tilde{c},0)\), \((c_{0l},q_{0l})\), where \(\tilde{c}\) is defined in (2.8), and \((c_{0l},q_{0l})\) satisfies

$$\begin{aligned} \begin{aligned}&\int _\Omega { e^{\alpha m(x)} \left( m(x) - c_{0l} e^{\alpha m(x)}- \frac{q_{0l}}{1+c_{0l}e^{\alpha m(x)} }\right) } dx=0,\\&\int _\Omega {\left( -r + \frac{l c_{0l}e^{\alpha m(x)} }{1+c_{0l}e^{\alpha m(x)} }\right) }dx=0. \end{aligned} \end{aligned}$$
(2.11)

A direct computation implies that \(c_{0l}\) and \(q_{0l}\) satisfy (2.9). By the second equation of (2.9), we see that \(c_{0l},q_{0l}>0\) if and only if \(0<c_{0l}<\tilde{c}\). It follows from the first equation of (2.9) that

$$\begin{aligned} \displaystyle \frac{dc_{0l}}{dl}<0 \;\;\text {and}\;\;\lim _{l\rightarrow \infty } c_{0l}=0. \end{aligned}$$

Then we obtain that \(0<c_{0l}<\tilde{c}\) if and only if \(l>\tilde{l}\), where \(\tilde{l}\) is defined in (2.8). This completes the proof.\(\square \)
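For a concrete feel of the quantities in Lemma 2.1, a minimal numerical sketch (with the illustrative, non-paper choices \(\Omega =(0,1)\), \(m(x)=0.4+x\), \(\alpha =1\), \(r=0.2\)) solves the first equation of (2.9) for \(c_{0l}\), whose left-hand side is strictly increasing in \(c_{0l}\), and then evaluates the second:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative choices (ours, not from the paper): Omega = (0,1), m(x) = 0.4 + x.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
m = 0.4 + x
alpha, r = 1.0, 0.2
w = np.exp(alpha * m)                                   # e^{alpha m(x)}

def integ(f):
    # composite trapezoidal rule on the uniform grid x
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

c_tilde = integ(w * m) / integ(w * w)                   # tilde{c} in (2.8)
l_tilde = r / integ(c_tilde * w / (1 + c_tilde * w))    # tilde{l} in (2.8), |Omega| = 1

def c0(l):
    # First equation of (2.9); its left-hand side is strictly increasing in c.
    return brentq(lambda c: integ(c * w / (1 + c * w)) - r / l, 1e-12, 1e6)

l = 1.5 * l_tilde                                       # any l > tilde{l}
c = c0(l)
q = l * c / r * integ(w * (m - c * w))                  # second equation of (2.9), |Omega| = 1
print(c_tilde, l_tilde, c, q)                           # expect 0 < c < c_tilde and q > 0
```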

We remark that \(\tilde{l}\) is the critical value for the successful invasion of the predator for model (1.4) with \(0<\lambda \ll 1\) (respectively, for (1.5)). In the following, we consider the monotonicity of \(\tilde{l}\) with respect to \(\alpha \) and show the effect of the advection rate on the invasion of the predator.

Proposition 2.2

Let \(\tilde{l}(\alpha )\) be defined in (2.8). Then

$$\begin{aligned} \displaystyle \tilde{l}(\alpha )\ge \frac{r( |\Omega |+V(\alpha ) )}{V(\alpha )} \;\;\text {for all}\;\; \alpha >0, \end{aligned}$$
(2.12)

where

$$\begin{aligned} \displaystyle V(\alpha ):=\frac{\int _\Omega e^{\alpha m(x)}m(x) dx \int _\Omega e^{\alpha m(x)}dx }{\int _\Omega e^{2 \alpha m(x)} dx }. \end{aligned}$$
(2.13)

Moreover, the following statements hold:

  1. (i)

    \(\tilde{l}'(\alpha )|_{\alpha =0} < 0;\)

  2. (ii)

If \(\displaystyle \lim \nolimits _{\alpha \rightarrow \infty } V(\alpha )=0\), then \(\displaystyle \lim \nolimits _{\alpha \rightarrow +\infty } \tilde{l}(\alpha )=\infty \). In particular, if \(\Omega =(0,1)\) and \(m'(x)>0\) (respectively, \(m'(x)<0\)) for all \(x\in [0,1]\), then \(\displaystyle \lim \nolimits _{\alpha \rightarrow +\infty } V(\alpha )=0\).

Proof

Since the function \(\displaystyle \frac{x}{1+x}\) is concave, it follows from Jensen’s inequality that

$$\begin{aligned} \frac{1}{|\Omega |}\int _\Omega \frac{ \tilde{c}(\alpha ) e^{\alpha m(x)} }{ 1+ \tilde{c}(\alpha ) e^{\alpha m(x)}} dx\le \frac{\frac{1}{|\Omega |}\int _\Omega \tilde{c}(\alpha ) e^{\alpha m(x)} dx}{1+\frac{1}{|\Omega |}\int _\Omega \tilde{c} (\alpha ) e^{\alpha m(x)} dx}, \end{aligned}$$

where \(\tilde{c}(\alpha )\) is defined in (2.8). This combined with (2.8) implies that (2.12) holds.

(i) We define

$$\begin{aligned} {\mathcal {F}}(\alpha ):=\int _\Omega \frac{ \tilde{c}(\alpha ) e^{\alpha m(x)} }{1+\tilde{c}(\alpha ) e^{\alpha m(x)} }dx= |\Omega |- \int _\Omega \frac{1 }{1+\tilde{c}(\alpha ) e^{\alpha m(x)} }dx, \end{aligned}$$

and consequently, \(\displaystyle \tilde{l}(\alpha )=\frac{r |\Omega |}{{\mathcal {F}}(\alpha )}\). A direct computation yields

$$\begin{aligned} {\mathcal {F}}'(0)=\frac{ |\Omega |}{ \left( |\Omega |+\int _\Omega m(x) dx \right) ^2}\left[ |\Omega | \int _\Omega m^2(x) dx- \left( \int _\Omega m(x) dx \right) ^2 \right] . \end{aligned}$$

Since m(x) is non-constant, it follows from the Hölder inequality that \({\mathcal {F}}'(0)>0\), and consequently \(\tilde{l}'(0)<0\).
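For completeness, the computation behind the displayed formula for \({\mathcal {F}}'(0)\) reads as follows. Differentiating under the integral sign gives

$$\begin{aligned} {\mathcal {F}}'(\alpha )=\int _\Omega \frac{\left( \tilde{c}'(\alpha )+\tilde{c}(\alpha ) m(x)\right) e^{\alpha m(x)}}{\left( 1+\tilde{c}(\alpha ) e^{\alpha m(x)} \right) ^2}dx, \end{aligned}$$

and evaluating at \(\alpha =0\), with \(\tilde{c}(0)=\frac{1}{|\Omega |}\int _\Omega m(x) dx\) and \(\tilde{c}'(0)=\frac{1}{|\Omega |}\int _\Omega m^2(x) dx-\frac{2}{|\Omega |^2}\left( \int _\Omega m(x) dx\right) ^2\), recovers the expression for \({\mathcal {F}}'(0)\) above.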

(ii) By (2.12), we see that \(\displaystyle \lim \nolimits _{\alpha \rightarrow +\infty } \tilde{l}(\alpha )=\infty \) if \(\displaystyle \lim \nolimits _{\alpha \rightarrow \infty } V(\alpha )=0\). Next, we give a sufficient condition for \(\displaystyle \lim \nolimits _{\alpha \rightarrow \infty } V(\alpha )=0\). We only consider the case that \(m'(x)>0\) for all \(x\in [0,1]\); the other case can be proved similarly. Since

$$\begin{aligned} \int _{0}^{1}e^{\alpha m(x)}dx=\int _{0}^{1}e^{\alpha m(x)}m'(x)\frac{1}{m'(x)}dx=\int _{0}^{1}e^{\alpha m(x)}\frac{1}{m'(x)}dm(x), \end{aligned}$$

we have

$$\begin{aligned} \frac{e^{\alpha m(1)}-e^{\alpha m(0)}}{\alpha \displaystyle \max _{x\in [0,1]} m'(x)}\le \int _{0}^{1}e^{\alpha m(x)}dx\le \frac{e^{\alpha m(1)}-e^{\alpha m(0)}}{\alpha \displaystyle \min _{x\in [0,1]} m'(x)}. \end{aligned}$$

Therefore,

$$\begin{aligned} V(\alpha )\le \frac{2\Vert m(x)\Vert _\infty ( \displaystyle \max _{x\in [0,1]} m'(x))\left( e^{\alpha m(1)}-e^{\alpha m(0)}\right) ^2}{\alpha ( \displaystyle \min _{x\in [0,1]} m'(x))^2\left( e^{2\alpha m(1)}-e^{2\alpha m(0)}\right) }, \end{aligned}$$

which yields \(\displaystyle \lim \nolimits _{\alpha \rightarrow +\infty } V(\alpha )=0\).\(\square \)

It follows from Proposition 2.2 that \(\tilde{l}(\alpha )\) is strictly decreasing when \(\alpha \) is small, and that it changes its monotonicity at least once under the condition of Proposition 2.2(ii). We conjecture that \(\tilde{l}(\alpha )\) changes its monotonicity even for a general function m(x) and all \(\lambda >0\).
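This behavior is easy to observe numerically. A minimal sketch with the same illustrative (non-paper) choices as before, \(\Omega =(0,1)\), \(m(x)=0.4+x\), \(r=0.2\), tabulates \(\tilde{l}(\alpha )\) on a grid and shows an initial decrease followed by growth, consistent with Proposition 2.2:

```python
import numpy as np

# Illustrative choices (ours, not from the paper): Omega = (0,1), m(x) = 0.4 + x, r = 0.2.
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
m = 0.4 + x
r = 0.2

def integ(f):
    # composite trapezoidal rule on the uniform grid x
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

def l_tilde(alpha):
    w = np.exp(alpha * m)
    c_tilde = integ(w * m) / integ(w * w)
    return r / integ(c_tilde * w / (1 + c_tilde * w))   # (2.8), with |Omega| = 1

alphas = np.linspace(0.0, 30.0, 301)
values = np.array([l_tilde(a) for a in alphas])
print(values[0], values.min(), values[-1])   # decreases near alpha = 0, grows for large alpha
print(alphas[values.argmin()])               # where the monotonicity changes on this grid
```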

Now we solve (2.6) for \(\lambda >0\) by virtue of the implicit function theorem.

Lemma 2.3

For any \( l_*>\tilde{l}\), where \(\tilde{l}\) is defined in (2.8), there exists \(\tilde{\delta }_{l_*}>0\), a neighborhood \({\mathcal {O}}_{l_*}\) of \((c_{0l_*},q_{0l_*},0,0)\) in \(\mathbb R^2\times X_1^2\), and a continuously differentiable mapping

$$\begin{aligned} (\lambda ,l)\mapsto \left( c^{(\lambda ,l)},q^{(\lambda ,l)},\xi ^{(\lambda ,l)},\eta ^{(\lambda ,l)}\right) : [0,\tilde{\delta }_{l_*}]\times [l_*-\tilde{\delta }_{l_*},l_*+\tilde{\delta }_{l_*}]\rightarrow \mathbb R^2\times X_1^2 \end{aligned}$$

such that \(\left( c^{(\lambda ,l)},q^{(\lambda ,l)},\xi ^{(\lambda ,l)},\eta ^{(\lambda ,l)}\right) \in \mathbb R^2\times X_1^2\) is a unique solution of (2.6) in \({\mathcal {O}}_{l_*}\) for \((\lambda ,l)\in [0,\tilde{\delta }_{l_*}]\times [l_*-\tilde{\delta }_{l_*},l_*+\tilde{\delta }_{l_*}]\). Moreover,

$$\begin{aligned} \left( c^{(\lambda ,l)},q^{(\lambda ,l)},\xi ^{(\lambda ,l)},\eta ^{(\lambda ,l)}\right) =(c_{0l_*},q_{0 l_*},0,0) \;\;\text {for}\;\; (\lambda ,l)=(0,l_*). \end{aligned}$$

Here \(c_{0l}\) and \(q_{0l}\) are defined in Lemma 2.1.

Proof

It follows from Lemma 2.1 that

$$\begin{aligned} F(c_{0l_*},q_{0l_*}, 0, 0,l_*,0)= 0, \end{aligned}$$

where F is defined in (2.6). Then the Fréchet derivative of F with respect to \((c,q,\xi ,\eta )\) at \((c_{0l_*},q_{0l_*}, 0, 0,l_*,0)\) is as follows:

$$\begin{aligned} G(\hat{c},\hat{q},\hat{\xi },\hat{\eta })=(g_1,g_2,g_3,g_4)^T, \end{aligned}$$

where \(\hat{c},\hat{q}\in \mathbb R\), \(\hat{\xi },\hat{\eta }\in X_1\), and

$$\begin{aligned} {\left\{ \begin{array}{ll} g_1(\hat{c},\hat{q},\hat{\xi },\hat{\eta }):=&{}\displaystyle \int _\Omega e^{\alpha m(x)} \left( m(x)-2 c_{0l_*} e^{\alpha m(x)} - \frac{ q_{0l_*}}{(1+c_{0l_*} e^{\alpha m(x)} )^2}\right) (\hat{c}+ \hat{\xi }) dx\\ &{} \displaystyle - \int _\Omega \frac{c_{0l_*} e^{\alpha m(x)} }{1+ c_{0l_*} e^{\alpha m(x)} }(\hat{q}+ \hat{\eta }) dx,\\ g_2(\hat{c},\hat{q},\hat{\xi },\hat{\eta }):=&{}L\hat{\xi },\\ g_3(\hat{c},\hat{q},\hat{\xi },\hat{\eta }):=&{} \displaystyle \int _\Omega \frac{ l_* q_{0l_*} e^{\alpha m(x)} }{(1+c_{0l_*} e^{\alpha m(x)} )^2}\left( \hat{c}+ \hat{\xi }\right) dx\\ &{}+\displaystyle \int _\Omega \left( -r+\displaystyle \frac{l_*c_{0l_*}e^{\alpha m(x)}}{1+ c_{0l_*} e^{\alpha m(x)}}\right) (\hat{q}+\hat{\eta })dx,\\ g_4(\hat{c},\hat{q},\hat{\xi },\hat{\eta }):=&{}\theta \Delta \hat{\eta }.\\ \end{array}\right. } \end{aligned}$$

If \( G(\hat{c},\hat{q},\hat{\xi },\hat{\eta })= 0\), then \(\hat{\xi }= 0\) and \( \hat{\eta }= 0\). Substituting \(\hat{\xi }=\hat{\eta }= 0\) into \(g_1=0\) and \(g_3=0\), respectively, we have

$$\begin{aligned} (P_{ij})(\hat{c},\hat{q})^T=(0,0)^T, \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} P_{11}=&\displaystyle \int _\Omega { e^{\alpha m(x)} m(x) } dx -2 c_{0l_*} \int _\Omega e^{2\alpha m(x)} dx -q_{0l_*} \int _\Omega \frac{ e^{\alpha m(x)} }{(1+ c_{0l_*} e^{\alpha m(x)} )^2} dx,\\ P_{12}=&-\int _\Omega \frac{ c_{0l_*} e^{\alpha m(x)} }{1+ c_{0l_*} e^{\alpha m(x)} } dx=-\displaystyle \frac{r |\Omega |}{l_*},\\ P_{21}=&\displaystyle \int _\Omega \frac{ l_* q_{0l_*} e^{\alpha m(x)} }{(1+ c_{0l_*} e^{\alpha m(x)} )^2}dx,\\ P_{22}=&\displaystyle \int _\Omega \left( -r+\displaystyle \frac{l_*c_{0l_*}e^{\alpha m(x)}}{1+ c_{0l_*} e^{\alpha m(x)}}\right) dx=0. \end{aligned} \end{aligned}$$

Noticing that

$$\begin{aligned} \text {det} (P_{ij}) = r |\Omega | \int _\Omega \frac{ q_{0l_*} e^{\alpha m(x)} }{(1+ c_{0l_*} e^{\alpha m(x)})^2}dx \ne 0, \end{aligned}$$

we obtain that \(\hat{c}=0\) and \(\hat{q}=0\). Therefore, G is injective and thus bijective. Then, we can complete the proof by the implicit function theorem.\(\square \)

By virtue of Lemma 2.3, we have the following result.

Theorem 2.4

Assume that \(l_*>\tilde{l}\), where \(\tilde{l}\) is defined in (2.8). Let

$$\begin{aligned} u^{(\lambda ,l)}=c^{(\lambda ,l)}+\xi ^{(\lambda ,l)},\;\;v^{(\lambda ,l)}=q^{(\lambda ,l)}+\eta ^{(\lambda ,l)}\;\;\text {for}\;\;(\lambda ,l)\in [0,\delta _{l_*}]\times [l_*-\delta _{l_*},l_*+\delta _{l_*}],\nonumber \\ \end{aligned}$$
(2.14)

where \(0<\delta _{l_*}\ll 1\), and \(\left( c^{(\lambda ,l)},q^{(\lambda ,l)},\xi ^{(\lambda ,l)},\eta ^{(\lambda ,l)}\right) \) is obtained in Lemma 2.3. Then \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) is the unique positive solution of (2.1) for \((\lambda ,l)\in (0,\delta _{l_*}]\times [l_*-\delta _{l_*},l_*+\delta _{l_*}]\). Moreover,

$$\begin{aligned} \left( c^{(0,l)},q^{(0,l)},\xi ^{(0,l)},\eta ^{(0,l)}\right) =(c_{0l},q_{0l}, 0, 0)\;\;\text {for}\;\; l\in [l_*-\delta _{l_*},l_*+\delta _{l_*}], \end{aligned}$$
(2.15)

where \(c_{0l}\) and \(q_{0l}\) are defined in Lemma 2.1.

Proof

It follows from Lemma 2.3 that when \(\delta _{l_*}<\tilde{\delta }_{l_*}\), \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) is a solution of (2.1) for \((\lambda ,l)\in (0,\delta _{l_*}]\times [l_*-\delta _{l_*},l_*+\delta _{l_*}]\), and

$$\begin{aligned} \lim _{(\lambda ,l)\rightarrow (0,l_*)}\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) =\left( c_{0l_*},q_{0l_*}\right) \;\; \text {in}\;\;X^2. \end{aligned}$$

Note from Lemma 2.1 that \(c_{0l_*},q_{0l_*}>0\) if \(l_*>\tilde{l}\). Then \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) is a positive solution of (2.1) for \((\lambda ,l)\in (0,\delta _{l_*}]\times [l_*-\delta _{l_*},l_*+\delta _{l_*}]\) with \(0<\delta _{l_*}\ll 1\).

Next, we show that \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) is the unique positive solution of (2.1) for \((\lambda ,l)\in (0,\delta _{l_*}]\times [l_*-\delta _{l_*},l_*+\delta _{l_*}]\) with \(0<\delta _{l_*}\ll 1\). If it is not true, then there exists a sequence \(\{(\lambda _k,l_k)\}_{k=1}^\infty \) such that

$$\begin{aligned} 0<\lambda _k\ll 1,\;\; |l_k-l_*|\ll 1,\;\; \lim _{k\rightarrow \infty }(\lambda _k,l_k)=(0,l_*), \end{aligned}$$
(2.16)

and (2.1) admits a positive solution \((u_k,v_k)\) for \((\lambda ,l)=(\lambda _k,l_k)\) with

$$\begin{aligned} (u_k,v_k)\ne \left( u^{(\lambda _k,l_k)},v^{(\lambda _k,l_k)}\right) . \end{aligned}$$
(2.17)

It follows from (2.3) that \((u_k,v_k)\) can also be decomposed as follows:

$$\begin{aligned} u_k= c_k+ \xi _k,\;\; v_k= q_k+ \eta _k,\;\;\text {where}\;\;c_k,q_k\in \mathbb {R},\; \xi _k,\eta _k\in X_1. \end{aligned}$$

Plugging \((u,v,\lambda ,l)=(u_k, v_k,\lambda _k,l_k)\) into (2.1), we see that

$$\begin{aligned} f_i\left( c_k,q_k,\xi _k,\eta _k,l_k,\lambda _k\right) =0\;\;\text {for}\;\;i=1,2,3,4, \end{aligned}$$
(2.18)

where \(f_i\;(i=1,2,3,4)\) are defined in (2.7). It follows from (2.1) that

$$\begin{aligned} \begin{aligned}&- Lu_k \le \lambda _k e^{\alpha m(x)} u_k\left( m(x) - u_k\right) ,\\&l_k \int _\Omega e^{\alpha m(x)} u_k\left( m(x) - e^{\alpha m(x)} u_k \right) dx=r \int _\Omega v_k dx, \end{aligned} \end{aligned}$$
(2.19)

where we have used the divergence theorem to obtain the second equation. From the maximum principle and the first equation of (2.19), we have

$$\begin{aligned} 0\le u_k\le \max _{x\in \overline{\Omega }} m(x) \;\;\text {for}\;\;k\ge 1. \end{aligned}$$
(2.20)

This, together with the second equation of (2.19), implies that

$$\begin{aligned} \int _\Omega v_k dx \le {\mathcal {P}}_1:=\frac{\max _{k\ge 1}l_k}{r}\max _{x\in \overline{\Omega }} m(x) \int _\Omega e^{\alpha m(x)} m(x) dx\;\;\text {for}\;\;k\ge 1. \end{aligned}$$

Here we remark that \(\max _{k\ge 1}l_k<\infty \) from (2.16). Consequently,

$$\begin{aligned} \inf _{x\in \overline{\Omega }} v_k\le {\mathcal {P}}_2:=\frac{{\mathcal {P}}_1}{| \Omega | }\;\;\text {for}\;\;k\ge 1. \end{aligned}$$
(2.21)

Note from (2.1) and (2.16) that

$$\begin{aligned} -\theta \Delta v_k+ r v_k\ge -\theta \Delta v_k+\lambda _k r v_k\ge 0, \end{aligned}$$

where we have used \(0<\lambda _k\ll 1\) (see (2.16)) for the first inequality. Note that \(\Omega \) is a bounded domain in \(\mathbb R^N\) with \(1\le N\le 3\). Then we see from [34, Lemma 2.1] that there exists a positive constant \(C_0\), depending only on r and \(\Omega \), such that

$$\begin{aligned} \left\| v_k \right\| _2 \le C_0 \inf _{x\in \overline{\Omega }} v_k=C_0{\mathcal {P}}_2\;\;\text {for}\;\;k\ge 1, \end{aligned}$$
(2.22)

where we have used (2.21) in the last step.

Note from (2.20) and (2.22) that \( \{u_k\}_{k=1}^\infty \) is bounded in \(L^{\infty } (\Omega )\), and \( \{v_k\}_{k=1}^\infty \) is bounded in \(L^2 (\Omega )\). Since \(\lim _{k\rightarrow \infty }\lambda _k=0\), we see from (2.18) with \(i=2,4\) that

$$\begin{aligned} \lim _{k \rightarrow \infty } L\xi _k=0\;\;\text {and}\;\;\lim _{k \rightarrow \infty } \Delta \eta _k=0 \;\;\text {in}\;\;Y_1, \end{aligned}$$

which implies that

$$\begin{aligned} \lim _{k \rightarrow \infty } \xi _k=0\;\;\text {and}\;\;\lim _{k \rightarrow \infty } \eta _k=0 \;\;\text {in}\;\;X_1. \end{aligned}$$
(2.23)

Here \(X_1\) and \(Y_1\) are defined in (2.4). By (2.5), (2.20) and (2.22), we see that

$$\begin{aligned} c_k=\frac{1}{| \Omega | } \int _\Omega u_k dx\;\; \text {and}\;\; q_k=\frac{1}{| \Omega | } \int _\Omega v_k dx, \end{aligned}$$

and \(\{c_k\}_{k=1}^\infty \), \(\{q_k\}_{k=1}^\infty \) are bounded. Then, up to a subsequence, we see that

$$\begin{aligned} \lim _{k \rightarrow \infty } c_k= c^*,\;\;\lim _{k \rightarrow \infty } q_k= q^*. \end{aligned}$$

Taking the limits of (2.18) with \(i=1,3\) on both sides as \(k\rightarrow \infty \), respectively, we obtain that \((c^*,q^*)\) satisfies (2.10) with \(l=l_*\).

We first claim that

$$\begin{aligned} (c^*,q^*)\ne (0,0). \end{aligned}$$

Suppose that it is not true. Then by (2.23) and the embedding theorems, we see that, up to a subsequence,

$$\begin{aligned} \lim _{k\rightarrow \infty }(u_k,v_k)=(0,0) \;\;\text {in}\;\;C^{\gamma }(\overline{\Omega })\;\;\text {for some}\;\;0<\gamma <1. \end{aligned}$$

This yields, for sufficiently large k,

$$\begin{aligned} \int _\Omega v_k\left( -r+\frac{l_k e^{\alpha m(x)} u_k}{1+ e^{\alpha m(x)} u_k} \right) dx<0. \end{aligned}$$

Substituting \((u,v,\lambda ,l)=(u_k,v_k,\lambda _k,l_k)\) into (2.1), and integrating the result over \(\Omega \), we obtain that

$$\begin{aligned} \int _\Omega v_k\left( -r+\frac{l_k e^{\alpha m(x)} u_k}{1+ e^{\alpha m(x)} u_k} \right) dx=0, \end{aligned}$$

which is a contradiction. Next, we show that

$$\begin{aligned} (c^*,q^*)\ne (\tilde{c},0). \end{aligned}$$

Suppose, by way of contradiction, that \((c^*,q^*)=(\tilde{c},0)\). Similarly, by (2.23) and the embedding theorems, we see that, up to a subsequence,

$$\begin{aligned} \lim _{k\rightarrow \infty }(u_k,v_k)=(\tilde{c},0) \;\;\text {in}\;\;C^{\gamma }(\overline{\Omega })\;\;\text {for some}\;\;0<\gamma <1. \end{aligned}$$

From the second equation of (2.1), we have

$$\begin{aligned} -r|\Omega |+\int _\Omega \frac{l_k e^{\alpha m(x)}u_k}{1+e^{\alpha m(x)} u_k}dx=-\displaystyle \frac{\theta }{\lambda _k}\int _{\Omega }\displaystyle \frac{|\nabla v_k|^2}{v_k^2}dx\le 0. \end{aligned}$$
(2.24)

Then taking \(k\rightarrow \infty \) on both sides of (2.24) yields

$$\begin{aligned} l_*\le \frac{r |\Omega |}{\int _\Omega \frac{ \tilde{c} e^{\alpha m(x)} }{1+\tilde{c} e^{\alpha m(x)} }dx} = \tilde{l}, \end{aligned}$$

which is a contradiction.

Therefore,

$$\begin{aligned} \lim _{k\rightarrow \infty }(c_k,q_k)= \left( c_{0l_*},q_{0l_*}\right) . \end{aligned}$$

This, combined with (2.23) and Lemma 2.3, implies that, for sufficiently large k,

$$\begin{aligned} (c_k,q_k,\xi _k,\eta _k)=\left( c^{(\lambda _k,l_k)},q^{(\lambda _k,l_k)},\xi ^{(\lambda _k,l_k)},\eta ^{(\lambda _k,l_k)}\right) , \end{aligned}$$

and consequently, for sufficiently large k,

$$\begin{aligned} (u_k,v_k)=\left( u^{(\lambda _k,l_k)},v^{(\lambda _k,l_k)}\right) . \end{aligned}$$

This contradicts (2.17), and the uniqueness is obtained.

By using similar arguments as in the proof of the uniqueness, we can show that, for any \(l\in [l_*-\delta _{l_*},l_*+\delta _{l_*}]\),

$$\begin{aligned} \lim _{\lambda \rightarrow 0}\left( c^{(\lambda ,l)},q^{(\lambda ,l)},\xi ^{(\lambda ,l)},\eta ^{(\lambda ,l)}\right) =(c_{0l},q_{0l}, 0, 0)\;\;\text {in}\;\;\mathbb R^2\times X_1^2. \end{aligned}$$

Note from Lemma 2.3 that \(\left( c^{(\lambda ,l)},q^{(\lambda ,l)},\xi ^{(\lambda ,l)},\eta ^{(\lambda ,l)}\right) \) is continuously differentiable for \((\lambda ,l)\in [0,\delta _{l_*}]\times [l_*-\delta _{l_*},l_*+\delta _{l_*}]\). Then we see that (2.15) holds. This completes the proof.\(\square \)

From Theorem 2.4, we see that (2.1) admits a unique positive solution when \((l,\lambda )\) is in a small neighborhood of \((l_*,0)\) with \(l_*>\tilde{l}\). In the following, we will solve (2.1) for a wide range of parameters, see the rectangular region in Fig. 1.

Theorem 2.5

Let \({\mathcal {L}}:=[ \tilde{l}+\epsilon ,1/\epsilon ]\), where \(0<\epsilon \ll 1\) and \( \tilde{l}\) is defined in Lemma 2.1. Then the following statements hold.

  1. (i)

    There exists \(\delta _{\epsilon } >0\) such that, for \((\lambda ,l)\in (0,\delta _\epsilon ]\times {\mathcal {L}}\), model (2.1) admits a unique positive solution \((u^{(\lambda ,l)},v^{(\lambda ,l)})\).

  2. (ii)

    Let \((u^{(0,l)},v^{(0,l)})=\left( c_{0l},q_{0l}\right) \) for \(l\in {\mathcal {L}}\), where \(\left( c_{0l},q_{0l}\right) \) is defined in Lemma 2.1. Then \((u^{(\lambda ,l)},v^{(\lambda ,l)})\) is continuously differentiable for \((\lambda ,l)\in [0,\delta _\epsilon ]\times {\mathcal {L}}\), and \((u^{(\lambda ,l)},v^{(\lambda ,l)})\) can be decomposed as follows:

    $$\begin{aligned} u^{(\lambda ,l)}=c^{(\lambda ,l)}+\xi ^{(\lambda ,l)},\;\;v^{(\lambda ,l)}=q^{(\lambda ,l)}+\eta ^{(\lambda ,l)}, \end{aligned}$$

    where \(\left( c^{(\lambda ,l)},q^{(\lambda ,l)},\xi ^{(\lambda ,l)},\eta ^{(\lambda ,l)}\right) \in \mathbb R^2\times X_1^2\) solves Eq. (2.6) for \((\lambda ,l)\in [0,\delta _\epsilon ]\times {\mathcal {L}}\).

Proof

It follows from Theorem 2.4 that, for any \(l_*\in {\mathcal {L}}\), there exists \(\delta _{l_*}\) (\(0<\delta _{l_*}\ll 1\)) such that, for \((\lambda ,l)\in (0,\delta _{l_*}]\times [ l_*-\delta _{l_*}, l_*+\delta _{l_*}]\), system (2.1) admits a unique positive solution \((u^{(\lambda ,l)},v^{(\lambda ,l)})\), where \(u^{(\lambda ,l)}\) and \(v^{(\lambda ,l)}\) are defined in (2.14) and continuously differentiable for \((\lambda ,l)\in [0,\delta _{l_*}]\times [l_*-\delta _{l_*},l_*+\delta _{l_*}]\). Clearly,

$$\begin{aligned} {\mathcal {L}} \subseteq \bigcup _{ l_*\in {\mathcal {L}}} {\left( l_* - \delta _{l_*}, l_* +\delta _{l_*} \right) }. \end{aligned}$$

Noticing that \({\mathcal {L}}\) is compact, we see that there exist finite open intervals, denoted by \(\left( {l_*^{(i)}} -\delta _{ l^{(i)}_*},l^{(i)}_* +\delta _{l^{(i)}_*} \right) \) for \(i=1,\ldots ,s\), such that

$$\begin{aligned} {\mathcal {L}} \subseteq \bigcup _{i=1}^s \left( {l_*^{(i)}} -\delta _{ l^{(i)}_*},{l_*^{(i)}} +\delta _{l^{(i)}_*} \right) . \end{aligned}$$

Choose \( \delta _\epsilon = \min _{1 \le i\le s} \delta _{ l_*^{(i)}}\). Then, for \((\lambda ,l)\in (0,\delta _\epsilon ]\times {\mathcal {L}}\), system (2.1) admits a unique positive solution \((u^{(\lambda ,l)},v^{(\lambda ,l)})\). By Lemma 2.3 and Theorem 2.4, we see that \((u^{(\lambda ,l)},v^{(\lambda ,l)})\), with \((u^{(0,l)},v^{(0,l)})=\left( c_{0l},q_{0l}\right) \), is continuously differentiable on \([0,\delta _\epsilon ]\times {\mathcal {L}}\).\(\square \)

Using arguments similar to those in Lemma 2.3 and Theorems 2.4 and 2.5, we can show the nonexistence of positive solutions under certain conditions; we omit the proof for brevity.

Theorem 2.6

Let \({\mathcal {L}}_1:=[\epsilon _1,\tilde{l}-\epsilon _1]\), where \(0<\epsilon _1\ll 1\) and \( \tilde{l}\) is defined in (2.8). Then there exists \(\delta _{\epsilon _1} >0\) such that, for \((\lambda ,l)\in (0,\delta _{\epsilon _1}]\times {\mathcal {L}}_1\), model (2.1) admits no positive solution.

3 Stability and Hopf bifurcation

Throughout this section, we let \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) be the unique positive solution of model (2.1) obtained in Theorem 2.5. In the following, we use l as the bifurcation parameter and show that, under certain conditions, there exists a Hopf bifurcation curve \(l=l_\lambda \) when \(\lambda \) is small.

Linearizing model (1.4) at \(\left( u^{(\lambda ,l)}, v^{(\lambda ,l)}\right) \), we obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} \tilde{u}_t= e^{-\alpha m(x)} \nabla \cdot \left[ e^{\alpha m(x)} \nabla \tilde{u} \right] +\lambda \left( M_{1}^{(\lambda ,l)}\tilde{u} +M_{2}^{(\lambda ,l)} \tilde{v}\right) ,&{}x \in \Omega ,\;t> 0,\\ \tilde{v}_t = \theta \Delta \tilde{v} + \lambda \left( M_{3}^{(\lambda ,l)}\tilde{u} +M_{4} ^{(\lambda ,l)}\tilde{v}\right) , &{}x \in \Omega ,\;t> 0,\\ \partial _n \tilde{u} = \partial _n \tilde{v} = 0, &{} x \in \partial \Omega ,\;t > 0,\\ \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} \begin{aligned}&\displaystyle M_{1}^{(\lambda ,l)}= m(x)-2 e^{\alpha m(x)} u^{(\lambda ,l)}-\frac{v^{(\lambda ,l)}}{\left( 1+ e^{\alpha m(x)} u^{(\lambda ,l)} \right) ^2} ,\;\;M_{2}^{(\lambda ,l)}=-\frac{u^{(\lambda ,l)}}{1+e^{\alpha m(x)} u^{(\lambda ,l)}},\\&\displaystyle M_{3}^{(\lambda ,l)}=\frac{l e^{\alpha m(x)} v^{(\lambda ,l)}}{\left( 1+ e^{\alpha m(x)} u^{(\lambda ,l)} \right) ^2},\;\;M_{4}^{(\lambda ,l)}=-r+\frac{l e^{\alpha m(x)} u^{(\lambda ,l)}}{1+ e^{\alpha m(x)} u^{(\lambda ,l)} }. \end{aligned} \end{aligned}$$
(3.1)

Let

$$\begin{aligned} A_l(\lambda ) : = \left( {\begin{array}{cc} e^{-\alpha m(x)} \nabla \cdot \left[ e^{\alpha m(x)} \nabla \right] &{}0\\ 0&{}\theta \Delta \end{array}} \right) + \lambda \left( {\begin{array}{cc} M_1^{(\lambda ,l)}&{}M_2^{(\lambda ,l)}\\ M_3^{(\lambda ,l)}&{} M_4^{(\lambda ,l)} \end{array}} \right) . \end{aligned}$$
(3.2)

Then, \(\mu \in \mathbb C\) is an eigenvalue of \(A_{l}(\lambda )\) if and only if there exists \( (\varphi , \psi )^T\left( \ne (0,0)^T\right) \in X^2_{\mathbb C}\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\displaystyle e^{-\alpha m(x)} \nabla \cdot \left[ e^{\alpha m(x)} \nabla \varphi \right] +\lambda \left( M_{1}^{(\lambda ,l)} \varphi + M_{2}^{(\lambda ,l)} \psi \right) -\mu \varphi =0, \\ &{}\displaystyle \theta \Delta \psi + \lambda \left( M_{3}^{(\lambda ,l)} \varphi +M_{4}^{(\lambda ,l)} \psi \right) -\mu \psi =0. \end{array}\right. } \end{aligned}$$
(3.3)

We first give a priori estimates for solutions of (3.3) for later use.

Lemma 3.1

Let \({\mathcal {L}}\) and \(\delta _\epsilon \) be defined and obtained in Theorem 2.5. Suppose that \((\mu _\lambda ,l_\lambda , \varphi _\lambda , \psi _\lambda )\) solves (3.3) for \(\lambda \in (0,\delta _\epsilon ]\), where \({\mathcal {R}}e \mu _\lambda \ge 0\), \((\varphi _\lambda , \psi _\lambda )^T\left( \ne (0,0)^T\right) \in X^2_{\mathbb C}\) and \(l_\lambda \in {\mathcal {L}}\). Then \(\left| \mu _\lambda /\lambda \right| \) is bounded for \(\lambda \in (0, \delta _\epsilon ]\).

Proof

Substituting \((\mu _\lambda ,l_\lambda , \varphi _\lambda , \psi _\lambda )\) into Eq. (3.3), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\displaystyle e^{-\alpha m(x)} \nabla \cdot \left[ e^{\alpha m(x)} \nabla \varphi _\lambda \right] +\lambda \left( M_{1}^{(\lambda ,l_\lambda )} \varphi _\lambda + M_{2}^{(\lambda ,l_\lambda )} \psi _\lambda \right) -\mu _\lambda \varphi _\lambda =0, \\ &{}\displaystyle \theta \Delta \psi _\lambda + \lambda \left( M_{3}^{(\lambda ,l_\lambda )} \varphi _\lambda +M_{4}^{(\lambda ,l_\lambda )} \psi _\lambda \right) -\mu _\lambda \psi _\lambda =0. \end{array}\right. } \end{aligned}$$
(3.4)

Then multiplying the first and second equations of (3.4) by \( e^{\alpha m(x)}\overline{\varphi }_\lambda \) and \(\overline{\psi }_\lambda \), respectively, summing these two equations, and integrating the result over \(\Omega \), we have

$$\begin{aligned} \begin{aligned} \mu _\lambda \int _\Omega \left( e^{\alpha m(x)} |\varphi _{\lambda }|^2+|\psi _{\lambda }|^2 \right) dx=&\langle \varphi _{\lambda }, \nabla \cdot \left[ e^{\alpha m(x)} \nabla \varphi _\lambda \right] \rangle +\langle \psi _{\lambda }, \Delta \psi _\lambda \rangle \\&+\lambda \int _\Omega e^{\alpha m(x)} \left( M_{1}^{(\lambda ,l_\lambda )} |\varphi _{\lambda }|^2+M_{2}^{(\lambda ,l_\lambda )}\overline{\varphi }_{\lambda } \psi _{\lambda }\right) dx\\&+\lambda \int _\Omega \left( M_{3}^{(\lambda ,l_\lambda )}\varphi _{\lambda } \overline{\psi }_{\lambda } +M_{4}^{(\lambda ,l_\lambda )} |\psi _{\lambda }|^2 \right) dx. \end{aligned} \end{aligned}$$
(3.5)

We see from the divergence theorem that

$$\begin{aligned} \langle \varphi _{\lambda }, \nabla \cdot \left[ e^{\alpha m(x)} \nabla \varphi _\lambda \right] \rangle +\langle \psi _{\lambda }, \Delta \psi _\lambda \rangle =-\int _\Omega e^{\alpha m(x)} |\nabla \varphi _{\lambda } |^2dx-\int _\Omega |\nabla \psi _{\lambda } |^2dx\le 0. \end{aligned}$$
(3.6)

Noticing from Theorem 2.5 that \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) :[0,\delta _\epsilon ]\times {\mathcal {L}}\rightarrow X^2 \) is continuously differentiable, we see from the embedding theorems that there exists a positive constant \({\mathcal {P}}_*\) such that

$$\begin{aligned} \left\| M^{(\lambda ,l)}_i\right\| _\infty \le {\mathcal {P}}_*\;\;\text {for}\;\;(\lambda ,l)\in [0,\delta _\epsilon ]\times {\mathcal {L}}\;\;\text {and}\;\;i=1,2,3,4. \end{aligned}$$
(3.7)

Then, we see from (3.5)–(3.7) that

$$\begin{aligned} \begin{aligned} 0\le {\mathcal {R}}e\left( \frac{\mu _\lambda }{\lambda }\right) \le&{\mathcal {P}}_*\left( 1+\displaystyle \frac{\int _\Omega \left( e^{\alpha m(x)} \overline{\varphi }_{\lambda } \psi _{\lambda } + \varphi _{\lambda } \overline{\psi }_{\lambda } \right) dx}{ \int _\Omega \left( e^{\alpha m(x)} |\varphi _{\lambda }|^2+|\psi _{\lambda }|^2 \right) dx}\right) \\ \le&{\mathcal {P}}_*\left( 1+e^{\alpha \max _{x\in \overline{\Omega }}m(x)}\right) . \end{aligned} \end{aligned}$$

Similarly, we can obtain that

$$\begin{aligned} \left| \mathcal Im\left( \frac{\mu _\lambda }{\lambda }\right) \right| \le {\mathcal {P}}_*\left( 1+e^{\alpha \max _{x\in \overline{\Omega }}m(x)}\right) . \end{aligned}$$

This completes the proof.\(\square \)

To analyze the stability of \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \), we need to consider whether the eigenvalues of (3.3) could pass through the imaginary axis. It follows from Lemma 3.1 that if \(\mu =\textrm{i} \sigma \) is an eigenvalue of (3.3), then \(\nu =\sigma /\lambda \) is bounded for \(\lambda \in (0,\delta _\epsilon ]\), where \(\delta _\epsilon \) is obtained in Theorem 2.5. Substituting \(\mu =\textrm{i}\lambda \nu \;(\nu \ge 0)\) into (3.3), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\displaystyle L \varphi +\lambda e^{\alpha m(x)} \left( M_{1}^{(\lambda ,l)} \varphi + M_{2}^{(\lambda ,l)} \psi \right) -\textrm{i}\lambda \nu e^{\alpha m(x)} \varphi =0, \\ &{}\displaystyle \theta \Delta \psi + \lambda \left( M_{3}^{(\lambda ,l)} \varphi +M_{4}^{(\lambda ,l)} \psi \right) -\textrm{i}\lambda \nu \psi =0. \end{array}\right. } \end{aligned}$$
(3.8)

Up to a scalar factor, \(( \varphi , \psi )^T\in X^2_{\mathbb C}\) in (3.8) can be decomposed as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \varphi =\delta + w, \;\;\text {where}\; \delta \ge 0\;\;\text {and}\;\; w\in \left( X_{1}\right) _{\mathbb C},\\ \psi =(s_{1}+\textrm{i}s_{ 2})+ z,\;\;\text {where}\;s_1,s_2\in \mathbb R\;\;\text {and}\;\; z \in \left( X_{1}\right) _{\mathbb C},\\ \Vert \varphi \Vert _2^{2}+\Vert \psi \Vert _2^{2}=| \Omega |. \end{array}\right. } \end{aligned}$$
(3.9)

Now we can obtain an equivalent problem of (3.8) in the following.

Lemma 3.2

Let \(( \varphi , \psi )\) be defined in (3.9). Then \((\varphi , \psi ,\nu ,l)\) solves (3.8) with \(\nu \ge 0\), \(l\in {\mathcal {L}}\), if and only if \(( \delta , s_1,s_2, \nu ,w, z,l)\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} { H}(\delta , s_1,s_2, \nu , w,z,l,\lambda )= 0,\\ \delta \ge 0,\;s_1 ,s_2\in \mathbb R,\;\nu \ge 0,\;l\in {\mathcal {L}},\; w, z\in \left( X_{1}\right) _{\mathbb C}. \end{array}\right. } \end{aligned}$$
(3.10)

Here

$$\begin{aligned} H(\delta , s_1,s_2, \nu , w,z,l,\lambda )=( h_1, h_2,h_3,h_4,h_5)^T \end{aligned}$$

is a continuously differentiable mapping from \(\mathbb R^4\times \left( \left( X_{1}\right) _{\mathbb C}\right) ^2\times {\mathcal {L}}\times [0,\delta _\epsilon ]\) to \(\left( \mathbb C\times \left( Y_{1}\right) _{\mathbb C}\right) ^2\times \mathbb R\), where

$$\begin{aligned} \begin{aligned} h_1(\delta , s_1,s_2, \nu , w,z,l,\lambda ):=&\displaystyle \int _\Omega { e^{\alpha m(x)} \left[ M_{1}^{(\lambda ,l)}(\delta +w)+ M_{2}^{(\lambda ,l)} (s_1 + \textrm{i} s_2+z) \right] } dx\\&-\textrm{i} \nu \int _\Omega e^{\alpha m(x)} (\delta +w)dx ,\\ h_{2}(\delta , s_1,s_2, \nu , w,z,l,\lambda ):=&\displaystyle L w +\lambda e^{\alpha m(x)} \left[ M_{1}^{(\lambda ,l)} (\delta +w) + M_{2}^{(\lambda ,l)} (s_1 + \textrm{i} s_2+z) \right] \\&-\textrm{i}\lambda \nu e^{\alpha m(x)} (\delta +w) -\frac{\lambda }{ | \Omega |} h_1,\\ h_{3}(\delta , s_1,s_2, \nu , w,z,l,\lambda ):=&\displaystyle \int _\Omega { \left[ M_{3}^{(\lambda ,l)}(\delta +w)+ M_{4}^{(\lambda ,l)} (s_1 + \textrm{i} s_2+z) \right] } dx\\&-\textrm{i} \nu (s_1 + \textrm{i} s_2)| \Omega | ,\\ h_{4}(\delta , s_1,s_2, \nu , w,z,l,\lambda ):=&\displaystyle \theta \Delta z + \lambda \left[ M_{3}^{(\lambda ,l)} (\delta +w) +M_{4}^{(\lambda ,l)} (s_1 + \textrm{i} s_2+z) \right] \\&-\textrm{i}\lambda \nu (s_1 + \textrm{i} s_2+z) -\frac{\lambda }{ | \Omega |} h_3,\\ h_{5}( \delta ,s_1,s_2, w, z,\nu ,l,\lambda ):=&\displaystyle | \Omega |\left( \delta ^2 +s_1^2+s_2^2-1\right) +\Vert w\Vert _2^2+\Vert z\Vert _2^2, \end{aligned} \end{aligned}$$

and \(M_{i}^{(\lambda ,l)}\) are defined in (3.1) for \(i=1,2,3,4\).

Proof

Note that \(f=c+z\) for any \(f\in Y_{\mathbb C}\), where

$$\begin{aligned} c=\frac{1}{|\Omega |}\int _\Omega f dx\in \mathbb C, \;\; z=f-\frac{1}{|\Omega |}\int _\Omega fdx\in \left( Y_1\right) _{\mathbb C}. \end{aligned}$$
(3.11)

A direct computation implies that \({ H}(\delta , s_1,s_2, \nu , w,z,l,\lambda )\) is a continuously differentiable mapping from \(\mathbb R^4\times \left( \left( X_{1}\right) _{\mathbb C}\right) ^2\times {\mathcal {L}}\times [0,\delta _\epsilon ]\) to \(\left( \mathbb C\times \left( Y_{1}\right) _{\mathbb C}\right) ^2\times \mathbb R\). Denote the left sides of the two equations of (3.8) as \(G_1\) and \(G_2\), respectively. That is,

$$\begin{aligned} \begin{aligned}&G_1(\varphi , \psi ,\nu ,l,\lambda ):=\displaystyle L \varphi +\lambda e^{\alpha m(x)} \left( M_{1}^{(\lambda ,l)} \varphi + M_{2}^{(\lambda ,l)} \psi \right) -\textrm{i}\lambda \nu e^{\alpha m(x)} \varphi , \\&G_2(\varphi , \psi ,\nu ,l,\lambda ):=\displaystyle \theta \Delta \psi + \lambda \left( M_{3}^{(\lambda ,l)} \varphi +M_{4}^{(\lambda ,l)} \psi \right) -\textrm{i}\lambda \nu \psi . \end{aligned} \end{aligned}$$

Plugging (3.9) into (3.8), we see from (3.11) that \(G_1(\varphi , \psi ,\nu ,l,\lambda )=0\) if and only if

$$\begin{aligned} h_i(\delta , s_1,s_2, \nu , w,z,l,\lambda )=0\;\; \text {for}\;\; i=1,2, \end{aligned}$$

and \(G_2(\varphi , \psi ,\nu ,l,\lambda )=0\) if and only if

$$\begin{aligned} h_i(\delta , s_1,s_2, \nu , w,z,l,\lambda )=0\;\; \text {for}\;\; i=3,4. \end{aligned}$$

This completes the proof.\(\square \)

Note from (3.1) that

$$\begin{aligned} \begin{aligned}&\displaystyle M_{1}^{(0,l)} = m(x)-2 c _{0l} e^{\alpha m(x)} -\frac{ q _{0l} }{\left( 1+ c _{0l} e^{\alpha m(x)} \right) ^2} ,\;\; M_{2}^{(0,l)}=-\frac{ c _{0l} }{1+ c _{0l} e^{\alpha m(x)} }<0,\\&\displaystyle M_{3}^{(0,l)} =\frac{l q _{0l} e^{\alpha m(x)} }{\left( 1+ c _{0l} e^{\alpha m(x)} \right) ^2} >0 ,\;\;M_{4}^{(0,l)} =-r+\frac{ l c _{0l} e^{\alpha m(x)} }{1+ c _{0l} e^{\alpha m(x)} }, \end{aligned} \end{aligned}$$
(3.12)

where \(c _{0l}\) and \(q_{0l}\) are defined in Lemma 2.1. Then we give the following result for further application.

Lemma 3.3

Let \({\mathcal {S}}(l):=\int _\Omega e^{\alpha m(x)} M_1^{(0,l)}dx\), where \(l\in {\mathcal {L}}\), and \({\mathcal {L}}:=[ \tilde{l}+\epsilon ,1/\epsilon ]\) is defined in Theorem 2.5 with \(0<\epsilon \ll 1\), and let \({\mathcal {T}} (\alpha ):=\int _\Omega e^{\alpha m(x)}( m(x) -1) dx\). Then the following two statements hold:

  1. (i)

    If \({\mathcal {T}} (\alpha )<0\), then \({\mathcal {S}}(l)<0\) for all \(l\in {\mathcal {L}}\);

  2. (ii)

    If \({\mathcal {T}} (\alpha )>0\), then there exists \(l_0\in {int}( \mathcal {L})=(\tilde{l}+\epsilon ,1/\epsilon )\) such that \({\mathcal {S}}(l_0)=0\), \({\mathcal {S}}'(l_0)>0\), \({\mathcal {S}}(l)<0\) for \(l\in [ \tilde{l}+\epsilon ,l_0)\), and \({\mathcal {S}}(l)>0\) for \(l\in (l_0,1/\epsilon ]\).

Proof

We construct an auxiliary function:

$$\begin{aligned} {\mathcal {S}}_1(c)= \int _\Omega \frac{ e^{\alpha m(x)} }{ 1+ c e^{\alpha m(x)} } dx. \end{aligned}$$

A direct computation implies that

$$\begin{aligned} {\mathcal {S}}'_1(c )=-\int _\Omega \frac{ e^{2\alpha m(x)} }{ \left( 1+ c e^{\alpha m(x)} \right) ^2 } dx<0,\;\;{\mathcal {S}}''_1(c )=2\int _\Omega \frac{ e^{3\alpha m(x)} }{ \left( 1+ c e^{\alpha m(x)} \right) ^3 } dx>0.\qquad \end{aligned}$$
(3.13)

Clearly, we see from (3.12) that

$$\begin{aligned} {\mathcal {S}}(l)=\int _\Omega \left[ e^{\alpha m(x)} m(x)-2 c _{0l} e^{2 \alpha m(x)}\right] dx- q _{0l} \int _\Omega \frac{ e^{\alpha m(x)} }{\left( 1+ c _{0l} e^{\alpha m(x)} \right) ^2} dx. \end{aligned}$$
(3.14)

It follows from the first equation of (2.11) that

$$\begin{aligned} q _{0l} =\displaystyle \frac{1}{{\mathcal {S}}_1(c_{0l})}\int _\Omega \left[ e^{\alpha m(x)} m(x)- c _{0l} e^{2 \alpha m(x)}\right] dx, \end{aligned}$$
(3.15)

and plugging (3.15) into (3.14), we get

$$\begin{aligned} {\mathcal {S}}(l)={\mathcal {S}}_2(c_{0l}). \end{aligned}$$
(3.16)

Here

$$\begin{aligned} \begin{aligned} {\mathcal {S}}_2(c):=&\int _\Omega \left[ e^{\alpha m(x)} m(x)-2 c e^{2 \alpha m(x)}\right] dx\\&-\displaystyle \frac{1}{{\mathcal {S}}_1(c)}\int _\Omega \left[ e^{\alpha m(x)} m(x)- c e^{2 \alpha m(x)}\right] dx\displaystyle \int _\Omega \frac{ e^{\alpha m(x)} }{\left( 1+ c e^{\alpha m(x)} \right) ^2} dx\displaystyle \\ =&\displaystyle \int _\Omega \left[ e^{\alpha m(x)} m(x)- c e^{2 \alpha m(x)}\right] dx\left[ 1-\displaystyle \frac{1}{{\mathcal {S}}_1(c)}\displaystyle \int _\Omega \frac{ e^{\alpha m(x)} }{\left( 1+ c e^{\alpha m(x)} \right) ^2} dx\right] -c\int _\Omega e^{2 \alpha m(x)} dx\\ =&-\displaystyle \frac{c{\mathcal {S}}'_1(c)}{{\mathcal {S}}_1(c)}\displaystyle \int _\Omega \left[ e^{\alpha m(x)} m(x)- c e^{2 \alpha m(x)}\right] dx-c\int _\Omega e^{2 \alpha m(x)} dx,\\ \end{aligned} \end{aligned}$$

where we have used (3.13) in the last step. Let

$$\begin{aligned} {\mathcal {S}}_3(c)=\displaystyle \frac{{\mathcal {S}}'_1(c)}{{\mathcal {S}}_1(c)}\displaystyle \int _\Omega \left[ e^{\alpha m(x)} m(x)- c e^{2 \alpha m(x)}\right] dx+\int _\Omega e^{2 \alpha m(x)} dx, \end{aligned}$$
(3.17)

and consequently, we have

$$\begin{aligned} {\mathcal {S}}_2(c)=-c{\mathcal {S}}_3(c). \end{aligned}$$
(3.18)

It follows from the first equation of (2.9) that

$$\begin{aligned} \displaystyle \frac{dc_{0l}}{dl}<0,\;\;\lim _{l\rightarrow \tilde{l}}c_{0l}=\tilde{c}, \;\;\text {and}\;\; \lim _{l\rightarrow \infty }c_{0l}=0, \end{aligned}$$
(3.19)

where \(\tilde{c}\) and \( \tilde{l}\) are defined in Lemma 2.1 (see (2.8)). Therefore, to determine the zeros of \({\mathcal {S}}(l)\) in \((\tilde{l},\infty )\), we only need to consider the zeros of \({\mathcal {S}}_3(c)\) in \((0,\tilde{c})\).

It follows from the Hölder inequality and (3.13) that

$$\begin{aligned} \begin{aligned} {[{\mathcal {S}}_1'(c)]}^2&=\left[ \int _\Omega \frac{ e^{\frac{3}{2}\alpha m(x)} e^{\frac{1}{2}\alpha m(x)} }{ \left( 1+ c e^{\alpha m(x)} \right) ^\frac{3}{2} \left( 1+ c e^{\alpha m(x)} \right) ^\frac{1}{2}} dx\right] ^2\\&\le \int _\Omega \frac{ e^{3\alpha m(x)} }{ \left( 1+ c e^{\alpha m(x)} \right) ^3 } dx\int _\Omega \frac{ e^{\alpha m(x)} }{ 1+ c e^{\alpha m(x)} } dx=\frac{{\mathcal {S}}_1''(c){\mathcal {S}}_1(c)}{2}, \end{aligned} \end{aligned}$$

and consequently,

$$\begin{aligned} \left[ \displaystyle \frac{{\mathcal {S}}'_1(c)}{{\mathcal {S}}_1(c)}\right] '=\frac{{\mathcal {S}}''_1(c ){\mathcal {S}}_1(c )-\left( {\mathcal {S}}'_1(c )\right) ^2}{{\mathcal {S}}^2_1(c )}>0. \end{aligned}$$
(3.20)

Note from (2.8) that \(\int _\Omega \left[ e^{\alpha m(x)} m(x)- c e^{2 \alpha m(x)}\right] dx>0\) for \(c\in (0,\tilde{c})\). This, combined with (3.13), (3.17) and (3.20), yields

$$\begin{aligned} {\mathcal {S}}'_3(c )=\left[ \displaystyle \frac{{\mathcal {S}}'_1(c)}{{\mathcal {S}}_1(c)}\right] '\int _\Omega \left[ e^{\alpha m(x)} m(x)- c e^{2 \alpha m(x)}\right] dx-\displaystyle \frac{{\mathcal {S}}'_1(c)}{{\mathcal {S}}_1(c)}\int _\Omega e^{2 \alpha m(x)} dx>0.\nonumber \\ \end{aligned}$$
(3.21)

It follows from (2.8) that \(\lim _{c\rightarrow \tilde{c}}{\mathcal {S}}_3(c)=\int _\Omega e^{2 \alpha m(x)} dx>0\), and

$$\begin{aligned} \lim _{c\rightarrow 0} {\mathcal {S}}_3(c)=-\frac{\int _\Omega e^{2\alpha m(x)} dx}{\int _\Omega e^{\alpha m(x)} dx}{\mathcal {T}}(\alpha ). \end{aligned}$$

Therefore, if \({\mathcal {T}}(\alpha )<0\), then \({\mathcal {S}}_3(c)>0\) for all \(c\in (0,\tilde{c})\). If \({\mathcal {T}}(\alpha )>0\), then there exists \(c_0\in (0,\tilde{c})\) such that \({\mathcal {S}}_3(c_0)=0\), \({\mathcal {S}}_3(c)<0\) for \(c\in (0,c_0)\) and \({\mathcal {S}}_3(c)>0\) for \(c\in (c_0,\tilde{c})\). Then, we see from (3.16), (3.18) and (3.19) that if \({\mathcal {T}}(\alpha )<0\), then (i) holds; and if \({\mathcal {T}}(\alpha )>0\), then there exists \(l_0>\tilde{l}\) such that

$$\begin{aligned} c_{0l_0}=c_0,\;\;{\mathcal {S}}(l_0)=0,\;\;{\mathcal {S}}(l)<0\;\;\text {for}\;\;l\in (\tilde{l},l_0),\;\;\text {and}\;{\mathcal {S}}(l)>0\;\;\text {for}\;\;l\in (l_0,\infty ).\nonumber \\ \end{aligned}$$
(3.22)

It follows from (3.16) and (3.18) that

$$\begin{aligned} \begin{aligned} {\mathcal {S}}'(l_0)=&\left. {\mathcal {S}}'_2\left( c_{0l}\right) \right| _{l=l_0}\left. \displaystyle \frac{dc_{0l}}{dl}\right| _{l=l_0}=-\left[ {\mathcal {S}}_3\left( c_{0l_0}\right) +c_{0l_0}\left. {\mathcal {S}}_3'(c_{0l})\right| _{l=l_0}\right] \left. \displaystyle \frac{dc_{0l}}{dl}\right| _{l=l_0}\\ =&-\left[ {\mathcal {S}}_3\left( c_{0}\right) +c_{0}{\mathcal {S}}_3'(c_{0})\right] \left. \displaystyle \frac{dc_{0l}}{dl}\right| _{l=l_0},\\ \end{aligned} \end{aligned}$$

where we have used \(c_{0l_0}=c_0\) in the last step. Noting that \({\mathcal {S}}_3(c_0)=0\), we see from (3.19) and (3.21) that \({\mathcal {S}}'(l_0)>0\). Note from Theorem 2.5 that \({\mathcal {L}}=[ \tilde{l}+\epsilon ,1/\epsilon ]\) with \(0<\epsilon \ll 1\). Then, for sufficiently small \(\epsilon \), \(l_0\in {int}( {\mathcal {L}})=(\tilde{l}+\epsilon ,1/\epsilon )\) and (ii) holds. This completes the proof.\(\square \)
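A numerical sketch of \(l_0\) for an illustrative setting (again our own choices, not from the paper: \(\Omega =(0,1)\), \(m(x)=0.4+x\), \(\alpha =5\), for which \({\mathcal {T}}(\alpha )>0\), and \(r=0.2\)) follows the proof: locate the zero \(c_0\) of \({\mathcal {S}}_3\) in \((0,\tilde{c})\) and recover \(l_0\) from the first equation of (2.9):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative choices (ours, not from the paper): Omega = (0,1), m(x) = 0.4 + x,
# alpha = 5 (so that T(alpha) > 0 for this m), r = 0.2.
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
m = 0.4 + x
alpha, r = 5.0, 0.2
w = np.exp(alpha * m)                              # e^{alpha m(x)}

def integ(f):
    # composite trapezoidal rule on the uniform grid x
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

S1  = lambda c: integ(w / (1 + c * w))             # S_1(c)
dS1 = lambda c: -integ(w**2 / (1 + c * w)**2)      # S_1'(c), cf. (3.13)
S3  = lambda c: dS1(c) / S1(c) * integ(w * m - c * w**2) + integ(w**2)   # (3.17)

c_tilde = integ(w * m) / integ(w**2)               # tilde{c} in (2.8)
c0 = brentq(S3, 1e-12, c_tilde)                    # the unique zero of S_3 in (0, tilde{c})
l0 = r / integ(c0 * w / (1 + c0 * w))              # first equation of (2.9), |Omega| = 1
print(c0, l0)                                      # Hopf bifurcation value l_0 > tilde{l}
```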

By Lemma 3.3, we can solve (3.10) for \(\lambda =0\).

Lemma 3.4

Suppose that \(\lambda =0\), and let \({\mathcal {T}} (\alpha )\) be defined in Lemma 3.3. Then the following statements hold:

  1. (i)

    If \({\mathcal {T}} (\alpha )<0\), then (3.10) has no solution;

  2. (ii)

    If \({\mathcal {T}} (\alpha )>0\), then (3.10) has a unique solution

    $$\begin{aligned} (\delta ,s_1,s_2,\nu , w, z,l)=(\delta _0,s_{10},s_{20},\nu _0, w_0, z_0,l_0), \end{aligned}$$

    where \(l_0\) is obtained in Lemma 3.3,

    $$\begin{aligned} \begin{aligned}&s_{10}=0,\;w_0= 0,\; z_0= 0,\;\nu _0=\sqrt{ -\frac{ \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l_0)} dx \int _\Omega M_{3}^{(0,l_0)} dx }{ | \Omega | \int _\Omega e^{\alpha m(x)} dx } },\\&\delta _0= \sqrt{ \displaystyle \frac{1 }{1+\left( \frac{\int _\Omega M_{3}^{(0,l_0)} dx }{\nu _0|\Omega |} \right) ^2}},\;\;s_{20}=-\displaystyle \frac{\delta _0}{\nu _0|\Omega |}\displaystyle \int _\Omega M_{3}^{(0,l_0)} dx, \end{aligned} \end{aligned}$$

    and \(M_{i}^{(0,l)}\) is defined in (3.12) for \(i=1,2,3,4\).

Proof

It follows from (3.12) that

$$\begin{aligned} \displaystyle \int _\Omega e^{\alpha m(x)}M_{2}^{(0,l_0)}dx<0,\;\;\displaystyle \int _\Omega M_{3}^{(0,l_0)} dx>0, \end{aligned}$$
(3.23)

which implies that \(\nu _0\) is well defined.

Substituting \(\lambda =0\) into \( h_{2}=0\) and \(h_{4}=0\), respectively, we have \( w= w_0= 0\) and \( z= z_0= 0\). Note from the second equation of (2.11) that

$$\begin{aligned} \int _\Omega M_{4}^{(0,l)} dx=0 \;\;\text {for any}\;\;l\in {\mathcal {L}}. \end{aligned}$$
(3.24)

Then plugging \( w= z=0\) and \(\lambda =0\) into \( h_i=0\) for \(i=1,3,5\), respectively, we have

$$\begin{aligned} \left( {\begin{array}{*{20}{c}} \int _\Omega e^{\alpha m(x)} M_{1}^{(0,l)} dx-\textrm{i}\nu \int _\Omega e^{\alpha m(x)}dx &{} \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l)} dx\\ \int _\Omega M_3^{(0,l)} dx &{} -\textrm{i}\nu | \Omega | \end{array}} \right) \left( {\begin{array}{*{20}{c}} \delta \\ s_{1} + \textrm{i} s_{2} \end{array}} \right) = \left( {\begin{array}{*{20}{c}} 0\\ 0 \end{array}} \right) ,\nonumber \\ \end{aligned}$$
(3.25)

and

$$\begin{aligned} \delta ^2 +s_1^2+s_2^2=1. \end{aligned}$$
(3.26)

Therefore, (3.10) has a solution if and only if (3.25)–(3.26) is solvable for some value of \((\delta ,s_1,s_2,\nu ,l)\) with \(\delta ,\nu \ge 0\), \(s_1,s_2\in \mathbb R\) and \(l\in {\mathcal {L}}\). It follows from (3.23) that (3.25)–(3.26) is solvable (or (3.10) is solvable) if and only if

$$\begin{aligned} {\mathcal {S}}(l)=\int _\Omega e^{\alpha m(x)} M_{1}^{(0,l)} dx=0 \end{aligned}$$
(3.27)

is solvable for some \(l\in {\mathcal {L}}\). From Lemma 3.3, we see that if \({\mathcal {T}} (\alpha )<0\), then (3.27) has no solution in \({\mathcal {L}}\); and if \({\mathcal {T}} (\alpha )>0\), then (3.27) has a unique solution \(l_0\) in \({\mathcal {L}}\) with \({\mathcal {S}}(l_0)=0\).
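To make the solvability criterion explicit, note that (3.25) has a solution \((\delta ,s_1+\textrm{i}s_2)\ne (0,0)\) only if the determinant of its coefficient matrix vanishes, i.e.,

$$\begin{aligned} -\nu ^2 | \Omega | \int _\Omega e^{\alpha m(x)}dx-\int _\Omega e^{\alpha m(x)} M_{2}^{(0,l)} dx \int _\Omega M_{3}^{(0,l)} dx -\textrm{i} \nu | \Omega | {\mathcal {S}}(l)=0. \end{aligned}$$

If \(\nu =0\), the determinant equals \(-\int _\Omega e^{\alpha m(x)} M_{2}^{(0,l)} dx \int _\Omega M_{3}^{(0,l)} dx>0\) by (3.23), so only the trivial solution exists, contradicting (3.26). Hence \(\nu >0\), and separating the real and imaginary parts above gives \({\mathcal {S}}(l)=0\) together with \(\nu ^2=-\frac{\int _\Omega e^{\alpha m(x)} M_{2}^{(0,l)} dx \int _\Omega M_{3}^{(0,l)} dx}{| \Omega | \int _\Omega e^{\alpha m(x)}dx}\), which equals \(\nu _0^2\) at \(l=l_0\).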

Substituting \(l=l_0\) into (3.25), we compute that

$$\begin{aligned} \begin{aligned} s_1=s_{10}=0,\; \nu =\nu _0,\;s_2=-\displaystyle \frac{\delta }{\nu _0|\Omega |}\displaystyle \int _\Omega M_{3}^{(0,l_0)} dx. \end{aligned} \end{aligned}$$
(3.28)

Then, plugging (3.28) into (3.26), we get \(\delta =\delta _0\) and \(s_2=s_{20}\). This completes the proof.\(\square \)
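
As a sanity check (not part of the proof), the formulas in Lemma 3.4 can be verified numerically: once \({\mathcal {S}}(l_0)=0\), the system (3.25)–(3.26) only involves \(\int _\Omega e^{\alpha m(x)} M_{2}^{(0,l_0)} dx<0\), \(\int _\Omega M_{3}^{(0,l_0)} dx>0\), \(\int _\Omega e^{\alpha m(x)} dx\) and \(| \Omega |\). The following Python sketch uses hypothetical placeholder values for these integrals (they are not computed from the model) and checks that \((\delta _0,s_{10},s_{20},\nu _0)\) solves (3.25)–(3.26).

```python
# Sanity check of Lemma 3.4 with placeholder values for the integrals
# (I2 < 0, I3 > 0, Ie > 0 and |Omega| > 0 are hypothetical numbers, not data).
import numpy as np

I2, I3, Ie, Om = -0.8, 0.5, 2.0, 1.0   # int e^{am}M_2, int M_3, int e^{am}, |Omega|

nu0 = np.sqrt(-I2 * I3 / (Om * Ie))                    # formula for nu_0
delta0 = np.sqrt(1.0 / (1.0 + (I3 / (nu0 * Om))**2))   # formula for delta_0
s20 = -delta0 * I3 / (nu0 * Om)                        # formula for s_20 (s_10 = 0)

# System (3.25) with S(l_0) = 0 in the (1,1) entry, applied to (delta_0, i*s_20):
A = np.array([[0.0 - 1j * nu0 * Ie, I2],
              [I3, -1j * nu0 * Om]])
residual = A @ np.array([delta0, 1j * s20])

print(np.allclose(residual, 0))                        # True
print(np.isclose(delta0**2 + s20**2, 1.0))             # normalization (3.26): True
```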

Now, we solve (3.10) for \(\lambda >0\).

Theorem 3.5

Suppose that \({\mathcal {T}} (\alpha )>0\), where \({\mathcal {T}} (\alpha )\) is defined in Lemma 3.3. Then there exists \(\tilde{\lambda }\in (0,\delta _\epsilon )\), where \(\delta _\epsilon \) is obtained in Theorem 2.5, and a continuously differentiable mapping

$$\begin{aligned} \lambda \mapsto (\delta _{\lambda },s_{1\lambda },s_{2\lambda }, \nu _{\lambda },w_{\lambda }, z_\lambda ,l_\lambda ): [0,\tilde{\lambda }]\rightarrow \mathbb R^4\times \left( \left( X_{1}\right) _{\mathbb C}\right) ^2\times {\mathcal {L}} \end{aligned}$$

such that (3.10) has a unique solution \((\delta _{\lambda },s_{1\lambda },s_{2\lambda }, \nu _{\lambda },w_{\lambda }, z_\lambda ,l_\lambda )\) for \(\lambda \in [0,\tilde{\lambda }]\), and

$$\begin{aligned} (\delta _{\lambda },s_{1\lambda },s_{2\lambda }, \nu _{\lambda },w_{\lambda }, z_\lambda ,l_\lambda )=(\delta _{0},s_{10},s_{20}, \nu _0,w_0, z_0,l_0) \end{aligned}$$

for \(\lambda =0\), where \((\delta _{0},s_{10},s_{20}, \nu _0,w_0, z_0,l_0)\) is defined in Lemma 3.4.

Proof

We first show the existence. It follows from Lemma 3.4 that \( H\left( K_0\right) = 0\), where \(K_0=(\delta _{0},s_{10},s_{20}, \nu _0,w_0, z_0,l_0,0)\). Note from (3.24) that

$$\begin{aligned} \displaystyle \int _\Omega M_4^{(0,l)}dx=0,\;\; \displaystyle \frac{d}{dl}\left( \displaystyle \int _\Omega M_4^{(0,l)}dx\right) =0\;\;\text {for all}\;\; l\in {\mathcal {L}}. \end{aligned}$$
(3.29)

Then the Fréchet derivative of \(H (\delta , s_1,s_2, \nu ,w, z,l,\lambda )\) with respect to \((\delta ,s_1,s_2, \nu ,w, z, l)\) at \(K_0\) is as follows:

$$\begin{aligned} K( \hat{\delta },\hat{s}_1,\hat{s}_2, \hat{\nu },\hat{w},\hat{z},\hat{l}):=(k_1,k_2,k_3,k_4,k_5)^T, \end{aligned}$$

where \((\hat{\delta },\hat{s}_1,\hat{s}_2,\hat{\nu },\hat{w},\hat{z},\hat{l})\in \mathbb R^4\times \left( \left( X_{1}\right) _{\mathbb C}\right) ^2\times {\mathcal {L}}\), and

$$\begin{aligned} \begin{aligned} k_1=&\displaystyle \int _\Omega { e^{\alpha m(x)} \left[ \left( M_{1}^{(0,l_0)}-\textrm{i} \nu _0\right) (\hat{\delta }+ \hat{w} )+ M_{2}^{(0,l_0)} (\hat{s}_1 + \textrm{i}\hat{s}_2+\hat{z}) \right] } dx- \textrm{i}\hat{\nu }\delta _0 \int _\Omega e^{\alpha m(x)} dx\\&+\hat{l} \left. \frac{d}{dl}\left[ \int _\Omega e^{\alpha m(x)} \left( \delta _0 M_{1}^{(0,l)} +\textrm{i}s_{20} M_{2}^{(0,l)} \right) dx \right] \right| _{l=l_0},\\ k_{2}=&\displaystyle L \hat{w},\\ k_3=&\int _\Omega { \left[ M_{3}^{(0,l_0)}(\hat{\delta }+ \hat{w} )+ M_{4}^{(0,l_0)} \hat{z} \right] } dx+ \hat{\nu }s_{20} | \Omega | -\textrm{i} \nu _0 (\hat{s}_1 + \textrm{i}\hat{s}_2)| \Omega | \\&+\hat{l} \delta _0 \left. \frac{d}{dl} \left( \int _\Omega M_{3}^{(0,l)} dx \right) \right| _{l=l_0},\\ k_{4}=&\displaystyle \theta \Delta \hat{z} ,\\ k_5=&2 \delta _0| \Omega |\hat{\delta }+2s_{20}| \Omega |\hat{s}_2,\\ \end{aligned} \end{aligned}$$

where we have used (3.29) to obtain \(k_3\). If \( K( \hat{\delta },\hat{s}_1,\hat{s}_2,\hat{\nu },\hat{w},\hat{z},\hat{l})= 0\), then \( \hat{w}= 0\) and \(\hat{z}= 0\). Substituting \(\hat{w}=\hat{z}= 0\) into \(k_1=0\) and \(k_3=0\), respectively, separating the real and imaginary parts, and noting that \(k_5=0\), we get

$$\begin{aligned} (D_{ij})(\hat{\delta }, \hat{s}_1,\hat{s}_2,\hat{\nu },\hat{l})^T= (0,0,0,0,0)^T. \end{aligned}$$

Here \(D_{ij}=0\) except

$$\begin{aligned} \begin{aligned}&D_{12}= \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l_0)} dx,\;\;D_{15}= \delta _0 {\mathcal {S}}'(l_0),\;\;D_{21}=- \nu _0 \int _\Omega e^{\alpha m(x)} dx, \;\;\;D_{34}=s_{20}| \Omega |,\\&D_{23}= \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l_0)} dx,\;\;D_{24}=- \delta _0 \int _\Omega e^{\alpha m(x)} dx,\\&D_{25}= s_{20} \frac{d}{dl} \left. \left( \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l)} dx\right) \right| _{l=l_0},\;\;D_{31}= \int _\Omega M_{3}^{(0,l_0)} dx,\;\;D_{33}= \nu _0| \Omega |,\\&D_{35}=\delta _0 \frac{d}{dl} \left. \left( \int _\Omega M_{3}^{(0,l)} dx \right) \right| _{l=l_0},\;\;D_{42}= -\nu _0 | \Omega |,\;\;D_{51}=2 \delta _0| \Omega |,\;\;D_{53}=2s_{20}| \Omega |, \end{aligned} \end{aligned}$$

where \({\mathcal {S}}(l)\) is defined in Lemma 3.3. A direct computation implies that

$$\begin{aligned} \begin{aligned} \left| \left( D_{ij}\right) \right| =&2\nu _0 | \Omega |^2\delta _0{\mathcal {S}}'(l_0) \left| \begin{array}{ccc} - \nu _0 \int _\Omega e^{\alpha m(x)} dx&{} \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l_0)} dx&{}- \delta _0 \int _\Omega e^{\alpha m(x)} dx \\ \int _\Omega M_{3}^{(0,l_0)} dx&{}\nu _0| \Omega |&{}s_{20}| \Omega |\\ \delta _0&{}s_{20}&{}0 \end{array}\right| \\ =&2\nu _0 \delta _0 | \Omega |^2 {\mathcal {S}}'(l_0) \bigg ( \delta _0 s_{20} |\Omega | \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l_0)} dx+\delta _0^2 \nu _0 |\Omega | \int _\Omega e^{\alpha m(x)} dx \\&+\nu _0 s_{20}^2 |\Omega | \int _\Omega e^{\alpha m(x)} dx -\delta _0 s_{20} \int _\Omega e^{\alpha m(x)} dx \int _\Omega M_{3}^{(0,l_0)} dx \bigg ). \end{aligned} \end{aligned}$$
(3.30)

Since \((\delta _0,s_{10},s_{20},\nu _0)\) satisfies (3.25), we have

$$\begin{aligned} s_{20} \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l_0)} dx = \nu _0 \delta _0 \int _\Omega e^{\alpha m(x)} dx,\;\;\delta _0 \int _\Omega M_{3}^{(0,l_0)} dx =-\nu _0 s_{20} |\Omega |. \end{aligned}$$
(3.31)

Plugging (3.31) into (3.30), we have

$$\begin{aligned} |(D_{ij})|=4\nu _0^2 \delta _0 | \Omega |^3{\mathcal {S}}'(l_0)\left( \delta _0^2+s_{20}^2 \right) \int _\Omega e^{\alpha m(x)} dx>0, \end{aligned}$$

where we have used Lemma 3.3 (ii) in the last step. Therefore, \(\hat{\delta }=0,\hat{s}_1=0,\hat{s}_2=0,\hat{\nu }=0\) and \(\hat{l}=0\), which implies that K is injective and thus bijective. From the implicit function theorem, we see that there exists \(\tilde{\lambda }\in (0,\delta _\epsilon )\) and a continuously differentiable mapping

$$\begin{aligned} \lambda \mapsto (\delta _{\lambda },s_{1\lambda },s_{2\lambda },\nu _{\lambda }, w_{\lambda }, z_\lambda ,l_\lambda ): [0,\tilde{\lambda }]\rightarrow \mathbb R^4\times \left( \left( X_{1}\right) _{\mathbb C}\right) ^2\times {\mathcal {L}} \end{aligned}$$

such that \( H\left( \delta _{\lambda },s_{1\lambda },s_{2\lambda }, \nu _\lambda ,w_\lambda , z_\lambda ,l_\lambda ,\lambda \right) = 0\), and for \(\lambda =0\),

$$\begin{aligned} \left( \delta _{\lambda },s_{1\lambda },s_{2\lambda }, \nu _{\lambda },w_{\lambda }, z_\lambda ,l_\lambda \right) =(\delta _0,s_{10},s_{20},\nu _0, w_0, z_0,l_0). \end{aligned}$$

Now, we show the uniqueness. From the implicit function theorem, we only need to verify that if \(\left( \delta ^{\lambda },s_{1}^{\lambda },s_{2}^{\lambda }, \nu ^{\lambda },w^{\lambda },z^{\lambda },l^{\lambda }\right) \) is a solution of (3.10) for \(\lambda \in (0,\tilde{\lambda }]\), then

$$\begin{aligned} \lim _{\lambda \rightarrow 0}\left( \delta ^{\lambda },s_{1}^{\lambda },s_{2}^{\lambda },\nu ^{\lambda }, w^{\lambda },z^{\lambda },l^{\lambda }\right) = (\delta _0,s_{10},s_{20}, \nu _0,w_0, z_0,l_0) \;\;\text {in}\;\;\mathbb R^4\times \left( (X_1)_{\mathbb C}\right) ^2\times \mathbb R.\nonumber \\ \end{aligned}$$
(3.32)

Noticing that \(h_5\left( \delta ^{\lambda },s_{1}^{\lambda },s_{2}^{\lambda },\nu ^{\lambda }, w^{\lambda },z^{\lambda },l^{\lambda }\right) =0\), we see that \(|\delta ^{\lambda }|\), \(|s_{1}^{\lambda }|\), \(|s_{2}^{\lambda }|\), \(\Vert w^{\lambda }\Vert _2\) and \(\Vert z^{\lambda }\Vert _2\) are bounded for \(\lambda \in [0,\tilde{\lambda }]\). Since \(l^\lambda \in {\mathcal {L}}\), we obtain that \(l^{\lambda }\) is bounded for \(\lambda \in [0,\tilde{\lambda }]\). Moreover, \(|\nu ^\lambda |\) is also bounded for \(\lambda \in [0,\tilde{\lambda }]\) from Lemma 3.1. Let

$$\begin{aligned} \begin{aligned} \mathcal {Q}_1(\lambda )=&\lambda e^{\alpha m(x)} \left[ M_{1}^{(\lambda ,l^{\lambda })} (\delta ^{\lambda }+w^{\lambda }) + M_{2}^{(\lambda ,l^{\lambda })} (s_1^{\lambda } + \textrm{i} s_2^{\lambda }+z^{\lambda }) \right] \\&-\textrm{i}\lambda \nu ^{\lambda } e^{\alpha m(x)} (\delta ^{\lambda }+w^{\lambda })-\frac{\lambda }{ | \Omega |} h_1\left( \delta ^{\lambda },s_{1}^{\lambda },s_{2}^{\lambda },\nu ^{\lambda }, w^{\lambda },z^{\lambda },l^{\lambda }\right) ,\\ \mathcal {Q}_2(\lambda )=&\lambda \left[ M_{3}^{(\lambda ,l^{\lambda })} (\delta ^{\lambda }+w^{\lambda }) + M_{4}^{(\lambda ,l^{\lambda })} (s_1^{\lambda } + \textrm{i} s_2^{\lambda }+z^{\lambda }) \right] \\&-\textrm{i}\lambda \nu ^{\lambda } (s_1^{\lambda } + \textrm{i} s_2^{\lambda }+z^{\lambda })-\frac{\lambda }{ | \Omega |} h_3\left( \delta ^{\lambda },s_{1}^{\lambda },s_{2}^{\lambda },\nu ^{\lambda }, w^{\lambda },z^{\lambda },l^{\lambda }\right) . \end{aligned} \end{aligned}$$

Noting that \(\left( \delta ^{\lambda },s_{1}^{\lambda },s_{2}^{\lambda }, \nu ^{\lambda },w^{\lambda },z^{\lambda },l^{\lambda }\right) \) is bounded in \(\mathbb R^4\times \left( (Y_1)_{\mathbb C}\right) ^2\times \mathbb R\), we see from (3.7) that \(\lim _{\lambda \rightarrow 0} \mathcal {Q}_1(\lambda )=0\) and \(\lim _{\lambda \rightarrow 0} \mathcal {Q}_2(\lambda )=0\) in \((Y_1)_{\mathbb C}\). Since

$$\begin{aligned} w^{\lambda }=-L^{-1}\left[ \mathcal {Q}_1(\lambda )\right] ,\;\;z^{\lambda }=-\Delta ^{-1}\left[ \mathcal {Q}_2(\lambda )\right] , \end{aligned}$$

where \(L^{-1}\) and \(\Delta ^{-1}\) are bounded operators from \(\left( Y_{1}\right) _{\mathbb C}\) to \(\left( X_{1}\right) _{\mathbb C}\), we get

$$\begin{aligned} \lim _{\lambda \rightarrow 0} w^{\lambda }= w_0=0,\;\;\lim _{\lambda \rightarrow 0} z^{\lambda }= z_0=0\;\;\text {in}\;\;\left( X_1\right) _{\mathbb C}. \end{aligned}$$

Since \(\left( \delta ^{\lambda },s_{1}^{\lambda },s_{2}^{\lambda }, \nu ^{\lambda },l^{\lambda }\right) \) is bounded in \(\mathbb R^5\) for \(\lambda \in (0,\tilde{\lambda }]\), we see that, up to a subsequence,

$$\begin{aligned} \lim _{\lambda \rightarrow 0} \delta ^{\lambda }= \delta ^*, \;\;\lim _{\lambda \rightarrow 0} s^{\lambda }_1= s_1^*,\;\;\lim _{\lambda \rightarrow 0} s^{\lambda }_2= s_2^*,\; \;\lim _{\lambda \rightarrow 0} \nu ^{\lambda }= \nu ^*,\;\;\lim _{\lambda \rightarrow 0} l^{\lambda }= l^*. \end{aligned}$$

Taking \(\lambda \rightarrow 0\) on both sides of

$$\begin{aligned} H\left( \delta ^{\lambda },s_{1}^{\lambda },s_{2}^{\lambda },\nu ^{\lambda }, w^{\lambda },z^{\lambda },l^{\lambda },\lambda \right) =0, \end{aligned}$$

we see that \(H\left( \delta ^{*},s_{1}^{*},s_{2}^{*},\nu ^{*}, 0,0,l^{*},0\right) =0\). This, combined with Lemma 3.4, implies that \(\delta ^*=\delta _0\), \(s_1^*=s_{10}\), \(s_2^*=s_{20}\), \(\nu ^*=\nu _0\) and \(l^*=l_0\). This completes the proof.\(\square \)
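
The simplification of the determinant in (3.30) by means of (3.31) can also be checked symbolically. The following SymPy sketch (an illustration only) treats \(\int _\Omega e^{\alpha m(x)} dx\), \(\int _\Omega e^{\alpha m(x)} M_{2}^{(0,l_0)} dx\), \(\int _\Omega M_{3}^{(0,l_0)} dx\) and \({\mathcal {S}}'(l_0)\) as abstract symbols.

```python
# Symbolic check that substituting the identities (3.31) into the bracket in (3.30)
# yields the final expression for |D_ij| (placeholder symbols, illustration only).
import sympy as sp

nu0, d0, s20, Om, Ie, I2, I3, Sprime = sp.symbols(
    'nu0 delta0 s20 Omega Ie I2 I3 Sprime', real=True)
# Ie = int e^{alpha m} dx, I2 = int e^{alpha m} M_2 dx, I3 = int M_3 dx

bracket = d0*s20*Om*I2 + d0**2*nu0*Om*Ie + nu0*s20**2*Om*Ie - d0*s20*Ie*I3
det = 2*nu0*Om**2*d0*Sprime*bracket

# Relations (3.31): s20*I2 = nu0*delta0*Ie and delta0*I3 = -nu0*s20*|Omega|
det_sub = det.subs({I2: nu0*d0*Ie/s20, I3: -nu0*s20*Om/d0})
target = 4*nu0**2*d0*Om**3*Sprime*(d0**2 + s20**2)*Ie
print(sp.simplify(det_sub - target))   # prints 0
```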

Then from Lemma 3.2 and Theorem 3.5, we have the following result.

Theorem 3.6

Let \({\mathcal {L}}\) be defined in Theorem 2.5. Suppose that \({\mathcal {T}} (\alpha )>0\) and \(\lambda \in (0,\tilde{\lambda }]\), where \(0<\tilde{\lambda }\ll 1\), and \({\mathcal {T}} (\alpha )\) is defined in Lemma 3.3. Then \((\varphi ,\psi ,\sigma ,l)\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} \left( A_l(\lambda )- \textrm{i}\sigma I\right) ( \varphi , \psi )^T= 0,\\ \sigma \ge 0,\;l\in {\mathcal {L}},\;(\varphi , \psi )^T\ne (0,0)^T\in X^2_{\mathbb C},\\ \end{array}\right. } \end{aligned}$$

if and only if

$$\begin{aligned} \sigma =\lambda \nu _\lambda ,\;\varphi =\kappa \varphi _\lambda =\kappa (\delta _\lambda + w_\lambda ),\;\psi =\kappa \psi _\lambda =\kappa [ (s_{1\lambda }+\textrm{i}s_{2\lambda })+ z_\lambda ],\;l=l_\lambda , \end{aligned}$$

where \(A_l(\lambda )\) is defined in (3.2), I is the identity operator, \(\kappa \in \mathbb C\) is a nonzero constant, and \((\delta _\lambda ,s_{1\lambda },s_{2\lambda }, \nu _{\lambda },w_{\lambda }, z_\lambda , l_\lambda )\) is obtained in Theorem 3.5.

To show that \(\textrm{i}\lambda \nu _\lambda \) is a simple eigenvalue of \(A_{l_\lambda }(\lambda )\), we need to consider the following operator

$$\begin{aligned} A^H(\lambda ) : = \left( {\begin{array}{cc} L&{}0\\ 0&{}\theta \Delta \end{array}} \right) + \lambda \left( {\begin{array}{cc} e^{\alpha m(x)}M_1^{(\lambda ,l_\lambda )}+\textrm{i}\nu _\lambda e^{\alpha m(x)}&{}M_3^{(\lambda ,l_\lambda )}\\ e^{\alpha m(x)}M_2^{(\lambda ,l_\lambda )}&{} M_4^{(\lambda ,l_\lambda )}+\textrm{i}\nu _\lambda \end{array}} \right) . \end{aligned}$$
(3.33)

Let

$$\begin{aligned} \mathcal {I} : = \left( {\begin{array}{cc} e^{\alpha m(x)}&{}0\\ 0&{}1 \end{array}} \right) . \end{aligned}$$
(3.34)

Then \(A^H(\lambda )\) is the adjoint operator of \(\mathcal {I}\left( A_{l_\lambda }(\lambda )- \textrm{i}\lambda \nu _\lambda I \right) \). That is, for any \((\phi _1,\phi _2)^T\in X_{\mathbb C}\) and \((\psi _1,\psi _2)^T\in X_{\mathbb C}\), we have

$$\begin{aligned} \left\langle A^H(\lambda )\left( \phi _1,\phi _2\right) ^T,\left( \psi _1,\psi _2\right) ^T\right\rangle =\left\langle \left( \phi _1,\phi _2\right) ^T,\mathcal {I}\left( A_{l_\lambda }(\lambda )- \textrm{i}\lambda \nu _\lambda I \right) \left( \psi _1,\psi _2\right) ^T\right\rangle . \end{aligned}$$
(3.35)

Lemma 3.7

Let \(A^H(\lambda )\) be defined in (3.33). Suppose that \({\mathcal {T}} (\alpha )>0\) and \(\lambda \in (0,\tilde{\lambda }]\), where \(0<\tilde{\lambda }\ll 1\) and \({\mathcal {T}} (\alpha )\) is defined in Lemma 3.3. Then

$$\begin{aligned} {\mathcal {N}} \left[ A^H(\lambda ) \right] =\textrm{span} [( \tilde{\varphi }_\lambda ,\tilde{\psi }_\lambda )^T ], \end{aligned}$$

and, ignoring a scalar factor, \((\tilde{\varphi }_\lambda ,\tilde{\psi }_\lambda )\) can be represented as

$$\begin{aligned} {\left\{ \begin{array}{ll} \tilde{\varphi }_\lambda =\tilde{\delta }_\lambda +\tilde{w}_\lambda , \;\;\text {where}\; \tilde{\delta }_\lambda \ge 0\;\;\text {and}\;\;\tilde{w}_\lambda \in \left( X_{1}\right) _{\mathbb C},\\ \tilde{\psi }_\lambda =(\tilde{s}_{1\lambda }+\textrm{i}\tilde{s}_{ 2\lambda }) + \tilde{z}_\lambda ,\;\;\text {where}\;\; \tilde{s}_{1\lambda },\tilde{s}_{2\lambda }\in \mathbb R\;\;\text {and}\;\;\tilde{z}_\lambda \in \left( X_{1}\right) _{\mathbb C},\\ \Vert \tilde{\varphi }_\lambda \Vert _2^{2}+\Vert \tilde{\psi }_\lambda \Vert _2^{2}=| \Omega |. \end{array}\right. } \end{aligned}$$
(3.36)

Moreover, \( \lim _{\lambda \rightarrow 0}\left( \tilde{\delta }_\lambda ,\tilde{s}_{1\lambda },\tilde{s}_{2\lambda },\tilde{w}_{\lambda },\tilde{z}_\lambda \right) =(\tilde{\delta }_0,\tilde{s}_{10},\tilde{s}_{20},\tilde{w}_0,\tilde{z}_0 )\) in \(\mathbb R^3\times \left( (X_1)_{\mathbb C}\right) ^2\), where \(\tilde{w}_0= 0,\;\tilde{z}_0= 0,\)

$$\begin{aligned} \tilde{\delta }_0=\sqrt{\frac{1}{1+\left( \frac{ \nu _0 \int _\Omega e^{\alpha m(x)} dx }{ \int _\Omega M_3^{(0,l_0)} dx}\right) ^2 }}, \;\;\tilde{s}_{10}=0, \;\;\tilde{s}_{20}=-\frac{ \tilde{\delta }_{0} \nu _0 \int _\Omega e^{\alpha m(x)} dx }{ \int _\Omega M_3^{(0,l_0)} dx}, \end{aligned}$$

and \(\nu _0,l_0\) are defined in Lemma 3.4.

Proof

It follows from Theorem 3.6 that 0 is an eigenvalue of \( \mathcal {I}\left( A_{l_\lambda }(\lambda )- \textrm{i}\lambda \nu _\lambda I \right) \) and \({\mathcal {N}} \left[ \mathcal {I}\left( A_{l_\lambda }(\lambda )- \textrm{i}\lambda \nu _\lambda I \right) \right] \) is one-dimensional for \(\lambda \in (0,\tilde{\lambda }]\), where \({\mathcal {I}}\) is defined in (3.34). Hence 0 is also an eigenvalue of \( A^H(\lambda )\), and \({\mathcal {N}} [ A^H(\lambda )]\) is also one-dimensional. Therefore, \({\mathcal {N}} [ A^H(\lambda )]=\textrm{span} [(\tilde{\varphi }_\lambda ,\tilde{\psi }_\lambda )^T]\), and \(( \tilde{\varphi }_\lambda ,\tilde{\psi }_\lambda )\) can be represented as (3.36). By arguments similar to those in the proof of Theorem 3.5, we see that

$$\begin{aligned} \lim _{\lambda \rightarrow 0}\tilde{w}_\lambda =0,\;\;\lim _{\lambda \rightarrow 0}\tilde{z}_\lambda =0\;\;\text {in}\;\;\left( X_1\right) _{\mathbb C}, \end{aligned}$$

and up to a subsequence,

$$\begin{aligned} \lim _{\lambda \rightarrow 0} \tilde{\delta }_{\lambda }=\delta _0^*,\;\;\lim _{\lambda \rightarrow 0} \tilde{s}_{1\lambda }= s_{10}^*,\;\;\lim _{\lambda \rightarrow 0} \tilde{s}_{2\lambda }=s_{20}^*, \end{aligned}$$

where \(\left( \delta _0^{*}, s_{10}^*, s_{20}^*\right) \) satisfies \(\left( \delta _0^*\right) ^2+\left( s_{ 10}^*\right) ^2+\left( s_{ 20}^*\right) ^2=1\), and

$$\begin{aligned} \left( {\begin{array}{*{20}{c}} \displaystyle \textrm{i}\nu _0 \int _\Omega e^{\alpha m(x)} dx&{} \displaystyle \int _\Omega M_3^{(0,l_0)} dx\\ \displaystyle \int _\Omega e^{\alpha m(x)} M_2^{(0,l_0)} dx &{} \textrm{i}\nu _0 | \Omega | \end{array}} \right) \left( {\begin{array}{*{20}{c}} \delta _{0}^*\\ s_{10}^* + \textrm{i} s_{20}^* \end{array}} \right) = \left( {\begin{array}{*{20}{c}} 0\\ 0 \end{array}} \right) . \end{aligned}$$

A direct computation implies that \(\left( \delta _0^{*}, s_{10}^*, s_{20}^*\right) =\left( \tilde{\delta }_0, \tilde{s}_{10}, \tilde{s}_{20}\right) \), and consequently, \(\lim _{\lambda \rightarrow 0}\left( \tilde{\delta }_\lambda ,\tilde{s}_{1\lambda },\tilde{s}_{2\lambda }\right) =\left( \tilde{\delta }_0, \tilde{s}_{10}, \tilde{s}_{20}\right) \). This completes the proof.\(\square \)
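
A check analogous to the one after Lemma 3.4 applies to the limiting adjoint eigenvector: with the same hypothetical placeholder integrals (not computed from the model), \((\tilde{\delta }_0,\tilde{s}_{10},\tilde{s}_{20})\) solves the limiting system displayed above together with the normalization.

```python
# Sanity check of the limiting adjoint eigenvector in Lemma 3.7, with the same
# placeholder integrals I2 < 0, I3 > 0, Ie, |Omega| as before (hypothetical values).
import numpy as np

I2, I3, Ie, Om = -0.8, 0.5, 2.0, 1.0
nu0 = np.sqrt(-I2 * I3 / (Om * Ie))

td0 = np.sqrt(1.0 / (1.0 + (nu0 * Ie / I3)**2))   # tilde{delta}_0
ts20 = -td0 * nu0 * Ie / I3                       # tilde{s}_{20}, tilde{s}_{10} = 0

A = np.array([[1j * nu0 * Ie, I3],
              [I2, 1j * nu0 * Om]])
print(np.allclose(A @ np.array([td0, 1j * ts20]), 0))  # True
print(np.isclose(td0**2 + ts20**2, 1.0))               # True
```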

Then, by virtue of Lemma 3.7, we show that \(\textrm{i} \lambda \nu _\lambda \) is a simple eigenvalue of \(A_{l_\lambda }(\lambda )\).

Theorem 3.8

Suppose that \({\mathcal {T}} (\alpha )>0\) and \(\lambda \in (0,\tilde{\lambda }]\), where \(0<\tilde{\lambda }\ll 1\) and \({\mathcal {T}} (\alpha )\) is defined in Lemma 3.3. Then \(\textrm{i} \lambda \nu _\lambda \) is a simple eigenvalue of \( A_{l_\lambda }(\lambda ) \), where \( A_l(\lambda ) \) is defined in (3.2).

Proof

It follows from Theorem 3.6 that \({\mathcal {N}} [ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I] =\textrm{span} [( \varphi _\lambda , \psi _\lambda )^T]\), where \( \varphi _\lambda \) and \(\psi _\lambda \) are defined in Theorem 3.5. It remains to show that

$$\begin{aligned} {\mathcal {N}} [ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I]^2 ={\mathcal {N}} [ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I]. \end{aligned}$$

Letting \(( \Psi _1, \Psi _2)^T \in {\mathcal {N}} [ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I]^2\), we have

$$\begin{aligned}{}[ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I] ( \Psi _1, \Psi _2)^T \in {\mathcal {N}} [ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I]=\textrm{span} [( \varphi _\lambda , \psi _\lambda )^T], \end{aligned}$$

and consequently, there exists a constant \(s\in \mathbb C\) such that

$$\begin{aligned}{}[ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I] ( \Psi _1, \Psi _2)^T =s ( \varphi _\lambda , \psi _\lambda )^T. \end{aligned}$$

Then

$$\begin{aligned} {\mathcal {I}}[ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I] ( \Psi _1, \Psi _2)^T =s{\mathcal {I}}( \varphi _\lambda , \psi _\lambda )^T, \end{aligned}$$
(3.37)

where \({\mathcal {I}}\) is defined in (3.34). Note that \(A^H(\lambda )\) is the adjoint operator of \({\mathcal {I}}[ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I]\), and it follows from Lemma 3.7 that

$$\begin{aligned} {\mathcal {N}} \left[ A^H(\lambda ) \right] =\textrm{span} [( \tilde{\varphi }_\lambda ,\tilde{\psi }_\lambda )^T ]. \end{aligned}$$

Then, by (3.35) and (3.37), we have

$$\begin{aligned} \begin{aligned} 0&=\left\langle A^H(\lambda )(\tilde{\varphi }_\lambda , \tilde{\psi }_\lambda )^T ,( \Psi _1, \Psi _2)^T \right\rangle =\left\langle (\tilde{\varphi }_\lambda ,\tilde{\psi }_\lambda )^T, \mathcal {I}\left( A_{l_\lambda }(\lambda )- \textrm{i}\lambda \nu _\lambda I \right) ( \Psi _1, \Psi _2)^T \right\rangle \\&=s \mathcal {W}(\lambda ), \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \mathcal {W}(\lambda ):=\left\langle (\tilde{\varphi }_\lambda ,\tilde{\psi }_\lambda )^T, {\mathcal {I}}( \varphi _\lambda , \psi _\lambda )^T\right\rangle =\int _\Omega \left( e^{\alpha m(x)}\overline{\tilde{\varphi }_\lambda } \varphi _\lambda + \overline{ \tilde{\psi }_\lambda } \psi _\lambda \right) dx. \end{aligned}$$
(3.38)

It follows from Lemmas 3.4 and 3.7 and Theorem 3.5 that

$$\begin{aligned} \lim _{\lambda \rightarrow 0}\mathcal {W}(\lambda )=2\tilde{\delta }_0 \delta _0 \int _\Omega e^{\alpha m(x)} dx>0, \end{aligned}$$
(3.39)

which implies that \(s=0\) for \(\lambda \in (0,\tilde{\lambda }]\) with \(0<\tilde{\lambda }\ll 1\). Therefore,

$$\begin{aligned} {\mathcal {N}} [ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I]^2 ={\mathcal {N}} [ A_{l_\lambda }(\lambda ) - \textrm{i}\lambda \nu _\lambda I]. \end{aligned}$$

This completes the proof.\(\square \)

Note that \(\textrm{i}\lambda \nu _\lambda \) is a simple eigenvalue if it exists. Then, by the implicit function theorem, there exist a neighborhood \(P_\lambda \times V_\lambda \times O_\lambda \) of \( ( \varphi _\lambda , \psi _\lambda , \textrm{i}\lambda \nu _\lambda ,l_\lambda )\) (where \(P_\lambda \), \(V_\lambda \) and \(O_\lambda \) are neighborhoods of \(( \varphi _\lambda , \psi _\lambda )\), \(\textrm{i}\lambda \nu _\lambda \) and \(l_\lambda \), respectively) and a continuously differentiable function \( ( \varphi (l), \psi (l), \mu (l)):O_\lambda \rightarrow P_\lambda \times V_\lambda \) such that \(\mu (l_\lambda )=\textrm{i}\lambda \nu _\lambda \) and \(( \varphi (l_\lambda ), \psi (l_\lambda ))=( \varphi _\lambda , \psi _\lambda )\), where \(\nu _\lambda \), \(\varphi _\lambda \) and \(\psi _\lambda \) are defined in Theorem 3.5. Moreover, for each \(l \in O_\lambda \), the only eigenvalue of \(A_{l}(\lambda )\) in \(V_\lambda \) is \(\mu (l)\), and

$$\begin{aligned} \left( A_{l}(\lambda )- \mu (l) I\right) ( \varphi (l), \psi (l))^T= 0. \end{aligned}$$
(3.40)

Then, we show that the following transversality condition holds.

Theorem 3.9

Let \(l_\lambda \) be obtained in Theorem 3.5. Then

$$\begin{aligned} \left. \frac{ d {\mathcal {R}}e [\mu (l)]}{ d l}\right| _{l=l_\lambda } > 0. \end{aligned}$$

Proof

Multiplying both sides of (3.40) on the left by \({\mathcal {I}}\), and differentiating the result with respect to l at \(l=l_\lambda \), we have

$$\begin{aligned} \begin{aligned} \left. \frac{ d \mu }{ d l}\right| _{l=l_\lambda } \mathcal {I} ( \varphi _\lambda , \psi _\lambda )^T=&\, \mathcal {I} \left( A_{l_\lambda }(\lambda )- \textrm{i}\lambda \nu _\lambda I\right) \left. \left( \frac{d \varphi }{ d l}, \frac{d \psi }{ d l} \right) ^T\right| _{l=l_\lambda }\\&+\lambda {\mathcal {I}}\left. \left( {\begin{array}{cc} \displaystyle \frac{dM^{(\lambda ,l)}_1}{d l}&{}\displaystyle \frac{dM^{(\lambda ,l)}_2}{d l}\\ \displaystyle \frac{dM^{(\lambda ,l)}_3}{d l}&{}\displaystyle \frac{dM^{(\lambda ,l)}_4}{d l} \end{array}} \right) \right| _{l=l_\lambda } (\varphi _\lambda , \psi _\lambda )^T, \end{aligned} \end{aligned}$$
(3.41)

where \(M^{(\lambda ,l)}_i\;(i=1,\ldots ,4)\) and \({\mathcal {I}}\) are defined in (3.1) and (3.34), respectively. Note from (3.35) and Lemma 3.7 that

$$\begin{aligned} \begin{aligned}&\left\langle (\tilde{\varphi }_\lambda , \tilde{\psi }_\lambda )^T , \mathcal {I} \left( A_{l_\lambda }(\lambda )- \textrm{i}\lambda \nu _\lambda I\right) \left. \left( \frac{d \varphi }{ d l}, \frac{d \psi }{ d l} \right) ^T\right| _{l=l_\lambda }\right\rangle \\&=\left\langle A^H(\lambda )(\tilde{\varphi }_\lambda , \tilde{\psi }_\lambda )^T, \left. \left( \frac{d \varphi }{ d l}, \frac{d \psi }{ d l} \right) ^T\right| _{l=l_\lambda } \right\rangle =0, \end{aligned} \end{aligned}$$

where \(\tilde{\varphi }_\lambda \) and \(\tilde{\psi }_\lambda \) are defined in Lemma 3.7. Then, multiplying both sides of (3.41) on the left by \((\tilde{\varphi }_\lambda , \tilde{\psi }_\lambda )\) and integrating the result over \(\Omega \), we have

$$\begin{aligned} \begin{aligned} \mathcal {W}(\lambda )\left. \frac{ d \mu }{ d l}\right| _{l=l_\lambda } =&\,\lambda \int _\Omega e^{\alpha m(x)} \overline{\tilde{\varphi }_\lambda }\varphi _\lambda \left. \frac{d M_{1}^{(\lambda ,l)} }{d l} \right| _{l=l_\lambda } dx+ \lambda \int _\Omega \overline{\tilde{\psi }_\lambda }\varphi _\lambda \left. \frac{d M_{3}^{(\lambda ,l)} }{d l} \right| _{l=l_\lambda } dx\\&+\lambda \int _\Omega e^{\alpha m(x)} \overline{\tilde{\varphi }_\lambda }\psi _\lambda \left. \frac{d M_{2}^{(\lambda ,l)} }{d l} \right| _{l=l_\lambda } dx+\lambda \int _\Omega \overline{\tilde{\psi }_\lambda }\psi _\lambda \left. \frac{d M_{4}^{(\lambda ,l)} }{d l} \right| _{l=l_\lambda } dx, \end{aligned} \end{aligned}$$
(3.42)

where \(\mathcal {W}(\lambda )\) is defined in (3.38). From Theorem 3.5 and Lemma 3.7, we see that

$$\begin{aligned} \lim _{\lambda \rightarrow 0}\left( l_\lambda ,\varphi _\lambda ,\psi _\lambda ,\tilde{\varphi }_\lambda ,\tilde{\psi }_\lambda \right) = \left( l_0,\delta _0,\textrm{i}s_{20},\tilde{\delta }_0,\textrm{i}\tilde{s}_{20}\right) \;\;\text {in}\;\; \mathbb {R}\times X_{\mathbb C}^4. \end{aligned}$$
(3.43)

It follows from Theorem 2.5 that \(( u^{(\lambda ,l)}, v^{(\lambda ,l)})\) is continuously differentiable. This, combined with the embedding theorems and Eq. (3.43), implies that

$$\begin{aligned} \begin{aligned}&\lim _{\lambda \rightarrow 0}\int _\Omega e^{\alpha m(x)} \overline{\tilde{\varphi }_\lambda }\varphi _\lambda \left. \frac{d M_{1}^{(\lambda ,l)} }{d l} \right| _{l=l_\lambda } dx=\delta _0\tilde{\delta }_0\left. \displaystyle \frac{d}{dl}\left( \int _\Omega e^{\alpha m(x)} M_{1}^{(0,l)} dx\right) \right| _{l=l_0},\\&\lim _{\lambda \rightarrow 0} \int _\Omega e^{\alpha m(x)} \overline{\tilde{\varphi }_\lambda }\psi _\lambda \left. \frac{d M_{2}^{(\lambda ,l)} }{d l} \right| _{l=l_\lambda } dx=\textrm{i} \tilde{\delta }_0s_{20}\left. \displaystyle \frac{d}{dl}\left( \int _\Omega e^{\alpha m(x)} M_{2}^{(0,l)} dx\right) \right| _{l=l_0},\\&\lim _{\lambda \rightarrow 0}\int _\Omega \overline{\tilde{\psi }_\lambda }\varphi _\lambda \left. \frac{d M_{3}^{(\lambda ,l)} }{d l} \right| _{l=l_\lambda } dx=-\textrm{i}\delta _0 \tilde{s}_{20}\left. \displaystyle \frac{d}{dl}\left( \int _\Omega M_{3}^{(0,l)} dx\right) \right| _{l=l_0},\\&\lim _{\lambda \rightarrow 0}\int _\Omega \overline{\tilde{\psi }_\lambda }\psi _\lambda \left. \frac{d M_{4}^{(\lambda ,l)} }{d l} \right| _{l=l_\lambda } dx=\tilde{s}_{20}s_{20}\left. \displaystyle \frac{d}{dl}\left( \int _\Omega M_{4}^{(0,l)} dx\right) \right| _{l=l_0}. \end{aligned} \end{aligned}$$
(3.44)

It follows from Lemma 3.3 and Eq. (3.29) that

$$\begin{aligned} \left. \displaystyle \frac{d}{dl}\left( \int _\Omega e^{\alpha m(x)} M_{1}^{(0,l)} dx\right) \right| _{l=l_0}={\mathcal {S}}'(l_0)>0 \;\;\text {and}\;\; \left. \displaystyle \frac{d}{dl}\left( \int _\Omega M_{4}^{(0,l)} dx\right) \right| _{l=l_0}=0. \end{aligned}$$

This, together with (3.39), (3.42) and (3.44), yields

$$\begin{aligned} \lim _{\lambda \rightarrow 0} \frac{1}{\lambda } \left. \frac{ d {\mathcal {R}}e [\mu (l)]}{ d l}\right| _{l=l_\lambda }=\frac{{\mathcal {S}}'(l_0)}{2\int _\Omega e^{\alpha m(x)} dx} >0. \end{aligned}$$

This completes the proof.\(\square \)

From Theorems 2.5, 3.5, 3.8 and 3.9, we obtain the following results on the dynamics of model (1.4); see also Fig. 1.

Theorem 3.10

Let \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) be the unique positive steady state (obtained in Theorem 2.5) of model (1.4) for \(l\in {\mathcal {L}}:=[\tilde{l}+\epsilon ,1/\epsilon ]\) and \(\lambda \in (0,\delta _\epsilon ]\) with \(0<\epsilon \ll 1\), where \(\tilde{l}\) and \(\delta _\epsilon \) are defined in Eq. (2.8) and Theorem 2.5, respectively. Then the following statements hold.

  1. (i)

    If \({\mathcal {T}}(\alpha )<0\), where \({\mathcal {T}} (\alpha )\) is defined in Lemma 3.3, then there exists \(\tilde{\lambda }_1\in (0,\delta _\epsilon )\) such that, for each \(\lambda \in (0,\tilde{\lambda }_1]\), the positive steady state \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) of model (1.4) is locally asymptotically stable for \(l\in {\mathcal {L}}\).

  2. (ii)

    If \({\mathcal {T}}(\alpha )>0\), then there exists \(\tilde{\lambda }_2\in (0,\delta _\epsilon )\) and a continuously differentiable mapping

    $$\begin{aligned} \lambda \mapsto l_\lambda :(0,\tilde{\lambda }_2]\rightarrow {\mathcal {L}}=[\tilde{l}+\epsilon ,1/\epsilon ] \end{aligned}$$

    such that, for each \(\lambda \in (0,\tilde{\lambda }_2]\), the positive steady state \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) of model (1.4) is locally asymptotically stable when \(l \in [ \tilde{l}+\epsilon ,l_\lambda )\) and unstable when \(l \in (l_\lambda , 1/\epsilon ]\). Moreover, system (1.4) undergoes a Hopf bifurcation at \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) when \(l=l_\lambda \).

Proof

To prove (i), we need to show that if \({\mathcal {T}}(\alpha )<0\), there exists \(\tilde{\lambda }_1>0\) such that

$$\begin{aligned} \sigma ( A_{ l}(\lambda ))\subset \{x+\textrm{i}y : x,y\in \mathbb {R},x < 0 \} \; \text {for all}\; \lambda \in (0,\tilde{\lambda }_1] \; \text {and}\; l\in \mathcal {L}. \end{aligned}$$

If it is not true, then there exists a sequence \(\{ (\lambda _k ,l_k )\}_{k=1}^\infty \) such that \(\lim _{k\rightarrow \infty }\lambda _k=0\), \(\lim _{k\rightarrow \infty }l_k=l^*\in {\mathcal {L}}\), and

$$\begin{aligned} \sigma ( A_{ l_k}(\lambda _k) )\not \subset \{x+\textrm{i}y : x,y\in \mathbb {R},x < 0 \}. \end{aligned}$$

Then, for \(k\ge 1\),

$$\begin{aligned} \left( A_{ l_k}(\lambda _k)- \mu I \right) (\varphi , \psi )^T= 0 \end{aligned}$$
(3.45)

is solvable for some value of \((\mu _{k},\varphi _{k}, \psi _{k})\) with \({\mathcal {R}}e \mu _{k}\ge 0\) and \( (\varphi _{k}, \psi _{k})^T(\ne (0,0)^T)\in (X_{\mathbb C})^2\). Substituting \((\mu ,\varphi ,\psi )=(\mu _{k},\varphi _{k},\psi _{k})\) into (3.45), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \mu _{k}e^{\alpha m(x)} \varphi _{k}= L \varphi _{k} +\lambda _ke^{\alpha m(x)} \left( M_1^{({\lambda _k}, l_k)} \varphi _{k}+ M_2^{({\lambda _k}, l_k)} \psi _{k} \right) , \\ \displaystyle \mu _{k} \psi _{k}= \theta \Delta \psi _{k} + \lambda _k \left( M_3^{({\lambda _k}, l_k)}\varphi _{k} + M_4^{({\lambda _k}, l_k)} \psi _{k} \right) .\\ \end{array}\right. } \end{aligned}$$

Ignoring a scalar factor, we see that \(( \varphi _{k}, \psi _{k})^T(\ne ( 0,0)^T)\in (X_{\mathbb C})^2\) can be represented as

$$\begin{aligned} {\left\{ \begin{array}{ll} \varphi _{k} =\delta _{k}+ w_{k}, \;\;\text {where}\;\;\delta _{k}\ge 0\;\;\text {and}\;\; w_{k}\in \left( X_{1}\right) _{\mathbb C},\\ \psi _{k} =(s_{1k}+\textrm{i}s_{2k})+ z_{k},\;\;\text {where}\;\;s_{1k},s_{2k}\in \mathbb R\;\;\text {and}\;\; z_{k} \in \left( X_{1}\right) _{\mathbb C},\\ \Vert \varphi _{k} \Vert _2^{2}+\Vert \psi _{k} \Vert _2^{2}=| \Omega |, \end{array}\right. } \end{aligned}$$

and \((\mu _{k},\delta _{k}, s_{1k}, s_{2k}, w_{k},z_{k})\) satisfies

$$\begin{aligned} \varvec{H}(\mu _{k},\delta _{k}, s_{1k}, s_{2k}, w_{k},z_{k},l_k,\lambda _k)=(\mathcal {H}_1,\mathcal {H}_2,\mathcal {H}_3,\mathcal {H}_4,\mathcal {H}_5)^T=0, \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} {\mathcal {H}}_1&:=\int _\Omega { e^{\alpha m(x)} \left[ M_{1}^{(\lambda _k,l_k)}(\delta _{k}+w_{k})+ M_{2}^{(\lambda _k,l_k)} (s_{1k} + \textrm{i} s_{2k}+z_k) \right] } dx\\&\quad \ -\mu _{k} \int _\Omega e^{\alpha m(x)} (\delta _{k}+w_{k})dx ,\\ {\mathcal {H}}_{2}&:=\displaystyle L w_k +\lambda _k e^{\alpha m(x)} \left[ M_{1}^{(\lambda _k,l_k)} (\delta _{k}+w_{k}) + M_{2}^{(\lambda _k,l_k)} (s_{1k} + \textrm{i} s_{2k}+z_{k}) \right] \\&\quad \ \displaystyle -\lambda _k \mu _{k} e^{\alpha m(x)} (\delta _{k}+w_{k}) -\frac{\lambda _k }{ | \Omega |} {\mathcal {H}}_1,\\ {\mathcal {H}}_{3}&:= \displaystyle \int _\Omega { \left[ M_{3}^{(\lambda _k,l_k)}(\delta _{k}+w_{k})+ M_{4}^{(\lambda _k,l_k)} (s_{1k}+ \textrm{i} s_{2k}+z_{k}) \right] } dx\\&\quad \ -\mu _{k} (s_{1k} + \textrm{i} s_{2k})| \Omega | ,\\ {\mathcal {H}}_{4}&:=\displaystyle \theta \Delta z_k + \lambda _k \left[ M_{3}^{(\lambda _k,l_k)} (\delta _{k}+w_{k}) +M_{4}^{(\lambda _k,l_k)} (s_{1k} + \textrm{i} s_{2k}+z_{k}) \right] \\&\quad \ \displaystyle -\lambda _k \mu _{k} (s_{1k} + \textrm{i} s_{2k}+z_{k}) -\frac{\lambda _k }{ | \Omega |} {\mathcal {H}}_3,\\ {\mathcal {H}}_{5}&:=\displaystyle | \Omega |\left( \delta _{k}^2 +s_{1k}^2+s_{2k}^2-1\right) +\Vert w_{k}\Vert _2^2+\Vert z_{k}\Vert _2^2. \end{aligned} \end{aligned}$$

Using arguments similar to those in the proof of Theorem 3.5, we see that

$$\begin{aligned} \lim _{k\rightarrow \infty } w_{k} = 0,\;\;\lim _{k\rightarrow \infty } z_{k} = 0\;\;\text {in}\;\;(X_1)_{\mathbb C}. \end{aligned}$$

Since \({\mathcal {H}}_5(\mu _{k},\delta _{k}, s_{1k}, s_{2k}, w_{k},z_{k},l_k,\lambda _k)=0\), we see that, up to a subsequence, \(\lim _{k\rightarrow \infty } \delta _{k } =\delta ^*, \lim _{k\rightarrow \infty } s_{1k } =s_1^*\) and \(\lim _{k\rightarrow \infty }s_{2k }=s_2^*\). It follows from Lemma 3.1 that, up to a subsequence, \(\lim _{k\rightarrow \infty } \mu _{k }=\mu ^*\) with \({\mathcal {R}}e \mu ^*\ge 0\). Then, taking \(k\rightarrow \infty \) on both sides of \({\mathcal {H}}_j(\mu _{k},\delta _{k}, s_{1k}, s_{2k}, w_{k},z_{k},l_k ,\lambda _k)=0\) for \(j=1,3\), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \mu ^*\delta ^*= \delta ^*\frac{\mathcal {S}(l^*)}{ \int _\Omega e^{\alpha m(x)} dx } + \left( s_1^*+\textrm{i}s_2^* \right) \frac{\int _\Omega e^{\alpha m(x)} M_2^{(0,l^*)} dx}{ \int _\Omega e^{\alpha m(x)} dx } , \\ \displaystyle \mu ^* \left( s_1^*+\textrm{i}s_2^* \right) = \frac{\delta ^*}{ | \Omega |} \int _\Omega M_3^{(0,l^*)} dx+ \left( s_1^*+\textrm{i}s_2^* \right) \frac{1}{ | \Omega |} \int _\Omega M_4^{(0,l^*)} dx, \end{array}\right. } \end{aligned}$$

where \(\mathcal {S}(l)\) is defined in Lemma 3.3. Note from (3.24) that \(\displaystyle \int _\Omega M_{4}^{(0,l^*)} dx=0\), and consequently, \(\mu ^*\) is an eigenvalue of the following matrix

$$\begin{aligned} \left( {\begin{array}{cc} \displaystyle \frac{\mathcal {S}(l^*)}{ \int _\Omega e^{\alpha m(x)} dx } &{}\displaystyle \frac{\int _\Omega e^{\alpha m(x)} M_2^{(0, l^*)} dx}{ \int _\Omega e^{\alpha m(x)} dx } \\ \displaystyle \frac{1}{ | \Omega |} \int _\Omega M_3^{(0, l^*)} dx &{} 0 \end{array}} \right) . \end{aligned}$$

It follows from Lemma 3.3 that \(\mathcal {S}(l^*)<0\), so the trace of this matrix is negative; moreover, as in (3.23), \(\int _\Omega e^{\alpha m(x)} M_2^{(0, l^*)} dx<0\) and \(\int _\Omega M_3^{(0, l^*)} dx>0\), so its determinant is positive. Hence both eigenvalues of this matrix have negative real parts, which contradicts the fact that \({\mathcal {R}}e \mu ^*\ge 0\). Therefore, (i) holds.

Now we consider the case \({\mathcal {T}}(\alpha )>0\). Here we only need to show that there exists \(\tilde{\lambda }_2>0\) such that

$$\begin{aligned} \sigma ( A_{\tilde{l}+\epsilon }(\lambda ))\subset \{x+\textrm{i}y : x,y\in \mathbb {R},x < 0 \} \; \text {for}\; \lambda \in (0,\tilde{\lambda }_2]. \end{aligned}$$

Note from Lemma 3.3 that \(\mathcal {S}(\tilde{l}+\epsilon )<0\). Substituting \(l_k= \tilde{l}+\epsilon \) into the proof of (i) and arguing similarly, we again obtain a contradiction. This proves (ii).\(\square \)

Remark 3.11

We remark that, in Theorem 3.10, \(\tilde{\lambda }_i\) depends on \(\alpha \) and \(\epsilon \) for \(i=1,2\).

4 The effect of the advection

In this section, we investigate the effect of advection. For later use, we first establish some properties of the following auxiliary sequence:

$$\begin{aligned} \left\{ B_k\right\} _{k=0}^\infty ,\;\text { where}\;B_k=\int _\Omega m^k (x)(m(x)-1) dx. \end{aligned}$$
(4.1)

Lemma 4.1

Let \(\{ B_k\}_{k=0}^\infty \) be defined in (4.1), and let \(\mathcal {B}=\{x\in \Omega :\;m(x)>1\}\). Then \(B_{k+1}\ge B_k\) for \(k=0,1,2,\ldots \); \(\lim _{k\rightarrow \infty } B_k =\infty \) if \({\mathcal {B}}\ne \emptyset \); and \(\lim _{k\rightarrow \infty } B_k =0\) if \({\mathcal {B}}=\emptyset \).

Proof

A direct computation implies that

$$\begin{aligned} B_k=\int _\Omega f_k (x) dx - \int _\Omega g_k (x) dx, \end{aligned}$$
(4.2)

where

$$\begin{aligned} f_k (x) = m^k (x)(m(x)-1) \tilde{I}_1,\;\; g_k (x) = m^k (x)(1-m(x)) \tilde{I}_2, \end{aligned}$$

and

$$\begin{aligned} \tilde{I}_1 = {\left\{ \begin{array}{ll} 0,&{}x\in \Omega \setminus {\mathcal {B}},\\ 1,&{}x\in {\mathcal {B}}, \end{array}\right. }\;\;\;\;\tilde{I}_2= {\left\{ \begin{array}{ll} 1,&{}x\in \Omega \setminus {\mathcal {B}},\\ 0,&{}x\in {\mathcal {B}}. \end{array}\right. } \end{aligned}$$

Note that

$$\begin{aligned} \begin{aligned}&0\le f_0 (x) \le f_1 (x) \le \ldots \le f_k (x) \le \ldots ,\\&g_0 (x) \ge g_1 (x)\ge \ldots \ge g_k (x) \ge \ldots \ge 0. \end{aligned} \end{aligned}$$

Then we obtain that \(B_{k+1}\ge B_k\) for \(k=0,1,2,\ldots \), \(\lim _{k\rightarrow \infty } f_k (x)=f_* (x)\), and \(\lim _{k\rightarrow \infty } g_k (x)=g_*(x)\), where \(g_* (x)\equiv 0\), and

$$\begin{aligned} f_* (x) = {\left\{ \begin{array}{ll} 0,&{}x\in \Omega \setminus {\mathcal {B}},\\ \infty ,&{}x\in {\mathcal {B}}. \end{array}\right. } \end{aligned}$$

Then we see from Lebesgue’s monotone convergence theorem that

$$\begin{aligned} \lim _{k\rightarrow \infty } \int _\Omega f_k (x) dx =\int _\Omega f_* (x)dx \; \;\text {and}\;\; \lim _{k\rightarrow \infty } \int _\Omega g_k (x) dx =0. \end{aligned}$$
(4.3)

Clearly, \(\int _\Omega f_* (x)dx=\infty \) if \({\mathcal {B}}\ne \emptyset \), and \(\int _\Omega f_* (x)dx=0\) if \({\mathcal {B}}=\emptyset \). This, combined with (4.2) and (4.3), implies that \(\lim _{k\rightarrow \infty } B_k =\infty \) if \({\mathcal {B}}\ne \emptyset \), and \(\lim _{k\rightarrow \infty } B_k =0\) if \({\mathcal {B}}=\emptyset \).\(\square \)
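
The behaviour of \(\{B_k\}\) described in Lemma 4.1 is easy to observe numerically. The following sketch is an illustration only, with two hypothetical growth rates on \(\Omega =(0,1)\): \(m_1(x)=0.5+x\), for which \({\mathcal {B}}\ne \emptyset \), and \(m_2(x)=0.9x\), for which \({\mathcal {B}}=\emptyset \); the integrals are approximated by a midpoint rule.

```python
# Numerical illustration of Lemma 4.1 on Omega = (0,1) with two hypothetical m(x):
# m1(x) = 0.5 + x  (B = {m > 1} nonempty: B_k increases without bound),
# m2(x) = 0.9 x    (B empty: B_k increases monotonically to 0 from below).
import numpy as np

N = 200000
x = (np.arange(N) + 0.5) / N          # midpoint rule on (0, 1)

def B(m, k):
    # B_k = int_Omega m^k (m - 1) dx
    return np.mean(m**k * (m - 1.0))

for m, label in [(0.5 + x, "m1, B nonempty"), (0.9 * x, "m2, B empty")]:
    print(label, [round(B(m, k), 4) for k in (0, 1, 2, 5, 10, 50)])
```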

Now we consider the function \({\mathcal {T}}(\alpha )\), which, by Theorem 3.10, determines the existence of a Hopf bifurcation.

Theorem 4.2

Let \({\mathcal {T}} (\alpha )\) and \({\mathcal {B}}\) be defined in Lemmas 3.3 and 4.1, respectively. Then the following statements hold:

  1. (i)

    If \({\mathcal {T}}(0)=\int _\Omega ( m(x) -1) dx\ge 0\), then \({\mathcal {T}}(\alpha )>0\) for any \(\alpha >0\);

  2. (ii)

    If \({\mathcal {T}}(0)<0\) and \({\mathcal {B}}\ne \emptyset \), then there exists \(\alpha _*>0\) such that \({\mathcal {T}}(\alpha _*)=0\), \({\mathcal {T}}(\alpha )<0\) for \(0<\alpha <\alpha _*\), and \({\mathcal {T}}(\alpha )>0\) for \(\alpha >\alpha _*\);

  3. (iii)

    If \({\mathcal {B}}=\emptyset \), then \({\mathcal {T}}(\alpha )\le 0\) for any \(\alpha \ge 0\).

Proof

For simplicity, we denote

$$\begin{aligned} {\mathcal {T}}^{(k)} (\alpha )=\frac{d^k {\mathcal {T}}(\alpha )}{d \alpha ^k}\;\;\text {for}\;\;k\ge 1\;\;\text {and}\;\; {\mathcal {T}}^{(0)} (\alpha )={\mathcal {T}}(\alpha ). \end{aligned}$$

A direct computation yields

$$\begin{aligned} {\mathcal {T}}^{(k)} (\alpha )=\int _\Omega e^{\alpha m(x)}m^k (x)( m(x) -1) dx\;\; \text {for}\;\;k\ge 0. \end{aligned}$$

Note that m(x) is non-constant. Then we see that, for \(k\ge 0\),

$$\begin{aligned} {\mathcal {T}}^{(k+1)} (\alpha )-{\mathcal {T}}^{(k)} (\alpha )=\int _\Omega e^{\alpha m(x)}m^k (x)( m(x) -1)^2 dx>0, \end{aligned}$$

which yields

$$\begin{aligned} \frac{d\left[ e^{-\alpha } {\mathcal {T}}^{(k)} (\alpha ) \right] }{d \alpha }=e^{-\alpha } {\mathcal {T}}^{(k+1)} (\alpha ) - e^{-\alpha } {\mathcal {T}}^{(k)} (\alpha ) >0, \end{aligned}$$

and consequently,

$$\begin{aligned} {\mathcal {T}}^{(k)} (\alpha )> e^{\alpha } {\mathcal {T}}^{(k)} (0)\;\;\text {for all}\;\;\alpha >0. \end{aligned}$$
(4.4)

Here \({\mathcal {T}}^{(k)} (0) =B_k\), where \(B_k\) is defined in (4.1).

We first show that (i) holds. Note that \({\mathcal {T}} (0) ={\mathcal {T}}^{(0)} (0) = B_0\ge 0\). Then we see from (4.4) that \({\mathcal {T}} (\alpha )>0\) for all \(\alpha >0\). Next we show that (iii) holds. Since \({\mathcal {B}}=\emptyset \), we have \(0\le m(x)\le 1\), and consequently, \({\mathcal {T}}(\alpha )\le 0\) for all \(\alpha \ge 0\).

Finally, we consider (ii). Note that \({\mathcal {T}} (0) = B_0<0\). It follows from Lemma 4.1 that there exists an integer \(k_*\ge 1\) such that \(B_k\ge 0\) for \(k\ge k_*\) and \(B_k<0\) for \(0\le k< k_*\). This, combined with (4.4), implies that

$$\begin{aligned} {\mathcal {T}}^{(k)} (\alpha )>0\;\text {for all}\;\alpha >0\;\text {and}\;k\ge k_*. \end{aligned}$$
(4.5)

Then \({\mathcal {T}}^{(k_*-1)} (\alpha )\) is strictly increasing for \(\alpha >0\), and consequently the limit

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } {\mathcal {T}}^{(k_*-1)} (\alpha )=:\alpha ^\infty _{k_*-1} \end{aligned}$$

exists (possibly infinite).

We claim that \(\alpha ^\infty _{k_*-1}=\infty \). If it is not true, then

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \frac{{\mathcal {T}}^{(k_*-1)} (\alpha )}{\alpha }=0. \end{aligned}$$
(4.6)

Note from (4.5) that \({\mathcal {T}}^{(k_*)} (\alpha )\) is also strictly increasing for \(\alpha >0\). This, combined with the fact \({\mathcal {T}}^{(k_*)} (0)=B_{k_*}\ge 0\), implies that

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } {\mathcal {T}}^{(k_*)} (\alpha )>0. \end{aligned}$$

Then we see from L’Hôpital’s rule that

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \frac{{\mathcal {T}}^{(k_*-1)} (\alpha )}{\alpha }=\lim _{\alpha \rightarrow \infty } {\mathcal {T}}^{(k_*)} (\alpha )>0, \end{aligned}$$

which contradicts (4.6). Therefore, the claim is true and \(\lim _{\alpha \rightarrow \infty } {\mathcal {T}}^{(k_*-1)} (\alpha )=\infty \). This, combined with the fact that \({\mathcal {T}}^{(k_*-1)}(0)=B_{k_*-1}<0\), implies that there exists \(\alpha _{k_*-1}>0\) such that \({\mathcal {T}}^{(k_*-1)}(\alpha _{k_*-1} )=0\), and

$$\begin{aligned} {\mathcal {T}}^{(k_*-1)}(\alpha )<0\;\text { for}\;\alpha \in [0,\alpha _{k_*-1})\;\;\text { and}\; \;{\mathcal {T}}^{(k_*-1)}(\alpha )>0\;\text { for}\;\alpha >\alpha _{k_*-1}. \end{aligned}$$

This implies that \({\mathcal {T}}^{(k_*-2)}(\alpha )\) is strictly decreasing for \(\alpha \in [0,\alpha _{k_*-1}]\) and strictly increasing for \(\alpha \in [\alpha _{k_*-1},\infty )\). Therefore, the limit \( \lim _{\alpha \rightarrow \infty } {\mathcal {T}}^{(k_*-2)} (\alpha )=:\alpha ^\infty _{k_*-2}\) exists (possibly infinite). We claim that \(\alpha ^\infty _{k_*-2}=\infty \). If it is not true, then

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \frac{{\mathcal {T}}^{(k_*-2)} (\alpha )}{\alpha }=0. \end{aligned}$$
(4.7)

By L’Hôpital’s rule again, we have

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \frac{{\mathcal {T}}^{(k_*-2)} (\alpha )}{\alpha }=\lim _{\alpha \rightarrow \infty } {\mathcal {T}}^{(k_*-1)} (\alpha )=\infty , \end{aligned}$$

which contradicts (4.7). Therefore, \( \lim _{\alpha \rightarrow \infty } {\mathcal {T}}^{(k_*-2)} (\alpha )=\infty \). This, combined with the fact that \({\mathcal {T}}^{(k_*-2)}(0)=B_{k_*-2}<0\), implies that there exists \(\alpha _{k_*-2}\) such that \({\mathcal {T}}^{(k_*-2)}(\alpha _{k_*-2} )=0\), and

$$\begin{aligned} {\mathcal {T}}^{(k_*-2)}(\alpha )<0\;\text { for}\;\alpha \in [0,\alpha _{k_*-2})\;\;\text { and}\; \;{\mathcal {T}}^{(k_*-2)}(\alpha )>0\;\text { for}\;\alpha >\alpha _{k_*-2}. \end{aligned}$$

Repeating the previous arguments down to \(k=0\), we obtain \(\alpha _*:=\alpha _0\) such that (ii) holds. This completes the proof.\(\square \)
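
Case (ii) of Theorem 4.2 can be illustrated numerically. The sketch below (an illustration only) takes the hypothetical growth rate \(m(x)=0.4+x\) on \(\Omega =(0,1)\), so that \({\mathcal {T}}(0)=-0.1<0\) and \({\mathcal {B}}\ne \emptyset \), and locates the critical value \(\alpha _*\) by bisection.

```python
# Locate the critical advection rate alpha_* of Theorem 4.2(ii) for the
# hypothetical growth rate m(x) = 0.4 + x on Omega = (0,1) (illustration only).
import numpy as np

N = 200000
x = (np.arange(N) + 0.5) / N
m = 0.4 + x                                   # T(0) = -0.1 < 0 and {m > 1} nonempty

def T(alpha):
    # T(alpha) = int_Omega e^{alpha m} (m - 1) dx  (midpoint rule)
    return np.mean(np.exp(alpha * m) * (m - 1.0))

lo, hi = 0.0, 50.0
assert T(lo) < 0 < T(hi)
for _ in range(60):                           # bisection for the unique sign change
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if T(mid) < 0 else (lo, mid)
print("alpha_* is approximately", 0.5 * (lo + hi))
```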

Then, by virtue of Theorem 4.2, we show the effect of the advection rate \(\alpha \) on the occurrence of Hopf bifurcations for model (1.4).

Proposition 4.3

Assume that \({\mathcal {T}}(0)<0\), \({\mathcal {B}}\ne \emptyset \), and let \(\alpha _*\) be defined in Theorem 4.2. Then for any \(\epsilon \) with \(0<\epsilon \ll 1\) and \(\alpha \ne \alpha _*\), there exists \(\tilde{\lambda }(\alpha ,\epsilon )>0\) such that the following statements hold.

  1. (i)

    If \(\alpha <\alpha _*\), then for each \(\lambda \in (0,\tilde{\lambda }(\alpha ,\epsilon )]\), the positive steady state \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) of model (1.4) is locally asymptotically stable for \(l\in {\mathcal {L}}:=[\tilde{l}+\epsilon ,1/\epsilon ]\), where \(\tilde{l}\) is defined in (2.8) and depends on \(\alpha \).

  2. (ii)

    If \(\alpha >\alpha _*\), then there exists a continuously differentiable mapping

    $$\begin{aligned} \lambda \mapsto l_\lambda :[0,\tilde{\lambda }(\alpha ,\epsilon )]\rightarrow {\mathcal {L}}=[\tilde{l}+\epsilon ,1/\epsilon ] \end{aligned}$$

    such that, for each \(\lambda \in (0,\tilde{\lambda }(\alpha ,\epsilon )]\), the positive steady state \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) of model (1.4) is locally asymptotically stable for \(l \in [ \tilde{l}+\epsilon ,l_\lambda )\) and unstable for \(l \in (l_\lambda , 1/\epsilon ]\), where \(\tilde{l}\) is defined in (2.8) and depends on \(\alpha \). Moreover, system (1.4) undergoes a Hopf bifurcation at \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) when \(l=l_\lambda \).

By Proposition 4.3, we see that the advection rate affects the occurrence of Hopf bifurcations if \({\mathcal {T}}(0)<0\) and \({\mathcal {B}}\ne \emptyset \). In fact, there exists a critical value \(\alpha _*\) such that a Hopf bifurcation can occur for \(\alpha >\alpha _*\) but cannot occur for \(\alpha <\alpha _*\). Next, we show that the advection rate can also affect the Hopf bifurcation value when \({\mathcal {T}}(0)>0\).

Proposition 4.4

Assume that \({\mathcal {T}}(0)>0\). Then for any \(\epsilon \) with \(0<\epsilon \ll 1\) and \(\alpha \ge 0\), there exists \(\tilde{\lambda }(\alpha ,\epsilon )>0\) and a continuously differentiable mapping

$$\begin{aligned} \lambda \mapsto l_\lambda :[0,\tilde{\lambda }(\alpha ,\epsilon )]\rightarrow {\mathcal {L}}=[\tilde{l}+\epsilon ,1/\epsilon ] \end{aligned}$$

such that, for each \(\lambda \in (0,\tilde{\lambda }(\alpha ,\epsilon )]\), the positive steady state \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) of model (1.4) is locally asymptotically stable for \(l \in [ \tilde{l}+\epsilon ,l_\lambda )\) and unstable for \(l \in (l_\lambda , 1/\epsilon ]\), and system (1.4) undergoes a Hopf bifurcation at \(\left( u^{(\lambda ,l)},v^{(\lambda ,l)}\right) \) when \(l=l_{\lambda }\), where \(\tilde{l}\) is defined in (2.8) and depends on \(\alpha \). Moreover, \(\lim _{\lambda \rightarrow 0}l_{\lambda }=l_0\), and \(l_0\) (defined in Lemma 3.3) depends on \(\alpha \) and satisfies the following properties:

  1. (i)

    If \({\mathcal {H}}>0\), then \(l'_0(\alpha )|_{\alpha =0}>0\), and \(l_0(\alpha ) \) is strictly increasing for \(\alpha \in (0,\epsilon )\) with \(0<\epsilon \ll 1\);

  2. (ii)

    If \({\mathcal {H}}<0\), then \(l'_0(\alpha )|_{\alpha =0}<0\), and \(l_0(\alpha )\) is strictly decreasing for \(\alpha \in (0,\epsilon )\) with \(0<\epsilon \ll 1\).

Here \({\mathcal {H}}=2\left( \displaystyle \int _{\Omega }m(x)dx\right) ^2-|\Omega |\displaystyle \int _\Omega m(x) dx-|\Omega |\displaystyle \int _{\Omega }m^2(x)dx\).

Proof

Let \({\mathcal {S}}_1(c), {\mathcal {S}}'_1(c),{\mathcal {S}}_3(c), c_0\) be defined as in the proof of Lemma 3.3, where \('\) denotes the derivative with respect to c. Since they all depend on \(\alpha \), we denote them by \({\mathcal {S}}_1(c,\alpha ), \displaystyle \frac{\partial {\mathcal {S}}_1}{\partial c}(c,\alpha ),{\mathcal {S}}_3(c,\alpha ), c_0(\alpha )\), respectively. By (3.19) and (3.22), we see that \(c_0(\alpha )=c_{0l}\) with \(l=l_{0}(\alpha )\) and \(\displaystyle \frac{d c_{0l}}{dl}<0\), which implies that \(l'_0(\alpha )\) has the same sign as \(-c'_0(\alpha )\).

Since \({\mathcal {T}}(0)>0\), it follows from Theorem 4.2 that \({\mathcal {T}}(\alpha )>0\) for all \(\alpha \ge 0\). This combined with Lemma 3.3 implies that \(c_0(\alpha )\) exists for all \(\alpha \ge 0\). From the proof of Lemma 3.3, we see that

$$\begin{aligned} {\mathcal {S}}_3(c_0(\alpha ),\alpha )=0, \end{aligned}$$
(4.8)

and \(\displaystyle \frac{\partial {\mathcal {S}}_3}{\partial c}>0\) for all \(\alpha \ge 0\). Differentiating (4.8) with respect to \(\alpha \) then shows that \(-c'_0(\alpha )\) has the same sign as \(\displaystyle \frac{\partial {\mathcal {S}}_3}{\partial \alpha }(c_0(\alpha ),\alpha )\). From (3.17), a direct computation yields

$$\begin{aligned} \begin{aligned} \frac{\partial {\mathcal {S}}_3}{\partial \alpha }=&\left( {\mathcal {S}}_1\displaystyle \frac{\partial ^2{\mathcal {S}}_1}{\partial c\partial \alpha } - \frac{\partial {\mathcal {S}}_1}{\partial \alpha } \frac{\partial {\mathcal {S}}_1}{\partial c}\right) \frac{ 1 }{{\mathcal {S}}^2_1}\int _\Omega \left[ e^{\alpha m(x)} m(x)- c e^{2 \alpha m(x)}\right] dx\\&+ \frac{\partial {\mathcal {S}}_1}{\partial c}\frac{1}{{\mathcal {S}}_1} \int _\Omega \left[ e^{\alpha m(x)} m^2(x)- 2c e^{2 \alpha m(x)} m(x)\right] dx+2 \int _\Omega e^{2 \alpha m(x)} m(x) dx, \end{aligned}\qquad \end{aligned}$$
(4.9)

where

$$\begin{aligned} \frac{\partial {\mathcal {S}}_1}{\partial \alpha }=\int _\Omega \frac{ e^{\alpha m(x)}m(x) }{ \left( 1+ c e^{\alpha m(x)} \right) ^2 } dx, \;\;\displaystyle \frac{\partial ^2{\mathcal {S}}_1}{\partial c\partial \alpha }=-2\int _\Omega \frac{ e^{2\alpha m(x)}m(x) }{ \left( 1+ c e^{\alpha m(x)} \right) ^3 } dx. \end{aligned}$$

Substituting \(\alpha =0\) into (4.8), we obtain that

$$\begin{aligned} c_0(0)=\frac{1}{2 |\Omega |}\int _{\Omega }(m(x)-1)dx. \end{aligned}$$
(4.10)

Then plugging \(c=c_0(0)\) and \(\alpha =0\) into (4.9), we have

$$\begin{aligned} \frac{\partial {\mathcal {S}}_3}{\partial \alpha }(c_0(0),0)=\frac{ 2\left[ 2\left( \int _\Omega m(x) dx\right) ^2-|\Omega | \int _\Omega m(x) dx-|\Omega | \int _\Omega m^2(x) dx \right] }{\int _\Omega m(x) dx+ |\Omega | }=\frac{2{\mathcal {H}}}{\int _\Omega m(x) dx+ |\Omega | },\end{aligned}$$

which has the same sign as \({\mathcal {H}}\). Therefore, \(l'_0(\alpha )|_{\alpha =0}\) has the same sign as \({\mathcal {H}}\), and \(l_0(\alpha )\) satisfies (i) and (ii).\(\square \)
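
Both cases of Proposition 4.4 can occur under the assumption \({\mathcal {T}}(0)>0\). The sketch below (an illustration only) evaluates \({\mathcal {H}}\) on \(\Omega =(0,1)\) for two hypothetical growth rates, \(m(x)=1+0.2x\) (giving \({\mathcal {H}}>0\)) and \(m(x)=0.5+1.2x\) (giving \({\mathcal {H}}<0\)), both of which satisfy \({\mathcal {T}}(0)=\int _\Omega (m(x)-1)dx>0\).

```python
# Evaluate H = 2 (int m)^2 - |Omega| int m - |Omega| int m^2 on Omega = (0,1)
# for two hypothetical growth rates with T(0) = int (m - 1) dx > 0 (illustration only).
import numpy as np

N = 200000
x = (np.arange(N) + 0.5) / N

def H(m):
    Im, Im2, Om = np.mean(m), np.mean(m**2), 1.0   # midpoint-rule integrals, |Omega| = 1
    return 2.0 * Im**2 - Om * Im - Om * Im2

for m, label in [(1.0 + 0.2 * x, "m = 1 + 0.2 x"), (0.5 + 1.2 * x, "m = 0.5 + 1.2 x")]:
    print(label, " T(0) =", round(np.mean(m - 1.0), 3), " H =", round(H(m), 4))
```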

Here we have only shown the effect of the advection rate on the Hopf bifurcation values for \(0<\alpha \ll 1\); the general case awaits further investigation.