
It has been commonly accepted that spatial diffusion and environmental heterogeneity are important factors that should be considered in the spread of infectious diseases. In order to understand the impact of spatial heterogeneity of the environment and movement of individuals on the persistence and extinction of a disease, Allen et al. [9] proposed a frequency-dependent SIS (susceptible-infected-susceptible) reaction–diffusion model for a population in a continuous spatial habitat. They assumed that both rates of the transmission and recovery of the disease depend on spatial variables. Another feature of this SIS model is that the total population number is constant. The habitat is characterized as low-risk (or high-risk) if the spatial average of the transmission rate of the disease is less than (or greater than) the spatial average of its recovery rate. The individual site is also characterized as low-risk (or high-risk) if the local transmission rate of the disease is less than (or greater than) its local recovery rate, which corresponds to the case where the local reproduction number is less than (or greater than) one.

Assume that the habitat \(\varOmega \subset \mathbb{R}^{m}\,(m \geq 1)\) is a bounded domain with smooth boundary ∂ Ω (when m > 1), ν is the outward unit normal vector on ∂ Ω, and \(\frac{\partial } {\partial \nu }\) denotes the normal derivative along ν on ∂ Ω. The global stability of the unique disease-free equilibrium and asymptotic profiles of the unique endemic equilibrium were established in [9] for the following SIS reaction-diffusion system:

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} \frac{\partial \overline{S}} {\partial t} - d_{S}\varDelta \overline{S} = -\frac{\beta (x)\overline{S}\,\overline{I}} {\overline{S}+\overline{I}} +\gamma (x)\overline{I},\,\,&x \in \varOmega,\ t > 0, \\ \frac{\partial \overline{I}} {\partial t} - d_{I}\varDelta \overline{I} = \frac{\beta (x)\overline{S}\,\overline{I}} {\overline{S}+\overline{I}} -\gamma (x)\overline{I},\quad &x \in \varOmega,\ t > 0, \\ \frac{\partial \overline{S}} {\partial \nu } = \frac{\partial \overline{I}} {\partial \nu } = 0,\,\,\, &x \in \partial \varOmega,\ t > 0,\end{array} & &{}\end{array}$$
(13.1)

where \(\overline{S}(x,t)\) and \(\overline{I}(x,t)\), respectively, represent the density of susceptible and infected individuals at location x and time t; the positive constants d S and d I denote the diffusion rates of susceptible and infected populations; and β(x) and γ(x) are positive Hölder continuous functions on Ω which account for the rates of disease transmission and disease recovery at x, respectively. The homogeneous Neumann boundary conditions mean that there is no population flux across the boundary ∂ Ω and both the susceptible and infected individuals live in a self-contained environment.

In model (13.1), it was assumed that the rates of disease transmission and recovery depend only on the spatial variable. However, these rates may be both spatially and temporally heterogeneous. Typically, they vary periodically in time, for instance, due to seasonal fluctuations and the periodic availability of vaccination. The consideration of a spatially heterogeneous and temporally periodic environment thus leads us to the study of the following system:

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} \frac{\partial \overline{S}} {\partial t} - d_{S}\varDelta \overline{S} =\ -\frac{\beta (x,t)\overline{S}\,\overline{I}} {\overline{S}+\overline{I}} +\gamma (x,t)\overline{I},\ \ &x \in \varOmega,\ t > 0, \\ \frac{\partial \overline{I}} {\partial t} - d_{I}\varDelta \overline{I} = \frac{\beta (x,t)\overline{S}\,\overline{I}} {\overline{S}+\overline{I}} -\gamma (x,t)\overline{I},\ \ \ &x \in \varOmega,\ t > 0, \\ \frac{\partial \overline{S}} {\partial \nu } = \frac{\partial \overline{I}} {\partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \\ \overline{S}(x,0) = S_{0}(x),\ \overline{I}(x,0) = I_{0}(x),\ \ \ &x \in \varOmega.\end{array} & &{}\end{array}$$
(13.2)

In the current situation, the functions β(x, t) and γ(x, t) represent the rates of disease transmission and recovery at location x and time t, respectively.

It is easy to see that the function \(\overline{S}\,\overline{I}/(\overline{S} + \overline{I})\) is a Lipschitz continuous function of \(\overline{S}\) and \(\overline{I}\) in the first open quadrant. Thus, we can extend its definition to the entire first quadrant by defining it to be zero when either \(\overline{S} = 0\) or \(\overline{I} = 0\). Throughout this chapter, we make the following assumption:

  1. (A)

    The functions β(x, t) and γ(x, t) are Hölder continuous, nonnegative and not identically zero on \(\overline{\varOmega } \times \mathbb{R}\), and ω-periodic in t for some number ω > 0.

From the classical theory for parabolic equations (see, e.g., [228]), we know that for any \((S_{0},I_{0}) \in C(\overline{\varOmega }, \mathbb{R}_{+}^{2})\), system (13.2) has a unique classical solution \((\overline{S},\overline{I}) \in C^{2,1}(\overline{\varOmega } \times (0,\infty ))\). By the strong maximum principle and the Hopf boundary lemma for parabolic equations (see, e.g., [283]), it follows that if I 0(x) ≢ 0, then both \(\overline{S}(x,t)\) and \(\overline{I}(x,t)\) are positive for \(x \in \overline{\varOmega }\) and t ∈ (0, ∞). Following [9], we define

$$\displaystyle\begin{array}{rcl} N:=\int _{\varOmega }\,[S_{0}(x) + I_{0}(x)]\,dx > 0& &{}\end{array}$$
(13.3)

to be the total number of individuals in Ω at t = 0. Adding the two equations in (13.2) and integrating over Ω by parts, we obtain

$$\displaystyle{{ \partial \over \partial t}\int _{\varOmega }\,(\overline{S} + \overline{I})\,dx =\int _{\varOmega }\,\varDelta (d_{S}\overline{S} + d_{I}\overline{I})\,dx = 0,\quad \forall t > 0.}$$

This implies that the total population size is a constant, i.e.,

$$\displaystyle\begin{array}{rcl} \int _{\varOmega }\,[\overline{S}(x,t) + \overline{I}(x,t)]\,dx = N,\quad \forall t \geq 0,& &{}\end{array}$$
(13.4)

which also shows that both \(\|\overline{S}(\cdot,t)\|_{L^{1}(\varOmega )}\) and \(\|\overline{I}(\cdot,t)\|_{L^{1}(\varOmega )}\) are bounded on [0, ∞). From now on, we let N be a given positive constant.
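
To make the model concrete, the following minimal numerical sketch (our own illustration, not part of the original analysis) integrates system (13.2) on the one-dimensional habitat Ω = (0, 1) with Neumann boundary conditions by an explicit finite-difference scheme and monitors the conservation identity (13.4). The particular β, γ, d S , d I , grid, and initial data are illustrative assumptions only.

import numpy as np

# Illustrative setting (assumed): 1-D habitat (0, 1), period omega = 1, Neumann boundary conditions.
nx = 51
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
dS, dI, omega = 0.5, 0.1, 1.0
dt = 0.2 * dx**2 / max(dS, dI)            # explicit-scheme stability restriction

def beta(t):                              # transmission rate beta(x, t), omega-periodic in t
    return 2.0 + np.cos(2.0 * np.pi * x) + 0.5 * np.sin(2.0 * np.pi * t / omega)

def gamma(t):                             # recovery rate gamma(x, t), omega-periodic in t
    return 1.5 + 0.5 * np.sin(2.0 * np.pi * t / omega)

def lap(u):                               # Neumann Laplacian via mirrored ghost points
    return (np.r_[u[1], u[:-1]] - 2.0 * u + np.r_[u[1:], u[-2]]) / dx**2

def total(u):                             # trapezoidal approximation of the integral over Omega
    return dx * (u.sum() - 0.5 * (u[0] + u[-1]))

S = np.full(nx, 0.9)                      # S_0(x)
I = 0.1 + 0.05 * np.cos(np.pi * x)        # I_0(x), not identically zero
N = total(S + I)                          # total population, as in (13.3)

t = 0.0
while t < 10.0 * omega:
    infection = beta(t) * S * I / (S + I)            # frequency-dependent incidence
    S = S + dt * (dS * lap(S) - infection + gamma(t) * I)
    I = I + dt * (dI * lap(I) + infection - gamma(t) * I)
    t += dt

print("relative drift of the total population:", abs(total(S + I) - N) / N)
print("final infected mass:", total(I))

With the trapezoidal weights used in total, the discrete diffusion terms sum to zero and the reaction terms cancel between the two equations, so the printed drift is of the order of rounding error, mirroring (13.4).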

A nonnegative ω-periodic solution \((\tilde{S},\tilde{I})\) of system (13.2)–(13.3) is said to be disease-free if \(\tilde{I} \equiv 0\) on \(\overline{\varOmega } \times \mathbb{R}\); and endemic if \(\tilde{I} \geq 0,\not\equiv 0\) on \(\overline{\varOmega } \times \mathbb{R}\). It is easy to observe from (13.2)–(13.3) that the unique disease-free ω-periodic solution is \((\tilde{S},0) = (N/\vert \varOmega \vert,0)\) (see [9, Lemma 2.1]), and henceforth we call this solution the disease-free constant solution. Hereafter, | Ω | always represents the volume of the domain Ω. Moreover, the maximum principle and the Hopf boundary lemma for parabolic equations imply that an endemic ω-periodic solution \((\tilde{S},\tilde{I})\) is positive on \(\overline{\varOmega } \times [0,\infty )\), that is, \(\tilde{S}(x,t) > 0,\,\tilde{I}(x,t) > 0,\,\forall (x,t) \in \overline{\varOmega }\times [0,\infty )\).

The purpose of this chapter is to investigate the effect of spatial and temporal heterogeneities on the extinction and persistence of the infectious disease for system (13.2)–(13.3). In Section 13.1, we first introduce the basic reproduction ratio R 0 and then provide its analytical characterizations. In particular, we obtain the asymptotic behavior of R 0 as d I tends to zero or infinity. It turns out that when β and γ depend only on the temporal variable (namely, β(x, t) = β(t) and γ(x, t) = γ(t)), R 0 is a constant independent of d I , and when β and γ depend on the spatial variable alone (namely, β(x, t) = β(x) and γ(x, t) = γ(x)), R 0 is a nonincreasing function of d I . In sharp contrast, our result shows that in general, R 0 is not a monotone function of d I . In the case where β(x, t) is a constant, we also address an optimization problem concerning R 0 when the average of the function γ(x, t) is given.

In Section 13.2, we derive a threshold-type dynamics for system (13.2)–(13.3) in terms of R 0. More specifically, we prove that the disease-free constant solution is globally stable if R 0 < 1; while if R 0 > 1, system (13.2)–(13.3) admits at least one endemic ω-periodic solution and the disease is uniformly persistent. In order to establish a uniform upper bound for positive solutions to system (13.2)–(13.3), we re-formulate the general theory developed in [214] in such a way that it applies to system (13.2)–(13.3) (see Lemma 13.2.1).

In Section 13.3, we establish the global attractivity of the positive ω-periodic solution (and hence its uniqueness) of system (13.2)–(13.3) for some special cases. However, it remains a challenging problem to study the uniqueness of the endemic ω-periodic solution for the general case. The biological interpretations of our analytical results are presented in Section 13.4.

13.1 Basic Reproduction Ratio

In this section, we introduce the basic reproduction ratio for the periodic reaction–diffusion system (13.2), and analyze its properties. As a first step, we need to define the next infection operator for system (13.2), which is a combination of the idea in [388] for periodic ordinary differential models with that in [389] for autonomous reaction–diffusion systems.

Let C ω be the ordered Banach space consisting of all ω-periodic and continuous functions from \(\mathbb{R}\) to \(C(\overline{\varOmega }, \mathbb{R})\), which is equipped with the maximum norm ∥ ⋅ ∥ and the positive cone \(C_{\omega }^{+}:=\{\phi \in C_{\omega }:\ \phi (t)(x) \geq 0,\,\forall t \in \mathbb{R},x \in \overline{\varOmega }\}\). For any given ϕ ∈ C ω , we also use the notation ϕ(x, t): = ϕ(t)(x). Let V (t, s) be the evolution operator of the reaction-diffusion equation

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} I_{t} - d_{I}\varDelta I = -\gamma (x,t)I,\ \ &x \in \varOmega,\ t > 0, \\ {\partial I \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0.\end{array} & &{}\end{array}$$
(13.5)

By the standard theory of evolution operators, it follows that there exist positive constants K and c 0 such that

$$\displaystyle\begin{array}{rcl} \|V (t,s)\| \leq Ke^{-c_{0}(t-s)},\ \ \forall t \geq s,\ t,\,s \in \mathbb{R}.& &{}\end{array}$$
(13.6)

Suppose that ϕ ∈ C ω is the density distribution of initial infectious individuals at the spatial location x ∈ Ω and time s. Then the term β(x, s)ϕ(x, s) is the density distribution of the new infections produced by the infected individuals introduced at time s. Thus, for given t ≥ s, V (t, s)β(x, s)ϕ(x, s) is the density distribution at location x of those individuals who were newly infected at time s and remain infected at time t. Therefore, the integral

$$\displaystyle{\int _{-\infty }^{t}V (t,s)\beta (\cdot,s)\phi (\cdot,s)ds =\int _{ 0}^{\infty }V (t,t - a)\beta (\cdot,t - a)\phi (\cdot,t - a)da}$$

represents the density distribution of the accumulated new infections at location x and time t produced by all the infected individuals ϕ(x, s) introduced at times s prior to t.

As in [388], we introduce the linear operator L: C ω → C ω :

$$\displaystyle\begin{array}{rcl} L(\phi )(t):=\int _{ 0}^{\infty }V (t,t - a)\beta (\cdot,t - a)\phi (\cdot,t - a)da,& &{}\end{array}$$
(13.7)

which we may call the next infection operator. Under our assumption on β and γ, it is easy to see that L is continuous, compact on C ω , and positive (i.e., L(C ω +) ⊂ C ω +). We define the spectral radius of L as the basic reproduction ratio

$$\displaystyle\begin{array}{rcl} R_{0} =\rho (L)& &{}\end{array}$$
(13.8)

for system (13.2).
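
As a quick consistency check (our own illustration, not part of the original text), suppose that β and γ are positive constants and take ϕ ∈ C ω to be constant in x and t. A spatially constant function is unaffected by diffusion under the Neumann boundary condition, so V (t, t − a)βϕ = e −γa βϕ, and (13.7) reduces to

$$\displaystyle{L(\phi )(t) =\int _{ 0}^{\infty }e^{-\gamma a}\beta \phi \,da ={ \beta \over \gamma }\,\phi.}$$

Since L is strongly positive in this case and the constant function is a positive eigenvector, R 0 = ρ(L) = β∕γ, the classical basic reproduction number of the corresponding ODE model.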

In what follows, we first obtain a characterization of the basic reproduction ratio R 0. This leads us to consider the following linear periodic-parabolic eigenvalue problem

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} \psi _{t} - d_{I}\varDelta \psi = -\gamma (x,t)\psi +{ \beta (x,t) \over \mu } \psi,\ \ &x \in \varOmega,\ t > 0, \\ {\partial \psi \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \\ \psi (x,0) =\psi (x,\omega ),\ \ \ &x \in \varOmega.\end{array} & &{}\end{array}$$
(13.9)

By [152, Theorem 16.3], problem (13.9) has a unique principal eigenvalue μ 0, which is positive and corresponds to an eigenvector ψ 0 ∈ C ω with ψ 0 > 0 on \(\overline{\varOmega } \times \mathbb{R}\).

Lemma 13.1.1.

R 0 = μ 0 > 0.

Proof.

Since (μ 0, ψ 0) satisfies (13.9), it follows from the variation of constants formula that

$$\displaystyle\begin{array}{rcl} \psi _{0}(x,t) = V (t,\tau )\psi _{0}(x,\tau ) +\int _{ \tau }^{t}V (t,s){\beta (x,s) \over \mu _{0}} \psi _{0}(x,s)ds.& &{}\end{array}$$
(13.10)

Using (13.6) and the boundedness of ψ 0 on \(\mathbb{R}\), by letting τ → −∞, we obtain

$$\displaystyle{\psi _{0}(x,t) =\int _{ -\infty }^{t}V (t,s){\beta (x,s) \over \mu _{0}} \psi _{0}(x,s)ds,\ \ \forall t \in \mathbb{R},}$$

which implies L ψ 0 = μ 0 ψ 0 due to (13.7).

Note that under our assumption (A), the operator L may not be strongly positive. To show R 0 = μ 0, we use a perturbation argument. For any given ε > 0, we define

$$\displaystyle\begin{array}{rcl} L_{\epsilon }(\phi )(t):=\int _{ 0}^{\infty }V (t,t - a)(\beta (\cdot,t - a)+\epsilon )\phi (\cdot,t - a)da,& &{}\end{array}$$
(13.11)

and its spectral radius \(\mathcal{R}_{\epsilon,0} =\rho (L_{\epsilon })\). As β(x, t) +ε > 0 on \(\overline{\varOmega } \times \mathbb{R}\), L ε : C ω → C ω is continuous, compact, and strongly positive. By the upper semicontinuity of the spectrum ([198, Sect. IV.3.1]) and the continuity of a finite system of eigenvalues ([198, Sect. IV.3.5]), we then derive

$$\displaystyle\begin{array}{rcl} \mathcal{R}_{\epsilon,0} \rightarrow R_{0}\ \ \mbox{ as}\ \epsilon \rightarrow 0.& &{}\end{array}$$
(13.12)

On the other hand, we denote by μ ε, 0 the unique positive principal eigenvalue of (13.9) with β(x, t) replaced by β(x, t) +ε, which corresponds to a positive eigenvector ψ ε, 0 ∈ C ω . Arguing as above, we see that L ε ψ ε, 0 = μ ε, 0 ψ ε, 0. By virtue of the strong positivity of L ε and the Krein-Rutman theorem (see, e.g., [152, Theorem 7.2]), we have \(\mathcal{R}_{\epsilon,0} =\mu _{\epsilon,0}\). Furthermore, from the continuity of the principal eigenvalue on the weight function ([152]), it follows that \(\mathcal{R}_{\epsilon,0} =\mu _{\epsilon,0} \rightarrow \mu _{0}\) as ε → 0. This fact, together with (13.12), implies R 0 = μ 0.
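
Lemma 13.1.1 also suggests a practical way to approximate R 0 numerically: for a fixed μ > 0, problem (13.9) has a positive ω-periodic solution exactly when the principal Floquet multiplier of the period map of ψ t − d I Δψ = (β∕μ −γ)ψ equals one, and this multiplier is decreasing in μ because β ≥ 0, so μ 0 = R 0 can be located by bisection. The sketch below is our own illustration; the one-dimensional habitat, the particular β, γ, d I , and the bracket for μ 0 are assumptions.

import numpy as np

nx = 41
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
dI, omega = 0.1, 1.0
nsteps = int(np.ceil(omega / (0.2 * dx**2 / dI)))
dt = omega / nsteps                      # land exactly on t = omega

def beta(t):                             # transmission rate beta(x, t) (assumed)
    return 2.0 + np.cos(2.0 * np.pi * x) + 0.5 * np.sin(2.0 * np.pi * t / omega)

def gamma(t):                            # recovery rate gamma(x, t) (assumed)
    return 1.5 + 0.5 * np.sin(2.0 * np.pi * t / omega)

def lap(u):                              # Neumann Laplacian via mirrored ghost points
    return (np.r_[u[1], u[:-1]] - 2.0 * u + np.r_[u[1:], u[-2]]) / dx**2

def period_map(u, mu):                   # evolve psi_t - dI*psi_xx = (beta/mu - gamma)*psi over one period
    for k in range(nsteps):
        t = k * dt
        u = u + dt * (dI * lap(u) + (beta(t) / mu - gamma(t)) * u)
    return u

def multiplier(mu, iters=30):            # principal Floquet multiplier by power iteration
    u = np.ones(nx)
    r = 1.0
    for _ in range(iters):
        u = period_map(u, mu)
        r = u.max()                      # growth factor over one period
        u = u / r
    return r

lo, hi = 0.5, 10.0                       # assumed bracket containing mu_0 = R_0
for _ in range(25):                      # bisection on multiplier(mu) = 1
    mid = 0.5 * (lo + hi)
    if multiplier(mid) > 1.0:
        lo = mid                         # multiplier too large, so mu_0 lies above mid
    else:
        hi = mid

print("R_0 is approximately", 0.5 * (lo + hi))

The bisection is justified because the Floquet multiplier depends monotonically on the zero-order coefficient β∕μ −γ.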

For our later purpose, we consider the periodic-parabolic eigenvalue problem

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} \varphi _{t} - d_{I}\varDelta \varphi =\beta (x,t)\varphi -\gamma (x,t)\varphi +\lambda \varphi,\ \ &x \in \varOmega,\ t > 0, \\ {\partial \varphi \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \\ \varphi (x,0) =\varphi (x,\omega ),\ \ \ &x \in \varOmega.\end{array} & &{}\end{array}$$
(13.13)

Let λ 0 be the unique principal eigenvalue of (13.13) (see, e.g., [152]). Then we have the following observation.

Lemma 13.1.2.

1 − R 0 has the same sign as λ 0.

Proof.

This lemma is a straightforward consequence of [370, Theorem 5.7]. Here we provide an elementary proof. In view of Lemma 13.1.1, it suffices to prove that 1 −μ 0 has the same sign as λ 0. Due to [152, Theorem 7.2], we can assert that λ 0 is also the principal eigenvalue of the adjoint problem of (13.13):

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} -\varphi _{t}^{{\ast}}- d_{ I}\varDelta \varphi ^{{\ast}} =\beta (x,t)\varphi ^{{\ast}}-\gamma (x,t)\varphi ^{{\ast}} +\lambda \varphi ^{{\ast}},\ \ &x \in \varOmega,\ t > 0, \\ {\partial \varphi ^{{\ast}} \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \\ \varphi ^{{\ast}}(x,0) =\varphi ^{{\ast}}(x,\omega ),\ \ \ &x \in \varOmega,\end{array} & &{}\end{array}$$
(13.14)

where φ ∗ ∈ C ω and φ ∗ > 0 on \(\overline{\varOmega } \times \mathbb{R}\). We multiply the equation (13.9) that (μ 0, ψ 0) satisfies by φ ∗ and then integrate over Ω × (0, ω) by parts to obtain

$$\displaystyle{\Big(1 -{ 1 \over \mu _{0}} \Big)\int _{0}^{\omega }\int _{ \varOmega }\beta \psi _{0}\varphi ^{{\ast}}dxdt +\lambda _{ 0}\int _{0}^{\omega }\int _{ \varOmega }\psi _{0}\varphi ^{{\ast}}dxdt = 0.}$$

Since ∫ 0 ω ∫ Ω β ψ 0 φ ∗ dxdt and ∫ 0 ω ∫ Ω ψ 0 φ ∗ dxdt are both positive, it follows that \(1 -{ 1 \over \mu _{0}}\) and λ 0 have opposite signs, which yields our result.

We now present some quantitative properties of the basic reproduction ratio R 0. First of all, when β(x, t) −γ(x, t) or both β(x, t) and γ(x, t) are spatially homogeneous, we have the following result.

Lemma 13.1.3.

The following statements hold true:

  1. (a)

    If β(x,t) ≡β(t) and γ(x,t) ≡γ(t), then R 0 = ∫ 0 ω β(t)dt∕∫ 0 ω γ(t)dt.

  2. (b)

    If β(x,t) −γ(x,t) ≡ h(t), then R 0 − 1 has the same sign as ∫ 0 ω h(t)dt.

Proof.

We first prove (a). For simplicity, let

$$\displaystyle{\mu ^{{\ast}} ={ \int _{0}^{\omega }\beta (t)dt \over \int _{0}^{\omega }\gamma (t)dt}.}$$

Consider the ordinary differential equation:

$$\displaystyle\begin{array}{rcl} u_{t} =\Big (-\gamma (t) +{ 1 \over \mu ^{{\ast}}}\beta (t)\Big)u,\ \ \ u(0) = 1.& &{}\end{array}$$
(13.15)

It is easy to see that (13.15) admits a unique positive solution

$$\displaystyle{u(t) = e^{\int _{0}^{t}(-\gamma (s)+{ 1 \over \mu ^{{\ast}}}\beta (s))ds},}$$

which also satisfies u(ω) = u(0) = 1. So u(t) is a positive ω-periodic solution to (13.15). Thanks to the uniqueness of the principal eigenvalue of (13.9), we have μ 0 = μ ∗ , and hence (a) holds since R 0 = μ 0.

We then verify (b). In this case, we consider the following ordinary differential problem:

$$\displaystyle\begin{array}{rcl} u_{t} - h(t)u =\lambda u,\,\,\,u(0) = u(\omega ) = 1.& &{}\end{array}$$
(13.16)

Clearly, (13.16) has a unique positive solution if and only if

$$\displaystyle{\lambda = -{1 \over \omega } \int _{0}^{\omega }h(t)dt.}$$

Furthermore, such a unique positive ω-periodic solution can be expressed as \(u(t) = e^{\int _{0}^{t}(h(s)+\lambda )ds }\). Observe that

$$\displaystyle{\lambda = -{1 \over \omega } \int _{0}^{\omega }h(t)dt\quad \mbox{ and}\quad \psi (t) = e^{\int _{0}^{t}(h(s)+\lambda )ds }}$$

satisfy (13.13). By the uniqueness of the principal eigenvalue, we immediately have

$$\displaystyle{\lambda _{0} = -{1 \over \omega } \int _{0}^{\omega }h(t)dt.}$$

Therefore, applying Lemma 13.1.2, we see that (b) holds true.
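
For instance (an illustration with data chosen by us), if β(t) = 2 + sin (2πt∕ω) and γ(t) ≡ 1, then statement (a) gives

$$\displaystyle{R_{0} ={ \int _{0}^{\omega }\big(2 +\sin { 2\pi t \over \omega }\big)dt \over \int _{0}^{\omega }1\,dt} ={ 2\omega \over \omega } = 2,}$$

independently of the diffusion rate d I .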

Secondly, if β(x, t) −γ(x, t) or both β(x, t) and γ(x, t) depend on the spatial factor alone, we have the following result.

Lemma 13.1.4.

Assume that β(x,t) −γ(x,t) ≡ h(x). Then the following assertions hold true:

  1. (a)

    If ∫ Ω h(x)dx ≥ 0 and h≢0 in Ω, then R 0 > 1 for all d I;

  2. (b)

    If ∫ Ω h(x)dx < 0 and h(x) ≤ 0 on \(\overline{\varOmega }\) , then R 0 < 1 for all d I;

  3. (c)

    If ∫ Ω h(x)dx < 0 and \(\max _{\overline{\varOmega }}h(x) > 0\) , then there exists a threshold value \(d_{I}^{{\ast}}\in (0,\infty )\) such that R 0 > 1 for \(d_{I} < d_{I}^{{\ast}}\) , R 0 = 1 for \(d_{I} = d_{I}^{{\ast}}\) , and R 0 < 1 for \(d_{I} > d_{I}^{{\ast}}\) .

In particular, if β(x,t) ≡β(x) and γ(x,t) ≡γ(x), we have

$$\displaystyle{ R_{0} =\sup _{\varphi \in H^{1}(\varOmega ),\,\varphi \not =0}\left \{{ \int _{\varOmega }\beta \varphi ^{2}dx \over \int _{\varOmega }\left (d_{I}\vert \nabla \varphi \vert ^{2} +\gamma \varphi ^{2}\right )dx}\right \} }$$
(13.17)

and R 0 is a nonincreasing function of d I with \(R_{0} \rightarrow \max _{\overline{\varOmega }}\{{\beta (x) \over \gamma (x)}\}\) as d I → 0, and R 0 → ∫ Ω β(x)dx∕∫ Ω γ(x)dx as d I →∞. Here and in what follows, when \(\max _{\overline{\varOmega }}\{{\beta (x) \over \gamma (x)}\} = \infty \) , we understand R 0 →∞ as d I → 0.

Proof.

To prove our assertions, we resort to problem (13.13). First, when β(x, t) −γ(x, t) ≡ h(x), we consider the elliptic eigenvalue problem

$$\displaystyle\begin{array}{rcl} -d_{I}\varDelta u - h(x)u =\lambda u,\ \ x \in \varOmega;\ \ \ { \partial u \over \partial \nu } = 0,\ \ x \in \partial \varOmega.& &{}\end{array}$$
(13.18)

It is well known that (13.18) possesses a unique principal eigenvalue, denoted by λ ∗ . From the uniqueness of the principal eigenvalue for (13.13) and (13.18), it is necessary that λ 0 = λ ∗ in the present situation. By [9, Lemma 2.2] and its proof, we further see that λ 0 is nondecreasing with respect to d I  > 0, and if additionally h(x) is not a constant in Ω, then λ 0 is strictly increasing in d I  > 0. Moreover, \(\lambda _{0} \rightarrow -\max _{\overline{\varOmega }}h(x)\) as d I  → 0 and \(\lambda _{0} \rightarrow -{ 1 \over \vert \varOmega \vert }\int _{\varOmega }h(x)dx\) as d I  → ∞. Hence, the assertions (a)–(c) follow from these properties and Lemma 13.1.2.

In the case of β(x, t) ≡ β(x) and γ(x, t) ≡ γ(x), we recall that R 0 = μ 0. As above, it is easy to see from (13.9) that R 0 is the principal eigenvalue of the elliptic problem:

$$\displaystyle\begin{array}{rcl} -d_{I}\varDelta \psi = -\gamma (x)\psi +{ \beta (x) \over \mu } \psi,\ \ x \in \varOmega;\ \ \ { \partial \psi \over \partial \nu } = 0,\ \ x \in \partial \varOmega.& &{}\end{array}$$
(13.19)

Then the formula (13.17) follows from the well-known variational characterization of the principal eigenvalue for problem (13.19) (see, e.g., [108, Sect. II. 6.5]). Thus, the properties of R 0 are straightforward consequences of [9, Lemma 2.3].

Remark 13.1.1.

In [9, Lemma 2.3], the right-hand side expression of (13.17) is directly defined as the basic reproduction number for the autonomous system (13.1). Lemma 13.1.4 above shows that this definition is indeed meaningful biologically. Moreover, if β(x) ≢ γ(x) on \(\overline{\varOmega } \times [0,\omega ]\), according to the proof of [9, Lemma 2.3], R 0 is a strictly decreasing function of d I .
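
In this autonomous case the characterization (13.17) is also convenient computationally: discretizing the quadratic forms on a grid turns (13.17) into a generalized symmetric eigenvalue problem whose largest eigenvalue approximates R 0. The sketch below is our own illustration (the one-dimensional habitat and the particular β(x), γ(x) are assumptions); it also prints the two limiting values from Lemma 13.1.4 for comparison.

import numpy as np
from scipy.linalg import eigh

nx = 201
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
dI = 0.1

beta = 2.0 + np.cos(2.0 * np.pi * x)         # transmission rate beta(x) (assumed)
gamma = 1.5 + 0.5 * np.sin(2.0 * np.pi * x)  # recovery rate gamma(x) (assumed), positive

w = np.full(nx, dx)                          # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 * dx

# Stiffness matrix K whose quadratic form approximates the Dirichlet integral of |phi'|^2
# (the Neumann condition is the natural boundary condition of this form).
K = np.zeros((nx, nx))
for i in range(nx - 1):
    K[i, i] += 1.0 / dx
    K[i + 1, i + 1] += 1.0 / dx
    K[i, i + 1] -= 1.0 / dx
    K[i + 1, i] -= 1.0 / dx

B = np.diag(w * beta)                        # discretizes the numerator of (13.17)
A = dI * K + np.diag(w * gamma)              # discretizes the denominator, positive definite

# Largest eigenvalue mu of  B phi = mu A phi  approximates R_0 in (13.17).
R0 = eigh(B, A, eigvals_only=True)[-1]

print("R_0 approx:", R0)
print("limit as d_I -> 0   (max beta/gamma):", (beta / gamma).max())
print("limit as d_I -> inf (avg beta / avg gamma):", (w * beta).sum() / (w * gamma).sum())

Repeating the computation for several values of d I gives a numerical view of the monotone dependence of R 0 on d I stated in Lemma 13.1.4.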

The subsequent result presents some analytical properties of R 0 for the general case of β and γ.

Theorem 13.1.1.

 The following statements are valid:

  1. (a)

    \(R_{0} \geq \frac{\int _{0}^{\omega }\int _{ \varOmega }\beta (x,t)dxdt} {\int _{0}^{\omega }\int _{\varOmega }\gamma (x,t)dxdt}\) for all d I , and the equality holds if and only if the function \(\frac{\beta (x,t)} {\int _{0}^{\omega }\int _{\varOmega }\beta (x,t)dxdt} - \frac{\gamma (x,t)} {\int _{0}^{\omega }\int _{\varOmega }\gamma (x,t)dxdt}\) is spatially homogeneous (that is, x-independent);

  2. (b)

    R 0 < 1 for all d I > 0 if \(\int _{0}^{\omega }\max _{x\in \overline{\varOmega }}(\beta (x,t) -\gamma (x,t))dt \leq 0\) and β(x,t) −γ(x,t) nontrivially depends on x;

  3. (c)

    \(R_{0} \rightarrow \frac{\int _{0}^{\omega }\int _{ \varOmega }\beta (x,t)dxdt} {\int _{0}^{\omega }\int _{\varOmega }\gamma (x,t)dxdt}\) as d I →∞;

  4. (d)

    \(R_{0} \rightarrow \max _{x\in \overline{\varOmega }}\Big\{{\int _{0}^{\omega }\beta (x,t)dt \over \int _{0}^{\omega }\gamma (x,t)dt}\Big\}\) as d I → 0;

  5. (e)

    In general, R 0 (d I ):= R 0 is not a nonincreasing function of d I ; particularly, if β(x,t) = p(x)q 1 (t) and γ(x,t) = p(x)q 2 (t) with p > 0 on \(\overline{\varOmega }\) , p≢constant, q 1 , q 2 ∈ C ω , q 1 , q 2 > 0 on [0,ω] and q 1 − q 2 ≢constant, then there exist 0 < d I 1 < d I 2 such that R 0 (d I 1 ) = R 0 (d I 2 ).

Proof.

To obtain our assertions, we use similar arguments to those in [187, Lemma 2.4]. Since some necessary modifications are required, here we provide a detailed proof.

We first prove (a). Let ψ 0 be defined as before. Since ψ 0 > 0 on \(\overline{\varOmega } \times \mathbb{R}\), we divide the equation (13.9) that ψ 0 satisfies by ψ 0 and integrate the resulting equation over Ω × (0, ω) by parts to get

$$\displaystyle{-d_{I}\int _{0}^{\omega }\int _{ \varOmega }{\vert \nabla \psi _{0}\vert ^{2} \over \psi _{0}^{2}} dxdt = -\int _{0}^{\omega }\int _{ \varOmega }\gamma dxdt +{ 1 \over R_{0}}\int _{0}^{\omega }\int _{ \varOmega }\beta dxdt.}$$

This implies that \(R_{0} \geq \int _{0}^{\omega }\int _{\varOmega }\beta (x,t)\,dxdt\big/\int _{0}^{\omega }\int _{\varOmega }\gamma (x,t)\,dxdt\) for all \(d_{I} > 0\). Moreover, the equality holds if and only if

$$\displaystyle{\int _{0}^{\omega }\int _{ \varOmega }{\vert \nabla \psi _{0}\vert ^{2} \over \psi _{0}^{2}} dxdt = 0,}$$

which is equivalent to the condition that the function \(\frac{\beta (x,t)} {\int _{0}^{\omega }\int _{\varOmega }\beta (x,t)dxdt} - \frac{\gamma (x,t)} {\int _{0}^{\omega }\int _{\varOmega }\gamma (x,t)dxdt}\) is spatially homogeneous.

The assertion (b) follows from [152, Lemma 15.6]. Indeed, by taking m(x, t) = β(x, t) −γ(x, t) and λ = 1 in [152, Lemma 15.6], we have μ(0) = 0 and

$$\displaystyle{\mu (1) > -{1 \over \omega } \int _{0}^{\omega }\max _{ x\in \overline{\varOmega }}(\beta (x,t) -\gamma (x,t))dt \geq 0}$$

under our hypothesis. Using this notation, we obtain λ 0 = μ(1), and hence Lemma 13.1.2 yields (b).

To verify (c), we first assume that γ > 0 on \(\overline{\varOmega } \times \mathbb{R}\). In this case, by directly integrating the equation (13.9) that ψ 0 satisfies over Ω × (0, ω), we easily find

$$\displaystyle\begin{array}{rcl} R_{0} ={ \int _{0}^{\omega }\int _{\varOmega }\beta \psi _{0}dxdt \over \int _{0}^{\omega }\int _{\varOmega }\gamma \psi _{0}dxdt} \leq { \max _{\overline{\varOmega }\times [0,\omega ]}\beta \over \min _{\overline{\varOmega }\times [0,\omega ]}\gamma }.& &{}\end{array}$$
(13.20)

Hence, this and assertion (a) show that R 0 is bounded above and below by positive constants that are independent of d I  > 0.

By normalizing ψ 0, we may further assume that

$$\displaystyle\begin{array}{rcl} \int _{0}^{\omega }\int _{ \varOmega }\psi _{0}^{2}dxdt = 1.& &{}\end{array}$$
(13.21)

We now multiply (13.9) with ψ = ψ 0 by ψ 0 and integrate to yield

$$\displaystyle{d_{I}\int _{0}^{\omega }\int _{ \varOmega }\vert \nabla \psi _{0}\vert ^{2}dxdt = -\int _{ 0}^{\omega }\int _{ \varOmega }\gamma \psi _{0}^{2}dxdt +{ 1 \over R_{0}}\int _{0}^{\omega }\int _{ \varOmega }\beta \psi _{0}^{2}dxdt,}$$

and so we can find a positive constant c such that

$$\displaystyle\begin{array}{rcl} \int _{0}^{\omega }\int _{ \varOmega }\vert \nabla \psi _{0}\vert ^{2}dxdt \leq { c \over d_{I}}.& &{}\end{array}$$
(13.22)

Here and in the sequel, the constant c does not depend on d I  > 0 and may vary from place to place.

On the other hand, we set

$$\displaystyle{\overline{\psi }_{0}(t) ={ 1 \over \vert \varOmega \vert }\int _{\varOmega }\psi _{0}(x,t)dx\ \ \mbox{ and}\ \ \varPsi (x,t) =\psi _{0}(x,t) -\overline{\psi }_{0}(t).}$$

Note that ∫ Ω Ψ dx = 0 for all \(t \in \mathbb{R}\). Then, from the well-known Poincaré inequality it follows that

$$\displaystyle{\int _{\varOmega }\varPsi ^{2}dx \leq c\int _{\varOmega }\vert \nabla \varPsi \vert ^{2}dx,\,\,\mbox{ for all}\ \,t.}$$

Therefore, as ∇Ψ = ∇ψ 0, making use of (13.22), we have

$$\displaystyle\begin{array}{rcl} \int _{0}^{\omega }\int _{ \varOmega }\varPsi ^{2}dxdt \leq { c \over d_{I}},\ \ \mbox{ and hence,}\ \ \int _{0}^{\omega }\int _{ \varOmega }\vert \varPsi \vert dxdt \leq { c \over \sqrt{d_{I}}}.& &{}\end{array}$$
(13.23)

Furthermore, by integrating (13.9) with ψ = ψ 0 over Ω, it is easy to see that

$$\displaystyle\begin{array}{rcl}{ d \over dt}\Big(\overline{\psi }_{0}\Big) =\int _{\varOmega }\Big[-\gamma +{ 1 \over R_{0}}\beta \Big]dx \cdot \overline{\psi }_{0} +\int _{\varOmega }\Big(-\gamma +{ 1 \over R_{0}}\beta \Big)\varPsi dx.& &{}\end{array}$$
(13.24)

Using (a), (13.20), and (13.23), one has

$$\displaystyle{\int _{0}^{\omega }\Big\vert \int _{\varOmega }\Big(-\gamma +{ 1 \over R_{0}}\beta \Big)\varPsi dx\Big\vert dt = O({ 1 \over \sqrt{d_{I}}}).}$$

Hence, solving the ordinary differential equation (13.24), we obtain

$$\displaystyle\begin{array}{rcl} \overline{\psi }_{0}(t) = e^{\int _{0}^{t}\int _{ \varOmega }(-\gamma +{ 1 \over R_{0}} \beta )dxds} \cdot \overline{\psi }_{0}(0) + O\left ({ 1 \over \sqrt{d_{I}}}\right ).& &{}\end{array}$$
(13.25)

Because \(\overline{\psi }_{0}(\omega ) = \overline{\psi }_{0}(0)\), as d I  → ∞, it is clear that either \(\overline{\psi }_{0}(0) \rightarrow 0\), or

$$\displaystyle{\int _{0}^{\omega }\int _{ \varOmega }\Big(-\gamma +{ 1 \over R_{0}}\beta \Big)dxdt \rightarrow 0.}$$

The latter will lead to our assertion (c). So it suffices to exclude the possibility of \(\overline{\psi }_{0}(0) \rightarrow 0\) as d I  → ∞. Supposing \(\overline{\psi }_{0}(0) \rightarrow 0\) as d I  → ∞, by (13.25) we would have \(\overline{\psi }_{0}(t) \rightarrow 0\) uniformly on [0, ω], which, together with (13.23), implies that ∫ 0 ω ∫ Ω ψ 0 2 dxdt → 0, contradicting (13.21).

In the general case of γ ≥ 0, ≢ 0 on \(\overline{\varOmega } \times \mathbb{R}\), we proceed as above except that γ is replaced by γ +ε for any given ε > 0 to get

$$\displaystyle{R_{0} \rightarrow { \int _{0}^{\omega }\int _{\varOmega }\beta (x,t)dxdt \over \int _{0}^{\omega }\int _{\varOmega }[\gamma (x,t)+\epsilon ]dxdt},\ \ \mbox{ as}\ d_{I} \rightarrow \infty,}$$

and we then obtain the desired result by letting ε → 0.

We are in a position to prove (d). Without loss of generality, we can assume that β, γ > 0 on \(\overline{\varOmega } \times \mathbb{R}\). For the general case, as above, we can replace β and γ with β +ε and γ +ε, respectively, and then get the result by letting ε → 0. For the sake of simplicity, we denote

$$\displaystyle{\delta ={ \int _{0}^{\omega }\beta (x_{0},t)dt \over \int _{0}^{\omega }\gamma (x_{0},t)dt} =\max _{\overline{\varOmega }}\left \{{\int _{0}^{\omega }\beta (x,t)dt \over \int _{0}^{\omega }\gamma (x,t)dt}\right \}\ \ \mbox{ for some}\ x_{0} \in \overline{\varOmega }.}$$

For a positive constant μ to be determined later, we rewrite the equation that (μ 0, ψ 0) satisfies as

$$\displaystyle\begin{array}{rcl} (\psi _{0})_{t} - d_{I}\varDelta \psi _{0} -\Big ({1 \over \mu } \beta -\gamma \Big)\psi _{0} =\Big ({1 \over \mu _{0}} -{ 1 \over \mu } \Big)\beta \psi _{0}.& &{}\end{array}$$
(13.26)

Before going further, we need some preliminaries on the following eigenvalue problem with the positive weight function β(x, t):

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} \psi _{t} - d_{I}\varDelta \psi - m(x,t)\psi =\lambda \beta (x,t)\psi,\ \ &x \in \varOmega,\ t > 0, \\ {\partial \psi \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \\ \psi (x,0) =\psi (x,\omega ),\ \ \ &x \in \varOmega,\end{array} & &{}\end{array}$$
(13.27)

where m(x, t) ∈ C ω . Arguing as in [152], problem (13.27) admits the principal eigenvalue λ ∗ with a positive eigenvector ψ ∗ ∈ C ω . Moreover, the same analysis as in the proof of [152, Propositions 17.1 and 17.3] implies that the following statements hold:

  1. (i)

    If there exists \(x_{{\ast}}\in \overline{\varOmega }\) such that ∫ 0 ω m(x ∗ , t)dt > 0, then λ ∗ < 0 for all small d I .

  2. (ii)

    If ∫ 0 ω m(x, t)dt < 0 for all \(x \in \overline{\varOmega }\), then λ ∗ > 0 for all small d I .

Here we should point out that \(\int _{0}^{\omega }\max _{\overline{\varOmega }}m(x,t)dt > 0\) does not imply ∫ 0 ω m(x ∗ , t)dt > 0 for some \(x^{{\ast}}\in \overline{\varOmega }\), and it is even possible that ∫ 0 ω m(x, t)dt < 0 for all \(x \in \overline{\varOmega }\).

Now we choose μ such that 0 < μ < δ. For any such μ, by the definition of δ, it follows that

$$\displaystyle{\int _{0}^{\omega }\Big({1 \over \mu } \beta (x_{0},t) -\gamma (x_{0},t)\Big)dt > 0.}$$

Applying the above claim (i) to problem (13.26) with \(m(x,t) ={ 1 \over \mu } \beta -\gamma\), we have

$$\displaystyle{{ 1 \over R_{0}} -{ 1 \over \mu } ={ 1 \over \mu _{0}} -{ 1 \over \mu } < 0\ \ \mbox{ for all small}\ d_{I},}$$

that is, R 0 > μ. Thanks to the arbitrariness of μ, we obtain

$$\displaystyle\begin{array}{rcl} \liminf _{d_{I}\rightarrow 0}R_{0} \geq \delta =\max _{\overline{\varOmega }}\left \{{\int _{0}^{\omega }\beta (x,t)dt \over \int _{0}^{\omega }\gamma (x,t)dt}\right \}.& &{}\end{array}$$
(13.28)

On the other hand, by taking μ > δ and noticing

$$\displaystyle{\int _{0}^{\omega }\Big({1 \over \mu } \beta (x,t) -\gamma (x,t)\Big)dt < 0\ \ \mbox{ for all}\ x \in \overline{\varOmega },}$$

we see from the previous claim (ii) that

$$\displaystyle{{ 1 \over R_{0}} -{ 1 \over \mu } ={ 1 \over \mu _{0}} -{ 1 \over \mu } > 0\ \ \mbox{ for all small}\ d_{I},}$$

which implies

$$\displaystyle\begin{array}{rcl} \limsup _{d_{I}\rightarrow 0}R_{0} \leq \delta =\max _{\overline{\varOmega }}\left \{{\int _{0}^{\omega }\beta (x,t)dt \over \int _{0}^{\omega }\gamma (x,t)dt}\right \}.& &{}\end{array}$$
(13.29)

Combining (13.28) and (13.29), we derive the assertion (d).

Finally, we verify (e). By the choice of β and γ, we easily see from (a), (c), and (d) that

$$\displaystyle{R_{0}(d_{I}) >{ \int _{0}^{\omega }\int _{\varOmega }\beta (x,t)dxdt \over \int _{0}^{\omega }\int _{\varOmega }\gamma (x,t)dxdt}\ \ \ \ \mbox{ for all}\ d_{I},}$$

and

$$\displaystyle{\lim _{d_{I}\rightarrow 0}R_{0}(d_{I}) =\lim _{d_{I}\rightarrow \infty }R_{0}(d_{I}) ={ \int _{0}^{\omega }\int _{\varOmega }\beta (x,t)dxdt \over \int _{0}^{\omega }\int _{\varOmega }\gamma (x,t)dxdt}.}$$

As a consequence, one can find 0 < d I 1 < d I 2 such that R 0(d I 1) = R 0(d I 2).

In the rest of this section, we present a bang-bang type configuration optimization result for the basic reproduction ratio R 0 in the case where the maximum, the minimum, and the average of the function γ(x, t) are fixed while β(x, t) ≡ β is a fixed positive constant.

Theorem 13.1.2.

 Assume that β(x,t) ≡β is a fixed positive constant. Let

$$\displaystyle\begin{array}{rcl} & & \varUpsilon =\Big\{\gamma \in L^{\infty }(\varOmega \times (0,\omega )):\ \gamma _{ {\ast}}\leq \gamma (x,t) \leq \gamma ^{{\ast}}\ \,\,\mbox{ a.e.}\,\,x,t, {}\\ & & \gamma (x,t)\,\,\mbox{ is}\ \omega \mbox{ -periodic in}\ t,\ { 1 \over \omega \vert \varOmega \vert }\int _{0}^{\omega }\int _{ \varOmega }\gamma (x,t)dxdt = \mathcal{N}\Big\}, {}\\ \end{array}$$

where \(\gamma _{{\ast}}\geq 0\), \(\gamma ^{{\ast}} > 0\) , and \(\mathcal{N} > 0\) are given constants such that the set Υ is nonempty. Then the following statements are valid:

  1. (a)

    The function R 0 = R 0 (γ) reaches its maximum over Υ when γ is of the form \(\gamma (x,t) =\gamma _{{\ast}}\chi _{A} +\gamma ^{{\ast}}\chi _{(\varOmega \times (0,\omega ))\setminus A}\) , where A is a measurable subset of Ω × (0,ω) such that \(\gamma _{{\ast}}\vert A\vert +\gamma ^{{\ast}}\vert (\varOmega \times (0,\omega ))\setminus A\vert =\omega \vert \varOmega \vert \mathcal{N}\) , and χ A is the characteristic function over A.

  2. (b)

    The function R 0 = R 0 (γ) reaches its minimum \({ \beta \over \mathcal{N}}\) over Υ only when γ ∈Υ is an x-independent function.

Proof.

By the standard compactness analysis and the eigenvalue theory, it is easily seen that R 0 = R 0(γ) is a continuous function of γ in the sense that if γ n is a bounded sequence in L ∞(Ω × (0, ω)), then there exists a subsequence \(\gamma _{n_{k}}\) of γ n such that \(R_{0}(\gamma _{n_{k}}) \rightarrow R_{0}(\gamma )\) for some γ ∈ L ∞(Ω × (0, ω)). It is also well known that \({ \beta \over R_{ 0}(\gamma )}\) is concave with respect to γ. Thus, the arguments in the proof of [254, Lemma 7.2 and Theorem 3.11], as applied to (13.9) with μ = R 0, imply that assertion (a) holds.

We now verify (b). By virtue of (13.9) and Lemma 13.1.1, it follows that if \(\gamma (x,t) \equiv \mathcal{N} \in \varUpsilon\), then \(R_{0}(\gamma ) ={ \beta \over \mathcal{N}}\) and 1 is an associated positive eigenfunction. For any given γ ∈ Υ, let ψ 0 be the positive eigenfunction associated with R 0(γ). Since ψ 0 > 0 on \(\overline{\varOmega } \times [0,\omega ]\), we may assume that ψ 0 > 1 on \(\overline{\varOmega } \times [0,\omega ]\). Thus, (R 0(γ), ψ 0) satisfies (13.9) with μ = R 0(γ).

Let \(\gamma ^{0} =\gamma -\mathcal{N}\) and \(\psi ^{0} =\psi _{0} - 1\). Clearly, \(\psi ^{0} > 0\) on \(\overline{\varOmega } \times [0,\omega ]\), and \((\gamma ^{0},\psi ^{0})\) satisfies

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} (\psi ^{0})_{ t} - d_{I}\varDelta \psi ^{0} +\gamma ^{0}(x,t)\psi ^{0} =\Big ({ \beta \over R_{0}(\gamma )} -\mathcal{N}\Big)\psi ^{0},&x \in \varOmega,\ t > 0, \\ {\partial \psi ^{0} \over \partial \nu } = 0, &x \in \partial \varOmega,\ t > 0, \\ \psi ^{0}(x,0) =\psi ^{0}(x,\omega ), &x \in \varOmega.\end{array} & &{}\end{array}$$
(13.30)

Dividing (13.30) by \(\psi ^{0}\) and integrating the resulting equation over Ω × (0, ω), we obtain

$$\displaystyle{-d_{I}\int _{0}^{\omega }\int _{ \varOmega }{\vert \nabla \psi ^{0}\vert ^{2} \over (\psi ^{0})^{2}} dxdt +\int _{ 0}^{\omega }\int _{ \varOmega }\gamma ^{0}dxdt ={ \beta \over R_{0}(\gamma )} -\mathcal{N}.}$$

Since \(\int _{0}^{\omega }\int _{\varOmega }\gamma ^{0}\,dxdt = 0\), it easily follows from the above identity that \(R_{0}(\gamma ) \geq { \beta \over \mathcal{N}}\), and \(R_{0}(\gamma ) ={ \beta \over \mathcal{N}}\) if and only if γ(x, t) ≡ γ(t).

13.2 Threshold Dynamics

In this section, we establish the threshold dynamical behavior of system (13.2)–(13.3) in terms of R 0. We start with the uniform bound of its nonnegative solutions.

Under the condition (13.3) (and so (13.4) holds), we can easily apply [150, Exercise 4 of Section 3.5] (or [6, Theorem 3.1]) to the second equation in (13.2) to derive the uniform bound of \(\|\overline{I}(\cdot,t)\|_{L^{\infty }(\varOmega )}\) for all t ≥ 0. In order to obtain a similar estimate for \(\|\overline{S}(\cdot,t)\|_{L^{\infty }(\varOmega )}\), we appeal to the theory developed in [214], which is a generalization of [6, Theorem 3.1]. The following result is a straightforward consequence of [214, Theorem 1 and Corollary 1].

Lemma 13.2.1.

Consider the parabolic system

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} {\partial u_{i} \over \partial t} - d_{i}\varDelta u_{i} = f_{i}(x,t,u),\ \ &x \in \varOmega,\ t > 0,\,i = 1,\cdots \,,\ell \\ {\partial u_{i} \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \\ u_{i}(x,0) = u_{i}^{0}(x),\ \ &x \in \varOmega,\end{array} & & {}\\ \end{array}$$

where u = (u 1 ,⋯ ,u ), \(u_{i}^{0} \in C(\overline{\varOmega }, \mathbb{R})\) , d i is a positive constant, i = 1,⋯ ,ℓ, and assume that, for each k = 1,⋯ ,ℓ, the functions f k satisfy the polynomial growth condition:

$$\displaystyle{\vert f_{k}(x,t,u)\vert \leq c_{1}\sum _{i=1}^{\ell}\vert u_{ i}\vert ^{\sigma } + c_{ 2}}$$

for some nonnegative constants c 1 and c 2 , and positive constant σ. Let p 0 be a positive constant such that \(p_{0} >{ m \over 2} \max \{0,\,(\sigma -1)\}\) and τ(u 0 ) be the maximal time of existence of the solution u corresponding to the initial data u 0 . Suppose that there exists a positive number C 1 = C 1 (u 0 ) such that \(\|u(\cdot,t)\|_{L^{p_{0}}(\varOmega )} \leq C_{1}\), \(\forall t \in [0,\tau (u^{0}))\) . Then the solution u exists for all time and there is a positive number C 2 = C 2 (u 0 ) such that \(\|u(\cdot,t)\|_{L^{\infty }(\varOmega )} \leq C_{2}\), \(\forall t \in [0,\infty )\) . Moreover, if there exist two nonnegative numbers ϱ and K 1 = K 1 (ϱ), independent of initial data, such that \(\|u(\cdot,t)\|_{L^{p_{0}}(\varOmega )} \leq K_{1}\), \(\forall t \in [\varrho,\infty )\) , then there is a positive number K 2 = K 2 (ϱ), independent of initial data, such that \(\|u(\cdot,t)\|_{L^{\infty }(\varOmega )} \leq K_{2}\), \(\forall t \in [\varrho,\infty )\).

By applying Lemma 13.2.1 with σ = p 0 = 1 and ϱ = 0 to system (13.2), we obtain the following result.

Lemma 13.2.2.

There exists a positive constant B, independent of the initial data \((S_{0},I_{0}) \in C(\overline{\varOmega }, \mathbb{R}_{+}^{2})\) satisfying condition (13.3) , such that for the corresponding unique solution \((\overline{S},\overline{I})\) of system (13.2) , we have

$$\displaystyle{\|\overline{S}(\cdot,t)\|_{L^{\infty }(\varOmega )} +\| \overline{I}(\cdot,t)\|_{L^{\infty }(\varOmega )} \leq B,\ \ \forall t \in [0,\infty ).}$$

Let

$$\displaystyle{Y:= \left \{(u,v) \in C(\overline{\varOmega }, \mathbb{R}_{+}^{2}): \quad \int _{\varOmega }(u(x) + v(x))dx = N\right \}}$$

and Y 0 = { (u, v) ∈ Y: v(x) ≢ 0}. We equip Y with the metric induced by the maximum norm. Then Y is a complete metric space and Y 0 is open in Y. Now we are ready to present the main result of this section, which gives the threshold dynamics of system (13.2)–(13.3).

Theorem 13.2.1.

The following statements are valid:

  1. (i)

    If R 0 < 1, then for any (S 0 ,I 0 ) ∈ Y, the solution \((\overline{S},\overline{I})\) of system (13.2)–(13.3) satisfies \(\lim _{t\rightarrow \infty }(\overline{S}(x,t),\overline{I}(x,t)) = (N/\vert \varOmega \vert,0)\) uniformly for \(x \in \overline{\varOmega }\).

  2. (ii)

    If R 0 > 1, then system (13.2)–(13.3) has at least one endemic ω-periodic solution, and there exists a constant η > 0 such that for any (S 0 ,I 0 ) ∈ Y 0 , the solution \((\overline{S},\overline{I})\) of system (13.2)–(13.3) satisfies

    $$\displaystyle{\liminf _{t\rightarrow \infty }\overline{S}(x,t) \geq \eta \quad \mbox{ and}\quad \liminf _{t\rightarrow \infty }\overline{I}(x,t) \geq \eta }$$

    uniformly for \(x \in \overline{\varOmega }\).

Proof.

We define an ω-periodic semiflow Φ(t): Y → Y by

$$\displaystyle{\varPhi (t)((S_{0},I_{0})) = \left(\overline{S}(\cdot,t,(S_{0},I_{0})),\overline{I}(\cdot,t,(S_{0},I_{0}))\right),\quad \forall (S_{0},I_{0}) \in Y,\,\,t \geq 0,}$$

where \((\overline{S}(x,t,(S_{0},I_{0})),\overline{I}(x,t,(S_{0},I_{0})))\) is the unique solution of system (13.2). Let P: = Φ(ω) be the Poincaré map associated with system (13.2) on Y. Note that Φ(t): Y → Y is compact for each t > 0. It then follows from Lemma 13.2.2 and Theorem 1.1.3 that P: Y → Y has a strong global attractor.

Given (S 0, I 0) ∈ Y, let ω(S 0, I 0) be the omega limit set of the forward orbit through (S 0, I 0) for P: Y → Y. Since \(\frac{\overline{S}}{\overline{S } +\overline{I}} \leq 1\), \(\overline{I}(x,t)\) satisfies

$$\displaystyle{\frac{\partial \overline{I}} {\partial t} - d_{I}\varDelta \overline{I} \leq (\beta (x,t) -\gamma (x,t))\overline{I},\quad x \in \varOmega,\,t > 0.}$$

In the case where R 0 < 1, we see from Lemma 13.1.2 that λ 0 > 0. This, together with the comparison principle, implies that \(\overline{I}(x,t) \rightarrow 0\) uniformly on \(\overline{\varOmega }\) as t → ∞. It then easily follows that \(\omega (S_{0},I_{0}) =\tilde{\omega } \times \{0\}\), where \(\tilde{\omega }\) is a compact and internally chain transitive set for the Poincaré map P 1 associated with the following ω-periodic system

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} \tilde{S}_{t} - d_{S}\varDelta \tilde{S} = 0,\ \ &x \in \varOmega,\ t > 0, \\ {\partial \tilde{S} \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \end{array} & &{}\end{array}$$
(13.31)

on the space \(Y _{1}:= \left \{u \in C(\overline{\varOmega }, \mathbb{R}_{+}):\,\,\int _{\varOmega }u(x)dx = N\right \}\) equipped with the uniform convergence topology. By a well-known result on the heat equation in a bounded domain (see, e.g., [255, Section 1.1.2]), we conclude that the constant \({N \over \vert \varOmega \vert }\) is a globally asymptotically stable steady state for system (13.31) on Y 1. In view of Theorem 1.2.1, we obtain \(\tilde{\omega }=\{{ N \over \vert \varOmega \vert }\}\), and hence \(\omega (S_{0},I_{0}) =\{ ({N \over \vert \varOmega \vert },0)\}\). This implies that assertion (i) holds true.

To prove assertion (ii), we use similar arguments to those in the proof of [430, Theorem 3.1] on a periodic predator–prey reaction–diffusion system. Let ∂ Y 0: = Y ∖ Y 0 = { (S 0, I 0) ∈ Y : I 0 ≡ 0}. Clearly, Y 0 is convex, Φ(t)Y 0 ⊂ Y 0, and Φ(t)∂ Y 0 ⊂ ∂ Y 0 for all t ≥ 0. For any (S 0, I 0) ∈ ∂ Y 0, \(\overline{I}(x,t) \equiv 0\), and hence \(\overline{S}(x,t)\) is a solution of system (13.31). It then follows that \(\overline{S}(x,t) \rightarrow { N \over \vert \varOmega \vert }\) uniformly on \(\overline{\varOmega }\) as t → ∞. This implies that \(\cup _{(S_{0},I_{0})\in \partial Y _{0}}\omega (S_{0},I_{0}) =\{ ({N \over \vert \varOmega \vert },0)\}\), where ω(S 0, I 0) is the omega limit set of the forward orbit through (S 0, I 0) for P: Y → Y. For simplicity, we denote \(M = ({N \over \vert \varOmega \vert },0)\). Then {M} is a compact and isolated invariant set for P: ∂ Y 0 → ∂ Y 0. Let \(X:= C(\overline{\varOmega }, \mathbb{R})\) and \(X_{+}:= C(\overline{\varOmega }, \mathbb{R}_{+})\). Then (X, X +) is an ordered Banach space with the maximum norm ∥ ⋅ ∥  X . We further have the following claim.

Claim.

There exists a real number δ > 0 such that limsup n → ∞  ∥ P n(S 0, I 0) − M ∥  X×X  ≥ δ for all (S 0, I 0) ∈ Y 0.

Indeed, let λ 0 be defined as in the preceding section. Under the assumption R 0 > 1, Lemma 13.1.2 implies that λ 0 < 0. It then follows that we can choose a small positive number ε 0 such that λ 0(ε 0) < 0, where λ 0(ε 0) is the unique principal eigenvalue of the periodic-parabolic problem

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} \varphi _{t} - d_{I}\varDelta \varphi ={ \beta (x,t)(N/\vert \varOmega \vert -\epsilon _{0}) \over N/\vert \varOmega \vert + 2\epsilon _{0}} \varphi -\gamma (x,t)\varphi +\lambda \varphi,\ \ &x \in \varOmega,\ t > 0, \\ {\partial \varphi \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \\ \varphi (x, 0) =\varphi (x,\omega ),\ \ \ &x \in \varOmega.\end{array} & &{}\end{array}$$
(13.32)

According to the continuous dependence of solutions on the initial data, we observe that

$$\displaystyle{\lim _{(S_{0},I_{0})\rightarrow M}\varPhi (t)(S_{0},I_{0}) =\lim _{(S_{0},I_{0})\rightarrow M}(\overline{S}(\cdot,t),\overline{I}(\cdot,t)) = M}$$

in X × X uniformly for t ∈ [0, ω]. Thus, there exists a real number δ 0 = δ 0(ε 0) > 0 such that for any (S 0, I 0) ∈ B(M, δ 0), an open ball in X × X centered at M and with the radius δ 0, we have

$$\displaystyle{\|\overline{S}(\cdot,t) - N/\vert \varOmega \vert \|_{X} +\| \overline{I}(\cdot,t)\|_{X} <\epsilon _{0},\,\forall t \in [0,\omega ].}$$

Assume, for the sake of contradiction, that the claim above does not hold for δ = δ 0. Since \(P^{n}Y _{0} \subset Y _{0},\,\forall n \geq 0\), it then follows that there exists \((S_{0}^{{\ast}},I_{0}^{{\ast}}) \in B(M,\delta _{0}) \cap Y _{0}\) such that \(P^{n}(S_{0}^{{\ast}},I_{0}^{{\ast}}) =\varPhi (n\omega )(S_{0}^{{\ast}},I_{0}^{{\ast}}) \in B(M,\delta _{0}),\,\forall n \geq 1\). For any t ≥ 0, let t = n ω + t′ with t′ ∈ [0, ω) and n = [t∕ω] being the integer part of t∕ω. Note that \((\overline{S}^{{\ast}}(\cdot,t),\overline{I}^{{\ast}}(\cdot,t)):=\varPhi (t)((S_{0}^{{\ast}},I_{0}^{{\ast}})) =\varPhi (t')(\varPhi (n\omega )(S_{0}^{{\ast}},I_{0}^{{\ast}}))\). Thus, we have

$$\displaystyle\begin{array}{rcl} \|\overline{S}^{{\ast}}(\cdot,t) - N/\vert \varOmega \vert \|_{ X} +\| \overline{I}^{{\ast}}(\cdot,t)\|_{ X} <\epsilon _{0},\ \ \forall t \in [0,\infty ).& &{}\end{array}$$
(13.33)

Let φ 0 be a positive eigenvector corresponding to λ 0(ε 0) in (13.32). Clearly, φ 0 > 0 on \(\overline{\varOmega } \times \mathbb{R}\). In particular, φ 0(⋅ , 0) ∈ int(X +). On the other hand, as \((S_{0}^{{\ast}},I_{0}^{{\ast}}) \in Y _{0}\), the strong maximum principle for parabolic equations shows that \(\overline{S}^{{\ast}}(\cdot,t),\,\overline{I}^{{\ast}}(\cdot,t) \in \mathrm{int}(X_{+}) \times \mathrm{int}(X_{+})\) for any t > 0. Therefore, without loss of generality, we may assume that \((S_{0}^{{\ast}},I_{0}^{{\ast}}) \in \mathrm{int}(X_{+}) \times \mathrm{int}(X_{+})\). So one can find a small positive number c ∗ such that \(I_{0}^{{\ast}}\geq c^{{\ast}}\varphi _{0}(\cdot,0)\) in X. By means of (13.33) and the choice of δ 0, it follows that \(\overline{I}^{{\ast}}(x,t)\) is a super-solution to the problem

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} w_{t} - d_{I}\varDelta w ={ \beta (x,t)(N/\vert \varOmega \vert -\epsilon _{0}) \over N/\vert \varOmega \vert + 2\epsilon _{0}} w -\gamma (x,t)w,\ \ &x \in \varOmega,\ t > 0, \\ {\partial w \over \partial \nu } = 0,\ \ \ &x \in \partial \varOmega,\ t > 0, \\ w(x, 0) = c^{{\ast}}\varphi _{0}(x, 0),\ \ \ &x \in \varOmega.\end{array} & &{}\end{array}$$
(13.34)

Furthermore, it is easy to see that \(c^{{\ast}}e^{-\lambda _{0}(\epsilon _{0})t}\varphi _{0}(x,t)\) is the unique solution to problem (13.34). By the parabolic comparison principle, we deduce

$$\displaystyle{\overline{I}^{{\ast}}(x,t) \geq c^{{\ast}}e^{-\lambda _{0}(\epsilon _{0})t}\varphi _{ 0}(x,t) \rightarrow \infty \ \ \mbox{ uniformly for}\ x \in \overline{\varOmega },\ \ \mbox{ as}\ t \rightarrow \infty,}$$

which contradicts (13.33). Thus, the claim holds true for δ = δ 0.

The above claim implies that M is an isolated invariant set for P: Y → Y, and W s(M) ∩ Y 0 = ∅, where W s(M) is the stable set of M for P: Y → Y. As a result, Theorem 1.3.1 and Remark 1.3.1 assert that P is uniformly persistent with respect to (Y, ∂ Y 0). Further, Theorem 1.3.10 implies that P has a fixed point \(\phi ^{{\ast}}\) in Y 0, and hence, system (13.2) has an ω-periodic solution \(\varPhi (t)\phi ^{{\ast}}\) in Y 0. In view of Theorem 1.3.10, we further see that P: Y 0 → Y 0 has a global attractor A 0. Clearly, \(\phi ^{{\ast}}\in A_{0}\). Let B 0:=\(\bigcup _{t\in [0,\omega ]}\varPhi (t)A_{0}\). Then B 0 ⊂ Y 0, and Theorem 3.1.1 implies that \(\lim \limits _{t\rightarrow \infty }d(\varPhi (t)\phi,B_{0}) = 0\) for all ϕ ∈ Y 0, where d is the norm-induced distance in X × X. Since A 0 ⊂ Y 0 and A 0 = P(A 0) = Φ(ω)A 0, we have A 0 ⊂ int(X +) ×int(X +), and hence B 0 ⊂ int(X +) ×int(X +). Obviously, \(\varPhi (t)\phi ^{{\ast}}\in B_{0}\), and so \(\varPhi (t)\phi ^{{\ast}}\) is a positive ω-periodic solution of system (13.2). By virtue of the compactness and global attractiveness of B 0 for Φ(t) in Y 0, we conclude that there exists η > 0 such that \(\liminf \limits _{t\rightarrow \infty }\varPhi (t)\phi \geq (\eta,\eta )\) for all ϕ ∈ Y 0, which implies the desired uniform persistence.

As a consequence of Lemmas 13.1.3 and 13.1.4, and Theorems 13.1.1 and 13.2.1, we have the following result.

Theorem 13.2.2.

The following statements are valid:

  1. (i)

    The disease-free constant solution (N∕|Ω|,0) is globally attractive for system (13.2)–(13.3) if one of the following conditions holds:

    1. (i-a)

      β(x,t) −γ(x,t) = h(t) and ∫ 0 ω h(t)dt < 0;

    2. (i-b)

      β(x,t) −γ(x,t) = h(x) and either h ≤ 0,≢0 on \(\overline{\varOmega }\) or \(\max _{\overline{\varOmega }}h(x) > 0\) and ∫ Ω h(x)dx < 0 but \(d_{I} > d_{I}^{{\ast}}\) , where \(d_{I}^{{\ast}}\) is given in Lemma 13.1.4;

    3. (i-c)

      ∫ 0 ω ∫ Ω (β(x,t) −γ(x,t))dxdt < 0 and d I is sufficiently large;

    4. (i-d)

      \(\int _{0}^{\omega }\max _{x\in \overline{\varOmega }}(\beta (x,t) -\gamma (x,t))dt \leq 0\) and β(x,t) −γ(x,t) nontrivially depends on the spatial variable.

  2. (ii)

    The uniform persistence holds for system (13.2)–(13.3) if one of the following conditions holds:

    1. (ii-a)

      β(x,t) −γ(x,t) = h(t) and ∫ 0 ω h(t)dt > 0;

    2. (ii-b)

      β(x,t) −γ(x,t) = h(x), either h≢0 and ∫ Ω h(x)dx ≥ 0 or \(\max _{\overline{\varOmega }}h(x) > 0\) and ∫ Ω h(x)dx < 0 but \(0 < d_{I} < d_{I}^{{\ast}}\) ;

    3. (ii-c)

      ∫ 0 ω ∫ Ω (β(x,t) −γ(x,t))dxdt > 0;

    4. (ii-d)

      \(\max _{x\in \overline{\varOmega }}\Big\{{\int _{0}^{\omega }\beta (x,t)dt \over \int _{0}^{\omega }\gamma (x,t)dt}\Big\} > 1\) and d I is sufficiently small.

13.3 Global Attractivity

The uniqueness and global attractivity of the endemic ω-periodic solution to reaction-diffusion system (13.2) is a challenging problem. In [9], it was conjectured that the unique endemic equilibrium of the autonomous system (13.1) is globally stable. A partial answer to this problem was given in [275], but it remains unsolved in the general case. In this section, we address this issue for periodic system (13.2) in two special cases.

When no diffusion is taken into account, by assuming the total population number is unchanged and β(x, t) ≡ β(t), γ(x, t) ≡ γ(t) are ω-periodic continuous functions, we obtain the following ordinary differential system:

$$\displaystyle\begin{array}{rcl} \begin{array}{ll} \overline{S}_{t} = -{\beta (t)\overline{S}\,\overline{I} \over \overline{S} + \overline{I}} +\gamma (t)\overline{I},\ \ &t > 0, \\ \overline{I}_{t} ={ \beta (t)\overline{S}\,\overline{I} \over \overline{S} + \overline{I}} -\gamma (t)\overline{I},\ \ &t > 0, \\ \overline{S} + \overline{I} = N,\ \ \ &t \geq 0, \\ \overline{S}(0) = S_{0} \geq 0,\ \ \overline{I}(0) = I_{0} > 0.\end{array} & &{}\end{array}$$
(13.35)

An analysis as in Section 13.2 shows that the basic reproduction ratio is

$$\displaystyle{R_{0} = \frac{\int _{0}^{\omega }\beta (t)dt} {\int _{0}^{\omega }\gamma (t)dt}.}$$

For system (13.35), we have a threshold-type result on its global dynamics. Indeed, it is easy to see that \(\overline{I}(t)\) satisfies the scalar ordinary differential equation:

$$\displaystyle\begin{array}{rcl} \frac{d\overline{I}} {dt} = \left (\frac{\beta (t)(N - \overline{I})} {N} -\gamma (t)\right )\overline{I},\,\,\,t \geq 0;\quad \overline{I}(0) = I_{0} \in [0,N].& &{}\end{array}$$
(13.36)

By Theorem 3.1.2, it follows that the zero solution is globally asymptotically stable for system (13.36) in [0, N] in the case where R 0 ≤ 1; and system (13.36) has a globally asymptotically stable positive ω-periodic solution I (t) in (0, N] in the case where R 0 > 1. Biologically, this implies that the infectious disease dies out if R 0 ≤ 1 and it persists if R 0 > 1.
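
A few lines of numerics illustrate this dichotomy for (13.36) (our own sketch; the choice β(t) = b 0(1 + 0.5 sin 2πt∕ω), γ(t) ≡ 1, and ω = 1 is an assumption, under which R 0 = b 0):

import math

def simulate(b0, N=1.0, I0=0.1, omega=1.0, dt=1.0e-3, periods=200):
    # Explicit Euler for (13.36) with beta(t) = b0*(1 + 0.5*sin(2*pi*t/omega)) and gamma(t) = 1,
    # so that R_0 = (integral of beta over one period)/(integral of gamma) = b0.
    I, t = I0, 0.0
    for _ in range(int(periods * omega / dt)):
        beta = b0 * (1.0 + 0.5 * math.sin(2.0 * math.pi * t / omega))
        I += dt * (beta * (N - I) / N - 1.0) * I
        t += dt
    return I

print("R_0 = 0.8: infected density after 200 periods =", simulate(0.8))   # tends to 0 (extinction)
print("R_0 = 1.6: infected density after 200 periods =", simulate(1.6))   # settles near the periodic state (persistence)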

Returning to the reaction–diffusion system (13.2)–(13.3), we are able to obtain the global attractivity of the endemic ω-periodic solution in two special cases. The first case is where the diffusion rate of the susceptible individuals equals that of the infected individuals (i.e., d S  = d I ). In this situation, we can give a complete description of the global attractivity of the disease-free constant solution and the endemic ω-periodic solution.

Theorem 13.3.1.

Assume that d S = d I . If R 0 ≤ 1, then (N∕|Ω|,0) is globally attractive for system (13.2)–(13.3) ; If R 0 > 1, then system (13.2)–(13.3) admits a globally attractive endemic ω-periodic solution.

Proof.

In the case where d S  = d I , \(\overline{N}(x,t):= \overline{S}(x,t) + \overline{I}(x,t)\) is a solution of system (13.31) on Y 1, and hence \(\lim _{t\rightarrow \infty }\overline{N}(x,t) = \frac{N} {\vert \varOmega \vert }\) uniformly for \(x \in \overline{\varOmega }\). It follows that \(\overline{I}(x,t)\) satisfies the following nonautonomous equation

$$\displaystyle{ \frac{\partial \overline{I}} {\partial t} - d_{I}\varDelta \overline{I} = \left [\beta (x,t)\left (1 - \frac{\overline{I}} {\overline{N}(x,t)}\right ) -\gamma (x,t)\right ]\overline{I},\,\,x \in \varOmega,\,t > 0, }$$
(13.37)

which is asymptotic to a periodic equation

$$\displaystyle{ \frac{\partial \overline{I}} {\partial t} - d_{I}\varDelta \overline{I} = \left [\beta (x,t)\left (1 - \frac{\vert \varOmega \vert } {N}\overline{I}\right ) -\gamma (x,t)\right ]\overline{I},\,\,x \in \varOmega,\,t > 0. }$$
(13.38)

By Lemma 13.1.2 and Theorem 3.2.2, as applied to the asymptotically periodic system (13.37), it follows that the desired threshold dynamics holds for system (13.2)–(13.3) in terms of R 0.

Next, we consider the case where β(x, t) = r γ(x, t) for some real number r ∈ (0, ∞). It is easy to see that when r > 1,

$$\displaystyle{(\tilde{S},\tilde{I}) =\Big ({1 \over r}{ N \over \vert \varOmega \vert },\ { r - 1 \over r}{ N \over \vert \varOmega \vert }\Big)}$$

is an endemic ω-periodic solution of system (13.2)–(13.3). Since system (13.2) is periodic, we may not be able to use the LaSalle invariance principle type argument to prove the global attractivity of \((\tilde{S},\tilde{I})\). Instead, we will employ the following result, which comes from [268, Lemma 1].

Lemma 13.3.1.

Let a and b be positive constants. Assume that ϕ, ψ ∈ C 1 ([a,∞)), ψ ≥ 0, and ϕ is bounded from below on [a,∞). If ϕ′(t) ≤−bψ(t) and ψ′(t) ≤ K on [a,∞) for some positive constant K, then limt→∞ ψ(t) = 0.

We are now in a position to prove the following threshold-type result on the global dynamics of system (13.2)–(13.3).

Theorem 13.3.2.

Assume that β(x,t) = rγ(x,t) on \(\overline{\varOmega } \times \mathbb{R}\) for some constant r ∈ (0,∞). If r < 1, then (N∕|Ω|,0) is globally attractive for system (13.2)–(13.3) ; If r > 1, then \((\tilde{S},\tilde{I})\) is globally attractive for system (13.2)–(13.3).

Proof.

From (13.9) and Lemma 13.1.1, it is easy to see that R 0 = r. In the case where r < 1, Theorem 13.2.1 (i) implies that (N∕ | Ω | , 0) is globally attractive. It remains to handle the case where r > 1. For any given positive solution \((\overline{S}(x,t),\overline{I}(x,t))\) of (13.2)–(13.3), we follow [275] to define the function

$$\displaystyle\begin{array}{rcl} H(t):=\int _{\varOmega }\left (\overline{S}(x,t) +{ \tilde{S}^{2} \over \overline{S}(x,t)} + \overline{I}(x,t) +{ \tilde{I}^{2} \over \overline{I}(x,t)}\right )\,dx.& &{}\end{array}$$
(13.39)

It then follows that

$$\displaystyle\begin{array}{rcl} \frac{dH(t)} {dt} & =& \int _{\varOmega }\left (\overline{S}_{t} + \overline{I}_{t}\right )dx -\int _{\varOmega }\left (\frac{\widetilde{S}^{2} \cdot \overline{S}_{t}} {\overline{S}^{2}} + \frac{\widetilde{I}^{2} \cdot \overline{I}_{t}} {\overline{I}^{2}} \right )dx {}\\ & =& -\int _{\varOmega }\frac{\widetilde{S}^{2}} {\overline{S}^{2}}\left (d_{S}\bigtriangleup \overline{S} - \frac{\beta \overline{S} \cdot \overline{I}} {\overline{S} + \overline{I}} +\gamma \overline{I}\right )dx -\int _{\varOmega }\frac{\widetilde{I}^{2}} {\overline{I}^{2}}\left (d_{I}\bigtriangleup \overline{I} + \frac{\beta \overline{S} \cdot \overline{I}} {\overline{S} + \overline{I}} -\gamma \overline{I}\right )dx {}\\ & =& \,H_{1}(t) + H_{2}(t), {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} H_{1}(t)& =& -\int _{\varOmega }\left (d_{S}\frac{\widetilde{S}^{2}} {\overline{S}^{2}} \cdot \bigtriangleup \overline{S} + d_{I}\frac{\widetilde{I}^{2}} {\overline{I}^{2}} \cdot \bigtriangleup \overline{I}\right )dx {}\\ & =& -\int _{\varOmega }\left (\frac{2d_{S}\widetilde{S}^{2}} {\overline{S}^{3}} \cdot \vert \nabla \overline{S}\vert ^{2} + \frac{2d_{I}\widetilde{I}^{2}} {\overline{I}^{3}} \cdot \vert \nabla \overline{I}\vert ^{2}\right )dx {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} H_{2}(t)& =& -\int _{\varOmega }\left \{\left (\frac{\widetilde{I}^{2}} {\overline{I}^{2}} -\frac{\widetilde{S}^{2}} {\overline{S}^{2}}\right ) \cdot \left ( \frac{\beta \overline{S} \cdot \overline{I}} {\overline{S} + \overline{I}} -\gamma \overline{I}\right )\right \}dx {}\\ & =& -\int _{\varOmega }\left \{\beta \overline{I} \cdot \left (\frac{\widetilde{I}^{2}} {\overline{I}^{2}} -\frac{\widetilde{S}^{2}} {\overline{S}^{2}}\right ) \cdot \left ( \frac{\overline{S}} {\overline{S} + \overline{I}} -\frac{1} {r}\right )\right \}dx {}\\ & =& -\int _{\varOmega }\left \{\beta \overline{I} \cdot \left (\frac{\widetilde{I}^{2}} {\overline{I}^{2}} -\frac{\widetilde{S}^{2}} {\overline{S}^{2}}\right ) \cdot \left ( \frac{\overline{S}} {\overline{S} + \overline{I}} - \frac{\widetilde{S}} {\widetilde{S} +\widetilde{ I}}\right )\right \}dx {}\\ & =& -\int _{\varOmega }\left \{\beta \overline{S} \cdot \overline{I}^{2} \cdot \left (\frac{\widetilde{I}^{2}} {\overline{I}^{2}} -\frac{\widetilde{S}^{2}} {\overline{S}^{2}}\right ) \cdot \left ( \frac{ \frac{\widetilde{I}} {\overline{I}} - \frac{\widetilde{S}} {\overline{S}}} {(\overline{S} + \overline{I}) \cdot (\widetilde{S} +\widetilde{ I})}\right )\right \}dx {}\\ & =& -\int _{\varOmega }\left \{ \frac{\beta \overline{S} \cdot \overline{I}^{2}} {(\overline{S} + \overline{I}) \cdot (\widetilde{S} +\widetilde{ I})} \cdot \left (\frac{\widetilde{I}} {\overline{I}} + \frac{\widetilde{S}} {\overline{S}}\right ) \cdot \left (\frac{\widetilde{I}} {\overline{I}} -\frac{\widetilde{S}} {\overline{S}}\right )^{2}\right \}dx. {}\\ \end{array}$$
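The fourth equality above relies on the elementary identity

$$\displaystyle{ \frac{\overline{S}} {\overline{S} + \overline{I}} - \frac{\widetilde{S}} {\widetilde{S} +\widetilde{ I}} = \frac{\overline{S}\,\widetilde{I} -\widetilde{ S}\,\overline{I}} {(\overline{S} + \overline{I})(\widetilde{S} +\widetilde{ I})} = \frac{\overline{S}\,\overline{I}\,\Big(\frac{\widetilde{I}} {\overline{I}} -\frac{\widetilde{S}} {\overline{S}}\Big)} {(\overline{S} + \overline{I})(\widetilde{S} +\widetilde{ I})},}$$

and the last equality uses the factorization \(\frac{\widetilde{I}^{2}} {\overline{I}^{2}} -\frac{\widetilde{S}^{2}} {\overline{S}^{2}} = \Big(\frac{\widetilde{I}} {\overline{I}} + \frac{\widetilde{S}} {\overline{S}}\Big)\Big(\frac{\widetilde{I}} {\overline{I}} -\frac{\widetilde{S}} {\overline{S}}\Big)\).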

Thus, we obtain

$$\displaystyle{\left.\begin{array}{ll} {dH(t) \over dt} & = -\int _{\varOmega }\Big\{d_{S}{2\tilde{S}^{2} \over \overline{S}^{3}} \vert \nabla \overline{S}\vert ^{2} + d_{ I}{2\tilde{I}^{2} \over \overline{I}^{3}} \vert \nabla \overline{I}\vert ^{2} \\ & \ \ \ \ \ \ +{ \beta (x,t)\overline{S}\,\overline{I}^{2} \over (\tilde{S} +\tilde{ I})(\overline{S} + \overline{I})}\Big({\tilde{S} \over \overline{S}} +{ \tilde{I} \over \overline{I}}\Big)\Big({\tilde{S} \over \overline{S}} -{\tilde{I} \over \overline{I}}\Big)^{2}\Big\}\,dx. \end{array} \right.}$$

In view of Lemma 13.2.2 and Theorem 13.2.1 (ii), there exist positive constants C 0 and T 0 such that

$$\displaystyle{{dH(t) \over dt} \leq -C_{0}\int _{\varOmega }\Big\{\vert \nabla \overline{S}\vert ^{2} + \vert \nabla \overline{I}\vert ^{2} +\Big ({\tilde{S} \over \overline{S}} -{\tilde{I} \over \overline{I}}\Big)^{2}\Big\}\,dx =: -\psi (t),\ \ \forall t \geq T_{ 0}.}$$

By the standard Hölder regularity theory for parabolic equations (see, e.g., [126, Theorem 9]) and the embedding theorems (see, e.g., [209, Lemma II.3.3]; see also the proof of Theorems A1 and A2 of [39]), together with Lemma 13.2.2 and Theorem 13.2.1 (ii), it is easy to see that ψ′(t) is bounded on \([T_{0},\infty )\). Thus, Lemma 13.3.1 implies that ψ(t) → 0 as t →∞, and hence we have

$$\displaystyle{ \lim _{t\rightarrow \infty }\int _{\varOmega }\Big(\vert \nabla \overline{S}\vert ^{2} + \vert \nabla \overline{I}\vert ^{2}\Big)\,dx = 0 }$$
(13.40)

and

$$\displaystyle{ \lim _{t\rightarrow \infty }\int _{\varOmega }\left \vert (r - 1)\overline{S}(x,t) -\overline{I}(x,t)\right \vert dx = 0. }$$
(13.41)

From (13.41) and (13.4), it follows that

$$\displaystyle{ \lim _{t\rightarrow \infty }\frac{1} {\vert \varOmega \vert }\int _{\varOmega }\overline{S}(x,t)dx =\tilde{ S},\ \ \lim _{t\rightarrow \infty }\frac{1} {\vert \varOmega \vert }\int _{\varOmega }\overline{I}(x,t)dx =\tilde{ I}. }$$
(13.42)
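Indeed, integrating the sum of the two equations in (13.2) over Ω and using the Neumann boundary conditions shows that \(\int _{\varOmega }\big(\overline{S}(x,t) + \overline{I}(x,t)\big)\,dx\) is constant in time, and by (13.4) this constant equals N. On the other hand, (13.41) yields

$$\displaystyle{\lim _{t\rightarrow \infty }\Big((r - 1)\int _{\varOmega }\overline{S}(x,t)\,dx -\int _{\varOmega }\overline{I}(x,t)\,dx\Big) = 0.}$$

Combining the two relations gives \(\int _{\varOmega }\overline{S}(x,t)\,dx \rightarrow \frac{N} {r} = \vert \varOmega \vert \tilde{S}\) and \(\int _{\varOmega }\overline{I}(x,t)\,dx \rightarrow \frac{(r-1)N} {r} = \vert \varOmega \vert \tilde{I}\), which is exactly (13.42).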

Let us recall the well-known Poincaré inequality:

$$\displaystyle{\mu _{1}\int _{\varOmega }(g -\hat{ g})^{2}\,dx \leq \int _{\varOmega }\vert \nabla g\vert ^{2}\,dx,\quad \forall g \in H^{1}(\varOmega ),}$$

where \(\hat{g} = \frac{1} {\vert \varOmega \vert }\int _{\varOmega }g(x)dx\) and μ 1 is the first positive eigenvalue of the Laplacian operator −Δ subject to the homogeneous Neumann boundary condition on ∂ Ω. As a consequence, by Hölder's inequality, we have

$$\displaystyle{\int _{\varOmega }\vert g -\hat{ g}\vert \,dx \leq \Big (\frac{\vert \varOmega \vert } {\mu _{1}} \Big)^{1/2}\Big(\int _{\varOmega }\vert \nabla g\vert ^{2}\,dx\Big)^{1/2},\ \ \forall g \in H^{1}(\varOmega ).}$$

This, in conjunction with (13.40) and (13.42), gives rise to

$$\displaystyle{ \lim _{t\rightarrow \infty }\int _{\varOmega }(\vert \overline{S}(x,t) -\tilde{ S}\vert + \vert \overline{I}(x,t) -\tilde{ I}\vert )dx = 0. }$$
(13.43)
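In more detail, applying the last inequality to \(g = \overline{S}(\cdot,t)\) and using the triangle inequality, we obtain

$$\displaystyle{\int _{\varOmega }\vert \overline{S}(x,t) -\tilde{ S}\vert \,dx \leq \Big(\frac{\vert \varOmega \vert } {\mu _{1}} \Big)^{1/2}\Big(\int _{\varOmega }\vert \nabla \overline{S}\vert ^{2}\,dx\Big)^{1/2} + \Big\vert \int _{\varOmega }\overline{S}(x,t)\,dx -\vert \varOmega \vert \tilde{S}\Big\vert,}$$

where both terms on the right-hand side tend to zero by (13.40) and (13.42); the same estimate holds for \(\overline{I}(\cdot,t)\) with \(\tilde{I}\) in place of \(\tilde{S}\), and (13.43) follows.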

Let X, Φ(t), P and Y 0 be defined as in the proof of Theorem 13.2.1. For any given ϕ ∈ Y 0, let ω(ϕ) be the omega-limit set of the forward orbit through ϕ for the discrete-time semiflow {P n} n ≥ 0. It then follows that for any ψ = (ψ 1, ψ 2) ∈ ω(ϕ), there exists a sequence \(n_{k} \rightarrow \infty \) such that \(\lim _{k\rightarrow \infty }P^{n_{k}}(\phi ) =\lim _{ k\rightarrow \infty }\varPhi (n_{k}\omega )\phi =\psi\) in X × X. Letting \((\overline{S}(x,t),\overline{I}(x,t)) = [\varPhi (t)\phi ](x)\) and t = n k ω in (13.43), we obtain

$$\displaystyle{\int _{\varOmega }(\vert \psi _{1}(x) -\tilde{ S}\vert + \vert \psi _{2}(x) -\tilde{ I}\vert )dx = 0,}$$

and so \(\psi (x) \equiv (\tilde{S},\tilde{I})\). Thus, we have \(\omega (\phi ) =\{ (\tilde{S},\tilde{I})\}\). This implies that \(\lim _{t\rightarrow \infty }\varPhi (t)\phi = (\tilde{S},\tilde{I})\) in X × X, yielding the global attractivity of \((\tilde{S},\tilde{I})\).

13.4 Discussion

In this section, we give some biological interpretations of the analytical results obtained for model (13.2)–(13.3).

Following the terminology in [9], we say that x is a low-risk site if the local disease transmission rate \(\int _{0}^{\omega }\beta (x,t)\,dt\) is lower than the local disease recovery rate \(\int _{0}^{\omega }\gamma (x,t)\,dt\). A high-risk site is defined in a reversed manner. We also say that Ω is a low-risk habitat if \(\int _{0}^{\omega }\int _{\varOmega }\beta (x,t)\,dx\,dt <\int _{ 0}^{\omega }\int _{\varOmega }\gamma (x,t)\,dx\,dt\) and a high-risk habitat if \(\int _{0}^{\omega }\int _{\varOmega }\beta (x,t)\,dx\,dt >\int _{ 0}^{\omega }\int _{\varOmega }\gamma (x,t)\,dx\,dt\). We may call the habitat a moderate-risk one if \(\int _{0}^{\omega }\int _{\varOmega }\beta (x,t)\,dx\,dt =\int _{ 0}^{\omega }\int _{\varOmega }\gamma (x,t)\,dx\,dt\).
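For instance, with ω = 1 and the hypothetical rates β(x,t) = 1 + sin (2πt) and γ(x,t) ≡ 1.2, every site is low-risk (and hence so is the habitat), since

$$\displaystyle{\int _{0}^{1}\big(1 +\sin (2\pi t)\big)\,dt = 1 < 1.2 =\int _{ 0}^{1}\gamma (x,t)\,dt\quad \mbox{ for every }x \in \overline{\varOmega }.}$$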

Firstly, in the ideal case where the rates of disease transmission and recovery depend on the temporal factor alone, Theorem 13.2.2 (i-a) and (ii-a) show that a low-risk habitat always leads to the extinction of the disease, while a high-risk habitat leads to its persistence. In the ideal case where these rates depend solely on the spatial factor, it follows from Theorem 13.2.2 (ii-b) that the disease will be persistent once a high-risk habitat exists. In such a situation, however, a low-risk habitat does not always lead to disease eradication; this is guaranteed only when every location of the domain is low-risk. Once the habitat contains at least one high-risk site, according to Theorem 13.2.2 (i-b) and (ii-b), there exists a threshold value \(d_{I}^{*} \in (0,\infty )\) such that the disease goes extinct only if the movement rate d I of the infected population is greater than \(d_{I}^{*}\); otherwise, if \(d_{I} < d_{I}^{*}\), the disease will persist.

In the general situation where the rates of disease transmission and recovery depend on both the spatial and temporal variables, our results assert that the disease will persist if either the habitat is of high-risk type, or the habitat contains at least one high-risk site and the movement of the infected population is sufficiently slow; see Theorem 13.2.2 (ii-c) and (ii-d). In contrast, if the habitat is a low-risk one and the movement of the infected population is sufficiently fast, the disease will die out; see Theorem 13.2.2 (i-c).

We next discuss how the heterogeneous and time-periodic environment affects the extinction and persistence of the disease. We assume that

$$\displaystyle{\beta (x,t) = p(x)q_{1}(t)\ \ \ \mbox{ and}\ \ \ \gamma (x,t) = p(x)q_{2}(t),}$$

where p is a positive Hölder continuous function on \(\overline{\varOmega }\) and q 1, q 2 are ω-periodic positive Hölder continuous functions on \(\mathbb{R}\). If q 1 ≡ q 2, we get a moderate-risk habitat, and Theorem 13.3.2 tells us that the disease will eventually die out regardless of the movement rates. We now assume that p is not a constant, q 1 ≢ q 2, and \(\int _{0}^{\omega }q_{1}(t)\,dt =\int _{ 0}^{\omega }q_{2}(t)\,dt\), so that the habitat is still a moderate-risk one. By Theorem 13.1.1, we see that the basic reproduction ratio \(R_{0} = R_{0}(d_{I})\) satisfies \(R_{0}(d_{I}) > 1\) for any d I  > 0 and \(R_{0}(d_{I}) \rightarrow 1\) as either d I  → 0 or d I  →∞. Therefore, Theorem 13.2.1 implies that for this moderate-risk habitat, the disease will persist. A numerical illustration of this scenario is sketched below.
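As a purely illustrative complement (not part of the original analysis), the following minimal Python sketch integrates system (13.2)–(13.3) on the one-dimensional habitat Ω = (0,1) by explicit finite differences with zero-flux boundary conditions. The profiles p, q 1, q 2, the diffusion rates, and all grid parameters are hypothetical choices realizing the moderate-risk scenario above (p non-constant, q 1 ≢ q 2 with equal time averages); tracking the total infected mass \(\int _{\varOmega }\overline{I}\,dx\) indicates whether the computed solution approaches the disease-free state or a positive periodic level, which can then be compared with the persistence predicted by Theorem 13.2.1.

import numpy as np

# Illustrative finite-difference integration of system (13.2)-(13.3) on Omega = (0, 1)
# with homogeneous Neumann (zero-flux) boundary conditions.  All choices of p, q1, q2,
# diffusion rates, and grid parameters are hypothetical; they merely realize the
# moderate-risk scenario: p non-constant, q1 and q2 distinct but with equal time averages.

M = 51                                  # number of spatial grid points
x = np.linspace(0.0, 1.0, M)
dx = x[1] - x[0]
d_S, d_I = 0.05, 0.05                   # diffusion rates (hypothetical)
omega = 1.0                             # period of the time-dependent coefficients
dt = 1.0e-3                             # explicit Euler step, below dx**2 / (2 * max(d_S, d_I))

p = 1.0 + 0.5 * np.cos(np.pi * x)       # non-constant spatial profile

def q1(t):                              # transmission modulation
    return 1.0 + 0.8 * np.sin(2.0 * np.pi * t / omega)

def q2(t):                              # recovery modulation (same time average as q1)
    return 1.0 + 0.8 * np.cos(2.0 * np.pi * t / omega)

def neumann_laplacian(u):
    """Second-order Laplacian with reflecting (zero-flux) endpoints."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return lap

S = np.full(M, 0.9)                     # initial susceptible density
I = np.full(M, 0.1)                     # initial infected density (positive)

t, t_end = 0.0, 100.0
while t < t_end:
    beta, gamma = p * q1(t), p * q2(t)
    infection = beta * S * I / (S + I)  # frequency-dependent incidence
    S_new = S + dt * (d_S * neumann_laplacian(S) - infection + gamma * I)
    I_new = I + dt * (d_I * neumann_laplacian(I) + infection - gamma * I)
    S, I, t = S_new, I_new, t + dt

# Trapezoidal approximation of the integral of I over Omega; a value settling at a
# positive (periodic) level, rather than decaying to zero, is consistent with persistence.
total_I = dx * (I.sum() - 0.5 * (I[0] + I[-1]))
print(f"approximate infected mass at t = {t:.1f}: {total_I:.4f}")

The explicit scheme is chosen only for transparency; any standard implicit or stiff solver could be used instead, and the time step is constrained by the usual parabolic stability condition dt < dx²∕(2 max{d S ,d I }).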

In view of the above discussion, our results suggest that the combination of spatial heterogeneity and temporal periodicity tends to enhance the persistence of the infectious disease in the SIS model (13.2)–(13.3). In other words, the infection risk of the model (13.2)–(13.3) would be underestimated if only temporal periodicity or only spatial heterogeneity were taken into account.

Furthermore, the above discussion also shows that in the case where p is not a constant, q 1 ≢ q 2, and \(\int _{0}^{\omega }q_{1}(t)\,dt =\int _{ 0}^{\omega }q_{2}(t)\,dt\), the persistence of the disease is strongest when the infected population migrates at the rate \(d_{I} =\hat{ d}_{I}\), where \(\hat{d}_{I} > 0\) satisfies \(R_{0}(\hat{d}_{I}) =\max _{d_{I}\in (0,\infty )}R_{0}(d_{I}) > 1\); on the other hand, a sufficiently small or sufficiently large migration rate of the infected population brings the basic reproduction ratio close to unity, so that the persistence of the disease is weakened.

Finally, we give a biological interpretation of Theorem 13.1.2. Assume that the disease has the same transmission rate at every location of the habitat and at every time (namely, β is a positive constant), and that the total amount of available treatment is fixed, so that \(\int _{0}^{\omega }\int _{\varOmega }\gamma (x,t)\,dx\,dt\) is a fixed positive constant. If the treatment is concentrated mainly in a specific part of the habitat, Theorem 13.1.2 shows that R 0 can reach its maximum; such an allocation of the treatment thus results in the largest risk for the control of the disease. On the other hand, R 0 attains its minimum if the treatment is distributed equally over the entire habitat at all times. Therefore, Theorem 13.1.2 suggests that the latter treatment strategy would be more effective for the eradication of the disease.

13.5 Notes

Sections 13.1–13.4 are adapted from Peng and Zhao [277]. Here we give new proofs of Theorem 13.2.1 (i) and Theorem 13.3.1. The asymptotic profiles of steady states and the global dynamics of autonomous reaction–diffusion SIS epidemic models were investigated by Allen, Bolker, Lou and Nevai [9], Peng [273], Peng and Liu [275], Huang, Han and Liu [178], Peng and Yi [276], Cui and Lou [69], Wu and Zou [413], and Li, Peng and Wang [221]. Recently, Wang, Zhang and Zhao [399] also studied time-periodic traveling waves for a periodic reaction–diffusion SIR model.