
1 Introduction

Prostate cancer is one of the major diseases affecting men. Since prostate cells proliferate in response to the male hormone androgen, prostate tumors are expected to be sensitive to androgen suppression. Huggins and Hodges [10] demonstrated the validity of androgen deprivation, and hormonal therapy has since been a mainstay of prostate cancer treatment. The therapy aims to induce apoptosis of prostate cancer cells under the androgen-suppressed condition. For instance, the androgen-suppressed condition can be maintained by medicating a patient continuously [22]; this treatment is called Continuous Androgen Suppression therapy (CAS therapy). However, within several years of CAS therapy, relapse of the prostate tumor often occurs; more precisely, the tumor mutates into an androgen-independent one, against which CAS therapy is no longer effective [5]. This fact was also verified mathematically in [13, 14]. It is known that prostate tumors contain Androgen-Dependent cells (AD cells) and Androgen-Independent cells (AI cells), and AI cells are considered one of the causes of relapse: AD cells cannot proliferate under the androgen-suppressed condition, whereas AI cells are insensitive to androgen suppression and can still proliferate in an androgen-poor environment [2, 18]. Thus the relapse of prostate tumors is caused by progression to androgen-independent cancer through the emergence of AI cells.

In order to prevent or delay the relapse of prostate tumors, Intermittent Androgen Suppression therapy (IAS therapy) was proposed and has been studied clinically by many researchers (e.g., see [1, 3, 19] and the references therein). In contrast to CAS therapy, IAS therapy does not aim to eradicate the prostate cancer. Its typical clinical feature is the following: since prostate cancer cells produce a large amount of Prostate-Specific Antigen (PSA), the PSA level is regarded as a good biomarker of prostate cancer [21], and the IAS treatment schedule is based on this level:

(F): In the IAS therapy, the medication is stopped when the serum PSA level falls sufficiently, and resumed when it rises sufficiently.

Indeed, if one can optimally plan the IAS therapy, then the size of the tumor remains in an appropriate range through switching the medication on and off. In order to comprehend qualitative properties of prostate tumors under the IAS therapy, several mathematical models have been proposed and studied in the mathematical literature, for instance, ODE models ([9, 11, 12, 20] and references therein) and PDE models [8, 15, 23–25]. Due to (F), an unknown binary function, denoting the treatment state, appears in the models. The discontinuity of this binary function is the main difficulty in the mathematical analysis of the models. To the best of our knowledge, there is no result dealing with the switching phenomena of the binary function in the PDE models.

The purpose of this paper is to prove the existence of a solution with the switching property for the PDE model introduced by Tao et al. [23]:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d^{} a}{d {t}^{}}(t)= -\gamma (a(t)-a_*) - \gamma a_* S(t) \, &{}\text {in}\quad \mathbb {R}_+, \\ \partial _t u(\rho ,t)-\mathscr {L}(v,R)u(\rho ,t)=F_u(u(\rho ,t),w(\rho ,t),a(t)) \, &{}\text {in}\quad I_\infty ,\\ \partial _t w(\rho ,t)-\mathscr {L}(v,R)w(\rho ,t)=F_w(u(\rho ,t),w(\rho ,t),a(t)) \, &{}\text {in}\quad I_\infty ,\\ v(\rho ,t)=\dfrac{1}{\rho ^2}\displaystyle \int _0^{\rho } F_v(u(r,t),w(r,t),a(t))r^2\,dr &{}\text {in}\quad I_\infty ,\\ \dfrac{dR}{dt}(t)= v(1,t) R(t) &{} \text {in}\quad \mathbb {R}_+,\\ S(t)= {\left\{ \begin{array}{ll} 0 \rightarrow 1 \quad \text {when} \quad R(t) = r_1 \quad \text {and} \quad R'(t) > 0, \\ 1 \rightarrow 0 \quad \text {when} \quad R(t) = r_0 \quad \text {and} \quad R'(t) < 0, \\ \end{array}\right. }\, &{}\text {in}\quad \mathbb {R}_+,\\ \partial _\rho u(\rho ,t) |_{\rho \in \{0,1\}} \! = \partial _\rho w(\rho ,t) |_{\rho \in \{0,1\}} \!=0, \,\,\, v(0,t)=0, \,&{}\text {in}\quad \mathbb {R}_+,\\ (a,u,w,R,S)|_{t=0}=(a_0, u_0(\rho ),w_0(\rho ), R_0, S_0) \, &{}\text {in}\quad I, \end{array}\right. }\quad {(\mathrm{IAS})} \end{aligned}$$

where \(I=(\,0,1\,)\), \(\mathbb {R}_+=\{t \in \mathbb {R}\, | \, t>0\}\), \(I_\infty =I \times \mathbb {R}_+\), and

$$\begin{aligned}&\mathscr {L}(v,R)\varphi =\dfrac{D}{R(t)^2}\dfrac{1}{\rho ^2}\partial _\rho [\rho ^2\partial _\rho \varphi ] + \rho v(1,t)\partial _\rho \varphi -\dfrac{1}{\rho ^2}\partial _\rho [\rho ^2v(\rho ,t)\varphi ],\end{aligned}$$
(1)
$$\begin{aligned}&F_u=f_1(a)u-c_1u w, \,\,\,\,\, F_w=f_2(a)w-c_2u w, \,\,\,\,\, F_v=F_u+F_w. \end{aligned}$$
(2)

The unknowns a, u, w, v, R, and S denote, respectively, the androgen concentration, the volume fraction of AD cells, the volume fraction of AI cells, the advection velocity of the cancer cells, the radius of the tumor, and the treatment state. Here \(S=1\) and \(S=0\) correspond to the medication state and the non-medication state, respectively, as can be read off from the equation for a: under \(S=1\) the androgen concentration decays toward 0, while under \(S=0\) it relaxes to the normal level \(a_*\). The authors of [23] assumed that the prostate tumor is radially symmetric and densely packed with AD and AI cells. Moreover, they regarded the tumor as a three-dimensional sphere. Thus the unknowns u, w, and v are radially symmetric functions defined on the unit ball \(B_1 =\{ x \in \mathbb {R}^3 \mid |x|<1 \}\), i.e., \(\rho =|x|\). The unknown S(t) is governed by R(t), since the serum PSA level is identified with the radius of the tumor. Although the condition on S in (IAS) is written in a concise form, the precise formulation is as follows: \(S(t)\in \{0,1\}\) and

$$\begin{aligned} S(t)\!=\!\! {\left\{ \begin{array}{ll} \!\{0,1\} \setminus \displaystyle \lim _{\tau \uparrow t}S(\tau ) \,\,\, &{}\text {if} \quad {\left\{ \begin{array}{ll} \displaystyle \lim _{\tau \uparrow t}R'(\tau )\!>\!0, \,\, \displaystyle \lim _{\tau \uparrow t}R(\tau )\!=\!r_1, \,\, \text {and}\,\, \displaystyle \lim _{\tau \uparrow t}S(\tau )\!=\!0,\\ \displaystyle \lim _{\tau \uparrow t}R'(\tau )\!<\!0, \,\, \displaystyle \lim _{\tau \uparrow t}R(\tau )\!=\!r_0, \,\, \text {and} \,\, \displaystyle \lim _{\tau \uparrow t}S(\tau )\!=\!1, \end{array}\right. }\\ \displaystyle \lim _{\tau \uparrow t}S(\tau ) &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

The parameters \(a_*\), \(\gamma \), \(c_1\), \(c_2\), \(r_0\), and \(r_1\) denote, respectively, the normal androgen concentration, the reaction rate, the effective competition coefficients (from AD to AI cells and from AI to AD cells), and the lower and upper thresholds. The given functions \(f_1 : [\,0, a_*\,] \rightarrow \mathbb {R}\) and \(f_2 : [\,0, a_*\,] \rightarrow \mathbb {R}\) describe the net growth rates of AD and AI cells, respectively. Although typical forms of \(f_{i}\) were given in [23], we deal with general \(f_{i}\) satisfying several conditions, which are stated later.
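Before turning to the analysis, it may help to see the hybrid mechanism in action. The following sketch simulates a drastically simplified caricature of (IAS): the androgen ODE is kept, but the full PDE for the tumor is replaced by a hypothetical scalar growth law \(dR/dt = g(a)R\); the function g and all numerical values are illustrative assumptions, not taken from [23].

```python
# Toy caricature of the hybrid system (IAS): the androgen ODE is the one in
# the model, while dR/dt = g(a) R is a hypothetical scalar stand-in for the
# tumor dynamics.  gamma, a_star, r0, r1, and g are illustrative assumptions.
gamma, a_star = 1.0, 10.0
r0, r1 = 1.0, 2.0                       # lower and upper thresholds

def g(a):
    # assumed net growth rate: positive at a = a_star, negative at a = 0
    return 0.5 * a / a_star - 0.2

def simulate(T=80.0, dt=1e-3):
    a, R, S = a_star, 1.5, 0            # S = 0: off treatment, a relaxes to a_star
    switches = []
    t = 0.0
    while t < T:
        a += dt * (-gamma * (a - a_star) - gamma * a_star * S)
        dR = g(a) * R
        R += dt * dR
        if S == 0 and R >= r1 and dR > 0:
            S = 1                        # start medication: androgen is suppressed
            switches.append(t)
        elif S == 1 and R <= r0 and dR < 0:
            S = 0                        # stop medication: androgen recovers
            switches.append(t)
        t += dt
    return R, switches

R_end, switches = simulate()
print(len(switches), round(R_end, 3))
```

With these assumed values the growth rate is positive near \(a=a_*\) and negative near \(a=0\), so the radius oscillates between the thresholds and the treatment state keeps switching; this is the qualitative behavior that Theorem 1.1 below establishes rigorously for the PDE model.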

In [23], it was shown that, for each initial datum \(u_{0} \in W^{2}_{p}(I)\), there exists a short-time solution \(u \in W^{2,1}_{p}(I \times (\,0,T\,))\) of (IAS). However, this result is not sufficient to construct a “switching solution”. Indeed, if \((u,w,v,R,a,S)\) is a switching solution of (IAS), then (IAS) must be solvable, at least locally in time, for each “initial datum” \((u,w,R,a,S)|_{\{t=t_{j} \}}\), where \(t_{j}\) is a switching time. Nevertheless, the result in [23] does not ensure this solvability.

The existence of switching solutions of (IAS) has remained an open question. We are interested in the following mathematical problem:

Problem 1.1

Does there exist a switching solution of (IAS) with appropriate thresholds \(0< r_0< r_1 < \infty \)? Moreover, what is the dynamical aspect of the solution?

We consider the initial data \((u_0, w_0, R_0, a_0, S_0)\) satisfying the following:

$$\begin{aligned} {\left\{ \begin{array}{ll} u_0, w_0 \in C^{2+\alpha } (B_1), \partial _\rho u_0(\rho ) {\Large |}_{\rho \in \{ 0, 1\} }\!=\! \partial _\rho w_0(\rho ){\Large |}_{\rho \in \{ 0, 1\} }\!=\!0,\\ u_0 \ge 0, \,\,\, w_0 \ge 0, \,\,\, u_0+w_0 \equiv 1, \,\, R_0>0,\,\,\, 0<a_0<a_*, \,\,\, S_0 \in \{0,1\}, \end{array}\right. } \end{aligned}$$
(3)

where \(\alpha \in (\,0,1\,)\). Let \(f_1\) and \(f_2\) satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} f_1(a_*)>0, \quad f_1(0)<0, \quad f_1 \in C^1([\,0,a_*\,]),\quad f'_1> 0 \quad \text {in} \quad [\,0,a_*\,], \\ f_2(0)>0, \quad f_2 \in C^1([\,0,a_*\,]), \quad f'_2 \le 0 \quad \text {in} \quad [\,0,a_*\,]. \end{array}\right. } \quad {(\mathrm{A0})} \end{aligned}$$

We note that (A0) is a natural assumption from the clinical point of view, and the typical \(f_1\) and \(f_2\) given in [23] also satisfy (A0). In order to comprehend the roles of \(f_i\) and \(c_i\), we classify the asymptotic behavior of non-switching solutions of (IAS) in terms of \(f_i\) and \(c_i\) under (A0) (see Theorems 3.2–3.5). In light of Theorems 3.2–3.5, we impose (A0) and the following assumptions on \(f_i\) and \(c_i\):

$$\begin{aligned} f_1(a_*) - f_2(a_*) -c_1>0; \quad {(\mathrm{A1})} \end{aligned}$$
$$\begin{aligned} f_1(0) - f_2(0) +c_2>0. \quad {(\mathrm{A2})} \end{aligned}$$

From now on, let \(Q_T := B_1\times (\,0, T\,)\). We denote by \(C^{2\kappa +\alpha , \kappa +\beta }(Q_T)\) the Hölder space on \(Q_T\), where \(\kappa \in \mathbb {N}\cup \{0\}\), \(0< \alpha <1\), and \(0<\beta <1\) (for the precise definition, see [16]).

Then we give an affirmative answer to Problem 1.1:

Theorem 1.1

Let \(f_i\) and \(c_i\) satisfy (A0)–(A2). Let \((u_0, w_0, R_0, a_0, S_0)\) satisfy (3), \(u_0>0\) in \(\overline{B_1}\), and \(S_0=0\). Then, there exists a pair \((r_0, r_1)\) with \(0< r_0< r_1 < \infty \) such that the system (IAS) has a unique solution \((u,w,v,R,a,S)\) in the class

$$\begin{aligned}\begin{gathered} u, w \in C^{2+\alpha ,1+\alpha /2}(Q_\infty ),\quad v \in C^{1+\alpha , \alpha /2}([\,0,1\,) \times \mathbb {R}_+) \cap C^1([\,0,1\,) \times \mathbb {R}_+),\\ R \in C^1(\mathbb {R}_+),\quad a \in C^{0,1}(\mathbb {R}_+). \end{gathered}\end{aligned}$$

Moreover, the following hold\(:\)

  1. (i)

    There exists a strictly monotone increasing divergent sequence \(\,\{t_j\}_{j=0}^{\infty }\,\) with \(t_0=0\) such that \(a\in C^1((\,t_j,t_{j+1}\,))\) and

    $$\begin{aligned} S(t)= {\left\{ \begin{array}{ll} 0 \quad \text {in} \quad [\,t_{2j},t_{2j+1}\,),\\ 1 \quad \text {in} \quad [\,t_{2j+1},t_{2j+2}\,), \end{array}\right. } \quad \text {for any} \quad j \in \mathbb {N}\cup \{0\}; \end{aligned}$$
  2. (ii)

    There exist positive constants \(C_1<C_2\) such that

    $$\begin{aligned} C_1 \le R(t) \le C_2 \quad \text {for any} \quad t\ge 0. \end{aligned}$$

We mention the mathematical contributions of Theorem 1.1 and a feature of the system (IAS). The system is composed of two different systems (S0) and (S1), coupled through the binary function S(t), where (S0) and (S1) respectively denote (IAS) with \(S(t) \equiv 0\) and \(S(t) \equiv 1\). In general, a system with such a structure is called a hybrid system. Regarding (S0), the assumption (A1) implies that R(t) diverges to infinity as \(t \rightarrow \infty \) (see Theorem 3.4). On the other hand, regarding (S1), we can show that the assumption (A2) implies the following: (i) R(t) diverges to infinity as \(t \rightarrow \infty \) if \(u_0\) is sufficiently small (Theorem 3.2); (ii) R(t) converges to 0 as \(t \rightarrow \infty \) if \(u_0\) is sufficiently close to 1 (Theorem 3.3). It is natural to ask whether a solution R(t) of (IAS) is bounded or not. One of the contributions of the present paper is to show how to determine thresholds \(0< r_0< r_1 < \infty \) such that (IAS) with these thresholds has a bounded solution with infinitely many switches. Furthermore, due to the discontinuity of S(t), one might expect the switching solution not to be smooth. However, Theorem 1.1 indicates that the switching solution gains regularity with the aid of the “indirectly controlled parameter” a(t). The other contribution of this paper is to clarify mathematically the intrinsic structure of the hybrid system (IAS).

We mention the clinical contribution of Theorem 1.1. Although one may expect the system (IAS) to tell us how to optimally plan the IAS therapy for each prostate cancer patient, this is not a trivial matter. To do so, we first have to prove the existence of admissible thresholds for each patient; moreover, if the admissible thresholds are not unique, we must investigate their optimality. Here, we say that a pair of thresholds is admissible for a prostate cancer patient if, for the corresponding initial data, (IAS) with these thresholds has a switching solution. Although [23] indicated that the problem, even the existence, is difficult to analyze mathematically, they showed numerically that (i) the IAS therapy fails for unsuitable thresholds, more precisely, the radius of the tumor diverges to infinity after several switches, while (ii) the IAS therapy succeeds for suitable thresholds, i.e., the radius of the tumor remains in a bounded range through infinitely many switches. One of the clinical contributions of Theorem 1.1 is to prove the existence of admissible thresholds for each patient, provided that (A0)–(A2) are fulfilled. Moreover, Theorem 1.1 also implies that the IAS therapy has an advantage over the CAS therapy for some patients. Indeed, Theorem 3.2 gives an instance of failure of the CAS therapy, whereas Theorem 1.1 asserts that the same patient can be treated successfully by the IAS therapy. This provides an example in which the switching strategy of the IAS therapy can succeed. On the other hand, the pair of admissible thresholds given by Theorem 1.1 is not uniquely determined. Thus, in order to optimally plan the IAS therapy, we have to investigate the optimality of the thresholds; however, this optimality remains an open problem.

The paper is organized as follows: In Sect. 2, we introduce a modified system of (IAS) and reduce it to a simple hybrid system. Making use of the modified system, we prove the short-time existence of the solution to (IAS). In Sect. 3, we show the existence of non-switching solutions of (IAS) on any finite time interval. Moreover, we classify the asymptotic behavior of the non-switching solutions in terms of \(f_i\) and \(c_i\). In Sect. 4, we prove Theorem 1.1, i.e., we show the existence of a switching solution of (IAS) and describe its properties.

2 Short Time Existence

The main purpose of this section is to show the short-time existence of the solution of (IAS). As mentioned in [23], it is difficult to prove that (IAS) has a short-time solution in the Hölder space (see Remark 4.1 in [23]). The difficulty arises from the singularity of \(v/\rho \) at \(\rho =0\); indeed, the singularity prevents us from applying the Schauder estimate. To overcome the difficulty, we first consider a modified hybrid system. More precisely, we replace the “boundary condition”

$$\begin{aligned} v(0,t) =0 \quad \text {in} \quad \mathbb {R}_+ \end{aligned}$$
(4)

by

$$\begin{aligned} \dfrac{v(\rho ,t)}{\rho }\!\!\Bigm |_{\rho =0}\!=\dfrac{1}{3} F_v(u(0,t),w(0,t),a(t)) \quad&\text {in}\quad \mathbb {R}_+. \end{aligned}$$
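This replacement is consistent with (4): since the integrand is continuous, \(v(\rho ,t)=O(\rho )\) as \(\rho \downarrow 0\), and more precisely

$$\begin{aligned} \dfrac{v(\rho ,t)}{\rho }=\dfrac{1}{\rho ^3}\int _0^{\rho } F_v(u(r,t),w(r,t),a(t))\,r^2\,dr \longrightarrow \dfrac{1}{3} F_v(u(0,t),w(0,t),a(t)) \quad \text {as} \quad \rho \downarrow 0, \end{aligned}$$

so (4) is recovered from the new condition, while the quotient \(v/\rho \), which caused the singularity, is now prescribed at \(\rho =0\).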

Then the modified hybrid system is expressed as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d^{} a}{d {t}^{}}(t)= -\gamma (a(t)-a_*) - \gamma a_* S(t) \, &{}\text {in}\quad \mathbb {R}_+, \\ \partial _t u(\rho ,t)-\mathscr {L}(v,R)u(\rho ,t)=F_u(u(\rho ,t),w(\rho ,t),a(t)) &{}\text {in}\quad I_\infty ,\\ \partial _t w(\rho ,t)-\mathscr {L}(v,R)w(\rho ,t)=F_w(u(\rho ,t),w(\rho ,t),a(t)) &{}\text {in}\quad I_\infty ,\\ v(\rho ,t)=\dfrac{1}{\rho ^2}\displaystyle \int _0^{\rho } F_v(u(r,t),w(r,t),a(t))r^2\,dr &{}\text {in}\quad I_\infty ,\\ \dfrac{dR}{dt}(t)= v(1,t) R(t) &{} \text {in} \quad \mathbb {R}_+,\\ S(t)= {\left\{ \begin{array}{ll} 0 \rightarrow 1 \quad \text {when} \quad R(t) = r_1 \quad \text {and} \quad R'(t) > 0, \\ 1 \rightarrow 0 \quad \text {when} \quad R(t) = r_0 \quad \text {and} \quad R'(t) < 0, \\ \end{array}\right. }\, &{}\text {in}\quad \mathbb {R}_+,\\ \partial _\rho u(\rho ,t) |_{\rho \in \{0,1\}} \! = \partial _\rho w(\rho ,t) |_{\rho \in \{0,1\}} \!=0 \,\, &{}\text {in}\quad \mathbb {R}_+,\\ \dfrac{v(\rho ,t)}{\rho }\!\!\Bigm |_{\rho =0}\!=\dfrac{1}{3} F_v(u(0,t),w(0,t),a(t)) &{}\text {in}\quad \mathbb {R}_+,\\ (a,u,w,R,S)|_{t=0}=(a_0, u_0(\rho ),w_0(\rho ), R_0, S_0) \, &{}\text {in}\quad I. \end{array}\right. }\quad {(\mathrm{mIAS})} \end{aligned}$$

To begin with, we show that \(u+w\) is invariant under (mIAS).

Lemma 2.1

Let \((u_0, w_0, R_0, a_0, S_0)\) be initial data satisfying (3). Assume that \((u,w,v,R,a,S)\) is a solution of (mIAS) with \(u,w \in C^{2+\alpha ,1+\alpha /2} (Q_T)\) and \(S(t)\equiv S_0\) in \([\,0,T\,)\). Then \(u+w \equiv 1\) in \(B_1\times [\,0,T\,)\).

Proof

Setting \(V:= u+w\), we reduce (mIAS) to the following system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d^{} a}{d {t}^{}}(t)= -\gamma (a(t)-a_*) - \gamma a_* S_0 \, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _t V(\rho ,t)-\mathscr {L}(v,R)V(\rho ,t)=\dfrac{1}{\rho ^2}\partial _\rho [\rho ^2 v (\rho ,t)]\,\, &{}\text {in} \quad I_\infty ,\\ v(\rho ,t)=\dfrac{1}{\rho ^2}\displaystyle \int _0^{\rho } F_v(u(r,t),w(r,t),a(t))r^2\,dr \quad &{}\text {in}\quad I_\infty ,\\ \dfrac{dR}{dt}(t)=v(1,t)R(t) \, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _\rho \! V(\rho ,t) {\Large |}_{\rho \in \{0,1\}}\!\!=0, \, \dfrac{v(\rho ,t)}{\rho } {\Big |}_{\rho =0}\!\!\! =\dfrac{1}{3} F_v(u(0,t),\!w(0,t),\! a(t)), \!\!\!&{}\text {in}\quad \mathbb {R}_+, \\ V(\rho ,0)=1,\quad a(0)=a_0,\quad R(0)=R_0, \, &{}\text {in} \quad I. \\ \end{array}\right. } \end{aligned}$$
(5)

In the derivation of the second equation in (5), we used the fact \(F_u + F_w = F_v\) and the equation on v. We shall prove that \(V \equiv 1\) in \(B_1\times [\,0,T\,)\). The second equation in (5) is written as

$$\begin{aligned} \partial _t V=\dfrac{D}{R(t)^2}\varDelta _x V-\frac{x}{\rho }\cdot \nabla _x\{v(V-1)\}+v(1,t)x \cdot \nabla _x V-\frac{2}{\rho }v(V-1) \end{aligned}$$
(6)

in terms of the three-dimensional Cartesian coordinates, where \(\rho = |x|\). In what follows, we use \(\nabla \) and \(\varDelta \) instead of \(\nabla _x\) and \(\varDelta _x\), respectively, if there is no fear of confusion. First, we observe from (6) that

$$\begin{aligned}&\frac{d}{dt} \Vert V\!-1\Vert _{L^2(B_1\!)}^2 = -\frac{2D}{R(t)^2} \Vert \nabla (V\!-1)\Vert _{L^2(B_1\!)}^2 -2\int _{B_1}\!\!\!(V\!-1)\frac{x}{\rho }\cdot \nabla \{v(V\!-1)\}\,dx\\&\,\,\, \quad +2 \int _{B_1}(V\!-\!1)v(1,t)x\cdot \nabla V\,dx-2\int _{B_1}\frac{v}{\rho }(V-1)^2\, dx =:J_1+J_2+J_3+J_4. \end{aligned}$$

We start with an estimate of \(J_1\). Since it follows from the third and fourth equations in (5) that

$$\begin{aligned}&R(t)=R_0 \exp \Bigm [\int ^t_0 v(1,s) ds\Bigm ] \le R_0 e ^{\kappa T}, \end{aligned}$$

we have

$$\begin{aligned} J_1 \le -\frac{2D}{R_0^2 e^{2\kappa T}} \Vert \nabla (V\!-1)\Vert _{L^2(B_1\!)}^2, \end{aligned}$$

where \(\kappa \) is a positive constant given by

$$\begin{aligned} 3\kappa :=\Vert f_1(a) u+f_2(a) w-(c_1+c_2)uw \Vert _{L^\infty (Q_T)}. \end{aligned}$$

We turn to \(J_2\). By the relation

$$\begin{aligned} \partial _\rho v=-2\frac{v}{\rho }+f_1(a(t))u+f_2(a(t))w-(c_1+c_2)uw, \end{aligned}$$

the integral \(J_2\) is reduced to

$$\begin{aligned} J_2&=4\int _{B_1} \frac{v}{\rho }|V-1|^2 \,dx -2 \int _{B_1} (V-1)\frac{v}{\rho }x\cdot \nabla V\,dx\\&\quad -2 \int _{B_1}\{f_1(a(t))u+f_2(a(t))w-(c_1+c_2)uw\}|V-1|^2\,dx. \end{aligned}$$

Observing that

$$\begin{aligned} \Bigm | \frac{v(\rho ,t)}{\rho }\Bigm | \le \frac{1}{\rho ^3} \int ^\rho _0 | f_1(a(t))u+f_2(a(t))w-(c_1+c_2)uw | r^2\,dr \le \kappa , \end{aligned}$$

and using Hölder’s inequality and Young’s inequality, we find

$$\begin{aligned} |J_2| \le \varepsilon \Vert \nabla (V-1)\Vert _{L^2(B_1\!)}^2+C(\varepsilon ) \Vert V-1\Vert _{L^2(B_1\!)}^2. \end{aligned}$$

Regarding \(J_3\) and \(J_4\), the same argument as in the estimate of \(J_2\) asserts that

$$\begin{aligned} |J_3| \le \varepsilon \Vert \nabla (V-1)\Vert _{L^2(B_1\!)}^2+C(\varepsilon ) \Vert V-1\Vert _{L^2(B_1\!)}^2, \quad |J_4| \le 2\kappa \Vert V-1\Vert _{L^2(B_1\!)}^2. \end{aligned}$$

Thus, letting \(\varepsilon >0\) small enough, we obtain

$$\begin{aligned} \frac{d}{dt} \Vert V-1\Vert _{L^2(B_1\!)}^2 \le C \Vert V-1\Vert _{L^2(B_1\!)}^2. \end{aligned}$$
(7)

Since \(V(\cdot ,0)\equiv 1\), Gronwall’s inequality applied to (7) yields \(\Vert V(\cdot ,t)-1\Vert _{L^2(B_1)}^2 \le e^{Ct}\Vert V(\cdot ,0)-1\Vert _{L^2(B_1)}^2=0\), and the conclusion follows. \(\square \)

Here we reduce the system (mIAS) to the following hybrid system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d^{} a}{d {t}^{}}(t)= -\gamma (a(t)-a_*) - \gamma a_* S(t) \, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _t u(\rho ,t) -\mathscr {L}'(v,R)u(\rho ,t)= P(u(\rho ,t),a(t)) \, &{}\text {in} \quad I_\infty , \\ v(\rho ,t)= \dfrac{1}{\rho ^2}\displaystyle \int _0^{\rho } F(u(r,t),a(t))r^2\,dr &{}\text {in}\quad I_\infty ,\\ \dfrac{d^{} R}{d {t}^{}}(t) = v(1,t) R(t)\, &{}\text {in} \quad \mathbb {R}_+, \\ S(t)= {\left\{ \begin{array}{ll} 0 \rightarrow 1 \quad \text {when} \quad R(t) = r_1 \quad \text {and} \quad R'(t) > 0, \\ 1 \rightarrow 0 \quad \text {when} \quad R(t) = r_0 \quad \text {and} \quad R'(t) < 0, \\ \end{array}\right. }\, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _\rho u(\rho ,t)\big |_{\rho \in \{0,1\}}= 0, \,\,\, \dfrac{v(\rho ,t)}{\rho }\Bigm |_{\rho =0}\!=\dfrac{1}{3} F(u(0,t),a(t)),\, &{}\text {in} \quad \mathbb {R}_+, \\ a(0)= a_0, \,\, u(\rho , 0)= u_0(\rho ), \,\, R(0)= R_0, \,\, S(0)=S_0, \, &{}\text {in} \quad I, \end{array}\right. }\quad {(\mathrm{P})} \end{aligned}$$

where

$$\begin{aligned} \mathscr {L}'(v,R)\varphi&= \dfrac{D}{R(t)^2} \dfrac{1}{\rho ^2} \partial _\rho [\rho ^2 \partial _\rho \varphi ] -\left[ v(\rho ,t) - \rho v(1,t) \right] \partial _\rho \varphi , \\ P(u,a)&= \left\{ f_1(a) - f_2(a) - c_1 + (c_1 + c_2) u\right\} u (1-u), \nonumber \\ F(u,a)&= f_1(a) u + \left\{ f_2(a) - (c_1+ c_2) u \right\} (1-u). \nonumber \end{aligned}$$
(8)

The reduction is justified as follows:

Lemma 2.2

The system (mIAS) is equivalent to (P).

Proof

If \((u,w,v,R,a,S)\) satisfies (mIAS), then Lemma 2.1 implies that \(u+w\equiv 1\). Using \(w= 1-u\), we can reduce (mIAS) to (P). On the other hand, if \((u,v,R,a,S)\) satisfies (P), then, setting \(w:=1-u\), we obtain (mIAS) from (P). \(\square \)
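The algebra behind this reduction can be checked symbolically. In the following sketch (assuming SymPy is available; the symbols f1, f2 stand for the values \(f_i(a)\)), we substitute \(w=1-u\) and verify that the reaction term of the u-equation, namely \(F_u\) minus the contribution \(uF\) coming from the divergence part of \(\mathscr {L}\), coincides with P in (8), and that \(F_u+F_w\) coincides with F in (8).

```python
import sympy as sp

# Symbolic check of the reduction from (mIAS) to (P): with w = 1 - u,
# the new reaction term is F_u - u*F, where F = F_u + F_w.
u, f1, f2, c1, c2 = sp.symbols('u f1 f2 c1 c2')
w = 1 - u

F_u = f1 * u - c1 * u * w
F_w = f2 * w - c2 * u * w
F = F_u + F_w                                       # F(u,a) after substitution

# P(u,a) and F(u,a) as written in (8)
P = (f1 - f2 - c1 + (c1 + c2) * u) * u * (1 - u)
F_explicit = f1 * u + (f2 - (c1 + c2) * u) * (1 - u)

assert sp.expand(F_u - u * F - P) == 0              # reaction term identity
assert sp.expand(F - F_explicit) == 0               # F agrees with (8)
print("reduction identities verified")
```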

In order to prove the short-time existence of a solution to (mIAS), we first consider the following system, which is obtained from (P) by setting \(S(t) \equiv S_0\):

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d^{} a}{d {t}^{}}(t)= -\gamma (a(t)-a_*) - \gamma a_* S_0 \, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _t u(\rho ,t) -\mathscr {L}'(v,R)u(\rho ,t)=P(u(\rho ,t),a(t))\, &{}\text {in} \quad I_\infty , \\ v(\rho ,t)= \dfrac{1}{\rho ^2}\displaystyle \int _0^{\rho } F(u(r,t),a(t))r^2\,dr &{}\text {in}\quad I_\infty ,\\ \dfrac{d^{} R}{d {t}^{}}(t) = v(1,t) R(t)\, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _\rho u(\rho ,t)\big |_{\rho \in \{0,1\}}= 0, \,\,\, \dfrac{v(\rho ,t)}{\rho }\Bigm |_{\rho =0}\!=\dfrac{1}{3} F(u(0,t),a(t)),\, &{}\text {in} \quad \mathbb {R}_+, \\ a(0)= a_0, \,\, u(\rho , 0)= u_0(\rho ), \,\, R(0)= R_0, \, &{}\text {in} \quad I. \end{array}\right. }\quad {(\mathrm{PS}_0)} \end{aligned}$$

Lemma 2.3

Let \((u_0, R_0, a_0, S_0)\) satisfy (3). Then there exists \(T>0\) such that the system (\(\mathrm{PS}_{0}\)) has a unique solution (uvRa) in the class

$$\begin{aligned} C^{2+\alpha ,1+\frac{\alpha }{2}}\!(Q_T)\! \times \!(C^{1+\alpha ,\frac{\alpha }{2}}\!([\,0,1\,) \! \times \! (\,0,T\,)) \!\cap \!C^1\!([\,0,1\,) \! \times \! (\,0,T\,))) \!\times \!( C^1\!((\,0,T\,)) )^2. \end{aligned}$$

Proof

We shall prove Lemma 2.3 by the contraction mapping principle. Let us define a metric space \((X_{M}, \Vert \cdot \Vert _X)\) as follows:

$$\begin{aligned} X_{M} = \{ u \in C^{\alpha , \frac{\alpha }{2}}(Q_T) \mid u(x,t)=u(|x|,t), u|_{t=0}=u_0, \Vert u\Vert _X\le M \}, \end{aligned}$$

where \(\Vert u\Vert _X = \Vert u\Vert _{C^{\alpha ,\alpha /2}(Q_T)}\). The constants \(T>0\) and \(M>0\) will be chosen appropriately later.

Step 1: We shall construct a mapping \(\varPsi : X_{M} \rightarrow X_{M}\). Let \(u\in X_{M}\). For \(u(\rho , t)\), let us define \((v(\rho ,t),R(t))\) as the solution of the following system:

$$\begin{aligned} {\left\{ \begin{array}{ll} v(\rho ,t)= \dfrac{1}{\rho ^2}\displaystyle \int _0^{\rho } F(u(r,t),a(t))r^2\,dr &{}\text {in}\quad I \times [\,0,T\,),\\ \dfrac{dR}{dt}(t)= v(1,t)R(t)\quad &{}\text {in} \quad (\,0,T\,), \\ \dfrac{v(\rho ,t)}{\rho }\Bigm |_{\rho =0}\!=\dfrac{1}{3} F(u(0,t),a(t)) \, &{}\text {in} \quad [\,0,T\,), \\ R(0)=R_0. \end{array}\right. } \end{aligned}$$
(9)

For (vR) defined by (9), let \(\tilde{u}(x,t)=\tilde{u}(|x|,t)=\tilde{u}(\rho ,t)\) denote the solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t \tilde{u}(\rho ,t) -\mathscr {L}'(v,R) \tilde{u}(\rho ,t) = P(u(\rho ,t),a(t))\, &{}\text {in} \quad I \times (\,0,T\,), \\ \partial _\rho \tilde{u}(0,t)= \partial _\rho \tilde{u}(1,t)= 0 \, &{}\text {in} \quad (\,0,T\,), \\ \tilde{u}(\rho , 0)= u_0(\rho ) \,&{}\text {in} \quad I. \end{array}\right. } \end{aligned}$$
(10)

If we regard the problem as an initial-boundary value problem for a one-dimensional parabolic equation, the equation has a singularity at \(\rho =0\). In order to eliminate the singularity, we rewrite the problem in terms of the three-dimensional Cartesian coordinates as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t \tilde{u}(|x|,t) + \Bigm [ \dfrac{v(|x|,t)}{|x|} - v(1,t) \Bigm ] x \cdot \nabla \tilde{u}(|x|,t)\\ \qquad = \dfrac{D}{R(t)^2} \varDelta \tilde{u}(|x|,t) +P(u(|x|,t), a(t)) \quad &{}\text {in} \quad Q_T, \\ \partial _\rho \tilde{u}(0,t)= \partial _\rho \tilde{u}(1,t)= 0 \, &{}\text {in} \quad (\,0,T\,), \\ \tilde{u}(|x|, 0)= u_0(|x|) \,&{}\text {in} \quad B_1. \end{array}\right. } \end{aligned}$$
(11)

We prove that \(\tilde{u}\in X_{M}\) by applying the Schauder estimate to (11). Since \(u\in X_M\), it is clear that \(F(u,a)\in C^{\alpha ,\alpha /2}(Q_T)\), \(P(u,a)\in C^{\alpha ,\alpha /2}(Q_T)\), and

$$\begin{aligned} v(1,t)=\int ^1_0 F(u(r,t),a(t))r^2 \, dr \in C^{\frac{\alpha }{2}}((\,0,T\,)). \end{aligned}$$
(12)

Moreover, since \(R(t)>0\) in \([\,0,T\,)\), the fact (12) implies \(1/R(t)^2 \in C^{\alpha /2}((\,0,T\,))\). In the following, we will show

$$\begin{aligned} \mathscr {V}(\rho ,t):= \dfrac{v(\rho ,t)}{\rho } \in C^{\alpha , \frac{\alpha }{2}}(Q_T). \end{aligned}$$
(13)
  1. (i)

    Let us fix \(\rho \in (\,0,1\,)\) arbitrarily. Since now \(\mathscr {V}\) satisfies, for any \(0<t<s<T\),

    $$\begin{aligned} \mathscr {V}(\rho ,s)-\mathscr {V}(\rho ,t)=\frac{1}{\rho ^3}\int ^\rho _0 \{F(u(r,s),a(s))-F(u(r,t),a(t))\}r^2\,dr, \end{aligned}$$
    (14)

    we estimate the integrand. It follows from \(u \in X_M\) that

    $$\begin{aligned}&|F(u(r,s),a(s))-F(u(r,t),a(t))| \\&\quad \le C(M)\left\{ |u(r,s)-u(r,t)| +\sum _{i=1}^2 |f_i(a(s))-f_i(a(t))|\right\} \nonumber \\&\quad \le C(M)\left\{ M |s-t|^{\frac{\alpha }{2}} +\sum _{i=1}^2 |f_i(a(s))-f_i(a(t))|\right\} . \nonumber \end{aligned}$$
    (15)

    Furthermore, the mean value theorem implies

    $$\begin{aligned} |f_i(a(s))-f_i(a(t))|\le C|s-t| \quad \text {for} \quad i=1, 2, \end{aligned}$$

    where \(C=C(f_i, a_*, \gamma )\). Combining the estimate with (15), we find

    $$\begin{aligned} |F(u(r,s),a(s))-F(u(r,t),a(t))| \le C(M)|s-t|^{\frac{\alpha }{2}}. \end{aligned}$$

    Consequently, we deduce from (14) that

    $$\begin{aligned} |\mathscr {V}(\rho ,s)-\mathscr {V}(\rho ,t)| \le C(M)|s-t|^{\frac{\alpha }{2}}. \end{aligned}$$
  2. (ii)

    Let \(\rho =0\). Then by the same argument as in (i), we see that

    $$\begin{aligned} |\mathscr {V}(0,s)-\mathscr {V}(0,t)|=\frac{1}{3}|F(u(0,s),a(s))-F(u(0,t),a(t))| \le C(M)|s-t|^{\frac{\alpha }{2}} \end{aligned}$$

    for any \(0<t<s<T\).

  3. (iii)

    Fix \(0<t<T\) arbitrarily. Then, for any \(0<\rho<\sigma < 1\), it holds that

    $$\begin{aligned} \mathscr {V}(\sigma ,t)-\mathscr {V}(\rho ,t)&=\{ \mathscr {V}(\sigma ,t)-\mathscr {V}(0,t) \}-\{ \mathscr {V}(\rho ,t)-\mathscr {V}(0,t) \}\\&=\frac{1}{\sigma ^3}\int ^\sigma _\rho \{F(u(r,t),a(t))-F(u(0,t),a(t))\}r^2 \,dr\\&\quad +\left( \frac{1}{\sigma ^3}-\frac{1}{\rho ^3}\right) \int ^\rho _0 \{F(u(r,t),a(t))-F(u(0,t),a(t))\}r^2 \,dr. \end{aligned}$$

    Since \(u \in X_M\), we observe that

    $$\begin{aligned} |F(u(\rho ,t),a(t))-F(u(0,t),a(t))| \le C(M)|u(\rho ,t)-u(0,t)| \le C(M) \rho ^\alpha . \end{aligned}$$

    Therefore we obtain

    $$\begin{aligned} |\mathscr {V}(\sigma ,t)-\mathscr {V}(\rho ,t)|&\le C(M)\frac{1}{\sigma ^3}\int ^\sigma _\rho r^{2+\alpha } \,dr+C(M) \Bigm |\frac{\rho ^3-\sigma ^3}{\sigma ^3 \rho ^3}\Bigm |\int ^\rho _0 r^{2+\alpha } \,dr\\&\le C(M)\Bigm |\frac{\sigma ^3- \rho ^3}{\sigma ^{3-\alpha }}\Bigm | \le C(M)|\sigma - \rho |^\alpha . \end{aligned}$$
  4. (iv)

    Let us fix \(0<t<T\) arbitrarily. The same argument as in (iii) implies that

    $$\begin{aligned} |\mathscr {V}(\rho ,t)-\mathscr {V}(0,t)| \le C(M) \frac{1}{\rho ^3} \int ^\rho _0 r^{2+\alpha }\, dr \le C(M) \rho ^\alpha \quad \text {for any} \quad \rho \in (\,0,1\,). \end{aligned}$$

From (i)–(iv), we conclude (13). Hence, by virtue of (11) we can apply the Schauder estimate (Theorem 5.3, [16]) to (10):

$$\begin{aligned} \Vert \tilde{u}\Vert _{C^{2+\alpha ,1+\alpha /2}(Q_T)}&\le C\left( \Vert P\Vert _X +\Vert u_0\Vert _{C^{2+\alpha }(B_1)} \right) \le C(M)+C\Vert u_0\Vert _{C^{2+\alpha }(B_1)}. \end{aligned}$$

On the other hand, it follows from the mean value theorem that

$$\begin{aligned} \Vert \tilde{u}-u_0\Vert _X\le \max \left\{ T,T^{1-\frac{\alpha }{2}}\right\} \Vert \tilde{u}\Vert _{C^{2+\alpha ,1+\alpha /2}(Q_T)}. \end{aligned}$$
(16)

Therefore, for \(T<1\), we obtain

$$\begin{aligned} \Vert \tilde{u}\Vert _X&\le T^{1-\frac{\alpha }{2}} \Vert \tilde{u}\Vert _{C^{2+\alpha ,1+\alpha /2}(Q_T)}+\Vert u_0\Vert _{C^{2+\alpha }(B_1)} \\&\le T^{1-\frac{\alpha }{2}}\{ C(M)+C\Vert u_0\Vert _{C^{2+\alpha }(B_1)} \} +\Vert u_0\Vert _{C^{2+\alpha }(B_1)}. \end{aligned}$$

Consequently, for \(M:=1+\Vert u_0\Vert _{C^{2+\alpha }(B_1)}\), choosing \(T<1\) small enough that

$$\begin{aligned} T^{1-\frac{\alpha }{2}}\{ C(M)+C\Vert u_0\Vert _{C^{2+\alpha }(B_1)} \} <1, \end{aligned}$$
(17)

we deduce that \(\tilde{u}\in X_{M}\). We define a mapping \(\varPsi :X_{M} \rightarrow X_{M}\) as \(\varPsi (u)= \tilde{u}\).

Step 2: We show that \(\varPsi \) is a contraction mapping. Let \(u_i\in X_{M}\). We denote by \((v_i(\rho ,t),R_i(t))\) the solution of (9) with \(u=u_i\), where \(i=1,2\). For \(\tilde{u}_i:=\varPsi (u_i)\), set \(U:=\tilde{u}_1-\tilde{u}_2\). By a simple calculation, we see that U satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t U (\rho ,t) -\mathscr {L}'(v_2,R_2)U(\rho ,t)=G(u_1,u_2) \,\, &{}\text {in} \quad I \times (\,0,T\,), \\ \partial _\rho U(0,t)= \partial _\rho U(1,t)=0 \,\, &{}\text {in} \quad (\,0,T\,),\\ U(\rho , 0)= 0 \,\, &{}\text {in} \quad I, \end{array}\right. } \end{aligned}$$

where \(G(u_1,u_2)\) is given by

$$\begin{aligned} G(u_1,u_2)=\{ \mathscr {L}'(v_1,R_1)- \mathscr {L}'(v_2,R_2) \}\tilde{u}_1+\{P(u_1)-P(u_2)\}. \end{aligned}$$

By an argument similar to that in Step 1, we find \(G(u_1,u_2)\in C^{\alpha ,\alpha /2}(Q_T)\) and

$$\begin{aligned} \Vert G(u_1,u_2)\Vert _X \le C(T,u_0,R_0) \Vert u_1-u_2\Vert _X. \end{aligned}$$

Then the Schauder estimate asserts that

$$\begin{aligned} \Vert U\Vert _{C^{2+\alpha ,1+\alpha /2}(Q_T)} \le C(T, u_0,R_0)\Vert u_1-u_2\Vert _X. \end{aligned}$$

Since \(U(|x|,0)=0\) in \(B_1\), an argument similar to (16) shows that

$$\begin{aligned} \Vert \varPsi (u_1)-\varPsi (u_2) \Vert _X =\Vert U\Vert _X \le T^{1-\frac{\alpha }{2}}\Vert U\Vert _{C^{2+\alpha ,1+\alpha /2}(Q_T)} \le T^{1-\frac{\alpha }{2}} C\Vert u_1-u_2\Vert _X, \end{aligned}$$

where \(C= C(T, u_0,R_0)\). Thus, taking T small enough that \(T^{1-\alpha /2} C<1\), we conclude that \(\varPsi \) is a contraction mapping. Then Banach’s fixed point theorem yields a unique \(u\in X_{M}\) such that \(\varPsi (u)=u\). By the definition of \(\varPsi \), u is the unique solution of (\(\mathrm{PS}_{0}\)) on \([\,0,T\,)\). Moreover, we infer from the above argument that \(u \in C^{2+\alpha ,1+\alpha /2}(Q_T)\).

Finally we prove that \(v \in C^{1+\alpha , \alpha /2} ([\,0,1\,) \times (\,0,T\,))\cap C^1([\,0,1\,) \times (\,0,T\,))\). By a direct calculation, we have \(v \in C([\,0,T\,); H^1(I))\). Combining the fact with the Sobolev embedding theorem \(H^1(I) \hookrightarrow C^{0,1/2}(\bar{I})\), we obtain \(v \in C([\,0,T\,); C^{0,1/2}(\bar{I}))\), in particular \(v \in C(\bar{I} \times [\,0,T\,))\). Thus it follows from the continuity that

$$\begin{aligned} v(0,t) = \lim _{\rho \downarrow 0} v(\rho ,t) =0 \quad \text {for any} \quad t \in [\,0,T\,). \end{aligned}$$
(18)

Then, along the same line as in [23], we see that \(v \in C^1([\,0,1\,)\times (\,0,T\,))\). Moreover, applying the same argument as in (13) to

$$\begin{aligned} \partial _\rho v(\rho ,t) = {\left\{ \begin{array}{ll} -\dfrac{2}{\rho ^3} \displaystyle \int ^\rho _0 F(u(r,t),a(t)) r^2\, dr + F(u(\rho ,t),a(t)) \, &{}\text {if} \quad \rho >0,\\ \dfrac{1}{3} F(u(0,t),a(t)) \, &{}\text {if} \quad \rho =0, \end{array}\right. } \end{aligned}$$

we find \(v \in C^{1+\alpha , \alpha /2}([\,0,1\,) \times (\,0,T\,))\). This completes the proof. \(\square \)
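As a numerical aside (not part of the proof), the radial-average representation \(v(\rho ,t)=\rho ^{-2}\int _0^\rho F(u(r,t),a(t))r^2\,dr\) and the boundary value \(v/\rho \rightarrow F(u(0,t),a(t))/3\) as \(\rho \downarrow 0\) can be checked for any continuous integrand; the profile F below is an arbitrary illustrative stand-in.

```python
def F(r):
    # illustrative stand-in for the integrand F(u(r, t), a(t)); any continuous profile works
    return 1.0 + r * r

def v(rho, n=20000):
    # v(rho) = rho^{-2} \int_0^rho F(r) r^2 dr, by the composite trapezoidal rule
    if rho == 0.0:
        return 0.0
    h = rho / n
    s = 0.5 * F(rho) * rho * rho      # endpoint r = rho (the r = 0 endpoint vanishes)
    for k in range(1, n):
        r = k * h
        s += F(r) * r * r
    return s * h / (rho * rho)

# as rho -> 0, v(rho)/rho approaches F(0)/3, matching the boundary condition at rho = 0
print(abs(v(1e-3) / 1e-3 - F(0.0) / 3.0) < 1e-6)
```

For \(F(r)=1+r^2\) the integral is explicit, \(v(\rho )=\rho /3+\rho ^3/5\), so \(v(\rho )/\rho \rightarrow 1/3=F(0)/3\).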

Theorem 2.1

Let \((u_0, w_0, R_0, a_0,S_0)\) satisfy (3). Then there exists \(T>0\) such that the system (IAS) has a unique solution \((u,w,v,R,a,S)\) with \(S(t) \equiv S_0\) in \([\,0,T\,)\) in the class

$$\begin{aligned} {\left\{ \begin{array}{ll} u, w \in C^{2+\alpha ,1+\frac{\alpha }{2}}(Q_T), \quad R, a \in C^1((\,0,T\,)), \\ v \in C^{1+\alpha , \frac{\alpha }{2}}([\,0,1\,) \times (\,0,T\,)) \cap C^1([\,0,1\,) \times (\,0,T\,)). \end{array}\right. } \end{aligned}$$
(19)

Proof

Let \((u,v,R,a)\) be the solution of (\(\mathrm{PS}_{0}\)). According to Lemma 2.3, we see that the solution \((u,v,R,a)\) belongs to the class

$$\begin{aligned} C^{2+\alpha ,1+\frac{\alpha }{2}}\!(Q_T) \!\times \! (C^{1+\alpha , \frac{\alpha }{2}}\!([\,0,1\,) \! \times \! (\,0,T\,))\! \cap \! C^1\!([\,0,1\,) \! \times \! (\,0,T\,))) \!\times \! ( C^1\!((\,0,T\,)) )^2 \end{aligned}$$

for some \(T>0\). To begin with, we prove the existence of a short time solution to (P). If there exists \(T_1 \in (\,0,T\,]\) such that \(R(t) \equiv R_0\) in \([\,0, T_1\,)\), then \((u,v,R,a,S)\) with \(S(t) \equiv S_0\) is a solution of (P), since \(dR/dt = 0\) in \((\,0,T_1\,)\) implies that S(t) does not switch in \((\,0,T_1\,)\). On the other hand, if no such \(T_1\) exists, then, since R(t) is continuous, there exists \(T_2 \in (\,0,T\,]\) such that \(R(t) \not \in \{r_0, r_1\}\) in \((\,0,T_2\,)\). Then it is clear that \((u,v,R,a,S)\) with \(S(t) \equiv S_0\) satisfies (P) in \((\,0,T_2\,)\). Thus we see that \((u,v,R,a,S)\) with \(S(t) \equiv S_0\) is a solution of (P) in \((\,0,T^*\,)\) for some \(T^* \in (\,0,T\,]\).

We show the uniqueness. Let \((u_1, v_1, R_1, a_1, S_1) \not = (u_2, v_2, R_2, a_2, S_2)\) be solutions of (P) satisfying (19). Along the same line as above, we see that \(S_1(t) = S_2(t) =S_0\) in \([\,0, \widetilde{T}\,)\) for some \(\widetilde{T} \in (\,0, T^*\,]\). Then the uniqueness of the solution of (\(\mathrm{PS}_{0}\)) leads to a contradiction.

Thanks to Lemma 2.2, we observe that (mIAS) has a unique solution. Moreover, it follows from (18) that the solution satisfies (IAS). Finally we show the uniqueness of solutions of (IAS). Suppose that \((u_i, w_{i}, v_i, R_i,a_i,S_0)\) are solutions of (IAS) in the class (19), where \(i=1,2\). Then, by the proof of Lemma 2.2, we observe that (IAS) is reduced to (P) with the condition on \(v/\rho \) replaced by (4). It is clear that \(a_1(t)=a_2(t)\) in \([\,0,T\,)\). Set \(U:=u_1-u_2\). Then it follows from Step 2 in the proof of Lemma 2.3 that

$$\begin{aligned} \Vert U \Vert _{C^{2+\alpha , 1+\frac{\alpha }{2}}(Q_T)} \le C \Vert U \Vert _{C^{\alpha , \frac{\alpha }{2}}(Q_T)}. \end{aligned}$$
(20)

Moreover, we find

$$\begin{aligned} \Vert U \Vert _{C^{\alpha , \frac{\alpha }{2}}(Q_T)} \le T^{1-\frac{\alpha }{2}} \Vert U \Vert _{C^{2+\alpha , 1+\frac{\alpha }{2}}(Q_T)} \le CT^{1-\frac{\alpha }{2}}\Vert U \Vert _{C^{\alpha , \frac{\alpha }{2}}(Q_T)}. \end{aligned}$$
(21)

Letting T be small enough such that \(CT^{1- \alpha /2} <1\), we observe from (21) that \(\Vert U\Vert _{C^{\alpha , \alpha /2}} =0\). Combining the fact with (20), we obtain the conclusion. \(\square \)

In order to prove that \(u, w \in [\, 0, 1\,]\) in \(B_1\times [\, 0,T \,)\), we apply a parabolic comparison principle to (IAS). Using \((u,v,R,a,S)\), the solution of (P) in \(Q_T\) constructed in Theorem 2.1, we define the operator

$$\begin{aligned} \mathscr {P}_i: C^{2,1}(B_1\times (\,0,T\,)) \cap C(\overline{B_1} \times [\,0,T\,)) \rightarrow C(B_1\times (\,0,T\,)) \end{aligned}$$

as follows:

$$\begin{aligned} \mathscr {P}_1z:= \partial _t z- \!\mathscr {L}'(v,R) z- P(z,a),\quad \mathscr {P}_2z:= \partial _t z- \!\mathscr {L}'(v,R) z+ P(1-z,a). \end{aligned}$$

Regarding the operator \(\mathscr {P}_i\), the following parabolic comparison principle holds:

Lemma 2.4

Assume that \(z,\zeta \in C^{2,1}(B_1\times (\,0,T\,))\cap C(\overline{B_1} \times [\,0,T\,)) \) satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathscr {P}_i z \ge \mathscr {P}_i \zeta \quad &{} \text {in} \quad B_1\times (\,0,T\,),\\ \partial _\nu z \ge \partial _\nu \zeta \quad &{} \text {on} \quad \partial B_1\times (\,0,T\,), \\ z \ge \zeta \quad &{} \text {in} \quad \overline{B_1}\times \{t=0\}. \end{array}\right. } \end{aligned}$$

Then \(z \ge \zeta \) in \(\overline{B_1} \times [\,0,T\,)\).

Proof

Since the proof of Lemma 2.3 implies that the coefficients in the operator \(\mathscr {L}'(v,R)\) are bounded, Lemma 2.4 follows from the standard argument (e.g., see [4, 17]). \(\square \)

By virtue of Lemma 2.4, one can verify \(0\le u \le 1\) and \(0 \le w \le 1\):

Lemma 2.5

Let \((u,w,v,R,a,S)\) be a solution of (IAS) obtained by Theorem 2.1. Then, \(0\le u \le 1\) and \(0\le w \le 1\) in \(\overline{B_1} \times [\,0,T\,)\).

We close this section with a property of certain quantities of u and w.

Lemma 2.6

Let us define

$$\begin{aligned} {\left\{ \begin{array}{ll} U(t):=4\pi R^3(t)\displaystyle {\int ^1_0 u(\rho ,t) \rho ^2 \,d\rho },\\ W(t):=4\pi R^3(t)\displaystyle {\int ^1_0 w(\rho ,t)\rho ^2 \,d\rho }, \end{array}\right. }\quad {\left\{ \begin{array}{ll} V_1(t):=\displaystyle {\int ^1_0 u(\rho ,t) \rho ^2 \,d\rho }, \\ V_2(t):=\displaystyle {\int ^1_0 w(\rho ,t)\rho ^2 \,d\rho }. \end{array}\right. } \end{aligned}$$

Then U, W, \(V_1\), and \(V_2\) satisfy

$$\begin{aligned}&\frac{dU}{dt}(t)=4\pi R^3(t)\int ^1_0 c_1 u(\rho ,t)^2 \rho ^2 d\rho +\{f_1(a(t))-c_1\}U(t), \end{aligned}$$
(22)
$$\begin{aligned}&\frac{dW}{dt}(t)=4\pi R^3(t)\int ^1_0 c_2 w(\rho ,t)^2 \rho ^2 d\rho +\{f_2(a(t))-c_2\}W(t), \end{aligned}$$
(23)
$$\begin{aligned}&\frac{dV_1}{dt}(t)\,\, {\left\{ \begin{array}{ll} \le g(a(t)) V_1(t)+3\{-g(a(t))+c_1+c_2\}V_1(t)^2,\\ \ge \{g(a(t))-c_1\}V_1(t)-3\{g(a(t))-c_1\}V_1(t)^2, \end{array}\right. }\end{aligned}$$
(24)
$$\begin{aligned}&\frac{dV_2}{dt}(t)\,\, {\left\{ \begin{array}{ll} \le -g(a(t)) V_2(t)+3\{g(a(t))+c_1+c_2\}V_2(t)^2, \\ \ge -\{g(a(t))+c_2\}V_2(t)+3\{g(a(t))+c_2\}V_2(t)^2, \end{array}\right. } \end{aligned}$$
(25)

respectively, where g is a function defined by

$$\begin{aligned} g(z):=f_1(z)-f_2(z). \end{aligned}$$
(26)

Proof

Equations (22) and (23) were obtained in [23]. We shall show (24) and (25). It follows from Jensen’s inequality and Lemma 2.5 that

$$\begin{aligned} 3 V_1(t)^2 \le \!\int ^1_0 \!\!u(\rho ,t)^2 \rho ^2 \, d\rho \le V_1(t),\,\,\, 3 V_2(t)^2 \le \!\int ^1_0 \!\!w(\rho ,t)^2 \rho ^2 \, d\rho \le V_2(t). \end{aligned}$$
(27)

Combining (27) with the same argument as in [23], we obtain the conclusion. \(\square \)
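The two-sided bound (27) is elementary to check on explicit profiles: Jensen's inequality with respect to the probability measure \(3\rho ^2\,d\rho \) on \([\,0,1\,]\) gives the lower bound, and \(u^2\le u\) (valid since \(0\le u\le 1\)) gives the upper one. A minimal exact-arithmetic check on two sample profiles:

```python
from fractions import Fraction

# exact moments for two sample profiles with 0 <= u <= 1 on [0, 1]:
#   u(rho) = rho   : V = \int u rho^2 = 1/4, \int u^2 rho^2 = 1/5
#   u(rho) = rho^2 : V = 1/5,                \int u^2 rho^2 = 1/7
cases = [(Fraction(1, 4), Fraction(1, 5)),
         (Fraction(1, 5), Fraction(1, 7))]

# the two-sided bound (27): 3 V^2 <= \int_0^1 u^2 rho^2 d rho <= V
print(all(3 * V ** 2 <= m2 <= V for V, m2 in cases))
```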

Remark 2.1

The function g denotes the difference between the net growth rates of AD cells and AI cells. We use this notation frequently in the rest of the paper.

3 Asymptotic Behavior of Non-switching Solutions

We devote this section to investigating the asymptotic behavior of “non-switching” solutions of (IAS). To begin with, we shall show the long time existence of the non-switching solutions of (IAS).

Theorem 3.1

Let \((u_0, w_0, R_0, a_0,S_0)\) satisfy (3) and \(S_0=1\). Then the system (IAS) with \(r_0=0\) has a unique solution \((u,w,v,R,a,S)\) with \(S(t)\equiv 1\) in \([\,0,\infty \,)\) in the class

$$\begin{aligned} {\left\{ \begin{array}{ll} u, w \in C^{2+\alpha ,1+\frac{\alpha }{2}}(Q_\infty ), \quad R, a \in C^1(\mathbb {R}_+), \\ v \in C^{1+\alpha , \frac{\alpha }{2}}([\,0,1\,)\times \mathbb {R}_+) \cap C^1([\,0,1\,)\times \mathbb {R}_+). \end{array}\right. } \end{aligned}$$

Proof

It follows from Theorem 2.1 that (IAS) with \(r_0=0\) has a unique solution with \(S(t)\equiv 1\) in \(Q_T\) for some \(T>0\). Since

$$\begin{aligned} R(t)=R_0 \exp { \Bigm [ \int ^t_0 v(1,s)\, ds \Bigm ] }, \end{aligned}$$

we observe from the continuity of the solution that R(t) is positive, i.e. \(S(t) \equiv 1\), as long as the solution exists. Thus, by a standard argument (e.g., see [6]), the solution can be extended to any \(T>0\). Indeed, if there exists \(\widetilde{T}>0\) such that the solution cannot be extended beyond \(\widetilde{T}\), then the proof of Theorem 2.1 implies that

$$\begin{aligned} \Vert u(\cdot , t)\Vert _{C^{2+\alpha }(B_1)}\rightarrow \infty \quad \text {as} \quad t\uparrow \widetilde{T}. \end{aligned}$$
(28)

On the other hand, since u is a solution of (\(\mathrm{PS}_{0}\)) on \([\,0,\widetilde{T}\,)\), it holds that

$$\begin{aligned} \Vert u(\cdot , t)\Vert _{C^{2+\alpha }(B_1)} \le \Vert u\Vert _{C^{2+\alpha ,1+\alpha /2}_{x,t}(Q_{\widetilde{T}})} \le C ( C(\widetilde{T}) +\Vert u_0\Vert _{C^{2+\alpha }(B_1)} ). \end{aligned}$$
(29)

Since (29) contradicts (28), we obtain the conclusion. \(\square \)

Remark 3.1

The system (IAS) with \(r_0=0\) and \(S_0=1\) describes a tumor growth under the CAS therapy.

Corollary 3.1

Let \((u_0, w_0, R_0, a_0, S_0)\) satisfy (3) and \(S_0=0\). Then the system (IAS) with \(r_1=\infty \) has a unique solution \((u,w,v,R,a,S)\) with \(S(t)\equiv 0\) in \([\,0,\infty \,)\) in the class

$$\begin{aligned} {\left\{ \begin{array}{ll} u, w \in C^{2+\alpha ,1+\frac{\alpha }{2}}(Q_\infty ), \quad R, a \in C^1(\mathbb {R}_+), \\ v \in C^{1+\alpha , \frac{\alpha }{2}}([\,0,1\,)\times \mathbb {R}_+) \cap C^1([\,0,1\,)\times \mathbb {R}_+). \end{array}\right. } \end{aligned}$$

In the following, we classify the asymptotic behavior of non-switching solutions obtained by Theorem 3.1 and Corollary 3.1. Recalling Lemma 2.2 and Theorem 2.1, we may consider (P) instead of (IAS).

If \(u_0\) is trivial, i.e., \(u_0 \equiv 0\) or \(u_0 \equiv 1\), then Lemma 2.4 asserts that u is also trivial in \(Q_T\). Thus it is sufficient to consider the initial data \((u_0, R_0,a_0,S_0)\) satisfying

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}u_0\in C^{2+\alpha }(B_1), \quad \partial _\rho u_0(0)= \partial _\rho u_0(1)=0, \quad 0 \le u_0 \le 1,\\ &{}u_0(\rho )\not \equiv 0, \quad u_0(\rho )\not \equiv 1, \quad 0<a_0<a_*, \quad R_0>0,\quad S_0\in \{0,1\}, \end{array}\right. }\quad {(\mathrm{IC})} \end{aligned}$$

where \(0<\alpha <1\). Regarding \(f_i\) and \(c_i\), we assume (A0) throughout this section.

From now on, for a function \(h:[\,0,a_*\,]\rightarrow \mathbb {R}\), we define \(\Vert h \Vert _{\infty }\) by

$$\begin{aligned} \Vert h \Vert _{\infty }:= \sup _{z \in [\,0,a_*\,]} |h(z)|. \end{aligned}$$
(30)

First we consider the asymptotic behavior of solutions to (P) with \(S\equiv 1\).

Theorem 3.2

Let \(r_0=0\). Let \((u_0,R_0,a_0,S_0)\) satisfy (IC) and \(S_0=1\). Assume that either of the following two assumptions holds\(:\)

(i)    \(g(0)+c_2<0\mathrm{;}\)

(ii)    \(g(0)+c_2>0\) and

$$\begin{aligned} \int ^1_0 u_0(\rho ) \rho ^2 \, d\rho < \frac{1}{3}\frac{-g(0)}{-g(0)+c_1+c_2}\exp {\Bigm [-\frac{a_0}{\gamma }\Vert g' \Vert _{\infty }\Bigm ]}. \end{aligned}$$
(31)

Then the solution \((u,v,R,a,S)\) of (P) satisfies \(R(t) \rightarrow \infty \) as \(t\rightarrow \infty \).

Proof

To begin with, we note that \(S(t)\equiv 1\) under (P) with \(r_0=0\) and \(S_0=1\).

We prove the case (i). Since \(S \equiv 1\) yields the monotonicity of a(t), and hence that of \(f_{i}(a(t))\), the assumptions (A0) and (i) provide \(s_1>0\) such that

$$\begin{aligned} f_1(a(t))<0, \quad f_2(a(t))>0, \quad -g(a(t))-c_2>0 \quad \text {for any} \quad t\ge s_1. \end{aligned}$$

Recalling that \(u_0 \not \equiv 1\) yields \(V_2(t)>0\) for any \(t\ge 0\) and setting \(\widetilde{V_2}(t):=1/V_2(t)\), we observe from (25) that

$$\begin{aligned} \frac{d\widetilde{V_2}}{dt}(t) \le \{g(a(t))+c_2\} \widetilde{V_2}(t) -3\{g(a(t))+c_2\}. \end{aligned}$$
(32)

Applying Gronwall’s inequality to (32), we have

$$\begin{aligned} \widetilde{V_2}(t)\le 3+\left( \widetilde{V_2}(0)-3\right) \exp {\Bigm [\int ^t_0 \{g(a(s))+c_2\}\,ds\Bigm ]}. \end{aligned}$$

Since

$$\begin{aligned}&\int ^t_0 \{g(a(s))+c_2\}\,ds =\int ^{s_1}_0\{g(a(s))+c_2\}\,ds+\int ^t_{s_1} \{g(a(s))+c_2\}\,ds\\&\qquad \le (g(a_0)+c_2)s_1-\{-g(a(s_1))-c_2\}(t-s_1)\rightarrow -\infty \quad \text {as} \quad t \rightarrow \infty , \nonumber \end{aligned}$$

one can verify that \(\limsup _{t\rightarrow \infty } \widetilde{V_2}(t) \le 3\). On the other hand, since \(w \le 1\) yields \(\widetilde{V_2}(t) \ge 3\) in \([\, 0, \infty \,)\), we find \(\liminf _{t \rightarrow \infty } \widetilde{V_2}(t) \ge 3\). Thus we have \(\lim _{t\rightarrow \infty } \widetilde{V_2}(t)=3\) and then

$$\begin{aligned} \left\| w(\cdot ,t)-1 \right\| _{L^{\infty }(B_1)} \rightarrow 0 \quad \text {as} \quad t\rightarrow \infty . \end{aligned}$$
(33)

Since \(u+w\equiv 1\), it follows from (33) that for any \(\varepsilon \) with

$$\begin{aligned} 0<\varepsilon <\frac{f_2(a(s_1))}{-g(0)+c_1+c_2}, \end{aligned}$$
(34)

there exists \(T_1>s_1\) such that

$$\begin{aligned} \left\| u(\cdot ,t) \right\| _{L^{\infty }(B_1)} <\varepsilon \quad \text {for any} \quad t > T_1. \end{aligned}$$
(35)

In what follows, let \(t>T_1\). Since R satisfies

$$\begin{aligned} R(t) = R_0 \exp {\Bigm [\int ^{T_1}_0 v(1,s)\,ds \Bigm ]} \exp { \Bigm [ \int ^t_{T_1} v(1,s)\,ds \Bigm ]}, \end{aligned}$$
(36)

it is sufficient to estimate the integrals in the right-hand side of (36). We observe from the continuity of \(v(1,\cdot )\) that

$$\begin{aligned} \int ^{T_1}_0v(1,s)\,ds \ge -C T_1 \end{aligned}$$

for some \(C>0\). Moreover, we obtain

$$\begin{aligned} \int ^t_{T_1} v(1,s)\,ds&= \int ^t_{T_1} \int ^1_0 F(u(\rho ,s), a(s))\rho ^2\,d\rho \,ds \\&\ge \int ^t_{T_1} \int ^1_0 [-\{-g(a(s))+c_1+c_2\}u+f_2(a(T_1))]\rho ^2\,d\rho \,ds \\&\ge \frac{1}{3}\{-(-g(0)+c_1+c_2)\varepsilon +f_2(a(s_1))\}(t-T_1). \end{aligned}$$

Hence, it follows from (34) and (35) that \(\liminf _{t \rightarrow \infty }R(t) = \infty \).
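Before turning to case (ii), the mechanism behind (i) — namely \(\widetilde{V_2}(t)\rightarrow 3\), i.e. \(w\rightarrow 1\) in (33) — can be checked numerically by integrating the equality case of (32). All parameter choices below (g, \(\gamma \), \(a_0\), \(c_2\), and the exponential decay of a under \(S\equiv 1\)) are illustrative assumptions for this sketch, not values from the paper.

```python
import math

# illustrative parameters: g(z) = z - 2 and c2 = 1 give g(0) + c2 = -1 < 0,
# i.e. case (i) of Theorem 3.2
gamma, a0, c2 = 1.0, 0.5, 1.0
g = lambda z: z - 2.0

def a(t):
    # androgen decaying under continuous suppression (S = 1); a model choice for this sketch
    return a0 * math.exp(-gamma * t)

# equality case of (32) for W = 1 / V2:  dW/dt = (g(a(t)) + c2) (W - 3)
W, t, dt = 30.0, 0.0, 1e-3
for _ in range(20000):            # forward Euler up to t = 20
    W += dt * (g(a(t)) + c2) * (W - 3.0)
    t += dt

# W converges to 3, i.e. V2 -> 1/3, which corresponds to w -> 1
print(abs(W - 3.0) < 1e-3)
```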

Next we turn to the case (ii). By the assumption (A0) and the monotonicity of \(f_i(a(\cdot ))\), there exists \(s_2\ge 0\) such that

$$\begin{aligned} f_2(a(t))>0, \quad g(a(t))<0, \quad \text {for any} \quad t \ge s_2. \end{aligned}$$

Recalling \(V_1(t)>0\) in \([\,0,\infty \,)\) and setting \(\widetilde{V_1}(t):=1/V_1(t)\), we reduce (24) to

$$\begin{aligned} \frac{d\widetilde{V_1}}{dt}(t) \ge -g(a(t)) \widetilde{V_1}-3\{-g(a(t))+c_1+c_2\}. \end{aligned}$$

Since it follows from the same argument as in (i) that

$$\begin{aligned} \widetilde{V_1}(t) \ge e^{-\int ^t_0 g(a(s))\,ds} \Bigm [ 3(g(0)-c_1-c_2)\int ^t_0 e^{\int ^s_0 g(a(\tau ))\,d\tau }ds+\widetilde{V_1}(0) \Bigm ], \end{aligned}$$
(37)

we estimate the integral in the right-hand side of (37). Noting that \(a(\cdot )\) is monotone decreasing, we use the change of variable \(a(s)=z\), and then

$$\begin{aligned} \int ^t_0 g(a(s))\,ds&= -\frac{1}{\gamma } \int ^{a(t)}_{a_0} \frac{g(z) }{z} \, dz = - \frac{1}{\gamma } \int ^{a(t)}_{a_0} \Bigm [ \frac{g(0)}{z}+g'(\tilde{z}) \Bigm ]\,dz \\&\le - \frac{g(0)}{\gamma } \log \frac{a(t)}{a_0} +\frac{a_0}{\gamma }\Vert g' \Vert _{\infty }, \nonumber \end{aligned}$$
(38)

where \(\tilde{z}\in (\,0,a_0\,)\). Combining (37) with (38), we obtain

$$\begin{aligned} \widetilde{V_1}(t)&\ge \Bigm ( \frac{a(t)}{a_{0}} \Bigm )^{\frac{g(0)}{\gamma }} \Bigm [ 3(g(0)-c_1-c_2)\int ^{\infty }_0 \!\!\Bigm (\frac{a_{0}}{a(s)}\Bigm )^{\frac{g(0)}{\gamma }} \!\!\!\! ds +\!\widetilde{V_1}(0)e^{-\!\frac{a_0}{\gamma }\Vert g' \Vert _{\infty }} \Bigm ]\\&\ge \Bigm (\frac{a(t)}{a_{0}} \Bigm )^{\frac{g(0)}{\gamma }} \Bigm [ \frac{-3(-g(0)+c_1+c_2)}{-g(0)}+\widetilde{V_1}(0)e^{-\frac{a_0}{\gamma }\Vert g' \Vert _{\infty }} \Bigm ]. \end{aligned}$$

Under (A0) and (31), the inequality implies that \(\widetilde{V_1} \rightarrow \infty \) as \(t \rightarrow \infty \), i.e., \(V_1(t) \rightarrow 0 \) as \(t \rightarrow \infty \). Thus for any \(\varepsilon \) with

$$\begin{aligned} 0<\varepsilon <\frac{f_2(a(s_2))}{-g(0)+c_1+c_2}, \end{aligned}$$
(39)

there exists \(T_2\ge s_2\) such that

$$\begin{aligned} \left\| u(\cdot ,t) \right\| _{L^{\infty }(B_1)}<\varepsilon \quad \text {for any} \quad t>T_2. \end{aligned}$$
(40)

By virtue of (39) and (40), we have

$$\begin{aligned} \int ^t_{T_2} v(1,s) \,ds&\ge \int ^t_{T_2} \int ^1_0 \{-(-g(0)+c_1+c_2)u+ f_2(a(T_2))\}\rho ^2 \,d\rho \,ds \\&\ge \frac{1}{3} \{-(-g(0)+c_1+c_2)\varepsilon + f_2(a(s_2))\}(t-T_2). \end{aligned}$$

Thus we see that \(\liminf _{t \rightarrow \infty } R(t)=\infty \) along the same line as in (i). \(\square \)
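The change-of-variable estimate (38) used in case (ii) can also be sanity-checked numerically: with an exponentially decaying androgen level, the time integral of \(g(a(s))\) is dominated by the logarithmic term plus the \(\Vert g'\Vert _\infty \) correction. The choices of g, \(\gamma \), and \(a_0\) below are illustrative assumptions.

```python
import math

# illustrative assumptions: g(z) = z - 1, so g(0) = -1 and ||g'|| = 1,
# with androgen a(t) = a0 * exp(-gamma t) under continuous suppression
gamma, a0 = 0.5, 0.4
g = lambda z: z - 1.0
sup_gprime = 1.0

T, n = 10.0, 20000
h = T / n
# left-hand side of (38): \int_0^T g(a(s)) ds, by the composite trapezoidal rule
lhs = 0.5 * (g(a0) + g(a0 * math.exp(-gamma * T)))
for k in range(1, n):
    lhs += g(a0 * math.exp(-gamma * k * h))
lhs *= h

# right-hand side of (38): -(g(0)/gamma) log(a(T)/a0) + (a0/gamma) ||g'||_infty
aT = a0 * math.exp(-gamma * T)
rhs = -(g(0.0) / gamma) * math.log(aT / a0) + (a0 / gamma) * sup_gprime
print(lhs <= rhs)
```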

Next we give the asymptotic behavior of solutions to (P) with \(r_0=0\) and \(S_0=1\).

Theorem 3.3

Let \(r_0=0\). Let \((u_0,R_0,a_0,S_0)\) satisfy (IC) and \(S_0=1\). Assume that

$$\begin{aligned} g(0) + c_2 > 0 \end{aligned}$$
(41)

and

$$\begin{aligned} \min _{\rho \in [\,0,1\,] }u_0(\rho )>1-\frac{g(0)+c_2}{g(0)+c_1+2c_2}. \end{aligned}$$
(42)

Then the solution \((u,v,R,a,S)\) of (P) satisfies \(R(t) \rightarrow 0\) as \(t\rightarrow \infty \).

Proof

Recalling that \(S \equiv 1\) under (P) with \(r_0=0\) and \(S_0=1\), and using (A0), we find \(s_3 \ge 0\) such that

$$\begin{aligned} f_1(a(t))<0 \quad \text {for any} \quad t \ge s_3. \end{aligned}$$
(43)

Let \(\overline{w}\) be the solution of the following initial value problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d\overline{w}}{dt}(t)=-\{g(a(t))+c_2\}\overline{w}(t)+\{g(a(t))+c_1+2c_2\}\overline{w}(t)^2,\\ \overline{w}(0)=1-\displaystyle \min _{\rho \in [\,0,1\,] } u_0(\rho ). \end{array}\right. } \end{aligned}$$

Then Lemma 2.4 asserts that

$$\begin{aligned} 0\le w(\rho ,t)\le \overline{w}(t) \quad \text {for any} \quad (\rho , t) \in [\,0,1\,] \times [\,0, \infty \,), \end{aligned}$$
(44)

i.e., \(\overline{w}\) is a supersolution of w. Since \(w_0 \not \equiv 0\), the relation (44) implies \(\overline{w}(t)>0\) for any \(t\ge 0\). Setting \(\omega := 1/ \overline{w}\), we see that \(\omega \) is expressed by

$$\begin{aligned} \omega =e^{\int ^t_0 \{g(a(s))+c_2\}\,ds}\!\! \Bigm [-\!\!\int ^t_0 \!\!\{g(a(s))+c_1+2c_2\}e^{-\int ^s_0 \{g(a(\tau ))+c_2\}\,d\tau } ds+\frac{1}{\overline{w}(0)}\Bigm ]. \end{aligned}$$

Here we have

$$\begin{aligned}&\int ^t_0 \{g(a(s))+c_1+2c_2\}e^{-\int ^s_0 \{g(a(\tau ))+c_2\}\,d\tau }\,ds\\&\,\,=\int ^t_0 \!\{g(a(s))+c_2\}e^{-\int ^s_0\{ g(a(\tau ))+c_2\}\,d\tau }ds +(c_1+c_2)\int ^t_0 e^{-\int ^s_0 \{g(a(\tau ))+c_2\}\,d\tau }ds\\&\,\, \le -e^{-\int ^t_0 \{g(a(\tau ))+c_2\}\,d\tau }+1 +(c_1+c_2)\int ^t_0 e^{-\{ g(0)+c_2\} s}\,ds\\&\,\, \le 1 +\frac{c_1+c_2}{g(0)+c_2}\left( 1-e^{-\{ g(0)+c_2\} t}\right) \le \frac{g(0)+c_1+2c_2}{g(0)+c_2}. \end{aligned}$$

Since it follows from (41) that

$$\begin{aligned} \liminf _{t \rightarrow \infty } \exp {\Bigm [\int ^t_0 \{g(a(s))+c_2\}\,ds\Bigm ]} \ge \liminf _{t \rightarrow \infty } \exp {\left[ (g(0)+c_2)t\right] } =\infty , \end{aligned}$$

we observe from (42) that \(\lim _{t \rightarrow \infty }\omega (t)= \infty \), i.e., \(\lim _{t \rightarrow \infty }\overline{w}(t) = 0\), where we used the positivity of \(\overline{w}\). With the aid of (44), for any \(\varepsilon \) with

$$\begin{aligned} 0<\varepsilon <\frac{-f_1(a(s_3))}{-g(0)}, \end{aligned}$$
(45)

there exists \(T_3>s_3\) such that

$$\begin{aligned} \left\| w(\cdot ,t) \right\| _{L^{\infty }(B_1)}<\varepsilon \quad \text {for any} \quad t > T_3. \end{aligned}$$

Recalling \(u=1-w\) and using the same argument as in the proof of Theorem 3.2 (i), we can verify that

$$\begin{aligned} R(t)&\le R_0 e^{ C T_3}e^{\int ^t_{T_3} v(1,s)\,ds} \le R_0 e^{ C T_3}\exp {\!\Bigm [\int ^t_{T_3} \{-g(a(s))w +f_1(a(s))\}\,ds\Bigm ]}\\&\le R_0 e^{ C T_3}\exp {\!\Bigm [\frac{1}{3}\{-g(0)\varepsilon +f_1(a(s_3))\}(t-T_3)\Bigm ]}. \end{aligned}$$

Then (45) yields \( \limsup _{t \rightarrow \infty } R(t)= 0\). \(\square \)
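The supersolution ODE for \(\overline{w}\) at the heart of this proof can be integrated numerically to illustrate \(\overline{w}\rightarrow 0\) under (41) and (42). The parameter values below (g, \(\gamma \), \(a_0\), \(c_1\), \(c_2\), and the decay law for a) are illustrative assumptions, not taken from the paper.

```python
import math

# illustrative data: g(z) = 1 - z, c1 = c2 = 0.5, a(t) = a0 * exp(-gamma t)
gamma, a0, c1, c2 = 1.0, 0.5, 0.5, 0.5
g = lambda z: 1.0 - z                      # g(0) + c2 = 1.5 > 0, so (41) holds

threshold = (g(0.0) + c2) / (g(0.0) + c1 + 2 * c2)   # (42): w_bar(0) must stay below this
w_bar, t, dt = 0.3, 0.0, 1e-3              # w_bar(0) = 0.3 < threshold = 0.6

# integrate the supersolution ODE from the proof of Theorem 3.3 by forward Euler
for _ in range(20000):                     # up to t = 20
    a = a0 * math.exp(-gamma * t)
    w_bar += dt * (-(g(a) + c2) * w_bar + (g(a) + c1 + 2 * c2) * w_bar ** 2)
    t += dt
print(w_bar < 1e-6)
```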

We turn to the case of (P) with \(r_1=\infty \) and \(S_0=0\). We note that (P) with \(r_1=\infty \) and \(S_0=0\) describes the behavior of a prostate tumor without medication.

Theorem 3.4

Let \(r_1=\infty \). Let \((u_0,R_0,a_0,S_0)\) satisfy (IC) and \(S_0=0\). We suppose that one of the following assumptions holds\(:\)

(i)    \(f_1(a_*)-c_1>0\mathrm{;}\)     (ii)    \(f_2(a_*)-c_2>0\mathrm{;}\)     (iii)    \(g(a_*)-c_1>0\mathrm{;}\)

(iv)    \(-g(a_*)+c_1>0\), \(f_2(a_*)>0\), and

$$\begin{aligned} \max _{\rho \in [\,0,1\,]}u_0(\rho )< \frac{-g(a_*)+c_1}{-g(a_*)+2c_1+c_2}; \end{aligned}$$
(46)

(v)    \(g(a_*)+c_2>0\) and

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} u_0(\rho ) > 1- \frac{g(a_*)+c_2}{g(a_*)+c_1+2c_2} \exp {\Bigm [-\frac{a_*}{\gamma }\Vert g' \Vert _{\infty }\Bigm ]}. \end{aligned}$$

Then the solution \((u,v,R,a,S)\) of (P) satisfies \(R(t) \rightarrow \infty \) as \(t\rightarrow \infty \).

Proof

We prove the case (i). Note that \(S \equiv 0\) yields the monotonicity of a(t), and hence that of \(f_{i}(a(t))\). Under the assumption (i), we thus find \(s_4 \ge 0\) such that \(f_1(a(t))-c_1>0\) for any \(t \ge s_4\). Since it follows from (22) that

$$\begin{aligned} \frac{dU}{dt} \ge \{f_1(a(t))-c_1\}U(t) \quad \text {for any} \quad t \ge 0, \end{aligned}$$

making use of Gronwall’s inequality and the monotonicity of \(f_1(a(\cdot ))\), we find

$$\begin{aligned} U(t)&\ge U(s_4)\exp {\Bigm [\int ^t_{s_4} \{ f_1(a(s))-c_1\} \,ds\Bigm ]}\\&\ge U(s_4)\exp {\left[ \{f_1(a(s_4))-c_1\}(t-s_4)\right] } \quad \text {for any} \quad t\ge s_4. \end{aligned}$$

Consequently we see that

$$\begin{aligned} \liminf _{t\rightarrow \infty }\frac{4}{3}\pi R^3(t)=\liminf _{t\rightarrow \infty }\left\{ U(t)+W(t)\right\} \ge \liminf _{t\rightarrow \infty } U(t)= \infty . \end{aligned}$$

Regarding the other cases, we obtain the conclusion along the same line as in the proof of Theorem 3.2. \(\square \)

By the same argument as in the proof of Theorem 3.3, we obtain the following:

Theorem 3.5

Let \(r_1=\infty \). Let \((u_0,R_0,a_0,S_0)\) satisfy (IC) and \(S_0=0\). Assume that

$$\begin{aligned} -g(a_*)+c_1>0,\quad f_2(a_*)<0, \end{aligned}$$
(47)

and (46). Then the solution \((u,v,R,a,S)\) of (P) satisfies \(R(t) \rightarrow 0\) as \(t\rightarrow \infty \).

4 Proof of the Main Theorem

The purpose of this section is to prove the existence of a switching solution of (IAS) and investigate its properties under the assumptions (A0)–(A2). Here we note that (A1) and (A2) are written as \(g(a_*)-c_1>0\) and \(g(0)+c_2>0\), respectively, where g was defined by (26). For this purpose, we may deal with (P) instead of (IAS), since the solution of (P) constructed in Sect. 2 also satisfies (IAS). In the following, we fix an arbitrary \((u_0,R_0,a_0,S_0)\) satisfying (IC), \(u_0>0\), and \(S_0=0\).

To begin with, we shall study the behavior of solutions of (P) with \(S\equiv 0\). More precisely, for each “initial data” \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\), we consider the following system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d^{} {\tilde{a}}}{d {t}^{}}(t)= -\gamma ({\tilde{a}}(t)-a_*) \, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _t {\tilde{u}}(\rho ,t) - \mathscr {L}'(\tilde{v}, \tilde{R}) {\tilde{u}}(\rho ,t) = P({\tilde{u}}(\rho ,t), {\tilde{a}}(t)) \, &{}\text {in} \quad I_\infty , \\ {\tilde{v}}(\rho ,t)= \dfrac{1}{\rho ^2}\displaystyle \int _0^{\rho } F({\tilde{u}}(r,t),{\tilde{a}}(t))r^2\,dr &{}\text {in}\quad I_\infty ,\\ \dfrac{d^{} {\widetilde{R}}}{d {t}^{}}(t) = {\tilde{v}}(1,t) {\widetilde{R}}(t)\, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _\rho {\tilde{u}}(\rho ,t)\big |_{\rho \in \{0,1\}}= 0, \,\,\, \dfrac{{\tilde{v}}(\rho ,t)}{\rho }\Bigm |_{\rho =0}\!=\dfrac{1}{3} F({\tilde{u}}(0,t),{\tilde{a}}(t)),\, &{}\text {in} \quad \mathbb {R}_+, \\ {\tilde{a}}(0)= {\tilde{a}}_0, \,\, {\tilde{u}}(\rho , 0)= {\tilde{u}}_0(\rho ), \,\, {\widetilde{R}}(0)= {\widetilde{R}}_0, &{}\text {in} \quad I, \end{array}\right. }\quad {(\mathrm{P0})} \end{aligned}$$

where the operator \(\mathscr {L}'\) was defined by (8). We characterize the time variable in terms of the solution \({\tilde{a}}(\cdot )\) to (P0). Recalling that \(f_1\) is monotone, we define a function \(\tau _0 : (\,0, f_1(a_*)-f_1({\tilde{a}}_0)\,] \rightarrow [\,0,\infty \,)\) as

$$\begin{aligned} \tau _0(\varepsilon )={\tilde{a}}^{-1}(f_1^{-1}(f_1(a_*)-\varepsilon )), \end{aligned}$$
(48)

where \({\tilde{a}}^{-1}\) and \(f_1^{-1}\) denote the inverse functions of \({\tilde{a}}\) and \(f_1\), respectively. Note that, since \({\tilde{a}}(t) \uparrow a_*\) as \(t\rightarrow \infty \), \(\varepsilon \downarrow 0\) is equivalent to \(\tau _0(\varepsilon )\rightarrow \infty \).
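Concretely, with the explicit solution \({\tilde{a}}(t)=a_*-(a_*-{\tilde{a}}_0)e^{-\gamma t}\) of the first equation in (P0), the map (48) can be evaluated in closed form once \(f_1\) is fixed. The choices below (\(f_1=\mathrm{id}\) and the values of \(a_*\), \(\gamma \), \({\tilde{a}}_0\)) are illustrative assumptions for this sketch.

```python
import math

# illustrative data: a_* = 1, gamma = 0.7, a0_tilde = 0.2, f1(z) = z
a_star, gamma, a0_tilde = 1.0, 0.7, 0.2

def a_tilde(t):
    # solution of d a/dt = -gamma (a - a_*) with a(0) = a0_tilde
    return a_star - (a_star - a0_tilde) * math.exp(-gamma * t)

def tau0(eps):
    # (48) with f1 the identity: tau0(eps) = a_tilde^{-1}(a_* - eps),
    # defined for eps in (0, f1(a_*) - f1(a0_tilde)] = (0, 0.8]
    return math.log((a_star - a0_tilde) / eps) / gamma

# consistency check: f1(a_tilde(tau0(eps))) = f1(a_*) - eps
eps = 0.05
print(abs(a_tilde(tau0(eps)) - (a_star - eps)) < 1e-9)
# and tau0(eps) -> infinity as eps decreases to 0
print(tau0(1e-6) > tau0(1e-3) > tau0(0.05))
```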

From now on, we use the notation \(\Vert \cdot \Vert _{\infty }\) defined in (30).

Lemma 4.1

Assume that there exist constants \(A \in (\,0,1\,)\) and \(\kappa \in (\,0,a_*\,)\) such that \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\) satisfies (IC) and the following\(:\)

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} {\tilde{u}}_0(\rho ) \ge A; \end{aligned}$$
(49)
$$\begin{aligned} {\tilde{a}}_0 \le \kappa . \end{aligned}$$
(50)

Then there exists a strictly monotone increasing continuous function

$$\begin{aligned} \varGamma _0(\varepsilon ; A, \kappa ) : (\,0, f_{1}(a_{*})-f_{1}(0)\,] \rightarrow \mathbb {R}_+ \end{aligned}$$

with \(\varGamma _0(\varepsilon ; A, \kappa ) \downarrow 0\) as \(\varepsilon \downarrow 0\) such that the solution of (P0) satisfies

$$\begin{aligned} \left\| {\tilde{u}}(\cdot ,\tau _0(\varepsilon ))-1 \right\| _{L^{\infty }(B_1)} \le \varGamma _0(\varepsilon ; A, \kappa ) \quad \text {in} \quad (\,0, f_1(a_*)-f_1({\tilde{a}}_0)\, ]. \end{aligned}$$

Proof

Let us consider

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{d\overline{w}}{dt}= -(g({\tilde{a}}(t))-c_1)(1-\overline{w})\overline{w},\\ \overline{w}(0) =1- \displaystyle \min _{\rho \in [\,0,1\,]} {\tilde{u}}_0 (\rho ). \end{array}\right. } \end{aligned}$$
(51)

By way of Lemma 2.4, one can easily verify that \(\overline{w}\) is a supersolution of \(1-{\tilde{u}}\). Solving (51) and setting \(t=\tau _0(\varepsilon )\), we find

$$\begin{aligned} \omega (\tau _0(\varepsilon )) =1+\left( \omega (0)-1 \right) \exp {\Bigm [\int ^{\tau _0(\varepsilon )}_{0} \{g({\tilde{a}}(s))-c_1\} \,ds \Bigm ]}, \end{aligned}$$

where \(\omega =1/\overline{w}\). From the change of variable \({\tilde{a}}(s)=z\), we have

$$\begin{aligned}&\int ^{\tau _0(\varepsilon )}_{0} \!\!\{g({\tilde{a}}(s))-c_1\} ds =-\frac{1}{\gamma } \int ^{{\tilde{a}}(\tau _0(\varepsilon ))}_{{\tilde{a}}_0} \!\! \Bigm [\frac{g(z)-g(a_*)}{z-a_*} +\frac{g(a_*)-c_1}{z-a_*} \Bigm ] \!\! dz \\&\quad \ge -\frac{{\tilde{a}}(\tau _0(\varepsilon )) - {\tilde{a}}_0}{\gamma }\Vert g' \Vert _{\infty } +\frac{g(a_*)-c_1}{\gamma } \log {\frac{a_*-{\tilde{a}}_0}{a_*-{\tilde{a}}(\tau _0(\varepsilon ))}} \nonumber \\&\quad \ge -\frac{a_*}{\gamma } \Vert g' \Vert _{\infty } + \frac{g(a_*)-c_1}{\gamma } \log {\frac{a_*- {\tilde{a}}_0}{a_*-{\tilde{a}}(\tau _0(\varepsilon ))}},\nonumber \end{aligned}$$
(52)

where we used (A1) in the last inequality. Therefore, using (49) and (50), we define the required function \(\varGamma _0(\varepsilon ; A, \kappa )\) as follows:

$$\begin{aligned} \overline{w}(\tau _0(\varepsilon )) \le \Bigm [1+A e^{-\frac{a_*}{\gamma } \Vert g' \Vert _{\infty }}\Bigm (\frac{a_*-\kappa }{a_*-f^{-1}_1(f_1(a_*)-\varepsilon )}\Bigm )^{\frac{g(a_*)-c_1}{\gamma }}\Bigm ]^{-1} =: \varGamma _0(\varepsilon ; A, \kappa ). \end{aligned}$$

This completes the proof. \(\square \)

Lemma 4.2

Under the same assumption as in Lemma 4.1, there exists a constant \(\varepsilon _1 \in (\,0, f_1(a_*) \,)\), independent of \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\), such that the solution of (P0) satisfies

$$\begin{aligned} \frac{d{\widetilde{R}}}{dt}(\tau _0(\varepsilon ))>0 \quad \text {for any} \quad \varepsilon \in (\,0, \varepsilon _1\,] . \end{aligned}$$

Proof

Since \(d {\widetilde{R}}/dt\) is written as

$$\begin{aligned} \dfrac{d^{} {\widetilde{R}}}{d {t}^{}}(t)= {\widetilde{R}}(t) {\tilde{v}}(1,t) = {\widetilde{R}}(t) \int ^1_0 F({\tilde{u}}(\rho ,t),{\tilde{a}}(t)) \rho ^2 \, d\rho , \end{aligned}$$
(53)

we observe that the sign of \(d{\widetilde{R}}/dt\) is determined by that of the integral in (53). In particular, we focus on the sign of F. From \(\partial ^{2}_{z} F(z,\alpha )= 2(c_1 + c_2) > 0\), we find

$$\begin{aligned} F(z,\alpha )&> F(1,\alpha ) + \partial _{z} F(1,\alpha ) (z-1) \\&\ge F(1,\alpha ) + \partial _{z} F(1,a_*) (z-1) =: y(z;\alpha ) \quad \text {in} \quad [\,0,1\,) \times [\, 0,a_* \,], \nonumber \end{aligned}$$
(54)

where we used the monotonicity of \(\partial _{z} F(1,\alpha )=g(\alpha )-c_1-c_2\) in the second inequality. Here, noting the positivity of \(\partial _z F(1, a_*)\), we denote by \(z_0(\alpha )\) the zero point of \(y(z;\alpha )\) given by

$$\begin{aligned} z_0(\alpha )= \dfrac{-F(1,\alpha ) + \partial _{z}F(1,a_*)}{\partial _{z} F(1,a_*)}. \end{aligned}$$

Since (48) yields that

$$\begin{aligned} F(1, {\tilde{a}}(\tau _0(\varepsilon ))) = f_1(a_*) - \varepsilon > 0 \quad \text {for any} \quad \varepsilon \in (\,0, f_1(a_*)\,), \end{aligned}$$
(55)

we see that

$$\begin{aligned} z_0({\tilde{a}}(\tau _0(\varepsilon ))) < 1, \quad y(1, {\tilde{a}}(\tau _0(\varepsilon ))) > 0, \quad \text {for all} \quad \varepsilon \in (\,0, f_1(a_*)\,). \end{aligned}$$
(56)

Then, for each \(\varepsilon \in (\,0, f_1(a_*)\,)\), we observe from (56) that \(y(z, {\tilde{a}}(\tau _0(\varepsilon )))\ge 0\) for all \(z \in [\, z_0({\tilde{a}}(\tau _0(\varepsilon ))), 1\,]\). Combining the fact with (54)–(55), we infer that

$$\begin{aligned} F(z, {\tilde{a}}(\tau _0(\varepsilon )))>0 \,\,\, \text {for all} \,\,\, z \in [\, z_0({\tilde{a}}(\tau _0(\varepsilon ))), 1\,], \,\,\, \text {if} \,\,\, \varepsilon \in (\,0, f_1(a_*)\,). \end{aligned}$$
(57)

In order to complete the proof of Lemma 4.2, it is sufficient to prove the claim: there exists a constant \(\varepsilon _1 \in (\, 0, f_{1}(a_{*})\,)\), independent of \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\), such that the solution \(({\tilde{u}},{\tilde{v}},{\widetilde{R}},{\tilde{a}})\) of (P0) satisfies

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} {\tilde{u}}(\rho , \tau _0(\varepsilon )) \ge z_0({\tilde{a}}(\tau _0(\varepsilon ))) \quad \text {for any} \quad \varepsilon \in (\,0, \varepsilon _1\,]. \end{aligned}$$

Indeed, combining the claim with (57), we clearly obtain the conclusion. We shall show the claim by way of Lemma 4.1. Since \(z_0({\tilde{a}}(\tau _0(f_1(a_*))))=1\) and

$$\begin{aligned} z_0({\tilde{a}}(\tau _0(\varepsilon ))) \downarrow z_0(a_*) <1,\,\,\, 1 - \varGamma _0(\varepsilon ; A, \kappa ) \uparrow 1, \quad \text {as} \quad \varepsilon \downarrow 0, \end{aligned}$$

from the monotonicity of \(z_0({\tilde{a}}(\tau _0(\varepsilon )))\) and \(1 - \varGamma _0(\varepsilon ; A, \kappa )\), we find a unique constant \({\tilde{\varepsilon }_1} \in (\,0, f_1(a_*)\,)\), independent of \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\), such that

$$\begin{aligned} 1 - \varGamma _0(\varepsilon ; A, \kappa ) \ge z_0({\tilde{a}}(\tau _0(\varepsilon ))) \quad \text {for any} \quad \varepsilon \in (\,0, \tilde{\varepsilon }_1 \,]. \end{aligned}$$
(58)

Recalling that (50) implies \(f_1(\kappa ) \ge f_1({\tilde{a}}_0)\) and setting \(\varepsilon _1 := \min \{\tilde{\varepsilon }_1, f_1(a_*) - f_1(\kappa ) \}\), we observe from (58) and Lemma 4.1 that

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} {\tilde{u}}(\rho , {\tilde{a}}(\tau _0(\varepsilon ))) \ge 1 - \varGamma _0(\varepsilon ; A, \kappa ) \ge z_0({\tilde{a}}(\tau _0(\varepsilon ))) \quad \text {for any} \quad \varepsilon \in (\,0, \varepsilon _1 \,]. \end{aligned}$$

Then the claim holds true and we have completed the proof. \(\square \)

Lemma 4.3

Let \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)=(u_0,R_0,a_0)\). Then there exist monotone decreasing functions \(M^-\) and \(M^+\) defined on \((\,0,f_1(a_*)-f_1(0)\,]\) such that the solution of (P0) satisfies

$$\begin{aligned} R_0 \exp {M^{-} (\varepsilon )} \le {\widetilde{R}}(\tau _0(\varepsilon )) \le R_0\exp {M^{+} (\varepsilon )} \quad \text {in} \quad (\,0, f_1(a_*)-f_1(a_0)\,], \end{aligned}$$
(59)

where the second inequality is strict for any \(\varepsilon \in (\,0,f_1(a_*)-f_1(a_0))\). Moreover, \(M^-\) and \(M^+\) satisfy the following\(:\)

$$\begin{aligned} -\infty< M^-(\varepsilon ) \le M^+(\varepsilon ) <\infty \quad \text {in} \quad (\,0,f_1(a_*)-f_1(0)\,];\end{aligned}$$
(60)
$$\begin{aligned} \lim _{\varepsilon \downarrow 0} M^{-}(\varepsilon )=\infty . \end{aligned}$$
(61)

Proof

Since \({\widetilde{R}}(\tau _0(\varepsilon ))\) is given by

$$\begin{aligned} {\widetilde{R}}(\tau _0(\varepsilon ))=R_0 \exp {\Bigm [\int ^{\tau _0(\varepsilon )}_0 {\tilde{v}}(1,s) \, ds \Bigm ]} \quad \text {in} \quad (\, 0,f_1(a_*)-f_1(a_0)\, ], \end{aligned}$$
(62)

we will estimate the integral in (62). To this end, setting \({\tilde{w}}=1-{\tilde{u}}\), we decompose the integral as follows:

$$\begin{aligned}&\int ^{\tau _0(\varepsilon )}_0 {\tilde{v}}(1,s) \, ds = (c_1+c_2)\int ^{\tau _0(\varepsilon )}_0 \int ^1_0 {\tilde{w}}^2\rho ^2\,d\rho ds \\&\quad - \int ^{\tau _0(\varepsilon )}_0 \int ^1_0 \left[ g({\tilde{a}}(s))+c_1+c_2 \right] {\tilde{w}}\rho ^2\,d\rho ds + \frac{1}{3}\int ^{\tau _0(\varepsilon )}_0 f_1({\tilde{a}}(s)) \,ds =: I_1+I_2+I_3. \nonumber \end{aligned}$$
(63)

First we construct \(M^-\). Regarding \(I_1\), it follows from Jensen’s inequality that

$$\begin{aligned} I_1 \ge \frac{c_1+c_2}{3}\int ^{\tau _0(\varepsilon )}_0 \left( \int ^1_0 {\tilde{w}}\rho ^2 \,d\rho \right) ^2ds =: \frac{c_1+c_2}{27}\int ^{\tau _0(\varepsilon )}_0 \mathscr {W}(s)^2\, ds. \end{aligned}$$
(64)

Employing the differential inequality (25), we see that \(\mathscr {W}\) satisfies

$$\begin{aligned} \mathscr {W}(s)\ge \frac{1}{1+\left( \frac{1}{\mathscr {W}(0)}-1\right) \exp {\left[ \int ^{s}_0 \{g({\tilde{a}}(\tau ))+c_2\}\,d\tau \right] }}. \end{aligned}$$
(65)

Furthermore, the same argument as in (52) yields

$$\begin{aligned} \int ^{s}_0 \{g({\tilde{a}}(\tau ))+c_2\} \,d\tau \le \frac{g(a_*)+c_2}{\gamma } \log {\mathscr {T}_{a_0}({\tilde{a}}(s))}, \end{aligned}$$
(66)

where

$$\begin{aligned} \mathscr {T}_{z_1}(z_2):= \frac{a_*-z_1}{a_*-z_2}. \end{aligned}$$
(67)
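
Let us record how (66) arises; here we assume, consistently with the characterization (48) and the definition (67), that the androgen dynamics under (P0) obeys \(d{\tilde{a}}/dt = \gamma (a_*-{\tilde{a}})\) (the system (P0) is stated outside this section, so this is an assumption of this sketch). Then the change of variable \(z = {\tilde{a}}(\tau )\), together with the monotonicity of \(g\), gives

$$\begin{aligned} \int ^{s}_0 \{g({\tilde{a}}(\tau ))+c_2\} \,d\tau \le \frac{g(a_*)+c_2}{\gamma } \int ^{{\tilde{a}}(s)}_{a_0} \frac{dz}{a_*-z} = \frac{g(a_*)+c_2}{\gamma } \log {\frac{a_*-a_0}{a_*-{\tilde{a}}(s)}}, \end{aligned}$$

which is precisely (66) by the definition (67).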

Hence, combining (64) with (65)–(66), we have

$$\begin{aligned} I_1 \ge \frac{c_1+c_2}{27}\!\int ^{\tau _0(\varepsilon )}_0 \!\! \Bigm [1+\Bigm [\frac{1}{\mathscr {W}(0)}-1 \Bigm ] \mathscr {T}_{a_0}({\tilde{a}}(s))^{\frac{g(a_*)+c_2}{\gamma }}\Bigm ]^{-2}\!\!\!ds =: I_{11}. \end{aligned}$$

Changing the variable

$$\begin{aligned} \eta =1+\left( \frac{1}{\mathscr {W}(0)}-1\right) \mathscr {T}_{a_0}({\tilde{a}}(s))^{\frac{g(a_*)+c_2}{\gamma }} \end{aligned}$$

and setting

$$\begin{aligned} \eta _0:=\frac{1}{\mathscr {W}(0)}, \quad \eta _\varepsilon :=1+\left( \frac{1}{\mathscr {W}(0)}-1\right) \mathscr {T}_{a_0}({\tilde{a}}(\tau _0(\varepsilon )))^{\frac{g(a_*)+c_2}{\gamma }}, \end{aligned}$$

we can define \(M^-_1:(\, 0, f_1(a_*)- f_1(a_0) \,]\rightarrow \mathbb {R}\) as follows:

$$\begin{aligned} I_{11}=&\,C_1\int ^{\eta _\varepsilon }_{\eta _0} \frac{d \eta }{(\eta -1)\eta ^2} \ge C_1\left[ \log {\frac{\eta _0(\eta _\varepsilon -1)}{\eta _\varepsilon (\eta _0-1)}}-\frac{1}{\eta _0}\right] \\ =&-C_1\left[ \log {\left[ \mathscr {U}_0+(1-\mathscr {U}_0){\mathscr {T}_{a_0 }({\tilde{a}}(\tau _0(\varepsilon )))}^{-\frac{g(a_*)+c_2}{\gamma }}\right] }+1 -\mathscr {U}_0\right] \nonumber \\ \ge&-C_1\left[ \log {\left[ 1+(1-\mathscr {U}_0){\mathscr {T}_{\kappa _0 }({\tilde{a}}(\tau _0(\varepsilon )))}^{-\frac{g(a_*)+c_2}{\gamma }}\right] }+1-\mathscr {U}_0\right] =: M^-_1(\varepsilon ),\nonumber \end{aligned}$$

where \(C_1= (c_1+c_2)/ (27(g(a_*)+c_2))\), and

$$\begin{aligned} \mathscr {U}(s) := 3\int ^1_0{\tilde{u}}(\rho ,s)\, \rho ^2 \,d\rho , \quad \mathscr {U}_0 := \mathscr {U}(0), \quad \kappa _0:= \max \{a_0,f_1^{-1}(0)\}. \end{aligned}$$
(68)
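
The \(\eta \)-integral in the first line of the display above can be checked by partial fractions: since \(\frac{1}{(\eta -1)\eta ^2}=\frac{1}{\eta -1}-\frac{1}{\eta }-\frac{1}{\eta ^2}\), we have

$$\begin{aligned} \int ^{\eta _\varepsilon }_{\eta _0} \frac{d \eta }{(\eta -1)\eta ^2} = \left[ \log {\frac{\eta -1}{\eta }}+\frac{1}{\eta }\right] ^{\eta _\varepsilon }_{\eta _0} = \log {\frac{\eta _0(\eta _\varepsilon -1)}{\eta _\varepsilon (\eta _0-1)}} +\frac{1}{\eta _\varepsilon }-\frac{1}{\eta _0} \ge \log {\frac{\eta _0(\eta _\varepsilon -1)}{\eta _\varepsilon (\eta _0-1)}}-\frac{1}{\eta _0}, \end{aligned}$$

where we dropped the positive term \(1/\eta _\varepsilon \).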

Regarding \(I_2\), it follows from \({\tilde{w}}=1-{\tilde{u}}\) that

$$\begin{aligned} I_2 \ge -\frac{g(a_*)+c_1+c_2}{3} \int ^{\tau _0(\varepsilon )}_0 \{1-\mathscr {U}(s)\} \, ds. \end{aligned}$$

Using (24) and the same calculation as in (66), we have

$$\begin{aligned} \mathscr {U}(s) \ge \left[ 1+(\mathscr {U}_0^{-1}-1) \mathscr {T}_{a_0}({\tilde{a}}(s))^{-\frac{g(a_*)-c_1}{\gamma }}\right] ^{-1}. \end{aligned}$$

Then, by the same argument as in the derivation of \(M_1^-\), we obtain

$$\begin{aligned} I_2&\ge -\frac{g(a_*)+c_1+c_2}{3} \int ^{\tau _0(\varepsilon )}_0 \frac{\left( \mathscr {U}_0^{-1} -1\right) \mathscr {T}_{a_0}({\tilde{a}}(s))^{-\frac{g(a_*)-c_1}{\gamma }}}{1+(\mathscr {U}_0^{-1}-1) \mathscr {T}_{a_0}({\tilde{a}}(s))^{-\frac{g(a_*)-c_1}{\gamma }}} \, ds\\&=\frac{1}{3}\frac{g(a_*)+c_1+c_2}{g(a_*)-c_1} \log {\left[ \mathscr {U}_0+(1-\mathscr {U}_0) \mathscr {T}_{a_0}({\tilde{a}}(\tau _0(\varepsilon )))^{-\frac{g(a_*)-c_1}{\gamma }}\right] }\\&\ge \frac{1}{3}\frac{g(a_*)+c_1+c_2}{g(a_*)-c_1} \log {\mathscr {U}_0} =: M_2^-(\varepsilon ). \end{aligned}$$

It follows from the same argument as in (52) that

$$\begin{aligned} I_3&= \frac{f_1(a_*)}{3\gamma } \log {\mathscr {T}_{a_0}({\tilde{a}}(\tau _0(\varepsilon )))} -\frac{1}{3\gamma } \int ^{{\tilde{a}}(\tau _0(\varepsilon ))}_{a_0} f_1'(\tilde{z})\,dz\\&\ge \frac{f_1(a_*)}{3\gamma }\log {\mathscr {T}_{\kappa _0}({\tilde{a}}(\tau _0(\varepsilon )))} -\frac{a_*}{3\gamma }\Vert f_1' \Vert _{\infty } =: M_3^-(\varepsilon ), \nonumber \end{aligned}$$
(69)

where \(\tilde{z}\in (a_0,a_*)\). Setting \(M^-(\varepsilon )= \sum ^{3}_{i=1} M_i^-(\varepsilon )\) and recalling (48), we see that \(M^-\) is well-defined on \((\,0,f_1(a_*)-f_1(a_0)\,]\).

We shall derive \(M^+\). Since \({\tilde{w}}=1-{\tilde{u}}\le 1\), the same argument as in the derivation of \(M_2^-\) yields

$$\begin{aligned} I_1&\le (c_1+c_2) \int ^{\tau _0(\varepsilon )}_0\int ^1_0 {\tilde{w}}\rho ^2 \,d\rho ds = \frac{c_1+c_2}{3} \int ^{\tau _0(\varepsilon )}_0 \{1-\mathscr {U}(s)\} \, ds\\&\le -\frac{1}{3}\frac{c_1+c_2}{g(a_*)-c_1} \log {\left[ \mathscr {U}_0+(1-\mathscr {U}_0) \mathscr {T}_{a_0}({\tilde{a}}(\tau _0(\varepsilon )))^{-\frac{g(a_*)-c_1}{\gamma }}\right] } \\&\le -\frac{1}{3}\frac{c_1+c_2}{g(a_*)-c_1} \log {\mathscr {U}_0} =: M^+_1(\varepsilon ). \end{aligned}$$

Regarding \(I_2\), we have

$$\begin{aligned} I_2 \le -\frac{g(0)+c_1+c_2}{3} \int ^{\tau _0(\varepsilon )}_0 \mathscr {W}(s) \,ds \le 0 =: M^+_2(\varepsilon ). \end{aligned}$$

Eliminating the negative term from the first line in (69), we find

$$\begin{aligned} I_3&\le \frac{f_1(a_*)}{3\gamma } \log {\mathscr {T}_{a_0}({\tilde{a}}(\tau _0(\varepsilon )))} \le \frac{f_1(a_*)}{3\gamma } \log {\mathscr {T}_{0}({\tilde{a}}(\tau _0(\varepsilon )))} =: M_3^+(\varepsilon ), \end{aligned}$$

where the first inequality follows from the monotonicity of \(f_1\), and it is strict for any \(\varepsilon \in (\,0,f_1(a_*)-f_1(a_0))\). Setting \(M^+(\varepsilon ):= \sum ^{3}_{i=1} M_i^+(\varepsilon )\), we observe that \(M^+(\varepsilon )\) is well-defined on \((\,0,f_1(a_*)-f_1(a_0)\,]\).

From the definitions of \(M^-\) and \(M^+\), we see that (59) and (61) hold true. Moreover, thanks to \({\tilde{a}}(\tau _0(\varepsilon )) = f_1^{-1}(f_1(a_*)-\varepsilon )\), we infer that \(M^-\) and \(M^+\) can be extended to \((\,0,f_1(a_*)-f_1(0)\,]\) and that (60) holds. This completes the proof. \(\square \)

Lemma 4.4

Let \(M^\pm : (\,0,f_1(a_*)-f_1(0)\,] \rightarrow \mathbb {R}\) be the functions constructed by Lemma 4.3. Let \(({\tilde{u}}_0, {\widetilde{R}}_0, {\tilde{a}}_0)\) satisfy

$$\begin{aligned} \int ^1_0 {\tilde{u}}_0(\rho ) \rho ^2\, d\rho \ge \int ^1_0 u_0(\rho )\rho ^2 \,d\rho , \end{aligned}$$
(70)
$$\begin{aligned} {\tilde{a}}_0 \le \kappa _0, \end{aligned}$$
(71)

and (IC), where \(\kappa _0\) is defined by (68). Then the solution of (P0) satisfies

$$\begin{aligned} {\widetilde{R}}_0 \exp {M^{-}(\varepsilon )} \le {\widetilde{R}}(\tau _0(\varepsilon )) \le {\widetilde{R}}_0 \exp {M^{+}(\varepsilon )} \quad \text {in} \quad (\,0, f_1(a_*)-f_1({\tilde{a}}_0)\,], \end{aligned}$$
(72)

where the second inequality is strict for any \(\varepsilon \in (\,0, f_1(a_*)-f_1({\tilde{a}}_0)\,)\).

Proof

In the same manner as in the proof of Lemma 4.3, we see that (59) holds true with \((M^-,M^+,a_0)\) replaced by \((\tilde{M}^-,\tilde{M}^+,{\tilde{a}}_0)\), where \(\tilde{M}^-\) and \(\tilde{M}^+\) are determined by \(M^-\) and \(M^+\), respectively, with \((u_0,a_0)\) replaced by \(({\tilde{u}}_0,{\tilde{a}}_0)\). Since (70) and (71) imply that

$$\begin{aligned} \tilde{\mathscr {U}}_0 := 3 \int ^1_0 {\tilde{u}}_0(\rho ) \rho ^2\, d\rho \ge 3 \int ^1_0 u_0(\rho ) \rho ^2\, d\rho =\mathscr {U}_0 \end{aligned}$$

and

$$\begin{aligned} \mathscr {T}_{\kappa _0}(\alpha ) \le \mathscr {T}_{{\tilde{a}}_0}(\alpha ) \le \mathscr {T}_0(\alpha ) \quad \text {for any} \quad \alpha \in [\,0,a_*\,], \end{aligned}$$

we find

$$\begin{aligned} \tilde{M}^+(\varepsilon ) \le M^+(\varepsilon ), \quad \tilde{M}^-(\varepsilon ) \ge M^-(\varepsilon ), \quad \text {in} \quad (\,0,f_1(a_*)-f_1(0)\,]. \end{aligned}$$

Thus we obtain (72). \(\square \)

In order to investigate the behavior of solutions of (P) with \(S\equiv 1\), for each “initial data” \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\), we consider the following system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d^{} {\tilde{a}}}{d {t}^{}}(t)= -\gamma {\tilde{a}}(t)\, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _t {\tilde{u}}(\rho ,t) - \mathscr {L}'(\tilde{v}, \tilde{R}) {\tilde{u}}(\rho ,t) = P({\tilde{u}}(\rho ,t), {\tilde{a}}(t)) \, &{}\text {in} \quad I_\infty , \\ {\tilde{v}}(\rho ,t)= \dfrac{1}{\rho ^2}\displaystyle \int _0^{\rho } F({\tilde{u}}(r,t),{\tilde{a}}(t))r^2\,dr &{}\text {in}\quad I_\infty ,\\ \dfrac{d^{} {\widetilde{R}}}{d {t}^{}}(t) = {\tilde{v}}(1,t) {\widetilde{R}}(t)\, &{}\text {in} \quad \mathbb {R}_+, \\ \partial _\rho {\tilde{u}}(\rho ,t)\big |_{\rho \in \{0,1\}}= 0, \,\,\, \dfrac{{\tilde{v}}(\rho ,t)}{\rho }\Bigm |_{\rho =0}\!=\dfrac{1}{3} F({\tilde{u}}(0,t),{\tilde{a}}(t)), \, &{}\text {in} \quad \mathbb {R}_+, \\ {\tilde{a}}(0)= {\tilde{a}}_0, \,\, {\tilde{u}}(\rho , 0)= {\tilde{u}}_0(\rho ), \,\, {\widetilde{R}}(0)= {\widetilde{R}}_0, &{}\text {in} \quad I. \end{array}\right. }\quad {(\mathrm{P1})} \end{aligned}$$

We characterize the time variable in terms of the solution \({\tilde{a}}(\cdot )\) to (P1). In the same manner as in (48), recalling the monotonicity of \(f_1\), we define a function \(\tau _1 : (\,0, f_1({\tilde{a}}_0)-f_1(0)\,] \rightarrow [\,0,\infty \,)\) as

$$\begin{aligned} \tau _1(\delta )={\tilde{a}}^{-1}(f_1^{-1}(f_1(0)+\delta )). \end{aligned}$$
(73)

Since \({\tilde{a}}(t) \downarrow 0\) as \(t\rightarrow \infty \), \(\delta \downarrow 0\) is equivalent to \(\tau _1(\delta )\rightarrow \infty \).
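
Since the first equation of (P1) gives \({\tilde{a}}(t)={\tilde{a}}_0 e^{-\gamma t}\), the function \(\tau _1\) admits the explicit form

$$\begin{aligned} \tau _1(\delta )=\frac{1}{\gamma }\log {\frac{{\tilde{a}}_0}{f_1^{-1}(f_1(0)+\delta )}}, \end{aligned}$$

and \(f_1^{-1}(f_1(0)+\delta ) \downarrow 0\) as \(\delta \downarrow 0\) by the monotonicity of \(f_1\), which makes the equivalence above transparent.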

Lemma 4.5

Let \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\) satisfy (IC) and

$$\begin{aligned}&\min _{\rho \in [\,0,1\,]} {\tilde{u}}_0(\rho ) >1-\frac{g(0)+c_2}{g(0)+c_1+2c_2} =: 1-C_g. \end{aligned}$$
(74)

Then the solution \(({\tilde{u}},{\tilde{v}},{\widetilde{R}},{\tilde{a}})\) of (P1) satisfies

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} {\tilde{u}}(\rho ,\tau _1(\delta )) \ge \min _{\rho \in [\,0,1\,]} {\tilde{u}}_0(\rho ) \quad \text {in} \quad (\,0, f_1({\tilde{a}}_0)-f_1(0)\,]. \end{aligned}$$

Proof

Recalling that (A2) and (74) respectively correspond to (41) and (42), we can construct the supersolution \(\overline{w}\) of \({\tilde{w}}=1-{\tilde{u}}\) by the same argument as in the proof of Theorem 3.3. Using the change of variable \({\tilde{a}}(t)=z\), we have

$$\begin{aligned} \overline{w}(\tau _1(\delta ))\,\le \biggm [\frac{1}{C_g}+\Bigm [\frac{1}{\overline{w}(0)}-\frac{1}{C_g}\Bigm ] \Bigm [\frac{{\tilde{a}}_0}{f_1^{-1}(f_1(0)+\delta )}\Bigm ]^{\frac{g(0)+c_2}{\gamma }}\biggm ]^{-1} =: \varGamma _1(\delta ) \end{aligned}$$

for any \(\delta \in (\,0, f_1({\tilde{a}}_0)-f_1(0)\,]\). Then \(\underline{u} := 1-\varGamma _1\) is a subsolution of \({\tilde{u}}\). In particular, the monotonicity of \(\varGamma _1(\cdot )\) gives us the conclusion. \(\square \)

Next we construct an analogue of Lemma 4.2 for (P1). To this aim, we note that

$$\begin{aligned} F(z,\alpha ) = (c_1+c_2) \left( z- K^*(\alpha ) \right) ^2 -(c_1+c_2)K^*(\alpha ) +f_2(\alpha ), \end{aligned}$$

where

$$\begin{aligned} K^*(\alpha ):=\frac{-g(\alpha )+c_1+c_2}{2(c_1+c_2)}. \end{aligned}$$
(75)

Lemma 4.6

Let \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\) satisfy (IC), (74), and the following\(:\)

$$\begin{aligned} \min _{\rho \in [0, 1]}{\tilde{u}}_0(\rho ) \ge K^*(0); \end{aligned}$$
(76)
$$\begin{aligned} {\tilde{a}}_0 > f_1^{-1}(0). \end{aligned}$$
(77)

Then the solution \(({\tilde{u}},{\tilde{v}},{\widetilde{R}},{\tilde{a}})\) of (P1) satisfies

$$\begin{aligned} \frac{d{\widetilde{R}}}{dt}(\tau _1 (\delta ))<0 \quad \text {for any} \quad \delta \in (\,0, -f_1(0)\, ). \end{aligned}$$

Proof

In order to verify the sign of \(d{\widetilde{R}}/dt\), we argue as in Lemma 4.2, i.e., we focus on the sign of \(F({\tilde{u}},{\tilde{a}})\). First we note that (77) is equivalent to \(f_1({\tilde{a}}_0)>0\). Recalling the relation \((\,0,-f_1(0)\,) \subset (\,0,f_1({\tilde{a}}_0)-f_1(0)\,]\), we find

$$\begin{aligned} F(1, {\tilde{a}}(\tau _1(\delta )))=f_1({\tilde{a}}(\tau _1(\delta ))) = f_1(0)+\delta < 0 \quad \text {in} \quad (\,0, -f_1(0)\,). \end{aligned}$$
(78)

Since (A2) implies \(K^*(0)<1\), the monotonicity of \(K^*(\cdot )\) and (78) assert that

$$\begin{aligned} F(z,{\tilde{a}}(\tau _1(\delta ))) < 0 \quad \text {for any} \quad (z, \delta ) \in [\,K^*(0), 1\,] \times (\,0, -f_1(0)\,). \end{aligned}$$
(79)

By virtue of (74), we can apply Lemma 4.5 to the solution \({\tilde{u}}\) and then (76) implies that

$$\begin{aligned} \min _{\rho \in [\,0, 1\,]} {\tilde{u}}(\rho ,\tau _1(\delta ))&\ge \min _{\rho \in [\,0, 1\,]}{\tilde{u}}_0(\rho ) \ge K^* (0) \quad \text {for any} \quad \delta \in (\, 0 , f_1({\tilde{a}}_0) -f_1(0)\,]. \end{aligned}$$
(80)

Combining (79) with (80), we deduce that \(F({\tilde{u}}(\rho , \tau _1(\delta )), {\tilde{a}}(\tau _1(\delta ))) < 0\) for any \(\delta \in (\,0, -f_1(0)\,)\), whence \(d{\widetilde{R}}/dt(\tau _1(\delta )) = {\tilde{v}}(1,\tau _1(\delta ))\, {\widetilde{R}}(\tau _1(\delta )) < 0\). Therefore we have completed the proof. \(\square \)

Lemma 4.7

Let \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\) satisfy

$$\begin{aligned} \min _{\rho \in [0, 1]}{\tilde{u}}_0(\rho ) \ge 1-\frac{1}{2} C_g, \end{aligned}$$
(81)
$$\begin{aligned} {\tilde{a}}_0 \ge f_1^{-1}(0), \end{aligned}$$
(82)

and (IC). Then there exist monotone increasing functions \(L^-\) and \(L^+\) defined on the interval \((\,0,f_1(a_*)-f_1(0)\,]\), independent of \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\), such that the solution of (P1) satisfies

$$\begin{aligned} {\widetilde{R}}_0 \exp {L^{-} (\delta )} \le {\widetilde{R}}(\tau _1 (\delta ) ) \le {\widetilde{R}}_0\exp {L^{+} (\delta )} \quad \text {in} \quad (\,0,f_1({\tilde{a}}_0)-f_1(0)\,], \end{aligned}$$
(83)

in particular, the first inequality in (83) is strict in \((\,0, f_1({\tilde{a}}_0)-f_1(0)\,)\). Moreover, \(L^-\) and \(L^+\) satisfy the following\(:\)

$$\begin{aligned} -\infty< L^-(\delta ) \le L^+(\delta ) <\infty \quad \text {in} \quad (\,0,f_1(a_*)-f_1(0)\,]; \end{aligned}$$
(84)
$$\begin{aligned} \lim _{\delta \downarrow 0} L^{+}(\delta )=-\infty . \end{aligned}$$
(85)

Proof

Along the same lines as in the proof of Lemma 4.3, we will estimate the following:

$$\begin{aligned} I_1+I_2+I_3&:= (c_1+c_2) \int ^{\tau _1(\delta )}_{0} \int ^1_0 {\tilde{w}}^2 \rho ^2 \, d\rho ds \\&\qquad -\int ^{\tau _1(\delta )}_{0} \int ^1_0 (g({\tilde{a}}(s))+c_1+c_2){\tilde{w}}\rho ^2 \, d\rho ds + \frac{1}{3}\int ^{\tau _1(\delta )}_{0} f_1({\tilde{a}}(s)) \,ds, \end{aligned}$$

where \({\tilde{w}}=1-{\tilde{u}}\). First, since \(I_1 \ge 0\), we set \(L_1^-(\delta ) \equiv 0\). Using the supersolution \(\overline{w}\) of \({\tilde{w}}\) constructed in the proof of Theorem 3.3 and its estimate, we observe from (81) that

$$\begin{aligned} I_2&\ge -\frac{g(a_*)+c_1+c_2}{3} \int ^{\tau _1(\delta )}_{0} \overline{w}(s) \, ds \\&\ge - \frac{g(a_*)+c_1+c_2}{3}C_g \int ^{\tau _1(\delta )}_0 \exp {\left[ -\int ^s_0 \{g({\tilde{a}}(\tau '))+c_2\}\, d\tau ' \right] }\,ds. \nonumber \end{aligned}$$
(86)

Since the change of variable \({\tilde{a}}(\tau ')=s'\) yields

$$\begin{aligned} -\int ^{s}_{0} \{g({\tilde{a}}(\tau '))+c_2 \} \,d\tau ' \le \frac{g(0)+c_2}{\gamma } \log {\frac{{\tilde{a}}(s)}{{\tilde{a}}_0}}, \end{aligned}$$

the inequality (86) reduces to

$$\begin{aligned} I_2&\ge - \frac{g(a_*)+c_1+c_2}{3}C_g \int ^{\tau _1(\delta )}_0 \Bigm (\frac{{\tilde{a}}(s)}{{\tilde{a}}_0} \Bigm )^{\frac{g(0)+c_2}{\gamma }}\,ds\\&\ge -\frac{C_g}{3} \frac{g(a_*)+c_1+c_2}{g(0)+c_2} \left\{ 1- \Bigm (\frac{{\tilde{a}}(\tau _1(\delta ))}{a_*}\Bigm )^{\frac{g(0)+c_2}{\gamma }}\right\} =: L_2^-(\delta ). \end{aligned}$$

Moreover, we find

$$\begin{aligned} I_3&=\frac{-f_1(0)}{3\gamma }\log {\frac{{\tilde{a}}(\tau _1(\delta ))}{{\tilde{a}}_0}} + \frac{1}{3\gamma } \int ^{{\tilde{a}}_0}_{{\tilde{a}}(\tau _1(\delta ))}f_1'(\tilde{z})\, dz \\&\ge \frac{-f_1(0)}{3\gamma }\log {\frac{{\tilde{a}}(\tau _1(\delta ))}{a_*}} =: L_3^-(\delta ), \nonumber \end{aligned}$$
(87)

where \(\tilde{z} \in (0,{\tilde{a}}_0)\). The last inequality follows from the monotonicity of \(f_1\), and it is strict for any \(\delta \in (\,0, f_1({\tilde{a}}_0)-f_1(0)\,)\). Setting \(L^-(\delta ):= \sum ^{3}_{i=1} L^-_i(\delta )\) and recalling (73), we observe that \(L^-\) is well-defined on \((\,0,f_1({\tilde{a}}_0)-f_1(0)\,]\).
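
The first equality in (87) follows from the substitution \(z={\tilde{a}}(s)\), for which \(ds=-dz/(\gamma z)\) by the first equation of (P1), together with the mean value theorem applied to \(f_1\):

$$\begin{aligned} \frac{1}{3}\int ^{\tau _1(\delta )}_0 f_1({\tilde{a}}(s))\,ds = \frac{1}{3\gamma }\int ^{{\tilde{a}}_0}_{{\tilde{a}}(\tau _1(\delta ))} \frac{f_1(z)}{z}\,dz = \frac{1}{3\gamma }\int ^{{\tilde{a}}_0}_{{\tilde{a}}(\tau _1(\delta ))} \frac{f_1(0)+f_1'(\tilde{z})z}{z}\,dz, \end{aligned}$$

and the two terms of the last integral give the two terms in the first line of (87).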

Next, we derive \(L^+\). By a similar argument as in the derivation of \(L_2^-\), we obtain

$$\begin{aligned} I_1&\le \frac{c_1+c_2}{3} \int ^{\tau _1 (\delta )}_0 \overline{w}(s)^2 \,ds \le \frac{c_1+c_2}{3}C_g^2 \int ^{\tau _1(\delta )}_0 \Bigm (\frac{{\tilde{a}}(s)}{{\tilde{a}}_0} \Bigm )^{2\frac{g(0)+c_2}{\gamma }}\,ds\\&\le \frac{C_g}{6}\frac{c_1+c_2}{g(0)+c_1+2c_2} =: L_1^+(\delta ). \end{aligned}$$
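
The last estimate in the derivation of \(L_1^+\) can be verified explicitly: since \({\tilde{a}}(s)={\tilde{a}}_0 e^{-\gamma s}\), extending the integral to \((\,0,\infty \,)\) yields

$$\begin{aligned} \int ^{\tau _1(\delta )}_0 \Bigm (\frac{{\tilde{a}}(s)}{{\tilde{a}}_0} \Bigm )^{2\frac{g(0)+c_2}{\gamma }}\,ds \le \int ^{\infty }_0 e^{-2(g(0)+c_2)s}\,ds = \frac{1}{2(g(0)+c_2)}, \end{aligned}$$

and, by the definition of \(C_g\) in (74), \(\frac{c_1+c_2}{3}C_g^2 \cdot \frac{1}{2(g(0)+c_2)} = \frac{C_g}{6}\frac{c_1+c_2}{g(0)+c_1+2c_2}\).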

Since \(I_2 \le 0\), we set \(L_2^+(\delta ) \equiv 0\). From the first equality in (87), we have

$$\begin{aligned} I_3&\le \frac{-f_1(0)}{3\gamma }\log {\frac{ {\tilde{a}}(\tau _1(\delta ))}{ f_1^{-1}(0)}} +\frac{a_*}{3\gamma } \Vert f_1' \Vert _{\infty } =: L_3^+(\delta ). \end{aligned}$$

Setting \(L^+(\delta ):= \sum ^{3}_{i=1} L^+_i(\delta )\), we see that \(L^+\) is well-defined on \((\,0,f_1({\tilde{a}}_0)-f_1(0)\,]\).

From the definitions of \(L^-\) and \(L^+\), it is clear that (83), (84), and (85) hold true. We have completed the proof. \(\square \)

We are in the position to prove Theorem 1.1.

Proof of Theorem 1.1. To begin with, we prove the existence of a switching solution of (IAS). The key to the proof is the choice of the appropriate thresholds \(r_0\) and \(r_1\). We divide the proof of the existence into four steps. Finally we prove the boundedness of the switching solution and its regularity.

Step 1: Fix \(r_{0} \in (\,0, \infty \,)\) arbitrarily. Let \(r_{1}\) satisfy

$$\begin{aligned} r_{1} \ge r_0 \exp {\left[ -L^- (-f_1(0)) \right] }, \end{aligned}$$
(88)

where we remark that \(L^- (-f_1(0)) < 0\). We claim the following: if \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\) satisfies

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} {\tilde{u}}_0(\rho ) \ge \underline{\omega }:= \max \{K^*(0), 1 -\frac{1}{2} C_g\}, \quad {\widetilde{R}}_0=r_1, \quad {\tilde{a}}_0> f_1^{-1}(0), \end{aligned}$$
(89)

and (IC), then there exists \(\beta _1 \in (\,0,-f_{1}(0)\,)\) such that the solution of (P1) satisfies

$$\begin{aligned} {\widetilde{R}}(\tau _1(\beta _1))=r_0, \quad \frac{d{\widetilde{R}}}{dt}(\tau _1(\beta _1))<0. \end{aligned}$$
(90)

Let \(\delta _0:=f_1({\tilde{a}}_0)-f_1(0)\), i.e., \(\tilde{a}(\tau _1(\delta _0))={\tilde{a}}_0\). Remark that the third inequality in (89) yields \(\delta _0 > -f_1(0)\). Since (89) allows us to apply Lemma 4.7, there exists \(\beta '_1 \in (0, \delta _{0} \,)\) such that

$$\begin{aligned} {\widetilde{R}}(\tau _1(\beta '_1))=r_0 \quad \text {and} \quad {\widetilde{R}}(\tau _1(\delta ))>r_0 \quad \text {for any} \quad \delta \in (\,\beta '_{1}, \delta _0\,]. \end{aligned}$$

Moreover, since (89) allows us to apply Lemma 4.6, we have

$$\begin{aligned} \frac{d{\widetilde{R}}}{dt}(\tau _1(\delta ))<0 \quad \text {for any} \quad \delta \in (\,0, -f_1(0)\,) . \end{aligned}$$

Therefore it is sufficient to prove that \(\beta '_1 < -f_1(0)\). Then \(\beta '_1\) is nothing but the required constant \(\beta _1\). Combining the relation (83) with (88), we have

$$\begin{aligned} r_0 = {\widetilde{R}}(\tau _1(\beta '_1)) > r_1 \exp {L^-(\beta '_1)} \ge r_0 \exp {\left[ L^-(\beta '_1)-L^- (-f_1(0)) \right] }. \end{aligned}$$

Then the monotonicity of \(L^-\) yields \(\beta '_1 < -f_1(0)\).

Step 2: We shall show that there exists \(\varepsilon _1^* \in (\,0,f_1(a_*)-f_1(0)\,)\) such that for any \(r_0 \in (\,0,\infty \,)\) and \(r_1 \ge r_0 \exp [M^+(\varepsilon _1^*)]\), the following holds: if \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)\) satisfies

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} {\tilde{u}}_0(\rho ) \ge \mathscr {U}_{0} = 3 \int ^{1}_{0} u_{0}(\rho ) \rho ^{2}\,d \rho , \quad {\widetilde{R}}_0=r_0, \quad {\tilde{a}}_0< f_1^{-1}(0), \end{aligned}$$
(91)

and (IC), then there exists \(\beta _2 \in (\,0,\varepsilon _1^*\,)\) such that the solution of (P0) satisfies

$$\begin{aligned} {\widetilde{R}}(\tau _0(\beta _2))=r_1, \quad \frac{d{\widetilde{R}}}{dt}(\tau _0(\beta _2))>0. \end{aligned}$$
(92)

Let \(\varepsilon _0:=f_1(a_*)-f_1({\tilde{a}}_0)\), i.e., \(\tilde{a}(\tau _0(\varepsilon _0))={\tilde{a}}_0\). Remark that \(\varepsilon _0 > f_1(a_*)\) by the third inequality in (91). By Lemma 4.4, there exists a constant \(\beta _2' \in (\,0, \varepsilon _{0} \,)\) such that

$$\begin{aligned} {\widetilde{R}}(\tau _0(\beta _2'))=r_1 \quad \text {and} \quad {\widetilde{R}}(\tau _0(\varepsilon ))<r_1 \quad \text {for any} \quad \varepsilon \in (\,\beta _2', \varepsilon _0\,]. \end{aligned}$$
(93)

We define \(\varepsilon _1^*\) as \(\varepsilon _1\) in Lemma 4.2 with \(A = \mathscr {U}_{0}\) and \(\kappa =f_1^{-1}(0)\), i.e.,

$$\begin{aligned} 1-\varGamma _0(\varepsilon ^*_1; \mathscr {U}_{0}, f_1^{-1}(0) ) =z_0(f_1^{-1} (f_1(a_*)- \varepsilon ^*_1)). \end{aligned}$$
(94)

Then Lemma 4.2 asserts that \(\varepsilon _1^*\in (\,0,f_1(a_*)\,)\) and

$$\begin{aligned} \frac{d{\widetilde{R}}}{dt}(\tau _0(\varepsilon ))>0 \quad \text {for any} \quad \varepsilon \in (\,0,\varepsilon _1^*\,]. \end{aligned}$$

Thus it is sufficient to prove that \(\beta '_2 \in (\,0,\varepsilon _1^*\,)\). Then \(\beta _2'\) is nothing but the required constant \(\beta _2\). Letting \(r_1\) satisfy

$$\begin{aligned} r_0 \exp {M^+(\varepsilon ^*_1)} \le r_1, \end{aligned}$$
(95)

we show that \(\beta '_2 \in (\,0,\varepsilon _1^*\,)\). Indeed, since the relation (72) in Lemma 4.4 holds true, we observe from (95) that

$$\begin{aligned} r_0 \exp {M^+(\varepsilon _1^*)} \le r_1={\widetilde{R}}(\tau _0(\beta '_2)) < r_0 \exp {M^+(\beta '_2)}. \end{aligned}$$

Then the monotonicity of \(M^+\) clearly yields \(\varepsilon _1^* > \beta '_2\).

Step 3: We shall prove that there exists \(\varepsilon _0^* \in (\,0,f_1(a_*)-f_1(0)\,)\) such that for any \(r_0 \in (\,0,\infty \,)\) and \(r_1 \ge R_0 \exp {M^+(\varepsilon _0^*)}\), the following holds: if \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0)= (u_0,R_0,a_0)\), then there exists \(\beta _0 \in (\,0,\varepsilon _0^*\,)\) such that the solution of (P0) satisfies the following:

$$\begin{aligned} {\widetilde{R}}(\tau _0(\beta _0))=r_1, \quad \frac{d{\widetilde{R}}}{dt}(\tau _0(\beta _0))>0; \end{aligned}$$
(96)
$$\begin{aligned} \min _{\rho \in [\,0,1\,]} {\tilde{u}}(\rho ,\tau _0(\beta _0)) \ge \max \{ \underline{\omega }, \mathscr {U}_0 \}, \quad {\tilde{a}}(\tau _0(\beta _0))> f_1^{-1}(0). \end{aligned}$$
(97)

Setting \(\tilde{\varepsilon }_1\) as \(\varepsilon _1\) in Lemma 4.2 with \(A=\min _{\rho \in [\,0,1\,]}u_0(\rho )\) and \(\kappa =a_0\), we have

$$\begin{aligned} \frac{d{\widetilde{R}}}{dt}(\tau _0(\varepsilon ))>0 \quad \text {for any} \quad \varepsilon \in (\,0,\tilde{\varepsilon }_1\,] \quad \text {with} \quad \tilde{\varepsilon }_{1} \in (\,0, f_{1}(a_{*})\,). \end{aligned}$$

By way of the function \(\varGamma _0^*\) defined by

$$\begin{aligned} \varGamma _0^*(\varepsilon ):= \varGamma _0 (\varepsilon ; \min \{\min _{\rho \in [\,0,1\,]}u_0(\rho ), \underline{\omega }\}, \max \{a_0, f_1^{-1}(0)\}), \end{aligned}$$

we define \(\tilde{\varepsilon }_2\) as follows:

$$\begin{aligned} 1-\varGamma _0^*(\tilde{\varepsilon }_2) = \max \{\underline{\omega }, \mathscr {U}_0, 1-\varGamma _0^*(f_1(a_*)-f_1(0))\}. \end{aligned}$$
(98)

From now on, we set \(\varepsilon _0^* := \min \{\tilde{\varepsilon }_1, \tilde{\varepsilon }_2\}\) and let \(r_1\) satisfy \(r_1 \ge R_0 \exp {M^+(\varepsilon _0^*)}\). Let \(\varepsilon '_0:=f_1(a_*)-f_1(a_0)\), i.e., \(\tilde{a}(\tau _0(\varepsilon '_0))=a_0\). With the aid of Lemma 4.3, we find a constant \(\beta _0' \in (\,0, \varepsilon '_{0} \,)\) such that (93) holds with \(\beta _2'\) replaced by \(\beta _0'\). Note that the latter relation in (97) is equivalent to \(\beta '_0 < f_1(a_*)\); since \(\varepsilon ^*_0 \le \tilde{\varepsilon }_1 < f_1(a_*)\), it suffices to show that \(\beta '_0 < \varepsilon ^*_0\). Indeed, the same argument as in Step 2 implies that

$$\begin{aligned} R_0 \exp M^+(\varepsilon ^*_0) \le r_1 = \tilde{R}(\tau _0(\beta '_0)) < R_0 \exp M^+(\beta '_0), \end{aligned}$$

where the last inequality follows from Lemma 4.3. Then the monotonicity of \(M^+\) yields \(\beta '_0 < \varepsilon ^*_0\), and hence the latter relation in (97) holds. Finally we prove the former relation in (97). Thanks to the monotonicity of \(\varGamma _0\), we observe from Lemma 4.1 that, for any \(\varepsilon \in [\,\beta '_0, \tilde{\varepsilon }_2\,]\),

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} {\tilde{u}}(\rho ,\tau _0(\varepsilon )) \ge 1- \varGamma _0(\varepsilon ; \min _{\rho \in [\,0,1\,]}u_0(\rho ), a_0) \ge 1- \varGamma _0^*(\varepsilon ) \ge \max \{ \underline{\omega }, \mathscr {U}_0 \}. \end{aligned}$$

Therefore \(\beta '_0\) is nothing but the required constant \(\beta _0\).

Step 4: We shall prove that, for a suitable pair of thresholds \((r_0, r_1)\), the system (P) has a unique solution with the property (i) in Theorem 1.1. Fix \(r_{0} \in (\, 0, \infty \,)\) and let \(r_1\) satisfy

$$\begin{aligned} r_1 \ge \max \left\{ R_0 \exp {M^+(\varepsilon _0^*)}, r_0 \exp {M^+(\varepsilon ^*_1)}, r_0 \exp {\left[ -L^-(-f_1(0))\right] }\right\} . \end{aligned}$$
(99)

We note that (99) yields \(\max \{ r_0, R_0 \} < r_1\), for \(M^+\) is positive in \((\, 0, f_1(a_*) - f_1(0)\,]\).

With the aid of Step 3, there exist \(\beta _0 \in (\, 0, \varepsilon ^{*}_0 \,)\) and a unique solution \(({\tilde{u}},{\tilde{v}},{\widetilde{R}},{\tilde{a}})\) of (P0) with \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0) = (u_0, R_0, a_0)\) such that (96) and (97) hold. Since \(\beta _0\) is uniquely determined, setting \((u,v,R,a)=({\tilde{u}},{\tilde{v}},{\widetilde{R}},{\tilde{a}})\) in \(\bar{I} \times [\,0, t_1 \,]\), we observe from (96) and the proof of Theorem 3.1 that \((u,v,R,a,S)\) is a unique solution of (P) in \(\bar{I} \times [\,0, \tau _0(\beta _0)\,)\) such that \(S(t) = 0\) in \([\,0, t_1\,)\) and \(S(t)\) switches from 0 to 1 at \(t_1\), where \(t_1 := \tau _0(\beta _0)\).

Since (96)–(97) asserts that (89) holds for \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0) = (u, R, a) |_{t=t_1}\), it follows from Step 1 that there exist \(\beta _1 \in (\,0, -f_1(0)\,)\) and a unique solution \(({\tilde{u}}_1,{\tilde{v}}_1,{\widetilde{R}}_1,{\tilde{a}}_1)\) of (P1), with \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0) = (u, R, a) |_{t=t_1}\), satisfying (90). Since \(\beta _1\) is uniquely determined, setting \((u,v,R,a)=({\tilde{u}}_1,{\tilde{v}}_1,{\widetilde{R}}_1,{\tilde{a}}_1)\) in \(\bar{I} \times [\,t_1, t_2\,]\) and \(S(t) = 1\) in \([\,t_1, t_2\,)\), we deduce from (90) and the proof of Theorem 3.1 that \((u,v,R,a,S)\) is a unique solution of (P) in \(\bar{I} \times [\,0, t_2\,)\) satisfying the following: \(S(t)=1\) in \([\,t_1, t_2\,)\); \(S(t)\) switches from 1 to 0 at \(t_2\), where \(t_2\) is the time determined by \(\tau _1(\beta _1)\).

Here we claim that (91) holds for \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0) = (u, R, a) |_{t=t_2}\). Since (96)–(97) implies that \(\min _{\rho \in [\,0, 1\,]} u(\rho , t_1) \ge \max \{ \underline{\omega }, \mathscr {U}_0\}\), we infer from Lemma 4.5 that

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} u(\rho , t_2) \ge \max \{ \underline{\omega }, \mathscr {U}_0 \}. \end{aligned}$$

Thus the claim holds true. Then it follows from Step 2 that there exist \(\beta _2 \in (\,0, \varepsilon ^*_1\,)\) and a unique solution \(({\tilde{u}}_2,{\tilde{v}}_2,{\widetilde{R}}_2,{\tilde{a}}_2)\) of (P0), with \(({\tilde{u}}_0,{\widetilde{R}}_0,{\tilde{a}}_0) = (u, R, a) |_{t=t_2}\), satisfying (92). Thanks to the uniqueness of \(\beta _2\), setting

$$\begin{aligned} (u,v,R,a)=({\tilde{u}}_2,{\tilde{v}}_2,{\widetilde{R}}_2,{\tilde{a}}_2) \quad \text {in} \quad \bar{I} \times [\,t_2, t_3\,], \end{aligned}$$

where \(t_3\) is the time determined by \(\tau _0(\beta _2)\), we deduce from the same argument as above that \((u,v,R,a,S)\) is a unique solution of (P) in \(\bar{I} \times [\,0, t_3\,)\) satisfying the following: \(S(t)=0\) in \([\,t_2, t_3\,)\); \(S(t)\) switches from 0 to 1 at \(t_3\).

In order to apply Step 1 again, we verify that \(u(\cdot , t_3)\) satisfies the first property in (89). Combining Lemma 4.4 with (99), we see that

$$\begin{aligned} R_0 \exp {M^+(\varepsilon _0^*)} \le r_1 = {\widetilde{R}}_2(\tau _0(\beta _2)) < r_0 \exp {M^+(\beta _2)} \le R_0 \exp {M^+(\beta _2)}, \end{aligned}$$
(100)

and then the monotonicity of \(M^+\) yields \(\varepsilon _0^* > \beta _2\). Recalling the monotonicity of \(\varGamma _0\) and using Lemma 4.1, we have for any \(\varepsilon \in [\,\beta _2, \varepsilon _0^*\,]\)

$$\begin{aligned} \min _{\rho \in [\,0,1\,]} u(\rho ,\tau _0(\varepsilon ))&\ge 1- \varGamma _0(\varepsilon ; \max \{ \underline{\omega }, \mathscr {U}_0\}, f_1^{-1}(0)) \\&\ge 1- \varGamma _0^*(\varepsilon ) \ge 1- \varGamma _0^*(\varepsilon _0^*) \ge \max \{ \underline{\omega }, \mathscr {U}_0 \}. \end{aligned}$$

Thus Step 1 is applicable again. Therefore we can construct inductively a solution of (P) with the property (i) in Theorem 1.1.

Step 5: We prove the property (ii) in Theorem 1.1. Using the sequence \(\{ t_j \}^{\infty }_{j=0}\) obtained in Step 4, we inductively define sequences \(\{\varepsilon _0^{2j}\}_{j=0}^\infty \), \(\{\delta _0^{2j+1}\}_{j=0}^\infty \), and \(\{\beta _j\}_{j=0}^\infty \). Let \(\varepsilon _0^0 := f_1(a_*) - f_1(a_0)\), i.e., \(\tau _0(\varepsilon _0^0)=t_0=0\). Set

$$\begin{aligned} \beta _0 := f_1(a_*) - f_1(a(t_1)). \end{aligned}$$
(101)

By the definition of \(\tau _0\), the relation (101) is equivalent to \(a(\tau _0(\beta _0)) = a(t_1)\). We set

$$\begin{aligned} \delta _0^1 := f_1(a_*) - f_1(0) -\beta _0. \end{aligned}$$

The definitions of \(\tau _0\) and \(\tau _1\) yield \(a(\tau _1(\delta _0^1))=a(\tau _0(\beta _0))\). Since \(a(\cdot )\) is monotone in \([\,0,t_1\,]\), it holds that \(\tau _0(\beta _0)=t_1=\tau _1(\delta _0^1)\). Next we set

$$\begin{aligned} \beta _1&:=f_1(a(t_2))-f_1(0); \end{aligned}$$
(102)
$$\begin{aligned} \varepsilon _0^2&:= f_1(a_*)-f_1(0) -\beta _1. \end{aligned}$$
(103)

Then, from (102) and (103), we find \(a(\tau _1(\beta _1))=a(t_2)\) and \(a(\tau _0(\varepsilon _0^2))=a(\tau _1(\beta _1))\). The monotonicity of \(a(\cdot )\) in \([\,t_1, t_2\,]\) gives us the relation \(\tau _1(\beta _1)= \tau _0(\varepsilon _0^2)\). In the same manner as above, we inductively define \(\varepsilon _0^{2j}\), \(\delta _0^{2j+1}\), and \(\beta _j\) for each \(j \ge 2\) as follows:

$$\begin{aligned}\begin{gathered} \beta _j := {\left\{ \begin{array}{ll} f_1(a_*)-f_1(a(t_{j+1})) \quad &{}\,\,\text {if} \,\,j\,\, \text {is even},\\ f_1(a(t_{j+1}))-f_1(0) \quad &{}\,\,\text {if} \,\,j\,\, \text {is odd}, \end{array}\right. }\\ \qquad \delta _0^{2j-1} := f_1(a_*) - f_1(0) -\beta _{2j-2}, \quad \varepsilon _0^{2j} := f_1(a_*)- f_1(0) -\beta _{2j-1}. \end{gathered}\end{aligned}$$

We note that the monotonicity of \(a(\cdot )\) in \([\,t_j, t_{j+1}\,]\) implies \(\tau _0(\beta _{2j})=\tau _1(\delta _0^{2j+1})\) and \(\tau _1(\beta _{2j+1})=\tau _0(\varepsilon _0^{2j+2})\) for each \(j \in \mathbb {N}\cup \{0\}\). Then, it follows from the definitions of the sequences that, for any \(j \in \mathbb {N}\cup \{0\}\),
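These relations can be verified as in the case \(j=0\); assuming, as the computations above indicate, that \(\tau _0\) and \(\tau _1\) are characterized by the level relations \(f_1(a_*)-f_1(a(\tau _0(\varepsilon )))=\varepsilon \) and \(f_1(a(\tau _1(\delta )))-f_1(0)=\delta \), we have

$$\begin{aligned} \delta _0^{2j+1} = f_1(a_*)-f_1(0)-\beta _{2j} = f_1(a(t_{2j+1}))-f_1(0), \end{aligned}$$

so that \(a(\tau _0(\beta _{2j}))=a(t_{2j+1})=a(\tau _1(\delta _0^{2j+1}))\), and the monotonicity of \(a(\cdot )\) in \([\,t_{2j}, t_{2j+1}\,]\) yields \(\tau _0(\beta _{2j})=t_{2j+1}=\tau _1(\delta _0^{2j+1})\); the odd case is analogous.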

$$\begin{aligned} R(\tau _0(\beta _{2j}))=r_1,\,\,\,\, R(\tau _0(\varepsilon ))<r_1 \,\,\,\, \text {and} \,\,\,\, S(\tau _0(\varepsilon )) \equiv 0, \,\,\,\,&\text {on} \,\,\,\, ( \, \beta _{2j}, \varepsilon _0^{2j}\,]; \end{aligned}$$
(104)
$$\begin{aligned} R(\tau _1(\beta _{2j+1}))=r_0, \,\,\,\, R(\tau _1(\delta )) > r_0 \,\,\,\, \text {and} \,\,\,\, S(\tau _1(\delta )) \equiv 1, \,\,\,\,&\text {on} \,\,\,\, (\, \beta _{2j+1}, \delta _0^{2j+1}\,]. \end{aligned}$$
(105)

We give the lower and upper bounds of R when \(S \equiv 0\), i.e., for the case of (104). We note that, for the case of \(j=0\), it clearly follows from Lemma 4.3 that

$$\begin{aligned} R_0 \exp {M^{-}(f_1(a_*) - f_1(a_0))} \le R(\tau _0(\varepsilon )) < r_1 \quad \text {on} \quad (\, \beta _0, \varepsilon ^0_0 \, ], \end{aligned}$$
(106)

where the first inequality follows from the monotonicity of \(M^{-}\). For any \(j \in \mathbb {N}\), we observe from Lemma 4.3 that

$$\begin{aligned} r_0 \exp {M^-(\varepsilon _0^{2j})} \le r_0 \exp {M^-(\varepsilon )} \le R(\tau _0(\varepsilon )) < r_1 \quad \text {on} \quad (\,\beta _{2j}, \varepsilon _0^{2j}\,]. \end{aligned}$$
(107)

Here, by (105) and Lemma 4.7, we find \(\log {(r_0/r_1)} \le L^+(\beta _{2j-1})\). Since \(L^+(\delta )\) is monotone and diverges to \(-\infty \) as \(\delta \downarrow 0\), there exists \(\hat{\delta } \in (\,0, \beta _{2j-1}\,]\), independent of j, such that \(L^+(\hat{\delta })= \log (r_0/r_1)\). Thus, setting \(\hat{\varepsilon }:=f_1(a_*)-f_1(0)-\hat{\delta }\), we obtain

$$\begin{aligned} f_1(a_*)-\hat{\varepsilon }=f_1(0)+\hat{\delta }\le f_1(0)+\beta _{2j-1}=f_1(a_*)-\varepsilon _0^{2j}, \quad \text {i.e.,} \quad \hat{\varepsilon } \ge \varepsilon _0^{2j}. \end{aligned}$$
(108)

Since \(j \in \mathbb {N}\) is arbitrary, we observe from (107) and (108) that

$$\begin{aligned} r_0 \exp {M^-(\hat{\varepsilon })}\le R(\tau _0(\varepsilon )) < r_1 \quad \text {on} \quad (\,\beta _{2j}, \varepsilon _0^{2j}\,] \quad \text {for any} \quad j \in \mathbb {N}. \end{aligned}$$
(109)

In particular, we see that

$$\begin{aligned} r_0 = R(\tau _0(\varepsilon _0^{2j})) \ge r_0 \exp {M^-(\varepsilon _0^{2j})} \ge r_0 \exp {M^-(\hat{\varepsilon })}. \end{aligned}$$
(110)

Next, we derive the lower and upper bounds of R when \(S \equiv 1\), i.e., for the case of (105). For any \(j \in \mathbb {N}\cup \{0\}\), we observe from (105) and Lemma 4.7 that

$$\begin{aligned} r_0 < R(\tau _1(\delta )) \le r_1 \exp {L^+(\delta )} \le r_1 \exp {L^+(\delta _0^{2j+1})} \,\,\, \text {on} \,\,\, (\,\beta _{2j+1}, \delta _0^{2j+1}\,], \end{aligned}$$
(111)

where the last inequality follows from the monotonicity of \(L^{+}\). Here, it follows from (104) and Lemma 4.3 that \(M^-(\beta _{2j}) \le \log (r_1/ \min \{R_0, r_0\})\). Since \(M^-(\varepsilon )\) is monotone and diverges to \(\infty \) as \(\varepsilon \downarrow 0\), there exists \(\bar{\varepsilon } \in (\,0, \beta _{2j}\,]\), independent of j, such that \(M^-(\bar{\varepsilon }) = \log (r_1/ \min \{ R_0, r_0\})\). Setting \(\bar{\delta }:=f_1(a_*)-f_1(0)-\bar{\varepsilon }\), we deduce, by an argument similar to that for (108), that \(\bar{\delta } \ge \delta _0^{2j+1}\). Combining this with (111), we have

$$\begin{aligned} r_0 < R(\tau _1(\delta )) \le r_1 \exp {L^+(\bar{\delta })} \quad \text {on} \quad (\,\beta _{2j+1}, \delta _0^{2j+1}\,] \quad \text {for any} \quad j \in \mathbb {N}\cup \{0\}. \end{aligned}$$
(112)
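The relation \(\bar{\delta } \ge \delta _0^{2j+1}\) used above can be spelled out in the same way as (108): since \(\bar{\varepsilon } \le \beta _{2j}\),

$$\begin{aligned} \bar{\delta } = f_1(a_*)-f_1(0)-\bar{\varepsilon } \ge f_1(a_*)-f_1(0)-\beta _{2j} = \delta _0^{2j+1}. \end{aligned}$$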

In particular, we see that

$$\begin{aligned} r_1 = R(\tau _1(\delta _0^{2j+1})) \le r_1 \exp {L^+(\delta _0^{2j+1})}\le r_1 \exp {L^+(\bar{\delta })}. \end{aligned}$$
(113)

Consequently, by virtue of (106), (109)–(110), and (112)–(113), we conclude that property (ii) in Theorem 1.1 holds for

$$\begin{aligned} C_1=\min \{ R_0 \exp {M^{-}(f_1(a_*) - f_1(a_0))}, r_0 \exp {M^-(\hat{\varepsilon })} \}, \quad C_2= r_1 \exp {L^+(\bar{\delta })}. \end{aligned}$$

Step 6: Finally, we prove the regularity of the switching solution constructed above. The equation for a implies

$$\begin{aligned} \Bigm |\frac{da}{dt}(t)\Bigm | = |\gamma (a_*-a(t))-\gamma a_* S(t)| \le 2 \gamma a_* \quad \text {in} \quad [\,0,\infty \,) \setminus \{t_j\}_{j=0}^\infty . \end{aligned}$$

Fix \(j \in \mathbb {N}\) arbitrarily. Then, for any t and s with \(t_{j-1} \le t<t_j< s < t_{j+1}\), we have

$$\begin{aligned} |a(t)-a(s)|&\le |a(t)-a(t_j)| + |a(t_j)-a(s)|\\&= \Bigm |\frac{da}{dt}(\tau _1)\Bigm | |t-t_j| +\Bigm |\frac{da}{dt}(\tau _2)\Bigm | |t_j-s| \le 4 \gamma a_* |t-s|, \nonumber \end{aligned}$$
(114)

where \(\tau _1 \in (\,t,t_j\,)\) and \(\tau _2 \in (\,t_j, s\,)\) are given by the mean value theorem. Since j is arbitrary, we see that \(a \in C^{0, 1}(\mathbb {R}_+)\).

We consider the following initial-boundary value problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t \tilde{u}(\rho ,t) -\mathscr {L}'(\tilde{v}, \tilde{R}) \tilde{u} = P( \tilde{u}(\rho ,t), a(t)) \, &{}\text {in} \,\,\, I_\infty , \\ \tilde{v}(\rho ,t)= \dfrac{1}{\rho ^2} \displaystyle {\int ^\rho _0} F( \tilde{u}(r,t),a(t)) r^2 \, dr\, &{}\text {in} \,\,\, I_\infty ,\\ \dfrac{d^{} \tilde{R}}{d {t}^{}}(t) = \tilde{v}(1,t) \tilde{R}(t)\, &{}\text {in} \,\,\, \mathbb {R}_+, \\ \partial _\rho \tilde{u}(0,t)= \partial _\rho \tilde{u}(1,t)=0, \quad \dfrac{ \tilde{v}}{\rho } \Bigm |_{\rho =0} = \dfrac{1}{3} F(\tilde{u}(0,t), a(t)) &{}\text {in} \,\,\, \mathbb {R}_+, \\ \tilde{u}(\rho , 0)= u_0(\rho ), \,\, \tilde{R}(0)= R_0, &{}\text {in} \,\,\, I. \end{array}\right. }\quad {(\mathscr {P})} \end{aligned}$$

Since \(a \in C^{0, 1}(\mathbb {R}_+)\), the proofs of Lemma 2.3 and Theorem 3.1 indicate that (\(\mathscr {P}\)) has a unique solution \((\tilde{u}, \tilde{v}, \tilde{R})\) in the class

$$\begin{aligned} C^{2+\alpha , 1+\frac{\alpha }{2}}(Q_\infty ) \times (C^{1+\alpha , \frac{\alpha }{2}}([\,0,1\,)\times \mathbb {R}_+) \cap C^1([\,0,1\,)\times \mathbb {R}_+))\times C^1(\mathbb {R}_+). \end{aligned}$$

Recalling that (u, v, R), which is obtained in Step 4, also satisfies (\(\mathscr {P}\)), we observe from the uniqueness that \((\tilde{u}, \tilde{v}, \tilde{R})=(u,v,R)\) in \(Q_\infty \). This completes the proof. \(\square \)