1 Introduction

Atherosclerosis is a chronic inflammatory disease in which a plaque builds up in the innermost layer of the artery. As the plaque grows, it progressively hardens and narrows the artery, thereby increasing the shear force of the blood flow. The increased shear force may cause rupture of the plaque, which leads to thrombus formation in the lumen and may then block downstream arteries (Friedman and Hao 2015; Hao and Friedman 2014). Plaque rupture in a cerebral artery results in a stroke, while a coronary thrombus causes myocardial infarction, i.e., a heart attack. Every year about 900,000 people in the USA and 13 million people worldwide die of heart attack or stroke (Hao and Friedman 2014; Friedman and Hao 2015).

Mathematical models describing the growth of a plaque in the arteries have been introduced (e.g., Calvez et al. 2009; Cohen et al. 2014; Friedman and Hao 2015; Friedman et al. 2015; Hao and Friedman 2014; McKay et al. 2005; Mukherjee et al. 2019). All of these models include the interaction of the “bad” cholesterols, low density lipoprotein (LDL), and the “good” cholesterols, high density lipoprotein (HDL), in determining whether a plaque will grow or shrink.

A series of events happens when lesions develop in the inner surface of the arterial wall (Friedman and Hao 2015) (see also Friedman 2018, Chapters 7 and 8): “LDL and HDL move from the blood into the arterial intima through those endothelial lesions and get oxidized by free radicals which are continuously released by biochemical reactions within the body. The immune system considers oxidized LDL (ox-LDL) as a dangerous substance, hence a chain of immune response is triggered. Sensing the presence of ox-LDL, endothelial cells begin to secrete monocyte chemoattractant protein (MCP-1), which attracts monocytes circulating in the blood to penetrate into the intima. Once in the intima, these monocytes are converted into macrophages. The macrophages endocytose the ox-LDL and are eventually turned into foam cells. These foam cells have to be removed by the immune system, and at the same time they trigger a chronic inflammatory reaction: they secrete pro-inflammatory cytokines (e.g., TNF-\(\alpha \), IL-1) which increase endothelial cells activation to recruit more new monocytes. Smooth muscle cells (SMCs) are attracted from the media into intima by chemotactic forces due to growth factors secreted by macrophages and T-cells. ECM is remodeled by matrix metalloproteinase (MMP) which is released by a variety of cell types including SMCs, and is inhibited by tissue inhibitor of metalloproteinase (TIMP) produced by macrophages and SMCs. Interleukin IL-12, secreted by macrophages and foam cells, activates T-cells to promote the growth of a plaque. The activated T-cells secrete interferon IFN-\(\gamma \), which in turn enhances activation of macrophages in the intima. The effect of oxidized LDL on plaque growth can be reduced by the good cholesterols, HDL: HDL can remove harmful bad cholesterol out from the foam cells and convert foam cells into anti-inflammatory macrophages; moreover, HDL also competes with LDL on free radicals, decreasing the amount of radicals that are available to oxidize LDL.”

In Calvez et al. (2009), oxidized LDL, macrophages and foam cells were modeled on a rectangular two-space dimensional domain, and the growth speed of the lesion was derived from mass conservation together with an incompressibility assumption. An ODE system was formulated in Cohen et al. (2014); when only LDL and macrophages are in the system, a nice phase-plane analysis was carried out. In McKay et al. (2005), more realistic variables such as chemoattractants, monocytes, T-cells, proliferation factors and smooth muscle cells are introduced in addition to LDL, HDL, macrophages and radicals; both ODE models and PDE models are proposed. A simple reaction-diffusion system describing the early onset of atherosclerotic plaque formation was introduced in Mukherjee et al. (2019).

In Hao and Friedman (2014) and Friedman and Hao (2015), a more sophisticated reaction-diffusion free boundary model was introduced. The model includes the interactions of the variables LDL, oxidized LDL, HDL, inflammatory macrophages, anti-inflammatory macrophages, foam cells, radicals, IL-12 (Interleukin-12), MCP-1 (Monocyte Chemoattractant Protein-1), MMP (matrix metalloproteinase), smooth muscle cells and T-cells. This resulted in a system of 17 equations, plus the boundary and free boundary conditions. Nice numerical simulations were carried out.

It is extremely challenging to analyze a reaction-diffusion free boundary problem with 17 equations. Friedman et al. (2015) considered a simplified model involving LDL and HDL cholesterols, macrophages and foam cells. As the blood vessel is a long and thin tube, it is a good approximation to assume that the artery is a radially symmetric infinite cylinder. They further simplified the problem by considering the cross section only, which reduces the problem to a two-space dimensional one. Rigorous mathematical analysis was carried out to prove that for any \(H_0\) and any small \(\varepsilon >0\), there exists a unique \(L_0\) such that there is a unique \(\varepsilon \)-thin stationary plaque; this requirement is reasonable, as it represents a balance of “good” and “bad” cholesterols. Necessary and sufficient conditions were found to characterize situations in which a small initial plaque shrinks and disappears or persists for all time. Since it is not reasonable to assume that plaques have a strictly radially symmetric shape, Zhao and Hu (2021, 2022) investigated systematic symmetry-breaking bifurcations utilizing the Crandall-Rabinowitz theorem. But verifying the hypotheses of the Crandall-Rabinowitz theorem is a great challenge because the system admits no explicit smooth solutions; a number of sharp estimates were established in Zhao and Hu (2021, 2022) to overcome this difficulty. These results, however, represent bifurcations in the cross-sectional direction and are therefore two-space dimensional. It would be interesting to see whether bifurcations also occur in the longitudinal direction, which is a three-space dimensional problem. This is the goal of this paper.

The structure of this paper is as follows. In Sect. 2 we present our mathematical model, followed by the main result of the paper. In Sect. 3, we collect some well-known results which will be needed in the sequel. After establishing a variety of estimates for our PDE system, we apply the Crandall-Rabinowitz theorem to prove our main result in Sects. 4 and 5. Section 6 contains the conclusion.

2 Mathematical model

For the reader’s convenience, we shall briefly describe the model derived in Friedman et al. (2015) and (Friedman 2018, Chapters 7 and 8). We consider a PDE model consisting of LDL and HDL cholesterols, macrophages and foam cells. The simplified model lumps all LDL into the variable L, whether oxidized or not. Likewise, all HDL is lumped into the variable H, whether oxidized or not. The inflammatory macrophages and anti-inflammatory macrophages are lumped together into the variable M. The domain under consideration is the evolving plaque region \(\{\Omega (t),t>0\}\) with a moving boundary \(\Gamma (t)\), \(\Gamma (t)\subset \{ r<1\}\times \{-\infty<z<\infty \}\), and the fixed boundary \(\partial B_1\times {{\mathbb {R}}}=\{ r=1 \}\times \{-\infty<z<\infty \}\) representing the blood vessel wall.

The LDL satisfies, in \(\Omega (t)\),

$$\begin{aligned} \frac{\partial L}{\partial t}-\Delta L = -k_{1} \frac{M L}{K_{1}+L}-\rho _{1} L, \end{aligned}$$
(2.1)

where we have normalized the diffusion rate to 1, \(\Delta =\frac{1}{r}\frac{\partial }{\partial r}\Big (r\frac{\partial }{\partial r}\Big ) + \frac{1}{r^2}\frac{\partial ^2}{\partial \theta ^2} + \frac{\partial ^2}{\partial z^2}\) in the cylindrical domain, and the term \(-k_{1} \frac{M L}{K_{1}+L}\) is of Michaelis-Menten type and results from inflammatory macrophages ingesting oxidized LDL. Here the inflammatory macrophages and oxidized LDL constitute a portion of the total macrophages and LDL, respectively, and the proportionality factor is absorbed into \(k_1\). The positive constant \(\rho _1\) is the natural rate of elimination of LDL.

Likewise, HDL satisfies, in \(\Omega (t)\),

$$\begin{aligned} \frac{\partial H}{\partial t}-\Delta H = -k_{2} \frac{H F}{K_{2}+F}-\rho _{2} H, \end{aligned}$$
(2.2)

where the term \(-k_{2} \frac{H F}{K_{2}+F}\) represents the amount of HDL consumed in removing harmful cholesterol from the foam cells and reverting foam cells into anti-inflammatory macrophages. The positive constant \(\rho _2\) is the natural rate of elimination of HDL.

The macrophages and foam cells satisfy, in \(\Omega (t)\),

$$\begin{aligned}&\frac{\partial M}{\partial t} - D\Delta M + \nabla \cdot (M \mathbf {v}) = -k_1\frac{ML}{K_1+L} + k_2\frac{HF}{K_2+F} + \lambda \frac{ML}{\gamma +H}-\rho _3 M , \end{aligned}$$
(2.3)
$$\begin{aligned}&\frac{\partial F}{\partial t} - D\Delta F + \nabla \cdot (F \mathbf {v}) = k_1\frac{ML}{K_1+L}-k_2\frac{HF}{K_2+F}-\rho _4 F , \end{aligned}$$
(2.4)

where the positive constants \(\rho _3, \rho _4\) denote the natural death rates of M and F, respectively. The extra term \(\lambda \frac{ML}{\gamma +H}\) describes the effect that oxidized LDL attracts inflammatory macrophages, while HDL decreases this impact by competing with LDL for free radicals.

The combined density of macrophages and foam cells in the plaque varies within a relatively small range, so it is assumed to be a constant \(M_0\), i.e.,

$$\begin{aligned} M+F\equiv M_0\quad \text {in } \Omega (t). \end{aligned}$$
(2.5)

It is further assumed (Friedman 2018) that the plaque has the texture of a porous medium, and Darcy’s law is invoked,

$$\begin{aligned} \mathbf {v}=-\nabla p. \end{aligned}$$
(2.6)

Adding the two equations (2.3) and (2.4), using (2.5), and invoking Darcy’s law, we derive (2.10) below; a short computation is sketched after the system. Replacing \(\mathbf {v}\) with \(-\nabla p\), the equation for F can be written in the form (2.9) below, and the equation for M can then be eliminated by means of (2.5). In summary, we have the following system of equations in the plaque region \(\{\Omega (t),t>0\}\),

$$\begin{aligned}&\frac{\partial L}{\partial t}-\Delta L = -k_{1} \frac{(M_0 - F) L}{K_{1}+L}-\rho _{1} L, \end{aligned}$$
(2.7)
$$\begin{aligned}&\frac{\partial H}{\partial t}-\Delta H = -k_{2} \frac{H F}{K_{2}+F}-\rho _{2} H,\end{aligned}$$
(2.8)
$$\begin{aligned}&\begin{array}{ll} \frac{\partial F}{\partial t}-D \Delta F-\nabla F \cdot \nabla p= &{} k_{1} \frac{(M_{0}-F) L}{K_{1}+L} -k_{2} \frac{H F}{K_{2}+F}-\lambda \frac{F\left( M_{0}-F\right) L}{M_{0}(\gamma +H)}\\ &{}+\left( \rho _{3}-\rho _{4}\right) \frac{\left( M_{0}-F\right) F}{M_{0}}, \end{array} \end{aligned}$$
(2.9)
$$\begin{aligned}&-\Delta p = \frac{1}{M_{0}}\left[ \lambda \frac{\left( M_{0}-F\right) L}{\gamma +H}-\rho _{3}\left( M_{0}-F\right) -\rho _{4} F\right] . \end{aligned}$$
(2.10)
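For completeness, we sketch how (2.10) is obtained. Adding (2.3) and (2.4), the Michaelis-Menten exchange terms cancel; since \(M+F\equiv M_0\) by (2.5), the time derivative and the diffusion terms on the left-hand side vanish, and Darcy’s law (2.6) gives \(\nabla \cdot \big ((M+F)\mathbf {v}\big )=M_0\nabla \cdot \mathbf {v}=-M_0\Delta p\). Hence

$$\begin{aligned} -M_0\Delta p = \lambda \frac{ML}{\gamma +H}-\rho _{3} M-\rho _{4} F, \end{aligned}$$

and substituting \(M=M_0-F\) and dividing by \(M_0\) yields (2.10). Similarly, writing \(\nabla \cdot (F\mathbf {v})=-\nabla F\cdot \nabla p-F\Delta p\) in (2.4) and eliminating \(\Delta p\) by means of (2.10) yields (2.9).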

Next we proceed to derive boundary conditions. By continuity of the velocity field, we immediately have the free boundary condition

$$\begin{aligned} V_n=-\frac{\partial p}{\partial {{\varvec{n}}}} \quad \text{ on } \ \Gamma (t), \end{aligned}$$
(2.11)

where \(V_n\) is the velocity of the free boundary \(\Gamma (t)\) in the outward normal direction \({\varvec{n}}\). Naturally, there is no exchange of any of the variables through the blood vessel wall (\(r=1\)), and the velocity there is zero:

$$\begin{aligned} \frac{\partial L}{\partial r}=\frac{\partial H}{\partial r}=\frac{\partial F}{\partial r}=\frac{\partial p}{\partial r}=0 \quad \text { on } \partial B_1\times {{\mathbb {R}}}. \end{aligned}$$
(2.12)

On the free boundary,

$$\begin{aligned}&\frac{\partial L}{\partial {{\varvec{n}}}}+\beta _{1}\left( L-L_{0}\right) =0 \quad&\text{ on } \ \Gamma (t), \end{aligned}$$
(2.13)
$$\begin{aligned}&\frac{\partial H}{\partial {{\varvec{n}}}}+\beta _{1}\left( H-H_{0}\right) =0 \quad&\text{ on } \ \Gamma (t), \end{aligned}$$
(2.14)
$$\begin{aligned}&\frac{\partial F}{\partial {{\varvec{n}}}}+\beta _{2} F=0 \quad&\text{ on } \ \Gamma (t), \end{aligned}$$
(2.15)

where \(L_0\) and \(H_0\) in the flux boundary conditions (2.13) and (2.14) respectively represent the concentrations of L and H in the blood, with \(\beta _1>0\) and \(\beta _2>0\) being transfer rates; of course, there are no foam cells in the blood, which explains the homogeneous condition (2.15). Finally, the adhesiveness of the plaque yields the equation:

$$\begin{aligned} p=\kappa \quad \text{ on } \ \Gamma (t), \end{aligned}$$
(2.16)

where \(\kappa \) is the mean curvature in the direction \({\varvec{n}}\) for \(\Gamma (t)\).

Since the main interest is the free boundary, we may also consider a finite domain within \(\{ 0<z< T\}\). Setting the time derivatives to zero, the corresponding stationary version of the system (2.7)-(2.16) in the finite cylinder \(\Omega \) with inner boundary \(\Gamma \) and fixed outer boundary \(\Gamma _0=\{r=1\}\times [0,T]\) is

$$\begin{aligned}&- \Delta L = - k_1 \frac{(M_0-F)L}{K_1 + L} - \rho _1 L&\text {in } \Omega , \end{aligned}$$
(2.17)
$$\begin{aligned}&- \Delta H =- k_2 \frac{HF}{K_2 + F} - \rho _2 H&\text {in } \Omega ,\end{aligned}$$
(2.18)
$$\begin{aligned}&\begin{array}{ll} -D\Delta F - \nabla F\cdot \nabla p = &{} k_1\frac{(M_0-F)L}{K_1+L}-k_2\frac{HF}{K_2+F}-\lambda \frac{F(M_0-F)L}{M_0(\gamma +H)} \\ &{} +(\rho _3-\rho _4)\frac{(M_0-F)F}{M_0} \end{array}&\text {in }\Omega ,\end{aligned}$$
(2.19)
$$\begin{aligned}&-\Delta p = \frac{1}{M_0}\Big [\lambda \frac{(M_0-F)L}{\gamma +H}-\rho _3(M_0-F) - \rho _4 F\Big ]&\text {in }\Omega ,\end{aligned}$$
(2.20)
$$\begin{aligned}&\frac{\partial L}{\partial r} = \frac{\partial H}{\partial r} = \frac{\partial F}{\partial r} = \frac{\partial p}{\partial r} = 0&\text {on } \Gamma _0, \qquad \quad \end{aligned}$$
(2.21)
$$\begin{aligned}&\frac{\partial L}{\partial {{\varvec{n}}}} + \beta _1 (L-L_0) = 0, \ \frac{\partial H}{\partial {{\varvec{n}}}} + \beta _1 (H-H_0)=0, \ \frac{\partial F}{\partial {{\varvec{n}}}} + \beta _2 F = 0&\text {on }\Gamma ,\end{aligned}$$
(2.22)
$$\begin{aligned}&p = \kappa&\text {on }\Gamma ,\end{aligned}$$
(2.23)
$$\begin{aligned}&V_n = -\frac{\partial p}{\partial {{\varvec{n}}}}=0&\text {on }\Gamma ,\end{aligned}$$
(2.24)
$$\begin{aligned}&\text {No flux conditions for all variables at }z=0 \hbox { and } z=T. \end{aligned}$$
(2.25)

As in Zhao and Hu (2021), we shall use \(\mu = \frac{1}{\epsilon }[\lambda L_0-\rho _3(\gamma +H_0)]\) as our bifurcation parameter. We keep all parameters fixed except \(L_0\) and \(\rho _4\), so that \(\mu \) varies by changing \(L_0\). Even though \(\epsilon \) appears in the denominator, \(\mu \) is of order O(1): the balance of LDL and HDL required for the existence of a stationary solution forces \(\lambda L_0-\rho _3(\gamma +H_0)\) to be of order \(O(\epsilon )\).
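For later reference, the definition of \(\mu \) can be rewritten as

$$\begin{aligned} \lambda L_0=\rho _3(\gamma +H_0)+\epsilon \mu , \qquad \text {i.e.,}\quad L_0=\frac{\rho _3(\gamma +H_0)}{\lambda }+\frac{\epsilon \mu }{\lambda }, \end{aligned}$$

so that, as \(\mu \) varies over an O(1) range, \(L_0\) varies in an \(O(\epsilon )\) neighborhood of \(\frac{\rho _3(\gamma +H_0)}{\lambda }\); this is consistent with the leading term of \(L_*\) in (3.1) below.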

The existence of a radially symmetric stationary solution can be found in Friedman et al. (2015) and Zhao and Hu (2021). To be precise, the existence and uniqueness results of Friedman et al. (2015) and Zhao and Hu (2021) are for a solution in two dimensions (independent of the variable z). Such a solution is clearly also a three-space dimensional solution, modulo the fact that the two-space dimensional and three-space dimensional mean curvatures differ by a factor of \(\frac{1}{2}\) even for a cylindrical domain and its cross section (\(\frac{1}{n-1} =1\) when \(n=2\) and \(\frac{1}{n-1}=\frac{1}{2}\) when \(n=3\)). This, however, does not have a material adverse impact on the existence and uniqueness proofs. To be rigorous, we need to show that the solution is also unique in the class of three-space dimensional solutions, and hence the two-space dimensional solution must also be the unique three-space dimensional solution.

As in Zhao and Hu (2021), we let

$$\begin{aligned} \mu _c = \frac{ \rho _3}{\beta _1}\Big \{ (\gamma + H_0) \Big ( \frac{\lambda k_1 M_0 }{\lambda K_1+\rho _3(\gamma +H_0)}+ {\rho _1 } \Big ) -\rho _2 H_0 \Big \}. \end{aligned}$$
(2.26)

We state the following analog of (Zhao and Hu 2021, Theorem 2.1). The existence is already obtained in Zhao and Hu (2021). The uniqueness proof boils down to a maximum principle, which clearly remains valid in this domain, and hence the proof of uniqueness is omitted.

Theorem 2.1

For every \(\mu ^{*}>\mu _{c}\) and \(\mu _{c}<\mu <\mu ^{*}\), we can find a small \(\varepsilon ^{*} =\epsilon ^*(\mu ^*)>0\), and for each \(0<\varepsilon <\varepsilon ^{*}\), there exists a unique \(\rho _{4}\) such that the system (2.17)–(2.25) admits a unique solution \(\left( L_{*}(r), H_{*}(r), F_{*}(r), p_{*}(r)\right) \) with \(0 \le L_*(r) \le L_0, \; 0\le H_*(r) \le H_0, \; 0\le F_*(r) \le M_0\).

A slight modification of the maximum principle would imply that the uniqueness is also valid if the solution is considered in the infinite domain \(\{1-\epsilon<r<1\}\times \{ -\infty<z<\infty \}\).

The main result of this paper is summarized in the following theorem.

Theorem 2.2

For each integer n satisfying

$$\begin{aligned} j^2 + n^2 \ne \Big (\frac{T}{2\pi }\Big )^2 \quad \text {for all } j=0,1,2,3, \ldots , \end{aligned}$$
(2.27)

we can find a small \(E>0\) such that for each \(0<\epsilon <E\), there exists a unique \(\mu ^n(\epsilon )\), whose relationship with T is given by

$$\begin{aligned} \mu ^n(\epsilon )=\frac{\gamma +H_0}{2} \Big (\frac{2\pi n}{T}\Big )^2 \Big [1 -\Big (\frac{2\pi n}{T}\Big )^2 \Big ]+ O\Big ((n^3+1) \epsilon \Big ), \end{aligned}$$
(2.28)

such that if \(\mu ^n(\epsilon ) > \mu _c\) (\(\mu _c\) is defined in (2.26)), then \(\mu =\mu ^n(\epsilon )\) is a bifurcation point at which a symmetry-breaking stationary solution of the system (2.17)–(2.25) branches off. Moreover, the free boundary of this bifurcating solution is of the form

$$\begin{aligned} r = 1 - \epsilon + \tau \cos \Big (\frac{2\pi n}{T}z\Big ) + o(\tau ), \quad \text {where }\quad |\tau |\ll \epsilon . \end{aligned}$$

Remark 2.1

The assumption (2.27) requires \(n\ne \frac{T}{2\pi }\). It is also clear that if \(n> \frac{T}{2\pi }\), then (2.27) is automatically satisfied. As a matter of fact, (2.27) is a very weak assumption and fails only for some isolated values of n.
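As a purely illustrative example (the value of T here is chosen only for concreteness), take \(T=4\pi \), so that \(\frac{T}{2\pi }=2\). Then \(n=2\) is excluded by (2.27) (take \(j=0\)), while every other positive integer n satisfies (2.27); for \(n=1\) the leading term in (2.28) is

$$\begin{aligned} \mu ^1(\epsilon )\approx \frac{\gamma +H_0}{2}\Big (\frac{1}{2}\Big )^2\Big [1-\Big (\frac{1}{2}\Big )^2\Big ]=\frac{3(\gamma +H_0)}{32}, \end{aligned}$$

and, by Theorem 2.2, a symmetry-breaking bifurcation occurs at \(\mu =\mu ^1(\epsilon )\) provided this value exceeds \(\mu _c\).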

To the best of our knowledge, this is the first paper to produce stationary solutions of small plaques of the form shown in Fig. 1.

3 Preliminaries

3.1 Estimates on the stationary solution

We now collect various estimates on \(L_{*}(r),\ H_{*}(r),\ F_{*}(r),\ p_{*}(r)\) which are already obtained in (Zhao and Hu 2021, (2.11)–(2.13), (2.18), (4.47)–(4.49), (4.3), (4.4), (2.16), (2.17)).

Lemma 3.1

(see Zhao and Hu 2021) Let \( \mu _c<\mu <\mu ^*\). Then

$$\begin{aligned} \begin{array}{rcl} L_*(r) &{} = &{} \frac{\rho _3(\gamma +H_0)}{\lambda }+ \epsilon \Big [ \frac{\mu }{\lambda }- \frac{\rho _3(\gamma +H_0)}{\beta _1} \Big ( \frac{k_1 M_0 }{\lambda K_1+\rho _3(\gamma +H_0)}+\frac{\rho _1 }{\lambda }\Big )\Big ]+ O(\epsilon ^2) \\ &{} \triangleq &{} \frac{\rho _3(\gamma +H_0)}{\lambda }+\epsilon L_*^1 + O(\epsilon ^2), \\ H_*(r) &{} = &{} H_0 - \epsilon \frac{\rho _2 H_0 }{\beta _1} + O(\epsilon ^2) \; \triangleq \; H_0 + \epsilon H_*^1 + O(\epsilon ^2),\\ F_*(r) &{} = &{} \epsilon \; \frac{\rho _3(\gamma +H_0)}{\beta _2 D} \; \frac{k_1 M_0 }{\lambda K_1+\rho _3(\gamma +H_0)}+ O(\epsilon ^2) \; \triangleq \; \epsilon F_*^1 + O(\epsilon ^2). \end{array} \end{aligned}$$
(3.1)

The following estimate holds for first derivatives,

$$\begin{aligned} |L_*'(r)| + |H_*'(r)| + |F_*'(r)| + |p_*'(r)|\le C \epsilon , \quad 1-\epsilon \le r\le 1. \end{aligned}$$
(3.2)

The estimates of the second derivatives at the boundary \(r=1-\epsilon \) are given by

$$\begin{aligned} \begin{array}{rcl} \frac{1}{\beta _1}\Big (\frac{\partial ^2 L_*}{\partial r^2}-\beta _1\frac{\partial L_*}{\partial r}\Big )\Big |_{r=1-\epsilon } &{}=&{} \frac{\mu }{\lambda } - L_*^1 + O(\epsilon ), \\ \frac{1}{\beta _1}\Big (\frac{\partial ^2 H_*}{\partial r^2}-\beta _1\frac{\partial H_*}{\partial r}\Big )\Big |_{r=1-\epsilon }&{} =&{} -H_*^1 + O(\epsilon ),\\ \frac{1}{\beta _2}\Big (\frac{\partial ^2 F_*}{\partial r^2}-\beta _2\frac{\partial F_*}{\partial r}\Big )\Big |_{r=1-\epsilon }&{} =&{} -F_*^1+O(\epsilon ). \end{array} \end{aligned}$$
(3.3)

For the function \(p_*(r)\),

$$\begin{aligned} \frac{\partial ^2p_*}{\partial r^2}\Big |_{r=1-\epsilon } = \epsilon ^2 J_1(\mu ,\rho _4(\mu )), \end{aligned}$$
(3.4)

where the function \(J_1(\mu ,\rho _4(\mu ))\) satisfies

$$\begin{aligned} |J_1(\mu ,\rho _4(\mu ))| \le C,\quad \Big |\frac{\mathrm {d}J_1(\mu ,\rho _4(\mu ))}{\mathrm {d}\mu }\Big | \le C \end{aligned}$$
(3.5)

with C independent of \(\epsilon \). And for the parameter \(\rho _4\),

$$\begin{aligned} \rho _4= & {} \frac{\beta _2 D [ \lambda K_1 + \rho _3(\gamma + H_0)]}{\rho _3 k_1(\gamma +H_0)^2} (\mu -\mu _c) + O(\epsilon ), \end{aligned}$$
(3.6)
$$\begin{aligned} \frac{\partial \rho _4}{\partial \mu }= & {} \frac{\beta _2 D [ \lambda K_1 + \rho _3(\gamma + H_0)]}{\rho _3 k_1(\gamma +H_0)^2}+ O(\epsilon ). \end{aligned}$$
(3.7)

3.2 The Crandall-Rabinowitz theorem

Next, we state the Crandall-Rabinowitz theorem, which is critical in studying bifurcation.

Theorem 3.2

(see Crandall and Rabinowitz 1971, Theorem 1.7) Let X, Y be real Banach spaces and \(F(x,\mu )\) a \(C^{p}\) map, \(p\ge 3\), of a neighborhood \((0,\mu _0)\) in \(X\times {\mathbb {R}}\) into Y. Suppose

  1. (i)

    \(F(0,\mu )=0\) for all \(\mu \) in a neighborhood of \(\mu _0\);

  2. (ii)

    \(\text{ Ker }[F_x(0,\mu _0)]\) is a one-dimensional space, spanned by \(x_0\);

  3. (iii)

    \(\text{ Im }[F_x(0,\mu _0)]=Y_1\) has codimension 1;

  4. (iv)

    \([F_{\mu x}](0,\mu _0)x_0\notin Y_1\).

Then \((0,\mu _0)\) is a bifurcation point of the equation \(F(x,\mu )=0\) in the following sense: in a neighborhood of \((0,\mu _0)\) the set of solutions of \(F(x,\mu )=0\) consists of two \(C^{p-2}\) smooth curves \(\Gamma _1\) and \(\Gamma _2\) which intersect only at the point \((0,\mu _0)\); \(\Gamma _1\) is the curve \((0,\mu )\) and \(\Gamma _2\) can be parameterized as follows:

$$\begin{aligned} \Gamma _2 : (x(\varepsilon ), \ \mu (\varepsilon )), \ |\varepsilon | \ small, \ (x(0),\mu (0))=(0,\mu _0),\ x'(0)=x_0. \end{aligned}$$
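For orientation, in Sects. 4 and 5 this theorem will be applied, roughly speaking, with

$$\begin{aligned} F={\mathscr {F}}(\cdot ,\mu ) \ \text {defined in (4.25)},\qquad x=\tau S,\qquad x_0=\cos \Big (\frac{2\pi n}{T}z\Big ), \end{aligned}$$

on the spaces \(X^{l+\alpha }\), \(X_1^{l+\alpha }\) introduced at the beginning of Sect. 4, so that the bifurcation curve \(\Gamma _2\) produces the free boundary expansion stated in Theorem 2.2.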

3.3 A continuation lemma

To compute the Fréchet derivatives we need to establish sharp estimates on the variable functions; these estimates are based on the following continuation lemma.

Lemma 3.3

(see Zhao and Hu 2021, Lemma 5.1) Let \(\{ \mathbf {Q}_\delta ^{(i)}\}_{i=1}^N\) be a finite collection of real vectors, and define the norm of the vector by \(|\mathbf {Q}_\delta |_{\max }=\max \limits _{1\le i \le N}|\mathbf {Q}_\delta ^{(i)}|\). Suppose that \(0<C_1<C_2\), and

  1. (i)

    \(|\mathbf {Q}_0|_{\max }\le C_1\);

  2. (ii)

    For any \(0< \delta \le 1\), if \(|\mathbf {Q}_\delta |_{\max }\le C_2\), then \(|\mathbf {Q}_\delta |_{\max }\le C_1\);

  3. (iii)

    \(\mathbf {Q}_\delta \) is continuous in \(\delta \).

Then \(|\mathbf {Q}_\delta |_{\max }\le C_1\) for all \(0< \delta \le 1\).

3.4 Taylor’s expansion for vector functions

In the process of computing the Fréchet derivatives, we shall also use the following Taylor expansion for vector functions. Let \(f:\ {\mathbb {R}}^N\rightarrow {\mathbb {R}}^M\) be a \(C^2\) function.

Lemma 3.4

(see Zhao and Hu 2021, Lemma 3.3) For any \(y_*\), y, \(y_1\) and any real \(\tau \),

$$\begin{aligned} f(y) - f(y_*) - \tau \nabla f(y_*)\cdot y_1 = \nabla f(y_*)\cdot (y-y_* -\tau y_1) + R, \end{aligned}$$
(3.8)

where the error term is estimated by

$$\begin{aligned} |R| \;\le \; \frac{1}{2}\Vert D^2 f\Vert _{L^\infty } |y-y_*|^2. \end{aligned}$$
(3.9)

3.5 A supersolution

As in Friedman et al. (2015), we use the function

$$\begin{aligned} \xi (r)=\frac{1-r^2}{4}+\frac{1}{2} \log r \end{aligned}$$
(3.10)

frequently when we apply the maximum principle. Recall that \(\xi \) satisfies

$$\begin{aligned} -\Delta \xi = 1, \quad \frac{\partial \xi }{\partial r} = \frac{1-r^2}{2r}>0, \; \text {and } \xi (r)=O(\epsilon ^2) \text { for } 1-\epsilon< r < 1. \end{aligned}$$
(3.11)
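Indeed, a direct computation (recorded here for convenience) gives

$$\begin{aligned} \xi '(r)=-\frac{r}{2}+\frac{1}{2r}=\frac{1-r^2}{2r},\qquad -\Delta \xi =-\frac{1}{r}\big (r\,\xi '(r)\big )'=-\frac{1}{r}\Big (\frac{1-r^2}{2}\Big )'=1, \end{aligned}$$

and since \(\xi (1)=\xi '(1)=0\), Taylor’s expansion yields \(\xi (r)=O\big ((1-r)^2\big )=O(\epsilon ^2)\) for \(1-\epsilon< r < 1\).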

Taking

$$\begin{aligned} c_1(\beta ,\epsilon )= & {} \frac{1}{\beta }\frac{\epsilon (2-\epsilon )}{2(1-\epsilon )}-\frac{\epsilon (2-\epsilon )}{4}-\frac{1}{2} \log (1-\epsilon )\equiv \frac{\epsilon }{\beta } + O(\epsilon ^2),\\ c_2(\beta ,\tau )= & {} \frac{2}{\beta } |\tau |, \end{aligned}$$

we easily verify that

$$\begin{aligned} \begin{aligned}&\Big [-\frac{\partial \xi }{\partial r} + \beta \Big (\xi +c_1(\beta ,\epsilon )\Big )\Big ]\Big |_{r=1-\epsilon }\\&=\Big [-\frac{\partial \Big (\xi +c_1(\beta ,\epsilon )\Big )}{\partial r} + \beta \Big (\xi +c_1(\beta ,\epsilon )\Big )\Big ]\Big |_{r=1-\epsilon }=0. \end{aligned} \end{aligned}$$
(3.12)

Let \(\Vert S\Vert _{C^{4+\alpha }([0,T])}\le 1\). Then, using (3.12), we derive the following useful inequality at \(r=1-\epsilon + \tau S\) with \(|\tau |\ll \epsilon \):

$$\begin{aligned}&\Big [\frac{\partial \big (\xi +c_1(\beta ,\epsilon ) + c_2(\beta , \tau )\big )}{\partial {{\varvec{n}}}}+\beta \Big (\xi +c_1(\beta ,\epsilon ) + c_2(\beta , \tau )\Big )\Big ]\Big |_{r=1-\epsilon +\tau S} \\&\quad =\Big [-\frac{1}{\sqrt{1+(\tau S_z)^2}}\frac{\partial \xi }{\partial r} +\beta \Big (\xi +c_1(\beta ,\epsilon )\Big ) \Big ]\Big |_{r=1-\epsilon +\tau S} + \beta c_2(\beta ,\tau )\\&\quad =\Big [-\frac{\partial \xi }{\partial r}+\beta \Big (\xi +c_1(\beta ,\epsilon )\Big )\Big ]\Big |_{r=1-\epsilon } +\Big [-\frac{\partial ^2 \xi }{\partial r^2}+\beta \frac{\partial \xi }{\partial r}\Big ]\Big |_{r=1-\epsilon }\tau S \\&\qquad + 2|\tau | + O(|\tau S|^2) + O(|\tau S_z|^2) \\&\quad =0 + \Big [\frac{1+(1-\epsilon )^2}{2(1-\epsilon )^2} + \beta \frac{1-(1-\epsilon )^2}{2(1-\epsilon )}\Big ]\tau S + 2|\tau | + O(|\tau S|^2) + O(|\tau S_z|^2)\\&\quad =(1+O(\epsilon ))\tau S + 2|\tau | + O(|\tau S|^2) + O(|\tau S_z|^2) > 0. \end{aligned}$$

4 Bifurcations - The Fréchet derivatives

We shall work with the Crandall-Rabinowitz theorem on the spaces

$$\begin{aligned}&X^{l+\alpha } =\{S \in C^{l+\alpha }([0,T]); \; S(z) =S(T-z)\}, \\&X^{l+\alpha }_1 = \text { closure of the linear space spanned by }\\&\Big \{ \cos \Big (\frac{2\pi n}{T}z\Big ), \; n=0,1,2,3,\cdots \Big \} \text { in } X^{l+\alpha }. \end{aligned}$$

Remark 4.1

The functions in \(X_1^{l+\alpha }\) automatically extend to periodic functions for \(z\in (-\infty , \infty )\) with period T. It is clear that for \(S\in X_1^{l+\alpha }\) with \(l\ge 1\), we have \(S'(0)=S'(T)=0\).

A solution which is T-periodic in z and bifurcates from the \(\cos (\frac{2\pi n}{T} z)\) branch automatically satisfies the zero flux boundary conditions at \(z=0\) and \(z=T\). So rather than studying the original problem directly, we shall look for bifurcating solutions which are periodic in z. Consider a family of perturbed domains \(\Omega _{\tau }=\{1-\varepsilon +{\widetilde{R}}<r<1, 0< z< T\}\), where \({\widetilde{R}}=\tau S(z)\), S(z) is T-periodic in z, \(|\tau | \ll \varepsilon \) and \(\Vert S\Vert _{C^{4+\alpha }([0,T])} \le 1\), and denote the corresponding one-period inner boundary by \(\Gamma _{\tau }\). Let \((L, H, F, p)\) be the solution of

$$\begin{aligned}&- \Delta L = - k_1 \frac{(M_0-F)L}{K_1 + L} - \rho _1 L\quad&\text {in } \Omega _\tau , \end{aligned}$$
(4.1)
$$\begin{aligned}&- \Delta H =- k_2 \frac{HF}{K_2 + F} - \rho _2 H\quad&\text {in } \Omega _\tau ,\end{aligned}$$
(4.2)
$$\begin{aligned}&\begin{array}{ll}-D\Delta F - \nabla F\cdot \nabla p = &{} k_1\frac{(M_0-F)L}{K_1+L}-k_2\frac{HF}{K_2+F}-\lambda \frac{F(M_0-F)L}{M_0(\gamma +H)} \\ &{}+(\rho _3-\rho _4)\frac{(M_0-F)F}{M_0}\quad \end{array}&\text {in }\Omega _\tau ,\qquad \end{aligned}$$
(4.3)
$$\begin{aligned}&-\Delta p = \frac{1}{M_0}\Big [\lambda \frac{(M_0-F)L}{\gamma +H}-\rho _3(M_0-F) - \rho _4 F\Big ]\quad&\text {in }\Omega _\tau ,\end{aligned}$$
(4.4)
$$\begin{aligned}&\frac{\partial L}{\partial r} = \frac{\partial H}{\partial r} = \frac{\partial F}{\partial r} = \frac{\partial p}{\partial r} = 0 &\text {on } \Gamma _0,\end{aligned}$$
(4.5)
$$\begin{aligned}&\frac{\partial L}{\partial {{\varvec{n}}}} + \beta _1 (L-L_0) = 0, \; \frac{\partial H}{\partial {{\varvec{n}}}} + {\beta _1}(H-H_0)=0, \; \frac{\partial F}{\partial {{\varvec{n}}}} + \beta _2 F = 0&\text {on } \Gamma _\tau ,\end{aligned}$$
(4.6)
$$\begin{aligned}&p = \kappa&\text {on } \Gamma _\tau . \end{aligned}$$
(4.7)

We need to ensure the existence and uniqueness of the solution to the problem (4.1)-(4.7). Before showing this fact, we shall first derive an asymptotic formula for the mean curvature.

Lemma 4.1

If \(S\in C^2(-\infty ,\infty )\) and \(\Vert S\Vert _{C^2([0,T])}\le 1\), then

$$\begin{aligned} \begin{aligned} \kappa \big |_{r=1-\epsilon +\tau S(z)}=&-\frac{1}{2(1-\varepsilon )}+\frac{\tau }{2(1-\varepsilon )^2} \big [S+(1-\varepsilon )^2S_{zz}\big ]\\&-\tau ^2\Big [\frac{1}{2(1-\varepsilon )^3}S^2-\frac{1}{4(1-\varepsilon )}S_z^2\Big ]+O(\tau ^3). \end{aligned} \end{aligned}$$
(4.8)

Proof

We use the notation \(\mathbf {e}_{r}, \mathbf {e}_{\theta }, \mathbf {e}_{z}\) to denote the unit vectors in the \(r, \theta , z\) directions, respectively. Then, written in rectangular coordinates in \({\mathbb {R}}^{3}\),

$$\begin{aligned} \mathbf {e}_{r}=(\cos \theta , \sin \theta , 0),\quad \mathbf {e}_{\theta }=(-\sin \theta , \cos \theta ,0), \quad \mathbf {e}_{z}=(0,0,1), \end{aligned}$$

and the gradient is given by

$$\begin{aligned} \nabla _{x}=\mathbf {e}_{r} \frac{\partial }{\partial r}+\mathbf {e}_{\theta } \frac{1}{r} \frac{\partial }{\partial \theta }+\mathbf {e}_z\frac{\partial }{\partial z}. \end{aligned}$$

For the surface \(r=1-\epsilon +\tau S(z)\), or, alternatively, \(\xi (r,\theta , z)=0\) where \(\xi (r, \theta , z)=r-(1-\epsilon )-\tau S(z)\), the normal vector is

$$\begin{aligned} {\varvec{n}}=-\frac{\nabla _{x} \xi }{|\nabla _{x} \xi |}=-\frac{1}{\sqrt{1+(\tau S_z)^2}}(\mathbf {e}_{r}-\tau S_z\mathbf {e}_{z}), \end{aligned}$$

and the mean curvature is then \(-\left. \frac{1}{2} {\text {div}} \frac{\nabla _{x} \xi }{\left| \nabla _{x} \xi \right| }\right| _{\xi =0}\), or

$$\begin{aligned} \kappa |_{r=1-\varepsilon +\tau S}=-\frac{1}{2}{\text {div}}\Big [\frac{1}{\sqrt{1+(\tau S_z)^2}}(\mathbf {e}_{r}-\tau S_z\mathbf {e}_{z})\Big ]\Big |_{r=1-\varepsilon +\tau S}. \end{aligned}$$

By direct computations,

$$\begin{aligned} {\text {div}}\mathbf {e}_{r}=\frac{1}{r},\quad {\text {div}}\mathbf {e}_{z}=0. \end{aligned}$$

Using \({\text {div}}(f \mathbf {g})=f{\text {div}}\mathbf {g}+\nabla _{x} f \cdot \mathbf {g}\), we obtain

$$\begin{aligned} \begin{aligned} \kappa |_{r=1-\varepsilon +\tau S}=&-\frac{1}{2}\frac{1}{\sqrt{1+(\tau S_z)^2}}\frac{1}{r}\Big |_{r=1-\varepsilon +\tau S}+\frac{1}{2}\frac{\partial }{\partial z}\Big (\frac{\tau S_z}{\sqrt{1+(\tau S_z)^2}}\Big )\\ =&-\frac{1}{2}\frac{1}{\sqrt{1+(\tau S_z)^2}}\frac{1}{1-\varepsilon +\tau S}+\frac{1}{2}\Big [\frac{\tau S_{zz}}{\sqrt{1+(\tau S_z)^2}}-\frac{\tau ^3S_z^2S_{zz}}{(1+(\tau S_z)^2)^{3/2}}\Big ]. \end{aligned} \end{aligned}$$

Since

$$\begin{aligned}&\frac{1}{\sqrt{1+(\tau S_z)^2}}=1-\frac{1}{2}(\tau S_z)^2+O(\tau ^4), \\&\frac{1}{1-\varepsilon +\tau S}=\frac{1}{1-\varepsilon }-\frac{1}{(1-\varepsilon )^2}\tau S+\frac{1}{(1-\varepsilon )^3}(\tau S)^2+O(\tau ^3), \end{aligned}$$

then the formula (4.8) follows. \(\square \)
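As a quick consistency check of (4.8), setting \(\tau =0\) gives

$$\begin{aligned} \kappa \big |_{r=1-\varepsilon }=-\frac{1}{2(1-\varepsilon )}, \end{aligned}$$

the three-space dimensional mean curvature of the cylinder \(r=1-\varepsilon \); its magnitude is half of the curvature \(\frac{1}{1-\varepsilon }\) of the two-space dimensional cross-sectional circle, in agreement with the factor \(\frac{1}{2}\) noted in Sect. 2.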

We now establish the existence and uniqueness of the solution to the problem (4.1)-(4.7).

Lemma 4.2

Let \(S\in C^{4+\alpha }(-\infty ,\infty )\), periodic with period T, \(S'(0)=S'(T)=0\), and \(\Vert S\Vert _{C^{4+\alpha }([0,T])}\le 1\). For sufficiently small \(\epsilon \) and \(|\tau |\ll \epsilon \), the problem (4.1)-(4.7) admits a unique solution \((L, H, F, p)\).

Proof

We shall prove this lemma by using the contraction mapping principle. Take

$$\begin{aligned} {\mathscr {M}}=\{(L,H,F);\; 0\le L\le L_0,\, 0\le H\le H_0,\, 0\le F\le M_0\}. \end{aligned}$$
(4.9)

For each \((L,H,F)\in {\mathscr {M}}\), we solve the following linear equations:

$$\begin{aligned}&- \Delta {\widehat{L}} = - k_1 \frac{(M_0-F){\widehat{L}}}{K_1 + L} - \rho _1 {\widehat{L}}\quad \text {in } \Omega _\tau , \end{aligned}$$
(4.10)
$$\begin{aligned}&- \Delta {\widehat{H}} =- k_2 \frac{{\widehat{H}}F}{K_2 + F} - \rho _2 {\widehat{H}}\quad \text {in } \Omega _\tau ,\end{aligned}$$
(4.11)
$$\begin{aligned}&-D\Delta {\widehat{F}} - \nabla {\widehat{F}}\cdot \nabla {\widehat{p}} = k_1 \frac{(M_0-{\widehat{F}})L}{K_1+L}-k_2\frac{H{\widehat{F}}}{K_2+F}- \lambda \frac{{\widehat{F}}(M_0-F)L}{M_0(\gamma +H)} \nonumber \\&\quad +\frac{\rho _3}{M_0}(M_0-{\widehat{F}})F-\frac{\rho _4}{M_0}(M_0-F){\widehat{F}} \quad \text {in }\Omega _\tau ,\qquad \end{aligned}$$
(4.12)
$$\begin{aligned}&-\Delta {\widehat{p}} = \frac{1}{M_0}\Big [\lambda \frac{(M_0-F)L}{\gamma +H}-\rho _3(M_0-F) - \rho _4 F\Big ]\quad \text {in }\Omega _\tau ,\end{aligned}$$
(4.13)
$$\begin{aligned}&\frac{\partial {\widehat{L}}}{\partial r} = \frac{\partial {\widehat{H}}}{\partial r} = \frac{\partial {\widehat{F}}}{\partial r} = \frac{\partial {\widehat{p}}}{\partial r} = 0 \quad \text {on } \Gamma _0,\end{aligned}$$
(4.14)
$$\begin{aligned}&\frac{\partial {\widehat{L}}}{\partial {{\varvec{n}}}} + \beta _1 ({\widehat{L}}-L_0) = 0, \quad \frac{\partial {\widehat{H}}}{\partial {{\varvec{n}}}} + {\beta _1}({\widehat{H}}-H_0)=0,\nonumber \\&\frac{\partial {\widehat{F}}}{\partial {{\varvec{n}}}} + \beta _2 {\widehat{F}} = 0\quad \text {on } \Gamma _\tau ,\end{aligned}$$
(4.15)
$$\begin{aligned}&{\widehat{p}} = \kappa \quad \text {on } \Gamma _\tau . \end{aligned}$$
(4.16)

Define the map \({\mathscr {L}}:(L,H,F)\rightarrow ({\widehat{L}},{\widehat{H}},{\widehat{F}})\); we shall prove that \({\mathscr {L}}\) maps \({\mathscr {M}}\) into itself and is a contraction, which implies that the unique fixed point of \({\mathscr {L}}\) yields the unique classical solution of the system (4.1)-(4.7).

Step 1. \({\mathscr {L}}\) maps \({\mathscr {M}}\) into itself.

By the maximum principle, we clearly have

$$\begin{aligned} 0\le {\widehat{L}} \le L_0,\quad 0\le {\widehat{H}}\le H_0 \quad \text {in }{\overline{\Omega }}_\tau . \end{aligned}$$
(4.17)

We now establish the estimate for \({\widehat{p}}\). Since L, H and F are all bounded, the right-hand side of (4.13) is bounded in the supremum norm, i.e.,

$$\begin{aligned} \Big |\Delta \Big ({\widehat{p}}+\frac{1}{2}\Big )\Big | \le C, \end{aligned}$$
(4.18)

where C is independent of \(\epsilon \) and \(\tau \). Here and hereafter we shall use the notation C to denote various positive constants independent of \(\epsilon \) and \(\tau \). Also, we use the mean curvature formula (4.8) and Taylor expansion to derive that

$$\begin{aligned} \Big \Vert {\widehat{p}}+\frac{1}{2}\Big \Vert _{C^{1+\alpha }(\Gamma _\tau )} \le C\epsilon . \end{aligned}$$
(4.19)

It follows from (4.18) and (4.19) that \(C(\xi (r)+\epsilon )\) is a supersolution for \({\widehat{p}}+\frac{1}{2}\), so that

$$\begin{aligned} \Big \Vert {\widehat{p}}+\frac{1}{2}\Big \Vert _{L^\infty (\Omega _\tau )} \le C(\xi (r)+\epsilon )\le C(O(\epsilon ^2)+\epsilon )\le C\epsilon , \end{aligned}$$
(4.20)

where \(\xi (r)\) is defined in Sect. 3.5. Next we are going to estimate \(\Vert {\widehat{p}}\Vert _{C^1(\Omega _\tau )}\) and show that it is bounded independently of \(\epsilon \) and \(\tau \). Introduce the following transformation:

$$\begin{aligned} J_\tau : {\widetilde{r}}=\frac{r-1}{2(\epsilon -\tau S(z))}+1, \quad {\widetilde{z}}=\frac{z}{2\epsilon }. \end{aligned}$$

It maps \(r=1-\epsilon +\tau S(z)\) into \({\widetilde{r}}=\frac{1}{2}\). Notice that our function \({{\widehat{p}}}\) is independent of \(\theta \). Let \({\widetilde{p}}({\widetilde{r}}, {\widetilde{z}})={\widehat{p}}(r, z)+\frac{1}{2}\), then \({\widetilde{p}}\) satisfies

$$\begin{aligned} -\frac{\partial }{\partial {\widetilde{r}}}\Big ((1+ A_1) \frac{\partial {\widetilde{p}}}{\partial {\widetilde{r}}} + A_2 \frac{\partial {\widetilde{p}}}{\partial {\widetilde{z}}}\Big ) - \frac{\partial }{\partial {\widetilde{z}}} \Big (A_3\frac{\partial {\widetilde{p}}}{\partial {\widetilde{r}}} + (1+A_4) \frac{\partial {\widetilde{p}}}{\partial {\widetilde{z}}}\Big ) + A_5\frac{\partial {\widetilde{p}}}{\partial {\widetilde{r}}} + A_6\frac{\partial {\widetilde{p}}}{\partial {\widetilde{z}}} = \epsilon ^2 {\widetilde{f}}, \end{aligned}$$

where coefficients \(A_1,A_2,A_3,A_4\in C^{3+\alpha }([0,T])\), \(A_5\) and \(A_6\) are bounded, \(A_j = O(\epsilon )\) for \(|\tau |\ll \epsilon \ll 1\) (\(1\le j\le 6\)), and \({\widetilde{f}}=\frac{4r}{M_0}\Big [\lambda \frac{(M_0-F)L}{\gamma +H}-\rho _3(M_0-F) - \rho _4 F\Big ]\) is also bounded based on (4.9). Applying the interior sub-Schauder estimates (Theorem 8.32, Gilbarg and Trudinger 1983) on the region \(\Omega _{i_0}: ({\widetilde{r}}, {\widetilde{z}})\in [\frac{1}{2},1]\times [z_{i_0}-2,z_{i_0}+2]\), recalling also (4.19) and (4.20), we obtain

$$\begin{aligned} \begin{aligned} \Vert {\widetilde{p}}&\Vert _{C^{1+\alpha }([\frac{1}{2},1]\times [z_{i_0}-1,z_{i_0}+1])} \\&\le C \Big (\epsilon ^2 \Vert {\widetilde{f}}\Vert _{L^\infty (\Omega _{i_0})} + \Vert {\widetilde{p}}\Vert _{L^\infty (\Omega _{i_0})} + \Vert \widetilde{p}\Vert _{C^{1+\alpha }(\{\widetilde{r}=\frac{1}{2}\})}\Big )\\&\le C \Big (\epsilon ^2\Vert {\widetilde{f}}\Vert _{L^\infty ([\frac{1}{2},1]\times [0,\frac{T}{2\epsilon }])} + \Big \Vert {\widehat{p}}+\frac{1}{2}\Big \Vert _{L^\infty (\Omega _\tau )} + \Big \Vert {\widehat{p}}+\frac{1}{2}\Big \Vert _{C^{1+\alpha }(\Gamma _\tau )}\Big )\\&\le C \epsilon . \end{aligned} \end{aligned}$$

Covering the whole region \([\frac{1}{2},1]\times [0,\frac{T}{2\epsilon }]\) by a sequence of sets of the form \([\frac{1}{2},1]\times [z_{i_0}-1,z_{i_0}+1]\), we obtain

$$\begin{aligned} \Vert {\widetilde{p}}\Vert _{C^{1+\alpha }([\frac{1}{2},1]\times [0,\frac{T}{2\epsilon }])}\le C\epsilon . \end{aligned}$$

We then relate \({\widetilde{p}}\) with \({\widehat{p}}\) to derive

$$\begin{aligned} \Big \Vert {\widehat{p}}+\frac{1}{2}\Big \Vert _{C^1({\overline{\Omega }}_\tau )}\le \frac{1}{\epsilon }\Vert {\widetilde{p}}\Vert _{C^1([\frac{1}{2},1]\times [0,\frac{T}{2\epsilon }])}\le \frac{1}{\epsilon }\Vert {\widetilde{p}}\Vert _{C^{1+\alpha }([\frac{1}{2},1] \times [0,\frac{T}{2\epsilon }])}\le C, \end{aligned}$$

and hence

$$\begin{aligned} \Vert \nabla {\widehat{p}}\Vert _{L^\infty (\Omega _\tau )} \le C. \end{aligned}$$
(4.21)
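The factor \(\frac{1}{\epsilon }\) relating the norms of \({\widehat{p}}+\frac{1}{2}\) and \({\widetilde{p}}\) above comes from the chain rule for the transformation \(J_\tau \); we record this routine computation for convenience:

$$\begin{aligned} \frac{\partial {\widehat{p}}}{\partial r}=\frac{1}{2(\epsilon -\tau S)}\,\frac{\partial {\widetilde{p}}}{\partial {\widetilde{r}}},\qquad \frac{\partial {\widehat{p}}}{\partial z}=\frac{(r-1)\,\tau S_z}{2(\epsilon -\tau S)^2}\,\frac{\partial {\widetilde{p}}}{\partial {\widetilde{r}}}+\frac{1}{2\epsilon }\,\frac{\partial {\widetilde{p}}}{\partial {\widetilde{z}}}. \end{aligned}$$

Since \(1-r\le \epsilon -\tau S\) in \({\overline{\Omega }}_\tau \), \(|S_z|\le 1\) and \(|\tau |\ll \epsilon \), all three coefficients are bounded by \(\frac{1}{\epsilon }\), which gives the stated bound.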

Recalling equation (4.12) and the boundary conditions of \({\widehat{F}}\) in (4.14)-(4.15), we obtain, by the maximum principle,

$$\begin{aligned} 0\le {\widehat{F}}\le M_0 \quad \text {in }\ {\overline{\Omega }}_\tau . \end{aligned}$$
(4.22)

We further claim that this bound for \({\widehat{F}}\) can be improved. By (4.9) and (4.22), the right-hand side of equation (4.12) is bounded, i.e.,

$$\begin{aligned} |-D\Delta {\widehat{F}} - \nabla {\widehat{F}}\cdot \nabla {\widehat{p}}|\le C. \end{aligned}$$
(4.23)

According to (4.21), (4.23) and Sect. 3.5, \(C(\xi (r)+c_1(\beta _2,\epsilon )+c_2(\beta _2,\tau ))\) is a supersolution for \({\widehat{F}}\), so that

$$\begin{aligned} \Vert {\widehat{F}}\Vert _{L^\infty (\Omega _\tau )}&\le \Big \Vert C\Big (\xi (r)+c_1(\beta _2,\epsilon )+c_2(\beta _2,\tau )\Big )\Big \Vert _{L^\infty (\Omega _\tau )}\!\\&\le C\Big (\frac{\epsilon }{\beta _2}+ \frac{2}{\beta _2}|\tau | + O(\epsilon ^2)\Big )\! \le C\epsilon . \end{aligned}$$

Then, in a similar way as for \({\widehat{p}}\), we derive

$$\begin{aligned} \Vert \nabla {\widehat{F}}\Vert _{L^\infty (\Omega _\tau )} \le C. \end{aligned}$$
(4.24)

Above, we have shown that \(({\widehat{L}}, {\widehat{H}}, {\widehat{F}})\in {\mathscr {M}}\), which implies that \({\mathscr {L}}\) maps \({\mathscr {M}}\) into itself. We shall next prove that \({\mathscr {L}}\) is a contraction.

Step 2. \({\mathscr {L}}\) is a contraction.

Suppose that \(({\widehat{L}}_j, {\widehat{H}}_j, {\widehat{F}}_j)={\mathscr {L}}(L_j, H_j, F_j)\) for \(j=1,2\), and set

$$\begin{aligned} {\mathscr {A}}&= \Vert L_1-L_2\Vert _{L^\infty (\Omega _\tau )} + \Vert H_1-H_2\Vert _{L^\infty (\Omega _\tau )} + \Vert F_1-F_2\Vert _{L^\infty (\Omega _\tau )},\\ {\mathscr {B}}&= \Vert {\widehat{L}}_1-{\widehat{L}}_2\Vert _{L^\infty (\Omega _\tau )} + \Vert {\widehat{H}}_1-{\widehat{H}}_2\Vert _{L^\infty (\Omega _\tau )} + \Vert {\widehat{F}}_1-{\widehat{F}}_2\Vert _{L^\infty (\Omega _\tau )}. \end{aligned}$$

Recalling (4.10)-(4.13), (4.21) and (4.24), we get, for some constant \(C^*\) independent of \(\epsilon \) and \(\tau \),

$$\begin{aligned} \begin{aligned}&|\Delta ({\widehat{L}}_1-{\widehat{L}}_2)|\le C^*({\mathscr {A}}+{\mathscr {B}}), \quad |\Delta ({\widehat{H}}_1-{\widehat{H}}_2)|\le C^*({\mathscr {A}}+{\mathscr {B}}),\\&|\nabla {\widehat{F}}_1| + |\nabla {\widehat{F}}_2| \le C^*,\quad |\nabla {\widehat{p}}_1| + |\nabla {\widehat{p}}_2| \le C^*, \quad |\nabla ({\widehat{p}}_1 - {\widehat{p}}_2)|\le C^*{\mathscr {A}},\\&|D\Delta ({\widehat{F}}_1-{\widehat{F}}_2) + \nabla {\widehat{p}}_1 \cdot \nabla ({\widehat{F}}_1 - {\widehat{F}}_2)| \le C^*({\mathscr {A}}+{\mathscr {B}}). \end{aligned} \end{aligned}$$

The function \(C^*({\mathscr {A}}+{\mathscr {B}})(\xi (r) + c_1(\beta ,\epsilon ) + c_2(\beta ,\tau ))\), with \(\xi \), \(c_1\) and \(c_2\) defined in Sect. 3.5, clearly serves as a supersolution, and therefore, by the maximum principle,

$$\begin{aligned}&|{\widehat{L}}_1 - {\widehat{L}}_2| \le C^*({\mathscr {A}}+{\mathscr {B}})(\xi (r)+c_1(\beta _1,\epsilon ) + c_2(\beta _1,\tau )),\\&|{\widehat{H}}_1 - {\widehat{H}}_2| \le C^*({\mathscr {A}}+{\mathscr {B}})(\xi (r)+c_1(\beta _1,\epsilon ) + c_2(\beta _1,\tau )),\\&|{\widehat{F}}_1 - {\widehat{F}}_2| \le C^*({\mathscr {A}}+{\mathscr {B}})(\xi (r)+c_1(\beta _2,\epsilon ) + c_2(\beta _2,\tau )), \end{aligned}$$

which leads to

$$\begin{aligned}&\Vert {\widehat{L}}_1 - {\widehat{L}}_2\Vert _{L^\infty (\Omega _\tau )}\le C^{**}({\mathscr {A}}+{\mathscr {B}})(\epsilon +|\tau |),\\&\Vert {\widehat{H}}_1 - {\widehat{H}}_2\Vert _{L^\infty (\Omega _\tau )}\le C^{**}({\mathscr {A}}+{\mathscr {B}})(\epsilon +|\tau |),\\&\Vert {\widehat{F}}_1 - {\widehat{F}}_2\Vert _{L^\infty (\Omega _\tau )}\le C^{**}({\mathscr {A}}+{\mathscr {B}})(\epsilon +|\tau |), \end{aligned}$$

where \(C^{**}\) is independent of \(\epsilon \) and \(\tau \). The above inequalities imply that

$$\begin{aligned} {\mathscr {B}}\le C^{**}({\mathscr {A}}+{\mathscr {B}})(\epsilon +|\tau |). \end{aligned}$$

By taking \(\epsilon \) sufficiently small and \(|\tau |\ll \epsilon \), we have

$$\begin{aligned} \frac{C^{**}(\epsilon +|\tau |)}{1-C^{**}(\epsilon +|\tau |)}< 1, \end{aligned}$$

so that \({\mathscr {L}}\) is a contraction mapping. Therefore, the proof is complete. \(\square \)

With p being uniquely determined in the system (4.1)-(4.7), we define \({\mathscr {F}}\) by

$$\begin{aligned} {\mathscr {F}}(\tau S,\mu ) = -\frac{\partial p}{\partial {{\varvec{n}}}}\Big |_{\Gamma _\tau }, \end{aligned}$$
(4.25)

where \(\mu \) is our bifurcation parameter defined earlier. Since a stationary free boundary requires \(V_n=-\frac{\partial p}{\partial {{\varvec{n}}}}=0\) (cf. (2.24)), \((L, H, F, p)\) is a symmetry-breaking stationary solution if and only if \({\mathscr {F}}(\tau S,\mu )=0\).

To apply the Crandall-Rabinowitz theorem, we need to compute the Fréchet derivatives of \({\mathscr {F}}\). For a fixed small \(\epsilon \), we formally write \((L, H, F, p)\) as

$$\begin{aligned} L&= L_*+\tau L_1 + O(\tau ^2), \end{aligned}$$
(4.26)
$$\begin{aligned} H&= H_*+\tau H_1 + O(\tau ^2), \end{aligned}$$
(4.27)
$$\begin{aligned} F&= F_*+\tau F_1 + O(\tau ^2), \end{aligned}$$
(4.28)
$$\begin{aligned} p&= p_*+\tau p_1 + O(\tau ^2). \end{aligned}$$
(4.29)

Substituting (4.26)-(4.29) into the equations (4.1)-(4.7) and dropping the \(O(\tau ^2)\) terms, we obtain the following linearized system in \(\Omega _*=\{1-\epsilon<r<1, 0< z< T\}\):

$$\begin{aligned}&-\Delta L_1 = f_1(L_1,H_1,F_1) \quad \text {in }\Omega _*, \end{aligned}$$
(4.30)
$$\begin{aligned}&-\Delta H_1 = f_2(L_1,H_1,F_1) \quad \text {in }\Omega _*, \end{aligned}$$
(4.31)
$$\begin{aligned}&-D\Delta F_1 - \nabla F_1\cdot \nabla p_*-\nabla F_*\cdot \nabla p_1= f_3(L_1,H_1,F_1) \quad \text {in }\Omega _*, \end{aligned}$$
(4.32)
$$\begin{aligned}&-\Delta p_1=f_4(L_1,H_1,F_1) \quad \text {in }\Omega _*, \end{aligned}$$
(4.33)
$$\begin{aligned}&\frac{\partial L_1}{\partial r}=\frac{\partial H_1}{\partial r}=\frac{\partial F_1}{\partial r}=\frac{\partial p_1}{\partial r}=0 \quad \text {on }\Gamma _0, \end{aligned}$$
(4.34)
$$\begin{aligned}&-\frac{\partial L_1}{\partial r}+\beta _1 L_1=\Big (\frac{\partial ^2 L_*}{\partial r^2}-\beta _1\frac{\partial L_*}{\partial r}\Big )\Big |_{r=1-\epsilon } S(z)\quad \text {on }\Gamma _1, \end{aligned}$$
(4.35)
$$\begin{aligned}&-\frac{\partial H_1}{\partial r}+\beta _1 H_1=\Big (\frac{\partial ^2 H_*}{\partial r^2}-\beta _1\frac{\partial H_*}{\partial r}\Big )\Big |_{r=1-\epsilon } S(z) \quad \text {on }\Gamma _1, \end{aligned}$$
(4.36)
$$\begin{aligned}&-\frac{\partial F_1}{\partial r}+\beta _2 F_1=\Big (\frac{\partial ^2 F_*}{\partial r^2}-\beta _2\frac{\partial F_*}{\partial r}\Big )\Big |_{r=1-\epsilon } S(z) \quad \text {on }\Gamma _1, \end{aligned}$$
(4.37)
$$\begin{aligned}&p_1 = \frac{1}{2}\Big [\frac{S(z)}{(1-\epsilon )^2}+S_{zz}\Big ] \quad \text {on }\Gamma _1, \end{aligned}$$
(4.38)

where \(\Gamma _1=\{r=1-\epsilon \}\times [0,T]\), and

$$\begin{aligned}&f_1(L_1,H_1,F_1) =-k_1\frac{(M_0-F_*)K_1L_1}{(K_1+L_*)^2} + k_1\frac{L_* F_1}{K_1+L_*} -\rho _1L_1, \end{aligned}$$
(4.39)
$$\begin{aligned}&f_2(L_1,H_1,F_1) = -k_2 \frac{K_2 H_* F_1}{(K_2+F_*)^2}-k_2\frac{H_1 F_*}{K_2+F_*}-\rho _2 H_1, \end{aligned}$$
(4.40)
$$\begin{aligned}&\begin{array}{ll} f_3(L_1,H_1,F_1) =&{} k_1 \frac{(M_0-F_*)K_1L_1}{(K_1+L_*)^2} - \lambda \frac{F_*(M_0-F_*)L_1}{M_0(\gamma +H_*)} - k_2\frac{F_*H_1}{K_2+F_*} \\ &{}+ \lambda \frac{F_*(M_0-F_*)L_*H_1}{M_0(\gamma +H_*)^2} - k_1\frac{L_*F_1}{K_1+L_*} - k_2 \frac{H_*K_2F_1}{(K_2+F_*)^2} \\ &{} - \lambda \frac{(M_0-2F_*)L_*F_1}{M_0(\gamma +H_*)} + (\rho _3-\rho _4)\frac{(M_0-2F_*)F_1}{M_0}, \end{array} \end{aligned}$$
(4.41)
$$\begin{aligned}&\begin{array}{ll} f_4(L_1,H_1,F_1) =&{} \frac{1}{M_0}\Big [\lambda \frac{(M_0-F_*)L_1}{\gamma +H_*} -\lambda \frac{F_1L_*}{\gamma +H_*}\\ &{}-\lambda \frac{(M_0-F_*)L_*H_1}{(\gamma +H_*)^2} + (\rho _3-\rho _4)F_1\Big ]. \end{array} \end{aligned}$$
(4.42)
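To indicate how the boundary conditions on \(\Gamma _1\) arise, consider (4.35); the derivations of (4.36) and (4.37) are identical, while (4.38) follows similarly from (4.7) and the curvature expansion (4.8). On \(\Gamma _\tau \) the normal derivative is \(\frac{\partial }{\partial {{\varvec{n}}}}=-\frac{\partial }{\partial r}+O(\tau )\), so substituting (4.26) into the boundary condition (4.6) for L at \(r=1-\epsilon +\tau S(z)\) and expanding in \(\tau \) gives

$$\begin{aligned} 0=\Big [-\frac{\partial L_*}{\partial r}+\beta _1(L_*-L_{0})\Big ]\Big |_{r=1-\epsilon } +\tau \Big [-\frac{\partial L_1}{\partial r}+\beta _1 L_1-\Big (\frac{\partial ^2 L_*}{\partial r^2}-\beta _1\frac{\partial L_*}{\partial r}\Big )S(z)\Big ]\Big |_{r=1-\epsilon }+O(\tau ^2). \end{aligned}$$

The O(1) bracket vanishes by the stationary boundary condition (2.22), and setting the \(O(\tau )\) bracket to zero gives (4.35).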

We shall show that the formal expansions (4.26)-(4.29) are actually rigorous.

Remark 4.2

The functions \((L_*, H_*, F_*, p_*)\) are defined in \(\Omega _*\), a domain which is different from \(\Omega _\tau \), so we first need to extend them to a bigger domain. Since these functions depend on r only, the equations they satisfy form a system of second-order ODEs. Therefore we can extend \((L_*, H_*, F_*, p_*)\) from \(1-\epsilon<r<1\) to \(1-2\epsilon<r<1\) by solving a nonlinear initial value problem for this second-order ODE system, with the right-hand sides taken from (2.17)–(2.20) and with the values at \(r=1-\epsilon \), together with their first-order derivatives, as initial data. Using these equations we then find that derivatives of all orders are continuous across \(r=1-\epsilon \). Thus we produce a smooth solution, denoted again by the same notation \((L_*, H_*, F_*, p_*)\), satisfying (2.17)–(2.20) in \(\{1-2\epsilon<r<1\}\) while retaining the boundary conditions at \(r=1\) and \(r=1-\epsilon \) (rather than at \(r=1-2\epsilon \)).

In the remainder of this paper, we assume that \((L_*, H_*, F_*, p_*)\) is this extended solution.

4.1 First-order \(\tau \) estimates

Lemma 4.3

Fix \(\epsilon \) sufficiently small. If \(|\tau |\ll \epsilon \) and \(\Vert S\Vert _{C^{4+\alpha }([0,T])}\le 1\), then we have

$$\begin{aligned}&\max \big \{\Vert L-L_*\Vert _{L^\infty (\Omega _\tau )} , \Vert H-H_*\Vert _{L^\infty (\Omega _\tau )}, \Vert F-F_*\Vert _{L^\infty (\Omega _\tau )}, \Vert p-p_*\Vert _{L^\infty (\Omega _\tau )}\big \} \\&\le C|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])},\\&\max \big \{ \Vert \nabla (F-F_*)\Vert _{L^\infty (\Omega _\tau )}, \Vert \nabla (p-p_*)\Vert _{L^\infty (\Omega _\tau )}\big \}\le \frac{C}{\epsilon }|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$

where C is independent of \(\epsilon \) and \(\tau \).

Proof

Combining (4.1) and the equation (2.17) that \(L_*\) satisfies, we obtain the following equation for \(L-L_*\),

$$\begin{aligned} -\Delta (L-L_*)= & {} -k_1\frac{(M_0-F)L}{K_1+L}-\rho _1 L + k_1\frac{(M_0-F_*)L_*}{K_1+L_*}+\rho _1 L_*\nonumber \\= & {} \Big [-k_1\frac{(M_0-F)K_1}{(K_1+L)(K_1+L_*)}-\rho _1\Big ](L-L_*) + k_1 \frac{L_*}{K_1+L_*}(F-F_*)\nonumber \\\triangleq & {} b_1\cdot (L-L_*) + b_2\cdot (F-F_*) , \end{aligned}$$
(4.43)

where \(b_1=b_1(r,z)\) and \(b_2=b_2(r)\) are both bounded, since \(0\le L_*,L\le L_0\) and \(0\le F \le M_0\) by Lemma 4.2 and (Friedman et al. 2015, Lemma 3.1). In addition, the boundary conditions for \(L-L_*\) are

$$\begin{aligned} \frac{\partial (L-L_*)}{\partial r}\Big |_{r=1}= & {} 0,\\ \Big (\frac{\partial (L-L_*)}{\partial {{\varvec{n}}}}+\beta _1 (L-L_*)\Big )\Big |_{\Gamma _\tau }= & {} \beta _1L_0 + \Big (\frac{1}{\sqrt{1+(\tau S_z)^2}}\frac{\partial L_*}{\partial r}-\beta _1 L_*\Big )\Big |_{r=1-\epsilon +\tau S} \\= & {} -\Big (\frac{\partial L_*}{\partial r}-\beta _1 L_*\Big )\Big |_{r=1-\epsilon } \\&+ \Big (\frac{\partial L_*}{\partial r}-\beta _1 L_*\Big )\Big |_{r=1-\epsilon +\tau S} + O(|\tau S_z|^2). \end{aligned}$$

Since \(L_*, H_*, F_*\) are all bounded and \(|L_*'|\le C\epsilon \) by (3.2), we find from the equation (2.17) that \(|L_*''|\) is bounded, with a bound independent of \(\epsilon \) and \(\tau \). Hence, by Taylor expansion, we have

$$\begin{aligned} \bigg |\Big (\frac{\partial (L-L_*)}{\partial {{\varvec{n}}}}+\beta _1 (L-L_*)\Big )\Big |_{\Gamma _\tau }\bigg | \le \widetilde{C} |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$
(4.44)

where \(\widetilde{C}\) does not depend upon \(\epsilon \) and \(\tau \). Similarly, \(H-H_*\), \(F-F_*\) and \(p-p_*\) satisfy

$$\begin{aligned}&-\Delta (H-H_*) = b_3\cdot (H-H_*) + b_4\cdot (F-F_*) \; \quad \text {in } \Omega _\tau ,\qquad \end{aligned}$$
(4.45)
$$\begin{aligned}&\begin{aligned} -D\Delta (F-F_*) - \nabla p_*\cdot \nabla (F-F_*) =&\nabla F\cdot \nabla (p-p_*) + b_5\cdot (L-L_*)\\&+b_6 \!\cdot (H-H_*) + b_7\!\cdot (F-F_*) \end{aligned}\; \quad \text {in }\Omega _\tau ,\; \end{aligned}$$
(4.46)
$$\begin{aligned}&-\Delta (p-p_*) = b_8\cdot (L-L_*) + b_9 \cdot (H-H_*) + b_{10} \cdot (F-F_*) \; \quad \text {in } \Omega _\tau , \end{aligned}$$
(4.47)
$$\begin{aligned}&\frac{\partial (H-H_*)}{\partial r}\Big |_{r=1} = \frac{\partial (F-F_*)}{\partial r}\Big |_{r=1} = \frac{\partial (p-p_*)}{\partial r}\Big |_{r=1} = 0, \end{aligned}$$
(4.48)
$$\begin{aligned}&\bigg |\Big (\frac{\partial (H-H_*)}{\partial {{\varvec{n}}}}+\beta _1 (H-H_*)\Big )\Big |_{\Gamma _\tau }\bigg |\le \widetilde{C}|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$
(4.49)
$$\begin{aligned}&\bigg |\Big (\frac{\partial (F-F_*)}{\partial {{\varvec{n}}}}+\beta _2 (F-F_*)\Big )\Big |_{\Gamma _\tau }\bigg |\le \widetilde{C}|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$
(4.50)
$$\begin{aligned}&\Big |(p-p_*)|_{\Gamma _\tau }\Big |\le \widetilde{C}|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$
(4.51)

where \(b_i=b_i(r,z)\), \(i=3,4,\ldots ,10\), are all bounded, and the last inequality is based on the formula for \(\kappa \) in (4.8). It was shown earlier that \(\Vert \nabla F\Vert _{L^\infty (\Omega _\tau )}\) and \(\Vert \nabla p\Vert _{L^\infty (\Omega _\tau )}\) are bounded; for simplicity, we use the same constant \(\widetilde{C}\) to control \(\Vert \nabla F\Vert _{L^\infty (\Omega _\tau )}\) and \(\Vert \nabla p\Vert _{L^\infty (\Omega _\tau )}\), namely,

$$\begin{aligned} \Vert \nabla F\Vert _{L^\infty (\Omega _\tau )}\le \widetilde{C}, \quad \Vert \nabla p\Vert _{L^\infty (\Omega _\tau )} \le \widetilde{C}. \end{aligned}$$
(4.52)

We shall use the idea of continuation (Lemma 3.3) to complete the rest of the proof. Multiplying the right-hand sides of (4.43)-(4.47) by \(\delta \) with \(0\le \delta \le 1\), we treat the case \(\delta = 0\) and the case \(0<\delta \le 1\) within a single framework.

In the case \(\delta >0\), we assume that, for some \(M_1>0\) to be determined later on,

$$\begin{aligned}&\begin{array}{ll}\max \Big \{ \Vert L-L_*\Vert _{L^\infty (\Omega _\tau )}, &{}\Vert H-H_*\Vert _{L^\infty (\Omega _\tau )} , \Vert F-F_*\Vert _{L^\infty (\Omega _\tau )}\Big \} \\ &{} \le 2M_1|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{array} \end{aligned}$$
(4.53)
$$\begin{aligned}&\Vert \nabla (F-F_*)\Vert _{L^\infty (\Omega _\tau )} \le \frac{2M_1 C_s}{\epsilon }|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])},\end{aligned}$$
(4.54)
$$\begin{aligned}&\displaystyle \Vert p-p_*\Vert _{L^\infty (\Omega _\tau )} \le 3 \widetilde{C} |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \;\end{aligned}$$
(4.55)
$$\begin{aligned}&\Vert \nabla (p-p_*)\Vert _{L^\infty (\Omega _\tau )} \le \frac{3 C_s \widetilde{C}}{\epsilon }|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$
(4.56)

where \(\widetilde{C}\) is from (4.44), (4.49)-(4.52), and \(C_s\) is a scaling factor which comes from applying the \(C^{1+\alpha }\) Schauder estimate as we did in Lemma 4.2; both \(\widetilde{C}\) and \(C_s\) are independent of \(\epsilon \) and \(\tau \). It follows from (4.53) that the right-hand side of (4.47) is bounded, i.e.,

$$\begin{aligned} \begin{aligned}&|\Delta (p-p_*)| \\&\le 2M_1\delta \big (\Vert b_8\Vert _{L^\infty (\Omega _\tau )}+\Vert b_9\Vert _{L^\infty (\Omega _\tau )} + \Vert b_{10}\Vert _{L^\infty (\Omega _\tau )}\big ) |\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned} \end{aligned}$$
(4.57)

Let

$$\begin{aligned} \phi _1(r)=2{\widetilde{C}}|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}\cos \Big (\frac{1-r}{\epsilon }\Big ), \end{aligned}$$
(4.58)

where \(\widetilde{C}\) is defined above. By a direct computation, we obtain

$$\begin{aligned}&-\Delta \phi _1 = \Big [\frac{1}{\epsilon }\cos \Big (\frac{1-r}{\epsilon }\Big ) - \frac{1}{r} \sin \Big (\frac{1-r}{\epsilon }\Big )\Big ]\frac{2\widetilde{C}}{\epsilon }|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])},\\&\phi _1'(1)=0, \phi _1\Big |_{\Gamma _\tau } = 2\widetilde{C}\cos \Big (1-\frac{\tau S}{\epsilon }\Big )|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$

Fix \(\epsilon \) sufficiently small such that the right-hand side of (4.57) is smaller than \(-\Delta \phi _1\), i.e.,

$$\begin{aligned} |\Delta (p-p_*) | \le -\Delta \phi _1. \end{aligned}$$

Moreover, noticing that \(\cos 1\approx 0.54>1/2\), by (4.51) and the boundary condition for \(\phi _1\) we derive

$$\begin{aligned} \Big |(p-p_*)|_{\Gamma _\tau }\Big |\le \phi _1|_{\Gamma _\tau }. \end{aligned}$$

Hence, by the maximum principle, we have

$$\begin{aligned} \Vert p-p_*\Vert _{L^\infty (\Omega _\tau )} \le \Vert \phi _1\Vert _{L^\infty (\Omega _\tau )} \le 2{\widetilde{C}}|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$

As in the proof of (4.21), we further get

$$\begin{aligned} \Vert \nabla (p-p_*)\Vert _{L^\infty (\Omega _\tau )} \le \frac{2C_s{\widetilde{C}}}{\epsilon }|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$
(4.59)

We shall next consider \(L-L_*\), \(H-H_*\) and \(F-F_*\). It follows from (4.43), (4.45) and the assumption (4.53) that

$$\begin{aligned} |\Delta (L-L_*)| \le C M_1 \delta |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \; \ |\Delta (H-H_*)| \le C M_1 \delta |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])},\nonumber \\ \end{aligned}$$
(4.60)

where C is some universal constant. Recalling also (4.52) and (4.59), we have the following estimate for \(F-F_*\),

$$\begin{aligned} \begin{aligned} \Big |\Delta (F-F_*) + \frac{1}{D}\nabla p_*\cdot \nabla (F-F_*)\Big | \;\le&\;\; \Big \Vert \frac{\delta }{D} \nabla F\cdot \nabla (p-p_*)\Big \Vert _{L^\infty } + \Big \Vert \frac{\delta b_{5}}{D}(L-L_*)\Big \Vert _{L^\infty }\\&+ \Big \Vert \frac{\delta b_{6}}{D}(H-H_*)\Big \Vert _{L^\infty } + \Big \Vert \frac{\delta b_{7}}{D}(F-F_*)\Big \Vert _{L^\infty }\\ \;\le&\;\; \Big (\frac{2C_s}{\epsilon D} \widetilde{C}^2 + C M_1 \Big ) \delta |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}\nonumber \\ \end{aligned}$$
(4.61)

Let

$$\begin{aligned} \phi _2(r)=M_1 |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}\cos \Big (\frac{M_2(1-r)}{\sqrt{\epsilon }}\Big ), \; M_2 = \frac{1}{2} \min \Big (\sqrt{\beta _1},\sqrt{\beta _2}\Big ),\nonumber \\ \end{aligned}$$
(4.62)

where we set \(M_1\) as

$$\begin{aligned} M_1 = \max \Big (\frac{8}{\beta _1}\widetilde{C}, \; \frac{8}{\beta _2}\widetilde{C},\; \frac{32 C_s}{\beta _1 D} \widetilde{C}^2,\; \frac{32 C_s}{\beta _2 D} \widetilde{C}^2 \Big ) . \end{aligned}$$
(4.63)

We now proceed to prove that \(\phi _2(r)\) is a supersolution for \(L-L_*\), \(H-H_*\) and \(F-F_*\). Indeed, by a simple computation, we derive

$$\begin{aligned}&\phi _2'(r) = M_1 \frac{M_2}{\sqrt{\epsilon }} |\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])} \sin \Big (\frac{M_2(1-r)}{\sqrt{\epsilon }}\Big ),\\&\phi _2''(r) = -M_1 \frac{M_2^2}{\epsilon } |\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])} \cos \Big (\frac{M_2(1-r)}{\sqrt{\epsilon }}\Big ). \end{aligned}$$

Since \(\sin x \le x\) and \(\cos x\ge 1 - \frac{x^2}{2}\) for \(x\ge 0\), we have, for \(0<|\tau |\ll \epsilon \) and \(\epsilon \) small,

$$\begin{aligned}&\frac{M_2}{\sqrt{\epsilon }}\sin \Big (\frac{M_2(\epsilon -\tau S)}{\sqrt{\epsilon }}\Big )\le M_2^2\Big (1-\frac{\tau }{\epsilon }S\Big ) \le 2 M_2^2, \\&\cos \Big (\frac{M_2(\epsilon -\tau S)}{\sqrt{\epsilon }}\Big ) \ge 1 - \frac{M_2^2}{2\epsilon } (\epsilon ^2 + \tau ^2 ) \ge \frac{3}{4}. \end{aligned}$$

It follows that

$$\begin{aligned} -\Delta \phi _2= & {} -\phi _2''(r) -\frac{1}{r} \phi _2'(r) \\= & {} M_1\bigg [\frac{M_2^2}{\epsilon }\cos \Big (\frac{M_2(1-r)}{\sqrt{\epsilon }}\Big ) - \frac{M_2}{\sqrt{\epsilon }\; r}\sin \Big (\frac{M_2(1-r)}{\sqrt{\epsilon }}\Big )\bigg ] |\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])}\\\ge & {} M_1\bigg [\frac{M_2^2}{\epsilon }\frac{3}{4} - \frac{2M_2^2}{r} \bigg ] |\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])}\\\ge & {} \frac{1}{2 \epsilon } {M_1 M_2^2} |\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])}, \quad r\in [1-\epsilon +\tau S,1] . \end{aligned}$$

It is clear that \(\phi _2'(1)=0\). For the boundary condition at \(\Gamma _\tau : r=1-\epsilon +\tau S\),

$$\begin{aligned} \begin{aligned} \Big (\frac{\partial \phi _2}{\partial {{\varvec{n}}}} + \beta _j \phi _2\Big )\Big |_{\Gamma _\tau }&= -\phi _2'(1-\epsilon +\tau S) + \beta _j \phi _2(1-\epsilon +\tau S) + O(|\tau S_z|^2)\\&= \bigg [- \frac{M_2}{\sqrt{\epsilon }}\sin \Big (\frac{M_2(\epsilon -\tau S)}{\sqrt{\epsilon }}\Big ) \\&\quad + \beta _j \cos \Big (\frac{M_2(\epsilon -\tau S)}{\sqrt{\epsilon }}\Big ) \bigg ] M_1 |\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])} + O(|\tau S_z|^2)\\&\ge \;\; \Big [-2M_2^2 + \frac{3}{4} \beta _j\Big ]M_1 |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])} + O(|\tau S_z|^2) \\&\ge \;\; \frac{1}{4}\beta _j M_1 |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])} + O(|\tau S_z|^2)\\&\ge \;\; 2\widetilde{C} |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])} + O(|\tau S_z|^2)\\&\ge \;\; {\widetilde{C}}|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \quad j=1,2. \end{aligned} \end{aligned}$$

Noticing that the leading order term in \(-\Delta \phi _2\) is of order \(\frac{1}{\epsilon }\), we can take \(\epsilon \) small such that

$$\begin{aligned} -\Delta \phi _2 \ge C M_1 |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])} \ge \max \big \{|\Delta (L-L_*)|, |\Delta (H-H_*)|\big \}, \end{aligned}$$

where (4.60) is used. Hence, \(\phi _2\) is a supersolution for \(L-L_*\) as well as for \(H-H_*\). For \(F-F_*\), by our choice of \(M_1\) and \(M_2\),

$$\begin{aligned} -\Delta \phi _2 \ge \frac{1}{2\epsilon }\; M_1 M_2^2|\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])} \ge \frac{4C_s{\widetilde{C}}^2}{\epsilon D}|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$

and \(\frac{1}{D} \nabla p_*\cdot \nabla \phi _2\) is only of order \(1/\sqrt{\epsilon }\), we then obtain

$$\begin{aligned} -\Delta \phi _2 - \frac{1}{D} \nabla p_*\cdot \nabla \phi _2 \ge \Big (\frac{2C_s{\widetilde{C}}^2}{\epsilon D} + CM_1 \Big )\delta |\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$

which implies that \(\phi _2\) is also a supersolution for \(F-F_*\). Hence, by the maximum principle,

$$\begin{aligned} \Vert L-L_*\Vert _{L^\infty (\Omega _\tau )}\le & {} M_1|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])},\\ \Vert H-H_*\Vert _{L^\infty (\Omega _\tau )}\le & {} M_1|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])},\\ \Vert F-F_*\Vert _{L^\infty (\Omega _\tau )}\le & {} M_1|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$

where \(M_1\) is independent of \(\epsilon \) and \(\tau \). Using a scaling argument as before, we further have

$$\begin{aligned} \Vert \nabla (F-F_*)\Vert _{L^\infty (\Omega _\tau )} \;\le \; \frac{M_1 C_s}{\epsilon }|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$

Combining the above analysis, we find that condition (ii) of Lemma 3.3 is satisfied for the vector \(\Big \{ \frac{1}{M_1}\Vert L-L_*\Vert _{L^\infty }, \frac{1}{M_1}\Vert H-H_*\Vert _{L^\infty }, \frac{1}{M_1}\Vert F-F_*\Vert _{L^\infty }, \frac{\epsilon }{M_1 C_s}\Vert \nabla (F-F_*)\Vert _{L^\infty }, \frac{1}{2\widetilde{C}^{}}\Vert p-p_*\Vert _{L^\infty }, \frac{\epsilon }{2 C_s\widetilde{C}}\Vert \nabla (p- p_*)\Vert _{L^\infty } \Big \}\). Moreover, these estimates are also valid for the case \(\delta =0\) without the assumptions (4.53)–(4.56) since the right-hand sides are all zero in this case, so that condition (i) of Lemma 3.3 is satisfied. Condition (iii) is obvious, thus the proof is complete. \(\square \)

Based on Lemma 4.3 and applying the Schauder estimates to the equations for \(L-L_*\), \(H-H_*\), \(F-F_*\) and \(p-p_*\), we further obtain the following lemma.

Lemma 4.4

Let \(\epsilon \) be sufficiently small. For \(|\tau |\ll \epsilon \) and \(\Vert S\Vert _{C^{4+\alpha }([0,T])}\le 1\),

$$\begin{aligned}&\Vert L-L_*\Vert _{C^{4+\alpha }({\overline{\Omega }}_\tau )} + \Vert H-H_*\Vert _{C^{4+\alpha }({\overline{\Omega }}_\tau )} + \Vert F-F_*\Vert _{C^{4+\alpha }({\overline{\Omega }}_\tau )} \\&+ \Vert p-p_*\Vert _{C^{2+\alpha }({\overline{\Omega }}_\tau )} \le C|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$

where C is independent of \(\tau \), but is dependent upon \(\epsilon \).

4.2 Second-order \(\tau \) estimates

We now derive second-order \(\tau \) estimates. Notice that \(L_1\), \(H_1\), \(F_1\) and \(p_1\) are all defined in \(\Omega _*\), while \(L-L_*\), \(H-H_*\), \(F-F_*\) and \(p-p_*\) are defined in \(\Omega _\tau \), so we first need to transform the domain of \(L_1\), \(H_1\), \(F_1\) and \(p_1\) from \(\Omega _*\) to \(\Omega _\tau \). Since all our functions are independent of \(\theta \), we introduce the transformation \(Y_\tau \):

$$\begin{aligned} r= \frac{({\overline{r}}-1)(\epsilon -\tau S({\overline{z}}))}{\epsilon } + 1, \; \; \qquad z={\overline{z}}. \end{aligned}$$
(4.64)

We point out that \(Y_\tau \) maps \(\Omega _*\) onto \(\Omega _\tau \) and the inverse transformation \(Y_\tau ^{-1}\) maps \(\Omega _\tau \) onto \(\Omega _*\). Set

$$\begin{aligned}&{\overline{L}}_1(r, z)=L_1(Y_\tau ^{-1}(r, z)), \quad {\overline{H}}_1(r, z)=H_1(Y_\tau ^{-1}(r, z)), \end{aligned}$$
(4.65)
$$\begin{aligned}&{\overline{F}}_1(r, z)=F_1(Y_\tau ^{-1}(r, z)), \quad {\overline{p}}_1(r, z)=p_1(Y_\tau ^{-1}(r, z)), \end{aligned}$$
(4.66)

then \({\overline{L}}_1\), \({\overline{H}}_1\), \({\overline{F}}_1\), \({\overline{p}}_1\) and \(L-L_*\), \(H-H_*\), \(F-F_*\), \(p-p_*\) are all defined in the same domain \(\Omega _\tau \) so that we can establish the second-order \(\tau \) estimates.
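As a quick sanity check of the boundary behavior of \(Y_\tau \) (illustrative only; the shape function \(S\) and the values of \(\epsilon \), \(\tau \), \(T\) below are placeholders, not quantities from the model), one can verify numerically that \({\overline{r}}=1\) is mapped to \(r=1\) and \({\overline{r}}=1-\epsilon \) is mapped to \(r=1-\epsilon +\tau S(z)\):

```python
import numpy as np

eps, tau, T = 0.05, 1e-3, 10.0            # placeholder values
S = lambda z: np.cos(2 * np.pi * z / T)   # hypothetical shape function S

# the transformation (4.64): r = (rbar - 1)(eps - tau*S(zbar))/eps + 1
Y = lambda rbar, zbar: (rbar - 1) * (eps - tau * S(zbar)) / eps + 1

z = np.linspace(0.0, T, 7)
print(np.allclose(Y(1.0, z), 1.0))                       # outer boundary r = 1 is fixed
print(np.allclose(Y(1 - eps, z), 1 - eps + tau * S(z)))  # inner boundary goes to Gamma_tau
```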

For the equations (4.30)-(4.42), using the same techniques as in the proof of Lemma 4.3 and recalling Lemma 4.4, we can derive that \(L_1\), \(H_1\), \(F_1\in C^{4+\alpha }(\Omega _*)\) and \(p_1\in C^{2+\alpha }(\Omega _*)\) (or \({\overline{L}}_1,\ {\overline{H}}_1,\ {\overline{F}}_1\in C^{4+\alpha }(\Omega _\tau )\) and \({\overline{p}}_1\in C^{2+\alpha }(\Omega _\tau )\)). Their Schauder estimates may depend on \(\epsilon \), but it is crucial that the \(L^\infty \) estimates are independent of \(\epsilon \) and \(\tau \).

Lemma 4.5

Let \(\epsilon \) be sufficiently small. For \(|\tau |\ll \epsilon \) and \(\Vert S\Vert _{C^{4+\alpha }([0,T])}\le 1\), the following estimates hold:

$$\begin{aligned}&\max \big \{\Vert L-L_*-\tau {\overline{L}}_1\Vert _{L^\infty (\Omega _\tau )} , \Vert H-H_*-\tau {\overline{H}}_1\Vert _{L^\infty (\Omega _\tau )}\big \}\\&\; \le \; C|\tau |^2 \Vert S\Vert _{C^{4+\alpha }([0,T])},\\&\max \big \{\Vert F-F_*-\tau {\overline{F}}_1\Vert _{L^\infty (\Omega _\tau )} , \Vert p-p_*-\tau {\overline{p}}_1\Vert _{L^\infty (\Omega _\tau )} \big \}\le C|\tau |^2 \Vert S\Vert _{C^{4+\alpha }([0,T])},\\&\max \big \{ \Vert \nabla (F-F_*-\tau {\overline{F}}_1)\Vert _{L^\infty (\Omega _\tau )}, \Vert \nabla (p-p_*-\tau {\overline{p}}_1) \Vert _{L^\infty (\Omega _\tau )}\big \}\\&\;\le \; \frac{C}{\epsilon }|\tau |^2\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$

where C is independent of \(\epsilon \) and \(\tau \).

Proof

We shall present the derivation of the estimate for \(F-F_*-\tau {\overline{F}}_1\) as an example. The estimates for \(L-L_*-\tau {\overline{L}}_1\), \(H-H_*-\tau {\overline{H}}_1\) and \(p-p_*-\tau {\overline{p}}_1\) are similar and are actually easier.

The first step in deriving the second-order \(\tau \) estimate is to calculate the equation for \(F-F_*-\tau {\overline{F}}_1\). Recalling the transformation \(Y_\tau \) in (4.64), \({\overline{F}}_1(r,z)=F_1(Y_\tau ^{-1}(r,z))\) and (4.32), we obtain the equation for \({\overline{F}}_1\) in \(\Omega _\tau \),

$$\begin{aligned} -D\Delta {\overline{F}}_1 - \nabla {\overline{F}}_1\cdot \nabla p_*-\nabla F_*\cdot \nabla {\overline{p}}_1= k_1 \frac{(M_0-F_*)K_1{\overline{L}}_1}{(K_1+L_*)^2} - k_1\frac{{{\overline{F}}}_1L_*}{K_1+L_*} +\cdots + {\overline{f}}^F, \end{aligned}$$
(4.67)

where \({\overline{f}}^F\) collects the terms generated by the transformation \(Y_\tau \) and involves at most second-order derivatives of \(\tau S\); hence

$$\begin{aligned} \Vert {\overline{f}}^F\Vert _{L^\infty (\Omega _\tau )}\le C|\tau |\Vert S\Vert _{C^{2+\alpha }([0,T])} \le C|\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$

Combining (4.3), the equation (2.19) for \(F_*\) and (4.67), we derive that \(F-F_*-\tau {\overline{F}}_1\) satisfies

$$\begin{aligned}&-D \Delta (F-F_*-\tau {\overline{F}}_1) - \nabla F\cdot \nabla p + \nabla F_*\cdot \nabla p_* + \tau \nabla {\overline{F}}_1 \cdot \nabla p_* + \tau \nabla F_* \cdot \nabla {\overline{p}}_1 \nonumber \\&\quad = \text {I} + \text {II} + \tau {\overline{f}}^F, \end{aligned}$$
(4.68)

where by Lemma 3.4, I is written as, for bounded functions \(b_{11}(r), b_{12}(r),\) and \(b_{13}(r)\),

$$\begin{aligned} \text {I} = b_{11}(r)(L-L_*-\tau {\overline{L}}_1) + b_{12}(r)(H-H_*-\tau {\overline{H}}_1) + b_{13}(r)(F-F_*-\tau {\overline{F}}_1); \end{aligned}$$

and II is bounded by \(|(L-L_*,H-H_*,F-F_*)|^2\), hence

$$\begin{aligned} \Vert \text {II}\Vert _{L^\infty (\Omega _\tau )} \le C|\tau |^2 \Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$

where we have used Lemma 4.3.

In order to estimate \(F-F_*-\tau {\overline{F}}_1\), we rewrite the gradient terms of the left-hand side of (4.68) as

$$\begin{aligned} \begin{aligned}&- \nabla F\cdot \nabla p + \nabla F_*\cdot \nabla p_* + \tau \nabla {\overline{F}}_1 \cdot \nabla p_* + \tau \nabla F_* \cdot \nabla {\overline{p}}_1\\&\qquad = -\nabla p_* \cdot \nabla (F-F_*-\tau {\overline{F}}_1) - \nabla F\cdot \nabla (p-p_*-\tau {\overline{p}}_1) - \tau \nabla (F-F_*)\cdot \nabla {\overline{p}}_1, \end{aligned} \end{aligned}$$

then (4.68) yields

$$\begin{aligned}&-D\Delta (F-F_*-\tau {\overline{F}}_1) - \nabla p_*\cdot \nabla (F-F_*-\tau {\overline{F}}_1) \nonumber \\&\quad = \nabla F\cdot \nabla (p-p_*-\tau {\overline{p}}_1) + \tau \nabla (F-F_*)\cdot \nabla {\overline{p}}_1 + \text {I} + \text {II} + \tau {\overline{f}}^F. \end{aligned}$$
(4.69)

By Lemma 4.3,

$$\begin{aligned} \Vert \nabla (F-F_*)\Vert _{L^\infty (\Omega _\tau )} \le \frac{C}{\epsilon }|\tau |\Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$
(4.70)

Furthermore, it follows from (4.33), (4.38) and \({\overline{p}}_1(r,z)=p_1(Y_\tau ^{-1}(r,z))\) in (4.66) that \({\overline{p}}_1\) satisfies

$$\begin{aligned} -\Delta {\overline{p}}_1=f_4({\overline{L}}_1,{\overline{H}}_1,{\overline{F}}_1) + {\overline{f}}^p, \qquad {\overline{p}}_1\big |_{\Gamma _\tau } = \frac{1}{2}\Big [\frac{S(z)}{(1-\epsilon )^2}+S_{zz}\Big ], \end{aligned}$$

where \({\overline{f}}^p\) is generated by applying the transformation \(Y_\tau \); hence, as for \({\overline{f}}^F\),

$$\begin{aligned} \Vert {\overline{f}}^p\Vert _{L^\infty (\Omega _\tau )}\le C|\tau |\Vert S\Vert _{C^{2+\alpha }([0,T])} \le C|\tau | \Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$

Then we derive

$$\begin{aligned} \Big |\Delta \Big ({\overline{p}}_1 - \frac{1}{2}(S+S_{zz})\Big )\Big | \le C, \quad \text {and} \quad \Big \Vert {\overline{p}}_1 - \frac{1}{2}(S+S_{zz})\Big \Vert _{C^{1+\alpha }(\Gamma _\tau )} \le C\epsilon , \end{aligned}$$

since \(S\in C^{4+\alpha }\); using the same technique as in Lemma 4.2, we get

$$\begin{aligned} \Big \Vert \nabla \Big ({\overline{p}}_1 - \frac{1}{2}(S+S_{zz})\Big )\Big \Vert _{L^\infty (\Omega _\tau )} \le C, \end{aligned}$$

hence

$$\begin{aligned} \Vert \nabla {\overline{p}}_1\Vert _{L^\infty (\Omega _\tau )} \le C \end{aligned}$$

for a constant C which is independent of \(\epsilon \) and \(\tau \). Together with (4.70), we obtain

$$\begin{aligned} \Vert \tau \nabla (F-F_*) \cdot \nabla {\overline{p}}_1\Vert _{L^\infty } \le \frac{C}{\epsilon }|\tau |^2 \Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$

Combining with the estimates we derived before, we have

$$\begin{aligned}&\Big | \Delta (F-F_*-\tau {\overline{F}}_1) + \frac{1}{D}\nabla p_* \cdot \nabla (F-F_*-\tau {\overline{F}}_1)\Big |\le \Big \Vert \frac{1}{D} \nabla F \cdot \nabla (p-p_*-\tau {\overline{p}}_1)\Big \Vert _{L^\infty }\\&\quad + \Big \Vert \frac{b_{11}(r)}{D}(L-L_*-\tau {\overline{L}}_1)\Big \Vert _{L^\infty } + \Big \Vert \frac{b_{12}(r)}{D}(H-H_*-\tau {\overline{H}}_1)\Big \Vert _{L^\infty }\\&\quad +\Big \Vert \frac{b_{13}(r)}{D}(F-F_*-\tau {\overline{F}}_1)\Big \Vert _{L^\infty } + \frac{C}{\epsilon } |\tau |^2 \Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$

Notice that the above inequality has a structure similar to (4.61), and the presence of \(\epsilon \) in the denominator does not cause a problem since the extra factor \(\frac{1}{\epsilon }\) appears on the right-hand side when we apply the operator to our supersolution. Hence we can use the same technique and a similar supersolution to establish

$$\begin{aligned} \Vert F-F_*-\tau {\overline{F}}_1\Vert _{L^\infty (\Omega _\tau )} \le C|\tau |^2 \Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$

and

$$\begin{aligned} \Vert \nabla (F-F_*-\tau {\overline{F}}_1)\Vert _{L^\infty (\Omega _\tau )} \le \frac{C}{\epsilon }|\tau |^2\Vert S\Vert _{C^{4+\alpha }([0,T])}. \end{aligned}$$

Therefore, our proof is complete. \(\square \)

Following Lemma 4.4, we further have

Lemma 4.6

Fix \(\epsilon \) sufficiently small. If \(|\tau |\ll \epsilon \) and \(\Vert S\Vert _{C^{4+\alpha }([0,T])}\le 1\), then

$$\begin{aligned}&\Vert L-L_*-\tau {\overline{L}}_1\Vert _{C^{4+\alpha }({\overline{\Omega }}_\tau )} \le C|\tau |^2\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$
(4.71)
$$\begin{aligned}&\Vert H-H_*-\tau {\overline{H}}_1\Vert _{C^{4+\alpha }({\overline{\Omega }}_\tau )} \le C|\tau |^2\Vert S\Vert _{C^{4+\alpha }([0,T])},\end{aligned}$$
(4.72)
$$\begin{aligned}&\Vert F-F_*-\tau {\overline{F}}_1\Vert _{C^{4+\alpha }({\overline{\Omega }}_\tau )} \le C|\tau |^2\Vert S\Vert _{C^{4+\alpha }([0,T])},\end{aligned}$$
(4.73)
$$\begin{aligned}&\Vert p-p_*-\tau {\overline{p}}_1\Vert _{C^{2+\alpha }({\overline{\Omega }}_\tau )} \le C|\tau |^2\Vert S\Vert _{C^{4+\alpha }([0,T])}, \end{aligned}$$
(4.74)

where C is independent of \(\tau \), but is dependent on \(\epsilon \).

Remark 4.3

The estimates (4.71)-(4.74) are uniformly valid for \(|\tau |\) small and \(\Vert S \Vert _{C^{4+\alpha }([0,T])}\le 1\), which implies that the expansions in (4.26)-(4.29) are rigorous. We are now ready to compute the Fréchet derivatives of \({\mathscr {F}}\). As in the proof of (Zhao and Hu 2021, Lemma 3.6), we can derive that the Fréchet derivatives of \({\mathscr {F}}({\widetilde{R}},\mu )\) at the point \((0,\mu )\) are given by

$$\begin{aligned}&\Big [{\mathscr {F}}_{{\widetilde{R}}}(0,\mu )\Big ]S(z) = \frac{\partial ^2 p_*}{\partial r^2}\Big |_{r=1-\epsilon }S(z) + \frac{\partial p_1}{\partial r}\Big |_{r=1-\epsilon }, \end{aligned}$$
(4.75)
$$\begin{aligned}&\left[ {\mathscr {F}}_{\mu {\widetilde{R}}}(0,\mu )\right] S(z) = \frac{\partial }{\partial \mu }\Big (\frac{\partial ^2 p_*}{\partial r^2}\Big |_{r=1-\epsilon }\Big )S(z) + \frac{\partial }{\partial \mu }\Big (\frac{\partial p_1}{\partial r}\Big |_{r=1-\epsilon }\Big ). \end{aligned}$$
(4.76)

By (4.75), we find that the mapping \({\mathscr {F}}(\cdot ,\mu ): X^{4+\alpha }_1 \rightarrow X^{1+\alpha }_1\) is continuous with a continuous first-order Fréchet derivative, and the same argument shows that this is also true for Fréchet derivatives of any order. (4.75) accomplishes the following: (a) the mapping \({\mathscr {F}}\) is Fréchet differentiable at \({{\widetilde{R}}}=0\), and (b) the Fréchet derivative at \({{\widetilde{R}}}=0\) is given explicitly by the formula on the right-hand side. Since \(\mu \) is a scalar, the Fréchet derivatives in \(\mu \) coincide with the regular derivatives in \(\mu \), which makes the rigorous derivations much simpler. Notice that the above argument shows that the differentiability is eventually reduced to the regularity of the corresponding PDEs, and an explicit formula is not needed if we are only interested in differentiability; therefore a similar argument shows that this mapping is Fréchet differentiable in \(({\widetilde{R}},\mu )\); furthermore \({\mathscr {F}}_{{\widetilde{R}}}({\widetilde{R}},\mu )\) (or \({\mathscr {F}}_\mu ({\widetilde{R}},\mu )\)) is obtained by solving a linearized problem about \(({\widetilde{R}},\mu )\) with respect to \({\widetilde{R}}\) (or \(\mu \)). By using the Schauder estimates we can then further obtain the differentiability of \({\mathscr {F}}({\widetilde{R}},\mu )\) to any order.

5 Bifurcations - Proof of Theorem 2.2

In this section, we shall employ the Fréchet derivatives of \({\mathscr {F}}\) obtained in (4.75) and (4.76) to verify the four conditions of the Crandall-Rabinowitz theorem and complete the proof of Theorem 2.2. Since \(p_1\) cannot be solved explicitly, we need to derive sharp estimates for it. Note that the estimate on \(p_*\) is given by (3.4) and (3.5).

5.1 Estimates for \(p_1\)

Set \(S(z)=\cos \Big (\frac{2\pi n}{T} z\Big )\) in the linearized system (4.30)-(4.38). It is clear that \(S'(0)=S'(T)=0\). Using a separation of variables, we seek a solution of the form

$$\begin{aligned}&L_1=L_1^n(r) \cos \Big (\frac{2\pi n}{T}z\Big ), \quad H_1=H_1^n(r)\cos \Big (\frac{2\pi n}{ T} z\Big ), \end{aligned}$$
(5.1)
$$\begin{aligned}&F_1=F_1^n(r) \cos \Big (\frac{2\pi n}{T}z\Big ), \quad p_1=p_1^n(r)\cos \Big (\frac{2\pi n}{T}z\Big ). \end{aligned}$$
(5.2)

Substituting (5.1) and (5.2) into the equations (4.30)-(4.38), we derive that \((L_1^n(r),H_1^n(r),F_1^n(r),p_1^n(r))\) satisfies

$$\begin{aligned}&-\frac{\partial ^2 L_1^n}{\partial r^2}-\frac{1}{r}\frac{\partial L_1^n}{\partial r} + \Big (\frac{2\pi n}{T}\Big )^2L_1^n= f_1(L_1^n,H_1^n,F_1^n) \;&\text {in }\Omega _*, \end{aligned}$$
(5.3)
$$\begin{aligned}&-\frac{\partial ^2 H_1^n}{\partial r^2}-\frac{1}{r}\frac{\partial H_1^n}{\partial r} + \Big (\frac{2\pi n}{T}\Big )^2H_1^n= f_2(L_1^n,H_1^n,F_1^n) \;&\text {in }\Omega _*,\end{aligned}$$
(5.4)
$$\begin{aligned}&\begin{array}{ll} -D\frac{\partial ^2 F_1^n}{\partial r^2}&{}-\frac{D}{r}\frac{\partial F_1^n}{\partial r} + D\Big (\frac{2\pi n}{T}\Big )^2F_1^n - \frac{\partial F_1^n}{\partial r} \frac{\partial p_*}{\partial r} \\ &{}= f_3(L_1^n,H_1^n, F_1^n)+\frac{\partial F_*}{\partial r}\frac{\partial p_1^n}{\partial r} \;\end{array}&\text {in }\Omega _*, \end{aligned}$$
(5.5)
$$\begin{aligned}&-\frac{\partial ^2 p_1^n}{\partial r^2}-\frac{1}{r}\frac{\partial p_1^n}{\partial r} + \Big (\frac{2\pi n}{T}\Big )^2p_1^n=f_4(L_1^n,H_1^n,F_1^n) \;&\text {in }\Omega _*, \end{aligned}$$
(5.6)
$$\begin{aligned}&\frac{\partial L_1^n}{\partial r}=\frac{\partial H_1^n}{\partial r}=\frac{\partial F_1^n}{\partial r}=\frac{\partial p_1^n}{\partial r}=0&r=1,\end{aligned}$$
(5.7)
$$\begin{aligned}&-\frac{\partial L_1^n}{\partial r}+\beta _1 L_1^n=\Big (\frac{\partial ^2 L_*}{\partial r^2}-\beta _1\frac{\partial L_*}{\partial r}\Big )\Big |_{r=1-\epsilon }&r=1-\epsilon ,\end{aligned}$$
(5.8)
$$\begin{aligned}&-\frac{\partial H_1^n}{\partial r}+\beta _1 H_1^n=\Big (\frac{\partial ^2 H_*}{\partial r^2}-\beta _1\frac{\partial H_*}{\partial r}\Big )\Big |_{r=1-\epsilon }&r=1-\epsilon ,\end{aligned}$$
(5.9)
$$\begin{aligned}&-\frac{\partial F_1^n}{\partial r}+\beta _2 F_1^n=\Big (\frac{\partial ^2 F_*}{\partial r^2}-\beta _2\frac{\partial F_*}{\partial r}\Big )\Big |_{r=1-\epsilon }&r=1-\epsilon ,\end{aligned}$$
(5.10)
$$\begin{aligned}&p_1^n = \frac{1}{2}\Big [\frac{1}{(1-\epsilon )^2}-\Big (\frac{2\pi n}{T}\Big )^2\Big ]&r=1-\epsilon , \end{aligned}$$
(5.11)

where the \(f_i\) \((i=1,2,3,4)\) are defined in (4.39)-(4.42). It has been shown earlier that the solution \((L_1^n(r),H_1^n(r),F_1^n(r),p_1^n(r))\) to the system (5.3)-(5.11) is unique. We now proceed to determine the structure of the solution and derive the estimates needed to complete our bifurcation proof.

For simplicity of the computation, we make the boundary conditions (5.8)-(5.10) homogeneous by setting

$$\begin{aligned} {\widetilde{L}}_1^n(r) = L_1^n(r) - \frac{1}{\beta _1}\Big (\frac{\partial ^2 L_*}{\partial r^2}-\beta _1\frac{\partial L_*}{\partial r}\Big )\Big |_{r=1-\epsilon }, \end{aligned}$$
(5.12)
$$\begin{aligned} {\widetilde{H}}_1^n(r) = H_1^n(r) - \frac{1}{\beta _1}\Big (\frac{\partial ^2 H_*}{\partial r^2}-\beta _1\frac{\partial H_*}{\partial r}\Big )\Big |_{r=1-\epsilon }, \end{aligned}$$
(5.13)
$$\begin{aligned} {\widetilde{F}}_1^n(r) = F_1^n(r) - \frac{1}{\beta _2}\Big (\frac{\partial ^2 F_*}{\partial r^2}-\beta _2\frac{\partial F_*}{\partial r}\Big )\Big |_{r=1-\epsilon }. \end{aligned}$$
(5.14)

Accordingly, \({\widetilde{L}}_1^n(r)\), \({\widetilde{H}}_1^n(r)\), \({\widetilde{F}}_1^n(r)\) satisfy the following equations:

$$\begin{aligned}&-\frac{\partial ^2 {\widetilde{L}}_1^n}{\partial r^2}-\frac{1}{r}\frac{\partial {\widetilde{L}}_1^n}{\partial r} + \Big (\frac{2\pi n}{T}\Big )^2{\widetilde{L}}_1^n= \widetilde{f}_1 \quad&\text {in }\Omega _*, \end{aligned}$$
(5.15)
$$\begin{aligned}&-\frac{\partial ^2 {\widetilde{H}}_1^n}{\partial r^2}-\frac{1}{r}\frac{\partial {\widetilde{H}}_1^n}{\partial r} + \Big (\frac{2\pi n}{T}\Big )^2{\widetilde{H}}_1^n=\widetilde{f}_2 \quad&\text {in }\Omega _*, \end{aligned}$$
(5.16)
$$\begin{aligned}&\begin{array}{ll} -D\frac{\partial ^2 {\widetilde{F}}_1^n}{\partial r^2}-\frac{D}{r}\frac{\partial {\widetilde{F}}_1^n}{\partial r} + D\Big (\frac{2\pi n}{T}\Big )^2{\widetilde{F}}_1^n- \frac{\partial {\widetilde{F}}_1^n}{\partial r} \frac{\partial p_*}{\partial r} = \widetilde{f}_3 \end{array} &\text {in }\Omega _*, \end{aligned}$$
(5.17)
$$\begin{aligned}&\frac{\partial {\widetilde{L}}_1^n}{\partial r}=\frac{\partial {\widetilde{H}}_1^n}{\partial r}=\frac{\partial {\widetilde{F}}_1^n}{\partial r}=0&r=1, \end{aligned}$$
(5.18)
$$\begin{aligned}&-\frac{\partial {\widetilde{L}}_1^n}{\partial r}+\beta _1 {\widetilde{L}}_1^n=0&r=1-\epsilon ,\end{aligned}$$
(5.19)
$$\begin{aligned}&-\frac{\partial {\widetilde{H}}_1^n}{\partial r}+\beta _1 {\widetilde{H}}_1^n=0&r=1-\epsilon ,\end{aligned}$$
(5.20)
$$\begin{aligned}&-\frac{\partial {\widetilde{F}}_1^n}{\partial r}+\beta _2 {\widetilde{F}}_1^n=0&r=1-\epsilon , \end{aligned}$$
(5.21)

where

$$\begin{aligned}&\widetilde{f}_1 \triangleq f_1(L_1^n,H_1^n,F_1^n) -\frac{1}{\beta _1}\Big (\frac{2\pi n}{T}\Big )^2\Big (\frac{\partial ^2 L_*}{\partial r^2}-\beta _1\frac{\partial L_*}{\partial r}\Big )\Big |_{r=1-\epsilon }, \end{aligned}$$
(5.22)
$$\begin{aligned}&\widetilde{f}_2 \triangleq f_2(L_1^n,H_1^n,F_1^n)-\frac{1}{\beta _1}\Big (\frac{2\pi n}{T}\Big )^2\Big (\frac{\partial ^2 H_*}{\partial r^2}-\beta _1\frac{\partial H_*}{\partial r}\Big )\Big |_{r=1-\epsilon }, \end{aligned}$$
(5.23)
$$\begin{aligned}&\widetilde{f}_3 \triangleq f_3(L_1^n,H_1^n,F_1^n) +\frac{\partial F_*}{\partial r}\frac{\partial p_1^n}{\partial r}- \frac{D}{\beta _2}\Big (\frac{2\pi n}{T}\Big )^2\Big (\frac{\partial ^2 F_*}{\partial r^2}-\beta _2\frac{\partial F_*}{\partial r}\Big )\Big |_{r=1-\epsilon },\qquad \end{aligned}$$
(5.24)

and \(p_1^n\) is defined by (5.6) and (5.11).

From now on we shall write \(m = \frac{2\pi n}{T}\), where \(n=0,1,2,\cdots \). Thus m takes values \(0, \frac{2\pi }{T}, \frac{4\pi }{T}, \frac{6\pi }{T}, \ldots \).

Lemma 5.1

Let \(\psi \) be a solution of

$$\begin{aligned}&- \psi ''- \frac{1}{r} \psi ' + m^2 \psi = \eta + f(r), \qquad 1-\epsilon< r < 1 , \end{aligned}$$
(5.25)
$$\begin{aligned}&\psi '(1) = 0, \end{aligned}$$
(5.26)

where \(\eta \) is a constant and \(m\ge 0\) is defined as above. Then the solution is given by

$$\begin{aligned}&\psi - \psi _1 = \left\{ \begin{array}{ll} \displaystyle A I_0(mr) + B K_0(mr) + Q[f](r), \quad &{} m\ne 0, \\ A + Q[f](r), &{} m= 0, \end{array}\right. \end{aligned}$$
(5.27)

with

$$\begin{aligned} B= & {} A\frac{I_1(m)}{K_1(m)}+\frac{1}{m K_1(m)} Q[f]'(1), \quad m\ne 0, \nonumber \\ Q[f](r)= & {} \left\{ \begin{array}{ll} \displaystyle I_0(mr) \int _r^1 K_0(ms) f(s)s \, \mathrm {d}s \\ \qquad \displaystyle + K_0(mr) \int _{1-\epsilon }^r I_0(ms) f(s)s \, \mathrm {d}s, \ &{} m\ne 0,\\ \qquad \displaystyle - \int _{r}^1 \Big (\log \frac{s}{r}\Big )\; f(s) s \, \mathrm {d}s, &{} m= 0, \end{array}\right. \end{aligned}$$
(5.28)

and

$$\begin{aligned}&\psi _1 = \left\{ \begin{array}{ll} \displaystyle \frac{\eta }{m^2}, &{}\quad m\ne 0, \\ \displaystyle \eta \Big ( \frac{1-r^2}{4} + \frac{1}{2}\log r\Big ), &{}\quad m = 0, \\ \end{array}\right. \qquad \psi _1'(1) = 0 . \end{aligned}$$
(5.29)

The special solution Q[f](r) satisfies

$$\begin{aligned} \big | Q[f](r) \big | \le \min \Big (\frac{1}{m^2},\frac{C\epsilon }{m} \Big )\Vert f\Vert _{L^\infty }, \ \big | Q[f]'(r) \big | \le \min \Big (\frac{2}{m}, C\epsilon \Big ) \Vert f\Vert _{L^\infty } , \; m\ne 0,\nonumber \\ \end{aligned}$$
(5.30)

and

$$\begin{aligned} \big | Q[f](r) \big | \le C\epsilon \Vert f\Vert _{L^\infty }, \qquad \big | Q[f]'(r) \big | \le C\epsilon \Vert f\Vert _{L^\infty } , \qquad m = 0, \end{aligned}$$
(5.31)

where C is independent of \(\varepsilon \) and m.

Proof

Recall from (10.25.1) of Frank et al. (2010) that \(I_0(\xi )\) and \(K_0(\xi )\) are two independent solutions of the equation \(\frac{d^2w}{d\xi ^2} + \frac{1}{\xi }\frac{dw}{d\xi } - w =0\), and they also satisfy ((10.29.3), (10.28.2) of Frank et al. (2010))

$$\begin{aligned} I'_0(\xi ) = I_1(\xi ), \quad K_0'(\xi )=-K_1(\xi ), \quad I_0(\xi )K_1(\xi )+I_1(\xi )K_0(\xi )=\frac{1}{\xi }. \end{aligned}$$
(5.32)
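These identities can be confirmed numerically with scipy (a sanity check only, not part of the argument); the test value \(\xi \) below is arbitrary:

```python
from scipy.special import i0, i1, k0, k1

xi = 2.7                                               # arbitrary test value
# Wronskian-type identity: I_0(xi) K_1(xi) + I_1(xi) K_0(xi) = 1/xi
print(i0(xi) * k1(xi) + i1(xi) * k0(xi) - 1.0 / xi)    # ~ 0

# I_0' = I_1 and K_0' = -K_1, checked by central differences
h = 1e-6
print((i0(xi + h) - i0(xi - h)) / (2 * h) - i1(xi))    # ~ 0
print((k0(xi + h) - k0(xi - h)) / (2 * h) + k1(xi))    # ~ 0
```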

Using these identities, we find that \(I_0(m r)\) and \(K_0(m r)\) are solutions of the homogeneous equation and \(Q[f](r)\) is a particular solution of the non-homogeneous equation. Moreover, the identities ((10.29.2) of Frank et al. (2010))

$$\begin{aligned} \frac{d}{d\xi }\Big (\xi I_1(\xi ) \Big ) = \xi I_0(\xi ), \qquad \qquad \frac{d}{d\xi }\Big (\xi K_1(\xi ) \Big ) = - \xi K_0(\xi ), \end{aligned}$$
(5.33)

or,

$$\begin{aligned} I'_1(\xi )= I_0(\xi ) - \frac{1}{\xi } I_1(\xi ), \qquad \qquad K'_1(\xi )=- K_0(\xi ) - \frac{1}{\xi } K_1(\xi ) \end{aligned}$$
(5.34)

are also very useful.

For \(m\ne 0\), it follows from (5.28), (5.33) and the last property of (5.32) that

$$\begin{aligned} \begin{aligned} \big |Q[f](r)\big | \le&\Vert f\Vert _{L^\infty } \Big [I_0(mr) \int _r^1 K_0(ms) s \, \mathrm {d}s+ K_0(mr) \int _{1-\epsilon }^r I_0(ms) s \, \mathrm {d}s \Big ]\\ =&\frac{\Vert f\Vert _{L^\infty }}{m} \Big [ I_0(mr) \Big (-K_1(m)+rK_1(mr)\Big ) \\&+ K_0(mr)\Big (rI_1(mr)-(1-\epsilon )I_1(m(1-\epsilon ))\Big )\Big ] \\ =&\frac{\Vert f\Vert _{L^\infty }}{m} \Big [ \frac{1}{m} - I_0(mr) K_1(m) - (1-\epsilon )K_0(mr)I_1(m(1-\epsilon )) \Big ]. \end{aligned} \end{aligned}$$
(5.35)

Discarding the negative terms, we obtain

$$\begin{aligned} \big |Q[f](r)\big | \le \frac{1}{m^2}{\Vert f\Vert _{L^\infty }}, \quad m\ne 0. \end{aligned}$$
(5.36)

Recall from (10.30.4) and (10.25.3) in Frank et al. (2010) that, for fixed real \(\nu \ge 0\),

$$\begin{aligned} I_\nu (\xi ) \sim \frac{e^{\xi }}{\sqrt{2\pi \xi }}, K_\nu (\xi ) \sim \sqrt{\frac{\pi }{2\xi }} e^{-\xi }, \qquad \xi \rightarrow \infty , \end{aligned}$$
(5.37)

then there exists large \(m_0\) such that for \(m>m_0\),

$$\begin{aligned} \begin{aligned} \big |Q[f](r)\big | \le&\Vert f\Vert _{L^\infty } \Big [I_0(mr) \int _r^1 K_0(ms) s \, \mathrm {d}s+ K_0(mr) \int _{1-\epsilon }^r I_0(ms) s \, \mathrm {d}s \Big ]\\ \le&C\Vert f\Vert _{L^\infty } \Big [ \frac{1}{2m} \int _r^1 e^{m(r-s)} \sqrt{\frac{s}{r}} \mathrm {d}s +\frac{1}{2m} \int _{1-\epsilon }^r e^{m(s-r)} \sqrt{\frac{s}{r}} \mathrm {d}s \Big ] \\ \le&\frac{C\epsilon }{m} \Vert f\Vert _{L^\infty }, \end{aligned} \end{aligned}$$
(5.38)

where C is independent of \(\varepsilon \) and m. For \(m=\frac{2\pi }{T},\frac{4\pi }{T},\cdots , \frac{4\pi }{T}\Big (\Big [\frac{m_0 T}{4\pi }\Big ]+1\Big )\), we clearly have, for \(1-\epsilon \le r\le 1\),

$$\begin{aligned} \big | Q[f](r) \big |\le C\varepsilon \Vert f\Vert _{L^\infty }. \end{aligned}$$
(5.39)

Combining (5.36), (5.38) and (5.39), we derive, for \(m\ne 0\),

$$\begin{aligned} \big | Q[f](r) \big | \le \min \Big ( \frac{1}{m^2}, \frac{C\epsilon }{m}\Big ) \Vert f\Vert _{L^\infty }. \end{aligned}$$

Since for fixed \(\xi >0\) (see (10.37) of Frank et al. (2010)),

$$\begin{aligned} I_1(\xi ) < I_0(\xi ), \end{aligned}$$

then together with (5.28), (5.32) and (5.33), a direct computation shows, for \(m\ne 0\),

$$\begin{aligned} \big |Q[f]'(r)\big |&= \Big | mI_1(mr) \int _r^1 K_0(ms) f(s)s \, \mathrm {d}s- mK_1(mr) \int _{1-\epsilon }^r I_0(ms) f(s)s \, \mathrm {d}s \Big | \nonumber \\&\le m \Vert f\Vert _{L^\infty } \Big [I_1(mr) \int _r^1 K_0(ms) s \, \mathrm {d}s+ K_1(mr) \int _{1-\epsilon }^r I_0(ms) s \, \mathrm {d}s \Big ]\nonumber \\&= \Vert f\Vert _{L^\infty } \Big [ I_1(mr) \Big (-K_1(m)+rK_1(mr)\Big ) \nonumber \\&\quad + K_1(mr)\Big (rI_1(mr)-(1-\epsilon )I_1(m(1-\epsilon ))\Big )\Big ] \nonumber \\&\le \Vert f\Vert _{L^\infty } \Big [2r I_1(mr) K_1(mr) \Big ] \nonumber \\&\le \Vert f\Vert _{L^\infty } \Big [2r I_0(mr) K_1(mr) \Big ] \nonumber \\&\le \frac{2}{m} \Vert f\Vert _{L^\infty }, \end{aligned}$$
(5.40)

where we utilized the last property of (5.32) in deriving the last inequality. Using a method similar to that in (5.38) and (5.39), we also obtain

$$\begin{aligned} | Q[f]'(r) | \le C\epsilon \Vert f\Vert _{L^\infty } . \end{aligned}$$
(5.41)

This completes all the estimates for the case \(m\ne 0\). The case \(m=0\) is similar and is actually easier. \(\square \)
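As an aside, the explicit form of \(\psi _1\) in (5.29) can be confirmed symbolically; the short Python sketch below (illustrative only) checks that both branches solve \(-\psi ''-\psi '/r+m^2\psi =\eta \) and that \(\psi _1'(1)=0\):

```python
import sympy as sp

r, eta, m = sp.symbols('r eta m', positive=True)

psi_m = eta / m**2                                   # case m != 0
psi_0 = eta * ((1 - r**2) / 4 + sp.log(r) / 2)       # case m = 0

res_m = -sp.diff(psi_m, r, 2) - sp.diff(psi_m, r) / r + m**2 * psi_m - eta
res_0 = -sp.diff(psi_0, r, 2) - sp.diff(psi_0, r) / r - eta
print(sp.simplify(res_m), sp.simplify(res_0))        # 0 0
print(sp.diff(psi_0, r).subs(r, 1))                  # 0, i.e. psi_1'(1) = 0
```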

It is straightforward to verify:

Lemma 5.2

If we further assume the solution of (5.25) and (5.26) satisfies \(\psi (1-\epsilon ) = G\), then, for \(m\ne 0\),

$$\begin{aligned}&A = \frac{K_1(m) \big [G-\psi _1(\!1-\epsilon \!)\big ] \! - \! K_1(m)Q[f](\!1-\epsilon \!) - \frac{K_0(m(1-\varepsilon ))}{m} Q[f]'(1)}{I_0(m(1-\varepsilon ))K_1(m) +I_1(m)K_0(m(1-\varepsilon ))}, \end{aligned}$$
(5.42)
$$\begin{aligned}&B = \frac{I_1(m) \big [G-\psi _1(1-\epsilon )\big ] - I_1(m)Q[f](1-\epsilon ) + \frac{I_0(m(1-\varepsilon ))}{m} Q[f]'(1)}{I_0(m(1-\varepsilon ))K_1(m)+I_1(m)K_0(m(1-\varepsilon ))},\nonumber \\ \end{aligned}$$
(5.43)

and for \(m=0\),

$$\begin{aligned} A = G - \psi _1(1-\epsilon ) - Q[f](1-\epsilon ). \end{aligned}$$
(5.44)

Lemma 5.3

Define, for \(1-\epsilon \le r\le 1\), \(m = \frac{2\pi n}{T}\), \(n=0,1,2,\cdots \),

$$\begin{aligned}&W_1(r,m)\triangleq \frac{I_1(mr)K_1(m) - I_1(m)K_1(mr)}{I_0(m(1-\epsilon ))K_1(m) + I_1(m)K_0(m(1-\epsilon ))}, \end{aligned}$$
(5.45)
$$\begin{aligned}&W_2(r,m)\triangleq \frac{I_1(mr)K_0(m(1-\epsilon )) + I_0(m(1-\epsilon ))K_1(mr)}{I_0(m(1-\epsilon ))K_1(m) + I_1(m)K_0(m(1-\epsilon ))}. \end{aligned}$$
(5.46)

Then the following estimates hold:

$$\begin{aligned} |W_1(r,m)|\le M,\qquad |W_2(r,m)|\le M, \qquad 1-\epsilon \le r\le 1, \end{aligned}$$

where the constant M is independent of \(\epsilon \) and m (or n).

Proof

Since \(I_1(\xi )\) is increasing and \(K_1(\xi )\) is decreasing (see (10.37) of Frank et al. (2010)), we have \(I_1(mr)K_1(m) - I_1(m)K_1(mr) < I_1(m)K_1(m) - I_1(m)K_1(m) = 0\) for \(r<1\). It follows that

$$\begin{aligned} |W_1(r,m)|&= \frac{ I_1(m)K_1(mr)-I_1(mr)K_1(m) }{I_0(m(1-\epsilon ))K_1(m) + I_1(m)K_0(m(1-\epsilon ))}\nonumber \\&\le \frac{ I_1(m)K_1(mr) }{I_0(m(1-\epsilon ))K_1(m) + I_1(m)K_0(m(1-\epsilon ))} \nonumber \\&\le \frac{ K_1(mr) }{ K_0(m(1-\epsilon ))} \nonumber \\&\le \frac{ K_1(m(1-\epsilon )) }{ K_0(m(1-\epsilon ))}. \end{aligned}$$
(5.47)

By (5.37), we obtain that for \(m = \frac{2\pi n}{T}\), \(n>n_0\) (\(n_0\) large enough),

$$\begin{aligned} \begin{aligned} |W_1(r,m)|&\le 2, \qquad 1-\epsilon \le r\le 1. \end{aligned} \end{aligned}$$

For each \(n\in [0,n_0]\), there exists \(C_n\), which is independent of \(\epsilon \), such that

$$\begin{aligned} |W_1(r,m)| \le C_n, \qquad 1-\epsilon \le r\le 1. \end{aligned}$$

Let \(M=\max \Big \{C_0,C_1,\cdots ,C_{n_0}, 2\Big \}\). Then by the above analysis, we have

$$\begin{aligned} |W_1(r,m)| \le M, \qquad 1-\epsilon \le r\le 1, \end{aligned}$$

where M is independent of \(\epsilon \) and n.

For \(n>n_0\) (\(n_0\) large enough), it follows from (5.37) that, for \(1-\epsilon \le r\le 1\),

$$\begin{aligned} W_2(r,m)\le & {} \frac{I_1(mr)K_0(m(1-\epsilon )) + I_0(m(1-\epsilon ))K_1(mr)}{ I_1(m)K_0(m(1-\epsilon ))}\\\le & {} \frac{I_1(mr)}{I_1(m)} + \frac{ I_0(m(1-\epsilon ))K_1(mr)}{ I_1(m)K_0(m(1-\epsilon ))}\\\le & {} \frac{I_1(m)}{I_1(m)} + \frac{ I_0(m(1-\epsilon ))K_1(m(1-\epsilon ))}{ I_1(m)K_0(m(1-\epsilon ))}\\\le & {} 1 + \frac{ I_0(m )K_1(m(1-\epsilon ))}{ I_1(m)K_0(m(1-\epsilon ))} \\\le & {} 4 . \end{aligned}$$

For each \(n\in [0,n_0]\), it is obvious that \(|W_2(r,m)| \le {\widetilde{C}}_n\) for \(1-\epsilon \le r\le 1\). Therefore, our proof is complete. \(\square \)
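The uniform bound of Lemma 5.3 can also be observed numerically. The following scipy sketch (a sanity check on arbitrary test values of \(\epsilon \) and \(m\), not a substitute for the proof) evaluates \(W_1\) and \(W_2\) on \([1-\epsilon ,1]\) and shows that their maxima remain moderate as \(m\) grows and \(\epsilon \) shrinks:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

def W1(r, m, eps):   # the quotient (5.45)
    num = i1(m * r) * k1(m) - i1(m) * k1(m * r)
    den = i0(m * (1 - eps)) * k1(m) + i1(m) * k0(m * (1 - eps))
    return num / den

def W2(r, m, eps):   # the quotient (5.46)
    num = i1(m * r) * k0(m * (1 - eps)) + i0(m * (1 - eps)) * k1(m * r)
    den = i0(m * (1 - eps)) * k1(m) + i1(m) * k0(m * (1 - eps))
    return num / den

for eps in (1e-1, 1e-2, 1e-3):
    for m in (0.5, 2.0, 10.0, 50.0):
        r = np.linspace(1 - eps, 1, 200)
        print(eps, m,
              np.max(np.abs(W1(r, m, eps))),   # stays bounded by a fixed M
              np.max(np.abs(W2(r, m, eps))))
```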

We are ready to establish the following estimates.

Lemma 5.4

For sufficiently small \(\epsilon \), the following estimates hold:

$$\begin{aligned}&\Vert {\widetilde{L}}_1^n\Vert _{L^\infty (1-\epsilon ,1)} +\Vert {\widetilde{H}}_1^n\Vert _{L^\infty (1-\epsilon ,1)} + \Vert {\widetilde{F}}_1^n\Vert _{L^\infty (1-\epsilon ,1)} \le C(n^2+1) \epsilon , \end{aligned}$$
(5.48)
$$\begin{aligned}&\Vert ({p_1^n})'\Vert _{L^\infty (1-\epsilon ,1)} \le C (n^3+1), \end{aligned}$$
(5.49)

where the constant C does not depend on \(\epsilon \) and n, but may depend on T.

Proof

We again use the idea of continuation (Lemma 3.3) to prove this lemma. To do that, we multiply the right-hand sides of (5.15)-(5.17) as well as (5.6) by \(\delta \).

Case I: \(\delta =0\). By the maximum principle, we derive that

$$\begin{aligned} {\widetilde{L}}_1^n = {\widetilde{H}}_1^n = {\widetilde{F}}_1^n =0, \end{aligned}$$

and then (5.48) clearly holds. Moreover, \(p_1^n\) satisfies, for \(m= \frac{2\pi n}{T}\), \(n=0,1,2,\ldots \),

$$\begin{aligned}&-\frac{\partial ^2 p_1^n}{\partial r^2} -\frac{1}{r} \frac{\partial p_1^n}{\partial r} + m^2p_1^n = 0, \; 1-\epsilon< r < 1, \end{aligned}$$
(5.50)
$$\begin{aligned}&\frac{\partial p_1^n(1)}{\partial r} = 0, \qquad p_1^n(1-\epsilon ) = \frac{1}{2}\Big [\frac{1}{(1-\epsilon )^2}-m^2\Big ]. \end{aligned}$$
(5.51)

By Lemmas 5.1 and 5.2, we find that, for \(n\ge 1\),

$$\begin{aligned} p_1^n(r) = \frac{1}{2}\Big [\frac{1}{(1-\epsilon )^2}-m^2\Big ] \frac{I_0(mr)K_1(m) + I_1(m)K_0(mr)}{I_0(m(1-\epsilon ))K_1(m) + I_1(m)K_0(m(1-\epsilon ))}.\nonumber \\ \end{aligned}$$
(5.52)

Hence, by (5.32),

$$\begin{aligned} (p_1^n)'(r) = \frac{m}{2}\Big [\frac{1}{(1-\epsilon )^2}-m^2\Big ] \frac{I_1(mr)K_1(m) - I_1(m)K_1(mr)}{I_0(m(1-\epsilon ))K_1(m) + I_1(m)K_0(m(1-\epsilon ))}.\nonumber \\ \end{aligned}$$
(5.53)

It follows from Lemma 5.3 that

$$\begin{aligned} \big |(p_1^n)'(r)\big | \le C\Big | \frac{m }{2} \Big (\frac{1}{(1-\epsilon )^2}-m^2\Big )\Big | \le C(n^3+1), \end{aligned}$$

where C is independent of \(\epsilon \) and n; hence (5.49) holds. For the case \(n=0\), \(p_1^n(r)=\frac{1}{2(1-\epsilon )^2}\), which also implies (5.49). Thus, condition (i) of Lemma 3.3 is true.
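The closed form (5.52)-(5.53) can be spot-checked numerically; the sketch below (illustrative only, with arbitrary test values of \(m\) and \(\epsilon \)) verifies the residual of the ODE (5.50) by finite differences together with the boundary data (5.51):

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

m, eps = 4.0, 0.05                        # arbitrary test values
G = 0.5 * (1.0 / (1 - eps) ** 2 - m ** 2)
den = i0(m * (1 - eps)) * k1(m) + i1(m) * k0(m * (1 - eps))

def p(r):   # closed form (5.52)
    return G * (i0(m * r) * k1(m) + i1(m) * k0(m * r)) / den

def dp(r):  # its derivative (5.53), using I_0' = I_1 and K_0' = -K_1
    return G * m * (i1(m * r) * k1(m) - i1(m) * k1(m * r)) / den

r = np.linspace(1 - eps, 1, 5)
h = 1e-4
residual = -(p(r + h) - 2 * p(r) + p(r - h)) / h ** 2 \
           - (p(r + h) - p(r - h)) / (2 * h * r) + m ** 2 * p(r)
print(np.max(np.abs(residual)))           # small: finite-difference error only
print(dp(1.0), p(1 - eps) - G)            # ~ 0 and 0: boundary conditions (5.51)
```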

Case II: \(0<\delta \le 1\). We first assume that

$$\begin{aligned}&\Vert {\widetilde{L}}_1^n\Vert _{L^\infty (1-\epsilon ,1)} +\Vert {\widetilde{H}}_1^n\Vert _{L^\infty (1-\epsilon ,1)} + \Vert {\widetilde{F}}_1^n\Vert _{L^\infty (1-\epsilon ,1)} \le n^2+1, \end{aligned}$$
(5.54)
$$\begin{aligned}&\Vert ({p_1^n})'\Vert _{L^\infty (1-\epsilon ,1)} \le 2M\Big (\frac{2\pi }{T}\Big )^3(n^3+1), \end{aligned}$$
(5.55)

where M is from Lemma 5.3.

By the definition of \({\widetilde{f}}_1\) in (5.22) and the assumption (5.54), we clearly have

$$\begin{aligned} \Vert {\widetilde{f}}_1 \Vert _{L^\infty } \le C(n^2 +1). \end{aligned}$$

Set

$$\begin{aligned} \varphi _1(r)= & {} Q[\widetilde{f}_1](r) + \Big |Q[\widetilde{f}_1]'(1)\Big |\Big [r+\Big (\frac{T}{2\pi }\Big )^2\frac{1}{1-\epsilon } + \frac{1}{\beta _1} \Big ]+ \Big |\frac{1}{\beta _1}Q[\widetilde{f}_1]'(1-\epsilon )\\&-Q[\widetilde{f}_1](1-\epsilon )\Big |. \end{aligned}$$

It is easily shown that \(\varphi _1(r)\) is a supersolution for \({\widetilde{L}}_1^n(r)\) when \(n\ge 1\), so that by (5.30),

$$\begin{aligned} \big |{\widetilde{L}}_1^n(r)\big |\le \varphi _1(r) \le C(n^2+1)\epsilon , \end{aligned}$$

where C is independent of n and \(\epsilon \). For the case \(n=0\), the function \(\big (\sup |{\widetilde{f}}_1|+1\big )\big [\xi (r) + c_1(\beta _1,\epsilon )\big ]\), where \(\xi (r)\) and \(c_1(\beta _1,\epsilon )\) are defined in Sect. 3.5, is a supersolution for \({\widetilde{L}}_1^n(r)\); hence \(|{\widetilde{L}}_1^n(r)|\le C\epsilon \).

Similarly, we get \( |{\widetilde{H}}_1^n(r)| \le C(n^2+1)\epsilon \). Now we establish the estimate for \({\widetilde{F}}_1^n\). It follows from (3.2) and the assumptions (5.54) and (5.55) that

$$\begin{aligned} \Vert \widetilde{f}_3\Vert _{L^\infty } \le C(n^2+1)+ C\epsilon (n^3+1), \end{aligned}$$

then recalling that Q is a linear operator and using the respective estimates in the minimum expression of (5.30), we obtain

$$\begin{aligned} | Q[\widetilde{f}_3](r) | \le C(n+1) \epsilon , \qquad | Q[\widetilde{f}_3]'(r) | \le C(n^2+1) \epsilon . \end{aligned}$$

Let \(\varphi _2(r)\) be defined by

$$\begin{aligned} \varphi _2(r)= & {} \frac{1}{D} \Big \{Q[\widetilde{f}_3](r) + \Big |Q[\widetilde{f}_3]'(1)\Big |\Big [r+\Big (\frac{T}{2\pi }\Big )^2\frac{1}{1-\epsilon } + \frac{1}{\beta _2} \Big ]\\&+ \Big |\frac{1}{\beta _2}Q[\widetilde{f}_3]'(1-\epsilon )- Q[\widetilde{f}_3](1-\epsilon )\Big | + \epsilon \Big \}. \end{aligned}$$

By (3.2), i.e., \(p_*'(r) = O(\epsilon )\), we find that \(\varphi _2(r)\) is a supersolution for \(\widetilde{F}^n_1(r)\) when \(n\ge 1\) and \(\epsilon \) small, hence

$$\begin{aligned} \big |{\widetilde{F}}^n_1(r)\big | \le \varphi _2(r) \le C(n^2+1)\epsilon . \end{aligned}$$

For \(n=0\), the function \(\frac{1}{D}\big (\sup |{\widetilde{f}}_3|+1\big )\big [\xi (r) + c_1(\beta _2,\epsilon ) + c_2(\beta _2,\tau )\big ]\), where \(\xi (r)\), \(c_1(\beta ,\epsilon )\) and \(c_2(\beta ,\tau )\) are defined in Sect. 3.5, is a supersolution for \({\widetilde{F}}_1^n(r)\); hence \(|{\widetilde{F}}_1^n(r)|\le C\epsilon \). Finally, we estimate \((p^n_1)'\). Recall that \(p^n_1\) satisfies

$$\begin{aligned} \begin{aligned}&-\frac{\partial ^2 p_1^n}{\partial r^2}-\frac{1}{r}\frac{\partial p_1^n}{\partial r} + m^2 p_1^n= \delta f_4(L_1^n,H_1^n,F_1^n) \quad \text {in }\Omega _*,\\&\frac{\partial p_1^n(1)}{\partial r}=0, \quad p_1^n(1-\varepsilon )=\frac{1}{2}\Big [\frac{1}{(1-\varepsilon )^2}-m^2\Big ]. \end{aligned} \end{aligned}$$
(5.56)

It follows from (5.54) and (5.12)-(5.14) that

$$\begin{aligned} \Vert f_4\Vert _{L^\infty } \le C(n^2+1), \end{aligned}$$

then together with (5.30) and (5.31), we have

$$\begin{aligned} |Q[f_4](r)| \le C(n+1)\epsilon , |Q[f_4]'(r)| \le C(n^2+1)\epsilon . \end{aligned}$$
(5.57)

For \(n\ge 1\), taking \(\eta =0\) in Lemma 5.1 and \(G = \frac{1}{2}\Big [\frac{1}{(1-\varepsilon )^2}-m^2\Big ]\) in Lemma 5.2, we solve

$$\begin{aligned} p_1^n(r)= A I_0(mr) +B K_0(mr) + \delta Q[f_4](r), \end{aligned}$$

where A and B are defined in Lemma 5.2, namely,

$$\begin{aligned} A= & {} \frac{K_1(m) \big [G- \delta Q[f_4](\!1-\epsilon \!) \big ] - \frac{K_0(m(1-\varepsilon ))}{m} \delta Q[f_4]'(1)}{I_0(m(1-\varepsilon ))K_1(m)+I_1(m)K_0(m(1-\varepsilon ))}, \\ B= & {} \frac{I_1(m) \big [G- \delta Q[f_4](1-\epsilon ) \big ] + \frac{I_0(m(1-\varepsilon ))}{m} \delta Q[f_4]'(1)}{I_0(m(1-\varepsilon ))K_1(m)+I_1(m)K_0(m(1-\varepsilon ))}. \end{aligned}$$

Differentiating \(p_1^n(r)\) in r, we obtain, for \(\epsilon \) sufficiently small,

$$\begin{aligned} \begin{aligned} \Big |(p_1^n)'\Big |&= \Big | mA I_1(mr) - mB K_1(mr) + \delta Q[f_4]'(r)\Big |\\&\le \max \limits _{1-\epsilon \le r\le 1} \Big |\frac{m[I_1(mr)K_1(m) - I_1(m)K_1(mr)]}{I_0(m(1-\epsilon ))K_1(m) + I_1(m)K_0(m(1-\epsilon ))}\Big [G - \delta Q[f_4](1-\epsilon )\Big ]\Big | \\&\quad \;+ \max \limits _{1-\epsilon \le r\le 1}\Big |\frac{I_1(mr)K_0(m(1-\epsilon )) + I_0(m(1-\epsilon ))K_1(mr)}{I_0(m(1-\epsilon ))K_1(m) + I_1(m)K_0(m(1-\epsilon ))} \delta Q[f_4]'(1) \Big |\\&\quad \;+ \max \limits _{1-\epsilon \le r\le 1}\Big |\delta Q[f_4]'(r)\Big |\\&= \max \limits _{1-\epsilon \le r\le 1} \Big |m W_1(r,m)\Big [G - \delta Q[f_4](1-\epsilon )\Big ]\Big | + \max \limits _{1-\epsilon \le r\le 1} \Big |W_2(r,m) \delta Q[f_4]'(1) \Big | \\&\quad \;+ \max \limits _{1-\epsilon \le r\le 1}\Big |\delta Q[f_4]'(r)\Big | \\&\le 2M \Big (\frac{m}{2}\Big |\frac{1}{(1-\epsilon )^2}-m^2\Big | + C(n^2+1)\epsilon \Big )+ C(n^2+1)\epsilon \\&\le \frac{3}{2} M \Big (\frac{2\pi }{T}\Big )^3 (n^3+1) \end{aligned} \end{aligned}$$

where we have used Lemma 5.3 and (5.57). For the case \(n=0\), by Lemmas 5.1 and 5.2, we solve

$$\begin{aligned} p_1^n(r) = G - \delta Q[f_4](1-\epsilon ) + \delta Q[f_4](r), \end{aligned}$$

then it follows from (5.57) that \(|(p_1^n)'|\le |Q[f_4]'(r)| \le C\epsilon \le \frac{3}{2} M \Big (\frac{2\pi }{T}\Big )^3\) for \(\epsilon \) small. Hence, condition (ii) of Lemma 3.3 is satisfied.

Since condition (iii) of Lemma 3.3 is obvious, the proof is complete. \(\square \)

We have already established the estimate (5.49) for \(\frac{\partial p_1^n(1-\epsilon )}{\partial r}\). However, this estimate is not sufficient for verifying the four conditions of the Crandall-Rabinowitz theorem; we need to make the estimate on \(\frac{\partial p_1^n}{\partial r}\) at \(r=1-\epsilon \) more precise by extracting the dominant terms in this expression. Indeed, based on (5.48) and (5.49), we have the following more delicate estimate for \(\frac{\partial p_1^n(1-\epsilon )}{\partial r}\).

Lemma 5.5

For \(0<\epsilon \ll 1\) and \(m=\frac{2\pi n}{T}\), \(n=0,1,2,\ldots \), the following estimates hold:

$$\begin{aligned} \begin{array}{l} \bigg |\frac{\partial p_1^n(1-\epsilon )}{\partial r} -m\frac{I_1(m(1-\varepsilon ))K_1(m)-I_1(m)K_1(m(1-\varepsilon ))}{I_0(m(1-\varepsilon ))K_1(m)+I_1(m)K_0(m(1-\varepsilon ))} \Big (G - \frac{1}{m^2}\frac{\mu }{\gamma +H_0}\Big ) \bigg | \\ \le C (m^2+1) \epsilon ^2, \qquad n\ne 0, \end{array} \end{aligned}$$
(5.58)

and

$$\begin{aligned} \bigg |\frac{\partial p_1^n(1-\epsilon )}{\partial r} - \frac{\mu }{\gamma +H_0} \epsilon \bigg | \le C \epsilon ^2, \qquad n=0, \end{aligned}$$
(5.59)

where \(G=\frac{1}{2}\Big [\frac{1}{(1-\varepsilon )^2}-m^2\Big ]\), and C is independent of \(\epsilon \) and m (or n).

Proof

We know from (5.6), (5.7) and (5.11) that \(p_1^n\) satisfies

$$\begin{aligned}&-\frac{\partial ^2 p_1^n}{\partial r^2}-\frac{1}{r}\frac{\partial p_1^n}{\partial r} + m^2 p_1^n= f_4(L_1^n,H_1^n,F_1^n) \quad \text {in }\Omega _*,\\&\frac{\partial p_1^n(1)}{\partial r}=0, \quad p_1^n(1-\varepsilon )=G. \end{aligned}$$

The following computation has been carried out in (Zhao and Hu 2021, (4.53)):

$$\begin{aligned} f_4(L_1^n,H_1^n,F_1^n) = \frac{\mu }{\gamma + H_0} + O((m^2+1)\epsilon ), \end{aligned}$$
(5.60)

which is based on the estimate (5.48). Taking \(\eta = \frac{\mu }{\gamma +H_0}\) and \(f(r) = f_4 - \eta \) in Lemma 5.1, we obtain

$$\begin{aligned} \Vert f\Vert _{L^\infty } = \Vert f_4 - \eta \Vert _{L^\infty } \le C(m^2 + 1) \epsilon , \end{aligned}$$

and then

$$\begin{aligned} | Q[f](r) | \le C(m+1)\epsilon ^2, | Q[f]'(r) | \le C(m^2+1)\epsilon ^2. \end{aligned}$$
(5.61)

By Lemmas 5.1 and  5.2, we can explicitly solve \(p_1^n\) as

$$\begin{aligned} p_1^n(r)= & {} \psi _1(r) + A I_0(mr) + B K_0(mr) + Q[f](r), n\ne 0, \end{aligned}$$
(5.62)
$$\begin{aligned} p_1^n(r)= & {} \psi _1(r) + Q[f](r) +G - \psi _1(1-\epsilon ) - Q[f](1-\epsilon ), n=0, \end{aligned}$$
(5.63)

where A and B are defined in Lemma 5.2. For \(\psi _1(r)\), it follows from (5.29) that

$$\begin{aligned} \psi _1(1-\epsilon ) = \frac{\eta }{m^2}, \qquad \psi _1'(1-\epsilon ) = 0, \qquad n\ne 0, \end{aligned}$$
(5.64)

and

$$\begin{aligned} \psi _1(1-\epsilon ) = O(\epsilon ^2), \quad \psi _1'(1-\epsilon ) = {\eta \epsilon } +O(\epsilon ^2), \quad n=0. \end{aligned}$$
(5.65)

When \(n\ne 0\), combining (5.42), (5.43) and (5.64), we compute the first derivative of \(p_1^n\) at \(r=1-\varepsilon \),

$$\begin{aligned} \begin{aligned} \frac{\partial p_1^n(1-\epsilon )}{\partial r}&=\; \psi _1'(1-\epsilon ) + Am I_1(m(1-\varepsilon )) - BmK_1(m(1-\varepsilon )) + Q[f]'(1-\epsilon )\\&= m W_1(1-\epsilon , m) \Big (G - \frac{\eta }{m^2}\Big ) - m W_1(1-\epsilon , m) Q[f](1-\epsilon ) \\&\qquad - W_2(1-\epsilon , m) Q[f]'(1) + Q[f]'(1-\epsilon ), \end{aligned} \end{aligned}$$

where \(W_1(r,m)\) and \(W_2(r,m)\) are defined in Lemma 5.3. Then by Lemma 5.3 and (5.61), we derive

$$\begin{aligned} \big |m W_1(1-\epsilon ,m) Q[f](1-\epsilon ) + W_2(1-\epsilon ,m) Q[f]'(1) - Q[f]'(1-\epsilon )\big | \le C(m^2+1)\epsilon ^2, \end{aligned}$$

which implies (5.58). For the case \(n=0\), it follows from (5.63), (5.65) and (5.61) that

$$\begin{aligned} \frac{\partial p_1^n(1-\epsilon )}{\partial r} = \psi _1'(1-\epsilon ) + Q[f]'(1-\epsilon ) = \eta \epsilon + O(\epsilon ^2), \end{aligned}$$

hence, (5.59) holds. \(\square \)

Denote

$$\begin{aligned} \begin{array}{ll} &{}J_2^n(\mu ,\rho _4(\mu )) \\ &{}\; = \frac{1}{\epsilon ^2}\bigg [\frac{\partial p_1^n(1-\epsilon )}{\partial r}-m\frac{I_1(m(1-\varepsilon ))K_1(m)-I_1(m)K_1(m(1-\varepsilon ))}{I_0(m(1-\varepsilon ))K_1(m)+I_1(m)K_0(m(1-\varepsilon ))}\Big (G - \frac{1}{m^2}\frac{\mu }{\gamma +H_0}\Big ) \bigg ]. \end{array} \end{aligned}$$

Then

$$\begin{aligned}&\frac{\partial p_1^n(1-\epsilon )}{\partial r} \nonumber \\&\quad = m\frac{I_1(m(1-\varepsilon ))K_1(m)-I_1(m)K_1(m(1-\varepsilon ))}{I_0(m(1-\varepsilon ))K_1(m)+I_1(m)K_0(m(1-\varepsilon ))} \Big (G - \frac{1}{m^2}\frac{\mu }{\gamma +H_0}\Big )\nonumber \\&\qquad +\epsilon ^2 J_2^n(\mu ,\rho _4(\mu )). \end{aligned}$$
(5.66)

As in the proof of (Zhao and Hu 2021, Lemma 4.8), we can establish the following lemma.

Lemma 5.6

There exists a constant C which is independent of \(\epsilon \) and n such that

$$\begin{aligned} |J^n_2(\mu ,\rho _4(\mu ))| \le C(n^2+1),\quad \Big |\frac{\mathrm {d}J^n_2(\mu , \rho _4(\mu ))}{\mathrm {d}\mu }\Big | \le C(n^2+1). \end{aligned}$$
(5.67)

5.2 Proof of Theorem 2.2

In this subsection, we shall derive some estimates that are essential for the bifurcation argument and then complete the proof of Theorem 2.2. The rest of the discussion is for \(n\ne 0\).

Lemma 5.7

The function

$$\begin{aligned} f(x,m)=\frac{I_1(m)K_1(mx)-I_1(mx)K_1(m)}{I_0(mx)K_1(m)+I_1(m)K_0(mx)}, \qquad \frac{1}{2} \le x\le 1, \end{aligned}$$
(5.68)

satisfies, uniformly for all \(0<\epsilon <1\) and all \(m>0\),

$$\begin{aligned} f(1-\epsilon , m) \ge \min \Big (\frac{1}{2}, \frac{3\epsilon m}{4}\Big ). \end{aligned}$$
(5.69)

Proof

As in the proof of Lemma 5.3, we have

$$\begin{aligned} f(x,m)> 0, \qquad 0<x<1. \end{aligned}$$
(5.70)

Using (5.32) and (5.34), we derive, by a direct computation,

$$\begin{aligned}&{\frac{\partial f}{\partial x}(x,m)} \\&\quad = \frac{1}{\big [I_0(mx)K_1(m)+I_1(m)K_0(mx)\big ]^2}\Big \{m\big [I_1(m)K_1'(mx)-I_1'(mx)K_1(m)\big ]\\&\quad \qquad \cdot \big [I_0(mx)K_1(m)+I_1(m)K_0(mx)\big ] \\&\qquad -\big [I_1(m)K_1(mx)-I_1(mx)K_1(m)\big ]\cdot m\big [I'_0(mx)K_1(m)+I_1(m)K'_0(mx)\big ] \Big \}\\&\quad =\frac{1}{\big [I_0(mx)K_1(m)+I_1(m)K_0(mx)\big ]^2}\Big \{ -m \big [I_0(mx)K_1(m)+I_1(m)K_0(mx)\big ]^2\\&\qquad -\frac{1}{x}\big [I_1(m)K_1(mx)-I_1(mx)K_1(m)\big ]\cdot \big [I_0(mx)K_1(m)+I_1(m)K_0(mx)\big ]\\&\qquad + m\big [I_1(m)K_1(mx)-I_1(mx)K_1(m)\big ]^2 \Big \}\\&\quad = -m - \frac{1}{x} \frac{I_1(m)K_1(mx)-I_1(mx)K_1(m)}{I_0(mx)K_1(m) +I_1(m)K_0(mx)} + m\Big [\frac{I_1(m)K_1(mx)-I_1(mx)K_1(m)}{I_0(mx)K_1(m) +I_1(m)K_0(mx)} \Big ]^2\\&\quad = -m - \frac{1}{x} f(x,m) + mf^2(x,m) . \end{aligned}$$

If \(f(1-\epsilon , m) \ge \frac{1}{2}\), then the conclusion holds immediately.

If \(f(1-\epsilon , m) < \frac{1}{2}\), then by the ODE comparison theorem, we have \(f(x, m) < \frac{1}{2}\) for all \(1-\epsilon \le x \le 1\). Therefore by the mean value theorem, for some \(1-\epsilon<y < 1\),

$$\begin{aligned} \begin{aligned} f(1-\epsilon ,m)\;&= \; f(1-\epsilon ,m)-f(1,m) = - \epsilon \; \frac{\partial f}{\partial x}(y,m) \\&=\; \epsilon \; \Big (m + \frac{1}{y} f(y,m) -m f^2(y, m)\Big ) \\&>\; \epsilon \; m \Big ( 1 - f^2(y, m) \Big ) \\&\ge \; \frac{3}{4}\;\epsilon \; m . \end{aligned} \end{aligned}$$

This completes the proof. \(\square \)

This lemma implies that

$$\begin{aligned} f(1-\epsilon ,m) \ge \frac{1}{2} \epsilon \quad \text {for } 0<\epsilon <1, \; m>\frac{2}{3}. \end{aligned}$$
(5.71)
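The lower bound (5.69) can be illustrated numerically as follows (a sanity check on a small grid of \(\epsilon \) and \(m\) values, not a proof):

```python
from scipy.special import i0, i1, k0, k1

def f(x, m):   # the quotient (5.68)
    num = i1(m) * k1(m * x) - i1(m * x) * k1(m)
    den = i0(m * x) * k1(m) + i1(m) * k0(m * x)
    return num / den

ratios = []
for eps in (0.5, 0.1, 0.01, 0.001):
    for m in (0.1, 2.0 / 3.0, 1.0, 5.0, 25.0):
        ratios.append(f(1 - eps, m) / min(0.5, 0.75 * eps * m))
print(min(ratios) >= 1.0)   # True on this grid, consistent with (5.69)
```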

Based on the above preliminaries, we are now ready to prove our main result, Theorem 2.2.

Proof of Theorem 2.2

Substituting (5.2) into (4.75), we obtain the Fréchet derivative of \({\mathscr {F}}({\widetilde{R}},\mu )\) in \({\widetilde{R}}\) at the point \((0,\mu )\), namely,

$$\begin{aligned} {[}{\mathscr {F}}_{{\widetilde{R}}}(0,\mu )]\cos (mz) = \Big (\frac{\partial ^2 p_*(1-\epsilon )}{\partial r^2} + \frac{\partial p_1^n(1-\epsilon )}{\partial r}\Big )\cos (mz), \qquad m=\frac{2\pi n}{T} , \end{aligned}$$

then we combine the above formula with (3.4) and (5.66) to derive

$$\begin{aligned} \begin{aligned}&[{\mathscr {F}}_{{\widetilde{R}}}(0,\mu )] \cos (mz)\\&= \Big [m\frac{I_1(m(1-\varepsilon ))K_1(m)-I_1(m)K_1(m(1-\varepsilon ))}{I_0(m(1-\varepsilon ))K_1(m)+I_1(m)K_0(m(1-\varepsilon ))} \Big (G - \frac{1}{m^2}\frac{\mu }{\gamma +H_0}\Big ) \\&\quad + \epsilon ^2(J_1+J_2^n)\Big ]\cos (mz)\\&= \Big [ m f(1-\epsilon , m) \Big ( \frac{1}{m^2}\frac{\mu }{\gamma +H_0} - G\Big ) + \epsilon ^2(J_1+J_2^n)\Big ]\cos (mz)\\&= \Big [ m f(1-\epsilon , m) \Big ( \frac{1}{m^2}\frac{\mu }{\gamma +H_0} - \frac{1}{2(1-\epsilon )^2} + \frac{1}{2} m^2 \Big ) + \epsilon ^2(J_1+J_2^n)\Big ]\cos (mz), \end{aligned} \end{aligned}$$
(5.72)

where \(J_1=J_1(\mu ,\rho _4(\mu ))\) and \(J_2^n=J_2^n(\mu ,\rho _4(\mu ))\) are respectively estimated by (3.5) and (5.67), and f is defined by (5.68).

The expression \( [{\mathscr {F}}_{{\widetilde{R}}}(0,\mu )] \cos (mz)\) is equal to zero if and only if

$$\begin{aligned} m f(1-\epsilon , m) \Big ( \frac{1}{m^2}\frac{\mu }{\gamma +H_0} - \frac{1}{2(1-\epsilon )^2} + \frac{1}{2} m^2 \Big ) + \epsilon ^2(J_1+J_2^n)=0, \end{aligned}$$

i.e., for \(m=\frac{2\pi n}{T}\), \(\mu ^n(\epsilon )\) satisfies the equation

$$\begin{aligned} \begin{array}{ll} \mu ^n(\epsilon ) =&{} (\gamma +H_0)\Big \{ \frac{m^2}{2(1-\epsilon )^2} - \frac{1}{2}m^4 \Big \}\\ &{}- (\gamma +H_0)\frac{ m\;\epsilon ^2\big [J_1(\mu ^n(\epsilon ),\rho _4(\mu ^n(\epsilon )))+J_2^n(\mu ^n(\epsilon ),\rho _4(\mu ^n(\epsilon )))\big ]}{ f(1-\epsilon , m)} . \end{array} \end{aligned}$$
(5.73)

By (5.71), we find that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \mu ^n(\epsilon ) = \frac{1}{2}(\gamma +H_0) m^2(1-m^2) \triangleq \mu ^n_0, \qquad m = \frac{2\pi n}{T}. \end{aligned}$$
(5.74)

The above limit is uniformly valid for all bounded m. Therefore, by the implicit function theorem, there exists \(\epsilon ^{**} = \epsilon ^{**}(n)\) such that, for \(0<\epsilon <\epsilon ^{**}\), the equation (5.73) admits a unique solution \(\mu ^n(\epsilon )\). Notice that \( \epsilon ^{**}(n)\) may shrink to 0 as \(n\rightarrow \infty \).

Now we proceed to verify the four assumptions of the Crandall-Rabinowitz theorem to show that, for \(\epsilon \) sufficiently small and under the assumption (2.27), \(\mu =\mu ^n(\epsilon )>\mu _c\) is a bifurcation point for the system (2.17)-(2.25). To begin with, we shall

$$\begin{aligned} \text {fix } n=n_0, \quad \text {and let } m_0 = \frac{2\pi n_0}{T}, \end{aligned}$$
(5.75)

and verify the conditions for this fixed \(n=n_0\). This allows the estimates below to depend on \(n_0\).

By Theorem 2.1, it is obvious that for each \(\mu ^n(\epsilon ) > \mu _c\), we can find a small \(\epsilon ^*>0\) such that for \(0<\epsilon <\epsilon ^*\), there exists a unique solution \(\left( L_{*}(r), H_{*}(r), F_{*}(r), p_{*}(r)\right) \), i.e., \({\mathscr {F}}(0,\mu ^n(\epsilon ))=0\). Hence, the assumption (i) is satisfied. Next we shall verify the assumptions (ii) and (iii) for a fixed small \(\epsilon \). It suffices to show that for every k,

$$\begin{aligned}{}[{\mathscr {F}}_{{\widetilde{R}}}(0,\mu ^{n_0}(\epsilon ))]\cos (kz) \ne 0, \qquad k\ne \frac{2\pi n_0}{T}, \end{aligned}$$
(5.76)

or equivalently,

$$\begin{aligned} U(k,n_0) \ne 0 \quad \text { for } \; k \ne \frac{2\pi n_0}{T} , \end{aligned}$$
(5.77)

where

$$\begin{aligned} \begin{aligned} U(k,n)\triangleq&-\frac{\mu ^n(\epsilon )}{\gamma +H_0} + \frac{k^2}{2(1-\epsilon )^2} - \frac{1}{2}k^4 \\&- \frac{ k\;\epsilon ^2\big [J_1(\mu ^n(\epsilon ),\rho _4(\mu ^n(\epsilon )))+J_2^k(\mu ^n(\epsilon ),\rho _4(\mu ^n(\epsilon )))\big ]}{ f(1-\epsilon , k)}.\\ \end{aligned} \end{aligned}$$
(5.78)

Case I: \(k> k_0\) for some large \(k_0\). By (3.5) and (5.67), there exists a constant C which does not depend on \(\epsilon \) and k such that

$$\begin{aligned} |J_1| + |J_2^k| \le C(k^2+1). \end{aligned}$$
(5.79)

Substituting (5.79) and (5.71) into (5.78), we derive

$$\begin{aligned} U(k,n_0) \le -\frac{\mu ^{n_0}(\epsilon )}{\gamma +H_0} + \frac{k^2}{2(1-\epsilon )^2} - \frac{1}{2}k^4 + Ck(k^2+1)\epsilon . \end{aligned}$$

Since the leading order term is \(-\frac{1}{2}k^4\), we can easily find a bound for \(\epsilon \), denoted by \(E_1\), such that for \(0<\epsilon <E_1\) and \(k>k_0=k_0(n_0)\),

$$\begin{aligned} U(k, n_0) < 0 . \end{aligned}$$

Case II: \(k\le k_0\). For this case, the proof of (5.77) is similar to that of (Zhao and Hu 2021, Page 283, Case (iii)), but we need to verify that the limits of \(\mu ^n(\epsilon )\) as \(\epsilon \rightarrow 0\) are all distinct from the one we are considering, namely,

$$\begin{aligned} \mu ^n_0 \ne \mu ^{n_0}_0 \quad \text {for } n\ne n_0, \; \frac{2\pi n}{T} \le k_0. \end{aligned}$$
(5.80)

With the definition from (5.74), it is easily verified that (5.80) is equivalent to (2.27), which is assumed in our theorem. This assumption holds for almost all values of T and \(n_0\), with only isolated exceptions; in particular, (2.27) is obviously valid if \(\frac{2\pi n_0}{T}>1\).
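For concreteness, the distinctness of the limiting values \(\mu ^n_0\) can be checked directly for any given period; in the sketch below the values of \(\gamma \), \(H_0\) and \(T\) are made up for illustration and are not parameters taken from the model:

```python
import numpy as np

gamma, H0, T = 1.0, 2.0, 9.0             # hypothetical values, illustration only
mu0 = lambda n: 0.5 * (gamma + H0) * (2 * np.pi * n / T) ** 2 \
                * (1 - (2 * np.pi * n / T) ** 2)   # the limit (5.74)

values = [mu0(n) for n in range(1, 8)]
print(values)
print(len(set(values)) == len(values))   # True: the mu_0^n are pairwise distinct here
```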

Combining these two cases, we obtain, for the mapping \({\mathscr {F}}: X_1^{4+\alpha }\rightarrow X_1^{1+\alpha }\),

$$\begin{aligned} \text {Ker}\left[ {\mathscr {F}}_{ {\widetilde{R}}}(0,\mu ^{n_0}(\epsilon ))\right] = \text {span}\Big \{\cos \Big (\frac{2\pi n_0}{T}z\Big )\Big \} \end{aligned}$$

and

$$\begin{aligned} Y_1= & {} \text {Im}\left[ {\mathscr {F}}_{ {\widetilde{R}}}(0,\mu ^{n_0}(\epsilon ))\right] \\= & {} \text {span}\Big \{1,\cos \Big (\frac{2\pi }{T}z\Big ),\cdots ,\cos \Big (\frac{ 2\pi (n_0-1)}{T}z\Big ),\cos \Big (\frac{2\pi (n_0+1)}{T}z\Big ),\cdots \Big \}, \end{aligned}$$

which implies

$$\begin{aligned} Y_1 \; \bigoplus \; \text {Ker}\left[ {\mathscr {F}}_{ {\widetilde{R}}}(0,\mu ^{n_0}(\epsilon ))\right] = X^{1+\alpha }_1. \end{aligned}$$

Hence, the assumptions (ii) and (iii) are satisfied for a fixed small \(\epsilon \).

To finish the whole proof, it remains to show the last assumption. Differentiating (5.72) in \(\mu \), we have

$$\begin{aligned} \begin{aligned}&\Big [{\mathscr {F}}_{\mu {\widetilde{R}}}(0,\mu ^{n_0}(\epsilon ))\Big ] \cos (m_0z) \\&\qquad = \Big [ f(1-\epsilon , m_0) \frac{1}{m_0}\frac{1}{\gamma +H_0} + \epsilon ^2\Big (\frac{d J_1}{d \mu } + \frac{d J_2^n}{d \mu } \Big )\Big ]\cos (m_0z)\\&\qquad = \epsilon \Big [ \frac{f(1-\epsilon , m_0)}{\epsilon } \frac{1}{m_0}\frac{1}{\gamma +H_0} + \epsilon \Big (\frac{d J_1}{d \mu } + \frac{d J_2^n}{d \mu } \Big )\Big ]\cos (m_0z),\quad m_0=\frac{2\pi n_0}{T}. \end{aligned} \end{aligned}$$

It follows from (3.5) and (5.67) that there exists a constant \(C>0\), which is independent of \(\epsilon \) and \(n_0\), such that

$$\begin{aligned} \Big |\frac{\mathrm {d}J_1}{\mathrm {d}\mu } + \frac{\mathrm {d}J_2^n }{\mathrm {d}\mu }\Big | \le \Big |\frac{\mathrm {d}J_1}{\mathrm {d}\mu }\Big | +\Big | \frac{\mathrm {d}J_2^n }{\mathrm {d}\mu }\Big |\le C(n_0^2+1). \end{aligned}$$
(5.81)

By Lemma 5.7 and (5.81), we can choose \(E_2\) to be small such that for \(0<\epsilon <E_2\),

$$\begin{aligned} \frac{f(1-\epsilon , m_0)}{\epsilon } \frac{1}{m_0}\frac{1}{\gamma +H_0} + \epsilon \Big (\frac{d J_1}{d \mu } + \frac{d J_2^n}{d \mu } \Big )> \frac{1}{2} \frac{1}{m_0(\gamma +H_0)} -CE_2(n_0^2+1) >0. \end{aligned}$$

Therefore,

$$\begin{aligned} \left[ {\mathscr {F}}_{\mu {\widetilde{R}}}(0,\mu ^{n_0}(\epsilon ))\right] \cos \Big (\frac{2\pi n_0}{T}z\Big ) \notin Y_1, \end{aligned}$$

namely, the assumption (iv) is satisfied.

Taking \(E=\min \big (E_1,E_2\big )\), we derive that for \(0<\epsilon <E\), the four assumptions of the Crandall-Rabinowitz theorem are satisfied. Hence, the proof of Theorem 2.2 is complete. \(\square \)

6 Conclusion

Even though the plaque model is simplified into a reaction-diffusion free boundary system of 4 equations, the problem is still very challenging; it is certainly more complex than the classical Stefan problem. Through mathematical analysis, results have been established that confirm the biological phenomena. It was established in Friedman et al. (2015) that the stability of the plaque depends heavily on the balance between the “good” cholesterol and the “bad” cholesterol, and that more “good” cholesterol (or less “bad” cholesterol) induces stability or shrinkage of the plaque, a biological observation known for a long time. The result of Friedman et al. (2015) is restricted to the radially symmetric case only. The shape of the plaque, however, is unlikely to be radially symmetric, so the question of non-radially symmetric solutions arises naturally. Just like for the classical Stefan problem, non-radially symmetric solutions of the general free boundary problem are extremely challenging. As a sub-problem, it is quite reasonable to study whether non-radially symmetric stationary solutions exist, and there are plenty of biological examples of non-radially symmetric plaques. In this effort, non-radially symmetric stationary solutions were produced in the cross-section direction in Zhao and Hu (2021, 2022) through a bifurcation approach. In the current paper we extend the study of non-radially symmetric stationary solutions through a bifurcation in the longitudinal direction, with the shape given in Fig. 1.

Fig. 1 Small three-space-dimensional plaque: a small stationary plaque slightly protruding in the longitudinal direction