1 Introduction

In this paper we are concerned with the existence and localization of radial positive solutions for the Neumann boundary value problem in the annulus or in the ball

$$\begin{aligned} \left\{ \begin{array}{ll} -\text {div }\left( \psi \left( \left| \nabla u\right| \right) \nabla u\right) +\varepsilon u=f\left( \left| x\right| ,u\right) &{} \text {in }\Omega \\ \partial _{\nu }u=0 &{} \text {on }\partial \Omega , \end{array} \right. \end{aligned}$$
(1.1)

where \(\varepsilon >0,\) \(\ \psi :\left( -a,a\right) \rightarrow \mathbb {R} _{+}\ \) is such that the function

$$\begin{aligned} \phi :\left( -a,a\right) \rightarrow \left( -b,b\right) ,\quad \phi \left( s\right) =s\psi \left( s\right) \quad \left( 0<a,b\le +\infty \right) \end{aligned}$$

is an increasing homeomorphism, \(f:\left[ R_{0},R\right] \times \mathbb {R} _{+}\rightarrow \mathbb {R} _{+}\) is continuous and \(\ \Omega =\{x\in \mathbb {R} ^{n}:\ R_{0}<\left| x\right| <R\},\ n\ge 2.\) Here \(\ 0\le R_{0}<R<+\infty \) and \(\nu \) is the exterior unit normal vector to the boundary of \(\Omega .\)

Problems of type (1.1) arise from mathematical modeling of real processes. Thus, equations involving the p-Laplacian come from fluid mechanics in porous media [2], equations with a singular homeomorphism arise from the relativistic mechanics [1], and equations involving a bounded homeomorphism intervene in capillarity problems [11].

Looking for radial solutions of (1.1), that is, functions of the form \( \ u(x)=v(r)\) with \(\ r=\left| x\right| ,\) we reduce problem (1.1) to the boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll} -\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }+\varepsilon r^{n-1}v=r^{n-1}f(r,v) &{} \text {in }\left( R_{0},R\right) \\ v^{\prime }\left( R_{0}\right) =v^{\prime }(R)=0. &{} \end{array} \right. \end{aligned}$$
(1.2)

Note that in the case of the ball, when \(R_{0}=0,\) the equality \(v^{\prime }(R)=0\) stands for the Neumann condition \(\ \partial _{\nu }u=0\) on the sphere, while the additional assumption \(\ v^{\prime }\left( 0\right) =0\) is required as a consequence of the regularity of the radially symmetric solutions u.

There are many contributions to radial solutions of boundary value problems in the annulus and in the ball. For instance, the papers [4, 5, 12, 14] and [19] consider equations and systems with the classical Laplacian, the papers [6, 8] and [13] deal with the p-Laplacian, and [3, 7] and [16] treat the case of the \(\phi \)-Laplacian, in particular that of the mean curvature operators in the Euclidean and Minkowski spaces. The methods used are among the most varied: fixed point principles, topological degree, upper and lower solution techniques, variational methods and the shooting method. Although the problem of radial solutions reduces to ordinary differential equations, the presence of a singularity at the origin makes the study more difficult. The analysis is even more delicate for the Neumann problem, mainly due to the absence of an explicit expression of the solution operator (the integral-type inverse of the differential operator).

In this paper, to our knowledge the first devoted to the localization of radial solutions for the Neumann problem involving a general \(\phi \)-Laplace operator, we use the homotopy technique, already applied in [15] and [16] for the Dirichlet problem, to obtain the existence of solutions v such that

$$\begin{aligned} \beta<\min _{\left[ R_{0},R\right] }v,\quad \max _{\left[ R_{0},R\right] }v<\alpha , \end{aligned}$$

for two given numbers \(0<\beta <\alpha .\)

From a physical point of view, assuming that the function v stands for the state of a process and f is the external source, such a localization is motivated by two requirements: first, the need to find a state-dependent source \(f\left( r,v\right) \) (feedback law) guaranteeing that the state v remains between two given bounds; and second, given the state-dependent source f, the need to find bounds on the corresponding state v.

Mathematically, such a localization immediately gives multiple solutions when \(f\left( r,s\right) \) oscillates. Additionally, we show that the solutions v are decreasing on \(\left[ R_{0},R\right] \) provided that \(f\left( r,s\right) \) has suitable monotonicity properties in r and s. A further property of the decreasing solutions is captured by a Harnack type inequality, established via a change of variable meant to eliminate the first-order term of the differential operator. Our results apply in particular to homeomorphisms with a physical significance as mentioned above, such as

$$\begin{aligned} \phi : \mathbb {R} \rightarrow \mathbb {R},\quad \phi \left( s\right) =\left| s\right| ^{p-2}s\quad \text { for } p>1\text { }(\text {here }a=b=+\infty ), \end{aligned}$$

involved by the classical p-Laplacian, and to the bounded and singular homeomorphisms

$$\begin{aligned} \phi : \mathbb {R} \rightarrow \left( -b,b\right) ,\quad \phi \left( s\right) =\frac{bs}{\sqrt{ 1+s^{2}}}\quad (\text {here }a=+\infty ,\ b<+\infty ) \end{aligned}$$

and

$$\begin{aligned} \phi :\left( -a,a\right) \rightarrow \mathbb {R},\quad \phi \left( s\right) =\frac{s}{\sqrt{a^{2}-s^{2}}}\quad ( \text {here }a<+\infty ,\ b=+\infty ), \end{aligned}$$

as in the mean curvature operators in the Euclidean and Minkowski spaces.
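These three model homeomorphisms have explicit inverses, which is convenient in computations. The following sketch (our own illustration; the normalizations \(p=3,\) \(b=1\) and \(a=1\) are our choices, not taken from the paper) checks the inversion formulas at a few sample points:

```python
import numpy as np

# Model homeomorphisms from the introduction and their explicit inverses
# (our illustration; p = 3, b = 1, a = 1 are arbitrary normalizations).
s = np.array([-0.9, -0.5, 0.5, 0.9])
p = 3.0
phi_p = lambda s: np.abs(s) ** (p - 2) * s          # p-Laplacian, a = b = +inf
inv_p = lambda y: np.sign(y) * np.abs(y) ** (1.0 / (p - 1))
phi_b = lambda s: s / np.sqrt(1 + s**2)             # bounded case, b = 1
inv_b = lambda y: y / np.sqrt(1 - y**2)
phi_a = lambda s: s / np.sqrt(1 - s**2)             # singular case, a = 1
inv_a = lambda y: y / np.sqrt(1 + y**2)
for phi, inv in [(phi_p, inv_p), (phi_b, inv_b), (phi_a, inv_a)]:
    assert np.allclose(inv(phi(s)), s)
print("inversion formulas verified")
```

Note the duality: in the bounded case the inverse blows up at \(\pm b\), while in the singular case the inverse is bounded; this is precisely what distinguishes the Euclidean and Minkowski mean curvature operators.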

2 Solution Properties and the Solution Operator

By a solution of (1.2) we mean a function \( v\in C^{1}\left[ R_{0},R\right] \) such that \(\ v^{\prime }\left( r\right) \in \left( -a,a\right) \) for all \(r\in \left[ R_{0},R\right] \ \)and \(\ r^{n-1}\phi \left( v^{\prime }\right) \) is differentiable and satisfies (1.2).

We look for solutions which are nonnegative on \(\left[ R_{0},R\right] .\)

2.1 The Solution Operator

According to Corollary 2.4 in [3] we have that for each \(\ h\in C \left[ R_{0},R\right] \) there is at least one solution to the problem

$$\begin{aligned} \left\{ \begin{array}{ll} L\left( v\right) \left( r\right) :=-\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }+\varepsilon r^{n-1}v=r^{n-1}h\left( r\right) &{} \text {in }\left( R_{0},R\right) \\ v^{\prime }\left( R_{0}\right) =v^{\prime }(R)=0. &{} \end{array} \right. \end{aligned}$$
(2.1)

The next lemma gives a characterization of the solutions.

Lemma 2.1

A function v is a solution of (2.1) if and only if \(v\in C \left[ R_{0},R\right] \) and the following two conditions hold:

$$\begin{aligned}{} & {} v\left( r\right) =v\left( R_{0}\right) +\int _{R_{0}}^{r}\phi ^{-1}\left( s^{1-n}\int _{s}^{R}\tau ^{n-1}\left( h-\varepsilon v\right) d\tau \right) ds\quad (r\in \left[ R_{0},R\right] ); \nonumber \\ \end{aligned}$$
(2.2)
$$\begin{aligned}{} & {} \int _{R_{0}}^{R}\tau ^{n-1}\left( h-\varepsilon v\right) d\tau =0. \end{aligned}$$
(2.3)

Proof

Let v be a solution of (2.1). Then integrating from r to R and taking into account that \(v^{\prime }\left( R\right) =0\) gives

$$\begin{aligned} r^{n-1}\phi \left( v^{\prime }\right) =\int _{r}^{R}\tau ^{n-1}\left( h-\varepsilon v\right) d\tau , \end{aligned}$$

which immediately implies (2.2). Next, (2.3) is obtained by integrating (2.1) over \(\left[ R_{0},R\right] \) and using \(v^{\prime }\left( R_{0}\right) =v^{\prime }(R)=0.\)

Conversely, if v satisfies (2.2) and (2.3), then clearly \(v\in C^{1}\left( R_{0},R\right) \) and by direct computation using (2.3),

$$\begin{aligned} v^{\prime }\left( r\right)= & {} \phi ^{-1}\left( r^{1-n}\int _{r}^{R}\tau ^{n-1}\left( h-\varepsilon v\right) d\tau \right) \nonumber \\= & {} -\phi ^{-1}\left( r^{1-n}\int _{R_{0}}^{r}\tau ^{n-1}\left( h-\varepsilon v\right) d\tau \right) , \end{aligned}$$
(2.4)

whence \(v^{\prime }\left( R\right) =v^{\prime }\left( R_{0}\right) =0.\) Moreover,

$$\begin{aligned} \phi \left( v^{\prime }\right) =r^{1-n}\int _{r}^{R}\tau ^{n-1}\left( h-\varepsilon v\right) d\tau , \end{aligned}$$
(2.5)

which shows that \(\phi \left( v^{\prime }\right) \) is differentiable and yields

$$\begin{aligned} -\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }=r^{n-1}\left( h-\varepsilon v\right) \quad \text { in }\left( R_{0},R\right) .\text { } \end{aligned}$$

Thus, v is a solution of (2.1). \(\square \)

The following lemmas show that for every \(\ h,\) the solution is unique and the corresponding solution operator \(\ S:C\left[ R_{0},R\right] \rightarrow C\left[ R_{0},R\right] \) attaching to each \(\ h\) the corresponding solution \(\ v\) is isotone and sends nonnegative functions into nonnegative functions.

Lemma 2.2

Let \(h_{1},h_{2}\in C\left[ R_{0},R\right] ,\) \(h_{1}\le h_{2}\) on \(\left[ R_{0},R\right] ,\) and let \(v_{1},v_{2}\in C^{1}\left[ R_{0},R\right] \) be such that for \(i=1,2,\) one has \(\ v_{i}^{\prime }\left( R_{0}\right) =v_{i}^{\prime }\left( R\right) =0\) and

$$\begin{aligned} L\left( v_{i}\right) \left( r\right) =r^{n-1}h_{i}\left( r\right) \quad \textrm{for }\,r\in \left( R_{0},R\right) . \end{aligned}$$

Then \(\ v_{1}\le v_{2}\) in \(\left[ R_{0},R\right] .\)

Proof

Assume the contrary. Let \(I=\left( \alpha ,\beta \right) \) be a maximal subinterval of \(\left( R_{0},R\right) \) on which \(v_{2}-v_{1}\) is strictly negative. On I we then have

$$\begin{aligned} -\left( r^{n-1}\left( \phi \left( v_{2}^{\prime }\right) -\phi \left( v_{1}^{\prime }\right) \right) \right) ^{\prime }=r^{n-1}\left( h_{2}-h_{1}\right) -\varepsilon r^{n-1}\left( v_{2}-v_{1}\right) >0. \end{aligned}$$

Hence the function

$$\begin{aligned} \digamma \left( r\right) :=r^{n-1}\left( \phi \left( v_{2}^{\prime }\left( r\right) \right) -\phi \left( v_{1}^{\prime }\left( r\right) \right) \right) \end{aligned}$$

is strictly decreasing on I.

Assume \(\beta =R.\) Since \(v_{i}^{\prime }(R)=0\) for both \(i=1,2,\) we must have \(\ \phi \left( v_{2}^{\prime }\left( r\right) \right) -\phi \left( v_{1}^{\prime }\left( r\right) \right) >0,\) whence \(\ v_{2}^{\prime }\left( r\right) -v_{1}^{\prime }\left( r\right) >0\) for \(r\in I.\) This shows that the function \(\ v_{2}-v_{1}\) is strictly increasing on I. This together with its negativity implies that \(\ v_{2}-v_{1}\) is negative on the whole interval \([R_{0},R).\) Hence \(\alpha =R_{0}\) and consequently \(\digamma \) is strictly decreasing on \(\left[ R_{0},R\right] ,\) which is impossible since its values at \(R_{0}\) and R are equal to zero. Thus \(\beta <R\) and \( v_{2}\left( \beta \right) -v_{1}\left( \beta \right) =0.\)

Assume \(\alpha =R_{0}.\) Since \(v_{i}^{\prime }(R_{0})=0\) for both \(i=1,2,\) we have \(\phi \left( v_{2}^{\prime }\left( r\right) \right) -\phi \left( v_{1}^{\prime }\left( r\right) \right) <0,\) whence \(\ v_{2}^{\prime }\left( r\right) -v_{1}^{\prime }\left( r\right) <0\ \) for \(\ r\in I.\) But this is impossible in virtue of \(v_{2}\left( \beta \right) -v_{1}\left( \beta \right) =0\) and the negativity of \(\ v_{2}-v_{1}\) on I. Therefore \( R_{0}<\alpha<\beta <R\) and \(\ v_{2}\left( \alpha \right) -v_{1}\left( \alpha \right) =0,\ v_{2}\left( \beta \right) -v_{1}\left( \beta \right) =0.\) Let \(\ r_{0}\in \left( \alpha ,\beta \right) \) be such that \(\ v_{2}\left( r_{0}\right) -v_{1}\left( r_{0}\right) =\min _{r\in \left[ \alpha ,\beta \right] }\left( v_{2}\left( r\right) -v_{1}\left( r\right) \right) .\) Then \( \digamma \left( r_{0}\right) =0\) and since \(\digamma \) is decreasing we must have \(\digamma \left( r\right) >0\) on \(\left( \alpha ,r_{0}\right) \) and \( \digamma \left( r\right) <0\) on \(\left( r_{0},\beta \right) .\) Consequently, the function \(v_{2}-v_{1}\) should be increasing on \(\left( \alpha ,r_{0}\right) \), which is impossible since \(v_{2}\left( \alpha \right) -v_{1}\left( \alpha \right) =0\) and \(v_{2}\left( r_{0}\right) -v_{1}\left( r_{0}\right) <0.\)

Therefore, \(v_{1}\le v_{2}\) on \(\left[ R_{0},R\right] \) as desired. \(\square \)

Lemma 2.3

For each \(\ h\in C\left[ R_{0},R\right] ,\) problem (2.1) has a unique solution and the solution operator S is isotone and sends nonnegative functions into nonnegative functions. In addition, \(S\left( h\right) =h/\varepsilon \) for every constant h.

Proof

Let \(v_{1},v_{2}\) solve (2.1) for the same function h. Applying the previous lemma to \(h_{1}=h_{2}=h\) gives \(v_{1}\le v_{2}\) and \(v_{2}\le v_{1}.\) Thus \(v_{1}=v_{2}\) proving the uniqueness. The monotonicity and positivity of S are direct consequences of the previous lemma. Finally the fact that \(\ S\left( h\right) =h/\varepsilon \ \) for any constant function h follows directly from (2.1). \(\square \)
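The characterization (2.2)-(2.3) also suggests a way to approximate \(S\) numerically: iterate the integral formula (2.2), adjusting the constant \(v\left( R_{0}\right) \) at each step so that the orthogonality condition (2.3) holds. The sketch below (our own illustration, not from the paper; we assume \(\phi \) is the identity, \(n=2,\) the annulus \([1,2]\) and an \(\varepsilon \) small enough that the iteration contracts) checks the property \(S\left( h\right) =h/\varepsilon \) of Lemma 2.3 for a constant h:

```python
import numpy as np

# Successive-approximation sketch (our own illustration) for the solution
# operator S of (2.1), based on the characterization (2.2)-(2.3).
# Assumptions: phi = identity, n = 2, annulus [1, 2], eps small enough
# that the iteration contracts; all numerical values are our choices.
R0, R, n, eps = 1.0, 2.0, 2, 0.5
r = np.linspace(R0, R, 401)
w = r ** (n - 1)                                   # weight tau^{n-1}
phi_inv = lambda x: x                              # identity homeomorphism

def trapz(g):                                      # trapezoid rule on the grid
    return float(np.sum((g[1:] + g[:-1]) / 2 * np.diff(r)))

def cum(g):                                        # s -> int_{R0}^{s} g dtau
    return np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(r))))

def S(h, iters=100):
    v = np.zeros_like(r)
    for _ in range(iters):
        I = cum(w * (h - eps * v))
        inner = I[-1] - I                          # int_s^R tau^{n-1}(h - eps*v) dtau
        u = cum(phi_inv(r ** (1 - n) * inner))     # the integral term in (2.2)
        c = trapz(w * (h - eps * u)) / (eps * trapz(w))  # v(R0) fixed by (2.3)
        v = c + u
    return v

v = S(np.full_like(r, 3.0))                        # constant h = 3
print(float(np.max(np.abs(v - 3.0 / eps))))        # Lemma 2.3: S(h) = h/eps
```

The constant c is obtained by solving (2.3) for \(v\left( R_{0}\right) \) with \(v=c+u\); for constant h the discrete fixed point is exactly \(h/\varepsilon ,\) so the printed error is essentially at machine precision.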

Lemma 2.4

If a function \(h\in C\left[ R_{0},R\right] \) is decreasing in \( \left[ R_{0},R\right] ,\) then the corresponding solution \(\ S\left( h\right) \) is decreasing in \(\left[ R_{0},R\right] \) too.

Proof

Assume otherwise. Then there is a maximal subinterval \(\left[ \alpha ,\beta \right] \) of \(\left[ R_{0},R\right] \) on which \(v:=S\left( h\right) \) is strictly increasing. If \(\alpha \) is interior, i.e., \(\alpha >R_{0},\) then clearly \(v^{\prime }\left( \alpha \right) =0.\) Otherwise \(\alpha =R_{0}\) and \(v^{\prime }\left( \alpha \right) =0\) due to the Neumann condition. Similarly, \(v^{\prime }\left( \beta \right) =0.\) The function \(h-\varepsilon v\) being decreasing on \(\left[ \alpha ,\beta \right] ,\) one has that \(\ r^{1-n}\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }\) is increasing on \(\left( \alpha ,\beta \right) .\) Hence there are only two possibilities: either (a) \(\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }\ge 0\) on \(\left( \alpha ,\beta \right) ,\) or (b) there is \(\gamma \in (\alpha ,\beta ]\) such that \(\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }<0\) in \(\left( \alpha ,\gamma \right) \) and \(\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }\ge 0\) in \(\left( \gamma ,\beta \right) .\)

In case (a), the monotonicity of \(r^{1-n}\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }\) on \(\left( \alpha ,\beta \right) \) implies that \(\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }\) is increasing on \(\left( \alpha ,\beta \right) ,\) which implies the convexity on \(\left( \alpha ,\beta \right) \) of the function \(r^{n-1}\phi \left( v^{\prime }\right) .\) Since this function vanishes at \(\alpha \) and \(\beta \) (like \(v^{\prime }\)), we must have \(r^{n-1}\phi \left( v^{\prime }\right) \le 0\) in \(\left( \alpha ,\beta \right) .\) But this gives \(v^{\prime }\le 0 \) in \(\left( \alpha ,\beta \right) ,\) which contradicts our assumption on v.

Assume case (b). Then the function \(r^{n-1}\phi \left( v^{\prime }\right) \) is decreasing on \(\left( \alpha ,\gamma \right) .\) Since its value at \( \alpha \) is zero, we have \(\ r^{n-1}\phi \left( v^{\prime }\right) \le 0\) in \(\left( \alpha ,\gamma \right) ,\) whence \(\ v^{\prime }\le 0\) in \(\left( \alpha ,\gamma \right) ,\) again a contradiction. \(\square \)

Lemma 2.5

The solution operator S is completely continuous from \(C\left[ R_{0},R\right] \) to \(C\left[ R_{0},R\right] .\)

Proof

  1. (a)

    \(S\left( M\right) \) is relatively compact for every bounded set \( M\subset C\left[ R_{0},R\right] .\) Indeed, if \(C>0\) is such that \(\ \left| h\right| _{\infty }=\max _{r\in \left[ R_{0},R\right] }\left| h\left( r\right) \right| \le C\ \) for all \(h\in M,\) then from \(-C\le h\le C\) one has \(S\left( -C\right) \le S\left( h\right) \le S\left( C\right) .\) Hence \(\left| S\left( h\right) \right| _{\infty }\le \max \left\{ \left| S\left( C\right) \right| _{\infty },\left| S\left( -C\right) \right| _{\infty }\right\} .\) Thus \( S\left( M\right) \) is bounded in \(C\left[ R_{0},R\right] .\) Now from (2.4) we see that the derivatives of the functions v from \(S\left( M\right) \) are uniformly bounded, that is \(S\left( M\right) \) is equicontinuous. Therefore \(S\left( M\right) \) is relatively compact in \(C \left[ R_{0},R\right] .\)

  2. (b)

    S is continuous. Let \(\ h_{k}\in C\left[ R_{0},R\right] \) be convergent to some \(\ h\) and let \(\ v_{k}=S\left( h_{k}\right) .\) We need to prove that \(\ v_{k}\rightarrow S\left( h\right) .\) According to part (a), there is a convergent subsequence of \(\left( v_{k}\right) .\) Let v be its limit. Passing to the limit in (2.2) and (2.3) written for \(h_{k}\) and \(v_{k},\) we find that \(S\left( h\right) =v.\) Since the same argument applies to every subsequence of \(\left( v_{k}\right) ,\) the whole sequence converges to \(S\left( h\right) .\) \(\square \)

2.2 A Harnack Type Inequality

In this section we assume that the homeomorphism \(\phi \) satisfies the following condition:

(H\(_{\phi }\)):

\(\phi \) is \(C^{1},\)

$$\begin{aligned} s\phi ^{\prime }\left( s\right) \le \phi \left( s\right) \quad \text {and} \quad \phi ^{\prime }\left( s\right) \ge \sigma >0 \quad \text {for all }s\in (-a,0], \end{aligned}$$
(2.6)

for some \(\sigma >0.\)

For example, such homeomorphisms are those involved by the classical Laplacian and the mean curvature operator in the Minkowski space.
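For instance, for the singular homeomorphism \(\phi \left( s\right) =s/\sqrt{a^{2}-s^{2}}\) one computes \(\phi ^{\prime }\left( s\right) =a^{2}/\left( a^{2}-s^{2}\right) ^{3/2}\ge 1/a\) and \(\phi \left( s\right) -s\phi ^{\prime }\left( s\right) =-s^{3}/\left( a^{2}-s^{2}\right) ^{3/2}\ge 0\) on \((-a,0],\) so (2.6) holds with \(\sigma =1/a.\) A quick numerical confirmation (our own illustration, with the normalization \(a=1\)):

```python
import numpy as np

# Numerical check (our illustration) of condition (2.6) for the Minkowski-type
# homeomorphism phi(s) = s/sqrt(a^2 - s^2) with a = 1, on a grid in (-a, 0]:
# phi'(s) = a^2/(a^2 - s^2)^{3/2} >= 1/a =: sigma, and
# phi(s) - s*phi'(s) = -s^3/(a^2 - s^2)^{3/2} >= 0 for s <= 0.
a = 1.0
s = np.linspace(-0.99 * a, 0.0, 1000)
phi = s / np.sqrt(a**2 - s**2)
dphi = a**2 / (a**2 - s**2) ** 1.5
print(bool(np.all(s * dphi <= phi + 1e-12)),       # s*phi'(s) <= phi(s)
      bool(np.all(dphi >= 1.0 / a)))               # phi'(s) >= sigma = 1/a
# prints: True True
```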

Let \(\ R_{1}\in \left( R_{0},R\right) \) be a fixed number and \(\ v\in W^{2,\infty }\left( R_{0},R\right) \cap C^{1}\left[ R_{0},R\right] \) be nonnegative on \(\left[ R_{0},R\right] ,\) decreasing on \(\left[ R_{1},R\right] ,\) with \(\ v^{\prime }\left( R\right) =0\) and

$$\begin{aligned} L\left( v\right) =-\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }+\varepsilon r^{n-1}v\ge 0 \quad \text {a.e. in }\left( R_{0},R\right) . \end{aligned}$$
(2.7)

We make the change of variable \(\ t=\eta \left( r\right) ,\) where

$$\begin{aligned} \eta \left( r\right) =\left\{ \begin{array}{ll} \ln \frac{R}{r} &{} \text {for }n=2 \\ \frac{r^{2-n}-R^{2-n}}{n-2} &{} \text {for }n\ge 3 \end{array} \right. \end{aligned}$$

by which the interval \((R_{0},R]\) of r becomes \([0,t_{1})\) for t,  where \( \ t_{1}=\ln \left( R/R_{0}\right) \) for \(n=2\) and \(\ t_{1}=\left( R_{0}^{2-n}-R^{2-n}\right) /\left( n-2\right) \) for \(n\ge 3.\) Note that \( t_{1}=+\infty \) if \(R_{0}=0.\) Also, \(R_{1}\) becomes \(t_{0}:=\eta \left( R_{1}\right) .\) Clearly \(\ 0<t_{0}<t_{1}.\) Then, letting \(\ w\left( t\right) =v\left( r\right) \) and using

$$\begin{aligned} v^{\prime }\left( r\right)= & {} -r^{1-n}w^{\prime }\left( t\right) ,\\ v^{\prime \prime }\left( r\right)= & {} r^{2\left( 1-n\right) }w^{\prime \prime }\left( t\right) +\left( n-1\right) r^{-n}w^{\prime }\left( t\right) =r^{2\left( 1-n\right) }w^{\prime \prime }\left( t\right) -\left( n-1\right) r^{-1}v^{\prime }\left( r\right) \end{aligned}$$

and

$$\begin{aligned} -\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }=-r^{n-1}\phi ^{\prime }\left( v^{\prime }\right) v^{\prime \prime }-\left( n-1\right) r^{n-2}\phi \left( v^{\prime }\right) , \end{aligned}$$

we rewrite (2.7) as

$$\begin{aligned} -r^{n-1}\phi ^{\prime }\left( v^{\prime }\right) \left\{ r^{2\left( 1-n\right) }w^{\prime \prime }\left( t\right) -\left( n-1\right) r^{-1}v^{\prime }\left( r\right) \right\} -\left( n-1\right) r^{n-2}\phi \left( v^{\prime }\right) +\varepsilon r^{n-1}w\ge 0, \end{aligned}$$

or equivalently

$$\begin{aligned} -r^{1-n}\phi ^{\prime }\left( v^{\prime }\right) w^{\prime \prime }\left( t\right) +\left( n-1\right) r^{n-2}\left\{ v^{\prime }\phi ^{\prime }\left( v^{\prime }\right) -\phi \left( v^{\prime }\right) \right\} +\varepsilon r^{n-1}w\ge 0. \end{aligned}$$

Since \(\ v^{\prime }\le 0\) on \(\left[ R_{1},R\right] ,\) in virtue of (2.6), one has \(\ v^{\prime }\phi ^{\prime }\left( v^{\prime }\right) -\phi \left( v^{\prime }\right) \le 0.\) It follows that

$$\begin{aligned} -r^{1-n}\phi ^{\prime }\left( v^{\prime }\right) w^{\prime \prime }\left( t\right) +\varepsilon r^{n-1}w\left( t\right) \ge 0 \quad \text {for a.e. } t\in (0,t_{0}]. \end{aligned}$$

Hence

$$\begin{aligned} -w^{\prime \prime }\left( t\right) +\varepsilon \frac{r^{2\left( n-1\right) } }{\phi ^{\prime }\left( v^{\prime }\right) }w\left( t\right) \ge 0\quad \text {for a.e. }t\in (0,t_{0}]. \end{aligned}$$

Since \(\ w\ge 0,\) \(r\le R\ \) and \(\ \phi ^{\prime }\left( v^{\prime }\right) \ge \sigma ,\ \) we deduce that

$$\begin{aligned} -w^{\prime \prime }\left( t\right) +\frac{\varepsilon R^{2\left( n-1\right) } }{\sigma }w\left( t\right) \ge 0\quad \text {for a.e. }t\in \text { } (0,t_{0}]. \end{aligned}$$
(2.8)

Notice that, since v is decreasing on \(\left[ R_{1},R\right] \) and \(\ v^{\prime }\left( r\right) =-r^{1-n}w^{\prime }\left( t\right) ,\) the function \(\ w\) is increasing on \([0,t_{0}],\) and since \( v^{\prime }\left( R\right) =0,\) one has \(\ w^{\prime }\left( 0\right) =0.\)

Integrating in (2.8) first from 0 to t \(\left( t\le t_{0}\right) ,\) we obtain

$$\begin{aligned} w^{\prime }\left( t\right) \le \frac{\varepsilon R^{2\left( n-1\right) }}{ \sigma }\int _{0}^{t}w\left( s\right) ds\le \frac{\varepsilon R^{2\left( n-1\right) }}{\sigma }w\left( t_{0}\right) t\quad \text {for }t\le t_{0}, \end{aligned}$$

and integrating again from 0 to \(t_{0},\) we find that

$$\begin{aligned} w\left( t_{0}\right) -w\left( 0\right) \le \frac{\varepsilon R^{2\left( n-1\right) }}{2\sigma }t_{0}^{2}w\left( t_{0}\right) . \end{aligned}$$

Hence

$$\begin{aligned} w\left( 0\right) \ge \left( 1-\frac{\varepsilon R^{2\left( n-1\right) }}{ 2\sigma }t_{0}^{2}\right) w\left( t_{0}\right) . \end{aligned}$$

Letting

$$\begin{aligned} \gamma :=1-\frac{\varepsilon R^{2\left( n-1\right) }}{2\sigma }t_{0}^{2}, \end{aligned}$$

assuming that \(\gamma >0\) (which happens for small enough \(\varepsilon \)) and recalling that \(w\left( 0\right) =\min _{t\in [0,t_{0}]}w\left( t\right) ,\) \(w\left( t_{0}\right) =\max _{t\in \left[ 0,t_{0}\right] }w\left( t\right) ,\) we have

$$\begin{aligned} \min _{t\in [0,t_{0}]}w\left( t\right) \ge \gamma \max _{t\in \left[ 0,t_{0}\right] }w\left( t\right) . \end{aligned}$$

Returning to the function v,  we obtain the Harnack inequality

$$\begin{aligned} v\left( R\right) =\min _{r\in \left[ R_{1},R\right] }v\left( r\right) \ge \gamma \max _{r\in [R_{1},R]}v\left( r\right) =\gamma v\left( R_{1}\right) . \end{aligned}$$

Notice that in the case \(R_{0}>0,\) we may take \(R_{1}=R_{0}\); then \( t_{0}=\eta \left( R_{1}\right) \) is finite and the above reasoning remains valid, yielding the stronger inequality

$$\begin{aligned} v\left( R\right) =\min _{r\in \left[ R_{0},R\right] }v\left( r\right) \ge \gamma \max _{r\in [R_{0},R]}v\left( r\right) =\gamma v\left( R_{0}\right) . \end{aligned}$$

Thus we have the following result.

Theorem 2.6

Assume that condition (H\(_{\phi }\)) holds. Then for every number \(\ R_{1}\in [R_{0},R)\) with \(\ \eta \left( R_{1}\right) <+\infty ,\) there exists \(\ \varepsilon _{0}=\varepsilon _{0}\left( R_{1},\sigma ,R\right) >0\) such that for every \(0<\varepsilon <\varepsilon _{0},\) there is a constant \(\ \gamma =\gamma \left( R_{1},\sigma ,R,\varepsilon \right) >0\) such that

$$\begin{aligned} v\left( R\right) \ge \gamma v\left( R_{1}\right) \end{aligned}$$

for every \(\ v\in W^{2,\infty }\left( R_{0},R\right) \cap C^{1}\left[ R_{0},R \right] \) nonnegative on \(\left[ R_{0},R\right] ,\) decreasing on \(\left[ R_{1},R\right] ,\) with\(\ \ v^{\prime }\left( R\right) =0\) and\(\ \ L\left( v\right) :=-\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }+\varepsilon r^{n-1}v\ge 0\ \) a.e. in \(\left( R_{0},R\right) .\)
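For concrete data, both the threshold on \(\varepsilon \) and the constant \(\gamma \) are explicit from the derivation above: \(\gamma =1-\varepsilon R^{2\left( n-1\right) }t_{0}^{2}/\left( 2\sigma \right) \) with \(t_{0}=\eta \left( R_{1}\right) ,\) and \(\gamma >0\) precisely when \(\varepsilon <\varepsilon _{0}:=2\sigma /\left( R^{2\left( n-1\right) }t_{0}^{2}\right) .\) A small sketch (our own sample values, assuming \(n=2,\) \(\sigma =1,\) the annulus \([1,2]\) and \(R_{1}=R_{0}\)):

```python
import numpy as np

# Harnack constant of Theorem 2.6 for sample data (our choices):
# n = 2, sigma = 1, [R0, R] = [1, 2], R1 = R0, so t0 = eta(R1) = ln(R/R1).
n, sigma, R0, R = 2, 1.0, 1.0, 2.0
t0 = np.log(R / R0)                                 # eta(R1) for n = 2
eps0 = 2 * sigma / (R ** (2 * (n - 1)) * t0**2)     # gamma > 0 iff eps < eps0

def gamma(eps):
    return 1 - eps * R ** (2 * (n - 1)) * t0**2 / (2 * sigma)

print(round(float(eps0), 4), round(float(gamma(eps0 / 2)), 4))  # 1.0407 0.5
```

Thus for \(\varepsilon =\varepsilon _{0}/2\) one gets \(\gamma =1/2:\) the minimum \(v\left( R\right) \) of any solution as in Theorem 2.6 is at least half of \(v\left( R_{1}\right) .\)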

3 Existence, Localization and Multiplicity

Let \(K_{+}\) be the positive cone of \(C\left[ R_{0},R \right] .\)

Now it is clear that v is a nonnegative solution of (1.2) if and only if v is a fixed point of the operator

$$\begin{aligned} T:K_{+}\rightarrow K_{+},\ \ \ T=S\circ N_{f}, \end{aligned}$$

where \(N_{f}\left( v\right) =f\left( \cdot ,v\left( \cdot \right) \right) \) is the Nemytskii operator associated with f. According to the previous lemmas on the solution operator, the operator T is well-defined and completely continuous.

3.1 Existence and Localization

Now, for any number \(\alpha >0,\) consider the set

$$\begin{aligned} V_{\alpha }:=\left\{ v\in K_{+}:\ \left| v\right| _{\infty }<\alpha \right\} . \end{aligned}$$

Here \(\left| v\right| _{\infty }=\max _{r\in \left[ R_{0},R\right] }\left| v\left( r\right) \right| .\) The operator T being completely continuous, the set \(T\left( \overline{V}_{\alpha }\right) \) is bounded, so there is a number \(\widetilde{\alpha }\ge \alpha \) such that \( T\left( \overline{V}_{\alpha }\right) \subset \overline{V}_{\widetilde{ \alpha }}.\) Define the extended operator \(\widetilde{T}:\overline{V}_{ \widetilde{\alpha }}\rightarrow \overline{V}_{\widetilde{\alpha }}\) by

$$\begin{aligned} \widetilde{T}\left( v\right) =T\left( \min \left\{ \frac{\alpha }{\left| v\right| _{\infty }},1\right\} v\right) . \end{aligned}$$

The following two lemmas rely on the properties of the fixed point index (see, e.g., [9]).

Lemma 3.1

If

$$\begin{aligned} T\left( v\right) \ne \lambda v\quad \text {for }v\in K_{+}\text { with } \left| v\right| _{\infty }=\alpha \quad \text {and}\quad \lambda \ge 1, \end{aligned}$$
(3.1)

then the fixed point index \(\ i\left( \widetilde{T},V_{\alpha },\overline{V} _{\widetilde{\alpha }}\right) =1.\)

Next, denote \(\left| v\right| _{0}:=\min _{r\in \left[ R_{0},R \right] }v\left( r\right) \ \) and for a number \(\beta >0,\) consider the set

$$\begin{aligned} W_{\beta }:=\left\{ v\in \overline{V}_{\widetilde{\alpha }}:\ \left| v\right| _{0}<\beta \right\} . \end{aligned}$$

It is clear that \(W_{\beta }\) is open in \(\overline{V}_{\widetilde{\alpha } }. \)

Lemma 3.2

Assume that for a function \(h\in K_{+}\) such that \(\left| h\right| _{\infty }=\alpha ,\) \(\left| h\right| _{0}>\beta ,\) one has

$$\begin{aligned} \left( 1-\lambda \right) T\left( v\right) +\lambda h\ne v\quad \textrm{for }\, v\in K_{+}\text { with }\left| v\right| _{\infty }\le \alpha ,\ \left| v\right| _{0}=\beta \quad \textrm{and}\quad \lambda \in \left[ 0,1\right] .\nonumber \\ \end{aligned}$$
(3.2)

Then \(\ i\left( \widetilde{T},W_{\beta },\overline{V}_{\widetilde{\alpha } }\right) =0.\)

Lemma 3.3

Under the assumptions of Lemmas 3.1 and 3.2, the operator T has a fixed point v in \(\ V_{\alpha }{\setminus } \overline{W}_{\beta },\) that is, problem (1.2) has a solution \(\ v\) which is nonnegative on \(\left[ R_{0},R\right] ,\) with \(\ \beta <\left| v\right| _{0}\ \)and \(\ \left| v\right| _{\infty }<\alpha .\)

Proof

One has

$$\begin{aligned} 1= & {} i\left( \widetilde{T},V_{\alpha },\overline{V}_{\widetilde{\alpha } }\right) =i\left( \widetilde{T},V_{\alpha }\setminus \overline{W}_{\beta }, \overline{V}_{\widetilde{\alpha }}\right) +i\left( \widetilde{T},V_{\alpha }\cap W_{\beta },\overline{V}_{\widetilde{\alpha }}\right) , \\ 0= & {} i\left( \widetilde{T},W_{\beta },\overline{V}_{\widetilde{\alpha } }\right) =i\left( \widetilde{T},W_{\beta }\setminus \overline{V}_{\alpha }, \overline{V}_{\widetilde{\alpha }}\right) +i\left( \widetilde{T},V_{\alpha }\cap W_{\beta },\overline{V}_{\widetilde{\alpha }}\right) . \end{aligned}$$

Subtracting gives

$$\begin{aligned} i\left( \widetilde{T},V_{\alpha }\setminus \overline{W}_{\beta },\overline{V} _{\widetilde{\alpha }}\right) -i\left( \widetilde{T},W_{\beta }\setminus \overline{V}_{\alpha },\overline{V}_{\widetilde{\alpha }}\right) =1. \end{aligned}$$
(3.3)

Hence at least one of the numbers \(i\left( \widetilde{T},V_{\alpha }{\setminus } \overline{W}_{\beta },\overline{V}_{\widetilde{\alpha }}\right) \) and \(i\left( \widetilde{T},W_{\beta }{\setminus } \overline{V}_{\alpha }, \overline{V}_{\widetilde{\alpha }}\right) \) is nonzero. We claim that the latter is zero. Indeed, otherwise there would exist \(v\in W_{\beta }{\setminus } \overline{V}_{\alpha }\) with \(\widetilde{T}\left( v\right) =v,\) that is

$$\begin{aligned} T\left( \frac{\alpha }{\left| v\right| _{\infty }}v\right) =v, \end{aligned}$$

or equivalently \(T\left( \omega \right) =\lambda \omega ,\) where \(\omega = \frac{\alpha }{\left| v\right| _{\infty }}v\) and \(\lambda =\frac{ \left| v\right| _{\infty }}{\alpha }.\) Since \(\left| \omega \right| _{\infty }=\alpha \) and \(\lambda >1,\) we arrive at a contradiction with (3.1). Therefore \(i\left( \widetilde{T},W_{\beta }{\setminus } \overline{V}_{\alpha },\overline{V}_{\widetilde{\alpha }}\right) =0\) and from (3.3) one has \(i\left( \widetilde{T},V_{\alpha }{\setminus } \overline{W}_{\beta },\overline{V}_{\widetilde{\alpha }}\right) =1,\) which implies our conclusion. \(\square \)

We are now ready to state and prove our main existence and localization result.

For any numbers \(0<\beta <\alpha ,\) denote

$$\begin{aligned} m_{\alpha ,\beta }&:&=\min \left\{ f\left( r,s\right) :\ r\in \left[ R_{0},R \right] ,\ s\in \left[ \beta ,\alpha \right] \right\} , \\ M_{\alpha }&:&=\max \left\{ f\left( r,s\right) :\ r\in \left[ R_{0},R\right] ,\ s\in \left[ 0,\alpha \right] \right\} . \end{aligned}$$

Theorem 3.4

If for two positive numbers \(\alpha ,\beta \) satisfying \(\alpha >\beta ,\) the following conditions

\((\textbf{h1})\):

\(M_{\alpha }<\varepsilon \alpha ,\)

\((\textbf{h2})\):

\(m_{\alpha ,\beta }>\varepsilon \beta \)

hold, then problem (1.2) has a positive solution \(\ v\) such that

$$\begin{aligned} \beta<\left| v\right| _{0}\quad \textrm{and}\quad \left| v\right| _{\infty }<\alpha . \end{aligned}$$

Proof

First, we remark that the inequality \(\alpha >\beta \) guarantees the existence of a function \(h\in K_{+}\) with \(\left| h\right| _{\infty }=\alpha \) and \(\left| h\right| _{0}>\beta ,\) as needed in Lemma 3.2. Such a function is the constant \(h=\alpha .\)

Assume that for some \(v\in K_{+}\) with \(\left| v\right| _{\infty }=\alpha \) and some \(\lambda \ge 1,\) one has \(T\left( v\right) =\lambda v.\) Then \(v\le \lambda v=S\left( N_{f}\left( v\right) \right) \le S\left( M_{\alpha }\right) ,\) so \(\ \alpha =\left| v\right| _{\infty }\le \left| S\left( M_{\alpha }\right) \right| _{\infty }=M_{\alpha }/\varepsilon ,\) which contradicts (h1). Hence Lemma 3.1 applies.

Assume that for some \(v\in K_{+}\) with \(\left| v\right| _{\infty }\le \alpha ,\ \left| v\right| _{0}=\beta \) and \(\lambda \in \left[ 0,1\right] \) we have \(\left( 1-\lambda \right) T\left( v\right) +\lambda \alpha =v.\) Clearly,

$$\begin{aligned} T\left( v\right) =S\left( N_{f}\left( v\right) \right) \ge S\left( m_{\alpha ,\beta }\right) =\frac{m_{\alpha ,\beta }}{\varepsilon }, \end{aligned}$$

hence according to (h2),

$$\begin{aligned} \left| T\left( v\right) \right| _{0}\ge \frac{m_{\alpha ,\beta }}{ \varepsilon }>\beta . \end{aligned}$$

Then

$$\begin{aligned} \beta =\left| v\right| _{0}=\left| \left( 1-\lambda \right) T\left( v\right) +\lambda \alpha \right| _{0}>\beta . \end{aligned}$$

Hence Lemma 3.2 also applies. The conclusion now follows from Lemma 3.3. \(\square \)

Remark 3.5

  1. (a)

    If we assume that for each \(r\in \left[ R_{0},R\right] ,\) the function \(f\left( r,\cdot \right) \) is increasing on \(\left[ 0,\alpha \right] ,\) then

    $$\begin{aligned} m_{\alpha ,\beta }=\min \left\{ f\left( r,\beta \right) :\ r\in \left[ R_{0},R\right] \right\} ,\quad M_{\alpha }=\max \left\{ f\left( r,\alpha \right) :\ r\in \left[ R_{0},R\right] \right\} . \end{aligned}$$
  2. (b)

    If \(f\left( r,s\right) =a\left( r\right) g\left( s\right) ,\) where a is continuous and positive on \(\left[ R_{0},R\right] \) and g is increasing on \(\left[ 0,\alpha \right] ,\) then

    $$\begin{aligned} m_{\alpha ,\beta }=m_{a}g\left( \beta \right) ,\quad M_{\alpha }=M_{a}g\left( \alpha \right) , \end{aligned}$$

    where \(\ m_{a}=\min _{\left[ R_{0},R\right] }a\left( r\right) ,\ M_{a}=\max _{ \left[ R_{0},R\right] }a\left( r\right) .\)
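To illustrate Theorem 3.4 through Remark 3.5(b), take (our own example, not from the paper) \(a\left( r\right) =3-r\) on \(\left[ R_{0},R\right] =\left[ 1,2\right] ,\) \(g\left( s\right) =\sqrt{s}\) and \(\varepsilon =1;\) then \(m_{a}=1,\) \(M_{a}=2,\) and the pair \(\alpha =9,\) \(\beta =1/4\) satisfies (h1) and (h2):

```python
import math

# Checking (h1)-(h2) of Theorem 3.4 via Remark 3.5(b) for our own example
# f(r, s) = (3 - r) * sqrt(s) on [R0, R] = [1, 2] with eps = 1.
eps, m_a, M_a = 1.0, 1.0, 2.0        # min/max of a(r) = 3 - r on [1, 2]
g = math.sqrt                        # increasing on [0, alpha]
alpha, beta = 9.0, 0.25
M_alpha = M_a * g(alpha)             # M_alpha = 2 * 3 = 6
m_ab = m_a * g(beta)                 # m_{alpha,beta} = 1 * 0.5 = 0.5
print(M_alpha < eps * alpha, m_ab > eps * beta)   # prints: True True
```

For this f, Theorem 3.4 then guarantees a positive solution v of (1.2) with \(1/4<\left| v\right| _{0}\) and \(\left| v\right| _{\infty }<9.\)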

3.2 Decreasing Solutions

Here we assume the following monotonicity properties of f:

(H\(_{f}\)):

\(f\left( \cdot ,s\right) \) is decreasing in \(\left[ R_{0},R \right] \) for each \(s\in \mathbb {R} _{+}\) and \(f\left( r,\cdot \right) \) is increasing in \( \mathbb {R} _{+}\) for each \(r\in \left[ R_{0},R\right] .\)

Under this condition, if a nonnegative function \(\ v\) is decreasing on \( \left[ R_{0},R\right] ,\) then the function \(\ N_{f}\left( v\right) =f\left( \cdot ,v\left( \cdot \right) \right) \ \) is decreasing too. Thus, if we consider the sub-cone K of \(K_{+}\) defined by

$$\begin{aligned} K:=\left\{ v\in K_{+}:\ v\text { is decreasing on }\left[ R_{0},R\right] \right\} , \end{aligned}$$

then in view of Lemma 2.4, we have \(\ T\left( K\right) \subset K\) and we can apply the reasoning from the proof of Theorem 3.4, working in K instead of \(K_{+}.\) In this way, the existence of a decreasing solution is obtained. Using in addition Theorem 2.6, we obtain the following result.

Theorem 3.6

Assume that conditions (H\(_{\phi }\)) and (H\( _{f}\)) hold and that \(R_{1},\ \varepsilon \) and \(\ \gamma \) are as in Sect. 2.2. If for two numbers \(\ 0<\beta <\alpha \) one has

\((\textbf{h1}')\):

\(f\left( R_{0},\alpha \right) <\varepsilon \alpha ;\)

\((\textbf{h2}')\):

\(f\left( R,\beta \right) >\varepsilon \beta ,\)

then problem (1.2) has a decreasing positive solution v such that

$$\begin{aligned}{} & {} \beta<v\left( R\right) ,\quad v\left( R_{0}\right) <\alpha , \end{aligned}$$
$$\begin{aligned}{} & {} v\left( R\right) \ge \gamma v\left( R_{1}\right) . \end{aligned}$$
(3.4)

Remark 3.7

Inequality (3.4) gives the bound \(1/\gamma ,\) independent of \( \alpha \) and \(\beta ,\) for the ratio \(v\left( R_{1}\right) /v\left( R\right) \) between the maximum and the minimum of v on the interval \(\left[ R_{1},R \right] .\) Thus, if for such a solution v, \(v\left( R_{1}\right) \) is large, say \(\ v\left( R_{1}\right) >k,\) then its minimum \(v\left( R\right) \) is larger than \(\ \gamma k;\) if its minimum \(v\left( R\right) \) is small, say \(\ v\left( R\right) <1/k,\) then \(v\left( R_{1}\right) \) is smaller than \( \ 1/\left( \gamma k\right) .\) As noted above, in the case of the annulus, i.e., for \(R_{0}>0,\) one may take \(R_{1}=R_{0},\) and then \(\ 1/\gamma \) is a bound for the ratio between the maximum and the minimum of v on the whole interval \(\left[ R_{0},R\right] .\)

3.3 Multiple Solutions

We first give a three-solution result.

Theorem 3.8

Under the assumptions of Theorem 3.4, if in addition there exists \(\ \alpha _{0}\in \left( 0,\beta \right) \) such that

$$\begin{aligned} M_{\alpha _{0}}<\varepsilon \alpha _{0}, \end{aligned}$$

then problem (1.2) has at least three nonnegative solutions \(\ v_{1},v_{2},v_{3}\) such that

$$\begin{aligned}{} & {} \beta<\left| v_{1}\right| _{0},\quad \left| v_{1}\right| _{\infty }<\alpha ; \\{} & {} \left| v_{2}\right| _{\infty }<\alpha _{0}; \\{} & {} \left| v_{3}\right| _{0} <\beta ,\quad \left| v_{3}\right| _{\infty }>\alpha _{0}. \end{aligned}$$

Proof

Solution \(v_{1}\) is guaranteed by Theorem 3.4. Next from \(\ i\left( \widetilde{T},V_{\alpha _{0}},\overline{V}_{\widetilde{\alpha }}\right) =1\) we obtain the solution \(\ v_{2}.\) Now, let us remark that \(\ \overline{V} _{\alpha _{0}}\subset W_{\beta }.\ \) Indeed, if \(\ v\in \overline{V}_{\alpha _{0}}\) then \(\ \left| v\right| _{\infty }\le \alpha _{0}<\beta \) and so \(\ \left| v\right| _{0}\le \left| v\right| _{\infty }<\beta .\) Hence \(\ v\in W_{\beta }.\) This inclusion implies

$$\begin{aligned} i\left( \widetilde{T},W_{\beta }\setminus \overline{V}_{\alpha _{0}}, \overline{V}_{\widetilde{\alpha }}\right) =i\left( \widetilde{T},W_{\beta }, \overline{V}_{\widetilde{\alpha }}\right) -i\left( \widetilde{T},V_{\alpha _{0}},\overline{V}_{\widetilde{\alpha }}\right) =0-1=-1, \end{aligned}$$

whence the existence of \(\ v_{3}.\) \(\square \)

Obviously, the solution \(v_{2}\) can be zero. However, this is not the case if \(f\left( \cdot ,0\right) \ne 0.\)

Next we establish the existence of an arbitrary number of solutions, or of a sequence of solutions, by assuming a strong oscillation in s of the nonlinearity \(f\left( r,s\right) .\)

Theorem 3.9

\((1^{0})\) Let \(\ \left( \alpha _{i}\right) _{1\le i\le k},\ \left( \beta _{i}\right) _{1\le i\le k}\) \(\left( k\le +\infty \right) \) be increasing finite or infinite sequences of positive numbers with \(\ \beta _{i}<\alpha _{i}\le \beta _{i+1}\) for all i. If the assumptions of Theorem 3.4 are satisfied for each couple \(\ \left( \alpha _{i},\beta _{i}\right) ,\) then problem (1.2) has k (respectively, when \(k=+\infty ,\) an infinite sequence of) distinct solutions \(\ v_{i}\) with

$$\begin{aligned} \beta _{i}<\left| v_{i}\right| _{0},\quad \left| v_{i}\right| _{\infty }<\alpha _{i}. \end{aligned}$$
(3.5)

\(\left( 2^{0}\right) \) Let \(\ \left( \alpha _{i}\right) _{i\ge 1},\ \left( \beta _{i}\right) _{i\ge 1}\) be decreasing infinite sequences with \(\ \alpha _{i+1}\le \beta _{i}<\alpha _{i}\ \) for all i. If the assumptions of Theorem 3.4 are satisfied for each couple \(\ \left( \alpha _{i},\beta _{i}\right) ,\) then problem (1.2) has an infinite sequence of distinct solutions \(\ v_{i}\) satisfying (3.5).

Proof

Denote

$$\begin{aligned} K_{i}:=\left\{ v\in K_{+}:\ \beta _{i}<\left| v\right| _{0},\ \ \left| v\right| _{\infty }<\alpha _{i}\right\} . \end{aligned}$$

It is sufficient to remark that \(K_{i}\cap K_{i+1}=\emptyset \) for all i. To prove this, let us first assume that the sequences \(\left( \alpha _{i}\right) ,\left( \beta _{i}\right) \) are increasing (case (\(1^{0}\))). Then, since \(\alpha _{i}\le \beta _{i+1},\) for any \(v\in K_{i}\) one has \(\ \left| v\right| _{0}\le \left| v\right| _{\infty }<\alpha _{i}\le \beta _{i+1}.\ \)Hence \(\ v\notin K_{i+1}.\) Similarly, in case (\( 2^{0}\)), if \(v\in K_{i},\) then \(\ \left| v\right| _{\infty }\ge \left| v\right| _{0}>\beta _{i}\ge \alpha _{i+1},\) so \(\ v\notin K_{i+1}.\) \(\square \)

Remark 3.10

Under assumptions (H\(_{\phi }\)) and (H\(_{f}\)), if multiple decreasing solutions are obtained via Theorem 3.6, then for all of them, one has the same bound \(\ 1/\gamma \ \) for the ratio between their maximum and minimum on the interval \(\left[ R_{1},R\right] .\)

3.4 Existence and Multiplicity under Asymptotic Conditions

In the situation where only the existence of solutions is of interest, and not their precise localization, asymptotic conditions on f are sufficient and easier to check than the pointwise conditions.

Assume here again, as in Remark 3.5(b), the following form of f:

$$\begin{aligned} f\left( r,s\right) =a\left( r\right) g\left( s\right) , \end{aligned}$$

where \(\ a\ \) is continuous and positive on \(\left[ R_{0},R\right] \) and \(\ g\ \) is increasing on \( \mathbb {R} _{+}.\)

Thus the existence of two numbers \(\ \alpha ,\) \(\beta \) with \(\ \alpha >\beta \ \) satisfying (h1), (h2) obviously follows from the asymptotic conditions

$$\begin{aligned} \lim \inf _{\tau \rightarrow +\infty }\frac{g\left( \tau \right) }{\tau }< \frac{\varepsilon }{M_{a}}\quad \text {and}\quad \lim \sup _{\tau \rightarrow 0} \frac{g\left( \tau \right) }{\tau }>\frac{\varepsilon }{m_{a}}, \end{aligned}$$

respectively.

Also, two sequences \(\left( \alpha _{i}\right) \) and \(\left( \beta _{i}\right) \) exist as in Theorem 3.9 (1\(^{0}\)) provided that

$$\begin{aligned} \lim \inf _{\tau \rightarrow +\infty }\frac{g\left( \tau \right) }{\tau }< \frac{\varepsilon }{M_{a}}\quad \text {and}\quad \lim \sup _{\tau \rightarrow +\infty }\frac{g\left( \tau \right) }{\tau }>\frac{\varepsilon }{m_{a}}, \end{aligned}$$

and as in Theorem 3.9 (2\(^{0}\)) provided that

$$\begin{aligned} \lim \inf _{\tau \rightarrow 0}\frac{g\left( \tau \right) }{\tau }<\frac{ \varepsilon }{M_{a}},\ \ \lim \sup _{\tau \rightarrow 0}\frac{ g\left( \tau \right) }{\tau }>\frac{\varepsilon }{m_{a}}. \end{aligned}$$
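These asymptotic conditions are easy to probe numerically by sampling \(g\left( \tau \right) /\tau \) for very small and very large \(\tau .\) The sketch below does this for the hypothetical choice \(g\left( s\right) =\sqrt{s}\) with \(\varepsilon =1,\) \(m_{a}=1/2,\) \(M_{a}=1\) (illustrative data, not from the text):

```python
import numpy as np

# Hypothetical data: g(s) = sqrt(s), eps = 1, m_a = 1/2, M_a = 1,
# so the thresholds are eps/M_a = 1 and eps/m_a = 2.
g = np.sqrt
eps, m_a, M_a = 1.0, 0.5, 1.0

ratio = lambda t: g(t) / t       # g(tau)/tau = 1/sqrt(tau) for this g

# As tau -> +infinity the ratio tends to 0 < eps/M_a,
# while as tau -> 0+ it blows up above eps/m_a:
print(ratio(1e8) < eps / M_a)    # samples the liminf condition at infinity
print(ratio(1e-8) > eps / m_a)   # samples the limsup condition at zero
```

For this g the two conditions of the first pair hold, so suitable \(\alpha \) and \(\beta \) exist.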

4 Numerical Solutions

In order to carry out our numerical experiments we have used the MATLAB object-oriented package Chebfun. We refer to Trefethen [17] and Trefethen et al. [18] for details on using this package, although the literature on this topic is much broader.

The numerical experiments performed on a similar problem in [10] encouraged us to use this programming environment rather than others. It proved to be very simple and flexible for writing code, including for imposing the boundary conditions, an otherwise non-trivial matter. The details it provides regarding the convergence of the Newton method are extremely useful.

We present three concrete Neumann problems for which numerical solutions are obtained confirming the theoretical results.

4.1 First Example

We look for a nonzero numerical solution, and aim to confirm the theory, for the Neumann boundary value problem involving the classical Laplacian

$$\begin{aligned} \left\{ \begin{array}{ll} -\left( rv^{\prime }\right) ^{\prime }+rv=r\frac{\sqrt{v}}{r+1}, &{} r\in \left( 0,1\right) \\ v^{\prime }\left( 0\right) =v^{\prime }\left( 1\right) =0. &{} \end{array} \right. \end{aligned}$$
(4.1)

Here, with the notations from the previous sections, \(n=2,\ \varepsilon =1,\ R_{0}=0,\ R=1\) and \(f\left( r,s\right) =\sqrt{s}/\left( r+1\right) .\) Notice the special form of f, namely \(f\left( r,s\right) =a\left( r\right) g\left( s\right) ,\) where \(a\left( r\right) =1/\left( r+1\right) \) is decreasing and \(g\left( s\right) =\sqrt{s}\) is increasing.

The theory is confirmed if a decreasing positive solution \(\ v\ \) and numbers \(\ \alpha ,\) \(\beta >0,\) \(\beta <\alpha \ \) are found such that the following inequalities are satisfied:

$$\begin{aligned}{} & {} m_{a}g\left( \beta \right) >\varepsilon \beta ,\quad M_{a}g\left( \alpha \right)<\varepsilon \alpha , \\{} & {} \beta<v\left( R\right) ,\quad v\left( R_{0}\right) <\alpha , \end{aligned}$$

which, applied to the present example, for which \(\varepsilon =1,\) \( m_{a}=1/2, \) \(M_{a}=1,\) \(g\left( \alpha \right) =\sqrt{\alpha }\) and \( g\left( \beta \right) =\sqrt{\beta },\) read as:

$$\begin{aligned} \beta<0.25,\quad 1<\alpha ,\quad \beta<v\left( 1\right) ,\quad v\left( 0\right) <\alpha . \end{aligned}$$
(4.2)

The numerical solution v is presented in Fig. 1, and the theory is confirmed, for example, with \(\ \alpha =0.4\) and \(\ \beta =0.35.\)
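As an independent cross-check of such a computation (a rough sketch under simple assumptions, not the Chebfun code used for the experiments), problem (4.1) can also be approximated by a cell-centered finite-difference scheme combined with a Picard iteration; the grid avoids the coordinate singularity at \(r=0,\) and the zero-flux Neumann conditions are natural for this discretization:

```python
import numpy as np

def solve_neumann_fd(N=200, tol=1e-10, max_iter=300):
    """Picard iteration for -(r v')' + r v = r*sqrt(v)/(r+1) on (0,1)
    with v'(0) = v'(1) = 0.  Illustrative sketch only."""
    h = 1.0 / N
    r = (np.arange(N) + 0.5) * h          # cell centers; avoids r = 0
    r_face = np.arange(1, N) * h          # interior faces r_{i+1/2}

    # Matrix of v -> -(r v')' + r v; zero flux at r = 0 and r = 1 is built in
    A = np.diag(r)
    for i in range(N - 1):
        w = r_face[i] / h**2              # face conductance r_{i+1/2}/h^2
        A[i, i] += w
        A[i + 1, i + 1] += w
        A[i, i + 1] -= w
        A[i + 1, i] -= w

    v = np.full(N, 0.4)                   # positive initial guess
    for _ in range(max_iter):
        rhs = r * np.sqrt(v) / (r + 1.0)  # freeze the nonlinearity
        v_new = np.linalg.solve(A, rhs)
        if np.max(np.abs(v_new - v)) < tol:
            return r, v_new
        v = v_new
    return r, v
```

On this discretization the computed profile should come out positive and decreasing, with values compatible with the localization discussed above.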

Fig. 1
figure 1

Graph of the numerical solution of problem (4.1). The initial guess for the initialization of the Newton procedure is \(v_{0}:=1\)

From Fig. 2a we observe that the Newton method converges in five steps with at least second-order convergence. Panel b of the same figure shows that Chebfun uses a polynomial of degree 16 whose coefficients decrease linearly (on a semilog scale) down to order \(10^{-14}.\)

Fig. 2
figure 2

a The convergence of the Newton method. b The behavior of the Chebyshev coefficients of the solution to problem (4.1)

The residual in approximating the differential operator was of order \( 10^{-11}\) and the boundary conditions were satisfied exactly.

4.2 Second Example

Here we look for a nonzero numerical solution, and aim to confirm the theory, for the Neumann boundary value problem involving the mean curvature operator in the Minkowski space,

$$\begin{aligned} \left\{ \begin{array}{ll} -\left( r\frac{v^{\prime }}{\sqrt{1-v^{\prime 2}}}\right) ^{\prime }+rv=r \frac{\sqrt{v}}{r+1}, &{} r\in \left( 0,1\right) \\ v^{\prime }\left( 0\right) =v^{\prime }\left( 1\right) =0, &{} \end{array} \right. \end{aligned}$$
(4.3)

or equivalently

$$\begin{aligned} \left\{ \begin{array}{ll} -v^{\prime \prime }-\frac{1}{r}\left( 1-v^{\prime 2}\right) v^{\prime }+\left( 1-v^{\prime 2}\right) ^{\frac{3}{2}}v=\left( 1-v^{\prime 2}\right) ^{\frac{3}{2}}\frac{\sqrt{v}}{r+1}, &{} r\in \left( 0,1\right) \\ v^{\prime }\left( 0\right) =v^{\prime }\left( 1\right) =0. &{} \end{array} \right. \end{aligned}$$

Here again, \(n=2,\ \varepsilon =1,\ R_{0}=0,\ R=1,\ f\left( r,s\right) = \sqrt{s}/\left( r+1\right) ,\) and the theory is confirmed by inequalities (4.2).

The numerical solution \(\ v\ \) is displayed in Fig. 3, and the theory is confirmed, for example, with \(\ \alpha =1.45 \) and \(\ \beta =1.35.\)

Fig. 3
figure 3

Graph of the numerical solution of problem (4.3). The initial guess for the initialization of the Newton procedure is \(v_{0}:=1\)

From Fig. 4a we observe that the Newton method converges in four steps with at least second-order convergence. Panel b of the same figure shows that Chebfun uses a polynomial of degree 16 whose coefficients decrease linearly (on a semilog scale) down to order \(10^{-14}.\)

Fig. 4
figure 4

a The convergence of the Newton method. b The behavior of the Chebyshev coefficients of the solution to problem (4.3)

The residual in approximating the differential operator was of order \( 10^{-11}\) and the boundary conditions were satisfied exactly.

4.3 Third Example

As the theory shows, the Neumann problem can have multiple positive solutions for functions \(f\left( r,s\right) \) which oscillate with respect to s. To make this statement clearer, let us first consider the simplest case of the autonomous equation (1.1), that is, \( f\left( r,s\right) =g\left( s\right) .\) Then it is trivial to see that any constant C satisfying \(\ \varepsilon C=g\left( C\right) \ \) is a solution of the problem

$$\begin{aligned} \left\{ \begin{array}{ll} -\left( r^{n-1}\phi \left( v^{\prime }\right) \right) ^{\prime }+\varepsilon r^{n-1}v=r^{n-1}g(v) &{} \text {in }\left( R_{0},R\right) \\ v^{\prime }\left( R_{0}\right) =v^{\prime }(R)=0. &{} \end{array} \right. \end{aligned}$$

Hence if the graph of \(\ g\ \) intersects the line \(\ y=\varepsilon x\ \) at several points, then the problem has at least as many solutions. Therefore one obtains multiple solutions when g oscillates around the line \(y=\varepsilon x.\) The phenomenon also occurs in the non-autonomous case, as the theory shows. Thus, for \(f\left( r,s\right) =a\left( r\right) g\left( s\right) ,\) multiple solutions are guaranteed if g oscillates above and below the lines \(y=\left( \varepsilon /M_{a}\right) x\) and \(y=\left( \varepsilon /m_{a}\right) x,\) respectively.

As an example of such a function, we can mention

$$\begin{aligned} g\left( s\right) =as+bs\sin \left( c\ln \left( s+1\right) \right) ,\quad s\in \mathbb {R} _{+}, \end{aligned}$$

where \(\ a,b,c>0\) and \(\ a\ge \left( c+1\right) b\) (so that g is increasing). This function has countably many intersections with a line \(\ y=\lambda x,\ \) provided that \(\ a-b\le \lambda \le a+b.\ \) For the numerical simulations, we choose the parameter values \(a=2,\) \( b=c=1,\) and consider the following Neumann problem for the classical Laplacian

$$\begin{aligned} \left\{ \begin{array}{ll} -\left( rv^{\prime }\right) ^{\prime }+rv=\frac{r}{r+1}\left( 2v+v\sin \left( \ln \left( v+1\right) \right) \right) , &{} r\in \left( 0,1\right) \\ v^{\prime }\left( 0\right) =v^{\prime }\left( 1\right) =0. &{} \end{array} \right. \end{aligned}$$
(4.4)
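The stated properties of g can be probed numerically for these parameter values: the monotonicity of g, the oscillation of \(g\left( s\right) /s=2+\sin \left( \ln \left( s+1\right) \right) \) around the level 2 (crossings with the line \(y=2x\)), and, in the autonomous case \(f=g\) with \(\varepsilon =1,\) the explicit constant solutions \(C_{k}=e^{3\pi /2+2k\pi }-1\) of \(\varepsilon C=g\left( C\right) ,\) which follow from \(\sin \left( \ln \left( C+1\right) \right) =-1.\) The script below is an illustrative sketch:

```python
import numpy as np

# g(s) = a*s + b*s*sin(c*ln(s+1)) with the parameter values a = 2, b = c = 1.
a, b, c = 2.0, 1.0, 1.0
g = lambda s: a * s + b * s * np.sin(c * np.log(s + 1.0))

# 1) Monotonicity: g'(s) = a + b sin(c ln(s+1)) + b c s cos(c ln(s+1))/(s+1),
#    and a >= (c+1) b guarantees g' >= 0.
s = np.logspace(-6, 8, 200_000)
x = c * np.log(s + 1.0)
gp = a + b * np.sin(x) + b * c * s * np.cos(x) / (s + 1.0)
print(gp.min() > 0.0)                         # g is increasing on the sample

# 2) Oscillation: g(s)/s crosses the level 2 (the line y = 2x)
#    each time sin(ln(s+1)) changes sign.
crossings = np.count_nonzero(np.diff(np.sign(g(s) / s - 2.0)))
print(crossings)                              # several crossings on this range

# 3) Autonomous case eps*C = g(C) with eps = 1: sin(ln(C+1)) = -1 yields
#    the constant solutions C_k = exp(3*pi/2 + 2*k*pi) - 1.
C = np.exp(1.5 * np.pi + 2.0 * np.pi * np.arange(3)) - 1.0
residual = np.max(np.abs(g(C) - C) / C)
print(residual)                               # tiny, up to rounding
```

Enlarging the sample range produces ever more crossings, which is precisely the oscillation that the multiplicity results exploit.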

Figures 5 and 7 show two positive solutions of this problem. They were obtained using the same procedure as before, with different initial approximations. In general, initial approximations can be suggested by the localization intervals given by the theory.

Fig. 5
figure 5

Graph of a numerical solution of problem (4.4). The initial guess for the initialization of the Newton procedure is \(v_{0}:=\cos \left( \pi r\right) +10^{6}\)

From Fig. 6a we observe that the Newton method converges in six steps, and panel b of the same figure shows that Chebfun uses a polynomial of degree 16 whose coefficients decrease linearly (on a semilog scale) only down to order \( 10^{-8}.\)

Fig. 6
figure 6

a The convergence of the Newton method. b The behavior of the Chebyshev coefficients of a solution to problem (4.4)

The residual in approximating the differential operator was only of order \(10^{-5},\) but the boundary conditions were satisfied exactly.

These last three observations indicate that Chebfun no longer achieves the accuracy obtained for the previous two problems. Another solution to problem (4.4) is displayed in Fig. 7.

Fig. 7
figure 7

Graph of a second numerical solution of problem (4.4). The initial guess for the initialization of the Newton procedure is \( v_{0}:=\cos \left( \pi r\right) +10^{2}\)

To find it, Newton's algorithm starts from a different initial guess. However, comparing these last two solutions, i.e., Figs. 5 and 7, we observe that they have identical shapes.

We can conclude that, using Chebfun, we have succeeded in numerically confirming, with great accuracy, some of our theoretical results regarding the existence, localization and multiplicity of positive radial solutions for Neumann problems in the ball.