1 Introduction

One of the most useful methods for localizing positive solutions of nonlinear boundary value problems and for proving the existence of multiple positive solutions is Krasnoselskii’s cone fixed point theorem [4,5,6]. Several versions of this result are known; we present them briefly below.

Let X be a Banach space, \(K\subset X\) a cone, and \(r,R\) two numbers with \(0<r<R.\) Denote

$$\begin{aligned} K_{r}=\left\{ u\in K:\ \left\| u\right\| \le r\right\} ,\ \ \partial K_{r}=\left\{ u\in K:\ \left\| u\right\| =r\right\} , \end{aligned}$$

and consider the conical shell

$$\begin{aligned} K_{rR}=\left\{ u\in K:\ r\le \left\| u\right\| \le R\right\} . \end{aligned}$$

Let \(N:K_{rR}\rightarrow K\) be a continuous and compact mapping and consider the fixed point equation

$$\begin{aligned} u=N\left( u\right) ,\ \ \ u\in K_{rR}. \end{aligned}$$

The original version of Krasnoselskii’s cone fixed point theorem makes use of the strict order relation < in X,  defined by \(u<v\) if \(v-u\in K\setminus \left\{ 0\right\} :\)

Theorem 1

(Order version) The mapping N has a fixed point in \(K_{rR}\) if it satisfies one of the following conditions:

(a) \(N\left( u\right) \nless \ u\ \) for \(\ u\in \partial K_{r}\ \ \) and \(N\left( u\right) \ngtr u\) for \(\ u\in \partial K_{R}\ \ \) (compression condition);

(b) \(N\left( u\right) \ngtr \ u\ \) for \(\ u\in \partial K_{r}\ \ \)and \(N\left( u\right) \nless u\) for \(\ u\in \partial K_{R}\ \ \)(expansion condition).

Some other versions are the following ones:

Theorem 2

(Norm version) The mapping N has a fixed point in \(K_{rR}\) if it satisfies one of the following conditions:

(a) \(\left\| N\left( u\right) \right\| >\left\| u\right\| \ \) for \(\ u\in \partial K_{r}\ \ \)and \(\left\| N\left( u\right) \right\| <\left\| u\right\| \) for \(\ u\in \partial K_{R}\ \ \) (compression condition);

(b) \(\left\| N\left( u\right) \right\| <\left\| u\right\| \ \) for \(u\in \partial K_{r}\ \ \)and \(\left\| N\left( u\right) \right\| >\left\| u\right\| \) for \(\ u\in \partial K_{R}\ \ \) (expansion condition).

Theorem 3

(Homotopy version) The mapping N has a fixed point in \(K_{rR}\) if it satisfies one of the following conditions:

(a) \(N\left( u\right) \ne \mu u\ \) for \(\ u\in \partial K_{r},\) \(\mu <1,\) \(N\left( u\right) \ne \mu u\) for \(\ u\in \partial K_{R},\) \(\mu >1,\ \ \) and \(\inf _{u\in \partial K_{r}}\left\| N\left( u\right) \right\| >0\ \ \) (compression condition);

(b) \(N\left( u\right) \ne \mu u\ \) for \(\ u\in \partial K_{r},\) \(\mu >1,\) \(N\left( u\right) \ne \mu u\) for \(\ u\in \partial K_{R},\) \(\mu <1,\ \ \) and \(\inf _{u\in \partial K_{R}}\left\| N\left( u\right) \right\| >0\ \ \) (expansion condition).

In many cases, the fixed point equation has a variational structure, in the sense that it is equivalent to the problem of finding the critical points of a certain functional \(F:X\rightarrow \mathbb {R},\) that is, to the equation

$$\begin{aligned} F^{\prime }\left( u\right) =0,\ \ u\in K_{rR}. \end{aligned}$$
(1)

This clearly happens if X is a Hilbert space identified with its dual and

$$\begin{aligned} N\left( u\right) =u-F^{\prime }\left( u\right) , \end{aligned}$$

in which case the critical points of F coincide with the fixed points of N.

A simple example is given by the following two-point boundary value problem for Newton’s second law of motion,

$$\begin{aligned} mu^{\prime \prime }+f\left( t,u\right)= & {} 0,\ \ t\in \left[ 0,T\right] \\ u\left( 0\right)= & {} u\left( T\right) =0. \nonumber \end{aligned}$$
(2)

This can be expressed as a fixed point problem for the integral operator

\(N:C\left[ 0,T\right] \rightarrow C\left[ 0,T\right] ,\)

$$\begin{aligned} N\left( u\right) \left( t\right) =\int _{0}^{T}G\left( t,s\right) f\left( s,u\left( s\right) \right) ds, \end{aligned}$$

where G is the Green’s function of the differential operator \(-mu^{\prime \prime }\) under the conditions \(u\left( 0\right) =u\left( T\right) =0,\) and also as a critical point problem related to the functional \(F:H_{0}^{1}\left( 0,T\right) \rightarrow \mathbb {R},\)

$$\begin{aligned} F\left( u\right) =\int _{0}^{T}\left( \frac{m}{2}u^{\prime }\left( t\right) ^{2}-g\left( t,u\left( t\right) \right) \right) dt\ \ \text {where}\ \ g\left( t,u\right) =\int _{0}^{u}f\left( t,y\right) dy, \end{aligned}$$

for which (1) holds. Physically, \(F\left( u\right) \) is the total energy (kinetic + potential), and the kinetic energy (energy of motion) \(\ (m/2)\int _{0}^{T}u^{\prime }\left( t\right) ^{2}dt\ \) introduces the so-called “energetic” norm in the function space \(H_{0}^{1}\left( 0,T\right) ,\)

$$\begin{aligned} \left\| u\right\| =\left( \int _{0}^{T}u^{\prime }\left( t\right) ^{2}dt\right) ^{1/2}. \end{aligned}$$

Thus, a localization of a solution/state u in terms of the energetic norm automatically gives bounds on the kinetic energy.
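
To make the above fixed point formulation concrete, the following minimal numerical sketch (with \(m=T=1\) and a hypothetical nonlinearity \(f(t,u)=1+u/4,\) chosen only for illustration and not taken from the paper) discretizes the operator N by means of the Green’s function and iterates \(u_{k+1}=N(u_{k})\):

```python
import numpy as np

# A minimal sketch of the fixed point formulation of problem (2): discretize
# N(u)(t) = \int_0^T G(t, s) f(s, u(s)) ds with m = T = 1 and a hypothetical
# nonlinearity f(t, u) = 1 + u/4, then iterate u_{k+1} = N(u_k).

m, T, n = 1.0, 1.0, 201
t = np.linspace(0.0, T, n)
h = t[1] - t[0]
w = np.full(n, h); w[0] = w[-1] = h / 2.0   # trapezoidal quadrature weights

def G(ti, s):
    # Green's function of -m u'' with u(0) = u(T) = 0
    return np.where(s <= ti, s * (T - ti), ti * (T - s)) / (m * T)

def f(s, u):
    return 1.0 + u / 4.0                    # illustrative, positive and increasing in u

def N(u):
    fu = f(t, u)
    return np.array([np.sum(w * G(ti, t) * fu) for ti in t])

u = np.zeros(n)
for _ in range(50):                         # Picard iteration; contractive for this f
    u = N(u)

# residual of m u'' + f(t, u) = 0 at the interior nodes (second order differences)
res = m * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 + f(t[1:-1], u[1:-1])
print("max |u| =", u.max(), "   max residual =", np.abs(res).max())
```

Since this f is Lipschitz in u with a small constant, the iteration is a contraction, and the printed residual indicates how well the computed fixed point satisfies problem (2).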

Compared with the fixed point approach, variational methods have the benefit of using the energy functional, which allows one to characterize solutions as extrema or saddle points. In addition, some specific techniques such as Ekeland’s variational principle and deformation lemmas [17] are available. In this paper we deal only with the direct variational method, which relies exclusively on Ekeland’s variational principle [18].

Lemma 1

(Ekeland’s principle—strong form) Let D be a complete metric space with metric d,  and let \(F:D\rightarrow \mathbb {R} \) be lower semicontinuous and bounded from below. Then for any \(\varepsilon ,\delta >0,\) and any \(w\in D\) with

$$F\left( w\right) \le \inf _{D}F+\varepsilon ,$$

there is an element \(u\in D\) such that

$$F\left( u\right) \le F\left( w\right) ,\ \ \ \ \ d\left( w,u\right) \le \delta $$

and

$$F\left( u\right) \le F\left( v\right) +\frac{\varepsilon }{\delta }d\left( u,v\right) \ \ \ \text {for all }v\in D.$$

As a consequence, one has the following weak form of Ekeland’s variational principle:

Lemma 2

(Ekeland’s principle—weak form) Let D be a complete metric space with metric d,  and let \(F:D\rightarrow \mathbb {R} \) be lower semicontinuous and bounded from below. Then for each \(\varepsilon >0,\) there is \(u\in D\) such that

$$F\left( u\right) \le \inf _{D}F+\varepsilon $$

and

$$F\left( u\right) \le F\left( v\right) +\varepsilon d\left( u,v\right) \ \ \ \text {for all }v\in D.$$

2 Variational Analogue of Krasnoselskii’s Cone Compression Fixed Point Theorem

In what follows, for simplicity, we only consider the case where X is a Hilbert space, with inner product \(\left\langle \cdot ,\cdot \right\rangle \) and norm \(\left\| \cdot \right\| ,\) and we identify X with its dual.

Theorem 4

Let \(F\in C^{1}\left( X\right) \) be bounded from below on \(K_{rR}, \) \(I-F^{\prime }\) be continuous and compact on \(K_{rR},\) and let the positivity condition

$$\begin{aligned} \left( I-F^{\prime }\right) \left( K_{rR}\right) \subset K \end{aligned}$$
(3)

be satisfied. If

$$\begin{aligned} F^{\prime }\left( u\right) +\lambda u\ne & {} 0\,\,\text { for all }\,\,u\in \partial K_{R},\ \ \lambda >0, \\ F^{\prime }\left( u\right) +\lambda u\ne & {} 0\,\,\text { for all }\,\,u\in \partial K_{r},\ \ \lambda <0, \nonumber \end{aligned}$$
(4)

and

$$\begin{aligned} \inf _{u\in \partial K_{r}}\left\| \left( I-F^{\prime }\right) \left( u\right) \right\| >0, \end{aligned}$$
(5)

then there exists \(\ u\in K_{rR}\) such that

$$ F\left( u\right) =\inf _{K_{rR}}F\ \ \ \ \text { and }\,\,\,F^{\prime }\left( u\right) =0. $$

Proof

Step 1: Applying Ekeland’s variational principle—weak form to F on \(K_{rR},\) with \(\varepsilon =1/n,\) gives \(u_{n}\in K_{rR}\) such that

$$\begin{aligned} F\left( u_{n}\right) \le \inf _{K_{rR}}F+\frac{1}{n}, \end{aligned}$$
(6)
$$\begin{aligned} F\left( u_{n}\right) \le F\left( v\right) +\frac{1}{n}\left\| u_{n}-v\right\| \ \ \ \text {for all }v\in K_{rR}. \end{aligned}$$
(7)

Obviously, from (6), \(\left( u_{n}\right) \) is a minimizing sequence for F on \(K_{rR},\) i.e., \(F\left( u_{n}\right) \rightarrow \inf _{K_{rR}}F\) as \(n\rightarrow \infty .\) Next, using (7), we estimate \(F^{\prime }\left( u_{n}\right) .\) To this aim we make suitable choices of \(v\in K_{rR}\) close to \(u_{n}.\) The choice of v in (7) depends on the location of each element \(u_{n}\) within the conical shell. The following cases are possible:

(a) \(r<\left\| u_{n}\right\| <R;\) or \(\left\| u_{n}\right\| =R\) and \(\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle >0;\) or \(\left\| u_{n}\right\| =r\) and \(\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle <0;\)

(b) \(\left\| u_{n}\right\| =R\ \) and \(\ \left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle \le 0;\)

(c) \(\left\| u_{n}\right\| =r\ \) and \(\ \left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle \ge 0.\)

Case (a): If \(u_{n}\) is in case (a) then we may choose v of the form

$$ v=u_{n}-tF^{\prime }\left( u_{n}\right) , $$

with \(t>0\) sufficiently small. Indeed, for \(t\in \left( 0,1\right) ,\) one has

$$ v=\left( 1-t\right) u_{n}+t\left( u_{n}-F^{\prime }\left( u_{n}\right) \right) , $$

which, due to the positivity condition (3), belongs to K. In case that \(r<\left\| u_{n}\right\| <R,\) we also have \(v\in K_{rR}\) for small enough t. If \(\left\| u_{n}\right\| =R\) and \(\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle >0,\) then from

$$\begin{aligned} \left\| v\right\| ^{2}= & {} \left\| u_{n}\right\| ^{2}+t^{2}\left\| F^{\prime }\left( u_{n}\right) \right\| ^{2}-2t\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle \\= & {} t^{2}\left\| F^{\prime }\left( u_{n}\right) \right\| ^{2}-2t\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle +R^{2}, \end{aligned}$$

we derive that \(\left\| v\right\| \le R\) for \(0<t\le 2\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle /\left\| F^{\prime }\left( u_{n}\right) \right\| ^{2}.\) Hence \(v\in K_{rR}\) for every sufficiently small \(t>0.\) The same happens if \(\left\| u_{n}\right\| =r \) and \(\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle <0. \) Replacing v in (7) gives

$$ F\left( u_{n}-tF^{\prime }\left( u_{n}\right) \right) -F\left( u_{n}\right) \ge -\frac{t}{n}\left\| F^{\prime }\left( u_{n}\right) \right\| . $$

From the definition of the Fréchet derivative one has

$$ F\left( u_{n}-tF^{\prime }\left( u_{n}\right) \right) -F\left( u_{n}\right) =\left\langle F^{\prime }\left( u_{n}\right) ,\ -tF^{\prime }\left( u_{n}\right) \right\rangle +o\left( t\right) . $$

Then

$$ \left\langle F^{\prime }\left( u_{n}\right) ,\ -tF^{\prime }\left( u_{n}\right) \right\rangle +o\left( t\right) \ge -\frac{t}{n}\left\| F^{\prime }\left( u_{n}\right) \right\| , $$

and dividing by t and letting \(t\rightarrow 0\) gives

$$ \left\| F^{\prime }\left( u_{n}\right) \right\| ^{2}\le \frac{1}{n}\left\| F^{\prime }\left( u_{n}\right) \right\| , $$

or

$$\begin{aligned} \left\| F^{\prime }\left( u_{n}\right) \right\| \le \frac{1}{n}. \end{aligned}$$
(8)

Case (b): Let \(\varepsilon >0\) and let

$$v=u_{n}-t\left( F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}+\varepsilon u_{n}\right) ,$$

where \(\ t>0\) and

$$ \lambda _{n}=-\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle /R^{2}\ge 0.$$

From \(\ v=\left( 1-t-t\lambda _{n}-t\varepsilon \right) u_{n}+t\left( u_{n}-F^{\prime }\left( u_{n}\right) \right) \ \) we see that \(v\in K\) for small \(t>0,\) while from

$$ \left\langle F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}+\varepsilon u_{n},\ u_{n}\right\rangle =\varepsilon R^{2}>0 $$

and

$$ \left\| v\right\| ^{2}=t^{2}\left\| F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}+\varepsilon u_{n}\right\| ^{2}-2t\varepsilon R^{2}+R^{2}, $$

we have \(\ \left\| v\right\| \le R,\ \) and finally that \(v\in K_{rR}\) for small enough \(t>0.\) Replacing v in (7) and proceeding as above we find

$$ \,\left\langle F^{\prime }\left( u_{n}\right) ,F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}+\varepsilon u_{n}\right\rangle \le \frac{1}{n}\left\| F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}+\varepsilon u_{n}\right\| . $$

Letting \(\varepsilon \rightarrow 0\) yields

$$ \left\langle F^{\prime }\left( u_{n}\right) ,F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}\right\rangle \le \frac{1}{n}\left\| F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}\right\| , $$

and since \(\left\langle F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}, u_{n}\right\rangle =0,\)

$$\begin{aligned} \left\| F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}\right\| \le \frac{1}{n}. \end{aligned}$$
(9)

Case (c) is analogous and leads to the same inequality (9), where now \(\lambda _{n}\le 0.\)

Step 2: Passing if necessary to a subsequence, we may assume without loss of generality that all the terms of the minimizing sequence \(\left( u_{n}\right) \) are either in case (a), or in case (b), or in case (c). Then, in view of (8) and (9), the minimizing sequence is in one of the following situations:

(a) \(F^{\prime }\left( u_{n}\right) \rightarrow 0;\)

(b) \(F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}\rightarrow 0,\) where \(\left\| u_{n}\right\| =R\) and \(\lambda _{n}=-\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle /R^{2}\ge 0\) for all n;

(c) \(F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}\rightarrow 0,\) where \(\left\| u_{n}\right\| =r\) and \(\lambda _{n}=-\left\langle F^{\prime }\left( u_{n}\right) ,u_{n}\right\rangle /r^{2}\le 0\) for all n.

Step 3: Since \(u_{n}\) and \(F^{\prime }\left( u_{n}\right) =u_{n}-N\left( u_{n}\right) \) are bounded sequences, we may assume (passing again, if necessary, to a subsequence) that \(\left( \lambda _{n}\right) \) converges to some \(\lambda ,\) where \(\lambda \ge 0\) in case (b), and \(\lambda \le 0\) in case (c). Also, using the compactness of N,  the above convergences yield a convergent subsequence \(\ u_{n}\rightarrow u.\ \) We show this for the cases (b) and (c). To this aim we denote \(\ v_{n}=F^{\prime }\left( u_{n}\right) +\lambda _{n}u_{n}.\ \) Then \(\ \left( 1+\lambda _{n}\right) u_{n}=v_{n}+N\left( u_{n}\right) \ \) and, since \(\ v_{n}\rightarrow 0\ \) and N is compact, the sequence \(\ v_{n}+N\left( u_{n}\right) \ \) has a convergent subsequence. If \(\ 1+\lambda \ne 0,\ \) this clearly implies that \(\left( u_{n}\right) \) itself has a convergent subsequence. The situation \(\ 1+\lambda =0\ \) is only possible in case (c), but it is excluded by hypothesis (5).

Finally, passing to the limit we obtain one of the following situations:

(a) \(F^{\prime }\left( u\right) =0;\)

(b) \(F^{\prime }\left( u\right) +\lambda u=0,\) where \(\left\| u\right\| =R\) and \(\lambda \ge 0;\)

(c) \(F^{\prime }\left( u\right) +\lambda u=0,\) where \(\left\| u\right\| =r\) and \(\lambda \le 0.\)

Since the cases \(\lambda >0\) in (b) and \(\lambda <0\) in (c) are excluded by the compression boundary conditions (4), in all cases \(F^{\prime }\left( u\right) =0,\) which finishes the proof.   \(\square \)

3 Variational Analogue of Krasnoselskii’s Cone Expansion Fixed Point Theorem

In this section we give a variational analogue of Krasnoselskii’s expansion fixed point theorem. Recall that, for proving the expansion theorem, it suffices to pass from the operator N satisfying the expansion conditions to the operator \(\widetilde{N} :K_{rR}\rightarrow K,\)

$$\begin{aligned} \widetilde{N}\left( u\right) =\frac{1}{\theta \left( u\right) }N\left( \theta \left( u\right) u\right) , \end{aligned}$$

where

$$\begin{aligned} \theta \left( u\right) =\frac{R+r}{\left\| u\right\| }-1. \end{aligned}$$
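
A direct computation, based on this definition of \(\theta ,\) shows how the two boundary spheres are swapped: for every \(u\in K_{rR}\) one has \(\theta \left( u\right) >0\) (since \(\left\| u\right\| \le R<R+r\)) and

$$\begin{aligned} \left\| \theta \left( u\right) u\right\| =\left( \frac{R+r}{\left\| u\right\| }-1\right) \left\| u\right\| =R+r-\left\| u\right\| \in \left[ r,R\right] , \end{aligned}$$

so that \(\left\| \theta \left( u\right) u\right\| =r\) if \(\left\| u\right\| =R\) and \(\left\| \theta \left( u\right) u\right\| =R\) if \(\left\| u\right\| =r.\)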

In particular, \(\theta \left( u\right) u\in K_{rR}\ \) for every \(\ u\in K_{rR},\ \) and it is easy to check that for \(\widetilde{N}\ \) the compression conditions hold. We shall use the same idea in order to prove the variational analogue of the expansion fixed point result. More precisely, we shall pass from the functional F,  assumed to be bounded from above on \(K_{rR},\) to the functional

$$\begin{aligned} H\left( u\right) =-F\left( \theta \left( u\right) u\right) ,\ \ \ u\in X\setminus \left\{ 0\right\} , \end{aligned}$$

bounded from below on \(K_{rR}.\) We shall need the following result about the Fréchet derivative of the new functional.

Lemma 3

One has \(\ H\in C^{1}\left( X\setminus \left\{ 0\right\} \right) \ \) and

$$\begin{aligned} H^{\prime }\left( u\right) =F^{\prime }\left( \theta \left( u\right) u\right) +A\left( u\right) , \end{aligned}$$

where

$$\begin{aligned} A\left( u\right) =\frac{R+r}{\left\| u\right\| }\left[ \frac{\left\langle F^{\prime }\left( \theta \left( u\right) u\right) ,u\right\rangle }{\left\| u\right\| ^{2}}u-F^{\prime }\left( \theta \left( u\right) u\right) \right] . \end{aligned}$$

Proof

We first compute the derivative of the mapping \(\ u/\left\| u\right\| \ \) in direction \(\ v.\ \) By definition,

$$\begin{aligned} \left\langle \left( \frac{u}{\left\| u\right\| }\right) ^{\prime },\ v\right\rangle= & {} \lim _{t\rightarrow 0+}\frac{1}{t}\left( \frac{u+tv}{\left\| u+tv\right\| }-\frac{u}{\left\| u\right\| }\right) \\= & {} \frac{1}{\left\| u\right\| ^{2}}\lim _{t\rightarrow 0+}\frac{1}{t} \left( \left\| u\right\| \left( u+tv\right) -\left\| u+tv\right\| u\right) \\= & {} \frac{1}{\left\| u\right\| ^{2}}\lim _{t\rightarrow 0+}\frac{1}{t} \left( \left\| u\right\| -\left\| u+tv\right\| \right) u +\frac{v}{\left\| u\right\| }. \end{aligned}$$

Furthermore

$$\begin{aligned} \lim _{t\rightarrow 0+}\frac{1}{t} \left( \left\| u\right\| -\left\| u+tv\right\| \right) u =\lim _{t\rightarrow 0+}\frac{1}{t}\frac{\left\| u\right\| ^{2}-\left\| u+tv\right\| ^{2}}{\left\| u\right\| +\left\| u+tv\right\| }u=-\frac{\left\langle u,v\right\rangle }{\left\| u\right\| }u. \end{aligned}$$

Hence

$$\begin{aligned} \left\langle \left( \frac{u}{\left\| u\right\| }\right) ^{\prime } ,\ v\right\rangle =-\frac{\left\langle u,v\right\rangle }{\left\| u\right\| ^{3}}u+\frac{v}{\left\| u\right\| }. \end{aligned}$$

Next,

$$\begin{aligned} \left\langle \left( \theta \left( u\right) u\right) ^{\prime },\ v\right\rangle= & {} \left( R+r\right) \left[ -\frac{\left\langle u,v\right\rangle }{\left\| u\right\| ^{3}}u+\frac{v}{\left\| u\right\| }\right] -v \\= & {} \theta \left( u\right) v-\frac{R+r}{\left\| u\right\| ^{3}} \left\langle u,v\right\rangle u. \end{aligned}$$

Finally, using the formula for computing the derivative of the composition of two mappings, we obtain

$$\begin{aligned} \left\langle H^{\prime }\left( u\right) ,\ v\right\rangle= & {} -\left\langle F^{\prime }\left( \theta \left( u\right) u\right) ,\ \left\langle \left( \theta \left( u\right) u\right) ^{\prime },\ v\right\rangle \right\rangle \\= & {} -\left\langle F^{\prime }\left( \theta \left( u\right) u\right) ,\ \theta \left( u\right) v-\frac{R+r}{\left\| u\right\| ^{3}}\left\langle u,v\right\rangle u\right\rangle . \end{aligned}$$

Therefore

$$\begin{aligned} H^{\prime }\left( u\right)= & {} -\theta \left( u\right) F^{\prime }\left( \theta \left( u\right) u\right) +\frac{R+r}{\left\| u\right\| ^{3}} \left\langle F^{\prime }\left( \theta \left( u\right) u\right) ,\ u\right\rangle u \\= & {} F^{\prime }\left( \theta \left( u\right) u\right) +\frac{R+r}{\left\| u\right\| }\left[ \frac{\left\langle F^{\prime }\left( \theta \left( u\right) u\right) ,\ u\right\rangle }{\left\| u\right\| ^{2}} u-F^{\prime }\left( \theta \left( u\right) u\right) \right] , \end{aligned}$$

as claimed.   \(\square \)

Theorem 5

Let \(F\in C^{1}\left( X\right) \) be bounded from above on \(K_{rR},\) \(I-F^{\prime }\) be compact on \(K_{rR},\) the positivity condition

$$\begin{aligned} \left( I-F^{\prime }\right) \left( K_{rR}\right) \subset K \end{aligned}$$

be satisfied, and assume in addition that for every two sequences \(u_{n}\) and \(v_{n}\) in \(K_{rR},\)

$$\begin{aligned} u_{n}-v_{n}\rightarrow 0\ \ \text {implies (via subsequences) } N\left( u_{n}\right) -N\left( v_{n}\right) \rightarrow 0. \end{aligned}$$
(10)

If

$$\begin{aligned} F^{\prime }\left( u\right) +\lambda u\ne & {} 0\,\,\,\text { for all }\,\,\,u\in \partial K_{R},\ \ \lambda <0, \\ F^{\prime }\left( u\right) +\lambda u\ne & {} 0\,\,\,\text { for all }\,\,\,u\in \partial K_{r},\ \ \lambda >0, \nonumber \end{aligned}$$
(11)

and

$$ \inf _{u\in \partial K_{R}}\left\| \left( I-F^{\prime }\right) \left( u\right) \right\| >0, $$

then there exists \(\ u\in K_{rR}\) such that

$$ F\left( u\right) =\sup _{K_{rR}}F\ \ \ \ \text { and }\,\,\,F^{\prime }\left( u\right) =0. $$

Proof

Notice a useful property of \(\ A,\ \) namely

$$\left\langle A\left( u\right) ,u\right\rangle =0\ \ \text {for every}\ \ u\in X\setminus \left\{ 0\right\} .$$
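
Indeed, this follows directly from the expression of A given in Lemma 3:

$$\begin{aligned} \left\langle A\left( u\right) ,u\right\rangle =\frac{R+r}{\left\| u\right\| }\left[ \frac{\left\langle F^{\prime }\left( \theta \left( u\right) u\right) ,u\right\rangle }{\left\| u\right\| ^{2}}\left\langle u,u\right\rangle -\left\langle F^{\prime }\left( \theta \left( u\right) u\right) ,u\right\rangle \right] =0. \end{aligned}$$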

Let us fix a minimizing sequence \(\left( v_{n}\right) \) of H in \(K_{rR},\) with

$$ H\left( v_{n}\right) \le \inf _{K_{rR}}H+\frac{1}{n^{2}}. $$

For each \(n\ge 1,\) consider the functional

$$ G_{n}\left( u\right) =H\left( u\right) -\left\langle A\left( v_{n}\right) ,u\right\rangle ,\ \ \ \ u\in X\setminus \left\{ 0\right\} . $$

One has

$$ G_{n}^{\prime }\left( u\right) =H^{\prime }\left( u\right) -A\left( v_{n}\right) =F^{\prime }\left( \theta \left( u\right) u\right) +A\left( u\right) -A\left( v_{n}\right) ,\ \ \ u\in X\setminus \left\{ 0\right\} . $$

Now apply Ekeland’s variational principle—strong form to \(G_{n}\) on \(K_{rR},\) with the given point \(v_{n}\) and for \(\varepsilon =n^{-2},\ \delta =n^{-1}.\) Hence, there exists \(u_{n}\in K_{rR}\) such that

$$ \left\| u_{n}-v_{n}\right\| \le \delta =\frac{1}{n}, $$
$$ G_{n}\left( u_{n}\right) \le G_{n}\left( v_{n}\right) =H\left( v_{n}\right) \le \inf _{K_{rR}}H+\frac{1}{n^{2}}, $$
$$\begin{aligned} G_{n}\left( u_{n}\right)\le & {} G_{n}\left( v\right) +\frac{\varepsilon }{\delta }\left\| u_{n}-v\right\| \ \\= & {} G_{n}\left( v\right) +\frac{1}{n}\left\| u_{n}-v\right\| \ \ \text {for all }v\in K_{rR}. \nonumber \end{aligned}$$
(12)

First we show that, passing if necessary to a subsequence, we may assume that

$$\begin{aligned} A\left( u_{n}\right) -A\left( v_{n}\right) \rightarrow 0\ \ \ \text {as } n\rightarrow \infty . \end{aligned}$$
(13)

Indeed, since \(u_{n}\) and \(v_{n}\) are bounded and N is compact on \(K_{rR},\) passing to a subsequence we have that

$$ \left\| u_{n}\right\| \rightarrow l_{1},\ \ \left\| v_{n}\right\| \rightarrow l_{2},\ \ N\left( \theta \left( u_{n}\right) u_{n}\right) \rightarrow w_{1},\ \ N\left( \theta \left( v_{n}\right) v_{n}\right) \rightarrow w_{2}. $$

From \(\left| \left\| u_{n}\right\| -\left\| v_{n}\right\| \right| \le \left\| u_{n}-v_{n}\right\| \le 1/n,\) we find that \( l_{1}=l_{2}=:l.\) Then \(\ \theta \left( u_{n}\right) u_{n}-\theta \left( v_{n}\right) v_{n} \rightarrow 0,\ \) and from (10), we deduce that \(w_{1}=w_{2}=:w.\) It remains to prove that \(\ \alpha _{n}\rightarrow 0,\ \) where

$$\begin{aligned} \alpha _{n} :&=\frac{\left\langle F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle }{\left\| u_{n}\right\| ^{2}}u_{n}-F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) \\&-\frac{\left\langle F^{\prime }\left( \theta \left( v_{n}\right) v_{n}\right) ,v_{n}\right\rangle }{\left\| v_{n}\right\| ^{2}} v_{n}+F^{\prime }\left( \theta \left( v_{n}\right) v_{n}\right) . \end{aligned}$$

One has

$$ \alpha _{n}=N\left( \theta \left( u_{n}\right) u_{n}\right) -N\left( \theta \left( v_{n}\right) v_{n}\right) -\frac{\left\langle N\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle }{\left\| u_{n}\right\| ^{2}}u_{n}+\frac{\left\langle N\left( \theta \left( v_{n}\right) v_{n}\right) ,v_{n}\right\rangle }{\left\| v_{n}\right\| ^{2}}v_{n} $$

and since \(N\left( \theta \left( u_{n}\right) u_{n}\right) -N\left( \theta \left( v_{n}\right) v_{n}\right) \rightarrow 0,\) it remains to show that

$$ \beta _{n} :=\frac{\left\langle N\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle }{\left\| u_{n}\right\| ^{2}}u_{n}- \frac{\left\langle N\left( \theta \left( v_{n}\right) v_{n}\right) ,v_{n}\right\rangle }{\left\| v_{n}\right\| ^{2}}v_{n}\rightarrow 0. $$

This, via

$$\begin{aligned} \beta _{n}= & {} \frac{\left\langle N\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle }{\left\| u_{n}\right\| ^{2}}\left( u_{n}-v_{n}\right) \\&+\left[ \frac{\left\langle N\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle }{\left\| u_{n}\right\| ^{2}}-\frac{\left\langle N\left( \theta \left( v_{n}\right) v_{n}\right) ,v_{n}\right\rangle }{\left\| v_{n}\right\| ^{2}}\right] v_{n} \end{aligned}$$

reduces to

$$ \frac{\left\langle N\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle }{\left\| u_{n}\right\| ^{2}}-\frac{\left\langle N\left( \theta \left( v_{n}\right) v_{n}\right) ,v_{n}\right\rangle }{\left\| v_{n}\right\| ^{2}}\rightarrow 0. $$

But this follows immediately if we pass once more to a subsequence so as to assume the weak convergences \(u_{n}\rightharpoonup u\) and \(v_{n}\rightharpoonup u.\) Thus (13) is proved.

Next, as in the proof of Theorem 4, we discuss several cases depending on the location of each element of the minimizing sequence \(u_{n}.\)

(a) In each one of the cases: \(\ r<\left\| u_{n}\right\| <R;\ \) \(\left\| u_{n}\right\| =R\ \) and \(\ \left\langle F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle \) \( >0;\ \) \(\left\| u_{n}\right\| =r\ \) and \( \left\langle F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle <0,\ \) we may apply (12) to the element

$$ v=u_{n}-tF^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) $$

which belongs to \(K_{rR}\) for all sufficiently small \(t>0.\) Replacing this into (12), dividing by t and then letting t go to zero yields

$$ \left\langle G_{n}^{\prime }\left( u_{n}\right) ,F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) \right\rangle \le \frac{1}{n}\left\| F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) \right\| , $$

or equivalently

$$ \left\| F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) \right\| ^{2}+\left\langle A\left( u_{n}\right) -A\left( v_{n}\right) ,F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) \right\rangle \le \frac{1}{n}\left\| F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) \right\| . $$

This implies

$$\begin{aligned} \left\| F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) \right\| \le \frac{1}{n}+\left\| A\left( u_{n}\right) -A\left( v_{n}\right) \right\| . \end{aligned}$$
(14)

(b) Assume that \(\left\| u_{n}\right\| =R\) and \(\left\langle F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle \le 0.\) Then we choose

$$ v=u_{n}-t\left( F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) +\lambda _{n}u_{n}+\varepsilon u_{n}\right) , $$

where \(\varepsilon >0\) and \(\lambda _{n}=-\left\langle F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle /R^{2}\ge 0.\) We deduce

$$\begin{aligned} \left\| F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) +\lambda _{n}u_{n}\right\| \le \frac{1}{n}+\left\| A\left( u_{n}\right) -A\left( v_{n}\right) \right\| . \end{aligned}$$
(15)

(c) Similarly, if \(\left\| u_{n}\right\| =r\) and \(\left\langle F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) ,u_{n}\right\rangle \ge 0,\) we derive inequality (15), where this time \(\lambda _{n}\le 0.\)

If there is a subsequence of \(u_{n}\) whose elements are all in case (a), then from (14) and (13) we have

$$ F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) \rightarrow 0. $$

If there is a subsequence whose elements are all in case (b), or all in case (c), then

$$ F^{\prime }\left( \theta \left( u_{n}\right) u_{n}\right) +\lambda _{n}u_{n}\rightarrow 0. $$

As in the proof of Theorem 4 we may assume that \(u_{n}\rightarrow u\) for some \(u\in K_{rR}.\) Then \(v_{n}\rightarrow u,\) and passing to the limit we obtain: \(\ F^{\prime }\left( \theta \left( u\right) u\right) =0;\) or \(F^{\prime }\left( \theta \left( u\right) u\right) +\lambda u=0\) with \(\left\| u\right\| =R\) and \(\lambda \ge 0;\) or \(F^{\prime }\left( \theta \left( u\right) u\right) +\lambda u=0\) with \(\left\| u\right\| =r\) and \(\lambda \le 0.\ \) Denote \(\ \overline{u}=\theta \left( u\right) u.\ \) Then

$$F^{\prime }\left( \overline{u}\right) =0;\ \ \text {or}$$
$$F^{\prime }\left( \overline{u}\right) +\mu \overline{u}=0\ \ \text { with}\ \ \left\| \overline{u}\right\| =r\ \ \text { and}\ \ \mu =\lambda R/r\ge 0;\ \ \text {or}$$
$$F^{\prime }\left( \overline{u}\right) +\mu \overline{u}=0\ \ \text { with}\ \ \left\| \overline{u} \right\| =R\ \ \text { and}\ \ \mu =\lambda r/R\le 0.$$

Since the case \(\mu \ne 0\) is excluded by the expansion conditions (11), in all cases \(\ F^{\prime }\left( \overline{u}\right) =0.\)

Finally, from

$$ H\left( u_{n}\right) =G_{n}\left( u_{n}\right) +\left\langle A\left( v_{n}\right) ,u_{n}\right\rangle \le \inf _{K_{rR}}H+\frac{1}{n^{2}} +\left\langle A\left( v_{n}\right) ,u_{n}\right\rangle , $$

and \(\left\langle A\left( v_{n}\right) ,u_{n}\right\rangle \rightarrow \left\langle A\left( u\right) ,u\right\rangle =0,\) we have \(H\left( u\right) =\inf _{K_{rR}}H,\) that is, \(F\left( \overline{u}\right) =\sup _{K_{rR}}F.\)   \(\square \)

Remark 1

Most of the assumptions of Theorems 4 and 5 may be expressed in terms of the operator \(\ N,\ \) as in Theorem 3. There is, however, an additional hypothesis in Theorems 4 and 5, namely the representation of the operator N in the form \(\ N=I-F^{\prime },\ \) with some functional F bounded from below or from above on \(\ K_{rR}.\ \) As a result, we have a stronger conclusion: the existence of a fixed point of \(\ N\ \) which is an extremum point of the functional \(\ F.\)

4 A General Scheme of Application to Semilinear Equations

Krasnoselskii’s cone fixed point theorem has been applied to numerous classes of boundary value problems. Also, in recent years, applications of critical point results in conical shells have been given [1,2,3, 7, 11,12,13]. Thus, a natural question is which essential properties allow the applicability of this technique. We now present a general scheme for applying the variational analogue of Krasnoselskii’s theorem, which gives an answer to this question. To this aim, we use Mikhlin’s variational theory for positive symmetric linear operators [8].

Consider a semilinear equation of the form

$$\begin{aligned} Lu=J^{\prime }\left( u\right) , \end{aligned}$$
(16)

where \(L:D\left( L\right) \subset H\rightarrow H\) is a positive symmetric densely defined linear operator in the Hilbert space H with inner product \(\left( \cdot ,\cdot \right) \) and norm \(\left| \cdot \right| \), while the nonlinear term is the Fréchet derivative of a \(C^{1}\) functional \(J:H\rightarrow \mathbb {R} .\)

Recall that the operator L is said to be symmetric if \(\ \left( Lu,v\right) =\left( u,Lv\right) \ \) for every \(u,v\in D(L),\) and positive if there exists a constant \(\ c>0\ \) such that

$$\begin{aligned} \left( Lu,u\right) \ge c^{2}\left| u\right| ^{2}\ \ \ \text {for every}\ \ u\in D\left( L\right) . \end{aligned}$$

For such a linear operator, we endow the dense linear subspace D(L) of H with the bilinear functional

$$\begin{aligned} \left\langle u,v\right\rangle :=\left( Lu,v\right) \ \ \ \left( u,v\in D\left( L\right) \right) . \end{aligned}$$

The completion of \(\ \left( D\left( L\right) ,\ \left\langle \cdot ,\cdot \right\rangle \right) \ \) is denoted by X and is called the energetic space of L. By construction, \(D(L)\subset X\subset H\) with dense inclusions. We use the same symbol \(\left\langle \cdot ,\cdot \right\rangle \) to denote the extended inner product on X. The corresponding norm \(\ \left\| u\right\| =\sqrt{\left\langle u,u\right\rangle }\ \) is called the energetic norm associated with L. If \(\ u\in D(L),\ \) then, in view of the positivity of L,  one has the Poincaré inequality

$$\left| u\right| \le c^{-1}\left\| u\right\| \ \ \text {for every}\ \ u\in D(L).$$

By density, the above inequality extends to the whole of X. Let \(X^{\prime }\) be the dual space of X. If we identify the dual \(H^{\prime }\) with H via Riesz’s representation theorem, then from \(X\subset H\ \) we have \(\ H\subset X^{\prime }.\ \) We attach to the operator \(\ L\ \) the following problem

$$\begin{aligned} Lu=f,\ \ u\in X, \end{aligned}$$
(17)

where \(\ f\in X^{\prime }.\ \) By a weak solution of the problem we mean an element \(u\in X\) with

$$\begin{aligned} \left\langle u,v\right\rangle =\left( f,v\right) \ \ \ \text {for every}\ \ v\in X, \end{aligned}$$

where the notation \(\ \left( f,v\right) \ \) stands for the value of the functional \(\ f\ \) at the element \(\ v.\ \) If \(\ f\in H,\ \) then \(\ \left( f,v\right) \ \) is the inner product in H of f and v. Notice that if the weak solution u belongs to D(L),  then it is a classical solution of the problem. Using Riesz’s representation theorem and the Poincaré inequality, one obtains that for every \(f\in X^{\prime }\) there exists a unique weak solution \(u\in X\) of problem (17). Thus we may speak of the inverse of L as the operator \(\ L^{-1}:X^{\prime }\rightarrow X\ \) that assigns to each \(f\in X^{\prime }\) the unique weak solution \(u\in X\) of the corresponding Eq. (17). Hence,

$$\begin{aligned} \left\langle L^{-1}f,v\right\rangle =\left( f,v\right) \ \ \ \text {for all }v\in X. \end{aligned}$$

Note that the operator \(L^{-1}\) is an isometry between \(X^{\prime }\) and X.
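
For completeness, the isometry can be read off from the defining identity of \(L^{-1}\): for every \(f\in X^{\prime },\)

$$\begin{aligned} \left\| L^{-1}f\right\| ^{2}=\left\langle L^{-1}f,L^{-1}f\right\rangle =\left( f,L^{-1}f\right) \le \left\| f\right\| _{X^{\prime }}\left\| L^{-1}f\right\| , \end{aligned}$$

so \(\left\| L^{-1}f\right\| \le \left\| f\right\| _{X^{\prime }},\) while conversely \(\left( f,v\right) =\left\langle L^{-1}f,v\right\rangle \le \left\| L^{-1}f\right\| \left\| v\right\| \) for all \(v\in X\) gives \(\left\| f\right\| _{X^{\prime }}\le \left\| L^{-1}f\right\| .\)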

We look for weak solutions of (16), namely, for \(u\in X\) such that

$$\begin{aligned} \left\langle u,v\right\rangle =\left( J^{\prime }\left( u\right) ,v\right) \ \ \ \text {for all } v\in X, \end{aligned}$$

that is, for solutions of the fixed point equation

$$\begin{aligned} u=L^{-1}J^{\prime }\left( u\right) ,\ \ \ u\in X. \end{aligned}$$

We associate to the Eq. (16) the energy functional

$$\begin{aligned} F:X\rightarrow \mathbb {R} ,\ \ \ F\left( u\right) =\frac{1}{2}\left\| u\right\| ^{2}-J\left( u\right) . \end{aligned}$$

One can check that \(\ F\in C^{1}\left( X\right) ,\)

$$\begin{aligned} F^{\prime }\left( u\right) =Lu-J^{\prime }\left( u\right) \ \ \ \left( u\in X\right) , \end{aligned}$$

and, if we identify \(X^{\prime }\) with X via \(L^{-1},\) one has

$$ F^{\prime }=I-N,\ \ \ \text { where}\ \ \ N=L^{-1}J^{\prime }.$$
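
For the reader’s convenience, here is a short verification of this identification: for all \(u,v\in X,\)

$$\begin{aligned} \left( F^{\prime }\left( u\right) ,v\right) =\left\langle u,v\right\rangle -\left( J^{\prime }\left( u\right) ,v\right) =\left\langle u,v\right\rangle -\left\langle L^{-1}J^{\prime }\left( u\right) ,v\right\rangle =\left\langle u-L^{-1}J^{\prime }\left( u\right) ,v\right\rangle , \end{aligned}$$

so that, once \(X^{\prime }\) is identified with X via \(L^{-1},\) the derivative \(F^{\prime }\left( u\right) \) corresponds to \(u-L^{-1}J^{\prime }\left( u\right) =\left( I-N\right) \left( u\right) .\)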

Our first hypothesis is a compactness condition:

(H1) The embedding \(\ X\subset H\ \) is compact.

Next we consider a cone \(\ K_{0}\ \) in \(\ H,\ \) the partial order relation \(\ \le \ \) in \(\ H\ \) induced by \(\ K_{0},\ \) and we assume that

$$\ \left( u,v\right) \ge 0\ \ \text { for every} \ u,v\in K_{0}, $$

or equivalently, that the norm of \(\ H\ \) is monotone. In addition assume that

(H2) \(J\ \) is bounded on every bounded subset of \(\ H;\) \(\ J^{\prime }\ \) is positive and increasing on \(\ H\ \) with respect to the order, i.e.,

$$\begin{aligned} 0\le u\le v\ \ \ \text {implies}\ \ \ 0\le J^{\prime }\left( u\right) \le J^{\prime }\left( v\right) . \end{aligned}$$

Also consider a cone \(\ K_{1}\ \) in \(\ H\ \) with

$$\begin{aligned} L^{-1}\left( K_{0}\right) \subset K_{1} \subset K_{0} \end{aligned}$$
(18)

and assume that the following conditions are satisfied:

(H3) \(J^{\prime }\left( K_{1}\cap X\right) \subset K_{1},\ \) and there exists \(\varphi \in K_{0}\setminus \left\{ 0\right\} \) such that for every \(u\in K_{1},\)

$$\begin{aligned} \left\| L^{-1}u\right\| \varphi \le L^{-1}u\ \ \ \text {(Harnack type inequality);} \end{aligned}$$

(H4) There is an element \(\ \psi \in K_{0}\setminus \left\{ 0\right\} \ \) such that for every \(\ u\in K_{1}\cap X,\)

$$\begin{aligned} u\le \left\| u\right\| \psi . \end{aligned}$$

Now, define a subcone of \(\ K_{1}\cap X,\)

$$\begin{aligned} K:=\left\{ u\in K_{1}\cap X:\ \left\| u\right\| \varphi \le u\right\} . \end{aligned}$$

Note that the cone \(\ K\ \) does not reduce to the origin. To show this, let \(\ \sigma \ \) be any element of \(\ K_{1}\setminus \left\{ 0\right\} \ \). For example, such an element is \(\ L^{-1}(\varphi ).\ \) Then \(L^{-1}\sigma \ne 0,\ \) and using (18) and (H3), \(\ L^{-1}\sigma \in K_{1}\cap X,\ \) and \(\ \left\| L^{-1}\sigma \right\| \varphi \le L^{-1}\sigma ,\ \) that is \(\ L^{-1}\sigma \in K\setminus \left\{ 0\right\} .\)

Theorem 6

Assume that (H1)–(H4) hold. If for two positive numbers \(\ \alpha \ \) and \(\ \beta \ \) with \(\ \alpha \ne \beta ,\ \) the following conditions are satisfied

$$\begin{aligned} \left( J^{\prime }\left( \alpha \psi \right) ,\psi \right)\le & {} \alpha ,\ \ \\ \beta\le & {} \left( J^{\prime }\left( \beta \varphi \right) ,\varphi \right) , \nonumber \end{aligned}$$
(19)

then Eq. (16) has a weak solution \(\ u\in K_{rR},\ \) which is an extremum point of F in \(K_{rR},\) where \(\ r=\min \left\{ \alpha ,\beta \right\} \ \) and \(\ R=\max \left\{ \alpha ,\beta \right\} .\)

Proof

We shall apply Theorem 4 in case that \(\beta <\alpha ,\) and Theorem 5 if \(\alpha <\beta .\)

First note that N maps K into K. Indeed, if \(u\in K,\) then \(u\in K_{1}\cap X\) and by (H3), \(J^{\prime }\left( u\right) \in K_{1}\) and \(\left\| L^{-1}J^{\prime }\left( u\right) \right\| \varphi \le L^{-1}J^{\prime }\left( u\right) ,\) which means that \(N\left( u\right) =L^{-1}J^{\prime }\left( u\right) \in K.\)

Clearly the operator N is continuous. Furthermore, \(K_{rR}\) being bounded in X,  it is relatively compact in H by (H1), and the continuity of \(J^{\prime }\) from H to H guarantees that \(J^{\prime }\left( K_{rR}\right) \) is relatively compact in H. Then \(N\left( K_{rR}\right) =L^{-1}J^{\prime }\left( K_{rR}\right) \) is relatively compact in X. Hence N is continuous and compact from \(K_{rR}\) to K. Also, the additional compactness condition (10) is satisfied as a consequence of (H1). Indeed, if

$$u_{n},v_{n}\in K_{rR}\ \ \text {and}\ \ u_{n}-v_{n}\rightarrow 0\ \text { in}\ X,$$

then in view of (H1), passing if necessary to subsequences, we may assume that \(u_{n}\) and \(v_{n}\) converge in H to some element u. Then \(J^{\prime }\left( u_{n}\right) ,J^{\prime }\left( v_{n}\right) \) converge in H to \(J^{\prime }\left( u\right) ,\) and next \(N\left( u_{n}\right) ,N\left( v_{n}\right) \) converge in X to \(N\left( u\right) .\) Consequently,

$$N\left( u_{n}\right) -N\left( v_{n}\right) \rightarrow 0\ \text { in}\ X,$$

which proves (10).

Since the functional J is bounded on the bounded set \(K_{R},\) the functional F is bounded from above and from below on \(K_{rR}.\)

Next we check the boundary conditions. Let \(u\in \partial K_{\alpha },\ \lambda >0,\) and assume that \(F^{\prime }\left( u\right) +\lambda u=0.\) Then \(N\left( u\right) =\left( 1+\lambda \right) u.\ \) From (H4), \(\ 0\le u\le \alpha \psi ,\ \) and the monotonicity of \(J^{\prime }\ \) yields \(\ 0\le J^{\prime }\left( u\right) \le J^{\prime }\left( \alpha \psi \right) .\ \) Then

$$\begin{aligned} \alpha ^{2}< & {} \left( 1+\lambda \right) \alpha ^{2}=\left( 1+\lambda \right) \left\| u\right\| ^{2}=\left\langle N\left( u\right) ,u\right\rangle \\= & {} \left\langle L^{-1}J^{\prime }\left( u\right) ,u\right\rangle =\left( J^{\prime }\left( u\right) ,u\right) \le \left( J^{\prime }\left( \alpha \psi \right) ,\alpha \psi \right) \\= & {} \alpha \left( J^{\prime }\left( \alpha \psi \right) ,\psi \right) . \end{aligned}$$

Hence \(\ \alpha <\left( J^{\prime }\left( \alpha \psi \right) ,\psi \right) ,\ \) which is in contradiction with (19). Next, assume that \(u\in \partial K_{\beta },\ \ \lambda <0,\) and \(F^{\prime }\left( u\right) +\lambda u=0,\ \) that is \(\ N\left( u\right) =\left( 1+\lambda \right) u.\ \) Since both u and \(N\left( u\right) \) are in \(K_{0},\) this equality is possible only if \(\ 1+\lambda \ge 0.\ \) Hence \(\ 0\le 1+\lambda <1.\ \) Consequently,

$$ \beta ^{2}>\left( 1+\lambda \right) \beta ^{2}=\left( J^{\prime }\left( u\right) ,u\right) , $$

and since \(u\ge \left\| u\right\| \varphi =\beta \varphi \) gives \(J^{\prime }\left( u\right) \ge J^{\prime }\left( \beta \varphi \right) \ge 0,\) while the inner product of H is nonnegative on \(K_{0},\) we obtain

$$ \left( J^{\prime }\left( u\right) ,u\right) \ge \left( J^{\prime }\left( \beta \varphi \right) ,\beta \varphi \right) . $$

Hence we derive \(\ \beta >\left( J^{\prime }\left( \beta \varphi \right) ,\varphi \right) ,\ \) contrary to our hypothesis.

It remains to show that \(\ \inf _{u\in \partial K_{\beta }}\left\| N\left( u\right) \right\| >0.\ \) Assume the contrary. Then there is a sequence \(\ u_{n}\in \partial K_{\beta }\ \) with \(\ N\left( u_{n}\right) \rightarrow 0\ \) in \(\ X\ \) and also in \(\ H.\ \) From \(\ u_{n}\ge \beta \varphi \ge 0,\ \) we obtain that \(\ N\left( u_{n}\right) \ge N\left( \beta \varphi \right) \ge 0.\ \) Passing to the limit as \(\ n\rightarrow \infty \ \) yields \(\ N\left( \beta \varphi \right) =0,\ \) whence \(\ J^{\prime }\left( \beta \varphi \right) =0,\ \) which makes the second inequality in (19) impossible.   \(\square \)

Remark 2

The inclusion \(\ L^{-1}\left( K_{0}\right) \subset K_{0}\ \) can be seen as a weak maximum principle, while, by the use of a second cone \(\ K_{1},\ \) the Harnack inequality is not required on the whole cone \(\ K_{0},\ \) but only on its subcone \(\ K_{1}.\ \) This is useful in applications, as shown by Example 2 below.

Example 1

In the simple case of problem (2), with \(\ m=1\) and \(T=1,\) we have \(\ H=L^{2}\left( 0,1\right) ,\ \) \(X=H_{0}^{1}\left( 0,1\right) \ \) with inner product \(\ \left\langle u,v\right\rangle =\int _{0}^{1}u^{\prime }v^{\prime },\ \) \(K_{0}=K_{1}\ \) is the cone of positive functions in \(\ L^{2}\left( 0,1\right) ,\)

$$\psi =1\ \ \text { and }\ \ \varphi =\eta \chi _{\left[ a,b\right] },$$

where \(\ \eta =\min \left\{ a,1-b\right\} ,\ \) \(0<a<b<1\ \) and \(\ \chi _{\left[ a,b\right] }\ \) is the characteristic function of the interval \(\left[ a,b\right] .\ \) Here \(\ J^{\prime }\left( u\right) \left( t\right) =f\left( t,u\left( t\right) \right) ,\ \) where \(\ f\ge 0\ \) on \(\left[ 0,1\right] \times \mathbb {R} _{+}\ \) and \(\ f\ \) is increasing in the second variable on \(\mathbb {R}_{+}.\ \) Then, condition (19) reduces to

$$\begin{aligned} \int _{0}^{1}f\left( t,\alpha \right) dt\le & {} \alpha , \\ \beta\le & {} \eta \int _{a}^{b}f\left( t,\eta \beta \right) dt. \end{aligned}$$
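
As an illustration of how such reduced inequalities can be verified in practice, here is a minimal numerical sketch for a hypothetical nonlinearity \(f(t,u)=(1+t)u^{2}/2+1/8\) and the interval \(\left[ a,b\right] =\left[ 1/4,3/4\right] \) (data chosen only for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical check of the reduced form of condition (19) in Example 1,
# for f(t, u) = (1 + t) * u**2 / 2 + 1/8 and [a, b] = [1/4, 3/4],
# so eta = min{a, 1 - b} = 1/4.

a, b = 0.25, 0.75
eta = min(a, 1.0 - b)

def f(t, u):
    # nonnegative on [0,1] x R_+ and increasing in the second variable
    return (1.0 + t) * u**2 / 2.0 + 0.125

def trap(y, x):
    # simple trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def check(alpha, beta, n=10_000):
    t1 = np.linspace(0.0, 1.0, n)
    t2 = np.linspace(a, b, n)
    cond1 = trap(f(t1, alpha), t1) <= alpha             # int_0^1 f(t, alpha) dt <= alpha
    cond2 = beta <= eta * trap(f(t2, eta * beta), t2)   # beta <= eta int_a^b f(t, eta*beta) dt
    return bool(cond1), bool(cond2)

print(check(alpha=0.5, beta=0.01))   # (True, True)
```

For this choice the pair \(\alpha =1/2,\ \beta =0.01\) satisfies both inequalities, so Theorem 6 yields a solution localized in the conical shell with \(r=\beta \) and \(R=\alpha .\)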

For more general examples we may consider semilinear boundary value problems with a linear part of the form

$$\begin{aligned} Lu=\sum \limits _{k=0}^{m}\left( -1\right) ^{k}\frac{d^{k}}{dt^{k}}\left[ p_{k}\left( t\right) \frac{d^{k}u}{dt^{k}}\right] +q\left( t\right) u. \end{aligned}$$

Such an example is considered in the paper [3]:

Example 2

Consider the boundary value problem

$$ \left\{ \begin{array}{ll} u^{(4)}(t)=f(t,u(t)), &{} 0<t<1 \\ u(0)=u^{\prime }(0)=u^{\prime \prime }(1)=u^{\prime \prime \prime }(1)=0. &{} \end{array} \right. $$

Here \(H=L^{2}(0,1),\ \ Lu=u^{(4)},\) \(X=\left\{ u\in H^{2}(0,1):\ u(0)=u^{\prime }(0)=0\right\} ,\)

$$\left\langle u,v\right\rangle =\int _{0}^{1}u^{\prime \prime }v^{\prime \prime }dt, \ \ J(u)=\int _{0}^{1}\int _{0}^{u(t)}f(t,s)dsdt,\ \ J^{\prime }(u)=f(\cdot ,u),$$
$$K_{0}=\left\{ u\in L^{2}(0,1):\ u\ge 0\right\} ,\ \ \ K_{1}=\left\{ u\in K_{0}:\ u\ \text {nondecreasing}\right\} ,$$
$$\psi =\frac{2}{3}t^{\frac{3}{2}},\ \ \varphi =\frac{\sqrt{2}}{6}\left( 1-t\right) t^{3}\ \ \ \ (\text {see}\ [3]).\ $$

Assuming that \(\ f\ \) is nonnegative and nondecreasing in each of its variables on \(\ \left[ 0,1\right] \times \mathbb {R}_{+},\ \) inequalities (19) reduce to

$$\begin{aligned} \int _{0}^{1}\psi \left( t\right) f\left( t,\psi \left( t\right) \alpha \right) dt\le & {} \alpha ,\ \ \\ \beta\le & {} \int _{0}^{1}\varphi \left( t\right) f\left( t,\varphi \left( t\right) \beta \right) dt. \end{aligned}$$
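
A similar hedged check can be run for Example 2, using the functions \(\psi \) and \(\varphi \) given above (from [3]) together with an illustrative nonlinearity \(f(t,u)=t+u/2\) that is not taken from the paper:

```python
import numpy as np

# Hypothetical check of the reduced inequalities (19) for Example 2, with
# psi(t) = (2/3) t^{3/2} and phi(t) = (sqrt(2)/6)(1 - t) t^3 from [3],
# and the illustrative nonlinearity f(t, u) = t + u/2.

t = np.linspace(0.0, 1.0, 20_001)
h = t[1] - t[0]

def trap(y):
    return np.sum((y[1:] + y[:-1]) * h) / 2.0      # trapezoidal rule on [0, 1]

psi = (2.0 / 3.0) * t**1.5
phi = (np.sqrt(2.0) / 6.0) * (1.0 - t) * t**3

def f(s, u):
    return s + u / 2.0        # nonnegative, nondecreasing in both variables

alpha, beta = 0.25, 0.005
cond1 = trap(psi * f(t, psi * alpha)) <= alpha     # first inequality
cond2 = beta <= trap(phi * f(t, phi * beta))       # second inequality
print(bool(cond1), bool(cond2))   # True True: Theorem 6 applies with r = beta, R = alpha
```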

The application of the general scheme to concrete classes of boundary value problems mainly depends on the possibility of obtaining a Harnack type inequality in terms of the energetic norm, as required by the abstract condition (H3). In many cases, including elliptic boundary value problems, Harnack type inequalities are only known with respect to a norm different from the energetic one. This was the reason for considering, in [9], conical shells defined by two norms. Even more generally, for the definition of the conical shells one may consider functionals which are no longer norms, as in [14]. Of course, in such situations, the conditions required on the shell boundary have to be adapted accordingly.

Finally, we mention that analogous results in conical shells, of mountain pass type, can be found in [9, 10]. For some extensions to Banach spaces and related topics we refer the reader to the papers [7, 15, 16].