1 Introduction

Recently, many papers have been devoted to the study of boundary value problems (BVPs for short) associated with nonlinear ODEs involving the so-called \(\Phi \)-Laplacian operator (see, e.g., [3, 4, 5, 9]). Namely, ODEs of the type:

$$\begin{aligned} \big (\Phi (x')\big )' = f(t,x,x'), \end{aligned}$$

where f is a Carathéodory function and \(\Phi : {\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing homeomorphism such that \(\Phi (0)=0\).

The class of \(\Phi \)-Laplacian operators includes as a special case the classical r-Laplacian \(\Phi (y):=y|y|^{r-2}\), with \(r>1\). Such operators arise in several models, e.g., in non-Newtonian fluid theory, diffusion of flows in porous media, nonlinear elasticity, and the theory of capillary surfaces. Other models (for example, reaction-diffusion equations with non-constant diffusivity and porous media equations) lead one to consider mixed differential operators, that is, differential equations of the type:

$$\begin{aligned} \big (a(x)\,\Phi (x')\big )' = f(t,x,x'), \end{aligned}$$
(1.1)

where a is a continuous positive function (see, e.g., [8]). Furthermore, several papers have been devoted to the case of singular or non-surjective operators (see [1, 6, 10]). Usually, these existence results stem from a combination of fixed point techniques with the upper and lower solution method. In this context, an important tool to get a priori bounds for the derivatives of the solutions is a Nagumo-type growth condition on the function f. Let us observe that, when the nonlinear term a is present in the differential operator, some assumptions are required on the differential operator \(\Phi \), which in general is assumed to be homogeneous, or to have at most linear growth at infinity.
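
The structural properties required of \(\Phi \) (strict monotonicity, \(\Phi (0)=0\), invertibility) can be checked concretely on the model r-Laplacian \(\Phi (y)=y|y|^{r-2}=\mathrm {sign}(y)\,|y|^{r-1}\), whose inverse is \(\mathrm {sign}(z)\,|z|^{1/(r-1)}\). A minimal numerical sketch (illustrative only, not part of the original text):

```python
import math

# The model r-Laplacian Phi_r(y) = y*|y|**(r-2) = sign(y)*|y|**(r-1);
# its inverse is Phi_r^{-1}(z) = sign(z)*|z|**(1/(r-1)).
def phi(y, r):
    return math.copysign(abs(y) ** (r - 1), y) if y != 0 else 0.0

def phi_inv(z, r):
    return math.copysign(abs(z) ** (1.0 / (r - 1)), z) if z != 0 else 0.0

r = 3.0
ys = [-2.0, -0.5, 0.0, 0.5, 2.0]
vals = [phi(y, r) for y in ys]
assert phi(0.0, r) == 0.0                                        # Phi(0) = 0
assert all(a < b for a, b in zip(vals, vals[1:]))                # strictly increasing
assert all(abs(phi_inv(phi(y, r), r) - y) < 1e-12 for y in ys)   # homeomorphism
```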

More recently, in collaboration with Cristina Marcelli, we considered two different generalizations of Eq. (1.1). In the paper [15], we investigated the case in which the function a may also depend on t. More precisely, we obtained existence results for general boundary value problems associated with the equation:

$$\begin{aligned} \big (a(t,x(t))\,\Phi (x'(t))\big )'= f(t,x(t),x'(t)), \quad \hbox { a.e. on}\ I:= [0,T], \end{aligned}$$

where a is continuous and positive, and assuming a weak form of Wintner–Nagumo growth condition. Namely:

$$\begin{aligned} \big |f(t,x,y)\big | \le \psi \Big (a(t,x)\,|\Phi (y)|\Big )\cdot \Big (\ell (t) + \nu (t)\,|y|^{\frac{s-1}{s}}\Big ), \end{aligned}$$
(1.2)

where \(\nu \in L^s(I)\) (for some \(s > 1\)), \(\ell \in L^1(I)\), and \(\psi :(0,\infty )\rightarrow (0,\infty )\) is a measurable function, such that \(1/\psi \in L^1_{\mathrm {loc}}(0,\infty )\) and:

$$\begin{aligned} \int _1^{\infty } \frac{\mathrm {d}t}{\psi (t)} = \infty . \end{aligned}$$
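
For instance, the choice \(\psi (s)=s\) is admissible (an illustrative assumption, not fixed by the paper): \(1/\psi \) is locally integrable on \((0,\infty )\) and the integral above diverges logarithmically. A quick numerical sketch:

```python
import math

# Illustrative choice (an assumption, not fixed by the paper): psi(s) = s.
# Then 1/psi is locally integrable on (0, infinity), and the truncated
# integrals int_1^N ds/psi(s) = log(N) grow without bound, so the
# divergence condition is satisfied.
def psi(s):
    return s

def integral_1_to_N(N, steps=100000):
    # trapezoid approximation of int_1^N ds / psi(s)
    h = (N - 1) / steps
    total = 0.0
    for i in range(steps):
        a = 1 + i * h
        total += 0.5 * h * (1 / psi(a) + 1 / psi(a + h))
    return total

for N in (10, 100, 1000):
    assert abs(integral_1_to_N(N) - math.log(N)) < 1e-3
```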

This assumption is weaker than other Nagumo-type conditions previously considered, and allows us to consider a very general operator \(\Phi \), which is only required to be strictly increasing, not necessarily homogeneous, nor of polynomial growth. Let us also observe that the same equation:

$$\begin{aligned} \big (a(t,x)\,\Phi (x')\big )' = f(t,x,x') \end{aligned}$$

was studied in [13, 14] to obtain heteroclinic solutions on the real line.

On the other hand, in [7], possibly singular equations are considered, including a non-autonomous differential operator which has an explicit dependence on t inside \(\Phi \). Namely:

$$\begin{aligned} \Big (\Phi \big (k(t)\,x'(t)\big )\Big )'=f(t,x(t),x'(t)), \hbox { a.e. on}\ I, \end{aligned}$$
(1.3)

where the function k is allowed to vanish on a set of null measure, so that Eq. (1.3) can become singular. In [7], we assumed \(1/k \in L^p(I)\) and we looked for solutions in the space \(W^{1,p}(I)\), rather than in \(C^{1}(I,{\mathbb {R}})\). To the best of our knowledge, very few papers have been devoted to this type of equations, and only for a restricted class of nonlinearities f (see [11, 12]).

In this paper, we tackle a further generalization of equation (1.3), allowing also a dependence on x inside \(\Phi \). More precisely, we consider the BVP:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), &{} \hbox { a.e. on}\ I, \\ x(0) = \nu _1,\,\,x(T) = \nu _2, \end{array}\right. } \end{aligned}$$
(1.4)

where \(\nu _1, \nu _2 \in {\mathbb {R}}\), \(\Phi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing homeomorphism, f is a Carathéodory function, and \(a:I\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is a continuous non-negative function satisfying the following estimate from below:

$$\begin{aligned} a(t,x)\ge h(t) \ \hbox { for every } t\in I \hbox { and every } x\in {\mathbb {R}}, \end{aligned}$$
(1.5)

where \(h\in C(I,{\mathbb {R}})\) is non-negative and \(1/h \in L^p(I)\) for some \(p> 1\). Notice that, unlike the other papers quoted above, here we do not require the function a to be positive, and thus, the equation in (1.4) may be singular. Consequently, as in [7], we look for solutions in \(W^{1,1}(I)\).

For example, estimate (1.5) holds when \(a(t,x)\) has the simpler structure of a product or of a sum, as in the following special cases:

\((\star )\):

if \(a(t,x)=\lambda (t)\cdot b(x),\) where \(\lambda \) is continuous, non-negative, such that \(1/\lambda \in L^p(I)\), and b is continuous, such that \(\inf _{\mathbb {R}}b > 0\);

\((\star )\):

if \(a(t,x)=h(t)+b(x),\) where h is continuous, non-negative, such that \(1/h\in L^p(I)\), and b is continuous and non-negative.
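
The point of assumption (1.5) is that h may vanish while 1/h remains p-integrable. A numerical sketch with the hypothetical choices \(I=[0,1]\), \(h(t)=\sqrt{t}\) (which vanishes at \(t=0\)), and \(p=3/2\), for which \(1/h\in L^p(I)\) precisely because \(p<2\):

```python
import math

# Hypothetical data: I = [0,1], h(t) = sqrt(t), so 1/h(t) = t**(-1/2).
# The truncated integrals int_eps^1 t**(-p/2) dt stay bounded as eps -> 0
# precisely when p < 2, i.e. 1/h belongs to L^p(I) for p < 2.
def truncated_integral(p, eps):
    # closed form of int_eps^1 t**(-p/2) dt
    a = 1.0 - p / 2.0
    if a == 0:
        return -math.log(eps)
    return (1.0 - eps ** a) / a

# p = 3/2 < 2: the truncated integrals increase to the finite limit 1/(1 - p/2) = 4
vals = [truncated_integral(1.5, 10.0 ** (-k)) for k in range(1, 17)]
assert all(v < 4.0 for v in vals) and abs(vals[-1] - 4.0) < 1e-3
# p = 2: the truncated integrals diverge like log(1/eps)
vals2 = [truncated_integral(2.0, 10.0 ** (-k)) for k in range(1, 17)]
assert vals2[-1] > 10 * vals2[0]
```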

Our main result, Theorem 3.5 below, yields the existence of a solution of the Dirichlet problem (1.4) under a weak Wintner–Nagumo condition, similar to the one in (1.2). Theorem 3.5 naturally extends the main result in [7], which corresponds to the case when \(a(t,x)=k(t)\) does not depend on x. The proof combines the method of lower/upper solutions with a fixed point technique applied to an auxiliary functional Dirichlet problem (see Sect. 2). In Sect. 3, we also provide some illustrative examples in which our main result applies.

Finally, in Sect. 4, we consider different BVPs, such as the periodic problem, the Neumann-type problem, and the Sturm–Liouville-type problem, and we derive existence results by classical techniques.

2 The abstract setting

Let \(T > 0\) be fixed and let \(I := [0,T]\). Moreover, let \(\nu _1,\nu _2\in {\mathbb {R}}\). Throughout this section, we shall be concerned with BVPs of the type:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (A_x(t)\,x'(t)\big )\Big )' = F_x(t), &{} \hbox { a.e. on}\ I, \\ x(0) = \nu _1,\,\,x(T) = \nu _2, \end{array}\right. } \end{aligned}$$
(2.1)

where \(\Phi , A\) and F satisfy the following structural assumptions:

  1. (H1)

    \(\Phi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing homeomorphism;

  2. (H2)

    there exists \(p > 1\), such that:

    $$\begin{aligned} A:W^{1,p}(I)\subseteq C(I,{\mathbb {R}})\rightarrow C(I,{\mathbb {R}}) \end{aligned}$$

    is continuous when \(W^{1,p}(I)\) is endowed with the usual maximum norm of \(C(I,{\mathbb {R}})\); moreover, there exist \(h_1,h_2\in C(I,{\mathbb {R}})\), such that:

\((\mathrm {H2})_1\):

\(h_1,h_2\ge 0\) on I and:

$$\begin{aligned} 1/h_1,1/h_2\in L^p(I); \end{aligned}$$
\((\mathrm {H2})_2\):

\(h_1(t)\le A_x(t)\le h_2(t)\) for every \(x\in W^{1,p}(I)\) and every \(t\in I\);

(H3):

\(F:W^{1,p}(I)\rightarrow L^1(I)\) is continuous (with respect to the usual norms) and there exists a non-negative function \(\psi \in L^1(I)\), such that:

$$\begin{aligned} |F_x(t)| \le \psi (t) \quad \hbox { for every } x\in W^{1,p}(I) \hbox { and a.e. } t\in I. \end{aligned}$$
(2.2)

Here, \(p > 1\) is the same as in assumption (H2).

Remark 2.1

In this remark, we highlight some consequences of assumption (H2) which shall be repeatedly used in the sequel.

  1. (i)

    For every \(x\in W^{1,p}(I)\), we have that \(A_x\ge h_1 \ge 0\) and:

    $$\begin{aligned} \int _0^T\frac{1}{h_2(t)}\,\mathrm {d}t \le \int _0^T\frac{1}{A_x(t)}\,\mathrm {d}t \le \int _0^T\frac{1}{h_1(t)}\,\mathrm {d}t. \end{aligned}$$

    Moreover, since \(0\le 1/A_x\le 1/h_1\) on I and \(1/h_1\in L^p(I)\), we also have \(1/A_x\in L^p(I)\).

  2. (ii)

    Since \(W^{1,p}(I)\) is continuously embedded in \(C(I,{\mathbb {R}})\), it follows that A is also continuous with respect to the \(W^{1,p}\)-norm.

In the sequel, we shall indicate by \({\mathcal {F}}\) the integral operator associated with F, that is, the operator \({\mathcal {F}}: W^{1,p}(I)\rightarrow C(I,{\mathbb {R}})\) defined by:

$$\begin{aligned} {\mathcal {F}}_x(t) := \int _0^t F_x(s)\,\mathrm {d}s. \end{aligned}$$

Remark 2.2

We observe, for future reference, that \({\mathcal {F}}\) is continuous from \(W^{1,p}(I)\) to \(C(I,{\mathbb {R}})\): this follows from the continuity of F and from the estimate (holding true for every \(x,y\in W^{1,p}(I)\)):

$$\begin{aligned} \sup _{t\in I}|{\mathcal {F}}_x(t)-{\mathcal {F}}_y(t)| \le \Vert F_x-F_y\Vert _{L^1}. \end{aligned}$$

Furthermore, assumption (2.2) gives:

$$\begin{aligned} \sup _{t\in I}|{\mathcal {F}}_x(t)|\le \Vert \psi \Vert _{L^1}, \quad \hbox { for every}\ x\in W^{1,p}(I). \end{aligned}$$
(2.3)

Definition 2.3

We say that a continuous function \(x\in C(I,{\mathbb {R}})\) is a solution of the boundary value problem (2.1) if it satisfies the following properties:

  1. (1)

    \(x\in W^{1,p}(I)\) and \(t\mapsto \Phi \big (A_x(t)\,x'(t)\big )\in W^{1,1}(I)\);

  2. (2)

    \(\Big (\Phi \big (A_x(t)\,x'(t)\big )\Big )' = F_x(t)\) for a.e. \(t\in I\);

  3. (3)

    \(x(0) = \nu _1\) and \(x(T) = \nu _2\).

Remark 2.4

We point out that, if \(x\in W^{1,p}(I)\) is a solution of (2.1), by condition (1) in Definition 2.3 (and the fact that \(\Phi \) is a homeomorphism, see assumption (H1)), there exists a unique \({\mathcal {A}}_x\in C(I,{\mathbb {R}})\), such that:

$$\begin{aligned} {\mathcal {A}}_x(t) = A_x(t)\,x'(t) \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$

We shall use this fact in Sect. 3.

The main result of this section is the following existence theorem.

Theorem 2.5

Under the structural assumptions (H1), (H2), and (H3), the boundary value problem (2.1) admits at least one solution \(x\in W^{1,p}(I)\).

The proof of Theorem 2.5 requires some preliminary facts.

Lemma 2.6

For every \(x\in W^{1,p}(I)\), there exists a unique \(\xi _x\in {\mathbb {R}}\), such that:

$$\begin{aligned} \int _0^T\frac{1}{A_x(t)}\,\Phi ^{-1}\big (\xi _x + {\mathcal {F}}_x(t)\big )\,\mathrm {d}t = \nu _2-\nu _1. \end{aligned}$$
(2.4)

Furthermore, there exists a universal constant \({\mathbf {c}}_0 > 0\), such that:

$$\begin{aligned} |\xi _x| \le {\mathbf {c}}_0 \quad \hbox { for every}\ x\in W^{1,p}(I). \end{aligned}$$
(2.5)

Proof

Let \(x\in W^{1,p}(I)\) be fixed and let:

$$\begin{aligned} f_x:{\mathbb {R}}\longrightarrow {\mathbb {R}}, \qquad f_x(\xi ) := \int _0^T\frac{1}{A_x(t)}\,\Phi ^{-1}\big (\xi + {\mathcal {F}}_x(t)\big )\,\mathrm {d}t . \end{aligned}$$

Since \({\mathcal {F}}_x\) is continuous on I (see Remark 2.2) and since, by assumption (H1), \(\Phi ^{-1}\) is continuous on the whole of \({\mathbb {R}}\), an application of Lebesgue’s Dominated Convergence Theorem shows that \(f_x\in C({\mathbb {R}},{\mathbb {R}})\) (see also Remark 2.1); moreover, since \(\Phi ^{-1}\) is increasing, the same is true of \(f_x\) and, by (2.3), we have:

$$\begin{aligned} \begin{aligned}&\Phi ^{-1}(\xi -\Vert \psi \Vert _{L^1})\cdot \bigg (\int _0^T\frac{1}{A_x(t)}\,\mathrm {d}t\bigg ) \le f_x(\xi ) \\&\quad \le \Phi ^{-1}(\xi +\Vert \psi \Vert _{L^1})\cdot \bigg (\int _0^T\frac{1}{A_x(t)}\,\mathrm {d}t\bigg ). \end{aligned} \end{aligned}$$

From this, we deduce that \(f_x(\xi )\rightarrow \pm \infty \) as \(\xi \rightarrow \pm \infty \); thus, by Bolzano’s Theorem (and the monotonicity of \(f_x\)), there exists a unique \(\xi _x\in {\mathbb {R}}\), such that:

$$\begin{aligned} f_x(\xi _x) = \int _0^T\frac{1}{A_x(t)}\,\Phi ^{-1}\big (\xi _x + {\mathcal {F}}_x(t)\big )\,\mathrm {d}t = \nu _2-\nu _1. \end{aligned}$$

We now turn to prove estimate (2.5). To this end, we observe that, by (2.4) and the Mean Value Theorem, there exists \(t^* = t^*_x\in I\), such that:

$$\begin{aligned} \Phi ^{-1}(\xi _x + {\mathcal {F}}_x(t^*))\cdot \bigg (\int _0^T\frac{1}{A_x(t)}\,\mathrm {d}t\bigg ) = \nu _2-\nu _1; \end{aligned}$$

as a consequence, we obtain:

$$\begin{aligned} \xi _x + {\mathcal {F}}_x(t^*) = \Phi \bigg ((\nu _2-\nu _1)\cdot \bigg (\int _0^T\frac{1}{A_x(t)}\,\mathrm {d}t\bigg )^{-1}\bigg ). \end{aligned}$$

Now, by crucially exploiting Remark 2.1, we see that:

$$\begin{aligned} \bigg |(\nu _2-\nu _1)\cdot \bigg (\int _0^T\frac{1}{A_x(t)}\,\mathrm {d}t\bigg )^{-1}\bigg | \le |\nu _2-\nu _1|\cdot \bigg (\int _0^T\frac{1}{h_2(t)}\,\mathrm {d}t\bigg )^{-1} =: \rho , \end{aligned}$$

for every \(x\in W^{1,p}(I)\); setting \(M := \sup _{[-\rho ,\rho ]}|\Phi |\), we get (see also (2.3)):

$$\begin{aligned} \begin{aligned} |\xi _x|&\le |\xi _x+{\mathcal {F}}_x(t^*)|+|{\mathcal {F}}_x(t^*)| \\&\le \bigg |\Phi \bigg ((\nu _2-\nu _1)\cdot \bigg (\int _0^T\frac{1}{A_x(t)}\,\mathrm {d}t\bigg )^{-1}\bigg )\bigg | + \sup _{t\in I}|{\mathcal {F}}_x(t)| \\&\le M + \Vert \psi \Vert _{L^1} =: {\mathbf {c}}_0. \end{aligned} \end{aligned}$$

Since \({\mathbf {c}}_0 > 0\) does not depend on x, this gives the desired (2.5). \(\square \)
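
The argument in the proof of Lemma 2.6 is in fact constructive: since \(f_x\) is continuous and increasing, \(\xi _x\) can be approximated by bisection. The following sketch, with the hypothetical toy data \(\Phi (y)=y^3\), \(A_x\equiv 1\), \(F_x(t)=\cos t\) (so that \({\mathcal {F}}_x(t)=\sin t\)), \(T=1\), and \(\nu _2-\nu _1=1/2\), illustrates this; it is not part of the original argument:

```python
import math

# Toy instance (hypothetical data, not taken from the paper):
# Phi(y) = y**3, A_x(t) == 1, calF_x(t) = sin(t), T = 1, nu2 - nu1 = 0.5.
def phi_inv(z):
    # inverse of Phi(y) = y**3
    return math.copysign(abs(z) ** (1.0 / 3.0), z)

steps = 5000
sins = [math.sin(i / steps) for i in range(steps + 1)]

def f_x(xi):
    # trapezoid approximation of f_x(xi) = int_0^1 Phi^{-1}(xi + calF_x(t)) dt
    h = 1.0 / steps
    vals = [phi_inv(xi + s) for s in sins]
    return sum(0.5 * h * (vals[i] + vals[i + 1]) for i in range(steps))

target = 0.5                       # nu2 - nu1
lo, hi = -10.0, 10.0               # f_x(lo) < target < f_x(hi)
for _ in range(60):                # bisection: f_x is continuous and increasing
    mid = 0.5 * (lo + hi)
    if f_x(mid) < target:
        lo = mid
    else:
        hi = mid
xi_star = 0.5 * (lo + hi)
assert abs(f_x(xi_star) - target) < 1e-4
```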

We now consider the operator \({\mathcal {P}}:W^{1,p}(I)\rightarrow W^{1,p}(I)\) defined by:

$$\begin{aligned} {\mathcal {P}}_{x}(t) := \nu _1 + \int _0^t \frac{1}{A_x(s)}\,\Phi ^{-1}\big (\xi _x+{\mathcal {F}}_x(s)\big )\,\mathrm {d}s, \end{aligned}$$
(2.6)

where \(\xi _x\) is as in Lemma 2.6. We note that \({\mathcal {P}}\) is well defined, in the sense that \({\mathcal {P}}_{x}\in W^{1,p}(I)\) for every \(x\in W^{1,p}(I)\): indeed, assumption \((\mathrm {H2})_2\) and (2.3) give:

$$\begin{aligned} \bigg |\frac{1}{A_x(s)}\,\Phi ^{-1}\big (\xi _x+{\mathcal {F}}_x(s)\big )\bigg | \le \frac{1}{h_1(s)}\,\max _{|u|\,\le \,|\xi _x|+\Vert \psi \Vert _{L^1}}\big |\Phi ^{-1}(u)\big |; \end{aligned}$$

thus, since \(1/h_1\in L^p(I)\), we conclude that \({\mathcal {P}}_{x}\in W^{1,p}(I)\), as claimed. Furthermore, it is not difficult to see that the solutions of (2.1) (according to Definition 2.3) are precisely the fixed points (in \(W^{1,p}(I)\)) of \({\mathcal {P}}\).

In view of this fact, we can prove Theorem 2.5 by showing that \({\mathcal {P}}\) possesses at least one fixed point in \(W^{1,p}(I)\); in its turn, the existence of a fixed point of \({\mathcal {P}}\) follows from Schauder’s Fixed Point Theorem if we are able to demonstrate that \({\mathcal {P}}\) enjoys the following properties:

  • \({\mathcal {P}}\) is bounded in \(W^{1,p}(I)\).

  • \({\mathcal {P}}\) is continuous from \(W^{1,p}(I)\) into itself.

  • \({\mathcal {P}}\) is compact.

These facts are proved in the next lemmas.

Lemma 2.7

The operator \({\mathcal {P}}\) defined in (2.6) is bounded in \(W^{1,p}(I)\); that is, there exists a universal constant \({\mathbf {c}}_1 > 0\), such that:

$$\begin{aligned} \Vert {\mathcal {P}}_{x}\Vert _{W^{1,p}} \le {\mathbf {c}}_1 \quad \hbox { for every}\ x\in W^{1,p}(I). \end{aligned}$$

Proof

For every \(x\in W^{1,p}(I)\), by combining (2.3) and (2.5), we have:

$$\begin{aligned} |\xi _x + {\mathcal {F}}_x(s)| \le {\mathbf {c}}_0 + \Vert \psi \Vert _{L^1} =:\eta , \quad \hbox { for every}\ s\in I; \end{aligned}$$

thus, if we set \(M = \max _{[-\eta ,\eta ]}|\Phi ^{-1}|\), we obtain (see assumption \((\mathrm {H2})_2\)):

$$\begin{aligned} \bigg |\frac{1}{A_x(s)}\,\Phi ^{-1}\big (\xi _x+{\mathcal {F}}_x(s)\big )\bigg | \le \frac{M}{h_1(s)} \quad \hbox { for every}\ s\in I, \end{aligned}$$
(2.7)

and the estimate holds for every \(x\in W^{1,p}(I)\). With such an estimate at hand, we can easily prove the boundedness of \({\mathcal {P}}\): indeed, by (2.7), we have:

$$\begin{aligned} \Vert {\mathcal {P}}_{x}'\Vert _{L^p}&= \bigg (\int _0^T\bigg |\frac{1}{A_x(s)}\,\Phi ^{-1}\big (\xi _x+{\mathcal {F}}_x(s)\big )\bigg |^p\,\mathrm {d}s \bigg )^{1/p} \le M\,\Vert 1/h_1\Vert _{L^p} \end{aligned}$$

for every \(x\in W^{1,p}(I)\); moreover, one has:

$$\begin{aligned} \Vert {\mathcal {P}}_{x}\Vert _{L^p}&\le \bigg \{\int _0^T\bigg (|\nu _1| +\int _0^t\bigg |\frac{1}{A_x(s)}\,\Phi ^{-1}\big (\xi _x+{\mathcal {F}}_x(s)\big ) \bigg |\,\mathrm {d}s\bigg )^p\,\mathrm {d}t\bigg \}^{1/p} \\&\le T^{1/p}\,\big (|\nu _1|+ M\, \Vert 1/h_1 \Vert _{L^p}\big ), \end{aligned}$$

and again, the estimate holds for every \(x\in W^{1,p}(I)\). Summing up, if we introduce the constant (which does not depend on x):

$$\begin{aligned} {\mathbf {c}}_1 := T^{1/p}\,(|\nu _1|+ M\,\Vert 1/h_1\Vert _{L^p}) + M\,\Vert 1/h_1\Vert _{L^p} > 0, \end{aligned}$$

we conclude that, for every \(x\in W^{1,p}(I)\), one has:

$$\begin{aligned}&\Vert {\mathcal {P}}_{x}\Vert _{W^{1,p}} = \Vert {\mathcal {P}}_{x}\Vert _{L^p} + \Vert {\mathcal {P}}_{x}'\Vert _{L^p} \le {\mathbf {c}}_1. \end{aligned}$$

This ends the proof. \(\square \)

Remark 2.8

The proof of Lemma 2.7 contains the following fact, which we shall repeatedly use in the sequel: there exists a constant \(M > 0\), such that, for every \(x\in W^{1,p}(I)\):

$$\begin{aligned} \max _{t\in I}\Big |\Phi ^{-1}\big (\xi _x+{\mathcal {F}}_x(t)\big )\Big | \le M. \end{aligned}$$
(2.8)

We also highlight that, since the injection \(W^{1,p}(I)\subseteq C(I,{\mathbb {R}})\) is continuous, the boundedness of \({\mathcal {P}}\) in \(W^{1,p}(I)\) implies the boundedness of \({\mathcal {P}}\) in \(C(I,{\mathbb {R}})\): more precisely, there exists a real \({\mathbf {c}}'_1 > 0\), such that:

$$\begin{aligned} \sup _{t\in I}|{\mathcal {P}}_{x}(t)| \le {\mathbf {c}}'_1, \quad \hbox { for every}\ x\in W^{1,p}(I). \end{aligned}$$
(2.9)

We now turn to prove the continuity of \({\mathcal {P}}\).

Lemma 2.9

The operator \({\mathcal {P}}\) defined in (2.6) is continuous on \(W^{1,p}(I)\).

Proof

Let \(x_0\in W^{1,p}(I)\) be fixed and let \(\{x_n\}_{n\in {\mathbb {N}}}\subseteq W^{1,p}(I)\) be a sequence converging to \(x_0\) as \(n\rightarrow \infty \). We need to prove that \({\mathcal {P}}_{x_n} \rightarrow {\mathcal {P}}_{x_0}\) as \(n\rightarrow \infty \).

To this end, we arbitrarily choose a sub-sequence \(\{x_{n_j}\}_{j\in {\mathbb {N}}}\) of \(\{x_n\}_{n\in {\mathbb {N}}}\) and we show that, by possibly choosing a further sub-sequence, we have:

$$\begin{aligned} \lim _{j\rightarrow \infty }{\mathcal {P}}_{x_{n_{j}}} = {\mathcal {P}}_{x_0} \quad \hbox { in}\ W^{1,p}(I). \end{aligned}$$

First of all, since (2.5) implies that the sequence \(\{\xi _{x_{n_j}}\}_{j\in {\mathbb {N}}}\) is bounded in \({\mathbb {R}}\), there exists a real \(\xi _0\in {\mathbb {R}}\), such that (up to a sub-sequence):

$$\begin{aligned} \xi _j := \xi _{x_{n_{j}}} \rightarrow \xi _0 \quad \hbox { as}\ j\rightarrow \infty . \end{aligned}$$

Moreover, since \({\mathcal {F}}\) is continuous from \(W^{1,p}(I)\) to \(C(I,{\mathbb {R}})\) (see Remark 2.2) and A is continuous with respect to the \(W^{1,p}\)-norm (see Remark 2.1-(ii)), we have:

$$\begin{aligned} {\mathcal {F}}_j := {\mathcal {F}}_{x_{n_{j}}}\rightarrow {\mathcal {F}}_{x_0} \qquad \text { and } \qquad A_j := A_{x_{n_{j}}}\rightarrow A_{x_0} \end{aligned}$$

uniformly on I as \(j\rightarrow \infty \). Gathering together all these facts (and recalling that, by assumption (H1), \(\Phi ^{-1}\in C({\mathbb {R}},{\mathbb {R}})\)), we obtain:

$$\begin{aligned} \begin{aligned}&\lim _{j\rightarrow \infty }\frac{1}{A_j(t)}\, \Phi ^{-1}\big (\xi _j+{\mathcal {F}}_j(t)\big ) \\&\quad = \frac{1}{A_{x_0}(t)}\,\Phi ^{-1}\big (\xi _0+{\mathcal {F}}_{x_0}(t)\big ) \quad \hbox { for a.e. }\ t\in I. \end{aligned} \end{aligned}$$
(2.10)

From this, owing to estimate (2.8) and Remark 2.1, we infer that:

$$\begin{aligned} \begin{aligned}&\lim _{j\rightarrow \infty }\int _0^t \frac{1}{A_j(s)}\, \Phi ^{-1}\big (\xi _j+{\mathcal {F}}_j(s)\big )\,\mathrm {d}s\\&\quad = \int _0^t \frac{1}{A_{x_0}(s)}\,\Phi ^{-1}\big (\xi _0+{\mathcal {F}}_{x_0}(s)\big )\,\mathrm {d}s \qquad \hbox { for every}\ t\in I. \end{aligned} \end{aligned}$$
(2.11)

In particular, since we know from Lemma 2.6 that:

$$\begin{aligned} \int _0^T\frac{1}{A_j(s)}\, \Phi ^{-1}\big (\xi _j+{\mathcal {F}}_j(s)\big )\,\mathrm {d}s = \nu _2-\nu _1 \quad \hbox { for every}\ j\in {\mathbb {N}}, \end{aligned}$$

identity (2.11) implies that:

$$\begin{aligned} \int _0^T\frac{1}{A_{x_0}(s)}\,\Phi ^{-1}\big (\xi _0+{\mathcal {F}}_{x_0}(s)\big )\,\mathrm {d}s = \nu _2-\nu _1; \end{aligned}$$

thus, by the uniqueness property of \(\xi _x\) in Lemma 2.6, we get \(\xi _0 = \xi _{x_0}\). As a consequence, by exploiting the very definition of \({\mathcal {P}}\) (see (2.6)), identity (2.11) allows us to conclude that \({\mathcal {P}}_{x_{n_{j}}}\rightarrow {\mathcal {P}}_{x_0}\) point-wise on I as \(j\rightarrow \infty \).

To complete the proof of the lemma, we need to show that the sequence \({\mathcal {P}}_{x_{n_{j}}}\) actually converges to \({\mathcal {P}}_{x_0}\) in \(W^{1,p}(I)\) as \(j\rightarrow \infty \). To this end we first observe that, by exploiting estimate (2.8), for almost every \(t\in I\), one has:

$$\begin{aligned} \begin{aligned}&\bigg | \frac{1}{A_j(t)}\, \Phi ^{-1}\big (\xi _j+{\mathcal {F}}_j(t)\big ) - \frac{1}{A_{x_0}(t)}\,\Phi ^{-1}\big (\xi _{x_0}+{\mathcal {F}}_{x_0}(t)\big )\bigg |^p \le 2^p\,\frac{M^p}{h_1^p(t)}; \end{aligned} \end{aligned}$$

as a consequence, since \(1/h_1\in L^p(I)\) (by assumption (H2)), a standard application of Lebesgue’s Dominated Convergence Theorem gives (see also (2.10)):

$$\begin{aligned} \begin{aligned}&\lim _{j\rightarrow \infty }\Vert {\mathcal {P}}_{x_{n_{j}}}'-{\mathcal {P}}_{x_0}'\Vert _{L^p}^p \\&\quad = \lim _{j\rightarrow \infty }\int _0^T \bigg | \frac{1}{A_j(t)}\, \Phi ^{-1}\big (\xi _j+{\mathcal {F}}_j(t)\big ) - \frac{1}{A_{x_0}(t)}\,\Phi ^{-1}\big (\xi _{x_0}+{\mathcal {F}}_{x_0}(t)\big )\bigg |^p\mathrm {d}t = 0. \end{aligned} \end{aligned}$$

On the other hand, since \({\mathcal {P}}\) is bounded in \(C(I,{\mathbb {R}})\) (see Remark 2.8), one has:

$$\begin{aligned} |{\mathcal {P}}_{x_{n_{j}}}(t)- {\mathcal {P}}_{x_0}(t)|^p \le 2^{p}\,({\mathbf {c}}'_1)^p\text { for every }t\in I; \end{aligned}$$

thus, again by Lebesgue’s Dominated Convergence Theorem, we get:

$$\begin{aligned} \begin{aligned}&\lim _{j\rightarrow \infty }\Vert {\mathcal {P}}_{x_{n_{j}}}-{\mathcal {P}}_{x_0}\Vert ^p_{L^p} \\&\quad = \lim _{j\rightarrow \infty }\int _0^T|{\mathcal {P}}_{x_{n_{j}}}(t)- {\mathcal {P}}_{x_0}(t)|^p\,\mathrm {d}t = 0. \end{aligned} \end{aligned}$$

Gathering together these facts, we conclude that \(\Vert {\mathcal {P}}_{x_{n_{j}}}- {\mathcal {P}}_{x_0}\Vert _{W^{1,p}}\rightarrow 0\) as \(j\rightarrow \infty \), and this finally completes the proof. \(\square \)

Finally, we prove that \({\mathcal {P}}\) is compact.

Lemma 2.10

The operator \({\mathcal {P}}\) defined in (2.6) is compact on \(W^{1,p}(I)\).

Proof

Let \(\{x_n\}_{n\in {\mathbb {N}}}\subseteq W^{1,p}(I)\) be bounded. We need to prove that the sequence \(\{{\mathcal {P}}_{x_n}\}_{n\in {\mathbb {N}}}\) possesses a sub-sequence which is convergent (in the \(W^{1,p}\)-norm) to some function \(y_0\in W^{1,p}(I)\).

First of all, since \(\{\xi _{x_n}\}_{n\in {\mathbb {N}}}\) is bounded in \({\mathbb {R}}\) (see (2.5)), there exist a real \(\xi _0\) and a sub-sequence of \(\{x_n\}_{n\in {\mathbb {N}}}\), denoted again by \(\{x_n\}_{n\in {\mathbb {N}}}\), such that:

$$\begin{aligned} \lim _{n\rightarrow \infty }\xi _{x_n} = \xi _0 \quad \text {and} \quad |\xi _0|\le {\mathbf {c}}_0. \end{aligned}$$
(2.12)

Moreover, since \(\{x_n\}_{n\in {\mathbb {N}}}\) is bounded in \(W^{1,p}(I)\) and \(p > 1\), there exist a suitable function \(x_0\in W^{1,p}(I)\) and another sub-sequence of \(\{x_n\}_{n\in {\mathbb {N}}}\), which we still denote by \(\{x_n\}_{n \in {\mathbb {N}}}\), such that \(x_n \rightarrow x_0\) uniformly on I as \(n\rightarrow \infty \).

As a consequence, since the operator A is continuous with respect to the uniform topology of \(C(I,{\mathbb {R}})\) (by assumption (H2)), we have:

$$\begin{aligned} A_{x_n}\rightarrow A_{x_0} \hbox { uniformly on } I \hbox { as } n\rightarrow \infty . \end{aligned}$$
(2.13)

We now observe that, by assumption (H3), we have the estimate:

$$\begin{aligned} |F_{x_n}(t)|\le \psi (t)\text {, holding true for a.e. }t\in I\text { and every }n\in {\mathbb {N}}; \end{aligned}$$

thus, \(\{F_{x_n}\}_{n\in {\mathbb {N}}}\) is bounded and equi-integrable in \(L^1\). Owing to the Dunford–Pettis Theorem, we infer the existence of a function \(g\in L^1(I)\) and of another sub-sequence of \(\{x_n\}_{n\in {\mathbb {N}}}\), denoted once again by \(\{x_n\}_{n\in {\mathbb {N}}}\), such that:

\((\star )\):

  \(\displaystyle \lim _{n\rightarrow \infty }\int _0^T F_{x_n}(s)\,v(s)\,\mathrm {d}s = \int _0^T g(s)\,v(s)\,\mathrm {d}s \quad \hbox { for every}\ v\in L^\infty (I)\);

\((\star )\):

  \(\Vert g\Vert _{L^1} \le \Vert \psi \Vert _{L^1}.\)

Choosing v as the indicator function of [0, t] (with \(t\in I\)), we get:

$$\begin{aligned} \begin{aligned}&{\mathcal {F}}_{x_n}(t) \rightarrow {\mathcal {G}}(t) := \int _0^t g(s)\,\mathrm {d}s \quad \hbox { for every}\ t\in I \\&\quad \text {and} \quad \sup _{t\in I}|{\mathcal {G}}(t)|\le \Vert \psi \Vert _{L^1}. \end{aligned} \end{aligned}$$
(2.14)

Gathering together (2.12), (2.13), and (2.14), we deduce that:

$$\begin{aligned} \begin{aligned}&\lim _{n\rightarrow \infty }\frac{1}{A_{x_n}(t)}\, \Phi ^{-1}\big (\xi _{x_n} + {\mathcal {F}}_{x_n}(t)\big ) \\&\quad = \frac{1}{A_{x_0}(t)}\,\Phi ^{-1}\big (\xi _0+{\mathcal {G}}(t)\big ) \quad \hbox { for a.e. }\ t\in I. \end{aligned} \end{aligned}$$
(2.15)

From this, owing to (2.12), (2.14), and Remark 2.1, we conclude that:

$$\begin{aligned} \bigg |\frac{1}{A_{x_0}(t)}\,\Phi ^{-1}\big (\xi _0+{\mathcal {G}}(t)\big )\bigg | \le \frac{M}{h_1(t)}\in L^p(I) \quad \hbox { for a.e. }\ t\in I, \end{aligned}$$
(2.16)

and that, for every \(t\in I\), one has:

$$\begin{aligned}&\lim _{n\rightarrow \infty }{\mathcal {P}}_{x_n}(t) = \lim _{n\rightarrow \infty } \Big \{\nu _1+\int _0^t \frac{1}{A_{x_n}(s)}\, \Phi ^{-1}\big (\xi _{x_n}+{\mathcal {F}}_{x_n}(s)\big )\,\mathrm {d}s\Big \} \\&\quad = \nu _1 + \int _0^t \frac{1}{A_{x_0}(s)}\,\Phi ^{-1}\big (\xi _0+{\mathcal {G}}(s)\big )\,\mathrm {d}s =:y_0(t). \end{aligned}$$

To complete the proof of the lemma, we need to show that the sequence \(\{{\mathcal {P}}_{x_n}\}_{n}\) actually converges to \(y_0\) in \(W^{1,p}(I)\) as \(n\rightarrow \infty \).

On the one hand, using estimate (2.16) and arguing exactly as in the proof of Lemma 2.9, we easily see that:

$$\begin{aligned} \begin{aligned}&\lim _{n\rightarrow \infty }\Vert {\mathcal {P}}_{x_n}'-y_0'\Vert _{L^p}^p \\&\quad = \lim _{n\rightarrow \infty }\int _0^T \bigg | \frac{1}{A_{x_n}(t)}\, \Phi ^{-1}\big (\xi _{x_n}+{\mathcal {F}}_{x_n}(t)\big ) - \frac{1}{A_{x_0}(t)}\,\Phi ^{-1}\big (\xi _0+{\mathcal {G}}(t)\big )\bigg |^p\mathrm {d}t = 0. \end{aligned} \end{aligned}$$

On the other hand, since \({\mathcal {P}}_{x_n}\rightarrow y_0\) point-wise on I, from (2.9), we get:

$$\begin{aligned} |y_0(t)|\le {\mathbf {c}}'_1 \hbox { for every } t\in I; \end{aligned}$$

hence, by arguing once again as in the proof of Lemma 2.9, we conclude that:

$$\begin{aligned} \begin{aligned}&\lim _{n\rightarrow \infty }\Vert {\mathcal {P}}_{x_n}-y_0\Vert ^p_{L^p} \\&\quad = \lim _{n\rightarrow \infty }\int _0^T|{\mathcal {P}}_{x_n}(t)-y_0(t)|^p\,\mathrm {d}t = 0. \end{aligned} \end{aligned}$$

Summing up, \({\mathcal {P}}_{x_n}\rightarrow y_0\) in \(W^{1,p}(I)\) as \(n\rightarrow \infty \), and the proof is complete.

\(\square \)

Gathering Lemmas 2.7, 2.9, and 2.10, we can now prove Theorem 2.5.

Proof of Theorem 2.5

We have already recognized that a function x in the space \(W^{1,p}(I)\) is a solution of the boundary value problem (2.1) if and only if x is a fixed point of the operator \({\mathcal {P}}\) defined in (2.6).

On the other hand, since \({\mathcal {P}}\) is bounded, continuous, and compact on the Banach space \(W^{1,p}(I)\), the Schauder Fixed Point Theorem ensures the existence of (at least) one \(x\in W^{1,p}(I)\), such that \({\mathcal {P}}_{x} = x\), and thus, the problem (2.1) possesses at least one solution. \(\square \)
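
The scheme underlying the proof can also be exercised numerically: discretize, compute \(\xi _x\) by bisection as in Lemma 2.6, apply \({\mathcal {P}}\), and iterate. Schauder's Theorem guarantees only the existence of a fixed point, not the convergence of such a Picard-type iteration; nevertheless, on the following toy data (all hypothetical choices, not taken from the paper: \(\Phi (z)=z^3\), \(A_x(t)=1+t\), \(F_x(t)=\cos (x(t))\), \(\nu _1=0\), \(\nu _2=1\), \(T=1\)) the iteration settles:

```python
import math

# Hypothetical toy data: I = [0,1], nu1 = 0, nu2 = 1, Phi(z) = z**3,
# A_x(t) = 1 + t, F_x(t) = cos(x(t)).  We iterate x <- P_x.
N, T, nu1, nu2 = 1001, 1.0, 0.0, 1.0
h = T / (N - 1)
t = [i * h for i in range(N)]
A = [1.0 + s for s in t]

def phi_inv(z):
    return math.copysign(abs(z) ** (1.0 / 3.0), z)

def cumtrap(vals):
    # cumulative trapezoid rule: out[i] ~ int_0^{t_i} vals(s) ds
    out = [0.0] * N
    for i in range(1, N):
        out[i] = out[i - 1] + 0.5 * h * (vals[i - 1] + vals[i])
    return out

x = [nu1 + (nu2 - nu1) * s for s in t]           # initial guess: affine
for _ in range(100):
    calF = cumtrap([math.cos(v) for v in x])     # calF_x(t) = int_0^t F_x
    def g(xi):                                   # g(xi) = int_0^T phi_inv(xi+calF)/A
        return cumtrap([phi_inv(xi + calF[i]) / A[i] for i in range(N)])[-1]
    lo, hi = -50.0, 50.0                         # bisection for xi_x (Lemma 2.6)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < nu2 - nu1 else (lo, mid)
    xi = 0.5 * (lo + hi)
    xnew = [nu1 + v for v in cumtrap([phi_inv(xi + calF[i]) / A[i] for i in range(N)])]
    diff = max(abs(u - v) for u, v in zip(x, xnew))
    x = xnew
    if diff < 1e-12:
        break

assert abs(x[0] - nu1) < 1e-12 and abs(x[-1] - nu2) < 1e-8   # boundary conditions
assert all(x[i] < x[i + 1] for i in range(N - 1))            # here x' > 0 throughout
```

The inner bisection mirrors Lemma 2.6; the outer loop is only a heuristic for locating the fixed point whose existence Theorem 2.5 guarantees.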

3 The Dirichlet problem for singular ODEs

In this section, we exploit the existence result in Theorem 2.5 to prove the solvability of boundary value problems of the following type:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), &{} \hbox { a.e. on }\ I, \\ x(0) = \nu _1,\,\,x(T) = \nu _2. \end{array}\right. } \end{aligned}$$
(3.1)

As in Sect. 2, \(I = [0,T]\) (for some real \(T > 0\)) and \(\nu _1,\nu _2\in {\mathbb {R}}\); furthermore, the functions \(\Phi , a\) and f satisfy the following structural assumptions:

  1. (A1)

    \(\Phi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing homeomorphism;

  2. (A2)

    \(a\in C(I\times {\mathbb {R}},{\mathbb {R}})\) and there exists \(h\in C(I,{\mathbb {R}})\), such that:

\((\mathrm {A2})_1\):

\(h \ge 0\) on I and there exists \(p > 1\), such that:

$$\begin{aligned} 1/h\in L^p(I); \end{aligned}$$
\((\mathrm {A2})_2\):

\(a(t,x) \ge h(t)\) for every \(t\in I\) and every \(x\in {\mathbb {R}}\);

  3. (A3)

    \(f:I\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}\) is a Carathéodory function; that is:

\((*)\)  the map \(t\mapsto f(t,x,y)\) is measurable on I, for every \((x,y)\in {\mathbb {R}}^2\);

\((*)\)  the map \((x,y)\mapsto f(t,x,y)\) is continuous on \({\mathbb {R}}^2\), for a.e. \(t\in I\).

Remark 3.1

As in Sect. 2, we point out that, as a consequence of \((\mathrm {A2})_1\)-\((\mathrm {A2})_2\), for every \((t,x)\in I\times {\mathbb {R}}\), one has \(a(t,x)\ge h(t)\ge 0\) and:

$$\begin{aligned} 0 \le \int _0^T\frac{1}{a(t,x(t))}\,\mathrm {d}t \le \int _0^T \frac{1}{h(t)}\,\mathrm {d}t, \end{aligned}$$

for any measurable function \(x:I\rightarrow {\mathbb {R}}\). Hence, \(t\mapsto 1/a(t,x(t))\in L^p(I)\).

We now give the definition of solution of the problem (3.1).

Definition 3.2

We say that a continuous function \(x\in C(I,{\mathbb {R}})\) is a solution of the Dirichlet problem (3.1) if it satisfies the following properties:

  1. (1)

    \(x\in W^{1,1}(I)\) and \(t\mapsto \Phi \big (a(t,x(t))\,x'(t)\big )\in W^{1,1}(I)\);

  2. (2)

    \(\Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t))\) for almost every \(t\in I\);

  3. (3)

    \(x(0) = \nu _1\) and \(x(T) = \nu _2\).

If x fulfills only (1) and (2), we say that x is a solution of the ODE

$$\begin{aligned} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)). \end{aligned}$$
(3.2)

To clearly state the main result of this section, we also need to introduce the definition of upper/lower solution of the equation in (3.1).

Definition 3.3

We say that a continuous function \(\alpha \in C(I,{\mathbb {R}})\) is a lower [respectively upper] solution of the differential equation (3.2) if:

  1. (1)

    \(\alpha \in W^{1,1}(I)\) and \(t\mapsto \Phi \big (a(t,\alpha (t))\,\alpha '(t)\big )\in W^{1,1}(I)\);

  2. (2)

    \(\Big (\Phi \big (a(t,\alpha (t))\,\alpha '(t)\big )\Big )' \ge \,[\le ]\,\, f(t,\alpha (t),\alpha '(t))\) for almost every \(t\in I\).

Remark 3.4

If \(x\in W^{1,1}(I)\) is a solution of the problem (3.1), we denote by \({\mathcal {A}}_x\) the unique continuous function on I, such that (see also Remark 2.4):

$$\begin{aligned} {\mathcal {A}}_x(t) = a(t,x(t))\,x'(t) \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$

Notice that, as a consequence of condition (1) in Definition 3.2 (and again of the fact that \(\Phi \) is a homeomorphism, see (A1)), such a function exists.

Analogously, if \(\alpha \in W^{1,1}(I)\) is a lower/upper solution of (3.2), we denote by \({\mathcal {A}}_\alpha \) the unique continuous function on I, such that:

$$\begin{aligned} {\mathcal {A}}_\alpha (t) = a(t,\alpha (t))\,\alpha '(t) \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$

The existence of such a function follows from (1) in Definition 3.3.

We are ready to state our main existence result.

Theorem 3.5

Let us assume that, together with the structural assumptions (A1)-to-(A3), the following additional hypotheses are satisfied:

  1. (A1’)

    there exists a pair of lower and upper solutions \(\alpha ,\beta \in W^{1,p}(I)\) of the differential equation (3.2), such that \(\alpha (t)\le \beta (t)\) for every \(t\in I\);

  2. (A2’)

    for every \(R > 0\) and every non-negative function \(\gamma \in L^p(I)\), there exists a non-negative function \(h = h_{R,\gamma }\in L^1(I)\), such that:

    $$\begin{aligned} |f(t,x,y(t))|\le h_{R,\gamma }(t) \end{aligned}$$
    (3.3)

    for a.e. \(t\in I\), every \(x\in {\mathbb {R}}\) with \(|x|\le R\) and every function \(y\in L^p(I)\) satisfying \(|y(t)|\le \gamma (t)\) a.e. on I.

  3. (A3’)

    there exist a constant \(H > 0\), a non-negative function \(\mu \in L^q(I)\) (for some \(1<q\le \infty \)), a non-negative function \(l\in L^1(I)\), and a non-negative measurable function \(\psi :(0,\infty )\rightarrow (0,\infty )\), such that:

    $$\begin{aligned}&(\star )\,\,1/\psi \in L^1_{\mathrm {loc}}(0,\infty ) \quad \text {and} \quad \int _1^\infty \frac{1}{\psi (t)}\,\mathrm {d}t = \infty ; \end{aligned}$$
    (3.4)
    $$\begin{aligned}&(\star )\,\,|f(t,x,y)| \le \psi \Big (|\Phi (a(t,x)\,y)|\Big ) \cdot \Big (l(t)+\mu (t)\,|y|^{\frac{q-1}{q}}\Big ); \end{aligned}$$
    (3.5)

    for a.e. \(t\in I\), every \(x\in [\alpha (t),\beta (t)]\) and every \(y\in {\mathbb {R}}\) with \(|y| \ge H\).

Then, for every choice of \(\nu _1\in [\alpha (0),\beta (0)]\) and \(\nu _2\in [\alpha (T),\beta (T)]\), the Dirichlet problem (3.1) possesses at least one solution \(x\in W^{1,p}(I)\) (where \(p > 1\) is as in assumption (A2)), which further satisfies:

$$\begin{aligned} \alpha (t)\le x(t)\le \beta (t) \quad \hbox { for every}\ t\in I. \end{aligned}$$
(3.6)

Moreover, if \(M > 0\) is any real number, such that \(\sup _I|\alpha |,\,\sup _I|\beta | \le M\), it is possible to find a real \(L_M > 0\), only depending on M, such that:

$$\begin{aligned} \max _{t\in I}\big |x(t)\big |\le M \quad \text {and} \quad \max _{t\in I}\big |{\mathcal {A}}_x(t)\big | \le L_M. \end{aligned}$$
(3.7)

The main idea behind the proof of Theorem 3.5 is to regard the Dirichlet problem (3.1) as a particular case of an abstract BVP of the form (2.1), and then to apply the existence result contained in Theorem 2.5.

Unfortunately, we cannot directly apply our Theorem 2.5 to the problem (3.1): in fact, in general, we cannot expect the (well-defined) functional:

$$\begin{aligned} W^{1,p}(I)\ni x \mapsto F_x := f(t,x(t),x'(t)) \in L^1(I) \end{aligned}$$

to satisfy assumption (H3) (or, more precisely, estimate (2.2)).

Thus, following an approach similar to that exploited in [10, 15], we introduce a suitable truncated version of problem (3.1), to which Theorem 2.5 does apply.

To this end, and to simplify the notation, we first fix some relevant constants that we shall need for the proof of Theorem 3.5; henceforth, we suppose that all the assumptions in the statement of Theorem 3.5 are satisfied.

Let \(M > 0\) be any real number, such that \(\sup _I|\alpha |,\,\sup _I|\beta | \le M\) and let \(H > 0\) be the constant appearing in assumption (A3’); moreover, we define:

$$\begin{aligned} a_0 := \max \big \{a(t,x)\,:\,(t,x)\in I\times [-M,M]\big \}. \end{aligned}$$
(3.8)

Now, since \(\Phi \) is increasing, we choose \(N > 0\), such that:

$$\begin{aligned} \begin{aligned}&\Phi (N)> 0, \quad \Phi (-N) < 0\qquad \text {and} \\&N > \max \bigg \{H, \frac{2M}{T}\bigg \}\cdot a_0; \end{aligned} \end{aligned}$$
(3.9)

accordingly, we fix \(L_M > 0\) in such a way that (see (3.4)):

$$\begin{aligned} \begin{aligned} \min \bigg \{\int _{\Phi (N)}^{\Phi (L_M)}&\frac{1}{\psi (s)}\,\mathrm {d}s, \int _{-\Phi (-N)}^{-\Phi (-L_M)}\frac{1}{\psi (s)}\,\mathrm {d}s\bigg \} > \Vert l\Vert _{L^1} + \Vert \mu \Vert _{L^q}\,(2M)^{\frac{q-1}{q}}. \end{aligned} \end{aligned}$$
(3.10)

Notice that \(L_M\) depends on M (and also on l and \(\mu \)), but not on \(\alpha ,\,\beta \) nor on \(\nu _1\) and \(\nu _2\).
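The defining condition (3.10) lends itself to a direct numerical computation of \(L_M\): one enlarges L step by step until the smaller of the two integrals exceeds \(\Vert l\Vert _{L^1}+\Vert \mu \Vert _{L^q}\,(2M)^{(q-1)/q}\), and the divergence condition in (3.4) guarantees that the search terminates. A minimal Python sketch, under the purely illustrative assumptions \(\psi (s)=1+s\) and \(\Phi =\mathrm {id}\) (none of these choices come from the paper):

```python
def find_L_M(N, rhs, psi, Phi, step=0.5, n=20000):
    """Smallest L on the grid N + k*step fulfilling condition (3.10).

    The two integrals of 1/psi are evaluated with a midpoint rule; Phi is
    assumed to be a strictly increasing homeomorphism with Phi(0) = 0, and
    rhs plays the role of ||l||_{L^1} + ||mu||_{L^q} (2M)^{(q-1)/q}.
    """
    def integral(lo, hi):
        h = (hi - lo) / n
        return sum(1.0 / psi(lo + (k + 0.5) * h) for k in range(n)) * h

    L = N + step
    while True:
        i1 = integral(Phi(N), Phi(L))        # first integral in (3.10)
        i2 = integral(-Phi(-N), -Phi(-L))    # second integral in (3.10)
        if min(i1, i2) > rhs:
            return L
        L += step  # termination is guaranteed by the divergence in (3.4)

# Illustrative data: psi(s) = 1 + s, Phi = identity, right-hand side 1.
L_M = find_L_M(N=1.0, rhs=1.0, psi=lambda s: 1.0 + s, Phi=lambda s: s)
```

For these data, the condition has the closed form \(\log \frac{1+L}{1+N} > 1\), i.e., \(L > 2e-1\approx 4.44\), so the grid search stops at 4.5.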

We then consider the function:

$$\begin{aligned} \gamma _M := L_M/h+|\alpha '|+|\beta '| \in L^p(I), \end{aligned}$$

and we introduce the following truncating operators:

$$\begin{aligned} {\mathcal {T}}: W^{1,p}(I)\longrightarrow W^{1,p}(I), \qquad {\mathcal {T}}(x)(t):= & {} {\left\{ \begin{array}{ll} \alpha (t), &{}\quad \hbox {if }\ x(t)< \alpha (t); \\ x(t), &{}\quad \hbox {if }\ x(t)\in [\alpha (t),\beta (t)]; \\ \beta (t), &{}\quad \hbox {if }\ x(t)> \beta (t); \end{array}\right. } \\ {\mathcal {D}}: L^p(I)\longrightarrow L^1(I), \qquad {\mathcal {D}}(z)(t):= & {} {\left\{ \begin{array}{ll} -\gamma _M(t), &{}\quad \hbox {if }\ z(t) < -\gamma _M(t); \\ z(t), &{}\quad \hbox {if }\ |z(t)|\le \gamma _M(t); \\ \gamma _M(t), &{}\quad \hbox {if }\ z(t) > \gamma _M(t). \end{array}\right. } \end{aligned}$$

We also consider the truncated function \(f^*:I\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}\) defined by:

$$\begin{aligned} f^*(t,x,y) := {\left\{ \begin{array}{ll} f\big (t,\beta (t),\beta '(t)\big ) + \arctan \big (x-\beta (t)\big ), &{}\quad \hbox {if }\ x > \beta (t); \\ f(t,x,y), &{}\quad \hbox {if }\ x\in [\alpha (t),\beta (t)]; \\ f\big (t,\alpha (t),\alpha '(t)\big ) + \arctan \big (x-\alpha (t)\big ), &{}\quad \hbox {if }\ x < \alpha (t). \end{array}\right. } \end{aligned}$$
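The operators \({\mathcal {T}}\), \({\mathcal {D}}\) and the modified right-hand side \(f^*\) all act pointwise, so they are easy to reproduce numerically. The following sketch mirrors the three definitions above; the arguments alpha_t, beta_t, gamma_t and the function f are illustrative placeholders, not data from the paper:

```python
import math

def truncate_T(xt, alpha_t, beta_t):
    """The operator T: clamp the value x(t) into [alpha(t), beta(t)]."""
    return min(max(xt, alpha_t), beta_t)

def truncate_D(zt, gamma_t):
    """The operator D: clamp the value z(t) into [-gamma_M(t), gamma_M(t)]."""
    return min(max(zt, -gamma_t), gamma_t)

def f_star(t, xt, yt, alpha_t, beta_t, d_alpha_t, d_beta_t, f):
    """The truncated right-hand side f*, following the three-case definition."""
    if xt > beta_t:
        return f(t, beta_t, d_beta_t) + math.atan(xt - beta_t)
    if xt < alpha_t:
        return f(t, alpha_t, d_alpha_t) + math.atan(xt - alpha_t)
    return f(t, xt, yt)
```

With constant \(\alpha \equiv -1\), \(\beta \equiv 1\) and f(t, x, y) = x, for instance, f_star(0, 2, 0, -1, 1, 0, 0, f) evaluates the first branch and returns 1 + arctan(1).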

By means of the function \(f^*\) and of the operators \({\mathcal {T}}\) and \({\mathcal {D}}\), we are finally in a position to introduce a “truncated version” of the Dirichlet problem (3.1):

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \bigg (\Phi \Big (a\big (t,{\mathcal {T}}(x)(t)\big )\,x'(t)\Big )\bigg )' = f^*\Big (t, x(t), {\mathcal {D}}\big ({\mathcal {T}}(x)'(t)\big )\Big ), &{} \hbox { a.e. on}\ I, \\ x(0) = \nu _1,\,\,x(T) = \nu _2. \end{array}\right. } \end{aligned}$$
(3.11)

The next proposition shows that the “abstract” existence result in Theorem 2.5 does apply to the “truncated” Dirichlet problem (3.11).

Proposition 3.6

Let the above assumptions and notation apply. Then, there exists (at least) one solution \(x\in W^{1,p}(I)\) of the Dirichlet problem (3.11).

Proof

We consider the operators A and F defined as follows:

$$\begin{aligned}&A: W^{1,p}(I)\longrightarrow C(I,{\mathbb {R}}), \qquad A_x(t) := a\big (t,{\mathcal {T}}(x)(t)\big ), \\&F: W^{1,p}(I)\longrightarrow L^1(I), \qquad F_x(t) := f^*\Big (t, x(t), {\mathcal {D}}\big ({\mathcal {T}}(x)'(t)\big )\Big ). \end{aligned}$$

By means of these operators, the problem (3.11) can be re-written as:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (A_x(t)\,x'(t)\big )\Big )' = F_x(t), &{} \hbox { a.e. on}\ I, \\ x(0) = \nu _1,\,\,x(T) = \nu _2. \end{array}\right. } \end{aligned}$$

We claim that A and F satisfy assumptions (H2) and (H3) in Theorem 2.5.

First of all, since \({\mathcal {T}}\) is continuous with respect to the uniform topology of \(C(I,{\mathbb {R}})\) (as is very easy to see) and since, by the choice of M, we have:

$$\begin{aligned} -M \le \alpha (t) \le {\mathcal {T}}(x)(t) \le \beta (t)\le M \quad \text {for every }t\in I, \end{aligned}$$

the uniform continuity of a on \(I\times [-M,M]\) implies that A is continuous from \(W^{1,p}(I)\) (as a subspace of \(C(I,{\mathbb {R}})\)) to \(C(I,{\mathbb {R}})\). Moreover, by (3.8), one has:

$$\begin{aligned} A_x(t) = a\big (t,{\mathcal {T}}(x)(t)\big )\le a_0 \quad \text {for every }x\in W^{1,p}(I)\text { and every }t\in I. \end{aligned}$$

Finally, by assumption (A2), there exists \(h\in C(I,{\mathbb {R}})\), such that:

\((\star )\):

  \(h\ge 0\) and \(1/h\in L^p(I)\);

\((\star )\):

  \(A_x(t)\ge h(t)\) for every \(x\in W^{1,p}(I)\) and every \(t\in I\).

Thus, the operator A satisfies assumption (H2) in Theorem 2.5 (with the choice \(h_1:=h\) and \(h_2(t) \equiv a_0\)).

As for the functional F, by arguing exactly as in [7, Theorem 3.1] (and by making crucial use of assumption (A2’)), one can recognize that F is continuous from \(W^{1,p}(I)\) to \(L^1(I)\) and that:

$$\begin{aligned} |F_x(t)| \le \Theta (t) := h_{M,\gamma _M}(t)+\frac{\pi }{2} \end{aligned}$$

for every \(x\in W^{1,p}(I)\) and almost every \(t\in I\) (here, \( h_{M,\gamma _M}\) is the function appearing in assumption (A2’) and corresponding to M and \(\gamma _M\in L^p(I)\)). Since \(\Theta \in L^1(I)\) (as it follows from assumption (A2)\('\)), we conclude that F satisfies assumption (H3) in Theorem 2.5.

Gathering together all these facts, we are allowed to apply Theorem 2.5 to problem (3.11), which, therefore, admits a solution \(x\in W^{1,p}(I)\). \(\square \)

We now turn to prove that any solution of (3.11) actually solves (3.1).

Proposition 3.7

Let the above assumptions and notation apply, and let \(x\in W^{1,p}(I)\) be any solution of the truncated problem (3.11). Then, the following facts hold:

  1. (i)

    \(\alpha (t)\le x(t)\le \beta (t)\) for every \(t\in I\);

  2. (ii)

    \(\sup _I|x|\le M\);

  3. (iii)

    \(|{\mathcal {A}}_x(t)|\le L_M\) for every \(t\in I\);

  4. (iv)

    \(|x'(t)|\le L_M/h(t)\le \gamma _M(t)\) for a.e. \(t\in I\).

Proof

Let \(x\in W^{1,p}(I)\) be any solution of (3.11). According to Remark 2.4, we denote by \({\mathcal {A}}_x\) the unique continuous function on I, such that:

$$\begin{aligned} {\mathcal {A}}_x(t) = a\big (t,{\mathcal {T}}(x)(t)\big )\,x'(t) \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$

Once we have proved that \(x(t)\in [\alpha (t),\beta (t)]\) for all \(t\in I\), we shall obtain:

$$\begin{aligned} {\mathcal {A}}_x(t) = a(t,x(t))\,x'(t) \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$

Let us then turn to prove statements (i)-to-(iv).

(i)  Let us assume, by contradiction, that \(x({\overline{t}})\notin [\alpha ({\overline{t}}),\beta ({\overline{t}})]\) for some \({\overline{t}}\in I\); moreover, to fix ideas, let us suppose that \(x({\overline{t}}) < \alpha ({\overline{t}})\).

Since, by assumptions, \(\nu _1 = x(0) \ge \alpha (0)\) and \(\nu _2 = x(T) \ge \alpha (T)\), it is possible to find suitable points \(t_1,t_2,\theta \in I\), with \(t_1< \theta < t_2\), such that:

  1. (a)

      \(\displaystyle x(\theta ) - \alpha (\theta ) = \min _{t\in I}\big (x(t)-\alpha (t)\big ) < 0;\)

  2. (b)

      \(x(t_1) - \alpha (t_1) = x(t_2)-\alpha (t_2) = 0\) and \(x<\alpha \) on \((t_1,t_2)\).

In particular, from (b), we infer that \({\mathcal {T}}(x) \equiv \alpha \) on \((t_1,t_2)\) and that:

$$\begin{aligned} \begin{aligned} f^*\Big (t, x(t), {\mathcal {D}}\big ({\mathcal {T}}(x)'(t)\big )\Big )&= f(t,\alpha (t),\alpha '(t)) + \arctan \big (x(t)-\alpha (t)\big ) \\&< f(t,\alpha (t),\alpha '(t)), \qquad \hbox { for a.e. }\ t\in (t_1,t_2). \end{aligned} \end{aligned}$$

As a consequence, since x solves the Dirichlet problem (3.11) and \(\alpha \) is a lower solution of the ODE (3.2), for almost every \(t\in (t_1,t_2)\), we obtain:

$$\begin{aligned} \begin{aligned} \Big (\Phi \big ({\mathcal {A}}_x(t)\big )\Big )'&= \bigg (\Phi \Big (a\big (t,{\mathcal {T}}(x)(t)\big )\,x'(t)\Big )\bigg )' \\&= f^*\Big (t, x(t), {\mathcal {D}}\big ({\mathcal {T}}(x)'(t)\big )\Big ) < f(t,\alpha (t),\alpha '(t)) \\&\le \bigg (\Phi \Big (a\big (t,\alpha (t)\big )\,\alpha '(t)\Big )\bigg )' = \Big (\Phi \big ({\mathcal {A}}_\alpha (t)\big )\Big )'. \end{aligned} \end{aligned}$$
(3.12)

We now introduce the subsets \(I_1,I_2\) of I defined as follows:

$$\begin{aligned} I_1 := \{t\in (t_1,\theta ): x'(t) < \alpha '(t)\} \quad \text {and} \quad I_2 := \{t\in (\theta ,t_2): x'(t) > \alpha '(t)\}. \end{aligned}$$

Since \(x < \alpha \) on \((t_1,t_2)\), it is readily seen that both \(I_1\) and \(I_2\) must have positive measure; thus, it is possible to find \(\tau _1\in I_1\) and \(\tau _2\in I_2\), such that:

\((\star )\):

  \(0 < h_1(\tau _i) \le a(\tau _i,\alpha (\tau _i))\) for \(i = 1,2\);

\((\star )\):

  \({\mathcal {A}}_\alpha (\tau _i) = a(\tau _i,\alpha (\tau _i))\,\alpha '(\tau _i)\) for \(i = 1,2\) (see Remark 3.4);

\((\star )\):

  \({\mathcal {A}}_x(\tau _i) = a(\tau _i,{\mathcal {T}}(x)(\tau _i))\,x'(\tau _i) = a(\tau _i,\alpha (\tau _i))\,x'(\tau _i)\) for \(i = 1,2\).

From this, by integrating both sides of inequality (3.12) on \([\tau _1,\theta ]\), we get:

$$\begin{aligned} \begin{aligned}&\Phi \big ({\mathcal {A}}_x(\theta )\big ) - \Phi \Big (a\big (\tau _1,\alpha (\tau _1)\big )\,x'(\tau _1)\Big ) \le \Phi \big ({\mathcal {A}}_\alpha (\theta )\big ) - \Phi \Big (a\big (\tau _1,\alpha (\tau _1)\big )\,\alpha '(\tau _1)\Big ); \end{aligned} \end{aligned}$$

hence, by the choice of \(\tau _1\) and the fact that \(\Phi \) is strictly increasing, one has:

$$\begin{aligned} \Phi \big ({\mathcal {A}}_x(\theta )\big ) - \Phi \big ({\mathcal {A}}_\alpha (\theta )\big ) < 0. \end{aligned}$$
(3.13)

On the other hand, if we integrate both sides of (3.12) on \([\theta ,\tau _2]\), we get:

$$\begin{aligned} \begin{aligned}&\Phi \Big (a\big (\tau _2,\alpha (\tau _2)\big )\,x'(\tau _2)\Big ) - \Phi \big ({\mathcal {A}}_x(\theta )\big ) \le \Phi \Big (a\big (\tau _2,\alpha (\tau _2)\big )\,\alpha '(\tau _2)\Big ) - \Phi \big ({\mathcal {A}}_\alpha (\theta )\big ), \end{aligned} \end{aligned}$$

and thus, by the choice of \(\tau _2\) and again the monotonicity of \(\Phi \), we obtain:

$$\begin{aligned} \begin{aligned}&\Phi \big ({\mathcal {A}}_x(\theta )\big ) - \Phi \big ({\mathcal {A}}_\alpha (\theta )\big ) > 0. \end{aligned} \end{aligned}$$

This is clearly in contradiction with (3.13), hence, \(x\ge \alpha \) on I. By arguing analogously, one can prove that \(x\le \beta \) on I, and statement (i) is established.

(ii)  By statement (i) and the choice of M, we immediately get:

$$\begin{aligned} -\,M \le \alpha (t)\le x(t)\le \beta (t)\le M \quad \hbox { for every}\ t\in I. \end{aligned}$$

(iii)  We split the proof of this statement into two steps.

Step I: We begin by showing that, if \(N > 0\) is as in (3.9), then:

$$\begin{aligned} \min _{t\in I}\big |{\mathcal {A}}_x(t)\big | \le N. \end{aligned}$$
(3.14)

We argue again by contradiction and, to fix ideas, we assume that:

$$\begin{aligned} {\mathcal {A}}_x(t) = a\big (t,x(t)\big )\,x'(t) > N \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$

By integrating both sides of this inequality on [0, T], we get:

$$\begin{aligned} \int _0^T{\mathcal {A}}_x(t)\,\mathrm {d}t = \int _0^Ta(t,x(t))\,x'(t)\,\mathrm {d}t > NT; \end{aligned}$$

from this, by statement (ii), (3.8) and the choice of N (see (3.9)), we obtain:

$$\begin{aligned} \begin{aligned} NT&< \int _0^Ta(t,x(t))\,x'(t)\,\mathrm {d}t \le a_0\cdot \int _0^Tx'(t)\,\mathrm {d}t \\&= \big (x(T)-x(0)\big )\cdot a_0 \le (2M)\cdot a_0 < NT. \end{aligned} \end{aligned}$$

This is clearly a contradiction; hence, it is impossible that \({\mathcal {A}}_x > N\) on the whole of I. By arguing analogously, one can also exclude that \({\mathcal {A}}_x < -N\) on I; since \({\mathcal {A}}_x\) is continuous, (3.14) is established.

Step II: We now turn to prove statement (iii). To this end, arguing once again by contradiction, we assume that there exists \({\overline{t}}\in I\), such that:

$$\begin{aligned} \big |{\mathcal {A}}_x({\overline{t}})\big | > L_M; \end{aligned}$$

moreover, to fix ideas, we suppose that \({\mathcal {A}}_x({\overline{t}}) > L_M\).

Since, by definition, \(L_M > N\), from Step I and Remark 3.4, we infer the existence of two points \(t_1,t_2\in I\), with \(t_1 < t_2\), such that (for example):

\((*)\):

  \({\mathcal {A}}_x(t_1) = N\) and \({\mathcal {A}}_x(t_2) = L_M\);

\((**)\):

  \(0< N< {\mathcal {A}}_x(t) < L_M\) for every \(t\in (t_1,t_2)\);

from this, by \((**)\), statement (ii), (3.8), assumption (A2) and the choice of N (see (3.9)), we obtain:

$$\begin{aligned} 0< H< \frac{N}{a_0}< x'(t) < \frac{L_M}{h(t)} \le \gamma _M(t) \quad \hbox { for a.e. }\ t\in (t_1,t_2). \end{aligned}$$
(3.15)

We explicitly notice that, since \(\Phi \) is increasing and \(\Phi (N) > 0\) (by the choice of N, see (3.9)), from \((**)\), we also infer that:

$$\begin{aligned} \Phi \big ({\mathcal {A}}_x(t)\big )> \Phi (N) > 0\qquad \hbox { for all}\ t\in (t_1,t_2). \end{aligned}$$
(3.16)

Now, by definition of \({\mathcal {D}}\), we deduce from (3.15) that \({\mathcal {D}}(x') = x'\) a.e. on \((t_1,t_2)\); moreover, by statement (i) and the definition of \(f^*\), we have:

$$\begin{aligned} f^*\Big (t, x(t), {\mathcal {D}}\big ({\mathcal {T}}(x)'(t)\big )\Big ) = f(t,x(t),x'(t)) \quad \hbox { for a.e. }\ t\in (t_1,t_2). \end{aligned}$$

Thus, since \(x(t)\in [\alpha (t),\beta (t)]\) for every \(t\in I\) (by statement (i)) and

$$\begin{aligned} x'(t)> H > 0 \hbox { for a.e. } t\in (t_1,t_2) \end{aligned}$$

(again by (3.15)), we are entitled to apply estimate (3.5), obtaining (recall that x solves (3.11), and see \((**)\) and (3.16))

$$\begin{aligned} \bigg |\Big (\Phi \big ({\mathcal {A}}_x(t)\big )\Big )'\bigg |&= \displaystyle \bigg |\Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )'\bigg | = \big |f(t,x(t),x'(t))\big | \\&\le \psi \Big (\Phi \big ({\mathcal {A}}_x(t)\big )\Big )\cdot \Big (l(t)+\mu (t)\,(x'(t))^{\frac{q-1}{q}}\Big ) \qquad \big (\hbox { a.e. on}\ (t_1,t_2)\big ). \end{aligned}$$

In particular, by exploiting this inequality, we obtain:

$$\begin{aligned} \int _{\Phi (N)}^{\Phi (L_M)}\frac{1}{\psi (s)}\,\mathrm {d}s&= \int _{\Phi ({\mathcal {A}}_x(t_1))}^{\Phi ({\mathcal {A}}_x(t_2))} \frac{1}{\psi (s)}\,\mathrm {d}s \\&\le \int _{t_1}^{t_2}\frac{\big (\Phi \big ({\mathcal {A}}_x(t)\big )\big )'}{\psi \big (\Phi \big ({\mathcal {A}}_x(t)\big )\big )}\,\mathrm {d}t \le \int _{t_1}^{t_2} \Big (l(t)+\mu (t)\,(x'(t))^{\frac{q-1}{q}}\Big )\,\mathrm {d}t \\&\quad \big (\text {by H}{\ddot{\mathrm{o}}}\text {lder}'\text {s inequality}\big ) \\&\le \Vert l\Vert _{L^1} + \Vert \mu \Vert _{L^q}\cdot \bigg (\int _{t_1}^{t_2} x'(t)\,\mathrm {d}t\bigg )^{\frac{q-1}{q}} \\&\le \Vert l\Vert _{L^1} + \Vert \mu \Vert _{L^q}\cdot \big (x(t_2)-x(t_1)\big )^{\frac{q-1}{q}} \\&\quad \big (\text {by statement (ii)}\big ) \\&\le \Vert l\Vert _{L^1} + \Vert \mu \Vert _{L^q}\cdot (2M)^{\frac{q-1}{q}}. \end{aligned}$$

This is in contradiction with the choice of \(L_M\) (see (3.10)), and hence, \({\mathcal {A}}_x \le L_M\) on I. Analogously, one can prove that \({\mathcal {A}}_x\ge -L_M\) on I and statement (iii) is completely proved.

(iv)  From statement (iii) and assumption (A2), we immediately infer that:

$$\begin{aligned} |x'(t)|\le \frac{L_M}{a(t,x(t))} \le \frac{L_M}{h(t)}\le \gamma _M(t) \quad \text {for almost every }t\in I, \end{aligned}$$

and the proof is finally complete. \(\square \)

By combining Propositions 3.6 and 3.7, we can prove Theorem 3.5.

Proof of Theorem 3.5

First of all, by Proposition 3.6, there exists (at least) one solution \(x\in W^{1,p}(I)\) of the “truncated” Dirichlet problem (3.11); moreover, by statements (i) and (iv) of Proposition 3.7 (and the very definitions of the operators \({\mathcal {T}}\) and \({\mathcal {D}}\)), for almost every \(t\in I\), we obtain:

$$\begin{aligned} \bigg (\Phi \Big (a\big (t,x(t)\big )\,x'(t)\Big )\bigg )'&= \bigg (\Phi \Big (a\big (t,{\mathcal {T}}(x)(t)\big )\,x'(t)\Big )\bigg )'\\&= f^*\Big (t, x(t), {\mathcal {D}}\big ({\mathcal {T}}(x)'(t)\big )\Big ) = f(t,x(t),x'(t)). \end{aligned}$$

Thus, x is actually a solution of the Dirichlet problem (3.1).

To complete the proof of the theorem, we show that x satisfies (3.6) and (3.7).

As for (3.6), it is precisely statement (i) of Proposition 3.7; estimate (3.7), instead, follows from statements (ii) and (iii) of the same proposition. \(\square \)

Some examples We close the section with a few illustrative examples, in which we consider a generic function a(t, x) satisfying assumption (A2). We explicitly point out that (A2) is verified, e.g., in the following special cases:

  1. (1)

    when a(t, x) has a product structure:

    $$\begin{aligned} a(t,x)=\lambda (t)\cdot b(x), \end{aligned}$$

    where \(\lambda :I\rightarrow {\mathbb {R}}\) is a continuous non-negative function on I, such that \(1/\lambda \) is in \(L^p(I)\) (for some \(p> 1\)) and \(b\in C({\mathbb {R}})\) is such that \(\inf _{\mathbb {R}}b > 0\);

  2. (2)

    when a(t, x) is a sum:

    $$\begin{aligned} a(t,x)=h(t)+b(x), \end{aligned}$$

    where \(h:I\rightarrow {\mathbb {R}}\) is a continuous non-negative function on I, such that 1/h is in \(L^p(I)\) (for some \(p> 1\)) and \(b\in C({\mathbb {R}},{\mathbb {R}})\) is non-negative.

In the next Example 3.8, the growth of the right-hand side f with respect to the variable y is linear, and this allows the choice \(\psi \equiv 1\) in the Wintner–Nagumo condition (3.5). Thus, condition (3.5) does not require any relation among the differential operator \(\Phi \), the function a appearing inside \(\Phi \), and f.

Example 3.8

Let us consider the Dirichlet problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = \sigma (t)(x(t)+ \rho (t)) + g(x(t))\,x'(t) \\ x(0) = \nu _1,\,\,x(T) = \nu _2, \end{array}\right. } \end{aligned}$$
(3.17)

where \(\Phi ,\,a,\,\sigma ,\,\rho \) and g satisfy the following assumptions:

\((\star )\):

  \(\Phi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a generic strictly increasing homeomorphism;

\((\star )\):

  \(a\in C(I\times {\mathbb {R}},{\mathbb {R}})\) satisfies assumption (A2);

\((\star )\):

  \(\sigma \in L^1(I)\) and \(\sigma \ge 0\) a.e. on I;

\((\star )\):

  \(\rho \in C(I)\) and \(g\in C({\mathbb {R}},{\mathbb {R}})\) are generic.

We aim to show that our Theorem 3.5 can be applied to problem (3.17). To this end, we consider the function f defined as follows:

$$\begin{aligned} f:I\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}, \qquad f(t,x,y):= \sigma (t)(x+\rho (t))+ g(x)y. \end{aligned}$$

Obviously, f is a Carathéodory function; moreover, it is very easy to recognize that f satisfies assumption (A2)\('\). Indeed, let \(R > 0\) be arbitrarily fixed and let \(\gamma \) be a non-negative function in \(L^{p}(I)\); setting:

$$\begin{aligned} M_R := \max _{[-R,R]}|g|, \end{aligned}$$

we then have:

$$\begin{aligned} \big |f(t,x,y(t))\big | \le \sigma (t)\,\big (R+|\rho (t)|\big ) + M_R\cdot \gamma (t)=: h_{R,\gamma }(t), \end{aligned}$$

for every \(x\in {\mathbb {R}}\) with \(|x|\le R\) and every \(y\in L^p(I)\) satisfying \(|y(t)|\le \gamma (t)\) for almost every \(t\in I\). Since the function \(h_{R,\gamma }\) is non-negative and belongs to \(L^1(I)\) (by the assumptions on \(\sigma ,\,\rho \) and \(\gamma \)), we conclude that f fulfills (3.3) in assumption (A2)\('\), as claimed.
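As a quick numerical sanity check of this bound (with purely illustrative choices of \(\sigma ,\,\rho \) and g, not data from the paper), one can sample random admissible triples (t, x, y) and verify that \(h_{R,\gamma }\) dominates |f|:

```python
import random

sigma = lambda t: t ** 2          # illustrative: sigma >= 0, sigma in L^1(0,1)
rho   = lambda t: 1.0 - t         # illustrative: rho in C(I)
g     = lambda x: x ** 3          # illustrative: g in C(R)
f     = lambda t, x, y: sigma(t) * (x + rho(t)) + g(x) * y

R = 2.0
gamma = lambda t: 3.0 + t         # any non-negative majorant gamma in L^p(I)
M_R = max(abs(g(-R)), abs(g(R)))  # = max_{[-R,R]} |g|, since this g is monotone
h_R_gamma = lambda t: sigma(t) * (R + abs(rho(t))) + M_R * gamma(t)

random.seed(0)
for _ in range(1000):
    t = random.random()
    x = random.uniform(-R, R)
    y = random.uniform(-gamma(t), gamma(t))
    # the pointwise estimate |f(t,x,y)| <= h_{R,gamma}(t) from (A2')
    assert abs(f(t, x, y)) <= h_R_gamma(t) + 1e-12
```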

We now observe that, setting \(N := \max _{I} |\rho |\), the constant functions:

$$\begin{aligned} \alpha (t) := -N \qquad \beta (t) := N \qquad \big (\hbox { for}\ t\in I\big ) \end{aligned}$$

are, respectively, a lower and an upper solution of (3.2), such that \(\alpha \le \beta \) on I; hence, assumption (A1)\('\) is satisfied. Furthermore, since we have:

$$\begin{aligned} |f(t,x,y)|\,\le \,(2 N)\,\sigma (t)+ \Big (\max _{x\in [-N,N]}|g(x)|\Big )\cdot |y| \end{aligned}$$

for a.e. \(t\in I\), every \(|x|\le N\), and every \(y\in {\mathbb {R}}\), we conclude that f also satisfies assumption (A3)\('\) with the choice (here, \(M_N := \max _{[-N,N]}|g|\)):

$$\begin{aligned} H := 1, \quad \psi \equiv 1, \quad l(t) := 2 N\,\sigma (t), \quad \mu (t):= M_N \quad \text {and} \quad q=\infty . \end{aligned}$$

We are then entitled to apply Theorem 3.5, which ensures that there exists (at least) one solution of problem (3.17) for every fixed \(\nu _1,\,\nu _2\in [-N,N]\).
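Theorem 3.5 is a pure existence statement, but problems of the form (3.17) are also easy to approximate numerically. The sketch below is not part of the proof: it is a standard shooting method for the equivalent first-order system \(v = \Phi (a(t,x)\,x')\), \(x' = \Phi ^{-1}(v)/a(t,x)\), \(v' = f(t,x,x')\), run on illustrative data (\(\Phi =\mathrm {id}\), i.e., \(r=2\); \(a(t,x)=1+t\); \(\sigma \equiv 1\), \(\rho \equiv 0\), \(g\equiv 0\)). The bisection step tacitly assumes that the shooting map \(v_0\mapsto x(T)\) is increasing, which does hold for this linear example:

```python
def solve_dirichlet(Phi_inv, a, f, T, nu1, nu2, v_lo=-50.0, v_hi=50.0,
                    n=1000, iters=60):
    """Shooting method for (Phi(a(t,x) x'))' = f(t,x,x'), x(0)=nu1, x(T)=nu2."""
    dt = T / n

    def shoot(v0):
        # integrate x' = Phi_inv(v)/a(t,x), v' = f(t,x,x') with classical RK4
        t, x, v = 0.0, nu1, v0

        def rhs(t, x, v):
            xp = Phi_inv(v) / a(t, x)
            return xp, f(t, x, xp)

        for _ in range(n):
            k1x, k1v = rhs(t, x, v)
            k2x, k2v = rhs(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
            k3x, k3v = rhs(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
            k4x, k4v = rhs(t + dt, x + dt*k3x, v + dt*k3v)
            x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
            v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
            t += dt
        return x

    # bisection on v(0) = Phi(a(0,x(0)) x'(0)); assumes v0 -> x(T) is increasing
    for _ in range(iters):
        v_mid = 0.5 * (v_lo + v_hi)
        if shoot(v_mid) < nu2:
            v_lo = v_mid
        else:
            v_hi = v_mid
    return shoot(0.5 * (v_lo + v_hi))

# Illustrative data: ((1 + t) x')' = x,  x(0) = 0,  x(1) = 1.
x_T = solve_dirichlet(Phi_inv=lambda s: s, a=lambda t, x: 1.0 + t,
                      f=lambda t, x, y: x, T=1.0, nu1=0.0, nu2=1.0)
```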

In the next Example 3.9, we give an application of Theorem 3.5 for a rather general right-hand side, with possible superlinear growth with respect to \(x'\).

Example 3.9

Let us consider the following Dirichlet problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \Big (\Phi _r\big (a(t,x(t))\,x'(t)\big )\Big )' = \sigma (t)\cdot g(x(t))\cdot |x'(t)|^{\delta } \\ x(0) = \nu _1,\,\,x(T) = \nu _2, \end{array}\right. } \end{aligned}$$
(3.18)

where \(\Phi _r,\,a,\,\sigma ,\,g\) and the exponent \(\delta \) satisfy the following assumptions:

\((\star )\):

  \(\Phi _r:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is the standard r-Laplacian, that is:

$$\begin{aligned} \Phi _r(\xi ):= |\xi |^{r-2}\cdot \xi \qquad \big (\hbox {for a suitable}\ r > 1\big ); \end{aligned}$$
\((\star )\):

  \(a\in C(I\times {\mathbb {R}},{\mathbb {R}})\) and there exists \(h\in C(I,{\mathbb {R}})\), such that:

  1. 1.

    \(h \ge 0\) on I and \(1/h\in L^p(I)\) (for some \(p > 1\));

  2. 2.

    \(a(t,x) \ge h(t)\) for every \(t\in I\) and every \(x\in {\mathbb {R}}\).

\((\star )\):

  \(\sigma \in L^\tau (I)\) for a suitable \(\tau > 1\) satisfying the relation:

$$\begin{aligned} \frac{1}{\tau } + \frac{r-1}{p} < 1; \end{aligned}$$
(3.19)
\((\star )\):

  \(g\in C({\mathbb {R}},{\mathbb {R}})\) is a generic function;

\((\star )\):

  \(\delta \) is a positive real constant satisfying the relation:

$$\begin{aligned} \delta \le 1-\frac{1}{\tau } + (r-1)\,\left( 1-\frac{1}{p}\right) . \end{aligned}$$
(3.20)

We aim to show that our Theorem 3.5 can be applied to problem (3.18). To this end, we consider the function f defined as follows:

$$\begin{aligned} f:I\times {\mathbb {R}}^2\rightarrow {\mathbb {R}}, \qquad f(t,x,y):= \sigma (t)\cdot g(x)\cdot |y|^\delta . \end{aligned}$$

Obviously, f is a Carathéodory function; moreover, it is not difficult to recognize that f satisfies assumption (A2)\('\). Indeed, let \(R > 0\) be arbitrarily fixed and let \(\gamma \) be a non-negative function in \(L^p(I)\); setting

$$\begin{aligned} M_R := \max _{[-R,R]}|g|, \end{aligned}$$

we then have:

$$\begin{aligned} \big |f(t,x,y(t))\big | \le M_R\cdot |\sigma (t)|\cdot (\gamma (t))^{\delta } =: h_{R,\gamma }(t) \end{aligned}$$

for every \(|x|\le R\) and every \(y\in L^p(I)\), such that \(|y(t)|\le \gamma (t)\) a.e. on I. Now, by combining (3.19) with (3.20), we readily see that:

$$\begin{aligned} \delta < \left( 1-\frac{1}{\tau }\right) p; \end{aligned}$$

from this, by Hölder’s inequality (and the assumptions on \(\sigma \) and \(\gamma \)), we infer that \(h_{R,\gamma }\) (which is non-negative on I) belongs to \(L^1(I)\), whence f satisfies (A2)\('\). To prove that also assumptions (A1)\('\) and (A3)\('\) are satisfied, we first notice that, if \(N > 0\) is arbitrary, the constant functions:

$$\begin{aligned} \alpha (t) := -N \qquad \beta (t) := N \qquad \big (\hbox {for}\ t\in I\big ) \end{aligned}$$

are, respectively, a lower and an upper solution of (3.2) such that \(\alpha \le \beta \) on I; hence, assumption (A1)\('\) is fulfilled. Moreover, by (3.20), we have:

$$\begin{aligned} \delta \le (r-1) + \frac{q-1}{q}, \quad \text {where}\,\,q:=\frac{\tau \,p}{p+\tau \,(r-1)} > 1. \end{aligned}$$

From this, setting \(M_N:={\max _{[-N,N]}|g|}\), we obtain:

$$\begin{aligned} \big |f(t,x,y)\big |&\le M_N\cdot |\sigma (t)|\cdot |y|^{\delta } \le M_N\cdot |\sigma (t)|\cdot |y|^{r-1} \cdot |y|^\frac{q-1}{q} \\&= \big |\Phi (a(t,x)y)\big |\cdot \bigg (\frac{M_N\cdot |\sigma (t)|}{(a(t,x))^{r-1}}\bigg )\,|y|^{\frac{q-1}{q}} \\&\quad \big (\hbox { since } a(t,x)\ge h(t) \hbox { for every } (t,x)\in I\times {\mathbb {R}}\big ) \\&\le \big |\Phi (a(t,x)y)\big |\cdot \bigg (\frac{M_N\cdot |\sigma (t)|}{(h(t))^{r-1}}\bigg )\,|y|^{\frac{q-1}{q}} \end{aligned}$$

for a.e. \(t\in I\), every \(x\in {\mathbb {R}}\) with \(|x|\le N\) and every \(y\in {\mathbb {R}}\) satisfying \(|y|\ge 1\). Thus, if we are able to prove that

$$\begin{aligned} t\mapsto \frac{|\sigma (t)|}{(h(t))^{r-1}} \in L^q(I), \end{aligned}$$
(3.21)

we can conclude that f satisfies assumption (A3)\('\) with the choice

$$\begin{aligned} H := 1, \quad \psi (s) := s, \quad l(t) := 0, \quad \mu (t):= \frac{M_N\cdot |\sigma (t)|}{(h(t))^{r-1}} \end{aligned}$$

and q as above. On the other hand, the needed (3.21) is an easy consequence of Hölder’s inequality, assumption (3.19) and of the fact that \(1/h\in L^p(I)\).
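The last two steps are pure exponent bookkeeping: with \(q=\tau p/(p+\tau (r-1))\) one has \(1/q = 1/\tau + (r-1)/p\), so the right-hand side of (3.20) equals \((r-1)+(q-1)/q\), while \(q>1\) is exactly condition (3.19), and the same identity drives the Hölder argument behind (3.21). The following exact-arithmetic check (an illustration, not part of the proof) confirms both facts on a small grid:

```python
from fractions import Fraction as F

def delta_bound(tau, p, r):
    """Right-hand side of (3.20); exact when the arguments are Fractions."""
    return 1 - 1/tau + (r - 1) * (1 - 1/p)

for tau in (F(3, 2), F(2), F(5)):
    for p in (F(2), F(3), F(7, 2)):
        for r in (F(3, 2), F(2), F(3)):
            q = (tau * p) / (p + tau * (r - 1))
            # (3.20) rewritten: delta <= (r - 1) + (q - 1)/q
            assert delta_bound(tau, p, r) == (r - 1) + (q - 1) / q
            # q > 1 exactly when 1/tau + (r-1)/p < 1, i.e. condition (3.19)
            assert (q > 1) == (1/tau + (r - 1)/p < 1)
```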

We are then entitled to apply Theorem 3.5, which ensures the existence of (at least) one solution of the Dirichlet problem (3.18) for every \(\nu _1,\,\nu _2\in {\mathbb {R}}\).

4 General nonlinear boundary conditions

The main aim of this last section is to prove the solvability of general boundary value problems associated with the (possibly singular) differential equation:

$$\begin{aligned} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on }\ I. \end{aligned}$$
(4.1)

(here, \(\Phi ,\,a\) and f satisfy assumptions (A1)-to-(A3) in Sect. 3).

As a particular case, we shall obtain existence results for periodic BVPs and for Sturm–Liouville-type problems (associated with (4.1)).

To be more precise, taking for fixed all the notation introduced so far, we aim to study the following general BVPs (associated with (4.1)):

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on }\ I, \\ g(x(0),x(T),{\mathcal {A}}_x(0),{\mathcal {A}}_x(T)) = 0, \\ x(T) = \rho (x(0)). \end{array}\right. } \end{aligned}$$
(4.2)

Here, \(\rho :{\mathbb {R}}\rightarrow {\mathbb {R}}\) and \(g:{\mathbb {R}}^4\rightarrow {\mathbb {R}}\) satisfy the following general assumptions:

  1. (G1)

    \(\rho \in C({\mathbb {R}},{\mathbb {R}})\) and is increasing on \({\mathbb {R}}\);

  2. (G2)

    \(g\in C({\mathbb {R}}^4,{\mathbb {R}})\) and, for every fixed \(u,v\in {\mathbb {R}}\), it holds that:

\(\mathrm {(G2)}_1\):

\(g(u,v,\cdot ,z)\) is increasing for every fixed \(z\in {\mathbb {R}}\);

\(\mathrm {(G2)}_2\):

\(g(u,v,w,\cdot )\) is decreasing for every fixed \(w\in {\mathbb {R}}\).
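A concrete pair \((\rho ,g)\) fulfilling (G1)–(G2) is the periodic-type choice \(\rho (s)=s\), \(g(u,v,w,z)=w-z\), which encodes the conditions \(x(T)=x(0)\) and \({\mathcal {A}}_x(0)={\mathcal {A}}_x(T)\) (consistently with the periodic BVPs mentioned above). A trivial sanity check of the monotonicity requirements, in Python:

```python
def is_nondecreasing(fun, grid):
    vals = [fun(s) for s in grid]
    return all(a <= b for a, b in zip(vals, vals[1:]))

rho = lambda s: s                  # (G1): continuous and increasing
g = lambda u, v, w, z: w - z       # g = 0 encodes A_x(0) = A_x(T)

grid = [k * 0.1 for k in range(-10, 11)]
assert is_nondecreasing(rho, grid)                       # (G1)
assert is_nondecreasing(lambda w: g(0, 0, w, 0), grid)   # (G2)_1: increasing in w
assert is_nondecreasing(lambda z: -g(0, 0, 0, z), grid)  # (G2)_2: decreasing in z
```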

We now state one of the main existence results of this section.

Theorem 4.1

Let us assume that all the hypotheses of Theorem 3.5 are satisfied, and that g and \(\rho \) fulfill the assumptions (G1)–(G2) introduced above. Moreover, if \(\alpha ,\beta \in W^{1,p}(I)\) are as in assumption (A1\('\)), we suppose that:

$$\begin{aligned} {\left\{ \begin{array}{ll} g(\alpha (0),\alpha (T),{\mathcal {A}}_\alpha (0),{\mathcal {A}}_\alpha (T)) \ge 0, \\ \alpha (T) = \rho (\alpha (0)) \end{array}\right. }\,\, {\left\{ \begin{array}{ll} g(\beta (0),\beta (T),{\mathcal {A}}_\beta (0),{\mathcal {A}}_\beta (T)) \le 0, \\ \beta (T) = \rho (\beta (0)). \end{array}\right. } \end{aligned}$$
(4.3)

Finally, let us assume that the function a satisfies the following condition:

$$\begin{aligned} a(0,x) \ne 0 \quad \hbox {and} \quad a(T,x) \ne 0 \qquad \text {for every }\ x\in {\mathbb {R}}. \end{aligned}$$

Then, the problem (4.2) possesses a solution \(x\in W^{1,p}(I)\), such that:

$$\begin{aligned} \alpha (t)\le x(t)\le \beta (t) \qquad \hbox { for every}\ t\in I. \end{aligned}$$
(4.4)

Furthermore, if \(M > 0\) is any real number, such that \(\sup _I|\alpha |,\,\sup _I|\beta |\le M\) and \(L_M > 0\) is as in Theorem 3.5 (see (3.10)), one has:

$$\begin{aligned} \max _{t\in I}\big |x(t)\big |\le M \quad \text { and } \quad \max _{t\in I}\big |{\mathcal {A}}_x(t)\big | \le L_M. \end{aligned}$$
(4.5)

The basic idea behind the proof of Theorem 4.1, inspired by [3] and already exploited in [15], is to think of the boundary value problem (4.2) as a “superposition” of Dirichlet problems to which our existence result in Theorem 3.5 applies. Following this approach, we first establish a compactness-type result for the solutions of the ODE (4.1).

Proposition 4.2

For every \(n\in {\mathbb {N}}\), let \(x_n\in W^{1,p}(I)\) be a solution of

$$\begin{aligned} \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I. \end{aligned}$$
(4.6)

We assume that, together with (A1)-to-(A3), f satisfies assumption \(\mathrm {(A2')}\) of Theorem 3.5; moreover, we suppose that there exist \(M,L > 0\), such that:

$$\begin{aligned} \sup _I|x_n|\le M \quad \text {and} \quad \sup _I|{\mathcal {A}}_{x_n}| \le L \qquad \hbox { for every}\ n\in {\mathbb {N}}. \end{aligned}$$
(4.7)

It is then possible to find a sub-sequence \(\{x_{n_k}\}_{k\in {\mathbb {N}}}\) of \(\{x_n\}_{n\in {\mathbb {N}}}\) and a solution \(x_0\in W^{1,p}(I)\) of Eq. (4.1) with the following properties:

(1):

\(x_{n_k}(t)\rightarrow x_0(t)\) for every \(t\in I\) as \(k\rightarrow \infty \);

(2):

\({\mathcal {A}}_{x_{n_k}}(t)\rightarrow {\mathcal {A}}_{x_0}(t)\) for every \(t\in I\) as \(k\rightarrow \infty \).

Proof

For every natural n, we set \(z_n := \big (\Phi ({\mathcal {A}}_{x_n})\big )'\). Since \(x_n\) is a solution of (4.6), by (4.7) and the fact that f satisfies \(\mathrm {(A2')}\), we have (see also (3.3)):

$$\begin{aligned} \big |z_n(t)\big | = \big |f(t,x_n(t),x'_n(t))\big | \le h_{M,\gamma }(t) \quad \hbox { for a.e. }\ t\in I, \end{aligned}$$
(4.8)

where \(\gamma = L/h\) and \(h_{M,\gamma }\in L^1(I)\) is the function appearing in assumption \(\mathrm {(A2')}\) and corresponding to M and \(\gamma \). Moreover, again by (4.7), one has:

$$\begin{aligned} |x'_n(t)| \le \frac{L}{h(t)} = \gamma (t) \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$
(4.9)
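Let us note that the estimate (4.9) is a consequence of the lower bound \(a(t,x)\ge h(t) > 0\) for a.e. \(t\in I\) (see assumption (A2)): indeed, since \({\mathcal {A}}_{x_n} = a(\cdot ,x_n)\,x'_n\), by (4.7), we have, for a.e. \(t\in I\):

$$\begin{aligned} |x'_n(t)| = \frac{|{\mathcal {A}}_{x_n}(t)|}{a(t,x_n(t))} \le \frac{L}{h(t)} = \gamma (t). \end{aligned}$$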

As a consequence, both \(\{z_n\}_n\) and \(\{x'_n\}_n\) are uniformly integrable in \(L^1(I)\). Then, by Dunford–Pettis Theorem (see, e.g., [2]), there exist \(v,w\in L^1(I)\), such that, up to a sub-sequence, \(x'_n\rightharpoonup v\) and \(z_n\rightharpoonup w\) in \(L^1(I)\) as \(n\rightarrow \infty \).
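For the reader's convenience, we observe that the uniform integrability follows at once from the domination by fixed \(L^1\)-functions: for every measurable set \(E\subseteq I\), (4.8) and (4.9) give

$$\begin{aligned} \int _E|z_n(t)|\,\mathrm {d}t \le \int _E h_{M,\gamma }(t)\,\mathrm {d}t \quad \text {and} \quad \int _E|x'_n(t)|\,\mathrm {d}t \le \int _E\gamma (t)\,\mathrm {d}t, \end{aligned}$$

and the right-hand sides vanish as \(|E|\rightarrow 0\), uniformly with respect to n, by the absolute continuity of the Lebesgue integral (here, \(\gamma = L/h\in L^1(I)\), since \(1/h\in L^p(I)\subseteq L^1(I)\), I being bounded).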

Now, since the sequence \(\{x_n(0)\}_n\) is bounded in \({\mathbb {R}}\) (again by (4.7)), we can assume that \(x_n(0)\) converges to some \(\nu _0\in {\mathbb {R}}\) as \(n\rightarrow \infty \); from this, recalling that \(x'_n \rightharpoonup v\) in \(L^1(I)\), we get:

$$\begin{aligned} x_n(t) = x_n(0)+\int _0^tx'_n(s)\,\mathrm {d}s \underset{n\rightarrow \infty }{\longrightarrow } \nu _0 + \int _0^t v(s)\,\mathrm {d}s =: x_0(t) \quad \ \forall \,t\in I. \end{aligned}$$
(4.10)

Notice that, by its very definition, \(x_0\) satisfies the following properties:

  1. (a)

      \(x_0\) is absolutely continuous on I and \(x_0' = v \in L^1(I)\);

  2. (b)

\(\sup _I|x_0| \le M\) (this follows also from (4.7)).

Thus, to complete the proof, we need to show that \(x_0\) is a solution of Eq. (4.6) and that \({\mathcal {A}}_{x_n}\rightarrow {\mathcal {A}}_{x_0}\) point-wise on I as \(n\rightarrow \infty \).

First of all, since also the sequence \(\{{\mathcal {A}}_{x_n}(0)\}_n\) is bounded in \({\mathbb {R}}\) (again by (4.7)), we can suppose that \({\mathcal {A}}_{x_n}(0)\rightarrow \nu '_0\) as \(n\rightarrow \infty \) for some \(\nu '_0\in {\mathbb {R}}\); thus, since \(z_n\rightharpoonup w\) in \(L^1(I)\), we have:

$$\begin{aligned} \Phi \big ({\mathcal {A}}_{x_n}(t)\big ) = \Phi \big ({\mathcal {A}}_{x_n}(0)\big ) + \int _0^t z_n(s)\,\mathrm {d}s \underset{n\rightarrow \infty }{\longrightarrow } \Phi (\nu '_0)+\int _0^t w(s)\,\mathrm {d}s. \end{aligned}$$

As a consequence, by the continuity of \(\Phi ^{-1}\), we obtain:

$$\begin{aligned} {\mathcal {A}}_{x_n}(t) \underset{n\rightarrow \infty }{\longrightarrow } \Phi ^{-1}\bigg (\Phi (\nu '_0) + \int _0^t w(s)\,\mathrm {d}s\bigg ) =: {\mathcal {U}}(t) \quad \hbox { for every}\ t\in I.\qquad \end{aligned}$$
(4.11)

Notice that, by definition, \({\mathcal {U}}\in C(I,{\mathbb {R}})\) and it satisfies:

\(\mathrm {(a)}_1\):

  \(\Phi \circ {\mathcal {U}}\) is absolutely continuous on I and \((\Phi \circ {\mathcal {U}})' = w\in L^1(I)\);

\(\mathrm {(b)}_1\):

\(\sup _I|{\mathcal {U}}| \le L\) (this follows also from (4.7)).

Now, since a is continuous on \(I\times {\mathbb {R}}\), we derive from (4.10) that \(a(t,x_n(t))\) converges to \(a(t,x_0(t))\) for every \(t\in I\) as \(n\rightarrow \infty \); thus, the above (4.11) (together with the fact that \(a(t,x)\ge h(t) > 0\) for a.e. \(t\in I\)) gives:

$$\begin{aligned} x'_n(t) \underset{n\rightarrow \infty }{\longrightarrow } \frac{1}{a(t,x_0(t))}\,{\mathcal {U}}(t) \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$
(4.12)

Taking into account (4.9) and the fact that \(1/h\in L^p(I)\) (see assumption (A2)), it is easy to recognize that:

$$\begin{aligned} x'_n\rightarrow \frac{{\mathcal {U}}}{a(\cdot ,x_0(\cdot ))}\qquad \hbox { also in}\ L^1(I); \end{aligned}$$
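In fact, by (4.9), the sequence \(\{x'_n\}_n\) is dominated by the fixed function \(\gamma = L/h\in L^1(I)\); hence, by (4.12) and the dominated convergence theorem, we get

$$\begin{aligned} \int _I\bigg |x'_n(t) - \frac{{\mathcal {U}}(t)}{a(t,x_0(t))}\bigg |\,\mathrm {d}t \underset{n\rightarrow \infty }{\longrightarrow } 0. \end{aligned}$$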

on the other hand, since we already know that \(x'_n\rightharpoonup v\) in \(L^1(I)\) as \(n\rightarrow \infty \), we necessarily have:

$$\begin{aligned} v(t) = \frac{1}{a(t,x_0(t))}\,{\mathcal {U}}(t) \quad \hbox { a.e. on}\ I. \end{aligned}$$
(4.13)

From this, recalling that \(v = x_0'\) (see (a)), we infer that:

\((\star )\):

  \(x_0' = v\in L^p(I)\), whence \(x_0\in W^{1,p}(I)\) (as \(|{{\mathcal {U}}}/{a(\cdot ,x_0(\cdot ))}|\le L/h\));

\((\star )\):

  \(a(t,x_0)\,x'_0 = {\mathcal {U}}\) a.e. on I;

\((\star )\):

\(\Phi \big (a(t,x_0)\,x_0'\big ) = \Phi \circ {\mathcal {U}} \in W^{1,1}(I)\) and (see \(\mathrm {(a)_1}\)):

$$\begin{aligned} \big (\Phi (a(t,x_0)\,x_0')\big )' = w. \end{aligned}$$

We now turn to prove that \(x_0\) solves the ODE (4.6). To this end, we observe that, by (4.12) and (4.13), we have \(x_n'(t)\rightarrow v(t) = x_0'(t)\) for a.e. \(t\in I\); as a consequence, since \(x_n\) is a solution of (4.6) for every n and f is a Carathéodory function (see (A3)), we obtain (recall that \(x_n\rightarrow x_0\) point-wise on I):

$$\begin{aligned} z_n = \Big (\Phi \big ({\mathcal {A}}_{x_n}\big )\Big )' = f(t,x_n(t),x_n'(t)) \underset{n\rightarrow \infty }{\longrightarrow } f(t,x_0(t),x'_0(t)) \quad \hbox { for a.e. }\ t\in I. \end{aligned}$$

On the other hand, by (4.8) (and dominated convergence), we have that \(z_n\rightarrow f(t,x_0(t),x_0'(t))\) also in \(L^1(I)\); since we already know that \(z_n\rightharpoonup w\) in \(L^1(I)\), we conclude that:

$$\begin{aligned} \big (\Phi (a(t,x_0(t))\,x_0'(t))\big )' = w(t) = f(t,x_0(t),x_0'(t)) \quad \hbox { for a.e. }\ t\in I, \end{aligned}$$

that is, \(x_0\) is a solution of (4.6). Finally, since \({\mathcal {U}}\) is a continuous function on I, such that \({\mathcal {U}} = a(t,x_0)\,x_0'\) a.e. on I, we have \({\mathcal {U}} = {\mathcal {A}}_{x_0}\) on I and, by (4.11):

$$\begin{aligned} {\mathcal {A}}_{x_n}(t) \underset{n\rightarrow \infty }{\longrightarrow } {\mathcal {A}}_{x_0}(t) \quad \hbox { for every}\ t\in I. \end{aligned}$$

This ends the proof. \(\square \)

We also need the following technical lemma.

Lemma 4.3

Let \(\alpha ,\,\beta \in W^{1,p}(I)\) be, respectively, a lower and an upper solution of Eq. (4.1), such that \(\alpha \le \beta \). Moreover, let us assume that:

$$\begin{aligned} a(0,x) \ne 0 \quad \hbox { and} \quad a(T,x) \ne 0 \qquad \text {for every}\ x\in {\mathbb {R}}. \end{aligned}$$

Then, the following facts hold true:

(i):

if \(\alpha (0) = \beta (0)\), then \({\mathcal {A}}_\alpha (0) \le {\mathcal {A}}_\beta (0)\);

(ii):

if \(\alpha (T) = \beta (T)\), then \({\mathcal {A}}_\alpha (T)\ge {\mathcal {A}}_\beta (T)\).

Proof

We only prove statement (i), since (ii) is analogous.

First of all, since both \(a(0,\alpha (0))\) and \(a(0,\beta (0))\) are different from 0 (by assumption), it is possible to find \(\delta > 0\), such that, for a.e. \(t\in [0,\delta ]\), we have:

$$\begin{aligned} \alpha '(t) = \frac{{\mathcal {A}}_{\alpha }(t)}{a(t,\alpha (t))}=:u_1(t) \quad \text {and}\quad \beta '(t) = \frac{{\mathcal {A}}_{\beta }(t)}{a(t,\beta (t))} =: u_2(t); \end{aligned}$$

moreover, both \(u_1\) and \(u_2\) are continuous on \([0,\delta ]\). Let us now assume, by contradiction, that \({\mathcal {A}}_\alpha (0) > {\mathcal {A}}_\beta (0)\). Since, by assumption, \(\alpha (0) = \beta (0)\) (so that \(a(0,\alpha (0)) = a(0,\beta (0)) > 0\), a being non-negative), we have \(u_1(0) > u_2(0)\); hence, by continuity, there exists \(\delta ' < \delta \), such that:

$$\begin{aligned} \alpha '(t) = u_1(t) > u_2(t) = \beta '(t) \quad \hbox { for a.e. }\ t\in [0,\delta ']; \end{aligned}$$

thus, by integrating this inequality on \([0,\delta ']\), we get:

$$\begin{aligned} \alpha (t)&= \alpha (0) + \int _0^t\alpha '(s)\,\mathrm {d}s = \beta (0) + \int _0^t\alpha '(s)\,\mathrm {d}s \\&> \beta (0) + \int _0^t\beta '(s)\,\mathrm {d}s = \beta (t) \qquad \hbox { for every}\ t\in (0,\delta '], \end{aligned}$$

which contradicts the fact that \(\alpha \le \beta \) on I. This ends the proof. \(\square \)

We can now prove Theorem 4.1.

Proof of Theorem 4.1

Let \(\nu \in [\alpha (0),\beta (0)]\) be fixed. Since, by assumption, \(\rho \) is increasing on \({\mathbb {R}}\) and \(\alpha ,\beta \) satisfy (4.3), we have \(\rho (\nu )\in [\alpha (T),\beta (T)]\); as a consequence, by the existence result in Theorem 3.5, the Dirichlet problem

$$\begin{aligned} (\mathrm {D}_{\nu }) \qquad \quad {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ x(0) = \nu ,\,\,x(T) = \rho (\nu ) \end{array}\right. } \end{aligned}$$

admits a solution \(x_\nu \), such that \(\alpha \le x_\nu \le \beta \) on I. Moreover, if \(M > 0\) is such that \(\sup _I|\alpha |,\,\sup _I|\beta |\le M\) and \(L_M > 0\) is as in Theorem 3.5, we have:

$$\begin{aligned} (*) \qquad \quad \sup _{t\in I}|x_\nu (t)|\le M\text { and } \sup _{t\in I}|{\mathcal {A}}_{x_\nu }(t)|\le L_M. \end{aligned}$$

We then consider the following set:

$$\begin{aligned}&V := \Big \{\nu \in [\alpha (0),\beta (0)]\,:\,\,\exists \text { a solution } x_\nu \in W^{1,p}(I)\text { of }(\mathrm {D}_\nu )\text { s.t. }\alpha \le x_\nu \le \beta , \\&\qquad \qquad \quad x_\nu \text { satisfies }(*)\text { and } g(x_\nu (0),x_\nu (T),{\mathcal {A}}_{x_\nu }(0),{\mathcal {A}}_{x_\nu }(T))\ge 0\Big \}. \end{aligned}$$

Claim 1

We have \(\nu := \alpha (0)\in V\). In fact, by Theorem 3.5, there exists a solution \(x_{\nu }\in W^{1,p}(I)\) of \((\mathrm {D}_{\nu })\), such that:

$$\begin{aligned} \alpha \le x_{\nu }\le \beta \quad \hbox { on}\ I, \end{aligned}$$

and satisfying \((*)\); in particular, we have:

$$\begin{aligned} x_{\nu }(0) = \nu = \alpha (0). \end{aligned}$$

From this, by applying Lemma 4.3 with \(x_\nu \) in place of \(\beta \) (notice that, \(x_\nu \) being a solution of \((\mathrm {D}_{\nu })\), it is also an upper solution of (4.1)), we get:

$$\begin{aligned} {\mathcal {A}}_\alpha (0)\le {\mathcal {A}}_{x_\nu }(0). \end{aligned}$$

Analogously, since we have (recall that \(\alpha \) satisfies (4.3)):

$$\begin{aligned} x_{\nu }(T) = \rho (\nu ) = \rho (\alpha (0)) = \alpha (T), \end{aligned}$$

again by Lemma 4.3 (with \(\beta \) replaced by \(x_{\nu }\)), we have:

$$\begin{aligned} {\mathcal {A}}_{\alpha }(T)\ge {\mathcal {A}}_{x_{\nu }}(T). \end{aligned}$$

Thus, by (4.3) and assumption (G2), we obtain:

$$\begin{aligned} g(x_\nu (0),x_\nu (T),{\mathcal {A}}_{x_\nu }(0),{\mathcal {A}}_{x_\nu }(T)) \ge g(\alpha (0),\alpha (T),{\mathcal {A}}_\alpha (0),{\mathcal {A}}_\alpha (T))\ge 0. \end{aligned}$$

This proves that \(\nu \in V\), as claimed. In particular, \(V\ne \varnothing \).

Claim 2

If \({\overline{\nu }} := \sup V\), we have \({\overline{\nu }}\in V\). In fact, if \({\overline{\nu }} = \alpha (0)\), we have already proved in Claim 1 that \({\overline{\nu }}\in V\); if, instead, \({\overline{\nu }} > \alpha (0)\), we choose a sequence \(\{\nu _n\}_n\subseteq V\) such that \(\nu _n\rightarrow {\overline{\nu }}\) as \(n\rightarrow \infty \) and \(\nu _n\le {\overline{\nu }}\) for every n. Since \(\{\nu _n\}_n\subseteq V\), for every n, there exists a solution \(x_n\in W^{1,p}(I)\) of \(\mathrm {(D_{\nu _n})}\), such that:

  1. (a)

      \(\alpha \le x_n\le \beta \) on I;

  2. (b)

      \(x_n\) satisfies \((*)\);

  3. (c)

      \(g(x_n(0),x_n(T),{\mathcal {A}}_{x_n}(0),{\mathcal {A}}_{x_n}(T))\ge 0\).

On account of (b), we can apply Proposition 4.2, which provides us with a solution \(x_0\in W^{1,p}(I)\) of (4.1), such that (up to a sub-sequence):

$$\begin{aligned} x_n(t)\rightarrow x_0(t) \quad \text {and} \quad {\mathcal {A}}_{x_n}(t)\rightarrow {\mathcal {A}}_{x_0}(t) \qquad \hbox { for every}\ t\in I. \end{aligned}$$

Now, since \(\nu _n\rightarrow {\overline{\nu }}\) and \(\rho \) is continuous, it is readily seen that \(x_0\) is a solution of \(\mathrm {(D_{{\overline{\nu }}})}\); moreover, since \(x_n\) satisfies \((*)\) and \(\alpha \le x_n\le \beta \) on I for every \(n\in {\mathbb {N}}\), the same is true of \(x_0\). Finally, by (c) and the continuity of the function g on the whole of \({\mathbb {R}}^4\) [see assumption (G2)], we conclude that:

$$\begin{aligned} \begin{aligned}&g(x_0(0),x_0(T),{\mathcal {A}}_{x_0}(0),{\mathcal {A}}_{x_0}(T)) \\&\quad = \lim _{n\rightarrow \infty }g(x_n(0),x_n(T),{\mathcal {A}}_{x_n}(0),{\mathcal {A}}_{x_n}(T)) \ge 0, \end{aligned} \end{aligned}$$

and this proves that \({\overline{\nu }}\in V\).

With Claims 1 and 2 at hand, we now prove the existence of a solution for (4.2). In fact, let \({\overline{\nu }} = \sup V \in [\alpha (0),\beta (0)]\) and let \(x_{{\overline{\nu }}}\in W^{1,p}(I)\) be a corresponding solution of \(\mathrm {(D_{{\overline{\nu }}})}\) satisfying \((*)\) and such that:

  1. (i)

    \(\alpha \le x_{{\overline{\nu }}}\le \beta \) on I;

  2. (ii)

    \(g(x_{{\overline{\nu }}}(0),x_{{\overline{\nu }}}(T), {\mathcal {A}}_{x_{{\overline{\nu }}}}(0),{\mathcal {A}}_{x_{{\overline{\nu }}}}(T))\ge 0\).

If \({\overline{\nu }} = \beta (0)\), we have \(x_{{\overline{\nu }}}(0) = \beta (0)\) and (by (4.3)):

$$\begin{aligned} x_{{\overline{\nu }}}(T) = \rho (\beta (0)) = \beta (T); \end{aligned}$$

on the other hand, since we also know that \(x_{{\overline{\nu }}}\le \beta \) on I, from Lemma 4.3 (with \(\alpha \) replaced by \(x_{{\overline{\nu }}}\), which is a solution of \(\mathrm {(D_{{\overline{\nu }}})}\)), we infer that:

$$\begin{aligned} {\mathcal {A}}_{x_{{\overline{\nu }}}}(0)\le {\mathcal {A}}_\beta (0)\text { and } {\mathcal {A}}_{x_{{\overline{\nu }}}}(T)\ge {\mathcal {A}}_\beta (T). \end{aligned}$$

Hence, by (ii), the monotonicity of g [see (G2)], and (4.3), we obtain:

$$\begin{aligned}&0 \le g(x_{{\overline{\nu }}}(0),x_{{\overline{\nu }}}(T), {\mathcal {A}}_{x_{{\overline{\nu }}}}(0),{\mathcal {A}}_{x_{{\overline{\nu }}}}(T)) = g(\beta (0),\beta (T),{\mathcal {A}}_{x_{{\overline{\nu }}}}(0), {\mathcal {A}}_{x_{{\overline{\nu }}}}(T)) \\&\qquad \qquad \le g(\beta (0),\beta (T),{\mathcal {A}}_{\beta }(0),{\mathcal {A}}_{\beta }(T)) \le 0, \end{aligned}$$

and this proves that \(x_{{\overline{\nu }}}\) is a solution of (4.2) satisfying (4.4) and (4.5).

If, instead, \({\overline{\nu }} < \beta (0)\), we choose a sequence \(\{\mu _m\}_m\subseteq [\alpha (0),\beta (0)]\), such that \(\mu _m\rightarrow {\overline{\nu }}\) as \(m\rightarrow \infty \) and \(\mu _m > {\overline{\nu }}\) for every m. Since \(x_{{\overline{\nu }}}\) is a solution of \(\mathrm {(D_{{\overline{\nu }}})}\) satisfying (i)–(ii) above, we can think of \(x_{{\overline{\nu }}}\) and \(\beta \) as, respectively, a lower and an upper solution of (4.1) satisfying (A1\('\)) in Theorem 3.5; moreover, by the very choice of \(M > 0\), we also have that:

$$\begin{aligned} \sup _{t\in I}|x_{{\overline{\nu }}}(t)|,\,\,\sup _{t\in I}|\beta (t)|\le M. \end{aligned}$$

Hence, for every m, there exists a solution \(u_m\) of \(\mathrm {(D_{\mu _m})}\), such that:

  •   \(\alpha \le x_{{\overline{\nu }}} \le u_m\le \beta \) on I;

  •   \(\sup _I|u_m|\le M\) and \( \sup _I|{\mathcal {A}}_{u_m}| \le L_M\).

In particular, \(u_m\) satisfies \((*)\) for any m. We can then apply Proposition 4.2, which provides us with a solution \(u_0\) of (4.1), such that (up to a sub-sequence):

$$\begin{aligned} u_m(t)\rightarrow u_0(t) \quad \text {and} \quad {\mathcal {A}}_{u_m}(t)\rightarrow {\mathcal {A}}_{u_0}(t) \qquad \hbox { for every}\ t\in I. \end{aligned}$$

Since \(\mu _m\rightarrow {\overline{\nu }}\) and \(\rho \) is continuous, \(u_0\) solves \(\mathrm {(D_{{\overline{\nu }}})}\); hence:

$$\begin{aligned} u_0(T) = \rho (u_0(0)). \end{aligned}$$

We now observe that, since \(\mu _m > {\overline{\nu }} = \sup V\), then \(\mu _m\notin V\); as a consequence, since \(\alpha \le u_m\le \beta \) on I and \(u_m\) satisfies \((*)\) for every m, we necessarily have:

$$\begin{aligned} g(u_m(0),u_m(T),{\mathcal {A}}_{u_m}(0),{\mathcal {A}}_{u_m}(T)) < 0. \end{aligned}$$

From this, by the continuity of g (see assumption (G2)), we get:

$$\begin{aligned} g(u_0(0),u_0(T),{\mathcal {A}}_{u_0}(0),{\mathcal {A}}_{u_0}(T)) \le 0. \end{aligned}$$
(4.14)

On the other hand, since both \(x_{{\overline{\nu }}}\) and \(u_0\) solve \(\mathrm {(D_{{\overline{\nu }}})}\), we have:

$$\begin{aligned} u_0(0) = x_{{\overline{\nu }}}(0) = {\overline{\nu }}\text { and } u_0(T) = x_{{\overline{\nu }}}(T) = \rho ({\overline{\nu }}); \end{aligned}$$

moreover, since \(u_m\ge x_{{\overline{\nu }}}\) on I for every m, the same is true of \(u_0\). From this, by exploiting once again Lemma 4.3 (with \(\alpha = x_{{\overline{\nu }}}\) and \(\beta = u_0\), which are solutions of \(\mathrm {(D_{{\overline{\nu }}})}\)), we infer that:

$$\begin{aligned} {\mathcal {A}}_{u_0}(0)\ge {\mathcal {A}}_{x_{{\overline{\nu }}}}(0) \quad \text {and} \quad {\mathcal {A}}_{u_0}(T)\le {\mathcal {A}}_{x_{{\overline{\nu }}}}(T). \end{aligned}$$

By (4.14), the monotonicity of g [see (G2)], and (ii) above, we then obtain:

$$\begin{aligned}&0 \ge g(u_0(0),u_0(T),{\mathcal {A}}_{u_0}(0),{\mathcal {A}}_{u_0}(T)) = g(x_{{\overline{\nu }}}(0),x_{{\overline{\nu }}}(T), {\mathcal {A}}_{u_0}(0),{\mathcal {A}}_{u_0}(T)) \\&\qquad \qquad \ge g(x_{{\overline{\nu }}}(0),x_{{\overline{\nu }}}(T), {\mathcal {A}}_{x_{{\overline{\nu }}}}(0), {\mathcal {A}}_{x_{{\overline{\nu }}}}(T)) \ge 0, \end{aligned}$$

and this shows that \(u_0\) solves (4.2). Finally, since \(\alpha \le u_m\le \beta \) on I and \(u_m\) satisfies \((*)\) for every m, we conclude that \(u_0\) satisfies (4.4)–(4.5). \(\square \)

As a particular case of Theorem 4.1, we have the following result.

Corollary 4.4

Let us assume that all the hypotheses of Theorem 3.5 are satisfied; moreover, if \(\alpha ,\beta \in W^{1,p}(I)\) are as in assumption (A1\('\)), we suppose that the following inequalities hold:

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {A}}_\alpha (0) \ge {\mathcal {A}}_\alpha (T), \\ \alpha (T) = \alpha (0), \end{array}\right. } \quad {\left\{ \begin{array}{ll} {\mathcal {A}}_\beta (0) \le {\mathcal {A}}_\beta (T), \\ \beta (T) = \beta (0). \end{array}\right. } \end{aligned}$$

Finally, let us assume that the function a satisfies the condition:

$$\begin{aligned} a(0,x) \ne 0 \quad \hbox { and} \quad a(T,x) \ne 0 \qquad \text {for every}\ x\in {\mathbb {R}}. \end{aligned}$$

Then, there exists (at least) one solution \(x\in W^{1,p}(I)\) of:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ {\mathcal {A}}_x(0) = {\mathcal {A}}_x(T), \\ x(0) = x(T). \end{array}\right. } \end{aligned}$$

Proof

It is a straightforward consequence of Theorem 4.1 applied to:

$$\begin{aligned} \rho (r) = r \quad \text {and} \quad g(u,v,w,z) = w-z, \end{aligned}$$

which trivially satisfy assumptions (G1) and (G2). This ends the proof. \(\square \)
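Note that, with these choices of \(\rho \) and g, the boundary conditions in (4.2) (namely, \(g(x(0),x(T),{\mathcal {A}}_x(0),{\mathcal {A}}_x(T)) = 0\) and \(x(T) = \rho (x(0))\)) read precisely:

$$\begin{aligned} {\mathcal {A}}_x(0)-{\mathcal {A}}_x(T) = 0 \quad \text {and} \quad x(T) = x(0), \end{aligned}$$

while the assumptions (4.3) on \(\alpha \) and \(\beta \) reduce exactly to the inequalities required in the statement of the corollary.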

We conclude the present section by turning our attention to Sturm–Liouville-type and Neumann-type problems associated with the ODE (4.1).

To be more precise, we consider the following boundary value problems:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ p(x(0),{\mathcal {A}}_x(0)) = 0,\,\,q(x(T),{\mathcal {A}}_x(T)) = 0. \end{array}\right. } \end{aligned}$$
(4.15)

Here, the functions \(p,q:{\mathbb {R}}^2\longrightarrow {\mathbb {R}}\) satisfy the following general assumptions:

  1. (S1)

    \(p\in C({\mathbb {R}}^2,{\mathbb {R}})\) and, for every \(s\in {\mathbb {R}}\), the map \(p(s,\cdot )\) is increasing on \({\mathbb {R}}\);

  2. (S2)

    \(q\in C({\mathbb {R}}^2,{\mathbb {R}})\) and, for every \(s\in {\mathbb {R}}\), the map \(q(s,\cdot )\) is decreasing on \({\mathbb {R}}\).

The following theorem is the second main result of this section.

Theorem 4.5

Let us assume that all the hypotheses of Theorem 3.5 are satisfied, and that the functions p and q fulfill the assumptions (S1)–(S2) introduced above. Moreover, if \(\alpha ,\beta \in W^{1,p}(I)\) are as in assumption (A1\('\)), we suppose that the following inequalities hold:

$$\begin{aligned} {\left\{ \begin{array}{ll} p(\alpha (0),{\mathcal {A}}_\alpha (0)) \ge 0, \\ q(\alpha (T),{\mathcal {A}}_\alpha (T))\ge 0; \end{array}\right. } \quad {\left\{ \begin{array}{ll} p(\beta (0),{\mathcal {A}}_\beta (0)) \le 0, \\ q(\beta (T),{\mathcal {A}}_\beta (T))\le 0. \end{array}\right. } \end{aligned}$$
(4.16)

Finally, let us assume that a satisfies the “compatibility” condition:

$$\begin{aligned} a(0,x) \ne 0 \quad \hbox { and} \quad a(T,x) \ne 0 \qquad \text {for every}\ x\in {\mathbb {R}}. \end{aligned}$$

Then, the problem (4.15) possesses one solution \(x\in W^{1,p}(I)\), such that:

$$\begin{aligned} \alpha (t)\le x(t)\le \beta (t) \qquad \hbox { for every}\ t\in I. \end{aligned}$$
(4.17)

Furthermore, if \(M > 0\) is any real number, such that \(\sup _I|\alpha |,\,\sup _I|\beta |\le M\) and \(L_M > 0\) is as in Theorem 3.5 (see (3.10)), then:

$$\begin{aligned} \max _{t\in I}\big |x(t)\big |\le M \quad \text {and} \quad \max _{t\in I}\big |{\mathcal {A}}_x(t)\big | \le L_M. \end{aligned}$$
(4.18)

The proof of Theorem 4.5 relies on the following lemma.

Lemma 4.6

Let the assumptions and the notation of Theorem 4.5 apply. Then, for every fixed \(\nu \in [\alpha (T),\beta (T)]\), the boundary value problem

$$\begin{aligned} \mathrm {(D_\nu )} \qquad \quad {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ p(x(0),{\mathcal {A}}_x(0)) = 0, \\ x(T) = \nu \end{array}\right. } \end{aligned}$$

possesses at least one solution \(x\in W^{1,p}(I)\), such that \(\alpha \le x\le \beta \) on I. Furthermore, if M and \(L_M\) are as in the statement of Theorem 4.5, then:

$$\begin{aligned} \sup _{t\in I}|x(t)|\le M \quad \text {and}\quad \sup _{t\in I}|{\mathcal {A}}_x(t)|\le L_M. \end{aligned}$$
(4.19)

Proof

We fix \(\nu \in [\alpha (T),\beta (T)]\) and we consider the following functions:

\((\star )\):

  \(\rho :{\mathbb {R}}\rightarrow {\mathbb {R}}, \quad \rho (r) := \nu \);

\((\star )\):

  \(g:{\mathbb {R}}^4\rightarrow {\mathbb {R}}, \quad g(u,v,w,z) := p(u,w)\).

Then, by means of these functions, we can re-write the problem \((\mathrm {D}_\nu )\) as:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ g(x(0),x(T),{\mathcal {A}}_x(0),{\mathcal {A}}_x(T)) = 0, \\ x(T) = \rho (x(0)). \end{array}\right. } \end{aligned}$$

Now, taking into account \((\mathrm {S1})\), it is readily seen that \(\rho \) and g satisfy, respectively, (G1) and (G2) in the statement of Theorem 4.1; thus, to prove the lemma, it suffices to show the existence of a lower and an upper solution for (4.1) satisfying (A1\('\)) and (4.3) (with the above choices of \(\rho \) and g).
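For instance, the monotonicity required on g in (G2) (that is, g increasing in its third variable and decreasing in its fourth, as used throughout the proof of Theorem 4.1) is inherited directly from \(\mathrm {(S1)}\): for every \(u,v,z\in {\mathbb {R}}\) and every \(w_1\le w_2\), we have

$$\begin{aligned} g(u,v,w_1,z) = p(u,w_1) \le p(u,w_2) = g(u,v,w_2,z), \end{aligned}$$

since \(p(u,\cdot )\) is increasing; moreover, g does not depend on z, and hence it is (trivially) non-increasing in this variable.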

To this end, we first observe that, by assumption, \(\alpha \) and \(\beta \) are, respectively, a lower and an upper solution for (4.1) satisfying (A1\('\)) (that is, \(\alpha \le \beta \) on I); as a consequence, by Theorem 3.5, the Dirichlet problem

$$\begin{aligned} \mathrm {(D)_1}\qquad \quad {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ x(0) = \alpha (0),\,\,x(T) = \nu \end{array}\right. } \end{aligned}$$

possesses (at least) one solution \(x_1\in W^{1,p}(I)\), such that \(\alpha \le x_1\le \beta \) on I. Moreover, if M and \(L_M\) are as in the statement of Theorem 4.1, we have:

$$\begin{aligned} \sup _{t\in I}|x_1(t)| \le M \quad \text {and} \quad \sup _{t\in I}|{\mathcal {A}}_{x_1}(t)| \le L_M. \end{aligned}$$
(4.20)

We claim that the function \(x_1\), which is obviously a lower solution of (4.1), satisfies the first assumption in (4.3) (with g as above). In fact, since:

$$\begin{aligned} x_1(0) = \alpha (0)\quad \hbox {and}\quad x_1\ge \alpha \text { on }I, \end{aligned}$$

from Lemma 4.3 (with \(x_1\) in place of \(\beta \)), we infer that:

$$\begin{aligned} {\mathcal {A}}_{x_1}(0)\ge {\mathcal {A}}_\alpha (0); \end{aligned}$$

as a consequence, by assumption (S1) and (4.16), we obtain:

$$\begin{aligned}&g(x_1(0),x_1(T),{\mathcal {A}}_{x_1}(0),{\mathcal {A}}_{x_1}(T)) = p(x_1(0),{\mathcal {A}}_{x_1}(0)) \\&\quad = p(\alpha (0),{\mathcal {A}}_{x_1}(0)) \ge p(\alpha (0),{\mathcal {A}}_\alpha (0)) \ge 0. \end{aligned}$$

Furthermore, since \(x_1\) solves \(\mathrm {(D)_1}\), we have \(x_1(T) = \nu = \rho (x_1(0))\), and this proves that \(x_1\) satisfies the first assumption in (4.3).

We now turn to prove the existence of an upper solution \(x_2\) of (4.1), such that \(x_2\ge x_1\) on I and satisfying the second assumption in (4.3).

First of all, we notice that \(x_1\) and \(\beta \) are, respectively, a lower and an upper solution for (4.1), such that \(x_1\le \beta \) on I; moreover:

$$\begin{aligned} \nu = x_1(T)\in [x_1(T),\beta (T)]. \end{aligned}$$

Finally, by (4.20) and the choice of M, we have:

$$\begin{aligned} \sup _{t\in I}|x_1(t)|,\,\,\sup _{t\in I}|\beta (t)|\le M. \end{aligned}$$

As a consequence, by Theorem 3.5, the Dirichlet problem

$$\begin{aligned} \mathrm {(D)_2}\qquad \quad {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ x(0) = \beta (0),\,\,x(T) = \nu \end{array}\right. } \end{aligned}$$

has a solution \(x_2\in W^{1,p}(I)\), such that \(x_1\le x_2\le \beta \) on I, further satisfying:

$$\begin{aligned} \sup _{t\in I}|x_2(t)| \le M \quad \text {and} \quad \sup _{t\in I}|{\mathcal {A}}_{x_2}(t)| \le L_M, \end{aligned}$$
(4.21)

for the same \(M > 0\) fixed at the beginning (and \(L_M\) as in Theorem 3.5). We claim that \(x_2\), which is obviously an upper solution of (4.1), satisfies the second assumption in (4.3). In fact, since

$$\begin{aligned} x_2(0) = \beta (0) \quad \hbox {and} \quad x_2\le \beta \,\,\hbox { on }I, \end{aligned}$$

by exploiting Lemma 4.3 (with \(x_2\) in place of \(\alpha \)), we get:

$$\begin{aligned} {\mathcal {A}}_{x_2}(0)\le {\mathcal {A}}_{\beta }(0); \end{aligned}$$

thus, by assumption (S1) and again (4.16), we have:

$$\begin{aligned}&g(x_2(0),x_2(T),{\mathcal {A}}_{x_2}(0),{\mathcal {A}}_{x_2}(T)) = p(x_2(0),{\mathcal {A}}_{x_2}(0)) \\&\quad = p(\beta (0),{\mathcal {A}}_{x_2}(0)) \le p(\beta (0),{\mathcal {A}}_\beta (0)) \le 0. \end{aligned}$$

Furthermore, since \(x_2\) solves \(\mathrm {(D)_2}\), one has \(x_2(T) = \nu = \rho (x_2(0))\), and this proves that \(x_2\ge x_1\) satisfies the second assumption in (4.3).

Gathering together all these facts, we can conclude that all the assumptions in Theorem 4.1 are fulfilled (with the above choice of g and \(\rho \)); thus, there exists (at least) one solution \(x\in W^{1,p}(I)\) of \(\mathrm {(D_\nu )}\), such that:

$$\begin{aligned} \alpha \le x_1\le x\le x_2\le \beta \quad \hbox { on}\ I. \end{aligned}$$

In particular, by the choice of M and \(L_M\) (according to (3.10)) and the fact that \(x_1,\,x_2\) fulfill, respectively, (4.20) and (4.21), we deduce that:

$$\begin{aligned} \sup _{t\in I}|x(t)|\le M \quad \text {and}\quad \sup _{t\in I}|{\mathcal {A}}_x(t)|\le L_M, \end{aligned}$$

with the very same \(M, L_M > 0\) fixed at the beginning. This ends the proof. \(\square \)

Remark 4.7

Let the assumptions and the notation of Lemma 4.6 apply.

A closer inspection of the proof of this lemma shows that the only property of \(\alpha \) and \(\beta \) we have actually used is the following (see (4.16)):

$$\begin{aligned} \big (\bigstar \big )\qquad \quad p(\alpha (0),{\mathcal {A}}_\alpha (0))\ge 0 \qquad \text {and}\qquad p(\beta (0),{\mathcal {A}}_\beta (0))\le 0. \end{aligned}$$

Hence, Lemma 4.6 still holds if we replace (4.16) with the weaker \(\big (\bigstar \big )\).

Thanks to Lemma 4.6, we are able to prove Theorem 4.5.

Proof of Theorem 4.5

Let \(\nu \in [\alpha (T),\beta (T)]\) be fixed. By Lemma 4.6, there exists (at least) one solution \(x_\nu \in W^{1,p}(I)\) of the BVP:

$$\begin{aligned} \mathrm {(D_\nu )} \qquad \quad {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ p(x(0),{\mathcal {A}}_x(0)) = 0, \\ x(T) = \nu , \end{array}\right. } \end{aligned}$$

such that \(\alpha \le x_\nu \le \beta \) on I; moreover, if \(M,L_M > 0\) are as in the statement of the theorem (that is, \(L_M\) is as in (3.10)), we have (see Lemma 4.6):

$$\begin{aligned} (*)\qquad \quad \sup _{t\in I}|x_{\nu }(t)|\le M \quad \text {and} \quad \sup _{t\in I}|{\mathcal {A}}_{x_\nu }(t)|\le L_M. \end{aligned}$$

We then consider the following set:

$$\begin{aligned}&V := \Big \{\nu \in [\alpha (T),\beta (T)]\,:\,\,\exists \text { a solution } x_\nu \in W^{1,p}(I)\text { of }(\mathrm {D}_\nu )\text { s.t. }\alpha \le x_\nu \le \beta , \\&\qquad \qquad \qquad x_\nu \text { satisfies }(*) \text { and } q(x_\nu (T),{\mathcal {A}}_{x_\nu }(T))\ge 0\Big \}. \end{aligned}$$

Step I: We have \(\nu = \alpha (T)\in V\). In fact, by Lemma 4.6, there exists a solution \(x_\nu \in W^{1,p}(I)\) of \(\mathrm {(D_\nu )}\), such that \(\alpha \le x_\nu \le \beta \) and satisfying \((*)\); in particular, we have \(x_\nu (T) = \alpha (T)\). Hence, by Lemma 4.3, we get:

$$\begin{aligned} {\mathcal {A}}_{x_\nu }(T) \le {\mathcal {A}}_\alpha (T). \end{aligned}$$

From this, by assumption (S2) and (4.16), we obtain:

$$\begin{aligned} q(x_\nu (T),{\mathcal {A}}_{x_\nu }(T)) = q(\alpha (T),{\mathcal {A}}_{x_{\nu }}(T)) \ge q(\alpha (T),{\mathcal {A}}_\alpha (T)) \ge 0. \end{aligned}$$

This proves that \(\nu \in V\), as claimed. In particular, \(V\ne \varnothing \).

Step II: Setting \({\overline{\nu }} := \sup V\), we have \({\overline{\nu }}\in V\). In fact, if \({\overline{\nu }} = \alpha (T)\), by Step I, we know that \({\overline{\nu }}\in V\); if, instead, \({\overline{\nu }} > \alpha (T)\), we can choose a sequence \(\{\nu _n\}_n\subseteq V\), such that \(\nu _n\rightarrow {\overline{\nu }}\) as \(n\rightarrow \infty \) and \(\nu _n\le {\overline{\nu }}\) for every n.

Then, for every natural n, there exists a solution \(x_n\in W^{1,p}(I)\) of \((\mathrm {D}_{\nu _n})\), such that:

  1. (a)

      \(\alpha \le x_n\le \beta \) on I;

  2. (b)

      \(x_n\) satisfies \((*)\);

  3. (c)

      \(q(x_n(T),{\mathcal {A}}_{x_n}(T))\ge 0\).

On account of (b), we can apply Proposition 4.2, which provides us with a solution \(x_0\in W^{1,p}(I)\) of (4.1), such that, up to a sub-sequence:

$$\begin{aligned} x_n(t)\rightarrow x_0(t) \quad \text {and} \quad {\mathcal {A}}_{x_n}(t)\rightarrow {\mathcal {A}}_{x_0}(t) \qquad \hbox { for every}\ t\in I. \end{aligned}$$

Now, since \(\nu _n\rightarrow {\overline{\nu }}\) and p is continuous, we see that \(x_0\) solves \(\mathrm {(D_{{\overline{\nu }}})}\); hence:

$$\begin{aligned} p(x_0(0),{\mathcal {A}}_{x_0}(0)) = 0. \end{aligned}$$

Moreover, since \(x_n\) satisfies \((*)\) and \(\alpha \le x_n\le \beta \) on I for every natural n, then the same is true of \(x_0\). Finally, by (c) and the continuity of q, one has:

$$\begin{aligned} q(x_0(T),{\mathcal {A}}_{x_0}(T)) = \lim _{n\rightarrow \infty }q(x_n(T),{\mathcal {A}}_{x_n}(T)) \ge 0. \end{aligned}$$

This proves that \({\overline{\nu }}\in V\), as claimed.

Having established Steps I and II, we can finally prove the existence of a solution to (4.15). In fact, let \({\overline{\nu }} = \sup V \in [\alpha (T),\beta (T)]\), and let \(x_{{\overline{\nu }}}\in W^{1,p}(I)\) be a solution of \(\mathrm {(D_{{\overline{\nu }}})}\) satisfying \((*)\) and such that:

  (i) \(\alpha \le x_{{\overline{\nu }}}\le \beta \) on I;

  (ii) \(q(x_{{\overline{\nu }}}(T),{\mathcal {A}}_{x_{{\overline{\nu }}}}(T))\ge 0\).

If \({\overline{\nu }} = \beta (T)\), we have:

$$\begin{aligned} x_{{\overline{\nu }}}(T) = {\overline{\nu }} = \beta (T) \quad \text {and} \quad p(x_{{\overline{\nu }}}(0),{\mathcal {A}}_{x_{{\overline{\nu }}}}(0)) = 0; \end{aligned}$$

moreover, since \(x_{{\overline{\nu }}}\le \beta \) on I, Lemma 4.3 implies that \({\mathcal {A}}_{x_{{\overline{\nu }}}}(T)\ge {\mathcal {A}}_{\beta }(T)\); from this, by (ii), the monotonicity of q (see (S2)) and (4.16), we obtain:

$$\begin{aligned} 0\le q(x_{{\overline{\nu }}}(T),{\mathcal {A}}_{x_{{\overline{\nu }}}}(T)) = q(\beta (T),{\mathcal {A}}_{x_{{\overline{\nu }}}}(T)) \le q(\beta (T),{\mathcal {A}}_\beta (T))\le 0, \end{aligned}$$

and this proves that \(x_{{\overline{\nu }}}\) is a solution of (4.15) satisfying (4.17) and (4.18).

If, instead, \({\overline{\nu }} < \beta (T)\), we choose a sequence \(\{\mu _m\}_m\subseteq [\alpha (T),\beta (T)]\), such that \(\mu _m\rightarrow {\overline{\nu }}\) as \(m\rightarrow \infty \) and \(\mu _m > {\overline{\nu }}\) for every m. Since \(x_{{\overline{\nu }}}\) solves \(\mathrm {(D_{{\overline{\nu }}})}\) and \(x_{{\overline{\nu }}}\le \beta \) on I, we can regard \(x_{{\overline{\nu }}}\) and \(\beta \) as, respectively, a lower and an upper solution of (4.1) satisfying (A1\('\)) in Theorem 3.5 and \(\big (\bigstar \big )\) in Remark 4.7 (see (4.16)); moreover, by \((*)\) and the choice of M, we have:

$$\begin{aligned} \sup _{t\in I}|x_{{\overline{\nu }}}(t)|,\,\,\sup _{t\in I}|\beta (t)|\le M. \end{aligned}$$

As a consequence, by Remark 4.7, for every \(m\in {\mathbb {N}}\), there exists a solution \(u_m\in W^{1,p}(I)\) of \(\mathrm {(D_{\mu _m})}\), such that:

  • \(\alpha \le x_{{\overline{\nu }}}\le u_m\le \beta \) on I;

  • \(\sup _I|u_m|\le M\) and \( \sup _I|{\mathcal {A}}_{u_m}| \le L_M\).

In particular, \(u_m\) satisfies \((*)\) for every m. We can then apply Proposition 4.2, which provides us with a solution \(u_0\) of (4.1), such that (up to a subsequence):

$$\begin{aligned} u_m(t)\rightarrow u_0(t) \quad \text {and} \quad {\mathcal {A}}_{u_m}(t)\rightarrow {\mathcal {A}}_{u_0}(t) \qquad \hbox { for every}\ t\in I. \end{aligned}$$

Thus, since \(\mu _m\rightarrow {\overline{\nu }}\) and p is continuous, we see that \(u_0\) solves \(\mathrm {(D_{{\overline{\nu }}})}\); hence:

$$\begin{aligned} p(u_0(0),{\mathcal {A}}_{u_0}(0)) = 0. \end{aligned}$$

We now observe that, since \(\mu _m > {\overline{\nu }} = \sup V\), we have \(\mu _m\notin V\); as a consequence, since \(\alpha \le u_m\le \beta \) on I and \(u_m\) satisfies \((*)\) for every m, we necessarily have:

$$\begin{aligned} q(u_m(T),{\mathcal {A}}_{u_m}(T)) < 0 \quad \hbox { for every}\ m\in {\mathbb {N}}. \end{aligned}$$

From this, by the continuity of q (see assumption (S2)), we get:

$$\begin{aligned} q(u_0(T),{\mathcal {A}}_{u_0}(T)) = \lim _{m\rightarrow \infty }q(u_m(T),{\mathcal {A}}_{u_m}(T)) \le 0. \end{aligned}$$
(4.22)

On the other hand, since both \(x_{{\overline{\nu }}}\) and \(u_0\) solve \(\mathrm {(D_{{\overline{\nu }}})}\), we have:

$$\begin{aligned} x_{{\overline{\nu }}}(T) = {\overline{\nu }} = u_0(T); \end{aligned}$$

moreover, since \(u_m\ge x_{{\overline{\nu }}}\) for every natural m (by the construction of \(u_m\)), the same is true of \(u_0\). From this, by using Lemma 4.3, we infer that:

$$\begin{aligned} {\mathcal {A}}_{u_0}(T) \le {\mathcal {A}}_{x_{{\overline{\nu }}}}(T). \end{aligned}$$

By (4.22), the monotonicity of q [see (S2)] and (ii) above, we then get:

$$\begin{aligned} 0 \ge q(u_0(T),{\mathcal {A}}_{u_0}(T)) = q(x_{{\overline{\nu }}}(T),{\mathcal {A}}_{u_0}(T)) \ge q(x_{{\overline{\nu }}}(T),{\mathcal {A}}_{x_{{\overline{\nu }}}}(T)) \ge 0, \end{aligned}$$

and this shows that \(q(u_0(T),{\mathcal {A}}_{u_0}(T)) = 0\); together with \(p(u_0(0),{\mathcal {A}}_{u_0}(0)) = 0\), this proves that \(u_0\) solves (4.15). Finally, since \(\alpha \le u_m\le \beta \) and \(u_m\) satisfies \((*)\) for every m, we conclude that \(u_0\) fulfills (4.17)–(4.18). \(\square \)

From Theorem 4.5, we easily deduce the following results.

Corollary 4.8

Let us assume that all the hypotheses of Theorem 3.5 are satisfied. Moreover, let \(\ell _1,\,\ell _2,\,\nu _1,\,\nu _2\in {\mathbb {R}}\) and let \(m_1,\,m_2\in [0,\infty )\). If \(\alpha \) and \(\beta \) are as in assumption (A1)\('\), we suppose that:

$$\begin{aligned} {\left\{ \begin{array}{ll} \ell _1\,\alpha (0) + m_1\,{\mathcal {A}}_\alpha (0) \ge \nu _1, \\ \ell _2\,\alpha (T) - m_2\,{\mathcal {A}}_\alpha (T) \ge \nu _2; \\ \end{array}\right. } \qquad {\left\{ \begin{array}{ll} \ell _1\,\beta (0) + m_1\,{\mathcal {A}}_\beta (0) \le \nu _1, \\ \ell _2\,\beta (T) - m_2\,{\mathcal {A}}_\beta (T) \le \nu _2. \end{array}\right. } \end{aligned}$$

Finally, we assume that the function a fulfills the following assumption:

$$\begin{aligned} a(0,x) \ne 0 \quad \hbox { and} \quad a(T,x) \ne 0 \qquad \text {for every}\ x\in {\mathbb {R}}. \end{aligned}$$

Then, there exists a solution \(x\in W^{1,p}(I)\) of the Sturm–Liouville problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ \ell _1\,x(0) + m_1\,{\mathcal {A}}_x(0) = \nu _1, \\ \ell _2\,x(T) - m_2\,{\mathcal {A}}_x(T) = \nu _2. \end{array}\right. } \end{aligned}$$

Proof

It is a direct consequence of Theorem 4.5 applied to the functions:

$$\begin{aligned} p(s,t) := \ell _1\,s + m_1\,t - \nu _1 \qquad \text {and} \qquad q(s,t) := \ell _2\,s - m_2\,t - \nu _2, \end{aligned}$$

which satisfy (S1)–(S2) (since \(m_1,m_2\ge 0\)). This ends the proof. \(\square \)
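Although Theorem 4.5 is not restated in this section, it may help to record why this choice of p and q works. The following sketch assumes that the sign conditions required of the lower and upper solutions in Theorem 4.5 take the form \(p(\alpha (0),{\mathcal {A}}_\alpha (0))\ge 0\ge p(\beta (0),{\mathcal {A}}_\beta (0))\) and \(q(\alpha (T),{\mathcal {A}}_\alpha (T))\ge 0\ge q(\beta (T),{\mathcal {A}}_\beta (T))\), the form suggested by (4.16) and by the proof above:

```latex
% Monotonicity in the second variable, as used for (S2): m_1, m_2 >= 0 give
\frac{\partial p}{\partial t}(s,t) = m_1 \ge 0,
\qquad
\frac{\partial q}{\partial t}(s,t) = -m_2 \le 0.
% Sign conditions, rewritten from the assumed inequalities on \alpha and \beta:
\begin{aligned}
p\big(\alpha(0),{\mathcal A}_\alpha(0)\big)
  &= \ell_1\,\alpha(0) + m_1\,{\mathcal A}_\alpha(0) - \nu_1 \ge 0,
  &\quad
p\big(\beta(0),{\mathcal A}_\beta(0)\big)
  &= \ell_1\,\beta(0) + m_1\,{\mathcal A}_\beta(0) - \nu_1 \le 0,
\\
q\big(\alpha(T),{\mathcal A}_\alpha(T)\big)
  &= \ell_2\,\alpha(T) - m_2\,{\mathcal A}_\alpha(T) - \nu_2 \ge 0,
  &\quad
q\big(\beta(T),{\mathcal A}_\beta(T)\big)
  &= \ell_2\,\beta(T) - m_2\,{\mathcal A}_\beta(T) - \nu_2 \le 0.
\end{aligned}
```

In other words, the displayed inequalities in the statement of the corollary are exactly the sign conditions on \(\alpha \) and \(\beta \), rewritten in terms of p and q.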

Corollary 4.9

Let us assume that all the hypotheses of Theorem 3.5 are satisfied. Moreover, let \(\nu _1,\,\nu _2\in {\mathbb {R}}\) be arbitrarily fixed. If \(\alpha \) and \(\beta \) are as in assumption (A1)\('\), we suppose that the following conditions are satisfied:

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {A}}_\alpha (0) \ge \nu _1, \\ {\mathcal {A}}_\alpha (T) \le \nu _2; \\ \end{array}\right. } \qquad {\left\{ \begin{array}{ll} {\mathcal {A}}_\beta (0) \le \nu _1, \\ {\mathcal {A}}_\beta (T) \ge \nu _2. \end{array}\right. } \end{aligned}$$

Finally, we assume that the function a fulfills the following assumption:

$$\begin{aligned} a(0,x) \ne 0 \quad \hbox { and} \quad a(T,x) \ne 0 \qquad \text {for every}\ x\in {\mathbb {R}}. \end{aligned}$$

Then, there exists a solution \(x\in W^{1,p}(I)\) of the Neumann problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \Big (\Phi \big (a(t,x(t))\,x'(t)\big )\Big )' = f(t,x(t),x'(t)), \qquad \hbox { a.e. on}\ I, \\ {\mathcal {A}}_x(0) = \nu _1, \\ {\mathcal {A}}_x(T) = \nu _2. \end{array}\right. } \end{aligned}$$

Proof

It is another direct consequence of Theorem 4.5 applied to:

$$\begin{aligned} p(s,t) := t - \nu _1 \qquad \text {and} \qquad q(s,t) := \nu _2 - t, \end{aligned}$$

which obviously satisfy assumptions (S1)–(S2). This ends the proof. \(\square \)
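As in the previous corollary, one can check the hypotheses of Theorem 4.5 directly; again, the form of its sign conditions is an assumption here, since the theorem is not restated in this section:

```latex
% Here p(s,t) = t - \nu_1 and q(s,t) = \nu_2 - t, so (S2) holds trivially:
\frac{\partial p}{\partial t}(s,t) = 1 \ge 0,
\qquad
\frac{\partial q}{\partial t}(s,t) = -1 \le 0.
% The assumed sign conditions reduce to the hypotheses on A_alpha and A_beta:
\begin{aligned}
p\big(\alpha(0),{\mathcal A}_\alpha(0)\big)
  &= {\mathcal A}_\alpha(0) - \nu_1 \ge 0,
  &\quad
p\big(\beta(0),{\mathcal A}_\beta(0)\big)
  &= {\mathcal A}_\beta(0) - \nu_1 \le 0,
\\
q\big(\alpha(T),{\mathcal A}_\alpha(T)\big)
  &= \nu_2 - {\mathcal A}_\alpha(T) \ge 0,
  &\quad
q\big(\beta(T),{\mathcal A}_\beta(T)\big)
  &= \nu_2 - {\mathcal A}_\beta(T) \le 0.
\end{aligned}
```

Note that the Neumann case is the degenerate choice \(\ell _1 = \ell _2 = 0\), \(m_1 = m_2 = 1\) in Corollary 4.8, up to the sign convention for q.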