1 Introduction

Boundary value problems for ordinary differential equations play a very important role in theory and applications; see, for example, [8, 14, 16, 17]. They describe a large number of nonlinear problems in physics, biology and chemistry. For example, the deformations of an elastic beam are described by a fourth-order differential equation, often referred to as the beam equation, which has been studied under a variety of boundary conditions [1, 8, 10]. This kind of problem has been studied by many authors via various methods, such as the Leray–Schauder continuation method, topological degree theory, the shooting method, fixed point theorems on cones, critical point theory, and the lower and upper solution method; we refer the reader to [2,3,4,5, 8, 9, 12] and the references therein.

Recently, Sun et al. [15] investigated the existence of positive solutions for the following fourth-order boundary value problem:

$$\begin{aligned}&u^{(4)}(t)+f(t,u(t))=0,~0<t<1 \end{aligned}$$
(1.1)
$$\begin{aligned}&u(0)=u'(0)=u''(0)=0, \end{aligned}$$
(1.2)
$$\begin{aligned}&u''(1)-\alpha u''(\eta )=\lambda , \end{aligned}$$
(1.3)

where \(\alpha \in [0,\frac{1}{\eta })\) and \(0<\eta <1\) are constants, \(\lambda \in [0,+\infty )\) is a parameter, and \(f(t,u(t))\) may be singular at \(t=0\) and \(t=1\). Using the Guo–Krasnosel’skii fixed point theorem, the authors proved that (1.1)–(1.3) has at least one positive solution. In this paper, we generalize the results in [15] to a multi-point boundary value problem of the form

$$\begin{aligned} u_{i}^{(4)}(t)+f_{i}(t,\mathbf{u }(t),\mathbf{u }'(t),\mathbf{u }''(t))=0, ~0<t<1, \end{aligned}$$
(1.4)

subject to the multi-point and integral boundary conditions

$$\begin{aligned} \left\{ \begin{array}{ll} u_{i}(0)=h_{1,i}(\psi _{1}[u_{1}],\ldots ,\psi _{1}[u_{n}]),\\ u_{i}'(0)=h_{2,i}(\psi _{2}[u_{1}],\ldots ,\psi _{2}[u_{n}]),\\ u_{i}''(0)=0,\\ u_{i}''(1)=\displaystyle \sum ^{p}_{j=1}\beta _{j,i} u_{i}''(\eta _{j,i})+h_{3,i}(\psi _{3}[u_{1}],\ldots ,\psi _{3}[u_{n}]),\\ \end{array} \right. \end{aligned}$$
(1.5)

where \(\mathbf{u }(s)=(u_{1}(s),\ldots ,u_{n}(s))\), and, for \(i\in \{1,\ldots ,n\}\), \(j\in \{1,\ldots ,p\}\), \(k\in \{1,2,3\}\), \(\beta _{j,i}\ge 0\), \(\eta _{j,i}> 0\) are constants such that \(0 \le \sum ^{p}_{j=1}\beta _{j,i}\eta _{j,i}<1\), \(f_{i}:(0,1)\times {\mathbb {R}}_{+}^{n}\times {\mathbb {R}}_{+}^{n}\times {\mathbb {R}}_{+}^{n}\rightarrow {\mathbb {R}}_{+}\) are continuous functions which may be singular at \(t = 0, 1\), \(h_{k,i}:{\mathbb {R}}^{n}\rightarrow {\mathbb {R}}_{+}\) are continuous functions, and \(\psi _{k}:C([0,1])\rightarrow {\mathbb {R}}\) are linear functionals defined in the Lebesgue–Stieltjes sense by \(\psi _{k}[w]:=\int ^{1}_{0}w(s)\,d\phi _{k}(s),\) where \(\phi _{k}\) is a function of bounded variation. Note that if \(h_{k,i}(\psi _{k}[u_{1}],\ldots ,\psi _{k}[u_{n}])= \sum ^{n}_{l=0}|a_{l,i}|u_{i}(\xi _{l,i})\), then we recover multi-point boundary conditions. The particularity of problem (1.4)–(1.5) is that the boundary conditions involve multi-point and nonlinear integral terms, which leads to extra difficulties.
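To make the functionals \(\psi _{k}\) concrete, the following Python sketch (with hypothetical data, not taken from the paper) evaluates \(\psi [w]=\int ^{1}_{0}w(s)\,d\phi (s)\) for a \(\phi \) made of an absolutely continuous part plus jumps; the pure-jump case is exactly the multi-point situation mentioned above.

```python
import numpy as np

def psi(w, density=None, jumps=()):
    """Evaluate psi[w] = int_0^1 w(s) dphi(s), where dphi has an absolutely
    continuous part 'density(s) ds' and point masses 'jumps' (hypothetical data)."""
    value = 0.0
    if density is not None:
        ss = np.linspace(0.0, 1.0, 2001)
        ys = w(ss) * density(ss)
        value += float(np.sum((ys[:-1] + ys[1:]) * np.diff(ss) / 2))  # trapezoid rule
    for xi, a in jumps:          # a jump of size a at s = xi contributes a*w(xi)
        value += a * w(xi)
    return value

w = lambda s: np.sin(np.pi * s)
# Pure jumps give a multi-point functional: psi[w] = 0.3*w(1/4) + 0.5*w(3/4)
print(psi(w, jumps=[(0.25, 0.3), (0.75, 0.5)]))
# Mixed case: phi'(s) = s on (0,1) together with a jump of size 0.2 at s = 1/2
print(psi(w, density=lambda s: s, jumps=[(0.5, 0.2)]))
```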

In a special case, our problem reduces to the following classical boundary value problem with cantilever beam boundary conditions:

$$\begin{aligned} \left\{ \begin{array}{ll} u^{(4)}(t)+f(t,u(t))=0,~0<t<1,\\ u(0)=u'(0)=u''(0)=u''(1)=0. \end{array} \right. \end{aligned}$$
(1.6)

Note that problem (1.4)–(1.5) is a generalization of system (1.1)–(1.3). However, to the best of the authors’ knowledge, there are no results on triple positive solutions of the nonlinear differential equation (1.4) subject to conditions (1.5) obtained via the Leggett–Williams fixed point theorem. The aim of this paper is to fill this gap in the literature. The paper is structured as follows: in the next section, we give some properties of the Green’s function associated with problem (1.4)–(1.5), transform problem (1.4)–(1.5) into a system of Hammerstein integral equations, and establish some preliminary results which are used throughout the paper. In Sect. 3, we state the main theorems and give their proofs. We first apply the well-known Leggett–Williams fixed point theorem to prove the existence of at least three positive solutions, and then, by an induction argument, we show the existence of countably many positive solutions of problem (1.4)–(1.5). An example is presented in Sect. 4 to illustrate our main results.

2 Preliminaries

In this section we present some preliminary results which are useful in the proofs of the main results. First, let us give the definition and some properties of the Green’s function. Unless otherwise specified, the letters i and k in the remainder of this work always denote arbitrary integers in \(\{1,2,\ldots ,n\}\) and in \(\{1,2,3\}\), respectively.

Lemma 2.1

Let \(h_{i}\in C([0,1];{\mathbb {R}})\) and \(g_{k,i}\in {\mathbb {R}}\). Then the problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_{i}^{(4)}(t)+h_{i}(t)=0,\,\,0<t<1,\\ u_{i}(0)=g_{1,i},\\ u_{i}'(0)=g_{2,i},\\ u_{i}''(0)=0,\\ u_{i}''(1)=\displaystyle \sum ^{p}_{j=1}\beta _{j,i} u_{i}''(\eta _{j,i})+g_{3,i}\\ \end{array} \right. \end{aligned}$$
(2.1)

is equivalent to

$$\begin{aligned} u_{i}(t)=\displaystyle \int ^{1}_{0}\left( G(t,s)+\frac{t^{3}}{6}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) h_{i}(s)\,ds+\varphi _{i}(t), \end{aligned}$$

where

$$\begin{aligned} G(t,s)= & {} \frac{1}{6}\left\{ \begin{array}{l l} (1-s)t^{3}, &{} \quad \text {if }0\le t\le \text { s},\\ (1-s)t^{3}-(t-s)^{3}, &{} \quad \text {if }s\le t\le 1, \end{array} \right. \end{aligned}$$
(2.2)
$$\begin{aligned} \varphi _{i}(t)= & {} \frac{K_{i} g_{3,i}t^{3}}{6}+t g_{2,i}+g_{1,i}, \end{aligned}$$
(2.3)

and \(K_{i}\) is the constant determined by

$$\begin{aligned} K_{i}\left( 1-\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\eta _{j,i}\right) =1. \end{aligned}$$
(2.4)

Proof

Integrating equation (2.1) four times, we obtain

$$\begin{aligned} u_{i}(t)=-\displaystyle \frac{1}{6}\displaystyle \int ^{t}_{0} (t-s)^{3}h_{i}(s)\,ds+\frac{1}{6}C_{3,i}t^{3}+\frac{1}{2} C_{2,i}t^{2}+C_{1,i}t+C_{0,i}. \end{aligned}$$

By the boundary conditions \(u_{i}(0)=g_{1,i}\), \(u_{i}'(0)=g_{2,i}\) and \(u_{i}''(0)=0\) we have \(C_{0,i}= g_{1,i}\), \(C_{1,i}=g_{2,i}\) and \(C_{2,i}=0\).

On the other hand, from the condition \(u_{i}''(1)=\sum ^{p}_{j=1}\beta _{j,i} u_{i}''(\eta _{j,i})+g_{3,i},\) we obtain

$$\begin{aligned} C_{3,i}=K_{i}\displaystyle \int ^{1}_{0}(1-s)h_{i}(s)\,ds-K_{i}\displaystyle \sum ^{p}_{j=1} \beta _{j,i}\displaystyle \int ^{\eta _{j,i}}_{0}(\eta _{j,i}-s)h_{i}(s)\,ds+K_{i}g_{3,i}, \end{aligned}$$

where \(K_i\) is given by (2.4). It follows from the above information that

$$\begin{aligned} \begin{aligned} u_{i}(t)&=\frac{t^{3}}{6}\left( K_{i}\displaystyle \int ^{1}_{0}(1-s) h_{i}(s)\,ds -K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i} \displaystyle \int ^{\eta _{j,i}}_{0}(\eta _{j,i}-s)h_{i}(s)\,ds +K_{i}g_{3,i}\right) \\&\quad +t\, g_{2,i} +g_{1,i}-\frac{1}{6}\displaystyle \int ^{t}_{0}(t-s)^{3}h_{i}(s)\,ds\\&= \frac{t^{3}}{6}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\left( \eta _{j,i} \displaystyle \int ^{1}_{0}(1-s)h_{i}(s)\,ds -\displaystyle \int ^{\eta _{j,i}}_{0}(\eta _{j,i}-s) h_{i}(s)\,ds\right) \\&\quad +\frac{t^{3}}{6}\displaystyle \int ^{1}_{0}(1-s)h_{i}(s)\,ds + \frac{t^{3}}{6} K_{i}\,g_{3,i} +t\, g_{2,i}+g_{1,i}-\frac{1}{6}\displaystyle \int ^{t}_{0}(t-s)^{3}h_{i}(s)\,ds\\&=\displaystyle \int ^{1}_{0}\left( G(t,s)+\frac{t^{3}}{6}K_{i} \displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) h_{i}(s)\,ds+\varphi _{i}(t), \end{aligned} \end{aligned}$$

where G(t,s) and \(\varphi _{i}(t)\) are given by (2.2) and (2.3), respectively. The proof of Lemma 2.1 is now complete. \(\square \)
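As a sanity check of the representation formula of Lemma 2.1, the following SymPy sketch verifies symbolically, for hypothetical sample data \(p=2\), \(\beta _{j,i}=\frac{1}{5}\), \(\eta _{j,i}\in \{\frac{1}{6},\frac{1}{3}\}\), \(g_{k,i}\) and \(h_{i}(s)=1+s\) (chosen only for illustration), that the formula satisfies the equation and all four boundary conditions of (2.1); the Green's function integral is split at \(s=t\) so that no piecewise expressions are needed.

```python
import sympy as sp

t, s = sp.symbols('t s')

# Hypothetical sample data (any admissible choice works)
beta = [sp.Rational(1, 5), sp.Rational(1, 5)]          # beta_{j,i}
eta  = [sp.Rational(1, 6), sp.Rational(1, 3)]          # eta_{j,i}
g1, g2, g3 = sp.Rational(1, 2), sp.Rational(1, 3), sp.Rational(1, 4)
h = 1 + s                                              # sample h_i(s)

K = 1 / (1 - sum(b * e for b, e in zip(beta, eta)))    # constant K_i from (2.4)

# int_0^1 G(t,s) h(s) ds, with the kernel (2.2) split at s = t
intG = (sp.integrate((1 - s) * t**3 / 6 * h, (s, 0, 1))
        - sp.integrate((t - s)**3 / 6 * h, (s, 0, t)))

# int_0^1 d^2G/dt^2(eta_j, s) h(s) ds = eta_j*int_0^1 (1-s)h ds - int_0^{eta_j} (eta_j - s)h ds
def int_d2G(e):
    return (e * sp.integrate((1 - s) * h, (s, 0, 1))
            - sp.integrate((e - s) * h, (s, 0, e)))

phi = K * g3 * t**3 / 6 + g2 * t + g1                  # varphi_i from (2.3)
u = sp.expand(intG + t**3 / 6 * K * sum(b * int_d2G(e) for b, e in zip(beta, eta)) + phi)

u2 = sp.diff(u, t, 2)
assert sp.simplify(sp.diff(u, t, 4) + h.subs(s, t)) == 0           # u'''' + h = 0
assert u.subs(t, 0) == g1                                          # u(0) = g_{1,i}
assert sp.diff(u, t).subs(t, 0) == g2                              # u'(0) = g_{2,i}
assert u2.subs(t, 0) == 0                                          # u''(0) = 0
assert sp.simplify(u2.subs(t, 1)
                   - sum(b * u2.subs(t, e) for b, e in zip(beta, eta)) - g3) == 0
print("Lemma 2.1: the representation solves (2.1) for the sample data")
```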

We now collect some properties of the Green’s function G(t,s); for more details, we refer the interested reader to [7, 11, 15]. A numerical illustration of these bounds is sketched after Lemma 2.2.

Lemma 2.2

The Green’s function has the following properties.

Let \(\displaystyle \varphi (s)=\frac{(1-s)s}{2}\). Then we have:

  1.
    • For all \((t,s)\in [0,1]\times [0,1]\), \(\displaystyle 0\le G(t,s)\le 2 \varphi (s)\).

    • For all \((t,s)\in [0,1]\times [0,1]\), \(\displaystyle 0\le \frac{\partial G(t,s)}{\partial t}\le \varphi (s)\).

    • For all \((t,s)\in [0,1]\times [0,1]\), \(\displaystyle 0\le \frac{\partial ^{2} G(t,s)}{\partial t^{2}}\le 2\varphi (s)\).

  2.

    Let \(\theta \in \left( 0,\frac{1}{2}\right) \). Then:

    • For all \((t,s)\in [\theta ,1-\theta ]\times [0,1]\), \(\displaystyle G(t,s)\ge \frac{\theta ^{3}}{3}\varphi (s)\).

    • For all \((t,s)\in [\theta ,1-\theta ]\times [0,1]\), \(\displaystyle \frac{\partial G(t,s)}{\partial t}\ge \theta ^{2}\varphi (s)\).

    • For all \((t,s)\in [\theta ,1-\theta ]\times [0,1]\), \(\displaystyle \frac{\partial ^{2} G(t,s)}{\partial t^{2}}\ge \theta \varphi (s)\).
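The following rough grid check (a numerical illustration, not a proof) tests the bounds of Lemma 2.2 for the kernel (2.2) and its first two \(t\)-derivatives with \(\theta =\frac{1}{4}\).

```python
import numpy as np

theta = 0.25
tt = np.linspace(0.0, 1.0, 801)
ss = np.linspace(0.0, 1.0, 801)
T, S = np.meshgrid(tt, ss, indexing='ij')

def dG(d):
    """d-th t-derivative of the Green's function (2.2), d in {0, 1, 2}."""
    if d == 0:
        return ((1 - S) * T**3 - np.where(T >= S, (T - S)**3, 0.0)) / 6
    if d == 1:
        return ((1 - S) * T**2 - np.where(T >= S, (T - S)**2, 0.0)) / 2
    return (1 - S) * T - np.where(T >= S, T - S, 0.0)

phi = (1 - S) * S / 2
upper = [2 * phi, phi, 2 * phi]                              # upper bounds of part 1
lower = [theta**3 / 3 * phi, theta**2 * phi, theta * phi]    # lower bounds of part 2
inner = (T >= theta) & (T <= 1 - theta)                      # t restricted to [theta, 1 - theta]

for d in range(3):
    g = dG(d)
    assert np.all(g >= -1e-12) and np.all(g <= upper[d] + 1e-12)
    assert np.all(g[inner] >= lower[d][inner] - 1e-12)
print("Lemma 2.2 bounds hold on the grid for theta = 1/4")
```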

The Leggett–Williams fixed point theorem is the main tool for proving the multiplicity results. For the convenience of the reader, we recall it here [13].

Let P be a cone in a real Banach space E, let \(0< a < b\), and let \(\beta \) be a nonnegative continuous concave functional on P. Define the convex sets \(P_{r}\) and \(P(\beta ,a,b)\) by

$$\begin{aligned} P_{r}=\{x\in P\mid \Vert x\Vert \le r\} \end{aligned}$$

and

$$\begin{aligned} P(\beta ,a,b)=\{x\in P\mid a\le \beta (x),\Vert x\Vert \le b\}. \end{aligned}$$

Theorem 2.3

(Leggett–Williams fixed point theorem, see [13]) Let \(A:{{\overline{P}}_{c}}\rightarrow {{\overline{P}}_{c}}\) be a completely continuous operator and \(\beta \) be a nonnegative continuous concave functional on P such that \(\beta (x)\le \Vert x\Vert \) for \(x\in {{\overline{P}}_{c}}\). Suppose that there exist \(0< a< b < d \le c\) such that

\((A_{1})\) :

\(\{x\in P(\beta ,b,d);\beta (x)>b\}\ne \emptyset \) and \(\beta (A x)> b\) for \(x\in P(\beta ,b,d)\),

\((A_{2})\) :

\(\Vert A x\Vert < a\) for \(\Vert x\Vert \le a\),

\((A_{3})\) :

\(\beta (A x)> b\) for \(x\in P(\beta ,b,c)\) with \(\Vert A x\Vert > d\).

Then A has at least three fixed points \(x_{1}\), \(x_{2}\), \(x_{3}\) in \({{\overline{P}}_{c}}\) such that

\(\Vert x_{1}\Vert <a\), \(\beta (x_{2})> b\) and \(\Vert x_{3}\Vert >a\) with \(\beta (x_{3})< b\).

For convenience, we introduce the following notation. Define

$$\begin{aligned} M_{i}=\displaystyle \max _{d\in \{0,1,2\}}\displaystyle \max _{t\in [0,1]} \displaystyle \int ^{1}_{0}\frac{\partial ^{d} H_{i}(t,s)}{\partial t^{d}}\,ds \end{aligned}$$

where

$$\begin{aligned} H_{i}(t,s)=G(t,s)+\frac{t^{3}}{6}K_{i}\displaystyle \sum ^{p}_{j=1} \beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}. \end{aligned}$$

Put

$$\begin{aligned} m_{d}(\theta )=\displaystyle \min _{t\in [\theta ,1-\theta ]} \displaystyle \int ^{1-\theta }_{\theta }\frac{\partial ^{d} G(t,s)}{\partial t^{d}}\,ds, \,\, d\in \{0,1,2\} \end{aligned}$$

and

$$\begin{aligned} L_{1,i}=\frac{1}{\psi _{1}[1]},\;\;L_{2,i}=\frac{1}{\psi _{2}[1]}, \;\;L_{3,i}=\frac{1}{K_{i}\psi _{3}[1]}. \end{aligned}$$
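In practice the constants \(M_{i}\) and \(m_{d}(\theta )\) can be approximated by quadrature. The sketch below uses hypothetical single-component data (\(p=1\), \(\beta =0.3\), \(\eta =0.5\), chosen only for illustration); note that \(m_{d}(\theta )\) depends only on G and \(\theta \).

```python
import numpy as np

beta, eta, theta = np.array([0.3]), np.array([0.5]), 0.25     # hypothetical data
K = 1.0 / (1.0 - float(np.sum(beta * eta)))                   # constant from (2.4)

tt = np.linspace(0.0, 1.0, 1601)
ss = np.linspace(0.0, 1.0, 1601)
T, S = np.meshgrid(tt, ss, indexing='ij')

def dG(d, T, S):
    """d-th t-derivative of the Green's function (2.2)."""
    if d == 0:
        return ((1 - S) * T**3 - np.where(T >= S, (T - S)**3, 0.0)) / 6
    if d == 1:
        return ((1 - S) * T**2 - np.where(T >= S, (T - S)**2, 0.0)) / 2
    return (1 - S) * T - np.where(T >= S, T - S, 0.0)

def trap(y, x):                                               # trapezoid rule along the last axis
    return np.sum((y[..., :-1] + y[..., 1:]) * np.diff(x) / 2, axis=-1)

# H_i(t,s) and its t-derivatives; the factors t^3/6, t^2/2, t multiply
# K * sum_j beta_j * d2G/dt2(eta_j, s)
corr = sum(b * dG(2, np.full_like(S, e), S) for b, e in zip(beta, eta))
tfac = [T**3 / 6, T**2 / 2, T]
M = max(np.max(trap(dG(d, T, S) + tfac[d] * K * corr, ss)) for d in range(3))

keep_t = (tt >= theta) & (tt <= 1 - theta)
keep_s = (ss >= theta) & (ss <= 1 - theta)
m = [np.min(trap(dG(d, T, S)[np.ix_(keep_t, keep_s)], ss[keep_s])) for d in range(3)]

print(f"M   ~ {M:.6f}")
print(f"m_d ~ {m[0]:.6f}, {m[1]:.6f}, {m[2]:.6f}   (1/1536, 1/128, 1/16 for theta = 1/4)")
```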

The basic space used in this paper is a real Banach space \(E=(C^{2}([0,1];{\mathbb {R}}))^{n}\) equipped with the norm

$$\begin{aligned} \Vert \mathbf{u }\Vert =\displaystyle \sum ^{n}_{i=1}\displaystyle \sum ^{2}_{d=0}\Vert u_{i}^{(d)}\Vert _{\infty }. \end{aligned}$$

Let

$$\begin{aligned} E^{+}=\{\mathbf{u }=(u_{1},\ldots ,u_{n})\in E, u_{i}(t)\ge 0, u_{i}'(t)\ge 0, u_{i}''(t)\ge 0, t\in [0,1], i\in \{1,\ldots ,n\}\}. \end{aligned}$$

Then the set

$$\begin{aligned} K(\theta )=\left\{ \mathbf{u }\in E^{+}, \displaystyle \min _{t\in [\theta ,1-\theta ]}\displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}u_{i}^{(d)}(t)\ge \gamma (\theta ) \Vert \mathbf{u }\Vert \right\} \end{aligned}$$

is a cone of E, where \(\theta \in (0,\frac{1}{2})\) and \(\gamma (\theta )=\frac{\theta ^{3}}{6}\). The following result follows immediately from Lemma 2.1.

Corollary 2.4

Assume that \(h_{k,i}\in C({\mathbb {R}}^{n},{\mathbb {R}}_{+})\) and \(f_{i}\in C((0,1)\times {\mathbb {R}}^{n}_{+}\times {\mathbb {R}}^{n}_{+}\times {\mathbb {R}}^{n}_{+},{\mathbb {R}}_{+})\). Then \(\mathbf{u }\in E\) is a solution of (1.4)–(1.5) if and only if

$$\begin{aligned} \mathbf{T }(\mathbf{u })=\mathbf{u }, \end{aligned}$$

where \(\mathbf{T }\) is the operator defined on E by

$$\begin{aligned} \mathbf{T }(\mathbf{u })=(T_{1}(\mathbf{u }),\ldots ,T_{n}(\mathbf{u })), \end{aligned}$$

and, for all \(t\in [0,1]\),

$$\begin{aligned} T_{i}(\mathbf{u })(t)= & {} \displaystyle \int ^{1}_{0}\left( G(t,s)+\frac{t^{3}}{6}K_{i} \displaystyle \sum ^{p}_{j=1}\beta _{j,i} \frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) \nonumber \\&f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds +P_{i}(t), \end{aligned}$$
(2.5)

with

$$\begin{aligned} P_{i}(t)= & {} h_{1,i}\left( \psi _{1}[u_{1}],\ldots ,\psi _{1}[u_{n}]\right) +t\,h_{2,i}\left( \psi _{2}[u_{1}],\ldots ,\psi _{2}[u_{n}]\right) \nonumber \\&+\frac{K_{i}t^{3}}{6}h_{3,i}\left( \psi _{3}[u_{1}],\ldots ,\psi _{3}[u_{n}] \right) . \end{aligned}$$
(2.6)
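Corollary 2.4 turns (1.4)–(1.5) into the fixed point problem \(\mathbf{T }(\mathbf{u })=\mathbf{u }\). Purely as an illustration of this reformulation, the following sketch discretizes the scalar special case (1.6), for which \(H\equiv G\) and \(P\equiv 0\), with a hypothetical nonlinearity, and runs a naive successive-approximation loop; this is a numerical experiment under these simplifying assumptions, not part of the existence argument.

```python
import numpy as np

# Scalar special case (1.6): u'''' + f(t, u) = 0, u(0)=u'(0)=u''(0)=u''(1)=0,
# rewritten as u = T(u) with T(u)(t) = int_0^1 G(t,s) f(s, u(s)) ds.
f = lambda t, u: 1.0 + u**2 / (1.0 + u**2)         # hypothetical nonlinearity

n = 401
tt = np.linspace(0.0, 1.0, n)
T, S = np.meshgrid(tt, tt, indexing='ij')
G = ((1 - S) * T**3 - np.where(T >= S, (T - S)**3, 0.0)) / 6   # kernel (2.2)

w = np.full(n, 1.0 / (n - 1))                      # trapezoid weights on [0, 1]
w[0] = w[-1] = 0.5 / (n - 1)

u = np.zeros(n)
for k in range(200):                               # Picard iteration u <- T(u)
    u_new = G @ (w * f(tt, u))
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

print(f"converged in {k} iterations, u(1) ~ {u[-1]:.6f}, max u ~ {u.max():.6f}")
```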

Definition 2.5

A function \(\mathbf{u }=(u_{1},\ldots ,u_{n})\) is called a nonnegative solution of (1.4)–(1.5) if \(\mathbf{u }\) satisfies (1.4)–(1.5) and \(u_{i}\ge 0\) on [0, 1]. If, in addition, \(u_{i}(t)> 0\) on (0, 1), then \(\mathbf{u }\) is called a positive solution.

Lemma 2.6

Let \(\theta \in \left( 0,\frac{1}{2}\right) \) and assume that \(\int ^{1}_{0 } f_{i}(s,x,y,z) \,ds < +\infty \) for any \(x,y,z \in {\mathbb {R}}^{n}_{+}\). Then the operator \(\mathbf{T }\) given by (2.5) maps \(K(\theta )\) into itself, i.e., \(\mathbf{T }: K(\theta ) \rightarrow K(\theta )\). Moreover, \(\mathbf{T }\) is completely continuous, that is, \(\mathbf{T }\) is continuous and maps bounded sets into precompact sets.

Proof

Let \(\mathbf{u }\in K(\theta )\). Then, from the nonnegativity of the Green’s function and its derivatives (Lemma 2.2), of \(f_{i}\) and of \(P_{i}\), it is easy to see that, for all \(t\in [0,1]\),

$$\begin{aligned} T_{i}(\mathbf{u })(t)\ge 0,\quad T_{i}(\mathbf{u })'(t)\ge 0\quad \text{ and }\quad T_{i}(\mathbf{u })''(t)\ge 0. \end{aligned}$$

Thus, to prove that \(\mathbf{T }(K(\theta ))\subset K(\theta )\), it suffices to prove that

$$\begin{aligned} \displaystyle \min _{t\in [\theta ,1-\theta ]}\displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}T_{i}(\mathbf{u })^{(d)}(t)\ge \gamma (\theta ) \Vert \mathbf{T }(\mathbf{u })\Vert . \end{aligned}$$

Indeed, for all \(t\in [0,1]\),

$$\begin{aligned} |T_{i}(\mathbf{u })(t)|&\le \displaystyle \int ^{1}_{0}\left( G(t,s)+\frac{t^{3}}{6}K_{i} \displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s)) \,ds+P_{i}(t)\\&\le \displaystyle \int ^{1}_{0}\left( \frac{1}{6}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\, ds+P_{i}(1)\\&\quad + 2\displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s), \mathbf{u }'(s),\mathbf{u }''(s))\,ds. \end{aligned}$$

Then

$$\begin{aligned} \begin{aligned} \Vert T_{i}(\mathbf{u })\Vert _{\infty }&\le \displaystyle \int ^{1}_{0}\left( \frac{1}{6}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\, ds+P_{i}(1)\\&\quad +2 \displaystyle \int ^{1}_{0}\varphi (s) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds. \end{aligned} \end{aligned}$$

On the other hand, it follows from Lemma 2.2 that, for all \(t\in [\theta ,1-\theta ]\),

$$\begin{aligned} T_{i}(\mathbf{u })(t)&\ge \displaystyle \int ^{1}_{0}\left( \frac{\theta ^{3}}{6}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds\\&\quad +\frac{\theta ^{3}}{3} \displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds + \frac{\theta ^{3}}{6}K_{i}h_{3,i}(\psi _{3}[u_{1}],\ldots ,\psi _{3}[u_{n}])\\&\quad +\theta h_{2,i}(\psi _{2}[u_{1}],\ldots ,\psi _{2}[u_{n}])+ h_{1,i}(\psi _{1}[u_{1}],\ldots ,\psi _{1}[u_{n}])\\&\ge \frac{\theta ^{3}}{6} \left[ \displaystyle \int ^{1}_{0}\left( K_{i}\displaystyle \sum ^{p}_{j=1} \beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds\right. \\&\quad +2\displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s), \mathbf{u }'(s),\mathbf{u }''(s))\,ds + K_{i} h_{3,i}(\psi _{3}[u_{1}],\ldots ,\psi _{3}[u_{n}]) \\&\quad \left. + \frac{6}{\theta ^{2}}h_{2,i}(\psi _{2}[u_{1}],\ldots ,\psi _{2}[u_{n}]) +\frac{6}{\theta ^{3}} h_{1,i}(\psi _{1}[u_{1}],\ldots ,\psi _{1}[u_{n}])\right] \\&\ge \frac{\theta ^{3}}{6} \left[ \displaystyle \int ^{1}_{0}\left( K_{i}\displaystyle \sum ^{p}_{j=1} \beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds\right. \\&\quad \left. +2\displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s), \mathbf{u }''(s))\,ds +P_{i}(1)\right] \\&\ge \frac{\theta ^{3}}{6} \Vert T_{i}(\mathbf{u })\Vert _{\infty }. \end{aligned}$$

Then, we obtain

$$\begin{aligned} \displaystyle \min _{t\in [\theta ,1-\theta ]}T_{i}(\mathbf{u })(t)\ge \gamma (\theta ) \Vert T_{i}(\mathbf{u })\Vert _{\infty }. \end{aligned}$$
(2.7)

In addition, we have

$$\begin{aligned} |T_{i}(\mathbf{u })'(t)|&\le \displaystyle \int ^{1}_{0}\left( \frac{\partial G(t,s)}{\partial t}+\frac{t^{2}}{2}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i} \frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds +P_{i}'(t)\\&\le \displaystyle \int ^{1}_{0}\left( \frac{1}{2}K_{i}\displaystyle \sum ^{p}_{j=1} \beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\, ds+P_{i}'(1)\\&\quad +\displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s), \mathbf{u }'(s),\mathbf{u }''(s))\,ds. \end{aligned}$$

Therefore

$$\begin{aligned} \Vert T_{i}(\mathbf{u })'\Vert _{\infty }&\le \displaystyle \int ^{1}_{0}\left( \frac{1}{2}K_{i}\displaystyle \sum ^{p}_{j=1} \beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\, ds+P_{i}'(1)\\&\quad +\displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s), \mathbf{u }'(s),\mathbf{u }''(s))\,ds. \end{aligned}$$

It follows from Lemma 2.2 that, for all \(t\in [\theta ,1-\theta ]\),

$$\begin{aligned} T_{i}(\mathbf{u })'(t)&\ge \displaystyle \int ^{1}_{0}\left( \frac{\theta ^{2}}{2}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds\\&\quad +\theta ^{2} \displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s), \mathbf{u }'(s),\mathbf{u }''(s))\,ds +\frac{\theta ^{2}}{2}K_{i}\,h_{3,i}(\psi _{3}[u_{1}],\ldots , \psi _{3}[u_{n}])\\&\quad +h_{2,i}(\psi _{2}[u_{1}],\ldots ,\psi _{2}[u_{n}])\\&\ge \theta ^{2} \left( \displaystyle \int ^{1}_{0}\left( \frac{1}{2}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds \right. \\&\quad +\displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s), \mathbf{u }'(s),\mathbf{u }''(s))\,ds + \frac{1}{2}K_{i} h_{3,i}(\psi _{3}[u_{1}],\ldots ,\psi _{3}[u_{n}])\\&\quad +\left. \frac{1}{\theta ^{2}}h_{2,i}(\psi _{2}[u_{1}],\ldots ,\psi _{2} [u_{n}])\right) \\&\ge \theta ^{2} \left( \displaystyle \int ^{1}_{0}\left( \frac{1}{2}K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\, ds\right. \\&\quad \left. +\displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s), \mathbf{u }''(s))\,ds + P_{i}'(1)\right) \\&\ge \theta ^{2} \Vert T_{i}(\mathbf{u })'\Vert _{\infty }. \end{aligned}$$

Thus

$$\begin{aligned} \displaystyle \min _{t\in [\theta ,1-\theta ]}T_{i}(\mathbf{u })'(t)\ge \gamma (\theta ) \Vert T_{i}(\mathbf{u })'\Vert _{\infty }. \end{aligned}$$

Besides,

$$\begin{aligned} |T_{i}(\mathbf{u })''(t)|&\le \displaystyle \int ^{1}_{0}\left( \frac{\partial ^{2} G(t,s)}{\partial t^{2}}+t K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds+P_{i}''(t)\\&\le \displaystyle \int ^{1}_{0}\left( K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds\\&\quad +2 \displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds+P_{i}''(1). \end{aligned}$$

Moreover, it follows from Lemma 2.2 that for each \(t\in [\theta ,1-\theta ]\)

$$\begin{aligned} T_{i}(\mathbf{u })''(t)&\ge \theta \displaystyle \int ^{1}_{0}\left( K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds\\&\quad +\theta \displaystyle \int ^{1}_{0}\varphi (s)f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds +\theta K_{i} h_{3,i}(\psi _{3}[u_{1}],\ldots ,\psi _{3}[u_{n}])\\&\ge \frac{\theta }{2} \left( \displaystyle \int ^{1}_{0}\left( 2 K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds\right. \\&\quad \left. +\displaystyle \int ^{1}_{0}2\varphi (s)f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds+2 K_{i} h_{3,i}(\psi _{3}[u_{1}],\ldots ,\psi _{3}[u_{n}])\right) \\&\ge \frac{\theta }{2} \left( \displaystyle \int ^{1}_{0}\left( K_{i}\displaystyle \sum ^{p}_{j=1}\beta _{j,i}\frac{\partial ^{2}G(\eta _{j,i},s)}{\partial t^{2}}\right) f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds \right. \\&\quad \left. +\displaystyle \int ^{1}_{0}2\varphi (s)f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds+ P_{i}''(1)\right) \\&\ge \frac{\theta ^{3}}{6} \Vert T_{i}(\mathbf{u })''\Vert _{\infty }. \end{aligned}$$

We deduce that \(\mathbf{T }(K(\theta ))\subset K(\theta )\).

Now we prove that the operator \(\mathbf{T }\) is completely continuous. For any natural number m \((m \ge 2)\), we set, for all \(u,v,w \in {\mathbb {R}}^{n}_{+}\),

$$\begin{aligned} f_{i, m}(t,u,v,w)=\left\{ \begin{array}{l l} \inf _{t< s\le \frac{1}{m} } f_{i}(s,u,v,w), &{} \quad \text {if } 0 \le t\le \frac{1}{m},\\ f_{i}(t,u,v,w), &{} \quad \text {if }\frac{1}{m} \le t\le 1-\frac{1}{m},\\ \inf _{1-\frac{1}{m} \le s < t } f_{i}(s,u,v,w), &{} \quad \text {if }1-\frac{1}{m} \le t \le 1. \end{array} \right. \end{aligned}$$
(2.8)

Then \(f_{i, m}:[0, 1]\times {\mathbb {R}}^{n}_{+}\times {\mathbb {R}}_{+}^{n}\times {\mathbb {R}}_{+}^{n}\rightarrow [0,+\infty )\) is continuous and \(0 \le f_{i, m}(t,u,v,w)\le f_{i}(t,u,v,w)\) for all \(t \in (0, 1)\).

Let \(T_{i,m}(\mathbf{u })(t)=\displaystyle \int ^{1}_{0}H_{i}(t,s) f_{i,m}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds +P_{i}(t)\) and \(\mathbf{T }_{m}(\mathbf{u })=(T_{1,m}(\mathbf{u }),\ldots ,T_{n,m}(\mathbf{u }))\).

Since [0, 1] is compact and \(f_{i, m}\) and \(H_{i}\) are continuous, it is easy to show, by using the Arzelà–Ascoli theorem [6], that \(\mathbf{T }_{m}\) is completely continuous. Furthermore, for any \(R>0\), set \(B_{R}=\{\mathbf{u }\in K(\theta ): \Vert \mathbf{u }\Vert \le R\}\); then \(\mathbf{T }_{m}\) converges uniformly to \(\mathbf{T }\) on \(B_{R}\) as \(m \rightarrow \infty \). In fact, for each \(d\in \{0,1,2\}\), set \(J_{d}=\max _{(t,s)\in [0,1]\times [0,1] }\frac{\partial ^{ d }H_{i}(t,s) }{\partial t^{d}}\). For \(R >0\) and \(\mathbf{u } \in B_{R}\), we have

$$\begin{aligned}&|T_{i,m}(\mathbf{u })^{(d)} (t)-T_{i}(\mathbf{u })^{(d)} (t) | \\&\quad \le J_{d} \displaystyle \int ^{1}_{0} |f_{i,m}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))-f_{i}(s, \mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))|\,ds\\&\quad \le J_{d} \displaystyle \int ^{\frac{1}{m}}_{0} |f_{i,m}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))-f_{i}(s, \mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))|\,ds\\&\qquad + J_{d} \displaystyle \int ^{1}_{1-\frac{1}{m}} |f_{i,m}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))-f_{i}(s, \mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))|\,ds\\&\quad \le J_{d} \left( \displaystyle \int ^{\frac{1}{m}}_{0} f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s)) \,ds+\displaystyle \int ^{1}_{1-\frac{1}{m}} f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s)) \,ds \right) \\&\qquad \rightarrow 0 \text { as } (m\rightarrow \infty ) \end{aligned}$$

Hence \(\mathbf{T }_{m}\) converges uniformly to \(\mathbf{T }\) on \(B_{R}\) as \(m\rightarrow \infty \), and therefore \(\mathbf{T }\) is completely continuous. The proof is complete. \(\square \)
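The truncation (2.8) used above simply replaces a possibly singular \(f_{i}\) near \(t=0\) and \(t=1\) by an infimum over the corresponding end interval. A minimal sketch of this regularization, for a hypothetical singular scalar nonlinearity chosen only for illustration, is the following.

```python
import numpy as np

def truncate(f, m, n_inf=2000):
    """Return f_m as in (2.8) for a scalar f(t, u) that may blow up at t = 0, 1."""
    def f_m(t, u):
        if t <= 1.0 / m:                        # infimum over (t, 1/m]
            grid = np.linspace(t, 1.0 / m, n_inf)[1:]
        elif t >= 1.0 - 1.0 / m:                # infimum over [1 - 1/m, t)
            grid = np.linspace(1.0 - 1.0 / m, t, n_inf)[:-1]
        else:                                   # unchanged in the middle
            return f(t, u)
        return float(np.min(f(grid, u)))
    return f_m

f = lambda t, u: (1.0 + u) / np.sqrt(t * (1.0 - t))   # singular at t = 0 and t = 1
f_10 = truncate(f, 10)
print(f(0.5, 1.0), f_10(0.5, 1.0))          # identical away from the endpoints
print(f_10(0.001, 1.0), f_10(0.999, 1.0))   # finite values near the singular endpoints
```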

3 Main results and proofs

Let \(\beta :K(\theta )\rightarrow [0,+\infty )\) be a functional defined by:

$$\begin{aligned} \beta (\mathbf{u })=\displaystyle \min _{t\in [\theta ,1-\theta ]}\displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}u_{i}^{(d)}(t). \end{aligned}$$

Then it is easy to see that \(\beta \) is a nonnegative continuous concave functional on \(K(\theta )\); moreover, for each \(\mathbf{u }=(u_{1},\ldots ,u_{n})\in K(\theta )\), one has

$$\begin{aligned}\beta (\mathbf{u })\le \Vert \mathbf{u }\Vert . \end{aligned}$$

Let \(p_{i}\), \(q_{k,i}\) be positive numbers such that \(\sum ^{n}_{i=1}\left( \frac{1}{p_{i}}+\frac{5}{3q_{3,i}}+\frac{2}{q_{2,i}}+\frac{1}{q_{1,i}}\right) \le 1\).

Our first existence result is the following:

Theorem 3.1

Let \(a, b, c\in {\mathbb {R}}\) be such that \(0< a< b < \frac{b}{\gamma (\theta )} \le c\). Assume that

\((H_{1})\) :

For all \(u\in {\mathbb {R}}^{n}\) such that \(\sum ^{n}_{i=1}u_{i}\in [0,c]\), we have

$$\begin{aligned} h_{k,i}(u)\le \frac{L_{k,i}}{q_{k,i}}\sum ^{n}_{j=1}u_{j} \; \text { for all } k\in \{1,2,3\}. \end{aligned}$$
\((H_{2})\) :

For all \(u_{k}=(u_{k,1},\ldots ,u_{k,n})\) such that \(\sum ^{3}_{k=1}\sum ^{n}_{i=1}u_{k,i}\in [0,c]\), we have

$$\begin{aligned} \displaystyle f_{i}(t,u_{1},u_{2},u_{3})\le \frac{c}{3 p_{i}M_{i}},\;t \in [0, 1]. \end{aligned}$$
\((H_{3})\) :

For all \(u_{k}=(u_{k,1},\ldots ,u_{k,n})\) such that \(\sum ^{3}_{k=1}\sum ^{n}_{i=1}u_{k,i}\in [0,a]\), we have

$$\begin{aligned} \displaystyle f_{i}(t,u_{1},u_{2},u_{3})\le \frac{a}{3 p_{i}M_{i}},\;t \in [0, 1]. \end{aligned}$$
\((H_{4})\) :

For all \(u_{k}=(u_{k,1},\ldots ,u_{k,n})\) such that \(\sum ^{3}_{k=1}\sum ^{n}_{i=1}u_{k,i}\in \left[ b,\frac{b}{\gamma (\theta )} \right] \) we have

$$\begin{aligned} \displaystyle f_{i}(t,u_{1},u_{2},u_{3})\ge \frac{b}{n\sum ^{2}_{d=0}m_{d}(\theta )},\;t \in [\theta , 1-\theta ]. \end{aligned}$$

Then the boundary value problem (1.4)–(1.5) has at least three nonnegative solutions \(\mathbf{u }_{1}\), \(\mathbf{u }_{2}\), \(\mathbf{u }_{3}\) in \({{\overline{P}}_{c}}\) such that \(\Vert \mathbf{u }_{1}\Vert <a\), \(\beta (\mathbf{u }_{2})> b\) and \(\Vert \mathbf{u }_{3}\Vert >a\) with \(\beta (\mathbf{u }_{3})< b\).

Proof

First, let us prove that the operator \(\mathbf{T }\) maps \({{\overline{P}}_{c}}\) into itself. Indeed, if \(\mathbf{u }=(u_{1},\ldots ,u_{n})\in {{\overline{P}}_{c}}\), then \(\Vert \mathbf{u }\Vert \le c\). Moreover, by hypothesis (\(H_{1}\)), we get

$$\begin{aligned} h_{1,i}(\psi _{1}[u_{1}],\ldots ,\psi _{1}[u_{n}])\le & {} \frac{L_{1,i}}{q_{1,i}}(\psi _{1}[u_{1}+\cdots +u_{n}])\le \frac{L_{1,i}}{q_{1,i}}\psi _{1}[1]\Vert \mathbf{u }\Vert \le \frac{c}{q_{1,i}},\\ h_{2,i}(\psi _{2}[u_{1}],\ldots ,\psi _{2}[u_{n}])\le & {} \frac{L_{2,i}}{q_{2,i}}(\psi _{2}[u_{1}+\cdots +u_{n}]) \le \frac{L_{2,i}}{q_{2,i}}\psi _{2}[1]\Vert \mathbf{u }\Vert \le \frac{c}{q_{2,i}} \end{aligned}$$

and

$$\begin{aligned} h_{3,i}(\psi _{3}[u_{1}],\ldots ,\psi _{3}[u_{n}]) \le \frac{L_{3,i}}{q_{3,i}}(\psi _{3}[u_{1}+\cdots +u_{n}]) \le \frac{L_{3,i}}{q_{3,i}}\psi _{3}[1]\Vert \mathbf{u }\Vert \le \frac{c}{K_{i}q_{3,i}}. \end{aligned}$$

Thus, from hypothesis (\(H_{2}\)), we have

$$\begin{aligned} \Vert T_{i}(\mathbf{u })\Vert _{\infty }&\le \max _{t\in [0,1]}\displaystyle \int ^{1}_{0} H_{i}(t,s)f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds +\max _{t\in [0,1]}\displaystyle P_{i}(t)\\&\le \max _{t\in [0,1]}\displaystyle \int ^{1}_{0} H_{i}(t,s)\,ds\, \frac{c}{3 p_{i}M_{i}} +\frac{ c}{6q_{3,i}}+\frac{ c}{q_{2,i}}+\frac{c}{q_{1,i}}\\&\le \frac{c}{3 p_{i}} +\frac{ c}{6q_{3,i}}+\frac{ c}{q_{2,i}}+\frac{c}{q_{1,i}},\\ \Vert T_{i}(\mathbf{u })'\Vert _{\infty }&\le \max _{t\in [0,1]}\displaystyle \int ^{1}_{0} \frac{\partial H_{i}(t,s)}{\partial t}f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds +\max _{t\in [0,1]}\displaystyle \frac{ \partial P_{i}(t)}{\partial t}\\&\le \max _{t\in [0,1]}\displaystyle \int ^{1}_{0} \frac{\partial H_{i}(t,s)}{\partial t}\,ds\, \frac{c}{3 p_{i}M_{i}} +\frac{ c}{2 q_{3,i}}+\frac{ c}{q_{2,i}}\\&\le \frac{c}{3 p_{i}} +\frac{ c}{2q_{3,i}}+\frac{ c}{q_{2,i}} \end{aligned}$$

and

$$\begin{aligned} \Vert T_{i}(\mathbf{u })''\Vert _{\infty }&\le \max _{t\in [0,1]}\displaystyle \int ^{1}_{0} \frac{\partial ^{2} H_{i}(t,s)}{\partial t^{2}}f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds +\max _{t\in [0,1]}\displaystyle \frac{ \partial ^{2} P_{i}(t)}{\partial t^{2}}\\&\le \max _{t\in [0,1]}\displaystyle \int ^{1}_{0} \frac{\partial ^{2} H_{i}(t,s)}{\partial t^{2}}\,ds\, \frac{c}{3 p_{i}M_{i}} +\frac{ c}{q_{3,i}}\\&\le \frac{c}{3 p_{i}} +\frac{ c}{q_{3,i}}, \end{aligned}$$

which yields

$$\begin{aligned} \Vert \mathbf{T }(\mathbf{u })\Vert&=\displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}\Vert T_{i}(\mathbf{u })^{(d)}\Vert _{\infty }\\&\le \displaystyle \sum ^{n}_{i=1}\frac{c}{3 p_{i}} +\frac{ c}{6q_{3,i}}+\frac{ c}{q_{2,i}}+\frac{c}{q_{1,i}}\\&\quad +\displaystyle \sum ^{n}_{i=1}\frac{c}{3 p_{i}} +\frac{ c}{2q_{3,i}}+\frac{ c}{q_{2,i}}+\frac{c}{q_{1,i}}\\&\quad + \displaystyle \sum ^{n}_{i=1}\frac{c}{3 p_{i}} +\frac{ c}{q_{3,i}}+\frac{ c}{q_{2,i}} \\&=\displaystyle \sum ^{n}_{i=1}\frac{c}{p_{i}}+\frac{5 c}{3q_{3,i}}+\frac{2c}{q_{2,i}}+\frac{c}{q_{1,i}}\le c. \end{aligned}$$

Hence \(\Vert \mathbf{T }(\mathbf{u })\Vert \le c\), that is, \(\mathbf{T }:{{\overline{P}}_{c}}\rightarrow {{\overline{P}}_{c}}\). It is easy to prove, by the Arzelà–Ascoli theorem [6], that the operator \(\mathbf{T }\) is completely continuous. In the same way, condition \((H_{3})\) implies that condition (\(A_{2}\)) of Theorem 2.3 is satisfied.

We now show that condition (\(A_{1}\)) of Theorem 2.3 is satisfied. Clearly, if

$$\begin{aligned}\mathbf{u }(t)=\left( \frac{b}{6n}+\frac{b}{6n\,\gamma (\theta )}\right) (1,\ldots ,1), \end{aligned}$$

then, \(\beta (\mathbf{u })>b\) and \(\Vert \mathbf{u }\Vert \le \frac{b}{\gamma (\theta )}\), that is

$$\begin{aligned} \left\{ \mathbf{u }\in P\left( \beta ,b,\frac{b}{\gamma (\theta )}\right) ;\beta (\mathbf{u })>b\right\} \ne \emptyset . \end{aligned}$$

Let \(\mathbf{u }=(u_{1},\ldots ,u_{n})\in P\left( \beta ,b,\frac{b}{\gamma (\theta )}\right) \), then, from \((H_{4})\) we have

$$\begin{aligned} b\le \displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}u_{i}^{(d)}(t)\le \frac{b}{\gamma (\theta )},\;\; \;\;\; t\in [\theta ,1-\theta ]. \end{aligned}$$

Moreover

$$\begin{aligned} \beta (\mathbf{T }(\mathbf{u }))&=\displaystyle \min _{t\in [\theta ,1-\theta ]}\displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}\left( \displaystyle \int ^{1}_{0} \frac{\partial ^{d} H_{i}(t,s)}{\partial t^{d}}f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds +\displaystyle \frac{\partial ^{d}P_{i}(t)}{\partial t^{d}}\right) \\&\ge \displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}\min _{t\in [\theta ,1-\theta ]}\displaystyle \int ^{1-\theta }_{\theta } \frac{\partial ^{d} G(t,s)}{\partial t^{d}}f_{i}(s,\mathbf{u }(s),\mathbf{u }'(s),\mathbf{u }''(s))\,ds\\&\ge \displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}\min _{t\in [\theta ,1-\theta ]}\displaystyle \int ^{1-\theta }_{\theta } \frac{\partial ^{d} G(t,s)}{\partial t^{d}}\,ds\, \frac{b}{n\sum ^{2}_{d=0}m_{d}(\theta )}\\&\ge b. \end{aligned}$$

Therefore, condition (\(A_{1}\)) of Theorem 2.3 is satisfied.

Finally, if

$$\begin{aligned} \mathbf{u }=(u_{1},\ldots ,u_{n})\in P(\beta ,b,c)\;\; \text{ and }\;\; \Vert \mathbf{T }(\mathbf{u })\Vert > \frac{b}{\gamma (\theta )}, \end{aligned}$$

then

$$\begin{aligned} \beta (\mathbf{T }(\mathbf{u }))=\min _{t\in [\theta ,1-\theta ]}\displaystyle \sum ^{n}_{i=1}\sum ^{2}_{d=0}T_{i}(\mathbf{u })^{(d)}(t)\ge \gamma (\theta ) \Vert \mathbf{T }(\mathbf{u })\Vert \ge \gamma (\theta )\frac{b}{\gamma (\theta )}=b. \end{aligned}$$

Therefore, condition (\(A_{3}\)) of Theorem 2.3 is also satisfied. By Theorem 2.3, problem (1.4)–(1.5) has at least three nonnegative solutions \(\mathbf{u }_{1}\), \(\mathbf{u }_{2}\) and \(\mathbf{u }_{3}\) such that \(\Vert \mathbf{u }_{1}\Vert <a\), \(\beta (\mathbf{u }_{2})> b\) and \(\Vert \mathbf{u }_{3}\Vert >a\) with \(\beta (\mathbf{u }_{3})< b\). The proof of Theorem 3.1 is now complete. \(\square \)

From the proof of Theorem 3.1, it is easy to see that, if conditions of the type \((H_{1})\)–\((H_{4})\) are appropriately combined, we can obtain an arbitrary number of positive solutions of problem (1.4)–(1.5). More precisely, let m be a positive integer. Assume that there exist numbers \(b_{j}\) (\(1 \le j \le m-1\)) and \(c_{l}\) (\(1 \le l \le m\)) such that

$$\begin{aligned} 0< c_{1}< b_{1}< \frac{b_{1}}{\gamma (\theta )} \le c_{2}< b_{2}< \frac{b_{2}}{\gamma (\theta )} \le c_{3}<\cdots \le c_{m-1}< b_{m-1} < \frac{b_{m-1}}{\gamma (\theta )} \le c_{m}, \end{aligned}$$

Then, replacing hypotheses \((H_{1})\)–\((H_{4})\) of Theorem 3.1 by the following hypotheses:

\((H_{m,1})\) :

For all \(1 \le l \le m\) and \(u\in {\mathbb {R}}^{n}\) such that \(\sum ^{n}_{i=1}u_{i}\in [0,c_{l}]\), we have

$$\begin{aligned} \displaystyle h_{k,i}(u)\le \frac{L_{k,i}}{q_{k,i}}\displaystyle \sum ^{n}_{j=1}u_{j}, \text { for all } k \in \{1,2,3\}. \end{aligned}$$
\((H_{m,2})\) :

For all \(1 \le l \le m\) and \((u_{1},u_{2},u_{3})\in {\mathbb {R}}^{3 n}\) such that \(\sum ^{3}_{k=1}\sum ^{n}_{i=1}u_{k,i}\in [0,c_{l}]\), we have

$$\begin{aligned} \displaystyle f_{i}(t,u_{1},u_{2},u_{3})\le \frac{c_{l}}{3 p_{i}M_{i}},\;t \in [0, 1]. \end{aligned}$$
\((H_{m,3})\) :

For all \(1 \le j \le m-1\) and \((u_{1},u_{2},u_{3})\in {\mathbb {R}}^{3 n}\) such that \(\sum ^{3}_{k=1}\sum ^{n}_{i=1}u_{k,i}\in \left[ b_{j},\frac{b_{j}}{\gamma (\theta )}\right] \), we have

$$\begin{aligned} \displaystyle f_{i}(t,u_{1},u_{2},u_{3})\ge \frac{b_{j}}{n\sum ^{2}_{d=0}m_{d}(\theta )},\;\;t \in [\theta , 1-\theta ]. \end{aligned}$$

we obtain the following result:

Theorem 3.2

Under hypotheses \((H_{m,1})\)–\((H_{m,3})\), problem (1.4)–(1.5) has at least \(2m-1\) nonnegative solutions in \(\overline{P_{c_{m}}}\).

Proof

In order to prove Theorem 3.2, observe that for \(m = 1\), hypotheses \((H_{m,1})\) and \((H_{m,2})\) (with \(l=1\)) imply that \(\mathbf{T }:\overline{P_{c_{1}}}\rightarrow \overline{P_{c_{1}}}\). Then it follows from the Schauder fixed point theorem that (1.4)–(1.5) has at least one nonnegative solution in \(\overline{P_{c_{1}}}\). Moreover, for \(m = 2\), it is clear that Theorem 3.1 applies (with \(a=c_{1} \), \(b=b_{1} \) and \(c=c_{2} \)), so we obtain three nonnegative solutions \(x_{2}\), \(x_{3}\) and \(x_{4}\).

Proceeding in this way, we can finish the proof by induction. To this end, suppose that there exist numbers \(b_{j}\) (\(1 \le j \le m\)) and \(c_{l}\) (\(1 \le l \le m+1\)) such that

$$\begin{aligned} 0< c_{1}< b_{1}< \frac{b_{1}}{\gamma (\theta )} \le c_{2}< b_{2}< \frac{b_{2}}{\gamma (\theta )} \le \cdots \le c_{m}< b_{m} < \frac{b_{m}}{\gamma (\theta )} \le c_{m+1}, \end{aligned}$$

and that \((H_{m+1,1})\), \((H_{m+1,2})\) and \((H_{m+1,3})\) hold. We know by the inductive hypothesis that (1.4)–(1.5) has at least \(2m-1\) nonnegative solutions \(u_{i}\) \((i = 1, 2, \ldots , 2m - 1)\) in \(\overline{P_{c_{m}}}\). At the same time, it follows from Theorem 3.1, \((H_{m+1,1})\), \((H_{m+1,2})\) and \((H_{m+1,3})\) that (1.4)–(1.5) has at least three nonnegative solutions \(\mathbf{u }\), \(\mathbf v \) and \(\mathbf w \) in \(\overline{P_{c_{m+1}}}\) such that \(\Vert \mathbf{u }\Vert <c_{m}\), \(\beta (\mathbf v )> b_{m}\) and \(\Vert \mathbf w \Vert >c_{m}\) with \(\beta (\mathbf w )< b_{m}\). Obviously, \(\mathbf v \) and \(\mathbf w \) are not in \(\overline{P_{c_{m}}}\). Therefore, (1.4)–(1.5) has at least \(2m + 1\) nonnegative solutions in \(\overline{P_{c_{m+1}}}\). This completes the proof. \(\square \)
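The induction of Theorem 3.2 requires constants with \(c_{l}< b_{l} < \frac{b_{l}}{\gamma (\theta )} \le c_{l+1}\). Since \(\gamma (\theta )=\frac{\theta ^{3}}{6}\) is small, such a ladder is easy to produce; the following sketch (one arbitrary admissible construction, for illustration only) generates one.

```python
theta = 0.25
gamma = theta**3 / 6.0          # gamma(theta) from the definition of the cone K(theta)

def ladder(m, c1=1.0):
    """Return (c_1,...,c_m) and (b_1,...,b_{m-1}) with c_l < b_l < b_l/gamma <= c_{l+1}."""
    cs, bs = [c1], []
    for _ in range(m - 1):
        b = 2.0 * cs[-1]        # any b_l > c_l works here
        bs.append(b)
        cs.append(b / gamma)    # smallest admissible c_{l+1}
    return cs, bs

cs, bs = ladder(4)
print("c:", cs)
print("b:", bs)
```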

We can strengthen the above result and obtain the following theorem, which is especially useful in applications.

Theorem 3.3

Suppose that the assumptions of Theorem 3.2 hold and that, in addition,

$$\begin{aligned} \text {there exists } t_{0,i}\in (0,1) \text { such that }f_{i}(t_{0,i},x,y,z)>0,\;\;\;\forall x, y,z\in {\mathbb {R}}^{n}_{+}, \end{aligned}$$
(3.1)

holds. Then (1.4)–(1.5) has at least \(2m-1\) positive solutions in \(\overline{P_{c_{m}}}\).

Proof

Let \(u_{i,l}\), \(l\in \{1,\ldots ,2m-1\}\), be the \(2m-1\) nonnegative solutions of problem (1.4)–(1.5) whose existence is guaranteed by Theorem 3.2. Then \(u_{i,l}\) satisfies the integral equation

$$\begin{aligned} u_{i,l}(t)=\displaystyle \int ^{1}_{0}H_{i}(t,s)f_{i}(s,\mathbf{u }_{l}(s), \mathbf{u }_{l}'(s),\mathbf{u }_{l}''(s))ds+P_{i}(t). \end{aligned}$$

We claim that \(u_{i,l}(t)>0\) for all \(t\in (0,1)\). Indeed, suppose on the contrary that there exists \(t^{*}\in (0,1)\) such that \(u_{i,l}(t^{*}) = 0\). Since \(u_{i,l}(t) \ge 0\), \(u_{i,l}'(t) \ge 0\) and \(u_{i,l}''(t) \ge 0\) for all \(t\in [0,1]\), we have

$$\begin{aligned} u_{i,l}(t^{*}) =0=\displaystyle \int ^{1}_{0}H_{i}(t^{*},s)f_{i}(s,\mathbf{u }_{l}(s), \mathbf{u }_{l}'(s),\mathbf{u }_{l}''(s))ds+P_{i}(t^{*})\ge 0. \end{aligned}$$

Since the functions \(\displaystyle H_{i}\) and \(f_{i}\) are nonnegative, it follows that

$$\begin{aligned} \displaystyle H_{i}(t^{*},s)f_{i}(s,\mathbf{u }_{l}(s),\mathbf{u }_{l}'(s), \mathbf{u }_{l}''(s))=0\;\;\;\text {for almost every } s\in (0,1). \end{aligned}$$

Since \(f_{i}(s,\mathbf{u }_{l}(s),\mathbf{u }_{l}'(s),\mathbf{u }_{l}''(s))\ge 0\) and \(\displaystyle H_{i}(t^{*},\cdot )\) is positive on (0, 1), we deduce that

$$\begin{aligned} f_{i}(s,\mathbf{u }_{l}(s),\mathbf{u }_{l}'(s), \mathbf{u }_{l}''(s))=0\;\;\;\text {for almost every } s\in (0,1). \end{aligned}$$

On the other hand, by condition (3.1) and the continuity of the functions \(f_{i}\), there exists a subset \(\Omega \subset (0,1)\) with \(\mu (\Omega ) > 0\), where \(\mu \) denotes the Lebesgue measure on [0, 1], such that \(f_{i}(s,\mathbf{u }_{l}(s),\mathbf{u }_{l}'(s),\mathbf{u }_{l}''(s))>0\) on \(\Omega \), which is a contradiction. This ends the proof. \(\square \)

Remark 3.4

It is clear that the conclusion of Theorem 3.3 remains valid if we replace condition (3.1) by the following: there exists \(k_{0}\in \{1,2,3\}\) such that, for all \(i\in \{1,\ldots ,n\}\), \(h_{k_{0},i}(x)>0\) for all \(x\in {\mathbb {R}}^{n}_{+}\).

Remark 3.5

In the special case when, for each \(t\in (0,1)\), the functions \(f_{i}\) are nondecreasing with respect to the second, third and fourth variables, condition (3.1) can be replaced by

$$\begin{aligned} \text {For all } i\in \{1,\ldots ,n\}, \text { there exists } t_{0,i}\in (0,1) \text { such that }f_{i}(t_{0,i},\mathbf{0 },\mathbf{0 },\mathbf{0 })>0, \end{aligned}$$
(3.2)

where \(\mathbf{0 }=(0,\ldots ,0)\in {\mathbb {R}}^{n}\).

4 Example

In this section, we present an example to illustrate our main theorems. Let \(f_{1}\) and \(f_{2}\) be two functions defined by:

$$\begin{aligned} f_{1}(t,u_{1},u_{2},u_{3})=\left\{ \begin{array}{ll} \displaystyle \frac{\sin ^{2}\left( 21 \pi u\right) }{5}+\frac{2}{10}+\frac{1}{100} e^{-\frac{1}{1-t}}, &{} \quad \text {if }0\le u\le \frac{1}{2},\\ \displaystyle \left( u-\frac{1}{2}\right) ^{2} + \frac{4}{10} +\frac{1}{100} e^{-\frac{2 u}{1-t}}, &{} \quad \text {if } \frac{1}{2}\le u\le 1,\\ \displaystyle \frac{ 2935}{100} u-\frac{287}{10} +\frac{1}{100} e^{-\frac{2 u}{1-t}}, &{} \quad \text {if }1\le u\le 2,\\ \displaystyle \frac{\left| \cos \left( \frac{\pi u}{4}\right) \right| }{1000}+30 +\frac{1}{100} e^{-\frac{2 u}{1-t}}, &{} \quad \text {if }2\le u\le 768,\\ \displaystyle \frac{30001}{1000}e^{768-u}+\frac{2 u}{5}\left| \sin \left( \frac{\pi u}{768}\right) \right| +\frac{1}{100} e^{-\frac{2 u}{1-t}}, &{} \quad \text {if }768\le u, \end{array} \right. \end{aligned}$$

\(f_{2}(t,u_{1},u_{2},u_{3})=\)

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle \frac{\sqrt{\frac{3+3 u}{2}}}{|\sin \left( \pi u\right) |+14}+\frac{1}{1000} \left| \cos \left( \frac{1}{\sqrt{t-t^{2}}}\right) \right| , &{} \quad \text {if } 0\le u\le 1,\\ \displaystyle \ln \left( \frac{1+u}{2}\right) +\frac{\sqrt{3}}{14}+30 \left| \cos \left( \frac{\pi }{2} u\right) \right| +\frac{1}{1000} \left| \cos \left( \frac{u}{\sqrt{t-t^{2}}}\right) \right| , &{} \quad \text {if }1\le u\le 2,\\ \displaystyle \left( \ln \left( \frac{3}{2}\right) +\frac{\sqrt{3}}{14}\right) \left| \cos \left( \frac{\pi }{2} u\right) \right| +30 +\frac{1}{1000} \left| \cos \left( \frac{2}{\sqrt{t-t^{2}}}\right) \right| , &{} \quad \text {if } 2\le u\le 768,\\ \displaystyle \left( \ln \left( \frac{3}{2}\right) +30+\frac{\sqrt{3}}{14}\right) +\frac{u}{10}\left| \sin \left( \pi u\right) \right| +\frac{1}{1000} \left| \cos \left( \frac{2}{\sqrt{t-t^{2}}}\right) \right| , &{} \quad \text {if } 768\le u, \end{array} \right. \end{aligned}$$

where \(u_k=(u_{1,k},u_{2,k})\) and \(u=\sum ^{2}_{i=1}\sum ^{3}_{k=1}u_{i,k}\).

For \(i=1,2\) and \(v=u_{1,1}+u_{2,1}\), we define the functions \(h_{k,i}\) as follows:

$$\begin{aligned} h_{1,i}(u_{1})= & {} \left\{ \begin{array}{ll} \displaystyle \frac{\ln (1+v)}{100\,i\,\ln (i+2)\left( v+1\right) },&{} \quad \text {if } 0\le v\le 2,\\ \displaystyle \frac{\ln (3)}{300\,i\,\ln (i+2)},&{} \quad \text {if } 2\le v, \end{array} \right. \\ h_{2,i}(u_{1})= & {} \left\{ \begin{array}{ll} \displaystyle \frac{v \,e^{v}}{40\, i \,e^{2 i}(v+1)},&{} \quad \text {if } 0\le v\le 2,\\ \displaystyle \frac{v\, e^{2}}{120 \,i\,e^{2 i}},&{} \quad \text {if } 2\le v, \end{array} \right. \\ h_{3,i}(u_{1})= & {} \left\{ \begin{array}{ll} \displaystyle \frac{7}{2250\,\left| \sin i\right| }\times \frac{v\ln (2)}{2(1+v)\left( \sqrt{i v}+1\right) },&{} \quad \text {if } 0\le v\le 2,\\ \displaystyle \frac{7}{2250\, |\sin i|}\times \frac{2\ln (2)}{6\left( \sqrt{2 i}+1\right) },&{} \quad \text {if } 2\le v, \end{array} \right. \end{aligned}$$

and we consider the following boundary value problem:

$$\begin{aligned} \left\{ \begin{array}{ll} u_{1}^{(4)}(t)+f_{1}(t,\mathbf{u }(t),\mathbf{u }'(t),\mathbf{u }''(t)) =0,~0<t<1,\\ u_{2}^{(4)}(t)+f_{2}(t,\mathbf{u }(t),\mathbf{u }'(t),\mathbf{u }''(t)) =0,~0<t<1,\\ u_{1}(0)=h_{1,1}(\psi _{1}[u_{1}],\psi _{1}[u_{2}]),\\ u_{2}(0)=h_{1,2}(\psi _{1}[u_{1}],\psi _{1}[u_{2}]),\\ u_{1}'(0)=h_{2,1}(\psi _{2}[u_{1}],\psi _{2}[u_{2}]),\\ u_{2}'(0)=h_{2,2}(\psi _{2}[u_{1}],\psi _{2}[u_{2}]),\\ u_{1}''(0)=u_{2}''(0)=0,\\ u_{1}''(1)=\displaystyle \sum ^{2}_{j=1}\beta _{j,1} u_{1}''(\eta _{j,1}) +h_{3,1}(\psi _{3}[u_{1}],\psi _{3}[u_{2}]),\\ u_{2}''(1)=\displaystyle \sum ^{2}_{j=1}\beta _{j,2} u_{2}''(\eta _{j,2}) +h_{3,2}(\psi _{3}[u_{1}],\psi _{3}[u_{2}]). \end{array} \right. \end{aligned}$$
(4.1)

We shall apply Theorem 3.3 with the following data:

\(a=1\), \(b=2\), \(c=844.8\), \(\theta =\frac{1}{4}\), \(\gamma (\theta )=\frac{\theta ^{3}}{6}\), \(\frac{b}{\gamma (\theta )}=768\), \(m_{0}(\theta )=\frac{1}{1536}\), \(m_{1}(\theta )=\frac{1}{128}\), \(m_{2}(\theta )=\frac{1}{16}\), \(\psi _{1}[1]=1\), \(\psi _{2}[1]=2\), \(\psi _{3}[1]=3\), \(\beta _{j,i}=\frac{i}{5}\), \(\eta _{j,i}=\frac{i}{6}\), \(K_{1}=\frac{15}{14}\), \(K_{2}=\frac{15}{11}\), \(M_{0,1}=\frac{17}{336}\), \(M_{1,1}=\frac{37}{336}\), \(M_{2,1}=\frac{5}{28}\), \(M_{1}=\frac{5}{28}\), \(M_{0,2}=\frac{17}{264}\), \(M_{1,2}=\frac{5}{33}\), \(M_{2,2}=\frac{23}{88}\), \(M_{2}=\frac{23}{88}\), \(L_{1,1}=L_{1,2}=1\), \(L_{2,1}=L_{2,2}=\frac{1}{2}\), \(L_{3,1}=\frac{14}{45}\), \(L_{3,2}=\frac{11}{45}\), \(q_{1,i}=100\,\ln (i+2)\), \(q_{2,i}=20 i\), \(q_{3,i}=100\,|\sin i|\) and \(p_{i}=e^{i}\).

One can easily check that the following statements hold:

  1.

    By calculating we have

    $$\begin{aligned} \displaystyle \sum ^{2}_{i=1}\left( \frac{1}{p_{i}}+\frac{5}{3q_{3,i}}+\frac{2}{q_{2,i}}+\frac{1}{q_{1,i}}\right) =0.707666\le 1, \end{aligned}$$

    and also we have: \(\displaystyle \frac{L_{1,1}}{q_{1,1}}=\frac{1}{100\ln 3}\), \(\displaystyle \frac{L_{1,2}}{q_{1,2}}=\frac{1}{200\ln 2}\), \(\displaystyle \frac{L_{2,1}}{q_{2,1}}=\frac{1}{40}\), \(\displaystyle \frac{L_{2,2}}{q_{2,2}}=\frac{1}{80}\), \(\displaystyle \frac{L_{3,1}}{q_{3,1}}=\frac{7}{2250\,|\sin 1|}\) and \(\displaystyle \frac{L_{3,2}}{q_{3,2}}=\frac{11}{4500\,|\sin 2|}\) (a numerical check of these constants is sketched after this list).

  2.

    \(f_{1}\) satisfies the following conditions:

    • \(\displaystyle f_{1}(t,u_{1},u_{2},u_{3})\le 0.66\le \frac{a}{\displaystyle 3p_{1}M_{1}}=0.686708\) for all \(u\in [0,1]\).

    • \(\displaystyle f_{1}(t,u_{1},u_{2},u_{3})\ge 30\ge \frac{b}{2\sum ^{2}_{d=0}m_{d}(\theta )}=14.0917\) for all \(u\in [2,768]\).

    • \(f_{1}(t,u_{1},u_{2},u_{3})\le 367.931 \le \frac{c}{\displaystyle 3p_{1}M_{1}}= 580.131\) for all \(u\in [0,844.8]\).

    • \(\displaystyle \int ^{1}_{0 } f_{1}(s,x,y,z) \,ds < +\infty \text{ for } \text{ any } x, y, z \in [0,+\infty )\).

  3.

    \(f_{2}\) satisfies the following conditions:

    • \(\displaystyle f_{2}(t,u_{1},u_{2},u_{3})\le \frac{\sqrt{3}}{14}+\frac{1}{1000}=0.124718 \le \frac{a}{\displaystyle 3p_{2}M_{2}}=0.172602\) for all \(u\in [0,1]\).

    • \(\displaystyle f_{2}(t,u_{1},u_{2},u_{3})\ge 30\ge \frac{b}{2\sum ^{2}_{d=0}m_{d}(\theta )}=14.0917\) for all \(u\in [2,768]\).

    • \(\displaystyle f_{2}(t,u_{1},u_{2},u_{3})\le \ln \left( \frac{3}{2}\right) +110.001+\frac{\sqrt{3}}{14} \le \frac{c}{\displaystyle 3p_{2}M_{2}}= 213.418\) for all \(u\in [0,844.8]\).

    • \(\displaystyle \int ^{1}_{0 } f_{2}(s,x,y,z) \,ds < +\infty \text{ for } \text{ any } x,y,z \in [0,+\infty )\).

  4.

    The functions \(h_{k,i}\) satisfy the following conditions: \(\displaystyle h_{1,i}(u_{1})\le \frac{L_{1,i}}{q_{1,i}} v\), \(\displaystyle h_{2,i}(u_{1})\le \frac{L_{2,i}}{q_{2,i}} v\) and \(\displaystyle h_{3,i}(u_{1})\le \frac{L_{3,i}}{q_{3,i}} v\).
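The constants listed in item 1 can be reproduced directly from their definitions; the following sketch (using the data of this section, where the functionals \(\psi _{k}\) enter only through \(\psi _{k}[1]\)) carries out this computation.

```python
import numpy as np

psi1 = {1: 1.0, 2: 2.0, 3: 3.0}                        # psi_k[1], k = 1, 2, 3
K = {i: 1.0 / (1.0 - sum((i / 5) * (i / 6) for _ in range(2))) for i in (1, 2)}
L = {(1, i): 1.0 / psi1[1] for i in (1, 2)}
L.update({(2, i): 1.0 / psi1[2] for i in (1, 2)})
L.update({(3, i): 1.0 / (K[i] * psi1[3]) for i in (1, 2)})
q = {(1, i): 100 * np.log(i + 2) for i in (1, 2)}
q.update({(2, i): 20.0 * i for i in (1, 2)})
q.update({(3, i): 100 * abs(np.sin(i)) for i in (1, 2)})
p = {i: np.exp(i) for i in (1, 2)}

print("K_1, K_2         :", K[1], K[2])                # 15/14 and 15/11
print("L_{3,1}, L_{3,2} :", L[(3, 1)], L[(3, 2)])      # 14/45 and 11/45
total = sum(1 / p[i] + 5 / (3 * q[(3, i)]) + 2 / q[(2, i)] + 1 / q[(1, i)] for i in (1, 2))
print("sum in item 1    :", total)                     # about 0.7077 <= 1
```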

Hence, all the assumptions of Theorem 3.3 hold. Therefore, Theorem 3.3 implies that problem (4.1) has at least three positive solutions \(\mathbf{u }_{1}\), \(\mathbf{u }_{2}\) and \(\mathbf{u }_{3}\) with \(\Vert \mathbf{u }_{1}\Vert <1\), \(\beta (\mathbf{u }_{2})> 2\), and \(\Vert \mathbf{u }_{3}\Vert >1\) with \(\beta (\mathbf{u }_{3})< 2\).