1 Introduction

Maximum principles play an important role in the qualitative theory of impulsive differential equations [1]. The monotone iterative technique, coupled with the method of lower and upper solutions, uses maximum principles to ensure that the sequences of approximate solutions converge to the extremal solutions of nonlinear impulsive problems (see, for example, [2–8]). Recently, some excellent results have been obtained by applying this approach to several impulsive problems with local jump conditions; see [9–13]. Such local jump conditions prescribe discontinuities of the solution values or of the derivative values at a set of discrete points. However, only a few papers have studied maximum principles for impulsive problems with nonlocal jump conditions; see [14–18].

In a recent paper [19], the authors considered the following periodic boundary value problem for second-order impulsive integro-differential equations with integral jump conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} x''(t)=f(t, x(t), (Kx)(t), (Sx)(t)),\quad t\in J=[0, T],\quad t\ne t_k,\\ \Delta x(t_k)=I_k\left( \int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\text {d}s\right) ,\quad k=1, 2,\ldots ,m,\\ \displaystyle \Delta x'(t_k)=I_k^*\left( \int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\text {d}s\right) ,\quad k=1, 2,\ldots ,m,\\ \displaystyle x(0)=x(T),\quad x'(0)=x'(T), \end{array}\right. } \end{aligned}$$
(1.1)

where \(0=t_0<t_1<t_2<\cdots <t_k<\cdots <t_m<t_{m+1}=T\), \(f: J\times R^3\rightarrow R\) is continuous everywhere except at \(\{t_k\}\times R^3\), \(f(t_k^+,x,y,z)\) and \(f(t_k^-,x,y,z)\) exist, \(f(t_k^-,x,y,z)=f(t_k,x,y,z)\), \(I_k\in C(R, R)\), \(I_k^*\in C(R, R)\), \(\Delta x(t_k)=x(t_k^+)-x(t_k^-)\), \(\Delta x'(t_k)=x'(t_k^+)-x'(t_k^-)\), \(0\le \varepsilon _k\le \delta _k\le t_k-t_{k-1}\), \(0\le \sigma _k\le \tau _k\le t_k-t_{k-1}\), \(k=1,2,\ldots , m\),

$$\begin{aligned} (Kx)(t)=\int _0^tk(t,s)x(s)\text {d}s,\quad (Sx)(t)=\int _0^Th(t, s)x(s)\text {d}s, \end{aligned}$$

\(k(t,s)\in C(D, R^+)\), \(h(t, s)\in C(J\times J, R^+)\), \(D=\{(t, s)\in R^2:\, 0\le s\le t\le T\}\), \(R^+=[0, +\infty )\), \(k_0=\max \{k(t,s)\,:\,(t,s)\in D\}\), \(h_0=\max \{h(t,s)\,:\,(t,s)\in J\times J\}\). They gave some maximum principles for these integral jump conditions and used the monotone iterative technique to obtain two sequences which approximate the extremal solutions of (1.1) between a lower and an upper solution. We note that the jump conditions of problem (1.1) depend on the areas under the curves of the solution and of its derivative over past intervals; that is, the impulse effects of such a problem carry a memory of the path history.
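For readers who wish to experiment numerically with problems of this type, the following short Python sketch (ours, not part of [19]) approximates the operators \((Kx)(t)\) and \((Sx)(t)\) by the trapezoidal rule; the kernels and the function \(x\) chosen below are merely illustrative.

```python
import numpy as np

# Minimal sketch (ours, not from [19]): approximating the Volterra operator
# (Kx)(t) = int_0^t k(t,s) x(s) ds and the Fredholm operator
# (Sx)(t) = int_0^T h(t,s) x(s) ds by the trapezoidal rule.
# The kernels k, h and the function x below are illustrative choices.

T = 1.0

def trapezoid(y, s):
    return float(np.sum((y[:-1] + y[1:]) * np.diff(s)) / 2.0)

def k(t, s):                 # example Volterra kernel on D = {0 <= s <= t <= T}
    return t * s

def h(t, s):                 # example Fredholm kernel on J x J
    return t**2 * s

def x(s):                    # example state
    return np.cos(s)

def K(t, n=2000):
    s = np.linspace(0.0, t, n)
    return trapezoid(k(t, s) * x(s), s)

def S(t, n=2000):
    s = np.linspace(0.0, T, n)
    return trapezoid(h(t, s) * x(s), s)

print(K(0.5), S(0.5))        # (Kx)(0.5) and (Sx)(0.5)
```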

In this paper, we mainly investigate maximum principles for impulsive integro-differential equations with the following integral jump conditions:

$$\begin{aligned} \displaystyle \Delta x(t_k)=L_k\int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\text {d}s,\quad \Delta x'(t_k)=L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\text {d}s,\quad k=1, 2,\ldots ,m, \end{aligned}$$
(1.2)

where \(0\le \sigma _k<\tau _k\le t_k-t_{k-1}\), \(0\le \varepsilon _k<\delta _k\le t_k-t_{k-1}\), and \(L_k\), \(L_k^*\) are given constants, \(k=1, 2, \ldots ,m\). The key tool in our proofs is a set of impulsive differential inequalities with integral jump conditions.

The plan of this paper is as follows. In Sect. 2, we present two new maximum principles. In Sect. 3, we use the method of upper and lower solutions and the monotone iterative technique, together with a comparison result, to obtain the existence of extremal solutions of a periodic boundary value problem.

2 Maximum Principles

Let \(J\subset R\) be an interval. For \(t\in J\), denote \(l=\max \{k:\ t\ge t_k,\ k= 1, 2, \ldots, m \}\) and \(J^-=J{\setminus } \{t_i,\ i=1,2,\ldots , m\}\). We define \(PC(J,R)=\{x:J\rightarrow R:\ x(t)\) is continuous everywhere except at the finitely many points \(t_k\), at which \(x(t_k^+)\) and \(x(t_k^-)\) exist and \(x(t_k^-)=x(t_k),\ k=1,2,\ldots, m\}\). We also define \(PC^1(J,R)=\{x\in PC(J,R):\ x'(t)\) is continuous everywhere except at the points \(t_k\), at which \(x'(t_k^+)\) and \(x'(t_k^-)\) exist and \(x'(t_k^-)=x'(t_k),\ k=1,2,\ldots, m\}\) and \(PC^2(J,R)=\{x\in PC^1(J,R):\ x|_{(t_k,t_{k+1})}\in C^2((t_k, t_{k+1}),R),\ k=1,2,\ldots, m\}\). We prove the maximum principles by using the following lemma.
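The index \(l\) above depends on \(t\): it is the index of the last impulse point not exceeding \(t\), so that integrals of the form \(\int_{t_l}^t\) appearing below start at that point. The following small Python sketch (ours) illustrates this bookkeeping, using the convention that \(l=0\) (i.e., \(t_l=t_0\)) when \(t<t_1\); the partition used is an arbitrary example.

```python
# Illustrative helper (ours): for a partition 0 = t0 < t1 < ... < tm < t_{m+1} = T,
# l = max{k : t >= t_k} is the index of the last impulse point not exceeding t,
# so that integrals of the form int_{t_l}^{t} start at that impulse point.

t_pts = [0.0, 0.3, 0.5, 0.8, 1.0]      # t0, t1, t2, t3, t4 = T (example partition, m = 3)

def last_impulse_index(t, impulses=tuple(t_pts[1:-1])):
    ks = [k + 1 for k, tk in enumerate(impulses) if t >= tk]
    return max(ks) if ks else 0        # convention: l = 0 (i.e. t_l = t0) when t < t1

print(last_impulse_index(0.4))         # 1, so t_l = t1 = 0.3
print(last_impulse_index(0.9))         # 3, so t_l = t3 = 0.8
```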

Lemma 2.1

([20]) Let \(\displaystyle r\in \{t_0, t_1,\ldots , t_m\}\), \(\displaystyle c_k > -1/(\tau _k-\sigma _k)\), \(\displaystyle 0 \le \sigma _k < \tau _k\le t_k-t_{k-1}\), \(\gamma _k\), \(k=1,2,\ldots ,m\) be constants and let \(q \in PC(J,R)\), \(x \in PC^1(J,R)\).

  1. (i)

    If

    $$\begin{aligned} {\left\{ \begin{array}{ll} x'(t)\le q(t),\quad t \in (r,T), \quad t\ne t_k,\\ \Delta x(t_k) \le c_k \int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\mathrm{d}s+ \gamma _k,\quad t_k \in (r,T),\quad k=1,2,\ldots ,m. \end{array}\right. } \end{aligned}$$

    Then for \(t \in (r,T]\),

    $$\begin{aligned} x(t)&\le x(r^+) \left( \prod _{r^+<t_k<t}[1+c_k(\tau _k-\sigma _k)]\right) +\sum _{r^+<t_k<t}\Bigg [\prod _{t_k<t_j<t}[1+c_j(\tau _j-\sigma _j)]\\&\quad \times \bigg ([1+c_k(\tau _k-\sigma _k)]\int _{t_{k-1}}^{t_k-\tau _k}q(s)\mathrm{d}s +\int _{t_k-\tau _k}^{t_k-\sigma _k} [1+c_k(t_k-\sigma _k-s)]q(s)\mathrm{d}s\\&\quad +\int _{t_k-\sigma _k}^{t_k} q(s)\mathrm{d}s\bigg )\Bigg ]+\int _{t_l}^tq(s)\mathrm{d}s. \end{aligned}$$
  2. (ii)

    If

    $$\begin{aligned} {\left\{ \begin{array}{ll} x'(t)\ge q(t),\quad t \in (r,T),\quad t\ne t_k,\\ \Delta x(t_k) \ge c_k \int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\mathrm{d}s+ \gamma _k,\quad t_k \in (r,T), \quad k=1,2,\ldots ,m. \end{array}\right. } \end{aligned}$$

    Then for \(t \in (r,T]\),

    $$\begin{aligned} x(t)&\ge x(r^+) \left( \prod _{r^+<t_k<t}[1+c_k(\tau _k-\sigma _k)]\right) +\sum _{r^+<t_k<t}\Bigg [\prod _{t_k<t_j<t}[1+c_j(\tau _j-\sigma _j)]\\&\quad \times \bigg ([1+c_k(\tau _k-\sigma _k)]\int _{t_{k-1}}^{t_k-\tau _k}q(s)\mathrm{d}s +\int _{t_k-\tau _k}^{t_k-\sigma _k} [1+c_k(t_k-\sigma _k-s)]q(s)\mathrm{d}s\\&\quad +\int _{t_k-\sigma _k}^{t_k} q(s)\text {d}s\bigg )\Bigg ]+\int _{t_l}^tq(s)\mathrm{d}s. \end{aligned}$$
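The estimates in Lemma 2.1 are easiest to digest in the case of a single impulse. The following Python sketch (our own illustration, with arbitrarily chosen data rather than data from [20]) constructs a function with \(x'=q\) and with the integral jump condition satisfied with equality and \(\gamma_1=0\), and compares \(x(t)\) with the right-hand side of part (i); in this extremal case the two agree up to quadrature error, while in general the right-hand side is an upper bound.

```python
import numpy as np

# Illustration of Lemma 2.1(i) with a single impulse (m = 1, r = t0 = 0).
# All data below are our own illustrative choices (not from [20]); gamma_1 = 0
# and x' = q exactly, so the estimate is attained up to quadrature error.

T, t1 = 1.0, 0.5
tau, sigma, c, x0 = 0.3, 0.1, 0.4, 0.2        # note c > -1/(tau - sigma)
q = np.cos

def integral(f, a, b, n=4000):
    s = np.linspace(a, b, n)
    y = f(s)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(s)) / 2.0)

def x_left(s):                                # x on [0, t1], with x' = q
    return x0 + np.sin(s)

jump = c * integral(x_left, t1 - tau, t1 - sigma)   # Delta x(t1), with gamma_1 = 0

def x_right(t):                               # x on (t1, T], again with x' = q
    return x_left(t1) + jump + (np.sin(t) - np.sin(t1))

def bound(t):                                 # right-hand side of Lemma 2.1(i), t in (t1, T]
    A = 1.0 + c * (tau - sigma)
    return (x0 * A
            + A * integral(q, 0.0, t1 - tau)
            + integral(lambda s: (1.0 + c * (t1 - sigma - s)) * q(s), t1 - tau, t1 - sigma)
            + integral(q, t1 - sigma, t1)
            + integral(q, t1, t))

for t in (0.6, 0.8, 1.0):
    print(t, x_right(t), bound(t))            # x(t) <= bound(t), nearly equal here
```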

We now present two new maximum principles.

Theorem 2.1

Assume that \(x \in PC^2(J,R)\) satisfies

$$\begin{aligned} x''(t)\le & {} \displaystyle -Mx(t)-Wx(\theta (t))-N\int _0^tk(t,s)x(s)\text {d}s\nonumber \\&\displaystyle -L\int _0^Th(t,s)x(s)\text {d}s,\quad t\in J^-, \end{aligned}$$
(2.1)
$$\begin{aligned} \Delta x(t_k)\le & {} \displaystyle -L_k \int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\mathrm{d}s, \quad k=1,2,\ldots ,m, \end{aligned}$$
(2.2)
$$\begin{aligned} \Delta x'(t_k)\le & {} \displaystyle -L_k^* \int _{t_k-\delta _k}^{t_k-\varepsilon _k} x'(s)\mathrm{d}s,\quad k=1,2,\ldots ,m, \end{aligned}$$
(2.3)
$$\begin{aligned} x(0)\le & {} x(T),\quad x'(0)\le x'(T), \end{aligned}$$
(2.4)

where constants \(M>0\), \(W\ge 0\), \(N\ge 0\), \(L\ge 0\), \(\displaystyle 0 \le L_k < 1/(\tau _k-\sigma _k)\), \(\displaystyle 0 \le L_k^* < 1/(\delta _k-\varepsilon _k)\), \(\displaystyle 0\le \sigma _k<\tau _k\le t_k-t_{k-1}\), \(\displaystyle 0\le \varepsilon _k<\delta _k\le t_k-t_{k-1}\), \(k=1, 2, \ldots ,m\) and \(\displaystyle \theta \in C(J, J)\).

If

$$\begin{aligned} \displaystyle \prod _{k=1}^mA_k \ge \hat{L}+\int _0^TU(s)\text {d}s, \end{aligned}$$
(2.5)

where

$$\begin{aligned} \displaystyle A_k= & {} 1-L_k(\tau _k-\sigma _k),\\ \displaystyle \hat{L}= & {} \max _{1\le k\le m}\{L_k(\tau _k-\sigma _k)\},\\ \displaystyle U(t)= & {} \frac{\prod _{0<t_k<t}A_k^*}{1-\prod _{k=1}^m A_k^*}\left[ \sum _{k=1}^m \prod _{j=k+1}^m A_j^* B_k^*+\int _{t_m}^T r(s)\text {d}s\right] \\&\displaystyle +\left[ \sum _{0<t_k<t} \prod _{t_k<t_j<t} A_j^* B_k^*+\int _{t_l}^tr(s)\text {d}s\right] ,\\ A_k^*= & {} \displaystyle 1-L_k^*(\delta _k-\varepsilon _k),\quad \text {with}\; \prod _{k=1}^m A_k^*< 1,\\ B_k^*= & {} \displaystyle A_k^*\int _{t_{k-1}}^{t_k-\delta _k}r(s)\text {d}s+\int _{t_k-\delta _k}^{t_k-\varepsilon _k} [1-L_k^*(t_k-\varepsilon _k-s)]r(s)\text {d}s+\int _{t_k-\varepsilon _k}^{t_k}r(s)\text {d}s,\\ r(t)= & {} \displaystyle M+W+N\int _0^tk(t,s)\text {d}s+L\int _0^Th(t,s)\text {d}s, \end{aligned}$$

then \(x(t)\le 0\), \(t\in J\).

Proof

Suppose, to the contrary, that \(x(t)>0\) for some \(t \in J\); that is, there exists \(t^*\in J\) such that \(x(t^*)>0\). We consider the following two cases.

Case (i): \(x(t)\ge 0\) for all \(t\in J\) and \(x\not \equiv 0\). Then, from (2.1), we have

$$\begin{aligned} \displaystyle x''(t)\le & {} -Mx(t)-Wx(\theta (t))-N\int _0^tk(t,s)x(s)\text {d}s-L\nonumber \\&\times \int _0^Th(t,s)x(s)\text {d}s\le 0,\quad t\in J^-. \end{aligned}$$
(2.6)

Applying Lemma 2.1 to (2.3) and (2.6), we obtain

$$\begin{aligned} \displaystyle x'(t)\le x'(0)\prod _{0<t_k<t}\left[ 1-L_k^*(\delta _k-\varepsilon _k)\right] . \end{aligned}$$

For \(t=T\), we get \(\displaystyle x'(0)\le x'(T)\le x'(0)\prod \nolimits _{0<t_k<T}[1-L_k^*(\delta _k-\varepsilon _k)]\); since \(\prod \nolimits _{k=1}^m[1-L_k^*(\delta _k-\varepsilon _k)]=\prod \nolimits _{k=1}^m A_k^*<1\), this forces \(x'(0)\le 0\) and hence \(x'(t)\le 0\) on \(J\). Also, we have

$$\begin{aligned} \displaystyle x(t_k^+) \le x(t_k^-)-L_k \int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\text {d}s\le x(t_k^-). \end{aligned}$$

Thus, \(x(t)\) is non-increasing on \(J\), and therefore \(x(T)\le x(0)\). From condition (2.4), we have \(x(0)\le x(T)\), so \(x(0)=x(T)\) and, being non-increasing, \(x\) must be constant on \(J\). Therefore, \(x(t)\equiv C>0\), which implies that

$$\begin{aligned} \displaystyle 0=x''(t)\le -MC-WC-N\int _0^tk(t,s)C \text {d}s-L\int _0^Th(t,s)C \text {d}s\le -MC < 0, \end{aligned}$$

which is a contradiction.

Case (ii): \(x(t)< 0\) for some \(t\in J\). Let \(\inf _{t\in J} x(t)=-\lambda < 0\); then there exists \(t_*\in (t_u, t_{u+1}]\), for some \(u\), such that \(x(t_*)=-\lambda \) or \(x(t_u^+)=-\lambda \). Without loss of generality, we consider only the case \(x(t_*)=-\lambda \); the proof for the case \(x(t_u^+)=-\lambda \) is similar. From (2.1), we get

$$\begin{aligned} \displaystyle x''(t)\le \lambda \Bigg (M+W+N\int _0^tk(t,s)\text {d}s+L\int _0^Th(t,s)\text {d}s\Bigg )=\lambda r(t), \quad t\in J^-.\ \end{aligned}$$
(2.7)

Using Lemma 2.1 part (i) with (2.3) and (2.7), we obtain

$$\begin{aligned} x'(t)&\le x'(0) \left( \prod _{0<t_k<t}[1-L_k^*(\delta _k-\varepsilon _k)]\right) +\lambda \sum _{0<t_k<t}\Bigg [\prod _{t_k<t_j<t}[1-L_j^*(\delta _j -\varepsilon _j)]\\&\quad \times \bigg ([1-L_k^*(\delta _k-\varepsilon _k)]\int _{t_{k-1}}^{t_k -\delta _k}r(s)\text {d}s+\int _{t_k-\delta _k}^{t_k-\varepsilon _k} [1-L_k^*(t_k-\varepsilon _k-s)]r(s)\text {d}s\\&\quad +\int _{t_k-\varepsilon _k}^{t_k} r(s)\text {d}s\bigg )\Bigg ]+\lambda \int _{t_l}^tr(s)\text {d}s. \end{aligned}$$

Taking into account the definition of constants \(A_k^*\) and \(B_k^*\), the above inequality can be written as

$$\begin{aligned} x'(t)\le x'(0)\prod _{0<t_k<t}A_k^*+\lambda \sum _{0<t_k<t}\prod _{t_k<t_j<t}A_j^*B_k^*+\lambda \int _{t_l}^tr(s)\text {d}s, \quad t\in J. \end{aligned}$$
(2.8)

Substituting \(t=T\) in (2.8), we get

$$\begin{aligned} x'(0) \le x'(T)\le x'(0)\prod _{0<t_k<T}A_k^*+\lambda \sum _{0<t_k<T}\prod _{t_k<t_j<T}A_j^*B_k^*+\lambda \int _{t_m}^Tr(s)\text {d}s, \end{aligned}$$

which implies

$$\begin{aligned} x'(0)\le \frac{\lambda }{1-\prod _{k=1}^mA_k^*} \Bigg [\sum _{k=1}^m\prod _{j=k+1}^mA_j^*B_k^*+\int _{t_m}^Tr(s)\text {d}s\Bigg ]. \end{aligned}$$
(2.9)

From (2.8) and (2.9), we obtain

$$\begin{aligned} x'(t)&\le \frac{\lambda \prod _{0<t_k<t}A_k^*}{1-\prod _{k=1}^mA_k^*} \left[ \sum _{k=1}^m\prod _{j=k+1}^mA_j^*B_k^*+\int _{t_m}^Tr(s)\text {d}s\right] \\&\quad +\lambda \left[ \sum _{0<t_k<t}\prod _{t_k<t_j<t}A_j^*B_k^*+\int _{t_l}^tr(s)\text {d}s\right] . \end{aligned}$$

Therefore,

$$\begin{aligned} x'(t)\le \lambda U(t), \quad t\in J. \end{aligned}$$
(2.10)

The above inequality, together with Lemma 2.1 part (i) and (2.2), implies that

$$\begin{aligned} x(t)&\le x(t_{u+1}^+) \left( \prod _{t_{u+1}<t_k<t}[1-L_k(\tau _k-\sigma _k)]\right) +\lambda \sum _{t_{u+1}<t_k<t}\Bigg [\prod _{t_k<t_j<t}[1-L_j (\tau _j-\sigma _j)]\nonumber \\&\quad \times \displaystyle \bigg ([1-L_k(\tau _k-\sigma _k)]\int _{t_{k-1}}^{t_k-\tau _k}U(s)\text {d}s +\int _{t_k-\tau _k}^{t_k-\sigma _k} [1-L_k(t_k-\sigma _k-s)]U(s)\text {d}s\nonumber \\&\quad +\int _{t_k-\sigma _k}^{t_k} U(s)\text {d}s\bigg )\Bigg ]+\lambda \int _{t_l}^tU(s)\text {d}s. \end{aligned}$$
(2.11)

Let

$$\begin{aligned} \displaystyle B_k=A_k\int _{t_{k-1}}^{t_k-\tau _k}U(s)\text {d}s +\int _{t_k-\tau _k}^{t_k-\sigma _k} [1-L_k(t_k-\sigma _k-s)]U(s)\text {d}s+\int _{t_k-\sigma _k}^{t_k}U(s)\text {d}s. \end{aligned}$$

Then, (2.11) can be written as

$$\begin{aligned} \displaystyle x(t)\le x(t_{u+1}^+)\prod _{t_{u+1}<t_k<t}A_k+\lambda \Bigg [ \sum _{t_{u+1}<t_k<t}\prod _{t_k<t_j<t}A_jB_k+\int _{t_l}^tU(s)\text {d}s\Bigg ], \quad t\in J. \end{aligned}$$
(2.12)

Since

$$\begin{aligned} \displaystyle x(t_{u+1}^+) \le x(t_{u+1})-L_{u+1} \int _{t_{u+1}-\tau _{u+1}}^{t_{u+1}-\sigma _{u+1}}x(s)\text {d}s \le x(t_{u+1})+\lambda L_{u+1}(\tau _{u+1}-\sigma _{u+1}), \end{aligned}$$

inequality (2.12) can be expressed as

$$\begin{aligned} x(t)&\le x(t_{u+1})\left( \prod _{t_{u+1}<t_k<t}A_k\right) +\lambda L_{u+1}(\tau _{u+1}-\sigma _{u+1})\left( \prod _{t_{u+1}<t_k<t}A_k\right) \nonumber \\&\quad +\lambda \left[ \sum _{t_{u+1}<t_k<t}\prod _{t_k<t_j<t}A_jB_k +\int _{t_l}^tU(s)\text {d}s\right] , \quad t\in J. \end{aligned}$$
(2.13)

Integrating (2.10) from \(t_*\) to \(t_{u+1}\), we obtain

$$\begin{aligned} x(t_{u+1})\le x(t_*)+\lambda \int _{t_*}^{t_{u+1}}U(s)\text {d}s. \end{aligned}$$

Hence,

$$\begin{aligned} x(t)&\le x(t_*)\left( \prod _{t_{u+1}<t_k<t}A_k\right) +\lambda L_{u+1}(\tau _{u+1}-\sigma _{u+1})\left( \prod _{t_{u+1}<t_k<t}A_k\right) \\&\quad +\lambda \left( \prod _{t_{u+1}<t_k<t}A_k\right) \int _{t_*}^{t_{u+1}}U(s)\text {d}s +\lambda \left[ \sum _{t_{u+1}<t_k<t}\prod _{t_k<t_j<t}A_jB_k+\int _{t_l}^tU(s)\text {d}s\right] . \end{aligned}$$

This yields

$$\begin{aligned} x(t)&\le \displaystyle x(t_*)\left( \prod _{t_{u+1}<t_k<t}A_k\right) +\lambda \Bigg [ L_{u+1}(\tau _{u+1}-\sigma _{u+1})\left( \prod _{t_{u+1}<t_k<t}A_k\right) \nonumber \\&\quad +\sum _{t_{u}<t_k<t}\prod _{t_k<t_j<t}A_j\int _{t_{k-1}}^{t_k}U(s)\text {d}s +\int _{t_l}^tU(s)\text {d}s\Bigg ]. \end{aligned}$$
(2.14)

If \(t^*>t_*\), where \(t^* \in [t_v, t_{v+1})\) for some \(v\), then

$$\begin{aligned} 0<x(t^*)&\le x(t_*)\left( \prod _{t_{u+1}<t_k<t^*}A_k\right) +\lambda \Bigg [ L_{u+1}(\tau _{u+1}-\sigma _{u+1})\left( \prod _{t_{u+1}<t_k<t^*}A_k\right) \\&\quad +\sum _{t_{u}<t_k<t^*}\prod _{t_k<t_j<t^*}A_j\int _{t_{k-1}}^{t_k}U(s)\text {d}s +\int _{t_v}^{t^*}U(s)\text {d}s\Bigg ], \end{aligned}$$

which gives

$$\begin{aligned} \prod _{t_{u+1}<t_k<t^*}A_k&< L_{u+1}(\tau _{u+1}-\sigma _{u+1})\prod _{t_{u+1}<t_k<t^*}A_k\nonumber \\&\quad +\sum _{t_{u}<t_k<t^*}\prod _{t_k<t_j<t^*}A_j \int _{t_{k-1}}^{t_k}U(s)\text {d}s+\int _{t_v}^{t^*}U(s)\text {d}s. \end{aligned}$$
(2.15)

Therefore,

$$\begin{aligned} \prod _{k=1}^mA_k<\hat{L}+\int _0^TU(s)\text {d}s, \end{aligned}$$

contradicting condition (2.5).

Next, assume that \(t^*<t_*\). Setting \(t=T\) in (2.14), we get

$$\begin{aligned} x(T)&\le x(t_*)\left( \prod _{t_{u+1}<t_k<T}A_k\right) +\lambda \Bigg [ L_{u+1}(\tau _{u+1}-\sigma _{u+1})\left( \prod _{t_{u+1}<t_k<T}A_k\right) \nonumber \\&\quad +\sum _{t_{u}<t_k<T}\prod _{t_k<t_j<T}A_j\int _{t_{k-1}}^{t_k}U(s)\text {d}s +\int _{t_m}^TU(s)\text {d}s\Bigg ]. \end{aligned}$$
(2.16)

From (2.10) and (2.2), we have by Lemma 2.1 part (i) that

$$\begin{aligned} x(t)\le x(0)\prod _{0<t_k<t}A_k+\lambda \left[ \sum _{0<t_k<t}\prod _{t_k<t_j<t}A_jB_k +\int _{t_l}^tU(s)\text {d}s\right] . \end{aligned}$$

In particular, for \(t=t^*\),

$$\begin{aligned} \displaystyle x(t^*)\le x(0)\prod _{0<t_k<t^*}A_k+\lambda \left[ \sum _{0<t_k<t^*}\prod _{t_k<t_j<t^*}A_jB_k +\int _{t_v}^{t^*}U(s)\text {d}s\right] . \end{aligned}$$
(2.17)

From (2.4), (2.16) and (2.17), we obtain

$$\begin{aligned} 0<x(t^*)&\le \displaystyle \lambda \left[ \sum _{0<t_k<t^*} \prod _{t_k<t_j<t^*}A_jB_k+\int _{t_v}^{t^*}U(s)\text {d}s\right] +x(t_*)\prod _{t_{u+1}<t_k<T}A_k\prod _{0<t_k<t^*}A_k\\&\quad \displaystyle +\lambda L_{u+1}(\tau _{u+1}-\sigma _{u+1})\prod _{t_{u+1}<t_k<T}A_k \prod _{0<t_k<t^*}A_k\\&\quad \displaystyle +\lambda \prod _{0<t_k<t^*}A_k\left[ \sum _{t_{u}<t_k<T}\prod _{t_k<t_j<T}A_j \int _{t_{k-1}}^{t_k}U(s)\text {d}s+\int _{t_m}^TU(s)\text {d}s\right] , \end{aligned}$$

which gives

$$\begin{aligned} \displaystyle \prod _{t_{u+1}<t_k<T}A_k\prod _{0<t_k<t^*}A_k&<\left[ \sum _{0<t_k<t^*}\prod _{t_k<t_j<t^*}A_jB_k+\int _{t_v}^{t^*}U(s)\text {d}s\right] \\&\quad \displaystyle + L_{u+1}(\tau _{u+1}-\sigma _{u+1})\prod _{t_{u+1}<t_k<T}A_k \prod _{0<t_k<t^*}A_k\\&\quad \displaystyle +\prod _{0<t_k<t^*}A_k\left[ \sum _{t_{u}<t_k<T}\prod _{t_k<t_j<T}A_j \int _{t_{k-1}}^{t_k}U(s)\text {d}s+\int _{t_m}^TU(s)\text {d}s\right] . \end{aligned}$$

Hence

$$\begin{aligned} \displaystyle \prod _{k=1}^mA_k<\hat{L}+\int _0^TU(s)\text {d}s, \end{aligned}$$

which is a contradiction. This completes the proof. \(\square \)

Theorem 2.2

Assume that \(x \in PC^2(J,R)\) satisfies

$$\begin{aligned}&\displaystyle x''(t)\ge Mx(t)+Wx(\theta (t))+N\int _0^tk(t,s)x(s)\text {d}s +L\int _0^Th(t,s)x(s)\mathrm{d}s,\quad t\in J^-, \end{aligned}$$
(2.18)
$$\begin{aligned}&\displaystyle \Delta x(t_k) \ge L_k \int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\mathrm{d}s,\quad k=1,2,\ldots ,m, \end{aligned}$$
(2.19)
$$\begin{aligned}&\displaystyle \Delta x'(t_k) \ge -L_k^* \int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\mathrm{d}s,\quad k=1,2,\ldots ,m, \end{aligned}$$
(2.20)
$$\begin{aligned}&\displaystyle x(0)\ge x(T),\quad x'(0)\ge x'(T), \end{aligned}$$
(2.21)

where constants \(M>0\), \(W\ge 0\), \(N\ge 0\), \(L\ge 0\), \(L_k\ge 0\), \(0 \le L_k^* < 1/(\delta _k-\varepsilon _k)\), \(0\le \sigma _k<\tau _k\le t_k-t_{k-1}\), \(0\le \varepsilon _k<\delta _k\le t_k-t_{k-1}\), \(k=1, 2, \ldots ,m\) and \(\displaystyle \theta \in C(J, J)\).

If

$$\begin{aligned} \prod _{k=1}^mC_k\left[ \hat{L}+\sum _{k=1}^mD_k+\int _0^TU(s)\text {d}s\right] \le 1, \end{aligned}$$
(2.22)

where

$$\begin{aligned} C_k&=1+L_k(\tau _k-\sigma _k),\\ D_k&=C_k\int _{t_{k-1}}^{t_k-\tau _k}U(s)\text {d}s+\int _{t_k-\tau _k}^{t_k-\sigma _k} [1+L_k(t_k-\sigma _k-s)]U(s)\text {d}s+\int _{t_k-\sigma _k}^{t_k}U(s)\mathrm{d}s, \end{aligned}$$

and \(\hat{L}\), \(L_k\), \(A_k\), \(U(t)\) are as defined in Theorem 2.1. Then \(x(t)\le 0\), \(t\in J\).

Proof

The proof is similar to that of Theorem 2.1: applying Lemma 2.1 part (ii) and using the definitions of the constants \(C_k\) and \(D_k\), we obtain the desired conclusion. \(\square \)

We now give two examples to illustrate the new results.

Example 2.1

Consider the following impulsive problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle x''(t)\le -\frac{1}{4}x(t)-\frac{1}{5}x\left( \frac{t}{2}\right) -\frac{1}{5} \int _0^ttsx(s)\text {d}s-\frac{1}{4}\int _0^1t^2sx(s)\text {d}s,\quad t\in [0,1],~~t\ne \frac{1}{2}, \\ \displaystyle \Delta x\left( \frac{1}{2}\right) \le -\frac{1}{3} \int _{\frac{1}{6}}^{\frac{5}{14}}x(s)\text {d}s,\quad k=1, \\ \displaystyle \Delta x'\left( \frac{1}{2}\right) \le -8 \int _{\frac{1}{6}}^{\frac{1}{4}}x'(s)\text {d}s,\quad k=1, \\ \displaystyle x(0)\le x(1),\quad x'(0)\le x'(1), \end{array}\right. } \end{aligned}$$
(2.23)

where \(k(t,s)=ts\), \(h(t,s)=t^2s\), \(\theta (t)=t/2\), \(m=1\), \(t_1=1/2\), \(M=1/4\), \(W=1/5\), \(N=1/5\), \(L=1/4\), \(L_1=1/3\), \(L_1^*=8\), \(\sigma _1=1/7\), \(\tau _1=1/3\), \(\varepsilon _1=1/4\) and \(\delta _1=1/3\). It is easy to see that

$$\begin{aligned} A_1&=1-\frac{1}{3}\left( \frac{1}{3}-\frac{1}{7}\right) =\frac{59}{63}\\ \hat{L}&=\frac{1}{3}\left( \frac{1}{3}-\frac{1}{7}\right) =\frac{4}{63}\\ U(t)&=\frac{\prod _{0<t_k<t}A_k^*}{1-\prod _{k=1}^m A_k^*}\left[ \sum _{k=1}^m \prod _{j=k+1}^m A_j^* B_k^* +\int _{\frac{1}{2}}^1 r(s)\text {d}s\right] \\&\quad +\left[ \sum _{0<t_k<t} \prod _{t_k<t_j<t} A_j^* B_k^* +\int _{t_l}^tr(s)\text {d}s\right] \\ A_1^*&=1-8\left( \frac{1}{3}-\frac{1}{4}\right) =\frac{1}{3}\\ B_1^*&=A_1^*\int _{0}^{\frac{1}{6}}r(s)\text {d}s +\int _{\frac{1}{6}}^{\frac{1}{4}} \left[ 1-8\left( \frac{1}{2}-\frac{1}{4}-s\right) \right] r(s)\text {d}s +\int _{\frac{1}{4}}^{\frac{1}{2}}r(s)\text {d}s\\ r(t)&=\frac{1}{4}+\frac{1}{5}+\frac{1}{5}\int _0^tts\text {d}s +\frac{1}{4}\int _0^1t^2s\text {d}s. \end{aligned}$$

A simple calculation gives

$$\begin{aligned} \prod _{k=1}^mA_k \approx 0.9365079365\ge \hat{L}+\int _0^TU(s)\text {d}s\approx 0.7454373117. \end{aligned}$$

Applying Theorem 2.1, we get that \(x(t)\le 0\) for \(t\in [0, 1]\).
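The following Python sketch shows how the verification of condition (2.5) in this example can be automated. It is our own rough check: simple quadrature is used and \(U(t)\) is coded according to our reading of its piecewise definition in Theorem 2.1, so the printed value of \(\hat{L}+\int_0^T U(s)\,\text{d}s\) may differ slightly from the figure quoted above; condition (2.5) is confirmed in either case.

```python
import numpy as np

# Rough numerical check of condition (2.5) for Example 2.1 (m = 1, t1 = 1/2, T = 1).
# This is our own verification sketch: simple quadrature is used and U(t) is coded
# according to our reading of its piecewise definition in Theorem 2.1, so the printed
# right-hand side may differ slightly from the figure quoted in the text.

T, t1 = 1.0, 0.5
M, W, N, L = 0.25, 0.2, 0.2, 0.25
L1, L1s = 1.0 / 3.0, 8.0
sigma1, tau1, eps1, delta1 = 1.0 / 7.0, 1.0 / 3.0, 0.25, 1.0 / 3.0

def r(t):
    # r(t) = M + W + N * int_0^t t*s ds + L * int_0^1 t^2*s ds
    return M + W + N * t**3 / 2.0 + L * t**2 / 2.0

def integral(f, a, b, n=2001):
    s = np.linspace(a, b, n)
    y = f(s)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(s)) / 2.0)

A1   = 1.0 - L1 * (tau1 - sigma1)              # = 59/63
Lhat = L1 * (tau1 - sigma1)                    # = 4/63
A1s  = 1.0 - L1s * (delta1 - eps1)             # = 1/3
B1s  = (A1s * integral(r, 0.0, t1 - delta1)
        + integral(lambda s: (1.0 - L1s * (t1 - eps1 - s)) * r(s), t1 - delta1, t1 - eps1)
        + integral(r, t1 - eps1, t1))

def U(t):
    prod   = A1s if t > t1 else 1.0            # prod_{0 < t_k < t} A_k^*
    first  = prod / (1.0 - A1s) * (B1s + integral(r, t1, T))
    second = (B1s if t > t1 else 0.0) + integral(r, t1 if t > t1 else 0.0, t)
    return first + second

rhs = Lhat + integral(np.vectorize(U), 0.0, T, n=801)
print(A1, rhs, A1 >= rhs)                      # condition (2.5): prod A_k >= Lhat + int_0^T U
```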

Example 2.2

Consider the following impulsive problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle x''(t)\ge \frac{1}{4}x(t)+\frac{1}{5}x\left( \frac{t}{2}\right) +\frac{1}{5} \int _0^ttsx(s)\text {d}s+\frac{1}{4}\int _0^1t^2sx(s)\text {d}s,\quad t\in [0,1],~~t\ne \frac{1}{2}, \\ \displaystyle \Delta x\left( \frac{1}{2}\right) \ge \frac{1}{3} \int _{\frac{1}{6}}^{\frac{5}{14}}x(s)\text {d}s,\quad k=1, \\ \displaystyle \Delta x'\left( \frac{1}{2}\right) \ge -10 \int _{\frac{1}{6}}^{\frac{1}{4}}x'(s)\text {d}s,\quad k=1, \\ \displaystyle x(0)\ge x(1),\quad x'(0)\ge x'(1), \end{array}\right. } \end{aligned}$$
(2.24)

where \(k(t,s)=ts\), \(h(t,s)=t^2s\), \(\theta (t)=t/2\), \(m=1\), \(t_1=1/2\), \(M=1/4\), \(W=1/5\), \(N=1/5\), \(L=1/4\), \(L_1=1/3\), \(L_1^*=10\), \(\sigma _1=1/7\), \(\tau _1=1/3\), \(\varepsilon _1=1/4\) and \(\delta _1=1/3\). It is easy to see that

$$\begin{aligned}&\displaystyle C_1=1+\frac{1}{3}\left( \frac{1}{3}-\frac{1}{7}\right) =\frac{67}{63}\\&\displaystyle D_1=C_1\int _0^{\frac{1}{6}}U(s)\text {d}s+\int _{\frac{1}{6}}^{\frac{5}{14}} \left[ 1+L_1\left( \frac{1}{2}-\frac{1}{7}-s\right) \right] U(s)\text {d}s +\int _{\frac{5}{14}}^{\frac{1}{2}}U(s)\text {d}s\\&\displaystyle \hat{L}=\frac{1}{3}\left( \frac{1}{3}-\frac{1}{7}\right) =\frac{4}{63}\\&\displaystyle U(t)=\frac{\prod _{0<t_k<t}A_k^*}{1-\prod _{k=1}^m A_k^*}\left[ \sum _{k=1}^m \prod _{j=k+1}^m A_j^* B_k^* +\int _{\frac{1}{2}}^1 r(s)\text {d}s\right] \\&\qquad \qquad +\left[ \sum _{0<t_k<t} \prod _{t_k<t_j<t} A_j^* B_k^*+\int _{t_l}^tr(s) \text {d}s\right] \\&\displaystyle A_1^*=1-10\left( \frac{1}{3}-\frac{1}{4}\right) =\frac{1}{6}\\&\displaystyle B_1^*=A_1^*\int _{0}^{\frac{1}{6}}r(s)\text {d}s+\int _{\frac{1}{6}}^{\frac{1}{4}} \left[ 1-10\left( \frac{1}{2}-\frac{1}{4}-s\right) \right] r(s)\text {d}s +\int _{\frac{1}{4}}^{\frac{1}{2}}r(s)\text {d}s\\&\displaystyle r(t)=\frac{1}{4}+\frac{1}{5}+\frac{1}{5}\int _0^tts\text {d}s +\frac{1}{4}\int _0^1t^2s\text {d}s. \end{aligned}$$

A simple calculation gives

$$\begin{aligned} \prod _{k=1}^mC_k \left[ \hat{L}+\sum _{k=1}^m D_k+\int _0^TU(s)\text {d}s\right] \approx 0.9836243209\le 1. \end{aligned}$$

Applying Theorem 2.2, we get that \(x(t)\le 0\) for \(t\in [0, 1]\).

3 Existence Results

In this section, using the method of upper and lower solutions and the monotone iterative technique, we obtain the existence of extreme solutions for the following periodic boundary value problem (PBVP):

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle x''(t)= f(t, x(t), x\left( \theta (t)\right) ,(Kx)(t), (Sx)(t)),\quad t\in J^-,\\ \displaystyle \Delta x(t_k)= I_k\left( \int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\text {d}s\right) ,\quad k=1, 2, \ldots , m,\\ \displaystyle \Delta x'(t_k)= I_k^*\left( \int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\text {d}s\right) ,\quad k=1, 2, \ldots , m,\\ \displaystyle x(0)= x(T),\quad x'(0)= x'(T), \end{array}\right. } \end{aligned}$$
(3.1)

where the data \(f\), \(I_k\), \(I_k^*\), \(K\) and \(S\) satisfy the assumptions stated for PBVP (1.1).

Now, we give a new definition of lower and upper solutions of PBVP (3.1).

Definition 3.1

We say that functions \(\alpha _0\), \(\beta _0\) \(\in E\) are lower and upper solutions of PBVP (3.1), respectively, where \(0\le \sigma _k<\tau _k\le t_k-t_{k-1}\), \(0\le \varepsilon _k<\delta _k\le t_k-t_{k-1}\), if

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \alpha ''_0(t)\ge f(t, \alpha _0(t), \alpha _0\left( \theta (t)\right) ,(K\alpha _0)(t), (S\alpha _0)(t)),\quad t\in J^-,\\ \displaystyle \Delta \alpha _0(t_k)\ge I_k\left( \int _{t_k-\tau _k}^{t_k-\sigma _k}\alpha _0(s)\text {d}s\right) ,\quad k=1, 2, \ldots , m,\\ \displaystyle \Delta \alpha '_0(t_k)\ge I_k^*\left( \int _{t_k-\delta _k}^{t_k-\varepsilon _k}\alpha '_0(s)\text {d}s\right) ,\quad k=1, 2, \ldots , m,\\ \displaystyle \alpha _0(0)\ge \alpha _0(T),\quad \alpha '_0(0)\ge \alpha '_0(T), \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \beta ''_0(t)\le f(t, \beta _0(t), \beta _0\left( \theta (t)\right) ,(K\beta _0)(t), (S\beta _0)(t)),\quad t\in J^-,\\ \displaystyle \Delta \beta _0(t_k)\le I_k\left( \int _{t_k-\tau _k}^{t_k-\sigma _k}\beta _0(s)\text {d}s\right) ,\quad k=1, 2, \ldots , m,\\ \displaystyle \Delta \beta '_0(t_k)\le I_k^*\left( \int _{t_k-\delta _k}^{t_k-\varepsilon _k}\beta '_0(s)\text {d}s\right) ,\quad k=1, 2, \ldots , m,\\ \displaystyle \beta _0(0)\le \beta _0(T), \quad \beta '_0(0)\le \beta '_0(T). \end{array}\right. } \end{aligned}$$

Consider the linear PBVP

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\displaystyle x''(t)= Mx(t)+Wx(\theta (t))+N\left( Kx\right) (t)+L\left( Sx\right) (t)+g(t),~t\in J^-,\\ &{}\displaystyle \Delta x(t_k)=L_k\int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\text {d}s+\gamma _k, \quad k=1, 2,\ldots , m,\\ &{}\displaystyle \Delta x'(t_k)=-L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\text {d}s +\lambda _k,\quad k=1, 2,\ldots , m,\\ &{}\displaystyle x(0)=x(T), \quad x'(0)=x'(T), \end{array}\right. } \end{aligned}$$
(3.2)

where \(M>0\), \(W, N, L\ge 0\), \(L_k\ge 0\), \(L_k^*\ge 0\), \(g\in PC(J,R)\), \(0\le \varepsilon _k<\delta _k\le t_k-t_{k-1}\), \(0\le \sigma _k<\tau _k\le t_k-t_{k-1}\), and \(\gamma _k\), \(\lambda _k\) are constants, \(k=1, 2, \ldots ,m\). The spaces \(PC(J, R)\) and \(PC^1(J, R)\) are Banach spaces with the norms \(\Vert x\Vert _{PC}=\sup \{|x(t)|\,:\, t\in J\}\) and \(\Vert x\Vert _{PC^1}=\max \{\Vert x\Vert _{PC}, \Vert x'\Vert _{PC}\}\). Let \(E=PC^1(J, R)\cap C^2(J^-, R)\). A function \(x\in E\) is called a solution of PBVP (3.2) if it satisfies (3.2).

Lemma 3.1

\(x \in E \) is a solution of PBVP (3.2) if and only if \(x \in PC^1(J,R)\) is a solution of the following impulsive integral equation:

$$\begin{aligned} x(t)&=\displaystyle \int _0^T G_1(t,s)D\left( s,x\right) \mathrm{d}s+\sum _{k=1}^m \Bigg [-G_1(t,t_k)\left( L_k\int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\mathrm{d}s+\gamma _k\right) \nonumber \\&\quad +G_2(t,t_k)\left( -L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\mathrm{d}s +\lambda _k\right) \Bigg ], \end{aligned}$$
(3.3)

where \(D\left( t,x\right) = -Wx(\theta (t)) -N\left( Kx\right) (t)-L\left( Sx\right) (t)-g(t)\),

$$\begin{aligned} \displaystyle G_1(t,s)= & {} [2\sqrt{M}(e^{\sqrt{M} T}-1)]^{-1} {\left\{ \begin{array}{ll} e^{\sqrt{M}(T-t+s)}+e^{\sqrt{M}(t-s)},\quad 0 \le s < t \le T,\\ e^{\sqrt{M}(T+t-s)}+e^{\sqrt{M}(s-t)},\quad 0 \le t \le s \le T, \end{array}\right. }\\ G_2(t,s)= & {} [2(e^{\sqrt{M} T}-1)]^{-1} {\left\{ \begin{array}{ll} e^{\sqrt{M}(T-t+s)}-e^{\sqrt{M}(t-s)},\quad 0 \le s < t \le T,\\ -e^{\sqrt{M}(T+t-s)}+e^{\sqrt{M}(s-t)},\quad 0 \le t \le s \le T. \end{array}\right. } \end{aligned}$$

The proof is similar to that of Lemma 2.1 in [2] and is therefore omitted.
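As a quick sanity check on the Green's functions above, the following Python sketch (ours, with an illustrative choice of \(M\) and \(T\)) evaluates \(G_1\) and \(G_2\) on a grid and compares their maxima with the closed-form values \((1+e^{\sqrt{M}T})/(2\sqrt{M}(e^{\sqrt{M}T}-1))\) and \(1/2\) used later in the proof of Lemma 3.2.

```python
import numpy as np

# Sanity check (ours, with illustrative values of M and T): the Green's functions
# G1, G2 of Lemma 3.1 attain the maxima used later in the proof of Lemma 3.2, namely
# (1 + e^{sqrt(M) T}) / (2 sqrt(M) (e^{sqrt(M) T} - 1)) for G1 and 1/2 for G2.

M, T = 0.25, 1.0
sqrtM = np.sqrt(M)
den = np.exp(sqrtM * T) - 1.0

def G1(t, s):
    if s <= t:
        return (np.exp(sqrtM * (T - t + s)) + np.exp(sqrtM * (t - s))) / (2.0 * sqrtM * den)
    return (np.exp(sqrtM * (T + t - s)) + np.exp(sqrtM * (s - t))) / (2.0 * sqrtM * den)

def G2(t, s):
    if s <= t:
        return (np.exp(sqrtM * (T - t + s)) - np.exp(sqrtM * (t - s))) / (2.0 * den)
    return (-np.exp(sqrtM * (T + t - s)) + np.exp(sqrtM * (s - t))) / (2.0 * den)

grid = np.linspace(0.0, T, 401)
max_G1 = max(G1(t, s) for t in grid for s in grid)
max_G2 = max(G2(t, s) for t in grid for s in grid)
print(max_G1, (1.0 + np.exp(sqrtM * T)) / (2.0 * sqrtM * den))   # should agree
print(max_G2, 0.5)                                                # should agree
```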

Theorem 3.1

(Banach’s fixed point theorem [21]) Let \(M\) be a closed nonempty subset of a Banach space \(X\) and let \(A:M\rightarrow M\) be a contraction mapping, i.e.,

$$\begin{aligned} \Vert Au-Av\Vert \le k\Vert u-v\Vert , \end{aligned}$$

for all \(u,v \in M\), and fixed k, \(0\le k<1\). Then the operator A has a unique fixed point \(u^*\in M\).

Lemma 3.2

Let \(M>0\), \(W,L,N\ge 0\), \(L_k\ge 0\), \(L_k^*\ge 0\), \(0\le \varepsilon _k<\delta _k\le t_k-t_{k-1}\), \(0\le \sigma _k<\tau _k\le t_k-t_{k-1}\), \(k=1, 2, \ldots ,m\), be constants. If

$$\begin{aligned} \psi&=:\frac{1+e^{\sqrt{M} T}}{2\sqrt{M}\left( e^{\sqrt{M} T}-1\right) }\Bigg [\int _0^T \left( W+N\int _0^sk(s,r)\mathrm{d}r+L\int _0^Th(s,r)\mathrm{d}r\right) \mathrm{d}s \nonumber \\&\quad +\sum _{k=1}^mL_k(\tau _k-\sigma _k)\Bigg ]+\frac{1}{2}\sum _{k=1}^m L_k^*(\delta _k-\varepsilon _k)<1, \end{aligned}$$
(3.4)

and

$$\begin{aligned} \mu&=:\frac{1}{2}\left[ \int _0^T \left( W+N\int _0^sk(s,r)\mathrm{d}r+L\int _0^Th(s,r)\mathrm{d}r\right) \mathrm{d}s+\sum _{k=1}^mL_k(\tau _k-\sigma _k) \right] \nonumber \\&\quad +\frac{\sqrt{M}\left( 1+e^{\sqrt{M} T}\right) }{2\left( e^{\sqrt{M} T} -1\right) } \sum _{k=1}^m L_k^*(\delta _k-\varepsilon _k)<1, \end{aligned}$$
(3.5)

then PBVP (3.2) has a unique solution x in E.

Proof

For any \(x \in E\), we define an operator A by

$$\begin{aligned} (Ax)(t)=&\displaystyle \int _0^T G_1(t,s)D\left( s,x\right) \text {d}s+\sum _{k=1}^m \Bigg [-G_1(t,t_k)\left( L_k\int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\text {d}s+\gamma _k\right) \\&\displaystyle +G_2(t,t_k)\left( -L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\text {d}s +\lambda _k\right) \Bigg ], \end{aligned}$$

where \(G_1,G_2\) are given by Lemma 3.1. Then, \(Ax\in PC^1(J,R)\), and

$$\begin{aligned} (Ax)'(t)=&\displaystyle -\int _0^T G_2(t,s)D\left( s,x\right) \text {d}s+\sum _{k=1}^m \Bigg [G_2(t,t_k)\left( L_k\int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\text {d}s+\gamma _k\right) \\&\displaystyle -MG_1(t,t_k)\left( -L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\text {d}s +\lambda _k\right) \Bigg ]. \end{aligned}$$

By direct computation, we have

$$\begin{aligned} \displaystyle \max _{(t,s) \in J^2}\{G_1(t,s)\}=\frac{1+e^{\sqrt{M} T}}{2\sqrt{M}\left( e^{\sqrt{M} T}-1\right) } \end{aligned}$$

and

$$\begin{aligned} \displaystyle \max _{(t,s) \in J^2}\{G_2(t,s)\}=\frac{1}{2}. \end{aligned}$$

For \(x,y \in PC^1(J,R)\), we have

$$\begin{aligned} \Vert Ax-Ay\Vert _{PC}&=\sup _{t \in J}|(Ax)(t)-(Ay)(t)|\\&=\sup _{t \in J}\bigg |\int _0^T G_1(t,s)\bigg [W(x\left( \theta (s)\right) -y\left( \theta (s)\right) )\\&\quad +N\int _0^sk(s,r)(x(r)-y(r))\mathrm{d}r+L\int _0^Th(s,r)(x(r)-y(r))\mathrm{d}r \bigg ]\text {d}s\\&\quad +\sum _{k=1}^m \bigg [-G_1(t,t_k)\left( L_k\int _{t_k -\tau _k}^{t_k-\sigma _k}(x(s)-y(s))\text {d}s \right) \\&\quad +G_2(t,t_k)\left( -L_k^*\int _{t_k-\delta _k}^{t_k -\varepsilon _k}(x'(s)-y'(s))\text {d}s \right) \bigg ]\bigg |\\&\le \frac{1+e^{\sqrt{M} T}}{2\sqrt{M}(e^{\sqrt{M} T}-1)}\bigg [\int _0^T \left( W+N\int _0^sk(s,r)\mathrm{d}r+L\int _0^Th(s,r)\mathrm{d}r\right) \text {d}s \\&\quad +\sum _{k=1}^mL_k(\tau _k-\sigma _k) \bigg ]\Vert x-y\Vert _{PC}+\frac{1}{2}\Vert x'-y'\Vert _{PC}\sum _{k=1}^m L_k^*(\delta _k-\varepsilon _k)\\&\le \bigg \{\frac{1+e^{\sqrt{M} T}}{2\sqrt{M}(e^{\sqrt{M} T}-1)}\bigg [\int _0^T \left( W+N\int _0^sk(s,r)\mathrm{d}r+L\int _0^Th(s,r)\mathrm{d}r\right) \text {d}s \\&\quad +\sum _{k=1}^mL_k(\tau _k-\sigma _k)\bigg ]+\frac{1}{2}\sum _{k=1}^m L_k^*(\delta _k-\varepsilon _k)\bigg \} \Vert x-y\Vert _{PC^1}\\&\le \psi \Vert x-y\Vert _{PC^1}, \end{aligned}$$

and

$$\begin{aligned} \Vert (Ax)'-(Ay)'\Vert _{PC}&=\sup _{t \in J}\bigg | -\int _0^T G_2(t,s)\bigg [W(x\left( \theta (s)\right) -y\left( \theta (s)\right) ) \nonumber \\&\quad +N\int _0^sk(s,r)(x(r)-y(r))\mathrm{d}r+L\int _0^Th(s,r)(x(r)-y(r)) \mathrm{d}r\bigg ]\text {d}s\nonumber \\&\quad +\sum _{k=1}^m\bigg [G_2(t,t_k)\left( L_k\int _{t_k-\tau _k}^{t_k -\sigma _k}(x(s)-y(s))\text {d}s \right) \nonumber \\&\quad -MG_1(t,t_k)\left( -L_k^*\int _{t_k-\delta _k}^{t_k -\varepsilon _k}(x'(s)-y'(s))\text {d}s \right) \bigg ]\nonumber \bigg |\nonumber \\&\le \bigg \{\frac{1}{2}\bigg [\int _0^T \left( W+N\int _0^sk(s,r)\mathrm{d}r+L\int _0^Th(s,r)\mathrm{d}r\right) \text {d}s\nonumber \\&\quad +\sum _{k=1}^mL_k(\tau _k-\sigma _k)\bigg ] +\frac{\sqrt{M}\left( 1+e^{\sqrt{M} T}\right) }{2(e^{\sqrt{M} T}-1)} \sum _{k=1}^m L_k^*(\delta _k-\varepsilon _k)\bigg \}\Vert x-y\Vert _{PC^1}\nonumber \\&\le \mu \Vert x-y\Vert _{PC^1}. \end{aligned}$$
(3.6)

This implies that

$$\begin{aligned} \displaystyle \Vert Ax-Ay\Vert _{PC^1}\le \max \{\psi ,\mu \} \Vert x-y\Vert _{PC^1}. \end{aligned}$$

Since \(\max \{\psi ,\mu \}<1\) by (3.4) and (3.5), \(A\) is a contraction. By Theorem 3.1, \(A\) has a unique fixed point \(x \in PC^1(J,R)\). From Lemma 3.1, \(x\) is also the unique solution of PBVP (3.2). This completes the proof. \(\square \)

Now we are in a position to establish existence criteria for solutions of PBVP (3.1) by the method of lower and upper solutions and the monotone iterative technique. For \(\alpha _0, \beta _0 \in E\), we write \(\alpha _0\le \beta _0\) if \(\alpha _0(t)\le \beta _0(t)\) for all \(t\in J\). In such a case, we denote \([\alpha _0, \beta _0]=\{x\in E~:\) \(\alpha _0(t)\le x(t)\le \beta _0(t)\), \(t\in J\}\).

Theorem 3.2

Suppose that the following conditions hold:

\((H_1)\) \(\alpha _0\) and \(\beta _0\) are lower and upper solutions for PBVP (3.1), respectively, such that \(\alpha _0 \le \beta _0\).

\((H_2)\) The function f satisfies

$$\begin{aligned}&f(t,w_2,x_2,y_2,z_2)-f(t,w_1,x_1,y_1,z_1)\le M(w_2-w_1)+W(x_2-x_1) \\&\quad +\,N(y_2-y_1)+L(z_2-z_1), \end{aligned}$$

for all \(t\in J\), \(\alpha _0(t)\le w_1\le w_2\le \beta _0(t)\), \(\alpha _0\left( \theta (t)\right) \le x_1\le x_2\le \beta _0\left( \theta (t)\right) \), \((K\alpha _0)(t)\le y_1\le y_2\le (K\beta _0)(t)\), \((S\alpha _0)(t)\le z_1\le z_2\le (S\beta _0)(t)\).

\((H_3)\) Constants \(M>0\), \(W,N,L\ge 0\), \(\displaystyle 0 \le L_k < 1/(\tau _k-\sigma _k)\), \(\displaystyle 0 \le L_k^* < 1/(\delta _k-\varepsilon _k)\), \(\displaystyle 0\le \varepsilon _k<\delta _k\le t_k-t_{k-1}\), \(\displaystyle 0\le \sigma _k<\tau _k\le t_k-t_{k-1}\), \(k=1, 2, \ldots , m\), and they satisfy (2.22), (3.4) and (3.5).

\((H_4)\) The functions \(I_k\), \(I_k^*\) satisfy

$$\begin{aligned}&\displaystyle I_k\Bigg (\int _{t_k-\tau _k}^{t_k-\sigma _k}x(s)\text {d}s\Bigg ) -I_k\Bigg (\int _{t_k-\tau _k}^{t_k-\sigma _k}y(s)\text {d}s\Bigg )\le L_k\int _{t_k-\tau _k}^{t_k-\sigma _k}(x(s)-y(s))\text {d}s,\\&\displaystyle I_k^*\Bigg (\int _{t_k-\delta _k}^{t_k-\varepsilon _k}x'(s)\text {d}s\Bigg ) -I_k^*\Bigg (\int _{t_k-\delta _k}^{t_k-\varepsilon _k}y'(s)\text {d}s\Bigg )\le -L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k}(x'(s)-y'(s))\text {d}s, \end{aligned}$$

where \(\alpha _0(t)\le y(t)\le x(t)\le \beta _0(t)\), \(0\le \sigma _k<\tau _k\le t_k-t_{k-1}\), \(0\le \varepsilon _k<\delta _k\le t_k-t_{k-1}\), \(k=1, 2, \ldots , m\).

Then there exist monotone sequences \(\{\alpha _n\}\), \(\{\beta _n\}\subset E\) which converge in E to the extreme solutions of PBVP (3.1) in \([\alpha _0, \beta _0]\), respectively.

Proof

Firstly, we consider the sequences \(\{\alpha _n\}\), \(\{\beta _n\}\), \(n=1, 2, \ldots \), defined by

$$\begin{aligned} \alpha ''_n(t)&-M\alpha _n(t)-W\alpha _n\left( \theta (t)\right) -N(K\alpha _n)(t)-L(S\alpha _n)(t)\\&\quad = f(t, \alpha _{n-1}(t), \alpha _{n-1}\left( \theta (t)\right) , (K\alpha _{n-1})(t), (S\alpha _{n-1})(t))\\&\qquad -M\alpha _{n-1}(t)-W\alpha _{n-1}\left( \theta (t)\right) -N(K\alpha _{n-1})(t)-L(S\alpha _{n-1})(t), \quad t\in J^-,\\ \displaystyle \Delta \alpha _n(t_k)&=L_k\int _{t_k-\tau _k}^{t_k-\sigma _k} \alpha _n(s)\text {d}s+I_k\left( \int _{t_k-\tau _k}^{t_k-\sigma _k}\alpha _{n-1}(s)\text {d}s\right) -L_k\int _{t_k-\tau _k}^{t_k-\sigma _k} \alpha _{n-1}(s)\text {d}s,\\&\quad k=1, 2, \ldots , m,\\ \Delta \alpha '_n(t_k)&=-L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k} \alpha '_n(s)\text {d}s+I_k^*\left( \int _{t_k-\delta _k}^{t_k-\varepsilon _k} \alpha '_{n-1}(s)\text {d}s\right) +L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k} \alpha '_{n-1}(s)\text {d}s,\\&\quad k=1, 2, \ldots , m,\\ \displaystyle \alpha _n(0)&=\alpha _n(T), \quad \alpha '_n(0)=\alpha '_n(T), \end{aligned}$$

and

$$\begin{aligned} \displaystyle \beta ''_n(t)&-M\beta _n(t)-W\beta _n\left( \theta (t)\right) -N(K\beta _n)(t)-L(S\beta _n)(t)\\&\quad =f(t, \beta _{n-1}(t), \beta _{n-1}\left( \theta (t)\right) , (K\beta _{n-1})(t), (S\beta _{n-1})(t))\\&\qquad -M\beta _{n-1}(t)-W\beta _{n-1}\left( \theta (t)\right) -N(K\beta _{n-1})(t)-L(S\beta _{n-1})(t), \quad t\in J^-,\\ \displaystyle \Delta \beta _n(t_k)&=L_k\int _{t_k-\tau _k}^{t_k-\sigma _k} \beta _n(s)\text {d}s+I_k\left( \int _{t_k-\tau _k}^{t_k-\sigma _k}\beta _{n-1}(s)\text {d}s\right) -L_k\int _{t_k-\tau _k}^{t_k-\sigma _k} \beta _{n-1}(s)\text {d}s,\\&\displaystyle ~~~k=1, 2, \ldots , m,\\ \Delta \beta '_n(t_k)&=-L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k} \beta '_n(s)\text {d}s+I_k^*\left( \int _{t_k-\delta _k}^{t_k-\varepsilon _k} \beta '_{n-1}(s)\text {d}s\right) +L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k} \beta '_{n-1}(s)\text {d}s,\\&\displaystyle ~~~k=1, 2, \ldots , m,\\ \displaystyle \beta _n(0)&=\beta _n(T), \quad \beta '_n(0)=\beta '_n(T). \end{aligned}$$

Each of these problems is a linear PBVP of the form (3.2), with \(g\), \(\gamma _k\) and \(\lambda _k\) determined by the previous iterate; hence, by Lemma 3.2, \(\alpha _n\) and \(\beta _n\) (in particular, \(\alpha _1\) and \(\beta _1\)) are well defined.

Next, we shall show that

$$\begin{aligned} \displaystyle \alpha _0(t)\le \alpha _1(t)\le \beta _1(t)\le \beta _0(t),\quad t\in J. \end{aligned}$$
(3.7)

Let \(p(t)=\alpha _0(t)-\alpha _1(t)\). By Definition 3.1 of a lower solution of PBVP (3.1), we have

$$\begin{aligned} \displaystyle p''(t)&=\alpha ''_0(t)-\alpha ''_1(t)\\&\ge \displaystyle Mp(t)+Wp\left( \theta (t)\right) +N(Kp)(t)+L(Sp)(t), \end{aligned}$$

and

$$\begin{aligned} \displaystyle \Delta p(t_k)&=\Delta \alpha _0(t_k)-\Delta \alpha _1(t_k)\\&\ge \displaystyle L_k\int _{t_k-\tau _k}^{t_k-\sigma _k}p(s)\text {d}s,~k=1, 2, \ldots , m, \end{aligned}$$

and

$$\begin{aligned} \displaystyle \Delta p'(t_k)&=\Delta \alpha '_0(t_k)-\Delta \alpha '_1(t_k)\\&\ge \displaystyle -L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k}p'(s)\text {d}s,~k=1, 2, \ldots , m, \end{aligned}$$

and

$$\begin{aligned}&\displaystyle p(0)=\alpha _0(0)-\alpha _1(0)\ge \alpha _0(T)-\alpha _1(T)=p(T),\\&\displaystyle p'(0)=\alpha '_0(0)-\alpha '_1(0)\ge \alpha '_0(T)-\alpha '_1(T)=p'(T). \end{aligned}$$

Then by Theorem 2.2, \(p(t)\le 0\), which implies that \(\alpha _0(t)\le \alpha _1(t)\), \(t\in J\). In a similar way, we can show that \(\beta _0(t)\ge \beta _1(t)\), \(t\in J\).

Further, we will show that \(\alpha _1(t)\le \beta _1(t)\), \(t\in J\). Let \(p(t)=\alpha _1(t)-\beta _1(t)\). By \((H_2)\), we obtain

$$\begin{aligned}&p''(t)-Mp(t)-Wp\left( \theta (t)\right) -N(Kp)(t)-L(Sp)(t)\\&=f(t, \alpha _0(t), \alpha _0\left( \theta (t)\right) , (K\alpha _0)(t), (S\alpha _0)(t))\\&\quad -f(t, \beta _0(t), \beta _0\left( \theta (t)\right) , (K\beta _0)(t), (S\beta _0)(t))\\&\quad -M\left( \alpha _0(t)-\beta _0(t)\right) -W\left[ \alpha _0\left( \theta (t)\right) -\beta _0\left( \theta (t)\right) \right] \\&\quad -N\left[ (K\alpha _0)(t)-(K\beta _0)(t)\right] -L\left[ (S\alpha _0)(t) -(S\beta _0)(t)\right] \\&\ge 0. \end{aligned}$$

From \((H_4)\), we get

$$\begin{aligned} \Delta p(t_k)&\ge L_k\int _{t_k-\tau _k}^{t_k-\sigma _k} p(s)\text {d}s,\quad k=1, 2, \ldots , m,\\ \Delta p'(t_k)&\ge -L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k} p'(s)\text {d}s,\quad k=1, 2, \ldots , m,\\ p(0)&=p(T), \quad p'(0)= p'(T). \end{aligned}$$

Applying Theorem 2.2, we get \(p(t)\le 0\), which implies \(\alpha _1(t)\le \beta _1(t)\).

Using mathematical induction, we can show that

$$\begin{aligned} \displaystyle \alpha _0(t)\le \alpha _1(t)\le \cdots \le \alpha _n(t)\le \beta _n(t)\le \cdots \le \beta _1(t)\le \beta _0(t), \quad t\in J, \end{aligned}$$

for \(n=1, 2, \ldots .\) Employing a standard argument, we have

$$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty }\alpha _n(t)= \alpha (t),\quad \lim _{n\rightarrow \infty }\beta _n(t)=\beta (t), \end{aligned}$$

uniformly on \(J\), and the limit functions \(\alpha (t)\), \(\beta (t)\) satisfy problem (3.1). Moreover, \(\alpha (t), \beta (t)\in [\alpha _0(t), \beta _0(t)]\).

Finally, we show that \(\alpha \) is the minimal solution and \(\beta \) is the maximal solution of PBVP (3.1), respectively. To prove this, let \(x\) be any solution of PBVP (3.1) with \(x\in [\alpha _0, \beta _0]\), and suppose that \(\alpha _{n-1}(t)\le x(t)\le \beta _{n-1}(t)\), \(t\in J\), for some positive integer \(n\). Put \(p(t)=\alpha _n(t)-x(t)\). Then

$$\begin{aligned} p''(t)&\displaystyle -Mp(t)-Wp\left( \theta (t)\right) -N(Kp)(t)-L(Sp)(t)\\&\displaystyle =f(t, \alpha _{n-1}(t), \alpha _{n-1}\left( \theta (t)\right) , (K\alpha _{n-1})(t), (S\alpha _{n-1})(t))\\&\quad -f(t, x(t), x\left( \theta (t)\right) , (Kx)(t),(Sx)(t))\\&\quad -M\left( \alpha _{n-1}(t)-x(t)\right) -W\left[ \alpha _{n-1}\left( \theta (t)\right) -x\left( \theta (t)\right) \right] \\&\quad -N\left[ (K\alpha _{n-1})(t)-(Kx)(t)\right] -L\left[ (S\alpha _{n-1})(t) -(Sx)(t)\right] \\&\ge 0. \end{aligned}$$

From \((H_4)\), we have

$$\begin{aligned} \Delta p(t_k)&\ge L_k\int _{t_k-\tau _k}^{t_k-\sigma _k} p(s)\text {d}s, \quad k=1, 2, \ldots , m,\\ \Delta p'(t_k)&\ge -L_k^*\int _{t_k-\delta _k}^{t_k-\varepsilon _k} p'(s)\text {d}s,\quad k=1, 2, \ldots , m,\\ p(0)&=p(T), \quad p'(0)= p'(T). \end{aligned}$$

Using Theorem 2.2, we have for all \(t\in J\), \(p(t)\le 0\), i.e., \(\alpha _{n}(t)\le x(t)\). Similarly, we can prove that \(x(t)\le \beta _{n}(t)\), \(t\in J\). Therefore, \(\alpha _{n}(t)\le x(t)\le \beta _{n}(t)\), for all \(t\in J\), which implies \(\alpha (t)\le x(t)\le \beta (t)\). The proof is complete. \(\square \)

Now, in order to illustrate our results, we consider an example.

Example 3.1

Consider the following impulsive periodic boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} u''(t)=\displaystyle \frac{1}{12}\sin t^3(u(t)-3t)+\frac{t}{21}u\left( \frac{t}{2}\right) +\frac{\sin t}{40}\left[ \int _0^t t^2s^2 u(s)\text {d}s\right] ^2\\ \quad \quad \quad \quad +\frac{\cos t}{32}\left[ \int _0^1t^2s^4 u(s)\text {d}s\right] ^2,\quad t\in J =[0,1],t\ne \frac{1}{2},\\ \displaystyle \Delta u \left( \frac{1}{2}\right) =\frac{1}{15}\int _\frac{1}{6}^\frac{5}{14}u(s)\text {d}s,\quad k=1, \\ \displaystyle \Delta u' \left( \frac{1}{2}\right) =-\frac{13}{5}\int _\frac{1}{6}^\frac{1}{4}u'(s)\text {d}s,\quad k=1, \\ \displaystyle u(0)=u(1),\quad u'(0)=u'(1), \end{array}\right. } \end{aligned}$$
(3.8)

where \(k(t,s)=t^2s^2\), \(h(t,s)=t^2s^4\), \(m=1\), \(t_1=1/2\), \(\tau _1=1/3\), \(\sigma _1=1/7\), \(\delta _1=1/3\), \(\varepsilon _1=1/4\), \(T=1\). Obviously, \(\alpha _0\equiv 0\) and \(\beta _0\equiv 4\) are lower and upper solutions of (3.8), respectively, and \(\alpha _0\le \beta _0\).

Let

$$\begin{aligned} \displaystyle f(t,w_1,x_1,y_1,z_1)=\frac{1}{12}\sin t^3(w_1-3t)+\frac{t}{21}x_1+\frac{\sin t}{40}y_1^2+\frac{\cos t}{32}z_1^2, \end{aligned}$$

we have

$$\begin{aligned}&\displaystyle f(t,w_2,x_2,y_2,z_2)-f(t,w_1,x_1,y_1,z_1)\\&\quad \le \frac{1}{12}(w_2-w_1) +\frac{1}{21}(x_2-x_1)+\frac{1}{15}(y_2-y_1) +\frac{1}{20}(z_2-z_1), \end{aligned}$$

where \(\alpha _0(t)\le w_1 \le w_2\le \beta _0(t)\), \(\alpha _0(\theta (t))\le x_1 \le x_2\le \beta _0(\theta (t))\), \((K\alpha _0)(t)\le y_1 \le y_2\le (K\beta _0)(t)\), \((S\alpha _0)(t)\le z_1\le z_2\le (S\beta _0)(t)\), \(t\in J\). It is easy to see that

$$\begin{aligned} \displaystyle I_1\left( \int _\frac{1}{6}^\frac{5}{14}x(s)\text {d}s\right) -I_1\left( \int _\frac{1}{6}^\frac{5}{14}y(s)\text {d}s\right) = \frac{1}{15}\int _\frac{1}{6}^\frac{5}{14}\left( x(s)-y(s)\right) \text {d}s, \end{aligned}$$

and

$$\begin{aligned} \displaystyle I_1^*\left( \int _\frac{1}{6}^\frac{1}{4}x'(s)\text {d}s\right) -I_1^*\left( \int _\frac{1}{6}^\frac{1}{4}y'(s)\text {d}s\right) = -\frac{13}{5}\int _\frac{1}{6}^\frac{1}{4}\left( x'(s)-y'(s)\right) \text {d}s, \end{aligned}$$

whenever \(\alpha _0(t)\le y(t)\le x(t)\le \beta _0(t)\).

Taking \(M=1/12\), \(W=1/21\), \(N=1/15\), \(L=1/20\), \(L_1=1/15\), \(L_1^*=13/5\), it follows that

$$\begin{aligned} C_1&=1+\frac{1}{15}\left( \frac{1}{3}-\frac{1}{7}\right) =\frac{319}{315},\\ D_1&=C_1\int _0^{\frac{1}{6}}U(s)\text {d}s+\int _{\frac{1}{6}}^{\frac{5}{14}} \left[ 1+\frac{1}{15}\left( \frac{1}{2}-\frac{1}{7}-s\right) \right] U(s)\text {d}s +\int _{\frac{5}{14}}^{\frac{1}{2}}U(s)\text {d}s,\\ \hat{L}&=\frac{1}{15}\left( \frac{1}{3}-\frac{1}{7}\right) =\frac{4}{315},\\ U(t)&=\frac{\prod _{0<t_k<t}A_k^*}{1-\prod _{k=1}^m A_k^*}\left[ \sum _{k=1}^m \prod _{j=k+1}^m A_j^* B_k^*+\int _{\frac{1}{2}}^1 r(s)\text {d}s\right] \\&\quad +\left[ \sum _{0<t_k<t} \prod _{t_k<t_j<t} A_j^* B_k^*+\int _{t_l}^tr(s)\text {d}s\right] ,\\ A_1^*&=1-\frac{13}{5}\left( \frac{1}{3}-\frac{1}{4}\right) ,\\ B_1^*&=A_1^*\int _{0}^{\frac{1}{6}}r(s)\text {d}s+\int _{\frac{1}{6}}^{\frac{1}{4}} \left[ 1-\frac{13}{5}\left( \frac{1}{2}-\frac{1}{4}-s\right) \right] r(s)\text {d}s +\int _{\frac{1}{4}}^{\frac{1}{2}}r(s)\text {d}s,\\ r(t)&=\frac{1}{12}+\frac{1}{21}+\frac{1}{15}\int _0^t t^2s^2 \text {d}s +\frac{1}{20}\int _0^1t^2s^4\text {d}s. \end{aligned}$$

A simple calculation gives

$$\begin{aligned} \prod _{k=1}^mC_k\left[ \hat{L}+\sum _{k=1}^mD_k+\int _0^TU(s)\text {d}s\right] \approx 0.9717485093 \le 1. \end{aligned}$$

Furthermore, we have

$$\begin{aligned} \psi&=:\frac{1+e^{\sqrt{\frac{1}{12}}}}{2\sqrt{\frac{1}{12}} \left( e^{\sqrt{\frac{1}{12}}}-1\right) }\bigg [\int _0^1 \left( \frac{1}{21}+\frac{1}{15}\int _0^s s^2r^2 \mathrm{d}r+\frac{1}{20}\int _0^1 s^2r^4 \mathrm{d}r \right) \text {d}s \\&\quad +\frac{1}{15}\left( \frac{1}{3}-\frac{1}{7}\right) \bigg ]+\frac{13}{10} \left( \frac{1}{3}-\frac{1}{4}\right) ,\\&=0.922192396<1,\\ \mu&=:\frac{1}{2}\left[ \int _0^1 \left( \frac{1}{21}+\frac{1}{15}\int _0^s s^2r^2 \mathrm{d}r+\frac{1}{20}\int _0^1 s^2r^4 \mathrm{d}r\right) \text {d}s+\frac{1}{15}\left( \frac{1}{3}-\frac{1}{7}\right) \right] \\&\quad +\frac{\sqrt{\frac{1}{12}}\left( 1+e^{\sqrt{\frac{1}{12}}} \right) }{2\left( e^{\sqrt{\frac{1}{12}} }-1\right) } \frac{13}{5}\left( \frac{1}{3}-\frac{1}{4}\right) ,\\&=0.2566612741<1. \end{aligned}$$

Therefore, PBVP (3.8) satisfies all conditions of Theorem 3.2. Thus, PBVP (3.8) has minimal and maximal solutions in the segment \([\alpha _0,\beta _0]\).
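For completeness, the following Python lines (our own arithmetic check) evaluate \(\psi\) and \(\mu\) of Lemma 3.2 for the data of this example; the bracketed integral is computed in closed form as \(W+N/18+L/15\). The printed values may differ slightly from the decimals quoted above, but both remain below 1, as required.

```python
import numpy as np

# Arithmetic check (ours) of conditions (3.4) and (3.5) for Example 3.1. Here
# int_0^1 (W + N int_0^s s^2 r^2 dr + L int_0^1 s^2 r^4 dr) ds = W + N/18 + L/15
# exactly, i.e. 1/21 + 1/270 + 1/300.

M, W, N, L = 1.0/12.0, 1.0/21.0, 1.0/15.0, 1.0/20.0
L1, L1s = 1.0/15.0, 13.0/5.0
tau1, sigma1, delta1, eps1, T = 1.0/3.0, 1.0/7.0, 1.0/3.0, 1.0/4.0, 1.0

base  = W + N/18.0 + L/15.0 + L1*(tau1 - sigma1)  # integral term plus sum L_k(tau_k - sigma_k)
sqrtM = np.sqrt(M)
e     = np.exp(sqrtM*T)

psi = (1.0 + e)/(2.0*sqrtM*(e - 1.0))*base + 0.5*L1s*(delta1 - eps1)
mu  = 0.5*base + sqrtM*(1.0 + e)/(2.0*(e - 1.0))*L1s*(delta1 - eps1)
print(psi, psi < 1)    # approximately 0.92
print(mu, mu < 1)      # approximately 0.25
```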

4 Conclusion

This paper has established two new maximum principles, Theorems 2.1 and 2.2. Theorem 2.2 is then used as a tool for proving the existence of extremal solutions of the periodic boundary value problem for impulsive functional integro-differential equations with integral jump conditions (3.1), as stated in Theorem 3.2.