1 Introduction

Over the last decades, fixed point theory has been extended to various abstract spaces and applied successfully to a wide range of scientific problems, connecting pure and applied approaches and raising relevant computational issues. In particular, fixed point methods have been used to study and compute solutions of differential equations, integral equations, dynamical systems, models in economics and related areas, game theory, physics, engineering, computer science, and neural networks, among many others. They are also basic tools for the study of nonlinear systems: they provide a framework in which basic properties of the solutions of linear models can be lifted to deduce (or approximate) the behavior of nonlinear ones, whose solutions arise as fixed points of a suitable operator. The most influential and celebrated result in this area, the Banach contraction principle (see [3]), was proved by the Polish mathematician Banach in 1922, and fixed point theory has developed rapidly ever since. In [2] and [5], the authors introduced b-metric spaces as a genuine generalization of metric spaces and proved fixed point theorems in this setting that generalize the Banach contraction principle. Subsequently, many papers have dealt with fixed point theory or the variational principle for single-valued and multi-valued operators in b-metric spaces (see [4, 6, 9, 10, 18,19,20,21]). In recent years, stability results for fixed point iteration procedures have become the focus of intensive research, with applications in many branches of mathematics; see, for example, [1, 7, 13, 14]. The most important such procedure is Picard's iteration, whose stability plays a prominent role in many areas.
On the other hand, quite a few authors are interested in the P property of fixed points for certain mappings (see [7, 8]). In this article, we obtain some fixed point theorems for a class of contractive mappings in b-metric spaces. Moreover, we consider the T-stability of Picard's iteration and the P property for such mappings. The results improve and generalize previous results from [6, 8, 9, 14]. We illustrate our assertions with an example. In addition, we give applications to two classes of ordinary differential equations with initial value conditions: we verify the existence and uniqueness of solutions to such equations and, further, give concrete mathematical expressions for these solutions. To the best of our knowledge, authors using fixed point methods usually deal only with the existence and uniqueness of solutions to differential or integral equations and seldom consider the expression of the solution; this alone makes our results valuable.

In the sequel, we always denote by \(\mathbb {N}\), \(\mathbb {R}\), \(\mathbb {R}^+\) the sets of positive integers, real numbers and nonnegative real numbers, respectively.

First of all, let us recall the concept of b-metric space.

Definition 1.1

[2, 5, 9] Let X be a (nonempty) set and \(s\ge 1\) be a given real number. A function \(d:X\times X\rightarrow \mathbb {R}^+\) is called a b-metric on X if, for all \(x,y,z\in X\), the following conditions hold:

  1. (b1)

    \(d(x,y)=0\) if and only if \(x=y\);

  2. (b2)

    \(d(x,y)=d(y,x)\);

  3. (b3)

    \(d(x,z)\le s[d(x,y)+d(y,z)]\).

In this case, the pair (X, d) is called a b-metric space or a metric type space.

For some examples of b-metric spaces, the reader may refer to [2, 4, 5, 10, 18,19,20,21] and the references therein. Motivated by Example 1.2 of [6], we give an example of an unusual b-metric space as follows.
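Before the function-space example below, the prototypical toy case \(d(x,y)=|x-y|^2\) on \(\mathbb{R}\) (our own illustration, not one of the cited examples) can be spot-checked numerically: it satisfies (b3) with \(s=2\), by the second inequality in (1.4) with \(p=2\), while failing the ordinary triangle inequality.

```python
import itertools
import random

# Toy b-metric on the real line: d(x, y) = |x - y|^2.
# It satisfies the relaxed triangle inequality (b3) with s = 2,
# but it is not an ordinary metric.
def d(x, y):
    return abs(x - y) ** 2

random.seed(0)
pts = [random.uniform(-10.0, 10.0) for _ in range(40)]

ordinary_triangle_fails = any(
    d(x, z) > d(x, y) + d(y, z) + 1e-12
    for x, y, z in itertools.product(pts, repeat=3)
)
relaxed_triangle_holds = all(
    d(x, z) <= 2 * (d(x, y) + d(y, z)) + 1e-12
    for x, y, z in itertools.product(pts, repeat=3)
)
print(ordinary_triangle_fails, relaxed_triangle_holds)  # True True
```

The failure of the ordinary triangle inequality already occurs at, e.g., \(x=0\), \(y=1\), \(z=2\): \(d(0,2)=4>2=d(0,1)+d(1,2)\).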

Example 1.2

Let \(H^p(U)=\{f\in H(U):\Vert f\Vert _{H^p}<\infty \}\) (\(0<p<1\)) be \(H^p\) space defined on the unit disk U, where H(U) is the set of all holomorphic functions on U and

$$\begin{aligned} \Vert f\Vert _{H^p}=\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |f(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}. \end{aligned}$$

Denote \(X=H^p(U)\). Define a mapping \(d:X\times X\rightarrow \mathbb {R}^+\) by

$$\begin{aligned} d(f,g)=\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |f(r\mathrm{e}^{\mathrm{i}\theta })-g(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}} \end{aligned}$$
(1.1)

for all \(f,g\in X\). Then (X, d) is a b-metric space with coefficient \(s=2^{\frac{1}{p}-1}\).

Indeed, we only prove that condition (b3) in Definition 1.1 is satisfied. To this end, let \(f,g,h\in X\). By (1.1), it suffices to show that

$$\begin{aligned}&\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |f(r\mathrm{e}^{\mathrm{i} \theta })-h(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\nonumber \\&\quad \le 2^{\frac{1}{p} -1}\left[ \sup _{0<r<1}\left( \frac{1}{2\pi } \int _{-\pi }^\pi |f(r\mathrm{e}^{\mathrm{i}\theta })-g(r\mathrm{e}^{\mathrm{i} \theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\right. \nonumber \\&\qquad \left. +\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |g(r\mathrm{e}^{\mathrm{i}\theta })-h(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d} \theta \right) ^{\frac{1}{p}}\right] . \end{aligned}$$
(1.2)

Denote \(u(r\mathrm{e}^{\mathrm{i}\theta })=f(r\mathrm{e}^{\mathrm{i}\theta })-g(r\mathrm{e}^{\mathrm{i}\theta })\) and \(v(r\mathrm{e}^{\mathrm{i}\theta })=g(r\mathrm{e}^{\mathrm{i}\theta })-h(r\mathrm{e}^{\mathrm{i}\theta })\). Then (1.2) reduces to

$$\begin{aligned}&\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |u(r\mathrm{e}^{\mathrm{i}\theta })+v(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\nonumber \\&\quad \le 2^{\frac{1}{p} -1}\left[ \sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |u(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\right. \nonumber \\&\quad \left. +\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |v(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\right] . \end{aligned}$$
(1.3)

To prove (1.3), we use the following elementary inequalities:

$$\begin{aligned} (a+b)^p\le & {} a^p+b^p\quad (a,b\ge 0,\ 0<p\le 1),\nonumber \\ (a+b)^p\le & {} 2^{p-1}(a^p+b^p)\quad (a,b\ge 0,\ p\ge 1), \end{aligned}$$
(1.4)

Applying the first inequality in (1.4) (with exponent \(p\in (0,1)\)) and then the second (with exponent \(\frac{1}{p}\ge 1\)), we obtain

$$\begin{aligned}&\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |u(r\mathrm{e}^{\mathrm{i}\theta })+v(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\\&\quad \le \sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi (|u(r\mathrm{e}^{\mathrm{i}\theta })|+|v(r\mathrm{e}^{\mathrm{i}\theta })|)^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\\&\quad \le \sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi (|u(r\mathrm{e}^{\mathrm{i}\theta })|^p+|v(r\mathrm{e}^{\mathrm{i}\theta })|^p)\mathrm{d}\theta \right) ^{\frac{1}{p}}\\&\quad =\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |u(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta +\frac{1}{2\pi } \int _{-\pi }^\pi |v(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\\&\quad \le 2^{\frac{1}{p} -1}\sup _{0<r<1}\left[ \left( \frac{1}{2\pi }\int _{-\pi }^\pi |u(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}} +\left( \frac{1}{2\pi }\int _{-\pi }^\pi |v(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\right] \\&\quad \le 2^{\frac{1}{p} -1}\left[ \sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |u(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}} +\sup _{0<r<1}\left( \frac{1}{2\pi }\int _{-\pi }^\pi |v(r\mathrm{e}^{\mathrm{i}\theta })|^p\mathrm{d}\theta \right) ^{\frac{1}{p}}\right] . \end{aligned}$$
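The two elementary inequalities in (1.4), on which the whole computation rests, can be spot-checked numerically on random nonnegative samples:

```python
import random

# Spot-check of (1.4): (a+b)^p <= a^p + b^p for 0 < p <= 1, and
# (a+b)^p <= 2^(p-1) * (a^p + b^p) for p >= 1.
random.seed(1)
for _ in range(10000):
    a, b = random.uniform(0.0, 100.0), random.uniform(0.0, 100.0)
    p_small = random.uniform(1e-3, 1.0)   # exponent in (0, 1]
    p_large = random.uniform(1.0, 8.0)    # exponent in [1, 8]
    assert (a + b) ** p_small <= (a ** p_small + b ** p_small) * (1 + 1e-12)
    assert (a + b) ** p_large <= 2 ** (p_large - 1) * (a ** p_large + b ** p_large) * (1 + 1e-12)
print("both inequalities in (1.4) hold on all samples")
```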

Definition 1.3

[18, 19] Let (X, d) be a b-metric space and \(\{x_n\}\) a sequence in X. We say that

  1. (1)

    \(\{x_n\}\) b-converges to \(x\in X\) if \(d(x_n,x)\rightarrow 0\) as \(n\rightarrow \infty \);

  2. (2)

    \(\{x_n\}\) is a b-Cauchy sequence if \(d(x_m,x_n)\rightarrow 0\) as \(m,n\rightarrow \infty \);

  3. (3)

(X, d) is b-complete if every b-Cauchy sequence in X is b-convergent.

Each b-convergent sequence in a b-metric space has a unique limit and it is also a b-Cauchy sequence. Moreover, in general, a b-metric is not necessarily continuous. The following example illustrates this claim.

Example 1.4

[10] Let \(X=\mathbb {N}\cup \{\infty \}\). Define a mapping \(d:X\times X\rightarrow \mathbb {R}^+\) as follows:

$$\begin{aligned}&d(m,n) \\&\quad = \left\{ \begin{array}{ll} 0, &{}\quad \text {if } m=n;\\ |\frac{1}{m}-\frac{1}{n}|, &{}\quad \! \text {if one of } m,n~(m\ne n) \text { is even and the other is even or } \infty ;\\ 5, &{} \quad \!\text {if one of } m,n~(m\ne n) \text { is odd and the other is odd or } \infty ;\\ 2, &{} \quad \!\text {others}. \end{array} \right. \end{aligned}$$

It is not hard to verify that

$$\begin{aligned} d(m,p)\le \frac{5}{2}[d(m,n)+d(n,p)]\quad (m,n,p\in X). \end{aligned}$$

Then (X, d) is a b-metric space with coefficient \(s=\frac{5}{2}\). Choose \(x_n=2n~(n\in \mathbb {N})\); then

$$\begin{aligned} d(x_n,\infty )=\frac{1}{2n}\rightarrow 0\quad (n\rightarrow \infty ), \end{aligned}$$

that is, \(x_n\rightarrow \infty ~(n\rightarrow \infty )\). However, \(d(x_n,1)=2\nrightarrow 5=d(\infty ,1)~(n\rightarrow \infty )\).
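Example 1.4 can be rendered computationally as a hedged sketch (the encoding of the point \(\infty\) as a string is our implementation choice):

```python
from fractions import Fraction

INF = "infinity"  # our stand-in for the point infinity in X = N ∪ {∞}

def d(m, n):
    """The b-metric of Example 1.4."""
    if m == n:
        return Fraction(0)
    is_even = lambda x: x != INF and x % 2 == 0
    is_odd = lambda x: x != INF and x % 2 == 1
    if is_even(m) and is_even(n):
        return abs(Fraction(1, m) - Fraction(1, n))
    if (is_even(m) and n == INF) or (is_even(n) and m == INF):
        finite = m if m != INF else n
        return Fraction(1, finite)        # |1/m - 1/infinity| = 1/m
    if (is_odd(m) and is_odd(n)) or (is_odd(m) and n == INF) or (is_odd(n) and m == INF):
        return Fraction(5)
    return Fraction(2)

# x_n = 2n b-converges to infinity: d(x_n, infinity) = 1/(2n) -> 0 ...
print([d(2 * k, INF) for k in (1, 10, 100)])
# ... yet d(x_n, 1) = 2 for every n, while d(infinity, 1) = 5, so d is not continuous.
print(d(2, 1), d(200, 1), d(INF, 1))
```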

Recently, Qing and Rhoades [14] introduced the notion of T-stability of Picard's iteration in metric spaces as follows.

Definition 1.5

[14] Let (X, d) be a metric space and T a self-map on X. Let \(x_0\) be a point of X, and assume that \(x_{n+1}=f(T,x_n)\) is an iteration procedure, involving T, which yields a sequence \(\{x_n\}\) of points from X. The iteration procedure \(x_{n+1}=f(T,x_n)\) is said to be T-stable with respect to T if \(\{x_n\}\) converges to a fixed point q of T and whenever \(\{y_n\}\) is a sequence in X with \(\lim _{n\rightarrow \infty }d(y_{n+1},f(T,y_n))=0\), we have \(\lim _{n\rightarrow \infty }y_n=q\). In particular, if these conditions hold for Picard's iteration procedure \(x_{n+1}=Tx_n\), then we say that Picard's iteration is T-stable.

In the following, we simplify Definition 1.5 and introduce the concept of T-stability of Picard’s iteration in b-metric space.

Definition 1.6

Let (X, d) be a b-metric space, \(x_0\in X\) and \(T:X\rightarrow X\) be a mapping with \(F(T)\ne \emptyset \), where F(T) denotes the set of all fixed points of T (here and hereinafter). Then Picard's iteration \(x_{n+1}=Tx_n\) is said to be T-stable with respect to T if \(\lim _{n\rightarrow \infty }x_n=q\in F(T)\) and whenever \(\{y_n\}\) is a sequence in X with \(\lim _{n\rightarrow \infty }d(y_{n+1},Ty_n)=0\), we have \(\lim _{n\rightarrow \infty }y_n=q\).

For the convenience of the reader, we recall the well-posedness of fixed point problems, as defined and studied in [15,16,17].

Definition 1.7

[15,16,17] Let \((K,\rho )\) be a bounded complete metric space. We say that the fixed point problem for a mapping \(T:K\rightarrow K\) is well posed if there exists a unique \(q\in K\) such that \(q\in F(T)\) and whenever \(\{y_n\}\) is a sequence in K with \(\lim _{n\rightarrow \infty }\rho (y_n,Ty_n)=0\) we have \(\lim _{n\rightarrow \infty }y_n=q\).

Remark 1.8

Comparing Definition 1.6 with Definition 1.7, we see that T-stability differs from well-posedness. First, the underlying spaces differ: (X, d) is a b-metric space while \((K,\rho )\) is a bounded complete metric space. Second, Definition 1.6 concerns Picard's iterative sequence, whereas Definition 1.7 concerns a general sequence. In addition, the conditions \(\lim _{n\rightarrow \infty }d(y_{n+1},Ty_n)=0\) and \(\lim _{n\rightarrow \infty }d(y_n,Ty_n)=0\) differ in essential ways. For reasons of space, this article only discusses the T-stability of fixed point problems.

What follows is a useful lemma for the proof of our main results.

Lemma 1.9

[11] Let \(\{a_n\}\) and \(\{c_n\}\) be nonnegative sequences satisfying \(a_{n+1}\le ha_n+c_n\) for all \(n\in \mathbb {N},\) where \(0\le h<1\) is a constant and \(\lim _{n\rightarrow \infty }c_n=0.\) Then \(\lim _{n\rightarrow \infty }a_n=0.\)
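Lemma 1.9 can be illustrated numerically in its worst case, taking equality \(a_{n+1}=ha_n+c_n\) (the parameter values \(h=\frac{1}{2}\) and \(c_n=\frac{1}{n}\) are our choice):

```python
# Worst case of Lemma 1.9: a_{n+1} = h*a_n + c_n with h = 0.5 and c_n = 1/n -> 0.
# The sequence a_n is still forced to 0.
h = 0.5
a = 10.0
history = []
for n in range(1, 2001):
    a = h * a + 1.0 / n
    history.append(a)
print(history[9], history[-1])  # the tail is close to 0
```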

The following lemma has frequently been utilized by many authors in order to overcome the discontinuity of the b-metric. We do not need it in this paper, since our arguments avoid that problem altogether.

Lemma 1.10

[10] Let (X, d) be a b-metric space with coefficient \(s\ge 1\) and let \(\{x_n\}\) and \(\{y_n\}\) be b-convergent to points \(x,y\in X,\) respectively. Then we have

$$\begin{aligned} \frac{1}{s^2}d(x,y)\le \liminf _{n\rightarrow \infty }d(x_n,y_n)\le \limsup _{n\rightarrow \infty }d(x_n,y_n)\le s^2d(x,y). \end{aligned}$$

In particular,  if \(x=y,\) then we have \(\lim _{n\rightarrow \infty }d(x_n,y_n)=0\). Moreover,  for each \(z\in X,\) we have

$$\begin{aligned} \frac{1}{s}d(x,z)\le \liminf _{n\rightarrow \infty }d(x_n,z)\le \limsup _{n\rightarrow \infty }d(x_n,z)\le sd(x,z). \end{aligned}$$

2 Main results

In this section, we first give a useful lemma, which greatly generalizes its counterpart in the existing literature. Second, we give several fixed point theorems for contractive mappings on b-complete b-metric spaces. Third, we deduce the T-stability of Picard's iteration and the P property for such mappings. Fourth, we give an example to illustrate our conclusions.

Lemma 2.1

Let (X, d) be a b-metric space with coefficient \(s\ge 1\) and \(T:X\rightarrow X\) be a mapping. Suppose that \(\{x_n\}\) is a sequence in X induced by \(x_{n+1}=Tx_n\) such that

$$\begin{aligned} d(x_n,x_{n+1})\le \lambda d(x_{n-1},x_n), \end{aligned}$$
(2.1)

for all \(n\in \mathbb {N},\) where \(\lambda \in [0,1)\) is a constant. Then \(\{x_n\}\) is a b-Cauchy sequence.

Proof

Let \(x_0\in X\) and \(x_{n+1}=Tx_n\) for all \(n\in \mathbb {N}\). We divide the proof into three cases.

Case 1 \(\lambda \in [0,\frac{1}{s})~(s>1)\). By (2.1), we have

$$\begin{aligned} d(x_n,x_{n+1})\le & {} \lambda d(x_{n-1},x_n)\\\le & {} \lambda ^2d(x_{n-2},x_{n-1})\\&\vdots&\nonumber \\\le & {} \lambda ^nd(x_0,x_1). \end{aligned}$$

Thus, for any \(n>m\) and \(n,m\in \mathbb {N}\), we have

$$\begin{aligned}&d(x_m,x_n)\\&\quad \le s[d(x_m,x_{m+1})+d(x_{m+1},x_n)]\\&\quad \le sd(x_m,x_{m+1})+s^2[d(x_{m+1},x_{m+2})+d(x_{m+2},x_n)]\\&\quad \le sd(x_m,x_{m+1})+s^2d(x_{m+1},x_{m+2})+s^3[d(x_{m+2},x_{m+3})+d(x_{m+3},x_n)]\\&\quad \le sd(x_m,x_{m+1})+s^2d(x_{m+1},x_{m+2})+s^3d(x_{m+2},x_{m+3})\\&\qquad +\,\cdots +s^{n-m-1}d(x_{n-2},x_{n-1})+s^{n-m-1}d(x_{n-1},x_n)\\&\quad \le s\lambda ^m d(x_0,x_1)+s^2\lambda ^{m+1} d(x_0,x_1)+s^3\lambda ^{m+2} d(x_0,x_1)\\&\qquad +\cdots +s^{n-m-1}\lambda ^{n-2} d(x_0,x_1)+s^{n-m-1}\lambda ^{n-1} d(x_0,x_1)\\&\quad \le s\lambda ^m(1+s\lambda +s^2\lambda ^2+\cdots +s^{n-m-2}\lambda ^{n-m-2}+s^{n-m-1}\lambda ^{n-m-1})d(x_0,x_1)\\&\quad \le s\lambda ^m\left[ \sum _{i=0}^\infty (s\lambda )^i\right] d(x_0,x_1)\\&\quad =\frac{s\lambda ^m}{1-s\lambda }d(x_0,x_1)\rightarrow 0\quad (m\rightarrow \infty ), \end{aligned}$$

which implies that \(\{x_n\}\) is a b-Cauchy sequence. In other words, \(\{T^nx_0\}\) is a b-Cauchy sequence.
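The geometric-series bound of Case 1 can be checked on concrete data (our toy instance, not part of the lemma): take \(X=\mathbb{R}\) with \(d(x,y)=|x-y|^2\) (so \(s=2\)) and \(Tx=\frac{x}{3}\), which satisfies (2.1) with \(\lambda =\frac{1}{9}<\frac{1}{s}\).

```python
# Verify the Case 1 estimate d(x_m, x_n) <= s*lam^m/(1 - s*lam) * d(x_0, x_1)
# for T x = x/3 on R with d(x, y) = |x - y|^2, i.e. s = 2 and lam = 1/9.
s, lam = 2.0, 1.0 / 9.0
T = lambda x: x / 3.0
d = lambda x, y: (x - y) ** 2

xs = [1.0]                       # x_0 = 1, then x_{n+1} = T x_n
for _ in range(30):
    xs.append(T(xs[-1]))

bound_ok = all(
    d(xs[m], xs[n]) <= s * lam ** m / (1 - s * lam) * d(xs[0], xs[1]) + 1e-18
    for m in range(25) for n in range(m + 1, 31)
)
print(bound_ok)  # True
```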

Case 2 Let \(\lambda \in [\frac{1}{s},1)~(s>1)\). Since \(0\le \lambda <1\), we have \(\lambda ^n\rightarrow 0\) as \(n\rightarrow \infty \), so there is \(n_0\in \mathbb {N}\) such that \(\lambda ^{n_0}<\frac{1}{s}\). Thus, applying the argument of Case 1 with \(T^{n_0}\) and the constant \(\lambda ^{n_0}\) in place of T and \(\lambda \), we claim that

$$\begin{aligned} \{(T^{n_0})^nx_0\}_{n=0}^\infty :=\{x_{n_0},x_{n_0+1},x_{n_0+2},\ldots ,x_{n_0+n},\ldots \} \end{aligned}$$

is a b-Cauchy sequence. Then

$$\begin{aligned} \{x_n\}_{n=0}^\infty =\{x_0,x_1,x_2,\ldots ,x_{n_0-1}\}\cup \{x_{n_0},x_{n_0+1},x_{n_0+2},\ldots ,x_{n_0+n},\ldots \} \end{aligned}$$

is a b-Cauchy sequence in X.

Case 3 Let \(s=1\). The argument of Case 1 applies verbatim, since now \(s\lambda =\lambda <1\) for every \(\lambda \in [0,1)\). \(\square \)

Remark 2.2

Lemma 2.1 expands the range of [9, Lemma 3.1] from \(\lambda \in [0,\frac{1}{s})\) to \(\lambda \in [0,1)\); clearly, this is a genuine generalization. Moreover, although Lemma 2.1 is a special case of [12, Lemma 2.2], our proof is direct and does not use [12, Lemma 2.1], on which the proof of [12, Lemma 2.2] strongly relies.

Theorem 2.3

Let (X, d) be a b-complete b-metric space with coefficient \(s\ge 1\) and \(T:X\rightarrow X\) be a mapping such that

$$\begin{aligned} d(Tx,Ty)\le & {} \lambda _1d(x,y)+\lambda _2\frac{d(x,Tx)d(y,Ty)}{1+d(x,y)}+\lambda _3\frac{d(x,Ty)d(y,Tx)}{1+d(x,y)}\nonumber \\&+\,\lambda _4\frac{d(x,Tx)d(x,Ty)}{1+d(x,y)}+\lambda _5\frac{d(y,Ty)d(y,Tx)}{1+d(x,y)}, \end{aligned}$$
(2.2)

where \(\lambda _1,\lambda _2,\lambda _3,\lambda _4\) and \(\lambda _5\) are nonnegative constants with \(\lambda _1+\lambda _2+\lambda _3+s\lambda _4+s\lambda _5<1\). Then T has a unique fixed point in X. Moreover,  for any \(x\in X,\) the iterative sequence \(\{T^nx\}~(n\in \mathbb {N})\) b-converges to the fixed point.

Proof

Choose \(x_0\in X\) and construct the Picard iterative sequence \(\{x_n\}\) by \(x_{n+1}=Tx_n~(n\in \mathbb {N})\). If there exists \(n_0\in \mathbb {N}\) such that \(x_{n_0}=x_{n_0+1}\), then \(x_{n_0}=x_{n_0+1}=Tx_{n_0}\), i.e., \(x_{n_0}\) is a fixed point of T. Hence, without loss of generality, we may assume \(x_n\ne x_{n+1}\) for all \(n\in \mathbb {N}\). By (2.2), we have

$$\begin{aligned}&d(x_n,x_{n+1})\\&\quad =d(Tx_{n-1},Tx_n)\\&\quad \le \lambda _1d(x_{n-1},x_n)+\lambda _2\frac{d(x_{n-1},Tx_{n-1}) d(x_n,Tx_n)}{1+d(x_{n-1},x_n)}\\&\qquad +\,\lambda _3\frac{d(x_{n-1},Tx_n)d(x_n,Tx_{n-1})}{1+d(x_{n-1},x_n)}\\&\qquad +\,\lambda _4\frac{d(x_{n-1},Tx_{n-1})d(x_{n-1},Tx_n)}{1+d(x_{n-1},x_n)}+ \lambda _5\frac{d(x_n,Tx_n)d(x_n,Tx_{n-1})}{1+d(x_{n-1},x_n)}\\&\quad =\lambda _1d(x_{n-1},x_n)+\lambda _2\frac{d(x_{n-1},x_n) d(x_n,x_{n+1})}{1+d(x_{n-1},x_n)}+\lambda _3\frac{d(x_{n-1}, x_{n+1})d(x_n,x_n)}{1+d(x_{n-1},x_n)}\\&\qquad +\,\lambda _4\frac{d(x_{n-1},x_n)d(x_{n-1},x_{n+1})}{1+d(x_{n-1},x_n)}+\lambda _5\frac{d(x_n,x_{n+1})d(x_n,x_n)}{1+d(x_{n-1},x_n)}\\&\quad \le \lambda _1d(x_{n-1},x_n)+\lambda _2d(x_n,x_{n+1})+s\lambda _4[d(x_{n-1},x_n) +d(x_n,x_{n+1})]. \end{aligned}$$

It follows that

$$\begin{aligned} (1-\lambda _2-s\lambda _4)d(x_n,x_{n+1})\le (\lambda _1+s\lambda _4)d(x_{n-1},x_n). \end{aligned}$$
(2.3)

Again by (2.2), we have

$$\begin{aligned}&d(x_n,x_{n+1})\\&\quad =d(Tx_n,Tx_{n-1})\\&\quad \le \lambda _1d(x_n,x_{n-1})+\lambda _2\frac{d(x_n,Tx_n) d(x_{n-1},Tx_{n-1})}{1+d(x_n,x_{n-1})}+\lambda _3 \frac{d(x_n,Tx_{n-1})d(x_{n-1},Tx_n)}{1+d(x_n,x_{n-1})}\\&\qquad +\,\lambda _4\frac{d(x_n,Tx_n)d(x_n,Tx_{n-1})}{1+d(x_n,x_{n-1})}+ \lambda _5\frac{d(x_{n-1}, Tx_{n-1})d(x_{n-1},Tx_n)}{1+d(x_n,x_{n-1})}\\&\quad =\lambda _1d(x_n,x_{n-1})+\lambda _2\frac{d(x_n,x_{n+1}) d(x_{n-1},x_n)}{1+d(x_n,x_{n-1})}+\lambda _3 \frac{d(x_n,x_n)d(x_{n-1},x_{n+1})}{1+d(x_n,x_{n-1})}\\&\qquad +\,\lambda _4\frac{d(x_n,x_{n+1})d(x_n,x_n)}{1+d(x_n,x_{n-1})}+ \lambda _5\frac{d(x_{n-1},x_n)d(x_{n-1},x_{n+1})}{1+d(x_n,x_{n-1})}\\&\quad \le \lambda _1d(x_{n-1},x_n)+\lambda _2d(x_n,x_{n+1})+s \lambda _5[d(x_{n-1},x_n)+d(x_n,x_{n+1})]. \end{aligned}$$

This establishes that

$$\begin{aligned} (1-\lambda _2-s\lambda _5)d(x_n,x_{n+1})\le (\lambda _1+s\lambda _5)d(x_{n-1},x_n). \end{aligned}$$
(2.4)

Adding up (2.3) and (2.4) yields

$$\begin{aligned} d(x_n,x_{n+1})\le \frac{2\lambda _1+s\lambda _4+s\lambda _5}{2-2\lambda _2-s\lambda _4-s\lambda _5}d(x_{n-1},x_n). \end{aligned}$$

Put \(\lambda =\frac{2\lambda _1+s\lambda _4+s\lambda _5}{2-2\lambda _2-s\lambda _4-s\lambda _5}\). In view of \(\lambda _1+\lambda _2+\lambda _3+s\lambda _4+s\lambda _5<1\), we have \(0\le \lambda <1\). Thus, by Lemma 2.1, \(\{x_n\}\) is a b-Cauchy sequence in X. Since (X, d) is b-complete, there exists some point \(x^*\in X\) such that \(x_n\rightarrow x^*\) as \(n\rightarrow \infty \).
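The claim that \(\lambda _1+\lambda _2+\lambda _3+s\lambda _4+s\lambda _5<1\) forces \(0\le \lambda <1\) can be spot-checked by random sampling over admissible parameters:

```python
import random

# Spot-check: if l1+l2+l3+s*l4+s*l5 < 1 with s >= 1 and all li >= 0, then
# lam = (2*l1 + s*l4 + s*l5) / (2 - 2*l2 - s*l4 - s*l5) lies in [0, 1).
random.seed(2)
for _ in range(100000):
    s = random.uniform(1.0, 5.0)
    raw = [random.random() for _ in range(5)]
    # rescale so that l1 + l2 + l3 + s*l4 + s*l5 is a fixed value below 1
    scale = random.uniform(0.0, 0.999) / (raw[0] + raw[1] + raw[2] + s * raw[3] + s * raw[4])
    l1, l2, l3, l4, l5 = (r * scale for r in raw)
    lam = (2 * l1 + s * l4 + s * l5) / (2 - 2 * l2 - s * l4 - s * l5)
    assert 0 <= lam < 1
print("lambda in [0, 1) on all samples")
```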

By (2.2), it is easy to see that

$$\begin{aligned}&d(x_{n+1},Tx^*)\nonumber \\&\quad =d(Tx_n,Tx^*)\nonumber \\&\quad \le \lambda _1d(x_n,x^*)+\lambda _2\frac{d(x_n,Tx_n)d(x^*,Tx^*)}{1+d(x_n,x^*)}+\lambda _3\frac{d(x_n,Tx^*)d(x^*,Tx_n)}{1+d(x_n,x^*)}\nonumber \\&\qquad +\,\lambda _4\frac{d(x_n,Tx_n)d(x_n,Tx^*)}{1+d(x_n,x^*)}+\lambda _5\frac{d(x^*,Tx^*)d(x^*,Tx_n)}{1+d(x_n,x^*)}\nonumber \\&\quad =\lambda _1d(x_n,x^*)+\lambda _2\frac{d(x_n,x_{n+1})d(x^*,Tx^*)}{1+d(x_n,x^*)}+\lambda _3\frac{d(x_n,Tx^*) d(x^*,x_{n+1})}{1+d(x_n,x^*)}\nonumber \\&\qquad +\,\lambda _4\frac{d(x_n,x_{n+1})d(x_n,Tx^*)}{1+d(x_n,x^*)}+\lambda _5\frac{d(x^*,Tx^*)d(x^*,x_{n+1})}{1+d(x_n,x^*)}. \end{aligned}$$
(2.6)

Letting \(n\rightarrow \infty \) on both sides of (2.6) and using \(x_n\rightarrow x^*\), we get \(\lim _{n\rightarrow \infty }d(x_{n+1},Tx^*)=0\). That is, \(x_n\rightarrow Tx^*\) \((n\rightarrow \infty )\). Hence, by the uniqueness of the limit of a b-convergent sequence, \(Tx^*=x^*\). In other words, \(x^*\) is a fixed point of T.

Finally, we show the uniqueness of the fixed point. Indeed, if there is another fixed point \(y^*\), then by (2.2),

$$\begin{aligned} d(x^*,y^*)= & {} d(Tx^*,Ty^*)\nonumber \\\le & {} \lambda _1d(x^*,y^*)+\lambda _2\frac{d(x^*,Tx^*)d(y^*,Ty^*)}{1+d(x^*,y^*)} +\lambda _3\frac{d(x^*,Ty^*)d(y^*,Tx^*)}{1+d(x^*,y^*)}\nonumber \\&+\,\lambda _4\frac{d(x^*,Tx^*)d(x^*,Ty^*)}{1+d(x^*,y^*)}+\lambda _5\frac{d(y^*,Ty^*)d(y^*,Tx^*)}{1+d(x^*,y^*)}\nonumber \\= & {} \lambda _1d(x^*,y^*)+\lambda _3\frac{d(x^*,y^*)d(x^*,y^*)}{1+d(x^*,y^*)}\nonumber \\\le & {} (\lambda _1+\lambda _3)d(x^*,y^*). \end{aligned}$$
(2.7)

Because \(\lambda _1+\lambda _2+\lambda _3+s\lambda _4+s\lambda _5<1\) implies \(\lambda _1+\lambda _3<1\), we conclude from (2.7) that \(d(x^*,y^*)=0\), i.e., \(x^*=y^*\). \(\square \)
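A minimal numerical sketch of Theorem 2.3 (the data are our own illustration, not part of the theorem): on \(X=\mathbb{R}\) with \(d(x,y)=|x-y|^2\) (so \(s=2\)), the mapping \(Tx=\frac{x}{4}+1\) satisfies (2.2) with \(\lambda _1=\frac{1}{16}\) and \(\lambda _2=\cdots =\lambda _5=0\), and Picard's iteration converges to the unique fixed point \(x^*=\frac{4}{3}\) from any starting point.

```python
# Toy instance of Theorem 2.3: T x = x/4 + 1 on R with d(x, y) = |x - y|^2,
# so d(Tx, Ty) = d(x, y)/16 and (2.2) holds with lambda_1 = 1/16 < 1.
T = lambda x: x / 4.0 + 1.0

x = 100.0                       # arbitrary starting point
for _ in range(60):
    x = T(x)                    # Picard's iteration x_{n+1} = T x_n

fixed_point = 4.0 / 3.0         # the unique solution of x = x/4 + 1
print(abs(x - fixed_point))     # essentially zero
```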

Corollary 2.4

Let (Xd) be a complete metric space and \(T:X\rightarrow X\) be a mapping such that

$$\begin{aligned} d(Tx,Ty)\le & {} \lambda _1d(x,y)+\lambda _2\frac{d(x,Tx)d(y,Ty)}{1+d(x,y)}+ \lambda _3\frac{d(x,Ty)d(y,Tx)}{1+d(x,y)}\\&+\,\lambda _4\frac{d(x,Tx)d(x,Ty)}{1+d(x,y)}+\lambda _5\frac{d(y,Ty)d(y,Tx)}{1+d(x,y)}, \end{aligned}$$

where \(\lambda _1,\lambda _2,\lambda _3,\lambda _4\) and \(\lambda _5\) are nonnegative constants with \(\lambda _1+\lambda _2+\lambda _3+\lambda _4+\lambda _5<1\). Then T has a unique fixed point in X. Moreover,  for any \(x\in X,\) the iterative sequence \(\{T^nx\}~(n\in \mathbb {N})\) converges to the fixed point.

Proof

Take \(s=1\) in Theorem 2.3, thus the claim holds. \(\square \)

Remark 2.5

Taking \(\lambda _2=\lambda _3=\lambda _4=\lambda _5=0\) in Theorem 2.3 and in Corollary 2.4 reduces them to [6, Corollary 2.3] and to the Banach contraction principle, respectively. From this point of view, our results are genuine generalizations of the previous ones. Moreover, the whole proof of Theorem 2.3 makes no use of Lemma 1.10, since we sidestep the question of whether the b-metric is continuous or discontinuous. By contrast, some previous results strongly rely on the possible discontinuity of the b-metric and hence have to make heavy use of Lemma 1.10 (see [10, 18,19,20,21]).

Theorem 2.6

Under the conditions of Theorem 2.3, if \(2s\lambda _1+2\lambda _3+(s+s^2)(\lambda _4+\lambda _5)<2,\) then Picard’s iteration is T-stable.

Proof

By Theorem 2.3, T has a unique fixed point \(x^*\) in X. Assume that \(\{y_n\}\) is a sequence in X such that \(d(y_{n+1},Ty_n)\rightarrow 0\) as \(n\rightarrow \infty \).

Making full use of (2.2), on the one hand, we have

$$\begin{aligned} d(Ty_n,x^*)= & {} d(Ty_n,Tx^*)\\\le & {} \lambda _1d(y_n,x^*)+\lambda _2\frac{d(y_n,Ty_n)d(x^*,Tx^*)}{1+d(y_n,x^*)} +\lambda _3\frac{d(y_n,Tx^*)d(x^*,Ty_n)}{1+d(y_n,x^*)}\\&+\,\lambda _4\frac{d(y_n,Ty_n)d(y_n,Tx^*)}{1+d(y_n,x^*)} +\lambda _5\frac{d(x^*,Tx^*)d(x^*,Ty_n)}{1+d(y_n,x^*)}\\\le & {} \lambda _1d(y_n,x^*)+\lambda _3d(x^*,Ty_n)+\lambda _4d(y_n,Ty_n)\\\le & {} (\lambda _1+s\lambda _4)d(y_n,x^*)+(\lambda _3+s\lambda _4)d(x^*,Ty_n), \end{aligned}$$

which means that

$$\begin{aligned} (1-\lambda _3-s\lambda _4)d(x^*,Ty_n)\le (\lambda _1+s\lambda _4)d(y_n,x^*). \end{aligned}$$
(2.8)

On the other hand, we have

$$\begin{aligned} d(Ty_n,x^*)= & {} d(Tx^*,Ty_n)\\\le & {} \lambda _1d(x^*,y_n)+\lambda _2\frac{d(x^*,Tx^*)d(y_n,Ty_n)}{1+d(x^*,y_n)} +\lambda _3\frac{d(x^*,Ty_n)d(y_n,Tx^*)}{1+d(x^*,y_n)}\\&+\,\lambda _4\frac{d(x^*,Tx^*)d(x^*,Ty_n)}{1+d(x^*,y_n)}+ \lambda _5\frac{d(y_n,Ty_n)d(y_n,Tx^*)}{1+d(x^*,y_n)}\\\le & {} \lambda _1d(x^*,y_n)+\lambda _3d(x^*,Ty_n)+\lambda _5d(y_n,Ty_n)\\\le & {} (\lambda _1+s\lambda _5)d(y_n,x^*)+(\lambda _3+s\lambda _5)d(x^*,Ty_n), \end{aligned}$$

which means that

$$\begin{aligned} (1-\lambda _3-s\lambda _5)d(x^*,Ty_n)\le (\lambda _1+s\lambda _5)d(y_n,x^*). \end{aligned}$$
(2.9)

Combining (2.8) and (2.9) yields

$$\begin{aligned} (2-2\lambda _3-s\lambda _4-s\lambda _5)d(x^*,Ty_n)\le (2\lambda _1+s\lambda _4+s\lambda _5)d(y_n,x^*). \end{aligned}$$
(2.10)

As a result, we have

$$\begin{aligned} d(x^*,Ty_n)\le \frac{2\lambda _1+s\lambda _4+s\lambda _5}{2-2\lambda _3-s\lambda _4-s\lambda _5}d(y_n,x^*). \end{aligned}$$

Denote \(h=\frac{s(2\lambda _1+s\lambda _4+s\lambda _5)}{2-2\lambda _3-s\lambda _4-s\lambda _5}\). It follows immediately from \(2s\lambda _1+2\lambda _3+(s+s^2)(\lambda _4+\lambda _5)<2\) that \(0\le h<1\). Let \(a_n=d(y_n,x^*)\) and \(c_n=sd(y_{n+1},Ty_n)\). Then, by (2.10),

$$\begin{aligned} a_{n+1}=d(y_{n+1},x^*)\le s[d(y_{n+1},Ty_n)+d(Ty_n,x^*)]\le ha_n+c_n. \end{aligned}$$

Thus, by Lemma 1.9, it leads to \(a_n=d(y_n,x^*)\rightarrow 0~(n\rightarrow \infty )\), that is, \(y_n\rightarrow x^*~(n\rightarrow \infty )\). As a consequence, Picard’s iteration is T-stable. \(\square \)
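T-stability can be observed numerically with the same toy data as before (\(Tx=\frac{x}{4}+1\) on \(\mathbb{R}\), fixed point \(x^*=\frac{4}{3}\); the perturbation \(e_n=\frac{1}{n}\) is our choice): perturbing Picard's iteration by an error tending to 0 still yields convergence to \(x^*\).

```python
# Perturbed Picard iteration y_{n+1} = T y_n + e_n with e_n = 1/n -> 0.
# Since d(y_{n+1}, T y_n) = e_n^2 -> 0, Theorem 2.6 predicts y_n -> x*.
T = lambda x: x / 4.0 + 1.0
x_star = 4.0 / 3.0

y = 50.0                         # arbitrary starting point
for n in range(1, 5001):
    y = T(y) + 1.0 / n           # Picard step plus a vanishing perturbation
print(abs(y - x_star))           # small
```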

Corollary 2.7

Under the conditions of Corollary 2.4, Picard’s iteration is T-stable.

Proof

Let \(s=1\) in Theorem 2.6; then \(2s\lambda _1+2\lambda _3+(s+s^2)(\lambda _4+\lambda _5)<2\) becomes \(\lambda _1+\lambda _3+\lambda _4+\lambda _5<1\). Since Corollary 2.4 is the special case \(s=1\) of Theorem 2.3, the claim follows from Theorem 2.6. \(\square \)

It is clear that if T has a fixed point \(x^*\), then \(x^*\) is also a fixed point of \(T^n\) for each \(n\in \mathbb {N}\); it is well known that the converse is not true. If a map T satisfies \(F(T)=F(T^n)\) for each \(n\in \mathbb {N}\), then it is said to have the P property (see [7, 8]). The following results are generalizations of the corresponding results in metric spaces.

Theorem 2.8

Let (X, d) be a b-metric space with coefficient \(s\ge 1\). Let \(T:X\rightarrow X\) be a mapping such that \(F(T)\ne \emptyset \) and that

$$\begin{aligned} d(Tx,T^2x) \le \lambda d(x,Tx) \end{aligned}$$
(2.11)

for all \(x\in X,\) where \(0\le \lambda <1\) is a constant. Then T has the P property.

Proof

We always assume that \(n>1\), since the statement for \(n=1\) is trivial. Let \(z\in F(T^n)\). By the hypotheses, it is clear that

$$\begin{aligned} d(z,Tz)= & {} d( TT^{n-1}z,T^2T^{n-1}z) \le \lambda d( T^{n-1}z,T^nz) =\lambda d( TT^{n-2}z,T^2T^{n-2}z)\\\le & {} \lambda ^2d( T^{n-2}z,T^{n-1}z) \le \cdots \le \lambda ^nd(z,Tz). \end{aligned}$$

Since \(0\le \lambda <1\) implies \(\lambda ^n<1\), this forces \(d(z,Tz)=0\), that is, \(Tz=z\). \(\square \)

Theorem 2.9

Under the conditions of Theorem 2.3, T has the P property.

Proof

We have to prove that the mapping T satisfies (2.11). In fact, for any \(x\in X\), for one thing, we have

$$\begin{aligned} d(Tx,T^2x)= & {} d(Tx,TTx)\\\le & {} \lambda _1d(x,Tx)+\lambda _2\frac{d(x,Tx)d(Tx,TTx)}{1+d(x,Tx)}+\lambda _3 \frac{d(x,TTx)d(Tx,Tx)}{1+d(x,Tx)}\\&+\,\lambda _4\frac{d(x,Tx)d(x,TTx)}{1+d(x,Tx)}+ \lambda _5\frac{d(Tx,TTx)d(Tx,Tx)}{1+d(x,Tx)}\\\le & {} \lambda _1d(x,Tx)+\lambda _2d(Tx,T^2x)+\lambda _4d(x,T^2x)\\\le & {} (\lambda _1+s\lambda _4)d(x,Tx)+(\lambda _2+s\lambda _4)d(Tx,T^2x), \end{aligned}$$

which implies that

$$\begin{aligned} (1-\lambda _2-s\lambda _4)d(Tx,T^2x)\le (\lambda _1+s\lambda _4)d(x,Tx). \end{aligned}$$
(2.12)

For another thing, we have

$$\begin{aligned} d(Tx,T^2x)= & {} d(TTx,Tx)\\\le & {} \lambda _1d(Tx,x)+\lambda _2\frac{d(Tx,TTx)d(x,Tx)}{1+d(Tx,x)}+\lambda _3 \frac{d(Tx,Tx)d(x,TTx)}{1+d(Tx,x)}\\&+\,\lambda _4\frac{d(Tx,TTx)d(Tx,Tx)}{1+d(Tx,x)}+ \lambda _5\frac{d(x,Tx)d(x,TTx)}{1+d(Tx,x)}\\\le & {} \lambda _1d(Tx,x)+\lambda _2d(Tx,T^2x)+\lambda _5d(x,T^2x)\\\le & {} (\lambda _1+s\lambda _5)d(Tx,x)+(\lambda _2+s\lambda _5)d(Tx,T^2x), \end{aligned}$$

which establishes that

$$\begin{aligned} (1-\lambda _2-s\lambda _5)d(Tx,T^2x)\le (\lambda _1+s\lambda _5)d(x,Tx). \end{aligned}$$
(2.13)

On adding up (2.12) and (2.13), it follows that

$$\begin{aligned} (2-2\lambda _2-s\lambda _4-s\lambda _5)d(Tx,T^2x) \le (2\lambda _1+s\lambda _4+s\lambda _5)d(x,Tx). \end{aligned}$$

This implies that

$$\begin{aligned} d(Tx,T^2x)\le \frac{2\lambda _1+s\lambda _4+s\lambda _5}{2-2\lambda _2-s\lambda _4-s\lambda _5}d(x,Tx). \end{aligned}$$

Denote \(\lambda =\frac{2\lambda _1+s\lambda _4+s\lambda _5}{2-2\lambda _2-s\lambda _4-s\lambda _5}\). Since \(\lambda _1+\lambda _2+\lambda _3+s\lambda _4+s\lambda _5<1\), we have \(\lambda <1\). Accordingly, (2.11) is satisfied. Consequently, by Theorem 2.8, T has the P property. \(\square \)

Corollary 2.10

Under the conditions of Corollary 2.4, T has the P property.

Proof

Since Corollary 2.4 is the special case \(s=1\) of Theorem 2.3, the result follows from Theorem 2.9. \(\square \)

Example 2.11

Let \(X=[0,1]\) and define a mapping \(d:X\times X\rightarrow \mathbb {R}^+\) by \(d(x,y)=|x-y|^p~(p\ge 1)\). Taking account of (1.4), we claim that (X, d) is a b-complete b-metric space with coefficient \(s=2^{p-1}\). Define a mapping \(T:X\rightarrow X\) by \(Tx=\mathrm{e}^{x-\lambda }\), where \(\lambda >1+\ln 2\) is a constant. Then, by the mean value theorem, for any \(x,y\in X\) with \(x\ne y\), there exists some real number \(\xi \) strictly between x and y such that

$$\begin{aligned} d(Tx,Ty)= & {} |\mathrm{e}^{x-\lambda }-\mathrm{e}^{y-\lambda }|^p=\left( \mathrm{e}^{\xi -\lambda }\right) ^p|x-y|^p\le \left( \mathrm{e}^{1-\lambda }\right) ^pd(x,y)\\\le & {} \lambda _1d(x,y)+\lambda _2\frac{d(x,Tx)d(y,Ty)}{1+d(x,y)}+\lambda _3\frac{d(x,Ty)d(y,Tx)}{1+d(x,y)}\\&+\,\lambda _4\frac{d(x,Tx)d(x,Ty)}{1+d(x,y)}+\lambda _5\frac{d(y,Ty)d(y,Tx)}{1+d(x,y)}, \end{aligned}$$

where \(\lambda _1=(\mathrm{e}^{1-\lambda })^p\), \(\lambda _2=\lambda _3=\lambda _4=\lambda _5=0\). Obviously, \(\lambda _1+\lambda _2+\lambda _3+s\lambda _4+s\lambda _5<1\). Hence, all the conditions of Theorem 2.3 are satisfied and T has a unique fixed point in X. See the following Fig. 1. The abscissa of point A, i.e., \(x_0\), is the fixed point.

Moreover, since \(\lambda >1+\ln 2\), we have \(\lambda _1=\left( \mathrm{e}^{1-\lambda }\right) ^p<2^{-p}<2^{1-p}=\frac{1}{s}\), so \(2s\lambda _1+2\lambda _3+(s+s^2)(\lambda _4+\lambda _5)<2\). Accordingly, all conditions of Theorem 2.6 are satisfied, and hence Picard's iteration is T-stable. Indeed, taking \(y_n=\frac{n}{n+1}x_0\in X\), it follows that

$$\begin{aligned} d(y_{n+1},Ty_n)=\left| \frac{n+1}{n+2}x_0-\mathrm{e}^{\frac{n}{n+1}x_0- \lambda }\right| ^p\rightarrow |x_0-\mathrm{e}^{x_0-\lambda }|^p=0\quad (n\rightarrow \infty ). \end{aligned}$$

Since \(y_n=\frac{n}{n+1}x_0\rightarrow x_0~(n\rightarrow \infty )\), this agrees with the conclusion of Theorem 2.6.
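The fixed point \(x_0\) of Example 2.11 can be approximated by Picard's iteration; the sample value \(\lambda =2>1+\ln 2\) below is our choice:

```python
import math

# Picard iteration for T x = exp(x - lam) on [0, 1] with lam = 2 > 1 + ln 2.
# The fixed point x_0 solves x = exp(x - 2).
lam = 2.0
T = lambda x: math.exp(x - lam)

x = 1.0
for _ in range(200):
    x = T(x)

print(x, abs(x - T(x)))   # x is approximately 0.1586, residual essentially 0
```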

Fig. 1 The formation of the fixed point of the given mapping

3 Applications

In this section, firstly, we apply Theorem 2.3 to the first-order initial value problem

$$\begin{aligned} {\left\{ \begin{array}{ll}x'(t)=f(t,x(t)),\\ x(t_0)=x_0, \end{array}\right. } \end{aligned}$$
(3.1)

where \(f:[t_0-(\frac{1}{k})^{r-1},t_0+(\frac{1}{k})^{r-1}]\times [x_0-\frac{k}{2},x_0+\frac{k}{2}]\rightarrow \mathbb {R}\) is a continuous function and \(k>1\), \(r>2\), \(t_0\), \(x_0\) are real constants.

Theorem 3.1

Consider the initial value problem (3.1) and suppose that

  1. (i)

    f satisfies the Lipschitz condition,  i.e., 

    $$\begin{aligned} |f(t,x(t))-f(t,y(t))|\le k|x(t)-y(t)| \end{aligned}$$
    (3.2)

    for all \((t,x),(t,y)\in R,\) where \(R=\{(t,x):|t-t_0|\le (\frac{1}{k})^{r-1},|x-x_0|\le \frac{k}{2}\};\)

  2. (ii)

    f is bounded on R,  i.e., 

    $$\begin{aligned} |f(t,x)|\le \frac{k^r}{2}. \end{aligned}$$
    (3.3)

Then the initial value problem (3.1) has a unique solution on the interval \(I=[t_0-(\frac{1}{k})^{r-1},t_0+(\frac{1}{k})^{r-1}]\). Further,  the solution is given by

$$\begin{aligned} x(t)=x_0+\lim _{n\rightarrow \infty }\int _{t_0}^t f(\tau ,x_n(\tau ))\mathrm{d}\tau , \end{aligned}$$

where

$$\begin{aligned} x_0(t)=x_0,\quad x_n(t)=x_0+\int _{t_0}^t f(\tau ,x_{n-1}(\tau )) \mathrm{d}\tau ~(n=1,2,\ldots ). \end{aligned}$$

Proof

Let C(I) be the set of all continuous functions on I. Let \(X=\{x\in C(I):|x(t)-x_0|\le \frac{k}{2}\}\). Define a mapping \(d:X\times X\rightarrow \mathbb {R}^+\) by

$$\begin{aligned} d(x,y)=\max _{t\in I}|x(t)-y(t)|^2. \end{aligned}$$
(3.4)

Clearly, (C(I), d) is a b-complete b-metric space with coefficient \(s=2\). Since X is a closed subspace of C(I), \((X,d)\) is also a b-complete b-metric space with coefficient \(s=2\).

Integrating (3.1), we have

$$\begin{aligned} x(t)=x_0+\int _{t_0}^t f(\tau ,x(\tau ))\mathrm{d}\tau . \end{aligned}$$
(3.5)

As a consequence, finding the solution of (3.1) is equivalent to finding the fixed point of mapping \(T:X\rightarrow X\) defined by

$$\begin{aligned} Tx(t)=x_0+\int _{t_0}^t f(\tau ,x(\tau ))\mathrm{d}\tau . \end{aligned}$$
(3.6)

Note that if \(\tau \in I\) then \(|\tau -t_0|\le (\frac{1}{k})^{r-1}\), and \(x\in X\) means \(|x(\tau )-x_0|\le \frac{k}{2}\), so \((\tau ,x(\tau ))\in R\). Since f is continuous on R, the integral in (3.6) exists and T is well-defined for all \(x\in X\).

We claim that T is a self-mapping on X. Indeed, by (3.3) and (3.6), it follows that

$$\begin{aligned} |Tx(t)-x_0|= & {} \left| \int _{t_0}^t f(\tau ,x(\tau ))\mathrm{d}\tau \right| \\\le & {} \int _{t_0}^t |f(\tau ,x(\tau ))|\mathrm{d}\tau \\\le & {} \frac{k^r}{2}|t-t_0|\\\le & {} \frac{k^r}{2}\left( \frac{1}{k}\right) ^{r-1}\\= & {} \frac{k}{2}. \end{aligned}$$

Next, by using (3.2), (3.4) and (3.6), we get

$$\begin{aligned} |Tx(t)-Ty(t)|^2= & {} \left| \int _{t_0}^t [f(\tau ,x(\tau )) -f(\tau ,y(\tau ))]\mathrm{d}\tau \right| ^2\\\le & {} \left[ \int _{t_0}^t |f(\tau ,x(\tau ))-f(\tau ,y(\tau ))|\mathrm{d}\tau \right] ^2\\\le & {} \left[ \int _{t_0}^t k|x(\tau )-y(\tau )|\mathrm{d}\tau \right] ^2\\\le & {} k^2\left[ \int _{t_0}^t \max _{\tau \in I}|x(\tau )-y(\tau )|\mathrm{d}\tau \right] ^2\\= & {} k^2\max _{\tau \in I}|x(\tau )-y(\tau )|^2|t-t_0|^2\\\le & {} k^2\max _{\tau \in I}|x(\tau )-y(\tau )|^2\left( \frac{1}{k}\right) ^{2r-2}\\= & {} \left( \frac{1}{k}\right) ^{2r-4}\max _{\tau \in I}|x(\tau )-y(\tau )|^2\\= & {} \left( \frac{1}{k}\right) ^{2r-4}d(x,y), \end{aligned}$$

which establishes that

$$\begin{aligned} d(Tx,Ty)\le & {} \left( \frac{1}{k}\right) ^{2r-4}d(x,y)\\\le & {} \lambda _1d(x,y)+\lambda _2\frac{d(x,Tx)d(y,Ty)}{1+d(x,y)}+\lambda _3\frac{d(x,Ty)d(y,Tx)}{1+d(x,y)}\\&+\,\lambda _4\frac{d(x,Tx)d(x,Ty)}{1+d(x,y)}+\lambda _5\frac{d(y,Ty)d(y,Tx)}{1+d(x,y)}, \end{aligned}$$

where \(\lambda _1=\left( \frac{1}{k}\right) ^{2r-4}\) and \(\lambda _2=\lambda _3=\lambda _4=\lambda _5=0\). Since \(k>1\) and \(r>2\), it follows that \(\lambda _1+\lambda _2+\lambda _3+s\lambda _4+s\lambda _5<1\).

Owing to the above statement, all conditions of Theorem 2.3 are satisfied. Hence, T has a unique fixed point. That is to say, there exists a unique solution to (3.1).

Now, by the method of successive approximations, we find the unique solution of (3.1). To this end, put \(x_0(t)=x_0\) and

$$\begin{aligned} x_n(t)=x_0+\int _{t_0}^t f(\tau ,x_{n-1}(\tau ))\mathrm{d}\tau . \end{aligned}$$
(3.7)

It is easy to see that

$$\begin{aligned} x_{n-1}(t)=x_0+\int _{t_0}^t f(\tau ,x_{n-2}(\tau ))\mathrm{d}\tau . \end{aligned}$$
(3.8)

Combining (3.7) and (3.8), we deduce

$$\begin{aligned} x_n(t)-x_{n-1}(t)=\int _{t_0}^t [f(\tau ,x_{n-1}(\tau ))-f(\tau ,x_{n-2}(\tau ))]\mathrm{d}\tau . \end{aligned}$$
(3.9)

Letting

$$\begin{aligned} y_n(t)=x_n(t)-x_{n-1}(t),\quad y_0(t)=x_0, \end{aligned}$$
(3.10)

we get

$$\begin{aligned} x_n(t)=\sum _{i=0}^ny_i(t). \end{aligned}$$

Using (3.2), (3.9) and (3.10), we have

$$\begin{aligned} |y_n(t)|\le k\int _{t_0}^t|y_{n-1}(\tau )|\mathrm{d}\tau . \end{aligned}$$

Further, by induction on n, we have

$$\begin{aligned} |y_n(t)|\le \frac{|x_0|k^n}{n!}|t-t_0|^n. \end{aligned}$$
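This bound follows by induction on n: the case \(n=1\) uses \(|y_0(t)|=|x_0|\), and if the bound holds for \(n-1\), the previous inequality yields

$$\begin{aligned} |y_n(t)|\le k\int _{t_0}^t\frac{|x_0|k^{n-1}}{(n-1)!}|\tau -t_0|^{n-1}\mathrm{d}\tau =\frac{|x_0|k^n}{n!}|t-t_0|^n. \end{aligned}$$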

Since the series \(\sum _{n=1}^{\infty }\frac{|x_0|k^n}{n!}|t-t_0|^n\) converges on I, the series \(\sum _{n=1}^{\infty }y_n(t)\) converges to some function x(t); that is, \(x_n(t)=\sum _{i=0}^ny_i(t)\) converges to x(t) as \(n\rightarrow \infty \).

In the following, we show that \(x(t)=\sum _{n=0}^{\infty }y_n(t)\) is a solution of (3.5), which implies that it is also a solution of (3.1). To this end, write

$$\begin{aligned} x(t)=x_n(t)+\triangle _n(t). \end{aligned}$$
(3.11)

Clearly, \(\lim _{n\rightarrow \infty }|\triangle _n(t)|=0\). Combining (3.7) and (3.11), we get

$$\begin{aligned} x(t)-\triangle _n(t)=x_0+\int _{t_0}^t f(\tau ,x(\tau )-\triangle _{n-1}(\tau ))\mathrm{d}\tau . \end{aligned}$$

Thus, we arrive at

$$\begin{aligned}&x(t)-x_0-\int _{t_0}^t f(\tau ,x(\tau ))\mathrm{d}\tau \nonumber \\&\quad =\triangle _n(t)+\int _{t_0}^t [f(\tau ,x(\tau )-\triangle _{n-1}(\tau )) -f(\tau ,x(\tau ))]\mathrm{d}\tau .\nonumber \\ \end{aligned}$$
(3.12)

By (3.2) and (3.12) it yields that

$$\begin{aligned}&\left| x(t)-x_0-\int _{t_0}^t f(\tau ,x(\tau ))\mathrm{d}\tau \right| \nonumber \\&\quad =\left| \triangle _n(t)+\int _{t_0}^t [f(\tau ,x(\tau )-\triangle _{n-1}(\tau )) -f(\tau ,x(\tau ))]\mathrm{d}\tau \right| \nonumber \\&\quad \le |\triangle _n(t)|+\int _{t_0}^t |f(\tau ,x(\tau )-\triangle _{n-1}(\tau )) -f(\tau ,x(\tau ))|\mathrm{d}\tau \nonumber \\&\quad \le |\triangle _n(t)|+\int _{t_0}^t k|\triangle _{n-1}(\tau )|\mathrm{d}\tau \nonumber \\&\quad \le |\triangle _n(t)|+k\left( \frac{1}{k}\right) ^{r-1}\max _{\tau \in I}|\triangle _{n-1}(\tau )|. \end{aligned}$$
(3.13)

Taking the limit as \(n\rightarrow \infty \) on both sides of (3.13) and noting that \(\lim _{n\rightarrow \infty }|\triangle _n(t)|=0\), we see that

$$\begin{aligned} x(t)-x_0-\int _{t_0}^t f(\tau ,x(\tau ))\mathrm{d}\tau =0. \end{aligned}$$

Hence, we deduce

$$\begin{aligned} x(t)=x_0+\int _{t_0}^t f(\tau ,x(\tau ))\mathrm{d}\tau . \end{aligned}$$

Then \(x(t)=\sum _{n=0}^{\infty }y_n(t)\) is the solution of (3.5). In other words, \(x(t)=\sum _{n=0}^{\infty }y_n(t)\) is the solution of (3.1).

Finally, we look for the mathematical expression of the solution of (3.1). To this end, taking advantage of (3.9) and (3.10), we obtain

$$\begin{aligned} x(t)= & {} \sum \limits _{n=0}^{\infty }y_n(t)\nonumber \\= & {} y_0(t)+y_1(t)+\sum \limits _{n=2}^{\infty }y_n(t)\nonumber \\= & {} y_0(t)+y_1(t) +\sum \limits _{n=2}^{\infty }[x_n(t)-x_{n-1}(t)]\nonumber \\= & {} x_0+x_1(t)-x_0+\sum \limits _{n=2}^{\infty } \int _{t_0}^t [f(\tau ,x_{n-1}(\tau ))-f(\tau ,x_{n-2}(\tau ))]\mathrm{d}\tau \nonumber \\= & {} x_1(t)+\sum \limits _{n=2}^{\infty } \int _{t_0}^t [f(\tau ,x_{n-1}(\tau ))-f(\tau ,x_{n-2}(\tau ))]\mathrm{d}\tau \nonumber \\= & {} x_0+\int _{t_0}^tf(\tau ,x_0)\mathrm{d}\tau +\int _{t_0}^t \sum \limits _{n=2}^{\infty }[f(\tau ,x_{n-1}(\tau )) -f(\tau ,x_{n-2}(\tau ))]\mathrm{d}\tau \nonumber \\= & {} x_0+\int _{t_0}^tf(\tau ,x_0)\mathrm{d}\tau +\int _{t_0}^t \lim _{n\rightarrow \infty }f(\tau ,x_{n-1}(\tau ))\mathrm{d}\tau - \int _{t_0}^tf(\tau ,x_0)\mathrm{d}\tau \nonumber \\= & {} x_0+\lim _{n\rightarrow \infty }\int _{t_0}^t f(\tau ,x_n(\tau ))\mathrm{d}\tau , \end{aligned}$$
(3.14)

where \(x_n(t)~(n=1,2,\ldots )\) is written by (3.7). \(\square \)

Remark 3.2

In the proof of Theorem 3.1, (3.14) can also be obtained from the continuity of f. Indeed, since f is continuous and \(x(t)=\lim _{n\rightarrow \infty }x_n(t)\) is the solution of (3.1), by (3.5) we have

$$\begin{aligned} x(t)= & {} x_0+\int _{t_0}^t f(\tau ,x(\tau ))\mathrm{d}\tau \\= & {} x_0+\int _{t_0}^t f\left( \tau ,\lim _{n\rightarrow \infty }x_n(\tau )\right) \mathrm{d}\tau \\= & {} x_0+\lim _{n\rightarrow \infty }\int _{t_0}^t f(\tau ,x_n(\tau ))\mathrm{d}\tau . \end{aligned}$$
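The scheme (3.7) can also be carried out numerically. The following sketch is illustrative only: the test problem \(f(t,x)=x\), \(t_0=0\), \(x_0=1\) (exact solution \(\mathrm{e}^t\)) is our own choice, and each integral is approximated by the trapezoidal rule on a uniform grid.

```python
import math

def picard_ode(f, t0, x0, t_end, n_grid=200, n_iter=25):
    """Successive approximations x_n(t) = x0 + int_{t0}^t f(tau, x_{n-1}(tau)) dtau,
    with each integral computed by the cumulative trapezoidal rule."""
    h = (t_end - t0) / n_grid
    ts = [t0 + i * h for i in range(n_grid + 1)]
    x = [x0] * (n_grid + 1)                  # x_0(t) = x0
    for _ in range(n_iter):
        g = [f(t, xv) for t, xv in zip(ts, x)]
        new, acc = [x0], 0.0
        for i in range(1, n_grid + 1):
            acc += 0.5 * h * (g[i - 1] + g[i])
            new.append(x0 + acc)
        x = new
    return ts, x

ts, xs = picard_ode(lambda t, x: x, 0.0, 1.0, 0.5)
print(abs(xs[-1] - math.exp(0.5)))  # only discretization error remains
```

After a few iterations the Picard truncation error is negligible (it decays like \(k^n|t-t_0|^n/n!\)), so the remaining error is that of the quadrature rule.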

Secondly, we look for the general solution of an nth-order nonhomogeneous differential equation. To this end, consider the following nth-order initial value problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{{\mathrm {d}}^ny}{{\mathrm {d}}x^n}+a_{n-1}(x)\dfrac{{\mathrm {d}}^{n-1}y}{{\mathrm {d}}x^{n-1}} +a_{n-2}(x)\dfrac{{\mathrm {d}}^{n-2}y}{{\mathrm {d}}x^{n-2}}+\cdots \\ \qquad +\,a_1(x) \dfrac{{\mathrm {d}}y}{{\mathrm {d}}x}+a_0(x)y=f(x),\quad x\in [0,x_0],\\ y(0)=C_0,~y'(0)=C_1,~y''(0)=C_2,\,\ldots ,~y^{(n-1)}(0)=C_{n-1}, \end{array}\right. } \end{aligned}$$
(3.15)

where \(a_{n-1}(x),a_{n-2}(x),\ldots , a_1(x), a_0(x)\in C([0,x_0])\) (the set of all continuous real functions defined on \([0,x_0]\)) are given, and \(C_0,C_1,C_2,\ldots ,C_{n-1}\) are constants.

Theorem 3.3

Consider initial value problem (3.15), and set

$$\begin{aligned} M=\max \limits _{0\le t,x\le x_0}\left| \sum _{k=0}^{n-1}\frac{a_k(x)}{(n-1-k)!}(x-t)^{n-1-k}\right| . \end{aligned}$$

If \({x_0}M<1,\) then (3.15) has a unique solution in \(C([0,x_0])\). Further,  the solution is given by

$$\begin{aligned} y=\frac{1}{(n-1)!}\sum \limits _{i=0}^\infty \int _0^x(x-t)^{n-1}u_i(t){\mathrm {d}}t+\sum _{k=0}^{n-1}\frac{C_k}{k!}x^k, \end{aligned}$$

where

$$\begin{aligned} u_0(x)= & {} f(x)-\sum _{j=0}^{n-1}\sum _{k=0}^{n-j-1}a_j(x)\frac{C_{k+j}}{k!}x^k,\\ u_i(x)= & {} -\sum _{k=0}^{n-1}\int _0^x\frac{a_k(x)}{(n-1-k)!} (x-t)^{n-1-k}u_{i-1}(t){\mathrm {d}}t,\quad i=1,2,\ldots \end{aligned}$$

Proof

Put

$$\begin{aligned} u(x)=\dfrac{{\mathrm {d}}^ny}{{\mathrm {d}}x^n},\quad p(x)= \dfrac{{\mathrm {d}}^{n-1}y}{{\mathrm {d}}x^{n-1}},\quad q(x)=\dfrac{{\mathrm {d}}^{n-2}y}{{\mathrm {d}}x^{n-2}},\quad r(x)=\dfrac{{\mathrm {d}}^{n-3}y}{{\mathrm {d}}x^{n-3}}, \end{aligned}$$

then \(u(x),p(x),q(x),r(x)\in C([0,x_0])\). Considering the initial conditions, we get that

$$\begin{aligned} \dfrac{{\mathrm {d}}^{n-1}y}{{\mathrm {d}}x^{n-1}}=\int _0^xu(t){\mathrm {d}}t+C_{n-1}, \end{aligned}$$
(3.16)

and

$$\begin{aligned} \dfrac{{\mathrm {d}}^{n-2}y}{{\mathrm {d}}x^{n-2}}= & {} \int _0^xp(s){\mathrm {d}}s+C_{n-2}\nonumber \\= & {} \int _0^x\left[ \int _0^su(t) {\mathrm {d}}t+C_{n-1}\right] {\mathrm {d}}s+C_{n-2}\nonumber \\= & {} \int _0^x\int _0^su(t) {\mathrm {d}}t\,{\mathrm {d}}s+C_{n-1}x+C_{n-2}\nonumber \\= & {} \int _0^x{\mathrm {d}}t\int _t^xu(t) {\mathrm {d}}s+C_{n-1}x+C_{n-2}\nonumber \\= & {} \int _0^x(x-t)u(t){\mathrm {d}}t+C_{n-1}x+C_{n-2}, \end{aligned}$$
(3.17)

and

$$\begin{aligned} \dfrac{{\mathrm {d}}^{n-3}y}{{\mathrm {d}}x^{n-3}}= & {} \int _0^xq(s){\mathrm {d}}s+C_{n-3}\nonumber \\= & {} \int _0^x\left[ \int _0^s(s-t)u(t){\mathrm {d}}t+C_{n-1}s+C_{n-2}\right] {\mathrm {d}}s+C_{n-3}\nonumber \\= & {} \int _0^x\int _0^s(s-t)u(t) {\mathrm {d}}t\,{\mathrm {d}}s+\int _0^x(C_{n-1}s+C_{n-2}){\mathrm {d}}s+C_{n-3}\nonumber \\= & {} \int _0^x{\mathrm {d}}t\int _t^x(s-t)u(t) {\mathrm {d}}s+\frac{1}{2}C_{n-1}x^2+C_{n-2}x+C_{n-3}\nonumber \\= & {} \frac{1}{2}\int _0^x(x-t)^2u(t){\mathrm {d}}t +\frac{1}{2}C_{n-1}x^2+C_{n-2}x+C_{n-3}, \end{aligned}$$
(3.18)

and

$$\begin{aligned} \dfrac{{\mathrm {d}}^{n-4}y}{{\mathrm {d}}x^{n-4}}= & {} \int _0^xr(s){\mathrm {d}}s+C_{n-4}\nonumber \\= & {} \int _0^x\left[ \frac{1}{2}\int _0^s(s-t)^2u(t){\mathrm {d}}t+\frac{1}{2}C_{n-1}s^2+C_{n-2}s+C_{n-3}\right] {\mathrm {d}}s\nonumber \\&+\,C_{n-4}\nonumber \\= & {} \frac{1}{2}\int _0^x\int _0^s(s-t)^2u(t) {\mathrm {d}}t\,{\mathrm {d}}s+\int _0^x\left( \frac{1}{2}C_{n-1}s^2+C_{n-2}s+C_{n-3}\right) {\mathrm {d}}s\nonumber \\&+\,C_{n-4}\nonumber \\= & {} \frac{1}{2}\int _0^x{\mathrm {d}}t\int _t^x(s-t)^2u(t) {\mathrm {d}}s+\frac{1}{6}C_{n-1}x^3+\frac{1}{2}C_{n-2}x^2+C_{n-3}x+C_{n-4}\nonumber \\= & {} \frac{1}{6} \int _0^x(x-t)^3u(t){\mathrm {d}}t+\frac{1}{6}C_{n-1}x^3 +\frac{1}{2}C_{n-2}x^2+C_{n-3}x+C_{n-4}.\nonumber \\ \end{aligned}$$
(3.19)

By mathematical induction, we arrive at

$$\begin{aligned} \dfrac{{\mathrm {d}}y}{{\mathrm {d}}x}= & {} \frac{1}{(n-2)!}\int _0^x(x-t)^{n-2}u(t){\mathrm {d}}t +\frac{1}{(n-2)!}C_{n-1}x^{n-2}+\frac{1}{(n-3)!}C_{n-2}x^{n-3} \nonumber \\&+\cdots +\frac{1}{6}C_4x^3+\frac{1}{2}C_3x^2+C_2x+C_1, \end{aligned}$$
(3.20)

and

$$\begin{aligned} y= & {} \frac{1}{(n-1)!}\int _0^x(x-t)^{n-1}u(t){\mathrm {d}}t+\frac{1}{(n-1)!}C_{n-1}x^{n-1}+\frac{1}{(n-2)!}C_{n-2}x^{n-2} \nonumber \\&+\cdots +\frac{1}{6}C_3x^3+\frac{1}{2}C_2x^2+C_1x+C_0. \end{aligned}$$
(3.21)
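The closed forms (3.16)–(3.21) are instances of Cauchy's formula for repeated integration. A quick numerical sanity check (illustrative only; the integrand \(u(t)=\cos t\) is an arbitrary choice) compares the iterated double integral with its single-integral form \(\int _0^x(x-t)u(t)\,\mathrm{d}t\):

```python
import math

def repeated_vs_cauchy(u, x, n_grid=2000):
    """Compare the iterated integral int_0^x int_0^s u(t) dt ds with
    Cauchy's single-integral form int_0^x (x - t) u(t) dt (trapezoidal rule)."""
    h = x / n_grid
    ts = [i * h for i in range(n_grid + 1)]
    U, acc = [0.0], 0.0                      # U(s) = int_0^s u(t) dt
    for i in range(1, n_grid + 1):
        acc += 0.5 * h * (u(ts[i - 1]) + u(ts[i]))
        U.append(acc)
    iterated = sum(0.5 * h * (U[i - 1] + U[i]) for i in range(1, n_grid + 1))
    cauchy = sum(0.5 * h * ((x - ts[i - 1]) * u(ts[i - 1])
                            + (x - ts[i]) * u(ts[i])) for i in range(1, n_grid + 1))
    return iterated, cauchy

dbl, sgl = repeated_vs_cauchy(math.cos, 1.0)
print(abs(dbl - sgl))  # both approximate 1 - cos(1)
```

The same agreement holds for the higher-order cases, with the factor \((x-t)^{k}/k!\) replacing \(x-t\).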

Substituting (3.16)–(3.21) into (3.15), we have

$$\begin{aligned}&u(x)+a_{n-1}(x)\left[ \int _0^xu(t){\mathrm {d}}t+C_{n-1}\right] \\&\qquad +\,a_{n-2}(x)\left[ \int _0^x(x-t)u(t){\mathrm {d}}t+C_{n-1}x+C_{n-2}\right] \\&\qquad +\,a_{n-3}(x)\left[ \frac{1}{2}\int _0^x(x-t)^2u(t){\mathrm {d}}t+\frac{1}{2}C_{n-1}x^2+C_{n-2}x+C_{n-3}\right] \\&\qquad +\,a_{n-4}(x)\left[ \frac{1}{6}\int _0^x(x-t)^3u(t){\mathrm {d}}t+\frac{1}{6}C_{n-1}x^3 +\frac{1}{2}C_{n-2}x^2\right. \\&\qquad \left. +\,C_{n-3}x+C_{n-4}\right] \\&\qquad +\,\cdots \\&\qquad +\,a_1(x)\left[ \frac{1}{(n-2)!}\int _0^x(x-t)^{n-2}u(t){\mathrm {d}}t+ \frac{1}{(n-2)!}C_{n-1}x^{n-2}\right. \\&\qquad \left. +\,\frac{1}{(n-3)!}C_{n-2}x^{n-3}+\cdots +\frac{1}{6}C_4x^3+ \frac{1}{2}C_3x^2+C_2x+C_1\right] \\&\qquad +\,a_0(x)\left[ \frac{1}{(n-1)!}\int _0^x(x-t)^{n-1}u(t) {\mathrm {d}}t+\frac{1}{(n-1)!}C_{n-1}x^{n-1}\right. \\&\qquad \left. +\,\frac{1}{(n-2)!}C_{n-2}x^{n-2}+\cdots + \frac{1}{6}C_3x^3+\frac{1}{2}C_2x^2+C_1x+C_0\right] \\&\quad =f(x). \end{aligned}$$

As a consequence, we have

$$\begin{aligned} f(x)= & {} u(x)+\int _0^x\left[ a_{n-1}(x)+a_{n-2}(x)(x-t)+\frac{a_{n-3}(x)}{2}(x-t)^2\right. \\&+\,\frac{a_{n-4}(x)}{6}(x-t)^3+\cdots +\frac{a_1(x)}{(n-2)!}(x-t)^{n-2}\\&\left. +\,\frac{a_0(x)}{(n-1)!}(x-t)^{n-1}\right] u(t){\mathrm {d}}t\\&+\,a_{n-1}(x)C_{n-1}+a_{n-2}(x)(C_{n-1}x+C_{n-2})+a_{n-3}(x)\\&\times \left( \frac{1}{2}C_{n-1}x^2+C_{n-2}x+C_{n-3}\right) \\&+\,a_{n-4}(x)\left( \frac{1}{6}C_{n-1}x^3+\frac{1}{2} C_{n-2}x^2+C_{n-3}x+C_{n-4}\right) \\&+\cdots \\&+\,a_1(x)\left( \frac{1}{(n-2)!}C_{n-1}x^{n-2}+\frac{1}{(n-3)!}C_{n-2}x^{n-3}+\cdots \right. \\&\left. +\frac{1}{6}C_4x^3+\frac{1}{2}C_3x^2+C_2x+C_1 \right) \\&+\,a_0(x)\left( \frac{1}{(n-1)!}C_{n-1}x^{n-1}+\frac{1}{(n-2)!}C_{n-2}x^{n-2} +\cdots \right. \\&\left. +\frac{1}{6}C_3x^3+\frac{1}{2}C_2x^2+C_1x+C_0\right) \\= & {} u(x)+\int _0^x\sum _{k=0}^{n-1}\frac{a_k(x)}{(n-1-k)!}(x-t)^{n-1-k}u(t){\mathrm {d}}t+a_0(x)\sum _{k=0}^{n-1}\frac{C_k}{k!}x^k\\&+\,a_1(x)\sum _{k=0}^{n-2}\frac{C_{k+1}}{k!}x^k+a_2(x)\sum _{k=0}^{n-3}\frac{C_{k+2}}{k!}x^k\\&+\cdots +a_{n-4}(x)\sum _{k=0}^{3}\frac{C_{n-4+k}}{k!}x^k\\&+\,a_{n-3}(x)\sum _{k=0}^{2}\frac{C_{n-3+k}}{k!}x^k+a_{n-2}(x)\sum _{k=0}^{1}\frac{C_{n-2+k}}{k!}x^k\\&+\,a_{n-1}(x)\sum _{k=0}^{0}\frac{C_{n-1+k}}{k!}x^k\\= & {} u(x)+\sum _{k=0}^{n-1}\int _0^x\frac{a_k(x)}{(n-1-k)!}(x-t)^{n-1-k}u(t){\mathrm {d}}t\\&+\sum _{j=0}^{n-1} \sum _{k=0}^{n-j-1}a_j(x)\frac{C_{k+j}}{k!}x^k. \end{aligned}$$

Then we deduce that

$$\begin{aligned} u(x)= & {} f(x)-\sum _{k=0}^{n-1}\int _0^x\frac{a_k(x)}{(n-1-k)!}(x-t)^{n-1-k}u(t) {\mathrm {d}}t\nonumber \\&-\sum _{j=0}^{n-1}\sum _{k=0}^{n-j-1}a_j(x)\frac{C_{k+j}}{k!}x^k \nonumber \\= & {} -\int _0^x\sum _{k=0}^{n-1}\frac{a_k(x)}{(n-1-k)!}(x-t)^{n-1-k}u(t){\mathrm {d}}t\nonumber \\&+\,f(x)-\sum _{j=0}^{n-1}\sum _{k=0}^{n-j-1}a_j(x)\frac{C_{k+j}}{k!}x^k. \end{aligned}$$
(3.22)

Take

$$\begin{aligned}&K(x,t)=-\sum _{k=0}^{n-1}\frac{a_k(x)}{(n-1-k)!}(x-t)^{n-1-k},\nonumber \\&\quad F(x)=f(x) -\sum _{j=0}^{n-1}\sum _{k=0}^{n-j-1}a_j(x)\frac{C_{k+j}}{k!}x^k. \end{aligned}$$
(3.23)

Now by (3.22) and (3.23), we claim that (3.15) is equivalent to the following Volterra-type integral equation:

$$\begin{aligned} u(x)=\int _0^x K(x,t)u(t){\mathrm {d}}t+F(x). \end{aligned}$$

Let \(X=C([0,x_0])\). Define \(d:X\times X\rightarrow \mathbb {R}^+\) by \(d(u,v)=\max \limits _{0\le x\le x_0}|u(x)-v(x)|^2\). Then \((X,d)\) is a b-complete b-metric space with coefficient \(s=2\).

Define a mapping \(T:X\rightarrow X\) by

$$\begin{aligned} Tu(x)=\int _0^x K(x,t)u(t){\mathrm {d}}t+F(x). \end{aligned}$$

For any \(u,v\in X\), we have

$$\begin{aligned} d(Tu,Tv)= & {} \max \limits _{0\le x\le x_0}\left| \int _0^x K(x,t)u(t){\mathrm {d}}t-\int _0^x K(x,t)v(t){\mathrm {d}}t\right| ^2\\= & {} \max \limits _{0\le x\le x_0}\left| \int _0^x K(x,t)[u(t)-v(t)]{\mathrm {d}}t\right| ^2\\\le & {} {x_0}^2M^2\max \limits _{0\le t\le x_0}|u(t)-v(t)|^2={x_0}^2M^2d(u,v)\\\le & {} \lambda _1d(u,v)+\lambda _2\frac{d(u,Tu)d(v,Tv)}{1+d(u,v)}+ \lambda _3\frac{d(u,Tv)d(v,Tu)}{1+d(u,v)}\\&+\,\lambda _4\frac{d(u,Tu)d(u,Tv)}{1+d(u,v)}+\lambda _5\frac{d(v,Tv)d(v,Tu)}{1+d(u,v)}, \end{aligned}$$

where \(\lambda _1={x_0}^2M^2\), \(\lambda _2=\lambda _3=\lambda _4=\lambda _5=0\).

Since \({x_0}M<1\), we have \(\lambda _1<1\), so \(\lambda _1+\lambda _2+\lambda _3+s\lambda _4+s\lambda _5<1\). Hence all conditions of Theorem 2.3 are satisfied, and by Theorem 2.3, T has a unique fixed point in X. That is to say, the initial value problem (3.15) has a unique solution in \(C([0,x_0])\).

Finally, we derive the expression of the solution. To this end, take

$$\begin{aligned} y_n(x)=\sum _{i=0}^nu_i(x),\quad x\in [0,x_0], \end{aligned}$$

where

$$\begin{aligned} u_0(x)=F(x),\quad u_i(x)=\int _0^xK(x,t)u_{i-1}(t){\mathrm {d}}t,\ i=1,2,\ldots \end{aligned}$$

Note that

$$\begin{aligned} y_{n+1}(x)= & {} u_0(x)+\sum _{i=1}^{n+1}u_i(x)\\= & {} F(x)+\sum _{i=1}^{n+1}\int _0^xK(x,t)u_{i-1}(t){\mathrm {d}}t\\= & {} F(x)+\int _0^x\left[ K(x,t)\sum _{i=1}^{n+1}u_{i-1}(t)\right] {\mathrm {d}}t\\= & {} F(x)+\int _0^x\left[ K(x,t)\sum _{i=0}^nu_i(t)\right] {\mathrm {d}}t\\= & {} F(x)+\int _0^xK(x,t)y_n(t){\mathrm {d}}t\\= & {} Ty_n(x) \end{aligned}$$

for any \(n\in \mathbb {N}\), so \(y_{n+1}=Ty_n\) is Picard's iteration. Based on the proof of Theorem 2.3, it is not hard to verify that \(\{y_n\}\) b-converges to the fixed point u(x) of T. In other words,

$$\begin{aligned} u(x)=\lim _{n\rightarrow \infty }y_n(x)=\sum _{i=0}^\infty u_i(x). \end{aligned}$$
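As a concrete numerical illustration (our own test case, not from the text): for \(n=2\), \(a_1(x)=0\), \(a_0(x)=1\), \(f(x)=0\), \(C_0=0\), \(C_1=1\) (i.e., \(y''+y=0\), \(y(0)=0\), \(y'(0)=1\)), formula (3.23) gives \(K(x,t)=-(x-t)\) and \(F(x)=-x\), and the exact solution is \(u(x)=y''(x)=-\sin x\). The series \(u=\sum _{i=0}^{\infty }u_i\) can be approximated by iterating \(u\mapsto \int _0^xK(x,t)u(t)\,\mathrm{d}t+F(x)\):

```python
import math

def solve_volterra(K, F, x0, n_grid=200, n_iter=40):
    """Approximate the series u = sum u_i by Picard iteration
    u <- int_0^x K(x,t) u(t) dt + F(x)  (trapezoidal rule on [0, x0])."""
    h = x0 / n_grid
    xs = [i * h for i in range(n_grid + 1)]
    u = [F(x) for x in xs]                   # u_0 = F
    for _ in range(n_iter):
        new = []
        for i, x in enumerate(xs):
            if i == 0:
                new.append(F(x))
                continue
            vals = [K(x, xs[j]) * u[j] for j in range(i + 1)]
            integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
            new.append(integral + F(x))
        u = new
    return xs, u

# y'' + y = 0, y(0)=0, y'(0)=1: K(x,t) = -(x-t), F(x) = -x, exact u = -sin x.
# Here x0 * M = 0.9 * 0.9 < 1, matching the hypothesis of Theorem 3.3.
xs, us = solve_volterra(lambda x, t: -(x - t), lambda x: -x, 0.9)
print(max(abs(u + math.sin(x)) for x, u in zip(xs, us)))
```

The iterates converge faster than the geometric rate \(({x_0}M)^m\) suggests, since iterated Volterra kernels pick up a factorial decay.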

Substituting \(u(x)=\sum _{i=0}^\infty u_i(x)\) into (3.21), we conclude that the solution of (3.15) takes the following form:

$$\begin{aligned} y=\frac{1}{(n-1)!}\sum \limits _{i=0}^\infty \int _0^x(x-t)^{n-1}u_i(t){\mathrm {d}}t+\sum _{k=0}^{n-1}\frac{C_k}{k!}x^k. \end{aligned}$$

\(\square \)