1 Introduction and preliminaries

Throughout this article, \(\mathbb {Z}_{+}\) denotes the set of nonnegative integers, Y a nonempty, closed, and convex subset of a Banach space X, and F(G) the set of fixed points of a self-mapping G defined on Y.

The iterative approximation of fixed points of linear and nonlinear mappings is one of the most significant tools in fixed point theory, with many applications in fields such as engineering, differential equations, and integral equations. Hence, a large number of researchers have introduced and studied iterative algorithms for certain classes of mappings (see, e.g., Ali et al. 2020; Khan 2013; Thakur et al. 2016; Katchang and Kumam 2010; Maingé and Măruşter 2011). The following iterative algorithms are known as the Picard (1890), Mann (1953), Ishikawa (1974), S (Agrawal et al. 2007), normal-S (Sahu 2011), and Varat (Sintunavarat and Pitea 2016) algorithms, respectively, for a self-mapping G defined on Y:

$$\begin{aligned}&\left\{ \begin{array}{ll} p_{0} \in Y,\\ p_{n+1} =Gp_{n}, \quad n \in {\mathbb {Z}_{+}} \end{array} \right. \end{aligned}$$
(1.1)
$$\begin{aligned}&\left\{ \begin{array}{ll} p_{0}\in Y,\\ p_{n+1} = (1-\tilde{r_{n}})p_{n} + {\tilde{r_{n}}}Gp_{n}, \quad n \in {\mathbb {Z}_{+}} \end{array} \right. \end{aligned}$$
(1.2)
$$\begin{aligned}&\left\{ \begin{array}{ll} p_{0} \in Y,\\ p_{n+1} = (1-\tilde{r_{n}})p_{n} + {\tilde{r_{n}}}Gq_{n},\\ q_{n} = (1-\tilde{s_{n}})p_{n} + \tilde{s_{n}}Gp_{n}, \quad n \in {\mathbb {Z}_{+}} \end{array} \right. \end{aligned}$$
(1.3)
$$\begin{aligned}&\left\{ \begin{array}{ll} p_{0}\in Y,\\ p_{n+1} = (1-\tilde{r_{n}})Gp_{n} + {\tilde{r_{n}}}Gq_{n},\\ q_{n} = (1-\tilde{s_{n}})p_{n} + \tilde{s_{n}}Gp_{n}, \quad n \in {\mathbb {Z}_{+}} \end{array} \right. \end{aligned}$$
(1.4)
$$\begin{aligned}&\left\{ \begin{array}{ll} p_{0} \in Y,\\ p_{n+1} = G((1-\tilde{r_{n}})p_{n} + \tilde{r_{n}}Gp_{n}), \quad n \in {\mathbb {Z}_{+}} \end{array} \right. \end{aligned}$$
(1.5)
$$\begin{aligned}&\left\{ \begin{array}{ll} p_{0} \in Y,\\ p_{n+1} = (1-\tilde{r_{n}})Gz_{n} + {\tilde{r_{n}}}Gq_{n},\\ z_{n} = (1-\tilde{t_{n}})p_{n} + \tilde{t_{n}}q_{n},\\ q_{n} = (1-\tilde{s_{n}})p_{n} + \tilde{s_{n}}Gp_{n}, \quad n \in {\mathbb {Z}_{+}}, \end{array} \right. \end{aligned}$$
(1.6)

where \(\{\tilde{r_{n}}\}\), \(\{\tilde{s_{n}}\}\), and \(\{\tilde{t_{n}}\}\) are sequences in (0, 1).

Motivated by the above, one can raise the following natural question:

Question: Is it possible to define a two-step iterative algorithm whose rate of convergence is faster than that of the S iterative algorithm (1.4) and some other iterative algorithms?

As an answer, we introduce a new two-step iterative algorithm, called \(F^{*}\) algorithm, which is defined as follows:

For a self-mapping G on a nonempty closed and convex subset Y of a Banach space X, the sequence \(\{p_{n}\}\) is defined by:

$$\begin{aligned} \left\{ \begin{array}{ll} p_{0} \in Y,\\ p_{n+1} =Gq_{n},\\ q_{n} =G((1-\tilde{r_{n}})p_{n} + \tilde{r_{n}}Gp_{n}), \quad n \in {\mathbb {Z}_{+}}, \end{array} \right. \end{aligned}$$
(1.7)

where \(\{\tilde{r_{n}}\}\) is a sequence in (0, 1).
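Computationally, one step of (1.7) is two applications of G wrapped around a convex combination. The following Python sketch shows one way to implement it; the mapping G, the constant choice \(\tilde{r_{n}}=0.5\), and the step count are our illustrative assumptions, not part of the definition:

```python
# A minimal sketch of the F* iteration (1.7).
def f_star(G, p0, r=lambda n: 0.5, steps=50):
    """Run p_{n+1} = G(q_n), q_n = G((1 - r_n) p_n + r_n G(p_n))."""
    p = p0
    for n in range(steps):
        rn = r(n)
        q = G((1 - rn) * p + rn * G(p))
        p = G(q)
    return p

# Example: G is a 1/2-contraction on the reals with fixed point 2.
G = lambda x: 0.5 * x + 1.0
print(f_star(G, p0=10.0))  # approaches the fixed point 2
```

Any mapping for which the iterates stay in its domain can be passed as G; the toy contraction above merely makes convergence easy to observe.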

A mapping \(G:X \rightarrow X\) is said to be a \(\delta \)-contraction if there is a constant \(\delta \in [0,1)\), such that:

$$\begin{aligned} \Vert Gx-Gy \Vert \le \delta \Vert x-y \Vert ,\quad \forall ~x,y \in X. \end{aligned}$$

In 2003, Berinde (2003) introduced the concept of weak contractions, a very important class of mappings that is wider than the classes of contraction mappings, Kannan mappings (Kannan 1968), Chatterjea mappings (Chatterjea 1972), Zamfirescu mappings (Zamfirescu 1972), etc.

Now, we recall the definition of weak contraction.

Definition 1.1

A self-map G on a Banach space X is called a weak contraction if there are constants \(\delta \in (0,1)\) and \(L\ge 0\), such that:

$$\begin{aligned} \Vert Gx-Gy \Vert \le \delta \Vert x-y\Vert + L\Vert y-Gx\Vert ,\quad \forall ~x,y \in X. \end{aligned}$$
(1.8)

Later, many authors also called such mappings almost contractions.

Berinde (2003) proved the following theorem on the existence and uniqueness of a fixed point of a weak contraction.

Theorem 1.2

Let G be a self-map on a Banach space X satisfying (1.8) and:

$$\begin{aligned} \Vert Gx-Gy \Vert \le \delta \Vert x-y\Vert + L\Vert x-Gx\Vert ,\quad \forall ~x,y \in X. \end{aligned}$$
(1.9)

Then, the mapping G has a unique fixed point in X.

On the other hand, the concept of stability of iterative algorithms was first introduced by Ostrowski (1967). He proved that the Picard iterative algorithm is stable for contraction mappings. The definition of stability introduced by Ostrowski runs as follows:

Definition 1.3

(Ostrowski 1967) Let G be a self-map on a Banach space X with fixed point p. Assume that \(p_{0} \in X\) and that \(p_{n+1}=h(G,p_{n})\) is an iterative algorithm for some function h. Let \(\{z_{n}\}\) be an approximate sequence of \(\{p_n\}\) in X and define \(\sigma _{n}=\Vert z_{n+1}-h(G,z_{n})\Vert \). Then, the iterative algorithm \(p_{n+1}=h(G,p_{n})\) is called G-stable if:

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\sigma _{n}=0 \iff \lim \limits _{n\rightarrow \infty }z_{n}=p. \end{aligned}$$

Using Definition 1.3, Harder (1987) and Harder and Hicks (1988) proved the stability of various iterative algorithms for several classes of contractive-type operators. Rhoades (1990) generalized the results of Harder and Hicks (1988) to two different classes of contraction mappings of Ćirić type. Furthermore, Osilike (1995, 1996, 1997, 1998, 2000) showed the stability of the Mann and Ishikawa algorithms for a class of contractive-type operators. In 1998, Osilike (1998) introduced the concept of almost stability of iterative algorithms, a weaker notion than the stability due to Ostrowski (1967), defined as follows:

Definition 1.4

Let G be a self-map on a Banach space X with a fixed point p. Assume that \(p_{0} \in X\) and that \(p_{n+1}=h(G,p_{n}),~n \in \mathbb {Z}_{+}\), is an iterative algorithm for some function h. Let \(\{z_{n}\}\) be an approximate sequence of \(\{p_n\}\) in X and define \(\sigma _{n}=\Vert z_{n+1}-h(G,z_{n})\Vert \). Then, the iterative algorithm \(p_{n+1}=h(G,p_{n})\) is called almost G-stable if:

$$\begin{aligned} \sum _{n=0}^{\infty }\sigma _{n}<\infty \implies \lim \limits _{n\rightarrow \infty }z_{n}=p. \end{aligned}$$

Osilike also proved the stability of the Ishikawa algorithm for a class of pseudo-contractive operators.

The following lemma plays a crucial role in proving the main result of this paper.

Lemma 1.5

Berinde (1997) Let \(\{u_{n}\}\) and \(\{v_{n}\}\) be two sequences in \(\mathbb {R_+}\) (the set of nonnegative real numbers) and let \(0\le s<1\) be such that \(u_{n+1}\le su_{n}+v_{n}\) for all \(n\ge 0\).

(i) If \(\lim \nolimits _{n\rightarrow \infty }v_{n}=0\), then \(\lim \nolimits _{n\rightarrow \infty }u_{n}=0\).
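Lemma 1.5 is easy to illustrate numerically; the choices \(s=0.5\) and \(v_{n}=\frac{1}{n+1}\) below are ours, purely for illustration:

```python
# Illustration of Lemma 1.5: iterate the extreme case
# u_{n+1} = s*u_n + v_n of the inequality with s = 0.5 and
# v_n = 1/(n+1) -> 0; the lemma predicts u_n -> 0.
s = 0.5
u = 1.0
for n in range(1000):
    u = s * u + 1.0 / (n + 1)
print(u)  # close to 0
```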

Şoltuz and Grosan (2008) proved the following useful lemma, which is used to establish data dependence results.

Lemma 1.6

Let \(\{\theta _{n}\}\) be a sequence in \(\mathbb {R_+}\) for which there exists \(N\in \mathbb {Z}_{+}\), such that the following inequality is satisfied for all \(n\ge N\):

$$\begin{aligned} \theta _{n+1}\le (1-\mu _{n})\theta _{n}+\mu _{n} \eta _{n}, \end{aligned}$$

where \(\mu _{n}\in (0,1)\) for all \(n\in \mathbb {Z}_{+} \), \(\sum _{n=0}^{\infty }\mu _{n}=\infty \), and \(\eta _{n}\ge 0\). Then:

$$\begin{aligned} 0\le \limsup \limits _{n\rightarrow \infty }\theta _{n}\le \limsup \limits _{n\rightarrow \infty }\eta _{n}. \end{aligned}$$

We observe the following remark.

Remark 1.7

Lemma 1.6 need not be true in general, because there is no guarantee that \(\limsup \nolimits _{n\rightarrow \infty }\theta _{n}\) and \(\limsup \nolimits _{n\rightarrow \infty }\eta _{n}\) are finite. For example, choose the sequences \(\theta _{n}=\frac{n}{2}\), \(\mu _{n}=\frac{1}{2}\), and \(\eta _{n}=n\) for all \(n\in \mathbb {Z}_{+}\). Then, all the conditions of Lemma 1.6 are satisfied (the inequality holds with \(N=2\)). However, neither \(\limsup \nolimits _{n\rightarrow \infty }\theta _{n}\) nor \(\limsup \nolimits _{n\rightarrow \infty }\eta _{n}\) is finite.
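The computations behind this counterexample can be checked mechanically; in the sketch below (our own, with the horizon 10000 an arbitrary cutoff), the recursion's hypothesis holds from \(n=2\) onward while both sequences grow without bound:

```python
# theta_n = n/2, mu_n = 1/2, eta_n = n: the inequality
# theta_{n+1} <= (1 - mu_n)*theta_n + mu_n*eta_n reads
# (n+1)/2 <= 3n/4, which holds exactly for n >= 2.
holds_from_2 = all((n + 1) / 2 <= 0.5 * (n / 2) + 0.5 * n for n in range(2, 10000))
fails_at_0 = not ((0 + 1) / 2 <= 0.5 * (0 / 2) + 0.5 * 0)
print(holds_from_2, fails_at_0)  # True True
```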

Now, we modify Lemma 1.6 as follows:

Lemma 1.8

Let \(\{\theta _{n}\}\) be a sequence in \(\mathbb {R_+}\) for which there exists \(N\in \mathbb {Z}_{+}\), such that \(\{\theta _{n}\}\) satisfies the following inequality for all \(n\ge N\):

$$\begin{aligned} \theta _{n+1}\le (1-\mu _{n})\theta _{n}+\mu _{n} \eta _{n}, \end{aligned}$$

where \(\mu _{n}\in (0,1)\) for all \(n\in \mathbb {Z}_{+} \), \(\sum _{n=0}^{\infty }\mu _{n}=\infty \), and \(\{\eta _{n}\}\) is a bounded sequence of nonnegative numbers. Then:

$$\begin{aligned} 0\le \limsup \limits _{n\rightarrow \infty }\theta _{n}\le \limsup \limits _{n\rightarrow \infty }\eta _{n}. \end{aligned}$$

Proof

One can easily prove this lemma using the proof of Lemma 1 in Park (1994). \(\square \)

To compare the rate of convergence of iterative algorithms, Berinde (2004) gave the following definitions.

Definition 1.9

Let \(\{\theta _{n}\}\) and \(\{\eta _{n}\}\) be sequences in \(\mathbb {R_+}\) that converge to \(\theta \) and \(\eta \), respectively. Suppose that:

$$\begin{aligned} \ell =\lim \limits _{n\rightarrow \infty }\frac{|\theta _{n}-\theta |}{|\eta _{n}-\eta |}. \end{aligned}$$

(i) If \(\ell =0\), then \(\{\theta _{n}\}\) converges to \(\theta \) faster than \(\{\eta _{n}\}\) to \(\eta \).

(ii) If \(0<\ell <\infty \), then \(\{\theta _{n}\}\) and \(\{\eta _{n}\}\) converge at the same rate.

Definition 1.10

Let \(\{p_{n}\}\) and \(\{q_{n}\}\) be two iterative algorithms both converging to the same point p with the following error estimates (the best ones available):

$$\begin{aligned} |p_{n}-p|\le & {} \theta _{n},\\ |q_{n}-p|\le & {} \eta _{n}. \end{aligned}$$

If \(\lim \limits _{n\rightarrow \infty }\frac{\theta _{n}}{\eta _{n}}=0\), then \(\{p_{n}\}\) converges faster than \(\{q_{n}\}\).

Definition 1.11

(Berinde 2007) Let S and G be two self operators on a nonempty subset Y of a Banach space X. An operator S is said to be an approximate operator of G if, for all \(x\in Y\), there exists a fixed \(\epsilon >0\), such that \(\Vert Gx-Sx\Vert \le \epsilon \).

In this study, we prove that \(F^{*}\) iterative algorithm converges strongly to a unique fixed point of weak contractions satisfying (1.9) and \(F^{*}\) algorithm is almost G-stable. Also, we prove that the proposed algorithm converges faster than Picard, Mann, Ishikawa, S, normal-S, and Varat algorithms. To support the results, we present a couple of numerical examples. Finally, we give a data dependence result for weak contractions satisfying (1.9) using \(F^{*}\) algorithm and we present an illustrative numerical example to show the validity of the result. As an application, we approximate the solution of nonlinear quadratic Volterra integral equation via the proposed iterative algorithm.

2 Main results

In this section, we prove the main results for weak contractions satisfying (1.9) using \(F^{*}\) iterative algorithm in an arbitrary Banach space. First, we prove the following strong convergence result.

Theorem 2.1

Let \(G:Y \rightarrow Y\) be a weak contraction satisfying (1.9), where Y is a nonempty, closed, and convex subset of a Banach space X. Then, the sequence \(\{p_{n}\}\) defined by \(F^{*}\) iterative algorithm (1.7) converges to a unique fixed point of G.

Proof

By condition (1.9), we have:

$$\begin{aligned} \Vert Gp_{n}-p\Vert= & {} \Vert Gp_{n}-Gp\Vert \\\le & {} \delta \Vert p_{n}-p\Vert +L\Vert p-Gp\Vert \\= & {} \delta \Vert p_{n}-p\Vert ,~ \forall ~n\in \mathbb {Z}_{+}. \end{aligned}$$

Now, by \(F^{*}\) iterative algorithm (1.7), we have:

$$\begin{aligned} \Vert q_{n}-p \Vert= & {} \Vert G((1-\tilde{r_{n}})p_{n}+ \tilde{r_{n}}Gp_{n})-p \Vert \nonumber \\\le & {} \delta \Vert (1-\tilde{r_{n}})p_{n}+\tilde{r_{n}}Gp_{n}-p\Vert \nonumber \\\le & {} \delta ((1-\tilde{r_{n}})\Vert p_{n}-p \Vert +\tilde{r_{n}}\Vert Gp_{n}-p \Vert )\nonumber \\\le & {} \delta ((1-\tilde{r_{n}})\Vert p_{n}-p \Vert +\delta \tilde{r_{n}}\Vert p_{n}-p \Vert )\nonumber \\= & {} \delta (1-\tilde{r_{n}}+\delta \tilde{r_{n}})\Vert p_{n}-p\Vert \nonumber \\= & {} \delta (1-(1-\delta )\tilde{r_{n}})\Vert p_{n}-p \Vert . \end{aligned}$$
(2.1)

Using Eq. (2.1), we get:

$$\begin{aligned} \Vert p_{n+1}-p \Vert= & {} \Vert Gq_{n}-p \Vert \nonumber \\\le & {} \delta \Vert q_{n}-p \Vert \nonumber \\\le & {} \delta ^{2}(1-(1-\delta )\tilde{r_{n}})\Vert p_{n}-p\Vert . \end{aligned}$$
(2.2)

Since \(0< \delta <1\) and \(\tilde{r_{n}} \in (0,1)\), using the fact that \(1-(1-\delta )\tilde{r_{n}}\le 1\), we get:

$$\begin{aligned} \Vert p_{n+1}-p \Vert \le \delta ^{2} \Vert p_{n}-p \Vert . \end{aligned}$$

Inductively, we get:

$$\begin{aligned} \Vert p_{n+1}-p \Vert \le \delta ^{2(n+1)} \Vert p_{0}-p \Vert . \end{aligned}$$
(2.3)

Since \(0< \delta <1\), hence, \(\{p_{n}\}\) converges strongly to p. This completes the proof. \(\square \)

The following theorem shows the almost G-stability of \(F^{*}\) iterative algorithm (1.7).

Theorem 2.2

Let \(G:Y\rightarrow Y\) be a weak contraction satisfying (1.9), where Y is a nonempty, closed, and convex subset of a Banach space X. Then, \(F^{*}\) iterative algorithm (1.7) is almost G-stable.

Proof

Suppose that \(\{z_{n}\}\) is an arbitrary sequence in Y, that the sequence defined by the \(F^{*}\) algorithm is \(p_{n+1}=h(G,p_{n})\), and that \(\sigma _{n}=\Vert z_{n+1}-h(G,z_{n})\Vert ,~n\in \mathbb {Z}_{+}\). We will prove that:

$$\begin{aligned} \sum _{n=0}^{\infty }\sigma _{n}< \infty \implies \lim \limits _{n\rightarrow \infty } z_{n}= p. \end{aligned}$$

Let \(\sum _{n=0}^{\infty }\sigma _{n}<\infty \). Then, by the \(F^{*}\) algorithm, we have:

$$\begin{aligned} \Vert z_{n+1}-p\Vert\le & {} \Vert z_{n+1}-h(G,z_{n})\Vert +\Vert h(G,z_{n})-p\Vert \\= & {} \sigma _{n}+\Vert G(G((1-\tilde{r_{n}})z_{n}+\tilde{r_{n}}Gz_{n}))-p\Vert \\\le & {} \sigma _{n}+\delta ^{2}(1-(1-\delta )\tilde{r_{n}})\Vert z_{n}-p\Vert \\\le & {} \sigma _{n}+\delta ^{2}\Vert z_{n}-p\Vert , \end{aligned}$$

where the last step uses \(1-(1-\delta )\tilde{r_{n}}\le 1\). Define \(u_{n}=\Vert z_{n}-p\Vert \) and \(q=\delta ^{2}\). Then, \(0\le q< 1\) and:

$$\begin{aligned} u_{n+1}\le qu_{n}+\sigma _{n}. \end{aligned}$$

Thus, since \(\sigma _{n}\rightarrow 0\), the conclusion follows from Lemma 1.5. \(\square \)
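As a numerical illustration of Theorem 2.2 (a sketch only: G is the weak contraction of Example 2.4 below, and the perturbation sizes \(\sigma _{n}=2^{-n}\) and the step count are our arbitrary choices), the perturbed \(F^{*}\) sequence still approaches the fixed point:

```python
import math

# Perturb each F* step of G(x) = x - 1 + exp(-x) (fixed point p = 0)
# by sigma_n = 2^{-n}.  Since sum(sigma_n) < infinity, almost
# G-stability predicts z_n -> 0.
G = lambda x: x - 1 + math.exp(-x)
r = 0.85
z = 5.0
for n in range(60):
    q = G((1 - r) * z + r * G(z))
    z = G(q) + 2.0 ** (-n)   # h(G, z_n) plus a summable perturbation
print(z)  # near the fixed point 0
```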

The following theorem proves that \(F^{*}\) iterative algorithm converges faster than the algorithms (1.1)–(1.6) for weak contractions.

Theorem 2.3

Let \(G:Y \rightarrow Y\) be a weak contraction satisfying (1.9), where Y is a nonempty, closed, and convex subset of a Banach space X. Let the sequences \(\{p_{1,n}\}\), \(\{p_{2,n}\}\), \(\{p_{3,n}\}\), \(\{p_{4,n}\}\), \(\{p_{5,n}\}\), \(\{p_{6,n}\}\), and \(\{p_{n}\}\) be defined by the Picard, Mann, Ishikawa, S, normal-S, Varat, and \(F^{*}\) iterative algorithms, respectively, and suppose that they all converge to the same fixed point, say p. Then, the \(F^{*}\) algorithm converges to the fixed point p of G faster than the algorithms (1.1)–(1.6).

Proof

In view of Eq. (2.3) in Theorem 2.1, we have:

$$\begin{aligned} \Vert p_{n+1}-p\Vert \le \delta ^{2(n+1)}\Vert p_{0}-p\Vert =\alpha _{n},~n\in {\mathbb {Z}_{+}}. \end{aligned}$$

As proved by Khan (2013, Proposition 1):

$$\begin{aligned} \Vert p_{1,n}-p\Vert \le \delta ^{n+1}\Vert p_{1,0}-p\Vert =\alpha _{1,n},~n\in {\mathbb {Z}_{+}}. \end{aligned}$$

Then:

$$\begin{aligned} \frac{\alpha _{n}}{\alpha _{1,n}}=\frac{\delta ^{2(n+1)}\Vert p_{0}-p\Vert }{\delta ^{n+1}\Vert p_{1,0}-p\Vert }=\delta ^{(n+1)}\frac{\Vert p_{0}-p\Vert }{\Vert p_{1,0}-p\Vert }. \end{aligned}$$

Since \(\delta <1\), therefore, we have \(\frac{\alpha _{n}}{\alpha _{1,n}}\rightarrow 0\) as \(n\rightarrow \infty \). Hence, the sequence \(\{p_{n}\}\) converges faster than \(\{p_{1,n}\}\) to p.

Now, by normal-S algorithm (1.5), we get:

$$\begin{aligned} \Vert p_{n+1}-p\Vert= & {} \Vert G((1-\tilde{r_{n}})p_{n}+\tilde{r_{n}}Gp_{n})-p\Vert \\\le & {} \delta \Vert (1-\tilde{r_{n}})p_{n}+\tilde{r_{n}} Gp_{n}-p\Vert \\\le & {} \delta [(1-\tilde{r_{n}})\Vert p_{n}-p\Vert +\delta \tilde{r_{n}}\Vert p_{n}-p\Vert ]\\= & {} \delta (1-(1-\delta ) \tilde{r_{n}})\Vert p_{n}-p\Vert \\\le & {} \delta \Vert p_{n}-p\Vert . \end{aligned}$$

Inductively, we get:

$$\begin{aligned} \Vert p_{n+1}-p\Vert \le \delta ^{n+1}\Vert p_{0}-p\Vert . \end{aligned}$$

That is, for the normal-S sequence \(\{p_{5,n}\}\):

$$\begin{aligned} \Vert p_{5,n}-p\Vert \le \delta ^{n+1}\Vert p_{5,0}-p\Vert =\alpha _{5,n}. \end{aligned}$$

Then:

$$\begin{aligned} \frac{\alpha _{n}}{\alpha _{5,n}}=\frac{\delta ^{2(n+1)}\Vert p_{0}-p\Vert }{\delta ^{n+1}\Vert p_{5,0}-p\Vert }=\delta ^{n+1}\frac{\Vert p_{0}-p\Vert }{\Vert p_{5,0}-p\Vert }. \end{aligned}$$

We get \(\frac{\alpha _{n}}{\alpha _{5,n}}\rightarrow 0\) as \(n\rightarrow \infty \). Hence, the sequence \(\{p_{n}\}\) converges faster than \(\{p_{5,n}\}\) to the fixed point p.

As proved by Sintunavarat and Pitea (2016, Theorem 2.1):

$$\begin{aligned} \Vert p_{6,n}-p\Vert \le \delta ^{n+1}[1-(1-\delta )s(t-r+rt)]^{n+1}\Vert p_{6,0}-p\Vert =\alpha _{6,n},~n\in {\mathbb {Z}_{+}}. \end{aligned}$$

Using the fact that \(1-(1-\delta )s(t-r+rt)\le 1\), we get:

$$\begin{aligned} \Vert p_{6,n}-p\Vert \le \delta ^{n+1}\Vert p_{6,0}-p\Vert =\alpha _{6,n}. \end{aligned}$$

Then:

$$\begin{aligned} \frac{\alpha _{n}}{\alpha _{6,n}}=\frac{\delta ^{2(n+1)}\Vert p_{0}-p\Vert }{\delta ^{n+1}\Vert p_{6,0}-p\Vert }=\delta ^{n+1}\frac{\Vert p_{0}-p\Vert }{\Vert p_{6,0}-p\Vert }. \end{aligned}$$

Thus, we get \(\frac{\alpha _{n}}{\alpha _{6,n}}\rightarrow 0\) as \(n\rightarrow \infty \). Hence, \(\{p_{n}\}\) converges faster than \(\{p_{6,n}\}\) to p.

Also, Sintunavarat and Pitea (2016) showed that the Varat algorithm converges faster than the Mann, Ishikawa, and S iterative algorithms for the class of weak contractions. Thus, the \(F^{*}\) iterative algorithm converges faster than all the iterative algorithms discussed earlier. \(\square \)

Now, we furnish the following examples in support of the above claim.

Example 2.4

Let \(X=\mathbb {R}\) be a Banach space with usual norm and \(Y=[0,100]\), a subset of X. Let G be a self-mapping on Y defined by \(Gx=x-1+e^{-x}\), for all \(x\in Y\). It can be easily verified that G is a weak contraction satisfying (1.9) and G has a unique fixed point \(p=0\). Choose the control sequences \(\tilde{r_{n}}=0.85\), \(\tilde{s_{n}}=0.15\), and \(\tilde{t_{n}}=0.45\) with the initial guess \(p_{0}=5\).

With the help of MATLAB R2015a, we find that the \(F^{*}\) iterative algorithm (1.7) converges to the fixed point \(p=0\) faster than the Picard, Mann, Ishikawa, S, normal-S, and Varat iterative algorithms; see Table 1 and Fig. 1.
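The flavor of this comparison can be reproduced in a few lines of Python (a sketch only: we compare just Picard against \(F^{*}\), and the step count of 8 is an arbitrary choice):

```python
import math

# Example 2.4: G(x) = x - 1 + e^{-x} on [0, 100], fixed point p = 0,
# control sequence r_n = 0.85, initial guess p0 = 5.
G = lambda x: x - 1 + math.exp(-x)
r = 0.85

picard, fstar = 5.0, 5.0
for n in range(8):
    picard = G(picard)                      # Picard step (1.1)
    q = G((1 - r) * fstar + r * G(fstar))   # F* step (1.7)
    fstar = G(q)

print(picard, fstar)  # both near 0; the F* iterate is much closer
```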

Table 1 A comparison of the different iterative algorithms for Example 2.4

Example 2.5

Let \(X=\mathbb {R}^{2}\) be a Banach space with respect to the norm \(\Vert x\Vert =\Vert (x_{1},x_{2})\Vert =|x_{1}|+|x_{2}|\) and \(Y=\{x=(x_{1},x_{2}): (x_{1},x_{2})\in [0,1]\times [0,1]\}\) be a subset of X. Let \(G:Y\rightarrow Y\) be defined by:

$$\begin{aligned} G(x_{1},x_{2})=\left\{ \begin{array}{ll} \Big (\frac{1}{2}\sin (x_{1}),\frac{1}{4}\sin (x_{2})\Big ),&{}\quad \text{ if }~(x_{1},x_{2})\in \left[ 0,\frac{1}{2}\right] \times \left[ 0,\frac{1}{2}\right] ,\\ \\ \Big (\frac{1}{2}x_{1},\frac{1}{4}x_{2}\Big ),&{}\quad \text{ if }~ (x_{1},x_{2})\in \big (\frac{1}{2},1\big ]\times \big (\frac{1}{2},1\big ]. \end{array} \right. \end{aligned}$$

Then, G is a weak contraction satisfying (1.9) for \(\delta =\frac{1}{2}=L\) and G has a unique fixed point \((p,q)=(0,0)\), but G is not a contraction mapping.

Now, all the conditions of Theorems 2.1 and 2.3 are satisfied. Thus, with the help of MATLAB R2015a, we show that the sequence defined by the \(F^{*}\) iterative algorithm (1.7) converges to the unique fixed point \((p,q)=(0,0)\) of the mapping G faster than the algorithms (1.1)–(1.6), as shown in Tables 2 and 3 and Fig. 2. For this comparison, we choose the control sequences \(\tilde{r_{n}}=0.85\), \(\tilde{s_{n}}=0.15\), and \(\tilde{t_{n}}=0.45\) with the initial guess \((p_{0},q_{0})=(0.25,0.5)\).
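A minimal Python sketch of the \(F^{*}\) iteration for this example (our own code; starting from \((0.25,0.5)\), the iterates never leave \(\left[ 0,\frac{1}{2}\right] \times \left[ 0,\frac{1}{2}\right] \), so only the sine branch of G is exercised, and the fallback branch below is included only for completeness):

```python
import math

# Example 2.5 in the l1-norm on R^2, fixed point (0, 0).
def G(v):
    x1, x2 = v
    if x1 <= 0.5 and x2 <= 0.5:
        return (0.5 * math.sin(x1), 0.25 * math.sin(x2))
    return (0.5 * x1, 0.25 * x2)  # fallback branch (not reached here)

r = 0.85
p = (0.25, 0.5)
for n in range(60):
    gp = G(p)
    inner = tuple((1 - r) * a + r * b for a, b in zip(p, gp))
    p = G(G(inner))               # one F* step

print(p)  # converges to the fixed point (0, 0)
```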

Fig. 1

Convergence behavior of the sequences defined by different iterative algorithms for Example 2.4

Table 2 A comparison of the different iterative algorithms for Example 2.5
Table 3 A comparison of the different iterative algorithms for Example 2.5

3 A data dependence result

In recent years, the data dependence of fixed points has become an important area of fixed point theory. Notable contributions to its study include Markin (1973), Rus and Muresan (1998), Rus et al. (2001, 2003), Berinde (2003), Espínola and Petruşel (2005), Şoltuz (2004), Şoltuz and Grosan (2008), and Olatinwo (2009) and the references cited therein.

Now, we prove the following theorem for data dependence of fixed points.

Theorem 3.1

Let S be an approximate operator of a weak contraction G satisfying (1.9) and \(\{p_{n}\}\) be a sequence defined by \(F^{*}\) iterative algorithm (1.7) for G. Now, define a sequence \(\{u_{n}\}\) for S as follows:

$$\begin{aligned} \left\{ \begin{array}{ll} u_{0}= u \in Y,\\ u_{n+1} =Sv_{n},\\ v_{n} =S((1-\tilde{r_{n}})u_{n} + \tilde{r_{n}}Su_{n}), \quad n \in {\mathbb {Z}_{+}}, \end{array} \right. \end{aligned}$$
(3.1)

where \(\{\tilde{r_{n}}\}\) is a sequence in (0, 1) satisfying \(\tilde{r_{n}}\ge \frac{1}{2}\) for all \(n \in \mathbb {Z}_{+}\) and \(\sum _{n=0}^{\infty }\tilde{r_{n}}=\infty \). If \(Gp=p\) and \(Sq=q\) are such that \(u_{n}\rightarrow q\) as \(n \rightarrow \infty \), then we have:

$$\begin{aligned} \Vert p-q \Vert \le \frac{5\epsilon }{1-\delta }, \end{aligned}$$

where \(\epsilon >0\) is a fixed number.

Proof

It follows from (1.7), (1.9), and (3.1) that:

$$\begin{aligned} \Vert q_{n}-v_{n}\Vert= & {} \Vert G((1-\tilde{r_{n}})p_{n} + \tilde{r_{n}}Gp_{n})-S((1-\tilde{r_{n}})u_{n} + \tilde{r_{n}}Su_{n})\Vert \nonumber \\\le & {} \Vert G((1-\tilde{r_{n}})p_{n} + \tilde{r_{n}}Gp_{n})-G((1-\tilde{r_{n}})u_{n} + \tilde{r_{n}}Su_{n})\Vert \nonumber \\&+ \Vert G((1-\tilde{r_{n}})u_{n} + \tilde{r_{n}}Su_{n})-S((1-\tilde{r_{n}})u_{n} + \tilde{r_{n}}Su_{n})\Vert \nonumber \\\le & {} \delta \big ( (1-\tilde{r_{n}})\Vert p_{n}-u_{n}\Vert +\tilde{r_{n}}\Vert Gp_{n}-Su_{n}\Vert \big )\nonumber \\&+ L\Vert (1-\tilde{r_{n}})p_{n} + \tilde{r_{n}}Gp_{n}-G((1-\tilde{r_{n}})p_{n} + \tilde{r_{n}}Gp_{n})\Vert + \epsilon \nonumber \\\le & {} \delta \big ( (1-\tilde{r_{n}})\Vert p_{n}-u_{n}\Vert +\tilde{r_{n}}\big (\Vert Gp_{n}-Gu_{n}\Vert +\Vert Gu_{n}-Su_{n}\Vert \big )\big )\nonumber \\&+L( 1-(1-\delta )\tilde{r_{n}})(1+\delta )\Vert p_{n}-p\Vert + \epsilon \nonumber \\\le & {} \delta \big ( (1-\tilde{r_{n}})\Vert p_{n}-u_{n}\Vert +\tilde{r_{n}}\big (\delta \Vert p_{n}-u_{n}\Vert +L\Vert p_{n}-Gp_{n}\Vert +\epsilon \big )\big )\nonumber \\&+ L( 1-(1-\delta )\tilde{r_{n}})(1+\delta )\Vert p_{n}-p\Vert + \epsilon \nonumber \\\le & {} \delta \big ( (1-(1-\delta )\tilde{r_{n}})\Vert p_{n}-u_{n}\Vert +\tilde{r_{n}}L\Vert p_{n}-Gp_{n}\Vert +\tilde{r_{n}}\epsilon \big )\nonumber \\&+ L( 1-(1-\delta )\tilde{r_{n}})(1+\delta )\Vert p_{n}-p\Vert + \epsilon . \end{aligned}$$
(3.2)
Fig. 2

Convergence behavior of the sequences defined by different iterative algorithms for Example 2.5

Using (3.2), we have:

$$\begin{aligned} \Vert p_{n+1}-u_{n+1}\Vert= & {} \Vert Gq_{n}-Sv_{n}\Vert \nonumber \\\le & {} \Vert Gq_{n}-Gv_{n}\Vert +\Vert Gv_{n}-Sv_{n}\Vert \nonumber \\\le & {} \delta \Vert q_{n}-v_{n}\Vert +L\Vert q_{n}-Gq_{n}\Vert +\epsilon \nonumber \\\le & {} \delta ^{2}\big ( (1-(1-\delta )\tilde{r_{n}})\Vert p_{n}-u_{n}\Vert +\tilde{r_{n}}L\Vert p_{n}-Gp_{n}\Vert +\tilde{r_{n}}\epsilon \big )\nonumber \\&+ \delta L( 1-(1-\delta )\tilde{r_{n}})(1+\delta )\Vert p_{n}-p\Vert + \delta \epsilon +L\Vert q_{n}-Gq_{n}\Vert +\epsilon .\nonumber \\ \end{aligned}$$
(3.3)

Since \(\delta \in (0,1)\) and \(\tilde{r_{n}} \in (0,1)\) with \(\tilde{r_{n}}\ge \frac{1}{2}\), using the inequalities \(\delta <1\), \(\delta ^{2}<1\), \(1-\tilde{r_{n}}\le \tilde{r_{n}}\), and \(1-(1-\delta )\tilde{r_{n}}\le 1\) in (3.3), we get:

$$\begin{aligned} \Vert p_{n+1}-u_{n+1}\Vert\le & {} (1-(1-\delta )\tilde{r_{n}})\Vert p_{n}-u_{n}\Vert +\tilde{r_{n}}L\Vert p_{n}-Gp_{n}\Vert +\tilde{r_{n}}\epsilon \nonumber \\&+ L(1+\delta )\Vert p_{n}-p\Vert +L\Vert q_{n}-Gq_{n}\Vert +2\epsilon \nonumber \\\le & {} (1-(1-\delta )\tilde{r_{n}})\Vert p_{n}-u_{n}\Vert +\tilde{r_{n}}L\Vert p_{n}-Gp_{n}\Vert +5\tilde{r_{n}}\epsilon \nonumber \\&+ 2\tilde{r_{n}}L(1+\delta )\Vert p_{n}-p\Vert +2\tilde{r_{n}}L\Vert q_{n}-Gq_{n}\Vert . \end{aligned}$$
(3.4)

Now, define:

$$\begin{aligned} \theta _{n}:= & {} \Vert p_{n}-u_{n}\Vert ,\\ \mu _{n}:= & {} \tilde{r_{n}}(1-\delta ) \in (0,1),\\ \eta _{n}:= & {} \frac{L\Vert p_{n}-Gp_{n}\Vert +2L\Vert q_{n}-Gq_{n}\Vert +2L( 1+\delta )\Vert p_{n}-p\Vert +5\epsilon }{1-\delta }. \end{aligned}$$

Then, inequality (3.4) becomes:

$$\begin{aligned} \theta _{n+1}\le (1-\mu _{n})\theta _{n}+\mu _{n}\eta _{n}. \end{aligned}$$

All the conditions of Lemma 1.8 are satisfied; in particular, \(\{\eta _{n}\}\) is bounded, since \(p_{n}\rightarrow p\) and \(q_{n}\rightarrow p\) by Theorem 2.1. Hence, applying Lemma 1.8, we get:

$$\begin{aligned} 0\le & {} \limsup \limits _{n\rightarrow \infty }\Vert p_{n}-u_{n}\Vert \\\le & {} \limsup \limits _{n\rightarrow \infty } \frac{L\Vert p_{n}-Gp_{n}\Vert +2L\Vert q_{n}-Gq_{n}\Vert +2L( 1+\delta )\Vert p_{n}-p\Vert +5\epsilon }{1-\delta }\\= & {} \frac{5\epsilon }{1-\delta }. \end{aligned}$$

In view of Theorem 2.1, we know that \(p_{n}\rightarrow p\); using the hypothesis \(u_{n}\rightarrow q\), we obtain:

$$\begin{aligned} \Vert p-q\Vert \le \frac{5\epsilon }{1-\delta }. \end{aligned}$$

\(\square \)

The following example supports Theorem 3.1.

Example 3.2

Let \(X=\mathbb {R}\) be a Banach space and \(Y=[-1,1]\subset X\). Define an operator \(G:Y\rightarrow Y\) by:

$$\begin{aligned} Gx=\left\{ \begin{array}{ll} \frac{9}{20}\sin \big (\frac{9x}{20}\big ),&{}\quad -1\le x<0,\\ \\ -\frac{9}{20}\sin \big (\frac{9x}{20}\big ),&{}\quad 0\le x\le 1. \end{array} \right. \end{aligned}$$
(3.5)

It can be easily checked that G is a weak contraction satisfying (1.9) with \(\delta \in [\frac{81}{400},1)\) and that G has a unique fixed point \(p=0\). Now, define an operator \(S:Y\rightarrow Y\) by:

$$\begin{aligned} Sx=\left\{ \begin{array}{ll} \frac{(x-0.09)}{4.01}+\frac{(x+0.2)^3}{96.06}-\frac{(x-0.4)^5}{7601.16}-\frac{(x+0.3)^7}{129998.03},&{}\quad -1\le x<0,\\ \\ -\frac{x}{5.02}-\frac{(x-0.3)^3}{108.95}-\frac{(x+0.2)^5}{7598.27}+\frac{(x-0.6)^7}{130050.03},&{}\quad 0\le x\le 1. \end{array} \right. \end{aligned}$$
(3.6)

Using MATLAB R2015a, we get:

$$\begin{aligned} \max \limits _{x\in Y}\Vert Gx-Sx\Vert =0.080707. \end{aligned}$$

Hence, for all \(x\in Y\) and for a fixed \(\epsilon =0.080707>0\), we have:

$$\begin{aligned} \Vert Gx-Sx\Vert \le 0.080707=\epsilon . \end{aligned}$$

Thus, in view of Definition 1.11, S is an approximate operator of G. Moreover, from (3.6), \(q=0.000308\) is the unique fixed point of S in Y, and the distance between the two fixed points p and q is \(\Vert p-q\Vert =0.000308\).

If \(Sx=-\frac{x}{5.02}-\frac{(x-0.3)^3}{108.95}-\frac{(x+0.2)^5}{7598.27}+\frac{(x-0.6)^7}{130050.03}\) and we choose \(\tilde{r_{n}}=\frac{n+1}{n+2}\), \(n\in \mathbb {Z}_{+}\) in (3.1), then we obtain:

$$\begin{aligned} \left\{ \begin{array}{ll} u_{0}= u \in Y,\\ u_{n+1} =-\frac{v_{n}}{5.02}-\frac{(v_{n}-0.3)^3}{108.95}-\frac{(v_{n}+0.2)^5}{7598.27}+\frac{(v_{n}-0.6)^7}{130050.03},\\ \\ v_{n} =S((1-\frac{n+1}{n+2})u_{n} +\frac{n+1}{n+2}\big (-\frac{u_{n}}{5.02}-\frac{(u_{n}-0.3)^3}{108.95}-\frac{(u_{n}+0.2)^5}{7598.27}+\frac{(u_{n}-0.6)^7}{130050.03}\big )), ~ n \in {\mathbb {Z}_{+}}. \end{array} \right. \end{aligned}$$
(3.7)

The sequence \(\{u_n\}\) defined by (3.7) converges to the fixed point \(q=0.000308\); see Table 4.

Now, using Theorem 3.1, we calculate the following estimate:

$$\begin{aligned} \Vert p-q\Vert \le \frac{5\times (0.080707)}{1-\frac{81}{400}}=0.506000. \end{aligned}$$
Table 4 Approximated fixed point of operator S by using the Iterative algorithm (3.7)
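The estimate of Theorem 3.1 can also be checked numerically. The sketch below is our own: the grid resolution and iteration counts are arbitrary, we iterate only the branch of S used in scheme (3.7), and the computed quantities are floating-point approximations:

```python
import math

# Example 3.2: G from (3.5), S from (3.6), delta = 81/400, p = 0.
def G(x):
    return (0.45 if x < 0 else -0.45) * math.sin(0.45 * x)

def S(x):
    if x < 0:
        return ((x - 0.09) / 4.01 + (x + 0.2) ** 3 / 96.06
                - (x - 0.4) ** 5 / 7601.16 - (x + 0.3) ** 7 / 129998.03)
    return (-x / 5.02 - (x - 0.3) ** 3 / 108.95
            - (x + 0.2) ** 5 / 7598.27 + (x - 0.6) ** 7 / 130050.03)

# epsilon = max |Gx - Sx| on a grid over Y = [-1, 1]
eps = max(abs(G(-1 + 2 * i / 10000) - S(-1 + 2 * i / 10000))
          for i in range(10001))

# approximate a fixed point q of S by Picard iteration from 0
# (the iterates stay in [0, 1], so only the second branch is used)
q = 0.0
for _ in range(200):
    q = S(q)

delta = 81 / 400
print(eps, abs(0.0 - q) <= 5 * eps / (1 - delta))  # the bound holds
```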

4 An application to nonlinear quadratic integral equation

In this section, we approximate the solution of a nonlinear quadratic Volterra integral equation via \(F^{*}\) iterative algorithm.

Now, consider the following nonlinear quadratic Volterra integral equation:

$$\begin{aligned} x(s) = h(s)+g (s, x(s))\int _{0}^{s}w(s,\tau , x(\tau ))\mathrm{d}\tau ,~ \forall ~s \in A=[0, 1]. \end{aligned}$$
(4.1)

Assume that the following conditions are satisfied:

\((C_{1})\):

\(h\in C(A)\) and h is nonnegative and nondecreasing on \(A=[0,1]\).

\((C_{2})\):

\(g:A\times B\longrightarrow \mathbb {R}\) satisfies the following conditions.

(i):

g is continuous on the set \(A\times B\);

(ii):

For any fixed \(x\in B\), the function \(s\longmapsto g(s,x)\) is nondecreasing on A;

(iii):

For any fixed \(s\in A\), the function \(x\longmapsto g(s,x)\) is nondecreasing on B;

(iv):

The function \(g=g(s,x)\) is Lipschitz with respect to the variable x,

where \(B\subset \mathbb {R^{+}}\) is an unbounded interval and \(h_{0}\in B\), where \(h_{0}=h(0)=\min \{h(s), s\in A\}\). Furthermore, g is nonnegative on the set \(A\times B\).

\((C_{3})\):

There is a nondecreasing function \(k=k(\rho ):[h_{0},+\infty )\longrightarrow \mathbb {R^{+}}\), such that:

$$\begin{aligned} |g(s,x_{1})-g(s,x_{2})|\le k(\rho )|x_{1}-x_{2}|, \end{aligned}$$

for any \(s\in A\) and for all \(x_{1},x_{2}\in [h_{0},\rho ]\).

\((C_{4})\):

The function \(w:A\times A\times \mathbb {R}\longrightarrow \mathbb {R}\) is continuous, such that \(w:A\times A\times \mathbb {R^{+}}\longrightarrow \mathbb {R^{+}}\) and the function \(s\longmapsto w(s,\tau ,x)\) is nondecreasing on A for any fixed \(\tau \in A\) and \(x\in \mathbb {R^{+}}\).

\((C_{5})\):

There is a nondecreasing map \(q:\mathbb {R^{+}}\longrightarrow \mathbb {R^{+}}\), such that \(w(s,\tau ,x)\le q(x)\) for \(s,\tau \in A\) and \(x\ge 0\).

\((C_{6})\):

There is a positive solution \(\rho _{0}\) for the inequality:

$$\begin{aligned} \Vert h\Vert +(\rho k(\rho )+H_{1})q(\rho )\le \rho , \end{aligned}$$

where \(H_{1}=\sup \{g(s,0):s\in A\}\). Moreover, \(k(\rho _{0})q(\rho _{0})<1\).

Banaś and Sadarangani (2008) proved the following existence result for the problem (4.1).

Theorem 4.1

Under the assumptions \((C_{1})-(C_{6})\), Eq. (4.1) has at least one solution \(x=x(s)\in C(A)\) which is nondecreasing and nonnegative on A.

Now, we present some assumptions for the approximation of the solution of the integral equation (4.1). Let \(K=\{x\in C(A):x(s)\ge h_{0} ~for~ s\in A\}\) be a subset of the space C(A) and \(K_{\rho _{0}}=\{x\in K: \Vert x\Vert \le \rho _{0}\}\), where \(\rho _{0}>0\) comes from assumption \((C_{6})\). Then, \(K_{\rho _{0}}\) is a nonempty (since \(\rho _{0}\ge h_{0}\)), bounded, closed, and convex subset of C(A).

Assume that the following conditions are fulfilled:

\((\alpha )\):

\(w(s,\tau ,x)\) is a Lipschitz function with respect to x, i.e., for \(s,\tau \in A\) and \(x_{1},x_{2}\in K_{\rho _{0}}\), there exists \(N>0\), such that:

$$\begin{aligned} \Vert w(s,\tau ,x_{1})-w(s,\tau ,x_{2})\Vert \le N\Vert x_{1}-x_{2}\Vert . \end{aligned}$$
\((\beta )\):

\(\rho _{0}\) in assumption \((C_{6})\) satisfies the following inequality:

$$\begin{aligned} q(\rho _{0})k(\rho _{0})+(\rho _{0}k(\rho _{0})+H_{1})N<1. \end{aligned}$$

Now, define the operator G on the set \(K_{\rho _{0}}\) by:

$$\begin{aligned} Gx(s) = h(s)+g (s, x(s))\int _{0}^{s}w(s,\tau , x(\tau ))\mathrm{d}\tau ,~ \forall ~s \in A. \end{aligned}$$
(4.2)

According to the proof of Theorem 4.1 in Banaś and Sadarangani (2008), G maps the set \(K_{\rho _{0}}\) (as well as K) into itself. Also, G is continuous on \(K_{\rho _{0}}\) and has at least one fixed point in \(K_{\rho _{0}}\).

We now show that the operator G is a weak contraction on \(K_{\rho _{0}}\). Suppose that \(x,y\in K_{\rho _{0}}\), and then, for \(s\in A\), we have:

$$\begin{aligned} \Vert Gx(s)-Gy(s)\Vert= & {} \Vert g (s, x(s))\int _{0}^{s}w(s,\tau , x(\tau ))\mathrm{d}\tau -g (s, y(s))\int _{0}^{s}w(s,\tau , y(\tau ))\mathrm{d}\tau \Vert \\\le & {} \Vert g (s, x(s))\int _{0}^{s}w(s,\tau , x(\tau ))\mathrm{d}\tau -g (s, y(s))\int _{0}^{s}w(s,\tau , x(\tau ))\mathrm{d}\tau \Vert \\&+ \Vert g (s, y(s))\int _{0}^{s}w(s,\tau , x(\tau ))\mathrm{d}\tau -g (s, y(s))\int _{0}^{s}w(s,\tau , y(\tau ))\mathrm{d}\tau \Vert \\\le & {} k(\rho _{0})\Vert x(s)-y(s)\Vert \int _{0}^{s}|w(s,\tau ,x(\tau ))|\mathrm{d}\tau \\&+|g(s,y(s))|\int _{0}^{s}|w(s,\tau ,x(\tau ))-w(s,\tau ,y(\tau ))|\mathrm{d}\tau \\\le & {} q(\rho _{0})k(\rho _{0})\Vert x(s)-y(s)\Vert +(\rho _{0}k(\rho _{0})+H_{1})N\Vert x(s)-y(s)\Vert \\= & {} (q(\rho _{0})k(\rho _{0})+(\rho _{0}k(\rho _{0})+H_{1})N)\Vert x(s)-y(s)\Vert . \end{aligned}$$

Let \(\delta =q(\rho _{0})k(\rho _{0})+(\rho _{0}k(\rho _{0})+H_{1})N\), and by assumption \((\beta )\), we have \(\delta <1\). Hence, the operator G is a weak contraction which satisfies the following inequality for some \(L\ge 0\):

$$\begin{aligned} \Vert Gx(s)-Gy(s)\Vert \le \delta \Vert x(s)-y(s)\Vert +L\Vert x(s)-Gx(s)\Vert . \end{aligned}$$

Taking \(X=C(A)\), \(Y=K_{\rho _{0}}\), and G as in Eq. (4.2), we get the following desired result.

Theorem 4.2

Under assumptions \((\alpha )\) and \((\beta )\), the sequence \(\{p_{n}\}\) defined by the \(F^{*}\) iterative algorithm (1.7) converges strongly to the unique solution of the integral equation (4.1).
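To make this concrete, here is a discretized sketch. Everything in it is our own illustrative assumption: the data \(h(s)=1\), \(g(s,x)=\frac{x}{5}\), and \(w(s,\tau ,x)=\frac{x}{5}\) (which satisfy a contraction condition on a ball around the constant function 1), the grid size, and the trapezoidal rule replacing the integral:

```python
# Discretized F* iteration for x(s) = 1 + (x(s)/5) * int_0^s x(tau)/5 dtau
# on A = [0, 1], on a uniform grid with spacing 1/M.
M = 200

def G(x):
    # Gx(s_i) = h(s_i) + g(s_i, x(s_i)) * trapezoidal integral up to s_i
    out = []
    integral = 0.0
    for i in range(M + 1):
        if i > 0:
            integral += 0.5 * (x[i - 1] / 5 + x[i] / 5) / M  # one panel
        out.append(1.0 + (x[i] / 5) * integral)
    return out

r = 0.5
p = [0.0] * (M + 1)            # initial guess p_0 = 0
for n in range(30):            # F* iteration (1.7)
    gp = G(p)
    inner = [(1 - r) * a + r * b for a, b in zip(p, gp)]
    p = G(G(inner))

residual = max(abs(a - b) for a, b in zip(p, G(p)))
print(residual)  # essentially zero: p solves the discretized equation
```

The residual \(\max _{i}|p(s_{i})-Gp(s_{i})|\) drops to machine precision, in line with the geometric rate predicted by Theorem 2.1.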

5 Conclusion

In this paper, we introduced a new two-step iterative algorithm for approximating fixed points of weak contractions in Banach spaces; it is more efficient and converges faster than several leading iterative algorithms, as shown by Theorem 2.3. In Theorem 2.2, we also proved that the \(F^{*}\) iterative algorithm is almost G-stable. Examples 2.4 and 2.5 verified these claims. Furthermore, we obtained a data dependence result using the \(F^{*}\) algorithm and presented an example to show the validity of that result. Finally, we approximated the solution of a nonlinear quadratic Volterra integral equation via the proposed algorithm.

Now, we raise the following two open questions for interested mathematicians.

Question 1

Can one define a new two-step iterative algorithm whose rate of convergence is even faster than \(F^{*}\) iterative algorithm?

Question 2

Does the sequence \(\{p_{n}\}\) generated by \(F^{*}\) iterative algorithm converge to a fixed point of non-expansive or pseudo-contractive mappings?