1 Introduction

The construction of fixed point iterative methods for solving nonlinear equations or systems of nonlinear equations is an interesting and challenging task in numerical analysis and many applied scientific branches. The great importance of this subject has led to the development of many numerical methods, most frequently of iterative nature (see [4,5,6,7, 31, 32]). With the advancement of computer hardware and software, the problem of solving nonlinear equations by numerical methods has gained additional importance. In this paper, we consider the problem of approximating a solution \(x^*\) of the equation \(F(x)=0\), where \(F:\Omega \subseteq B_1\rightarrow B_2\), \(B_1\) and \(B_2\) are Banach spaces and \(\Omega \) is a nonempty open convex subset of \(B_1\), by iterative methods of high order of convergence. The solution \(x^*\) can be obtained as a fixed point of some function \(\Phi :\Omega \subseteq B_1\rightarrow B_1\) by means of the fixed point iteration

$$\begin{aligned} {x}_{n+1}=\Phi (x_n),\quad n= 0,\,1,\,2,\ldots . \end{aligned}$$

There are a variety of iterative methods for solving nonlinear equations. A classical method is the quadratically convergent Newton’s method [4]

$$\begin{aligned} {x}_{n+1}={x}_n- F'({x}_n)^{-1}{F}({x}_n), \end{aligned}$$
(1)

where \( {F}'({x})^{-1}\) is the inverse of the first Fréchet derivative \({F}'(x)\) of the function F(x). This method converges provided that the initial approximation \(x_0\) is sufficiently close to the solution \(x^*\) and \( F'({x})^{-1}\) exists in an open neighborhood \(\Omega \) of \(x^*.\) In order to attain a higher order of convergence, a number of modified Newton or Newton-like methods have been proposed in the literature; see, for example, [1,2,3, 6, 8,9,10, 12,13,30] and the references therein.
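For illustration, a minimal Python sketch of Newton's iteration (1) for a system of equations might look as follows; the function names, the tolerance and the small test system are our own illustrative choices and not part of the analysis below.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method (1): x_{n+1} = x_n - F'(x_n)^{-1} F(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), F(x))   # solve F'(x) dx = F(x) instead of inverting
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# illustrative 2x2 system: x^2 + y^2 = 1, x - y = 0
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
print(newton(F, J, [1.0, 0.5]))            # approximately (1/sqrt(2), 1/sqrt(2))
```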

In this paper, we consider a three-step iterative scheme and its multistep version for solving the nonlinear system \(F(x)=0\). The three-step scheme is given by

$$\begin{aligned} y_n&=x_{n}-\alpha F'(x_n)^{-1}F(x_n),\nonumber \\ z_n&=\varphi ^{(p)}_{\alpha }(x_{n}, y_{n})\nonumber ,\\ x_{n+1}&=z_{n}-\psi (x_n, y_n)F(z_n), \end{aligned}$$
(2)

where \(\varphi ^{(p)}_{\alpha }(x_{n}, y_{n})\) is any iterative scheme of convergence order \(p\ge 3 \), \(\psi (x_n, y_n)=\big (\beta I+\gamma F'(y_n)^{-1}F'(x_n)+\delta F'(x_n)^{-1}F'(y_n)\big )F'(x_n)^{-1}\) and \(\{\alpha , \, \beta ,\, \gamma ,\, \delta \} \in \mathbb {R}\).

The multistep version of (2), consisting of \(q+2\) steps, is expressed as

$$\begin{aligned} y_n&=x_{n}-\alpha F'(x_n)^{-1}F(x_n),\nonumber \\ z_n&=\varphi ^{(p)}_{\alpha }(x_{n}, y_{n})\nonumber ,\\ z_{n}^{(1)}&= z_n-\psi (x_n, y_n) F(z_n),\nonumber \\ z_{n}^{(2)}&= z_{n}^{(1)}-\psi (x_n, y_n) F(z_{n}^{(1)}),\nonumber \\ \ldots \ldots&\ldots \ldots \ldots \ldots \nonumber \\ z_{n}^{(q-1)}&= z_n^{(q-2)}-\psi (x_n, y_n)F(z_n^{(q-2)}),\nonumber \\ z_{n}^{(q)}&=x_{n+1}= z_n^{(q-1)}-\psi (x_n, y_n) F(z_n^{(q-1)}), \end{aligned}$$
(3)

where \(q \in \mathbb {N}\) and \(z_n^{(0)}=z_n.\)
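The family (3) translates almost directly into code. The following Python sketch is a schematic illustration under our own naming conventions: phi stands for any user-supplied scheme \(\varphi ^{(p)}_{\alpha }\) of order \(p\ge 3\), and the parameters \(\alpha ,\, \beta ,\, \gamma ,\, \delta \) and \(q\) are left free, exactly as in the text.

```python
import numpy as np

def multistep_family(F, J, x0, phi, alpha, beta, gamma, delta, q=1,
                     tol=1e-12, max_iter=50):
    """Family (3): y_n, z_n = phi(x_n, y_n), followed by q corrector substeps
    z^(i) = z^(i-1) - psi(x_n, y_n) F(z^(i-1)), where
    psi = (beta*I + gamma*F'(y)^{-1}F'(x) + delta*F'(x)^{-1}F'(y)) F'(x)^{-1}."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for _ in range(max_iter):
        Jx, Fx = J(x), F(x)
        Jx_inv = np.linalg.inv(Jx)                 # kept explicit for readability
        y = x - alpha * Jx_inv @ Fx
        Jy = J(y)
        psi = (beta * I
               + gamma * np.linalg.solve(Jy, Jx)   # F'(y)^{-1} F'(x)
               + delta * Jx_inv @ Jy) @ Jx_inv     # F'(x)^{-1} F'(y), times F'(x)^{-1}
        z = phi(x, y)                              # any scheme of order p >= 3
        for _ in range(q):                         # q corrector substeps with frozen psi
            z = z - psi @ F(z)
        if np.linalg.norm(z - x) < tol:
            return z
        x = z
    return x
```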

In Sect. 2, we show that for a particular set of values of the parameters \(\alpha , \, \beta ,\, \gamma \) and \(\delta \) the methods (2) and (3) possess convergence order \(p+3\) and \(p+3q\), respectively. In Sect. 3, the local convergence analysis of the proposed methods, including the radius of convergence, computable error bounds and uniqueness results, is presented. In order to verify the theoretical results, some numerical examples are presented in Sect. 4. Finally, in Sect. 5 the methods are applied to solve some systems of nonlinear equations.

2 Convergence-I

We present the convergence of method (2), when \( F:\Omega \subset \mathbb {R}^{m}\rightarrow \mathbb {R}^{m}\).

Theorem 1

Suppose that

  1. (i)

    \(F:\Omega \subset \mathbb {R}^{m}\rightarrow \mathbb {R}^{m}\) is a sufficiently many times differentiable mapping.

  2. (ii)

    There exists a solution \(x^*\in \Omega \) of equation \(F(x)=0\) such that \(F'(x^*)\) is nonsingular.

Then, sequence \(\{x_n\}\) generated by method (2) for \(x_0\in \Omega \) converges to \(x^*\) with order \(p+3\) for \(p\ge 3\) if and only if

$$\begin{aligned} \alpha =1, \quad \beta =-1, \quad \gamma =\frac{3}{2} \quad \text {and} \quad \delta =\frac{1}{2}. \end{aligned}$$
(4)

Proof

Let \(e_n=x_n-x^*\). Using Taylor’s theorem and the hypothesis \(F(x^*)=0\), we obtain in turn that

$$\begin{aligned} F(x_n)=&\, F'(x^*)(x_n-x^*)+\frac{1}{2!}F''(x^*)(x_n-x^*)^2+\frac{1}{3!}F'''(x^*)(x_n-x^*)^3+O\left( \Vert x_n-x^*\Vert ^4\right) ,\\ =&\, F'(x^*)\left( e_n+T_2(e_n)^2+T_3(e_n)^3+O(e_n)^4\right) , \end{aligned}$$

where \(T_{i}=\frac{1}{i!}F'(x^*)^{-1}F^{(i)}(x^*)\, \, \, \text {and} \, \, (e_n)^{i}=(e_{n}, e_{n},{\mathop {\ldots }\limits ^{i}},e_{n}), \, \, e_{n}\in \mathbb {R}^{m},\, \, {i}\in \mathbb {N}\).

Also

$$\begin{aligned} F'(x_n)= & {} F'(x^*)\left( I+2T_2e_n+3T_3(e_n)^2+O((e_n)^3)\right) , \end{aligned}$$
(5)
$$\begin{aligned} F'(x_n)^{-1}= & {} \left( I-2T_2e_n+(4T_2^2-3T_3)(e_n)^2+O((e_n)^3)\right) F'(x^*)^{-1}, \end{aligned}$$
(6)

and

$$\begin{aligned} F'(x_n)^{-1}F(x_n)= e_n-T_2(e_n)^2+2(T_2^2-T_3)(e_n)^3+O((e_n)^4). \end{aligned}$$
(7)

For \(\tilde{e}_n=y_n-x^*\), we have that

$$\begin{aligned} \tilde{e}_n=(1-\alpha )e_n+\alpha T_2(e_n)^2-2\alpha (T_2^2-T_3)(e_n)^3+O((e_n)^4). \end{aligned}$$

Using again Taylor’s theorem on \(F'(y_n)\) about \(x^*\), we get in turn that

$$\begin{aligned} F'(y_n)=&F'(x^*)\big (I+2T_2\tilde{e}_n+3T_3(\tilde{e}_n)^2 +O((\tilde{e}_n)^3)\big ),\nonumber \\ =&\ F'(x^*)\big (I+2(1-\alpha )T_2 e_n+\big (2\alpha T_2^2 +3(1-\alpha )^2T_3\big )(e_n)^2+O((e_n)^3)\big ), \end{aligned}$$
(8)

so

$$\begin{aligned} F'(y_n)^{-1}=&\, \big (I-2T_2\tilde{e}_n+(4T_2^2-3T_3)(\tilde{e}_n)^2+O((\tilde{e}_n)^3) \big )F'(x^*)^{-1},\nonumber \\ =&\, \big (I-2(1-\alpha )T_2{e}_n+\big (2(2-5\alpha +2\alpha ^2)T_2^2-3 (1-\alpha )^2T_3\big )({e}_n)^2\nonumber \\&+O(({e}_n)^3)\big )F'(x^*)^{-1}. \end{aligned}$$
(9)

It then follows from the Eqs. (5), (6), (8) and (9), respectively that

$$\begin{aligned} F'(x_n)^{-1}F'(y_n)=I-2\alpha T_2 e_n+\big (6\alpha T_2^2-3\alpha (2-\alpha )T_3\big )(e_n)^2+O((e_n)^3) \end{aligned}$$

and

$$\begin{aligned} F'(y_n)^{-1}F'(x_n)=I+2\alpha T_2e_n-\big (2\alpha (3-2\alpha )T_2^2+3\alpha (\alpha -2)T_3\big )(e_n)^2+O((e_n)^3). \end{aligned}$$

Consequently, summing up we get in turn that

$$\begin{aligned} \beta I&+ \gamma F'(y_n)^{-1}F'(x_n)+\delta F'(x_n)^{-1}F'(y_n)= \ (\beta +\gamma +\delta )I+2\alpha (\gamma -\delta )T_2e_n\\&+\big (2\alpha (3\delta -3\beta +2\alpha \gamma )T_2^2-3\alpha (\alpha -2)(\gamma -\delta )T_3\big )(e_n)^2 +O\big ((e_n)^3\big ). \end{aligned}$$

Then

$$\begin{aligned} \psi (x_n, y_n)&= \ \big (\beta I+ \gamma F'(y_n)^{-1}F'(x_n) +\delta F'(x_n)^{-1}F'(y_n)\big )F'(x_n)^{-1}\nonumber \\&= \ \big ((\beta +\gamma +\delta )I-2(\beta +\gamma +\delta -\alpha (\gamma -\delta ))T_2e_n\nonumber \\&\quad +\big ((4\alpha ^2\gamma -10\alpha (\gamma -\delta )+4(\beta +\gamma +\delta ))T_2^2-3(\beta +\gamma +\delta \nonumber \\&\quad +\alpha (\alpha -2)(\gamma -\delta ))T_3\big )(e_n)^2+O((e_n)^3) \big )F'(x^*)^{-1}. \end{aligned}$$
(10)

By hypothesis, \(\{z_n\}\) converges with order p. Set \(\bar{e}_{n}:=z_n-x^*=K((e_{n})^{p})+O((e_{n})^{p+1}), \ K\ne 0\). Then, we have

$$\begin{aligned} F(z_n)=F'(x^*)(\bar{e}_{n}+O((\bar{e}_{n})^2)). \end{aligned}$$
(11)

Using (10) and (11) in the third substep of method (2), it follows that

$$\begin{aligned} e_{n+1}&= (1-\beta -\gamma -\delta )\bar{e}_{n}+2\big (\beta +\gamma +\delta -\alpha (\gamma -\delta )\big )T_2(e_n\bar{e}_{n})\nonumber \\&\quad -\big (2(2(\alpha ^2\gamma +\beta +\gamma +\delta )-5 \alpha (\gamma -\delta ))T_2^2-3(\beta +\gamma +\delta \nonumber \\&\quad +\alpha (\alpha -2)(\gamma -\delta ))T_3\big )\big ((e_n)^2\bar{e}_{n}\big ) +O\big ((e_n)^3\bar{e}_{n}\big ). \end{aligned}$$
(12)

Therefore, the convergence to \(x^*\) is of order \(p+3\, \, (p\ge 3)\) if and only if the parameters \(\alpha \), \(\beta \), \(\gamma \) and \(\delta \) satisfy

$$\begin{aligned}&\beta +\gamma +\delta =1,\nonumber \\&\alpha (\gamma -\delta )=1,\nonumber \\&2(\alpha ^2\gamma +\beta +\gamma +\delta )-5\alpha (\gamma -\delta )=0,\nonumber \\&\beta +\gamma +\delta +\alpha (\alpha -2)(\gamma -\delta )=0, \end{aligned}$$
(13)

which leads to the unique solution of the system (13) given in (4).

Note that we have not displayed the coefficient of \((e_n)^3\bar{e}_{n}\) in (12) because the expression is lengthy. However, using the values of the parameters given in (4), we can write the error equation in the simplified form

$$\begin{aligned} e_{n+1} =&\ 2T_2\big (4T_2^2-T_3\big ) \big ((e_n)^{3}\bar{e}_{n} \big )+O\big ((e_n)^{4}\bar{e}_{n}\big )\\ =&\ 2KT_2\big (4T_2^2-T_3\big ) (e_n)^{p+3}+O\big (({e}_{n})^{p+4}\big ). \end{aligned}$$

This completes the proof of Theorem 1. \(\square \)
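As an independent sanity check of Theorem 1, the scalar version of method (2) can be expanded symbolically. The following sympy sketch is our own illustration: it models a scalar equation with a simple root at 0 and \(F'(0)=1\), takes \(\varphi \) to be one extra Newton step (so \(p=4\)) and uses the parameter values (4); the printed series should start with the \(e^{7}=e^{p+3}\) term.

```python
import sympy as sp

e, a2, a3, a4 = sp.symbols('e a2 a3 a4')

# illustrative scalar model with a simple root at 0 and f'(0) = 1
f  = e + a2*e**2 + a3*e**3 + a4*e**4
fp = sp.diff(f, e)
F  = lambda t: f.subs(e, t)
Fp = lambda t: fp.subs(e, t)

alpha, beta, gamma, delta = 1, -1, sp.Rational(3, 2), sp.Rational(1, 2)

x = e                                   # current error e_n
y = x - alpha*F(x)/Fp(x)                # first substep of (2)
z = y - F(y)/Fp(y)                      # phi: one extra Newton step, order p = 4
psi = (beta + gamma*Fp(x)/Fp(y) + delta*Fp(y)/Fp(x))/Fp(x)
e_next = z - psi*F(z)                   # new error e_{n+1}

# the expansion should start at e**7, in agreement with order p + 3 = 7
print(sp.series(e_next, e, 0, 8))
```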

It follows from Theorem 1 that the method to be used from the family (2) is given by

$$\begin{aligned} y_n&=\, x_n-F'(x_n)^{-1}F(x_n),\nonumber \\ z_n&=\, \varphi ^{(p)}_1(x_n, y_n),\nonumber \\ x_{n+1}&=\, z_n-\Big (-I+\frac{3}{2}F'(y_n)^{-1}F'(x_n)+\frac{1}{2} F'(x_n)^{-1}F'(y_n)\Big )F'(x_n)^{-1}F(z_n). \end{aligned}$$
(14)

Next we show that method (3), when the values of the parameters \(\alpha , \, \beta ,\, \gamma \) and \(\delta \) given in (4) are used, possesses convergence order \(p+3q\). Thus, we prove the following theorem:

Theorem 2

Under the hypotheses of Theorem 1, the sequence \(\{x_n\}\) generated by method (3) for \(x_0\in \Omega \) converges to \(x^*\) with order \(p+3q\) for \(p\ge 3\) and \(q \in \mathbb {N}\).

Proof

Taylor’s expansion of \(F(z^{(q-1)}_n)\) about \(x^*\) yields

$$\begin{aligned} F(z^{(q-1)}_n)=F'(x^*)\Big ((z^{(q-1)}_n-x^*)+T_2(z^{(q-1)}_n-x^*)^2+\cdots \Big ). \end{aligned}$$
(15)

Then, we have that

$$\begin{aligned} \psi (x_n,y_n) F(z^{(q-1)}_n)&= \ \big (I-2(4T_2^3-T_2T_3)(e_n)^3+O((e_n)^4))\big )F'(x^*)^{-1}\nonumber \\&\quad \times F'(x^*)\big ((z^{(q-1)}_n-x^*)+T_2(z^{(q-1)}_n-x^*)^2+\cdots \big )\nonumber \\&= (z^{(q-1)}_n-x^*)-2(4T_2^3-T_2T_3)(e_n)^3(z^{(q-1)}_n-x^*)\nonumber \\&\quad +T_2(z^{(q-1)}_n-x^*)^2+\cdots . \end{aligned}$$
(16)

Using (16) in (3), we obtain

$$\begin{aligned} z^{(q)}_n-x^* = 2(4T_2^3-T_2T_3)(e_n)^3(z^{(q-1)}_n-x^*)+T_2(z^{(q-1)}_n-x^*)^2+\cdots . \end{aligned}$$
(17)

Since \(z^{(1)}_n-x^*=2KT_2(4T_2^2-T_3)(e_n)^{p+3}+O((e_n)^{p+4})\), it follows from (17) for \(q=2,3\) that

$$\begin{aligned} z^{(2)}_n-x^*=&\ 2(4T_2^3-T_2T_3)(e_n)^3(z^{(1)}_n-x^*)+\cdots \\ =&\ 2^2KT_2^2(4T_2^2-T_3)^2(e_n)^{p+6}+O\big ((e_{n})^{p+7}\big ) \end{aligned}$$

and

$$\begin{aligned} z^{(3)}_n-x^*=&\ 2(4T_2^3-T_2T_3)(e_n)^3(z^{(2)}_n-x^*)+\cdots \\ =&\ 2^3KT_2^3(4T_2^2-T_3)^3(e_n)^{p+9}+O\big ((e_{n})^{p+10}\big ). \end{aligned}$$

Proceeding by induction, we have

$$\begin{aligned} e_{n+1}=z^{(q)}_n-x^*=2^qKT_2^{q}(4T_2^2-T_3)^q(e_n)^{p+3q}+O\big ((e_n)^{p+3q+1}\big ). \end{aligned}$$

This completes the proof of Theorem 2. \(\square \)
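The order \(p+3q\) can also be observed numerically. The mpmath sketch below is our own illustration: it applies the scalar version of method (3) with the parameters (4), \(\varphi \) taken as one extra Newton step (\(p=4\)) and \(q=2\) to the equation \(x^3-2=0\), and estimates the order from successive errors; the estimate should be close to \(p+3q=10\).

```python
from mpmath import mp, mpf, cbrt, log

mp.dps = 2000                         # high precision, so the fast decay of the error is visible

f  = lambda x: x**3 - 2               # illustrative equation with root x* = 2^(1/3)
fp = lambda x: 3*x**2
xstar = cbrt(mpf(2))

def step(x, q=2):
    """One step of the scalar version of (3) with alpha = 1, beta = -1, gamma = 3/2,
    delta = 1/2, phi = one extra Newton step (p = 4) and q corrector substeps."""
    y = x - f(x)/fp(x)
    z = y - f(y)/fp(y)
    psi = (-1 + mpf(3)/2*fp(x)/fp(y) + mpf(1)/2*fp(y)/fp(x))/fp(x)
    for _ in range(q):
        z = z - psi*f(z)
    return z

x = mpf('1.3')
errs = [abs(x - xstar)]
for _ in range(3):
    x = step(x)
    errs.append(abs(x - xstar))

# empirical order from the last three errors; theory predicts p + 3q = 10
print(log(errs[3]/errs[2]) / log(errs[2]/errs[1]))
```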

3 Convergence-II

In this section we study the convergence of the new methods in the Banach space setting. Let \(w_0:\mathbb {R}_+\cup \,\{0\}\rightarrow \mathbb {R}_+\cup \,\{0\}\) be a continuous and nondecreasing function with \(w_0(0)=0\). Let also \(\varrho \) be the smallest positive solution of the equation

$$\begin{aligned} w_0(t)=1. \end{aligned}$$
(18)

Consider a function \(w:[0, \varrho )\rightarrow \mathbb {R}_+\cup \,\{0\}\), continuous and nondecreasing with \(w(0)=0\). Define functions \(g_1\) and \(h_1\) on the interval \([0, \varrho )\) by

$$\begin{aligned} g_1(t)=\frac{\int _0^1w((1-\theta )t)d\theta }{1-w_0(t)} \end{aligned}$$
(19)

and

$$\begin{aligned} h_1(t)=g_1(t)-1. \end{aligned}$$

We have \(h_1(0)=-1<0\) and \( h_1(t)\rightarrow +\infty \) as \(t\rightarrow \varrho ^{-}\). The intermediate value theorem guarantees that equation \(h_1(t)=0\) has solutions in \((0,\varrho )\). Denote by \(\varrho _1\) the smallest such solution. Let \(\lambda \ge 1\) and \(g_2:[0, \varrho _1)\rightarrow \mathbb {R}_+\cup \,\{0\}\) be a continuous and nondecreasing function. Define function \(h_2\) on \([0, \varrho _1)\) by

$$\begin{aligned} h_2(t)=g_2(t)t^{\lambda -1}-1. \end{aligned}$$
(20)

Suppose that \(g_2(t)t^{\lambda -1}-1\rightarrow +\infty \) or a positive number as \(t\rightarrow \varrho _1^{-}\).

Then, we get that \(h_2(0)=-1<0\) and \(h_2(t)\rightarrow +\infty \) or a positive number as \(t\rightarrow \varrho _1^{-}\). Denote by \(\varrho _2\) the smallest solution in \((0, \varrho _1)\) of equation \(h_2(t)=0\). If \(\lambda =\,1\), suppose instead of (20) that

$$\begin{aligned} g_2(0)< 1 \end{aligned}$$
(21)

and \(g_2(t)-1\rightarrow +\infty \) or a positive number as \(t\rightarrow \varrho _1^{-}\). Denote again by \(\varrho _2\) the smallest solution of equation \(h_2(t)=0\).

Let \(v:(0, \varrho _1)\rightarrow \mathbb {R}_+\cup \,\{0\}\) be a continuous and nondecreasing function. Define functions \(g_3\) and \(h_3\) on the interval \((0, \varrho _1)\) by

$$\begin{aligned} g_3(t)&= \ \Bigg (\frac{\int _0^1w((1-\theta )g_2(t)t^{\lambda })d\theta }{1-w_0(g_2(t)t^{\lambda })}\nonumber \\&\quad +\frac{(w_0(t)+w_0(g_1(t)t))\int _0^1v(\theta g_2(t)t^{\lambda })d\theta }{(1-w_0(t))(1-w_0(g_1(t)t))}\nonumber \\&\quad +\frac{1}{2}\frac{(w_0(g_2(t)t^{\lambda })+w_0(g_1(t)t))\int _0^1v(\theta g_2(t)t^{\lambda })d\theta }{(1-w_0(g_2(t)t^{\lambda }))(1-w_0(g_1(t)t))}\nonumber \\&\quad +\frac{1}{2}\frac{\Big (\frac{v(t)}{1-w_0(g_1(t)t)}+\frac{v(g_2(t)t)}{1-w_0(t)}\Big )v(g_1(t)t)\int _0^1v(\theta g_2(t)t^{\lambda })d\theta }{(1-w_0(g_2(t)t^\lambda ))(1-w_0(t))}\Bigg )g_2(t)t^{\lambda -1} \end{aligned}$$
(22)

and

$$\begin{aligned} h_3(t)=g_3(t)-1. \end{aligned}$$

We obtain that \(h_3(0)=-1<0\) and \(h_3(t)\rightarrow \infty \) as \(t\rightarrow \varrho _2^{-}\). Denote by \(\varrho _3\) the smallest solution of the equation \(h_3(t)=0\) in \((0, \varrho _2)\). Then, we have that for each \(t\in [0,\varrho _3)\)

$$\begin{aligned} 0\le g_i(t)<1,\quad i=1,2,3. \end{aligned}$$
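In practice the radii \(\varrho \), \(\varrho _1\), \(\varrho _2\) and \(\varrho _3\) can be computed numerically from \(w_0\), \(w\), \(v\), \(g_2\) and \(\lambda \). The Python sketch below is our own illustration of this computation. It reads the argument \(v(g_2(t)t)\) in (22) as \(v(g_2(t)t^{\lambda })\), in line with the proof of Theorem 3, and when some \(h_i\) has no zero on its interval it keeps the previous radius, since the corresponding bound is then not restrictive.

```python
import numpy as np

def integral(f, n=400):
    """Composite midpoint rule on [0, 1] for the theta-integrals."""
    theta = (np.arange(n) + 0.5) / n
    vals = np.asarray(f(theta)) * np.ones_like(theta)   # tolerate constant integrands
    return float(np.sum(vals)) / n

def smallest_root(h, a, b, n=2000):
    """Smallest zero of h on (a, b): scan for a sign change, then bisect."""
    ts = np.linspace(a, b, n, endpoint=False)[1:]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        if h(t0) < 0.0 <= h(t1):
            lo, hi = t0, t1
            for _ in range(60):
                mid = 0.5*(lo + hi)
                lo, hi = (mid, hi) if h(mid) < 0.0 else (lo, mid)
            return 0.5*(lo + hi)
    return None

def radii(w0, w, v, g2, lam, t_max=1.0):
    rho  = smallest_root(lambda t: w0(t) - 1.0, 0.0, t_max)                       # (18)
    g1   = lambda t: integral(lambda th: w((1 - th)*t)) / (1 - w0(t))             # (19)
    rho1 = smallest_root(lambda t: g1(t) - 1.0, 0.0, rho)
    rho2 = smallest_root(lambda t: g2(t)*t**(lam - 1) - 1.0, 0.0, rho1) or rho1   # (20)/(21)

    def g3(t):                                                                    # (22)
        s, u = g2(t)*t**lam, g1(t)*t            # bounds on ||z - x*|| and ||y - x*||
        iv = integral(lambda th: v(th*s))
        term1 = integral(lambda th: w((1 - th)*s)) / (1 - w0(s))
        term2 = (w0(t) + w0(u))*iv / ((1 - w0(t))*(1 - w0(u)))
        term3 = 0.5*(w0(s) + w0(u))*iv / ((1 - w0(s))*(1 - w0(u)))
        term4 = 0.5*(v(t)/(1 - w0(u)) + v(s)/(1 - w0(t)))*v(u)*iv / ((1 - w0(s))*(1 - w0(t)))
        return (term1 + term2 + term3 + term4)*g2(t)*t**(lam - 1)

    rho3 = smallest_root(lambda t: g3(t) - 1.0, 0.0, rho2) or rho2
    return rho, rho1, rho2, rho3

# one possible call, with the functions of Example 2 (Sect. 4) and the Lipschitz-type
# choice g2(t) = (l/2)^3/(1 - l0 t)^2, lambda = 4 discussed after the Remark below
print(radii(lambda t: 15.0*t, lambda t: 30.0*t, lambda t: 2.0,
            lambda t: (30.0/2)**3/(1 - 15.0*t)**2, 4))
```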

Denote by \(U(\mu ,\varepsilon )=\{x\in B_1: \Vert x-\mu \Vert <\varepsilon \}\) the ball with center \(\mu \in B_1\) and radius \(\varepsilon >0\). Moreover, \(\bar{U}(\mu ,\varepsilon )\) denotes the closure of \(U(\mu ,\varepsilon )\). We shall present the local convergence analysis of method (14) in the Banach space setting under the hypotheses (A):

  1. (a1)

    \(F:\Omega \subseteq B_1\rightarrow B_2\) is a continuously Fréchet-differentiable operator.

  2. (a2)

    There exists \(x^*\in \Omega \) such that \(F(x^*)=0\) and \(F'(x^*)^{-1}\in \mathfrak {L}(B_2,B_1).\)

  3. (a3)

    There exists function \(w_0: {\mathbb {R}_{+}}\cup \{0\}\rightarrow {\mathbb {R}_{+}}\cup \{0\}\) continuous and nondecreasing with \(w_0(0)=0\) such that for each \(x\in \Omega \)

    $$\begin{aligned} \Vert F'(x^*)^{-1}(F'(x)-F'(x^*))\Vert \le w_0(\Vert x-x^*\Vert ). \end{aligned}$$
  4. (a4)

    Let \(\Omega _0=\Omega \cap U(x^*,\varrho )\), where \(\varrho \) was defined previously. There exist functions \(w:[0,\varrho )\rightarrow {\mathbb {R}_{+}}\cup \{0\}\), \(v:[0,\varrho )\rightarrow {\mathbb {R}_{+}}\cup \{0\}\) continuous and nondecreasing with \(w(0)=0\) such that for each \(x, y \in \Omega _0\)

    $$\begin{aligned} \Vert F'(x^*)^{-1}(F'(x)-F'(y))\Vert \le w(\Vert x-y\Vert ) \end{aligned}$$

    and

    $$\begin{aligned} \Vert F'(x^*)^{-1}F'(x)\Vert \le v(\Vert x-x^*\Vert ). \end{aligned}$$
  5. (a5)

    There exists function \(g_2: [0,\varrho _1)\rightarrow {\mathbb {R}_{+}}\cup \,\{0\}\) continuous and nondecreasing and \(\lambda \ge 1\) satisfying (20) if \(\lambda > 1\) and (21), if \(\lambda = 1\) such that

    $$\begin{aligned} \left\| \varphi _\alpha ^{(p)} \big (x, x-F'(x)^{-1}F(x)\big )-x^*\right\| \le g_2(\Vert x-x^*\Vert )\Vert x-x^*\Vert ^{\lambda }. \end{aligned}$$
  6. (a6)

    \(\bar{U}(x^*,\varrho _3)\subseteq \Omega \).

  7. (a7)

    Let \(\varrho ^*\ge \varrho _3\) and set \(\Omega _1=\Omega \cap \bar{U}(x^*,\varrho ^*)\). Suppose that \(\int _0^1w_0(\theta \varrho ^*)d\theta <1.\)

Theorem 3

Suppose that the hypotheses (A) are satisfied. Then, the sequence \(\{x_n\}\) generated for \(x_0\in U(x^*,\varrho _3)-\{x^*\}\) by method (14) is well defined in \(U(x^*,\varrho _3)\), remains in \(U(x^*,\varrho _3)\) for all \(n=0,1,2,\ldots \) and converges to \(x^*\), so that

$$\begin{aligned} \Vert y_n-x^*\Vert\le & {} g_1(\Vert x_n-x^*\Vert )\Vert x_n-x^*\Vert \le \Vert x_n-x^*\Vert <\varrho _3, \end{aligned}$$
(23)
$$\begin{aligned} \Vert z_n-x^*\Vert\le & {} g_2(\Vert x_n-x^*\Vert )\Vert x_n-x^* \Vert ^{\lambda }\le \Vert x_n-x^*\Vert \end{aligned}$$
(24)

and

$$\begin{aligned} \Vert x_{n+1}-x^*\Vert \le g_3(\Vert x_n-x^*\Vert )\Vert x_n-x^*\Vert \le \Vert x_n-x^*\Vert , \end{aligned}$$
(25)

where the functions \(g_i, \, \, i=1,2,3\) are defined previously. Moreover, the vector \(x^*\) is the only solution of equation \(F(x)=0\) in \(\Omega _1\).

Proof

We shall show estimates (23)–(25) using mathematical induction. By hypothesis (a3) and for \(x\in U(x^*,\varrho _3)\), we have that

$$\begin{aligned} \Vert F'(x^*)^{-1}(F'(x)-F'(x^*))\Vert \le w_0(\Vert x-x^*\Vert )\le w_0(\varrho _3)<1. \end{aligned}$$
(26)

By the Banach perturbation Lemma [4, 6] and (26) we get that \(F'(x)^{-1}\in \mathfrak {L}(B_2, B_1)\) and

$$\begin{aligned} \Vert F'(x)^{-1}F'(x^*)\Vert \le \frac{1}{1-w_0(\Vert x-x^*\Vert )}. \end{aligned}$$
(27)

In particular, (27) holds for \(x=x_0\), since \(x_0\in U(x^*,\varrho _3)-\{x^*\}\), and \(y_0\), \(z_0\) are well defined by the first and second substeps of method (14) for \(n=0\). By the first substep of method (14) and (a2), we can write

$$\begin{aligned} y_0-x^*&= x_0-x^*-F'(x_0)^{-1}F(x_0)\nonumber \\&= \int _0^1F'(x_0)^{-1}\big (F'(x^*+\theta (x_0-x^*))-F'(x_0)\big ) (x_0-x^*)d\theta . \end{aligned}$$
(28)

Then, using (19), the first condition in (a4), (27) (for \(x=x_0\)) and (28), we get in turn that

$$\begin{aligned} \Vert y_0-x^*\Vert&= \Vert F'(x_0)^{-1}F'(x^*)\Vert \bigg \Vert \int _0^1F'(x^*)^{-1} [F'(x^*+\theta (x_0-x^*))-F'(x_0)](x_0-x^*)d\theta \bigg \Vert \nonumber \\&\le \frac{\int _0^1w((1-\theta )\Vert x_0-x^*\Vert )d\theta \Vert x_0-x^*\Vert }{1-w_0(\Vert x_0-x^*\Vert )}\nonumber \\&= g_1(\Vert x_0-x^*\Vert )\Vert x_0-x^*\Vert \le \Vert x_0-x^*\Vert <\varrho _3, \end{aligned}$$
(29)

which implies (23) for \(n=0\) and \(y_0\in U(x^*,\varrho _3)\).

Using (a5) and the definition of \(\varrho _2\), we get that

$$\begin{aligned} \Vert z_0-x^*\Vert =\Vert \varphi ^{(p)}_{1}(x_0,y_0)-x^*\Vert \le g_2(\Vert x_0-x^*\Vert )\Vert x_0-x^*\Vert ^{\lambda }\le \Vert x_0-x^*\Vert < \varrho _3, \end{aligned}$$
(30)

so (24) holds for \(n=0\) and \(z_0\in U(x^*,\varrho _3)\). Notice that since \(y_0, z_0\in U(x^*, \varrho _3)\), we have that

$$\begin{aligned} \Vert F'(y_0)^{-1}F'(x^*)\Vert \le&\ \frac{1}{1-w_0(\Vert y_0-x^*\Vert )}\nonumber \\ \le&\ \frac{1}{1-w_0(g_1(\Vert x_0-x^*\Vert )\Vert x_0-x^*\Vert )} \end{aligned}$$
(31)

and

$$\begin{aligned} \Vert F'(z_0)^{-1}F'(x^*)\Vert \le&\ \frac{1}{1-w_0(\Vert z_0-x^*\Vert )}\nonumber \\ \le&\ \frac{1}{1-w_0(g_2(\Vert x_0-x^*\Vert )\Vert x_0-x^*\Vert ^{\lambda })}. \end{aligned}$$
(32)

Moreover, \(x_1\) is well defined by the third substep of method (14) for \(n=0\). We can write by the third substep of method (14) for \(n=0\)

$$\begin{aligned} x_1-x^*=\,&z_0-x^*-F'(z_0)^{-1}F(z_0)+F'(z_0)^{-1}F(z_0)+F'(x_0)^{-1} F(z_0)\nonumber \\&-\frac{3}{2}F'(y_0)^{-1}F(z_0)-\frac{1}{2}F'(x_0)^{-1}F'(y_0) F'(x_0)^{-1}F(z_0)\nonumber \\ =\,&(z_0-x^*-F'(z_0)^{-1}F(z_0))+(F'(x_0)^{-1}-F'(y_0)^{-1})F(z_0)\nonumber \\&+\frac{1}{2}(F'(z_0)^{-1}-F'(y_0)^{-1})F(z_0)\nonumber \\&+\frac{1}{2}(F'(z_0)^{-1}-F'(x_0)^{-1}F'(y_0)F'(x_0)^{-1})F(z_0)\nonumber \\ =\,&(z_0-x^*-F'(z_0)^{-1}F(z_0))+F'(x_0)^{-1}[(F'(y_0)-F'(x^*))\nonumber \\&+(F'(x^*)-F'(x_0))]F'(y_0)^{-1}F(z_0)\nonumber \\&+\frac{1}{2}F'(z_0)^{-1}\left[ (F'(y_0)-F'(x^*))+(F'(x^*)-F'(z_0))\right] F'(y_0)^{-1}F(z_0)\nonumber \\&+\frac{1}{2}F'(z_0)^{-1}\left[ F'(x_0)F'(y_0)^{-1}-F'(z_0)F'(x_0)^{-1}\right] F'(y_0) F'(x_0)^{-1}F(z_0). \end{aligned}$$
(33)

Using (22), (27), (a3), (a4), (29)–(33) and the triangle inequality, we obtain in turn that

$$\begin{aligned} \Vert x_1-x^*\Vert&\le \Bigg (\frac{\int _0^1w((1-\theta )\Vert z_0-x^*\Vert )d\theta }{1-w_0(\Vert z_0-x^*\Vert )}\\&\quad +\frac{[w_0(\Vert x_0-x^*\Vert )+w_0(\Vert y_0-x^*\Vert )]\int _0^1v(\theta \Vert z_0-x^*\Vert )d\theta }{(1-w_0(\Vert x_0-x^*\Vert ))(1-w_0(\Vert y_0-x^*\Vert ))}\\&\quad +\frac{1}{2}\frac{[w_0(\Vert z_0-x^*\Vert )+w_0(\Vert y_0-x^*\Vert )]\int _0^1v(\theta \Vert z_0-x^*\Vert ) d\theta }{(1-w_0(\Vert z_0-x^*\Vert ))(1-w_0(\Vert y_0-x^*\Vert ))}\\&\quad +\frac{1}{2}\frac{[\frac{v(\Vert x_0-x^*\Vert )}{1-w_0(\Vert y_0-x^*\Vert )} +\frac{v(\Vert z_0-x^*\Vert )}{1-w_0(\Vert x_0-x^*\Vert )}]v(\Vert y_0-x^*\Vert )\int _0^1 v(\theta \Vert z_0-x^*\Vert )d\theta }{(1-w_0(\Vert z_0-x^*\Vert ))(1-w_0(\Vert x_0-x^*\Vert ))}\Bigg )\\&\qquad \times \Vert z_0-x^*\Vert \le g_3(\Vert x_0-x^*\Vert )\Vert x_0-x^*\Vert \le \Vert x_0-x^*\Vert <\varrho _3, \end{aligned}$$

which shows (25) for \(n=0\) and \(x_1\in U(x^*,\varrho _3)\). The induction for estimates (23)–(25) is completed by simply replacing \(x_0\), \(y_0\), \(z_0\), \(x_1\) by \(x_k\), \(y_k\), \(z_k\), \(x_{k+1}\) in the preceding estimates. Then, from the estimate

$$\begin{aligned} \Vert x_{k+1}-x^*\Vert \le c\Vert x_k-x^*\Vert <\varrho _3,\quad \text {where}\,\, c=g_3(\Vert x_0-x^*\Vert )\in [0,1), \end{aligned}$$

we deduce that \({\lim }_{k\rightarrow \infty } {x_k}=x^*\) and \(x_{k+1}\in U(x^*,\varrho _3)\).

The uniqueness part is shown using (a3) and (a7) as follows:

Define operator Q by \(Q=\int _0^1F'(x^{**}+\theta (x^*-x^{**}))d\theta \) for some \(x^{**}\in \Omega _1\) with \(F(x^{**})=0\). Then, we have that

$$\begin{aligned} \Vert F'(x^*)^{-1}\big (Q-F'(x^*)\big )\Vert \le&\int _0^1w_0(\theta \Vert x^*-x^{**}\Vert )d\theta \\ \le&\int _0^1w_0(\theta \varrho ^*)d\theta < 1, \end{aligned}$$

so \(Q^{-1}\in \mathfrak {L}(B_2, B_1)\). Then, from the identity

$$\begin{aligned} 0=F(x^*)-F(x^{**})=Q(x^*-x^{**}), \end{aligned}$$

we conclude that \(x^*=x^{**}\). \(\square \)

Next, we present the local convergence analysis of method (3) along the same lines of method (14). Define functions \(\bar{g}_2\), \( \lambda \), \(\mu \) and \(h_{\mu }\) on the interval \([0,\varrho _2)\) by

$$\begin{aligned} \bar{g}_2(t)= & {} \frac{1}{1-w_0(t)}+\frac{3}{2}\frac{1}{1-w_0(g_1(t)t)} +\frac{1}{2}\frac{\int _0^1v(\theta g_1(t)t)d\theta }{(1-w_0(t))^2},\nonumber \\ \lambda (t)= & {} 1+\bar{g}_2(t)\int _0^1v\big (\theta g_2(t)t^{\lambda }\big )d\theta ,\nonumber \\ \mu (t)= & {} \lambda ^{q}(t)g_2(t)t^{\lambda -1} \end{aligned}$$
(34)

and

$$\begin{aligned} h_{\mu }(t)=\mu (t)-1. \end{aligned}$$

We have that \(h_{\mu }(0)<0\). Suppose that

$$\begin{aligned} \mu (t)\rightarrow +\infty \quad \text {or a positive number as}\, \, t\rightarrow \varrho ^{-}_2. \end{aligned}$$
(35)

Denote by \(\varrho ^{(q)}\) the smallest zero of function \(h_{\mu }\) on the interval \((0,\varrho _2)\). Define the radius of convergence \(\varrho ^*\) by

$$\begin{aligned} \varrho ^*=\text {min}\{\varrho _1, \varrho ^{(q)}\}. \end{aligned}$$
(36)

Denote by \((A')\) the conditions (A) but with \(\varrho ^*\) replacing \(\varrho \) together with condition (35).

Proposition 1

Suppose that the conditions \((A')\) hold. Then, sequence \(\{x_n\}\) generated for \(x_0\in U(x^*, \varrho ^*)-\{x^*\}\) by method (3) is well defined in \(U(x^*, \varrho ^*)\), remains in \(U(x^*, \varrho ^*)\) and converges to \(x^*\). Moreover, the following estimates hold

$$\begin{aligned} \Vert y_k-x^*\Vert&\le g_1(\Vert x_k-x^*\Vert )\Vert x_k-x^*\Vert \le \Vert x_k-x^*\Vert <\varrho ^*,\nonumber \\ \Vert z_k-x^*\Vert&\le g_2(\Vert x_k-x^*\Vert )\Vert x_k-x^*\Vert ^{\lambda }\le \Vert x_k-x^*\Vert ,\nonumber \\ \Vert z_k^{(i)}-x^*\Vert&\le \lambda ^{i}(\Vert x_k-x^*\Vert )\Vert z_k^{(i-1)}-x^*\Vert \nonumber \\&\le {\lambda }^{i}(\Vert x_k-x^*\Vert )g_2(\Vert x_k-x^*\Vert )\Vert x_k-x^*\Vert ^{\lambda }\nonumber \\&\le \Vert x_k-x^*\Vert , \quad i=1, 2,\ldots , q-1 \end{aligned}$$
(37)

and

$$\begin{aligned} \Vert x_{k+1}-x^*\Vert&=\Vert z_k^{(q)}-x^*\Vert \le \lambda ^{q}(\Vert x_k-x^*\Vert ) \Vert z_k^{(q-1)}-x^*\Vert \nonumber \\&\le \mu (\Vert x_k-x^*\Vert )\Vert x_k-x^*\Vert , \end{aligned}$$
(38)

where the functions \(\lambda \) and \(\mu \) are defined previously. Furthermore, the vector \(x^*\) is the only solution of equation \(F(x)=0\) in \(\Omega _1\).

Proof

We shall only show the new estimates (37) and (38); the first two estimates follow as in the proof of Theorem 3. Then, we can obtain that

$$\begin{aligned} \Vert \psi (x_k, y_k) F'(x^*)\Vert&\le \Vert F'(x_k)^{-1}F'(x^*)\Vert +\frac{3}{2} \Vert F'(y_k)^{-1}F'(x^*)\Vert \\&\quad +\frac{1}{2}\Vert F'(x_k)^{-1}F'(x^*)\Vert \Vert F'(x^*)^{-1}F'(y_k)\Vert \Vert F'(x_k)^{-1}F'(x^*)\Vert \\&\le \frac{1}{1-w_0(\Vert x_k-x^*\Vert )}+\frac{3}{2}\frac{1}{1-w_0 (\Vert y_k-x^*\Vert )}\\&\quad +\frac{1}{2}\frac{\int _0^1v(\theta \Vert y_k-x^*\Vert )d\theta }{(1-w_0(\Vert x_k-x^*\Vert ))^2}\le \bar{g}_2(\Vert x_k-x^*\Vert ). \end{aligned}$$

Moreover, we have in turn the estimates

$$\begin{aligned} \Vert z_k^{(1)}-x^*\Vert&=\Vert z_k-x^*-\psi (x_k,y_k) F(z_k)\Vert \\&\le \Vert z_k-x^*\Vert +\Vert \psi (x_k, y_k) F'(x^*)\Vert \Vert F'(x^*)^{-1}F(z_k)\Vert \\&\le \Vert z_k-x^*\Vert +\bar{g}_2(\Vert x_k-x^*\Vert )\int _0^1v(\theta \Vert z_k-x^*\Vert )d\theta \Vert z_k-x^*\Vert \\&\le \lambda (\Vert x_k-x^*\Vert )\Vert z_k-x^*\Vert \\&\le \mu (\Vert x_k-x^*\Vert )\Vert x_k-x^*\Vert . \end{aligned}$$

Similarly, we get that

$$\begin{aligned} \Vert z_k^{(2)}-x^*\Vert&\le \lambda (\Vert x_k-x^*\Vert )\Vert z_k^{(1)}-x^*\Vert \\&\le \lambda ^2(\Vert x_k-x^*\Vert )\Vert z_k-x^*\Vert \\&\ldots \ldots \ldots \ldots \ldots \ldots \\ \Vert z_k^{(i)}-x^*\Vert&\le \lambda ^i(\Vert x_k-x^*\Vert )\Vert z_k^{(i-1)}-x^*\Vert \\ \Vert x_{k+1}-x^*\Vert&=\Vert z_k^{(q)}-x^*\Vert \le \lambda ^q(\Vert x_k-x^*\Vert )\Vert z_k^{(q-1)}-x^*\Vert \\&\le \mu (\Vert x_k-x^*\Vert )\Vert x_k-x^*\Vert . \end{aligned}$$

That is, we have \(x_k\), \(y_k\), \(z_k\), \(z_k^{(i)}\) \(\in U(x^*, \varrho ^*)\), \(i=1,2,\ldots , q \), and

$$\begin{aligned} \Vert x_{k+1}-x^*\Vert \le \bar{c} \Vert x_k-x^*\Vert , \end{aligned}$$

where \(\bar{c}=\mu (\Vert x_0-x^*\Vert )\in [0,1)\), so \(\lim _{k\rightarrow \infty }x_k=x^*\) and \(x_{k+1}\in U(x^*, \varrho ^*)\). \(\square \)

Remark

  1. (a)

    The results obtained here can be used for operators F satisfying an autonomous differential equation [5] of the form

    $$\begin{aligned} F'(x)=T(F(x)), \end{aligned}$$

    where T is a known continuous operator. Since \(F'(x^*)=T(F(x^*))=T(0)\), we can apply the results without actually knowing the solution \(x^*\). As an example, let \(F(x)=e^x-1\). Then, we can choose \(T(x)=x+1\).

  2. (b)

    It is worth noticing that methods (14) and (3) do not change when we use the conditions of Theorem 3 instead of the stronger conditions used in Theorems 1 and 2. Moreover, we can compute the computational order of convergence (COC) [32] defined by

    $$\begin{aligned} COC=\text {ln}\bigg (\frac{\Vert x_{n+1}-x^*\Vert }{\Vert x_{n}-x^*\Vert }\bigg )\bigg / \text {ln}\bigg (\frac{\Vert x_{n}-x^*\Vert }{\Vert x_{n-1}-x^*\Vert }\bigg ),\quad n=1,2,\ldots \end{aligned}$$
    (39)

    or the approximate computational order of convergence (ACOC) [14], given by

    $$\begin{aligned} ACOC=\text {ln}\bigg (\frac{\Vert x_{n+1}-x_n\Vert }{\Vert x_{n}-x_{n-1}\Vert }\bigg ) \bigg /\text {ln}\bigg (\frac{\Vert x_{n}-x_{n-1}\Vert }{\Vert x_{n-1}-x_{n-2}\Vert }\bigg ),\quad n=2,3,\ldots \end{aligned}$$
    (40)

    In this way we obtain the order of convergence in practice.

  3. (c)

    Numerous choices for function \(\varphi _\alpha ^{(p)}\) are possible. Let us choose, e.g. \(p=4\), \(\alpha =1\) and

    $$\begin{aligned} \varphi _1^{(4)}(x_n, y_n) = y_n-F'(y_n)^{-1}F(y_n), \end{aligned}$$
    (41)

    which is a fourth order iteration function. Then, we can have as in (29) that

    $$\begin{aligned} \Vert z_n-x^*\Vert \le&\ \frac{\int _0^1w((1-\theta )\Vert y_n-x^*\Vert )d\theta \,\Vert y_n-x^*\Vert }{1-w_0(\Vert y_n-x^*\Vert )}\\ \le&\ \frac{\int _0^1w((1-\theta )g_1(\Vert x_n-x^*\Vert )\Vert x_n-x^*\Vert )d\theta \, g_1 (\Vert x_n-x^*\Vert )\Vert x_n-x^*\Vert }{1-w_0(g_1(\Vert x_n-x^*\Vert )\Vert x_n-x^*\Vert )}. \end{aligned}$$

    So, we can choose

    $$\begin{aligned} g_2(t)=\frac{\int _0^1w((1-\theta )g_1(t)t)g_1(t)d\theta }{1-w_0(g_1(t)t)}\, \, \text {and}\, \, \lambda =1. \end{aligned}$$

Then function \(g_3\) is given by Eq. (22).

It is worth noticing that the definition of function \(g_2\) (and consequently of function \(g_3\)) is not unique. Indeed, let \(w_0(t)=l_0\,t\), \(w(t)=l\,t\). Then we get \(g_1(t)=\frac{lt}{2(1-l_0t)}\), \(g_2(t)=\Big (\frac{l}{2}\Big )^3 \frac{1}{(1-l_0t)^2}\) and \(\lambda =4\) (see also Example 2 for \(l_0=15 \) and \(l=30\)).
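The closed form of \(g_1\) for this Lipschitz-type choice is easy to verify symbolically; the following sympy snippet is a small illustration of ours.

```python
import sympy as sp

t, theta, l0, l = sp.symbols('t theta l_0 l', positive=True)
# g_1(t) = int_0^1 w((1 - theta)t) dtheta / (1 - w_0(t)) with w_0(t) = l_0*t, w(t) = l*t
g1 = sp.integrate(l*(1 - theta)*t, (theta, 0, 1)) / (1 - l0*t)
print(sp.simplify(g1))   # l*t/(2*(1 - l_0*t)), as stated above
```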

4 Numerical examples

Here, we demonstrate the theoretical results proved in Sect. 3. For this purpose, we choose the methods of the family (3) with \(g_1\), \({g_2}\), \(g_3\) and \(\varphi _1^{(4)}(x_n, y_n)\) as given in Remark (c); these methods are of order seven and ten (corresponding to \(q=1\) and \(q=2\) with \(p=4\)), and we denote them by \(M_7\) and \(M_{10}\), respectively. We consider two numerical examples, which are defined as follows:

Example 1

Let \(B_1=B_2=C[0,1]\). Consider the equation

$$\begin{aligned} x(s)=\int _0^1T(s, t)\Bigg (\frac{1}{2}\,x(t)^{\frac{3}{2}}+\frac{x(t)^2}{8}\Bigg )dt, \end{aligned}$$
(42)

where the kernel T is the Green’s function defined on the interval \([0,1]\times [0,1]\) by

$$\begin{aligned} T(s,t)= \left\{ \begin{array}{l} (1-s)t, \quad t\le s,\\ s(1-t), \quad s\le t. \end{array}\right. \end{aligned}$$
(43)

Define operator \(F:C[0,1]\rightarrow C[0,1]\) by

$$\begin{aligned} F(x)(s)=x(s)-\int _0^1T(s, t)\Bigg (\frac{1}{2}\,x(t)^{\frac{3}{2}}+\frac{x(t)^2}{8}\Bigg )dt, \end{aligned}$$

then,

$$\begin{aligned} F'(x)\mu (s)=\mu (s)-\int _0^1T(s,t)\Bigg (\frac{3}{4}\,x(t)^{\frac{1}{2}} +\frac{x(t)}{4}\Bigg )\mu (t)dt. \end{aligned}$$
(44)

Notice that \(x^*(s)=0\) is a solution of (42). Using (43), we obtain

$$\begin{aligned} \Bigg \Vert \int _0^1T(s,t)dt\Bigg \Vert \le \frac{1}{8}. \end{aligned}$$

Then, by (43) and (a4), we have that

$$\begin{aligned} \Vert F'(x)-F'(y)\Vert \le \frac{1}{32}\big (3\Vert x-y\Vert ^{\frac{1}{2}}+\Vert x-y\Vert \big ). \end{aligned}$$
(45)

Hence, we can set \(w_0(t)=w(t)=\frac{1}{32}\big (3t^{\frac{1}{2}}+t \big )\) and \(v(t)=1+w_0(t)\). Numerical results are displayed in Table 1. From the numerical values we observe that \(\varrho ^*> 1\). But, since we cannot go outside the unit ball, we choose \(\varrho ^*\) to be the maximum available value, which is 1. Thus, for both methods M\(_7\) and M\(_{10}\), we have \(\varrho ^* = 1\).

Thus the convergence of the methods \(M_7\) and \(M_{10}\) to \(x^*(s)= 0\) is guaranteed, provided that \(x_0 \in U(x^*,\varrho ^*)\). Notice that in view of (45) earlier results using hypotheses on the second derivative or higher cannot be used to solve this problem [3, 4, 26].

Table 1 Numerical results of Example 1
Table 2 Numerical results of Example 2

Example 2

Let \(B_1=B_2=C[0,1]\) be the space of continuous functions defined on the interval [0, 1], equipped with the max norm, and let \(\Omega =\bar{U}(0,1)\). Define the function F on \(\Omega \) by

$$\begin{aligned} F(\varphi )(x)=\varphi (x)-10\int _0^1x\theta \varphi (\theta )^3d\theta . \end{aligned}$$

We have that

$$\begin{aligned} F'(\varphi )(\xi )(x)=\xi (x)-30\int _0^1x\theta \varphi (\theta )^2\xi (\theta )d\theta ,\quad \text {for each } \xi \in \Omega . \end{aligned}$$

Then for \(x^*=0\) we have \(w_0(t)=15t\), \(w(t)=30t\), \(v(t)=2\). The parameters are shown in Table 2.

Thus, the methods \(M_7\) and \(M_{10}\) converge to \(x^*=0\), provided that \(x_0 \in U(x^*,\varrho ^*)\).

5 Applications

We apply the methods \(M_7\) and \(M_{10}\) of the proposed family (3) to solve systems of nonlinear equations in \(\mathbb {R}^{m}\). A comparison of the performance of the present methods with that of existing higher order methods is also given. For this comparison, we choose the fifth order method proposed by Madhu et al. [23]; the sixth order methods by Parhi and Gupta [26], Esmaeili and Ahmadi [15], Behl et al. [9] and Grau et al. [17]; and the eighth order method by Sharma and Arora [28]. These methods are given as follows:

Madhu–Babajee–Jayaraman method (\(MBJ_5\)):

$$\begin{aligned} y_{n}=&\, {x}_{n}- {F}'({x}_{n})^{-1}{F}({x}_{n}),\\ x_{n+1}=&\, y_{n}-H{F}'({x}_{n})^{-1}{F}({y}_{n}), \end{aligned}$$

where \(H=2I-t(x_n)+\frac{5}{4}(t(x_n)-1)^2 \) and \(t(x_n)=\ {F}'({x}_{n})^{-1}{F'}({y}_{n}).\)

Behl–Cordero–Motsa–Torregrosa method (\(BCMT_6\))

$$\begin{aligned} y_{n}=&\ {x}_{n}- a{F}'({x}_{n})^{-1}{F}({x}_{n}),\\ z_{n}=&\ y_{n}-\big (bF'(x_n)^{-1}+(cF'(x_n)+dF'(y_n))^{-1}\big ){F}(x_{n}),\\ x_{n+1}=&\ z_{n}-\big (gF'(x_n)^{-1}+(eF'(x_n)+hF'(y_n))^{-1}\big ){F}(z_{n}), \end{aligned}$$

where \(a=\frac{2}{3}\), \(b=-\frac{1}{6}\), \(c=-1\), \(d=3\), \(g=\frac{1}{2}\), \(e=-\frac{2g+1}{2(g-1)^2}\) and \(h=\frac{3}{2(g-1)^2}\).

Parhi–Gupta method (\(PG_6\)):

$$\begin{aligned} y_{n}=&\, {x}_{n}- {F}'({x}_{n})^{-1}{F}({x}_{n}),\\ z_{n}=&\, y_{n}-2\big (F'(x_{n})+F'(y_n)\big )^{-1}{F}(x_{n}),\\ x_{n+1}=&\, z_{n}-{F}'({x}_{n})^{-1}{F}({z}_{n})(3F'(y_n)-F'(x_n))^{-1}(F'(x_n)+F'(y_n)). \end{aligned}$$

Esmaeili–Ahmadi method (\(EA_6\)):

$$\begin{aligned} y_{n}=&\ {x}_{n}- {F}'({x}_{n})^{-1}{F}({x}_{n}),\\ z_{n}=&\ y_{n}+\frac{1}{3}\big (F'(x_n)^{-1}+2(F'(x_n)-3F'(y_n))^{-1}\big ){F}(x_{n}),\\ x_{n+1}=&\ z_{n}+\frac{1}{3}\big (-F'(x_n)^{-1}+4(F'(x_n)-3F'(y_n))^{-1}\big ){F}(z_{n}). \end{aligned}$$

Grau–Grau–Noguera method (\(GGN_6\)):

$$\begin{aligned} y_{n}=&\, {x}_{n}- {F}'({x}_{n})^{-1}{F}({x}_{n}),\\ z_{n}=&\, y_{n}-\big (2[y_{n},x_{n}\,;F]-F'(x_{n})\big )^{-1}{F}(y_{n}),\\ x_{n+1}=&\, z_{n}-\big (2[y_{n},x_{n}\,;F]-F'(x_{n})\big )^{-1}{F}(z_{n}). \end{aligned}$$

Sharma–Arora method (\(SA_8\)):

$$\begin{aligned} y_{n}=&\, {x}_{n}- {F}'({x}_{n})^{-1}{F}({x}_{n}),\\ z_{n}=&\, y_{n}-\bigg (\frac{13}{4}I-G_n\Big (\frac{7}{2}I-\frac{5}{4}G_n\Big )\bigg )F'(x_{n})^{-1}{F}(y_{n}),\\ x_{n+1}=&\, z_{n}-\bigg (\frac{7}{2}I-G_n\Big ({4I}-\frac{3}{2}G_n\Big )\bigg )F'(x_{n})^{-1}{F}(z_{n}), \end{aligned}$$

where \(G_n=F'(x_{n})^{-1}{F}'(y_{n})\).

The programs are run on an AMD A8-7410 APU with AMD Radeon R5 Graphics @ 2.20 GHz (64-bit operating system, Microsoft Windows 10 Ultimate 2016) and are executed in Mathematica using multiple-precision arithmetic. For every method, we record the number of iterations (n) needed to converge to the solution such that the stopping criterion

$$\begin{aligned} ||x_{n+1}-x_n||+||F(x_{n})||<10^{-400} \end{aligned}$$

is satisfied. In order to verify the theoretical order of convergence, we calculate the approximate computational order of convergence (ACOC) using formula (40). In the comparison of the performance of the methods, we also include the CPU time utilized in the execution of the program, computed by the Mathematica command "TimeUsed[ ]".
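A sketch of how the stopping criterion and the ACOC (40) can be evaluated from stored iterates is given below; it assumes that the iterates are kept as mpmath column vectors and that the working precision is large enough for the \(10^{-400}\) tolerance. The helper names are our own.

```python
from mpmath import mp, mpf, norm, log

mp.dps = 600                            # enough digits for the 1e-400 stopping criterion

def converged(x_new, x_old, Fx, tol=mpf('1e-400')):
    """Stopping criterion ||x_{n+1} - x_n|| + ||F(x_n)|| < 1e-400."""
    return norm(x_new - x_old) + norm(Fx) < tol

def acoc(xs):
    """ACOC (40) computed from the last four iterates in the list xs (mpmath vectors)."""
    d1 = norm(xs[-1] - xs[-2])          # ||x_{n+1} - x_n||
    d2 = norm(xs[-2] - xs[-3])          # ||x_n - x_{n-1}||
    d3 = norm(xs[-3] - xs[-4])          # ||x_{n-1} - x_{n-2}||
    return log(d1/d2) / log(d2/d3)
```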

Table 3 Comparison of performance of methods for problem (47)

We consider the nonlinear integral equation \(F(x)=0\) where

$$\begin{aligned} F(x)(s)=x(s)-1+\frac{1}{2}\int _0^1s\, \text {cos}(x(t))dt, \end{aligned}$$
(46)

where \(s\in [0,1]\) and \(x\in \Omega =U(0,2)\subset X\). Here, \(X=C[0,1]\) is the space of continuous functions on [0, 1] equipped with the max-norm,

$$\begin{aligned} \Vert x\Vert =\text {max}_{s\in [0,1]}|x(s)|. \end{aligned}$$

Integral equations of this kind are called Chandrasekhar equations (see [11]); they arise in the study of radiative transfer theory, neutron transport problems and the kinetic theory of gases.

Using the trapezoidal rule of integration with step \(h=1/m\) to discretize (46), we obtain the following system of nonlinear equations

$$\begin{aligned} 0= x_i-1 +\frac{s_i}{2m}\Bigg (\frac{1}{2}\, \text {cos}(x_0)+\sum _{j=1}^{m-1}\text {cos}(x_j)+\frac{1}{2}\, \text {cos}(x_m)\Bigg ),\quad i=0,1,\ldots ,m, \end{aligned}$$
(47)

where \(s_i=t_i=i/m\) and \(x_i=x(t_i)\) with \(x_0=1/2\). We apply the methods to solve (47) for the sizes \(m= 8, 25, 50, 100\), selecting the initial value \(\{\frac{1}{10},\frac{1}{10}, \ldots , \frac{1}{10}\}^T\) (\(m\) times) towards the required solutions of the systems.
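One possible way to assemble the discretized system (47) and hand it to the seventh order member \(M_7\) is sketched below, reusing the multistep_family helper from the sketch in Sect. 1. Treating all \(m+1\) nodal values as unknowns is our reading of (47), and standard double precision is used here instead of the multiple-precision setting of the reported experiments.

```python
import numpy as np

def chandrasekhar_system(m):
    """Discretized system (47): residual F and Jacobian J for the nodal values x_0, ..., x_m."""
    s = np.arange(m + 1) / m                       # s_i = t_i = i/m
    w = np.ones(m + 1); w[0] = w[m] = 0.5          # trapezoidal weights

    def F(x):
        return x - 1.0 + s/(2*m) * np.sum(w*np.cos(x))

    def J(x):
        return np.eye(m + 1) - np.outer(s/(2*m), w*np.sin(x))

    return F, J

# M_7: parameters (4), phi = one extra Newton step (p = 4) and q = 1
m = 8
F, J = chandrasekhar_system(m)
phi = lambda x, y: y - np.linalg.solve(J(y), F(y))
x0 = np.full(m + 1, 0.1)                           # initial value (1/10, ..., 1/10)
print(multistep_family(F, J, x0, phi, alpha=1.0, beta=-1.0, gamma=1.5, delta=0.5, q=1))
```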


Numerical results are displayed in Table 3, which include:

  • The dimension (m) of the system of equations.

  • The required number of iterations (n).

  • The error \(||x_{n+1}-x_n||\) of approximation to the corresponding solution of considered problems, where \(A(-h)\) denotes \(A \times 10^{-h}\) in each table.

  • The approximate computational order of convergence (ACOC) calculated by the formula (40).

  • The elapsed CPU time (CPU-time) in seconds.

It is clear from the numerical results displayed in Table 3 that the new methods, like the existing ones, show stable convergence behavior. Observe also that, at the same iteration, the error of the approximation obtained by the higher order methods is smaller than that obtained by the lower order methods, which illustrates the advantage of higher order methods. The computed computational order of convergence also confirms that the theoretical order of convergence is preserved. The CPU time used in the execution of the programs shows the efficiency of the proposed methods compared with the other methods. Similar numerical experiments, carried out for a number of problems of different types, confirmed the above conclusions to a large extent.