1 Introduction

Solving nonlinear equations is a fundamental problem with applications throughout science and engineering. We consider iterative methods for finding a simple root \(\alpha \) of a nonlinear equation \(f(x)=0\), where \(f : D \subset \mathbb {R} \rightarrow \mathbb {R}\) is a scalar function on an open interval \(D\). The Newton-Raphson iteration \(x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\) is probably the most widely used root-finding algorithm. It converges with second order and requires two evaluations per iteration step, one of \(f\) and one of \(f'\) [10, 17].
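For reference, the Newton-Raphson iteration can be sketched in a few lines of Python; the test function, derivative, starting point and tolerance below are illustrative choices, not taken from the paper.

```python
# Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop once the residual is small
            break
        x -= fx / fprime(x)        # one step: one evaluation of f, one of f'
    return x

# Example: the simple root sqrt(2) of f(x) = x^2 - 2
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```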

Kung and Traub [7] conjectured that no multi-point method without memory using \(n\) evaluations can have a convergence order larger than \(2^{n-1}\). A multi-point method of convergence order \(2^{n-1}\) is called optimal. The efficiency index measures the balance between these two quantities by the formula \(p^{1/n}\), where \(p\) is the convergence order of the method and \(n\) is the number of function evaluations per iteration.
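As a quick illustration in Python, the efficiency index \(p^{1/n}\) for Newton's method (\(p=2\), \(n=2\)), an optimal two-point method (\(p=4\), \(n=3\)) and an optimal three-point method (\(p=8\), \(n=4\)) can be computed as follows:

```python
# Efficiency index p**(1/n): order p per iteration at n evaluations
def efficiency_index(p, n):
    return p ** (1.0 / n)

newton_ei = efficiency_index(2, 2)    # Newton-Raphson
two_pt_ei = efficiency_index(4, 3)    # optimal two-point
three_pt_ei = efficiency_index(8, 4)  # optimal three-point
```

Each step up the optimal hierarchy raises the index, which is the motivation for multi-point methods.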

Some well-known optimal two-point methods have been introduced by Jarratt [5], King [6], Ostrowski [10] and Petkovic et al. [12]. Some optimal multi-point methods have been proposed by Chun and Lee [1], Cordero et al. [2], Lotfi et al. [8], Neta [9], Salimi et al. [13] and Sharifi et al. [14, 15]. Zheng et al. [19] have presented an optimal Steffensen-type family. Dzunic et al. [3], Petkovic et al. [11] and Sharma et al. [16] have constructed methods with memory to solve nonlinear equations.

In this paper, we present a derivative-free modification of King's family of methods. The rest of the paper is organized as follows: Sects. 2 and 3 are devoted to the construction and convergence analysis of optimal derivative-free two-point and three-point methods. We introduce the methods with memory and prove their R-order in Sect. 4. In Sect. 5, the new methods are compared with their closest competitors in a series of numerical examples. Section 6 contains a short conclusion.

2 Derivative-free two-point method of fourth order

2.1 Description of derivative-free two-point method

We start from King’s family of methods, one of the most important two-point families for solving nonlinear equations [6]:

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{f\left( x_n\right) }{f'(x_n)},\\ x_{n+1}=y_n-\dfrac{f(y_n)}{f'(x_n)}\cdot \dfrac{f(x_n)+\gamma f(y_n)}{f(x_n)+(\gamma -2)f(y_n)},\quad (n=0, 1, \ldots ),\quad \gamma \in \mathbb {R}, \end{array}\right. } \end{aligned}$$
(2.1)

where \(x_0\) is an initial approximation of a simple zero \(\alpha \) of \(f\). The main idea is to construct a derivative-free class of two-point methods with optimal order of convergence four. We use a Steffensen-like method for the first step and, in the second step, approximate \(f'(x_n)\) by

$$\begin{aligned} f'(x_n)\approx \dfrac{f\left[ y_n,w_n\right] }{G(t_n)}, \end{aligned}$$

where \( w_n=x_n-\beta f(x_n)\), \(\beta \ne 0\), \(f[y_n,w_n]=\dfrac{f(y_n)-f(w_n)}{y_n-w_n}\), \(t_n=\dfrac{f(y_n)}{f(x_n)}\) and \(G\) is a real function.

Hence, we obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{\beta f(x_n)^2}{f(x_n)-f(w_n)}, \\ x_{n+1}=y_n-\dfrac{f(x_n)+\gamma f(y_n)}{f(x_n)+(\gamma -2)f(y_n)}\cdot \dfrac{f(y_n)}{f[y_n,w_n]}G(t_n). \end{array}\right. } \end{aligned}$$
(2.2)
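A minimal Python sketch of scheme (2.2) with the admissible weight \(G(t)=1-t\) (which satisfies \(G(0)=1\), \(G'(0)=-1\)); the values of \(\beta \), \(\gamma \), the tolerance and the test equation are illustrative choices:

```python
# Two-point derivative-free scheme (2.2) with weight G(t) = 1 - t
def two_point(f, x0, beta=0.01, gamma=0.0, tol=1e-10, max_iter=30):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x - beta * fx                    # w_n = x_n - beta f(x_n)
        fw = f(w)
        y = x - beta * fx**2 / (fx - fw)     # Steffensen-like first step
        fy = f(y)
        t = fy / fx                          # t_n = f(y_n)/f(x_n)
        G = 1.0 - t                          # weight function G(t_n)
        dd = (fy - fw) / (y - w)             # divided difference f[y_n, w_n]
        x = y - (fx + gamma * fy) / (fx + (gamma - 2.0) * fy) * fy / dd * G
    return x

root = two_point(lambda x: x * x - 2.0, 1.5)  # converges to sqrt(2)
```

Note that each iteration uses only the three function values \(f(x_n)\), \(f(w_n)\), \(f(y_n)\).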

2.2 Convergence analysis

We shall state the convergence theorem for the family of methods (2.2).

Theorem 1

Let \(D \subseteq \mathbb {R}\) be an open interval, let \(f : D \rightarrow \mathbb {R}\) be four times continuously differentiable and let \(\alpha \in D\) be a simple zero of \(f\). If the initial point \(x_{0}\) is sufficiently close to \(\alpha \), then the method defined by (2.2) converges to \(\alpha \) with order four, provided the weight function \(G : \mathbb {R} \rightarrow \mathbb {R}\) is continuously differentiable and satisfies the conditions

$$\begin{aligned} G(0)=1, \quad \quad G'(0)=-1. \end{aligned}$$

Proof

Let \(e_{n}:=x_{n}-\alpha \), \(e_{n,w}:=w_n-\alpha \), \(e_{n,y}:=y_{n}-\alpha \) and \(c_{n}:=\dfrac{f^{(n)}(\alpha )}{n!f^{'}(\alpha )}, \quad n=2,3,\ldots \)

Using Taylor expansion of \(f\) at \(\alpha \) and taking into account \(f(\alpha )=0\), we have

$$\begin{aligned} f(x_{n}) = f^{'}(\alpha )\left( e_{n} + c_{2}e_{n}^{2}+c_{3}e_{n}^{3}+c_{4}e_{n}^{4}\right) +O\left( e_n^5\right) , \end{aligned}$$
(2.3)

then

$$\begin{aligned} e_{n,w}=w_n-\alpha =\left( 1-\beta f'(\alpha )\right) e_n-\beta f'(\alpha )c_2e_n^2+O\left( e_n^3\right) , \end{aligned}$$

and

$$\begin{aligned} f(w_{n})=f^{'}(\alpha )\left( e_{n,w} + c_{2}e_{n,w}^{2}+c_{3}e_{n,w}^{3}+c_{4}e_{n,w}^{4}\right) +O\left( e_{n,w}^5\right) . \end{aligned}$$
(2.4)

From (2.3) and (2.4), we have

$$\begin{aligned} \dfrac{\beta f\left( x_n\right) ^2}{f(x_n)-f(w_n)}=e_n-\left( 1-\beta f'(\alpha )\right) c_2e_n^2+O\left( e_n^3\right) , \end{aligned}$$
(2.5)

by substituting (2.5) in (2.2), we get

$$\begin{aligned} e_{n,y}=y_n-\alpha =\left( 1-\beta f'(\alpha )\right) c_2e_n^2+O\left( e_n^3\right) . \end{aligned}$$

Similarly, we have

$$\begin{aligned} f(y_{n})=f^{'}(\alpha )\left( e_{n,y} + c_{2}e_{n,y}^{2}+c_{3}e_{n,y}^{3}+c_{4}e_{n,y}^{4}\right) +O\left( e_{n,y}^5\right) . \end{aligned}$$
(2.6)

From (2.3) and (2.6), we obtain

$$\begin{aligned} t_{n}= & {} \dfrac{f(y_n)}{f(x_n)}=\left( 1-\beta f'(\alpha )\right) c_2e_n+\left( -\left( 3+\beta f'(\alpha )\left( -3+\beta f'(\alpha )\right) \right) c_2^2\right. \nonumber \\&\left. +\left( -2+\beta f'(\alpha )\right) \left( -1+\beta f'(\alpha )\right) c_3\right) e_n^2 +O\left( e_n^3\right) , \end{aligned}$$
(2.7)

by expanding \(G(t_n)\) around \(0\), we have

$$\begin{aligned} G(t_n)=G(0)+G'(0)t_n+O\left( t_n^2\right) , \end{aligned}$$
(2.8)

moreover, from (2.4) and (2.6), we obtain

$$\begin{aligned} f\left[ y_n,w_n\right]= & {} \dfrac{f(y_n)-f(w_n)}{y_n-w_n} =f'(\alpha )+f'(\alpha )\left( 1-\beta f'(\alpha )\right) c_2e_n\nonumber \\&+f'(\alpha )\left( \left( 1\!-\!2\beta f'(\alpha )\right) c_2^2\!+\!\left( -1+\beta f'(\alpha )\right) ^2c_3\right) e_n^2\!+\!O\left( e_n^3\right) ,\qquad \end{aligned}$$
(2.9)

then, by substituting (2.3)–(2.9) in (2.2) we get

$$\begin{aligned} e_{n+1}=x_{n+1}-\alpha =R_2e_n^2+R_3e_n^3+R_4e_n^4+O\left( e_n^5\right) , \end{aligned}$$

with

$$\begin{aligned} R_2&=-c_2\left( 1-\beta f'(\alpha )\right) \left( -1+G(0)\right) , \\ R_3&=-c_2^2\left( 1-\beta f'(\alpha )\right) ^2\left( 1+G'(0)\right) ,\\ R_4&=-c_2\left( 1-\beta f'(\alpha )\right) ^2\left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^2+c_3\right) . \end{aligned}$$

Therefore, to obtain fourth-order convergence of the two-point method (2.2), it is necessary to have \(R_i=0\) for \(i=2,3\); this is achieved since

$$\begin{aligned} R_2&=0 \quad \text {if} \quad G(0)=1,\\ R_3&=0 \quad \text {if} \quad G'(0)=-1. \end{aligned}$$

It is clear that \(R_{4}\ne 0\) in general. Thus, method (2.2) converges to \(\alpha \) with order four and the error equation becomes

$$\begin{aligned} e_{n+1}\!=\!\left( \!-c_2\left( 1\!-\!\beta f'(\alpha )\right) ^2\left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^2+c_3\right) \right) e_n^4\!+\!O\left( e_n^5\right) \!.\nonumber \\ \end{aligned}$$
(2.10)

This finishes the proof of the theorem. \(\square \)

3 Derivative-free three-point method with order eight

3.1 Description of derivative-free three-point method

We now add one Newton step to the method (2.2):

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{\beta f\left( x_n\right) ^2}{f(x_n)-f(w_n)},\\ z_n=y_n-\dfrac{f(x_n)+\gamma f(y_n)}{f(x_n)+(\gamma -2)f(y_n)}\dfrac{f(y_n)}{f[y_n,w_n]}G(t_n),\\ x_{n+1}=z_n-\dfrac{f(z_n)}{f'(z_n)}. \end{array}\right. } \end{aligned}$$
(3.1)

As can be seen, this scheme requires five evaluations per iteration (four of \(f\) and one of \(f'\)), so it is neither optimal nor derivative-free. To obtain an optimal derivative-free method, we approximate \(f'(z_n)\) by the derivative of Newton’s interpolation polynomial of degree three through the points \(x_n\), \(w_n\), \(y_n\) and \(z_n\):

$$\begin{aligned} N_3\left( t;z_n,y_n,x_n,w_n\right)&:=\,f\left( z_n\right) +f\left[ z_n,y_n\right] (t-z_n)\\&\quad +\,f[z_n,y_n,x_n](t-z_n)(t-y_n)\\&\quad +\,f[z_n,y_n,x_n,w_n](t-z_n)(t-y_n)(t-x_n). \end{aligned}$$

It is clear that

$$\begin{aligned} N_3(z_n)=f(z_n), \quad \text {and} \quad N_3'(t)\mid _{t=z_n}\approx f'(z_n). \end{aligned}$$

Then

$$\begin{aligned} N_3'(z_n)&=\left[ \frac{d}{dt}N_3(t)\right] _{t=z_n}\\&= f[z_n,y_n]+f[z_n,y_n,x_n](z_n-y_n)\\&\quad + f[z_n,y_n,x_n,w_n](z_n-y_n)(z_n-x_n), \end{aligned}$$

and hence we get

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{\beta f(x_n)^2}{f(x_n)-f(w_n)},\\ z_n=y_n-\dfrac{f(x_n)+\gamma f(y_n)}{f(x_n)+(\gamma -2)f(y_n)}\dfrac{f(y_n)}{f[y_n,w_n]}G(t_n),\\ x_{n+1}=z_n-\dfrac{f(z_n)}{f[z_n,y_n]+f[z_n,y_n,x_n](z_n-y_n)+f[z_n,y_n,x_n,w_n](z_n-y_n)(z_n-x_n)}.\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.2)
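The full scheme (3.2) can be sketched in Python as follows; again \(G(t)=1-t\) is one admissible weight, and \(\beta \), \(\gamma \), the tolerance and the test equation are illustrative assumptions. The last step evaluates \(N_3'(z_n)\) from the divided-difference table, so each iteration uses only the four values \(f(x_n)\), \(f(w_n)\), \(f(y_n)\), \(f(z_n)\):

```python
# Three-point derivative-free scheme (3.2); f'(z_n) is replaced by N_3'(z_n)
def three_point(f, x0, beta=0.01, gamma=0.0, tol=1e-9, max_iter=30):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x - beta * fx
        fw = f(w)
        y = x - beta * fx**2 / (fx - fw)
        fy = f(y)
        G = 1.0 - fy / fx                    # weight G(t_n), t_n = f(y)/f(x)
        z = (y - (fx + gamma * fy) / (fx + (gamma - 2.0) * fy)
                 * fy / ((fy - fw) / (y - w)) * G)
        fz = f(z)
        # divided differences needed for N_3'(z_n)
        f_zy = (fz - fy) / (z - y)
        f_yx = (fy - fx) / (y - x)
        f_xw = (fx - fw) / (x - w)
        f_zyx = (f_zy - f_yx) / (z - x)
        f_yxw = (f_yx - f_xw) / (y - w)
        f_zyxw = (f_zyx - f_yxw) / (z - w)
        n3p = f_zy + f_zyx * (z - y) + f_zyxw * (z - y) * (z - x)
        x = z - fz / n3p                     # Newton-like last step
    return x

root = three_point(lambda x: x * x - 2.0, 1.5)
```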

3.2 Convergence analysis

The following theorem shows that the method (3.2) has convergence order eight.

Theorem 2

Let \(D \subseteq \mathbb {R}\) be an open interval, let \(f : D \rightarrow \mathbb {R}\) be eight times continuously differentiable and let \( \alpha \in D\) be a simple zero of \(f\). If the initial point \(x_{0}\) is sufficiently close to \(\alpha \) and the conditions of Theorem 1 hold, then the method defined by (3.2) converges to \(\alpha \) with order eight.

Proof

Define \(e_{n}:=x_{n}-\alpha \), \(e_{n,w}:=w_n-\alpha \), \(e_{n,y}:=y_{n}-\alpha \), \(e_{n,z}:=z_{n}-\alpha \) and \(c_{n}:=\frac{f^{(n)}(\alpha )}{n!f^{'}(\alpha )}\) for \(n=2,3,\ldots \)

Using Taylor expansion of \(f\) at \(\alpha \) and taking into account that \(f(\alpha )=0\), we have

$$\begin{aligned} f(x_{n}) = f^{'}(\alpha )\left( e_{n} + c_{2}e_{n}^{2}+c_{3}e_{n}^{3}+\cdots +c_{8}e_{n}^{8}\right) +O\left( e_n^9\right) , \end{aligned}$$
(3.3)

then

$$\begin{aligned} \begin{aligned} e_{n,w}&=w_n-\alpha =\left( 1-\beta f'(\alpha )\right) e_n-\beta f'(\alpha )c_2e_n^2-\beta f'(\alpha )c_3e_n^3\\&\quad -\beta f'(\alpha )c_4e_n^4-\beta f'(\alpha )c_5e_n^5+O\left( e_n^6\right) , \end{aligned} \end{aligned}$$

and

$$\begin{aligned} f(w_{n})=f^{'}(\alpha )(e_{n,w} + c_{2}e_{n,w}^{2}+c_{3}e_{n,w}^{3}+\cdots +c_{8}e_{n,w}^{8})+O\left( e_{n,w}^9\right) , \end{aligned}$$
(3.4)

from (3.3) and (3.4), we have

$$\begin{aligned} \begin{aligned} e_{n,y}=y_{n}-\alpha&=\left( 1-\beta f'(\alpha )\right) c_{2}e_{n}^2+\left( -\left( 2+\beta f'(\alpha )\left( -2+\beta f'(\alpha )\right) \right) c_{2}^{2}\right. \\&\left. \quad + \left( -2+\beta f'(\alpha )\right) \left( -1+\beta f'(\alpha )\right) c_{3}\right) e_{n}^{3}\\&\quad + \left( \left( 4-\beta f'(\alpha )\left( 5+\beta f'(\alpha )\left( -3+\beta f'(\alpha )\right) \right) \right) c_{2}^{3}\right. \\&\left. \quad + \left( -7+\beta f'(\alpha )\left( 10+\beta f'(\alpha )\left( -7+2\beta f'(\alpha )\right) \right) \right) c_{2}c_{3}\right. \\&\left. \quad - \left( -1+\beta f'(\alpha )\right) \left( 3+\beta f'(\alpha )\left( -3+\beta f'(\alpha )\right) \right) c_{4}\right) e_{n}^{4}+O\left( e_{n}^{5}\right) , \end{aligned}\nonumber \\ \end{aligned}$$
(3.5)

as well as

$$\begin{aligned} f(y_{n})=f^{'}(\alpha )\left( e_{n,y} + c_{2}e_{n,y}^{2}+c_{3}e_{n,y}^{3}+\cdots +c_{8}e_{n,y}^{8}\right) +O\left( e_{n,y}^9\right) . \end{aligned}$$
(3.6)

According to Theorem 1, we get

$$\begin{aligned} e_{n,z}= & {} z_n-\alpha \\= & {} \left( -c_2\left( 1-\beta f'(\alpha )\right) ^2\left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^2+c_3\right) \right) e_n^4\\&\quad + \left( 1-\beta f'(\alpha )\right) \left( \left( -4+\beta f'(\alpha )-\beta ^3 \left( f'(\alpha )\right) ^3\right. \right. \\&\quad + 2\left( -1+\beta f'(\alpha )\right) \left( 6+\beta f'(\alpha )\left( -4+\beta f'(\alpha )\right) \right) \gamma \\&\quad \left. + 2\left( -1+\beta f'(\alpha )\right) ^3\gamma ^2\right) c_2^4+\left( 8+12\gamma -\beta f'(\alpha )\left( 4+30\gamma +\beta f'(\alpha )\right. \right. \\&\quad \times \left. \left. \left( 5\!-\!3\beta f'(\alpha )\!+\!6\left( -4\!+\!\beta f'(\alpha )\right) \gamma \right) \right) \right) c_2^2c_3\!-\!\left( -2\!+\!\beta f'(\alpha )\right) \left( \!-\!1\!+\!\beta f'(\alpha )\right) c_3^2\\&\quad \left. - \left( -2+\beta f'(\alpha )\right) \left( -1+\beta f'(\alpha )\right) c_2c_4\right) e_n^5+O\left( e_n^6\right) , \end{aligned}$$

by using the expansion of \(f(z_n)\), we have

$$\begin{aligned} f(z_{n}) = f^{'}(\alpha )\left( e_{n,z} + c_{2}e_{n,z}^{2}+c_{3}e_{n,z}^{3}+\cdots +c_{8}e_{n,z}^{8}\right) +O\left( e_{n,z}^9\right) . \end{aligned}$$
(3.7)

From (3.6) and (3.7), we get

$$\begin{aligned} f[z_n,y_n]= & {} \dfrac{f(z_n)-f(y_n)}{z_n-y_n}\nonumber \\= & {} f'(\alpha )+f'(\alpha )c_2^2\left( 1-\beta f'(\alpha )\right) e_n^2+\left( -f'(\alpha )\left( 2+\beta f'(\alpha )\left( -2+\beta f'(\alpha )\right) \right) c_2^3\right. \nonumber \\&\left. + f'(\alpha )\left( -2+\beta f'(\alpha )\right) \left( -1+\beta f'(\alpha )\right) c_2c_3\right) e_n^3+O\left( e_n^4\right) . \end{aligned}$$
(3.8)

And from (3.3) and (3.6), we have

$$\begin{aligned} f[y_n,x_n]= & {} \dfrac{f(y_n)-f(x_n)}{y_n-x_n}\nonumber \\= & {} f'(\alpha )+c_2f'(\alpha )e_n+f'(\alpha )\left( c_2^2\left( 1-\beta f'(\alpha )\right) +c_3\right) e_n^2\nonumber \\&+ f'(\alpha )\left( -\left( 2+\beta f'(\alpha )\left( -2+\beta f'(\alpha )\right) \right) c_2^3\right. \nonumber \\&\left. + \left( -3+\beta f'(\alpha )\right) \left( -1+\beta f'(\alpha )\right) c_2c_3+c_4\right) e_n^3 +O\left( e_n^4\right) . \end{aligned}$$
(3.9)

From (3.3) and (3.4), we obtain

$$\begin{aligned} f[x_n,w_n]= & {} \dfrac{f(x_n)-f(w_n)}{x_n-w_n}\nonumber \\= & {} f'(\alpha )+c_2f'(\alpha )\left( 2-\beta f'(\alpha )\right) e_n+f'(\alpha )\left( -c_2^2\beta f'(\alpha )\right. \nonumber \\&\left. + \left( 3+\beta f'(\alpha ) \left( -3+\beta f'(\alpha )\right) \right) c_3\right) e_n^2\nonumber \\&- f'(\alpha ) \left( -2 +\beta f'(\alpha ) \right) \left( -2c_2c_3 \beta f'(\alpha )\right. \nonumber \\&\left. + \left( 2 +\beta f'(\alpha ) \left( -2 + \beta f'(\alpha )\right) \right) c_4 \right) e_n^3+O\left( e_n^4\right) . \end{aligned}$$
(3.10)

From (3.8) and (3.9), we get

$$\begin{aligned} f[z_n,y_n,x_n]= & {} \dfrac{f[z_n,y_n]-f[y_n,x_n]}{z_n-x_n}\nonumber \\= & {} c_2f'(\alpha )+c_3f'(\alpha ) e_n+f'(\alpha )\left( c_2c_3\left( 1-\beta f'(\alpha )\right) +c_4\right) e_n^2\nonumber \\&+ f'(\alpha )\left( -c_2^2c_3\left( 2+\beta f'(\alpha )\left( -2+\beta f'(\alpha )\right) \right) \right. \nonumber \\&+ c_3^2\left( -2+\beta f'(\alpha )\right) \left( -1+\beta f'(\alpha )\right) \nonumber \\&\left. + c_2c_4\left( 1-\beta f'(\alpha )\right) +c_5\right) e_n^3+O\left( e_n^4\right) . \end{aligned}$$
(3.11)

And from (3.9) and (3.10), we get

$$\begin{aligned} f[y_n,x_n,w_n]= & {} \dfrac{f[y_n,x_n]-f[x_n,w_n]}{y_n-w_n}\nonumber \\= & {} c_2f'(\alpha ) +c_3 f'(\alpha ) \left( 2-\beta f'(\alpha )\right) e_n+ f'(\alpha )\left( \left( 1- 2\beta f'(\alpha )\right) c_2c_3\right. \nonumber \\&\left. + \left( 3+\beta f'(\alpha ) \left( -3 + \beta f'(\alpha )\right) \right) c_4\right) e_n^2+O\left( e_n^3\right) . \end{aligned}$$
(3.12)

Therefore, from (3.11) and (3.12), we get

$$\begin{aligned} f[z_n,y_n,x_n,w_n]= & {} \dfrac{f[z_n,y_n,x_n]-f[y_n,x_n,w_n]}{z_n-w_n}\nonumber \\= & {} c_3f'(\alpha ) +c_4 f'(\alpha ) \left( 2-\beta f'(\alpha )\right) e_n+ f'(\alpha )\left( \left( 1-2\beta f'(\alpha )\right) c_2c_4\right. \nonumber \\&\left. + \left( 3+\beta f'(\alpha )\left( -3+\beta f'(\alpha )\right) \right) c_5\right) e_n^2+O\left( e_n^3\right) . \end{aligned}$$
(3.13)

Finally, by substituting (3.3)–(3.13) in (3.2), we obtain

$$\begin{aligned} \begin{aligned} e_{n+1}&=x_{n+1}-\alpha =\left( 1-\beta f'(\alpha )\right) ^4c_2^2\left( \left( -1-2\gamma +\beta f'(\alpha )\left( -1+2\gamma \right) \right) c_2^2+c_3\right) \\&\left( \left( -1-2\gamma +\beta f'(\alpha )\left( -1+2\gamma \right) \right) c_2^3+c_2c_3-c_4 \right) e_n^8+O\left( e_n^9\right) . \end{aligned} \end{aligned}$$

This shows that the method (3.2) has optimal convergence order eight. \(\square \)

4 The development of a new method with memory

In this section, we design a new method with memory by using a self-accelerating parameter in method (3.2). The order of convergence of method (3.2) is eight when \(\beta \ne 1/f'(\alpha )\); for \(\beta =1/f'(\alpha )\) the convergence order would rise to 12. Since the value \(f'(\alpha )\) is not available, we use an approximation \(\widehat{f'}(\alpha )\approx f'(\alpha )\) instead. The goal is to construct a method with memory that updates the parameter \(\beta =\beta _n\) as the iteration proceeds by the formula \(\beta _n=1/\widehat{f'}(\alpha )\) for \(n=1,2,3,\ldots \); an initial estimate \(\beta _0\) must be chosen before starting the iterative process. In the following, we use the symbols \(\rightarrow \), \(O\) and \(\sim \) according to the convention [17]: if \(\lim _{n\rightarrow \infty }f(x_n)=C\), we write \(f(x_n)\rightarrow C\) or \(f\rightarrow C\), where \(C\) is a nonzero constant; if \(\tfrac{f}{g}\rightarrow C\), we write \(f=O(g)\) or \(f\sim Cg\).

We approximate \(\widehat{f'}(\alpha )\) by \(N'_4(x_n)\), then we have

$$\begin{aligned} \beta _n=\dfrac{1}{N'_4(x_n)}, \end{aligned}$$
(4.1)

where \(N_4(t):=N_4(t;x_n, z_{n-1}, y_{n-1}, w_{n-1}, x_{n-1})\) is Newton’s interpolation polynomial of degree four, set through the five available approximations \(x_n\), \(z_{n-1}\), \(y_{n-1}\), \(w_{n-1}\) and \(x_{n-1}\):

$$\begin{aligned} \begin{aligned} N'_4(x_n)&= \left[ \frac{d}{dt}N_4(t)\right] _{t=x_n}\\&=f\left[ x_n,z_{n-1}\right] +f\left[ x_n,z_{n-1},y_{n-1}\right] \left( x_n-z_{n-1}\right) \\&\quad + f\left[ x_n,z_{n-1},y_{n-1},w_{n-1}\right] \left( x_n-z_{n-1}\right) \left( x_n-y_{n-1}\right) \\&\quad + f\left[ x_n,z_{n-1},y_{n-1},w_{n-1},x_{n-1}\right] \left( x_n-z_{n-1}\right) \left( x_n-y_{n-1}\right) \left( x_n-w_{n-1}\right) . \end{aligned}\nonumber \\ \end{aligned}$$
(4.2)
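A Python sketch of (4.2): build the divided-difference table for the five stored approximations and differentiate the Newton form at its first node \(t=x_n\) (node ordering as in the text; the sample nodes in the usage lines are hypothetical values):

```python
# Derivative of the degree-4 Newton interpolating polynomial at its first node.
# xs = [x_n, z_{n-1}, y_{n-1}, w_{n-1}, x_{n-1}], fs = f at those points.
def n4_prime(xs, fs):
    n = len(xs)
    dd = [list(fs)]                  # dd[k][i] = f[x_i, ..., x_{i+k}]
    for k in range(1, n):
        dd.append([(dd[k-1][i+1] - dd[k-1][i]) / (xs[i+k] - xs[i])
                   for i in range(n - k)])
    # N4'(xs[0]) = sum_k f[x_0,...,x_k] * prod_{j=1}^{k-1} (x_0 - x_j)
    deriv, prod = 0.0, 1.0
    for k in range(1, n):
        deriv += dd[k][0] * prod
        prod *= xs[0] - xs[k]
    return deriv

# Hypothetical stored approximations; for f(x) = x^3 the degree-4
# interpolation is exact, so n4_prime returns f'(1.0) = 3 and beta_n = 1/3.
nodes = [1.0, 1.1, 1.2, 1.3, 1.4]
beta_n = 1.0 / n4_prime(nodes, [x**3 for x in nodes])
```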

Lemma 3

([16]) If \(\beta _n=\frac{1}{N_4'(x_n)}\), \(n=1,2,\ldots \) then the estimate

$$\begin{aligned} \left( 1-\beta _n f'(\alpha )\right) \sim c_5e_{n-1}e_{n-1,w}e_{n-1,y}e_{n-1,z}, \end{aligned}$$
(4.3)

holds.

Theorem 4

If \(x_0\) is sufficiently close to a simple zero \(\alpha \) of the function \(f\), then the R-order of convergence of the method (3.2) with memory, with \(\beta _n\) computed by (4.1), is at least 12.

Proof

Let \((x_n)\) be a sequence of approximations produced by the iterative method (3.2). If this sequence converges to the zero \(\alpha \) of \(f\) with R-order \(Q_R((3.2), \alpha )\ge r\), we write

$$\begin{aligned} e_{n+1}\sim D_{n,r} e_n^r, \quad e_n=x_n-\alpha , \end{aligned}$$
(4.4)

where \(D_{n,r}\) tends to the asymptotic error constant of (3.2) as \(n\) tends to infinity. Hence, applying (4.4) twice, we get

$$\begin{aligned} e_{n+1}\sim D_{n,r}\left( D_{n-1,r} e_{n-1}^r\right) ^r = D_{n,r} D_{n-1,r}^r e_{n-1}^{r^2}. \end{aligned}$$
(4.5)

Assume that the sequences \(( w_n) \), \(( y_n)\) and \((z_n)\) have the R-orders \(q\), \(p\) and \(s\), respectively, that is,

$$\begin{aligned} e_{n,w}\sim D_{n,q}e_n^q \sim D_{n,q}\left( D_{n-1,r}e_{n-1}^r\right) ^q= & {} D_{n,q}D_{n-1,r}^q e_{n-1}^{rq}, \end{aligned}$$
(4.6)
$$\begin{aligned} e_{n,y}\sim D_{n,p}e_n^p \sim D_{n,p}\left( D_{n-1,r}e_{n-1}^r\right) ^p= & {} D_{n,p}D_{n-1,r}^p e_{n-1}^{rp}, \end{aligned}$$
(4.7)

and

$$\begin{aligned} e_{n,z}\sim D_{n,s}e_n^s \sim D_{n,s}\left( D_{n-1,r}e_{n-1}^r\right) ^s=D_{n,s}D_{n-1,r}^s e_{n-1}^{rs}. \end{aligned}$$
(4.8)

Also, we have

$$\begin{aligned} e_{n,w}\sim & {} \left( 1-\beta _n f'(\alpha )\right) e_n,\\ e_{n,y}\sim & {} c_2\left( 1-\beta _n f'(\alpha )\right) e_n^2,\\ e_{n,z}\sim & {} B\left( 1-\beta _n f'(\alpha )\right) ^2e_n^4, \end{aligned}$$

where \(B=-c_2\left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^2+c_3\right) ,\)

$$\begin{aligned} e_{n+1}\sim A(1-\beta _n f'(\alpha ))^4e_n^8, \end{aligned}$$

where \(A=c_2^2\left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^2+c_3\right) \left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^3+c_2c_3-c_4\right) \).

Therefore by Lemma 3, we obtain

$$\begin{aligned} e_{n,w}\sim & {} \left( 1-\beta _n f'(\alpha )\right) e_n\sim \left( c_5e_{n-1}e_{n-1,w}e_{n-1,y}e_{n-1,z}\right) e_n\nonumber \\\sim & {} c_5D_{n-1,q}D_{n-1,p}D_{n-1,s}D_{n-1,r}e_{n-1}^{r+p+s+q+1}, \end{aligned}$$
(4.9)
$$\begin{aligned} e_{n,y}\sim & {} c_2\left( 1-\beta _n f'(\alpha )\right) e_n^2\sim c_2\left( c_5e_{n-1}e_{n-1,w}e_{n-1,y}e_{n-1,z}\right) e_n^2\nonumber \\\sim & {} c_2c_5D_{n-1,q}D_{n-1,p}D_{n-1,s}D_{n-1,r}^2 e_{n-1}^{2r+s+p+q+1}, \end{aligned}$$
(4.10)
$$\begin{aligned} e_{n,z}\sim & {} B \left( 1-\beta _n f'(\alpha )\right) ^2e_n^4\sim B\left( c_5e_{n-1}e_{n-1,w}e_{n-1,y}e_{n-1,z}\right) ^2e_n^4\nonumber \\\sim & {} Bc_5^2D_{n-1,q}^2D_{n-1,p}^2D_{n-1,s}^2D_{n-1,r}^4 e_{n-1}^{4r+2s+2p+2q+2}, \end{aligned}$$
(4.11)
$$\begin{aligned} e_{n+1}\sim & {} A \left( 1-\beta _n f'(\alpha )\right) ^4e_n^8\sim A\left( c_5e_{n-1}e_{n-1,w}e_{n-1,y}e_{n-1,z}\right) ^4e_n^8\nonumber \\\sim & {} Ac_5^4D_{n-1,q}^4D_{n-1,p}^4D_{n-1,s}^4D_{n-1,r}^8 e_{n-1}^{8r+4s+4p+4q+4}. \end{aligned}$$
(4.12)

By comparing the exponents of \(e_{n-1}\) in the four pairs of relations (4.5) and (4.12), (4.6) and (4.9), (4.7) and (4.10), and (4.8) and (4.11), we obtain a nonlinear system of four equations in the four unknowns \(r\), \(s\), \(p\) and \(q\):

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}r^2-8r-4s-4p-4q-4=0,\\ &{}rs-4r-2s-2p-2q-2=0,\\ &{}rp-2r-s-p-q-1=0,\\ &{}rq-r-s-p-q-1=0. \end{array}\right. } \end{aligned}$$

A non-trivial solution of the above system is \( r=12\), \(s=6\), \(p=3\), \(q=2\).
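The stated solution is easy to check directly; a quick Python verification of the four residuals:

```python
# Verify that (r, s, p, q) = (12, 6, 3, 2) solves the exponent-matching system
r, s, p, q = 12, 6, 3, 2
eqs = [
    r*r - 8*r - 4*s - 4*p - 4*q - 4,
    r*s - 4*r - 2*s - 2*p - 2*q - 2,
    r*p - 2*r - s - p - q - 1,
    r*q - r - s - p - q - 1,
]
```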

We have thus proved that the convergence order of the iterative method (3.2) with memory is at least 12. \(\square \)

5 Numerical example

In this section, we report numerical tests comparing the efficiency of the methods. To obtain very high accuracy and avoid the loss of significant digits, we employed multi-precision arithmetic with 1200 significant decimal digits in the programming package Mathematica 8.

In what follows, we present some concrete iterative methods obtained from the scheme (3.2) with memory.

Method 1. Choose the weight function \(G\) as follows:

$$\begin{aligned} G(t_n)=1-t_n, \end{aligned}$$
(5.1)

where \(t_n=\frac{f(y_n)}{f(x_n)}\). The function \(G\) in (5.1) satisfies the assumptions of Theorem 2, and we obtain the following method:

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{\beta _n f(x_n)^2}{f(x_n)-f(w_n)},\quad w_n=x_n-\beta _nf(x_n), \quad \beta _n=\frac{1}{N'_4(x_n)},\\ z_n=y_n-\dfrac{f(x_n)+\gamma f(y_n)}{f(x_n)+(\gamma -2)f(y_n)}\cdot \dfrac{f(x_n)-f(y_n)}{f(x_n)}\cdot \dfrac{f(y_n)}{f[y_n,w_n]},\\ x_{n+1}=z_n-\dfrac{f(z_n)}{f[z_n,y_n]+f[z_n,y_n,x_n](z_n-y_n)+f[z_n,y_n,x_n,w_n](z_n-y_n)(z_n-x_n)},\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.2)

where, in general, the divided difference of order \(n\) is defined by

$$\begin{aligned} f[x_0,x_1,\ldots ,x_n]=\dfrac{f[x_1,x_2,\ldots , x_n]-f[x_0,x_1,\ldots ,x_{n-1}]}{x_n-x_0}\quad \hbox {for}\, n=1,2,\ldots \end{aligned}$$
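A Python sketch of the complete method with memory (5.2); here the weight \(G(t)=1-t\) appears as the factor \((f(x_n)-f(y_n))/f(x_n)\). The starting values \(\beta _0\), \(x_0\), the tolerance and the test equation are illustrative choices, and the guard on \(f(z_n)\) simply stops the iteration once the very fast convergence reaches the precision floor of double arithmetic:

```python
def n4_prime(xs, fs):
    # derivative at xs[0] of the Newton interpolating polynomial through xs
    n = len(xs)
    dd = [list(fs)]
    for k in range(1, n):
        dd.append([(dd[k-1][i+1] - dd[k-1][i]) / (xs[i+k] - xs[i])
                   for i in range(n - k)])
    deriv, prod = 0.0, 1.0
    for k in range(1, n):
        deriv += dd[k][0] * prod
        prod *= xs[0] - xs[k]
    return deriv

def method1(f, x0, beta0=0.01, gamma=0.0, tol=1e-10, max_iter=10):
    x, beta, prev = x0, beta0, None
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        if prev is not None:                  # self-accelerating beta_n (4.1)
            xs = [x] + prev                   # [x_n, z, y, w, x of step n-1]
            beta = 1.0 / n4_prime(xs, [f(s) for s in xs])
        w = x - beta * fx
        fw = f(w)
        y = x - beta * fx**2 / (fx - fw)
        fy = f(y)
        z = (y - (fx + gamma * fy) / (fx + (gamma - 2.0) * fy)
               * (fx - fy) / fx               # weight G(t_n) = 1 - t_n
               * fy / ((fy - fw) / (y - w)))
        fz = f(z)
        if abs(fz) < tol:                     # roundoff floor reached
            return z
        f_zy = (fz - fy) / (z - y)
        f_yx = (fy - fx) / (y - x)
        f_xw = (fx - fw) / (x - w)
        f_zyx = (f_zy - f_yx) / (z - x)
        f_yxw = (f_yx - f_xw) / (y - w)
        f_zyxw = (f_zyx - f_yxw) / (z - w)
        d = f_zy + f_zyx * (z - y) + f_zyxw * (z - y) * (z - x)
        prev = [z, y, w, x]
        x = z - fz / d
    return x

root = method1(lambda x: x * x - 2.0, 2.0)    # beta_n updated from iteration 2
```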

Method 2. Choose the weight function \(G\) as follows:

$$\begin{aligned} G(t_n)=1-\dfrac{t_n}{1+t_n}, \end{aligned}$$
(5.3)

where \(t_n=\frac{f(y_n)}{f(x_n)}\). The function \(G\) in (5.3) satisfies the assumptions of Theorem 2, and we obtain the following method:

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{\beta _n f\left( x_n\right) ^2}{f(x_n)-f(w_n)},\quad w_n=x_n-\beta _nf(x_n), \quad \beta _n=\frac{1}{N'_4(x_n)},\\ z_n=y_n-\dfrac{f(x_n)+\gamma f(y_n)}{f(x_n)+(\gamma \!-\!2)f(y_n)}\cdot \dfrac{f(x_n)}{f(x_n)+f(y_n)}\cdot \dfrac{f(y_n)}{f[y_n,w_n]},\\ x_{n+1}=z_n\!-\!\dfrac{f(z_n)}{f[z_n,y_n]+f[z_n,y_n,x_n](z_n\!-\!y_n)+f[z_n,y_n,x_n,w_n](z_n\!-\!y_n)(z_n\!-\!x_n)}.\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.4)

Method 3. Choose the weight function \(G\) as follows:

$$\begin{aligned} G(t_n)=\dfrac{1-2t_n}{1-t_n}, \end{aligned}$$
(5.5)

where \(t_n=\frac{f(y_n)}{f(x_n)}\). The function \(G\) in (5.5) satisfies the assumptions of Theorem 2, and we obtain the following method:

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{\beta _n f(x_n)^2}{f(x_n)-f(w_n)},\quad w_n=x_n-\beta _nf(x_n), \quad \beta _n=\frac{1}{N'_4(x_n)},\\ z_n=y_n-\dfrac{f(x_n)+\gamma f(y_n)}{f(x_n)+(\gamma -2)f(y_n)}\cdot \dfrac{f(x_n)-2f(y_n)}{f(x_n)-f(y_n)}\cdot \dfrac{f(y_n)}{f[y_n,w_n]},\\ x_{n+1}=z_n\!-\!\dfrac{f(z_n)}{f[z_n,y_n]\!+\!f[z_n,y_n,x_n](z_n-y_n)\!+\!f[z_n,y_n,x_n,w_n](z_n-y_n)(z_n-x_n)}.\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.6)

Method 4. Choose the weight function \(G\) as follows:

$$\begin{aligned} G(t_n)=(1-t_n)^{\frac{2t_n+1}{t_n+1}}, \end{aligned}$$
(5.7)

where \(t_n=\frac{f(y_n)}{f(x_n)}\). The function \(G\) in (5.7) satisfies the assumptions of Theorem 2, and we obtain the following method:

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{\beta _n f(x_n)^2}{f(x_n)-f(w_n)},\quad w_n=x_n-\beta _nf(x_n), \quad \beta _n=\frac{1}{N'_4(x_n)},\\ z_n=y_n-\dfrac{f(x_n)+\gamma f(y_n)}{f(x_n)+(\gamma -2)f(y_n)}\cdot \left( \dfrac{f(x_n)-f(y_n)}{f(x_n)}\right) ^{\frac{2f(y_n)+f(x_n)}{f(y_n)+f(x_n)}}\cdot \dfrac{f(y_n)}{f[y_n,w_n]},\\ x_{n+1}\!=\!z_n-\dfrac{f(z_n)}{f[z_n,y_n]+f[z_n,y_n,x_n](z_n-y_n)+f[z_n,y_n,x_n,w_n](z_n-y_n)(z_n-x_n)}.\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.8)

The new proposed methods with memory were compared with the existing three-point methods with memory given below, which have the same convergence order, using the same initial data \(x_0\) and \(\beta _0\).

Method 5. The derivative-free method by Kung and Traub [7] given by

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{f(x_n)}{f[w_n,x_n]},\quad w_n=x_n+\beta _n f(x_n),\quad \beta _n=\frac{-1}{N'(x_n)},\\ z_n=y_n-\dfrac{f(y_n)f(w_n)}{\left( f(w_n)-f(y_n)\right) f[x_n,y_n]},\\ x_{n+1}=z_n-\dfrac{f(y_n)f(w_n)\left( y_n-x_n+\dfrac{f(x_n)}{f[x_n,z_n]}\right) }{\left( f(y_n)-f(z_n)\right) \left( f(w_n)-f(z_n)\right) }+\dfrac{f(y_n)}{f[y_n,z_n]}. \end{array}\right. } \end{aligned}$$
(5.9)

Method 6. The method by Sharma et al. [16] given by

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=x_n-\dfrac{ f(x_n)}{\varphi (x_n)},\\ z_n=y_n-H(u_n,v_n)\dfrac{f(y_n)}{\varphi (x_n)},\\ x_{n+1}\!=\!z_n\!-\!\dfrac{f(z_n)}{f[z_n,y_n]+f[z_n,y_n,x_n](z_n-y_n)+f[z_n,y_n,x_n,w_n](z_n-y_n)(z_n-x_n)}, \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.10)

where \(\varphi (x_n)=\frac{f(w_n)-f(x_n)}{\beta _n f(x_n)}\), \(w_n=x_n+\beta _n f(x_n)\), \(\beta _n=\frac{-1}{N'(x_n)}\), \(H(u_n,v_n)=\dfrac{1+u_n}{1-v_n}\), \(u_n=\frac{f(y_n)}{f(x_n)}\) and \(v_n=\frac{f(y_n)}{f(w_n)}\).

Method 7. The method by Zheng et al. [19] given by

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{n}=x_{n}-\dfrac{f(x_{n})}{f[x_{n},w_{n}]},\quad w_n=x_n+\beta _n f(x_n),\quad \beta _n=\frac{-1}{N'(x_n)},\\ z_{n}=y_{n}- \dfrac{f(y_{n})}{f[y_n,x_n]+f[y_n,x_n,w_n](y_n-x_n)},\\ x_{n+1}\!=\!z_n\!-\!\dfrac{f(z_n)}{f[z_n,y_n]+f[z_n,y_n,x_n](z_n-y_n)+f[z_n,y_n,x_n,w_n](z_n-y_n)(z_n-x_n)}, \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.11)

In order to test our proposed methods with memory (5.2), (5.4), (5.6) and (5.8) and compare them with the methods (5.9), (5.10) and (5.11), we choose the initial value \(x_0\) using the Mathematica command FindRoot [4, pp. 158–160] and compute the error and the computational order of convergence (coc) by the approximate formula [18]

$$\begin{aligned} \text {coc}\approx \frac{\ln \left| (x_{n+1}-\alpha )/(x_{n}-\alpha )\right| }{\ln |(x_{n}-\alpha )/(x_{n-1}-\alpha )|}. \end{aligned}$$
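In Python, with three consecutive errors \(e_{n-1}\), \(e_n\), \(e_{n+1}\) (the values in the usage line are hypothetical), this formula reads:

```python
import math

# Computational order of convergence from three consecutive errors x_k - alpha
def coc(e_prev, e_curr, e_next):
    return (math.log(abs(e_next / e_curr))
            / math.log(abs(e_curr / e_prev)))

# Errors shrinking like |e_{k+1}| = |e_k|^2 give coc = 2
order = coc(1e-2, 1e-4, 1e-8)
```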

We list three test functions and their roots in Table 1. In addition, in Tables 2 and 3, the proposed methods with memory with weight functions (5.1), (5.3), (5.5) and (5.7) and the methods (5.9)–(5.11) are tested on these three nonlinear equations. The results are in accordance with the theory developed above.

Table 1 Test functions \(f_1, f_2, f_3\) and root \(\alpha \)
Table 2 Errors and coc for methods (5.2), (5.4), (5.6) and (5.8) with \(\gamma =0\)
Table 3 Errors and coc for methods (5.9), (5.10) and (5.11)

6 Conclusion

We have introduced a new method with memory for approximating a simple root of a given nonlinear equation. The increase of the convergence order is attained without any additional function evaluations, which makes the proposed methods with memory computationally very efficient. We have proved that the convergence order of the new method with memory is at least 12, so its efficiency index is \(12^{1/4}\approx 1.86121\), which is greater than the index \(8^{1/4}\approx 1.68179\) of optimal three-point methods of order eight using the same number of function evaluations. Numerical examples show that our methods with memory are competitive with other methods under the same conditions.