1 Introduction

Solving the nonlinear equation \(f(x)=0\) is a very significant problem in the real world. To determine the solutions of nonlinear equations, many iterative methods have been proposed (see [1,2,3,4,5]); these iterative methods form a vital area of research in numerical analysis, as they have applications in numerous branches of pure and applied science. Among them, the most famous one-point iterative method without memory is the Newton–Raphson method, given by

$$\begin{aligned} x_{n+1}= & {} x_n-\frac{f{(x_n)}}{f'{(x_n)}}, \end{aligned}$$
(1)

where \(x_0\) is the initial guess for the exact solution of \(f(x)=0\) and \(n=0,1,2,\ldots \) This method converges quadratically. One drawback of this method is the condition \(f'(x_n) \ne 0\), which confines its applications in practice. To resolve this difficulty, Kumar et al. [6] developed a new one-point iterative method, given by

$$\begin{aligned} x_{n+1}= & {} x_n-\frac{f{(x_n)}}{f'{(x_n)}-\lambda _1 f{(x_n)}}. \end{aligned}$$
(2)

Setting \(\lambda _1 = 0\) in (2), we obtain Newton’s method. The error expression for the above method is:

$$\begin{aligned} e_{n+1}= & {} (\lambda _1 -c_2)e_n^{2}+O(e_n^{3}), \end{aligned}$$
(3)

where \(e_n=x_n- \alpha \), \(c_k=\frac{1}{k!} \frac{f^{(k)}{(\alpha )}}{f'{(\alpha )}}\), \(k=2,3,\ldots \) and \( \alpha \) is a root of \(f{(x)}=0\).
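To make the contrast concrete, here is a minimal Python sketch of iterations (1) and (2); the helper name, tolerance and example are our own choices for illustration, not taken from [6].

```python
def modified_newton(f, df, x0, lam=0.0, tol=1e-12, max_iter=100):
    """One-point iteration (2): x <- x - f(x)/(f'(x) - lam*f(x)).
    Setting lam = 0 recovers the Newton-Raphson iteration (1)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # The -lam*f(x) term keeps the denominator away from zero
        # even when f'(x) vanishes at the current iterate.
        x = x - fx / (df(x) - lam * fx)
    return x

# Example: the real root of x^3 - 2, starting from x0 = 1
root = modified_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0, lam=0.5)
```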

Subsequently, we discuss one-point iterative methods with memory. In the mapping \(x_{n+1}= \phi {(x_n;x_{n-1},\ldots ,x_{n-k})}\), the iteration function \(\phi \) is called a one-point iteration function with memory, since \(x_n\) is new information, while \(x_{n-1},\ldots ,x_{n-k}\) are reused information. The best-known one-point method with memory is the secant method, given by

$$\begin{aligned} x_{n+1}= & {} x_n-\frac{{x_n-x_{n-1}}}{f{(x_n)}- f{(x_{n-1})}}f{(x_n)}, n=1,2,3\ldots \end{aligned}$$
(4)
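A short sketch makes the reuse of the previous iterate in (4) explicit (the variable names are ours):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method (4): each step reuses the previous iterate x_{n-1}."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol or f1 == f0:
            break
        # x_{n+1} = x_n - (x_n - x_{n-1})/(f(x_n) - f(x_{n-1})) * f(x_n)
        x0, x1 = x1, x1 - (x1 - x0) / (f1 - f0) * f1
    return x1
```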

Now, the next iteration function \(\phi \), defined through the mapping \(x_{n+1}= \phi {(x_n, w_1{(x_{n})},\ldots , w_n{(x_{n})})}\), is termed a multipoint iteration function without memory [7], where the terms \(w_1{(x_{n})},\ldots ,w_n{(x_{n})}\) have the common argument \(x_n\). Let us define one more iteration function \(\phi \) with arguments \(z_j\), where each argument represents \(n+1\) quantities \(x_j, w_1{(x_{j})},\ldots ,w_n{(x_{j})}\) \((n \ge 1)\). The iteration mapping \(x_{n+1}= \phi {(z_n, z_{n-1},\ldots ,z_{n-k})}\) is called a multipoint iteration function with memory: in each iterative step we must preserve information of the last approximations \(x_j\), and for each approximation we are required to calculate \(n\) expressions \( w_1{(x_{j})},\ldots ,w_n{(x_{j})}\). Nowadays, many researchers are devoting their attention to developing with-memory iterative methods that use one or more self-accelerating parameters. Several nice contributions are devoted to derivative-free with-memory iterative methods, such as [8,9,10,11], but very few with-memory iterative methods involving derivatives are available in the literature for solving nonlinear equations. The main objective of this paper is to work on a multipoint iteration function with memory that improves the order of convergence of the without-memory method with derivatives, without any additional calculations, and that has very high computational efficiency. In this paper, we present a new multipoint iterative method with memory for solving nonlinear equations, and we also discuss its convergence analysis. In Sect. 2, we develop a two-point iterative method with memory. This method is obtained by employing a self-accelerating parameter, designed via Hermite interpolating polynomials, whereby the R-order of convergence of the two-point method is increased from 4 to 4.5616, 4.7913 and 5, respectively. Section 3 corroborates the theoretical results by means of different numerical examples.

2 Development and convergence analysis of with memory method

In this section, we improve the convergence rate of the method given by Hafiz and Bahgat [12]. First, we consider the optimal fourth-order without-memory scheme presented in [12],

$$\begin{aligned} y_n= & {} x_n-\frac{f{(x_n)}}{f'{(x_n)}},\nonumber \\ x_{n+1}= & {} y_n-\frac{f{(y_n)}P_1{(x_n,y_n)}}{2P_1^{2}{(x_n,y_n)}-f{(y_n) P_2{(x_n,y_n)}}}, \end{aligned}$$
(5)

where \(P_1{(x_n,y_n)}=2 \left( \frac{f{(y_n)}-f{(x_n)}}{y_n-x_n}\right) -f'{(x_n)}\) and \(P_2{(x_n,y_n)}=\frac{2}{y_n-x_n} \left( \frac{f{(y_n)}-f{(x_n)}}{y_n-x_n}-f'{(x_n)}\right) \). The error expression for iterative scheme (5) is

$$\begin{aligned} e_{n+1}= & {} -c_2c_3e_n^4+O{(e_n^5)}. \end{aligned}$$
(6)
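A direct transcription of one step of scheme (5) reads as follows; this is only a sketch under the obvious assumptions (a simple root, \(y_n \ne x_n\)), with helper names of our own choosing.

```python
def hafiz_bahgat_step(f, df, x):
    """One step of the optimal fourth-order scheme (5) of Hafiz and Bahgat [12]."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                 # Newton sub-step
    fy = f(y)
    s = (fy - fx) / (y - x)          # divided difference f[x, y]
    p1 = 2.0 * s - dfx               # P1(x, y)
    p2 = 2.0 * (s - dfx) / (y - x)   # P2(x, y)
    return y - fy * p1 / (2.0 * p1**2 - fy * p2)
```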

Now, we use the idea considered in Kumar et al. [6] to change the first step of the above method and consider

$$\begin{aligned} y_n= & {} x_n-\frac{f{(x_n)}}{f'{(x_n)}-Tf{(x_n)}},\nonumber \\ x_{n+1}= & {} y_n-\frac{f{(y_n)}P_1{(x_n,y_n)}}{2P_1^{2}{(x_n,y_n)}-f{(y_n) P_2{(x_n,y_n)}}}. \end{aligned}$$
(7)

If we set \(T=0\), then we recover the first step of method (5). The error expressions for the two sub-steps of scheme (7) are

$$\begin{aligned} e_{n,y}= & {} y_n - \alpha = (c_2-T)e_n^2 + (2(T-c_2)c_2-T^2+2c_3)e_n^3\nonumber \\&+\,(-T^3-5Tc_2^2+4c_2^3+4Tc_3-c_2(7c_3-3T^2)+3c_4)e_n^4 \nonumber \\&+\,O{(e_n^5)} \end{aligned}$$
(8)

and

$$\begin{aligned} e_{n+1}= & {} {(T-c_2)c_3}e_n^4+O{(e_n^5)}, \end{aligned}$$
(9)

where \(c_j=\frac{f^{(j)}(\alpha )}{j!f'{(\alpha )}}\), for \(j=2,3,\ldots \) and \(T \in R\). Replacing \(T\) by \(T_n\) in scheme (7), we obtain the following with-memory method

$$\begin{aligned} y_n= & {} x_n-\frac{f{(x_n)}}{f'{(x_n)}-T_nf{(x_n)}},\nonumber \\ x_{n+1}= & {} y_n-\frac{f{(y_n)}P_1{(x_n,y_n)}}{2P_1^{2}{(x_n,y_n)}-f{(y_n) P_2{(x_n,y_n)}}}, \end{aligned}$$
(10)

which is denoted by OM4(2.6). It is easy to see from Eq. (6) that the order of convergence of scheme (5) is four and, after replacing the first step of method (5) by the first step of scheme (7), the order of scheme (7) is also four whenever \(T \ne c_2\). Now, by taking \(T=c_2=f''{(\alpha )}/(2f'{(\alpha )})\), it can be established that the order of method (7) becomes 5. However, the exact values of \(f'{(\alpha )}\) and \(f''{(\alpha )}\) needed for this kind of acceleration are not available in practice. Instead, we replace the parameter \(T\) by a varying parameter \(T_n\). To compute this parameter, we can utilize the information available from the current and previous iterations, requiring \(\lim _{n\rightarrow \infty } T_n = c_2 = f''{(\alpha )}/(2f'{(\alpha )})\), so that the fourth-order asymptotic convergence constant in the error expression (9) vanishes. We consider the following formulas for \(T_n\):

Method 1:

$$\begin{aligned} T_n= & {} \frac{H''_2{(x_n)}}{2f'{(x_n)}}, \end{aligned}$$
(11)

where \(H_2{(x)}= f{(x_n)}+ f{[x_n,x_n]}(x-x_n)+ f{[x_n,x_n,y_{n-1}](x-x_n)^2}\) and \(H''_2{(x)}=2 f{[x_n,x_n,y_{n-1}]}\).

Method 2:

$$\begin{aligned} T_n= & {} \frac{H''_3{(x_n)}}{2f'{(x_n)}}, \end{aligned}$$
(12)

where \(H_3{(x)}= H_2{(x)}+ f{[x_n,x_n,y_{n-1},x_{n-1}]}(x-x_n)^2(x-y_{n-1})\) and \(H''_3{(x)}= 2f{[x_n,x_n,y_{n-1}]}+2f{[x_n,x_n,y_{n-1},x_{n-1}]}(x_n-y_{n-1})\).

Method 3:

$$\begin{aligned} T_n= & {} \frac{H''_4{(x_n)}}{2f'{(x_n)}}, \end{aligned}$$
(13)

where \(H_4{(x)}= H_3{(x)}+ f{[x_n,x_n,y_{n-1},x_{n-1},x_{n-1}]}(x-x_n)^2(x-y_{n-1})(x-x_{n-1})\) and \(H''_4{(x_n)}= 2f{[x_n,x_n,y_{n-1}]}+2f{[x_n,x_n,y_{n-1},x_{n-1}]}(x_n-y_{n-1})+2f{[x_n,x_n,y_{n-1},x_{n-1},x_{n-1}]}(x_n-y_{n-1})(x_n-x_{n-1})\).

Note: The Hermite interpolating polynomial \(H_m(x)\) \((m=2,3,4)\) satisfies the condition \(H'_m(x_n)=f'(x_n)\) \((m=2,3,4)\). Hence \(T_n=\frac{H''_m(x_n)}{2f'(x_n)}\) can also be expressed as \(T_n=\frac{H''_m(x_n)}{2H'_m(x_n)}\) \((m=2,3,4)\).
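To fix ideas, the following sketch implements the with-memory method (10) together with the update (11) of Method 1, computing \(f{[x_n,x_n,y_{n-1}]}\) from \(f{[x_n,x_n]}=f'(x_n)\); Methods 2 and 3 extend the divided-difference table in the same fashion. The names and stopping criterion are our own, not from the source.

```python
def om4_with_memory(f, df, x0, T0=0.0, tol=1e-12, max_iter=20):
    """With-memory method (10); T_n updated by (11), i.e.
    T_n = H_2''(x_n)/(2 f'(x_n)) = f[x_n, x_n, y_{n-1}] / f'(x_n)."""
    x, T, y_prev = x0, T0, None
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            break
        if y_prev is not None:
            d1 = (f(y_prev) - fx) / (y_prev - x)   # f[x_n, y_{n-1}]
            T = ((d1 - dfx) / (y_prev - x)) / dfx  # f[x_n, x_n, y_{n-1}]/f'(x_n)
        y = x - fx / (dfx - T * fx)                # accelerated first step
        fy = f(y)
        s = (fy - fx) / (y - x)
        p1 = 2.0 * s - dfx                         # P1(x_n, y_n)
        p2 = 2.0 * (s - dfx) / (y - x)             # P2(x_n, y_n)
        x, y_prev = y - fy * p1 / (2.0 * p1**2 - fy * p2), y
    return x
```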

Theorem 1

Let \(H_m\) be the Hermite interpolating polynomial of degree \(m\) that interpolates a function \(f\) at the interpolation nodes \(x_n,x_n,t_0,\ldots ,t_{m-2}\) contained in an interval \(I\), where the derivative \(f^{(m+1)}\) is continuous in \(I\) and \(H_m\) satisfies \(H_m{(x_n)}= f{(x_n)}\), \(H'_m{(x_n)}=f'{(x_n)}\), \(H_m{(t_j)}=f{(t_j)}\) \((j=0,1,\ldots ,m-2)\). Define the errors \(e_{t,j}=t_j-\alpha \) \((j=0,1,\ldots ,m-2)\) and assume that

(1) all nodes \(x_n,t_0,\ldots ,t_{m-2}\) are sufficiently close to the zero \(\alpha \),

(2) the condition \(e_n=O(e_{t,0}\ldots e_{t,m-2})\) holds. Then

$$\begin{aligned} H''_{m}{(x_n)}= & {} 2f'{(\alpha )}\Bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+3c_3e_n\Bigg ), \end{aligned}$$
(14)
$$\begin{aligned} T_{n}= & {} \frac{H''_m(x_n)}{2f'{(x_n)}} \sim \Bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\Bigg ) \end{aligned}$$
(15)

and

$$\begin{aligned} T_{n}-c_2 \sim \Bigg (-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\Bigg ). \end{aligned}$$
(16)

Proof

The error expression of the Hermite interpolation can be written as follows:

$$\begin{aligned} f{(x)}-H_{m}{(x)}= & {} \frac{f^{(m+1)}(\xi )}{(m+1)!}(x-x_n)^2\prod _{j=0}^{m-2}(x-t_j),(\xi \in I). \end{aligned}$$
(17)

After differentiating (17) twice and setting \(x=x_n\), we obtain

$$\begin{aligned} H''_{m}{(x_n)}= & {} f''{(x_n)}-2\frac{f^{(m+1)}(\xi )}{(m+1)!} \prod _{j=0}^{m-2}(x_n-t_j),(\xi \in I). \end{aligned}$$
(18)

Taylor series expansions of the derivatives of \(f\) at the points \(x_n \in I\) and \(\xi \in I\) about the zero \(\alpha \) of \(f\) give

$$\begin{aligned} f'{(x_n)}= & {} f'{(\alpha )}\big (1+2c_2e_n+3c_3e_n^2+O{(e_n^3)}\big ), \end{aligned}$$
(19)
$$\begin{aligned} f''{(x_n)}= & {} f'{(\alpha )}\big (2c_2+6c_3e_n+O{(e_n^2)}\big ), \end{aligned}$$
(20)

and

$$\begin{aligned} f^{(m+1)}{(\xi )}= & {} f'{(\alpha )}\big ((m+1)!c_{m+1}+(m+2)!c_{m+2}e_{\xi }+O{(e_{\xi }^2)}\big ), \end{aligned}$$
(21)

where \(e_{\xi }=\xi -\alpha \). Substituting (20) and (21) into (18), we find

$$\begin{aligned} H''_{m}{(x_n)}= & {} 2f'{(\alpha )}\Bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+3c_3e_n\Bigg ), \end{aligned}$$
(22)

which implies

$$\begin{aligned} \frac{H''_m(x_n)}{2f'{(x_n)}} \sim \Bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\Bigg ). \end{aligned}$$
(23)

Thus

$$\begin{aligned} T_{n} \sim \Bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\Bigg ), \end{aligned}$$
(24)

or

$$\begin{aligned} T_{n}-c_2 \sim \Bigg (-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\Bigg ). \end{aligned}$$
(25)

The concept of the R-order of convergence [13] and the following assertion (see [14], p. 287) will be applied to estimate the convergence order of the iterative method (10).

Theorem 2

If the errors of approximations \(e_j=x_j-\alpha \) obtained in an iterative root finding method IM satisfy

$$\begin{aligned} e_{k+1} \sim \prod _{i=0}^{n} (e_{k-i})^{m_i}, \quad k \ge k{(\{e_k\})}, \end{aligned}$$

then the R-order of convergence of IM, denoted by \(O_R (IM,\alpha )\), satisfies the inequality \(O_R (IM,\alpha ) \ge s^*\), where \(s^*\) is the unique positive solution of the equation \(s^{n+1}-\sum _{i=0}^{n} m_i s^{n-i}=0\).

We state the following convergence theorem for the iterative method with memory (10).

Theorem 3

Let the varying parameter \(T_n\) in the iterative method (10) be calculated by (11)–(13). If an initial approximation \(x_0\) is sufficiently close to a simple root \(\alpha \) of \(f(x)\), then the R-order of convergence of the iterative methods (10)–(11), (10)–(12) and (10)–(13) with memory is at least \((5+\sqrt{17})/2 \approx 4.5616\), \((5+\sqrt{21})/2 \approx 4.7913\) and 5, respectively.

Proof

Let the sequence \(\{x_n\}\), generated by an iterative method (IM), converge to the root \(\alpha \) of \(f(x)\) with R-order \(O_R(IM,\alpha ) \ge r\); then we may write

$$\begin{aligned} e_{n+1} \sim D_{n,r} e_n^r,&e_n=x_n-\alpha . \end{aligned}$$
(26)

If we take \(n \rightarrow \infty \), then \(D_{n,r}\) tends to the asymptotic error constant \(D_r\) of IM. Consequently

$$\begin{aligned} e_{n+1} \sim D_{n,r}( D_{n-1,r}e_{n-1}^r)^r=D_{n,r}D_{n-1,r}^{r}e_{n-1}^{r^2}. \end{aligned}$$
(27)

The following error relations for the with-memory method (10) can be obtained from (8), (9) and the varying parameter \(T_n\):

$$\begin{aligned} e_{n,y}= & {} y_n-\alpha \sim (T_n-c_2)e_n^2, \end{aligned}$$
(28)

and

$$\begin{aligned} e_{n+1}= & {} x_{n+1}-\alpha \sim B_{n,4}(T_n-c_2)e_n^4, \end{aligned}$$
(29)

where \(B_{n,4}\) is a varying quantity, coming from (9), that depends on the self-accelerating parameter \(T_n\). Here, we have omitted higher-order terms in (28) and (29).

Method 1. \(T_n\) is calculated by (11): In a manner similar to the derivation of (26), we suppose that the iterative sequence \(\{y_n\}\) has R-order \(p\), so that

$$\begin{aligned} e_{n,y} \sim D_{n,p} e_n^p \sim D_{n,p} (D_{n-1,r} e_{n-1}^r) ^p = D_{n,p}D_{n-1,r}^p e_{n-1}^{rp}. \end{aligned}$$
(30)

Using Theorem 1 for \(m=2\) and \(t_0=y_{n-1}\), we obtain

$$\begin{aligned} T_n-c_2 \sim c_3e_{t,0}= & {} c_3e_{n-1,y}. \end{aligned}$$
(31)

Now from (28), (29) and (31), we obtain

$$\begin{aligned} e_{n,y} \sim c_3e_{n-1,y}(D_{n-1,r} e_{n-1}^r)^2 \sim c_3D_{n-1,p}D_{n-1,r}^2 e_{n-1}^{2r+p}, \end{aligned}$$
(32)

and

$$\begin{aligned} e_{n+1}\sim & {} B_{n,4} c_3e_{n-1,y}e_n^4 \sim B_{n,4} c_3D_{n-1,p} e_{n-1}^p(D_{n-1,r} e_{n-1}^r)^4,\nonumber \\\sim & {} B_{n,4} c_3D_{n-1,p}D_{n-1,r}^4 e_{n-1}^{4r+ p}. \end{aligned}$$
(33)

The following system of equations is obtained by comparing the exponents of \(e_{n-1}\) appearing in the two pairs of relations (30)–(32) and (27)–(33):

$$\begin{aligned} 2r+p= & {} rp,\nonumber \\ 4r+p= & {} r^2. \end{aligned}$$
(34)

The positive solution of system (34) is given by \(r=(5+ \sqrt{17})/2\) and \( p=(1+\sqrt{17})/2\). As a result, the R-order of the with-memory method (10)–(11) is at least \((5+ \sqrt{17})/2 \approx 4.5616\).
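For completeness, the elimination runs as follows: subtracting the first equation of (34) from the second gives \(r^2-rp=2r\), i.e. \(p=r-2\); substituting this back into the second equation yields

$$\begin{aligned} r^2=4r+(r-2) \quad \Longleftrightarrow \quad r^2-5r+2=0, \end{aligned}$$

and taking the larger root gives \(r=(5+\sqrt{17})/2\), whence \(p=r-2=(1+\sqrt{17})/2\).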

Method 2. \(T_n\) is calculated by (12): Using Theorem 1 for \(m=3\), \(t_0=y_{n-1}\) and \(t_1=x_{n-1}\), we obtain

$$\begin{aligned} T_n-c_2 \sim -c_4e_{t,0}e_{t,1}= & {} -c_4e_{n-1,y}e_{n-1}. \end{aligned}$$
(35)

Now, using (28), (29) and (35), we obtain

$$\begin{aligned} e_{n,y}\sim & {} (T_n-c_2)e_n^2 \sim -c_4e_{n-1} e_{n-1,y}(D_{n-1,r} e_{n-1}^r)^2\nonumber \\&\sim -c_4 D_{n-1,p}D_{n-1,r}^2 e_{n-1}^{2r+p+1}, \end{aligned}$$
(36)

and

$$\begin{aligned} e_{n+1}\sim & {} - B_{n,4}c_4e_{n-1}e_{n-1,y}e_n^4 \sim - B_{n,4}c_4e_{n-1} D_{n-1,p}e_{n-1}^p(D_{n-1,r} e_{n-1}^r)^4\nonumber \\&\sim - B_{n,4}c_4 D_{n-1,p} D_{n-1,r}^4e_{n-1}^{4r+p+1}. \end{aligned}$$
(37)

We obtain the following system of equations by comparing the exponents of \(e_{n-1}\) appearing in the two pairs of relations (30)–(36) and (27)–(37):

$$\begin{aligned} 2r+p+1= & {} rp,\nonumber \\ 4r+p+1= & {} r^2. \end{aligned}$$
(38)

Eliminating \(p\) from system (38) in the same way gives \(p=r-2\) and \(r^2-5r+1=0\); its positive solution is \(r=(5+ \sqrt{21})/2\) with \( p=(1+\sqrt{21})/2\). As a result, the R-order of the with-memory method (10)–(12) is at least \((5+ \sqrt{21})/2 \approx 4.7913\).
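Both bounds are easy to verify numerically; a small NumPy sketch (ours, for checking only):

```python
import numpy as np

# Characteristic equations r^2 - 5r + 2 = 0 (Method 1, from (34)) and
# r^2 - 5r + 1 = 0 (Method 2, from (38)); take the larger (relevant) roots.
for c in (2.0, 1.0):
    print(max(np.roots([1.0, -5.0, c])))   # 4.5616..., 4.7913...
```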

Method 3. \(T_n\) is calculated by (13): The Hermite interpolating polynomial \(H_4(x)\) satisfies the conditions \(H_4(x_n)=f(x_n)\), \(H'_4(x_n)=f'(x_n)\), \(H_4(y_{n-1})=f(y_{n-1})\), \(H_4(x_{n-1})=f(x_{n-1})\) and \(H'_4(x_{n-1})=f'(x_{n-1})\). The error expression of the Hermite interpolation can be expressed as follows:

$$\begin{aligned} f(x)- H_4(x)= & {} \frac{f^{(5)}(\xi )}{5!}(x-x_n)^2(x-x_{n-1})^2(x-y_{n-1}),(\xi \in I). \end{aligned}$$
(39)

After differentiating (39) twice at the point \(x=x_n\), we obtain

$$\begin{aligned} H''_4(x_n)=f''(x_n)- 2\frac{f^{(5)}(\xi )}{5!}(x_n-x_{n-1})^2(x_n-y_{n-1}),(\xi \in I). \end{aligned}$$
(40)

Taylor series expansions of the derivatives of \(f\) at the points \(x_n \in I\) and \(\xi \in I\) about the zero \(\alpha \) of \(f\) give

$$\begin{aligned} f'(x_n)= & {} f'(\alpha )(1+2c_2e_n+3c_3e_n^2+O(e_n^3)), \end{aligned}$$
(41)
$$\begin{aligned} f''(x_n)= & {} f'(\alpha )(2c_2+6c_3e_n+O(e_n^2)), \end{aligned}$$
(42)

and

$$\begin{aligned} f^{(m+1)}(\xi )= & {} f'(\alpha )\big ((m+1)!c_{m+1}+(m+2)!c_{m+2}e_{\xi }+O(e_{\xi }^2)\big ), \end{aligned}$$
(43)

where \(e_{\xi }=\xi -\alpha \).

Substituting (43) and (42) into (40), we obtain

$$\begin{aligned} H''_4(x_n)= & {} 2f'(\alpha )(c_2+c_5e_{n-1,y}e_{n-1}^2+3c_3e_n). \end{aligned}$$
(44)

By means of (28) and (29), we have

$$\begin{aligned} e_{n-1,y}= & {} y_{n-1}-\alpha \sim (T_{n-1}-c_2)e_{n-1}^2 \end{aligned}$$
(45)

and

$$\begin{aligned} e_{n}= & {} x_{n}-\alpha \sim B_{n-1,4}(T_{n-1}-c_2)e_{n-1}^4. \end{aligned}$$
(46)

Then

$$\begin{aligned} e_{n-1,y}e_{n-1}^2 \sim (T_{n-1}-c_2)e_{n-1}^4 \sim \frac{1}{B_{n-1,4}}e_n, \end{aligned}$$
(47)

after substituting (47) into (44), we obtain

$$\begin{aligned} H''_4(x_n)= & {} 2f'(\alpha ) \Bigg (c_2+ \Bigg (3c_3+\frac{c_5}{B_{n-1,4}}\Bigg ) e_n\Bigg ), \end{aligned}$$
(48)

which implies

$$\begin{aligned} \frac{H''_4(x_n)}{2f'{(x_n)}} \sim c_2+\Bigg (3c_3-2c_2^2+\frac{c_5}{B_{n-1,4}}\Bigg )e_n. \end{aligned}$$
(49)

Hence

$$\begin{aligned} T_n= & {} \frac{H''_4(x_n)}{2f'{(x_n)}} \sim c_2+\Bigg (3c_3-2c_2^2+\frac{c_5}{B_{n-1,4}}\Bigg )e_n, \end{aligned}$$
(50)

or

$$\begin{aligned} T_n-c_2\sim & {} \Bigg (3c_3-2c_2^2+\frac{c_5}{B_{n-1,4}}\Bigg )e_n. \end{aligned}$$
(51)

Substituting (51) into (29), we obtain

$$\begin{aligned} e_{n+1}\sim & {} B_{n,4}\left( 3c_3-2c_2^2+\frac{c_5}{B_{n-1,4}}\right) e_n^5. \end{aligned}$$
(52)

As a result, the R-order of the method with memory (10)–(13) is at least 5. This completes the proof.

3 Results and discussion

Now we compare our two-step with-memory method OM4(2.6) with existing two-step with-memory methods. Wang and Zhang [15] proposed two such methods, both of the following two-step form:

$$\begin{aligned} y_n= & {} x_n-\frac{f{(x_n)}}{T_n f{(x_n)}+f'{(x_n)}},\nonumber \\ x_{n+1}= & {} y_n-\frac{f{(y_n)}}{2T_n f{(x_n)}+f'{(x_n)}}\nonumber \\&\times \left( 1+\frac{2f{(y_n)}}{f{(x_n)}}+a\left( \frac{2f{(y_n)}}{f{(x_n)}}\right) ^2\right) , \end{aligned}$$
(53)

where \(a \in R\), which is denoted by XW41(16), and

$$\begin{aligned} y_n= & {} x_n-\frac{f{(x_n)}}{T_n f{(x_n)}+f'{(x_n)}},\nonumber \\ x_{n+1}= & {} y_n-\frac{f{(y_n)}}{2T_n f{(x_n)}+f'{(x_n)}}\left( \frac{f{(x_n)}+(2+b)f{(y_n)}}{f{(x_n)}+bf{(y_n)}}\right) , \end{aligned}$$
(54)

where \(b \in R\), which is denoted by XW42(17). We take the values of the self-accelerating parameter \(T_n\) for both methods in the following forms:

Method 4:

$$\begin{aligned} T_n= & {} -\frac{H''_2{(x_n)}}{2f'{(x_n)}}, \end{aligned}$$
(55)

where \(H_2{(x)}= f{(x_n)}+ f{[x_n,x_n]}(x-x_n)+ f{[x_n,x_n,y_{n-1}](x-x_n)^2}\) and \(H''_2{(x)}= 2f{[x_n,x_n,y_{n-1}]}\).

Method 5:

$$\begin{aligned} T_n= & {} -\frac{H''_3{(x_n)}}{2f'{(x_n)}}, \end{aligned}$$
(56)

where \(H_3{(x)}= H_2{(x)}+ f{[x_n,x_n,y_{n-1},x_{n-1}]}(x-x_n)^2(x-y_{n-1})\) and \(H''_3{(x)}= 2f{[x_n,x_n,y_{n-1}]}+2f{[x_n,x_n,y_{n-1},x_{n-1}]}(x_n-y_{n-1})\).

Table 1 Test functions and their roots
Table 2 Comparison of absolute error

Method 6:

$$\begin{aligned} T_n= & {} -\frac{H''_4{(x_n)}}{2f'{(x_n)}}, \end{aligned}$$
(57)

where \(H_4{(x)}= H_3{(x)}+ f{[x_n,x_n,y_{n-1},x_{n-1},x_{n-1}]}(x-x_n)^2(x-y_{n-1})(x-x_{n-1})\) and \(H''_4{(x_n)}= 2f{[x_n,x_n,y_{n-1}]}+2f{[x_n,x_n,y_{n-1},x_{n-1}]}(x_n-y_{n-1})+2f{[x_n,x_n,y_{n-1},x_{n-1},x_{n-1}]}(x_n-y_{n-1})(x_n-x_{n-1})\).

Table 1 lists four nonlinear test functions together with their roots; two functions are taken from [16] and the other two from [17]. The numerical results presented in Table 2 are in accordance with the theory developed in this paper. The absolute errors \(|x_n-\alpha |\) are given for our proposed method OM4(2.6), where \(\alpha \) is the exact root. All computations have been performed using the programming package MATHEMATICA 8. The computational order of convergence (COC) [18] is approximated by means of

$$\begin{aligned} COC \approx \frac{\ln |f(x_{n+1})/f(x_{n})|}{\ln |f(x_{n})/f(x_{n-1})|}, \end{aligned}$$

whose computed values confirm all the theoretical rates of convergence. Our proposed method OM4(2.6) has been used to solve the nonlinear test functions, and the computed results are compared with those of existing methods of the same nature, XW41(16) and XW42(17). Another effective way to compare the efficiency of methods is the CPU time used in executing the program. Here, the CPU time has been calculated by means of the command TimeUsed[] in MATHEMATICA 8. The CPU time depends on the specification of the computer; throughout this paper the computer used runs Microsoft Windows 7 on an Intel Core i3-2330M CPU @ 2.20 GHz with 2 GB of RAM under a 64-bit operating system. The mean CPU time is calculated as the mean over 30 runs of the program. It can be observed from Table 2 that the results obtained by our proposed method are efficient and show better performance than the other existing methods.
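A sketch of this estimator from three consecutive iterates (our helper, not the paper's MATHEMATICA code):

```python
import math

def coc(f, x_prev, x_curr, x_next):
    """Computational order of convergence, cf. [18]:
    COC ~ ln|f(x_{n+1})/f(x_n)| / ln|f(x_n)/f(x_{n-1})|."""
    return (math.log(abs(f(x_next) / f(x_curr)))
            / math.log(abs(f(x_curr) / f(x_prev))))
```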

4 Conclusion

We have presented a multipoint with-memory iterative method for finding simple roots of nonlinear equations with minimum computational effort and improved convergence order, obtained by modifying an existing without-memory method. Since our aim is to construct a method of higher convergence order without any additional calculation, we have used three different approximations of the self-accelerating parameter, designed by Hermite interpolating polynomials, in the fourth-order method. The R-order of convergence of the new with-memory iterative methods is thereby increased from 4 to 4.5616, 4.7913 and 5, respectively, without any additional calculation. Our assessment of the proposed method is also based on the efficiency index, the computational order of convergence (COC) and the CPU time. Finally, the numerical results confirm the validity of the theoretical results.