Introduction

The main goal of this article is to compute a simple root of the nonlinear equation \(f(x)=0\), a problem that arises in many real-world phenomena and has attracted considerable attention in recent years. Numerous multipoint iterative schemes have been proposed to solve such nonlinear equations (see [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]). These iterative techniques play a vital role in numerical analysis because of their numerous applications in different branches of science, economics, dynamical models, engineering, and so on. Among them, the most eminent one-point iterative scheme without memory is the classical Newton–Raphson method, defined as follows:

$$\begin{aligned} z_{n+1}=z_n-\frac{f{(z_n)}}{f'{(z_n)}}, \end{aligned}$$
(1)

which is used to find a solution of the nonlinear equation and whose convergence order is 2. A drawback of this scheme is the assumption \(f'(z_n) \ne 0\), which limits its practical applications. To resolve this issue, Kumar et al. [12] developed a one-point iterative scheme, given as:

$$\begin{aligned} z_{n+1}=z_n-\frac{f{(z_n)}}{f'{(z_n)}-\gamma f{(z_n)}}. \end{aligned}$$
(2)

It is clear from the above equation that by taking the value of \(\gamma = 0\) in (2), we will get Newton’s method (1). The error equation for Kumar’s approach is:

$$\begin{aligned} e_{n+1}=(\gamma -c_2)e_n^{2}+O(e_n^{3}), \end{aligned}$$
(3)

where \(e_n=z_n- \alpha \), \(c_i=\frac{1}{i!} \frac{f^{(i)}{(\alpha )}}{f'{(\alpha )}}\), \(i=2,3,\ldots \), and \( \alpha \) is the root of \(f{(z)}=0\). The error equation (3) clearly shows that the convergence order of the method can be increased by taking \(\gamma = c_2\).
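The following minimal Python sketch (illustrative only; the test equation, starting point and the way the reference root is obtained are assumptions, not taken from the paper) compares scheme (2) with \(\gamma = 0\), i.e. Newton's method, and with \(\gamma = c_2\); the latter choice visibly reduces the error, as predicted by (3):

```python
# Minimal sketch of scheme (2); the test equation, starting point and the way
# the reference root is obtained are illustrative assumptions.
import math

def kumar_step(f, df, z, gamma):
    """One iteration of z_{n+1} = z_n - f(z_n) / (f'(z_n) - gamma*f(z_n))."""
    return z - f(z) / (df(z) - gamma * f(z))

if __name__ == "__main__":
    f = lambda z: math.exp(z) - z - 2.0          # illustrative equation f(z) = 0
    df = lambda z: math.exp(z) - 1.0

    alpha = 2.0                                  # reference root via Newton's method
    for _ in range(60):
        alpha -= f(alpha) / df(alpha)

    c2 = math.exp(alpha) / (2.0 * df(alpha))     # c2 = f''(alpha) / (2 f'(alpha))
    for gamma, label in [(0.0, "gamma = 0 (Newton)"), (c2, "gamma = c2")]:
        z, errors = 2.0, []
        for _ in range(4):
            z = kumar_step(f, df, z, gamma)
            errors.append(abs(z - alpha))
        print(label, ["%.2e" % e for e in errors])
```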

Next, we discuss the classification of possible types of iteration functions (I.F.). These I.F. have been categorized on the basis of the information they require [19, 20]:

(i) A one-point iteration function without memory \(x_{k+1}=\phi ({x_k})\) is determined only by new data at \(x_{k}\), where \(\phi \) is called a one-point I.F. The best known example of a one-point I.F. is Newton’s I.F. On the other hand, a multipoint iteration function without memory \(x_{k+1}=\phi {(x_k, w_1{(x_{k})},\ldots ,w_n{(x_{k})})}\) is determined only by new information at \(x_{k}, w_1{(x_{k})},\ldots ,w_n{(x_{k})}\) \((n \ge 1)\), where \(\phi \) is called a multipoint I.F. without memory.

(ii) A one-point iteration function with memory \(x_{k+1}= \phi (x_k;x_{k-1},\ldots ,x_{k-n})\) is determined by new data at \(x_{k}\) and reused data at \( x_{k-1},\ldots , x_{k-n}\), where \(\phi \) is known as a one-point I.F. with memory. The best known example of a one-point iterative method with memory is the secant method. On the other hand, a multipoint iteration function with memory has the form \(x_{k+1}= \phi {(z_k; z_{k-1},\ldots ,z_{k-n})}\), where the I.F. \(\phi \) has arguments \(z_j\), each of which represents the \(n+1\) quantities \(x_j, w_1{(x_{j})},\ldots ,w_n{(x_{j})}\) \((n \ge 1)\); such a \(\phi \) is called a multipoint I.F. with memory. In the above mapping, the semicolon separates the point at which new information is used from the points at which old information is reused, i.e., in each iterative step we must preserve the information of the last n approximations \(x_j\), and for each approximation we must calculate n expressions \(w_1{(x_{j})},\ldots ,w_n{(x_{j})}\).

Multipoint iterative schemes are of great practical importance, since they overcome the theoretical limitations of one-point iterative schemes with regard to convergence order and computational efficiency. They also produce approximations of higher accuracy, and improvements in computer arithmetic and symbolic computation have allowed efficient implementation of multipoint methods. Memory-based multipoint iterative schemes use data from the current and previous iterations. Although the first scheme of this type dates back to Traub's book of 1964, contributions in this field appear only rarely in the literature. To help fill this gap, this article presents a two-step scheme with memory whose convergence order exceeds that of the corresponding optimal without-memory scheme. The improved order of convergence is achieved without any extra function evaluations by using self-accelerating parameters, which results in higher computational efficiency.

The main objective of this article is to construct methods with high computational efficiency. Such schemes are referred to as multipoint iterative schemes with memory, and they accelerate the convergence order of without-memory schemes without any additional function evaluations. The paper is organized as follows: In Sect. 2 the new multipoint iterative scheme with memory is constructed by introducing a self-accelerating parameter into the first step of an existing fifth-order without-memory method, and its convergence analysis is studied. The self-accelerating parameters are calculated using Hermite interpolating polynomials. With these parameters, the R-order of convergence of the proposed scheme is accelerated from 5 to 5.8284, 6.0275 and 6.2128. Numerical calculations for several examples are performed in Sect. 3 to support the theoretical analysis. Concluding remarks are given in the last section.

Analysis of Convergence for with Memory Methods

In this section, we increase the convergence order of the scheme proposed by Noor et al. [3] by introducing a parameter T and subsequently replacing it with the iterative parameter \(T_n\). To this end, we first include the parameter T in the first sub-step of the fifth-order without-memory scheme presented in [3]:

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{f'{(z_n)-T f{(z_n)}}},\nonumber \\ z_{n+1}= & {} w_n-\frac{2f{(z_n)}f{(w_n)}f'{(w_n)}}{2f{(z_n)}f'{(w_n)}^2-f'{(z_n)}^{2} f{(w_n)}+f'{(z_n)}f'{(w_n)}f{(w_n)}}. \end{aligned}$$
(4)
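The following minimal Python sketch (illustrative only; the test function and starting point are assumed for demonstration and are not taken from the paper) shows one pass of the two sub-steps of (4); with \(T=0\) the first sub-step reduces to Newton's step and (4) coincides with the fifth-order scheme of [3]:

```python
# Minimal sketch of one iteration of scheme (4); the test function and the
# starting point are illustrative assumptions, not taken from the paper.
def scheme4_step(f, df, z, T):
    """Return (w_n, z_{n+1}) produced by the two sub-steps of (4)."""
    fz, dfz = f(z), df(z)
    if fz == 0.0:                              # already at a root
        return z, z
    w = z - fz / (dfz - T * fz)                # first sub-step
    fw, dfw = f(w), df(w)
    denom = 2.0 * fz * dfw**2 - dfz**2 * fw + dfz * dfw * fw
    return w, w - 2.0 * fz * fw * dfw / denom  # second sub-step

if __name__ == "__main__":
    f = lambda z: z**3 + 4.0 * z**2 - 10.0
    df = lambda z: 3.0 * z**2 + 8.0 * z
    z = 1.5
    for _ in range(3):
        _, z = scheme4_step(f, df, z, T=0.0)   # T = 0: the scheme of [3]
        print("z = %.16f   |f(z)| = %.2e" % (z, abs(f(z))))
```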

The error expressions of each sub-step of (4) are:

$$\begin{aligned} e_{n,w}= & {} w_n - \alpha = (c_2-T)e_n^2 + (-2c_2^2-T^2+2c_2T+2c_3)e_n^3\nonumber \\&+(T^3+5Tc_2^2-4c_2^3-4Tc_3+c_2(7c_3-3T^2)-3c_4)e_n^4+O{(e_n^5)}, \end{aligned}$$
(5)

and

$$\begin{aligned} e_{n+1}= & {} \frac{-1}{2}{(T-c_2)^2}(2Tc_2+3c_3)e_n^5\nonumber \\&-\frac{1}{2}(T-c_2)[2c_2^4+6c_2 T^3+c_2^2(6Tc_2+13c_3)+2T(3T^{2}c_2+5Tc_3+2c_4)\nonumber \\&-(10T^{2}c_2^{2}+12c_3^{2}+c_2(25Tc_3+4c_4))]e_n^{6}+O{(e_n^7)}, \end{aligned}$$
(6)

where \(e_{n,w}= w_{n} - \alpha \), \(e_{n}= z_{n} - \alpha \) and \(c_i=\frac{f^{(i)}(\alpha )}{i!f'{(\alpha )}}\) for \(i=2,3,\ldots \), and \(T \in R\). Replacing T with \(T_n\) in (4), we obtain the following iterative scheme with memory:

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{f'{(z_n)}-T_n f{(z_n)}},\nonumber \\ z_{n+1}= & {} w_n-\frac{2f{(z_n)}f{(w_n)}f'{(w_n)}}{2f{(z_n)}f'{(w_n)}^2-f'{(z_n)}^{2} f{(w_n)}+f'{(z_n)}f'{(w_n)}f{(w_n)}}, \end{aligned}$$
(7)

and the above scheme is denoted by MWM5. From Eq. (6) it is clear that the convergence order of algorithm (4) is five when \(T \ne c_2\). To accelerate the order of convergence of method (4) beyond five, we could take \(T=c_2=f''{(\alpha )}/(2f'{(\alpha )})\), but in practice the exact values of \(f'{(\alpha )}\) and \(f''{(\alpha )}\) are not available. We therefore replace the parameter T by the iterative parameter \(T_n\). The parameter \(T_n\) is calculated using the data available from the previous and current iterations and satisfies \(\lim _{n\rightarrow \infty } T_n = c_2 = f''{(\alpha )}/(2f'{(\alpha )})\), so that the fifth- and sixth-order asymptotic error constants in the error expression (6) vanish. Different formulas for \(T_n\) are as follows:

Method 1:

$$\begin{aligned} T_n= & {} \frac{H''_2{(z_n)}}{2f'{(z_n)}}, \end{aligned}$$
(8)

where \(H_2{(z)}= f{(z_n)}+ f{[z_n,z_n]}(z-z_n)+ f{[z_n,z_n,w_{n-1}](z-z_n)^2}\) and \(H''_2{(z_n)}=2 f{[z_n,z_n,w_{n-1}]}\).

Method 2:

$$\begin{aligned} T_n= & {} \frac{H''_3{(z_n)}}{2f'{(z_n)}}, \end{aligned}$$
(9)

where \(H_3{(z)}= H_2{(z)}+ f{[z_n,z_n,w_{n-1},z_{n-1}]}(z-z_n)^2(z-w_{n-1})\) and \(H''_3{(z_n)}= 2f{[z_n,z_n,w_{n-1}]}+2f{[z_n,z_n,w_{n-1},z_{n-1}]}(z_n-w_{n-1})\).

Method 3:

$$\begin{aligned} T_n= & {} \frac{H''_4{(z_n)}}{2f'{(z_n)}}, \end{aligned}$$
(10)

where \(H_4{(z)}= H_3{(z)}+ f{[z_n,z_n,w_{n-1},z_{n-1},z_{n-1}]}(z-z_n)^2(z-w_{n-1})(z-z_{n-1})\) and \(H''_4{(z_n)}= 2f{[z_n,z_n,w_{n-1}]}+(4f{[z_n,z_n,w_{n-1},z_{n-1}]} -2f{[z_n,w_{n-1},z_{n-1},z_{n-1}]})(z_n-w_{n-1})\).

Note: The condition \(H'_m(z_n)=f'(z_n)\) is satisfied by the Hermite interpolation polynomial \(H_m(z)\) for \(m=2,3,4\). So, \(T_n=\frac{H''_m(z_n)}{2f'(z_n)}\) can be expressed as \(T_n=\frac{H''_m(z_n)}{2H'_m(z_n)}\) \((m=2,3,4)\).
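In terms of divided differences, the parameters (8)–(10) can be evaluated directly from quantities already available from the current and previous iterations. The following Python sketch (an assumed helper, not the authors' code; the demonstration function and nodes are illustrative) computes \(T_n\) for \(m=2,3,4\):

```python
# Assumed helper (not the authors' code): the self-accelerating parameter
# T_n = H''_m(z_n) / (2 f'(z_n)) for m = 2, 3, 4, written via the divided
# differences appearing in (8)-(10).  All required data -- f(z_n), f'(z_n),
# f(w_{n-1}), f(z_{n-1}), f'(z_{n-1}) -- is already available from the current
# and previous iterations, so no new function evaluations are needed.

def t_n(m, zn, fzn, dfzn, w_old, fw_old, z_old, fz_old, dfz_old):
    """Self-accelerating parameter based on the Hermite interpolant H_m."""
    f_zw = (fzn - fw_old) / (zn - w_old)              # f[z_n, w_{n-1}]
    f_zzw = (f_zw - dfzn) / (w_old - zn)              # f[z_n, z_n, w_{n-1}]
    h2 = 2.0 * f_zzw                                  # H_2''(z_n)
    if m == 2:
        return h2 / (2.0 * dfzn)

    f_wz = (fw_old - fz_old) / (w_old - z_old)        # f[w_{n-1}, z_{n-1}]
    f_zwz = (f_wz - f_zw) / (z_old - zn)              # f[z_n, w_{n-1}, z_{n-1}]
    f_zzwz = (f_zwz - f_zzw) / (z_old - zn)           # f[z_n, z_n, w_{n-1}, z_{n-1}]
    h3 = h2 + 2.0 * f_zzwz * (zn - w_old)             # H_3''(z_n)
    if m == 3:
        return h3 / (2.0 * dfzn)

    f_wzz = (dfz_old - f_wz) / (z_old - w_old)        # f[w_{n-1}, z_{n-1}, z_{n-1}]
    f_zwzz = (f_wzz - f_zwz) / (z_old - zn)           # f[z_n, w_{n-1}, z_{n-1}, z_{n-1}]
    f_zzwzz = (f_zwzz - f_zzwz) / (z_old - zn)        # f[z_n, z_n, w_{n-1}, z_{n-1}, z_{n-1}]
    h4 = h3 + 2.0 * f_zzwzz * (zn - w_old) * (zn - z_old)   # H_4''(z_n)
    return h4 / (2.0 * dfzn)

if __name__ == "__main__":
    import math
    # illustrative check with f = sin (root alpha = 0, c2 = f''(0)/(2f'(0)) = 0):
    # all three values of T_n tend to c2 = 0 as the nodes approach the root.
    zn, w_old, z_old = 1e-3, 2e-3, 5e-3
    for m in (2, 3, 4):
        print(m, t_n(m, zn, math.sin(zn), math.cos(zn),
                     w_old, math.sin(w_old), z_old, math.sin(z_old), math.cos(z_old)))
```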

Theorem 1

Let \(H_m\) be the Hermite polynomial of degree m that interpolates a function f at the interpolation nodes \(z_n,z_n,t_0,\ldots ,t_{m-2}\) contained in an interval I, and let the derivative \(f^{(m+1)}\) be continuous in I, with \(H_m{(z_n)}= f{(z_n)}\), \(H'_m{(z_n)}=f'{(z_n)}\), \(H_m{(t_j)}=f{(t_j)}\) \((j=0,1,\ldots ,m-2)\). Denote \(e_{t,j}=t_j-\alpha \) \((j=0,1,\ldots ,m-2)\) and suppose that

  1. all nodes \(z_n,t_0,\ldots ,t_{m-2}\) are sufficiently close to the root \(\alpha \);

  2. the condition \(e_n=O(e_{t,0}\ldots e_{t,m-2})\) holds. Then

    $$\begin{aligned}&H''_{m}{(z_n)}=2f'{(\alpha )}\bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+3c_3e_n\bigg ), \end{aligned}$$
    (11)
    $$\begin{aligned}&T_{n}=\frac{H''_m(z_n)}{2f'{(z_n)}} \sim \bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\bigg ), \end{aligned}$$
    (12)

    and

    $$\begin{aligned} T_{n}-c_2 \sim \bigg (-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\bigg ). \end{aligned}$$
    (13)

Proof

The error of the Hermite interpolation can be expressed as

$$\begin{aligned} f{(z)}-H_{m}{(z)}=\frac{f^{(m+1)}(\xi )}{(m+1)!}(z-z_n)^2\prod _{j=0}^{m-2}(z-t_j),(\xi \in I). \end{aligned}$$
(14)

Differentiating (14) twice and evaluating at the point \(z=z_n\), we obtain

$$\begin{aligned} H''_{m}{(z_n)}=f''{(z_n)}-2\frac{f^{(m+1)}(\xi )}{(m+1)!} \prod _{j=0}^{m-2}(z_n-t_j),(\xi \in I). \end{aligned}$$
(15)

Next, the Taylor series expansions of \(f'\), \(f''\) and \(f^{(m+1)}\) about the zero \(\alpha \) of f, at the points \(z_n \in I\) and \(\xi \in I\), give

$$\begin{aligned} f'{(z_n)}= & {} f'{(\alpha )}\big (1+2c_2e_n+3c_3e_n^2+O{(e_n^3)}\big ), \end{aligned}$$
(16)
$$\begin{aligned} f''{(z_n)}= & {} f'{(\alpha )}\big (2c_2+6c_3e_n+O{(e_n^2)}\big ), \end{aligned}$$
(17)

and

$$\begin{aligned} f^{(m+1)}{(\xi )}=f'{(\alpha )}\big ((m+1)!c_{m+1}+(m+2)!c_{m+2}e_{\xi }+O{(e_{\xi }^2)}\big ), \end{aligned}$$
(18)

where \(e_{\xi }=\xi -\alpha \). Substituting (17) and (18) into (15), we obtain

$$\begin{aligned} H''_{m}{(z_n)}=2f'{(\alpha )}\bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+3c_3e_n\bigg ), \end{aligned}$$
(19)

which implies

$$\begin{aligned} \frac{H''_m(z_n)}{2f'{(z_n)}} \sim \bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\bigg ). \end{aligned}$$
(20)

And hence

$$\begin{aligned} T_{n} \sim \bigg (c_2-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\bigg ), \end{aligned}$$
(21)

or

$$\begin{aligned} T_{n}-c_2 \sim \bigg (-(-1)^{m-1}c_{m+1}\prod _{j=0}^{m-2}e_{t,j}+(3c_3-2c_2^2)e_n\bigg ). \end{aligned}$$
(22)

The definition of R-order of convergence [21] and the following statement [14, page 287] can be used to estimate the order of convergence of iterative scheme (7). \(\square \)

Theorem 2

If the errors \(e_j=z_j-\alpha \) produced by an iterative root-finding method IM satisfy

$$\begin{aligned} e_{k+1} \sim \prod _{i=0}^{n} (e_{k-i})^{m_i}, \quad k \ge k{(\{e_k\})}, \end{aligned}$$

then the R-order of convergence of IM, denoted by \(O_R (IM,\alpha )\), satisfies the inequality \(O_R (IM,\alpha ) \ge s^*\), where \(s^*\) is the unique positive solution of the equation \(s^{n+1}-\sum _{i=0}^{n} m_i s^{n-i}=0\).
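In practice, the bound \(s^*\) of Theorem 2 can be obtained numerically as the unique positive root of the indicated polynomial. The following short Python sketch (an assumed helper, not part of the paper) does this with numpy; the two sample exponent vectors are classical illustrations:

```python
# Assumed helper (not part of the paper): the R-order bound s* of Theorem 2 is
# the unique positive root of s^(n+1) - m_0 s^n - ... - m_n = 0 for given
# exponents m_0, ..., m_n in e_{k+1} ~ prod_i (e_{k-i})^{m_i}.
import numpy as np

def r_order(m):
    """Unique positive root of s^(n+1) - sum_i m_i s^(n-i)."""
    coeffs = [1.0] + [-float(mi) for mi in m]         # descending powers of s
    return max(r.real for r in np.roots(coeffs)
               if abs(r.imag) < 1e-12 and r.real > 0)

if __name__ == "__main__":
    print(r_order([1, 1]))   # secant method, e_{k+1} ~ e_k e_{k-1}: s* = 1.618...
    print(r_order([2, 1]))   # e_{k+1} ~ e_k^2 e_{k-1}: s* = 1 + sqrt(2) = 2.414...
```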

For the new iterative scheme with memory (7) we can now state the following convergence theorem:

Theorem 3

In the iterative method (7), let \(T_n\) be the varying parameter calculated by one of (8)–(10). If the initial guess \(z_0\) is sufficiently close to a simple zero \(\alpha \) of f, then the R-order of convergence of the iterative methods (8)–(7), (9)–(7) and (10)–(7) with memory is at least \((6+\sqrt{32})/2 \approx 5.8284\), 6.0275 and 6.2128, respectively.

Proof

Let the iterative method (IM) generate a sequence \(\{z_n\}\) converging to the root \(\alpha \) of f(z) with R-order \(O_R(IM,\alpha ) \ge r\); then we write

$$\begin{aligned} e_{n+1} \sim D_{n,r} e_n^r. \end{aligned}$$
(23)

where \(D_{n,r}\) tends to the asymptotic error constant \(D_r\) of IM as \(n \rightarrow \infty \). Therefore

$$\begin{aligned} e_{n+1} \sim D_{n,r}( D_{n-1,r}e_{n-1}^r)^r=D_{n,r}D^r_{n-1,r}e_{n-1}^{r^2}. \end{aligned}$$
(24)

The error expressions of the with-memory scheme (7) can be obtained using (5), (6) and the varying parameter \(T_n\):

$$\begin{aligned} e_{n,w}= w_n-\alpha \sim (c_2-T_n)e_n^2, \end{aligned}$$
(25)

and

$$\begin{aligned} e_{n+1}= z_{n+1}-\alpha \sim B_{n,5}(T_n-c_2)^{2}e_n^5+B_{n,6}(T_n-c_2)e_n^6, \end{aligned}$$
(26)

where \(B_{n,5}\) and \(B_{n,6}\), which originate from (6), represent the quantities that vary due to the self-accelerating parameter \(T_n\); the higher-order terms in (25)–(26) are omitted.

Method 1. \(T_n\) is evaluated by (8): The calculation of \(T_n\) is analogous to the derivation of (23). The R-order of convergence of the iterative sequence \(\{w_n\}\) is taken as p, so

$$\begin{aligned} e_{n,w} \sim D_{n,p} e_n^p \sim D_{n,p} (D_{n-1,r} e_{n-1}^r) ^p = D_{n,p}D_{n-1,r}^p e_{n-1}^{rp}. \end{aligned}$$
(27)

Using Theorem 1 for \(m=2\) and \(t_0=w_{n-1}\), we obtain

$$\begin{aligned} T_n-c_2 \sim c_3e_{t,0} = c_3e_{n-1,w}. \end{aligned}$$
(28)

Now from (25), (26) and (28), we get

$$\begin{aligned} e_{n,w} \sim -c_3e_{n-1,w}(D_{n-1,r} e_{n-1}^r)^2 \sim -c_3D_{n-1,p}D_{n-1,r}^2 e_{n-1}^{2r+p}, \end{aligned}$$
(29)

and

$$\begin{aligned} e_{n+1}\sim & {} B_{n,5} c_3^{2}e_{n-1,w}^2e_n^5+B_{n,6} c_3e_{n-1,w}e_n^6 \nonumber \\\sim & {} B_{n,5} c_3^2(D_{n-1,p} e_{n-1}^p)^2 (D_{n-1,r} e_{n-1}^r)^5+ B_{n,6} c_3(D_{n-1,p} e_{n-1}^p) (D_{n-1,r} e_{n-1}^r)^6,\nonumber \\\sim & {} B_{n,5} c_3^2 D_{n-1,p}^2 D_{n-1,r}^5 e_{n-1}^{5r+ 2p}+B_{n,6} c_3 D_{n-1,p} D_{n-1,r}^6 e_{n-1}^{6r+ p},\nonumber \\\sim & {} (B_{n,5} c_3^2 D_{n-1,p}^2 D_{n-1,r}^5+B_{n,6} c_3 D_{n-1,p} D_{n-1,r}^6 e_{n-1}^{r-p}) e_{n-1}^{5r+ 2p}, \end{aligned}$$
(30)

since \(r>p\). Equating the exponents of \(e_{n-1}\) in the pairs of relations (27)–(29) and (24)–(30), we obtain the following system of equations:

$$\begin{aligned} 2r+p=rp,\nonumber \\ 5r+2p=r^2. \end{aligned}$$
(31)

The solution of the system (31) is \(r=(6+ \sqrt{32})/2 \approx 5.8284\) and \( p=1+\sqrt{2} \approx 2.4142\). As a result, the R-order of convergence of the with-memory iterative scheme (8)–(7) is at least \((6+ \sqrt{32})/2 \approx 5.8284\).

Method 2. \(T_n\) is evaluated by (9): Using Theorem 1 for \(m=3\), \(t_0=w_{n-1}\) and \(t_1=z_{n-1}\), we obtain

$$\begin{aligned} T_n-c_2 \sim -c_4e_{t,0}e_{t,1} = -c_4e_{n-1,w}e_{n-1}. \end{aligned}$$
(32)

In accordance with (25), (26) and (32), we find

$$\begin{aligned} e_{n,w}\sim & {} (c_2-T_n)e_n^2 \sim c_4e_{n-1} e_{n-1,w}(D_{n-1,r} e_{n-1}^r)^2\nonumber \\\sim & {} c_4 D_{n-1,p}D_{n-1,r}^2 e_{n-1}^{2r+p+1}, \end{aligned}$$
(33)

and

$$\begin{aligned} e_{n+1}\sim & {} B_{n,5}c_4^2e_{n-1}^2e_{n-1,w}^2e_n^5-B_{n,6}c_4e_{n-1}e_{n-1,w}e_n^6\nonumber \\\sim & {} B_{n,5}c_4^2e_{n-1}^2 (D_{n-1,p}e_{n-1}^{p})^{2}(D_{n-1,r}e_{n-1}^r)^5- B_{n,6}c_4e_{n-1}\nonumber \\&( D_{n-1,p}e_{n-1}^{p})(D_{n-1,r} e_{n-1}^r)^6\nonumber \\\sim & {} B_{n,5}c_4^2 D_{n-1,p}^2 D_{n-1,r}^5e_{n-1}^{5r+2p+2}+B_{n,6}(-c_4) D_{n-1,p}D_{n-1,r}^6e_{n-1}^{6r+p+1} \nonumber \\\sim & {} e_{n-1}^{5r+2p+1}(B_{n,5}c_4^2 D_{n-1,p}^2 D_{n-1,r}^5e_{n-1}^{1}+B_{n,6}(-c_4) D_{n-1,p}D_{n-1,r}^6e_{n-1}^{r-p}). \end{aligned}$$
(34)

Equating the exponents of \(e_{n-1}\) in the pairs of relations (27)–(33) and (24)–(34), we obtain the following system of equations:

$$\begin{aligned} 2r+p+1=rp,\nonumber \\ 5r+2p+1=r^2. \end{aligned}$$
(35)

The solution of the system (35) is \(r=6.0275\) and \( p=2.5967\). As a result, the R-order of convergence of the with-memory iterative scheme (9)–(7) is at least 6.0275.

Method 3. \(T_n\) is evaluated by (10): Using Theorem 1 for \(m=4\), \(t_0=w_{n-1}\) and \(t_1=t_2=z_{n-1}\), we obtain

$$\begin{aligned} T_n-c_2 \sim c_5e_{t,0}e_{t,1}e_{t,2} = c_5e_{n-1,w}e_{n-1}^2. \end{aligned}$$
(36)

In accordance with (25), (26) and (36), we find

$$\begin{aligned} e_{n,w}\sim & {} (c_2-T_n)e_n^2 \sim -c_5e_{n-1}^2 e_{n-1,w}(D_{n-1,r} e_{n-1}^r)^2\nonumber \\\sim & {} -c_5 D_{n-1,p}D_{n-1,r}^2 e_{n-1}^{2r+p+2}, \end{aligned}$$
(37)

and

$$\begin{aligned} e_{n+1}\sim & {} B_{n,5}c_5^2e_{n-1}^4e_{n-1,w}^2e_n^5+B_{n,6}c_5e_{n-1}^2e_{n-1,w}e_n^6\nonumber \\\sim & {} B_{n,5}c_5^2e_{n-1}^4 (D_{n-1,p}e_{n-1}^{p})^2 (D_{n-1,r}e_{n-1}^r)^5+B_{n,6}c_5e_{n-1}^2 \nonumber \\&( D_{n-1,p}e_{n-1}^{p})(D_{n-1,r} e_{n-1}^r)^6\nonumber \\\sim & {} B_{n,5}c_5^2 D_{n-1,p}^2 D_{n-1,r}^5e_{n-1}^{5r+2p+4}+ B_{n,6} c_5 D_{n-1,p}D_{n-1,r}^6e_{n-1}^{6r+p+2} \nonumber \\\sim & {} e_{n-1}^{5r+2p+2}(B_{n,5}c_5^2 D_{n-1,p}^2 D_{n-1,r}^5e_{n-1}^{2}+B_{n,6}c_5 D_{n-1,p}D_{n-1,r}^6e_{n-1}^{r-p}).\nonumber \\ \end{aligned}$$
(38)

Equating the exponents of \(e_{n-1}\) in the pairs of relations (27)–(37) and (24)–(38), we obtain the following system of equations:

$$\begin{aligned} 2r+p+2=rp,\nonumber \\ 5r+2p+2=r^2. \end{aligned}$$
(39)

The solution of the system (39) is \(r=6.2128\) and \( p=2.7673\). As a result, the R-order of convergence of the with-memory iterative scheme (10)–(7) is at least 6.2128. \(\square \)
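The three pairs (r, p) can be verified numerically. Eliminating \(p=(2r+k)/(r-1)\) (with \(k=0,1,2\) for the systems (31), (35) and (39)) from the first equation and substituting into the second reduces each system to the cubic \(r^3-6r^2+(1-k)r-k=0\); the following Python sketch (our own verification, not part of the paper) confirms the values stated above:

```python
# Numerical verification (not part of the paper) of the systems (31), (35),
# (39): 2r + p + k = r p and 5r + 2p + k = r^2 with k = 0, 1, 2.
# Eliminating p = (2r + k)/(r - 1) reduces each system to the cubic
# r^3 - 6 r^2 + (1 - k) r - k = 0.
import numpy as np

for k, label in [(0, "Method 1, (8)-(7)"), (1, "Method 2, (9)-(7)"), (2, "Method 3, (10)-(7)")]:
    roots = np.roots([1.0, -6.0, 1.0 - k, -k])
    r = max(x.real for x in roots if abs(x.imag) < 1e-12)   # relevant positive root
    p = (2.0 * r + k) / (r - 1.0)
    print("%s:  r = %.4f,  p = %.4f" % (label, r, p))
# prints r = 5.8284, 6.0275, 6.2128 and p = 2.4142, 2.5967, 2.7673
```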

Numerical Assessment

In this section we compare the proposed scheme (7), denoted by MWM5, with existing techniques, namely the well-known without-memory fifth-order schemes HM, ISHM, LLGS, NR, SHM, YCHAM and the similar two-step with-memory methods XW41, XW42 presented in the references [2,3,4,5,6,7, 22]. We now state the above-mentioned fifth-order iterative schemes.

In [3], Noor and Noor suggested a two-step modified Halley method with fifth-order convergence:

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{f'{(z_n)}},\nonumber \\ z_{n+1}= & {} w_n-\frac{2f{(z_n)}f{(w_n)}f'{(w_n)}}{2f{(z_n)}f'{(w_n)}^2-f'{(z_n)}^2f{(w_n)}+f'{(z_n)}f'{(w_n)}f{(w_n)}}, \end{aligned}$$
(40)

which is denoted by HM; this is the method that we have modified and converted into a with-memory method in this paper.

In 2007, Noor and Noor [6] proposed a two-step Halley method with fifth-order convergence:

$$\begin{aligned} w_n= & {} z_n-\frac{2f{(z_n)}f'{(z_n)}}{2f'{(z_n)}^2- f{(z_n)}f''{(z_n)}},\nonumber \\ z_{n+1}= & {} z_n-\frac{2[f{(z_n)}+f{(w_n)}]f'{(z_n)}}{2f'{(z_n)}^2-[f{(z_n)}+f{(w_n)}] f''{(z_n)}}, \end{aligned}$$
(41)

which is denoted by NR.

In their papers [4, 5], Kou et al. suggested the following iterative methods:

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{f'{(z_n)}}-\frac{f''{(z_n)}f{(z_n)}^2}{2f'{(z_n)}^3- 2f{(z_n)}f{'(z_n)}f''{(z_n)}},\nonumber \\ z_{n+1}= & {} w_n-\frac{f{(w_n)}}{f'{(z_n)}}-\frac{f''{(z_n)}f{(w_n)}}{2f'{(z_n)}^3}, \end{aligned}$$
(42)

which is denoted by ISHM and

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{f'{(z_n)}}-\frac{f''{(z_n)}f{(z_n)}^2}{2f'{(z_n)}^3- 2f{(z_n)}f{'(z_n)}f''{(z_n)}},\nonumber \\ z_{n+1}= & {} w_n-\frac{f{(w_n)}}{f'{(z_n)}+f''{(z_n)}(w_n-z_n)}, \end{aligned}$$
(43)

which is denoted by SHM.

In 2008, Fang et al. [2] proposed the following fifth-order iterative method:

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{f'{(z_n)}},\nonumber \\ z_{n+1}= & {} w_n-\frac{5f'{(z_n)}^2+3f'{(w_n)}^2}{f'{(z_n)}^2+7f'{(w_n)}^2}\frac{f{(w_n)}}{f'{(z_n)}}, \end{aligned}$$
(44)

which is denoted by LLGS.

In 2007, Ham and Chun [7] proposed the following fifth-order iterative method:

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{f'{(z_n)}},\nonumber \\ z_{n+1}= & {} w_n-\frac{f'{(w_n)}+3f'{(z_n)}}{5f'{(w_n)}-f'{(z_n)}}\frac{f{(w_n)}}{f'{(z_n)}}, \end{aligned}$$
(45)

which is denoted by YCHAM.

In 2014, Wang and Zhang [22] proposed two with-memory methods, both two-step, of the following form:

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{T_n f{(z_n)}+f'{(z_n)}},\nonumber \\ z_{n+1}= & {} w_n-\left( 1-\frac{f{(w_n)}}{2T_n f{(z_n)}+f'{(z_n)}}\right) \nonumber \\&\left( 1+\frac{2f{(w_n)}}{f{(z_n)}}+a\left( \frac{2f{(w_n)}}{f{(z_n)}}\right) ^2\right) , \end{aligned}$$
(46)

where \(a \in R\), which is denoted by XW41, and

$$\begin{aligned} w_n= & {} z_n-\frac{f{(z_n)}}{T_n f{(z_n)}+f'{(z_n)}},\nonumber \\ z_{n+1}= & {} w_n-\left( 1-\frac{f{(w_n)}}{2T_n f{(z_n)}+f'{(z_n)}}\right) \left( \frac{f{(z_n)}+(2+b)f{(w_n)}}{f{(z_n)}+bf{(w_n)}}\right) ,\nonumber \\ \end{aligned}$$
(47)

where \(b \in R\), which is denoted by XW42. We take the values of the self-accelerating parameter \(T_n\) for both methods in the following form:

Method 4:

$$\begin{aligned} T_n= & {} -\frac{H''_2{(z_n)}}{2f'{(z_n)}}, \end{aligned}$$
(48)

where \(H_2{(z)}= f{(z_n)}+ f{[z_n,z_n]}(z-z_n)+ f{[z_n,z_n,w_{n-1}](z-z_n)^2}\) and \(H''_2{(z)}= 2f{[z_n,z_n,w_{n-1}]}\).

Method 5:

$$\begin{aligned} T_n= & {} -\frac{H''_3{(z_n)}}{2f'{(z_n)}}, \end{aligned}$$
(49)

where \(H_3{(z)}= H_2{(z)}+ f{[z_n,z_n,w_{n-1},z_{n-1}]}(z-z_n)^2(z-w_{n-1})\) and \(H''_3{(z)}= 2f{[z_n,z_n,w_{n-1}]}+2f{[z_n,z_n,w_{n-1},z_{n-1}]}(z_n-w_{n-1})\).

Method 6:

$$\begin{aligned} T_n= & {} -\frac{H''_4{(z_n)}}{2f'{(z_n)}}, \end{aligned}$$
(50)

where \(H_4{(z)}= H_3{(z)}+ f{[z_n,z_n,w_{n-1},z_{n-1},z_{n-1}]}(z-z_n)^2(z-w_{n-1})(z-z_{n-1})\) and \(H''_4{(z)}= 2f{[z_n,z_n,w_{n-1}]}+(4f{[z_n,z_n,w_{n-1},z_{n-1}]} -2f{[z_n,w_{n-1},z_{n-1},z_{n-1}]})(z_n-w_{n-1})\).

In Table 1, we consider six different nonlinear functions together with their zeros (\(\alpha \)). In the same table we list the root of each function only up to four decimal places (these examples have been taken from the references [3, 15, 23]). Tables 2 and 3 contain the absolute errors \(|z_k-\alpha |\) in the first five iterations and the computational order of convergence for the new scheme and the existing schemes. The formula for the computational order of convergence (COC) given in [16],

$$\begin{aligned} COC \approx \frac{\ln |f(z_{n+1})/f(z_{n})|}{\ln |f(z_{n})/f(z_{n-1})|}, \end{aligned}$$

is used to verify the theoretical rate of convergence numerically. In Tables 2 and 3, we report the numerical results of the proposed two-step with-memory iterative method, which are in accordance with the theoretical part of the paper. To compute the root up to 10000 significant digits we have used the “SetAccuracy” command in Mathematica 8. The nonlinear functions have been solved by the proposed iterative method MWM5 (7) and the computed results are compared with other existing two-step with-memory methods of a similar type and with fifth-order without-memory iterative schemes.
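For reference, the COC can be evaluated from three consecutive residuals as in the following minimal Python sketch (an assumed helper; the residual values in the example are illustrative, and the extremely small residuals reported in the tables would require an arbitrary-precision library):

```python
# Assumed helper (not the authors' code) for the COC formula above: it only
# needs the residuals f(z_{n-1}), f(z_n), f(z_{n+1}) of three consecutive
# iterates.  The sample residuals are illustrative; the tiny residuals in the
# tables require multiprecision arithmetic.
import math

def coc(f_prev, f_curr, f_next):
    """COC ~ ln|f(z_{n+1})/f(z_n)| / ln|f(z_n)/f(z_{n-1})|."""
    return math.log(abs(f_next / f_curr)) / math.log(abs(f_curr / f_prev))

if __name__ == "__main__":
    print(coc(1e-3, 1e-15, 1e-75))   # residuals decaying with order ~5, prints 5.0
```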

Table 1 Test functions
Table 2 Comparison of absolute errors
Table 3 Numerical comparison

From Tables 2 and 3 it is clear that the R-order of convergence of the proposed with-memory scheme MWM5 (7) is accelerated from 5 up to 6.2128, depending on the self-accelerating parameter formula (8)–(10) used. The acceleration of the convergence order of the proposed scheme is achieved without any extra function evaluation, as can be seen by comparing the existing without-memory fifth-order scheme of [3] with the new method (7) combined with the accelerating formulas (8)–(10). In the tables, “NC” stands for non-convergence of a method. The results in Tables 2 and 3 obtained with the proposed scheme verify the theoretical analysis, and we can also notice that our scheme compares favourably with the other iterative schemes.
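As an additional illustration (a Python/mpmath sketch, not the authors' Mathematica experiment), the following code runs scheme (7) with \(T_n\) updated by Method 1 (8) in multiprecision arithmetic; the test equation, starting point and initial value \(T_0=0\) are assumptions made only for this demonstration. The printed COC comes out close to the theoretical value 5.8284:

```python
# Illustrative multiprecision run of the with-memory scheme (7) with T_n from
# (8); the test equation, starting point and T_0 are assumptions for this
# sketch only (the paper's own experiments use Mathematica).
from mpmath import mp, mpf, exp, fabs, log

mp.dps = 600                                   # working precision in digits

f  = lambda z: exp(z) - z - mpf(2)             # illustrative equation f(z) = 0
df = lambda z: exp(z) - mpf(1)

alpha = mpf(2)                                 # high-accuracy reference root
for _ in range(50):
    alpha -= f(alpha) / df(alpha)

z, T, w_old, errors = mpf(2), mpf(0), None, []
for n in range(4):
    if w_old is not None:
        # Method 1: T_n = f[z_n, z_n, w_{n-1}] / f'(z_n)
        T = ((f(z) - f(w_old)) / (z - w_old) - df(z)) / ((w_old - z) * df(z))
    fz, dfz = f(z), df(z)
    w = z - fz / (dfz - T * fz)                                   # first sub-step
    fw, dfw = f(w), df(w)
    z = w - 2*fz*fw*dfw / (2*fz*dfw**2 - dfz**2*fw + dfz*dfw*fw)  # second sub-step
    w_old = w
    errors.append(fabs(z - alpha))
    print("n = %d   |z_n - alpha| = %s" % (n + 1, mp.nstr(errors[-1], 5)))

print("COC ~", mp.nstr(log(errors[-1]/errors[-2]) / log(errors[-2]/errors[-3]), 5))
```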

Conclusion

In this article we have developed a novel two-step iterative scheme with memory for computing roots of nonlinear equations. The main objective of the paper is to develop a scheme of higher convergence order without any extra computation. This is achieved by using three different estimates of the self-accelerating parameter, computed by Hermite interpolation, in the fifth-order scheme without any additional function evaluations. The R-order of convergence of the novel with-memory iterative schemes is thereby accelerated from 5 to 5.8284, 6.0275 and 6.2128. The numerical assessment of the new scheme on different test examples confirms the main theorem.