1 Introduction

Many applications in science and engineering require the solution of a single nonlinear equation, for example to locate candidates for an extremum. The best known iterative method is Newton’s scheme, which is of second order. There are many methods of higher order; see e.g. the books by Traub [31] and Petković et al. [27] and the comparative studies [6, 7]. To develop higher order methods, one can either use higher derivatives, as in Halley’s method [21], or use multistep methods. Multistep methods without memory are subject to the barrier conjectured by Kung and Traub [23]: a method using \(r+1\) function evaluations per step can have order at most \(2^r\). For example, Newton’s method uses two evaluations (\(r=1\)) and attains the bound \(2^1=2\). To overcome this so-called optimality barrier, one can develop methods with memory, see e.g. Chapter 6 of [31] or Chapter 6 of the more recent book [27].

Recall that Traub classified iterative methods with memory in the following way:

  1.

    Let \(x_{k+1}\) be determined by new information at \(x_k\) and reused information at \(x_{k-1},\ldots ,x_{k-p}\) by the iterative process

    $$\begin{aligned} x_{k+1}=\phi (x_k;\, x_{k-1},\ldots ,x_{k-p}). \end{aligned}$$
    (1)

    Then \(\phi \) is called a one-point iteration function with memory, which defines an iterative method with memory; the secant method, recalled after this classification, is the classical example.

  2.

    Let \(z_j\) represent \(p+1\) quantities \(x_j,{\omega }_1(x_j),\ldots ,{\omega }_p(x_j)\ (j\ge 1).\) If \(x_{k+1}\) is calculated iteratively by

    $$\begin{aligned} x_{k+1}=\phi (z_k, z_{k-1},\ldots ,z_{k-p}), \end{aligned}$$
    (2)

    then \(\phi \) is called a multipoint iteration function with memory.
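
A classical instance of a one-point iteration function with memory (1), recalled here only for orientation, is the secant method (with \(p=1\)),

$$\begin{aligned} x_{k+1}=x_k-f(x_k)\,\frac{x_k-x_{k-1}}{f(x_k)-f(x_{k-1})}, \end{aligned}$$

whose R-order is \((1+\sqrt{5})/2\approx 1.618\).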

Here we compare two multipoint methods with memory to the two best multipoint methods without memory (see [7]).

To estimate the convergence rate of the family of multipoint iterative methods (2) with memory, we will use the concept of R-order of convergence introduced by Ortega and Rheinboldt [26].

In the next section we list the two multipoint methods with memory to be evaluated and compare them to the two best methods without memory. We will experiment with these four methods and discuss the basins of attraction for each one. The idea of using basins of attraction for comparative studies was introduced by Stewart [30] and followed by the work of Amat et al. [1,2,3], Argyros and Magreñan [4], Chun et al. [9, 10], Chun and Neta [8, 11,12,13,14], Chicharro et al. [5], Cordero et al. [15], Geum et al. [17,18,19,20], Neta et al. [24, 25] and Scott et al. [28].

In the next section we introduce the four methods and discuss their implementation. In Sect. 3 we present the numerical results and the basins of attraction for the methods on seven examples. We close with concluding remarks in Sect. 4.

2 Methods for comparison

As mentioned previously, we will compare two methods with memory to the two best methods without memory. The methods, with their order of convergence (p), number of function (and derivative) evaluations per step (\(\nu \)) and efficiency index (I), are listed below; a short code sketch of the individual steps is given after the list.

  1.

    Chun et al.’s method [9] (\(p=4,\,\nu =3,\,I=1.5874\)), denoted CLND

    $$\begin{aligned} y_n = x_n-\frac{2}{3}\,\frac{f(x_n)}{f'(x_n)}, \end{aligned}$$
    (3)
    $$\begin{aligned} x_{n+1} = x_n-\frac{f(x_n)}{f'(x_n)}\,H(\tilde{t}(x_n)), \end{aligned}$$
    (4)

    where the weight function H satisfies \(H(0)=1,\, H'(0)=\frac{1}{2}, \, H''(0)=1,\) and

    $$\begin{aligned} \tilde{t}(x_n)={\displaystyle {3\over 2}}{f'(x_n)-f'(y_n)\over f'(x_n)}. \end{aligned}$$
    (5)

    CLND is the case where the weight function H(t) in (4) is given by

    $$\begin{aligned} H(t)=\frac{1+(2g-2c-1/2)t+gt^2}{1+(2g-2c-1)t+ct^2} \end{aligned}$$
    (6)

    with \(c=0,\, g=0\), i.e. \(H(t)=\frac{1-t/2}{1-t}\), which is precisely Jarratt’s fourth-order (J4) method [22]

    $$\begin{aligned} x_{n+1}=x_n- \left[ 1-{3\over 2} {f'(y_n)-f'(x_n)\over 3 f'(y_n)-f'(x_n)} \right] {f(x_n)\over f'(x_n)}, \end{aligned}$$
    (7)

    where \(y_n\) is given by (3).

  2.

    Sharma–Arora’s method [29] (\(p=8,\, \nu =4,\,I=1.6818\)), denoted SA8

    $$\begin{aligned}&y_n=x_n-{f(x_n)\over f'(x_n)},\nonumber \\&\displaystyle z_n= \phi _4(x_n,y_n),\nonumber \\&\displaystyle x_{n+1}=z_n-\frac{f[z_n,y_n]}{f[z_n, x_n]}\, \frac{f(z_n)}{2f[z_n,y_n]-f[z_n,x_n]}, \end{aligned}$$
    (8)

    where

    $$\begin{aligned} \displaystyle \phi _4(x_n,y_n) =\displaystyle y_n- \frac{f(y_n)}{2f[y_n,x_n]-f'(x_n)}. \end{aligned}$$
    (9)
  3.

    Ullah et al.’s method [32] (R-order \(7.94449, \, \nu =3,\, I=1.99536\)), denoted UKSHA,

    $$\begin{aligned}&\displaystyle \beta _n=-\frac{1}{N'_6(x_n)}, \quad p_n=-\frac{N''_7(w_n)}{2N'_7(w_n)},\quad \lambda _n=\frac{1}{6}N'''_8(y_n),\quad n\ge 2,\nonumber \\&\displaystyle y_n= x_n-\frac{ f(x_n)}{f[x_n,w_n]+p_nf(w_n) }, \quad w_n=x_n+\beta _n f(x_n),\quad n\ge 0,\nonumber \\&\displaystyle x_{n+1}=y_n-\frac{f(y_n)}{f[x_n,y_n]+f[w_n,x_n,y_n](y_n-x_n)+\lambda _n (y_n-x_n)(y_n-w_n)}, \end{aligned}$$
    (10)

    where

    $$\begin{aligned} \displaystyle N_6(t)=N_6(t; x_n,y_{n-1},w_{n-1},x_{n-1},y_{n-2},w_{n-2},x_{n-2}), \end{aligned}$$
    (11)

    is an interpolation polynomial of sixth degree, passing through \(x_n,y_{n-1},w_{n-1},x_{n-1},y_{n-2},w_{n-2},x_{n-2}\),

    $$\begin{aligned} \displaystyle N_7(t)=N_7(t; w_n, x_n,y_{n-1},w_{n-1},x_{n-1},y_{n-2},w_{n-2},x_{n-2}), \end{aligned}$$
    (12)

    is an interpolation polynomial of seventh degree, passing through \(w_n, x_n,y_{n-1},w_{n-1},x_{n-1},y_{n-2},w_{n-2},x_{n-2}\), and

    $$\begin{aligned} \displaystyle N_8(t)=N_8(t; y_n, w_n, x_n,y_{n-1},w_{n-1},x_{n-1},y_{n-2},w_{n-2},x_{n-2}), \end{aligned}$$
    (13)

    is an interpolation polynomial of eighth degree, passing through \(y_n, w_n, x_n,y_{n-1},w_{n-1},x_{n-1},y_{n-2},w_{n-2},x_{n-2}\). In the case \(n=1\), the method uses

    $$\begin{aligned} \displaystyle \beta _n=-\frac{1}{N'_3(x_n)}, \quad p_n=-\frac{N''_4(w_n)}{2N'_4(w_n)},\quad \lambda _n=\frac{1}{6}N'''_5(y_n), \end{aligned}$$
    (14)

    where

    $$\begin{aligned} \displaystyle N_3(t)=N_3(t; x_n,y_{n-1},w_{n-1},x_{n-1}), \end{aligned}$$
    (15)

    is an interpolation polynomial of third degree, passing through \(x_n, y_{n-1},w_{n-1},x_{n-1}\),

    $$\begin{aligned} \displaystyle N_4(t)=N_4(t; w_n, x_n,y_{n-1},w_{n-1},x_{n-1}), \end{aligned}$$
    (16)

    is an interpolation polynomial of fourth degree, passing through \(w_n, x_n,y_{n-1},w_{n-1},x_{n-1}\), and

    $$\begin{aligned} \displaystyle N_5(t)=N_5(t; y_n,w_n, x_n,y_{n-1},w_{n-1},x_{n-1}), \end{aligned}$$
    (17)

    is an interpolation polynomial of fifth degree, passing through \(y_n,w_n, x_n,y_{n-1},w_{n-1},x_{n-1}\). In the case \(n=0\), the initial parameters \(\beta _0,\, p_0, \, \lambda _0\) are taken as very small positive values.

  4.

    Džunić et al.’s method [16] (R-order \(2(2+\sqrt{5}) \approx 8.47,\,\nu =4,\, I= 2.86926\)), denoted DPP,

    $$\begin{aligned}&\gamma _n=\displaystyle -\frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})}, \quad n\ge 0,\nonumber \\&\displaystyle y_n= x_n-\frac{f(x_n)}{\phi _n},\quad w_n=x_n+\gamma _n f(x_n), \quad n\ge 0,\nonumber \\&\displaystyle z_n=y_n-h(s_n,v_n)\frac{f(y_n)}{\phi _n},\quad n\ge 0,\nonumber \\&\displaystyle x_{n+1}=z_n-\frac{f(z_n)}{N_3 '(z_n; z_n,y_n,x_n,w_n)},\quad n\ge 0, \end{aligned}$$
    (18)

    where \(\phi _n\) is defined by

    $$\begin{aligned} \phi _n=\frac{f(w_n)-f(x_n)}{\gamma _n f(x_n)}, \end{aligned}$$
    (19)

    h is a weight function of two variables satisfying \(h(0,0)=h_s(0,0)=h_v(0,0)=1\) and \(h_{vv}(0,0)=2\), with \(s_n=\frac{f(y_n)}{f(x_n)}\) and \(v_n=\frac{f(y_n)}{f(w_n)}\), and \(N_3'(z_n; z_n,y_n,x_n,w_n)\) is the derivative of Newton’s interpolating polynomial of degree three at the points \(z_n,y_n,x_n,w_n\), evaluated at \(z_n\), which is given by

    $$\begin{aligned} N_3'(z_n;z_n,y_n,x_n,w_n) &= f[z_n,y_n]+f[z_n,y_n,x_n](z_n-y_n)\\ &\quad +f[z_n,y_n,x_n,w_n](z_n-y_n)(z_n-x_n). \end{aligned}$$
    (20)

    For the weight function h we experimented with \(h(s,v)=\frac{1+s}{1+v}\). Given \(x_{-1}\), we ran the method with \(\gamma \) taken as a very small positive value to generate the additional starting value \(x_0\).
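
To make the implementations concrete, the listing below is a minimal Python sketch of one step of J4/CLND, SA8 and DPP, transcribed from (3), (7), (8)–(9) and (18)–(20). It is our own illustration, not the authors’ code: the routine names and the divided-difference helpers (which realize \(f[\cdot ,\cdot ]\) and its higher-order analogues) are ours, UKSHA is omitted because its step requires Newton interpolating polynomials through up to nine stored nodes, and for DPP we use the weight \(h(s,v)=\frac{1+s}{1+v}\) quoted above.

```python
# One step of three of the methods above; complex arithmetic is used
# so the routines can be iterated from any point of the complex plane.

def dd(f, a, b):
    """First divided difference f[a,b]."""
    return (f(a) - f(b)) / (a - b)

def dd2(f, a, b, c):
    """Second divided difference f[a,b,c]."""
    return (dd(f, a, b) - dd(f, b, c)) / (a - c)

def dd3(f, a, b, c, d):
    """Third divided difference f[a,b,c,d]."""
    return (dd2(f, a, b, c) - dd2(f, b, c, d)) / (a - d)

def j4_step(f, fp, x):
    """Jarratt's fourth-order step, eqs. (3) and (7) (CLND with c=g=0)."""
    fx, fpx = f(x), fp(x)
    y = x - 2.0 / 3.0 * fx / fpx                                     # (3)
    fpy = fp(y)
    return x - (1 - 1.5 * (fpy - fpx) / (3 * fpy - fpx)) * fx / fpx  # (7)

def sa8_step(f, fp, x):
    """Sharma-Arora's eighth-order step, eqs. (8)-(9)."""
    fx, fpx = f(x), fp(x)
    y = x - fx / fpx                          # Newton substep of (8)
    z = y - f(y) / (2 * dd(f, y, x) - fpx)    # phi_4 of (9)
    fzy, fzx = dd(f, z, y), dd(f, z, x)
    return z - fzy / fzx * f(z) / (2 * fzy - fzx)   # last substep of (8)

def dpp_step(f, x, x_prev):
    """One DPP step, eqs. (18)-(20), with h(s,v) = (1+s)/(1+v)."""
    gamma = -(x - x_prev) / (f(x) - f(x_prev))   # self-accelerating parameter
    w = x + gamma * f(x)
    phi = (f(w) - f(x)) / (gamma * f(x))         # eq. (19)
    y = x - f(x) / phi
    s, v = f(y) / f(x), f(y) / f(w)
    z = y - (1 + s) / (1 + v) * f(y) / phi
    n3p = (dd(f, z, y) + dd2(f, z, y, x) * (z - y)       # eq. (20)
           + dd3(f, z, y, x, w) * (z - y) * (z - x))
    return z - f(z) / n3p
```

For instance, for \(f(z)=z^2-1\) one j4_step from \(x_0=2\) gives \(x_1=1.025\), and a second step already agrees with the root \(z=1\) to within about \(10^{-7}\).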

Table 1 Average number of function evaluations per point for each example (1–7) and each of the methods
Table 2 CPU time (in seconds) required for each example (1–7) and each of the methods
Table 3 Number of points requiring 40 iterations for each example (1–7) and each of the methods
Fig. 1

The top row shows CLND (left) and SA8 (right), the second row UKSHA (left) and DPP (right), for the roots of the polynomial \(z^2-1\)

Fig. 2

The top row shows CLND (left) and SA8 (right), the second row UKSHA (left) and DPP (right), for the roots of the polynomial \( z^3-1 \)

Fig. 3

The top row shows CLND (left) and SA8 (right), the second row UKSHA (left) and DPP (right), for the roots of the polynomial \(z^3-z\)

Fig. 4

The top row shows CLND (left) and SA8 (right), the second row UKSHA (left) and DPP (right), for the roots of the polynomial \(z^4-10z^2+9\)

3 Numerical experiments

In this section, we detail the experiments run with each of the methods. All the examples have their roots within the square \([-3,3]\times [-3,3]\). We have taken \(601^2\) equally spaced points in this square as initial points for the methods, and for each method we registered the average number of function evaluations per point (NFEA) required to converge to a root (Table 1), as well as which root each point converged to. We also collected the CPU time (in seconds) required to run each method on all the points, using a Dell OptiPlex 990 desktop computer (Table 2), and the number of points for which the method did not converge within 40 iterations (Table 3). These points are painted black, and we refer to them as black points; their number is denoted NBP.
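
The scanning loop just described can be summarized by the sketch below. It is a generic reconstruction under stated assumptions, not the code used to produce the tables: the tolerance \(10^{-8}\) and the convergence test against the known roots are our choices, since the text does not specify them, and step stands for any one-step map such as j4_step or sa8_step from the sketch in Sect. 2 (for DPP and UKSHA the driver would also carry the stored history).

```python
import numpy as np

def scan_basins(step, roots, lo=-3.0, hi=3.0, n=601, maxit=40, tol=1e-8):
    """Iterate `step` from every point of an n-by-n grid over [lo,hi]^2.

    Returns the index of the root each initial point converged to
    (-1 marks a black point, i.e. no convergence within `maxit`
    iterations) and the iteration counts.
    """
    xs = np.linspace(lo, hi, n)
    roots = np.asarray(roots, dtype=complex)
    root_idx = -np.ones((n, n), dtype=int)
    iters = np.full((n, n), maxit, dtype=int)
    for i, re in enumerate(xs):
        for j, im in enumerate(xs):
            z = complex(re, im)
            try:
                for k in range(maxit):
                    z = step(z)
                    d = np.abs(roots - z)
                    if d.min() < tol:
                        root_idx[i, j] = int(d.argmin())
                        iters[i, j] = k + 1
                        break
            except ZeroDivisionError:  # e.g. f'(z) = 0 at a grid point
                pass                   # the point stays black
    return root_idx, iters

# Example 1: basins of p1(z) = z^2 - 1 under Jarratt's step (J4/CLND)
# idx, its = scan_basins(
#     lambda z: j4_step(lambda t: t * t - 1, lambda t: 2 * t, z),
#     roots=[1.0, -1.0])
```

NFEA then follows as \(\nu \) times the mean of the returned iteration counts, and NBP is the number of entries of root_idx equal to \(-1\).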

Example 1

The first example is the quadratic polynomial

$$\begin{aligned} p_1(z)=z^2-1 \end{aligned}$$
(21)

whose roots are at \(\pm 1\). The basins are given in Fig. 1. The top row shows the basins of the methods without memory and the bottom row those of the methods with memory. It is clear that the best methods are those without memory, since for them the domain is divided equally by the vertical axis. DPP is better than UKSHA, since it shows no preference for the root \(z=-1\) over the other. For a more quantitative comparison, we refer to Tables 1, 2 and 3. In Table 1 we compare the NFEA. In Table 2 we compare the CPU time in seconds to run each method on all \(601^2\) points, and in Table 3 we list the number of points for which the method did not converge within 40 iterations (NBP). The CPU results show that DPP is much faster than UKSHA but slower than the methods without memory. DPP also has the lowest NBP and the lowest NFEA. This seems encouraging for multipoint methods with memory.

Example 2

The second example is the cubic polynomial

$$\begin{aligned} p_2(z)=z^3-1 \end{aligned}$$
(22)

whose roots are the three cube roots of unity.

The basins of attraction are given in Fig. 2. Now we see that the multipoint methods with memory produce many black points. It may be that these methods have difficulty when the roots are not all real; we will check this in the remaining experiments. Based on Table 1, SA8 has the lowest NFEA. SA8 is also the fastest (224.969 s) and has only one black point (the same as CLND).

Example 3

The third example is another cubic polynomial, but with only real roots:

$$\begin{aligned} p_3(z)=z^3-z. \end{aligned}$$
(23)

The basins of attraction are displayed in Fig. 3. All methods look reasonable. It is possible that, because all the roots are real as in Example 1, the methods with memory do not produce many black points. UKSHA has the lowest NFEA but took more CPU time (1385.539 s) than any other method; clearly each step of UKSHA is computationally more expensive than those of the other methods. CLND and SA8 have no black points.

Example 4

The fourth example is a quartic polynomial with real roots at \(\pm 1,\,\pm 3\)

$$\begin{aligned} p_4(z)=z^4-10z^2+9. \end{aligned}$$
(24)

The basins are displayed in Fig. 4. Again all the roots are real, and the methods with memory do not have as many black points as in Example 2. The methods with memory use about the same NFEA, which is lower than that of CLND and SA8. In terms of CPU time, CLND is the fastest (309.646 s), followed by SA8 (318.273 s) and DPP (515.661 s). UKSHA has the fewest black points.

Example 5

The fifth example is the fifth-degree polynomial

$$\begin{aligned} p_5(z)=z^5-1. \end{aligned}$$
(25)
Fig. 5

The top row shows CLND (left) and SA8 (right), the second row UKSHA (left) and DPP (right), for the roots of the polynomial \(z^5-1\)

The basins are displayed in Fig. 5. Since the roots are not all real, we again see more black points in the basins of the methods with memory (see also Table 3). These methods also require a higher NFEA and are very slow (over 1000 s, versus around 400 s for SA8 and CLND).

Example 6

The next example is a polynomial of degree 6 with complex coefficients

$$\begin{aligned} p_6(z)=z^6-\frac{1}{2} z^5 +\frac{11(i+1)}{4} z^4-\frac{3i+19}{4}z^3 +\frac{5i+11}{4} z^2 -\frac{i+11}{4}z+\frac{3}{2}-3i. \end{aligned}$$
(26)
Fig. 6

The top row shows CLND (left) and SA8 (right), the second row UKSHA (left) and DPP (right), for the roots of the polynomial \(z^6-\frac{1}{2} z^5 +\frac{11(i+1)}{4} z^4-\frac{3i+19}{4}z^3 +\frac{5i+11}{4} z^2 -\frac{i+11}{4}z+\frac{3}{2}-3i\)

This example has proved difficult for many methods. The basins are displayed in Fig. 6. It is clear that the basins for UKSHA are not as well defined as those for the other methods. SA8 uses the lowest NFEA and UKSHA the highest. The CPU time for the methods without memory is about 1200 s, versus 4538.803 s for DPP and 10276.925 s for UKSHA. CLND and SA8 have no black points, and UKSHA has about twice as many black points as DPP.

We now run a non-polynomial example.

Fig. 7

The top row shows CLND (left) and SA8 (right), the second row UKSHA (left) and DPP (right), for the roots of the function \((e^{z+1}-1)(z-1)\)

Example 7

$$\begin{aligned} p_7(z)=(e^{z+1}-1)(z-1). \end{aligned}$$
(27)

The roots are \(\pm 1\) and the basins are given in Fig. 7. Notice that for all methods the basin of \(z=+1\) is much smaller than that of \(z=-1\); it is largest for SA8. DPP uses 6.41 function evaluations per point, versus about 9 for SA8 and CLND. SA8 is the fastest, followed closely by CLND, and the slowest is UKSHA. In terms of the number of black points, UKSHA clearly has the most and SA8 and CLND the fewest.

In order to pick the best method overall, we averaged the results in Tables 1, 2 and 3 across the seven examples. SA8 uses the lowest NFEA (10.25), followed closely by CLND (11.45), and UKSHA uses the highest (17.56). The fastest method on average is SA8 (414.577 s) and the slowest is UKSHA (4035.547 s). Even DPP is much slower than SA8 and CLND. The average number of black points is highest for the methods with memory (over 18,000, versus 245 for SA8 and 263 for CLND).

4 Conclusions

We have seen that the two methods with memory performed poorly when the function has complex roots. They are also computationally more expensive and require more function evaluations per point on average. We therefore do not recommend the use of multipoint methods with memory.