1 Introduction

Solving nonlinear equations is a basic and extremely important task in all fields of science and engineering. One can distinguish between two general approaches for solving nonlinear equations numerically, namely one-point and multi-point methods. The basic optimality theorem shows that an analytic one-point method based on k evaluations is of order at most k, see [29, § 5.4] or [15] for an improved proof. The Newton–Raphson method \(x_{n+1}:=x_n - \frac{f(x_n)}{f'(x_n)}\) is probably the most widely used root-finding algorithm. It requires two evaluations per iteration step, one of f and one of \(f'\), and converges with order two, which is optimal for a one-point method.

Some computational issues encountered by one-point methods are overcome by multi-point methods, since the latter achieve greater accuracy with the same number of function evaluations. Important aspects of these methods are their convergence order and their efficiency. One aims at a convergence order that is as high as possible for a fixed number of function evaluations per iteration step. The unproved conjecture of Kung and Traub [15] plays a central role in this context; it states that an optimal multi-point method without memory attains convergence order \(2^k\) using \(k+1\) evaluations per iteration step. The efficiency index of a method with k evaluations and convergence order p is \(E(k,p)=\root k \of {p}\), see [19]. Hence, the efficiency index of a method supporting the Kung–Traub conjecture is \(\root k+1 \of {2^k}\). In particular, an optimal method of convergence order eight has efficiency index \(\root 4 \of {8}\simeq 1.68179\).
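For illustration, the efficiency indices quoted above can be reproduced with a few lines of Python (a minimal sketch; the function name is our own choice):

```python
# Efficiency index E(k, p) = p**(1/k) of a method that uses k evaluations
# per iteration step and has convergence order p, cf. [19].
def efficiency_index(k, p):
    return p ** (1.0 / k)

print(efficiency_index(2, 2))  # Newton-Raphson: ~1.41421
print(efficiency_index(3, 4))  # optimal two-point methods, e.g. Ostrowski's: ~1.58740
print(efficiency_index(4, 8))  # optimal three-point methods: ~1.68179
```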

A large number of multi-point methods for finding simple roots of a nonlinear equation \(f(x)=0\), with a scalar function \(f: D\subset \mathbb {R} \rightarrow \mathbb {R}\) defined on an open interval D (or \(f: D\subset \mathbb {C} \rightarrow \mathbb {C}\) defined on a region D of the complex plane \(\mathbb {C}\)), have been developed and analyzed in order to improve the convergence order of classical methods such as the Newton–Raphson iteration.

Some well-known two-point methods without memory are described, e.g., by Jarratt [13], King [14], and Ostrowski [19]. Using inverse interpolation, Kung and Traub [15] constructed two general optimal classes without memory. Since then, there have been many attempts to construct optimal multi-point methods, utilizing, e.g., weight functions; see in particular [2, 3, 5, 16, 20, 21, 23–26, 28, 31].

In [23], Sharifi et al. present a family of three-point methods with eighth-order convergence for finding simple roots of nonlinear equations, using suitable approximations and a weight function based on Maheshwari’s method. Their method requires three evaluations of the function and one evaluation of its first derivative per iteration. The authors of [24] introduce another class of optimal iterative methods without memory for approximating a simple root of a given nonlinear equation; their class uses four function evaluations and one first-derivative evaluation per iteration. The convergence order is 16 and therefore optimal in the sense of the Kung–Traub conjecture. In [25], Sharifi et al. present a three-point iterative method with memory, based on the family of King’s methods, for solving nonlinear equations. The underlying method has eighth-order convergence and costs only four function evaluations per iteration, which supports the Kung–Traub conjecture on the optimal order of convergence. An acceleration of the convergence speed is achieved by an appropriate variation of a free parameter in each step; this self-accelerating parameter is estimated using a Newton interpolation polynomial of fourth degree. In this way the order of convergence is increased from 8 to 12 without any additional function evaluation. Sharma and Sharma [26] derive a family of eighth-order methods for the solution of nonlinear equations based on Ostrowski’s fourth-order method. In terms of computational cost, their family requires three evaluations of the function and one evaluation of the first derivative per iteration. The efficiency index of their methods is 1.68179, which is better than the efficiency index 1.587 of Ostrowski’s method. In [22], Salimi et al. construct two optimal Newton–Secant-like iterative methods for solving nonlinear equations. Their classes have convergence orders four and eight and cost only three and four function evaluations per iteration, respectively. These methods support the Kung–Traub conjecture and possess a high computational efficiency. The new methods are illustrated by numerical experiments and a comparison with some well-known existing optimal methods.

We will construct a three-point method of convergence order eight which is free from second derivatives, uses four evaluations per iteration step, and attains the efficiency index \(\root 4 \of {8} \simeq 1.68179\).

A widely used criterion for judging and ranking different methods for solving nonlinear equations is the basin of attraction. We will use two measures to assess the performance of the methods in terms of their basins of attraction [30].

The paper is organized as follows. Section 2 introduces the new method, which is based on a Newton step and Newton’s interpolation; details of the method and the proof of its optimal convergence order eight are given there. The numerical performance of the proposed method in comparison with other methods is illustrated in Sect. 3. In Sect. 4 we approximate and visualize the basins of attraction of the proposed method and several existing methods, both graphically and by means of some numerical performance measures [30]. Finally, we conclude in Sect. 5.

2 Description of the method and convergence analysis

In this section we construct a new optimal three-point method for solving nonlinear equations by using a Newton step and a Newton interpolation polynomial of degree three, which was also applied in [25].

Method 1: The new method, denoted by MSSV, is given by

$$\begin{aligned} \left\{ \begin{aligned}&y_{n} := x_{n} - u(x_n), \\&z_{n} := x_{n} - u(x_n) \left( 1+\dfrac{f(y_n)}{f(x_n)} + \left( 1+\dfrac{1}{1+u(x_n)}\right) \left( \dfrac{f(y_n)}{f(x_n)}\right) ^2\right) , \\&x_{n+1} := z_{n} - \dfrac{f(z_n)}{f[z_n,y_n]+(z_n-y_n)f[z_n,y_n,x_n]+(z_n-y_n)(z_n-x_n)f[z_n,y_n,x_n,x_n]}, \end{aligned} \right. \end{aligned}$$
(2.1)

where \(u(x_n)=\frac{f(x_{n})}{f'(x_{n})}\). Here we use the standard notation for divided differences in Newton’s interpolation,

$$\begin{aligned} g[t_{\nu },t_{\nu +1}, \ldots , t_{\nu +j}] = \frac{g[t_{\nu +1}, \ldots , t_{\nu +j}] - g[t_\nu , \ldots , t_{\nu +j-1}]}{t_{\nu +j}-t_{\nu }}, \end{aligned}$$

with \(g[t_\nu ] = g(t_\nu )\) and \(g[t_{\nu },t_{\nu }] = g'(t_{\nu })\).

The iteration method (2.1) and all forthcoming methods are applied for \(n = 0, 1, \ldots \) where \(x_0\) denotes an initial approximation of the simple root \(x^{*}\) of the function f. The method (2.1) uses four evaluations per iteration step, three for f and one for \(f'\). Note that (2.1) works for real and complex functions.
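To make the construction concrete, the following Python sketch implements one iteration step of (2.1) directly from the formulas above; the function names, the test function, and the stopping tolerance are our own illustrative choices and not part of the method.

```python
def mssv_step(f, df, x):
    """One iteration step of the MSSV method (2.1): three evaluations of f
    and one evaluation of f' per step."""
    fx = f(x)
    dfx = df(x)
    u = fx / dfx                                # u(x_n) = f(x_n)/f'(x_n)
    y = x - u                                   # Newton step
    fy = f(y)
    t = fy / fx
    z = x - u * (1 + t + (1 + 1 / (1 + u)) * t**2)
    fz = f(z)
    # divided differences of the Newton interpolation polynomial,
    # using f[x_n, x_n] = f'(x_n)
    f_zy = (fz - fy) / (z - y)
    f_yx = (fy - fx) / (y - x)
    f_zyx = (f_zy - f_yx) / (z - x)
    f_yxx = (f_yx - dfx) / (y - x)
    f_zyxx = (f_zyx - f_yxx) / (z - x)
    denom = f_zy + (z - y) * f_zyx + (z - y) * (z - x) * f_zyxx
    return z - fz / denom


# arbitrary example: f(x) = x^3 - 2 with simple root 2^(1/3)
f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
x = 1.5
for _ in range(3):
    x = mssv_step(f, df, x)
    if abs(f(x)) < 1e-12:
        break
print(x)  # ~1.2599210498948732
```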

The convergence order of method (2.1) is given in the following theorem.

Theorem 1

Let \(f:D\subset \mathbb {R} \rightarrow \mathbb {R}\) be a nine times continuously differentiable function with a simple zero \(x^{*}\in D\). If the initial point \(x_{0}\) is sufficiently close to \(x^{*}\) then the method defined by (2.1) converges to \(x^{*}\) with order eight.

Proof

Let \(e_{n} := x_{n}-x^{*}\), \(e_{n,y}:=y_{n}-x^{*}\), \(e_{n,z} := z_{n}-x^{*}\) and \(c_{n} := \frac{f^{(n)}(x^{*})}{n!\,f'(x^{*})}\) for \(n\in \mathbb {N}\). Using the fact that \(f(x^{*})=0\), the Taylor expansion of f at \(x^{*}\) yields

$$\begin{aligned} f(x_{n}) = f'(x^{*}) \left( e_{n} + c_{2}e_{n}^{2}+c_{3}e_{n}^{3} + \cdots + c_{8}e_{n}^{8}\right) + O(e_n^{9}) \end{aligned}$$
(2.2)

and

$$\begin{aligned} f'(x_{n}) = f'(x^{*}) \left( 1 + 2c_{2}e_{n}+3c_{3}e_{n}^{2}+4c_{4}e_{n}^{3} + \cdots + 9c_9e_n^8\right) + O(e_n^{9}). \end{aligned}$$
(2.3)

Therefore, we obtain

$$\begin{aligned} \begin{aligned} \frac{f(x_{n})}{f'(x_{n})}&= e_{n}-c_{2}e_{n}^{2} + \left( 2c_{2}^{2}-2c_{3}\right) e_{n}^{3} + \left( -4c_2^3+7c_2c_3-3c_4\right) e_n^4 \\&\quad + \left( 8c_2^4-20c_2^2c_3+6c_3^2+10c_2c_4-4c_5\right) e_n^5 \\&\quad + \left( -16c_2^5+52c_2^3c_3-28c_2^2c_4 + 17c_3c_4-c_2(33c_3^2-13c_5)-5c_6\right) e_n^6 + O(e_n^{7}) \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} e_{n,y} = y_n-x^{*}&= c_{2}e_{n}^{2} + \left( -2c_2^2+2c_3\right) e_n^3 + \left( 4c_2^3-7c_2c_3+3c_4\right) e_n^4\\&\quad + \left( -8c_2^4+20c_2^2c_3-6c_3^2-10c_2c_4+4c_5\right) e_n^5\\&\quad + \left( 16c_2^5-52c_2^3c_3+28c_2^2c_4-17c_3c_4 + c_2(33c_3^2-13c_5)+5c_6\right) e_n^6 + O(e_n^{7}). \end{aligned} \end{aligned}$$

We have

$$\begin{aligned} f(y_{n}) = f'(x^{*}) \left( e_{n,y} + c_{2}e_{n,y}^{2} + c_{3}e_{n,y}^{3} + \cdots + c_{8}e_{n,y}^{8}\right) + O(e_{n,y}^{9}) \end{aligned}$$
(2.4)

by a Taylor expansion of f at \(x^{*}\). By substituting (2.2)–(2.4) into (2.1), we get

$$\begin{aligned} \begin{aligned} e_{n,z} = z_n-x^{*}&= c_2 \left( c_2+5c_2^2-c_3\right) e_n^4 \\&\quad + \left( -8c_2^3-36c_2^4-2c_3^2+c_2^2(-1+32c_3) + c_2(4c_3-2c_4)\right) e_n^5 + O(e_n^6). \end{aligned} \end{aligned}$$

We obtain

$$\begin{aligned} f(z_{n}) = f'(x^{*}) \left( e_{n,z} + c_{2}e_{n,z}^{2}+c_{3}e_{n,z}^{3}+\cdots +c_{8}e_{n,z}^{8}\right) + O(e_{n,z}^{9}) \end{aligned}$$
(2.5)

by using again a Taylor expansion of f at \(x^{*}\). Substituting (2.2)–(2.5) into (2.1) results in

$$\begin{aligned} e_{n+1} = x_{n+1}-x^{*} = c_2^2 \left( c_2+5c_2^2-c_3\right) \left( c_2^2+5c_2^3-c_2c_3+c_4\right) e_n^8 + O(e_n^9), \end{aligned}$$
(2.6)

which finishes the proof of the theorem. \(\square \)

We will compare the new method (2.1) with some existing optimal three-point methods of order eight having the same optimal computational efficiency index equal to \(\root 4 \of {8} \simeq 1.68179\), see [19, 29].

The existing methods that we are going to compare are the following:

Method 2: The method by Chun and Lee [5], denoted by CL, is given by

$$\begin{aligned} \left\{ \begin{aligned}&y_{n} := x_{n} - \dfrac{f(x_{n})}{f'(x_{n})}, \\&z_{n} := y_{n} - \dfrac{f(y_n)}{f'(x_n)} \cdot \dfrac{1}{\left( 1-\frac{f(y_n)}{f(x_n)}\right) ^2}, \\&x_{n+1} := z_{n} - \dfrac{f(z_n)}{f'(x_n)} \cdot \dfrac{1}{\left( 1-H(t_n)-J(s_n)-P(u_n)\right) ^2}, \end{aligned} \right. \end{aligned}$$
(2.7)

with weight functions

$$\begin{aligned} H(t_n) = -\beta -\gamma +t_n+\frac{t_n^2}{2}-\frac{t_n^3}{2}, \quad J(s_n) = \beta +\frac{s_n}{2}, \quad P(u_n) = \gamma +\frac{u_n}{2}, \end{aligned}$$

where \(t_n = \frac{f(y_n)}{f(x_n)}\), \(s_n = \frac{f(z_n)}{f(x_n)}\), \(u_n = \frac{f(z_n)}{f(y_n)}\), and \(\beta , \gamma \in \mathbb {R}\). Note that the parameters \(\beta \) and \(\gamma \) cancel in (2.7); hence, their choice does not affect the method.
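A short Python sketch of one CL step (2.7) reads as follows; since \(\beta \) and \(\gamma \) cancel, the three weight functions are combined with \(\beta =\gamma =0\) (our own simplification for illustration).

```python
def cl_step(f, df, x):
    """One iteration step of the Chun-Lee method (2.7); the parameters
    beta and gamma cancel in H + J + P and are therefore omitted."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    t = fy / fx
    z = y - (fy / dfx) / (1 - t) ** 2
    fz = f(z)
    s, u = fz / fx, fz / fy
    w = t + t**2 / 2 - t**3 / 2 + s / 2 + u / 2   # H(t) + J(s) + P(u)
    return z - (fz / dfx) / (1 - w) ** 2
```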

Method 3: The method by Neta [17], see also [18, formula (9)], denoted by N, is given by

$$\begin{aligned} \left\{ \begin{aligned}&y_{n} := x_{n} - \dfrac{f(x_n)}{f'(x_n)}, \\&z_{n} := y_{n} - \dfrac{f(x_n)+Af(y_n)}{f(x_n)+(A-2)f(y_n)} \cdot \dfrac{f(y_n)}{f'(x_n)}, \quad A\in \mathbb {R}, \\&x_{n+1} := y_n + \delta _1f^2(x_n)+\delta _2f^3(x_n), \end{aligned} \right. \end{aligned}$$
(2.8)

where

$$\begin{aligned} \begin{aligned} F_y&= f(y_n)-f(x_n),&\qquad F_z&= f(z_n)-f(x_n),\\ \zeta _y&= \dfrac{1}{F_y} \left( \dfrac{y_n-x_n}{F_y}-\dfrac{1}{f'(x_n)}\right) ,&\qquad \zeta _z&= \dfrac{1}{F_z} \left( \dfrac{z_n-x_n}{F_z}-\dfrac{1}{f'(x_n)}\right) ,\\ \delta _2&= -\dfrac{\zeta _y-\zeta _z}{F_y-F_z},&\qquad \delta _1&= \zeta _y+\delta _2F_y. \end{aligned} \end{aligned}$$

We will use \(A = 0\) in the numerical experiments of this paper.
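The coefficients \(\delta _1\) and \(\delta _2\) of the inverse interpolation are perhaps easiest to read in code; the following Python sketch of one step of (2.8) is our own illustration with \(A=0\).

```python
def neta_step(f, df, x, A=0.0):
    """One iteration step of Neta's method (2.8); A = 0 as in the
    numerical experiments of this paper."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    z = y - (fx + A * fy) / (fx + (A - 2) * fy) * fy / dfx
    fz = f(z)
    Fy, Fz = fy - fx, fz - fx
    zeta_y = ((y - x) / Fy - 1 / dfx) / Fy
    zeta_z = ((z - x) / Fz - 1 / dfx) / Fz
    delta2 = -(zeta_y - zeta_z) / (Fy - Fz)
    delta1 = zeta_y + delta2 * Fy
    return y + delta1 * fx**2 + delta2 * fx**3
```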

Method 4: The Sharma and Sharma method [26], denoted by SS, is given by

$$\begin{aligned} \left\{ \begin{aligned}&y_{n} := x_{n} - \dfrac{f(x_n)}{f'(x_n)}, \\&z_{n} := y_{n} - \dfrac{f(y_n)}{f'(x_n)} \cdot \dfrac{f(x_n)}{f(x_n)-2f(y_n)}, \\&x_{n+1} := z_{n} - \dfrac{f[x_n,y_n]f(z_n)}{f[x_n,z_n]f[y_n,z_n]}\,W(t_n), \end{aligned} \right. \end{aligned}$$
(2.9)

with the weight function

$$\begin{aligned} W(t_n) = 1+\frac{t_n}{1+\alpha t_n}, \quad \alpha \in \mathbb {R}, \end{aligned}$$

and \(t_n = \frac{f(z_n)}{f(x_n)}\). We will use \(\alpha =1\) in the numerical experiments of this paper.
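A corresponding Python sketch of one step of (2.9) with \(\alpha =1\) (our own illustration; the first-order divided differences are formed directly):

```python
def ss_step(f, df, x, alpha=1.0):
    """One iteration step of the Sharma-Sharma method (2.9); alpha = 1
    as in the numerical experiments of this paper."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    z = y - (fy / dfx) * fx / (fx - 2 * fy)
    fz = f(z)
    f_xy = (fx - fy) / (x - y)          # f[x_n, y_n]
    f_xz = (fx - fz) / (x - z)          # f[x_n, z_n]
    f_yz = (fy - fz) / (y - z)          # f[y_n, z_n]
    t = fz / fx
    W = 1 + t / (1 + alpha * t)
    return z - f_xy * fz / (f_xz * f_yz) * W
```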

Method 5: The method from Babajee et al. [2], denoted by BCST, is given by

$$\begin{aligned} \left\{ \begin{aligned}&y_{n} := x_{n} - \dfrac{f(x_n)}{f'(x_n)} \left( 1 + \left( \dfrac{f(x_n)}{f'(x_n)}\right) ^5 \right) , \\&z_{n} := y_{n} - \dfrac{f(y_n)}{f'(x_n)} \left( 1 - \dfrac{f(y_n)}{f(x_n)}\right) ^{-2}, \\&x_{n+1} := z_n - \dfrac{f(z_n)}{f'(x_n)} \cdot \dfrac{1+\left( \dfrac{f(y_n)}{f(x_n)}\right) ^2 + 5\left( \dfrac{f(y_n)}{f(x_n)}\right) ^4 + \dfrac{f(z_n)}{f(y_n)}}{\left( 1 - \dfrac{f(y_n)}{f(x_n)} - \dfrac{f(z_n)}{f(x_n)} \right) ^2} . \end{aligned} \right. \end{aligned}$$
(2.10)
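For completeness, one step of (2.10) can be sketched in Python as follows (again only our own illustration of the formulas).

```python
def bcst_step(f, df, x):
    """One iteration step of the Babajee et al. method (2.10)."""
    fx, dfx = f(x), df(x)
    u = fx / dfx
    y = x - u * (1 + u**5)
    fy = f(y)
    t = fy / fx
    z = y - (fy / dfx) / (1 - t) ** 2
    fz = f(z)
    num = 1 + t**2 + 5 * t**4 + fz / fy
    den = (1 - t - fz / fx) ** 2
    return z - (fz / dfx) * num / den
```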

Method 6: The method from Thukral and Petković [28], denoted by TP, is given by

$$\begin{aligned} \left\{ \begin{aligned}&y_{n} := x_{n} - \dfrac{f(x_n)}{f'(x_n)}, \\&z_{n} := y_{n} - \dfrac{f(y_n)}{f'(x_n)}\cdot \dfrac{f(x_n)+\beta f(y_n)}{f(x_n)+(\beta -2)f(y_n)}, \qquad \beta \in \mathbb {R}, \\&x_{n+1} := z_n - \dfrac{f(z_n)}{f'(x_n)} \cdot \left( \varphi (t_n)+\psi (s_n)+\omega (u_n)\right) , \end{aligned} \right. \end{aligned}$$
(2.11)

where the weight functions are

$$\begin{aligned} \varphi (t_n) = \left( 1+\dfrac{t_n}{1-2t_n}\right) ^2, \qquad \psi (s_n) = \dfrac{s_n}{1-\alpha s_n}, \quad \alpha \in \mathbb {R},\qquad \omega (u_n)=4u_n, \end{aligned}$$

and \(t_n=\frac{f(y_n)}{f(x_n)}\), \(s_n=\frac{f(z_n)}{f(y_n)}\) and \(u_n=\frac{f(z_n)}{f(x_n)}\). We will use \(\beta =0\) and \(\alpha =1\) in the numerical experiments of this paper.
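Finally, one step of (2.11) with \(\beta =0\) and \(\alpha =1\) reads as follows in Python (our own illustration).

```python
def tp_step(f, df, x, beta=0.0, alpha=1.0):
    """One iteration step of the Thukral-Petkovic method (2.11);
    beta = 0 and alpha = 1 as in the numerical experiments."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    z = y - (fy / dfx) * (fx + beta * fy) / (fx + (beta - 2) * fy)
    fz = f(z)
    t, s, u = fy / fx, fz / fy, fz / fx
    phi = (1 + t / (1 - 2 * t)) ** 2
    psi = s / (1 - alpha * s)
    omega = 4 * u
    return z - (fz / dfx) * (phi + psi + omega)
```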

3 Numerical examples

The new three-point method MSSV is tested on several nonlinear equations. To obtain high accuracy and avoid the loss of significant digits, we employed multi-precision arithmetic with 20,000 significant decimal digits in the programming package Mathematica.

We perform numerical experiments with the four test functions \(f_1,\ldots , f_4\) listed in Table 1. For each of these functions and each of the six methods of convergence order eight, we compute the given root \(x^{*}\) starting from the indicated initial guess \(x_0\).

Table 1 Test functions \(f_1, \ldots , f_4\), their roots \(x^{*}\), and initial guesses \(x_0\)
Table 2 Errors, COC, and ACOC for the iterative methods (2.1) and (2.7)–(2.11) (abbreviated as MSSV, CL, N, SS, BCST, and TP, respectively) applied for finding the roots of test functions \(f_j(x)=0\), \(j=1,\ldots ,4,\) given in Table 1

In order to test our proposed method (2.1) and compare it with the methods (2.7)–(2.11), we compute the error, the computational order of convergence (COC) [32] by

$$\begin{aligned} COC \approx \frac{\ln |(x_{n+1}-x^{*})/(x_{n}-x^{*})|}{\ln |(x_{n}-x^{*})/(x_{n-1}-x^{*})|} \end{aligned}$$
(3.1)

and the approximated computational order of convergence (ACOC) [6] by

$$\begin{aligned} ACOC \approx \frac{\ln |(x_{n+1}-x_{n})/(x_{n}-x_{n-1})|}{\ln |(x_{n}-x_{n-1})/(x_{n-1}-x_{n-2})|}. \end{aligned}$$
(3.2)

COC has been widely used in recent years; nevertheless, ACOC is more practical because it does not require knowledge of the root \(x^{*}\). We refer to [10] for a comparison of several computational convergence orders. Note that, for particular examples, these formulas may yield convergence orders that are higher than expected. The reason is that the error equation (2.6) contains problem-dependent coefficients which may vanish for some nonlinear functions f. For a "random" example, however, the formulas (3.1) and (3.2) provide good approximations to the convergence order of the method.

We have used both COC and ACOC to check the accuracy of the considered methods. Note that both quantities give good experimental approximations to the convergence order already for small values of n.
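For reference, (3.1) and (3.2) translate into the following Python sketch (our own illustration; in double precision only the first few iterates carry enough significant digits, which is precisely why the experiments employ multi-precision arithmetic).

```python
from math import log

def coc(x, x_star, n):
    """Computational order of convergence (3.1) from iterates x[0..n+1]
    and the known root x_star."""
    return (log(abs((x[n + 1] - x_star) / (x[n] - x_star)))
            / log(abs((x[n] - x_star) / (x[n - 1] - x_star))))

def acoc(x, n):
    """Approximated computational order of convergence (3.2);
    no knowledge of the root is required."""
    return (log(abs((x[n + 1] - x[n]) / (x[n] - x[n - 1])))
            / log(abs((x[n] - x[n - 1]) / (x[n - 1] - x[n - 2]))))
```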

The comparison of our method (2.1) with the methods (2.7)–(2.11), applied to the four nonlinear equations \(f_j(x)=0\), \(j=1,\ldots ,4,\) is presented in Table 2. We abbreviate (2.1) by MSSV and (2.7)–(2.11) by CL, N, SS, BCST, and TP, respectively. The computational orders of convergence COC and ACOC are computed for \(n=3\). Note that, for all problems and all methods, both COC and ACOC approximate the theoretical order of convergence very accurately.

4 Dynamic behavior

We have already observed that all methods converge if the initial guess is chosen suitably. We now investigate the regions from which the initial point has to be chosen in order to reach a root. In other words, we numerically approximate the domains of attraction of the zeros as a qualitative measure of how the methods depend on the choice of the initial approximation of the root. To answer this important question on the dynamical behavior of the algorithms, we investigate the dynamics of the new method (2.1) and compare it with the methods (2.7)–(2.11).

Let us recall some basic concepts, such as the basin of attraction. For more details and many other examples of the study of the dynamical behavior of iterative methods, one may consult [1, 2, 4, 7–9, 11, 12, 27, 30].

Let \(Q:\mathbb {C} \rightarrow \mathbb {C}\) be a rational map on the complex plane. For \(z\in \mathbb {C} \), we define its orbit as the set \({\text {orb}}(z) = \{z,\,Q(z),\,Q^2(z),\ldots \}\). The convergence \({\text {orb}}(z)\rightarrow z^{*}\) is understood in the sense \(\lim \nolimits _{k\rightarrow \infty } Q^k(z) = z^{*}\). A point \(z_0 \in \mathbb {C}\) is called a periodic point with minimal period m if \(Q^m(z_0) = z_0\), where m is the smallest positive integer with this property (and thus \(\{z_0,Q(z_0),\ldots , Q^{m-1}(z_0)\}\) is a cycle). A periodic point with minimal period 1 is called a fixed point. Moreover, a periodic point \(z_0\) with period m is called attracting if \(|(Q^m)'(z_0)|<1\), repelling if \(|(Q^m)'(z_0)|>1\), and neutral otherwise. The Julia set of a nonlinear map Q(z), denoted by J(Q), is the closure of the set of its repelling periodic points, and its complement is the Fatou set F(Q).

Table 3 Test polynomials \(p_1(z),\ldots , p_6(z)\) and their roots

The six methods (2.1) and (2.7)–(2.11) provide iterative rational maps Q when they are applied to find roots of complex polynomials p. In particular, we are interested in the basins of attraction of the roots of the polynomials where the basin of attraction of a root \(z^{*}\) is the complex set \(\{ z_0 \in \mathbb {C} : {\text {orb}}(z_0) \rightarrow z^{*} \}\). It is well known that the basins of attraction of the different roots lie in the Fatou set F(Q). The Julia set J(Q) is, in general, a fractal and the rational map Q is unstable there.

From the dynamical and graphical point of view, we take a \(600 \times 600\) grid of the square \([-3,3]\times [-3,3] \subset \mathbb {C}\) and assign a color to each grid point \(z_0\) according to the simple root to which the orbit of the iterative method starting from \(z_0\) converges. We mark the point as black if the orbit does not converge to a root, in the sense that after at most 15 iterations its distance to every root is still larger than \(10^{-3}\). We use only 15 iterations because we are dealing with methods of convergence order eight which, if they converge, do so very fast. The basins of attraction are distinguished by their colors.

Different colors are used for different roots. Within each basin of attraction, the number of iterations needed to reach the root is indicated by the brightness: a brighter color means fewer iteration steps. Black denotes lack of convergence to any of the roots; this happens, in particular, when the method converges to a fixed point that is not a root, ends in a periodic cycle, or escapes to infinity. Although we have not done so in this paper, infinity can be treated as an ordinary point by working on the Riemann sphere instead of the complex plane; in that case one assigns an additional "ordinary" color to the basin of attraction of infinity. Details of this idea can be found in [12].
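The procedure just described can be summarized in the following Python sketch (our own illustration: the grid size, the iteration cap of 15, and the threshold \(10^{-3}\) are as stated above, while `method_step` stands for any of the one-step functions sketched in Sect. 2).

```python
import numpy as np

def basins(method_step, f, df, roots, n=600, box=(-3.0, 3.0, -3.0, 3.0),
           max_iter=15, tol=1e-3):
    """For every starting point of an n x n grid over the box, return the
    index of the root that is reached (-1 = nonconvergent, shown in black)
    and the number of iterations used."""
    re = np.linspace(box[0], box[1], n)
    im = np.linspace(box[2], box[3], n)
    root_idx = -np.ones((n, n), dtype=int)
    iters = np.full((n, n), max_iter, dtype=int)
    for i, a in enumerate(re):
        for j, b in enumerate(im):
            z = complex(a, b)
            for k in range(max_iter + 1):
                near = [m for m, r in enumerate(roots) if abs(z - r) < tol]
                if near:
                    root_idx[j, i], iters[j, i] = near[0], k
                    break
                if k == max_iter:
                    break                      # nonconvergent point
                try:
                    z = method_step(f, df, z)
                except (ZeroDivisionError, OverflowError):
                    break                      # undefined step, counted as nonconvergent
    return root_idx, iters
```

The two returned arrays can then be rendered, e.g. with matplotlib, using the root index for the color and the iteration count for the brightness.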

Basins of attraction for the six methods (2.1) and (2.7)–(2.11) for the six test problems \(p_j(z)=0\), \(j=1,\ldots ,6\), (Table 3) are illustrated in Figs. 1, 2, 3, 4, 5 and 6 from left to right and from top to bottom.

From the pictures we can judge the behavior and suitability of a method depending on the circumstances. If we choose an initial point \(z_0\) in a zone where different basins of attraction touch each other, it is impossible to predict which root the iterative method starting at \(z_0\) will reach; hence, such a \(z_0\) is not a good choice. Neither the black zones nor the zones with many colors are suitable for choosing the initial guess \(z_0\) when a particular root has to be reached. Although the most attractive pictures arise when the frontiers between the basins of attraction are very intricate, these are exactly the cases where the dynamical behavior of the method is most unpredictable and the method is most demanding with respect to the choice of the initial point.

Fig. 1 Comparison of basins of attraction of methods MSSV, CL, N (top row, left to right), and SS, BCST, TP (bottom row, left to right) for the test problem \(p_1(z)= z^2-1=0\)

Fig. 2 Comparison of basins of attraction of methods MSSV, CL, N (top row, left to right), and SS, BCST, TP (bottom row, left to right) for the test problem \(p_2(z)= z^3-z=0\)

Fig. 3 Comparison of basins of attraction of methods MSSV, CL, N (top row, left to right), and SS, BCST, TP (bottom row, left to right) for the test problem \(p_3(z)= z(z^2+1)(z^2+4)=0\)

Fig. 4 Comparison of basins of attraction of methods MSSV, CL, N (top row, left to right), and SS, BCST, TP (bottom row, left to right) for the test problem \(p_4(z)= (z^4-1)(z^2+2i)=0\)

Fig. 5 Comparison of basins of attraction of methods MSSV, CL, N (top row, left to right), and SS, BCST, TP (bottom row, left to right) for the test problem \(p_5(z)= z^7-1=0\)

Fig. 6 Comparison of basins of attraction of methods MSSV, CL, N (top row, left to right), and SS, BCST, TP (bottom row, left to right) for the test problem \(p_6(z)= (10z^5-1)(z^5+10)=0\)

Table 4 Measures of convergence of the iterative methods MSSV, CL, N, SS, BCST, and TP applied to find the roots of polynomials \(p_j(z)=0\), \(j=1,\ldots ,6\)

Finally, Table 4 contains the results of some numerical experiments which measure the behavior of the six iterative methods (2.1) and (2.7)–(2.11) in finding the roots of the test polynomials \(p_j(z)=0\), \(j=1,\ldots ,6\). To compute the data of this table, we applied the six methods to the six polynomials, starting at each initial point \(z_0\) of a \(600 \times 600\) grid in the rectangle \([-3,3] \times [-3,3]\) of the complex plane; the same procedure was used to generate the basins of attraction shown in Figs. 1, 2, 3, 4, 5 and 6. In particular, we again decide that an initial point \(z_0\) has reached a root \(z^{*}\) when its distance to \(z^{*}\) is less than \(10^{-3}\) (in this case \(z_0\) lies in the basin of attraction of \(z^{*}\)), and we declare that the method starting at \(z_0\) diverges when no root is found within a maximum of 15 iterations; in this case we call \(z_0\) a "nonconvergent point". The column I/P shows the mean number of iterations per point until the algorithm decides that a root has been reached or the point is declared nonconvergent. The column NC shows the percentage of nonconvergent points, indicated as black zones in the pictures of Figs. 1, 2, 3, 4, 5 and 6. The nonconvergent points have a great influence on the value of I/P since each of them always contributes the maximum number of 15 allowed iterations, whereas convergent points are usually reached very fast because we are dealing with methods of order eight. To reduce the effect of the nonconvergent points, we have included the column I\(_\text {C}\)/C, which shows the mean number of iterations per convergent point. Depending on whether the column I/P or the column I\(_\text {C}\)/C is used to compare the performance of the iterative methods, different conclusions may be obtained.
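Given the two arrays produced by the basin sketch of the previous paragraphs, the three columns of Table 4 can be computed as follows (our own reading of the measures, for illustration only).

```python
import numpy as np

def convergence_measures(root_idx, iters):
    """I/P: mean iterations per point; NC: percentage of nonconvergent
    points; I_C/C: mean iterations per convergent point."""
    converged = root_idx >= 0
    i_per_point = iters.mean()
    nc_percent = 100.0 * np.count_nonzero(~converged) / converged.size
    i_c_per_c = iters[converged].mean() if converged.any() else float("nan")
    return i_per_point, nc_percent, i_c_per_c
```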

5 Conclusion

We have introduced a new optimal three-point method without memory for approximating a simple root of a given nonlinear equation. The method uses only four function evaluations per iteration and attains convergence order eight; therefore, it supports the Kung–Traub conjecture. Numerical examples and comparisons with several existing eighth-order methods confirm the theoretical results. Our numerical experience suggests that the new method is a valuable alternative for solving such problems and finding simple roots. In addition, we have compared the iterative algorithms by means of their basins of attraction and summarized the comparative results in several tables.