1 Introduction

Iterative methods are widely used for finding roots of a nonlinear equation of the form

$$\begin{aligned} f(x)=0, \end{aligned}$$

where \(f:D\subset \mathbf {C} \rightarrow \mathbf {C}\) is defined on an open region D. Solving nonlinear equations by iterative methods is a basic and extremely valuable tool in all fields of science, as well as in economics and engineering. Analytical procedures for solving such problems are rarely available, so it is indispensable to compute approximate solutions by numerical methods. The general procedure is to start with an initial approximation to the root and generate a sequence of iterates that converges to the actual solution. The Newton–Raphson iteration

$$\begin{aligned} x_{k+1}= x_{k}-\frac{f(x_{k})}{f'(x_{k})}, \end{aligned}$$

is probably the most widely used algorithm for finding roots. The Newton method converges quadratically and requires two evaluations per iteration step, one of f and one of \(f'\) [16]. The Newton–Raphson iteration is an example of a one-point iteration, i.e., in each iteration step the evaluations are taken at a single point. Let \( \alpha \) be a multiple root of \( f(x)=0 \) with multiplicity m, i.e., \( f^{(i)}(\alpha )=0,\ i=0, 1, \ldots , m-1 \), and \( f^{(m)}(\alpha )\ne 0 \). Since the functions \( f^{(m-1)} \) and \( f^{1/m} \) have only a simple zero at \( \alpha \), any of the iterative methods for a simple zero may be applied to them [25]. The Newton method for finding a simple zero \( \alpha \) was modified by Schröder to find multiple zeros of a nonlinear equation; the modification takes the form

$$\begin{aligned} x_{k+1}= x_{k}-m\frac{f(x_{k})}{f'(x_{k})}, \end{aligned}$$

which converges with order two [19].
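As a minimal sketch (assuming user-supplied callables `f` and `df` for \(f\) and \(f'\), and a known multiplicity m; the function name is ours), Schröder's modification can be implemented as:

```python
# Sketch of Schroeder's modified Newton iteration x_{k+1} = x_k - m*f(x_k)/f'(x_k)
# for a root of known multiplicity m; f and df are user-supplied callables.
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:       # stop once the residual is negligible
            break
        x = x - m * fx / df(x)
    return x

# Example: f(x) = (x - 2)^3 has a root of multiplicity m = 3 at x = 2.
root = modified_newton(lambda x: (x - 2.0)**3, lambda x: 3.0*(x - 2.0)**2, 1.0, 3)
```

With the correct m the iteration recovers quadratic convergence at the multiple root, whereas plain Newton (m = 1) would only converge linearly there.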

The important aspects of these methods are the order of convergence and the number of function evaluations per iteration. In recent years, a large number of multi-point methods for finding simple and multiple roots of nonlinear equations have been developed and analyzed to improve the order of convergence of the classical methods; see [6, 9, 12,13,14, 17, 18, 21, 22, 24, 29]. The objective of this paper is to find the best method from the literature by comparing the numerical performance and the dynamical behavior of the basins of attraction. We focus on methods with known multiplicity m, and we compare methods with third-order convergence and the same efficiency index. The efficiency index of an iterative method of order p requiring n function evaluations per iteration is defined by \(E(n,p)=\root n \of {p}\); see [16]. These methods are non-optimal according to the Traub conjecture [11] that a multipoint iteration based on n evaluations has optimal order \(2^{n-1}\).
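For concreteness, the efficiency-index comparison can be spelled out in a few lines (a trivial sketch; the function name is ours):

```python
# Efficiency index E(n, p) = p**(1/n) of a method of order p
# that uses n function evaluations per iteration.
def efficiency_index(n, p):
    return p ** (1.0 / n)

newton_index = efficiency_index(2, 2)  # Newton: order 2, 2 evaluations
third_index = efficiency_index(3, 3)   # methods compared here: order 3, 3 evaluations
```

All eleven methods reviewed below use three evaluations per step, so they share the index \(3^{1/3}\approx 1.442\), slightly above Newton's \(2^{1/2}\approx 1.414\).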

Some of the existing methods of third-order convergence are listed below:

  1. Dong's method (a) (1982) [4]
  2. Dong's method (b) (1982) [4]
  3. Victory and Neta's method [27]
  4. Dong's method (1987) [5]
  5. Osada's method [15]
  6. Chun and Neta's method [3]
  7. Homeier's method [10]
  8. Heydari et al.'s method [9]
  9. Zhou et al.'s method [29]
  10. Sharifi et al.'s method [21]
  11. Ferrara et al.'s method [6]

This paper is organized as follows: Sect. 2 lists the reviewed methods, the test functions used, and the numerical comparisons. The dynamical behavior of the methods is illustrated in Sect. 3. Finally, a conclusion is provided in Sect. 4.

2 Numerical examples

2.1 Numerical methods

In the following, the details of the reviewed methods are listed.

  1. Dong's method (a) (1982) (D82a) [4]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&= x_{k}-\sqrt{m}\frac{f(x_{k})}{f'(x_{k})}, \\ x_{k+1}&= y_{k}-m\left( 1-\frac{1}{\sqrt{m}} \right) ^{1-m}\frac{f(y_{k})}{f'(x_{k})}. \end{aligned} \end{aligned}$$
    (2.1)
  2. Dong's method (b) (1982) (D82b) [4]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&= x_{k}-\frac{f(x_{k})}{f'(x_{k})},\\ x_{k+1}&=y_{k}+\frac{\frac{f(x_{k})}{f'(x_{k})}f(y_{k})}{f(y_{k})-\left( 1-\frac{1}{m} \right) ^{m-1}f(x_{k})}. \end{aligned} \end{aligned}$$
    (2.2)
  3. Victory and Neta's method (VN) [27]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&=x_{k}-\frac{f(x_{k})}{f'(x_{k})},\\ x_{k+1}&=y_{k}-\frac{f(y_{k})}{f'(x_{k})}\cdot \frac{f(x_{k})+Af(y_{k})}{f(x_{k})+Bf(y_{k})}, \end{aligned} \end{aligned}$$
    (2.3)

    where \(A=\mu ^{2m}-\mu ^{m+1}, B=-\frac{\mu ^{m}(m-2)(m-1)+1}{(m-1)^{2}} \) and \(\mu =\frac{m}{m-1}\).

  4. Dong's method (1987) (D87) [5]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&= x_{k}-\frac{f(x_{k})}{f'(x_{k})},\\ x_{k+1}&= y_{k}-\frac{f(x_{k})}{\left( \frac{m}{m-1} \right) ^{m+1}f'(y_{k})+\frac{m-m^{2}-1}{(m-1)^{2}}f'(x_{k})}. \end{aligned} \end{aligned}$$
    (2.4)
  5. Osada's method (OS) [15]. The iteration is given by

    $$\begin{aligned} x_{k+1}=x_{k}-\frac{1}{2}m(m+1)\frac{f(x_{k})}{f'(x_{k})}+\frac{1}{2}(m-1)^{2}\frac{f'(x_{k})}{f''(x_{k})}. \end{aligned}$$
    (2.5)
  6. Chun and Neta's method (CN) [3]. The iteration is given by

    $$\begin{aligned} \begin{aligned} x_{k+1}&=x_{k}-\frac{m\left[ (2 \theta -1)m +3-2 \theta \right] }{2}\frac{f(x_{k})}{f'(x_{k})}\\&\quad +\,\frac{\theta (m-1)^{2}}{2}\frac{f'(x_{k})}{f''(x_{k})}-\frac{(1-\theta )m^{2}}{2}\frac{f(x_{k})^{2}f''(x_{k})}{f'(x_{k})^{3}}, \end{aligned} \end{aligned}$$
    (2.6)

    where \(\theta = \frac{1}{2}\) or \(\theta =-1\); in our computations we used \(\theta = -1\).

  7. Homeier's method (HM) [10]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&=x_{k}-\frac{m}{m+1}\frac{f(x_{k})}{f'(x_{k})},\\ x_{k+1}&=x_{k}-m^{2}\left( \frac{m}{m+1} \right) ^{m-1}\frac{f(x_{k})}{f'(y_{k})}+m(m-1)\frac{f(x_{k})}{f'(x_{k})}. \end{aligned} \end{aligned}$$
    (2.7)
  8. Heydari et al.'s method (HY) [9]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&=x_{k}-\theta \frac{f(x_{k})}{f'(x_{k})},\\ x_{k+1}&=x_{k}-\frac{\frac{m(\mu \theta -\theta +\mu )}{\theta (\mu -1)}f(x_{k})-\frac{m\mu ^{1-m}}{\theta (\mu -1)}f(y_{k})}{f'(x_{k})}, \end{aligned} \end{aligned}$$
    (2.8)

    where \(\mu = \frac{m-\theta }{m}\) and \( \theta =\frac{2m}{m+2}\).

  9. Zhou et al.'s method (ZH) [29]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&=x_{k}- \frac{f(x_{k})}{f'(x_{k})},\\ x_{k+1}&=x_{k}+m(m-2)\frac{f(x_{k})}{f'(x_{k})}-m(m-1)\left( \frac{m}{m-1} \right) ^{m}\frac{f(y_{k})}{f'(x_{k})}. \end{aligned} \end{aligned}$$
    (2.9)
  10. Sharifi et al.'s method (SH) [21]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&=x_{k}- \frac{f(x_{k})}{f'(x_{k})},\\ x_{k+1}&=x_{k}- \frac{\alpha f(x_{k})+f(y_{k})}{\beta f'(x_{k})}, \end{aligned} \end{aligned}$$
    (2.10)

    where \(\alpha =(\mu -1)\frac{\mu ^{\mu }}{m^{m}}\), \(\beta = \frac{\mu ^{\mu }}{m^{m+1}}\) and \(\mu =m-1\).

  11. Ferrara et al.'s method (FR) [6]. The iteration is given by

    $$\begin{aligned} \begin{aligned} y_{k}&=x_{k}- \frac{f(x_{k})}{f'(x_{k})},\\ x_{k+1}&=x_{k}- \frac{\theta f(x_{k})}{\theta f(x_{k})-f(y_{k})}\frac{f(x_{k})}{ f'(x_{k})}, \end{aligned} \end{aligned}$$
    (2.11)

    where \(\theta =\left( \frac{-1+m}{m} \right) ^{-1+m}.\)
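To illustrate how these two-step schemes translate into code, the following is a minimal sketch of D87, Eq. (2.4), assuming user-supplied callables `f` and `df` and a known multiplicity m > 1 (all names are ours, not the authors'):

```python
# Sketch of Dong's 1987 method (2.4); f, df are the function and its
# derivative, and m > 1 is the known multiplicity of the sought root.
def dong87(f, df, x0, m, tol=1e-12, max_iter=50):
    a = (m / (m - 1.0)) ** (m + 1)           # coefficient of f'(y_k)
    b = (m - m**2 - 1.0) / (m - 1.0) ** 2    # coefficient of f'(x_k)
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:                    # residual small enough: done
            break
        y = x - fx / df(x)                   # Newton predictor
        x = y - fx / (a * df(y) + b * df(x)) # corrector step
    return x

# Example: (x - 1)^2 has a double root (m = 2) at x = 1.
root = dong87(lambda x: (x - 1.0)**2, lambda x: 2.0*(x - 1.0), 2.0, 2)
```

The other two-step methods above differ only in the predictor step length and in the corrector coefficients.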
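The one-point schemes that use a second derivative, such as Osada's method, Eq. (2.5), follow the same pattern with an extra callable for \(f''\) (a sketch under the same assumptions; names are ours):

```python
# Sketch of Osada's method (2.5); f, df, d2f are the function and its
# first and second derivatives, m is the known multiplicity.
def osada(f, df, d2f, x0, m, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = (x - 0.5 * m * (m + 1) * fx / df(x)
               + 0.5 * (m - 1) ** 2 * df(x) / d2f(x))
    return x

# Example: (x - 1)^2 has a double root (m = 2) at x = 1.
root = osada(lambda x: (x - 1.0)**2, lambda x: 2.0*(x - 1.0),
             lambda x: 2.0, 2.0, 2)
```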

2.2 Numerical test functions

We have analyzed all eleven third-order multiple-root methods with the test functions listed in Table 1. To obtain the highest possible accuracy and avoid the loss of significant digits, we used multi-precision arithmetic with 200 significant decimal digits in the programming package Mathematica 10.3 [8].

Table 1 List of test functions
Table 2 Error, COC and ACOC for D82a, D82b, VN, D87, OS, CN, HM, HY, ZH, SH and FR
Table 3 List of test functions and their roots for basin of attraction

The computational order of convergence (COC) is approximated as [28]

$$\begin{aligned} \text {COC}\approx \frac{\ln |(x_{n+1}-\alpha )/(x_{n}-\alpha )|}{\ln |(x_{n}-\alpha )/(x_{n-1}-\alpha )|}. \end{aligned}$$
(2.12)

The approximate computational order of convergence (ACOC) is calculated by [7]

$$\begin{aligned} \text {ACOC}\approx \frac{\ln |(x_{n+1}-x_{n})/(x_{n}-x_{n-1})|}{\ln |(x_{n}-x_{n-1})/(x_{n-1}-x_{n-2})|}. \end{aligned}$$
(2.13)
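Given a stored list of iterates, the two estimates (2.12) and (2.13) can be computed directly (a sketch; the list `xs` of iterates, the exact root `alpha`, and the function names are ours):

```python
from math import log

# COC (2.12): needs the last three iterates and the exact root alpha.
def coc(xs, alpha):
    e1, e2, e3 = (abs(x - alpha) for x in xs[-3:])
    return log(e3 / e2) / log(e2 / e1)

# ACOC (2.13): needs the last four iterates only (no exact root required).
def acoc(xs):
    a, b, c, d = xs[-4:]
    return log(abs(d - c) / abs(c - b)) / log(abs(c - b) / abs(b - a))

# Example: Newton's method on f(x) = x^2 - 2 (simple root, order 2).
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
```

Both estimates approach 2 for this quadratically convergent run; for the third-order methods compared here they approach 3.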

Table 2 shows that the method D87 [5] performs best in terms of the error per iteration compared to the other methods.

3 Basin of attraction

In this section we examine the performance of all the methods using their basins of attraction. The basin of attraction indicates for which initial guesses a method converges, and thus allows us to investigate the stability region of each method. For more information, see [1, 20] and [26].

Let \(G:\mathbf {C} \rightarrow \mathbf {C} \) be a rational map on the complex plane. For \(z\in \mathbf {C} \), we define its orbit as the set \(orb(z)=\{z,\,G(z),\,G^2(z),\dots \}\). A point \(z_0 \in \mathbf {C} \) is called a periodic point with minimal period m if \(G^m(z_0)=z_0\), where m is the smallest integer with this property. A periodic point with minimal period 1 is called a fixed point. Moreover, a point \(z_0\) is called attracting if \(|G'(z_0)|<1\), repelling if \(|G^{\prime }(z_0)|>1\), and neutral otherwise. The Julia set of a nonlinear map G(z), denoted by J(G), is the closure of the set of its repelling periodic points. The complement of J(G) is the Fatou set F(G), which contains the basins of attraction of the different roots [2].

Fig. 1
figure 1

Comparison of basin of attraction of method D82a, D82b, VN (top row, from left to right), D87, OS, CN (2nd row, from left to right), HM, HY, ZH (3rd row, from left to right) and SH, FR (bottom row, from left to right) for test function \(p_{1}(z)=(z+\frac{1}{z})^{5}\) respectively

Fig. 2
figure 2

Comparison of basin of attraction of method D82a, D82b, VN (top row, from left to right), D87, OS, CN (2nd row, from left to right), HM, HY, ZH (3rd row, from left to right) and SH, FR (bottom row, from left to right) for test function \(p_{2}(z)=(z^{3}-1)^{10}\) respectively

Fig. 3
figure 3

Comparison of basin of attraction of method D82a, D82b, VN (top row, from left to right), D87, OS, CN (2nd row, from left to right), HM, HY, ZH (3rd row, from left to right) and SH, FR (bottom row, from left to right) for test function \(p_3(z)=(z^3-1)^{3}\) respectively

Fig. 4
figure 4

Comparison of basin of attraction of method D82a, D82b, VN (top row from left to right), D87, OS, CN (2nd row, from left to right), HM, HY, ZH (3rd row, from left to right) and SH, FR (bottom row, from left to right) for test function \(p_{4}(z)=(2z^{4}-z)^{8}\) respectively

Fig. 5
figure 5

Comparison of basin of attraction of method D82a, D82b, VN (top row, from left to right), D87, OS, CN (2nd row, from left to right), HM, HY, ZH (3rd row, from left to right) and SH, FR (bottom row, from left to right) for test function \(p_{5}(z)=(z^{5}-z^{2}+1)^{15}\) respectively

For the basin of attraction figures we use a \(512\times 512\) grid over the square \([-\,3,3] \times [-\,3,3] \subset \mathbb {C} \). We assign a color to each grid point \(z_{0}\) according to the root to which the orbit of the method starting from \(z_{0}\) converges. A point is colored black if its orbit does not converge to any root, i.e., if after at most 100 iterations its distance to every root is still larger than \(10^{-3}\). We then analyze the basins of attraction of the different methods by comparing these colorings.
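The coloring procedure just described can be sketched as follows, using Schröder's modified Newton iteration as a stand-in for the eleven compared methods (the grid size, box, iteration cap, and tolerance follow the text; all names are ours):

```python
# Classify each grid point of [-box, box]^2 by the root its orbit reaches.
# Label 0 means no convergence within max_iter (painted black); label k >= 1
# identifies the k-th entry of `roots` (painted in that root's colour).
def basins(f, df, roots, m, n=512, box=3.0, max_iter=100, tol=1e-3):
    xs = [-box + 2.0 * box * i / (n - 1) for i in range(n)]
    labels = [[0] * n for _ in range(n)]
    for i, re in enumerate(xs):
        for j, im in enumerate(xs):
            z = complex(re, im)
            for _ in range(max_iter):
                dz = df(z)
                if dz == 0:                    # derivative vanished: give up
                    break
                z -= m * f(z) / dz             # modified Newton step
                hit = next((k for k, r in enumerate(roots)
                            if abs(z - r) < tol), None)
                if hit is not None:
                    labels[j][i] = hit + 1
                    break
    return labels

# Small-grid example on p3(z) = (z^3 - 1)^3 (m = 3); the roots are the
# cube roots of unity.
roots = [1.0, complex(-0.5, 3 ** 0.5 / 2), complex(-0.5, -(3 ** 0.5) / 2)]
lab = basins(lambda z: (z**3 - 1) ** 3,
             lambda z: 9 * z**2 * (z**3 - 1) ** 2,
             roots, 3, n=32)
```

Mapping the integer labels to colors, with brightness scaled by the iteration count at which convergence occurred, yields figures of the kind shown below.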

We have tested the listed methods for their basins of attraction using the test functions tabulated in Table 3.

In Figs. 1, 2, 3, 4 and 5, different colors are used for distinct roots; the brighter the color, the fewer iterations were needed for convergence. The black color stands for lack of convergence to any of the roots. Figures 1, 2, 3, 4 and 5 show that, for all test functions considered in Table 3, method D87 [5] presents fewer black points and larger bright regions in comparison with the other methods of the same order of convergence, which means that D87 converges faster.

4 Conclusion

In conclusion, both the numerical experiments and the basins of attraction show that D87 gives better performance compared to the other methods of the same order of convergence.