Abstract
We present a three-point iterative method without memory for solving nonlinear equations in one variable. The proposed method provides convergence order eight with four function evaluations per iteration. Hence, it possesses a very high computational efficiency and supports Kung–Traub’s conjecture. The construction, the convergence analysis, and the numerical implementation of the method will be presented. Using several test problems, the proposed method will be compared with existing methods of convergence order eight concerning accuracy and basins of attraction. Furthermore, some measures are used to judge methods with respect to their performance in finding the basins of attraction.
1 Introduction
Solving nonlinear equations is a basic and extremely valuable tool in all fields of science and engineering. One can distinguish between two general approaches for solving nonlinear equations numerically, namely, one-point and multi-point methods. The basic optimality theorem shows that an analytic one-point method based on k evaluations is of order at most k, see [29, § 5.4] or [15] for an improved proof. The Newton–Raphson method \(x_{n+1}:=x_n - \frac{f(x_n)}{f'(x_n)}\) is probably the most widely used algorithm for finding roots. It requires two evaluations per iteration step, one for f and one for \(f'\), and converges with second order, which is optimal for a one-point method with two evaluations.
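As a point of reference for the multi-point methods discussed below, the Newton–Raphson iteration can be sketched as follows (a minimal illustration of ours, not code from the paper; the stopping tolerance and the test function are arbitrary choices):

```python
# Minimal sketch of the Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n),
# illustrated on f(x) = x^2 - 2 with root sqrt(2). Names and tolerances are
# illustrative choices, not taken from the paper.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # one evaluation of f and one of f' per step
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

The quadratic convergence is visible in practice: the number of correct digits roughly doubles in each step until the tolerance is reached.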
Some computational issues encountered by one-point methods are overcome by multi-point methods since they allow one to achieve greater accuracy with the same number of function evaluations. Important aspects of these methods are convergence order and efficiency. It is desirable to attain a convergence order which is as high as possible for a fixed number of function evaluations per iteration step. The unproved conjecture of Kung and Traub [15] plays a central role in this context; it states that an optimal multi-point method without memory provides a convergence order of \(2^k\) while using \(k+1\) evaluations per iteration step. The efficiency index of a method with k evaluations and convergence order p is given by \(E(k,p)=\root k \of {p}\), see [19]. Hence, the efficiency of a method supporting Kung–Traub’s conjecture is \(\root k+1 \of {2^k}\). In particular, an optimal method with convergence order eight has the efficiency index \(\root 4 \of {8}\simeq 1.68179\).
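The efficiency index is a one-line computation; the following sketch (our illustration) evaluates it for the cases mentioned in this paper:

```python
# Efficiency index E(k, p) = p**(1/k) for a method with k evaluations per
# iteration step and convergence order p (illustrative helper, not from the paper).
def efficiency_index(k, p):
    return p ** (1.0 / k)

# Optimal eighth-order method: k = 4 evaluations, order p = 8.
e8 = efficiency_index(4, 8)          # ~ 1.68179
# Newton's method: k = 2 evaluations, order p = 2.
e_newton = efficiency_index(2, 2)    # ~ 1.41421
# Ostrowski's two-point method: k = 3 evaluations, order p = 4.
e_ostrowski = efficiency_index(3, 4) # ~ 1.5874
```

This reproduces the values \(\root 4 \of {8}\simeq 1.68179\) and \(1.587\) quoted later in the comparison with Ostrowski's method.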
A large number of multi-point methods for finding simple roots of a nonlinear equation \(f(x)=0\) with a scalar function \(f: D\subset \mathbb {R} \rightarrow \mathbb {R}\) which is defined on an open interval D (or \(f: D\subset \mathbb {C} \rightarrow \mathbb {C}\) defined on a region D in the complex plane \(\mathbb {C}\)) have been developed and analyzed to improve the convergence order of classical methods like the Newton–Raphson iteration.
Some well-known two-point methods without memory are described, e.g., in Jarratt [13], King [14], and Ostrowski [19]. Using inverse interpolation, Kung and Traub [15] constructed two general optimal classes of methods without memory. Since then, there have been many attempts to construct optimal multi-point methods, utilizing, e.g., weight functions; see in particular [2, 3, 5, 16, 20, 21, 23–26, 28, 31].
In [23], Sharifi et al. present a family of three-point methods with eighth-order convergence for finding simple roots of nonlinear equations, based on suitable approximations and weight functions applied to Maheshwari’s method. Their method requires three evaluations of the function and one evaluation of its first derivative per iteration. In [24], the authors introduce a new class of optimal iterative methods without memory for approximating a simple root of a given nonlinear equation which uses four function evaluations and one first derivative evaluation per iteration. Its convergence order is 16 and therefore optimal in the sense of Kung–Traub’s conjecture. In [25], Sharifi et al. present a three-point iterative method with memory based on the family of King’s methods. The underlying method has eighth-order convergence and costs only four function evaluations per iteration, which supports Kung–Traub’s conjecture on the optimal order of convergence. An acceleration of the convergence speed is achieved by an appropriate variation of a free parameter in each step; the self-accelerating parameter is estimated using Newton’s interpolation polynomial of degree four, and the order of convergence is increased from 8 to 12 without any extra function evaluation. Sharma and Sharma [26] derive a family of eighth-order methods based on Ostrowski’s fourth-order method. Their family requires three evaluations of the function and one evaluation of the first derivative per iteration. The efficiency index of their methods is 1.68179, which is better than the efficiency index 1.587 of Ostrowski’s method. In [22], Salimi et al. construct two optimal Newton–Secant like iterative methods for solving nonlinear equations. The proposed classes have convergence orders four and eight and cost only three and four function evaluations per iteration, respectively. These methods support Kung–Traub’s conjecture and possess a high computational efficiency; numerical experiments and a comparison with some well-known existing optimal methods illustrate their performance.
We will construct a three-point method of convergence order eight which is free from second order derivatives, uses four evaluations per iteration, and attains the efficiency index \(\root 4 \of {8} \simeq 1.68179\).
A widely used criterion to judge and rank different methods for solving nonlinear equations is the basin of attraction. We will use two measures to assess the performance in finding the basin of attraction [30].
The paper is organized as follows. Section 2 introduces the new method based on a Newton step and Newton’s interpolation, gives the details of its construction, and proves its optimal convergence order eight. The numerical performance of the proposed method in comparison with other methods is illustrated in Sect. 3. In Sect. 4, we approximate and visualize the basins of attraction of the proposed method and several existing methods, both graphically and by means of some numerical performance measures [30]. Finally, we conclude in Sect. 5.
2 Description of the method and convergence analysis
We construct in this section a new optimal three-point method for solving nonlinear equations by using a Newton step and Newton’s interpolation polynomial of degree three, which was also applied in [25].
Method 1: The new method, denoted by MSSV, is given by
where \(u(x_n)=\frac{f(x_{n})}{f'(x_{n})}\). The standard notation for divided differences in Newton’s interpolation,
\[ g[t_0,\ldots ,t_k] = \frac{g[t_1,\ldots ,t_k]-g[t_0,\ldots ,t_{k-1}]}{t_k-t_0}, \]
with \(g[t_\nu ] = g(t_\nu )\) and \(g[t_{\nu },t_{\nu }] = g'(t_{\nu })\) is used.
The iteration method (2.1) and all forthcoming methods are applied for \(n = 0, 1, \ldots \) where \(x_0\) denotes an initial approximation of the simple root \(x^{*}\) of the function f. The method (2.1) uses four evaluations per iteration step, three for f and one for \(f'\). Note that (2.1) works for real and complex functions.
The convergence order of method (2.1) is given in the following theorem.
Theorem 1
Let \(f:D\subset \mathbb {R} \rightarrow \mathbb {R}\) be a nine times continuously differentiable function with a simple zero \(x^{*}\in D\). If the initial point \(x_{0}\) is sufficiently close to \(x^{*}\) then the method defined by (2.1) converges to \(x^{*}\) with order eight.
Proof
Let \(e_{n} := x_{n}-x^{*}\), \(e_{n,y}:=y_{n}-x^{*}\), \(e_{n,z} := z_{n}-x^{*}\) and \(c_{n} := \frac{f^{(n)}(x^{*})}{n!\,f'(x^{*})}\) for \(n\in \mathbb {N}\). Using the fact that \(f(x^{*})=0\), the Taylor expansion of f at \(x^{*}\) yields
and
Therefore, we obtain
and
We have
by a Taylor expansion of f at \(x^{*}\). By substituting (2.2)–(2.4) into (2.1), we get
We obtain
by using again a Taylor expansion of f at \(x^{*}\). Substituting (2.2)–(2.5) into (2.1) results in
which finishes the proof of the theorem. \(\square \)
We will compare the new method (2.1) with some existing optimal three-point methods of order eight having the same optimal computational efficiency index equal to \(\root 4 \of {8} \simeq 1.68179\), see [19, 29].
The existing methods that we are going to compare are the following:
Method 2: The method by Chun and Lee [5], denoted by CL, is given by
with weight functions
where \(t_n = \frac{f(y_n)}{f(x_n)}\), \(s_n = \frac{f(z_n)}{f(x_n)}\), \(u_n = \frac{f(z_n)}{f(y_n)}\), and \(\beta , \gamma \in \mathbb {R}\). Note that the parameters \(\beta \) and \(\gamma \) cancel when used in (2.7). Hence, their choice does not affect the method.
Method 3: The method by Neta [17], see also [18, formula (9)], denoted by N, is given by
where
We will use \(A = 0\) in the numerical experiments of this paper.
Method 4: The Sharma and Sharma method [26], denoted by SS, is given by
with the weight function
and \(t_n = \frac{f(z_n)}{f(x_n)}\). We will use \(\alpha =1\) in the numerical experiments of this paper.
Method 5: The method from Babajee et al. [2], denoted by BCST, is given by
Method 6: The method from Thukral and Petković [28], denoted by TP, is given by
where the weight functions are
and \(t_n=\frac{f(y_n)}{f(x_n)}\), \(s_n=\frac{f(z_n)}{f(y_n)}\) and \(u_n=\frac{f(z_n)}{f(x_n)}\). We will use \(\beta =0\) and \(\alpha =1\) in the numerical experiments of this paper.
3 Numerical examples
The new three-point method MSSV is tested on several nonlinear equations. To obtain high accuracy and avoid the loss of significant digits, we employed multi-precision arithmetic with 20,000 significant decimal digits in the programming package Mathematica.
We perform numerical experiments with the four test functions \(f_1,\ldots , f_4\) listed in Table 1. For each of the four functions, we approximate the given root \(x^{*}\), starting from the listed initial value \(x_0\), with each of the six methods of convergence order eight.
In order to test our proposed method (2.1) and to compare it with the methods (2.7)–(2.11), we compute the error, the computational order of convergence (COC) [32] given by
\[ \mathrm{COC} \approx \frac{\ln \left| (x_{n+1}-x^{*})/(x_{n}-x^{*})\right| }{\ln \left| (x_{n}-x^{*})/(x_{n-1}-x^{*})\right| } \tag{3.1} \]
and the approximated computational order of convergence (ACOC) [6] given by
\[ \mathrm{ACOC} \approx \frac{\ln \left| (x_{n+1}-x_{n})/(x_{n}-x_{n-1})\right| }{\ln \left| (x_{n}-x_{n-1})/(x_{n-1}-x_{n-2})\right| }. \tag{3.2} \]
It is worth noting that COC has been widely used in recent years. Nevertheless, ACOC is more practical because it does not require knowledge of the root \(x^{*}\). We refer to [10] for a comparison of several computational orders of convergence. Note that these formulas may, for particular examples, yield convergence orders which are higher than expected. The reason is that the error equation (2.6) contains problem-dependent coefficients which may vanish for some nonlinear functions f. However, for a “random” example, the formulas (3.1) and (3.2) provide good approximations of the convergence order of the method.
We have used both COC and ACOC to check the accuracy of the considered methods. Note that both COC and ACOC give good experimental approximations of the convergence order already for small values of n.
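The formulas (3.1) and (3.2) are easy to evaluate from a few iterates. The following sketch (our illustration, using Newton's method on \(f(x)=x^2-2\) rather than the eighth-order methods of the paper) shows both estimates recovering the theoretical order two:

```python
from math import log

# Our illustration (not code from the paper): empirical convergence-order
# estimates via COC (3.1), which uses the exact root, and ACOC (3.2), which
# only uses successive iterates. Tested on Newton's method for f(x) = x^2 - 2.
def coc(xs, root):
    # last three errors |x_{n-1} - x*|, |x_n - x*|, |x_{n+1} - x*|
    e = [abs(x - root) for x in xs[-3:]]
    return log(e[2] / e[1]) / log(e[1] / e[0])

def acoc(xs):
    # differences of the last four iterates:
    # |x_{n-1}-x_{n-2}|, |x_n-x_{n-1}|, |x_{n+1}-x_n|
    d = [abs(b - a) for a, b in zip(xs[-4:-1], xs[-3:])]
    return log(d[2] / d[1]) / log(d[1] / d[0])

# Generate five Newton iterates for f(x) = x^2 - 2 starting from x0 = 1.
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
# Both estimates should be close to the theoretical order 2.
```

In multi-precision arithmetic (as used in the paper), more iterates can be taken before rounding errors spoil the estimates; in double precision, the estimates are reliable only for the first few steps.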
The comparison of our method (2.1) with the methods (2.7)–(2.11) applied to the four nonlinear equations \(f_j(x)=0\), \(j=1,\ldots ,4\), is presented in Table 2. We abbreviate (2.1) by MSSV, and (2.7)–(2.11) by CL, N, SS, BCST, TP, respectively. The values of COC and ACOC are given for \(n=3\). Note that, for all problems and all methods, both COC and ACOC approximate the theoretical order of convergence very accurately.
4 Dynamic behavior
We have already observed that all methods converge if the initial guess is chosen suitably. We now investigate from which regions the initial point has to be chosen in order to reach a root. In other words, we numerically approximate the domains of attraction of the zeros as a qualitative measure of how strongly the methods depend on the choice of the initial approximation of the root. To answer this important question on the dynamical behavior of the algorithms, we investigate the dynamics of the new method (2.1) and compare it with the methods (2.7)–(2.11).
Let us recall some basic concepts, such as the basin of attraction. For more details and many other examples of studies of the dynamic behavior of iterative methods, one can consult [1, 2, 4, 7–9, 11, 12, 27, 30].
Let \(Q:\mathbb {C} \rightarrow \mathbb {C}\) be a rational map on the complex plane. For \(z\in \mathbb {C} \), we define its orbit as the set \({\text {orb}}(z) = \{z,\,Q(z),\,Q^2(z),\ldots \}\). The convergence \({\text {orb}}(z)\rightarrow z^{*}\) is understood in the sense \(\lim \nolimits _{k\rightarrow \infty } Q^k(z) = z^{*}\). A point \(z_0 \in \mathbb {C}\) is called a periodic point with minimal period m if \(Q^m(z_0) = z_0\), where m is the smallest positive integer with this property (and thus \(\{z_0,Q(z_0),\ldots , Q^{m-1}(z_0)\}\) is a cycle). A periodic point with minimal period 1 is called a fixed point. Moreover, a periodic point \(z_0\) with period m is called attracting if \(|(Q^m)'(z_0)|<1\), repelling if \(|(Q^m)'(z_0)|>1\), and neutral otherwise. The Julia set of a nonlinear map Q(z), denoted by J(Q), is the closure of the set of its repelling periodic points. The complement of J(Q) is the Fatou set F(Q).
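As a concrete illustration of this classification (our example, not from the paper), consider the Newton map \(Q(z)=z-p(z)/p'(z)\) of \(p(z)=z^2-1\). Its finite fixed points are the roots \(z=\pm 1\), and the criterion \(|Q'(z_0)|<1\) confirms that they are attracting (in fact superattracting, since \(Q'(\pm 1)=0\)):

```python
# Illustrative check (our example, not from the paper) of the fixed-point
# classification |Q'(z0)| < 1 for the Newton map of p(z) = z^2 - 1.
def Q(z):
    # Newton map Q(z) = z - p(z)/p'(z) with p(z) = z^2 - 1
    return z - (z * z - 1) / (2 * z)

def deriv_abs(f, z, h=1e-6):
    # |f'(z)| approximated by a central finite difference
    return abs((f(z + h) - f(z - h)) / (2 * h))

# The roots z = +/-1 are fixed points of Q and superattracting: Q'(+/-1) = 0,
# so the numerical multiplier is (up to rounding) zero, well below 1.
multiplier = deriv_abs(Q, 1.0)
```

This is the mechanism behind the basins of attraction pictures: every root of p is a superattracting fixed point of the corresponding iterative map.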
The six methods (2.1) and (2.7)–(2.11) provide iterative rational maps Q when they are applied to find roots of complex polynomials p. In particular, we are interested in the basins of attraction of the roots of the polynomials where the basin of attraction of a root \(z^{*}\) is the complex set \(\{ z_0 \in \mathbb {C} : {\text {orb}}(z_0) \rightarrow z^{*} \}\). It is well known that the basins of attraction of the different roots lie in the Fatou set F(Q). The Julia set J(Q) is, in general, a fractal and the rational map Q is unstable there.
For the graphical illustration of the dynamics, we take a \(600 \times 600\) grid of the square \([-3,3]\times [-3,3] \subset \mathbb {C}\) and assign a color to each grid point \(z_0\) according to the simple root to which the orbit of the iterative method starting from \(z_0\) converges. We mark a point as black if the orbit does not converge to a root, in the sense that after at most 15 iterations its distance to each of the roots is larger than \(10^{-3}\). We use only 15 iterations because the considered methods have convergence order eight and, if they converge, do so very fast. The basins of attraction are distinguished by their colors.
Different colors are used for different roots. Within each basin of attraction, the number of iterations needed to reach the root is shown by the brightness: brighter color means fewer iteration steps. Note that black denotes lack of convergence to any of the roots. This happens, in particular, when the method converges to a fixed point that is not a root, ends in a periodic cycle, or diverges to infinity. Although we have not done so in this paper, infinity can be considered an ordinary point by working on the Riemann sphere instead of the complex plane; in this case, one can assign an additional “ordinary color” to the basin of attraction of infinity. Details of this idea can be found in [12].
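The scan just described can be sketched as follows (our illustration, using the Newton map for \(p(z)=z^3-1\) as a stand-in for the rational maps of the six methods; the grid is coarser than \(600\times 600\) for speed, but the 15-iteration cap and the \(10^{-3}\) tolerance match the text):

```python
import cmath

# Hedged sketch (ours, not the paper's code) of the basin-of-attraction scan,
# using the Newton map Q(z) = z - p(z)/p'(z) of p(z) = z^3 - 1.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of unity

def basin_index(z0, max_iter=15, tol=1e-3):
    """Index of the root reached from z0, or -1 (black) if none within max_iter."""
    z = z0
    for _ in range(max_iter):
        dz = 3 * z * z
        if dz == 0:          # derivative vanishes: orbit undefined, mark black
            return -1
        z = z - (z ** 3 - 1) / dz
        for k, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return k
    return -1

# Scan a coarse grid over the square [-3, 3] x [-3, 3].
n = 60
grid = [[basin_index(complex(-3 + 6 * i / (n - 1), -3 + 6 * j / (n - 1)))
         for i in range(n)] for j in range(n)]
```

Mapping each index to a color (and the iteration count to brightness) then yields pictures of the kind shown in Figs. 1–6.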
Basins of attraction for the six methods (2.1) and (2.7)–(2.11) for the six test problems \(p_j(z)=0\), \(j=1,\ldots ,6\), (Table 3) are illustrated in Figs. 1, 2, 3, 4, 5 and 6 from left to right and from top to bottom.
From the pictures, we can easily judge the behavior and suitability of any method depending on the circumstances. If we choose an initial point \(z_0\) in a zone where different basins of attraction touch each other, it is impossible to predict which root will be reached by the iterative method starting at \(z_0\). Hence, such a \(z_0\) is not a good choice. Both the black zones and the zones with many colors are not suitable for choosing the initial guess \(z_0\) if a specific root should be reached. Although the most attractive pictures appear when the frontiers between basins of attraction are very intricate, these are exactly the cases in which the dynamic behavior of the method is most unpredictable and the method is most demanding with respect to the choice of the initial point.
Finally, Table 4 contains the results of some numerical experiments which measure the behavior of the six iterative methods (2.1) and (2.7)–(2.11) in finding the roots of the test polynomials \(p_j(z)=0\), \(j=1,\ldots ,6\). To compute the data of this table, we applied the six methods to the six polynomials, starting at each initial point \(z_0\) of a \(600 \times 600\) grid in the rectangle \([-3,3] \times [-3,3]\) of the complex plane; the same grid was used in Figs. 1, 2, 3, 4, 5 and 6 to show the basins of attraction of the roots. In particular, we decide again that an initial point \(z_0\) has reached a root \(z^{*}\) when its distance to \(z^{*}\) is less than \(10^{-3}\) (in this case \(z_0\) belongs to the basin of attraction of \(z^{*}\)), and we declare that the method starting at \(z_0\) diverges when no root is found within a maximum of 15 iterations of the method. We call such a \(z_0\) a “nonconvergent point”. The column I/P shows the mean number of iterations per point until the algorithm decides that a root has been reached or the point is declared nonconvergent. The column NC shows the percentage of nonconvergent points, indicated as black zones in the pictures of Figs. 1, 2, 3, 4, 5 and 6. Clearly, the nonconvergent points have a great influence on the values of I/P since each of them contributes the maximum number of 15 allowed iterations. In contrast, convergent points are usually reached very fast since we are dealing with methods of order eight. To reduce the effect of nonconvergent points, we have included the column I\(_\text {C}\)/C which shows the mean number of iterations per convergent point. Depending on whether the column I/P or the column I\(_\text {C}\)/C is used to compare the performance of the iterative methods, different conclusions may be obtained.
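The three measures I/P, NC, and I\(_\text {C}\)/C can be accumulated during the same grid scan. The following sketch (ours, again using the Newton map of \(p(z)=z^3-1\) as a stand-in for the six eighth-order methods, on a coarser grid) shows how they are computed:

```python
# Hedged sketch (our illustration) of the performance measures of Table 4:
# I/P  = mean iterations per point (nonconvergent points count 15 iterations),
# NC   = percentage of nonconvergent points,
# I_C/C = mean iterations per convergent point.
def iterations_to_root(z0, roots, max_iter=15, tol=1e-3):
    """Iterations until some root is within tol, or None if nonconvergent."""
    z = z0
    for it in range(1, max_iter + 1):
        dz = 3 * z * z
        if dz == 0:
            return None
        z = z - (z ** 3 - 1) / dz   # Newton map of p(z) = z^3 - 1
        if any(abs(z - r) < tol for r in roots):
            return it
    return None

roots = [1, -0.5 + 0.75 ** 0.5 * 1j, -0.5 - 0.75 ** 0.5 * 1j]
n, max_iter = 60, 15
counts = []   # iteration count per grid point, None for nonconvergent points
for j in range(n):
    for i in range(n):
        z0 = complex(-3 + 6 * i / (n - 1), -3 + 6 * j / (n - 1))
        counts.append(iterations_to_root(z0, roots, max_iter))

total = len(counts)
conv = [c for c in counts if c is not None]
I_P = (sum(conv) + (total - len(conv)) * max_iter) / total  # mean iterations/point
NC = 100.0 * (total - len(conv)) / total                    # % nonconvergent points
I_C = sum(conv) / len(conv)                                 # mean iterations/convergent point
```

Since every nonconvergent point contributes the full cap of 15 iterations to I/P but is excluded from I\(_\text {C}\)/C, the two columns can indeed rank methods differently, as discussed above.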
5 Conclusion
We have introduced a new optimal three-point method without memory for approximating a simple root of a given nonlinear equation. The method uses only four function evaluations per iteration and achieves convergence order eight; hence, Kung and Traub’s conjecture is supported. Numerical examples and comparisons with some existing eighth-order methods confirm the theoretical results. The numerical experience suggests that the new method is a valuable alternative for finding simple roots of nonlinear equations. In addition, we compared the iterative algorithms by means of their basins of attraction and included tables with comparative results.
References
Amat, S., Busquier, S., Magreñán, Á.A.: Reducing chaos and bifurcations in Newton-type methods. Abstr. Appl. Anal. 2013, Art. ID 726701 (2013)
Babajee, D.K.R., Cordero, A., Soleymani, F., Torregrosa, J.R.: On improved three-step schemes with high efficiency index and their dynamics. Numer. Algorithms 65, 153–169 (2014)
Bi, W., Ren, H., Wu, Q.: Three-step iterative methods with eighth-order convergence for solving nonlinear equations. J. Comput. Appl. Math. 225, 105–112 (2009)
Chicharro, F., Cordero, A., Gutiérrez, J.M., Torregrosa, J.R.: Complex dynamics of derivative-free methods for nonlinear equations. Appl. Math. Comput. 219, 7023–7035 (2013)
Chun, C., Lee, M.Y.: A new optimal eighth-order family of iterative methods for the solution of nonlinear equations. Appl. Math. Comput. 223, 506–519 (2013)
Cordero, A., Torregrosa, J.R.: Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 190, 686–698 (2007)
Ezquerro, J.A., Hernández, M.A.: An improvement of the region of accessibility of Chebyshev’s method from Newton’s method. Math. Comp. 78, 1613–1627 (2009)
Ezquerro, J.A., Hernández, M.A.: An optimization of Chebyshev’s method. J. Complex. 25, 343–361 (2009)
Ferrara, M., Sharifi, S., Salimi, M.: Computing multiple zeros by using a parameter in Newton–Secant method. SeMA J. (2016). doi:10.1007/s40324-016-0074-0
Grau-Sánchez, M., Noguera, M., Gutiérrez, J.M.: On some computational orders of convergence. Appl. Math. Lett. 23(4), 472–478 (2010)
Gutiérrez, J.M., Magreñán, Á.A., Varona, J.L.: The “Gauss-Seidelization” of iterative methods for solving nonlinear equations in the complex plane. Appl. Math. Comput. 218, 2467–2479 (2011)
Hernández-Paricio, L.J., Marañón-Grandes, M., Rivas-Rodríguez, M.T.: Plotting basins of end points of rational maps with Sage. Tbilisi Math. J. 5(2), 71–99 (2012)
Jarratt, P.: Some fourth order multipoint iterative methods for solving equations. Math. Comput. 20, 434–437 (1966)
King, R.F.: A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 10, 876–879 (1973)
Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 21, 634–651 (1974)
Lotfi, T., Sharifi, S., Salimi, M., Siegmund, S.: A new class of three-point methods with optimal convergence order eight and its dynamics. Numer. Algorithms 68, 261–288 (2015)
Neta, B.: On a family of multipoint methods for nonlinear equations. Int. J. Comput. Math. 9, 353–361 (1981)
Neta, B., Chun, C., Scott, M.: Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations. Appl. Math. Comput. 227, 567–592 (2014)
Ostrowski, A.M.: Solution of Equations and Systems of Equations, 2nd edn. Academic Press, New York (1966)
Petković, M.S., Neta, B., Petković, L.D., Džunić, J.: Multipoint Methods for Solving Nonlinear Equations. Elsevier/Academic Press, Amsterdam (2013)
Prashanth, M., Gupta, D.K.: Semilocal convergence for Super-Halley’s method under \(\omega \)-differentiability condition. Jpn. J. Ind. Appl. Math. 32, 77–94 (2015)
Salimi, M., Lotfi, T., Sharifi, S., Siegmund, S.: Optimal Newton–Secant like methods without memory for solving nonlinear equations with its dynamics. Int. J. Comput. Math. (2016). doi:10.1080/00207160.2016.1227800
Sharifi, S., Ferrara, M., Salimi, M., Siegmund, S.: New modification of Maheshwari method with optimal eighth order of convergence for solving nonlinear equations. Open Math. (formerly Cent. Eur. J. Math.) 14, 443–451 (2016)
Sharifi, S., Salimi, M., Siegmund, S., Lotfi, T.: A new class of optimal four-point methods with convergence order \(16\) for solving nonlinear equations. Math. Comput. Simul. 119, 69–90 (2016)
Sharifi, S., Siegmund, S., Salimi, M.: Solving nonlinear equations by a derivative-free form of the King’s family with memory. Calcolo 53, 201–215 (2016)
Sharma, J.R., Sharma, R.: A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algorithms 54, 445–458 (2010)
Stewart, B.D.: Attractor Basins of Various Root-Finding Methods. M.S. thesis, Naval Postgraduate School, Monterey, CA (2001)
Thukral, R., Petković, M.S.: A family of three-point methods of optimal order for solving nonlinear equations. J. Comput. Appl. Math. 233, 2278–2284 (2010)
Traub, J.F.: Iterative Methods for the Solution of Equations. Prentice Hall, Englewood Cliffs (1964)
Varona, J.L.: Graphic and numerical comparison between iterative methods. Math. Intell. 24(1), 37–46 (2002)
Wang, X., Liu, L.: New eighth-order iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 234, 1611–1620 (2010)
Weerakoon, S., Fernando, T.G.I.: A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13(8), 87–93 (2000)
Acknowledgements
The research of the fourth author is supported by Grant MTM2015-65888-C4-4-P (MINECO/FEDER).
Matthies, G., Salimi, M., Sharifi, S. et al. An optimal three-point eighth-order iterative method without memory for solving nonlinear equations with its dynamics. Japan J. Indust. Appl. Math. 33, 751–766 (2016). https://doi.org/10.1007/s13160-016-0229-5
Keywords
- Optimal multi-point iterative methods
- Simple root
- Order of convergence
- Kung–Traub’s conjecture
- Basins of attraction