Abstract
In this paper, we present an iterative three-point method with memory, based on the family of King’s methods, to solve nonlinear equations. The proposed method has eighth-order convergence and costs only four function evaluations per iteration, which supports the Kung-Traub conjecture on the optimal order of convergence. An acceleration of the convergence speed is achieved by an appropriate variation of a free parameter in each step. This self-accelerating parameter is estimated using Newton’s interpolation polynomial of fourth degree. The order of convergence is increased from 8 to 12 without any extra function evaluation; consequently, the method possesses a high computational efficiency. Finally, a numerical comparison of the proposed method with related methods shows its effectiveness and performance in high-precision computations.
1 Introduction
Solving nonlinear equations is one of the most important problems in science and engineering, with interesting applications in many fields. We consider iterative methods to find a simple root \(\alpha \) of a nonlinear equation \(f(x)=0\), where \(f : D \subset \mathbb {R} \rightarrow \mathbb {R}\) is a scalar function on an open interval \(D\). The Newton-Raphson iteration \(x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\) is probably the most widely used root-finding algorithm. It converges with order two and requires two evaluations per iteration step, one evaluation of \(f\) and one of \(f'\) [10, 17].
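The classical iteration above can be sketched in a few lines. The test function \(f(x)=x^2-2\), starting point and tolerance below are illustrative choices, not taken from the paper:

```python
# Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n),
# illustrated on f(x) = x^2 - 2 with root alpha = sqrt(2).

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Return an approximate simple root of f via Newton's method."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)   # two evaluations per step: f and f'
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```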
Kung and Traub [7] conjectured that no multi-point method without memory using \(n\) function evaluations can have a convergence order larger than \(2^{n-1}\). A multi-point method with convergence order \(2^{n-1}\) is called optimal. The efficiency index \(p^{1/n}\), where \(p\) is the convergence order of the method and \(n\) is the number of function evaluations per iteration, measures the balance between these two quantities.
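The efficiency indices discussed throughout the paper follow directly from this formula; a small illustrative snippet:

```python
# Efficiency index p**(1/n): p = convergence order,
# n = function evaluations per iteration.

def efficiency_index(p, n):
    return p ** (1.0 / n)

newton_ei = efficiency_index(2, 2)      # Newton's method
optimal3pt_ei = efficiency_index(8, 4)  # optimal three-point method
memory_ei = efficiency_index(12, 4)     # method with memory, order 12
```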
Some well-known optimal two-point methods have been introduced by Jarratt [5], King [6], Ostrowski [10] and Petkovic et al. [12]. Some optimal multi-point methods have been proposed by Chun and Lee [1], Cordero et al. [2], Lotfi et al. [8], Neta [9], Salimi et al. [13] and Sharifi et al. [14, 15]. Zheng et al. [19] have presented an optimal Steffensen-type family. Dzunic et al. [3], Petkovic et al. [11] and Sharma et al. [16] have constructed methods with memory to solve nonlinear equations.
In this paper, we present a derivative-free modification of the family of King’s methods. The paper is organized as follows: Sects. 2 and 3 are devoted to the construction and convergence analysis of two-point and three-point optimal derivative-free methods. We introduce the methods with memory and prove their R-order in Sect. 4. In Sect. 5, the new methods are compared with their closest competitors in a series of numerical examples. Section 6 contains a short conclusion.
2 Derivative-free two-point method of fourth order
2.1 Description of derivative-free two-point method
We start with King’s family of methods which is one of the most important two-point families for solving nonlinear equations [6].
where \(x_0\) is an initial approximation of a simple zero \(\alpha \) of \(f\). The main idea is to construct a derivative-free class of two-point methods with optimal order of convergence four. We use a Steffensen-like method for the first step and, for the second step, approximate \(f'(x_n)\) by
where \( w_n=x_n-\beta f(x_n)\), \(\beta \ne 0\), \(f[y_n,w_n]=\dfrac{f(y_n)-f(w_n)}{y_n-w_n}\), \(t_n=\dfrac{f(y_n)}{f(x_n)}\) and \(G\) is a real function.
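In code, the divided difference \(f[y_n,w_n]\) and a Steffensen-like first substep look as follows. The precise form of the paper’s first step is an assumption here (the standard Steffensen-like step \(y_n = x_n - f(x_n)/f[x_n,w_n]\)), and the test function and parameter values are illustrative:

```python
# First-order divided difference f[a, b] = (f(a) - f(b)) / (a - b),
# and an assumed Steffensen-like first substep
#   y_n = x_n - f(x_n) / f[x_n, w_n],  w_n = x_n - beta * f(x_n).

def divided_difference(f, a, b):
    return (f(a) - f(b)) / (a - b)

def steffensen_like_step(f, x, beta):
    w = x - beta * f(x)
    return x - f(x) / divided_difference(f, x, w)

# One step on f(x) = x^2 - 2 from x0 = 1.5 with beta = 0.1 (illustrative).
y = steffensen_like_step(lambda x: x * x - 2.0, 1.5, 0.1)
```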
Hence, we obtain
2.2 Convergence analysis
We shall state the convergence theorem for the family of methods (2.2).
Theorem 1
Let \(D \subseteq \mathbb {R}\) be an open interval, \(f : D \rightarrow \mathbb {R}\) four times continuously differentiable and let \(\alpha \in D\) be a simple zero of \(f\). If the initial point \(x_{0}\) is sufficiently close to \(\alpha \), then the method defined by (2.2) converges to \(\alpha \) with order four if the weight function \(G : \mathbb {R} \rightarrow \mathbb {R}\) is continuously differentiable and satisfies the conditions
Proof
Let \(e_{n}:=x_{n}-\alpha \), \(e_{n,w}:=w_n-\alpha \), \(e_{n,y}:=y_{n}-\alpha \) and \(c_{n}:=\dfrac{f^{(n)}(\alpha )}{n!f^{'}(\alpha )}, \quad n=2,3,\ldots \)
Using Taylor expansion of \(f\) at \(\alpha \) and taking into account \(f(\alpha )=0\), we have
then
and
by substituting (2.5) in (2.2), we get
As well as
From (2.3) and (2.6), we obtain
by expanding \(G(t_n)\) around \(0\), we have
moreover, from (2.4) and (2.6), we obtain
then, by substituting (2.3)–(2.9) in (2.2) we get
with
Therefore, to provide the fourth order convergence of the two-point method (2.2), it is necessary to choose \(R_i=0\) for \(i=2,3\), and to achieve this we use the fact that
It is clear that \(R_{4}\ne 0\) in general. Thus, method (2.2) converges to \(\alpha \) with order four and the error equation becomes
This finishes the proof of the theorem. \(\square \)
3 Derivative-free three-point method with order eight
3.1 Description of derivative-free three-point method
Next, we add one Newton step to the method (2.2)
As can be seen, the function is evaluated five times, so this method is neither optimal nor derivative-free. To make it optimal and derivative-free, we approximate \( f'(z_n) \) using Newton’s interpolation polynomial of degree three at the points \(x_n\), \(w_n\), \(y_n\) and \(z_n\).
It is clear that
Then
and hence we get
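The approximation \(f'(z_n)\approx N_3'(z_n)\) can be evaluated with divided differences only; writing the Newton interpolation polynomial anchored at \(z_n\), its derivative at \(z_n\) simplifies as in the sketch below (illustrative nodes and test function, not data from the paper):

```python
# Derivative at z of the Newton interpolation polynomial of degree three
# on the nodes x, w, y, z. With the Newton form anchored at z,
#   N3'(z) = f[z,y] + f[z,y,w](z-y) + f[z,y,w,x](z-y)(z-w).

def newton3_derivative_at_z(f, x, w, y, z):
    d1_zy  = (f(z) - f(y)) / (z - y)
    d1_yw  = (f(y) - f(w)) / (y - w)
    d1_wx  = (f(w) - f(x)) / (w - x)
    d2_zyw = (d1_zy - d1_yw) / (z - w)
    d2_ywx = (d1_yw - d1_wx) / (y - x)
    d3     = (d2_zyw - d2_ywx) / (z - x)
    return d1_zy + d2_zyw * (z - y) + d3 * (z - y) * (z - w)

# Sanity check on a cubic, which the degree-three interpolant reproduces
# exactly: f(t) = t^3 has f'(2) = 12.
approx = newton3_derivative_at_z(lambda t: t**3, 0.5, 1.0, 1.5, 2.0)
```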
3.2 Convergence analysis
The following theorem shows that the method (3.2) has convergence order eight.
Theorem 2
Let \(D \subseteq \mathbb {R}\) be an open interval, \(f : D \rightarrow \mathbb {R}\) eight times continuously differentiable and let \( \alpha \in D\) be a simple zero of \(f\). If the initial point \(x_{0}\) is sufficiently close to \(\alpha \) and the conditions of Theorem 1 hold, then the method defined by (3.2) converges to \(\alpha \) with order eight.
Proof
Define \(e_{n}:=x_{n}-\alpha \), \(e_{n,w}:=w_n-\alpha \), \(e_{n,y}:=y_{n}-\alpha \), \(e_{n,z}:=z_{n}-\alpha \) and \(c_{n}:=\frac{f^{(n)}(\alpha )}{n!f^{'}(\alpha )}\) for \(n=2,3,\ldots \)
Using Taylor expansion of \(f\) at \(\alpha \) and taking into account that \(f(\alpha )=0\), we have
then
and
as well as
According to Theorem 1, we get
by using the expansion of \(f(z_n)\), we have
From (3.3) and (3.6), we have
From (3.3) and (3.4), we obtain
From (3.9) and (3.10), we get
Therefore, from (3.11) and (3.12), we get
Finally, by substituting (3.3)–(3.13) in (3.2), we obtain
This shows that the method (3.2) has optimal convergence order eight. \(\square \)
4 The development of a new method with memory
In this section, we design a new method with memory by using a self-accelerating parameter in method (3.2). We observe that the order of convergence of method (3.2) is eight when \(\beta \ne 1/f'(\alpha )\), whereas for \(\beta =1/f'(\alpha )\) it would be 12. Since the value \(f'(\alpha )\) is not available, we use an approximation \(\widehat{f'}(\alpha )\approx f'(\alpha )\) instead. The goal is to construct a method with memory that calculates the parameter \(\beta =\beta _n\) as the iteration proceeds by the formula \(\beta _n=1/\widehat{f'}(\alpha )\) for \(n=1,2,3,\ldots \); an initial estimate \(\beta _0\) must be chosen before starting the iterative process. In the following, we use the symbols \(\rightarrow \), \(O\) and \(\sim \) according to the following convention [17]: if \(\lim _{n\rightarrow \infty }f(x_n)=C\), we write \(f(x_n)\rightarrow C\) or \(f\rightarrow C\), where \(C\) is a nonzero constant; if \(\tfrac{f}{g}\rightarrow C\), we write \(f=O(g)\) or \(f\sim Cg\).
We approximate \(\widehat{f'}(\alpha )\) by \(N'_4(x_n)\), then we have
where \(N_4(t):=N_4(t;x_n, z_{n-1}, y_{n-1}, w_{n-1}, x_{n-1})\) is Newton’s interpolation polynomial of fourth degree, set through the five available approximations \(x_n, z_{n-1}, y_{n-1}, w_{n-1}, x_{n-1}\), and \(N_4'\) is its derivative.
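Computing \(\beta _n = 1/N_4'(x_n)\) again reduces to divided differences of the five nodes; a sketch, validated on a quartic (which the degree-four interpolant reproduces exactly; the nodes below are illustrative):

```python
# Derivative of the Newton interpolating polynomial at its first node a:
#   N'(a) = f[a,b] + f[a,b,c](a-b) + f[a,b,c,d](a-b)(a-c)
#         + f[a,b,c,d,e](a-b)(a-c)(a-d).

def newton_poly_derivative_at_first_node(f, nodes):
    """Derivative of the Newton interpolating polynomial at nodes[0]."""
    n = len(nodes)
    table = [f(t) for t in nodes]    # column of the divided-difference table
    coeffs = [table[0]]              # f[a], f[a,b], f[a,b,c], ...
    for k in range(1, n):
        table = [(table[i] - table[i + 1]) / (nodes[i] - nodes[i + k])
                 for i in range(n - k)]
        coeffs.append(table[0])
    a = nodes[0]
    deriv, product = 0.0, 1.0
    for k in range(1, n):
        deriv += coeffs[k] * product
        product *= a - nodes[k]
    return deriv

# f(t) = t^4 has f'(1) = 4; beta would then be 1/4.
beta_inv = newton_poly_derivative_at_first_node(
    lambda t: t**4, [1.0, 0.9, 0.8, 0.7, 0.6])
```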
Lemma 3
([16]) If \(\beta _n=\frac{1}{N_4'(x_n)}\), \(n=1,2,\ldots \) then the estimate
holds.
Theorem 4
If \(x_0\) is sufficiently close to a simple zero \(\alpha \) of the function \(f\), then the R-order of convergence of the method (3.2) with memory, with \(\beta _n\) computed by (4.1), is at least 12.
Proof
Let \((x_n)\) be a sequence of approximations produced by an iterative method (3.2). If \(f(\alpha )=0\) and this sequence converges to \(\alpha \) with the R-order \(Q_R((3.2), \alpha )\ge r\), we will write
where \(D_{n,r}\) refers to the asymptotic error of (3.2) as \(n\) tends to infinity. In other words, we get
Assume that the sequences \(( w_n) \), \(( y_n)\) and \((z_n)\) have the R-order \(q\), \(p\) and \(s\), respectively, that is
and
Also, we have
where \(B=-c_2\left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^2+c_3\right) ,\)
where \(A=c_2^2\left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^2+c_3\right) \left( \left( -1-2\gamma +\beta f'(\alpha )(-1+2\gamma )\right) c_2^3+c_2c_3-c_4\right) \).
Therefore by Lemma 3, we obtain
By comparing the exponents of \(e_{n-1}\) appearing in the pairs of relations (4.5), (4.12) and (4.6), (4.9) and (4.7), (4.10) and (4.8), (4.11), we obtain a nonlinear system of four equations in the four unknowns \(r\), \(s\), \(p\) and \(q\).
A non-trivial solution of the above system is \( r=12\), \(s=6\), \(p=3\), \(q=2\).
We have thus proved that the convergence order of the iterative method (3.2) with memory is at least 12. \(\square \)
5 Numerical example
In this section, we show the results of some numerical tests to compare the efficiency of the methods. To obtain very high accuracy and avoid the loss of significant digits, we employed multi-precision arithmetic with 1200 significant decimal digits in the programming package Mathematica 8.
In what follows, we present some concrete iterative methods from the scheme (3.2).
Method 1. Choose the weight function \(G\) as follows:
where \(t_n=\frac{f(y_n)}{f(x_n)}\). The function \(G\) in (5.1) satisfies the assumptions of Theorem 2, so we obtain the following method
where, in general, the divided difference of order \(n\) is obtained as:
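This recursion is the standard divided-difference recursion \(f[x_0,\ldots ,x_n]=\frac{f[x_1,\ldots ,x_n]-f[x_0,\ldots ,x_{n-1}]}{x_n-x_0}\); a minimal recursive sketch:

```python
# Recursive divided difference of order n:
#   f[x_0] = f(x_0),
#   f[x_0,...,x_n] = (f[x_1,...,x_n] - f[x_0,...,x_{n-1}]) / (x_n - x_0).

def divided_diff(f, xs):
    if len(xs) == 1:
        return f(xs[0])
    return (divided_diff(f, xs[1:]) - divided_diff(f, xs[:-1])) / (xs[-1] - xs[0])

# For f(t) = t^2 the second-order divided difference equals f''/2! = 1
# at any three distinct nodes.
val = divided_diff(lambda t: t * t, [0.0, 0.5, 2.0])
```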
Method 2. Choose the weight function \(G\) as follows:
where \(t_n=\frac{f(y_n)}{f(x_n)}\). The function \(G\) in (5.3) satisfies the assumptions of Theorem 2, so we obtain the following method
Method 3. Choose the weight function \(G\) as follows:
where \(t_n=\frac{f(y_n)}{f(x_n)}\). The function \(G\) in (5.5) satisfies the assumptions of Theorem 2, so we obtain the following method
Method 4. Choose the weight function \(G\) as follows:
where \(t_n=\frac{f(y_n)}{f(x_n)}\). The function \(G\) in (5.7) satisfies the assumptions of Theorem 2, so we obtain the following method
The new proposed methods with memory were compared with the existing three-point methods given below, which have the same order of convergence, using the same initial data \(x_0\) and \(\beta _0\).
Method 5. The derivative-free method by Kung and Traub [7] given by
Method 6. The method by Sharma et al. [16] given by
where \(\varphi (x_n)=\frac{f(w_n)-f(x_n)}{\beta _n f(x_n)}\), \(w_n=x_n+\beta _n f(x_n)\), \(\beta _n=\frac{-1}{N'(x_n)}\), \(H(u_n,v_n)=\dfrac{1+u_n}{1-v_n}\), \(u_n=\frac{f(y_n)}{f(x_n)}\) and \(v_n=\frac{f(y_n)}{f(w_n)}\).
Method 7. The method by Zheng et al. [19] given by
In order to test our proposed methods with memory (5.2), (5.4), (5.6) and (5.8) and compare them with the methods (5.9), (5.10) and (5.11), we choose the initial value \(x_0\) using the Mathematica command FindRoot [4, pp. 158–160] and compute the error and the computational order of convergence (coc) by the approximate formula [18]
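The coc can be estimated from any four successive iterates; a sketch using the standard estimate (the synthetic iterates below are illustrative, not data from the paper’s tables):

```python
# Approximate computational order of convergence (coc):
#   coc ~ ln|(x_{n+1}-x_n)/(x_n-x_{n-1})| / ln|(x_n-x_{n-1})/(x_{n-1}-x_{n-2})|.

import math

def coc(x_nm2, x_nm1, x_n, x_np1):
    num = math.log(abs((x_np1 - x_n) / (x_n - x_nm1)))
    den = math.log(abs((x_n - x_nm1) / (x_nm1 - x_nm2)))
    return num / den

# Synthetic second-order sequence e_{n+1} = e_n^2 toward alpha = 0.
iterates = [1e-1, 1e-2, 1e-4, 1e-8]
order_estimate = coc(*iterates)
```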
We list three test functions with their roots in Table 1. In addition, in Tables 2 and 3, the proposed methods with memory with weight functions (5.1), (5.3), (5.5) and (5.7) and the methods (5.9)–(5.11) are tested on these three nonlinear equations. The results are in accordance with the developed theory.
6 Conclusion
We have introduced a new method with memory for approximating a simple root of a given nonlinear equation. The increase of the convergence order is attained without any additional function evaluations, which gives the proposed methods with memory a very high computational efficiency. We have proved that the convergence order of the new method with memory is at least 12, so its efficiency index is \(12^{1/4}\approx 1.86121\), which is greater than the index \(8^{1/4}\approx 1.68179\) of three-point methods of order eight with the same number of function evaluations. Numerical examples show that our methods with memory work well and can compete with other methods under the same conditions.
References
Chun, C., Lee, M.Y.: A new optimal eighth-order family of iterative methods for the solution of nonlinear equations. Appl. Math. Comput. 223, 506–519 (2013)
Cordero, A., Torregrosa, J.R., Vassileva, M.P.: Three-step iterative methods with optimal eighth-order convergence. J. Comput. Appl. Math. 235, 3189–3194 (2011)
Dzunic, J., Petkovic, M.S., Petkovic, L.D.: Three-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 218, 4917–4927 (2012)
Hazrat, R.: Mathematica: a problem-centered approach. Springer, New York (2010)
Jarratt, P.: Some fourth order multipoint iterative methods for solving equations. Math. Comput. 20, 434–437 (1966)
King, R.F.: A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 10, 876–879 (1973)
Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 21, 643–651 (1974)
Lotfi, T., Sharifi, S., Salimi, M., Siegmund, S.: A new class of three-point methods with optimal convergence order eight and its dynamics. Numer. Algor. 68, 261–288 (2015)
Neta, B.: On a family of multipoint methods for nonlinear equations. Int. J. Comput. Math. 9, 353–361 (1981)
Ostrowski, A.M.: Solution of equations and systems of equations. Academic Press, New York (1966)
Petkovic, M.S., Dzunic, J., Neta, B.: Interpolatory multipoint methods with memory for solving nonlinear equations. Appl. Math. Comput. 218, 2533–2541 (2011)
Petkovic, M.S., Neta, B., Petkovic, L.D., Dzunic, J.: Multipoint methods for solving nonlinear equations. Elsevier, Waltham (2013)
Salimi, M., Lotfi, T., Sharifi, S., Siegmund, S.: Optimal Newton-Secant like methods without memory for solving nonlinear equations with its dynamics. Under revision (2015)
Sharifi, S., Ferrara, M., Salimi, M., Siegmund, S.: New modification of Maheshwari method with optimal eighth order of convergence for solving nonlinear equations. Under revision (2015)
Sharifi, S., Salimi, M., Siegmund, S., Lotfi, T.: A new class of optimal four-point methods with convergence order 16 for solving nonlinear equations. Under revision (2015)
Sharma, J.R., Guha, R.K., Gupta, P.: Some efficient derivative free methods with memory for solving nonlinear equations. Appl. Math. Comput. 219, 699–707 (2012)
Traub, J.F.: Iterative methods for the solution of equations. Prentice Hall, New York (1964)
Weerakoon, S., Fernando, T.G.I.: A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000)
Zheng, Q., Li, J., Huang, F.: An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. 217, 9592–9597 (2011)
Sharifi, S., Siegmund, S. & Salimi, M. Solving nonlinear equations by a derivative-free form of the King’s family with memory. Calcolo 53, 201–215 (2016). https://doi.org/10.1007/s10092-015-0144-1
Keywords
- Multi-point method
- Nonlinear equations
- Method with memory
- R-order of convergence
- Kung-Traub’s conjecture