Abstract
We present a local convergence analysis of an efficient eighth order iterative method with a parameter for approximating a locally unique solution of an equation defined on the real line. Earlier studies such as Bi et al. (J Comput Appl Math 225:105–112, 2009) have shown convergence of these methods under hypotheses reaching up to the seventh derivative of the function, although only the first derivative appears in the method. In this study we expand the applicability of these methods using only hypotheses up to the first derivative of the function. Numerical examples are also presented in this study.
Introduction
This paper is devoted to the problem of approximating a locally unique solution \(x^*\) of equation
where \(F:D\subseteq {\mathbb {R}}\rightarrow {\mathbb {R}}\) is a differentiable nonlinear function and D is a convex subset of \({\mathbb {R}}.\) Newton-like methods are widely used for finding solutions of (1). These methods are studied in terms of semi-local and local convergence [1–34].
Methods such as Euler’s, Halley’s, super-Halley’s and Chebyshev’s [1–34] require the evaluation of the second derivative \(F''\) at each step, which in general is very expensive. To avoid this expensive computation, many authors have used higher order multipoint methods [1–34].
Newton’s method is undoubtedly the most popular method for approximating a locally unique solution \(x^*\) provided that the initial point is close enough to the solution. In order to obtain a higher order of convergence, Newton-like methods have been studied such as the Potra–Pták, Chebyshev, Cauchy, Halley and Ostrowski methods. The number of function evaluations per step increases with the order of convergence. In the scalar case the efficiency index [29, 34] \(\textit{EI}=p^{\frac{1}{m}}\) provides a measure of this balance, where p is the order of the method and m is the number of function evaluations.
It is well known that, according to the Kung–Traub conjecture, the convergence order of any multipoint method without memory cannot exceed the upper bound \(2^{m-1}\) [29, 34] (called the optimal order). Hence the optimal order for a method with three function evaluations per step is 4. The corresponding efficiency index is \(\textit{EI}= 4^{\frac{1}{3}}=1.58740...,\) which is better than that of Newton’s method, \(\textit{EI}= 2^{\frac{1}{2}}=1.414....\) Therefore, the study of new optimal methods of order four is important.
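The efficiency indices quoted above are easy to verify numerically; a minimal sketch (function and variable names are our own):

```python
# Efficiency index EI = p**(1/m): order p achieved with m function
# evaluations per step (scalar case).
def efficiency_index(p, m):
    return p ** (1.0 / m)

# Newton's method: order 2, two evaluations (F and F') per step.
ei_newton = efficiency_index(2, 2)   # ~1.41421
# Optimal fourth-order method: three evaluations per step.
ei_fourth = efficiency_index(4, 3)   # ~1.58740
# Optimal eighth-order method such as (2): four evaluations per step.
ei_eighth = efficiency_index(8, 4)   # ~1.68179
```

The ordering \(\textit{EI}_{\text{Newton}} < \textit{EI}_{4} < \textit{EI}_{8}\) is what motivates the study of higher optimal-order methods.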
We study the local convergence analysis of a three-step King-like method with a parameter defined for each \(n=0,1,2,\ldots \) by
where \(x_0\in D \) is an initial point, \(\beta \in {\mathbb {R}}, \mu _n=\frac{F(y_n)}{F(x_n)}, H:{\mathbb {R}}\rightarrow {\mathbb {R}}\) and F[., .], F[., ., .] are divided differences of order one and order two, respectively, for function F [3, 4, 29, 30, 34]. Method (2) uses the evaluations of \(F(x_n), F(y_n), F(z_n)\) and \(F'(x_n)\) per step. According to the Kung–Traub conjecture [29, 34] the optimal convergence order is \(2^{4-1}=8.\) Bi et al. in [7] showed the eighth order of convergence of method (2) using Taylor expansions and hypotheses reaching up to the sixth derivative of function F. These hypotheses limit the applicability of method (2).
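Since the display formulas of method (2) are not reproduced above, the following Python sketch is an illustration only, not the authors' exact scheme: the first substep is Newton's; the second substep uses an assumed King-like weight built from \(\mu _n\) (chosen so that its denominator is the quantity \(F(x_n)-(\beta -2)F(y_n)\) shown to be nonzero in the proof of Theorem 2.1); and the third substep uses the divided-difference denominator \(F[z_n, x_n]+F[z_n, y_n, x_n](z_n-y_n)\) appearing there.

```python
# Illustrative sketch only (NOT the authors' exact method (2)).
# Assumptions: Newton first substep; King-like second substep whose
# denominator is F(x) - (beta - 2)F(y); third substep with denominator
# F[z, x] + F[z, y, x]*(z - y), taken from the proof of Theorem 2.1.

def dd1(F, a, b):
    """First-order divided difference F[a, b]."""
    return (F(a) - F(b)) / (a - b)

def dd2(F, a, b, c):
    """Second-order divided difference F[a, b, c]."""
    return (dd1(F, a, b) - dd1(F, b, c)) / (a - c)

def king_like_step(F, dF, x, beta):
    """One step x -> x_{n+1} of the sketched three-step scheme."""
    Fx = F(x)
    y = x - Fx / dF(x)                      # Newton substep
    Fy = F(y)
    if Fy == 0.0:
        return y                            # y is already a root
    mu = Fy / Fx                            # mu_n = F(y_n)/F(x_n)
    # assumed King-like weight: F(x)/(F(x)-(beta-2)F(y)) = 1/(1-(beta-2)mu)
    z = y - Fy / (dF(x) * (1.0 - (beta - 2.0) * mu))
    if z == y:
        return z                            # avoid a degenerate difference
    denom = dd1(F, z, x) + dd2(F, z, y, x) * (z - y)
    return z - F(z) / denom

def solve(F, dF, x0, beta=0.0, tol=1e-14, max_iter=25):
    """Iterate from x0 until |F(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        if abs(F(x)) < tol:
            break
        x = king_like_step(F, dF, x, beta)
    return x
```

For instance, for \(F(x)=x^3-8\) with \(x_0=2.5\) and \(\beta =0\) the iteration settles on the root 2 within a few steps.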
As a motivational example, let us define function F on \(X=[-\frac{1}{2},\frac{5}{2}]\) by
Choose \(x^*=1.\) We have that
Then, obviously, function F does not have a bounded third derivative in X. Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations defined on \({\mathbb {R}}\) [1–34]. These results show that if the initial point \(x_0\) is sufficiently close to the solution \(x^*,\) then the sequence \(\{x_n\}\) converges to \(x^*.\) But how close to the solution \(x^*\) should the initial guess \(x_0\) be? These local results give no information on the radius of the convergence ball for the corresponding method. We address this question for method (2) in “Local convergence analysis” section. The same technique can be applied to other methods. In the present study we extend the applicability of these methods by using hypotheses only up to the first derivative of function F and contractions. Moreover, we avoid Taylor expansions and use Lipschitz parameters instead. This way we do not need higher order derivatives to show the convergence of these methods.
The paper is organized as follows. In “Local convergence analysis” section we present the local convergence analysis. We also provide a radius of convergence, computable error bounds and uniqueness result not given in the earlier studies using Taylor expansions. Special cases and numerical examples are presented in the concluding “Numerical examples” section.
Local Convergence Analysis
We present the local convergence analysis of method (2) in this section. Let \(L_0 >0, L >0, M \ge 1\) and \(\beta \in {\mathbb {R}}\) be given parameters and let \(\varphi :[0, \frac{1}{L_0})\rightarrow [0, +\infty )\) be a continuous and non-decreasing function. It is convenient for the local convergence analysis of method (2) that follows to introduce some scalar functions and parameters. Define functions \(g_1, \, p,\, h_p\) on the interval \([0, \frac{1}{L_0})\) by
and parameter \(r_1\) by
We have that \(h_p(0)=-1 <0\) and \(h_p(t)\rightarrow +\infty \) as \(t\rightarrow \frac{1}{L_0}^-.\) It follows from the intermediate value theorem that function \(h_p\) has zeros in the interval \((0, \frac{1}{L_0}).\) Denote by \(r_p\) the smallest such zero. Moreover, define functions \(g_2, h_2, q\) and \(h_q\) on the interval \([0, r_p)\) by
and
We have that \(h_2(0)=h_q(0)=-1 < 0\) and \(h_2(t)\rightarrow +\infty , h_q(t)\rightarrow +\infty \) as \(t\rightarrow r_p^-.\) Denote by \(r_2, r_q\) the smallest zeros of functions \(h_2\) and \(h_q\) in the interval \((0, r_p).\) Furthermore, define functions \(g_3\) and \(h_3\) on the interval \([0, r_q)\) by
and
We have that \(h_3(0)=-1 < 0\) and \(h_3(t)\rightarrow +\infty \) as \(t\rightarrow r_q^-.\) Denote by \(r_3\) the smallest zero of function \(h_3\) in the interval \((0, r_q).\) Set
Then, we have that
and for each \( t\in [0, r)\)
and
Let \(U(v,\rho ), \bar{U}(v,\rho )\) stand, respectively, for the open and closed balls in \({\mathbb {R}}\) with center \(v\in {\mathbb {R}}\) and radius \(\rho >0.\) Next, we present the local convergence analysis of method (2) using the preceding notation.
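The displayed definitions of the \(g\) and \(h\) functions are not reproduced above. As an illustration only, the following sketch assumes the standard Argyros-type majorant \(g_1(t)=Lt/(2(1-L_0t))\) on \([0, \frac{1}{L_0})\) and recovers the radius \(r_1\) as the smallest zero of \(h_1(t)=g_1(t)-1\) by bisection; under this assumption the closed form is \(r_1=2/(2L_0+L).\)

```python
# Illustration only: assumes the standard Argyros-type majorant
#   g1(t) = L*t / (2*(1 - L0*t))  on  [0, 1/L0),
# so h1(t) = g1(t) - 1 has h1(0) = -1 < 0 and h1(t) -> +inf as
# t -> (1/L0)^-; its smallest zero is r1 = 2/(2*L0 + L).

def smallest_zero(h, upper, iters=200):
    """Bisection on (0, upper) for h with h(0) < 0 and h > 0 near upper."""
    lo, hi = 0.0, upper * (1.0 - 1e-12)   # stay strictly inside the domain
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def r1_radius(L0, L):
    """Smallest zero of h1(t) = g1(t) - 1 under the assumed g1."""
    g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
    return smallest_zero(lambda t: g1(t) - 1.0, 1.0 / L0)
```

Under this assumption `r1_radius(1.0, 1.0)` returns \(2/3\), matching the closed form \(2/(2L_0+L)\) for \(L_0=L=1.\)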
Theorem 2.1
Let \(F: D\subset {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a differentiable function; F[., .], F[., ., .] be divided differences of order one and two for function F and let \(H:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a continuous function. Suppose there exist \(x^*\in D,\, L_0 > 0,\, L >0,\,K > 0,\, M \ge 1\) and a continuous and nondecreasing function \(\varphi :[0, \frac{1}{L_0})\rightarrow [0, +\infty )\) such that \( F(x^*)=0,\,\, F'(x^*)\ne 0,\)
and
where the radius r is defined by (3). Then, the sequence \(\{x_n\}\) generated by method (2) for \(x_0\in U(x^*, r)-\{x^*\}\) is well defined, remains in \(U(x^*, r)\) for each \(n=0,1,2,\ldots \) and converges to \(x^*.\) Moreover, the following estimates hold
and
where the \(g\) functions are defined above Theorem 2.1. Furthermore, for \(T\in [r, \frac{2}{L_0})\) the limit point \(x^*\) is the only solution of equation \(F(x)=0\) in \(\bar{U}(x^*,T)\cap D.\)
Proof
We shall show estimates (16)–(18) using mathematical induction. By hypothesis \(x_0\in {U}(x^*, r)-\{x^*\},\) (3) and (11), we have that
It follows from (19) and Banach Lemma on invertible functions [3, 4, 29, 30] that \(F'(x_0)\ne 0\) and
Hence, \(y_0\) is well defined by the first substep of method (2) for \(n=0.\) Using (3), (5), (12) and (20) we get that
which shows (16) for \(n=0\) and \(y_0\in U(x^*, r).\) We can write that
Notice that \(|x^*+\theta (x_0-x^*)-x^*|=\theta |x_0-x^*| < r.\) Hence, we get that \(x^*+\theta (x_0-x^*)\in U(x^*, r).\) Then, by (13) and (22), we obtain that
We also have by (21) and (23) (for \(y_0=x_0\)) that
since \(y_0\in U(x^*, r).\) Next, we shall show \( F(x_0)-(\beta -2)F(y_0)\ne 0.\) Using (3), (6), (11), (24) and the hypothesis \(x_0\ne x^*,\) we have in turn that
It follows from (24) that
Hence, \(z_0\) is well defined by the second substep of method (2) for \(n=0.\) Then, we can write
Using (3), (7), (20), (21), (24), (26) and (27) we get in turn that
which shows (17) for \(n=0\) and \(z_0\in U(x^*, r).\) Then, we also have by (23) (for \(z_0=x_0\)) that
since \(z_0\in U(x^*, r).\) We can write by the properties of the divided differences that
Next, we shall show that \(F[z_0, x_0]+F[z_0, y_0, x_0](z_0-y_0)\ne 0.\) Using (3), (8),(14), (21), (28) and (30), we obtain in turn that
Then, we get by (31) that
Hence, \(x_1\) is well defined by the last substep of method (2) for \(n=0.\) Then, using (3), (9), (10), (28), (29) and (32) we obtain in turn that
which shows (18) for \(n=0\) and \(x_1\in U(x^*, r).\) By simply replacing \(x_0, y_0, z_0,\, x_1\) by \(x_k, y_k, z_k, x_{k+1}\) in the preceding estimates we arrive at estimates (16)–(18). Then, from the estimate \(|x_{k+1}-x^*| < |x_k-x^*| < r,\) we deduce that \(x_{k+1}\in U(x^*, r)\) and \(\lim _{k\rightarrow \infty }x_k=x^*.\) To show the uniqueness part, let \(Q=\int _0^1F'(y^*+\theta (x^*-y^*))\,d\theta \) for some \(y^*\in \bar{U}(x^*, T)\) with \(F(y^*)=0.\) Using (11), we get that
It follows from (34) and the Banach Lemma on invertible functions that Q is invertible. Finally, from the identity \(0=F(x^*)-F(y^*)=Q(x^*-y^*),\) we deduce that \(x^*=y^*.\) \(\square \)
Remark 2.2
1.
In view of (11) and the estimate
$$\begin{aligned} \left| F'(x^*)^{-1}F'(x)\right|= & {} \left| F'(x^*)^{-1}\left( F'(x)-F'(x^*)\right) +I\right| \\\le & {} 1+\left| F'(x^*)^{-1}\left( F'(x)-F'(x^*)\right) \right| \le 1+L_0|x-x^*| \end{aligned}$$condition (13) can be dropped and M can be replaced by
$$\begin{aligned} M(t)=1+L_0 t \end{aligned}$$or
$$\begin{aligned} M(t)=M=2, \end{aligned}$$since \(t\in [0, \frac{1}{L_0}).\)
2.
The results obtained here can be used for operators F satisfying autonomous differential equations [3] of the form
$$\begin{aligned} F'(x)=P(F(x)) \end{aligned}$$where P is a continuous operator. Then, since \(F'(x^*)=P(F(x^*))=P(0),\) we can apply the results without actually knowing \(x^*.\) For example, let \(F(x)=e^x-1.\) Then, we can choose: \(P(x)=x+1.\)
3.
The radius \(r_1\) was shown by us to be the convergence radius of Newton’s method [3, 4]
$$\begin{aligned} x_{n+1}=x_n-F'(x_n)^{-1}F(x_n)\text { for each } n=0,1,2,\ldots \end{aligned}$$(35)under the conditions (11) and (12). It follows from the definition of r that the convergence radius r of the method (2) cannot be larger than the convergence radius \(r_1\) of the second order Newton’s method (35). As already noted in [3, 4] \(r_1\) is at least as large as the convergence ball given by Rheinboldt [32]
$$\begin{aligned} r_R=\frac{2}{3L}.\end{aligned}$$(36)In particular, for \(L_0 < L\) we have that
$$\begin{aligned} r_R < r_1 \end{aligned}$$and
$$\begin{aligned} \frac{r_R}{r_1}\rightarrow \frac{1}{3}\,\,\, \text {as}\,\,\, \frac{L_0}{L}\rightarrow 0. \end{aligned}$$That is, our convergence ball \(r_1\) is at most three times larger than Rheinboldt’s. The same value for \(r_R\) was given by Traub [34].
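The comparison can be sketched numerically. Only \(r_R=2/(3L)\) is given explicitly by (36); the closed form \(r_1=2/(2L_0+L)\) used below is an assumption tied to the standard choice of \(g_1\) noted earlier.

```python
# Rheinboldt's radius r_R = 2/(3L) from (36) versus r1 = 2/(2*L0 + L)
# (the latter is the closed form under the assumed standard g1).

def r_R(L):
    return 2.0 / (3.0 * L)

def r1(L0, L):
    return 2.0 / (2.0 * L0 + L)

# For L0 = L the two radii coincide; for L0 < L we have r_R < r1,
# and r_R / r1 -> 1/3 as L0 / L -> 0.
```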
4.
It is worth noticing that method (2) does not change when we use the conditions of Theorem 2.1 instead of the stronger conditions used in [7]. Moreover, we can compute the computational order of convergence (COC) defined by
$$\begin{aligned} \xi = \ln \left( \frac{|x_{n+1}-x^*|}{|x_n-x^*|}\right) \Bigg /\ln \left( \frac{|x_{n}-x^*|}{|x_{n-1}-x^*|}\right) \end{aligned}$$or the approximate computational order of convergence
$$\begin{aligned} \xi _1= \ln \left( \frac{|x_{n+1}-x_n|}{|x_n-x_{n-1}|}\right) \Bigg /\ln \left( \frac{|x_{n}-x_{n-1}|}{|x_{n-1}-x_{n-2}|}\right) . \end{aligned}$$This way we obtain in practice the order of convergence while avoiding bounds that involve derivatives of operator F higher than the first.
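Both quantities are straightforward to compute from the iterates; a short sketch (function names are our own):

```python
import math

def coc(xs, x_star):
    """xi: computational order of convergence from the last three errors
    |x_{n-1}-x*|, |x_n-x*|, |x_{n+1}-x*|."""
    e = [abs(x - x_star) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    """xi_1: approximate COC from the last three successive differences
    (needs the last four iterates, no knowledge of x*)."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 4, len(xs) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])
```

For example, applying either estimator to Newton iterates for \(x^2-2=0\) yields values close to 2, the known order of Newton's method.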
Numerical Examples
We present numerical examples in this section. It follows from the proof of Theorem 2.1 that since \(\mu _n=\frac{F(y_n)}{F(x_n)},\) we can choose
Example 3.1
Let \(D=(-\infty , +\infty ).\) Define function f on D by
Then we have for \(x^*=0\) that \(L_0=L=M=1, K=\frac{L_0}{2}.\) The parameters are
Example 3.2
Let \(D=[-1,1].\) Define function f on D by
Using (38) and \(x^*=0,\) we get that \(L_0=e-1 < L=e, M=2, K=\frac{L_0}{2}.\) The parameters are
Example 3.3
Returning to the motivational example from the introduction of this study, we have \(L_0=L=146.6629073,\,M=2, K=\frac{L_0}{2}.\) The parameters are
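Since the displayed parameter values for the examples are not reproduced above, only Rheinboldt's radius \(r_R=2/(3L)\) from (36) is recomputed here for the three sets of Lipschitz constants:

```python
import math

# Rheinboldt's radius (36), r_R = 2/(3L), for the Lipschitz constants
# of the three examples above.
def r_R(L):
    return 2.0 / (3.0 * L)

radii = {
    "Example 3.1": r_R(1.0),          # L = 1
    "Example 3.2": r_R(math.e),       # L = e
    "Example 3.3": r_R(146.6629073),  # motivational example
}
```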
References
Amat, S., Hernández, M.A., Romero, N.: A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 206(1), 164–174 (2008)
Amat, S., Busquier, S., Plaza, S.: Dynamics of the King’s and Jarratt iterations. Aequationes Math. 69, 212–213 (2005)
Argyros, I.K.: Convergence and Application of Newton-type Iterations. Springer, New York (2008)
Argyros, I.K., Hilout, S.: Computational Methods in Nonlinear Analysis. World Scientific Publ. Co., Hackensack (2013)
Argyros, I.K., Chen, D., Quian, Q.: The Jarratt method in Banach space setting. J. Comput. Appl. Math. 51, 103–106 (1994)
Babajee, D.K.R., Dauhoo, M.Z.: An analysis of the properties of Newton’s method with third order convergence. Appl. Math. Comput. 183, 659–684 (2006)
Bi, W., Ren, H., Wu, Q.: Three-step iterative methods with eight-order convergence for solving nonlinear equations. J. Comput. Appl. Math. 225, 105–112 (2009)
Candela, V., Marquina, A.: Recurrence relations for rational cubic methods I: the Halley method. Computing 44, 169–184 (1990)
Chen, J.: Some new iterative methods with three-order convergence. Appl. Math. Comput. 181, 1519–1522 (2006)
Chun, C., Neta, B., Scott, M.: Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations. Appl. Math. Comput. 227, 567–592 (2014)
Cordero, A., Torregrosa, J.: Variants of Newton’s method using fifth order quadrature formulas. Appl. Math. Comput. 190, 686–698 (2007)
Cordero, A., Maimo, J., Torregrosa, J., Vassileva, M.P., Vindel, P.: Chaos in King’s iterative family. Appl. Math. Lett. 26, 842–848 (2013)
Cordero, A., Magrenan, A., Quemada, C., Torregrosa, J.R.: Stability study of eight-order iterative methods for solving nonlinear equations. J. Comput. Appl. Math. doi:10.1016/j.cam.2015.01.006 (in press)
Cordero, A., Hueso, J.L., Martinez, E., Torregrossa, J.R.: Steffensen type methods for solving non-linear equations. J. Comput. Appl. Math. 236, 3058–3064 (2012)
Ezquerro, J.A., Hernández, M.A.: A uniparametric Halley-type iteration with free second derivative. Int. J. Pure Appl. Math. 6(1), 99–110 (2003)
Ezquerro, J.A., Hernández, M.A.: New iterations of R-order four with reduced computational cost. BIT Numer. Math. 49, 325–342 (2009)
Frontini, M., Sormani, E.: Some variants of Newton’s method with third order convergence. Appl. Math. Comput. 140, 419–426 (2003)
Gutiérrez, J.M., Magrenan, A.A., Romero, N.: On the semilocal convergence of Newton–Kantorovich method under center Lipschitz conditions. Appl. Math. Comput. 221, 79–88 (2013)
Herceg, D., Herceg, Dr.: A family of methods for solving nonlinear equations. Int. J. Comput. Math. 87(11), 2533–2541 (2010). doi:10.1080/00207160802684434
Hasanov, V.I., Ivanov, I.G., Nedzhibov, G.: A new modification of Newton’s method. Acta Math. Eng. 27, 278–286 (2002)
Hernández, M.A., Salanova, M.A.: Sufficient conditions for semilocal convergence of a fourth order multipoint iterative method for solving equations in Banach spaces. Southwest J. Pure Appl. Math. 1, 29–40 (1999)
Jaiswal, J.P.: A new third-order derivative free method for solving nonlinear equations. Univers. J. Appl. Math. 1(2), 131–135 (2013)
King, R.F.: A family of fourth-order methods for nonlinear equations. SIAM. J. Numer. Anal. 10, 876–879 (1973)
Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iterations. J. Assoc. Comput. Mach. 21, 643–651 (1974)
Maheshwari, A.K.: A fourth order iterative method for solving nonlinear equations. Appl. Math. Comput. 211, 283–391 (2009)
Noor, M.A.: Some applications of quadrature formulas for solving nonlinear equations. Nonlinear Anal. Forum 12(1), 91–96 (2007)
Nedzhibov, G.: On a few iterative methods for solving nonlinear equations. In: Proceedings of the XXVIII Summer School “Application of Mathematics in Engineering and Economics”, Sozopol. Heron Press, Sofia, Bulgaria (2002)
Parhi, S.K., Gupta, D.K.: Semi-local convergence of a Stirling-like method in Banach spaces. Int. J. Comput. Methods 7(02), 215–228 (2010)
Petkovic, M.S., Neta, B., Petkovic, L., Džunič, J.: Multipoint Methods for Solving Nonlinear Equations. Elsevier, Amsterdam (2013)
Potra, F.A., Ptak, V.: Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, vol. 103. Pitman Publ, Boston, MA (1984)
Ren, H., Wu, Q., Bi, W.: New variants of Jarratt method with sixth-order convergence. Numer. Algorithms 52(4), 585–603 (2009)
Rheinboldt, W.C.: An adaptive continuation process for solving systems of nonlinear equations. In: Tikhonov, A.N., et al. (eds.) Mathematical Models and Numerical Methods, Banach Center Publications, vol. 3, pp. 129–142. Banach Center, Warsaw, Poland
Sharma, J.R., Guha, P.K., Sharma, R.: Some variants of Hansen–Patrick method with third and fourth order of convergence. Appl. Math. Comput. 214, 171–177 (2009)
Traub, J.F.: Iterative Methods for the Solution of Equations. Prentice Hall, Englewood Cliffs (1964)
Argyros, I.K., George, S. Local Convergence for an Efficient Eighth Order Iterative Method with a Parameter for Solving Equations Under Weak Conditions. Int. J. Appl. Comput. Math 2, 565–574 (2016). https://doi.org/10.1007/s40819-015-0078-y