Abstract
In this paper, we develop new derivative-free four-parametric families of with and without memory iterative methods for determining the roots of nonlinear equations. The family of without memory methods has convergence order eight and is optimal in the sense of Kung–Traub's conjecture. It is then extended to a family of with memory methods that uses the four parameters as accelerating parameters without requiring extra function evaluations; as a result, the convergence order increases from 8 to 15.5156. Convergence analysis and numerical experiments on several nonlinear functions validate the theoretical results and demonstrate the effectiveness and applicability of the proposed families of methods.
1 Introduction
In the realm of numerical computations, a vast number of problems are expressed using nonlinear equations of the following form:
where \({\Omega }:D \subseteq {\mathbb{R}} \to {\mathbb{R}}\) is a real function defined on an open interval \(D\). Finding numerical solutions of problems expressed by (1) has always been challenging, yet it is of great importance due to numerous applications in various branches of science and engineering. Iterative methods are extensively used to obtain highly accurate approximate solutions of (1). One such method is Newton's method [1], which is widely used for finding the simple roots of (1):
It is a classical optimal one-point without memory method with quadratic order of convergence. However, owing to the required derivative evaluation and its low convergence order, Newton's method (2) is unsuitable for many practical uses. As a result, various multi-point without memory methods with higher convergence order and higher efficiency have been developed and studied in the literature [2,3,4]. An iterative method is called optimal if it satisfies the unproved Kung–Traub conjecture [5], according to which a without memory iterative method requiring \(k\) function evaluations per iteration is optimal if its convergence order is \(2^{k - 1}\).
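Newton's iteration (2) can be sketched in a few lines. The paper's experiments use Mathematica, so the following Python fragment is only an illustration; the test function, starting point, and tolerance are chosen here and are not from the paper.

```python
# Minimal sketch of Newton's method (2): s_{n+1} = s_n - f(s_n) / f'(s_n).
# Two function evaluations per step (f and f'), quadratic convergence near a simple root.
def newton(f, df, s0, tol=1e-12, max_iter=50):
    s = s0
    for _ in range(max_iter):
        step = f(s) / df(s)
        s -= step
        if abs(step) < tol:
            break
    return s

# Illustrative use: root of s^2 - 2 near s0 = 1.5, converging to sqrt(2).
root = newton(lambda s: s * s - 2.0, lambda s: 2.0 * s, 1.5)
```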
In the past decade, multi-point with memory methods for finding simple roots of nonlinear equations using accelerating parameters have gained much attention among researchers [6,7,8,9,10]. To theoretically measure the efficiency of an iterative method, Ostrowski [11] introduced the efficiency index \({\text{EI}} = p^{\frac{1}{k}}\), where \(p\) is the order of convergence and \(k\) is the number of function evaluations at each iteration. In fact, it was Traub who first introduced a with memory method, known as the Traub–Steffensen method [1], using a single accelerating parameter. The method is given below:
where \(\Omega \left[ {s_{n} ,w_{n} } \right] = \frac{{\Omega \left( {s_{n} } \right) - \Omega \left( {w_{n} } \right)}}{{s_{n} - w_{n} }}\) and \(\alpha_{n}\) is the accelerating parameter calculated as follows:
The method (3) has convergence order \(1 + \sqrt{2} \approx 2.414\), which is higher than the quadratic order of Newton's method, and this is achieved without any additional function evaluation. Moreover, unlike Newton's method, the Traub–Steffensen method requires no derivative evaluations and is therefore derivative-free. This has motivated us to develop new multi-point with and without memory iterative methods containing more accelerating parameters, with increased order of convergence and a high efficiency index of almost 2.
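The Traub–Steffensen scheme (3), together with the accelerating-parameter update (4), can be sketched as follows. This Python fragment is an illustration under the formulas stated above; the test function and starting values are chosen here, not taken from the paper.

```python
# Hedged sketch of the Traub-Steffensen with memory method (3):
#   w_n     = s_n + alpha_n * f(s_n)
#   s_{n+1} = s_n - f(s_n) / f[s_n, w_n],  f[a, b] = (f(a) - f(b)) / (a - b)
# with the accelerating parameter recomputed from saved data,
#   alpha_{n+1} = -1 / f[s_{n+1}, s_n],
# raising the order to 1 + sqrt(2) at no extra function evaluation.
def traub_steffensen(f, s0, alpha0=-1.0, tol=1e-12, max_iter=50):
    s, fs, alpha = s0, f(s0), alpha0
    for _ in range(max_iter):
        w = s + alpha * fs
        dd = (fs - f(w)) / (s - w)               # first divided difference f[s, w]
        s_new = s - fs / dd
        fs_new = f(s_new)
        if abs(fs_new) < tol or s_new == s:
            return s_new
        alpha = -(s_new - s) / (fs_new - fs)     # -1 / f[s_new, s], reuses saved values
        s, fs = s_new, fs_new
    return s

# Illustrative use: root of s^2 - 2 near s0 = 1.5.
root = traub_steffensen(lambda s: s * s - 2.0, 1.5)
```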
In this paper, we introduce new derivative-free four-parametric families of four-point with and without memory iterative methods for computing simple roots of nonlinear equations. The family of with memory methods is formulated by extending the new family of without memory methods, using the accelerating parameters without any extra function evaluations; as a result, the convergence order increases from 8 to 15.5156. The accelerating parameters are approximated by Newton's interpolatory polynomials through the best saved points so as to obtain a highly efficient family of with memory methods.
The remainder of the paper is structured as follows. In Sect. 2, the development of the new derivative-free family of without memory methods is discussed and its theoretical convergence properties are fully investigated. Section 3 deals with the development and convergence analysis of the new derivative-free family of with memory methods. Section 4 covers the numerical results and compares the proposed families of with and without memory methods with other existing methods on some test functions; some real-world problems are included in this section to confirm the applicability of the proposed methods. Finally, Sect. 5 presents some concluding remarks.
2 Optimal Four-Parametric Family of Four-Point Without Memory Methods
Let us first consider the following non-optimal three-point eighth-order scheme of Newton steps, which contains the first-order derivative.
To minimize the number of function evaluations from the above Eq. (4), we first approximate \(\Omega^{\prime}\left( {y_{n} } \right)\) using the following expression:
Then, we approximate \(\Omega^{\prime}\left( {s_{n} } \right)\) from the first two steps of the above Eq. (4) as follows:
where \(\lambda\) is any real parameter and \(\alpha ,\beta \in {\mathbb{R}} - \left\{ 0 \right\}\).
We want to make Eq. (6) optimal as well as derivative-free. So, we approximate \(\Omega^{\prime}\left( {z_{n} } \right)\) in the last step of (6) by the following polynomial:
where \(l_{0} , l_{1} , l_{2}\) and \(l_{3}\) are some unknowns to be determined by means of the following conditions:
Now, solving Eq. (7) under the above conditions and simplifying, we obtain the values of \(l_{0} , l_{1} , l_{2}\) and \(l_{3}\) as follows:
where \(\Omega \left[ {x,y,z} \right] = \frac{{\Omega \left[ {x,y} \right] - \Omega \left[ {y,z} \right]}}{x - z}\) and \(\Omega \left[ {x,y,z,v} \right] = \frac{{\Omega \left[ {x,y,z} \right] - \Omega \left[ {y,z,v} \right]}}{x - v}\) are second and third divided differences, respectively.
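These divided differences can also be computed recursively from the general rule \(\Omega[x_0,\ldots,x_k] = (\Omega[x_0,\ldots,x_{k-1}] - \Omega[x_1,\ldots,x_k])/(x_0 - x_k)\). The following helper is a small illustration of that rule, not code from the paper.

```python
# Illustrative recursive divided differences for a function f over the given nodes:
#   f[x] = f(x),
#   f[x_0, ..., x_k] = (f[x_0, ..., x_{k-1}] - f[x_1, ..., x_k]) / (x_0 - x_k),
# matching the second and third divided differences used in the text.
def divided_difference(f, nodes):
    if len(nodes) == 1:
        return f(nodes[0])
    return (divided_difference(f, nodes[:-1])
            - divided_difference(f, nodes[1:])) / (nodes[0] - nodes[-1])

# For f(x) = x^2, every second divided difference equals 1 and every third equals 0.
dd2 = divided_difference(lambda x: x * x, [0.0, 1.0, 3.0])
dd3 = divided_difference(lambda x: x * x, [0.0, 1.0, 3.0, 4.0])
```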
Using (9), (10) and (11), the approximation of \(\Omega^{\prime}\left( {z_{n} } \right)\) from Eq. (7) is obtained as follows:
Now, substituting (12) in (6) and adding two new parameters \(\gamma , \delta \in {\mathbb{R}} - \left\{ 0 \right\}\) in the last two steps, we obtain a new optimal derivative-free family of four-point without memory methods, which is presented below. We shall denote it by FM8.
It is evident that the new family of without memory methods (13) requires only four function evaluations per full iteration and is completely derivative-free. It also attains the optimal eighth order of convergence, with efficiency index \(8^{\frac{1}{4}} \approx 1.682\). We now present the following theorem, which establishes the convergence properties of (13).
Theorem 1
Let \(\xi \in D\) be a simple root of a sufficiently differentiable real function \(\Omega :D \subseteq {\mathbb{R}} \to {\mathbb{R}}\), where \(D\) is an open interval. If an initial guess \(s_{0}\) is close enough to \(\xi\), then the family of proposed methods defined by (13) has eighth-order convergence for any \(\lambda \in {\mathbb{R}}\) and \(\alpha , \beta , \gamma , \delta \in {\mathbb{R}} - \left\{ 0 \right\}\). And, it has the error equation given by:
where \(d_{j} = \frac{1}{j!}\frac{{\Omega^{(j)} \left( \xi \right)}}{{\Omega^{\prime}\left( \xi \right)}},\; j = 2,3,\ldots\), and \(\varepsilon_{n} = s_{n} - \xi\) is the error at the \(n\)th iteration.
Proof
We construct and apply the following self-explanatory Mathematica code to prove the optimal eighth order of convergence of (13).
which shows that the method (13) has optimal order eight.
3 Four-Parametric Family of Four-Point with Memory Methods
From the error Eq. (14), the convergence order can be increased from 8 to 16 for the method (13) if \(\alpha = - \frac{1}{{\Omega^{\prime}\left( \xi \right)}}\), \(\beta = - d_{2}\), \(\gamma = \Omega^{\prime}\left( \xi \right)d_{3}\) and \(\delta = \Omega^{\prime}\left( \xi \right)d_{4}\). However, the exact value of \(\xi\) is not available to us. As such, we shall use \(\alpha = \alpha_{n}\), \(\beta = \beta_{n}\), \(\gamma = \gamma_{n}\) and \(\delta = \delta_{n}\), where \(\alpha_{n}\), \(\beta_{n}\), \(\gamma_{n}\) and \(\delta_{n}\) are accelerating parameters which will be computed using the available information from the current and the previous iterations.
Now, to approximate the accelerating parameters \(\alpha_{n} , \beta_{n} , \gamma_{n}\) and \(\delta_{n}\), we use interpolation as follows.
where \({\mathbb{N}}_{j} \left( t \right),\; j = 4, 5, 6, 7\), are Newton's interpolatory polynomials of degree \(j\) set through the saved points \(s_{n} , y_{n} , z_{n} , w_{n} , s_{n - 1} , y_{n - 1} , w_{n - 1} , z_{n - 1}\).
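A hedged sketch of how such an interpolatory derivative can be computed: a divided-difference table gives the Newton-form coefficients, and a Horner-like pass then evaluates the polynomial and its first derivative, as needed, e.g., for \(\alpha_n = -1/{\mathbb{N}}^{\prime}_4(s_n)\); the higher derivatives required for \(\beta_n, \gamma_n, \delta_n\) follow analogously. The function names are illustrative, not from the paper.

```python
# Newton-form coefficients c_k = f[x_0, ..., x_k] via an in-place divided-difference table.
def newton_coeffs(xs, ys):
    c, n = list(ys), len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

# First derivative of the Newton interpolatory polynomial
#   p(t) = c_0 + c_1 (t - x_0) + c_2 (t - x_0)(t - x_1) + ...
# at a point t, evaluated by a nested (Horner-like) recurrence for (p, p').
def newton_poly_d1(xs, ys, t):
    c, n = newton_coeffs(xs, ys), len(xs)
    p, dp = c[-1], 0.0
    for i in range(n - 2, -1, -1):
        dp = p + (t - xs[i]) * dp
        p = c[i] + (t - xs[i]) * p
    return dp

# Check on f(x) = x^3 with four nodes: the interpolant is exact, so p'(1.5) = 3 * 1.5^2.
d = newton_poly_d1([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 8.0, 27.0], 1.5)
```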
Now, by using (15) in the method (13), we obtain the following new derivative-free family of with memory methods. We shall denote it by FWM8.
For a given \(s_{0}\), \(\alpha_{0} , \beta_{0}\), \(\gamma_{0} , \delta_{0}\), we have \(w_{0} = s_{0} + \alpha_{0} {\Omega }\left( {s_{0} } \right) \). Then,
Lemma 1
If \(\alpha_{n} = - \frac{1}{{{\mathbb{N}}^{\prime}_{4} \left( {s_{n} } \right)}}\) , \(\beta_{n} = - \frac{{{\mathbb{N}}^{\prime\prime}_{5} \left( {w_{n} } \right)}}{{2{\mathbb{N}}^{\prime}_{5} \left( {w_{n} } \right)}}\) , \(\gamma_{n} = \frac{{{\mathbb{N}}^{\prime\prime\prime}_{6} \left( {y_{n} } \right)}}{6}\) and \(\delta_{n} = \frac{{{\mathbb{N}}_{7}^{iv} \left( {z_{n} } \right)}}{24}, n = 0, 1, 2, \ldots\) , then the following estimates
hold, where
Proof
As for the proof, see Lemma 1 of [12].
Now, we present the following theorem for analyzing the R-order of convergence [13] of the derivative-free four-parametric family of four-point with memory methods (16).
Theorem 2
If an initial guess \(s_{0}\) is sufficiently close to a root \(\xi\) of \(\Omega \left( s \right) = 0\) and the parameters \(\alpha_{n}\), \(\beta_{n}\), \(\gamma_{n}\) and \(\delta_{n}\) are calculated by the expressions in (15), then the R-order of convergence of the methods (16) is at least \(15.5156\).
Proof
Let the sequence of approximations \(\left\{ {s_{n} } \right\}\) produced by (16) converge to the root \(\xi\) with order \(r\). Then, we can write
Then,
Thus,
Let the iterative sequences \(\left\{ {w_{n} } \right\}\), \(\left\{ {y_{n} } \right\}\) and \(\left\{ {z_{n} } \right\}\) have orders \(r_{1} , r_{2}\) and \(r_{3}\), respectively. Then, using (21) and (22) gives
From Theorem 1, we have
Using the above Lemma 1 and (27)–(30), we get
Now, comparing the corresponding powers of \(\varepsilon_{n - 1}\) on the right sides of (24) and (31), (25) and (32), (26) and (33), (23) and (34), we get
This system has the non-trivial solution \(r_{1} = 1.9394\), \(r_{2} = 3.8789\), \(r_{3} = 7.7578\) and \(r = 15.5156\). Hence, the R-order of convergence of the proposed family of methods (16) is at least \(15.5156\). The proof is complete.
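As a consistency check (an observation of ours, not a claim made in the paper), the quoted orders fit the pattern \(r_1 = r/8\), \(r_2 = r/4\), \(r_3 = r/2\), with \(r\) the positive root of \(r^2 = 15r + 8\), i.e. \(r = (15 + \sqrt{257})/2 \approx 15.5156\). This can be verified numerically:

```python
import math

# Positive root of r^2 - 15 r - 8 = 0; its value and the halved/quartered/eighthed
# orders match 15.5156, 7.7578, 3.8789, 1.9394 as stated in the text.
r = (15 + math.sqrt(257)) / 2
r3, r2, r1 = r / 2, r / 4, r / 8
```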
4 Numerical Results
In this section, we examine the performance and computational efficiency of the newly developed with and without memory methods from Sects. 2 and 3 and compare them with some methods of a similar nature available in the literature. In particular, we consider for comparison the following four-parametric methods: LAM8 (method (3.31) of [8]), ZM8 (ZR1 of [9]) and ACM8 (M1 of [10]). All numerical tests are executed in Mathematica 12.2. Throughout the computation, we choose the same parameter values \(\alpha_{0} = \beta_{0} = \gamma_{0} = \delta_{0} = - 1\) and \(\lambda = 2\) for all test functions to start the first iteration; the same values are used for the corresponding parameters of all compared methods to ensure a fair comparison. The test functions, comprising standard academic examples and a real-life chemical engineering problem, are presented below together with their simple roots \(\left( \xi \right)\) and initial guesses \(\left( {s_{0} } \right)\).
Example 1
A standard academic test function given by:
It has a simple root \(\xi = 2\). We start with the initial guess \(s_{0} = 2.3\).
Example 2
A standard academic test function given by:
It has a simple root \(\xi \approx 4.1525907367571583\). We start with the initial guess \(s_{0} = 4.5\).
Example 3
A standard academic test function given by
It has a simple root \(\xi = 0\). We start with the initial guess \(s_{0} = 0.6\).
Example 4
The azeotropic point of a binary solution problem, given by the following nonlinear equation (for details, see [14]):
where \(F = 0.38969\) and \(G = 0.55954\).
It has a simple root \(\xi \approx 0.69147373574714142\). We start with the initial guess \(s_{0} = 1.1\).
In Tables 1 and 2, we display the absolute residual errors \(\left| { {\Omega }\left( {s_{n} } \right)} \right|\) at the first three iterations for the compared methods. We also include the computational order of convergence (COC) of each compared method, computed by the following formula [15]:
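Assuming the standard form of the COC from [15], based on three consecutive residuals, it can be sketched as follows; the function name and the sample residuals are illustrative, not from the paper.

```python
import math

# Hedged sketch of the computational order of convergence (COC) from three
# consecutive residual magnitudes |f(s_{n-1})|, |f(s_n)|, |f(s_{n+1})|:
#   COC = ln(|f(s_{n+1})| / |f(s_n)|) / ln(|f(s_n)| / |f(s_{n-1})|).
def coc(r_prev, r_curr, r_next):
    return math.log(r_next / r_curr) / math.log(r_curr / r_prev)

# For an exactly quadratic residual sequence e, e^2, e^4 the estimate is 2.
order = coc(1e-2, 1e-4, 1e-8)
```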
The numerical results in Tables 1 and 2 affirm the robust performance and high efficiency of the proposed with and without memory methods, reaffirming the theoretical results. The proposed methods achieve better accuracy, with smaller residual errors after three iterations, than the other methods. Furthermore, the COC supports the theoretical convergence order of the new proposed with and without memory methods on the test functions.
5 Concluding Remarks
We have presented in this paper new derivative-free families of with and without memory methods for finding the solutions of nonlinear equations. The use of four accelerating parameters in the with memory methods has enabled us to increase the convergence order from 8 to 15.5156 and to obtain a very high efficiency index of \(15.5156^{\frac{1}{4}} \approx 1.9847\) without extra function evaluations. The numerical results confirm the good performance, validity and applicability of the proposed methods; they are found to be more efficient, with better accuracy in terms of residual errors after three iterations, than the existing methods in the comparison for convergence toward the required simple roots.
References
Traub JF (1982) Iterative methods for the solution of equations. Am Math Soc 312
Singh A, Jaiswal JP (2016) A class of optimal eighth-order Steffensen-type iterative methods for solving nonlinear equations and their basins of attraction. Appl Math Inf Sci 10(1):251–257. https://doi.org/10.18576/amis/100125
Panday S, Sharma A, Thangkhenpau G (2023) Optimal fourth and eighth-order iterative methods for non-linear equations. J Appl Math Comput 69(1):953–971. https://doi.org/10.1007/s12190-022-01775-2
Behl R, Alshomrani AS, Chun C (2020) A general class of optimal eighth-order derivative free methods for nonlinear equations. J Math Chem 58:854–867
Kung HT, Traub JF (1974) Optimal order of one-point and multipoint iteration. J ACM (JACM) 21(4):643–651. https://doi.org/10.1145/321850.321860
Zafar F, Yasmin N, Kutbi MA, Zeshan M (2016) Construction of tri-parametric derivative free fourth order with and without memory iterative method. J Nonlinear Sci Appl 9:1410–1423
Chanu WH, Panday S, Thangkhenpau G (2022) Development of optimal iterative methods with their applications and basins of attraction. Symmetry 14(10):2020. https://doi.org/10.3390/sym14102020
Lotfi T, Assari P (2015) Two new three and four parametric with memory methods for solving nonlinear equations. Int J Ind Math 7(3):269–276
Zafar F, Cordero A, Torregrosa JR, Rafi A (2019) A class of four parametric with- and without-memory root finding methods. Comput Math Methods 1:e1024
Cordero A, Janjua M, Torregrosa JR, Yasmin N, Zafar F (2018) Efficient four parametric with and without-memory iterative methods possessing high efficiency indices. Math Prob Eng 2018:1–12. https://doi.org/10.1155/2018/8093673
Ostrowski AM (1966) Solution of equations and systems of equations. Academic Press, New York-London
Džunić J (2013) On efficient two-parameter methods for solving nonlinear equations. Numer Algor 63:549–569. https://doi.org/10.1007/s11075-012-9641-3
Ortega JM, Rheinboldt WG (1970) Iterative solution of nonlinear equations in several variables. Academic Press, New York
Solaiman OS, Hashim I (2019) Efficacy of optimal methods for nonlinear equations with chemical engineering applications. Math Probl Eng 2019, Article ID 1728965, 11 p. https://doi.org/10.1155/2019/1728965
Petković MS (2011) Remarks on "On a general class of multipoint root-finding methods of high computational efficiency". SIAM J Numer Anal 49:1317–1319
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Thangkhenpau G, Panday S, Mittal SK (2024) New derivative-free families of four-parametric with and without memory iterative methods for nonlinear equations. In: Swain BP, Dixit US (eds) Recent Advances in Electrical and Electronic Engineering. ICSTE 2023. Lecture Notes in Electrical Engineering, vol 1071. Springer, Singapore. https://doi.org/10.1007/978-981-99-4713-3_30
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-4712-6
Online ISBN: 978-981-99-4713-3