Abstract
The objective of this article is to define new efficient iterative methods for finding zeros of nonlinear functions. The procedure is based on the Homeier [12] and Newton [12, 20, 27] methods. The proposed methods require only three function evaluations per iteration (two function evaluations and one first-derivative evaluation). The error equations are derived to prove that the suggested methods have third-order convergence, and the Efficiency Index [20] is 1.4422. Numerical comparisons using several types of functions and different initial guesses are included to demonstrate the convergence speed of the proposed methods, together with a comparison against other well-known iterative methods. It is observed that the proposed methods are very competitive with existing third-order methods.
1 Introduction
Finding zeros of nonlinear functions by using iterative methods is one of the important problems with interesting applications in different branches of science, in particular physics and engineering [20, 22, 27], such as fluid dynamics, nuclear systems, and dynamic economic systems. Also, in mathematics, iterative methods are needed to find rapid solutions of special integrals and differential equations. Recently, many numerical iterative methods have been developed to solve these problems; see [1, 5, 6, 14, 16, 19, 27, 30]. These methods have been suggested and analyzed by using a variety of techniques, such as Taylor series. We first looked for the best approximation of \(f^{\prime}({y}_{n})\), which is used in many iterative methods. We obtained this approximation by combining two well-known methods, the Potra–Pták [23] and Weerakoon [28] methods. Then, we used the Homeier method [12] and this approximation to introduce the first method, which we call the Variant of Homeier Method 1 (VHM1). Finally, we used a predictor–corrector technique to improve the first method (VHM1), and we call the result the Variant of Homeier Method 2 (VHM2). We show that the new iterative methods are of third order of convergence, with Efficiency Index [20] E.I. = 1.4422, and are very robust and competitive with other third-order iterative methods.
2 The Established Methods
For the purpose of comparison, three 2-step third-order methods and two 1-step third-order methods are considered. Since these methods are well established, we state the essential formulas used to calculate the simple zero of nonlinear functions and thus compare the effectiveness of the proposed 2-step third-order methods.
Newton Method [3, 4, 9, 20, 22, 24, 27, 29].
One well-known 1-step iterative zero-finding method is Newton's method,
\(x_{n + 1} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}, \quad n = 0, 1, 2, \ldots\) (1)
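As a baseline, Newton's 1-step scheme can be sketched in a few lines (our own illustrative code; the test function, tolerance, and starting guess are assumptions, not taken from the paper):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical 1-step Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop once the residual is small enough
            break
        x = x - fx / fprime(x)
    return x

# Example: the positive zero of f(x) = x^2 - 2, starting from x0 = 1.5
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
print(root)  # close to sqrt(2) = 1.41421356...
```

Newton's method converges quadratically and uses two evaluations per iteration, which is the baseline the 2-step third-order methods below improve upon.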
Halley Method [7, 8, 10, 11, 15, 24]:
\(x_{n + 1} = x_{n} - \frac{{2f(x_{n} )f^{\prime}(x_{n} )}}{{2f^{\prime}(x_{n} )^{2} - f(x_{n} )f^{\prime\prime}(x_{n} )}}\),
which is the widely known Halley's method. It is a cubically converging (p = 3) 1-step zero-finding algorithm. It requires three function evaluations (r = 3) and its E.I. = 1.4422.
Householder's method, \(x_{n + 1} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\left( {1 + \frac{{f(x_{n} )f^{\prime\prime}(x_{n} )}}{{2f^{\prime}(x_{n} )^{2} }}} \right)\), is also a cubically converging (p = 3) 1-step zero-finding algorithm. It requires three function evaluations (r = 3) and its E.I. = 1.4422.
Weerakoon and Fernando Method [21, 28]:
Obviously, this is an implicit scheme, \(x_{n + 1} = x_{n} - \frac{{2f(x_{n} )}}{{f^{\prime}(x_{n} ) + f^{\prime}(x_{n + 1} )}}\), which requires the derivative of the function at the \((n+1)\)th iterate in order to compute that iterate itself. They overcome this difficulty by making use of Newton's iterative step \(y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\) on the right-hand side, which yields \(x_{n + 1} = x_{n} - \frac{{2f(x_{n} )}}{{f^{\prime}(x_{n} ) + f^{\prime}(y_{n} )}}\).
This scheme has also been derived by Ozban by using the arithmetic mean of \(f^{\prime}(x_{n})\) and \(f^{\prime}({y}_{n})\), i.e., \(\left( f^{\prime}({x}_{n}) + f^{\prime}({y}_{n}) \right)/2\), instead of \(f^{\prime}({x}_{n})\) in Newton's method (1).
The Weerakoon and Fernando method is also a cubically converging (p = 3) 2-step zero-finding algorithm. It requires three function evaluations \(( r = 3 )\) and its \(E.I.= 1.4422\).
Homeier Method [12]:
\(x_{n + 1} = x_{n} - \frac{{f(x_{n} )}}{2}\left( {\frac{1}{{f^{\prime}(x_{n} )}} + \frac{1}{{f^{\prime}(y_{n} )}}} \right)\), where \(y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\).
Homeier's method is also a cubically converging (p = 3) 2-step zero-finding algorithm. It requires three function evaluations (r = 3): one function evaluation and two first-derivative evaluations; its E.I. = 1.4422.
Potra–Pták Method [23]:
\(x_{n + 1} = x_{n} - \frac{{f(x_{n} ) + f(y_{n} )}}{{f^{\prime}(x_{n} )}}\), where \(y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\).
Potra–Pták's method is also a cubically converging (p = 3) 2-step zero-finding algorithm. It requires three function evaluations per iteration: two function evaluations and one first-derivative evaluation (r = 3); its E.I. = 1.4422.
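The three established 2-step schemes, in the forms given in the cited literature (Weerakoon–Fernando [28], Homeier [12] in its harmonic-mean form, and Potra–Pták [23]), can be sketched as single iteration steps; the test function \(f(x) = x^{3} + 4x^{2} - 10\) and the starting guess are our own choices for illustration:

```python
def weerakoon_fernando_step(f, fp, x):
    # x_{n+1} = x_n - 2 f(x_n) / (f'(x_n) + f'(y_n))
    y = x - f(x) / fp(x)
    return x - 2 * f(x) / (fp(x) + fp(y))

def homeier_step(f, fp, x):
    # x_{n+1} = x_n - (f(x_n)/2) * (1/f'(x_n) + 1/f'(y_n))
    y = x - f(x) / fp(x)
    return x - (f(x) / 2) * (1 / fp(x) + 1 / fp(y))

def potra_ptak_step(f, fp, x):
    # x_{n+1} = x_n - (f(x_n) + f(y_n)) / f'(x_n)
    y = x - f(x) / fp(x)
    return x - (f(x) + f(y)) / fp(x)

# Drive one of the steps to convergence on f(x) = x^3 + 4x^2 - 10
f = lambda x: x ** 3 + 4 * x ** 2 - 10
fp = lambda x: 3 * x ** 2 + 8 * x
x = 1.5
for _ in range(10):
    x = homeier_step(f, fp, x)
print(x)  # approx 1.365230013414097
```

All three steps share the Newton predictor \(y_n\) and differ only in how the correction combines \(f\) and \(f'\) values.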
2.1 Construction of the New Methods
In this section, first we define a new third-order method for finding zeros of a nonlinear function. We do that by combining two well-known methods to obtain a new one. In fact, the new iterative method will be an improvement of the classical Homeier method and this will be our first algorithm.
Secondly, we will improve our first algorithm by assuming a three-step iterative method per full cycle. To do that, we perform a Newton iteration at the new third step. We use a third variable \(Z_{n}\) for the third step, which we will approximate later.
First, we equate (combine) the two methods (4) and (6) to obtain \(f^{\prime}({y}_{n})\):
\(f^{\prime}(y_{n} ) = \frac{{f(x_{n} ) - f(y_{n} )}}{{f(x_{n} ) + f(y_{n} )}}f^{\prime}(x_{n} )\). (7)
Now substituting (7) in (5) in order to get the first algorithm
So, we get the first algorithm, Algorithm (1), which we will call the Variant of Homeier Method 1 (VHM1).
For a given \(x_{0}\), compute the approximate solution \(x_{n}\) by the iterative scheme:
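A minimal sketch of VHM1 (our own illustration, not the authors' code): substituting (7) into the harmonic-mean Homeier step simplifies, under our algebra, to \(x_{n + 1} = x_{n} - \frac{{f(x_{n} )^{2} }}{{f^{\prime}(x_{n} )\left( {f(x_{n} ) - f(y_{n} )} \right)}}\), which uses exactly two function evaluations and one derivative evaluation per cycle. The test function and starting guess below are assumptions for illustration.

```python
def vhm1_step(f, fp, x):
    """One VHM1 cycle: a Newton predictor followed by the corrected update.
    Uses two function evaluations (f(x), f(y)) and one derivative (f'(x))."""
    fx, dfx = f(x), fp(x)
    y = x - fx / dfx                  # Newton predictor y_n
    # f'(y_n) is replaced via (7); the harmonic-mean step then simplifies to:
    return x - fx * fx / (dfx * (fx - f(y)))

def solve(step, f, fp, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:           # stop before stepping on a tiny residual
            break
        x = step(f, fp, x)
    return x

f = lambda x: x ** 3 + 4 * x ** 2 - 10   # classical test function (assumed)
fp = lambda x: 3 * x ** 2 + 8 * x
root = solve(vhm1_step, f, fp, 1.5)
print(root)  # approx 1.365230013414097
```

The residual check before each step avoids dividing by \(f(x_{n}) - f(y_{n})\) once both values are at roundoff level.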
Now, we need to derive the next algorithm, which will be an improvement of the first one. The main goal is to make the new scheme optimal. We perform a Newton iteration at the new third step, which comes next:
Now, we try to simplify our new scheme to reach the convergence rate three with three function evaluations per full cycle; two function evaluations and one first derivative evaluation. Obviously, \(f({z}_{n})\) and \(f{^{\prime}}({z}_{n})\) should be approximated. We replace \(f{^{\prime}}({z}_{n})\) by \(f{^{\prime}}({x}_{n})\) and write the Taylor expansion of \(f ({z}_{n})\) about \({x}_{n}\) [25].
Now, \(f^{\prime\prime}(x_{n} )\) should be approximated as well. Once again we write the Taylor expansion of \(f(y_{n} )\) about \(x_{n}\) as follows:
From (13) and (9), we obtain \(f^{\prime\prime}(x_{n} )\) as follows:
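For clarity, the two expansions can be written out explicitly (a standard reconstruction of this step; the intermediate equation numbers are not reproduced here). Since \(y_{n} - x_{n} = - f(x_{n} )/f^{\prime}(x_{n} )\),

\[ f(y_{n} ) \approx f(x_{n} ) + f^{\prime}(x_{n} )(y_{n} - x_{n} ) + \tfrac{1}{2}f^{\prime\prime}(x_{n} )(y_{n} - x_{n} )^{2} = \frac{f^{\prime\prime}(x_{n} )\,f(x_{n} )^{2} }{2f^{\prime}(x_{n} )^{2} }, \]

so that

\[ f^{\prime\prime}(x_{n} ) \approx \frac{2f(y_{n} )\,f^{\prime}(x_{n} )^{2} }{f(x_{n} )^{2} }. \]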
Now substituting (14) and (10) in (12), we obtain \(f(z_{n} )\) as follows:
Now, we substitute (10) and (15) in (11), also \(f^{\prime}(x_{n} )\) instead of \(f^{\prime}(z_{n} )\):
After some simplification, we obtain a new algorithm.
Algorithm (2): we will call it the Variant of Homeier Method 2 (VHM2).
For a given \(x_{0}\), compute the approximate solution \(x_{n + 1}\) by an iterative scheme
As we can see, both algorithms require only two function evaluations and one first-derivative evaluation per cycle. Comparing both algorithms with Homeier's method, there is a clear difference: Homeier's method requires one function evaluation and two first-derivative evaluations.
2.2 Convergence Criteria of the New Methods
Now, we compute the orders of convergence and the corresponding error equations of the proposed methods, Algorithms (8) and (16).
Theorem 2.1
Let \(\alpha \in I\) be a simple zero of a sufficiently differentiable function \(f{:}\; I \subseteq \mathbb{R} \to \mathbb{R}\) for an open interval \(I\). If \(x_{0}\) is sufficiently close to \(\alpha\), then the iterative method defined by Algorithm (1) is of order three and satisfies the error equation:
where
\(C_{k} = \frac{{f^{(k)} (\alpha )}}{k!f^{\prime}(\alpha )}\), \({\rm k} = 2, 3, \ldots {\rm and}\, e_{n} = x_{n} - \alpha.\)
Proof
Let \(\alpha\) be a simple zero of \(f\), so \(f^{\prime}(\alpha ) \ne 0\). Using Taylor's series expansion around \(\alpha\) at the nth iterate results in
But \(y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\) and \(e_{n} = x_{n} - \alpha\). Using (19), we get
Expanding \(f(y_{n} )\) by Taylor's theorem about \(\alpha\) once again and using (20):
\(f(y_{n} ) = f(\alpha ) + f^{\prime}(\alpha )(y_{n} - \alpha ) + \frac{f^{\prime\prime}(\alpha )}{{2!}}(y_{n} - \alpha )^{2}\) but \(f(\alpha ) = 0\),
\(f(y_{n} ) = f^{\prime}(\alpha )\left[ {(y_{n} - \alpha ) + \frac{f^{\prime\prime}(\alpha )}{{2!f^{\prime}(\alpha )}}(y_{n} - \alpha )^{2} } \right]\) and \(\frac{f^{\prime\prime}(\alpha )}{{2!f^{\prime}(\alpha )}} = C_{2}\)
\(f(x_{n} ) - f(y_{n} ) = f^{\prime}(\alpha )\left[ {e_{n} + (2C_{2}^{2} - C_{3} )e_{n}^{3} + O(e_{n}^{4} )} \right]\), and by using (18), we obtain
Putting (24) in the Algorithm (1), Eq. (8), we get
\(x_{n + 1} = x_{n} - \left\{ {e_{n} - (2C_{2}^{2} + 2C_{3} )e_{n}^{3} + O(e_{n}^{4} )} \right\}\), where \(e_{n} = x_{n} - \alpha\).
Now, \(e_{n + 1} = x_{n + 1} - \alpha\); substituting (25), we get
\(e_{n + 1} = (2C_{2}^{2} + 2C_{3} )e_{n}^{3} + O(e_{n}^{4} )\), and the proof is complete.
Theorem 2.2
Let \(\alpha \in I\) be a simple zero of a sufficiently differentiable function \(f{:}\; I \subseteq \mathbb{R} \to \mathbb{R}\) for an open interval \(I\). If \(x_{0}\) is sufficiently close to \(\alpha\), then the iterative method defined by (16) is of order three and satisfies the error equation:
Proof
Let \(\alpha\) be a simple zero of \(f\), so \(f^{\prime}(\alpha ) \ne 0\); once again, we follow the same procedure as in Theorem 2.1.
And then dividing (26) by (27), we get
By using (19) and (28), we obtain:
Now, substituting (29) in Algorithm (1.2), we get
Now, using the previous result in \(e_{n + 1} = x_{n + 1} - \alpha\), the proof is complete.
2.3 More Suggestions
In this section, we present new modifications of important methods for solving nonlinear equations of the type \(f(x) = 0\) by substituting formula (7), \(f^{\prime}(y_{n} ) = \frac{{f(x_{n} ) - f(y_{n} )}}{{f(x_{n} ) + f(y_{n} )}}f^{\prime}(x_{n} )\), into well-known methods.
As we will see, this substitution is very helpful because it reduces the number of required derivative evaluations in iteration schemes. We introduce only two suggestions as examples and show their rates of convergence.
Example 1
Consider Noor and Gupta's fourth-order method [17, 18].
\(y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\),
By substituting (7) in (30), we get
or in another form:
Theorem 2.3
Let \(\alpha \in I\) be a simple zero of a sufficiently differentiable function \(f{:}\; I \subseteq \mathbb{R} \to \mathbb{R}\) for an open interval \(I\). If \(x_{0}\) is sufficiently close to \(\alpha\), then the iterative method introduced in (31) is of order four and satisfies the error equation:
Notes:

1. The suggested method requires only two function evaluations and one derivative evaluation (r = 3).

2. Rate of convergence p = 4.

3. Efficiency Index E.I. = \(p^{1/r} = 4^{1/3} = 1.5874\).
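The efficiency indices quoted throughout follow directly from the definition \(E.I. = p^{1/r}\); a quick check (our own snippet):

```python
def efficiency_index(p, r):
    # p = order of convergence, r = function/derivative evaluations per iteration
    return p ** (1.0 / r)

print(efficiency_index(3, 3))  # third-order, three evaluations: ~1.4422
print(efficiency_index(4, 3))  # fourth-order, three evaluations: ~1.5874
print(efficiency_index(2, 2))  # Newton's method: ~1.4142
```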
Example 2
Consider Jarratt's fourth-order method [2].
\(y_{n} = x_{n} - \frac{{2f(x_{n} )}}{{3f^{\prime}(x_{n} )}}\),
\(x_{n + 1} = x_{n} - \frac{{3f^{\prime}(y_{n} ) + f^{\prime}(x_{n} )}}{{6f^{\prime}(y_{n} ) - 2f^{\prime}(x_{n} )}} \cdot \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\)
By substituting the previous formula (7), we get
where
with error equation
Theorem 2.4
Let \(\alpha \in I\) be a simple zero of a sufficiently differentiable function \(f{:}\; I \subseteq \mathbb{R} \to \mathbb{R}\) for an open interval \(I\). If \(x_{0}\) is sufficiently close to \(\alpha\), then the iterative method introduced in (32) is of order two and satisfies the error equation:
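For reference, Jarratt's original fourth-order scheme, as written above, can be sketched directly (our own illustration with an assumed test function; per Theorem 2.4, the derivative-reduced modification (32) drops to order two, so the original form is shown here):

```python
def jarratt_step(f, fp, x):
    """One step of Jarratt's fourth-order method [2]."""
    fx, dfx = f(x), fp(x)
    y = x - 2 * fx / (3 * dfx)            # y_n = x_n - (2/3) f(x_n)/f'(x_n)
    dfy = fp(y)
    # x_{n+1} = x_n - [3f'(y)+f'(x)] / [6f'(y)-2f'(x)] * f(x)/f'(x)
    return x - (3 * dfy + dfx) / (6 * dfy - 2 * dfx) * fx / dfx

# Assumed example: the real cube root of 2, i.e. the zero of x^3 - 2
f = lambda x: x ** 3 - 2
fp = lambda x: 3 * x ** 2
x = 1.3
for _ in range(4):
    x = jarratt_step(f, fp, x)
print(x)  # accurate to machine precision for 2**(1/3)
```

With fourth-order convergence, four iterations from a nearby guess already exhaust double precision.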
3 Numerical Examples
In this section, we first present the results of numerical calculations on different functions and initial guesses to demonstrate the efficiency of the suggested methods, the Variant of Homeier Method 1 (VHM1) and its improvement (VHM2). We also compare these methods with well-known methods, such as Halley's, the Weerakoon–Fernando, and the Potra–Pták methods. All computations are carried out with 15 decimal places (see Table 1; the approximate zeros \(\alpha\) are given up to the 15th decimal place).
All programs and computations were completed using MATLAB 2009a. Table 2 displays the number of iterations (IT) and the computational order of convergence (COC). Table 3 displays the number of function evaluations (r), the convergence order (p), the efficiency index (E.I.), the sum of iterations, and the average COC for each method. When the sought zero \(\alpha\) was reached after only three iterations, we used the second formula to compute the COC of the iterative method. Furthermore, we set the COC to zero when an iterative method diverged. Table 4 displays the number of function evaluations and derivative evaluations required by each method.
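The COC reported in Table 2 is commonly estimated from three successive iterates as \(\mathrm{COC} \approx \frac{{\ln \left| {(x_{n + 1} - \alpha )/(x_{n} - \alpha )} \right|}}{{\ln \left| {(x_{n} - \alpha )/(x_{n - 1} - \alpha )} \right|}}\). The sketch below (our own illustration, using Halley's method on an assumed test function; not the paper's MATLAB code) shows the estimate in practice:

```python
import math

def coc(x0, x1, x2, alpha):
    """Computational order of convergence from three successive iterates."""
    e0, e1, e2 = abs(x0 - alpha), abs(x1 - alpha), abs(x2 - alpha)
    return math.log(e2 / e1) / math.log(e1 / e0)

def halley_step(f, fp, fpp, x):
    # x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')
    fx, dfx = f(x), fp(x)
    return x - 2 * fx * dfx / (2 * dfx * dfx - fx * fpp(x))

f, fp, fpp = (lambda x: x ** 3 - 2), (lambda x: 3 * x ** 2), (lambda x: 6 * x)
alpha = 2 ** (1 / 3)
x0 = 1.3
x1 = halley_step(f, fp, fpp, x0)
x2 = halley_step(f, fp, fpp, x1)
print(coc(x0, x1, x2, alpha))  # close to 3 for a cubically convergent method
```

In double precision only the first few iterates are usable for the estimate, since a third-order method reaches machine accuracy almost immediately.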
4 Conclusion
We have developed two 2-step iterative methods for finding zeros of nonlinear functions, (VHM1) and (VHM2). The main goal is to find and improve iterative schemes that require fewer derivative evaluations of the function, since additional derivative evaluations cost more time and effort from an industrial point of view. Both new methods require only two function evaluations and one first-derivative evaluation. In contrast, known methods such as Halley's and Householder's require one function evaluation, one first-derivative evaluation, and one second-derivative evaluation, whereas the Weerakoon and Homeier methods require one function evaluation and two first-derivative evaluations (see Table 4). Furthermore, we have proved theoretically that both new methods are of order three. The corresponding numerical experiments are displayed in Tables 2, 3, and 4.
In addition, based on numerical experiments, the proposed methods are also compared with the previous well-known iterative methods of the same order of convergence. The performance of the proposed methods can be seen in Tables 2, 3, and 4.
Moreover, it can easily be seen that both new methods are more efficient and robust, and converge faster, than the other methods with respect to the required number of derivative evaluations, the iteration counts (IT), and the COC results.
Numerical experiments show that the order of convergence of both methods is at least three.
References
Ababneh, O.Y.: New iterative methods for solving nonlinear equations and their basins of attraction. WSEAS Trans. Math. 21, 9–16 (2022)
Ahmad, F., Hussain, S., Rafiq, A.: New twelfth-order J-Halley method for solving nonlinear equations. Open Sci. J. Math. Appl. 1(1), 1–4 (2013)
Atkinson, K.E.: An Introduction to Numerical Analysis. Wiley, Inc (1989). ISBN 0-471-62489-6
Bonnans, J.F., Gilbert, J.C., Lemaréchal, C., Sagastizábal, C.A.: Numerical Optimization: Theoretical and Practical Aspects, 2nd edn. Springer-Verlag, Berlin (2006). ISBN 3-540-35445-X. MR 2265882
Chun, C., Neta, B.: Comparative study of methods of various orders for finding simple roots of nonlinear equations. J. Appl. Anal. Comput. 9, 400–427 (2019)
Eldanfour, H.M.: Modified Newton’s methods with seventh- or eighth-order convergence. Gen. Lett. Math. 1(1), 1–10 (2016). https://doi.org/10.31559/glm2016.1.1.1
Ezquerro, J.A., Hernandez, M.A.: Unparametric Halley-type iteration with free second derivative. Int. J. Pure Appl. Math. 6(1), 103–114 (2003)
Ezquerro, J.A., Hernandez, M.A.: On Halley-type iterations with free second derivative. J. Comput. Appl. Math. 170, 455–459 (2004)
Gautschi, W.: Numerical Analysis: An Introduction. Birkhauser (1997)
Gutierrez, J.M., Hernandez, M.A.: An acceleration of Newton’s method: super-Halley method. Appl. Math. Comput. 117, 223–239 (2001)
Halley, E.: A new exact and easy method of finding the roots of equations generally and that without any previous reduction. Philos. Trans. R. Soc. London 18, 136–148 (1694)
Homeier, H.H.H.: On Newton–type methods with cubic convergence. J. Comput. Appl. Math. 176, 425–432 (2005)
Kumar, S., Kanwar, V., Singh, S.: Modified efficient families of two and three-step predictor-corrector iterative methods for solving nonlinear equations. Appl. Math. 1, 153–158 (2010)
Lambers, J.: Error Analysis for Iterative Methods. Lecture 12 notes, MAT 460/560 (2009)
Melman, A.: Geometry and convergence of Halley’s method. SIAM Rev. 39(4), 728–735 (1997)
Neta, B.: A new derivative-free method to solve nonlinear equations. Mathematics 9, 583 (2021). https://doi.org/10.3390/math9060583
Noor, M.A., Gupta, V.: Modified householder iterative method free from second derivative for nonlinear equations. Appl. Math. Comput. (2007) in press
Noor, M.A., Khan, W.A., Noor, K.I., Al-said, E.: Higher order iterative methods free from second derivative for solving nonlinear equations. Int. J. Phy. Sci. 6(8), 1887–1897 (2011)
Ricceri, B.: A class of equations with three solutions. Mathematics 8, 478 (2020)
Ostrowski, A.M.: Solution of Equations and Systems of Equations. Academic Press, New York (1960)
Ozban, A.Y.: Some new variants of Newton’s method. Appl. Math. Lett. 17, 677–682 (2004)
Petkovic, M.S., Neta, B., Petkovic, L.D., Dzunic, J.: Multipoint Methods for Solving Nonlinear Equations. Elsevier (2012)
Potra, F.A., Pták, V.: Nondiscrete Induction and Iterative Processes. Research Notes in Mathematics, vol. 103. Pitman, Boston (1984)
Scavo, T.R., Thoo, J.B.: On the geometry of Halley's method. Am. Math. Mon. (1994)
Soleymani, F., Sharma, R., Li, X., Tohidi, E.: An optimized derivative-free form of the Potra–Pták method. Math. Comput. Model. 56, 97–104 (2012)
Thukral, R.: New modifications of Newton-type methods with eighth-order convergence for solving nonlinear equations. J. Adv. Math. 10(3), 3362–3373 (2015)
Traub, J.F.: Iterative Methods for the Solution of Equations. Chelsea Publishing Company, New York (1977)
Weerakoon, S.T., Fernando, G.I.: A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000)
Ypma, J.T.: Historical development of the Newton-Raphson method. SIAM Rev. 37(4), 531–551 (1995)
Zhanlav, T., Otgondorj, K.: Comparison of some optimal derivative-free three-point iterations. J. Numer. Anal. Approx. Theory. 49, 76–90 (2020)
Ethics declarations
Conflict of Interest Statement:
The authors declare no conflict of interest regarding this publication.
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Ababneh, O., Al-Boureeny, K. (2023). New Modification Methods for Finding Zeros of Nonlinear Functions. In: Zeidan, D., Cortés, J.C., Burqan, A., Qazza, A., Merker, J., Gharib, G. (eds) Mathematics and Computation. IACMC 2022. Springer Proceedings in Mathematics & Statistics, vol 418. Springer, Singapore. https://doi.org/10.1007/978-981-99-0447-1_37
Print ISBN: 978-981-99-0446-4
Online ISBN: 978-981-99-0447-1