Abstract
In this article, we present two-step with memory iterative methods for solving nonlinear equations. We convert an existing fourth-order without memory method into a with memory method. Further acceleration of the convergence order is attained by means of different approximations of the self-accelerating parameter. The parameter is computed from Hermite interpolating polynomials and is used to accelerate the order of convergence of the without memory method. In particular, the R-order of the proposed two-step with memory iterative method is increased without any additional calculations, so the method possesses high computational efficiency. Finally, the theoretical results are confirmed on several numerical examples. The numerical comparisons show that the new family is efficient and competes strongly with some existing with memory iterative methods.
1 Introduction
Solving a nonlinear equation \(f(x)=0\) is a very significant problem in the real world. Many iterative methods have been proposed to determine the solutions of nonlinear equations (see [1,2,3,4,5]); these methods form a vital area of research in numerical analysis, with applications in numerous branches of pure and applied sciences. Among them, the most famous one-point without memory iterative method is the Newton–Raphson method, given by
\(x_{n+1}= x_n - \frac{f{(x_n)}}{f'{(x_n)}}, \quad n=0,1,2,\ldots ,\) where \(x_0\) is the initial guess for the exact solution of \(f(x)=0\). This method converges quadratically. One drawback of this method is the condition \(f'(x_n) \ne 0\), which restricts its applications in practice. To resolve this difficulty, Kumar et al. [6] developed a new one-point iterative method, specified by
Setting \(\lambda _1 = 0\) in (2), we obtain Newton’s method. The error expression for the above method is:
where \(e_n=x_n- \alpha \), \(c_k=\frac{1}{k!} \frac{f^{(k)}{(\alpha )}}{f'{(\alpha )}}\), \(k=2,3,\ldots \) and \( \alpha \) is a root of \(f{(x)}=0\).
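As a point of reference, the classical Newton iteration can be sketched in a few lines of Python; this is a minimal illustration, where the test function \(f(x)=x^2-2\) and the starting guess are our own examples, not drawn from the paper:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:
            # the method breaks down when f'(x_n) = 0
            raise ZeroDivisionError("f'(x_n) = 0: Newton step undefined")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# root of f(x) = x^2 - 2 near x0 = 1 is sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```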
Subsequently, we turn to one-point iterative methods with memory. In the mapping \(x_{n+1}= \phi {(x_n;x_{n-1},\ldots ,x_{n-k})}\), the iteration function \(\phi \) is called a one-point iteration function with memory, since \(x_n\) is new information, while \(x_{n-1},\ldots ,x_{n-k}\) are reused information. The best known one-point with memory method is the secant method, given by
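A minimal Python sketch of the secant iteration \(x_{n+1}= x_n - \frac{x_n-x_{n-1}}{f{(x_n)}-f{(x_{n-1})}}\,f{(x_n)}\) illustrates the reuse of the previous approximation; the test function is an arbitrary example of ours:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration: reuses x_{n-1} and f(x_{n-1}) from the previous step."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break  # degenerate step; cannot divide by f(x_n) - f(x_{n-1})
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1          # shift the memory window
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)   # converges to sqrt(2)
```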
Next, an iteration function \(\phi \) defined through the mapping \(x_{n+1}= \phi {(x_n, w_1{(x_{n})},\ldots , w_n{(x_{n})})}\) is termed a multipoint iteration function without memory [7], where the terms \(w_1{(x_{n})},\ldots ,w_n{(x_{n})}\) have the common argument \(x_n\). Let us introduce one more iteration function \(\phi \) with arguments \(z_j\), where each argument represents \(n+1\) quantities \(x_j, w_1{(x_{j})},\ldots ,w_n{(x_{j})}\) \((n \ge 1)\). The iteration mapping \(x_{n+1}= \phi {(z_n, z_{n-1},\ldots ,z_{n-k})}\) is called a multipoint iteration function with memory; in each iterative step we must preserve the information of the last n approximations \(x_j\), and for each approximation we are required to calculate n expressions \( w_1{(x_{j})},\ldots ,w_n{(x_{j})}\). Nowadays, many researchers are focusing on developing with memory iterative methods using one or more self-accelerating parameters. Several notable contributions are devoted to derivative-free with memory iterative methods, such as [8,9,10,11], but very few with memory methods involving derivatives are available in the literature for solving nonlinear equations. The main objective of this paper is to develop a multipoint iteration function with memory that improves the order of convergence of a without memory method with derivatives, without using any additional calculations, and that has very high computational efficiency. In this paper, we present a new multipoint iterative method with memory for solving nonlinear equations; its convergence analysis is also discussed. In Sect. 2, we derive a two-point iterative method with memory. This method is obtained by employing a self-accelerating parameter, which is computed by Hermite interpolating polynomials; the R-order of convergence of the two-point method is thereby increased from 4 to 4.5616, 4.7913 and 5, respectively.
Section 3 corroborates the theoretical results by means of different numerical examples.
2 Development and convergence analysis of with memory method
In this section, we improve the convergence rate of the method given by Hafiz and Bahgat [12]. First, consider the optimal fourth-order without memory scheme presented in [12],
where \(P_1{(x_n,y_n)}=2 \left( \frac{f{(y_n)}-f{(x_n)}}{y_n-x_n}\right) -f'{(x_n)}\) and \(P_2{(x_n,y_n)}=\frac{2}{y_n-x_n} \left( \frac{f{(y_n)}-f{(x_n)}}{y_n-x_n}-f'{(x_n)}\right) \). The error expression for iterative scheme (5) is
Now, we use the idea considered in Kumar et al. [6] to change the first step of the above method and consider
If we set \(T=0\), then we obtain the first step of the method (5). The error expressions for the sub-steps of scheme (7) are
and
where \(c_j=\frac{f^{(j)}(\alpha )}{j!f'{(\alpha )}}\), for \(j=2,3,\ldots \) and \(T, \gamma \in R\). Replacing T by \(T_n\) in the scheme (7), we obtain the following with memory method
which is denoted by OM4(2.6). It is easy to see from Eq. (6) that the order of convergence of scheme (5) is four, and after replacing the first step of method (5) by the first step of scheme (7), the order of scheme (7) is also four when \(T \ne c_2\). Now, by taking \(T=c_2=f''{(\alpha )}/(2f'{(\alpha )})\), it can be established that the order of method (7) becomes 5. For this kind of acceleration of convergence, however, the exact values of \(f'{(\alpha )}\) and \(f''{(\alpha )}\) are not available. Instead, we replace the parameter T by a varying parameter \(T_n\). To determine the values of this parameter, we use the information accessible from the current and previous iterations, requiring \(\lim _{n\rightarrow \infty } T_n = c_2 = f''{(\alpha )}/(2f'{(\alpha )})\), so that the fourth-order asymptotic error constant in the error expression (9) vanishes. We consider the following formulas for \(T_n\):
Method 1:
where \(H_2{(x)}= f{(x_n)}+ f{[x_n,x_n]}(x-x_n)+ f{[x_n,x_n,y_{n-1}](x-x_n)^2}\) and \(H''_2{(x)}=2 f{[x_n,x_n,y_{n-1}]}\).
Method 2:
where \(H_3{(x)}= H_2{(x)}+ f{[x_n,x_n,y_{n-1},x_{n-1}]}(x-x_n)^2(x-y_{n-1})\) and \(H''_3{(x)}= 2f{[x_n,x_n,y_{n-1}]}+2f{[x_n,x_n,y_{n-1},x_{n-1}]}(x_n-y_{n-1})\).
Method 3:
where \(H_4{(x)}= H_3{(x)}+ f{[x_n,x_n,y_{n-1},x_{n-1},x_{n-1}]}(x-x_n)^2(x-y_{n-1})(x-x_{n-1})\) and \(H''_4{(x_n)}= 2f{[x_n,x_n,y_{n-1}]}+2f{[x_n,x_n,y_{n-1},x_{n-1}]}(x_n-y_{n-1})+2f{[x_n,x_n,y_{n-1},x_{n-1},x_{n-1}]}(x_n-y_{n-1})(x_n-x_{n-1})\).
Note: The Hermite interpolating polynomials \(H_m(x)\) \((m=2,3,4)\) satisfy the condition \(H'_m(x_n)=f'(x_n)\). Hence, \(T_n=\frac{H''_m(x_n)}{2f'(x_n)}\) can also be expressed as \(T_n=\frac{H''_m(x_n)}{2H'_m(x_n)}\) \((m=2,3,4)\).
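For concreteness, the parameters (11) and (12) reduce to short divided-difference computations, using \(f[x_n,x_n]=f'(x_n)\) for the repeated node. The Python sketch below is our own illustrative implementation; the function names are not from the paper:

```python
def T_method1(f, fprime, x_n, y_prev):
    """T_n of (11): H''_2(x_n)/(2 f'(x_n)) with H''_2(x_n) = 2 f[x_n,x_n,y_{n-1}]."""
    d_xy = (f(y_prev) - f(x_n)) / (y_prev - x_n)       # f[x_n, y_{n-1}]
    d_xxy = (d_xy - fprime(x_n)) / (y_prev - x_n)      # f[x_n, x_n, y_{n-1}]
    return d_xxy / fprime(x_n)                         # 2 d_xxy / (2 f'(x_n))

def T_method2(f, fprime, x_n, y_prev, x_prev):
    """T_n of (12): H''_3(x_n)/(2 f'(x_n))."""
    d_xy = (f(y_prev) - f(x_n)) / (y_prev - x_n)
    d_xxy = (d_xy - fprime(x_n)) / (y_prev - x_n)
    d_yx = (f(x_prev) - f(y_prev)) / (x_prev - y_prev)   # f[y_{n-1}, x_{n-1}]
    d_xyx = (d_yx - d_xy) / (x_prev - x_n)               # f[x_n, y_{n-1}, x_{n-1}]
    d_xxyx = (d_xyx - d_xxy) / (x_prev - x_n)            # f[x_n, x_n, y_{n-1}, x_{n-1}]
    h3pp = 2.0 * d_xxy + 2.0 * d_xxyx * (x_n - y_prev)   # H''_3(x_n)
    return h3pp / (2.0 * fprime(x_n))
```

Since \(H_m\) reproduces any polynomial of degree at most m exactly, these formulas return \(f''(x_n)/(2f'(x_n))\) exactly on such polynomials, which makes them easy to sanity-check.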
Theorem 1
Let \(H_m\) be the Hermite interpolating polynomial of degree m that interpolates a function f at the interpolation nodes \(x_n,x_n,t_0,\ldots ,t_{m-2}\) contained in an interval I, let the derivative \(f^{(m+1)}\) be continuous in I, and let the Hermite interpolating polynomial satisfy \(H_m{(x_n)}= f{(x_n)}\), \(H'_m{(x_n)}=f'{(x_n)}\), \(H_m{(t_j)}=f{(t_j)}\) \((j=0,1,\ldots ,m-2)\). Define the errors \(e_{t,j}=t_j-\alpha \) \((j=0,1,\ldots ,m-2)\) and assume that
(1) all nodes \(x_n,t_0,\ldots ,t_{m-2}\) are sufficiently close to the zero \(\alpha \),
(2) the condition \(e_n=O(e_{t,0}\ldots e_{t,m-2})\) holds. Then
and
Proof
The error of the Hermite interpolation can be expressed as follows:
After differentiating (17) twice at the point \(x=x_n\), we obtain
Taylor series expansions of the derivative of f at the points \(x_n \in I\) and \(\xi \in I\) about the zero \(\alpha \) of f give
and
where \(e_{\xi }=\xi -\alpha \). Substituting (20) and (21) into (18), we find
This implies
Thus
or
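The conclusion of Theorem 1 can be checked numerically: as the nodes approach \(\alpha \), \(H''_2(x_n)=2f[x_n,x_n,y_{n-1}]\) tends to \(f''(\alpha )\). The sketch below is our own check, with an illustrative test function and node spacing; the error shrinks roughly linearly with the node distance:

```python
import math

def hermite_second_deriv(f, fprime, x, y):
    """H''_2(x) = 2 f[x, x, y], built from divided differences."""
    d_xy = (f(y) - f(x)) / (y - x)
    return 2.0 * (d_xy - fprime(x)) / (y - x)

f = lambda x: math.exp(x) - 1.0     # simple root alpha = 0, with f''(0) = 1
fp = lambda x: math.exp(x)

# nodes x_n = h, y_{n-1} = 2h shrink toward alpha; the estimate approaches f''(0) = 1
errs = [abs(hermite_second_deriv(f, fp, h, 2.0 * h) - 1.0) for h in (1e-1, 1e-2, 1e-3)]
```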
The concept of the R-order of convergence [13] and the following assertion (see [14], p. 287) will be applied to estimate the convergence order of the iterative method (10).
Theorem 2
If the errors of approximations \(e_j=x_j-\alpha \) obtained in an iterative root finding method IM satisfy
then the R-order of convergence of IM, denoted by \(O_R (IM,\alpha )\), satisfies the inequality \(O_R (IM,\alpha ) \ge s^*\), where \(s^*\) is the unique positive solution of the equation \(s^{n+1}-\sum _{i=0}^{n} m_i s^{n-i}=0\).
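Theorem 2 is easy to apply numerically. For instance, an error relation \(e_{n+1}\le c\,e_n^{4}e_{n-1}^{2}\) gives \(s^{2}-4s-2=0\), whose unique positive root is \(s^{*}=2+\sqrt{6}\approx 4.449\). A small bisection helper (our own sketch, not part of the paper) computes such roots:

```python
def r_order(m, tol=1e-12):
    """Unique positive root s* of s^(k+1) - sum_i m_i s^(k-i) = 0, m = [m_0,...,m_k].
    Bisection on [1, 1 + sum(m)], which brackets the root when sum(m) > 1."""
    k = len(m) - 1
    p = lambda s: s ** (k + 1) - sum(mi * s ** (k - i) for i, mi in enumerate(m))
    lo, hi = 1.0, 1.0 + sum(m)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_star = r_order([4, 2])   # e_{n+1} <= c e_n^4 e_{n-1}^2  =>  s* = 2 + sqrt(6)
```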
We state the following convergence theorem for the with memory iterative method (10).
Theorem 3
Let the varying parameter \(T_n\) in the iterative method (10) be calculated by (11)–(13). If an initial approximation \(x_0\) is sufficiently close to a simple root \(\alpha \) of f(x), then the R-order of convergence of iterative methods (11)–(10), (12)–(10) and (13)–(10) with memory is at least \((5+\sqrt{17})/2 \approx 4.5616\), \((5+\sqrt{21})/2 \approx 4.7913\) and 5, respectively.
Proof
Let the sequence \(\{x_n\}\) generated by an iterative method (IM) converge to the root \(\alpha \) of f(x) with R-order \(O_R(IM,\alpha ) \ge r\); then we may write
As \(n \rightarrow \infty \), \(D_{n,r}\) tends to the asymptotic error constant \(D_r\) of IM. Consequently,
The following error expressions of the with memory method (10) can be obtained from (8), (9) and the varying parameter \(T_n\):
and
where \(B_{n,4}\) is a varying quantity, arising from (9), due to the self-accelerating parameter \(T_n\). Higher-order terms in (28) and (29) have been omitted.
Method 1. \(T_n\) is calculated by (11). Similarly to the derivation of (26), we suppose that the iterative sequence \(\{y_n\}\) has R-order p, so that
Using Theorem 1 for \(m=2\) and \(t_0=y_{n-1}\), we obtain
Now from (28), (29) and (31), we obtain
and
The following system of equations is obtained by comparing the exponents of \(e_{n-1}\) appearing in the two pairs of relations (30)–(32) and (27)–(33):
The positive solution of system (34) is given by \(r=(5+ \sqrt{17})/2\) and \( p=(1+\sqrt{17})/2\). As a result, the R-order of the with memory method (10)–(11) is at least \((5+ \sqrt{17})/2 \approx 4.5616\).
Method 2. \(T_n\) is calculated by (12): Using Theorem 1 for \(m=3\), \(t_0=y_{n-1}\) and \(t_1=x_{n-1}\), we obtain
Now using (28), (29) and (35), we obtain
and
We obtain the following system of equations by comparing the exponents of \(e_{n-1}\) appearing in the two pairs of relations (30)–(36) and (27)–(37):
The positive solution of system (38) is given by \(r=(5+ \sqrt{21})/2\) and \( p=(1+\sqrt{21})/2\). As a result, the R-order of the with memory method (10)–(12) is at least \((5+ \sqrt{21})/2 \approx 4.7913\).
Method 3. \(T_n\) is calculated by (13). The Hermite interpolating polynomial \(H_4(x)\) satisfies the conditions \(H_4(x_n)=f(x_n)\), \(H'_4(x_n)=f'(x_n)\), \(H_4(y_{n-1})=f(y_{n-1})\), \(H_4(x_{n-1})=f(x_{n-1})\) and \(H'_4(x_{n-1})=f'(x_{n-1})\). The error of the Hermite interpolation can be expressed as follows:
After differentiating (39) twice at the point \(x=x_n\), we obtain
Taylor series expansions of the derivatives of f at the points \(x_n \in I\) and \(\xi \in I\) about the zero \(\alpha \) of f give
and
where \(e_{\xi }=\xi -\alpha \).
Substituting (43) and (42) into (40), we obtain
By means of (28) and (29), we have
and
Then
after substituting (47) into (44), we obtain
which implies
And hence
or
Substituting (51) into (29), we obtain
As a result, the R-order of the with memory method (10)–(13) is at least 5. This completes the proof.
3 Results and discussion
Now we compare two-step with memory methods with our two-step method OM4(2.6). Wang and Zhang [15] proposed two with memory methods, both two-step methods of the following form:
where \(a \in R\), which is denoted by XW41(16), and
where \(b \in R\), which is denoted by XW42(17). We take the values of the self-accelerating parameter \(T_n\) for both methods in the following forms:
Method 4:
where \(H_2{(x)}= f{(x_n)}+ f{[x_n,x_n]}(x-x_n)+ f{[x_n,x_n,y_{n-1}](x-x_n)^2}\) and \(H''_2{(x)}= 2f{[x_n,x_n,y_{n-1}]}\).
Method 5:
where \(H_3{(x)}= H_2{(x)}+ f{[x_n,x_n,y_{n-1},x_{n-1}]}(x-x_n)^2(x-y_{n-1})\) and \(H''_3{(x)}= 2f{[x_n,x_n,y_{n-1}]}+2f{[x_n,x_n,y_{n-1},x_{n-1}]}(x_n-y_{n-1})\).
Method 6:
where \(H_4{(x)}= H_3{(x)}+ f{[x_n,x_n,y_{n-1},x_{n-1},x_{n-1}]}(x-x_n)^2(x-y_{n-1})(x-x_{n-1})\) and \(H''_4{(x_n)}= 2f{[x_n,x_n,y_{n-1}]}+2f{[x_n,x_n,y_{n-1},x_{n-1}]}(x_n-y_{n-1})+2f{[x_n,x_n,y_{n-1},x_{n-1},x_{n-1}]}(x_n-y_{n-1})(x_n-x_{n-1})\).
Table 1 lists four nonlinear test functions together with their roots; two functions are taken from [16] and the other two from [17]. The numerical results presented in Table 2 are in accordance with the theory developed in this paper. The absolute errors \(|x_n-\alpha |\) are given for our proposed method OM4(2.6), where \(\alpha \) is the exact root. All computations have been performed using the programming package MATHEMATICA 8. The computational order of convergence is approximated, following [18], by means of
to confirm the computational efficiency; the results agree with all the theoretical rates of convergence. Our proposed method OM4(2.6) has been used to solve the nonlinear functions, and the calculated results are compared with the existing methods of the same nature, XW41(16) and XW42(17). Another effective way to compare the efficiency of methods is the CPU time used in the execution of the program. Here, the CPU time has been measured by means of the command TimeUsed[] in MATHEMATICA 8. The CPU time depends on the specification of the computer; the computer used throughout the paper runs Microsoft Windows 7 on an Intel Core i3-2330M CPU @ 2.20 GHz with 2 GB of RAM (64-bit operating system). The mean CPU time is calculated over 30 runs of the program. It can be observed from Table 2 that the results obtained by our proposed method are efficient and show better performance than the other existing methods.
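A standard form of the computational order of convergence, when the exact root \(\alpha \) is known, is \(\rho _n=\ln (|x_{n+1}-\alpha |/|x_n-\alpha |)/\ln (|x_n-\alpha |/|x_{n-1}-\alpha |)\). The sketch below (our own Python, not the MATHEMATICA code used in the paper) evaluates it from a list of absolute errors:

```python
import math

def coc(errors):
    """Computational order of convergence from the last three absolute errors
    e_n = |x_n - alpha|: rho = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    e_prev, e_cur, e_next = errors[-3:]
    return math.log(e_next / e_cur) / math.log(e_cur / e_prev)

# errors of a second-order sequence (e_{n+1} = e_n^2): COC evaluates to 2
rho = coc([1e-1, 1e-2, 1e-4, 1e-8])
```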
4 Conclusion
We have presented a multipoint with memory iterative method of improved convergence order for finding simple roots of nonlinear equations with minimum computational effort, obtained by modifying an existing without memory method. Since our aim is to construct a method of higher convergence order without any additional calculation, we have used three different approximations of the self-accelerating parameter, designed by Hermite interpolating polynomials, in the fourth-order method. The R-order of convergence of the new with memory iterative methods is thereby increased from 4 to 4.5616, 4.7913 and 5, respectively, without any additional calculation. Our assessment of the proposed method is also based on the efficiency index, the computational order of convergence (COC) and the CPU time. Finally, the numerical results confirm the validity of the theoretical results.
References
Wang, X., Zhang, T.: Higher-order Newton-type iterative methods with and without memory for solving nonlinear equations. Math. Commun. 19, 91–109 (2014)
Soleymani, F.: Some optimal iterative methods and their with memory variants. J. Egyp. Math. Soc. 21, 133–141 (2013)
Jaiswal, J.P.: Some class of third- and fourth-order iterative methods for solving nonlinear equations. J. Appl. Math. 2014, 1–17 (2014)
Cordero, A., Torregrosa, J.R., Vassileva, M.P.: A family of modified Ostrowski’s method with optimal eighth-order of convergence. Appl. Math. Lett. 24(12), 2082–2086 (2011)
Cordero, A., Lotfi, T., Mahdiani, K., Torregrosa, J.R.: Two optimal general classes of iterative methods with eighth-order. Act. Appl. Math. 134(1), 61–74 (2014)
Kumar, S., Kanwar, V., Tomar, S.K., Singh, S.: Geometrically constructed families of Newton’s method for unconstrained optimization and nonlinear equations. Int. J. Math. Math. Sci. Article ID 972537, 1–9 (2011)
Petkovic, M.S., Ilic, S., Dzunic, J.: Derivative free two point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 217, 1887–1895 (2010)
Soleymani, F., Lotfi, T., Tavakoli, E., Khaksar Haghani, F.: Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 254, 452–458 (2015)
Lotfi, T., Assari, P.: New three- and four-parametric iterative with memory methods with efficiency index near 2. Appl. Math. Comput. 270, 1004–1010 (2015)
Lotfi, T., Assari, P.: Two new three and four parametric with memory methods for solving nonlinear equations. Int. J. Ind. Math. 7(3), 1–8 (2015)
Jaiswal, J.P.: Improved bi-parametric derivative free with memory family for solving nonlinear equations. J. Appl. Anal. Comput. 6(1), 196–206 (2016)
Hafiz, M.A., Bahgat, M.S.M.: Solving nonlinear equations using two-step optimal methods. Annu. Rev. Chaos Theory Bifur. Dyn. Sys. 3, 1–11 (2013)
Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)
Alefeld, G., Herzberger, J.: Introduction to Interval Computations. Academic Press, New York (1983)
Wang, X., Zhang, T.: A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo 51(1), 1–15 (2014)
Wang, X., Zhang, T.: Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 11(5), 1–20 (2013)
Chun, C., Lee, M.Y.: A new optimal eighth-order family of iterative methods for the solution of nonlinear equations. Appl. Math. Comput. 223, 506–519 (2013)
Weerakoon, S., Fernando, T.G.I.: A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000)
Choubey, N., Panday, B. & Jaiswal, J.P. Several two-point with memory iterative methods for solving nonlinear equations. Afr. Mat. 29, 435–449 (2018). https://doi.org/10.1007/s13370-018-0552-x
Keywords
- Iterative method
- With memory scheme
- Hermite interpolation polynomial
- Computational efficiency
- Numerical result