Abstract
A number of higher order Newton-like methods (i.e. methods requiring both function and derivative evaluations) are available in the literature for multiple zeros of a nonlinear function. However, higher order Traub-Steffensen-like methods (i.e. methods requiring only function evaluations) for computing multiple zeros are rare in the literature. Traub-Steffensen-like iterations are important in circumstances when derivatives are complicated to evaluate or expensive to compute. Motivated by this fact, here we present an efficient and rapidly converging Traub-Steffensen-like algorithm to locate multiple zeros. The method achieves eighth order convergence by using only four function evaluations per iteration; this convergence rate is therefore optimal. Performance is demonstrated by applying the method to different problems, including some real-life models. The computed results are compared with those of existing optimal eighth-order Newton-like techniques to reveal the computational efficiency of the new approach.
1 Introduction
Locating a zero of a nonlinear function is a very significant problem in many areas such as physics, chemistry, mathematical biology and engineering science, to name a few [2, 3, 14, 20], since problems from these areas reduce to finding a zero. In general, closed-form solutions cannot be obtained, so researchers use iterative methods to approximate the solution. In this work, we develop derivative-free methods to compute a multiple root (say, \(t^*\)) with multiplicity m, that is, \(f^{(k)}(t^*) = 0\), \(k = 0, 1, 2, \ldots , m - 1\), and \(f^{(m)}(t^*) \ne 0\), of the equation \(f(t) = 0\).
A plethora of higher order methods, either independent of or based on Newton's method [17]
have been proposed in the literature; see [1, 4,5,6, 9, 11,12,13, 16, 19, 23, 24]. Such methods require the evaluation of first-order, or both first- and second-order, derivatives of the function f. However, higher order derivative-free methods to handle the case of multiple roots are yet to be investigated. Derivative-free methods are important in situations where the derivative \(f'\) is complicated to evaluate or expensive to obtain. The most basic derivative-free method is the well-known Traub-Steffensen method [20], which replaces \(f'\) in the classical Newton's method by a suitable approximation based on the finite difference formula
where \(w_i= t_i +\alpha f(t_i)\) and \(\alpha \in {\mathbb {R}}{\setminus }\{0\}\). In this way the modified Newton's method (1) is transformed into the Traub-Steffensen iterative scheme
The Traub-Steffensen iteration (2) is indeed a noticeable improvement over Newton's iteration, since it preserves the order of convergence without requiring any derivative computation.
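For illustration, here is a minimal double-precision sketch of iteration (2) applied to the simple double root of \(f(t)=(t-1)^2\). The paper's experiments use multiple-precision Mathematica; the function names, tolerances and safeguards below are our own illustrative assumptions.

```python
def traub_steffensen(f, t0, m, alpha=0.01, tol=1e-14, maxit=50):
    """Derivative-free iteration (2): t <- t - m*f(t)/f[w, t], where
    w = t + alpha*f(t) and f[w, t] = (f(w) - f(t))/(w - t) is the
    first-order divided difference replacing f' in modified Newton."""
    t = t0
    for _ in range(maxit):
        ft = f(t)
        w = t + alpha * ft
        # stop once the residual reaches the double-precision floor
        # (also guards against w == t, which would give a 0/0 step)
        if abs(ft) < tol or w == t:
            break
        t = t - m * ft * (w - t) / (f(w) - ft)
    return t

# double root t* = 1 of multiplicity m = 2
root = traub_steffensen(lambda t: (t - 1.0) ** 2, t0=1.5, m=2)
```

In exact arithmetic the iteration converges with order two to the multiple root; in double precision the residual test halts it near the attainable accuracy.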
Very recently, Sharma et al. [18] developed two-point derivative-free (Traub-Steffensen-like) methods of fourth order convergence for computing multiple zeros. These methods require three function evaluations per iteration and therefore, according to the Kung-Traub hypothesis [15], possess optimal convergence. According to this hypothesis, multi-point methods without memory based on n function evaluations have optimal order \(2^{n-1}\); such methods are usually called optimal methods. In this work our aim is to develop an efficient higher order iterative method that satisfies the Kung-Traub conjecture. Consequently, we introduce a derivative-free eighth order scheme that requires four new evaluations of the function f per iteration, and hence has optimal eighth order convergence in the sense of the Kung-Traub hypothesis. The iterative scheme uses the Traub-Steffensen iteration (2) in the first step and Traub-Steffensen-like iterations in the second and third steps.
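The Kung-Traub bound can be quantified by the classical efficiency index \(p^{1/n}\) (order p per n evaluations). The index itself is not invoked in the text; the short computation below is our own illustration of why four evaluations with order eight is the best possible balance.

```python
# Kung-Traub hypothesis: an optimal n-evaluation multipoint method
# without memory has order p = 2**(n-1).
def optimal_order(n_evals):
    return 2 ** (n_evals - 1)

# Classical (Ostrowski) efficiency index p**(1/n).
def efficiency_index(n_evals):
    return optimal_order(n_evals) ** (1.0 / n_evals)

# four evaluations -> order 8, index 8**(1/4) ~ 1.682, versus
# 4**(1/3) ~ 1.587 for an optimal three-evaluation (fourth order) method
```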
The rest of the paper is organized as follows. In Sect. 2, the new iterative scheme is formulated and its convergence properties for some particular cases are explored. In Sect. 3, the main result establishing eighth order convergence is presented. In Sect. 4, numerical experiments are performed to test the applicability and efficiency of the method, and a comparison with existing methods is also made. Finally, concluding remarks are given in Sect. 5.
2 Formulation of scheme
Higher order Newton-like methods have been developed using various approaches by researchers around the world. Some useful techniques are: the composition approach, the sampling approach, the interpolation approach, the geometrical approach, the Adomian approach and the weight-function approach. Of these, the weight-function approach has been used extensively in recent times; see, for example, [4,5,6, 11, 12, 23, 24] and references cited therein. In the present work on computing a multiple root with multiplicity \(m > 1\), we also use this technique. Thus, consider the following three-step iterative scheme:
where the functions \(G: {\mathbb {C}}^2\rightarrow {\mathbb {C}}\) and \(H: {\mathbb {C}}^3\rightarrow {\mathbb {C}}\) are holomorphic in a neighborhood of (0, 0) and (0, 0, 0), respectively, with \(u_i =\root m \of {\frac{f(y_i)}{f(t_i)}}\), \(v_i =\root m \of {\frac{f(w_i)}{f(t_i)}}\), \(r_i =\root m \of {\frac{f(z_i)}{f(y_i)}}\), \(s_i=\root m \of {\frac{f(z_i)}{f(t_i)}}\). Notice that this is a three-step scheme whose first step is the Traub-Steffensen iteration (2) and whose next two steps are Traub-Steffensen-like iterations. The second and third steps are weighted by the factors G and H; hence, these steps are also called weighted Traub-Steffensen steps, and the factors G and H are called weight factors or weight functions.
In what follows, we explore conditions under which the scheme (3) achieves convergence of order as high as possible. For simplicity, the results are presented separately for different values of m. This strategy will also help us construct the generalized iterative scheme. First, for the case \(m=2\), we prove the following theorem:
Theorem 1
Let \(f: {\mathbb {C}}\rightarrow {\mathbb {C}}\) be an analytic function in a region enclosing a multiple zero \(t^*\) with multiplicity 2. Assume that initial guess \(t_0\) is sufficiently close to \(t^*\), then the local order of convergence of scheme (3) is at least 8, provided that \(G_{00} = 0\), \(G_{10} = 3\), \(G_{01} = 0\), \(G_{20} = 8\), \(G_{11} = -1\), \(G_{02} = 0\), \(H_{000} = 3\), \(H_{100} = 4\), \(H_{010} = -1\), \(H_{110} = 1\), \(H_{210} = 2-2H_{200}\), \(H_{020} = 0\), \(H_{120} = -2\), \(H_{220} = 4+2H_{200}\), \(H_{320} = -96-2H_{300}-2H_{310}\), \(H_{011} = 5-2H_{001}\), \(H_{021} = -6+2H_{001}\), \(H_{121} = 16-2H_{101}-2H_{111}\), \(\text {max} \ \{|H_{001}|,|H_{101}|,|H_{111}|,|H_{200}|,|H_{201}|,|H_{211}|,|H_{221}|,|H_{300}|,|H_{310}|\} < \infty \), where \(G_{kj}=\frac{\partial ^{k+j}}{\partial {u^k}\partial {v^j}}G({u_i,v_i})|_{(u_i=0,v_i=0)}\), \(0\le k,j\le 2\) and \(H_{abc}=\frac{\partial ^{a+b+c}}{\partial {u^a}\partial {v^b}\partial {r^c}}H({u_i,v_i,r_i})|_{(u_i=0,v_i=0,r_i=0)}\), for \(0\le a\le 3\), \(0\le b\le 2\), \(0\le c\le 1\).
Proof
Let \(e_{i}=t_{i}-t^*\) be the error at the i-th iteration. Using the Taylor expansion of \(f(t_i)\) about \(t^*\) and taking into account that \(f(t^*)=0\), \(f'(t^*)=0\) and \(f^{(2)}(t^*)\ne 0,\) we have
where \(A_n = \frac{2!}{(2+n)!}\frac{f^{(2+n)}(t^*)}{f^{(2)}(t^*)}\) for \(n\in {\mathbb {N}}\).
Similarly we have the Taylor’s expansion of \(f(w_i)\) about \(t^*\)
where \(e_{w_i}= w_{i}-t^* = e_i+ \frac{\alpha f^{(2)}(t^*)}{2!}e_i^2 \big (1+ A_1 e_i+ A_2 e_i^2+A_3 e_i^3+A_4 e_i^4+A_5 e_i^5+A_6 e_i^6+A_7 e_i^7+\cdots \big ).\)
Then the first step of (3) yields
where \(\phi _n = \phi _n (\alpha ,A_1,A_2,A_3,A_4,A_5,A_6,A_7)\), \(n=1, 2, 3.\) Here, the expressions of \(\phi _1\), \(\phi _2\) and \(\phi _3\) are not written explicitly since they are very lengthy.
Expanding \(f(y_i)\) about \(t^*\), it follows that
Using (4), (5) and (7) in \(u_i\) and \(v_i\), some simple calculations yield
and
where \(\psi _n = \psi _n (\alpha ,A_1,A_2,A_3,A_4,A_5,A_6,A_7)\), \(n=1, 2, 3, 4\).
Developing \(G(u_i,v_i)\) by Taylor formula in the neighborhood of origin (0, 0)
By using (4)-(10) in the second step of (3), we obtain that
where \(\delta _n = \delta _n (\alpha ,A_1,A_2,A_3,A_4,A_5,A_6,A_7,G_{00},G_{10},G_{01},G_{20},G_{11},G_{02})\), \(n=1, 2, 3, 4, 5.\)
It is clear from the above equation (11) that we will obtain at least fourth order convergence if we set coefficients of \(e_i\), \(e_i^2\) and \(e_i^3\) simultaneously equal to zero. Then solving the resulting equations, one gets
By using (12), the error equation (11) is given by
where \(\xi _n = \xi _n (\alpha ,A_1,A_2,A_3,A_4,A_5,A_6)\), \(n=1, 2, 3, 4.\) Expansion of \(f(z_i)\) about \(t^*\) leads us to the expression
Using (4), (7) and (14), we obtain
where \(\zeta _n = \zeta _n (\alpha ,A_1,A_2,A_3,A_4,A_5,A_6,A_7)\), \(n=1, 2, 3, 4, 5\)
and
where \(\chi _n = \chi _n (\alpha ,A_1,A_2,A_3,A_4,A_5,A_6)\), \(n=1, 2, 3, 4, 5\).
Developing by Taylor formula the weight function \(H(u_i,v_i,r_i)\) in the neighborhood of origin (0, 0, 0), we have
By using (4)-(17) in the third step of (3), we have
where \(\eta _n = \eta _n(\alpha ,A_1,A_2,A_3,A_4,A_5,A_6,H_{abc})\), \(n=1,2,3\) and \(0\le a\le 3\), \(0\le b\le 2\), \(0\le c\le 1\).
It is clear from the above equation (18) that we will obtain at least eighth order convergence if we set coefficients of \(e_i^4\), \(e_i^5\), \(e_i^6\) and \(e_i^7\) simultaneously equal to zero. Then solving the resulting equations, we get
As a result the error equation is given by
Thus, the theorem is proved. \(\square \)
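The conditions of Theorem 1 pin down a concrete weight function. For \(m=2\), the lowest-degree polynomial consistent with \(G_{00}=G_{01}=G_{02}=0\), \(G_{10}=3\), \(G_{20}=8\), \(G_{11}=-1\) is \(G(u,v)=3u+4u^2-uv\). The following sanity check (ours, not from the paper) recovers the stated derivative values by central differences:

```python
# Lowest-degree polynomial weight from Theorem 1 (m = 2):
# G = G00 + G10*u + G01*v + (G20/2)*u**2 + G11*u*v + (G02/2)*v**2
#   = 3u + 4u**2 - u*v
def G(u, v):
    return 3*u + 4*u**2 - u*v

h = 1e-5
G10 = (G(h, 0) - G(-h, 0)) / (2*h)                   # dG/du at (0,0)
G20 = (G(h, 0) - 2*G(0, 0) + G(-h, 0)) / h**2        # d^2G/du^2 at (0,0)
G11 = (G(h, h) - G(h, -h) - G(-h, h) + G(-h, -h)) / (4*h**2)  # mixed
```

Since G is a polynomial of total degree three, the central differences are exact up to roundoff, so the computed values match \(G_{10}=3\), \(G_{20}=8\), \(G_{11}=-1\).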
Theorem 2
Using the assumption and notation of Theorem 1, the convergence order of (3) for the case \(m = 3\) is at least 8, if \(G_{00} = 0\), \(G_{01} = 0\), \(G_{20}= 12\), \(G_{11}= 3-G_{10}\), \(G_{02}= 0\), \(H_{010} = 3+G_{10}-2H_{000}\), \(H_{110} = 3+ 2G_{10}-2H_{100}\), \(H_{020} = 2H_{000}-2G_{10}\), \(H_{120} = 6-4G_{10}+2H_{100}\), \(H_{220} = 12-2 H_{200}-2H_{210}\), \(H_{320} = -144-2 H_{300}-2H_{310}\), \(H_{021} = 6-2 H_{001}-2H_{011}\), \(H_{121} = 24-2 H_{101}-2H_{111}\), wherein \(\text {max} \ \{|G_{10}|,|H_{000}|,|H_{001}|,|H_{011}|,|H_{200}|,|H_{201}|,|H_{210}|,|H_{211}|,|H_{221}|\} < \infty \).
Proof
By Taylor’s expansion of \(f(t_i)\) about \(t^*\) taking into account that \(f(t^*)=0\), \(f'(t^*)=0\), \(f^{(2)}(t^*)=0\) and \(f^{(3)}(t^*)\ne 0,\) we have
where \(B_n = \frac{3!}{(3+n)!}\frac{f^{(3+n)}(t^*)}{f^{(3)}(t^*)}\) for \(n\in {\mathbb {N}}\).
Also, Taylor's expansion of \(f(w_i)\) about \(t^*\) yields
where \(e_{w_i}= w_{i}-t^* = e_i + \frac{\alpha f^{(3)}(t^*)}{3!}e_i^3 \big (1+ B_1 e_i+ B_2 e_i^2+B_3 e_i^3+B_4 e_i^4+ B_5 e_i^5+B_6 e_i^6+B_7 e_i^7+\cdots \big ).\)
Then the first step of (3) yields
where \(\phi '_n = \phi '_n (\alpha ,B_1,B_2,B_3,B_4,B_5,B_6,B_7)\), \(n=1, 2.\) Also the expansion of \(f(y_i)\) about \(t^*\) is
Then from (20), (21) and (23), it follows that
where \(\psi '_n = \psi '_n (\alpha ,B_1,B_2,B_3,B_4,B_5,B_6,B_7)\), \(n=1, 2, 3, 4\)
and
The Taylor formula of \(G(u_i,v_i)\) in the neighborhood of origin (0, 0) is given by
By using (20)–(26) in the second step of (3), we obtain
where \(\delta '_n = \delta '_n (\alpha ,B_1,B_2,B_3,B_4,B_5,B_6,B_7,G_{00},G_{10},G_{01},G_{20},G_{11},G_{02})\), \(n=1, 2, 3, 4, 5.\)
It is clear from the above equation (27) that we will obtain at least fourth order convergence if we set coefficients of \(e_i\), \(e_i^2\) and \(e_i^3\) simultaneously equal to zero. Then solving the resulting equations, one gets
By using (28), the error Eq. (27) is given by
where \(\xi '_n = \xi '_n (\alpha ,B_1,B_2,B_3,B_4,B_5,B_6,G_{00},G_{10})\), \(n=1, 2, 3.\) Expansion of \(f(z_i)\) about \(t^*\) leads us to the expression
Using (20), (23) and (30), we obtain
where \(\zeta '_n = \zeta '_n (\alpha ,B_1,B_2,B_3,B_4,B_5,B_6,B_7,G_{00},G_{10})\), \(n=1, 2, 3, 4, 5.\)
where \(\chi '_n = \chi '_n (\alpha ,B_1,B_2,B_3,B_4,B_5,B_6,G_{00},G_{10})\), \(n=1, 2, 3, 4\).
Writing the Taylor expansion of weight function \(H(u_i,v_i,r_i)\) in the neighborhood of origin (0, 0, 0), we have
By using (20)-(33) in the third step of (3), we have
where \(\eta '_n = \eta '_n(\alpha ,B_1,B_2,B_3,B_4,B_5,B_6,G_{00},G_{10},H_{abc})\), \(n=1,2,3,4\), and \(0\le a\le 3\), \(0\le b\le 2\), \( 0 \le c \le 1\).
It is clear from the above equation (34) that we will obtain at least eighth order convergence if we set coefficients of \(e_i^4\), \(e_i^5\), \(e_i^6\) and \(e_i^7\) simultaneously equal to zero. Then solving the resulting equations, we get
As a result the error equation is given by
Thus, the theorem is proved. \(\square \)
Below we state the theorems (without proof) for the cases \(m = 4, 5, 6\), as the proofs are similar to those of the theorems proved above.
Theorem 3
Using the assumption and notation of Theorem 1, the convergence order of (3) for the case \(m = 4\) is at least 8, if \(G_{01} = -G_{00}\), \(G_{20}= 16\), \(G_{11} = 4-G_{10}\), \(G_{02} = 0\), \(H_{010} = 4+G_{10}-2H_{000}\), \(H_{020} = 2H_{000}-2G_{10}\), \(H_{120} = 16-2H_{100}-2H_{110}\), \(H_{220} = 16-2 H_{200}-2H_{210}\), \(H_{320} = -192-2 H_{300}-2H_{310}\), \(H_{021} = 8-2 H_{001}-2H_{011}\), \(H_{121} = 32-2 H_{101}-2H_{111}\). Moreover, the method satisfies the error equation
where \(\text {max} \ \{|G_{10}|,|H_{100}|,|H_{110}|, |H_{201}|,|H_{211}|,|H_{221}|\} < \infty \) and \(C_n = \frac{4!}{(4+n)!}\frac{f^{(4+n)}(t^*)}{f^{(4)}(t^*)}\) for \(n\in {\mathbb {N}}\).
Theorem 4
Using the assumption and notation of Theorem 1, the convergence order of (3) for the case \(m = 5\) is at least 8, if \(G_{01} = -2G_{00}\), \(G_{20}= 20\), \(G_{11} = 5-G_{10}\), \(G_{02} = 2G_{00}\), \(H_{020} = 10-2H_{000}-2H_{010}\), \(H_{120} = 20-2H_{100}-2H_{110}\), \(H_{220} = 20-2 H_{200}-2H_{210}\), \(H_{320} = -240-2 H_{300}-2H_{310}\), \(H_{021} = 10-2 H_{001}-2H_{011}\), \(H_{121} = 40-2 H_{101}-2H_{111}\). The error equation for this case is given by
where \(\text {max} \ \{|G_{10}|,|H_{000}|,|H_{010}|, |H_{201}|,|H_{211}|,|H_{221}|\} < \infty \) and \(D_n = \frac{5!}{(5+n)!}\frac{f^{(5+n)}(t^*)}{f^{(5)}(t^*)}\) for \(n\in {\mathbb {N}}\).
Theorem 5
Using the assumption and notation of Theorem 1, the convergence order of (3) for the case \(m = 6\) is at least 8, if \(G_{20}= 24\), \(G_{11} = 6-G_{10}\), \(G_{02} = -2(G_{00}+G_{01})\), \(H_{020} = 12-H_{000}-2H_{010}\), \(H_{120} = 24-2H_{100}-2H_{110}\), \(H_{220} = 24-2 H_{200}-2H_{210}\), \(H_{320} = -288-2 H_{300}-2H_{310}\), \(H_{021} = 12-2 H_{001}-2H_{011}\), \(H_{121} = 48-2 H_{101}-2H_{111}\). Moreover, the method satisfies the error equation
where \(\text {max} \ \{|G_{00}|,|G_{10}|, |H_{201}|,|H_{211}|,|H_{221}|\} < \infty \) and \(E_n = \frac{6!}{(6+n)!}\frac{f^{(6+n)}(t^*)}{f^{(6)}(t^*)}\) for \(n\in {\mathbb {N}}\).
Remark 1
In order to prove the eighth order convergence of the method (3), we can observe from the above results that the numbers of conditions on \(G_{kj}\) and \(H_{abc}\) are \(6, \, 5,\, 4, \, 4, \, 3\) and \(12, \, 8, \, 7, \, 6, \, 6\), respectively, corresponding to the cases \(m=2,\,3,\, 4,\,5,\,6\). It can be seen that the numbers of conditions on \(G_{kj}\) and \(H_{abc}\) become fixed (three and six, respectively) when \(m\ge 6\). However, the error equations differ from each other in the sense that the terms containing the parameter \(\alpha \) do not appear in the equations for \(m\ge 7\). We shall prove this fact in the next section.
3 Main result
For multiplicity \(m\ge 7\), we prove the order of convergence of scheme (3) in the following theorem:
Theorem 6
Let \(f: {\mathbb {C}}\rightarrow {\mathbb {C}}\) be an analytic function in a region enclosing a zero \(t^*\) with multiplicity \(m\ge 7\). Further assume that initial guess \(t_0\) is sufficiently close to \(t^*\), then the local order of convergence of scheme (3) is at least 8, provided that \(G_{20}= 4m\), \(G_{11} = m-G_{10}\), \(G_{02} = -2(G_{00}+G_{01})\), \(H_{020} = 2m-2H_{000}-2H_{010}\), \(H_{120} = 4m-2H_{100}-2H_{110}\), \(H_{220} = 4m-2 H_{200}-2H_{210}\), \(H_{320} = -48m-2 H_{300}-2H_{310}\), \(H_{021} = 2m-2 H_{001}-2H_{011}\), \(H_{121} = 8m-2 H_{101}-2H_{111}\), \(\text {max} \ \{ |H_{201}|,|H_{211}|,|H_{221}|\} < \infty \). Moreover, error equation of the scheme is given by
where \(K_n = \frac{m!}{(m+n)!}\frac{f^{(m+n)}(t^*)}{f^{(m)}(t^*)}\) for \(n\in {\mathbb {N}}\).
Proof
Taking into account that \(f^{(j)}(t^*) = 0, j = 0, 1, 2, . . . , m - 1\) and \(f^{(m)}(t^*)\ne 0,\) the Taylor’s expansion of \(f(t_i)\) about \(t^*\) is expressed as
where \(K_n = \frac{m!}{(m+n)!}\frac{f^{(m+n)}(t^*)}{f^{(m)}(t^*)}\) for \(n\in {\mathbb {N}}\).
Also the Taylor’s expansion of \(f(w_i)\) about \(t^*\) is
where \(e_{w_i}= w_{i}-t^* = e_i +\frac{\alpha f^{(m)}(t^*)}{m!}e_i^m \big (1+ K_1 e_i+ K_2 e_i^2+K_3 e_i^3+K_4 e_i^4+K_5 e_i^5+K_6 e_i^6+\cdots \big ).\)
From the first step of (3)
where \(\phi ''_n = \phi ''_n (m,K_1,K_2,K_3,K_4,K_5,K_6,K_7)\), \(n=1, 2, 3.\) Expanding \(f(y_i)\) about \(t^*\), we have
Using (36), (37) and (39) in the expressions of \(u_i\) and \(v_i\), we have that
where \(\psi ''_n = \psi ''_n (m,K_1,K_2,K_3,K_4,K_5,K_6,K_7)\), \(n=1, 2, 3, 4, 5\) and
where \(\rho _n = \rho _n (m,K_1,K_2,K_3,K_4,K_5,K_6,K_7)\), \(n=1, 2, 3, 4, 5.\)
Expanding \(G(u_i,v_i)\) by Taylor formula in the neighborhood of origin (0, 0)
By inserting (36)–(42) in the second step of (3), we have
where \(\delta ''_n = \delta ''_n (m,K_1,K_2,K_3,K_4,K_5,K_6,K_7,G_{00},G_{10},G_{01},G_{20},G_{11},G_{02})\), \(n=1, 2,3,4,5.\)
It is clear from the above Eq. (43) that we will obtain at least fourth order convergence if we set coefficients of \(e_i\), \(e_i^2\) and \(e_i^3\) simultaneously equal to zero. Then solving the resulting equations, one gets
By using (44), the error equation (43) is given by
where \(\xi ''_n = \xi ''_n (m,K_1,K_2,K_3,K_4,K_5,K_6)\), \(n=1, 2, 3.\) Expansion of \(f(z_i)\) around \(t^*\) yields
Using (36), (39) and (46), we obtain
where \(\zeta ''_n = \zeta ''_n (m,K_1,K_2,K_3,K_4,K_5,K_6,K_7)\), \(n=1, 2, 3, 4.\)
where \(\chi ''_n = \chi ''_n (m,K_1,K_2,K_3,K_4,K_5,K_6)\), \(n=1, 2, 3.\)
Developing \(H(u_i,v_i,r_i)\) by Taylor formula in the neighborhood of origin (0, 0, 0), we have
By using (36)–(49) in the third step of (3), we have
where \(\eta ''_n = \eta ''_n(m,K_1,K_2,K_3,K_4,K_5,K_6,H_{abc})\), \(n=1,2,3,4\) and \(0\le a\le 3\), \(0\le b\le 2\), \(0\le c\le 1\).
It is clear from the above Eq. (50) that we will obtain at least eighth order convergence if we set coefficients of \(e_i^4\), \(e_i^5\), \(e_i^6\) and \(e_i^7\) simultaneously equal to zero. Then solving the resulting equations, we get
As a result the error equation is given by
Thus, the theorem is proved. \(\square \)
Many special cases of the family (3) can be generated by choosing functions \(G(u_i,v_i)\) and \(H(u_i,v_i,r_i)\) satisfying the corresponding conditions of Theorems 1–6, depending on the multiplicity m. However, we must generate a unified iterative scheme that satisfies the conditions of Theorems 1–6 simultaneously for all \(m>1\). Moreover, we restrict the choices to low order polynomial forms. These choices should be such that the resulting methods converge to the root with order eight for \(m > 1\). Accordingly, the following simple forms are chosen:
Combining Eqs. (3), (53) and (54), the eighth order method for \(m > 1\) is expressed as
For further reference the new method (55) is now denoted by NM.
Remark 2
It is important to note that the parameter \(\alpha \), used in \(w_i\), appears in the error equations for the cases \(m= 2, \, 3, \, 4, \, 5, \, 6\) but not for \(m \ge 7\) (see equation (52)). It has been observed that for \(m \ge 7\) this parameter appears only in the terms of order \(e_i^{9}\) and higher. However, such terms are difficult to compute in general; moreover, we do not need them to establish the required eighth order convergence.
Remark 3
The new method (NM) attains eighth order convergence under the stated conditions on the weight functions. This convergence rate is achieved by using only four function evaluations, viz. \(f(t_i)\), \(f(w_i)\), \(f(y_i)\) and \(f(z_i)\), per iteration. Therefore, NM is optimal in the sense of the Kung-Traub hypothesis [15].
Remark 4
Besides the forms (53) and (54) of the weight functions G(u, v) and H(u, v, r), respectively, some other forms satisfying the conditions can also be explored. For the reader's interest we provide the following forms:

However, being more complicated expressions, we will not consider these in our further work.
4 Numerical results
In this section, we consider some numerical problems to illustrate the convergence behavior and computational efficiency of the new method (NM). Performance is also compared with existing eighth order Newton-like methods that require derivative evaluations in their formulae, namely the methods of Akram et al. [1], Behl et al. [4] and Zafar et al. [23]. For ready reference, these methods are expressed as follows:
Method by Akram et al. (AM-1):
Method by Akram et al. (AM-2):
Method by Behl et al. (BM-1):
Method by Behl et al. (BM-2):
Method by Zafar et al. (ZM-1):
Method by Zafar et al. (ZM-2):
where \(u_i=\Big (\frac{f(y_i)}{f(t_i)}\Big )^{\frac{1}{m}}\), \(r_i=\Big (\frac{f(z_i)}{f(y_i)}\Big )^{\frac{1}{m}}\) and \(s_i=\Big (\frac{f(z_i)}{f(t_i)}\Big )^{\frac{1}{m}}\).
Computational work is compiled in the programming package Mathematica [22] using multiple-precision arithmetic on a PC with Intel(R) Pentium(R) CPU B960 @ 2.20 GHz (32-bit operating system, Microsoft Windows 7 Professional, 4 GB RAM). Performance of the considered methods is tested by choosing the value \(\alpha = 0.01\) of the parameter. The tabulated results obtained by the methods for each problem include (i) the number of iterations (i) required to obtain the solution using the stopping criterion \(|t_{i+1} - t_i| + |f(t_i)| < 10^{-100}\), (ii) the estimated error \(|t_{i+1}-t_i|\) in the first three iterations, (iii) the computational order of convergence (COC) and (iv) the CPU time in seconds used in executing a program, computed by the command “TimeUsed[ ]”. The computational order of convergence (COC) is calculated by the formula (see [21])
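Since the displayed formula is not reproduced above, the sketch below shows the standard successive-difference COC estimate in the spirit of [21]; treating this as the paper's exact formula is an assumption on our part.

```python
import math

def coc(iterates):
    """Computational order of convergence from the last four iterates,
    via successive differences d_i = |t_{i+1} - t_i|:
    COC ~ log(d1/d2) / log(d2/d3)."""
    d1 = abs(iterates[-1] - iterates[-2])
    d2 = abs(iterates[-2] - iterates[-3])
    d3 = abs(iterates[-3] - iterates[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# synthetic quadratically convergent sequence: errors 1e-2, 1e-4, 1e-8, 1e-16
ts = [10.0 ** -(2 ** i) for i in range(1, 5)]
```

On the synthetic sequence above, which converges to 0 with order two, the estimate returns a value close to 2, as expected.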
The following numerical examples are chosen for experimentation:
Example 1
Consider a standard test function which is defined as
This function has a multiple zero at \(t^*= 0\) of multiplicity 2. We select the initial approximation \(t_0 = \frac{1}{2}\) to obtain the zero of this function. Numerical results are exhibited in Table 1.
Example 2
Next, consider a problem of a continuous stirred tank reactor (CSTR). We observe the reaction scheme that develops in the chemical reactor (see [8] for more information), which is defined as follows:

The above model was studied in detail by Douglas [10] in search of a simple system that can be controlled by feedback. Finally, he transformed the above model into the following mathematical equation:

where \(K_H\) denotes the gain of the proportional controller. The control system in Eq. (61) is stable for certain values of \(K_H\). If we take \(K_H = 0\), we obtain the poles of the open-loop transfer function as the solutions of the univariate equation

given by \(t = -2.85, -2.85, -1.45, -4.35\). Thus, there is one root \(t^* = -2.85\) of multiplicity 2. We select the initial approximation \(t_0 = -2.7\) to obtain this zero. The computational results are displayed in Table 1.
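A quick derivative-free check of this example in double precision: the full eighth order method NM with its weight functions is not reproduced here, so we apply only the first (Traub-Steffensen) step (2), and we reconstruct the quartic from its stated poles rather than from the original equation (an assumption, valid up to a constant factor).

```python
# CSTR open-loop polynomial reconstructed from its stated poles
# t = -2.85 (double), -1.45, -4.35.
def f2(t):
    return (t + 2.85) ** 2 * (t + 1.45) * (t + 4.35)

t, m, alpha = -2.7, 2, 0.01          # data of Example 2
for _ in range(50):
    ft = f2(t)
    w = t + alpha * ft
    if abs(ft) < 1e-13 or w == t:    # double-precision residual floor
        break
    # modified Traub-Steffensen step: t <- t - m*f(t)/f[w, t]
    t = t - m * ft * (w - t) / (f2(w) - ft)
```

Even this second order building block locates the double root \(t^* = -2.85\) to high accuracy within a few iterations from \(t_0 = -2.7\).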
Example 3
Van der Waals equation of state (see [6])

is considered, which describes the behavior of a real gas by introducing into the perfect gas equation two parameters, \(m_1\) and \(m_2\), specific to each gas. Finding the volume V of a real gas in terms of \(m_1\), \(m_2\), P, R and T requires the solution of the equation

Given the constants \(m_1\) and \(m_2\) of a specific gas, one can find values of n, P and T such that this equation has three real zeros. Using the specific values, we obtain the nonlinear function

The zeros of the function \(f_3\) are \(t^* = 1.75, 1.75, 1.72\). However, our desired zero is \(t^* =1.75\), with multiplicity 2. We assume the initial guess \(t_0= 2.5\) for this example. Numerical results are displayed in Table 1.
Example 4
The fourth example we consider originates from the equation of an L-C-R circuit in electrical engineering [7]. The equation governing the L-C-R circuit is given by
whose solution q(t) is
where \({q}={q}_0\) at \(t=0.\)

For a particular case study, the problem is stated as follows: assume that the charge must be dissipated to 1 percent of its original value \(({q}/{q}_0=0.01)\) in \(t=0.05\) seconds, with \({L}=5\) henry and \({C}=10^{-4}\) farad. Find the proper value of R.
Using the numerical values, the problem reduces to
where \(t={R}\).
Considering this case five times, we obtain the required nonlinear function

The function \(f_4\) has a multiple zero at \(t^* = 328.15142908514817\ldots \) of multiplicity 5. We choose the initial approximation \(t_0 = 300\) to obtain the zero of the function. Numerical results are displayed in Table 1.
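The stated zero can be cross-checked in double precision by locating the underlying simple root with plain bisection. This sketch assumes the standard underdamped series L-C-R solution \(q/q_0 = e^{-Rt/(2L)}\cos \big (\sqrt{1/(LC)-(R/(2L))^2}\,t\big )\); the constant names below are ours, not the paper's.

```python
import math

L, C, T, RATIO = 5.0, 1e-4, 0.05, 0.01   # data of Example 4

def g(R):
    # base residual whose fifth power gives the quintuple-root function f4
    return math.exp(-R * T / (2 * L)) * math.cos(
        math.sqrt(1.0 / (L * C) - (R / (2 * L)) ** 2) * T) - RATIO

# g changes sign on [300, 350]; bisection isolates its simple zero there
a, b = 300.0, 350.0
for _ in range(200):
    mid = 0.5 * (a + b)
    (a, b) = (a, mid) if g(a) * g(mid) <= 0 else (mid, b)
R = 0.5 * (a + b)
```

Bisection is used only because the quintuple root makes double-precision divided differences unreliable; the paper's multiple-precision runs do not face this issue.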
Example 5
Next, we consider a nonlinear test function of academic interest, defined by

This function has an imaginary zero \(t^*=i\) of multiplicity 6. We choose the initial approximation \(t_0 = 1.5 i\) for computing the zero. The obtained results are shown in Table 1.
Example 6
Lastly, we consider another standard test function, defined as

This function has a multiple zero \(t^*=1.84112940685019962\ldots \) of multiplicity 10. We choose the initial approximation \(t_0 = 2\) to compute this zero. The computed results are displayed in Table 1.
From the numerical results shown in Table 1 we observe that the errors generated by the methods show increasing accuracy in the successive approximations, which confirms the good convergence behavior. In the case of the existing eighth order methods, the performance is not uniformly consistent; see the results of Example 5. The entry 0 for \(|t_{i+1}-t_i|\) means that the stopping criterion \(|t_{i+1}- t_i|+|f(t_i)| < 10^{-100}\) has been reached. The computational order of convergence, shown in the penultimate column of the table, overwhelmingly supports the theoretical eighth order of convergence; note, however, that this is not true for the existing eighth-order methods BM-1 and BM-2. The computational efficiency of the methods can be judged from the CPU-time entries. Indeed, NM is efficient in general, since the CPU time it consumes is less than that of the existing methods in the majority of cases. Similarly robust behavior of the new method has been observed on many other numerical examples.
5 Conclusions
In the present work, we have derived a derivative-free Traub-Steffensen-like iterative scheme of eighth order convergence for computing multiple roots of nonlinear equations. The convergence analysis has been carried out in detail under standard hypotheses. The most important feature of the scheme is its optimal eighth order convergence, which is difficult to attain in derivative-free numerical algorithms. Performance has been examined by numerical testing on a variety of problems, including some that arise in real applications. Moreover, the performance has been compared with that of existing optimal eighth order methods. It has been observed that, unlike some of the existing techniques, the proposed technique shows consistent convergence behavior. Comparison of the estimated CPU time has also proved the efficient and robust character of the new method. We conclude with the remark that Traub-Steffensen-like techniques are good alternatives to Newton-type techniques for problems where derivatives are costly to compute or complicated to evaluate.
References
Akram, S., Zafar, F., Yasmin, N.: An optimal eighth-order family of iterative methods for multiple roots. Mathematics (2019). https://doi.org/10.3390/math7080672
Argyros, I.K.: Convergence and Applications of Newton-Type Iterations. Springer-Verlag, New York, NY, USA (2008)
Argyros, I.K., Magreñán, Á.A.: Iterative Methods and Their Dynamics with Applications. CRC Press, New York, NY, USA (2017)
Behl, R., Alshomrani, A.S., Motsa, S.S.: An optimal scheme for multiple roots of nonlinear equations with eighth-order convergence. J. Math. Chem. 56, 2069–2084 (2018)
Behl, R., Cordero, A., Motsa, S.S., Torregrosa, J.R.: An eighth-order family of optimal multiple root finders and its dynamics. Numer. Algor. 77, 1249–1272 (2018)
Behl, R., Zafar, F., Alshomrani, A.S., Junjua, M.U.D., Yasmin, N.: An optimal eighth-order scheme for multiple zeros of univariate functions. Int. J. Comput. Meth. (2018). https://doi.org/10.1142/S0219876218430028
Chapra, S.C., Canale, R.P.: Numerical Methods for Engineers. McGraw-Hill Book Company, New York (1988)
Constantinides, A., Mostoufi, N.: Numerical Methods for Chemical Engineers with MATLAB Applications. Upper Saddle River, Prentice Hall PTR (1999)
Dong, C.: A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 21, 363–367 (1987)
Douglas, J.M.: Process Dynamics and Control. Prentice Hall, Englewood Cliffs (1972)
Geum, Y.H., Kim, Y.I., Neta, B.: A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 270, 387–400 (2015)
Geum, Y.H., Kim, Y.I., Neta, B.: Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comp. Appl. Math. 333, 131–156 (2018)
Hansen, E., Patrick, M.: A family of root finding methods. Numer. Math. 27, 257–269 (1977)
Hoffman, J.D.: Numerical Methods for Engineers and Scientists. McGraw-Hill Book Company, New York, NY, USA (1992)
Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 21, 643–651 (1974)
Osada, N.: An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 51, 131–133 (1994)
Schröder, E.: Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 2, 317–365 (1870)
Sharma, J.R., Kumar, S., Jäntschi, L.: On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 11(1452), 1–14 (2019)
Sharma, J.R., Sharma, R.: Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 217, 878–881 (2010)
Traub, J.F.: Iterative Methods for the Solution of Equations. Chelsea Publishing Company, New York, NY, USA (1982)
Weerakoon, S., Fernando, T.G.I.: A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 13, 87–93 (2000)
Wolfram, S.: The Mathematica Book, 5th edn. Wolfram Media, Champaign, IL, USA (2003)
Zafar, F., Cordero, A., Quratulain, R., Torregrosa, J.R.: Optimal iterative methods for finding multiple roots of nonlinear equations using free parameters. J. Math. Chem. 56, 1884–1901 (2017)
Zafar, F., Cordero, A., Sultana, S., Torregrosa, J.R.: Optimal iterative methods for finding multiple roots of nonlinear equations using weight functions and dynamics. J. Comp. Appl. Math. 342, 352–374 (2018)
Sharma, J.R., Kumar, S. An excellent derivative-free multiple-zero finding numerical technique of optimal eighth order convergence. Ann Univ Ferrara 68, 161–186 (2022). https://doi.org/10.1007/s11565-022-00394-w