Abstract
In this chapter we consider some modifications of the Chebyshev method that are free from the second derivative and prove semilocal convergence theorems for these modifications as well as for the Chebyshev method itself. The two modifications can be considered as generalizations of several well-known iterative methods.
1 Introduction
As is known, higher-order methods such as the Halley and Chebyshev methods play an important role in the solution of nonlinear equations. They are especially useful in problems where quick convergence is required, such as stiff systems [11] and bifurcation problems [13]. However, they are not often used in practice because of their operational cost. In iterative third-order methods, the main difficulty is evaluating the second derivative of the operator. To overcome this difficulty, many (multipoint) iterative methods [5–7, 12, 15] that are free from the second derivative but retain the same order of convergence have appeared in recent years. As a result, the operational cost is reduced to that of second-order iterations such as Newton's method.
In this chapter we propose some new modifications (multipoint iterations) of the Chebyshev method that are free from the second derivative (Sect. 2). In Sects. 3–5 we analyze the convergence of the Chebyshev method and of its two modifications, respectively, using a technique based on a new system of real sequences [2, 8]. In Sect. 6 we give mild convergence conditions for these methods. In the concluding Sect. 7 we present numerical results.
2 Some Modifications of the Chebyshev Method
We consider a nonlinear equation
Here \(F : \Omega \subseteq X \rightarrow Y\) is a nonlinear, twice Fréchet-differentiable operator defined on a nonempty convex domain Ω, and X, Y are Banach spaces. The well-known Chebyshev method for solving the nonlinear equation (1) is given by [7]:
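In the scalar case the Chebyshev iteration takes the classical form \(x_{n+1} = x_n - \left(1 + \frac{1}{2}L_f(x_n)\right)\frac{f(x_n)}{f'(x_n)}\) with \(L_f(x) = \frac{f(x)f''(x)}{f'(x)^2}\). A minimal Python sketch of this scalar case (the function names, tolerance, and test equation are illustrative, not from the chapter):

```python
def chebyshev(f, df, d2f, x, tol=1e-12, max_iter=50):
    """Scalar Chebyshev method: x+ = x - (1 + L/2) * f(x)/f'(x),
    where L = f(x)*f''(x)/f'(x)**2 (the degree of logarithmic convexity)."""
    for k in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        L = fx * d2f(x) / dfx**2
        x = x - (1.0 + 0.5 * L) * fx / dfx
        if abs(f(x)) <= tol:
            return x, k
    return x, max_iter
```

For example, applied to f(x) = x² − 2 from x₀ = 1.5, the iteration reaches √2 to machine precision within a few steps, illustrating the cubic convergence.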
As in the scalar case [15], we can take the following approximations
and
where
As a consequence, we define the following new modifications:
and
Thus we obtain classes of new two- and three-point iterative processes (3) and (4). It should be pointed out that iterations of the form (3) and (4) were given in [15] for functions of one variable.
In [5, 6], a uniparametric Halley-type iteration free from the second derivative was suggested, of the form
and third-order convergence of (5), the same as for the Halley method, was proved. If we take the approximation
in (5), then (5) reduces to (3). In this sense our modification (3) is simpler than (5). It should also be pointed out that the iteration (3) with \(\theta = 1/2\) and θ = 1 was given in [7] and [1], respectively, where third-order convergence was proved under some restrictions. The iterations (4) can be considered as generalizations of some well-known iterations for functions of one variable. For instance, if \(b = -2\) the iteration (4) reduces to a two-point iteration with third-order convergence suggested by Kou et al. [10]. If b = 0 the iteration (4) reduces to another two-point iteration with third-order convergence, suggested by Potra and Pták [9, 12], and to the CL2 method [1]. From (3) and (4) it is clear that the modification (4) is preferable to (3), especially for systems of nonlinear equations, because (3) requires a matrix-vector multiplication at each iteration.
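The b = 0 instance mentioned above, the Potra–Pták two-point scheme \(y_n = x_n - \frac{f(x_n)}{f'(x_n)}\), \(x_{n+1} = x_n - \frac{f(x_n)+f(y_n)}{f'(x_n)}\), illustrates the second-derivative-free character of this class. A scalar Python sketch (function names and tolerance are illustrative assumptions):

```python
def potra_ptak(f, df, x, tol=1e-12, max_iter=50):
    """Two-point, second-derivative-free iteration of Potra-Ptak type:
    y = x - f(x)/f'(x);  x+ = x - (f(x) + f(y))/f'(x).
    Third-order convergent, using only f and f'."""
    for k in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                 # predictor: one Newton step
        x = x - (fx + f(y)) / dfx        # corrector: reuse f'(x), add f(y)
        if abs(f(x)) <= tol:
            return x, k
    return x, max_iter
```

Note that each step needs two evaluations of f and one of f′, but no second derivative, which is exactly the operational saving the modifications aim for.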
3 Recurrence Relations
In [14] we reduced the two-dimensional cubic decreasing region to a one-dimensional region for the Chebyshev method. Now we study the convergence of the Chebyshev method (2) in detail. We assume that \({\Gamma }_{0} \in \mathbf{L}(Y,X)\) exists at some x 0 ∈ Ω, where L(Y, X) is the set of bounded linear operators from Y into X. In what follows we assume that
Let us suppose that
and define the sequence
where
and \(d = 1 + 2\omega,\) \(\omega = \frac{K} {{M}^{2}m},\) \(m ={ \mathrm{min}}_{n}\|{\Gamma }_{n}\|\). In Sect. 4, we will show that m > 0.
Lemma 1.
Let f,g be two real functions given in (8). Then
-
(i)
f is increasing and f(x) > 1 for \(x \in (0, \frac{1} {2})\).
-
(ii)
g is increasing in \((0, \frac{1} {2})\).
-
(iii)
f(γx) < f(x), \(g(\gamma x) \leq {\gamma }^{2}g(x)\) for \(x \in (0, \frac{1} {2})\) and γ ∈ (0,1).
The proof is trivial [8].
Lemma 2.
Let \(0 < {a}_{0} < \frac{1} {2}\) and \(f{({a}_{0})}^{2}g({a}_{0}) < 1\) . Then the sequence {a n } is decreasing.
Proof.
From the hypothesis we deduce that \(0 < {a}_{1} < {a}_{0}\). Now we suppose that \(0 < {a}_{k} < {a}_{k-1} < \cdots < {a}_{1} < {a}_{0} < 1/2\). Then \(0 < {a}_{k+1} < {a}_{k}\) if and only if \({f}^{2}({a}_{k})g({a}_{k}) < 1\). Notice that \(f({a}_{k}) < f({a}_{0})\) and \(g({a}_{k}) < g({a}_{0})\). Consequently, \({f}^{2}({a}_{k})g({a}_{k}) < {f}^{2}({a}_{0})g({a}_{0}) < 1\). □
Lemma 3.
If \(0 < {a}_{0} < \frac{1} {2d}\) , then \({f}^{2}({a}_{0})g({a}_{0}) < 1\).
Proof.
It is easy to show that the inequality \({f}^{2}({a}_{0})g({a}_{0}) < 1\) is equivalent to
Since
Therefore there exists
such that
We compute
It is clear that
for d > 1. Thus \(\varphi ({a}_{0}) > 0\) for \(0 < {a}_{0} < \frac{1} {2d}\). □
Lemma 4.
Let us suppose that the hypothesis of Lemma 3 is satisfied and define \(\gamma = {a}_{1}/{a}_{0}\). Then
-
(i)
\(\gamma = f{({a}_{0})}^{2}g({a}_{0}) \in (0,1)\)
-
(ii n )
\({a}_{n} \leq {\gamma }^{{3}^{n-1} }{a}_{n-1} \leq {\gamma }^{\frac{{3}^{n}-1} {2} }{a}_{0}\)
-
(iii n )
\(f({a}_{n})g({a}_{n}) \leq \frac{{\gamma }^{{3}^{n}}} {f({a}_{0})},\) n ≥ 0
Proof.
Notice that (i) is trivial. Next we prove (ii n ) following an inductive procedure. So
and by Lemma 1 we have
i.e., (ii 1), (iii 1) are proved. If we suppose that (ii n ) is true, then
\(\Delta = \frac{1} {f({a}_{0})} < 1\) and the proof is complete. □
4 Convergence Study of Chebyshev Method
In this section, we study the sequence {a n } defined above and prove the convergence of the sequence {x n } given by (2). Notice that
Given this situation, we prove the following statements for n ≥ 1:
Assuming
we have
Then, by the Banach lemma, Γ 1 is defined and
On the other hand, if \({x}_{n},{x}_{n-1} \in \Omega \), we will use Taylor’s formula
Taking into account (2) we obtain
Substituting the last expression in (9) we obtain
Then for n = 1, if x 1 ∈ Ω, we have
From (11) we get
Using (10) and
in (13) we obtain
or
and (II 1) is true. To prove (III 1) notice that
and
and (IV 1) is true. Using
and
and (14) we have
Thus, \({y}_{1},{x}_{2} \in \overline{B({x}_{0},R\eta )}\) and (V 1) is true. Now, following an inductive procedure and assuming
the items \(({I}_{n}) - ({V }_{n})\) are proved.
Notice that \(\|{\Gamma }_{n}\| > 0\) for all n = 0, 1, …. Indeed, if \(\|{\Gamma }_{k}\| = 0\) for some k, then due to statement (I n ) we have \(\|{\Gamma }_{n}\| = 0\) for all n ≥ k. As a consequence, the iteration (2), as well as (3) and (4), would terminate after the kth step, i.e., the convergence of the iterations would not hold. To establish the convergence of {x n } we only have to prove that it is a Cauchy sequence and that the assumptions (15) above are true. We note that
As a consequence of Lemma 4 it follows that
So from Δ < 1 and γ < 1, we deduce that \({\prod \limits }_{k=0}^{n-1}f({a}_{k})g({a}_{k})\) converges to zero by letting \(n \rightarrow \infty \).
We are now ready to state the main convergence result for the Chebyshev method (2).
Theorem 1.
Let us assume that \({\Gamma }_{0} = F^{\prime}{({x}_{0})}^{-1} \in \mathbf{L}(Y,X)\) exists at some x 0 ∈ Ω and \(({c}_{1}) - ({c}_{4})\) are satisfied. Suppose that
Then, if \(\overline{B({x}_{0},R\eta )} =\{ x \in X;\|x - {x}_{0}\| \leq R\eta \} \subseteq \Omega \), the sequence {x n } defined in (2) and starting at x 0 has R-order at least three and converges to a solution x ∗ of Eq. (1). In that case, the solution x ∗ and the iterates \({x}_{n},{y}_{n}\) belong to \(\overline{B({x}_{0},R\eta )},\) and x ∗ is the unique solution of Eq. (1) in \(B({x}_{0}, \frac{2} {M\beta } - R\eta ) \cap \Omega \) . Furthermore, we have the following error estimates:
5 Convergence Study of Modifications of the Chebyshev Method
The convergence of the proposed modifications (3) and (4) is studied analogously to that of the Chebyshev method. The only difference lies in the proof of assumption (II n ) for these methods; therefore, we restrict our attention to that proof. First, we consider the modification (3). For this, if \({x}_{n},{y}_{n} \in \Omega \), we obtain from Taylor's formula
where
According to (3) we have
Substituting the last expression into (18) we get
Then, for n = 1, if \({y}_{0} \in \Omega \), we have
Since \({\xi }_{1} - {\eta }_{0} = \overline{\theta }({x}_{1} - {x}_{0}) - w\theta ({y}_{0} - {x}_{0})\), it follows
If we take \(\hat{\theta } =\max (\overline{\theta },w\theta )\), then we get the following estimate:
Therefore, we have
Analogously, for the modification (4), we have
Notice that
Substituting the last expression into (20) we have
where
If \({\xi }_{n},{\eta }_{n-1},{\zeta }_{n-1} \in \Omega \) then we have
Using these expressions we get
where
Then we obtain
For the modifications (3) and (4), the cubic convergence Theorem 1 remains valid, with d equal to 1 + 5ω and 1 + 4ω, respectively.
It should be mentioned that a family of predictor-corrector methods free from the second derivative was constructed in [4]. However, these methods, except for the case A20, require a higher computational cost even compared with the modification (3).
6 Mild Convergence Conditions
In order to obtain mild convergence conditions for these methods, we first consider the inexact Newton (IN) method for (1):
The terms \({r}_{k} \in {R}^{n}\) represent the residuals of the approximate solutions s k [3, 4]. We recall a local convergence result [3, 4]:
Theorem 2.
Given \({\eta }_{k} \leq \overline{\eta } < t < 1,k = 0,1,\ldots \) , there exists \(\epsilon > 0\) such that for any initial approximation x 0 with \(\|{x}_{0} - {x}^{{_\ast}}\|\leq \epsilon,\) the sequence of IN iterates satisfying
converges to x ∗.
Moreover, it is known that the IN method converges superlinearly when \({\eta }_{k} \rightarrow 0\) as k → ∞. We now analyze the connection between the inexact Newton method and the Chebyshev method (2) and its modifications (3) and (4). To this end we rewrite (2)–(4) in the form (22) with
and
respectively.
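The inexact Newton framework above, in which each step solves \(F'(x_k)s_k = -F(x_k) + r_k\) with \(\|r_k\| \leq \eta_k \|F(x_k)\|\), can be sketched in the scalar case as follows (the forcing sequence, test function, and tolerance are illustrative assumptions, not taken from the chapter):

```python
def inexact_newton(f, df, x, eta, tol=1e-12, max_iter=100):
    """Scalar inexact Newton method: at step k solve
    f'(x_k) * s_k = -f(x_k) + r_k with an admissible residual
    |r_k| <= eta(k) * |f(x_k)|."""
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            return x, k
        r = eta(k) * fx              # residual of maximal admissible size
        x = x + (-fx + r) / df(x)    # inexact Newton step s_k
    return x, max_iter
```

With a forcing sequence tending to zero, e.g. \(\eta_k = 2^{-k-1}\), the iteration is superlinearly convergent, consistent with the remark preceding Theorem 3.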
Theorem 3.
Let us assume that \({\Gamma }_{0} = F^{\prime}{({x}_{0})}^{-1} \in \mathcal{L}(Y,X)\) exists at some x 0 ∈ Ω, and the conditions (c 1 )–(c 3 ) are satisfied. Suppose that \(0 < {a}_{0} < 0.5\) . Then the sequences {x k } given by (2), (3) and (4) converge to x ∗.
Proof.
We first observe that the sequence {a k } given by (7) and (8) with d = 1 is decreasing, i.e.,
It is easy to show that the following estimate holds for the residuals r k of all the methods (2), (3), and (4):
From (25) and (26) it follows that \({\eta }_{k} \rightarrow 0\) as k → ∞. Then, by Theorem 2, the methods (2)–(4) converge to x ∗ . □
The assumptions of Theorem 3 are milder than the cubic convergence conditions of Theorem 1 with d > 1.
7 Numerical Results and Discussion
Now, we give some numerical examples that confirm the theoretical results. First, we consider the following test equations:
All computations are carried out in double-precision arithmetic, and the number of iterations needed to achieve \(\|F({x}_{n})\| \leq 1.0e - 16\) is tabulated (see Table 1). We see that the third-order MOD 1 and MOD 2 take fewer iterations to converge than the second-order Newton's method (NM).
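The chapter's test equations and Table 1 are not reproduced here; as a hypothetical illustration of such an iteration-count comparison, one can pit Newton's method against the third-order Chebyshev iteration on a simple scalar equation (the test function, starting point, and the looser tolerance below are assumptions for illustration only):

```python
def count_iters(step, f, x, tol=1e-10, max_iter=50):
    """Count iterations of the one-step update `step` until |f(x)| <= tol."""
    for k in range(1, max_iter + 1):
        x = step(x)
        if abs(f(x)) <= tol:
            return k
    return max_iter

f   = lambda x: x * x - 2.0      # assumed test equation, root = sqrt(2)
df  = lambda x: 2.0 * x
d2f = lambda x: 2.0

newton = lambda x: x - f(x) / df(x)

def cheb(x):
    # Chebyshev step: x+ = x - (1 + L/2) * f(x)/f'(x), L = f*f''/f'^2
    L = f(x) * d2f(x) / df(x) ** 2
    return x - (1.0 + 0.5 * L) * f(x) / df(x)

n_newton = count_iters(newton, f, 1.5)
n_cheb   = count_iters(cheb, f, 1.5)
```

As expected from the convergence orders, the third-order iteration needs no more steps than Newton's method to reach the same residual level.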
Now we consider the following systems of equations:
As seen from Tables 1–3, the proposed modifications (MOD 1, MOD 2) are almost always superior to their classical predecessor, the Chebyshev method (CM): their convergence order is the same as that of CM, but they are simpler and free from the second derivative.
We also compared the computational cost of the two modifications with that of the classical NM (see Table 2). The numerical results show that MOD 2 is the most effective method, especially when \(b = -2\) or b = 0.
8 Conclusion
In this chapter we proposed two new families of methods that include many well-known third-order methods as particular cases. We proved third-order convergence theorems for these modifications as well as for the Chebyshev method. The new methods were compared in performance with Newton's method and the Chebyshev method, and they showed better performance than NM and CM.
References
Babajee D.K.R., Dauhoo M.Z., Darvishi M.T., Karami A., Barati A.: Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations. J. Comput. Appl. Math. 233, 2002–2012 (2010)
Candela V., Marquina A.: Recurrence relations for rational cubic methods II: The Chebyshev method. Computing 45, 355–367 (1990)
Cătinaş E.: The inexact, inexact perturbed and quasi-Newton methods are equivalent models. Math. Comp. 74, 291–301 (2004)
Dembo R.S., Eisenstat S.C., Steihaug T.: Inexact Newton methods. SIAM J. Numer. Anal. 19, 400–408 (1982)
Ezquerro J.A., Hernández M.A.: A uniparametric Halley-type iteration with free second derivative. Int. J. Pure Appl. Math. 61, 103–114 (2003)
Ezquerro J.A., Hernández M.A.: On Halley-type iterations with free second derivative. J. Comput. Appl. Math. 170, 455–459 (2004)
Hernández M.A.: Second-derivative-free variant of the Chebyshev method for nonlinear equations. J. Optim. Theory Appl. 104, 501–515 (2000)
Hernández M.A., Salanova M.A.: Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method. J. Comput. Appl. Math. 126, 131–143 (2000)
Homeier H.H.H.: On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 176, 425–432 (2005)
Kou J., Li Y., Wang X.: A modification of Newton method with third-order convergence. Appl. Math. Comput. 181, 1106–1111 (2006)
Lambert J.D.: Computational Methods in Ordinary Differential Equations. John Wiley and Sons, London (1976)
Potra F.A., Pták V.: Nondiscrete Induction and Iterative Processes. Pitman, NY (1984)
Stoer J., Bulirsch R.: Introduction to Numerical Analysis, 3rd edn. Springer (2002)
Zhanlav T.: Note on the cubic decreasing region of the Chebyshev method. J. Comput. Appl. Math. 235, 341–344 (2010)
Zhanlav T., Chuluunbaatar O.: Some iterations of higher order convergence for nonlinear equations. Vestnik Friendship University of People 3, 70–78 (2009) (in Russian)
© 2013 Springer Science+Business Media New York
Tugal, Z., Dorjgotov, K. (2013). Semilocal Convergence with R-Order Three Theorems for the Chebyshev Method and Its Modifications. In: Chinchuluun, A., Pardalos, P., Enkhbat, R., Pistikopoulos, E. (eds) Optimization, Simulation, and Control. Springer Optimization and Its Applications, vol 76. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-5131-0_21