
1 Introduction

As is well known, higher-order methods such as the Halley and Chebyshev methods play an important role in the solution of nonlinear equations. They are especially useful in problems where quick convergence is required, such as stiff systems [11] and bifurcation problems [13]. However, they are not often used in practice because of their operational cost: in iterative third-order methods the main difficulty is the evaluation of the second derivative of the operator. To overcome this difficulty, many (multipoint) iterative methods [5–7, 12, 15] free from the second derivative but with the same order of convergence have appeared in recent years. As a result, the operational cost is reduced to that of second-order iterations, such as Newton's method.

In this chapter we propose some new modifications (multipoint iterations) of the Chebyshev method which are free from the second derivative (Sect. 2). In Sects. 3–5 we analyze the convergence of the Chebyshev method and of its two modifications by using a technique based on a new system of real sequences [2, 8]. In Sect. 6 we give mild convergence conditions for these methods. In the concluding Sect. 7 we present numerical results.

2 Some Modifications of Chebyshev Method

We consider a nonlinear equation

$$F(x) = 0.$$
(1)

Here \(F : \Omega \subseteq X \rightarrow Y\) is a twice Fréchet differentiable nonlinear operator defined on a nonempty convex domain Ω, and X, Y are Banach spaces. The well-known Chebyshev method for solving the nonlinear equation (1) is given by [7]:

$$\begin{array}{rcl} {y}_{n}& =& {x}_{n} - {\Gamma }_{n}F({x}_{n}),\qquad {\Gamma }_{n} = F^{\prime}{({x}_{n})}^{-1}, \\ {x}_{n+1}& =& {y}_{n} -\frac{1} {2}{\Gamma }_{n}{F}^{{\prime\prime}}({x}_{ n}){({y}_{n} - {x}_{n})}^{2},\quad n = 0,1,\ldots.\end{array}$$
(2)
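A minimal scalar sketch of (2) may help fix ideas (an illustrative implementation of ours, not from the original chapter; the test function x^3 − 10 = 0 is equation (I) of Sect. 7):

def chebyshev(F, dF, d2F, x, tol=1e-12, itmax=50):
    # Chebyshev method (2): a Newton step to y_n, then a second-derivative correction.
    for k in range(itmax):
        Fx = F(x)
        if abs(Fx) <= tol:
            return x, k
        step = Fx / dF(x)                        # Gamma_n F(x_n), so y_n - x_n = -step
        y = x - step
        x = y - 0.5 * d2F(x) / dF(x) * step**2   # (y_n - x_n)^2 = step^2
    return x, itmax

root, iters = chebyshev(lambda x: x**3 - 10, lambda x: 3*x**2,
                        lambda x: 6*x, 1.5)      # root approaches 10**(1/3)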

As in the scalar case [15], we can use the approximations

$$\begin{array}{rcl} \frac{1} {2}{F}^{{\prime\prime}}({x}_{ n}){({y}_{n} - {x}_{n})}^{2}& \approx & \frac{1} {2\theta }(F^{\prime}({z}_{n}) - F^{\prime}({x}_{n}))({y}_{n} - {x}_{n}), \\ {z}_{n}& =& (1 - \theta ){x}_{n} + \theta {y}_{n},\quad 0 < \theta \leq 1,\end{array}$$

and

$$\frac{1} {2}{F}^{{\prime\prime}}({x}_{ n}){({y}_{n} - {x}_{n})}^{2} \approx \left (1 + \frac{b} {2}\right )F({y}_{n}) + bF({x}_{n}) -\frac{b} {2}F({z}_{n}),$$

where

$${z}_{n} = {x}_{n} + {\Gamma }_{n}F({x}_{n}),\qquad - 2 \leq b \leq 0.$$
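Both approximations are easy to check numerically. A small sketch for F(x) = x^3 − 10 at x_n = 2 (the values θ = 1/2 and b = −1 are arbitrary illustrative choices):

F   = lambda x: x**3 - 10
dF  = lambda x: 3*x**2
d2F = lambda x: 6*x

xn = 2.0
yn = xn - F(xn) / dF(xn)
exact = 0.5 * d2F(xn) * (yn - xn)**2             # the true second-order term

theta = 0.5
zn = (1 - theta)*xn + theta*yn
approx1 = (dF(zn) - dF(xn)) * (yn - xn) / (2*theta)

b = -1.0
zn2 = xn + F(xn) / dF(xn)                        # note the + sign in this z_n
approx2 = (1 + b/2)*F(yn) + b*F(xn) - (b/2)*F(zn2)

print(exact, approx1, approx2)                   # approx. 0.1667, 0.1701, 0.1667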

As a consequence, we define the following new modifications:

$$\begin{array}{rcl} {y}_{n}& =& {x}_{n} - {\Gamma }_{n}F({x}_{n}) \\ {z}_{n}& =& (1 - \theta ){x}_{n} + \theta {y}_{n},\quad \theta \in (0,1] \\ {x}_{n+1}& =& {y}_{n} - \frac{1} {2\theta }{\Gamma }_{n}(F^{\prime}({z}_{n}) - F^{\prime}({x}_{n}))({y}_{n} - {x}_{n})\end{array}$$
(3)

and

$$\begin{array}{rcl} {y}_{n}& =& {x}_{n} - {\Gamma }_{n}F({x}_{n}) \\ {z}_{n}& =& {x}_{n} + {\Gamma }_{n}F({x}_{n}), \\ {x}_{n+1}& =& {y}_{n} - {\Gamma }_{n}\left (\left (1 + \frac{b} {2}\right )F({y}_{n}) + bF({x}_{n}) -\frac{b} {2}F({z}_{n})\right ), \\ & & -2 \leq b \leq 0. \end{array}$$
(4)

Thus we have classes of new two- and three-point iterative processes (3) and (4). It should be pointed out that the iterations (3) and (4) were given in [15] for functions of one variable.

In [5, 6], a uniparametric family of Halley-type iterations free from the second derivative was suggested, of the form

$$\begin{array}{rcl} {y}_{n}& =& {x}_{n} - {\Gamma }_{n}F({x}_{n}) \\ {z}_{n}& =& (1 - \theta ){x}_{n} + \theta {y}_{n},\quad \theta \in (0,1] \\ H({x}_{n},{y}_{n})& =& \frac{1} {\theta }{\Gamma }_{n}(F^{\prime}({z}_{n}) - F^{\prime}({x}_{n})) \\ {x}_{n+1}& =& {y}_{n} -\frac{1} {2}H({x}_{n},{y}_{n}){\left [I + \frac{1} {2}H({x}_{n},{y}_{n})\right ]}^{-1}({y}_{ n} - {x}_{n}),\quad n \geq 0\end{array}$$
(5)

and third-order convergence of (5), as for the Halley method, was proved. If we use the approximation

$${\left [I + \frac{1} {2}H({x}_{n},{y}_{n})\right ]}^{-1} \approx I$$

in (5), then (5) reduces to (3). In this sense our modification (3) is simpler than (5). It should also be pointed out that the iteration (3) with \(\theta = 1/2\) and θ = 1 was given in [7] and [1], respectively, where third-order convergence was proven under some restrictions. The iterations (4) can be considered as a generalization of some well-known iterations for functions of one variable. For instance, if \(b = -2\), the iteration (4) reduces to a two-point iteration with third-order convergence suggested by Kou et al. [10]. If b = 0, the iteration (4) reduces to another two-point third-order iteration, suggested by Potra and Pták [9, 12], and to the CL2 method [1]. From (3) and (4) it is clear that the modification (4) is preferable to (3), especially for systems of nonlinear equations, because in (3) an additional matrix-vector multiplication is needed in each iteration.
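For concreteness, here is a minimal numpy sketch of the modification (3) for systems (an illustrative implementation of ours; the tolerance and the parameter θ are arbitrary choices):

import numpy as np

def mod1(F, J, x, theta=0.5, tol=1e-12, itmax=50):
    # Modification (3): the Jacobian at x_n is used for both solves, but an extra
    # Jacobian evaluation at z_n and a matrix-vector product are needed per step.
    for k in range(itmax):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, k
        Jx = J(x)
        step = np.linalg.solve(Jx, Fx)               # Gamma_n F(x_n)
        y = x - step
        z = (1 - theta)*x + theta*y
        corr = (J(z) - Jx) @ (y - x) / (2*theta)     # (F'(z_n) - F'(x_n))(y_n - x_n)
        x = y - np.linalg.solve(Jx, corr)
    return x, itmax

root, k = mod1(lambda x: np.array([x[0]**3 - 10.0]),   # equation (I) as a 1-D system
               lambda x: np.array([[3*x[0]**2]]),
               np.array([1.5]))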

3 Recurrence Relations

In [14] we reduced the two-dimensional cubic convergence region to a one-dimensional one for the Chebyshev method. Now we study the convergence of the Chebyshev method (2) in detail. We assume that \({\Gamma }_{0} \in \mathbf{L}(Y,X)\) exists at some x 0 ∈ Ω, where L(Y, X) is the set of bounded linear operators from Y into X. In what follows we assume that

$$\begin{array}{rcl} ({c}_{1})& & \|{F}^{{\prime\prime}}(x)\| \leq M,\qquad x \in \Omega, \\ ({c}_{2})& & \|{y}_{0} - {x}_{0}\| =\| {\Gamma }_{0}F({x}_{0})\| \leq \eta, \\ ({c}_{3})& & \|{\Gamma }_{0}\| \leq \beta, \\ ({c}_{4})& & \|{F}^{{\prime\prime}}(x) - {F}^{{\prime\prime}}(y)\| \leq K\|x - y\|,\qquad x,y \in \Omega,\qquad K > 0.\end{array}$$

Let us suppose that

$${a}_{0} = M\beta \eta $$
(6)

and define the sequence

$${a}_{n+1} = f{({a}_{n})}^{2}g({a}_{ n}){a}_{n},$$
(7)

where

$$f(x) = \frac{2} {2 - 2x - {x}^{2}},\qquad g(x) = \frac{{x}^{2}(4 + x)} {8} d,$$
(8)

and \(d = 1 + 2\omega,\) \(\omega = \frac{K} {{M}^{2}m},\) \(m ={ \mathrm{min}}_{n}\|{\Gamma }_{n}\|\). In Sect. 4 we will show that m > 0.
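The behaviour of the sequence {a n } is easy to examine numerically. A small sketch (the constants M, β, η, K, m below are hypothetical values chosen only for illustration):

f = lambda x: 2.0 / (2 - 2*x - x**2)                 # f from (8)
g = lambda x, d: x**2 * (4 + x) / 8 * d              # g from (8)

M, beta, eta, K, m = 2.0, 0.5, 0.1, 1.0, 0.4         # hypothetical constants
omega = K / (M**2 * m)
d = 1 + 2*omega
a = M * beta * eta                                   # a_0 from (6)
assert a < 1 / (2*d)                                 # hypothesis of Lemma 3 below
for n in range(5):
    print(n, a)
    a = f(a)**2 * g(a, d) * a                        # recurrence (7): decreases fast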

Lemma 1.

Let f,g be two real functions given in (8). Then

  1. (i)

    f is increasing and f(x) > 1 for \(x \in (0, \frac{1} {2})\).

  2. (ii)

    g is increasing in \((0, \frac{1} {2})\).

  3. (iii)

    f(γx) < f(x), \(g(\gamma x) \leq {\gamma }^{2}g(x)\) for \(x \in (0, \frac{1} {2})\) and γ ∈ (0,1).

The proof is trivial [8].

Lemma 2.

Let \(0 < {a}_{0} < \frac{1} {2}\) and \(f{({a}_{0})}^{2}g({a}_{0}) < 1\) . Then the sequence {a n } is decreasing.

Proof.

From the hypothesis we deduce that \(0 < {a}_{1} < {a}_{0}\). Now we suppose that \(0 < {a}_{k} < {a}_{k-1} < \cdots < {a}_{1} < {a}_{0} < 1/2\). Then \(0 < {a}_{k+1} < {a}_{k}\) if and only if \({f}^{2}({a}_{k})g({a}_{k}) < 1\). Notice that \(f({a}_{k}) < f({a}_{0})\) and \(g({a}_{k}) < g({a}_{0})\). Consequently, \({f}^{2}({a}_{k})g({a}_{k}) < {f}^{2}({a}_{0})g({a}_{0}) < 1\). □ 

Lemma 3.

If \(0 < {a}_{0} < \frac{1} {2d}\) , then \({f}^{2}({a}_{0})g({a}_{0}) < 1\).

Proof.

It is easy to show that the inequality \({f}^{2}({a}_{0})g({a}_{0}) < 1\) is equivalent to

$$\varphi ({a}_{0}) = 2{a}_{0}^{4} + (8 - d){a}_{ 0}^{3} - 4d{a}_{ 0}^{2} - 16{a}_{ 0} + 8 > 0.$$

Since

$$\begin{array}{rcl} \varphi (0)& =& 8 > 0,\quad \varphi (0.5) = \frac{9} {8}(1 - d) < 0\quad (\varphi (0.5) = 0\mbox{ when }d = 1), \\ \varphi ^{\prime}({a}_{0})& =& 8{a}_{0}^{3} + (24 - 3d){a}_{ 0}^{2} - 8d{a}_{ 0} - 16 < 0\quad \mbox{ for }0 < {a}_{0} < 0.5,\end{array}$$

there exists a unique \(\overline{{a}_{0}} < \frac{1} {2}\) such that \(\varphi (\overline{{a}_{0}}) = 0\). We compute

$$\varphi \left ( \frac{1} {2d}\right ) = \frac{d - 1} {8{d}^{4}} \left (64{d}^{3} - 8{d}^{2} - 9d - 1\right ),$$

which is clearly positive for d > 1. Thus \(\varphi ({a}_{0}) > 0\) for \(0 < {a}_{0} < \frac{1} {2d}\). □ 
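The sign pattern used in the proof can be confirmed numerically (an illustrative check for a few values of d > 1):

phi = lambda x, d: 2*x**4 + (8 - d)*x**3 - 4*d*x**2 - 16*x + 8

for d in (1.5, 2.0, 4.0):                # arbitrary values d > 1
    assert phi(0.0, d) > 0               # phi(0) = 8
    assert phi(0.5, d) < 0               # phi(0.5) = (9/8)(1 - d)
    assert phi(1/(2*d), d) > 0           # so 1/(2d) lies to the left of the root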

Lemma 4.

Let us suppose that the hypothesis of Lemma  3 is satisfied and define \(\gamma = {a}_{1}/{a}_{0}\). Then

  1. (i)

\(\gamma = f{({a}_{0})}^{2}g({a}_{0}) \in (0,1)\)

  2. (ii n )

    \({a}_{n} \leq {\gamma }^{{3}^{n-1} }{a}_{n-1} \leq {\gamma }^{\frac{{3}^{n}-1} {2} }{a}_{0}\)

  3. (iii n )

    \(f({a}_{n})g({a}_{n}) \leq \frac{{\gamma }^{{3}^{n}}} {f({a}_{0})},\) n ≥ 0

Proof.

Notice that (i) is trivial. Next we prove (ii n ) following an inductive procedure. So

$${a}_{1} \leq \gamma {a}_{0}$$

and by Lemma 1 we have

$$f({a}_{1})g({a}_{1}) < f(\gamma {a}_{0})g(\gamma {a}_{0}) < f({a}_{0}){\gamma }^{2}g({a}_{ 0}) = \frac{{\gamma }^{2}{f}^{2}({a}_{0})g({a}_{0})} {f({a}_{0})} = \frac{{\gamma }^{3}} {f({a}_{0})},$$

i.e., (ii 1), (iii 1) are proved. If we suppose that (ii n ) is true, then

$$\begin{array}{rcl} {a}_{n+1}& =& {f}^{2}({a}_{ n})g({a}_{n}){a}_{n} \leq {f}^{2}({\gamma }^{\frac{{3}^{n}-1} {2} }{a}_{0})g({\gamma }^{\frac{{3}^{n}-1} {2} }{a}_{0}){a}_{n} \\ & \leq & {f}^{2}({a}_{ 0}){\gamma }^{{3}^{n}-1 }g({a}_{0}){\gamma }^{\frac{{3}^{n}-1} {2} }{a}_{0} = {\gamma }^{1+\frac{3} {2} ({3}^{n}-1) }{a}_{0} = {\gamma }^{\frac{{3}^{n+1}-1} {2} }{a}_{0}, \\ \mbox{ and }f({a}_{n+1})g({a}_{n+1})& \leq & \frac{f({a}_{0}){\gamma }^{{3}^{n+1}-1 }g({a}_{0})} {f({a}_{0})} f({a}_{0}) = \frac{{\gamma }^{{3}^{n+1} }} {f({a}_{0})} = \Delta {\gamma }^{{3}^{n+1} } \\ \end{array}$$

where \(\Delta = \frac{1} {f({a}_{0})} < 1\) , and the proof is complete. □ 
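The bound (ii n ) can also be verified numerically (a sketch reusing the hypothetical constants of the sketch in Sect. 3):

f = lambda x: 2.0 / (2 - 2*x - x**2)
g = lambda x, d: x**2 * (4 + x) / 8 * d

d, a0 = 2.25, 0.1                                    # hypothetical values, as before
gamma = f(a0)**2 * g(a0, d)                          # gamma = a_1/a_0 < 1, item (i)
a = a0
for n in range(1, 6):
    a = f(a)**2 * g(a, d) * a
    assert a <= gamma**((3**n - 1) / 2) * a0         # bound (ii_n)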

4 Convergence Study of Chebyshev Method

In this section, we study the sequence {a n } defined above and prove the convergence of the sequence {x n } given by (2). Notice that

$$\begin{array}{rcl} & & M\|{\Gamma }_{0}\|\|{\Gamma }_{0}F({x}_{0})\| \leq {a}_{0} \\ & & \|{x}_{1} - {x}_{0}\| \leq \left (1 + \frac{{a}_{0}} {2} \right )\|{\Gamma }_{0}F({x}_{0})\|.\end{array}$$

Given this situation, we prove the following statements for n ≥ 1:

$$\begin{array}{rcl} ({I}_{n})& & \|{\Gamma }_{n}\| =\| F^{\prime}{({x}_{n})}^{-1}\| \leq f({a}_{ n-1})\|{\Gamma }_{n-1}\| \\ (I{I}_{n})& & \|{\Gamma }_{n}F({x}_{n})\| \leq f({a}_{n-1})g({a}_{n-1})\|{\Gamma }_{n-1}F({x}_{n-1})\| \\ (II{I}_{n})& & M\|{\Gamma }_{n}\|\|{\Gamma }_{n}F({x}_{n})\| \leq {a}_{n} \\ (I{V }_{n})& & \|{x}_{n+1} - {x}_{n}\| \leq \left (1 + \frac{{a}_{n}} {2} \right )\|{\Gamma }_{n}F({x}_{n})\| \\ ({V }_{n})& & {y}_{n},{x}_{n+1} \in B({x}_{0},R\eta ),\mbox{ where }B({x}_{0},R\eta ) = \left \{x \in \Omega :\| x - {x}_{0}\| < \frac{1 + {a}_{0}/2} {1 - \gamma \Delta } \eta \right \}\\ \end{array}$$

Assuming

$$\left (1 + \frac{{a}_{0}} {2} \right ){a}_{0} < 1,\qquad {x}_{1} \in \Omega,$$

we have

$$\|I - {\Gamma }_{0}F^{\prime}({x}_{1})\| \leq \| {\Gamma }_{0}\|\|F^{\prime}({x}_{0}) - F^{\prime}({x}_{1})\| \leq M\|{\Gamma }_{0}\|\|{x}_{1} - {x}_{0}\| \leq \left (1 + \frac{{a}_{0}} {2} \right ){a}_{0} < 1.$$

Then, by the Banach lemma, Γ 1 is defined and

$$\|{\Gamma }_{1}\| \leq \frac{\|{\Gamma }_{0}\|} {1 -\| {\Gamma }_{0}\|\|F^{\prime}({x}_{0}) - F^{\prime}({x}_{1})\|} \leq \frac{1} {1 -\left (1 + \frac{{a}_{0}} {2} \right ){a}_{0}}\|{\Gamma }_{0}\| = f({a}_{0})\|{\Gamma }_{0}\|.$$

On the other hand, if \({x}_{n},{x}_{n-1} \in \Omega \), we will use Taylor’s formula

$$F({x}_{n}) = F({x}_{n-1}) + F^{\prime}({x}_{n-1})({x}_{n} - {x}_{n-1}) + \frac{{F}^{{\prime\prime}}({\xi }_{n})} {2} {({x}_{n} - {x}_{n-1})}^{2},$$
(9)
$${\xi }_{n} = \theta {x}_{n} + (1 - \theta ){x}_{n-1},\qquad \theta \in (0,1).$$
(10)

Taking into account (2) we obtain

$${x}_{n} - {x}_{n-1} = \left [I -\frac{1} {2}{\Gamma }_{n-1}{F}^{{\prime\prime}}({x}_{ n-1})({y}_{n-1} - {x}_{n-1})\right ]({y}_{n-1} - {x}_{n-1}).$$
(11)

Substituting the last expression in (9) we obtain

$$\begin{array}{rcl} F({x}_{n})& =& -\frac{1} {2}{F}^{{\prime\prime}}({x}_{ n-1}){({y}_{n-1} - {x}_{n-1})}^{2} + \frac{1} {2}{F}^{{\prime\prime}}({\xi }_{ n}){({x}_{n} - {x}_{n-1})}^{2} \\ & =& \frac{1} {2}\left [{F}^{{\prime\prime}}({\xi }_{ n}) - {F}^{{\prime\prime}}({x}_{ n-1}) - {F}^{{\prime\prime}}({\xi }_{ n}){\Gamma }_{n-1}{F}^{{\prime\prime}}({x}_{ n-1})({y}_{n-1} - {x}_{n-1})\right. \\ & & +\left.\frac{1} {4}{F}^{{\prime\prime}}({\xi }_{ n}){\Gamma }_{n-1}^{2}{F}^{{\prime\prime}}{({x}_{ n-1})}^{2}{({y}_{ n-1} - {x}_{n-1})}^{2}\right ]{({y}_{ n-1} - {x}_{n-1})}^{2}.\end{array}$$
(12)

Then for n = 1, if x 1 ∈ Ω, we have

$$\|F({x}_{1})\| \leq \frac{1} {2}\left [K\|{\xi }_{1} - {x}_{0}\| + M{a}_{0} + \frac{1} {4}M{a}_{0}^{2}\right ]\|{\Gamma }_{ 0}F{({x}_{0})}\|^{2}.$$
(13)

From (11) we get

$$\|{x}_{1} - {x}_{0}\| \leq \left (1 + \frac{{a}_{0}} {2} \right )\|{y}_{0} - {x}_{0}\| \leq \left (1 + \frac{{a}_{0}} {2} \right )\|{\Gamma }_{0}F({x}_{0})\|.$$

Using (10) and

$$\|{\xi }_{1} - {x}_{0}\| = \theta \|{x}_{1} - {x}_{0}\| \leq \theta \left (1 + \frac{{a}_{0}} {2} \right )\|{\Gamma }_{0}F({x}_{0})\|$$

in (13) we obtain

$$\begin{array}{rcl} \|{\Gamma }_{1}F({x}_{1})\|& \leq & \|{\Gamma }_{1}\|\|F({x}_{1})\| \\ & \leq & \frac{1} {2}f({a}_{0})\|{\Gamma }_{0}\|M{a}_{0}\left (K\theta \left (1 + \frac{{a}_{0}} {2} \right ) \frac{1} {{M}^{2}\|{\Gamma }_{0}\|} + \frac{4 + {a}_{0}} {4} \right )\|{\Gamma }_{0}F{({x}_{0})\|}^{2} \\ \end{array}$$

or

$$\begin{array}{rcl} \|{\Gamma }_{1}F({x}_{1})\|& \leq & \frac{f({a}_{0})} {2} {a}_{0}^{2}\left [K\theta \left (1 + \frac{{a}_{0}} {2} \right ) \frac{1} {{M}^{2}m} + \left (1 + \frac{{a}_{0}} {4} \right )\right ]\|{\Gamma }_{0}F({x}_{0})\| \\ & \leq & \frac{f({a}_{0})} {8} {a}_{0}^{2}(4 + {a}_{ 0})\left (1 + \frac{2K\theta } {{M}^{2}m}\right )\|{\Gamma }_{0}F({x}_{0})\| \\ & =& f({a}_{0})g({a}_{0})\|{\Gamma }_{0}F({x}_{0})\| \\ \end{array}$$

and (II 1) is true. To prove (III 1) notice that

$$\begin{array}{rcl} M\|{\Gamma }_{1}\|\|{\Gamma }_{1}F({x}_{1})\|& \leq & Mf({a}_{0})\|{\Gamma }_{0}\|f({a}_{0})g({a}_{0})\|{\Gamma }_{0}F({x}_{0})\| \\ & \leq & {f}^{2}({a}_{ 0})g({a}_{0}){a}_{0} = {a}_{1} \\ \end{array}$$

and

$$\begin{array}{rcl} \|{x}_{2} - {x}_{1}\|& \leq & \|{y}_{1} - {x}_{1}\| + \frac{1} {2}M\|{\Gamma }_{1}\|\|{\Gamma }_{1}F({x}_{1})\|\|{y}_{1} - {x}_{1}\| \\ & \leq & \left (1 + \frac{{a}_{1}} {2} \right )\|{y}_{1} - {x}_{1}\| = \left (1 + \frac{{a}_{1}} {2} \right )\|{\Gamma }_{1}F({x}_{1})\|,\end{array}$$
(14)

and (IV 1) is true. Using

$$\begin{array}{rcl} \|{x}_{1} - {x}_{0}\|& \leq & \|{y}_{0} - {x}_{0}\| + \frac{1} {2}\|{\Gamma }_{0}\|M\|{\Gamma }_{0}F({x}_{0})\|\|{y}_{0} - {x}_{0}\| \\ & \leq & \left (1 + \frac{{a}_{0}} {2} \right )\|{y}_{0} - {x}_{0}\| \\ & \leq & \left (1 + \frac{{a}_{0}} {2} \right )\eta < \frac{1 + {a}_{0}/2} {1 - \gamma \Delta } \eta = R\eta \\ \end{array}$$

and

$$\begin{array}{rcl} \|{y}_{1} - {x}_{0}\|& \leq & \|{y}_{1} - {x}_{1}\| +\| {x}_{1} - {x}_{0}\| \leq \left ( \frac{\gamma } {f({a}_{0})} + 1 + \frac{{a}_{0}} {2} \right )\eta \\ & =& \left (1 + \frac{{a}_{0}} {2} \right )\left (1 + \frac{\Delta \gamma } {1 + {a}_{0}/2}\right )\eta < \left (1 + \frac{{a}_{0}} {2} \right )(1 + \Delta \gamma )\eta \\ & <& \frac{1 + {a}_{0}/2} {1 - \gamma \Delta } \eta = R\eta \\ \end{array}$$

and (14) we have

$$\|{x}_{2} - {x}_{0}\| \leq \| {x}_{2} - {x}_{1}\| +\| {x}_{1} - {x}_{0}\| \leq R\eta.$$

Thus, \({y}_{1},{x}_{2} \in \overline{B({x}_{0},R\eta )}\) and (V 1) is true. Now, following an inductive procedure and assuming

$${y}_{n},{x}_{n+1} \in \Omega \mbox{ and }\left (1 + \frac{{a}_{n}} {2} \right ){a}_{n} < 1,\mbox{ }n \in \mathcal{N},$$
(15)

the items \(({I}_{n}) - ({V }_{n})\) are proved.

Notice that \(\|{\Gamma }_{n}\| > 0\) for all n = 0, 1, …. Indeed, if \(\|{\Gamma }_{k}\| = 0\) for some k, then due to statement (I n ) we would have \(\|{\Gamma }_{n}\| = 0\) for all n ≥ k. As a consequence, the iterations (2), (3) and (4) would terminate after the kth step, i.e., the convergence of the iterations would not hold. To establish the convergence of {x n } we only have to prove that it is a Cauchy sequence and that the above assumptions (15) are true. We note that

$$\begin{array}{rcl} \left (1 + \frac{{a}_{n}} {2} \right )\|{\Gamma }_{n}F({x}_{n})\|& \leq & \left (1 + \frac{{a}_{0}} {2} \right )f({a}_{n-1})g({a}_{n-1})\|{\Gamma }_{n-1}F({x}_{n-1})\| \\ & \leq & \left (1 + \frac{{a}_{0}} {2} \right )\|{\Gamma }_{0}F({x}_{0})\|\prod \limits_{k=0}^{n-1}f({a}_{ k})g({a}_{k}).\end{array}$$

As a consequence of Lemma 4 it follows that

$$\prod \limits_{k=0}^{n-1}f({a}_{ k})g({a}_{k}) \leq \prod \limits_{k=0}^{n-1}{\gamma }^{{3}^{k} }\Delta = {\Delta }^{n}{\gamma }^{1+3+{3}^{2}+\cdots +{3}^{n-1} } = {\Delta }^{n}{\gamma }^{\frac{{3}^{n}-1} {2} }.$$

So, from Δ < 1 and γ < 1, we deduce that \({\prod \limits }_{k=0}^{n-1}f({a}_{k})g({a}_{k})\) converges to zero as \(n \rightarrow \infty \).

We are now ready to state the main result on the convergence of (2).

Theorem 1.

Let us assume that \({\Gamma }_{0} = F^{\prime}{({x}_{0})}^{-1} \in \mathbf{L}(Y,X)\) exists at some x 0 ∈ Ω and \(({c}_{1}) - ({c}_{4})\) are satisfied. Suppose that

$$0 < {a}_{0} < \frac{1} {2d},\mbox{ with}\quad d = 1 + 2\omega,\quad \omega = \frac{K} {{M}^{2}m}.$$
(16)

Then, if \(\overline{B({x}_{0},R\eta )} =\{ x \in X :\| x - {x}_{0}\| \leq R\eta \} \subseteq \Omega \) , the sequence {x n } defined in (2) and starting at x 0 has at least R-order three and converges to a solution x ∗ of Eq. (1). In that case, the solution x ∗ and the iterates \({x}_{n},{y}_{n}\) belong to \(\overline{B({x}_{0},R\eta )},\) and x ∗ is the only solution of Eq. (1) in \(B({x}_{0}, \frac{2} {M\beta } - R\eta ) \cap \Omega \) . Furthermore, we have the following error estimate:

$$\|{x}^{{_\ast}}- {x}_{ n}\| \leq \left (1 + \frac{{a}_{0}} {2} {\gamma }^{\frac{{3}^{n}-1} {2} }\right ){\gamma }^{\frac{{3}^{n}-1} {2} } \frac{{\Delta }^{n}} {1 - \Delta {\gamma }^{{3}^{n} }}\eta.$$
(17)

The proof is analogous to that of Theorem 3.1 in [7, 8].
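To see the R-order-three speed concretely, the a priori bound (17) can be evaluated directly (a sketch with the same hypothetical constants as before):

f = lambda x: 2.0 / (2 - 2*x - x**2)
g = lambda x, d: x**2 * (4 + x) / 8 * d

a0, eta, d = 0.1, 0.1, 2.25                          # hypothetical values
gamma = f(a0)**2 * g(a0, d)
Delta = 1 / f(a0)
for n in range(4):
    G = gamma**((3**n - 1) / 2)
    bound = (1 + a0/2 * G) * G * Delta**n / (1 - Delta * gamma**(3**n)) * eta
    print(n, bound)                                  # shrinks like gamma**((3**n-1)/2)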

5 Convergence Study of Modifications of the Chebyshev Method

The convergence of the proposed modifications (3) and (4) is studied analogously to that of the Chebyshev method; the only difference is in the proof of assumption (II n ). Therefore we turn our attention only to the proof of assumption (II n ). First we consider the modification (3). If \({x}_{n},{y}_{n} \in \Omega \), we obtain from Taylor's formula

$$F({x}_{n}) = -\frac{1} {2}{F}^{{\prime\prime}}({\eta }_{ n-1}){({y}_{n-1} - {x}_{n-1})}^{2} + \frac{1} {2}{F}^{{\prime\prime}}({\xi }_{ n}){({x}_{n} - {x}_{n-1})}^{2},$$
(18)

where

$$\begin{array}{rcl}{ \eta }_{n-1}& =& (1 - w){x}_{n-1} + w{z}_{n-1}, \\ {\xi }_{n}& =& \overline{\theta }{x}_{n} + (1 -\overline{\theta }){x}_{n-1},\quad 0 < w,\overline{\theta } < 1.\end{array}$$

According to (3) we have

$${x}_{n} - {x}_{n-1} = \left (I - \frac{1} {2\theta }{\Gamma }_{n-1}(F^{\prime}({z}_{n-1}) - F^{\prime}({x}_{n-1}))\right )({y}_{n-1} - {x}_{n-1}).$$

Substituting the last expression into (18) we get

$$\begin{array}{rcl} F({x}_{n})& =& \frac{1} {2}\left ({F}^{{\prime\prime}}({\xi }_{ n}) - {F}^{{\prime\prime}}({\eta }_{ n-1})\right ){({y}_{n-1} - {x}_{n-1})}^{2} \\ & & +\frac{1} {2}{F}^{{\prime\prime}}({\xi }_{ n})\left [-\frac{1} {\theta }{\Gamma }_{n-1}(F^{\prime}({z}_{n-1}) - F^{\prime}({x}_{n-1})){({y}_{n-1} - {x}_{n-1})}^{2}\right. \\ & & \left.+ \frac{1} {4{\theta }^{2}}{\Gamma }_{n-1}^{2}{(F^{\prime}({z}_{ n-1}) - F^{\prime}({x}_{n-1}))}^{2}{({y}_{ n-1} - {x}_{n-1})}^{2}\right ]. \end{array}$$
(19)

Then, for n = 1, if \({y}_{0} \in \Omega \), we have

$$\begin{array}{rcl} \|F({x}_{1})\|& \leq & \left [\frac{K} {2} \|{\xi }_{1} - {\eta }_{0}\| + \frac{M} {2\theta }\|{\Gamma }_{0}\|M\theta \|{y}_{0} - {x}_{0}\|\right. \\ & & \left.+ \frac{M} {8{\theta }^{2}}\|{\Gamma {}_{0}}\|^{2}{M}^{2}{\theta }^{2}\|{y}_{ 0} - {x{}_{0}}\|^{2}\right ]\|{y}_{ 0} - {x{}_{0}}\|^{2}.\end{array}$$

Since \({\xi }_{1} - {\eta }_{0} = \overline{\theta }({x}_{1} - {x}_{0}) - w\theta ({y}_{0} - {x}_{0})\), it follows

$$\|{\xi }_{1} - {\eta }_{0}\| \leq \overline{\theta }\|{x}_{1} - {x}_{0}\| + w\theta \|{y}_{0} - {x}_{0}\| \leq \left (\overline{\theta }\left (1 + \frac{{a}_{0}} {2} \right ) + w\theta \right )\|{y}_{0} - {x}_{0}\|.$$

If we take \(\hat{\theta } =\max (\overline{\theta },w\theta )\), then we get the following estimate:

$$\begin{array}{rcl} \|F({x}_{1})\|& \leq & \left \{K\hat{\theta }\left (1 + \frac{{a}_{0}} {2} \right )\frac{{M}^{2}\|{\Gamma }_{0}\|} {{M}^{2}\|{\Gamma }_{0}\|}\|{\Gamma }_{0}F{({x}_{0})}\|^{2} + \frac{{M}^{2}\|{\Gamma }_{ 0}\|} {2} \|{\Gamma }_{0}F{({x}_{0})\|}^{2}\right. \\ & & \left.+\frac{{M}^{3}} {8} \|{\Gamma {}_{0}}\|^{2}\|{\Gamma }_{ 0}F{({x}_{0})}\|^{3}\right \}\|{\Gamma }_{ 0}F({x}_{0})\|.\end{array}$$

Therefore, we have

$$\|{\Gamma }_{1}F({x}_{1})\| \leq f({a}_{0})g({a}_{0})\|{\Gamma }_{0}F({x}_{0})\|,\quad g({a}_{0}) = \frac{{a}_{0}^{2}(4 + {a}_{0})} {8} {d}_{1}\mbox{ with }{d}_{1} = 1 + 2.5\omega.$$

Analogously, for the modification (4), we have

$$\begin{array}{rcl} F({x}_{n})& =& -\frac{1} {2}\left [\left (1 + \frac{b} {2}\right ){F}^{{\prime\prime}}({\eta }_{ n-1}) -\frac{b} {2}{F}^{{\prime\prime}}({\zeta }_{ n-1})\right ]{({y}_{n-1} - {x}_{n-1})}^{2} \\ & & +\frac{{F}^{{\prime\prime}}({\xi }_{n})} {2} {({x}_{n} - {x}_{n-1})}^{2}, \end{array}$$
(20)
$$\begin{array}{rcl} {\xi }_{n}& =& \alpha {x}_{n-1} + (1 - \alpha ){x}_{n},\qquad \alpha \in (0,1), \\ {\eta }_{n-1}& =& \theta {x}_{n-1} + (1 - \theta ){y}_{n-1},\quad \theta \in (0,1), \\ {\zeta }_{n-1}& =& w{x}_{n-1} + (1 - w){z}_{n-1},\quad w \in (0,1).\end{array}$$

Notice that

$$\begin{array}{rcl} {\xi }_{n} - {\eta }_{n-1}& =& (1 - \theta )({x}_{n-1} - {y}_{n-1}) + \rho ({x}_{n} - {x}_{n-1}) \\ & =& (\rho - (1 - \theta ))({y}_{n-1} - {x}_{n-1}) -\frac{\rho } {2}{\Gamma }_{n-1}{D}_{n}{({y}_{n-1} - {x}_{n-1})}^{2}, \\ {\eta }_{n-1} - {\zeta }_{n-1}& =& (1 - w)({x}_{n-1} - {z}_{n-1}) + \lambda ({y}_{n-1} - {x}_{n-1}) \\ & =& (1 - w + \lambda )({y}_{n-1} - {x}_{n-1}), \\ \mbox{ where }\rho & =& 1 - \alpha,\qquad \lambda = 1 - \theta, \\ \end{array}$$
$$\begin{array}{rcl}{ x}_{n} - {x}_{n-1}& =& \left [I -\frac{1} {2}{\Gamma }_{n-1}\left (\left (1 + \frac{b} {2}\right ){F}^{{\prime\prime}}({\eta }_{ n-1}) -\frac{b} {2}{F}^{{\prime\prime}}({\zeta }_{ n-1})\right )({y}_{n-1} - {x}_{n-1})\right ] \\ & & \times ({y}_{n-1} - {x}_{n-1}).\end{array}$$

Substituting the last expression into (20) we have

$$\begin{array}{rcl} F({x}_{n})& =& \frac{1} {2}{B}_{n}{({y}_{n-1} - {x}_{n-1})}^{2} -\frac{{F}^{{\prime\prime}}({\xi }_{ n})} {2} {\Gamma }_{n-1}{D}_{n}{({y}_{n-1} - {x}_{n-1})}^{3} \\ & & +\frac{{F}^{{\prime\prime}}({\xi }_{n})} {8} {\Gamma }_{n-1}^{2}{D}_{ n}^{2}{({y}_{ n-1} - {x}_{n-1})}^{4}, \end{array}$$
(21)

where

$$\begin{array}{rcl} {B}_{n}& =& {F}^{{\prime\prime}}({\xi }_{ n}) - {F}^{{\prime\prime}}({\eta }_{ n-1}) -\frac{b} {2}\left ({F}^{{\prime\prime}}({\eta }_{ n-1}) - {F}^{{\prime\prime}}({\zeta }_{ n-1})\right ), \\ {D}_{n}& =& \left (1 + \frac{b} {2}\right ){F}^{{\prime\prime}}({\eta }_{ n-1}) -\frac{b} {2}{F}^{{\prime\prime}}({\zeta }_{ n-1}).\end{array}$$

If \({\xi }_{n},{\eta }_{n-1},{\zeta }_{n-1} \in \Omega \) then we have

$$\begin{array}{rcl} \|{B}_{n}\|& \leq & K\left [\vert \rho - (1 - \theta )\vert -\frac{b} {2}(1 - w + \lambda )\right ]\|{y}_{n-1} - {x}_{n-1}\| \\ & & +\frac{K\|{\Gamma }_{n-1}\|\rho } {2} M\|{y}_{n-1} - {x{}_{n-1}}\|^{2}, \\ \|{D}_{n}\|& \leq & M.\end{array}$$

Using these expressions we get

$$\|{\Gamma }_{n}F({x}_{n})\| \leq f({a}_{n-1})\frac{{a}_{n-1}^{2}} {2} \left \{ \frac{K} {{M}^{2}m}\hat{d} + \left (1 + \frac{{a}_{n-1}} {4} \right )\right \}\|{\Gamma }_{n-1}F({x}_{n-1})\|,$$

where

$$\hat{d} = \vert \rho - (1 - \theta )\vert -\frac{b} {2}(1 - w + \lambda ) + \rho {a}_{n-1} < 3 + {a}_{n-1} < 4\left (1 + \frac{{a}_{n-1}} {4} \right )\!,$$
$$\mbox{ since }\quad \vert \rho - \lambda \vert < 1\qquad \mbox{ and }\qquad 0 < 1 - w + \lambda < 2.$$

Then we obtain

$$\begin{array}{rcl} \|{\Gamma }_{n}F({x}_{n})\|& \leq & f({a}_{n-1})g({a}_{n-1})\|{\Gamma }_{n-1}F({x}_{n-1})\| \\ g({a}_{n-1})& =& \frac{{a}_{n-1}^{2}(4 + {a}_{n-1})} {8} {d}_{2},\qquad {d}_{2} = 1 + 4\omega. \end{array}$$

For the modifications (3) and (4), the cubic convergence Theorem 1 remains valid, with d equal to \(1 + 2.5\omega \) and \(1 + 4\omega \) , respectively.

It should be mentioned that a family of predictor-corrector methods free from the second derivative was constructed in [4]. However, these methods, except for the case A20, require a higher computational cost, even compared with the modification (3).

6 Mild Convergence Conditions

In order to obtain mild convergence conditions for these methods, we first consider the inexact Newton method (IN) for (1):

$$F^{\prime}({x}_{k}){s}_{k} = -F({x}_{k}) + {r}_{k},$$
(22)
$${x}_{k+1} = {x}_{k} + {s}_{k},\quad k = 0,1,\ldots,{x}_{0} \in \Omega $$
(23)

The terms \({r}_{k} \in {R}^{n}\) represent the residuals of the approximate solutions s k [3, 4]. We consider a local convergence result [3, 4]:

Theorem 2.

Given \({\eta }_{k} \leq \overline{\eta } < t < 1,k = 0,1,\ldots \) , there exists \(\epsilon > 0\) such that for any initial approximation x 0 with \(\|{x}_{0} - {x}^{{_\ast}}\|\leq \epsilon,\) the sequence of IN iterates (22)–(23) satisfying

$$\|{r}_{k}\| \leq {\eta }_{k}\|F({x}_{k})\|,\qquad k = 0,1,\ldots $$
(24)

converges to x ∗ .

Moreover, it is known that IN converges superlinearly when \({\eta }_{k} \rightarrow 0\) as k → ∞. Now we analyze the connection between the inexact Newton method and the Chebyshev method (2) and its modifications (3) and (4). To this end we rewrite (2)–(4) in the form (22) with

$$\begin{array}{rcl}{ r}_{k}& =& F^{\prime}({x}_{k}){s}_{k} + F({x}_{k}) = -\frac{1} {2}{F}^{{\prime\prime}}({x}_{ k}){({y}_{k} - {x}_{k})}^{2}, \\ {r}_{k}& =& -\frac{1} {2\theta }(F^{\prime}({z}_{k}) - F^{\prime}({x}_{k}))({y}_{k} - {x}_{k}), \\ \end{array}$$

and

$${r}_{k} = -\left (\left (1 + \frac{b} {2}\right )F({y}_{k}) + bF({x}_{k}) -\frac{b} {2}F({z}_{k})\right )\!,$$

respectively.
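In particular, the forcing terms \({\eta }_{k} = \|{r}_{k}\|/\|F({x}_{k})\|\) can be monitored at run time. A minimal sketch for modification (4) on the scalar test equation (I) of Sect. 7, written as a one-dimensional system (the choices b = −1 and x 0 = 1.5 are ours):

import numpy as np

F = lambda x: np.array([x[0]**3 - 10.0])
J = lambda x: np.array([[3*x[0]**2]])

b, x = -1.0, np.array([1.5])
for k in range(6):
    Fx, Jx = F(x), J(x)
    nF = np.linalg.norm(Fx)
    if nF < 1e-15:
        break
    step = np.linalg.solve(Jx, Fx)
    y, z = x - step, x + step
    rhs = (1 + b/2)*F(y) + b*Fx - (b/2)*F(z)         # so r_k = -rhs
    print(k, np.linalg.norm(rhs) / nF)               # eta_k, tends to zero
    x = y - np.linalg.solve(Jx, rhs)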

Theorem 3.

Let us assume that \({\Gamma }_{0} = F^{\prime}{({x}_{0})}^{-1} \in \mathcal{L}(Y,X)\) exists at some x 0 ∈ Ω, and that the conditions (c 1 )–(c 3 ) are satisfied. Suppose that \(0 < {a}_{0} < 0.5\) . Then the sequences {x k } given by (2), (3) and (4) converge to x ∗ .

Proof.

We first observe that the sequence {a k } defined by (6) and (7) with d = 1 is decreasing, i.e.,

$$0 < {a}_{k+1} < {a}_{k} < \cdots < {a}_{1} < {a}_{0} < \frac{1} {2}.$$
(25)

It is easy to show that the residuals r k of all the methods (2), (3) and (4) satisfy the following estimate:

$$\|{r}_{k}\| \leq \frac{{a}_{k}} {2} \|F({x}_{k})\|,\qquad \left ({\eta }_{k} = \frac{{a}_{k}} {2} \right ).$$
(26)

From (25) and (26) it follows that \({\eta }_{k} \rightarrow 0\) as k → ∞. Then, by Theorem 2, the methods (2)–(4) converge to x ∗ . □ 

The assumptions in Theorem 3 are milder than the cubic convergence condition of Theorem 1, where d > 1.

7 Numerical Results and Discussion

Now, we give some numerical examples that confirm the theoretical results. First, we consider the following test equations:

$$\begin{array}{rcl} (I)& & {x}^{3} - 10 = 0, \\ (II)& & {x}^{3} + 4{x}^{2} - 10 = 0, \\ (III)& & ln(x) = 0, \\ (IV )& & {\sin }^{2}x - {x}^{2} + 1 = 0.\end{array}$$

All computations are carried out in double-precision arithmetic, and the number of iterations needed to reach \(\|F({x}_{n})\| \leq 1.0e - 16\) is tabulated (see Table 1). We see that the third-order methods MOD 1 and MOD 2 take fewer iterations to converge than the second-order Newton's method (NM).

Table 1 Number of iterations
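This kind of comparison is easy to reproduce. A sketch for test equation (II) (our own illustrative re-implementation; we stop at \(\|F({x}_{n})\| \leq 1.0e - 12\), since the threshold 1.0e − 16 generally requires extended precision in this setting, and the starting point x 0 = 1 is our choice):

F  = lambda x: x**3 + 4*x**2 - 10                    # test equation (II)
dF = lambda x: 3*x**2 + 8*x

def newton(x, tol=1e-12, itmax=100):
    for k in range(itmax):
        if abs(F(x)) <= tol:
            return k
        x -= F(x) / dF(x)
    return itmax

def mod2(x, b=-2.0, tol=1e-12, itmax=100):           # modification (4)
    for k in range(itmax):
        Fx = F(x)
        if abs(Fx) <= tol:
            return k
        step = Fx / dF(x)
        y, z = x - step, x + step
        x = y - ((1 + b/2)*F(y) + b*Fx - (b/2)*F(z)) / dF(x)
    return itmax

print(newton(1.0), mod2(1.0))                        # the third-order method needs fewer steps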

Now we consider the following systems of equations:

$$(V )\quad F(x) = \left (\begin{array}{c} {x}_{1}^{2} - {x}_{2} + 1 \\ {x}_{1} +\cos \left (\frac{\pi } {2} {x}_{2}\right ) \end{array} \right ) = 0,$$
$$(V I)\quad F(x) = \left (\begin{array}{l} {x}_{1}{x}_{3} + {x}_{2}{x}_{4} + {x}_{3}{x}_{5} + {x}_{4}{x}_{6} \\ {x}_{1}{x}_{5} + {x}_{2}{x}_{6} \\ {x}_{1} + {x}_{3} + {x}_{5} - 1 \\ - {x}_{1} + {x}_{2} - {x}_{3} + {x}_{4} - {x}_{5} + {x}_{6} \\ - 3{x}_{1} - 2{x}_{2} - {x}_{3} + {x}_{5} + 2{x}_{6} \\ 3{x}_{1} - 2{x}_{2} + {x}_{3} - {x}_{5} + 2{x}_{6} \end{array} \right ) = 0.$$
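For instance, modification (4) with b = −2 applied to system (V) can be sketched as follows (the starting point (0.9, 1.9) is our illustrative choice):

import numpy as np

F = lambda x: np.array([x[0]**2 - x[1] + 1,
                        x[0] + np.cos(np.pi/2 * x[1])])
J = lambda x: np.array([[2*x[0], -1.0],
                        [1.0, -np.pi/2 * np.sin(np.pi/2 * x[1])]])

x, b = np.array([0.9, 1.9]), -2.0
for k in range(10):
    Fx = F(x)
    if np.linalg.norm(Fx) <= 1e-12:
        break
    Jx = J(x)
    step = np.linalg.solve(Jx, Fx)
    y, z = x - step, x + step
    x = y - np.linalg.solve(Jx, (1 + b/2)*F(y) + b*Fx - (b/2)*F(z))
print(x, k)                                          # approaches the root (1, 2)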

As seen from Tables 1–3, the proposed modifications (MOD 1, MOD 2) are almost always superior to their classical predecessor, the Chebyshev method (CM): their order of convergence is the same as that of CM, but they are simpler and free from the second derivative.

Table 3 The number of iterations

We also compared the computational cost of the two modifications with that of the classical NM (see Table 2). The numerical results show that MOD 2 is the most effective method, especially when \(b = -2\) or b = 0.

Table 2 The computational cost of the methods

8 Conclusion

In this chapter we proposed two new families of methods which include many well-known third-order methods as particular cases. We proved a third-order convergence theorem for these modifications as well as for the Chebyshev method. The new methods were compared by their performance with Newton's method and the Chebyshev method, and they showed better performance than NM and CM.