
1 Introduction

Finding zeros of nonlinear functions by iterative methods is an important problem with interesting applications in different branches of science, in particular physics and engineering [20, 22, 27], such as fluid dynamics, nuclear systems, and dynamic economic systems. In mathematics, iterative methods are also needed to obtain rapid solutions of special integrals and differential equations. Recently, many numerical iterative methods have been developed to solve these problems, see [1, 5, 6, 14, 16, 19, 27, 30]. These methods have been suggested and analyzed by using a variety of techniques, such as Taylor series. We first looked for a good approximation of \(f^{\prime}(y_{n})\), which is used in many iterative methods. We obtained this approximation by combining two well-known methods, the Potra–Ptak [23] and Weerakoon–Fernando [28] methods. Then, we used Homeier's method [12] together with this approximation to introduce the first method, which we call the Variant of Homeier Method 1 (VHM1). Finally, we used a predictor–corrector technique to improve the first method (VHM1), and we call the result the Variant of Homeier Method 2 (VHM2). We show that the new iterative methods are of third order of convergence, have Efficiency Index [20] E.I. = 1.4422, and are very robust and competitive with other third-order iterative methods.
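Here the efficiency index is understood in the usual sense used later in this paper (cf. the Notes in Sect. 2.3), namely \(E.I. = p^{1/r}\), where \(p\) is the order of convergence and \(r\) is the number of function and derivative evaluations per iteration; for a third-order method using three evaluations per iteration this gives

$$E.I. = p^{1/r} = 3^{1/3} \approx 1.4422.$$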

2 The Established Methods

For the purpose of comparison, three 2-step third-order methods and two 1-step third-order methods are considered, together with the classical Newton method. Since these methods are well established, we state only the essential formulas used to compute a simple zero of a nonlinear function, so that the effectiveness of the proposed 2-step third-order methods can be compared.

Newton Method [3, 4, 9, 20, 22, 24, 27, 29].

The best-known 1-step iterative zero-finding method is

$$x_{n + 1} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}},n = 0,1,2, \ldots$$
(1)
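As an illustration of how these schemes are used in practice, the following is a minimal Python sketch of the Newton iteration (1). The function names, the tolerance, and the stopping rule are illustrative choices and not taken from this paper (the paper's computations were carried out in MATLAB).

```python
def newton(f, df, x0, tol=1e-15, max_iter=100):
    """Newton's method, Eq. (1): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)          # one f and one f' evaluation per step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x**3 - 10 = 0 starting from x0 = 2
root = newton(lambda x: x**3 - 10, lambda x: 3 * x**2, 2.0)
```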

Halley Method [7, 8, 10, 11, 15, 24]:

$$x_{n + 1} = x_{n} - \frac{{2f(x_{n} )f^{\prime}(x_{n} )}}{{2f^{{\prime}{2}} (x_{n} ) - f(x_{n} )f^{\prime\prime}(x_{n} )}},n = 0,1,2, \ldots$$
(2)

This is the widely known Halley's method. It is a cubically convergent (p = 3) 1-step zero-finding algorithm. It requires three function evaluations per iteration (r = 3), and its E.I. = 1.4422.

Householder method [13, 24]:

$$x_{n + 1} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\left\{ {1 + \frac{{f(x_{n} )f^{\prime\prime}(x_{n} )}}{{2f^{{\prime}{2}} (x_{n} )}}} \right\},n = 0,1,2,\ldots$$
(3)

Householder's method is also a cubically convergent (p = 3) 1-step zero-finding algorithm. It requires three function evaluations (r = 3), and its E.I. = 1.4422.
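For reference, a minimal Python sketch of single iteration steps of Halley's method (2) and Householder's method (3); here `d2f` denotes a user-supplied second derivative, and all names are illustrative rather than taken from the paper.

```python
def halley_step(f, df, d2f, x):
    """One step of Halley's method, Eq. (2); uses f, f' and f''."""
    fx, dfx = f(x), df(x)
    return x - 2 * fx * dfx / (2 * dfx**2 - fx * d2f(x))

def householder_step(f, df, d2f, x):
    """One step of Householder's method, Eq. (3); uses f, f' and f''."""
    fx, dfx = f(x), df(x)
    return x - fx / dfx * (1 + fx * d2f(x) / (2 * dfx**2))
```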

Weerakoon and Fernando Method [21, 28]:

$$x_{n + 1} = x_{n} - \frac{{2f(x_{n} )}}{{f^{\prime}(x_{n} ) + f^{\prime}(y_{n} )}},n = 0,1,2,\ldots\;{\text{where,}}$$
(4)
$$y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}$$

Obviously, the underlying scheme is implicit: it requires the derivative of the function at the \((n+1)\)th iterate in order to calculate the \((n+1)\)th iterate itself. Weerakoon and Fernando overcome this difficulty by using Newton's iterative step \(y_{n}\) in place of the \((n+1)\)th iterate on the right-hand side.

This scheme has also been derived by Ozban by using the arithmetic mean of \(f^{\prime}(x_{n})\) and \(f^{\prime}(y_{n})\) instead of \(f^{\prime}(x_{n})\) in Newton's method (1), i.e., \(\left( f^{\prime}(x_{n}) + f^{\prime}(y_{n}) \right)/2\).

The Weerakoon–Fernando method is also a cubically convergent (p = 3) 2-step zero-finding algorithm. It requires three function evaluations \((r = 3)\), and its \(E.I. = 1.4422\).

Homeier Method [12, 21]:

$$x_{n + 1} = x_{n} - \left( {\frac{{f(x_{n} )}}{2}} \right)\left\{ {\frac{1}{{f^{\prime}(x_{n} )}} + \frac{1}{{f^{\prime}(y_{n} )}}} \right\},n = 0,1,2,\ldots$$
(5)

where \(y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\).

Homeier's method is also a cubically convergent (p = 3) 2-step zero-finding algorithm. It requires three function evaluations (r = 3), and its E.I. = 1.4422.

Potra-Ptak Method [23, 25]:

$$x_{n + 1} = x_{n} - \frac{{f(x_{n} ) + f(y_{n} )}}{{f^{\prime}(x_{n} )}},$$
(6)

where

$$y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}$$

Potra–Ptak's method is also a cubically convergent (p = 3) 2-step zero-finding algorithm. It requires three evaluations per iteration, namely two function evaluations and one first-derivative evaluation (r = 3), and its E.I. = 1.4422.
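Similarly, single iteration steps of the three established 2-step methods (4)–(6) can be sketched in Python as follows (again with illustrative names; each step uses the Newton predictor \(y_{n}\)).

```python
def weerakoon_step(f, df, x):
    """One step of the Weerakoon-Fernando method, Eq. (4)."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                      # Newton predictor
    return x - 2 * fx / (dfx + df(y))

def homeier_step(f, df, x):
    """One step of Homeier's method, Eq. (5)."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    return x - fx / 2 * (1 / dfx + 1 / df(y))

def potra_ptak_step(f, df, x):
    """One step of the Potra-Ptak method, Eq. (6)."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    return x - (fx + f(y)) / dfx
```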

2.1 Construction of the New Methods

In this section, we first define a new third-order method for finding zeros of a nonlinear function. We do so by combining two well-known methods to obtain a new one. In fact, the new iterative method is an improvement of the classical Homeier method, and it will be our first algorithm.

Secondly, we improve our first algorithm by considering a three-step iterative method per full cycle. To do that, we perform a Newton iteration at the new third step. We use a third variable \(z_{n}\) for the third step, which we approximate later.

First, we equate (combine) the two methods (4) and (6) to obtain an approximation of \(f^{\prime}(y_{n})\):

$$\frac{{2f(x_{n} )}}{{f^{\prime}(x_{n} ) + f^{\prime}(y_{n} )}} \approx \frac{{f(x_{n} ) + f(y_{n} )}}{{f^{\prime}(x_{n} )}}$$
$$\begin{gathered} 2f(x_{n} )f^{\prime}(x_{n} ) \approx \left[ {f(x_{n} ) + f(y_{n} )} \right]\left\{ {f^{\prime}(x_{n} ) + f^{\prime}(y_{n} )} \right\} \hfill \\ \hfill \\ 2f(x_{n} )f^{\prime}(x_{n} ) \approx \left[ {f(x_{n} ) + f(y_{n} )} \right]f^{\prime}(x_{n} ) + \left[ {f(x_{n} ) + f(y_{n} )} \right]f^{\prime}(y_{n} ) \hfill \\ \hfill \\ \frac{{2f(x_{n} )f^{\prime}(x_{n} ) - \left[ {f(x_{n} ) + f(y_{n} )} \right]f^{\prime}(x_{n} )}}{{\left[ {f(x_{n} ) + f(y_{n} )} \right]}} \approx f^{\prime}(y_{n} ) \hfill \\ \end{gathered}$$
$$f^{\prime}(y_{n} ) \approx \frac{{f(x_{n} ) - f(y_{n} )}}{{f(x_{n} ) + f(y_{n} )}}f^{\prime}(x_{n} )$$
(7)

Now we substitute (7) into (5) in order to get the first algorithm:

$$x_{n + 1} = x_{n} - \frac{{f(x_{n} )}}{2}*\left\{ {\frac{1}{{f^{\prime}(x_{n} )}} + \frac{1}{{\frac{{f(x_{n} ) - f(y_{n} )}}{{f(x_{n} ) + f(y_{n} )}}f^{\prime}(x_{n} )}}} \right\}$$

Thus, we get the first algorithm, Algorithm (1), which we call the Variant of Homeier Method 1 (VHM1).

For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the iterative scheme

$$y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}, f^{\prime}(x_{n} ) \ne 0$$
$$x_{n + 1} = x_{n} - \frac{{f^{2} (x_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]f^{\prime}(x_{n} )}},\quad {\text{for}}\;n = 0, 1, 2, \ldots$$
(8)
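A minimal Python sketch of Algorithm (1), i.e., the VHM1 iteration (8), assuming callables `f` and `df` for the function and its first derivative; the tolerance and stopping rule are illustrative assumptions.

```python
def vhm1(f, df, x0, tol=1e-15, max_iter=100):
    """Variant of Homeier Method 1, Eq. (8): two f-evaluations and one f'-evaluation per step."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                           # Newton predictor
        x_new = x - fx**2 / ((fx - f(y)) * dfx)    # VHM1 corrector, Eq. (8)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```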

Now, we need to derive the next algorithm, which will be an improvement of the first one. The main goal is to make the new scheme optimal. We perform a Newton iteration at the new third step, as follows:

$$y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}$$
(9)
$$z_{n} = x_{n} - \frac{{f^{2} (x_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]f^{\prime}(x_{n} )}}$$
(10)
$$x_{n + 1} = z_{n} - \frac{{f(z_{n} )}}{{f^{\prime}(z_{n} )}}.$$
(11)

Now, we try to simplify the new scheme so that it reaches convergence rate three with three function evaluations per full cycle: two function evaluations and one first-derivative evaluation. Obviously, \(f(z_{n})\) and \(f^{\prime}(z_{n})\) should be approximated. We replace \(f^{\prime}(z_{n})\) by \(f^{\prime}(x_{n})\) and write the Taylor expansion of \(f(z_{n})\) about \(x_{n}\) [25].

$$f(z_{n} ) = f(x_{n} ) + f^{\prime}(x_{n} )(z_{n} - x_{n} ) + \frac{1}{2!}f^{\prime\prime}(x_{n} )(z_{n} - x_{n} )^{2}$$
(12)

Now, \(f^{\prime\prime}(x_{n} )\) should be approximated as well. Once again we write the Taylor expansion of \(f(y_{n} )\) about \(x_{n}\) as follows:

$$f(y_{n} ) = f(x_{n} ) + f^{\prime}(x_{n} )(y_{n} - x_{n} ) + \frac{1}{2!}f^{\prime\prime}(x_{n} )(y_{n} - x_{n} )^{2}$$
(13)
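Explicitly, substituting \(y_{n} - x_{n} = -\frac{f(x_{n})}{f^{\prime}(x_{n})}\) from (9) into (13) cancels the first-order terms:

$$f(y_{n} ) = f(x_{n} ) - f(x_{n} ) + \frac{{f^{\prime\prime}(x_{n} )\,f^{2} (x_{n} )}}{{2f^{{\prime}{2}} (x_{n} )}} = \frac{{f^{\prime\prime}(x_{n} )\,f^{2} (x_{n} )}}{{2f^{{\prime}{2}} (x_{n} )}}.$$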

From (13) and (9), we obtain \(f^{\prime\prime}(x_{n} )\) as follows:

$$f^{\prime\prime}(x_{n} ) = \frac{{2f(y_{n} )\left[ {f^{\prime}(x_{n} )} \right]^{2} }}{{\left[ {f(x_{n} )} \right]^{2} }}$$
(14)

Now substituting (14) and (10) into (12), we obtain \(f(z_{n})\) as follows:

$$f(z_{n} ) = \frac{{f(x_{n} )f^{2} (y_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} }}.$$
(15)
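In detail, with \(z_{n} - x_{n} = -\frac{f^{2}(x_{n})}{\left[ f(x_{n}) - f(y_{n}) \right]f^{\prime}(x_{n})}\) from (10) and \(f^{\prime\prime}(x_{n})\) from (14), the right-hand side of (12) becomes

$$f(z_{n} ) = f(x_{n} ) - \frac{{f^{2} (x_{n} )}}{{f(x_{n} ) - f(y_{n} )}} + \frac{{f(y_{n} )f^{2} (x_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} }},$$

and collecting these terms over the common denominator \(\left[ f(x_{n}) - f(y_{n}) \right]^{2}\) yields exactly (15).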

Now, we substitute (10) and (15) into (11), using \(f^{\prime}(x_{n} )\) instead of \(f^{\prime}(z_{n} )\):

$$x_{n + 1} = z_{n} - \frac{{f(z_{n} )}}{{f^{\prime}(z_{n} )}}$$
$$x_{n + 1} = \left\{ {x_{n} - \frac{{f^{2} (x_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]f^{\prime}(x_{n} )}}} \right\} - \frac{{\frac{{f(x_{n} )f^{2} (y_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} }}}}{{f^{\prime}(x_{n} )}}$$
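For completeness, the simplification can be carried out as follows; writing \(f = f(x_{n})\) and \(g = f(y_{n})\) for brevity, the two correction terms share the factor \(\frac{f}{f^{\prime}(x_{n})}\) and combine over the common denominator \((f - g)^{2}\):

$$x_{n + 1} = x_{n} - \frac{f}{{f^{\prime}(x_{n} )}}\left[ {\frac{f}{f - g} + \frac{{g^{2} }}{{(f - g)^{2} }}} \right] = x_{n} - \frac{f}{{f^{\prime}(x_{n} )}} \cdot \frac{{f^{2} - fg + g^{2} }}{{(f - g)^{2} }} = x_{n} - \left\{ {1 + \frac{fg}{{(f - g)^{2} }}} \right\}\frac{f}{{f^{\prime}(x_{n} )}}.$$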

This simplification yields the following new algorithm.

Algorithm (2): we will call it the Variant of Homeier Method 2 (VHM2).

For a given \(x_{0}\), compute the approximate solution \(x_{n + 1}\) by an iterative scheme

$$y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}, \,f^{\prime}(x_{n} ) \ne 0.$$
$$x_{n + 1} = x_{n} - \left\{ {1 + \frac{{f(x_{n} )f(y_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} }}} \right\}*\frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}},\quad {\text{for}}\;n = 0, 1, 2, \ldots$$
(16)

As we can see, both algorithms require only two function evaluations and one first-derivative evaluation per cycle. Compared with Homeier's method, this is an important difference: Homeier's method requires one function evaluation and two first-derivative evaluations.
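A matching Python sketch of Algorithm (2), i.e., the VHM2 iteration (16), under the same illustrative assumptions as the VHM1 sketch above:

```python
def vhm2(f, df, x0, tol=1e-15, max_iter=100):
    """Variant of Homeier Method 2, Eq. (16): two f-evaluations and one f'-evaluation per step."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                                        # Newton predictor
        fy = f(y)
        x_new = x - (1 + fx * fy / (fx - fy)**2) * fx / dfx     # VHM2 corrector, Eq. (16)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```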

2.2 Convergence Criteria of the New Methods

Now, we compute the orders of convergences and corresponding error equations of the proposed methods Algorithms (8) and (16).

Theorem 2.1

Let \(\alpha \in I\) be a simple zero of a sufficiently differentiable function f: I ⊆ R → R for an open interval I. If \(x_{0}\) is sufficiently close to \(\alpha\), then the iterative method defined by Algorithm (1), Eq. (8), is of order three and satisfies the error equation:

$$e_{n + 1} = (2c_{2}^{2} + 2c_{3} )e_{n}^{3} + O(e_{n}^{4} ),$$

where

\(c_{k} = \frac{{f^{(k)} (\alpha )}}{{k!\,f^{\prime}(\alpha )}}\), \(k = 2, 3, \ldots\), and \(e_{n} = x_{n} - \alpha\).

Proof

Let \(\alpha\) be a simple zero of f, so \(f^{\prime}(\alpha ) \ne 0\). Using Taylor series expansion about \(\alpha\) for the nth iterate results in

$$f(x_{n} ) = f^{\prime}(\alpha )e_{n} + \frac{1}{2!}f^{\prime\prime}(\alpha )e_{n}^{2} + \frac{1}{3!}f^{\prime\prime\prime}(\alpha )e_{n}^{3} + O(e_{n}^{4} )$$
$$f(x_{n}^{{}} ) = f^{\prime}(\alpha )\left[ {e_{n} + c_{2} e_{n}^{2} + c_{3} e_{n}^{3} + O(e_{n}^{4} )} \right]$$
(17)
$$f^{\prime}(x_{n}^{{}} ) = f^{\prime}(\alpha )\left[ {1 + 2c_{2} e_{n}^{{}} + 3c_{3} e_{n}^{2} + O(e_{n}^{3} )} \right]$$
(18)

From (17) and (18), we have

$$\frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}} = e_{n} - c_{2} e_{n}^{2} + (2c_{2}^{2} - 2c_{3} )e_{n}^{3} + O(e_{n}^{4} )$$
(19)

But \(y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\), \(e_{n} = x_{n} - \alpha\). Using (19), we get

$$y_{n} = x_{n} - \left\{ {e_{n} - c_{2} e_{n}^{2} + (2c_{2}^{2} - 2c_{3} )e_{n}^{3} + O(e_{n}^{4} )} \right\}$$
$$y_{n} = \alpha + c_{2} e_{n}^{2} + (2c_{3} - 2c_{2}^{2} )e_{n}^{3} + O(e_{n}^{4} )$$
$$(y_{n} - \alpha ) = c_{2} e_{n}^{2} + (2c_{3} - 2c_{2}^{2} )e_{n}^{3} + O(e_{n}^{4} )$$
(20)

Now, expanding \(f(y_{n})\) in a Taylor series about \(\alpha\) once again and using (20):

\(f(y_{n} ) = f(\alpha ) + f^{\prime}(\alpha )(y_{n} - \alpha ) + \frac{f^{\prime\prime}(\alpha )}{{2!}}(y_{n} - \alpha )^{2}\) but \(f(\alpha ) = 0\),

\(f(y_{n} ) = f^{\prime}(\alpha )\left[ {(y_{n} - \alpha ) + \frac{f^{\prime\prime}(\alpha )}{{2!f^{\prime}(\alpha )}}(y_{n} - \alpha )^{2} } \right]\) and \(\frac{f^{\prime\prime}(\alpha )}{{2!f^{\prime}(\alpha )}} = c_{2}\)

$$f(y_{n} ) = f^{\prime}(\alpha )\left[ {c_{2} e_{n}^{2} + (2c_{3} - 2c_{2}^{2} )e_{n}^{3} + O(e_{n}^{4} )} \right]$$
(21)
$$f^{2} (x_{n}^{{}} ) = f^{{\prime}{2}} (\alpha )\left[ {e_{n}^{2} + 2c_{2} e_{n}^{3} + c_{3} e_{n}^{3} + O(e_{n}^{4} )} \right]$$
(22)

From (17) and (21), we get

\(f(x_{n} ) - f(y_{n} ) = f^{\prime}(\alpha )\left[ {e_{n} + (2c_{2}^{2} - c_{3} )e_{n}^{3} } \right]\) and by using (18), we obtain

$$f^{\prime}(x_{n} )\left( {f(x_{n} ) - f(y_{n} )} \right) = f^{{\prime}{2}} (\alpha )\left[ {e_{n} + 2c_{2} e_{n}^{2} + (2c_{2}^{2} + 2c_{3} )e_{n}^{3} + O(e_{n}^{4} )} \right]$$
(23)

Now by (22) and (23), we get

$$\frac{{f^{2} (x_{n} )}}{{f^{\prime}(x_{n} )\left( {f(x_{n} ) - f(y_{n} )} \right)}} = e_{n} - (2c_{2}^{2} + 2c_{3} )e_{n}^{3} + O(e_{n}^{4} )$$
(24)

Putting (24) in the Algorithm (1), Eq. (8), we get

\(x_{n + 1} = x_{n} - \left\{ {e_{n} - (2c_{2}^{2} + 2c_{3} )e_{n}^{3} + O(e_{n}^{4} )} \right\}\), where \(e_{n} = x_{n} - \alpha\)

$$x_{n + 1} = \alpha + (2c_{2}^{2} + 2c_{3} )e_{n}^{3} + O(e_{n}^{4} )$$
(25)

Now, \(e_{n + 1} = x_{n + 1} - \alpha\); substituting (25), we get

\(e_{n + 1} = (2c_{2}^{2} + 2c_{3} )e_{n}^{3} + O(e_{n}^{4} )\) and the proof is completed.

Theorem 2.2

Let \(\alpha \in I\) be a simple zero of a sufficiently differentiable function f: I ⊆ R → R for an open interval I. If \(x_{0}\) is sufficiently close to \(\alpha\), then the iterative method defined by Algorithm (2), Eq. (16), is of order three and satisfies the error equation:

$$e_{n + 1} = (2c_{3} - c_{2}^{2} )e_{n}^{3} + O(e_{n}^{4} ),$$

Proof

Let \(\alpha\) be a simple zero of f, so \(f^{\prime}(\alpha ) \ne 0\). Once again, we can follow the same procedure as in the proof of Theorem 2.1.

Using (17) and (21), we get

$$f(x_{n} )f(y_{n} ) = f^{{\prime}{2}} (\alpha )\left[ {c_{2} e_{n}^{3} } \right]$$
(26)
$$f(x_{n} ) - f(y_{n} ) = f^{\prime}(\alpha )\left[ {e_{n} + (2c_{2}^{2} - c_{3} )e_{n}^{3} } \right]$$
$$\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} = f^{{\prime}{2}} (\alpha )\left[ {e_{n}^{2} } \right]$$
(27)

And then dividing (26) by (27), we get

$$\frac{{f(x_{n} )f(y_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} }} = c_{2} e_{n}$$
(28)

By using (19) and (28), we obtain:

$$\left\{ {1 + \frac{{f(x_{n} )f(y_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} }}} \right\}*\frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}} = e_{n} + (c_{2}^{2} - 2c_{3} )e_{n}^{3}$$
(29)

Now, substituting (29) into Algorithm (2), Eq. (16), we get

$$\begin{aligned} x_{n + 1} &= x_{n} - \left\{ {1 + \frac{{f(x_{n} )f(y_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} }}} \right\}*\frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}} \\ &= x_{n} - \left\{ {e_{n} + (c_{2}^{2} - 2c_{3} )e_{n}^{3} } \right\},\quad {\text{and by }} e_{n} = x_{n} - \alpha {\text{ we get:}} \\ &= \alpha + (2c_{3} - c_{2}^{2} )e_{n}^{3} + O(e_{n}^{4} ) \end{aligned}$$

Now using the previous result in \(e_{n + 1} = x_{n + 1} - \alpha\)

$$e_{n + 1} = \left\{ {\alpha + (2c_{3} - c_{2}^{2} )e_{n}^{3} + O(e_{n}^{4} )} \right\} - \alpha = (2c_{3} - c_{2}^{2} )e_{n}^{3} + O(e_{n}^{4} ),$$

the proof is done.

2.3 More Suggestions

In this section, we present new modifications of important methods for solving nonlinear equations of the type \(f(x) = 0\), using the substitution of formula (7), \(f^{\prime}(y_{n} ) \approx \frac{{f(x_{n} ) - f(y_{n} )}}{{f(x_{n} ) + f(y_{n} )}}f^{\prime}(x_{n} )\), in well-known methods.

As we will see, this substitution is very helpful because it reduces the number of required derivative evaluations in iteration schemes. We introduce only two suggestions as examples and show their rates of convergence.

Example 1

Consider Noor and Gupta's fourth-order method [17, 18].

\(y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\),

$$x_{n + 1} = y_{n} - \frac{{f(y_{n} )}}{{f^{\prime}(y_{n} )}} - \frac{1}{2}\left( {\frac{{f(y_{n} )}}{{f^{\prime}(y_{n} )}}} \right)^{2} *\frac{{f^{\prime}(y_{n} )}}{{f(x_{n} )}}*\frac{{f^{\prime}(y_{n} ) - f^{\prime}(x_{n} )}}{{f^{\prime}(y_{n} )}}$$
(30)

By substituting (7) in (30), we get

$$x_{n + 1} = y_{n} - \frac{{f(x_{n} ) + f(y_{n} )}}{{f(x_{n} ) - f(y_{n} )}}*\frac{{f(y_{n} )}}{{f^{\prime}(x_{n} )}}\left( {1 - \frac{{f(y_{n} )}}{{f(x_{n} )}}*\frac{{f(y_{n} )}}{{f(x_{n} ) - f(y_{n} )}}} \right)$$
(31)

or in another form:

$$x_{n + 1} = y_{n} - \frac{{f^{3} (x_{n} ) - 2f(x_{n} )f^{2} (y_{n} ) - f^{3} (y_{n} )}}{{\left[ {f(x_{n} ) - f(y_{n} )} \right]^{2} }}*\frac{{f(y_{n} )}}{{f(x_{n} )f^{\prime}(x_{n} )}}$$

Theorem 2.3

Let \(\alpha \in I\) be a simple zero of a sufficiently differentiable function f: I ⊆ R → R for an open interval I. If \(x_{0}\) is sufficiently close to \(\alpha\), then the iterative method introduced in (31) is of order four and satisfies the error equation:

$$e_{n + 1} = (5c_{2}^{3} - c_{2} c_{3} )e_{n}^{4} + O(e_{n}^{5} ),$$

Notes:

  1. The suggested method requires only two function evaluations and one derivative evaluation (r = 3).

  2. Rate of convergence P = 4.

  3. Efficiency Index E.I. = \(p^{1/r} = 4^{1/3} = 1.5874\).
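A hedged Python sketch of the modified scheme (31), again with illustrative names, tolerance, and stopping rule; consistent with Note 1 above, each iteration uses only \(f(x_{n})\), \(f(y_{n})\), and \(f^{\prime}(x_{n})\).

```python
def modified_noor_gupta(f, df, x0, tol=1e-15, max_iter=100):
    """Modified Noor-Gupta scheme, Eq. (31): two f-evaluations and one f'-evaluation per step."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                              # Newton predictor
        fy = f(y)
        x_new = y - (fx + fy) / (fx - fy) * fy / dfx * (1 - fy / fx * fy / (fx - fy))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```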

Example 2

Consider Jarratt's fourth-order method [2].

\(y_{n} = x_{n} - \frac{{2f(x_{n} )}}{{3f^{\prime}(x_{n} )}}\),

\(x_{n + 1} = x_{n} - \frac{{3f^{\prime}(y_{n} ) + f^{\prime}(x_{n} )}}{{6f^{\prime}(y_{n} ) - 2f^{\prime}(x_{n} )}}*\frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}\)

By substituting formula (7) into the previous method, with \(y_{n}\) now taken as the full Newton step, we get

$$x_{n + 1} = x_{n} - \frac{{2f(x_{n} ) - f(y_{n} )}}{{2f(x_{n} ) - 4f(y_{n} )}}*\frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}},$$
(32)

where

$$y_{n} = x_{n} - \frac{{f(x_{n} )}}{{f^{\prime}(x_{n} )}}$$

with error equation

$$e_{n + 1} = \frac{{ - c_{2} }}{2}e_{n}^{2} + (c_{2}^{2} - c_{3} )e_{n}^{3} + O(e_{n}^{4} ).$$

Theorem 2.4

Let \(\alpha \in I\) be a simple zero of a sufficiently differentiable function f: I ⊆ R → R for an open interval I. If \(x_{0}\) is sufficiently close to \(\alpha\), then the iterative method introduced in (32) is of order two and satisfies the error equation:

$$e_{n + 1} = \frac{{ - c_{2} }}{2}e_{n}^{2} + (c_{2}^{2} - c_{3} )e_{n}^{3} + O(e_{n}^{4} ),$$

3 Numerical Examples

In this section, we first present the results of numerical calculations on different functions and initial guesses to demonstrate the efficiency of the suggested methods, the Variant of Homeier Method 1 (VHM1) and its improvement (VHM2). We also compare these methods with well-known methods such as Halley's, the Weerakoon–Fernando, and the Potra–Ptak methods. All computations are carried out with 15 decimal places (see Table 1; the approximate zeros \(\alpha\) are given to 15 decimal places).

Table 1 Different test functions and their approximate zeros (\(\alpha\))

All programs and computations were carried out using MATLAB 2009a. Table 2 displays the number of iterations (IT) and the computational order of convergence (COC). Table 3 displays the number of function evaluations (r), the convergence order (P), the efficiency index (E.I.), the sum of iterations, and the average COC for each method. When the sought zero \(\alpha\) was reached after only three iterations, we used the second formula to compute the COC of the iterative method. Furthermore, we set the COC to zero when an iterative method diverged. Table 4 displays the number of function evaluations and derivative evaluations required for each method.
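For reproducibility, the computational order of convergence can be estimated from successive iterates; the following Python sketch uses the standard approximation \(\mathrm{COC} \approx \ln\left| (x_{n+1}-x_{n})/(x_{n}-x_{n-1}) \right| / \ln\left| (x_{n}-x_{n-1})/(x_{n-1}-x_{n-2}) \right|\), which may differ in detail from the formula used in our computations.

```python
import math

def coc(iterates):
    """Approximate computational order of convergence from the last four iterates."""
    x0, x1, x2, x3 = iterates[-4:]
    return (math.log(abs((x3 - x2) / (x2 - x1)))
            / math.log(abs((x2 - x1) / (x1 - x0))))
```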

Table 2 Comparison of various iterative methods
Table 3 Summary of the comparison of variant methods
Table 4 Type of functions required for each method

4 Conclusion

We have developed two 2-step iterative methods for finding zeros of nonlinear functions, (VHM1) and (VHM2). The main goal was to find and improve iterative schemes that require fewer derivative evaluations of the function, since additional derivative evaluations cost more time and effort from an industrial point of view. Both new methods require only two function evaluations and one first-derivative evaluation. In contrast, known methods such as Halley's and Householder's require one function evaluation, one first-derivative, and one second-derivative evaluation, whereas the Weerakoon–Fernando and Homeier methods require one function evaluation and two first-derivative evaluations (see Table 4). Furthermore, we have proved theoretically that both new methods are of order three.

In addition, the proposed methods have been compared numerically with well-known iterative methods of the same order of convergence; the performance of the proposed methods can be seen in Tables 2, 3, and 4.

Moreover, it can easily be seen that both new methods are more efficient, more robust, and faster converging than the other methods with respect to the required number of derivative evaluations, the iteration counts (IT), and the COC results.

Numerical experiments show that the order of convergence of both methods is at least three.