1 Introduction

Finding the simple roots of a given scalar nonlinear equation f(x)=0 is a significant problem in Numerical Analysis, with interesting applications in science and engineering. For example, the application of iterative schemes for solving nonlinear equations to the computation of the matrix sign function is discussed in [1]. In the last decade, many modified iterative methods have been developed to improve the local order of convergence of classical methods such as Newton's or Ostrowski's schemes. In this sense, many iterative procedures have been designed, of which we cite among others [2–6] and the references therein.

Nevertheless, as the order of an iterative method increases, so does the number of functional evaluations per step. The efficiency index (see [7]) provides a measure of the balance between those quantities, according to the formula \(p^{1/n}\), where p is the order of convergence of the method and n the number of functional evaluations per step. Kung and Traub conjectured in [8] that a multipoint iterative scheme without memory, requiring n functional evaluations per iteration, has order of convergence at most \(2^{n-1}\). Multipoint schemes which achieve this bound are called optimal methods. For this reason, the development of optimal methods without memory is still an active area and has attracted much attention from researchers in recent years (see [9]).

In this work, we construct two new three-point iterative classes, with optimal eighth-order convergence, for finding a simple root r of the nonlinear equation f(x)=0, where \(f: D \subset \mathbb{R} \to \mathbb{R}\) is a scalar function on an open interval D. In fact, based on the first two steps of Kung-Traub's method [8], we develop these classes of eighth-order methods, free from second derivatives, with efficiency index \(8^{1/4}\approx 1.682\).

The rest of this paper is organized as follows: the design and convergence analysis of the proposed classes are described in Sect. 2. A generalization of the procedure is made in Sect. 3, showing the conditions on an arbitrary fourth-order method and on the weight functions that assure the eighth-order convergence by applying the same technique as in the previous section. Computational aspects and comparisons with other known eighth-order methods are illustrated in Sect. 4. Finally, in Sect. 5, some dynamical aspects of the different methods used in this study are analyzed.

2 The Development of the Methods and Their Convergence

This section deals with constructing two new multipoint classes of optimal iterative methods for solving nonlinear equations, taking Kung-Traub's method as the first two steps and suitably choosing two weight functions, depending on one and two variables respectively, in the third step. The order of convergence of the iterative schemes of these classes is eight, requiring four functional evaluations per step, so these methods are optimal in the sense of the Kung-Traub conjecture.

To this end, we start with a three-step scheme (omitting iteration index for simplicity)

$$ \begin{cases} y = x-\frac{f(x)}{f'(x)},\\ z = y-\frac{f(x)^2}{(f(x)-f(y))^2} \frac{f(y)}{f'(x)},\\ \hat{x} = z-\frac{f(z)}{f'(z)}, \end{cases} $$
(2.1)

where x is a current approximation and \(\hat{x}\) is a new approximation to a simple real zero r of f. Note that the first two steps form Kung-Traub's method of order four. This method is a special case of the two-point family proposed in [10] and has also been extended in [11].
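The fourth order of these two steps can be checked numerically. The sketch below is ours, not part of the paper: it runs the two-step scheme on an assumed test equation, \(x^3-2=0\), in high-precision decimal arithmetic, and estimates the computational order of convergence from the residuals f(x_n).

```python
# Numerical check (ours) that the first two steps of (2.1), i.e. Kung-Traub's
# two-step scheme, converge with order four on an assumed test equation.
from decimal import Decimal, getcontext

getcontext().prec = 400          # enough digits to observe three iterations

def kt4_step(f, df, x):
    fx, dx = f(x), df(x)
    y = x - fx/dx                                # Newton step
    fy = f(y)
    return y - fx**2/(fx - fy)**2 * fy/dx        # second Kung-Traub step

f  = lambda x: x**3 - 2                          # hypothetical test equation, root 2^(1/3)
df = lambda x: 3*x**2
xs = [Decimal("1.2")]
for _ in range(3):
    xs.append(kt4_step(f, df, xs[-1]))
l1, l2, l3 = (abs(f(x)).ln() for x in xs[1:])
coc = (l3 - l2)/(l2 - l1)                        # computational order of convergence
```

On this example the estimate `coc` comes out close to 4, in agreement with the fourth order of the predictor.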

The iterative method (2.1) has order eight, but it requires five functional evaluations per step, which is too expensive in terms of computational effort. To decrease this cost from five to four functional evaluations, we try to approximate f′(z) in the third step of (2.1) using the available data f(x), f′(x), f(y) and f(z). We seek this approximation in the form

$$ f'(z)\thickapprox \frac{f'(x)}{H(t,s)+G(v)}, $$
(2.2)

where \(t=\frac{f(y)}{f(x)}\), \(s=\frac{f(z)}{f(y)}\) and \(v=\frac{f(z)}{f(x)}\).

Therefore, starting from the scheme (2.1) and the approximation (2.2), we state the following three-point method

$$ \begin{cases} y = x-\frac{f(x)}{f'(x)},\\ z = y-\frac{f(x)^2}{(f(x)-f(y))^2} \frac{f(y)}{f'(x)},\\ \hat{x} = z-(H(t,s)+G(v))\frac{f(z)}{f'(x)}. \end{cases} $$
(2.3)

Similar ideas, by using weight-functions, can be found in [12, 13].

To find the suitable weight functions H and G in (2.3), providing order eight, we will use the method of undetermined coefficients and Taylor’s series about zero, since t, s and v tend to zero when x tends to r. The technique of undetermined coefficients was studied, perhaps for the first time, in [14]. We have

$$\begin{aligned} H(t,s) = & H(0,0) + H_t(0,0)t + H_s(0,0)s \\ &{}+\frac{1}{2} \bigl[ H_{tt}(0,0)t^2 + 2H_{ts}(0,0)st + H_{ss}(0,0)s^2 \bigr] \\ &{}+ \frac{1}{6} \bigl[H_{ttt}(0,0)t^3 +3 H_{tts}(0,0)t^2s + 3H_{tss}(0,0)ts^2+H_{sss}(0,0)s^3 \bigr]+\cdots, \end{aligned}$$

and

$$G(v)=G(0)+G'(0)v+\frac{1}{2}G''(0)v^2+ \frac{1}{6}G'''(0)v^3+ \cdots. $$

By using Taylor’s expansion of f(x) about r and taking into account that f(r)=0, we obtain

$$ f(x)=f'(r) \bigl[e + c_2 e^2 + c_3 e^3 + c_4 e^4+ c_5 e^5 + c_6 e^6 + c_7 e^7+c_8 e^8+ O \bigl(e^9\bigr) \bigr] $$
(2.4)

and

$$ f'(x)=f'(r) \bigl[1 + 2 c_2 e + 3 c_3 e^2 + 4 c_4 e^3+ 5c_5 e^4+ 6c_6 e^5 + 7c_7 e^6 + 8c_8 e^7+ O\bigl(e^8\bigr) \bigr], $$
(2.5)

where \(c_{j}=\frac{f^{(j)} (r)}{j! f'(r)}\), for j=2,3,…, and e=x−r.

Now, we give the Mathematica code used to derive suitable conditions under which the proposed method attains optimal eighth order.

figure a
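The Mathematica session can be mirrored in any computer algebra system. The SymPy sketch below is our own transcription (with our own symbol names), truncated to the terms that influence the coefficients \(a_4\) and \(a_5\) of (2.6); higher-order terms of H and G first enter at \(e^6\), so they are omitted in this model.

```python
# SymPy sketch (ours) of the symbolic computation behind (2.6): expand the
# error of scheme (2.3) and read off a4 and a5 with undetermined coefficients
# H(0,0), H_t(0,0) and G(0).
import sympy as sp

e, c2, c3, c4, c5 = sp.symbols('e c2 c3 c4 c5')
H00, Ht, G0 = sp.symbols('H00 Ht G0')            # H(0,0), H_t(0,0), G(0)

def F(u):                                        # Taylor model (2.4), f'(r) normalized to 1
    return u + c2*u**2 + c3*u**3 + c4*u**4 + c5*u**5

f, fp = F(e), sp.diff(F(e), e)
ey = sp.series(e - f/fp, e, 0, 6).removeO()      # error of the Newton step
fy = F(ey)
ez = sp.series(ey - f**2/(f - fy)**2*fy/fp, e, 0, 6).removeO()   # Kung-Traub error
fz, t = F(ez), fy/f
ehat = ez - (H00 + G0 + Ht*t)*fz/fp              # third step of (2.3), truncated
ehat = sp.expand(sp.series(ehat, e, 0, 6).removeO())
a4, a5 = ehat.coeff(e, 4), ehat.coeff(e, 5)
```

One finds \(a_4=(1-H(0,0)-G(0))\,c_2(2c_2^2-c_3)\), so \(a_4\) vanishes when H(0,0)+G(0)=1, and then \(a_5\) vanishes when \(H_t(0,0)=2\), matching the first conditions in (2.8).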

So, the expression of the asymptotic error \(\hat{e}=\hat{x}-r\) is

$$ \hat{e} =a_4 e^4+a_5 e^5+a_6 e^6+a_7 e^7+a_8 e^8+ O\bigl(e^9\bigr). $$
(2.6)

The iterative three-point method (2.3) will have order of convergence equal to eight if \(a_4\), \(a_5\), \(a_6\) and \(a_7\) in (2.6) all vanish. First, for \(a_4\) we have

figure b

According to the above analysis, the error equation is

$$\begin{aligned} \hat{e} =&\frac{-c_2(2c_2 ^2-c_3)}{2} \bigl(-2c_2 c_4-c_2^2c_3 \bigl(H_{tts}(0,0)+4\bigl(H_{ss}(0,0)-8\bigr) \bigr) \\ &{}+2c_2^4\bigl(2H_{ss}(0,0)+H_{tts}(0,0)-31 \bigr) \bigr)e^8+O\bigl(e^9\bigr). \end{aligned}$$
(2.7)

Based on the previous analysis, Theorem 2.1 establishes the eighth-order convergence of the iterative scheme (2.3). Although the result is restricted to the real domain, a complex version of it is possible.

Theorem 2.1

Let us suppose that \(f:D \subset \mathbb{R} \rightarrow \mathbb{R}\) is a sufficiently differentiable real function, with a simple root r in an open set D, and \(x_0\) is an initial guess close to r. Then, the iterative scheme (2.3) has optimal eighth-order convergence if sufficiently differentiable functions H(t,s) and G(v) are chosen such that

$$ \begin{array} {l@{\quad\quad}l@{\quad\quad}l} H(0,0)=1-G(0), & H_t(0,0)=2 , & H_s (0,0)=1 , \\ H_{tt}(0,0)=8, & H_{ttt}(0,0)=36, & H_{ts}(0,0)=4-G'(0), \end{array} $$
(2.8)

where indices in H denote partial derivatives with respect to the indicated variables, e.g., \(H_{t}(t,s)=\frac{\partial H(t,s)}{\partial t}\), and expression (2.7) describes the error equation of this family.

Some simple weight functions satisfying conditions (2.8) are

$$\begin{aligned} H_1(t,s)&=-2t^3+\frac{1}{1-s+2st-2t},\quad\quad G_1(v)=2v , \\ H_2(t, s) &=1+2t+4t^2+6t^3+s+4ts,\quad\quad G_2(v)=0 . \end{aligned}$$

We will denote by M1 and M2 the iterative schemes defined by (2.3) using \(H_1(t,s)+G_1(v)\) and \(H_2(t,s)+G_2(v)\), respectively.
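As a runnable illustration (our own sketch, not taken from the paper's numerical section), M2 can be coded in a few lines; high-precision decimal arithmetic makes the eighth order visible in the computational order of convergence on an assumed test equation, \(x^3-2=0\).

```python
# Sketch (ours) of method M2: scheme (2.3) with H2(t,s) = 1+2t+4t^2+6t^3+s+4ts
# and G2(v) = 0, run in high-precision arithmetic on the assumed equation x^3 - 2 = 0.
from decimal import Decimal, getcontext

getcontext().prec = 2500

def m2_step(f, df, x):
    fx, dx = f(x), df(x)
    y = x - fx/dx
    fy = f(y)
    z = y - fx**2/(fx - fy)**2 * fy/dx           # Kung-Traub fourth-order part
    fz = f(z)
    t, s = fy/fx, fz/fy
    H = 1 + 2*t + 4*t**2 + 6*t**3 + s + 4*t*s    # H2; G2(v) = 0
    return z - H*fz/dx

f  = lambda x: x**3 - 2                          # hypothetical test equation
df = lambda x: 3*x**2
xs = [Decimal("1.2")]
for _ in range(3):
    xs.append(m2_step(f, df, xs[-1]))
l1, l2, l3 = (abs(f(x)).ln() for x in xs[1:])
coc = (l3 - l2)/(l2 - l1)                        # close to 8 on this example
```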

Similarly, we have

Theorem 2.2

Let us suppose that \(f:D \subset \mathbb{R} \rightarrow \mathbb{R}\) is a sufficiently differentiable real function, with a simple root r in an open set D, and \(x_0\) is an initial guess close to r. Then, the iterative scheme (2.1) with the alternative estimation of f′(z)

$$ f'(z)\thickapprox \frac{f'(x)}{H(t,s)G(v)}, $$
(2.9)

has optimal eighth-order convergence if sufficiently differentiable functions H(t,s) and G(v) are chosen such that

$$ \begin{array} {l@{\quad\quad}l@{\quad\quad}l} H(0,0)=1/G(0), & H_t(0,0)=2/G(0), & H_s (0,0)=1/G(0), \\ H_{ts}(0,0)=\frac{4G(0)-G'(0)}{G(0)^2}, & H_{tt}(0,0)=8/G(0), & H_{ttt}(0,0)=36/G(0), \end{array} $$
(2.10)

and the error equation is

$$\begin{aligned} \hat{e} =&\frac{-c_2(2c_2 ^2-c_3)}{2} \bigl(-2c_2 c_4-c_2^2c_3 \bigl(H_{tts}(0,0)+4H_{ss}(0,0)+4G'(0)-32\bigr) \\ & {}+c_3 ^2\bigl(H_{ss}(0,0)-2 \bigr)+2c_2^4\bigl(H_{tts}(0,0)+2H_{ss}(0,0)+4G'(0)-31 \bigr) \bigr)e^8 \\ &{}+O\bigl(e^9\bigr). \end{aligned}$$
(2.11)

Some simple weight functions satisfying conditions (2.10) are

$$\begin{aligned} H_3(t, s) &=1+2t+4t^2+6t^3+s+2ts,\quad\quad G_3(v)=1+2v, \\ H_4(t,s)&=-2t^3+\frac{1}{1-s+2st-2t},\quad\quad G_4(v)=1+2v. \end{aligned}$$

The resulting methods using \(H_3(t,s)G_3(v)\) and \(H_4(t,s)G_4(v)\) in (2.9) will be denoted in the following by M3 and M4, respectively.
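Analogously, the product-form methods can be sketched in ordinary floating point. The code below is ours, with \(x^3-2=0\) as an assumed test equation; the early returns only guard the divisions once the root is hit to machine precision.

```python
# Float sketch (ours) of method M3: the product H3(t,s)G3(v) with
# H3 = 1+2t+4t^2+6t^3+s+2ts and G3(v) = 1+2v in the third step.
def m3_step(f, df, x):
    fx, dx = f(x), df(x)
    y = x - fx/dx
    fy = f(y)
    if fy == 0.0:                      # already at the root to machine precision
        return y
    z = y - fx**2/(fx - fy)**2 * fy/dx
    fz = f(z)
    if fz == 0.0:
        return z
    t, s, v = fy/fx, fz/fy, fz/fx
    H = 1 + 2*t + 4*t**2 + 6*t**3 + s + 2*t*s
    G = 1 + 2*v
    return z - H*G*fz/dx

x = 1.2                                # assumed starting guess for x^3 - 2 = 0
for _ in range(2):
    x = m3_step(lambda u: u**3 - 2.0, lambda u: 3.0*u**2, x)
```

Two iterations already drive the error down to rounding level on this example.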

3 A General Procedure for Designing Eighth-Order Methods

In the previous section, we have designed two optimal eighth-order classes of iterative schemes from Kung-Traub's optimal fourth-order method. However, it can be proved that the same procedure applied to an arbitrary optimal fourth-order method does not, in general, yield order eight. We therefore wonder which conditions an optimal fourth-order method must satisfy to ensure order of convergence eight under the procedures used in Theorems 2.1 and 2.2.

To this end, let us consider an iterative multipoint method z=ϕ(x,y) whose first step is Newton's step y. Its error equation will be

$$z-r=p_4 e^4+p_5 e^5+p_6 e^6+p_7 e^7+O\bigl(e^8\bigr). $$

We will analyze in the following the conditions that \(p_i\), i=4,5,6,7, must satisfy in order to get eighth-order convergence with a third step including a sum or a product of weight functions. Then, the Taylor expansion around r of the auxiliary variables t, s and v is made, and we also expand the weight functions H and G around zero. For the sake of simplicity, let us assume that our combined weight function is H(t,s)+G(v) (all the calculations can be rearranged for the product H(t,s)G(v)). Then, the error equation at the third step is

$$\begin{aligned} \hat{e} = & -\bigl(-1 + G(0) + H(0,0)\bigr) p_4 e^4 + \bigl(\bigl(2 G(0) + 2H(0,0) - H_t(0,0)\bigr) c_2 p_4 \\ &{} - \bigl(-1 + G(0) + H(0,0)\bigr) p_5\bigr)e^5+O \bigl(e^6\bigr). \end{aligned}$$

It is clear that, in order to achieve order six, it is necessary that G(0)=1−H(0,0) and \(H_t(0,0)=2\). Then,

$$ \hat{e} = -\frac{p_4 }{2 c_2} \bigl(\bigl(-12+H_{tt}(0,0)\bigr) c_2^3+2 c_2 c_3+2 H_s(0,0) p_4 \bigr) e^6+O \bigl(e^7\bigr). $$

Then, to achieve order seven, it is necessary that \(p_{4}=a c_{2} c_{3}+b c_{2}^{3}\) with suitable real values of the parameters a and b, a≠0. By using this expression of \(p_4\),

$$\begin{aligned} \hat{e} =& \biggl( -\frac{1}{2} c_2 \bigl(b c_2^2+a c_3 \bigr) \bigl( \bigl(-12+H_{tt}(0,0)+2 b H_{s}(0,0)\bigr) c_2^2+2 \bigl(1+a H_{s}(0,0)\bigr) c_3 \bigr) \biggr) e^6 \\ &{}+O\bigl(e^7\bigr) \end{aligned}$$

and, solving the linear system \(\{-12+H_{tt}(0,0)+2 b H_{s}(0,0)=0,\ 1+a H_{s}(0,0)=0\}\), we obtain a unique solution for \(p_4 \neq 0\), that is,

$$ H_{s}(0,0)= -\frac{1}{a} $$
(3.1)

and

$$ H_{tt}(0,0)=\frac{2 (6 a+b)}{a}. $$
(3.2)

The resulting error equation is

$$\begin{aligned} \hat{e} = & -\frac{1}{6 a} \bigl(b c_2^2+a c_3 \bigr) \bigl(\bigl(-48 b+a \bigl(-120+H_{ttt}(0,0)+6 b \bigl(G'(0)+ H_{ts}(0,0)\bigr)\bigr)\bigr) c_2^4 \\ & {} +6 \bigl(2 a+6 b+a^2 \bigl(G'(0)+H_{ts}(0,0) \bigr) \bigr) c_2^2 c_3+12 a c_3^2+12 a c_2 c_4-6 p_5 \bigr) e^7+ O\bigl(e^8\bigr). \end{aligned}$$

According to the coefficients appearing in this error equation, it is necessary that \(p_{5}=c c_{2}^{4}+d c_{2}^{2} c_{3}+h c_{3}^{2}+ g c_{2} c_{4}\), with appropriate real parameters c, d, h and g, in order to get eighth-order convergence. Then, a linear system is solved, obtaining a unique solution:

$$\begin{aligned} H_{ts}(0,0) =& -\frac{2 a+6 b-d}{a^2}-G'(0), \\ H_{ttt}(0,0) =&\frac{6 (20 a^2+10 a b+6 b^2+a c-b d )}{a^2}, \\ h =&g=2 a. \end{aligned}$$
(3.3)

Finally, the error equation of the resulting scheme is

$$\begin{aligned} \hat{e} = & -\frac{1}{2 a^2} \bigl(b c_2^2+a c_3 \bigr) \bigl( \bigl(-2 a (62 b+13 c)-2 (8 b+c) (6 b-d)\\ &{}+a^2 \bigl(-140+b \bigl(2+H_{tts}(0,0) \bigr)+b^2 H_{ss}(0,0) \bigr) \bigr) c_2^5 \\ &{}+ \bigl(10 a^2+2 a (35 b+8 c-5 d)+2 (-6 b+d)^2+a^3 \bigl(2+H_{tts}(0,0)+2 b H_{ss}(0,0)\bigr) \bigr) c_2^3 c_3 \\ &{}+2 a (-3 b+2 d) c_2^2 c_4+a c_2 \bigl( \bigl(2 a-24 b+8 d+a^3 H_{ss}(0,0) \bigr) c_3^2+6 a c_5 \bigr) \\ &{}+2 a (7 a c_3 c_4-p_6 ) \bigr) e^8+O\bigl(e^9 \bigr), \end{aligned}$$

and the following result can be established.

Theorem 3.1

Let us consider the iterative scheme

$$\begin{aligned} y = & x-\frac{f(x)}{f'(x)}, \\ z = &\phi(x,y), \\ \hat{x} = &z-\bigl(H(t,s)+G(v)\bigr)\frac{f(z)}{f'(x)}. \end{aligned}$$
(3.4)

To achieve order of convergence eight by using the sum of the weight functions H and G, it must be verified that the Taylor expansion of the fourth-order iteration function around the solution r is

$$z-r=\bigl(a c_2 c_3+b c_2^3 \bigr) e^4+\bigl(c c_2^4+d c_2^2 c_3+2a c_3^2+2a c_2 c_4\bigr) e^5+p_6 e^6+O\bigl(e^7\bigr). $$

Moreover, it is necessary to choose the weight functions H and G properly, so that the expressions G(0)=1−H(0,0), \(H_t(0,0)=2\), \(H_{s}(0,0)= -\frac{1}{a}\), \(H_{tt}(0,0)=\frac{2 (6 a+b)}{a}\), \(H_{ts}(0,0)= -\frac{2 a+6 b-d}{a^{2}}-G'(0)\) and \(H_{ttt}(0,0)=\frac{6 (20 a^{2}+10 a b+6 b^{2}+a c-b d )}{a^{2}}\) are satisfied.

Theorem 3.2

Let us consider the iterative scheme

$$\begin{aligned} y = & x-\frac{f(x)}{f'(x)}, \\ z = &\phi(x,y), \\ \hat{x} = &z-\bigl(H(t,s)G(v)\bigr)\frac{f(z)}{f'(x)}. \end{aligned}$$
(3.5)

In order to gain order of convergence eight by using the product of the weight functions H and G, it is necessary that the Taylor expansion of the fourth-order iteration function around the solution r is

$$z-r=\bigl(a c_2 c_3+b c_2^3 \bigr) e^4+\bigl(c c_2^4+d c_2^2 c_3+2a c_3^2+2a c_2 c_4\bigr) e^5+p_6 e^6+O\bigl(e^7\bigr), $$

and the following conditions must be satisfied: H(0,0)=1/G(0), \(H_t(0,0)=2/G(0)\), \(H_{tt}(0,0)=\frac{2 (6 a+b)}{a G(0)}\), \(H_{s}(0,0)=-\frac{1}{a G(0)}\),

$$H_{ts}(0,0)=\frac{(-2a-6b+d)G(0)-a^2G'(0)}{a^2 G(0)^2} $$

and

$$H_{ttt}(0,0)=\frac{6}{a^2 G(0)} (20 a^2 +10 a b +6 b^2 +a c -b d ). $$

From these results, we wonder whether it is possible to obtain eighth-order schemes when the second step in (3.4) and (3.5) is Ostrowski's method. The answer is yes. It is easy to prove that the error equation of Ostrowski's scheme is

$$z-r= \bigl(c_2^3-c_2 c_3 \bigr) e^4-2 \bigl(2 c_2^4-4 c_2^2 c_3+c_3^2+c_2 c_4 \bigr) e^5+O\bigl(e^6\bigr). $$

So, an accurate selection of the weight functions can be made in order to get an optimal eighth-order scheme. As a=−1, b=1, c=−4 and d=8, the necessary conditions on the weight functions H and G are given by Theorems 3.1 and 3.2; then, eighth-order convergence is guaranteed for both classes.
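As an illustration (our own construction, not given in the paper), taking G(v)=0, so that G(0)=0 and H(0,0)=1, the conditions of Theorem 3.1 with a=−1, b=1, c=−4, d=8 yield \(H_t(0,0)=2\), \(H_s(0,0)=1\), \(H_{tt}(0,0)=10\), \(H_{ts}(0,0)=4\) and \(H_{ttt}(0,0)=72\), values matched by the polynomial \(H(t,s)=1+2t+s+5t^2+12t^3+4ts\). A high-precision sketch confirming the eighth order numerically on an assumed test equation:

```python
# Hypothetical Ostrowski-based instance of (3.4), built by us from Theorem 3.1
# with a=-1, b=1, c=-4, d=8 and G = 0.
from decimal import Decimal, getcontext

getcontext().prec = 2500

def ostrowski8_step(f, df, x):
    fx, dx = f(x), df(x)
    y = x - fx/dx
    fy = f(y)
    z = y - fx/(fx - 2*fy) * fy/dx               # Ostrowski's fourth-order step
    fz = f(z)
    t, s = fy/fx, fz/fy
    H = 1 + 2*t + s + 5*t**2 + 12*t**3 + 4*t*s   # matches the conditions above
    return z - H*fz/dx

f  = lambda x: x**3 - 2                          # assumed test equation
df = lambda x: 3*x**2
xs = [Decimal("1.2")]
for _ in range(3):
    xs.append(ostrowski8_step(f, df, xs[-1]))
l1, l2, l3 = (abs(f(x)).ln() for x in xs[1:])
coc = (l3 - l2)/(l2 - l1)                        # close to 8 on this example
```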

However, Jarratt's procedure (see [15]) is not appropriate for obtaining an optimal scheme of order eight such as (3.4) and (3.5), as its error equation (as a fourth-order scheme) is

$$z-r= \biggl(c_2^3-c_2 c_3+ \frac{c_4}{9} \biggr) e^4+ \biggl(-4 c_2^4+8 c_2^2 c_3-2 c_3^2- \frac{20 c_2 c_4}{9}+\frac{8 c_5}{27} \biggr) e^5+O \bigl(e^6\bigr), $$

which differs from the form required in Theorems 3.1 and 3.2.

Weight-function products have been used recently in [16], but without any general process imposing conditions on the fourth-order starting method.

4 Numerical Results

We recall that methods M1 and M2 are defined by expression (2.3) by using

$$H_1(t,s) = -2t^3+\frac{1}{1-s+2st-2t}, \quad\quad G_1(v)=2v $$

and

$$H_2(t, s) = 1+2t+4t^2+6t^3+s+4ts, \quad\quad G_2(v)=0, $$

respectively. Analogously, methods M3 and M4 are defined by expression

$$\begin{cases} y = x-\frac{f(x)}{f'(x)}, \\ z = y-\frac{f(x)^2}{(f(x)-f(y))^2} \frac{f(y)}{f'(x)},\\ \hat{x} = z-(H(t,s)G(v))\frac{f(z)}{f'(x)}, \end{cases} $$

by using

$$H_3(t, s) =1+2t+4t^2+6t^3+s+2ts, \quad\quad G_3(v)=1+2v $$

and

$$H_4(t,s) =-2t^3+\frac{1}{1-s+2st-2t}, \quad\quad G_4(v)=1+2v, $$

respectively. The variables t, s and v are the same as those used in M1 and M2.

Now, we are going to test the proposed optimal eighth-order methods, M1 to M4, by approximating the zeros of the following functions

$$ \begin{array}{l@{\quad}l} f_1(x) = \exp(x)\sin{x} + \log\bigl(x^4 - 3 x + 1\bigr), & r=0 \\ f_2(x) = x^2 - (1 - x)^{25}, & r \approx 0.143739\ldots \\ f_3(x)= \exp{\bigl(x^3 - x\bigr)} - \cos{\bigl(x^2 - 1\bigr)} + x^3 + 1, & r=-1 \end{array} $$

and compare the results with those obtained by some known methods.

All the iterative schemes introduced in the following are optimal in the sense of the Kung-Traub conjecture and have been designed by using sums and products of weight functions of one variable, so they are fully comparable with the new ones designed in this paper. Let us refer now to the procedure that Džunić and Petković present in [17]: a three-step eighth-order method, whose iterative expression is

$$\begin{aligned} y = & x -\frac{f(x)}{f'(x)}, \\ z = & y - \frac{f(x)}{f(x)-2f(y)} \frac{f(y)}{f'(x)}, \\ \hat{x} = & z-G_1(s)G_2(v)G_3(t) \frac{f(z)}{f'(x)}, \end{aligned}$$

where \(t=\frac{f(y)}{f(x)}\), \(s=\frac{f(z)}{f(y)}\), \(v=\frac{f(z)}{f(x)}\), \(G_1(s)=1+s+4s^2\), \(G_2(v)=(1+v)^2\) and \(G_{3}(t)=\frac{1}{1-2t-t^{2}-5t^{4}}\). We will denote this scheme by DP.
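For reference, DP transcribes directly into code. The float sketch below is ours (the test equation \(x^3-2=0\) is an assumption, not one of the paper's examples); the early returns only guard the divisions once a residual vanishes at machine precision.

```python
# Float sketch (ours) of the Dzunic-Petkovic scheme DP described above.
def dp_step(f, df, x):
    fx, dx = f(x), df(x)
    y = x - fx/dx
    fy = f(y)
    if fy == 0.0:
        return y
    z = y - fx/(fx - 2*fy) * fy/dx               # Ostrowski step
    fz = f(z)
    if fz == 0.0:
        return z
    t, s, v = fy/fx, fz/fy, fz/fx
    w = (1 + s + 4*s**2)*(1 + v)**2/(1 - 2*t - t**2 - 5*t**4)
    return z - w*fz/dx

x = 1.2                                          # assumed starting guess
for _ in range(2):
    x = dp_step(lambda u: u**3 - 2.0, lambda u: 3.0*u**2, x)
```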

Another eighth-order method was developed by Soleymani et al. in [18]. It is also based on Ostrowski's method (see [7]) and its iterative expression is:

$$\begin{aligned} y = & x -\frac{f(x)}{f'(x)}, \\ z = & y - \frac{f(x)}{f(x)-2 f(y)} \frac{f(y)}{f'(x)}, \\ \hat{x} = & z-\bigl(G(s)+H(t)+Q(v)\bigr) \frac{f(z)(2x-z-y)}{2f[z,y](x-z)+(z-y)f'(x)}, \end{aligned}$$

where \(t=\frac{f(z)}{f(x)}\), \(s=\frac{f(z)}{f(y)}\), \(v=\frac{f(y)}{f(x)}\), G(s)=1, \(H(t)=\frac{3}{2} t\), \(Q(v)=-\frac{3}{2} v^{3}\) and f[⋅,⋅] denotes the divided difference of order 1. In the following, we will denote this method by S.

Geum and Kim designed in [19] a uniparametric family of eighth-order methods, one of which has the iterative expression

$$\begin{aligned} y = & x -\frac{f(x)}{f'(x)}, \\ z = & y - \frac{1+2u}{1-3u^2} \frac{f(y)}{f'(x)}, \\ \hat{x} = & z-\frac{1}{1-2u-q}\frac{f(z)}{f'(x)}, \end{aligned}$$

where u=f(y)/f(x) and q=f(z)/f(y). We will denote this method by GK1. On the other hand, these authors published in [20] a biparametric family of eighth-order methods, from which we choose the following scheme:

$$\begin{aligned} y = & x -\frac{f(x)}{f'(x)}, \\ z = & y - \frac{1+2u}{1-3u^2} \frac{f(y)}{f'(x)}, \\ \hat{x} = & z- \biggl(\frac{1}{1-2u}+1-\frac{4}{2+v+4w} \biggr) \frac{f(z)}{f'(x)}, \end{aligned}$$

where u=f(y)/f(x), v=f(z)/f(y) and w=f(z)/f(x). We will denote this method by GK2.

All the calculations have been made by using the software Mathematica, in variable-precision arithmetic, with 2000 digits of mantissa. The exact errors \(|x_n - r|\) at the first three iterations of the described methods are given in Tables 1, 2 and 3, where A(−h) denotes \(A\times 10^{-h}\). These tables also include the value of the computational order of convergence \(r_c\), approximated by (see [21])

$$ r_c=\frac{\log |f(x_n)/f(x_{n-1})|}{\log |f(x_{n-1})/f(x_{n-2})|}. $$
(4.1)
Table 1 Numerical results for \(f_1(x)=\exp(x)\sin x+\log(x^4-3x+1)\), \(x_0=0.3\)
Table 2 Numerical results for \(f_2(x)=x^2-(1-x)^{25}\), \(x_0=0.25\)
Table 3 Numerical results for \(f_3(x)=\exp(x^3-x)-\cos(x^2-1)+x^3+1\), \(x_0=-1.65\)

Let us observe that the proposed methods are highly competitive: the errors obtained for the different test functions are as accurate as those obtained with the known schemes, and better than them in some cases.

The described methods are optimal eighth-order schemes, as are the methods DP, S, GK1 and GK2. So, the efficiency index of all of them is \(E=8^{1/4}\approx 1.682\).

5 Dynamical Features

From the numerical point of view, the dynamical properties of the rational function associated with an iterative method give us important information about its stability and reliability. In this section, we describe the dynamical planes of the iterative methods used in the numerical section when they are applied to the complex function \(p(z)=z^{2}-\frac{1}{2z}+i\), whose roots are \(z_1\approx 0.8199-0.4525i\), \(z_2\approx -0.7376+0.9030i\) and \(z_3\approx -0.0823-0.4505i\).

In the different dynamical planes, we see the basins of attraction of the roots of p(z) for the schemes under study. For the representation we have used the software described in [22]. We draw a mesh with eight hundred points per axis; each point of the mesh is a different initial estimation which is introduced in each procedure. If the method reaches one of the roots in fewer than eighty iterations, the point is drawn in the color assigned to that root (green, orange and blue), with more intense color for a lower number of iterations. Moreover, the roots are marked in the dynamical plane by white stars. Otherwise, if the method reaches the maximum number of iterations without converging to any root, the point is drawn in black. The axes represent the real and imaginary parts of the initial estimation.
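The procedure just described can be sketched compactly. The toy version below is ours: it uses Newton's method as a stand-in for the studied schemes and a much coarser mesh than the 800×800 grid of the paper, returning the root index (or −1 for black points) for every initial estimation.

```python
# Toy basin-of-attraction computation (ours) for p(z) = z^2 - 1/(2z) + i,
# with Newton's method standing in for the schemes studied in the paper.
p  = lambda z: z*z - 1/(2*z) + 1j
pp = lambda z: 2*z + 1/(2*z*z)                   # derivative of p
roots = [0.8199 - 0.4525j, -0.7376 + 0.9030j, -0.0823 - 0.4505j]

def basin(z0, maxit=80, tol=1e-3):
    """Index of the root reached from z0, or -1 (point drawn in black)."""
    z = z0
    for _ in range(maxit):
        if z == 0 or pp(z) == 0:                 # guard the divisions
            return -1
        z = z - p(z)/pp(z)
        for k, r in enumerate(roots):
            if abs(z - r) < tol:                 # roots known to ~4 decimals
                return k
    return -1

n, span = 21, 2.0                                # the paper uses 800 points per axis
grid = [[basin(complex(-span + 2*span*j/(n - 1), -span + 2*span*i/(n - 1)))
         for j in range(n)] for i in range(n)]
```

Coloring `grid` (with intensity given by the iteration count) reproduces, coarsely, the dynamical planes of Figs. 1 and 2.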

In Fig. 1, we show the dynamical planes of the classical fourth-order method of Kung and Traub, KT4, and some of the schemes used in the numerical section: DP, S and GK1. In Fig. 2, we can see the dynamical planes of the proposed methods M1, M2, M3 and M4 on the same function p(z).

Fig. 1 Dynamical planes for K-T method of order 4 and eighth-order schemes

Fig. 2 Dynamical planes for proposed eighth-order schemes

As KT4 is a very stable method, we want to obtain qualitative information about the possible loss of stability related to the improvement of the order of convergence. It can be observed that, in general, higher order means lower stability. In fact, the Julia set associated with KT4, and also with M1, M2 and M3, is connected, whereas in the rest of the dynamical planes some 'islands' appear in the Fatou set, which shows that the Julia set is disconnected in those cases.

6 Conclusions

The weight-function technique has been used to design two classes of optimal methods. A generalization has also been made in order to find other families of the same order with different fourth-order predictors. The proposed schemes have been shown to be competitive in terms of computational efficiency and also in terms of the absolute error on different test functions. In the dynamical study, it has been shown that the new methods have stability similar to that of KT4, in spite of the improvement of the order of convergence.

The design of new methods, such as variants with memory of the proposed families, could be a subject of future work.