1 Introduction

   Due to the advancement of digital computers, computer arithmetic and symbolic computation, many scholars have proposed eighth-order derivative free modifications or extensions of Steffensen’s method or Steffensen-type methods [1,2,3,4,5,6] in a number of ways, at the expense of additional function evaluations or by increasing the number of substeps. All these modifications or extensions are aimed at increasing the local order of convergence with a view to increasing the efficiency index [7, 8]. Most of them are extensions or modifications of a particular existing optimal fourth-order derivative free scheme. To the best of our knowledge, there is to date no general optimal derivative free scheme that is applicable to every optimal fourth-order derivative free scheme and produces from it a further optimal scheme of order eight.

Nowadays, the construction of optimal derivative free schemes that are applicable to every optimal iteration function of a fixed order, and thereby produce higher-order methods, is of greater importance than obtaining higher-order extensions or modifications of one particular existing iteration function.

Therefore, the principal aim of this paper is to present a new scheme, formulated in a general way, which is capable of producing new optimal eighth-order derivative free schemes from every optimal fourth-order derivative free scheme whose first substep employs Steffensen’s method or a Steffensen-type method. In this way, we give scholars the flexibility to pick any existing optimal fourth-order derivative free method from the available literature and extend it to an optimal eighth-order scheme. We consider a variety of standard test problems in order to check the reliability and efficiency of the proposed schemes and to compare them with some other existing methods of the same order. Finally, we study their dynamical behavior, which complements the theoretical results.

2 Development of eighth-order optimal scheme

This section is devoted to the main contribution of this research article, namely, a new optimal family of eighth-order derivative free methods for solving nonlinear equations. We consider a general eighth-order derivative free scheme of the following form

$$\begin{aligned} \left\{ \begin{aligned} y_n&=x_n-\frac{f(x_n)}{f[u_n,\; x_n]},\;\;\; u_n=x_n+\beta f(x_n),\; \beta \in {\mathbb {R}}\\ z_n&= \varphi _4(u_n,\;x_n,\;y_n),\\ x_{n+1}&= z_n- \frac{f(z_n) (u_n-x_n) (u_n-y_n) (x_n-y_n)}{ f[y_n,z_n] (u_n-x_n) (u_n-z_n) (x_n-z_n)-a (y_n-z_n) }, \end{aligned} \right. \end{aligned}$$
(2.1)

where \(a=f[x_n,z_n] (u_n-y_n) (u_n-z_n)-f[u_n,z_n] (x_n-y_n) (x_n-z_n)\).
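
For readers who wish to experiment with the scheme, the following is a minimal Python sketch of one iteration of (2.1); it is a direct transcription of the formulas above under the assumption that the user supplies a fourth-order step as a callable `phi4(f, u, x, y)` (the function and helper names are ours, not part of the scheme).

```python
def divided_difference(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)


def eighth_order_step(f, x, phi4, beta=1.0):
    """One iteration of the general scheme (2.1).

    phi4(f, u, x, y) may be any optimal fourth-order derivative-free step
    whose first substep is the Steffensen-type step computed below.
    """
    fx = f(x)
    u = x + beta * fx                              # u_n = x_n + beta*f(x_n)
    y = x - fx / divided_difference(f, u, x)       # Steffensen-type first substep
    z = phi4(f, u, x, y)                           # z_n = phi_4(u_n, x_n, y_n)
    fz = f(z)
    # a = f[x_n, z_n](u_n - y_n)(u_n - z_n) - f[u_n, z_n](x_n - y_n)(x_n - z_n)
    a = (divided_difference(f, x, z) * (u - y) * (u - z)
         - divided_difference(f, u, z) * (x - y) * (x - z))
    num = fz * (u - x) * (u - y) * (x - y)
    den = divided_difference(f, y, z) * (u - x) * (u - z) * (x - z) - a * (y - z)
    return z - num / den
```

As a sketch, the divided differences above recompute function values freely; an efficient implementation would cache the four values \(f(x_n)\), \(f(u_n)\), \(f(y_n)\) and \(f(z_n)\), which are the only evaluations the scheme actually needs per iteration.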

In Theorem 3.1 below, we demonstrate that the order of convergence of the proposed scheme reaches the optimal value eight without any additional functional evaluations. It is interesting to observe that only a single coefficient \(\lambda _0\) of \(\varphi _4(u_n,\;x_n,\;y_n)\) plays a role in attaining the desired eighth-order convergence (for details, please see Theorem 3.1).

3 Convergence analysis

Theorem 3.1

Let \(f: {\mathbb {C}} \rightarrow {\mathbb {C}}\) be an analytic function in a neighborhood of its simple zero \(\xi \). In addition, we assume that \(\varphi _4(u_n,\;x_n,\;y_n)\) is any optimal fourth-order derivative free scheme whose first substep is the Steffensen-type step of (2.1). Moreover, we consider an initial guess \(x=x_{0}\) sufficiently close to the desired zero \(\xi \) for guaranteed convergence. Then, the scheme (2.1) has eighth-order convergence.

Proof

Let \(e_n=x_n-\xi \) be the error at the nth iteration. We expand the function \(f(x_n)\) about the point \(x=\xi \) with the help of the Taylor series expansion. Then, we have

$$\begin{aligned} f(x_n)=c_1e_n+c_2 e_n^2 +c_ 3e_n^3+c_4 e_n^4+c_ 5e_n^5+c_6 e_n^6+c_7 e_n^7+c_8 e_n^8+O(e_n^9), \end{aligned}$$
(3.1)

where \(c_j=\frac{f^{(j)}(\xi )}{j!}\) for \(j=1,2,\; \dots ,\;8\).

With the help of the above expression (3.1), we further obtain

$$\begin{aligned} u_n-\xi =(1+\beta c_1)e_n+\beta (c_2 e_n^2 +c_ 3e_n^3+c_4 e_n^4+c_ 5e_n^5+c_6 e_n^6+c_7 e_n^7+c_8 e_n^8)+O(e_n^9). \end{aligned}$$
(3.2)

Now, by combining the Taylor series expansion with the above expression (3.2), we have

$$\begin{aligned} f(u_n)=c_1 ( 1+ \beta c_1 )e_n+c_2 \left\{ ( 1+ \beta c_1 )^2+ \beta c_1 \right\} e_n^2+\sum _ {k=1} ^{6} A_k e_n^{k+2}+O(e_n^9), \end{aligned}$$
(3.3)

where \(A_k=A_k(\beta , c_1, c_2, \dots , c_8)\).

By using the expressions (3.1) and (3.3), we have

$$\begin{aligned} y_n-\xi&=\left( \beta +\frac{1}{c_1} \right) c_2e_n^2+ \frac{ c_1 c_3 ( \beta ^2 c_1^2+3 \beta c_1+2)-c_2^2 ( \beta ^2 c_1^2+2 \beta c_1+2)}{c_1^2}e_n^3 \\&\quad +\sum _ {k=1} ^{5} {\bar{A}}_k e_n^{k+3}+O(e_n^9). \end{aligned}$$
(3.4)

With the help of Taylor’s series expansion, we further obtain

$$\begin{aligned} f(y_n)&= c_2 ( 1+\beta c_1)e_n^2+\frac{c_1 c_3( \beta ^2 c_1^2+3 \beta c_1+2)-c_2^2 ( \beta ^2 c_1^2+2 \beta c_1+2)}{c_1}e_n^3\\&\quad +\sum _{k=1}^{5}\bar{{\bar{A}}}_k e_n^{k+3} +O(e_{n}^9). \end{aligned}$$
(3.5)

Since \(\varphi _4(u_n,\; x_n,\;y_n)\) is an optimal fourth-order derivative free scheme, it satisfies an error equation of the following form

$$\begin{aligned} z_n-\xi =\lambda _0 e_n^{4}+\lambda _1 e_n^{5}+\lambda _2 e_n^{6}+\lambda _3 e_n^{7}+ \lambda _4 e_n^{8}+O(e_n^9), \end{aligned}$$
(3.6)

where \(\lambda _0 \ne 0\).

In addition, the Taylor’s series expansion of the function \(f(z_n)\) gives us

$$\begin{aligned} f(z_n)=c_1\left( \lambda _0 e_n^{4}+\lambda _1 e_n^{5}+\lambda _2 e_n^{6}+\lambda _3 e_n^{7}\right) +(c_2 \lambda _0^2+c_1 \lambda _4) e_n^8 +O(e_n^9). \end{aligned}$$
(3.7)

With the help of the expressions (3.1)–(3.7), we further obtain

$$\begin{aligned} f[z_n,x_n]&=c_1+c_2 e_n+c_3 e_n^2+ c_4 e_n^3+(c_2 \lambda _0+c_5)e_n^4\\&\quad + (c_3 \lambda _0+c_2 \lambda _1+c_6)e_n^5 +O(e_n^6), \end{aligned}$$
(3.8)
$$\begin{aligned} f[z_n,u_n]&=c_1+ c_2 (\beta c_1+1) e_n + \{c_3 (\beta c_1+1)^2+\beta c_2^2\} e_n^2\\&\quad + \{c_4 (\beta c_1+1)^3+\beta c_2 c_3 (2 \beta c_1+3)\}e_n^3\\&\quad + \left[ c_2 \{\beta c_4 (3 \beta ^2 c_1^2+6 \beta c_1+4)+\lambda _0\}+\beta ^2 c_3 c_2^2 \right. \\&\quad \left. +(\beta c_1+1) \{c_5 (\beta c_1+1)^3+2 \beta c_3^2\} \right] e_n^4 +O(e_n^5) \end{aligned}$$
(3.9)

and

$$\begin{aligned} f[z_n,y_n]&= c_1+\frac{c_2^2 (\beta c_1+1)}{c_1}e_n^2 \\&\quad +\frac{c_2 \left( c_1 c_3 (\beta ^2 c_1^2+3 \beta c_1+2)-c_2^2 (\beta ^2 c_1^2+2 \beta c_1+2)\right) }{c_1^2}e_n^3 +O(e_n^4) \end{aligned}$$
(3.10)

By using the expressions (3.1)–(3.10), we obtain

$$\begin{aligned} \frac{f(z_n) (u_n-x_n) (u_n-y_n) (x_n-y_n)}{ f[y_n,z_n] (u_n-x_n) (u_n-z_n) (x_n-z_n)-a (y_n-z_n) }&=\lambda _0 e_n^{4}+\lambda _1 e_n^{5}+\lambda _2 e_n^{6}+\lambda _ 3 e_n^{7} \\&\quad +\left( \frac{ c_1^2 \lambda _4-c_2 \lambda _0 \{c_4 (\beta c_1+1)^2+c_1 \lambda _0\}}{c_1^2} \right) e_n^8+O(e_n^9). \end{aligned}$$
(3.11)

Finally, by inserting the expressions (3.6) and (3.11) in the last substep of the proposed scheme (2.1) and after some simplification, we have

$$\begin{aligned} e_{n+1}=\frac{c_2 \lambda _0 \{c_4 (\beta c_1+1)^2+c_1 \lambda _0\}}{c_1^2} e_n^8+O(e_n^{9}). \end{aligned}$$
(3.12)

The above expression (3.12) confirms two things about the scheme (2.1): it attains eighth-order convergence, and it is optimal in the sense of the Kung–Traub conjecture because it uses only four functional evaluations per iteration. This completes the proof. \(\square \)

Remark 3.2

No doubt, one would generally expect some of the other coefficients \(\lambda _j,\; 1 \le j \le 4\), of \(\varphi _4 (u_n,x_n,y_n)\) to be involved in the asymptotic error constant of the scheme (2.1). However, the expression (3.12) confirms that only \(\lambda _0\) appears in the asymptotic error constant of (2.1).
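
As an informal numerical check of Theorem 3.1 (not a replacement for the proof), one can estimate the computational order of convergence \(\rho _n \approx \ln (|e_{n+1}|/|e_n|)/\ln (|e_n|/|e_{n-1}|)\) in high-precision arithmetic. The sketch below assumes the mpmath library and uses the Kung–Traub fourth-order step (case (3) of Sect. 4) as a concrete choice of \(\varphi _4\); applied to \(f(x)=x^2-1\), it should print estimates approaching 8.

```python
from mpmath import mp, mpf, log, fabs

mp.dps = 800  # enough digits to observe eighth-order error decay


def dd(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)


def phi4_kt(f, u, x, y):
    # Kung-Traub optimal fourth-order derivative-free step (case (3) of Sect. 4)
    return y - f(u) * f(y) * (y - x) / ((f(u) - f(y)) * (f(y) - f(x)))


def om_step(f, x, beta=mpf(1)):
    # One iteration of scheme (2.1) with phi4_kt as the second substep
    fx = f(x)
    u = x + beta * fx
    y = x - fx / dd(f, u, x)
    z = phi4_kt(f, u, x, y)
    a = dd(f, x, z) * (u - y) * (u - z) - dd(f, u, z) * (x - y) * (x - z)
    return z - f(z) * (u - x) * (u - y) * (x - y) / (
        dd(f, y, z) * (u - x) * (u - z) * (x - z) - a * (y - z))


f = lambda x: x ** 2 - 1
xi = mpf(1)                 # known simple zero of f
xs = [mpf('1.1')]           # initial guess close to xi
for _ in range(4):
    xs.append(om_step(f, xs[-1]))

errs = [fabs(x - xi) for x in xs]
for n in range(1, len(errs) - 1):
    if errs[n - 1] > 0 and errs[n] > 0 and errs[n + 1] > 0:
        rho = log(errs[n + 1] / errs[n]) / log(errs[n] / errs[n - 1])
        print(n, rho)
```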

4 Numerical examples

This section is devoted to checking the effectiveness and efficiency of our proposed methods against other existing optimal eighth-order derivative free methods. Therefore, we consider the following special cases of \(\varphi _4(u_n,x_n,y_n)\):

  (1) \(\varphi _4(u_n,x_n,y_n)= y_n-\frac{f\left( y_n\right) }{\frac{a f\left( y_n\right) -b f\left( u_n\right) }{y_n-u_n}+\frac{c f\left( y_n\right) -d f\left( x_n\right) }{y_n-x_n}},\; \text {such that}\; a=c=1 \;\text { and} \; b+d=1\),

  (2) \(\varphi _4(u_n,x_n,y_n)= y_n-\left[ \frac{f[y_n,\;x_n]+(p-1)f[u_n,\;y_n]-(p-1)f[u_n,\;x_n]-b (y_n-x_n)(y_n-u_n)}{f[y_n,\;x_n]+p f[u_n,\;y_n]-pf[u_n,\;x_n]-a (y_n-x_n)(y_n-u_n)}\right] \frac{f(y_n)}{f[y_n,x_n]}, \; a, b, p \in {\mathbb {R}}\), and

  (3) \(\varphi _4(u_n,x_n,y_n)=y_n-\frac{f\left( u_n\right) f\left( y_n\right) \left( y_n-x_n\right) }{\left( f\left( u_n\right) -f\left( y_n\right) \right) \left( f\left( y_n\right) -f\left( x_n\right) \right) }\),

which are chosen from Cordero and Torregrosa [9], Zheng et al. [10] and Kung–Traub [3], respectively. We denote these cases, combined with our proposed scheme (2.1), by \(OM1_8\) \(( \text {for}\; a = b = c = 1,\; d = 0,\; \beta =1)\), \(OM2_8\) \((\text {for}\; a=b=0,\; p=2\; \text {and}\;\beta =1)\) and \(OM3_8 \; (\beta =1) \), respectively. Now, we compare them with the existing optimal derivative free eighth-order methods proposed by Zheng et al. [1], Soleymani and Vanani [2], Kung and Traub [3], Khattri and Steihaug [4] and Thukral [5], from which we have chosen, respectively, method (8) with \(\gamma =1\), method (21), method (4.1) with \(\beta =1\), method (17) with \(\alpha =1\) and method (2.8) with \(\beta =1\). We denote these methods by \(ZM_8\), \(SK_8\), \(KT_8\), \(KM_8\) and \(TM_8\), respectively.
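
For completeness, a minimal sketch of the three fourth-order choices with the parameter values stated above is given below; each function follows the interface `phi4(f, u, x, y)` assumed in the sketch of Sect. 2, and the helper name `dd` for the first-order divided difference is our own.

```python
def dd(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)


def phi4_om1(f, u, x, y):
    # Case (1) with a = b = c = 1 and d = 0, as used in OM1_8
    return y - f(y) / ((f(y) - f(u)) / (y - u) + f(y) / (y - x))


def phi4_om2(f, u, x, y, a=0.0, b=0.0, p=2.0):
    # Case (2) with a = b = 0 and p = 2, as used in OM2_8
    num = dd(f, y, x) + (p - 1) * dd(f, u, y) - (p - 1) * dd(f, u, x) - b * (y - x) * (y - u)
    den = dd(f, y, x) + p * dd(f, u, y) - p * dd(f, u, x) - a * (y - x) * (y - u)
    return y - (num / den) * f(y) / dd(f, y, x)


def phi4_om3(f, u, x, y):
    # Case (3), the Kung-Traub fourth-order step, as used in OM3_8
    return y - f(u) * f(y) * (y - x) / ((f(u) - f(y)) * (f(y) - f(x)))
```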

All the examples have their roots within the square \([-3, 3]\times [-3, 3]\), except one whose roots lie within the square \([-1, 5]\times [-3, 3]\). We have taken 360,000 equally spaced points in the square as initial points for the methods, and we have recorded the total number of iterations required to converge to a root and also which root was reached. We have also collected the CPU time (in seconds) required to run each method on all the points, using a Samsung desktop computer with an Intel(R) Core(TM) i7-8700K CPU. We then computed the average number of iterations required per point and the number of points requiring more than 40 iterations.
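
The sketch below outlines how such basin data can be generated, assuming an iteration step such as `eighth_order_step` from Sect. 2 combined with one of the \(\varphi _4\) choices above. The 600-by-600 grid and the 40-iteration cap follow the description in this paragraph, while the convergence tolerance \(10^{-8}\) is our own assumption (the paper does not state it); the resulting root indices and iteration counts are the raw data summarized in the figures and in Tables 1–3.

```python
import numpy as np


def basin_data(step, f, roots, xmin=-3.0, xmax=3.0, ymin=-3.0, ymax=3.0,
               n=600, max_iter=40, tol=1e-8):
    """Iterate `step` from every point of an n-by-n grid over the square.

    Returns the index of the root reached (-1 for non-convergence, i.e. a
    black point) and the number of iterations used at each grid point.
    """
    xs = np.linspace(xmin, xmax, n)
    ys = np.linspace(ymin, ymax, n)
    root_idx = np.full((n, n), -1, dtype=int)
    iters = np.full((n, n), max_iter, dtype=int)
    for i, re in enumerate(xs):
        for j, im in enumerate(ys):
            z = complex(re, im)
            for k in range(1, max_iter + 1):
                try:
                    z = step(f, z)
                except (ZeroDivisionError, OverflowError):
                    break
                if abs(f(z)) < tol:
                    root_idx[j, i] = int(np.argmin([abs(z - r) for r in roots]))
                    iters[j, i] = k
                    break
    return root_idx, iters


# Example usage for p1(z) = z^2 - 1 (Example 1), reusing the earlier sketches:
#   f = lambda z: z * z - 1
#   step = lambda f, z: eighth_order_step(f, z, phi4_om3)
#   idx, its = basin_data(step, f, roots=[1, -1])
#   average_iterations = its.mean()
#   black_points = int((idx < 0).sum())
```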

Example 1

The first example is the quadratic polynomial

$$\begin{aligned} p_1(z)=z^2-1 \end{aligned}$$
(4.1)

whose roots are at \(\pm 1\). We have plotted the basins of attraction in Fig. 1. We used a different color for each basin, so that we can tell whether a method converged to the closest root. We have also used a lighter shade when the number of iterations is lower, and points requiring the maximum number of iterations are colored black. Therefore, ideally a method should show lighter shades. The best methods are \(OM3_8, OM1_8, ZM_8\) and \(KM_8\).

Fig. 1 The top row for \(ZM_8\) (left), \(SK_8\) (center) and \(KT_8\) (right). The second row for \(KM_8\) (left), \(TM_8\) (center) and \(OM1_8\) (right). The bottom row for \(OM2_8\) (left), \(OM3_8\) (right) for the roots of the polynomial \(z^2-1\)

Fig. 2 The top row for \(ZM_8\) (left), \(SK_8\) (center) and \(KT_8\) (right). The second row for \(KM_8\) (left), \(TM_8\) (center) and \(OM1_8\) (right). The bottom row for \(OM2_8\) (left), \(OM3_8\) (right) for the roots of the polynomial \(z^3-1\)

Fig. 3 The top row for \(ZM_8\) (left), \(SK_8\) (center) and \(KT_8\) (right). The second row for \(KM_8\) (left), \(TM_8\) (center) and \(OM1_8\) (right). The bottom row for \(OM2_8\) (left), \(OM3_8\) (right) for the roots of the polynomial \(z^4-1\)

Fig. 4 The top row for \(ZM_8\) (left), \(SK_8\) (center) and \(KT_8\) (right). The second row for \(KM_8\) (left), \(TM_8\) (center) and \(OM1_8\) (right). The bottom row for \(OM2_8\) (left), \(OM3_8\) (right) for the roots of the polynomial \(z^4-7.79075z^3+14.7445z^2+2.511z-1.674\)

Fig. 5 The top row for \(ZM_8\) (left), \(SK_8\) (center) and \(KT_8\) (right). The second row for \(KM_8\) (left), \(TM_8\) (center) and \(OM1_8\) (right). The bottom row for \(OM2_8\) (left), \(OM3_8\) (right) for the roots of the polynomial \(z^5-1\)

Fig. 6 The top row for \(ZM_8\) (left), \(SK_8\) (center) and \(KT_8\) (right). The second row for \(KM_8\) (left), \(TM_8\) (center) and \(OM1_8\) (right). The bottom row for \(OM2_8\) (left), \(OM3_8\) (right) for the root of the non-polynomial function \(z+\cos (z)+\frac{\pi }{2}\)

Now we check Table 1 to see the average number of iterations per point. The minimum is 2.34, achieved by \(OM3_8\), followed by \(ZM_8\), \(KM_8\) and \(OM1_8\) (2.56), and \(OM2_8\) (2.58). The highest value (4.57) is used by \(KT_8\). All other methods use between 3.01 and 4.13 iterations per point on average.

Table 1 Average number of iterations per point for each test function (1–6) and each of the methods
Table 2 CPU time (in seconds) required for each test function (1–6) and each of the methods
Table 3 Number of points requiring more than 40 iterations for each test function (1–6) and each of the methods

Based on the CPU time in seconds, we find that the fastest method is \(ZM_8\) with 135.219 s. The slowest is \(SK_8\) with 238.062 s. We can see that the basins for this method have many black points (Fig. 1, middle of top row). In terms of the number of black points (see Table 3) we find that most methods have no such points except \(SK_8\) (30825 points), \(TM_8\) (94 points), and \(KT_8\) (4 points).

Example 2

The second example is the cubic polynomial

$$\begin{aligned} p_2(z)=z^3-1 \end{aligned}$$
(4.2)

having the 3 roots of unity.

The basins of attraction are given in Fig. 2. Based on these plots, we find that \(OM3_8\), \(OM2_8\), \(ZM_8\) and \(OM1_8\) are best. Based on Table 1, we find that the minimum number of iterations per point on average is achieved by \(OM3_8\) (3.19), followed by \(ZM_8\) (3.60), and \(OM1_8\) and \(KM_8\) (3.61). The worst are \(SK_8\) (17.28) and \(KT_8\) (11.63). All the other methods use between 3.65 and 4.88 iterations per point.

The fastest method is \(ZM_8\) (244.796 s) and the slowest are \(SK_8\) (704.48 s), \(KT_8\) (605.61 s), \(TM_8\) (475.14 s) and \(KM_8\) (392.906 s). Based on the number of black points, \(SK_8\) is clearly the worst with 129371 such points, followed by \(KT_8\) (14258).

Example 3

The third example is the quartic polynomial whose roots are the fourth roots of unity, i.e. the polynomial is given by:

$$\begin{aligned} p_3(z)=z^4-1. \end{aligned}$$
(4.3)

The basins of attraction are displayed in Fig. 3. It seems that the best methods are \(OM3_8, OM1_8\) and \(KM_8\). Consulting the number of iterations per point, we find that \(OM1_8\) is best (4.78), followed by \(KM_8\) and \(ZM_8\) (5.22), and \(OM3_8\) (5.33). The worst is \(SK_8\) (20.41). All the others use between 5.52 and 13.22 iterations per point. The fastest method is again \(ZM_8\) (356.656 s). The slowest are \(SK_8\) (798.81 s), \(KT_8\) (664.89 s), and \(KM_8\) (588.813 s). All methods have black points.

Example 4

Let us consider a quartic equation from [11, 12], which describes the fraction of the nitrogen-hydrogen feed that gets converted to ammonia (this fraction is called the fractional conversion). At 250 atm and \(500\,^{\circ }\)C, the mentioned equation can be converted into the following form

$$\begin{aligned} p_4(z)=z^4-7.79075 z^3+14.7445 z^2+2.511 z-1.674. \end{aligned}$$
(4.4)

The above function has four zeros in total; two of them are real and the other two form a complex-conjugate pair. Our desired zero is \(\xi =3.9485424455620457727+0.3161235708970163733 i\).
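
As a quick stand-alone illustration (separate from the basin experiment), the following sketch applies the \(OM3_8\) iteration in ordinary double precision to \(p_4\) from a starting point near the desired zero; all helper names are ours.

```python
def dd(f, a, b):
    return (f(a) - f(b)) / (a - b)


def phi4_kt(f, u, x, y):
    # Kung-Traub fourth-order step, i.e. case (3) above
    return y - f(u) * f(y) * (y - x) / ((f(u) - f(y)) * (f(y) - f(x)))


def om3_step(f, x, beta=1.0):
    # One OM3_8 iteration: scheme (2.1) with phi4_kt and beta = 1
    fx = f(x)
    u = x + beta * fx
    y = x - fx / dd(f, u, x)
    z = phi4_kt(f, u, x, y)
    a = dd(f, x, z) * (u - y) * (u - z) - dd(f, u, z) * (x - y) * (x - z)
    return z - f(z) * (u - x) * (u - y) * (x - y) / (
        dd(f, y, z) * (u - x) * (u - z) * (x - z) - a * (y - z))


p4 = lambda z: z ** 4 - 7.79075 * z ** 3 + 14.7445 * z ** 2 + 2.511 * z - 1.674

z = 3.95 + 0.32j            # starting point near the desired complex zero
for _ in range(4):
    if abs(p4(z)) < 1e-12:  # stop once the residual reaches roundoff level
        break
    z = om3_step(p4, z)
print(z)                    # expected to approach 3.94854... + 0.31612...i
```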

The basins are displayed in Fig. 4. The best methods are \(TM_8, OM1_8\) and \(OM2_8\). Based on the average number of iterations per point (see Table 1) we find that the minimum is achieved by \(OM3_8\) (3.86), followed closely by \(TM_8\) (3.99) and \(OM1_8\) (4.82). The worst method in this sense is \(KT_8\) which uses 7.34 iterations per point on average. The rest of the methods use 5.35–6.93 iterations per point on average. In terms of the CPU time, the fastest method is \(OM3_8\) (651.703 s). The slowest are \(KM_8\) with 999.453 s and \(SK_8\) with 912.7 s. All others use 689.39–895.094 s. Based on the number of black points, the worst method is \(SK_8\) with 23514 black points, and all other methods have between 1075 and 4264 black points.

Example 5

The fifth example is the fifth-degree polynomial

$$\begin{aligned} p_5(z)=z^5-1. \end{aligned}$$
(4.5)

The basins are displayed in Fig. 5. It seems that the best methods are \(KM_8\) and \(OM1_8\). The data in Tables 1, 2 and 3 give quantitative information. Based on Table 1, we find that \(SK_8\) is the worst, requiring 23.18 iterations per point on average. The smallest number of iterations on average is for \(OM1_8\) (8.01), closely followed by \(KM_8\) (8.06). The rest of the methods use between 8.07 and 21.19 iterations per point. The fastest method is \(ZM_8\) (589 s) and the slowest is \(SK_8\) (1003.7 s). The rest use 614.5–1001.94 s. In terms of black points, we find again that the worst are \(SK_8\) (187906) and \(KT_8\) (130717). All other methods have at least 16837 black points.

Example 6

An undesirable RF breakdown that may happen in high power microwave devices working under vacuum conditions is known as the multipactor effect [13]. For example, multipactor appears inside a parallel plate waveguide. There exists an electric field with an electric potential difference which creates the electron movement between the two plates. An interesting case in the study of the electron trajectories is when the electron reaches a plate; in general this corresponds to a root of multiplicity 2, although the normalized case considered below has a simple zero. The trajectory of an electron in the air gap between two parallel plates is given by

$$\begin{aligned} y(t) &= y_{0}+\left( v_{0}+e\frac{E_{0}}{m\omega }\sin (\omega t_{0}+\alpha )\right) (t-t_{0})+e\frac{E_{0}}{m\omega ^{2}}(\cos (\omega t+\alpha )-\cos (\omega t_{0}+\alpha )) \end{aligned}$$
(4.6)

where t is time, m and e are the mass and charge of the electron at rest, \( E_{0}\sin (\omega t+\alpha )\) is the RF electric field between plates and \( y_{0}\) and \(v_{0}\) are the position and velocity of the electron at time \( t_{0}\). We consider the following particular case of (4.6), where the parameters have been normalized:

$$\begin{aligned} p_6(z)=z+\cos (z)+ \frac{\pi }{2}, \end{aligned}$$
(4.7)

with the simple zero \(\xi =-1.570796326794896619231322\). Thus, our last example is a non-polynomial function, defined by (4.7), that was difficult for many methods. The basins are displayed in Fig. 6. The best methods seem to be \(OM3_8\) and \(TM_8\). In terms of the average number of iterations per point, \(OM2_8\) is the best method with 18.96, followed by \(OM3_8\) with 19.59. The worst are \(SK_8\) and \(KT_8\) with 24.45 iterations per point on average. The fastest method (Table 2) is \(OM3_8\) (394.750 s) and the slowest is \(KM_8\) (770.365 s). \(SK_8\) and \(KT_8\) have the highest number of black points (Table 3). It is clear that one has to use quantitative measures to distinguish between the methods, since we reach a different conclusion when just viewing the basins of attraction.

In order to pick the best method overall, we have averaged the results in Tables 1, 2 and 3 across the six examples. The best method based on the three criteria used is \(OM3_8\). The method with the fewest iterations per point is \(OM3_8\) (7.19), followed closely by \(OM1_8\) (7.50) and \(OM2_8\) (7.52). The fastest method is \(ZM_8\) (415.487 s), closely followed by \(OM3_8\) (418.597 s). The methods with the fewest black points on average are \(OM2_8\) (32082.5 points) and \(OM1_8\) (32673.7 points).

5 Conclusions

In this work, we have presented a new general optimal eighth-order scheme that is applicable to every earlier optimal multi-point derivative free scheme of order four whose first substep is a Steffensen-type method, and that constructs from it new optimal iteration functions of order eight. In addition, it does not use any derivative of the considered function and satisfies the classical Kung–Traub optimality conjecture. In this way, many new optimal eighth-order derivative free iteration functions can easily be obtained. We have compared the basins of attraction of our methods with those of several existing methods of the same order using three quantitative measures and found that the best method based on all three criteria is \(OM3_8\). Therefore, we conclude that our proposed iteration scheme provides methods at least as good as the ones available in the literature.