1 Introduction

Constructing higher order multi-point numerical methods for the multiple zeros of a univariate function f(t), where \(f:\mathbb {C}\rightarrow \mathbb {C}\) is analytic in a neighborhood of the required zero, is one of the most important and difficult problems in numerical analysis. The advantages of multi-point iterative methods over one-point iterative methods can be found in the excellent book by Traub [19]. More recently, Argyros and Regmi have also highlighted newly optimized advantages of multi-point iterative methods in their excellent book (see [1]). The most basic one-point method is Newton’s method [13], modified for multiple zeros as

$$\begin{aligned} t_{i+1}=t_i-\mu \frac{f(t_i)}{f'(t_i)}, \ \ \ \ i=0, 1, 2, \ldots \end{aligned}$$
(1)

Here \(\mu \) is the multiplicity of a zero (say, \(\alpha \)) of the function f(t), that is, \(f^{(j)}(\alpha )=0, j=0,1,\ldots ,\mu -1\) and \(f^{(\mu )}(\alpha )\ne 0\).
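As an illustration, iteration (1) can be sketched in a few lines of Python (the test function, starting point and tolerance below are our own choices, not taken from the paper):

```python
import math

# Modified Newton iteration (1) for a zero of known multiplicity mu.
def modified_newton(f, df, t0, mu, tol=1e-12, max_iter=50):
    t = t0
    for _ in range(max_iter):
        if abs(f(t)) < tol:
            break
        t = t - mu * f(t) / df(t)
    return t

# f(t) = (exp(t-1) - 1)^2 has a zero of multiplicity 2 at t = 1;
# with mu = 2 the iteration regains quadratic convergence.
f  = lambda t: (math.exp(t - 1) - 1)**2
df = lambda t: 2 * (math.exp(t - 1) - 1) * math.exp(t - 1)
root = modified_newton(f, df, t0=2.0, mu=2)
```

With \(\mu =1\) this reduces to the classical Newton iteration; omitting \(\mu \) at a multiple zero would degrade the convergence to linear.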

Many higher order multi-point methods, either independent of or dependent on the Newton scheme (1), have been proposed in the literature; see [2, 5, 7, 9,10,11, 14,15,16,17,18, 22] and the references cited therein. In particular, Liu and Zhou [11] have recently proposed the following scheme of two-point Newton-like methods:

$$\begin{aligned} y_i=&\, t_i-\mu \frac{f(t_i)}{f'(t_i)},\nonumber \\ t_{i+1}=&\,y_i-\mu G(u_i)\frac{f(t_i)}{f'(t_i)}, \end{aligned}$$
(2)

where \(u_i=\Big (\frac{f'(y_i)}{f'(t_i)} \Big )^{\frac{1}{\mu -1}}\) and \(G: \mathbb {C}\rightarrow \mathbb {C}\) is a holomorphic function in a neighborhood of the origin. They have shown that this iterative scheme attains fourth order convergence provided that the function G(u) satisfies the conditions:

$$\begin{aligned} G(0) = 0, \ G'(0) = 1, \ G''(0) =\frac{4\mu }{\mu -1}, \ \mu \ne 1. \end{aligned}$$
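The simplest polynomial weight satisfying these conditions is \(G(u)=u+\frac{2\mu }{\mu -1}u^2\). A minimal Python sketch of scheme (2) with this choice (test function and starting point are illustrative assumptions):

```python
import math

# Liu-Zhou two-point scheme (2) with the simplest admissible weight
# G(u) = u + (2*mu/(mu-1)) * u**2, fourth order for mu > 1.
def liu_zhou(f, df, t0, mu, tol=1e-14, max_iter=30):
    t = t0
    for _ in range(max_iter):
        if abs(f(t)) < tol:
            break
        n = mu * f(t) / df(t)                      # Newton correction mu*f/f'
        y = t - n
        u = (df(y) / df(t)) ** (1.0 / (mu - 1))    # principal branch of the root
        t = y - (u + 2 * mu / (mu - 1) * u**2) * n
    return t

# f(t) = (exp(t-1) - 1)^3 has a zero of multiplicity 3 at t = 1.
f  = lambda t: (math.exp(t - 1) - 1)**3
df = lambda t: 3 * (math.exp(t - 1) - 1)**2 * math.exp(t - 1)
root = liu_zhou(f, df, t0=1.5, mu=3)
```

Note that the ratio \(f'(y_i)/f'(t_i)\) is nonnegative here, so real arithmetic suffices for this example.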

In this work our aim is to develop multi-point iterations with high computational efficiency, that is, iterative methods of higher convergence order that use as few evaluations as possible. The notion of computational efficiency is closely related to the well-known conjecture of Kung and Traub [8], according to which a multi-point method without memory requiring n functional evaluations can attain convergence order at most \(2^{n-1}\). Methods attaining the Kung–Traub bound are known as optimal methods. Clearly, the Liu–Zhou method is optimal with fourth order convergence. Due to the complexity of developing such iterative procedures, optimal methods for computing multiple roots are seldom obtained.

Keeping the above points in view, we propose a family of methods that requires four new pieces of information per iteration and hence possesses optimal eighth order convergence in the sense of the Kung–Traub hypothesis. The proposed scheme is a composition of three steps: the Liu–Zhou iteration (2) in the first two steps and a Newton-type iteration in the third step. The scheme is distinctive in that it requires only two function and two derivative evaluations per iteration. Its efficiency index (see [12]) is 1.682, which is better than the efficiency index 1.587 of the basic fourth order Liu–Zhou method. The new scheme can therefore also be viewed as a modification of the Liu–Zhou scheme.

We summarize the contents of this article. In Sect. 2, the eighth order iterative technique is developed and its convergence is studied. Some numerical tests are performed in Sect. 3 to check the stability of the methods and to verify the theoretical results. A comparison with the existing methods is also shown in this section. In Sect. 4 concluding remarks are reported.

2 Development of scheme

Authors in this research area have used a variety of techniques to develop higher order iterative methods for solving nonlinear equations, among them the interpolation, sampling, composition, geometrical, Adomian and weight-function approaches. Of these, the weight-function approach has been the most popular in recent times; see, for example, [5, 11, 14, 22] and the references cited therein. We also use this technique in the present work of computing a multiple root with multiplicity \(\mu > 1\). We therefore consider the following three-step iterative scheme:

$$\begin{aligned} y_{i}= \&t_i-\mu \frac{ f(t_i)}{f'(t_i)}, \nonumber \\ z_{i}= \&y_i-\mu G(u_i)\frac{ f(t_i)}{f'(t_i)}, \nonumber \\ t_{i+1}= \&z_i-\mu v_i B(u_i)H(v_i)K(w_i)\frac{ f(t_i)}{f'(t_i)}, \end{aligned}$$
(3)

where \(u_i=\Big (\frac{f'(y_i)}{f'(t_i)} \Big )^{\frac{1}{\mu -1}}\), \(v_i=\Big (\frac{f(z_i)}{f(t_i)} \Big )^{\frac{1}{\mu }}\), \(w_i = \frac{v_i}{u_i}\), and the functions \(G,B,H,K:\mathbb {C}\rightarrow \mathbb {C}\) are analytic in a neighborhood of 0. The second and third steps are weighted by the factors G, B, H and K, so these factors are called weight factors or weight functions. Note that \(u_i\) and \(v_i\) are \((\mu -1)\)-valued and \(\mu \)-valued functions, respectively, so we consider their principal analytic branches; that is, it is convenient to treat them as principal roots.
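In a complex-arithmetic implementation, the principal branch is obtained via the principal logarithm; a small illustrative sketch (the helper function is our own, not part of the scheme):

```python
import cmath

# Principal m-th root of a complex number r, via the principal logarithm:
# r**(1/m) = exp(log(r)/m), with Im(log r) in (-pi, pi].
def principal_root(r, m):
    return cmath.exp(cmath.log(r) / m)

# For r = -8 and m = 3 this gives 2*exp(i*pi/3) = 1 + sqrt(3)*i,
# not the real cube root -2.
w = principal_root(complex(-8), 3)
```

The built-in complex power `r ** (1/m)` in Python follows the same convention, which is what the sketches below rely on.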

In the sequel we explore conditions under which the scheme (3) achieves convergence of order as high as possible. The lengthy calculations are handled with a computer algebra system, namely Mathematica (see [21]). To study the convergence, the following theorem is stated and proved:

Theorem 1

Let \(f: \mathbb {C}\rightarrow \mathbb {C}\) be an analytic function in a region enclosing a multiple zero (say, \(\alpha \)) of multiplicity \(\mu >1\). Assume that the initial guess \(t_0\) is sufficiently close to \(\alpha \); then the local order of convergence of scheme (3) is at least 8, provided that

$$\begin{aligned}&G(0) = 0, \ G'(0) = 1, \ G''(0) =\frac{4\mu }{\mu -1},\\&G'''(0) =\frac{6 (\mu (\mu ^3-3 \mu ^2+\mu +1)B'''(0) + 2(6 \mu ^4+ \mu ^3- 5 \mu ^2- 3 \mu -3)B''(0))}{(\mu -1)^2 (\mu (1-\mu )B'''(0) + 6(\mu ^2-\mu -1)B''(0) )}, \ \ \\&B(0) = \frac{(\mu -1) (\mu (1-\mu )B'''(0) + 6 (\mu ^2-\mu -1)B''(0))}{4 (9 \mu ^3- 8 \mu ^2- 5 \mu +6)}, \\&B'(0) = 2B(0), \ H(0)=\frac{\mu ^2 H'(0)}{2 (\mu ^2-\mu -1)}, \ K(0)= \frac{1}{B(0)H(0)}, \ K'(0) =\frac{\mu -1}{\mu } K(0),\\&H'(0) \ne 0, \ \mu (1-\mu )B'''(0) + 6(\mu ^2-\mu -1)B''(0) \ne 0. \end{aligned}$$

Proof

Let the error at the i-th iteration be \(\epsilon _i=t_i-\alpha \). Developing \(f(t_i)\) and \(f'(t_i)\) about \(\alpha \) by Taylor’s series, we have

$$\begin{aligned} f(t_i)=\ \frac{f^{(\mu )}(\alpha )}{\mu !}\epsilon ^\mu _i\left( 1+ C_1\epsilon _i+C_2\epsilon ^2_i+C_3\epsilon ^3_i+C_4\epsilon ^4_i+C_5\epsilon ^5_i+C_6\epsilon ^6_i+\cdots \right) \end{aligned}$$
(4)

and

$$\begin{aligned} f'(t_i)= & {} \frac{f^{(\mu )}(\alpha )}{\mu !}\epsilon ^{\mu -1}_i\left( \mu +(\mu +1)C_1\epsilon _i+(\mu +2)C_2\epsilon ^2_i+(\mu +3)C_3\epsilon ^3_i\right. \nonumber \\&\left. +(\mu +4)C_4\epsilon ^4_i+\cdots \right) , \end{aligned}$$
(5)

where \(C_n=\frac{\mu !}{(\mu +n)!}\frac{f^{(\mu +n)}(\alpha )}{f^{(\mu )}(\alpha )}\) for \(n\in \mathbb {N}\).

Using (4) and (5) in first step of (3), it follows that

$$\begin{aligned} \epsilon _{y_i} = y_i-\alpha = \frac{C_1}{\mu }\epsilon _i^2+\frac{2\mu C_2-(\mu +1)C_1^2}{\mu ^2}\epsilon _i^3 +\sum _{n=1}^{5}\phi _n \epsilon _i^{n+3}+O(\epsilon _i^9), \end{aligned}$$
(6)

where \(\phi _n = \phi _n (\mu ,C_1,C_2,C_3,\ldots ,C_7)\), \(n=1,2,\ldots ,5\). For brevity, the expressions of \(\phi _n\), being very lengthy, are not written explicitly. Lengthy expressions arising in the subsequent computations will likewise not be shown explicitly.

Expansion of \(f'(y_i)\) about \(\alpha \) leads us to the expression

$$\begin{aligned} f'(y_i)= & {} \frac{f^{(\mu )}(\alpha )}{\mu !}\epsilon ^{\mu -1}_{y_i}\left( \mu +(\mu +1)C_1\epsilon _{y_i}+(\mu +2)C_2\epsilon ^2_{y_i}+(\mu +3)C_3\epsilon ^3_{y_i}\right. \nonumber \\&\left. +(\mu +4)C_4\epsilon ^4_{y_i}+\cdots \right) . \end{aligned}$$
(7)

Using (5) and (7) in \(u_i=\Big (\frac{f'(y_i)}{f'(t_i)} \Big )^{\frac{1}{\mu -1}}\), we have that

$$\begin{aligned} u_i= \frac{C_1}{\mu }\epsilon _i+\frac{2(\mu -1)C_2-(\mu +1)C_1^2}{\mu (\mu -1)}\epsilon ^2_i+\sum _{n=1}^6\eta _n \epsilon _i^{n+2}+O(\epsilon _i^9), \end{aligned}$$
(8)

where \(\eta _n=\eta _n(\mu ,C_1,C_2,\ldots ,C_8)\).

Expansion of weight function \(G(u_i)\) in the neighborhood of origin yields

$$\begin{aligned} G(u_i)\approx \ G(0)+u_i G'(0)+\frac{1}{2}u_i^2 G''(0)+\frac{1}{6}u_i^3 G'''(0)+O(u_i^4). \end{aligned}$$
(9)

Then the second step of scheme (3), on using Eqs. (4)–(6), (8) and (9), produces

$$\begin{aligned} \epsilon _{z_i} =z_i-\alpha = -G(0) \epsilon _i+\frac{1+G(0)-G'(0)}{\mu }C_1 \epsilon _i^2 +\sum _{n=1}^6\gamma _n \epsilon _i^{n+2}+O(\epsilon _i^9), \end{aligned}$$
(10)

where \(\gamma _n=\gamma _n(\mu , G(0), G'(0), G''(0), G'''(0), C_1, C_2,\ldots ,C_7)\).

It follows that fourth order convergence is achieved if the coefficients of \(\epsilon _i\), \(\epsilon _i^2\) and \(\epsilon _i^3\) vanish. The resulting equations yield

$$\begin{aligned} G(0)=0,\quad G'(0)=1\quad \text {and}\quad G''(0)=\frac{4\mu }{\mu -1}. \end{aligned}$$
(11)

By using (11) in (10), we obtain that

$$\begin{aligned} \epsilon _{z_i}= & {} \frac{C_1}{6\mu ^3(\mu -1)^2}\big ((3 (\mu ^3+ 8 \mu ^2+ \mu +2)-G'''(0) (\mu -1)^2 )C_1^2-6 (\mu -1) \mu ^2 C_2\big )\epsilon _i^4\nonumber \\&+\sum _{n=1}^4\varphi _n \epsilon _i^{n+4}+O(\epsilon _i^9), \end{aligned}$$
(12)

where \(\varphi _n =\varphi _n(\mu , G'''(0), C_1, C_2,\ldots ,C_6)\).

Developing \(f(z_i)\) about \(\alpha \), we obtain

$$\begin{aligned} f(z_i)=\ \frac{f^{(\mu )}(\alpha )}{\mu !}\epsilon _{z_i}^\mu \left( 1+ C_1\epsilon _{z_i}+C_2\epsilon ^2_{z_i}+C_3\epsilon ^3_{z_i}+C_4\epsilon ^4_{z_i}+C_5\epsilon ^5_{z_i}+C_6\epsilon ^6_{z_i}+\cdots \right) . \end{aligned}$$
(13)

From (4) and (13), we get the expression of \(v_i=\Big (\frac{f(z_i)}{f(t_i)} \Big )^{\frac{1}{\mu }}\) as

$$\begin{aligned} v_i= & {} \frac{C_1}{6\mu ^3(\mu -1)^2}\left( (3 (\mu ^3+ 8 \mu ^2+ \mu +2)-G'''(0) (\mu -1)^2 )C_1^2-6 (\mu -1) \mu ^2 C_2\right) \epsilon _i^3\nonumber \\&+\sum _{n=1}^5\tau _n \epsilon _i^{n+3}+O(\epsilon _i^9), \end{aligned}$$
(14)

where \(\tau _n=\tau _n(\mu ,G'''(0), C_1, C_2,\ldots ,C_6)\).

The use of (8) and (14) in \(w_i = \frac{v_i}{u_i}\) gives

$$\begin{aligned} w_i= & {} \frac{1}{6\mu ^2(\mu -1)^2}\left( (3 (\mu ^3+8 \mu ^2+ \mu +2)-G'''(0) (\mu -1)^2 )C_1^2-6 (\mu -1) \mu ^2 C_2\right) \epsilon _i^2\nonumber \\&+\sum _{n=1}^6 \psi _n \epsilon _i^{n+2}+O(\epsilon _i^9), \end{aligned}$$
(15)

where \(\psi _n=\psi _n(\mu ,G'''(0), C_1, C_2,\ldots ,C_7)\).

Next, we expand the weight functions \(B(u_i)\), \(H(v_i)\) and \(K(w_i)\) in the neighborhood of the origin by Taylor series:

$$\begin{aligned}&\ B(u_i) \approx \ B(0)+ u_iB'(0)+\frac{1}{2}u_i^2B''(0)+\frac{1}{6}u_i^3B'''(0), \end{aligned}$$
(16)
$$\begin{aligned}&\ H(v_i) \approx \ H(0)+ v_iH'(0), \end{aligned}$$
(17)
$$\begin{aligned}&\ K(w_i) \approx \ K(0)+ w_iK'(0). \end{aligned}$$
(18)

Hence by substituting (4), (5), (8), (12), (14)–(18) into the last step of scheme (3), we obtain the error equation

$$\begin{aligned} \epsilon _{i+1}= & {} \frac{(B(0) H(0) K(0)-1)C_1}{6\mu ^3(\mu -1)^2}\left( (G'''(0) (\mu -1)^2 - 3 (\mu ^3+ 8\mu ^2+ \mu +2)) C_1^2 \right. \nonumber \\&\left. + 6 (\mu -1) \mu ^2 C_2\right) \epsilon _i^4+\sum _{n=1}^4\xi _n \epsilon _i^{n+4}+O(\epsilon _i^9), \end{aligned}$$
(19)

where \(\xi _n=\xi _n(\mu ,G'''(0),B(0),B'(0),B''(0),B'''(0),H(0),H'(0),K(0),K'(0), C_1, C_2,\ldots ,C_6)\).

To obtain eighth order, it is sufficient to set the coefficients of \(\epsilon _i^4\), \(\epsilon _i^5\), \(\epsilon _i^6\) and \(\epsilon _i^7\) simultaneously equal to zero. This yields

$$\begin{aligned} \left\{ \begin{array}{l} G'''(0) =\frac{6 (\mu (\mu ^3-3 \mu ^2+\mu +1)B'''(0) + 2(6 \mu ^4+ \mu ^3- 5 \mu ^2- 3 \mu -3)B''(0))}{(\mu -1)^2 (\mu (1-\mu )B'''(0) + 6(\mu ^2-\mu -1)B''(0) )}, \\ B(0) = \frac{(\mu -1) (\mu (1-\mu )B'''(0) + 6 (\mu ^2-\mu -1)B''(0))}{4 (9 \mu ^3- 8 \mu ^2- 5 \mu +6)}, \\ B'(0) = 2B(0), \ H(0)=\frac{\mu ^2 H'(0)}{2 (\mu ^2-\mu -1)}, \ K(0)= \frac{1}{B(0)H(0)}, \ K'(0) =\frac{\mu -1}{\mu } K(0), \end{array}\right. \end{aligned}$$
(20)

wherein \(H'(0) \ne 0\) and \(\mu (1-\mu )B'''(0) + 6(\mu ^2-\mu -1)B''(0) \ne 0\).

Substituting the values given by (20) into the error equation (19), some simple calculations yield

$$\begin{aligned} \epsilon _{i+1} =&\ \frac{C_1}{48 (\mu -1)^5 \mu ^8(\mu (1-\mu )B'''(0)+ 6(\mu ^2-\mu -1)B''(0))^3}\nonumber \\&\times \Big [\big (-\mu (\mu ^3+9\mu ^2-13\mu +3)B'''(0) +2 (3 \mu ^4+ 9 \mu ^3-26 \mu ^2- 11 \mu -3)B''(0)\big )C_1^2 \nonumber \\&\quad +2 (\mu -1) \mu \big (\mu (\mu -1)B'''(0)+(6 + 6 \mu - 6 \mu ^2)B''(0)\big )C_2\Big ]\nonumber \\&\times \Big [\big (\mu ^2(\mu -1)^2 (14 \mu ^7+ 131 \mu ^6+ 172 \mu ^5-1058 \mu ^4+ 374 \mu ^3+ 87 \mu ^2+ 280 \mu +48)B'''(0)^2\nonumber \\&\quad -12 \mu (14 \mu ^{10}+ 67 \mu ^9- 310 \mu ^8 - 424 \mu ^7+1557 \mu ^6- 469 \mu ^5- 268 \mu ^4- 558 \mu ^3+ 63 \mu ^2+280 \mu +48)B''(0) B'''(0)\nonumber \\&\quad +12 (42 \mu ^{11}+93 \mu ^{10}- 984 \mu ^9- 315 \mu ^8+ 3878 \mu ^7+ 200 \mu ^6-2468 \mu ^5- 3143 \mu ^4- 372 \mu ^3+ 1941 \mu ^2+ 1128 \mu +144)B''(0)^2\big )C_1^4\nonumber \\&\quad -12 \mu (\mu -1)\big (\mu ^2(\mu -1)^2(4\mu ^5+ 21 \mu ^4- 36 \mu ^3- \mu ^2+8)B'''(0)^2\nonumber \\&\qquad -4\mu (12 \mu ^8+21 \mu ^7- 182 \mu ^6+ 185 \mu ^5+ 53 \mu ^4- 50 \mu ^3- 63 \mu ^2+24)B''(0) B'''(0)\nonumber \\&\qquad +12(12 \mu ^9+ 3 \mu ^8-142 \mu ^7+130 \mu ^6+ 152 \mu ^5- 46 \mu ^4- 142 \mu ^3- 51 \mu ^2+ 48 \mu +24)B''(0)^2\big )C_1^2 C_2\nonumber \\&\quad +12 (\mu -1)^2 \mu ^4 (2\mu -3)\big (\mu (\mu -1)B'''(0)+(6 + 6 \mu - 6 \mu ^2)B''(0)\big )^2C_2^2\nonumber \\&\quad + 24 (\mu -1)^3\mu ^3 (1 + \mu ) \big (\mu (\mu -1)B'''(0)+(6 + 6 \mu - 6 \mu ^2)B''(0)\big )^2 C_1 C_3 \Big ]\,\epsilon _i^8 + O\left( \epsilon _i^9\right) . \end{aligned}$$
(21)

Hence, the eighth order convergence is established. This completes the proof of theorem. \(\square \)

2.1 Some concrete methods

Many special cases of the scheme (3) can be generated by satisfying the conditions on the functions G, B, H and K stated in Theorem 1. We restrict the choices to simple forms of low degree, chosen so that the resulting methods converge to the root with order eight for \(\mu > 1\). Accordingly, the following forms are chosen:

$$\begin{aligned} G(u_i) =&\ \ u_i + \frac{2 \mu }{\mu -1} u_i^2 + \frac{ (23 \mu ^4+ 7 \mu ^3-21 \mu ^2-13 \mu -12)}{(\mu -1)^2 (13 \mu ^2-13 \mu -12)} u_i^3. \end{aligned}$$
(22)
$$\begin{aligned} B(u_i) =&\ \ -\frac{13 \mu ^3-26 \mu ^2+ \mu +12}{4 (9\mu ^3- 8\mu ^2-5 \mu +6)}\Bigg (\frac{1}{2}+u_i\Bigg )-\frac{u_i^2}{2}\bigg (1-\frac{1}{6}u_i\bigg ). \end{aligned}$$
(23)
$$\begin{aligned} B(u_i) =&\ \ \frac{1}{8 (9 \mu ^3-8 \mu ^2-5\mu +6)(6+u_i)} \bigg (-72 - 6 \mu + 156 \mu ^2 -78 \mu ^3 \nonumber \\&\ -(169 \mu ^3- 338 \mu ^2+ 13 \mu +156) u_i - (242 \mu ^3-244 \mu ^2-118 \mu +168) u_i^2\bigg ). \end{aligned}$$
(24)
$$\begin{aligned} H(v_i) =&\ \ \frac{\mu ^2}{2 (\mu ^2-\mu -1)}+v_i. \end{aligned}$$
(25)
$$\begin{aligned} H(v_i) =&\ \ \frac{\mu ^3 v_i+\mu ^2 (1 + 2 v_i)- 2 \mu v_i-2 v_i}{2 (\mu ^2-\mu -1) (1 + \mu v_i)}. \end{aligned}$$
(26)
$$\begin{aligned} H(v_i) =&\ \ \frac{\mu ^3 ( 2 \mu ^2 v_i- 2 \mu v_i- 2 v_i+\mu )}{2 (\mu ^2-\mu -1) (2 v_i+ 2 \mu ^3 v_i + \mu ^2 (v_i^2- 4v_i+1))}. \end{aligned}$$
(27)
$$\begin{aligned} K(w_i) =&\ \ -\frac{16 (\mu ^2-\mu -1)(9 \mu ^3- 8\mu ^2- 5 \mu +6)}{\mu ^2(13 \mu ^2- 13 \mu -12)}\Big (\frac{1}{\mu -1}+\frac{1}{\mu }w_i\Big ). \end{aligned}$$
(28)
$$\begin{aligned} K(w_i) =&\ \ -\frac{16 (\mu ^2-\mu -1)(9 \mu ^3- 8 \mu ^2- 5 \mu +6)(\mu -w_i + \mu w_i + \mu ^2 w_i)}{(\mu -1) \mu ^3 (13 \mu ^2- 13 \mu -12) (1 + \mu w_i)}. \end{aligned}$$
(29)

Let us select the combination of (22), (23), (25), (28) in the scheme (3) and denote the resulting method by M-1; the combination of (22), (23), (26), (28) and denote the corresponding method by M-2; the combination of (22), (24), (25), (28) and denote the method by M-3; and the combination of (22), (23), (27), (28) and denote the method by M-4.
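A minimal Python sketch of method M-1, i.e. scheme (3) with the weight functions (22), (23), (25) and (28); the test function, starting point and tolerance are our own illustrative choices:

```python
import cmath

# Method M-1: scheme (3) with weights (22), (23), (25), (28).
# Complex arithmetic keeps the principal branches of the fractional
# powers well defined even if an iterate overshoots the root.
def m1(f, df, t0, m, tol=1e-30, max_iter=30):
    G = lambda u: (u + 2*m/(m - 1)*u**2
                   + (23*m**4 + 7*m**3 - 21*m**2 - 13*m - 12)
                   / ((m - 1)**2 * (13*m**2 - 13*m - 12)) * u**3)
    B = lambda u: (-(13*m**3 - 26*m**2 + m + 12)
                   / (4*(9*m**3 - 8*m**2 - 5*m + 6)) * (0.5 + u)
                   - u**2/2 * (1 - u/6))
    H = lambda v: m**2 / (2*(m**2 - m - 1)) + v
    K = lambda w: (-16*(m**2 - m - 1)*(9*m**3 - 8*m**2 - 5*m + 6)
                   / (m**2 * (13*m**2 - 13*m - 12)) * (1/(m - 1) + w/m))
    t = complex(t0)
    for _ in range(max_iter):
        if abs(f(t)) < tol:
            break
        n = m * f(t) / df(t)                   # Newton correction mu*f/f'
        y = t - n
        u = (df(y) / df(t)) ** (1.0/(m - 1))   # principal branch
        z = y - G(u) * n
        v = (f(z) / f(t)) ** (1.0/m)           # principal branch
        t = z - v * B(u) * H(v) * K(v/u) * n
    return t

# Test problem (our choice): f(t) = (exp(t-1) - 1)^3, multiplicity-3 zero at 1.
f  = lambda t: (cmath.exp(t - 1) - 1)**3
df = lambda t: 3 * (cmath.exp(t - 1) - 1)**2 * cmath.exp(t - 1)
root = m1(f, df, t0=1.3, m=3)
```

In double precision only the first iterations of the eighth order behavior are visible; the multiple-precision experiments of Sect. 3 are needed to observe it fully.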

Remark 1

The computational efficiency (E) is defined as \(E=p^{1/\theta }\), where p is the order of convergence of the method and \(\theta \) is the number of function evaluations required per iteration (see [12]). Under the conditions (11) and (20), the proposed scheme (3) attains eighth order convergence using only four functional evaluations (viz. \(f(t_i)\), \(f(z_i)\), \(f'(t_i)\) and \(f'(y_i)\)) per iteration. Thus, the E-value of the new scheme is \(8^{1/4}\approx 1.682\), which is much better than the E-values of Newton’s method (\(E= 2^{1/2}\approx 1.414\)) and the fourth order Liu–Zhou method (\(E= 4^{1/3}\approx 1.587\)).

Remark 2

The proposed algorithms require the knowledge of multiplicity \(\mu \) of a root. To estimate \(\mu \), we can employ the formula

$$\begin{aligned} \mu \approx \frac{t_{i+1}-t_i}{F(t_{i+1})-F(t_i)}, \end{aligned}$$

wherein \(F(t_i)=\frac{f(t_i)}{f'(t_i)}\); that is, \(\mu \) is approximately the reciprocal of the divided difference of F at the successive iterates \(t_i\) and \(t_{i+1}\) (see [6]).
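A small sketch of this estimate; producing the successive iterates with one unmodified Newton step \(t_{i+1}=t_i-F(t_i)\) is our own choice here, and the test function is illustrative:

```python
# Estimate the multiplicity mu from two successive iterates, using
# F(t) = f(t)/f'(t) and one unmodified Newton step (our choice).
def estimate_multiplicity(f, df, t0):
    F = lambda t: f(t) / df(t)
    t1 = t0 - F(t0)
    return (t1 - t0) / (F(t1) - F(t0))

# f(t) = (t - 2)^4 has a zero of multiplicity 4 at t = 2; since
# F(t) = (t - 2)/4 exactly, the estimate here is exactly 4.
mu_est = estimate_multiplicity(lambda t: (t - 2)**4,
                               lambda t: 4 * (t - 2)**3, t0=3.0)
```

In general the estimate is non-integer and is rounded to the nearest integer before being passed to the scheme.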

3 Numerical examples

The convergence behavior and computational efficiency of the new methods M-1, M-2, M-3 and M-4 are demonstrated by applying them to some numerical problems. Their performance is compared with some well-known existing methods; in particular, we choose the optimal fourth order methods of Kansal et al. [5], Li et al. [9, 10], Liu and Zhou [11], Sharma and Sharma [17], Soleymani et al. [18] and Zhou et al. [22]. These methods are expressed as follows:

Li–Liao–Cheng method (LLC):

$$\begin{aligned} y_i =&\ t_i - \frac{2\mu }{\mu +2}\frac{f(t_i)}{f'(t_i)}, \\ t_{i+1} =&\ t_i - \frac{\mu (\mu -2)\big (\frac{\mu }{\mu +2}\big )^{-\mu }f'(y_i)-\mu ^2 f'(t_i)}{f'(t_i)-\big (\frac{\mu }{\mu +2}\big )^{-\mu }f'(y_i)}\frac{f(t_i)}{2 f'(t_i)}. \end{aligned}$$

Li–Cheng–Neta method (LCN):

$$\begin{aligned} y_i =&\ t_i - \frac{2\mu }{\mu +2}\frac{f(t_i)}{f'(t_i)}, \\ t_{i+1} =&\ t_i - a_1 \frac{f(t_i)}{f'(y_i)}-\frac{f(t_i)}{a_2 f'(t_i)+a_3 f'(y_i)}, \end{aligned}$$

where

$$\begin{aligned} a_1 = \&-\frac{1}{2}\frac{\big (\frac{\mu }{\mu +2}\big )^\mu \mu (\mu ^4+4\mu ^3-16\mu -16)}{\mu ^3-4\mu +8}, \\ a_2 = \&-\frac{(\mu ^3-4\mu +8)^2}{\mu (\mu ^4+4\mu ^3-4\mu ^2-16\mu +16)(\mu ^2+2\mu -4)}, \\ a_3 = \&\frac{\mu ^2(\mu ^3-4\mu +8)}{\big (\frac{\mu }{\mu +2}\big )^\mu (\mu ^4+4\mu ^3-4\mu ^2-16\mu +16)(\mu ^2+2\mu -4)}. \end{aligned}$$

Sharma–Sharma method (SS):

$$\begin{aligned} y_i =&\ t_i - \frac{2\mu }{\mu +2}\frac{f(t_i)}{f'(t_i)}, \\ t_{i+1} =&\ t_i - \frac{\mu }{8}\Big [(\mu ^3- 4\mu + 8) - (\mu + 2)^2\Big (\frac{\mu }{\mu +2}\Big )^\mu \frac{f'(t_i)}{f'(y_i)} \\&\times \Big (2(\mu -1)-(\mu +2)\Big (\frac{\mu }{\mu +2}\Big )^\mu \frac{f'(t_i)}{f'(y_i)} \Big )\Big ]\frac{f(t_i)}{f'(t_i)}. \end{aligned}$$

Zhou–Chen–Song method (ZCS):

$$\begin{aligned} y_i =&\ t_i - \frac{2\mu }{\mu +2}\frac{f(t_i)}{f'(t_i)}, \\ t_{i+1} =&\ t_i -\frac{\mu }{8} \Big [\mu ^3\Big (\frac{\mu +2}{\mu }\Big )^{2\mu }\Big (\frac{f'(y_i)}{f'(t_i)}\Big )^2-2\mu ^2(\mu +3)\Big (\frac{\mu +2}{\mu }\Big )^{\mu }\frac{f'(y_i)}{f'(t_i)} \\&+(\mu ^3+ 6\mu ^2+ 8\mu + 8)\Big ]\frac{f(t_i)}{f'(t_i)}. \end{aligned}$$

Soleymani–Babajee–Lotfi method (SBL):

$$\begin{aligned} y_i =&\ t_i - \frac{2\mu }{\mu +2}\frac{f(t_i)}{f'(t_i)}, \\ t_{i+1} =&\ t_i - \frac{f'(y_i) f(t_i)}{p_1 (f'(y_i))^2+p_2 f'(y_i)f'(t_i)+p_3 (f'(t_i))^2}, \end{aligned}$$

where

$$\begin{aligned} p_1 = \&\frac{1}{16}\mu ^{3-\mu }(2+\mu )^\mu , \\ p_2 = \&\frac{8-\mu (2+\mu )(\mu ^2-2)}{8\mu }, \\ p_3 = \&\frac{1}{16}(\mu -2)\mu ^{\mu -1}(2+\mu )^{3-\mu }. \end{aligned}$$

Kansal–Kanwar–Bhatia method (KKB):

$$\begin{aligned} y_i =&\ t_i - \frac{2\mu }{\mu +2}\frac{f(t_i)}{f'(t_i)}, \\ t_{i+1} =&\ t_i - \frac{\mu }{4}f(t_i) \Bigg (1+\frac{\mu ^4 q^{-2\mu }\Big (q^{\mu -1}-\frac{f'(y_i)}{f'(t_i)}\Big )^2(q^\mu -1)}{8(2 q^{\mu }+\mu (q^{\mu }-1))}\Bigg ) \\&\times \Big (\frac{4-2\mu +\mu ^2(q^{-\mu }-1)}{f'(t_i)}-\frac{q^{-\mu }(2q^\mu +\mu (q^\mu -1))^2}{f'(t_i)-f'(y_i)}\Big ), \end{aligned}$$

where \(q=\frac{\mu }{\mu +2}.\)

Liu–Zhou method (LZ):

$$\begin{aligned} y_i =&\ t_i - \mu \frac{f(t_i)}{f'(t_i)}, \\ t_{i+1} =&\ y_i - \mu \Big (u_i+\frac{2\mu }{\mu -1}u_i^2\Big )\frac{f(t_i)}{f'(t_i)}, \end{aligned}$$

where \(u_i=\Big (\frac{f'(y_i)}{f'(t_i)} \Big )^{\frac{1}{\mu -1}}.\)

The methods are tested on the problems shown in Table 1. Numerical computations are performed in Mathematica using multiple-precision arithmetic. The results exhibited in Table 2 contain the following numerical values:

  • The number of iterations (i) required to obtain the desired solution.

  • The estimated errors \(|t_{i+1}-t_i|\) for the first three iterations.

  • The computational order of convergence (COC).

  • The total number of function evaluations (TNFE).

The necessary iterations (i) are calculated using the stopping criterion \(|t_{i+1} - t_i|+|f(t_i)|< 10^{-350}\). The computational order of convergence is calculated by the formula

$$\begin{aligned} \text {COC}= \frac{\ln |(t_{i+2}-\alpha )/(t_{i+1}-\alpha )|}{\ln |(t_{i+1}-\alpha )/(t_{i}-\alpha )|}, \end{aligned}$$

which is used to validate the theoretical order of convergence (see [20]).
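The COC can be computed from three consecutive iterates once \(\alpha \) is known; a small sketch (the test setup, modified Newton on a double root, is our own choice):

```python
import math

# Computational order of convergence from three consecutive iterates,
# given the exact root alpha.
def coc(t0, t1, t2, alpha):
    return (math.log(abs((t2 - alpha) / (t1 - alpha)))
            / math.log(abs((t1 - alpha) / (t0 - alpha))))

# Iterates of the modified Newton method on f(t) = (exp(t) - 1)^2,
# which has a double root at 0; the COC should approach order 2.
f  = lambda t: (math.exp(t) - 1)**2
df = lambda t: 2 * (math.exp(t) - 1) * math.exp(t)
t = 0.5
seq = [t]
for _ in range(3):
    t = t - 2 * f(t) / df(t)
    seq.append(t)
order = coc(seq[1], seq[2], seq[3], alpha=0.0)
```

In the experiments of Table 2 the same formula is evaluated in multiple-precision arithmetic so that the eighth order is visible before the error stagnates at machine level.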

Table 1 Test functions
Table 2 Comparison of numerical results

From the numerical results displayed in Table 2 we observe that the errors generated by the proposed methods M-1, M-2, M-3 and M-4 show greater accuracy in the successive approximations, which confirms the good convergence of the methods. The value 0 for \(|t_{i+1}-t_i|\) indicates that the required accuracy has been achieved. The computational order of convergence (COC) shown in the penultimate column of the table strongly supports the theoretical convergence order. This uniformity in convergence speed counters the belief that higher order iterations do not always preserve their theoretical order of convergence. The computational efficiency of the methods can be assessed from the TNFE entries: the new methods are efficient in general, since their TNFE is less than that of the existing methods in all cases. Similar numerical testing, carried out on many other problems, has confirmed the above conclusions to a large extent.

4 Conclusions

A convergent three-step optimal eighth order scheme has been derived for locating multiple zeros of nonlinear functions. The methodology is based on the optimal fourth order Liu–Zhou iteration combined with a Newton-like iteration. The convergence analysis has established order eight under standard assumptions, and this theoretical order has been verified numerically by computing the computational order of convergence (COC). Comparison with existing efficient methods, including computational efficiency measured by the total number of function evaluations (TNFE) required to achieve a solution of specified accuracy, has confirmed the efficient and robust character of the new methods. It is hoped that this study makes a significant contribution to the solution of nonlinear equations.