
1 Introduction

In the realm of numerical computations, a vast number of problems are expressed using nonlinear equations of the following form:

$$ {\Omega }\left( s \right) = 0, $$
(1)

where \({\Omega }:D \subseteq {\mathbb{R}} \to {\mathbb{R}}\) is a real function defined on an open interval \(D\). Finding numerical solutions of problems of the form (1) has always been a challenging task, yet one of great importance owing to its numerous applications in various branches of science and engineering. Iterative methods are extensively used to obtain approximate solutions of (1) with high accuracy. One such iterative method, widely used for finding the simple roots of (1), is Newton's method [1]:

$$ s_{n + 1} = s_{n} - \frac{{\Omega \left( {s_{n} } \right)}}{{\Omega^{\prime}\left( {s_{n} } \right)}},\quad n = 0,1,2, \ldots $$
(2)

It is a classical optimal one-point without memory method with quadratic convergence. However, owing to the required derivative evaluation and its low convergence order, Newton's method (2) is unsuitable for many practical uses. As a result, various multi-point without memory methods with higher convergence order and higher efficiency have been developed and studied in the literature [2,3,4]. An iterative method is called optimal if it satisfies the unproven Kung–Traub conjecture [5], according to which an iterative without memory method requiring \(k\) function evaluations per iteration is optimal if it has convergence order \(2^{k - 1}\).
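To make the iteration (2) concrete, a minimal Mathematica sketch of Newton's method is given below; the tolerance, the test function and the starting point are illustrative choices of ours, not taken from the paper's examples.

(* Minimal sketch of Newton's method (2); the test function and
   the starting point are illustrative choices only. *)
newton[f_, s0_, tol_ : 10^-30, maxIter_ : 100] :=
  NestWhile[# - f[#]/f'[#] &, s0, Abs[f[#]] > tol &, 1, maxIter];

f[s_] := s^3 - 2;
newton[f, 1.5`50]  (* converges quadratically to 2^(1/3) *)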

In the past decade, multi-point with memory methods that use accelerating parameters for finding simple roots of nonlinear equations have gained much attention among researchers [6,7,8,9,10]. To theoretically determine the efficiency of an iterative method, Ostrowski [11] introduced the efficiency index \({\text{EI}} = p^{\frac{1}{k}}\), where \(p\) is the order of convergence and \(k\) is the number of function evaluations at each iteration. In fact, it was Traub who first introduced a with memory method, known as the Traub–Steffensen method [1], using a single accelerating parameter. The method is given below:

$$ \begin{aligned} w_{n} & = s_{n} + \alpha_{n} \Omega \left( {s_{n} } \right), \alpha_{n} \ne 0 \\ s_{n + 1} & = s_{n} - \frac{{\Omega \left( {s_{n} } \right)}}{{\Omega \left[ {s_{n} ,w_{n} } \right]}}, \\ \end{aligned} $$
(3)

where \(\Omega \left[ {s_{n} ,w_{n} } \right] = \frac{{\Omega \left( {s_{n} } \right) - \Omega \left( {w_{n} } \right)}}{{s_{n} - w_{n} }}\) and \(\alpha_{n}\) is the accelerating parameter; starting from a chosen \(\alpha_{0} \ne 0\), it is updated from the data of the previous iteration as follows:

$$ \alpha_{n} = - \frac{1}{{N_{1}^{\prime} \left( {s_{n} } \right)}};\quad N_{1} \left( s \right) = \Omega \left( {s_{n} } \right) + \left( {s - s_{n} } \right)\Omega \left[ {s_{n} ,w_{n - 1} } \right], \quad n \ge 1. $$

The method (3) has convergence order \(1 + \sqrt 2 \approx 2.414\), which is higher than the quadratic order of Newton's method, and this is achieved without any additional function evaluation. Moreover, unlike Newton's method, the Traub–Steffensen method requires no derivative evaluations and is derivative-free. This has motivated us to develop new multi-point with and without memory iterative methods containing more accelerating parameters, with increased order of convergence and an efficiency index approaching 2.
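A short Mathematica sketch of the with memory iteration (3) follows, assuming the update \(\alpha_{n} = - 1/\Omega \left[ {s_{n} ,w_{n - 1} } \right]\) discussed above; the initial parameter and the test call are illustrative.

(* Sketch of the Traub–Steffensen method with memory (3). *)
divDiff[f_, a_, b_] := (f[a] - f[b])/(a - b);
traubSteffensen[f_, s0_, a0_, iters_] :=
  Module[{s = s0, a = a0, w},
    w = s + a f[s];                    (* w_0 from the chosen a0 *)
    Do[
      s = s - f[s]/divDiff[f, s, w];   (* Traub-Steffensen step *)
      a = -1/divDiff[f, s, w];         (* a_n from s_n and w_{n-1} *)
      w = s + a f[s],
      iters];
    s];

traubSteffensen[Function[s, s^3 - 2], 1.5`50, -0.1, 5]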

In this paper, we introduce new derivative-free four-parametric families of four-point with and without memory iterative methods for computing simple roots of nonlinear equations. The family of with memory methods is obtained by extending the new family of without memory methods through accelerating parameters, without any extra function evaluations; as a result, the convergence order increases from 8 to 15.5156. The accelerating parameters are approximated by Newton's interpolatory polynomials through the best-saved points, yielding a highly efficient family of with memory methods.

The remainder of the paper is structured as follows. Section 2 develops the new derivative-free family of without memory methods and fully investigates its theoretical convergence properties. Section 3 deals with the development and convergence analysis of the new derivative-free family of with memory methods. Section 4 presents the numerical results and compares the proposed families of with and without memory methods with other existing methods on some test functions; some real-world problems are included to confirm the applicability of the proposed families. Finally, Sect. 5 presents some concluding remarks.

2 Optimal Four-Parametric Family of Four-Point Without Memory Methods

Let us first consider the following non-optimal three-point eighth-order scheme composed of Newton steps, which requires first-order derivative evaluations:

$$ \begin{aligned} y_{n} & = s_{n} - \frac{{\Omega \left( {s_{n} } \right)}}{{\Omega^{\prime}\left( {s_{n} } \right)}} \\ z_{n} & = y_{n} - \frac{{\Omega \left( {y_{n} } \right)}}{{\Omega^{\prime}\left( {y_{n} } \right)}} \\ s_{n + 1} & = z_{n} - \frac{{\Omega \left( {z_{n} } \right)}}{{\Omega^{\prime}\left( {z_{n} } \right)}} \\ \end{aligned} $$
(4)

To reduce the number of function evaluations required by (4), we first approximate \(\Omega^{\prime}\left( {y_{n} } \right)\) using the following expression:

$$ \Omega^{\prime}\left( {y_{n} } \right) \approx \frac{{\Omega^{\prime}\left( {s_{n} } \right)}}{{1 + \frac{{\Omega \left( {y_{n} } \right)}}{{\Omega \left( {s_{n} } \right)}} \frac{{\Omega \left( {s_{n} } \right) + \left( {\lambda - 1} \right)\Omega \left( {y_{n} } \right)}}{{\Omega \left( {s_{n} } \right) - \Omega \left( {y_{n} } \right) }}}},\quad \lambda \in {\mathbb{R}} $$
(5)

Then, replacing \(\Omega^{\prime}\left( {s_{n} } \right)\) by the Steffensen-type approximation \(\Omega \left[ {s_{n} ,w_{n} } \right] + \beta \Omega \left( {w_{n} } \right)\) and using (5) in the first two steps of (4), we obtain:

$$ \begin{aligned} y_{n} & = s_{n} - \frac{{\Omega \left( {s_{n} } \right)}}{{\Omega \left[ {s_{n} ,w_{n} } \right] + \beta \Omega \left( {w_{n} } \right)}}, \;w_{n} = s_{n} + \alpha \Omega \left( {s_{n} } \right) \\ z_{n} & = y_{n} - \left( {1 + \frac{{\Omega \left( {y_{n} } \right)}}{{\Omega \left( {s_{n} } \right)}} \frac{{\Omega \left( {s_{n} } \right) + \left( {\lambda - 1} \right)\Omega \left( {y_{n} } \right)}}{{\Omega \left( {s_{n} } \right) - \Omega \left( {y_{n} } \right)}}} \right)\frac{{\Omega \left( {y_{n} } \right)}}{{\Omega \left[ {y_{n} ,w_{n} } \right] + \beta \Omega \left( {w_{n} } \right)}} \\ s_{n + 1} & = z_{n} - \frac{{\Omega (z_{n} )}}{{\Omega^{\prime}(z_{n} )}}, \\ \end{aligned} $$
(6)

where \(\lambda\) is any real parameter and \(\alpha ,\beta \in {\mathbb{R}} - \left\{ 0 \right\}\).

We wish to make the scheme (6) optimal as well as derivative-free. So, we approximate \(\Omega^{\prime}\left( {z_{n} } \right)\) in the last step of (6) by the derivative of the following interpolating polynomial:

$$ Q\left( v \right) = l_{0} + l_{1} \left( {v - z_{n} } \right) + l_{2} \left( {v - z_{n} } \right)^{2} + l_{3} \left( {v - z_{n} } \right)^{3} , $$
(7)

where \(l_{0} , l_{1} , l_{2}\) and \(l_{3}\) are unknowns to be determined from the following interpolation conditions:

$$ Q\left( {s_{n} } \right) = \Omega \left( {s_{n} } \right), \;Q\left( {y_{n} } \right) = \Omega \left( {y_{n} } \right), \;Q\left( {w_{n} } \right) = \Omega \left( {w_{n} } \right), \;Q\left( {z_{n} } \right) = \Omega \left( {z_{n} } \right). $$

Now, solving Eq. (7) under the above conditions and simplifying, we obtain the values of \(l_{0} , l_{1} , l_{2}\) and \(l_{3}\) as follows:

$$ l_{0} = \Omega \left( {z_{n} } \right) $$
(8)
$$ l_{3} = \Omega \left[ {s_{n} ,y_{n} ,z_{n} ,w_{n} } \right] $$
(9)
$$ l_{2} = \Omega \left[ {y_{n} ,z_{n} ,w_{n} } \right] - l_{3} \left( {y_{n} + w_{n} - 2z_{n} } \right) $$
(10)
$$ l_{1} = \Omega \left[ {z_{n} ,w_{n} } \right] - l_{2} \left( {w_{n} - z_{n} } \right) - l_{3} \left( {w_{n} - z_{n} } \right)^{2} , $$
(11)

where \(\Omega \left[ {x,y,z} \right] = \frac{{\Omega \left[ {x,y} \right] - \Omega \left[ {y,z} \right]}}{x - z}\) and \(\Omega \left[ {x,y,z,v} \right] = \frac{{\Omega \left[ {x,y,z} \right] - \Omega \left[ {y,z,v} \right]}}{x - v}\) are second and third divided differences, respectively.

Using (9), (10) and (11), the approximation of \(\Omega^{\prime}\left( {z_{n} } \right)\) from Eq. (7) is obtained as follows:

$$ \Omega^{\prime}\left( {z_{n} } \right) \approx Q^{\prime}\left( {z_{n} } \right) = l_{1} = \Omega \left[ {z_{n} ,w_{n} } \right] - l_{2} \left( {w_{n} - z_{n} } \right) - l_{3} \left( {w_{n} - z_{n} } \right)^{2} . $$
(12)

Now, substituting (12) into (6) and introducing two new parameters \(\gamma , \delta \in {\mathbb{R}} - \left\{ 0 \right\}\) in the last two steps, we obtain a new optimal derivative-free family of four-point without memory methods, presented below, which we denote by FM8.

$$ \begin{aligned} w_{n} & = s_{n} + \alpha \Omega \left( {s_{n} } \right) \\ y_{n} & = s_{n} - \frac{{\Omega \left( {s_{n} } \right)}}{{\Omega \left[ {s_{n} ,w_{n} } \right] + \beta \Omega \left( {w_{n} } \right)}} \\ z_{n} & = y_{n} - \left( {1 + \frac{{\Omega \left( {y_{n} } \right)}}{{\Omega \left( {s_{n} } \right)}}\frac{{\Omega \left( {s_{n} } \right) + \left( {\lambda - 1} \right)\Omega \left( {y_{n} } \right)}}{{\Omega \left( {s_{n} } \right) - \Omega \left( {y_{n} } \right)}}} \right) \times \\ & \quad \frac{{\Omega \left( {y_{n} } \right)}}{{\Omega \left[ {y_{n} ,w_{n} } \right] + \beta \Omega \left( {w_{n} } \right) + \gamma \left( {y_{n} - s_{n} } \right)\left( {y_{n} - w_{n} } \right)}} \\ s_{n + 1} & = z_{n} - \frac{{\Omega \left( {z_{n} } \right)}}{{Q^{\prime}\left( {z_{n} } \right) + \delta \left( {z_{n} - s_{n} } \right)\left( {z_{n} - y_{n} } \right)\left( {z_{n} - w_{n} } \right)}} \\ \end{aligned} $$
(13)
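A direct Mathematica transcription of one FM8 iteration is sketched below; it reuses dd1 and qPrime from the preceding snippet, and the argument names a, b, g, d, l stand for \(\alpha , \beta , \gamma , \delta , \lambda\).

(* One full iteration of FM8 (13); assumes dd1 and qPrime as above. *)
fm8Step[f_, s_, {a_, b_, g_, d_, l_}] :=
  Module[{w, y, t, z},
    w = s + a f[s];
    y = s - f[s]/(dd1[f, s, w] + b f[w]);
    t = 1 + (f[y]/f[s]) (f[s] + (l - 1) f[y])/(f[s] - f[y]);
    z = y - t f[y]/(dd1[f, y, w] + b f[w] + g (y - s) (y - w));
    z - f[z]/(qPrime[f, s, y, z, w] + d (z - s) (z - y) (z - w))
  ];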

It is evident that the new family of without memory methods (13) consumes only four function evaluations per full iteration and is completely derivative-free. Moreover, it attains the optimal eighth order of convergence, with efficiency index \(8^{\frac{1}{4}} \approx 1.682\). We now present the following theorem, which establishes the convergence of (13).

Theorem 1

Let \(\xi \in D\) be a simple root of a sufficiently differentiable real function \(\Omega :D \subseteq {\mathbb{R}} \to {\mathbb{R}}\), where \(D\) is an open interval. If an initial guess \(s_{0}\) is close enough to \(\xi\), then the family of proposed methods defined by (13) has eighth-order convergence for any \(\lambda \in {\mathbb{R}}\) and \(\alpha , \beta , \gamma , \delta \in {\mathbb{R}} - \left\{ 0 \right\}\), with the error equation

$$ \begin{aligned} \varepsilon_{n + 1} & = \frac{1}{{\Omega^{\prime}\left( \xi \right)^{2} }}\left( {1 + \Omega^{\prime}\left( \xi \right)\alpha } \right)^{4} \left( {\beta + d_{2} } \right)^{2} \\ & \quad \times \left( { - \gamma + \Omega^{\prime}\left( \xi \right)\left( {1 + \Omega^{\prime}\left( \xi \right)\alpha } \right)\beta^{2} \left( { - 1 + \lambda } \right)} \right. \\ & \quad + \Omega^{\prime}\left( \xi \right)\left( {2\beta \left( { - 2 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2} } \right. \\ & \quad \left. {\left. { + \left( { - 3 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2}^{2} + d_{3} } \right)} \right) \\ & \quad \times \left( {\delta + d_{2} \left( { - \gamma + \Omega^{\prime}\left( \xi \right)\left( {1 + \Omega^{\prime}\left( \xi \right)\alpha } \right)\beta^{2} \left( { - 1 + \lambda } \right)} \right.} \right. \\ & \quad + \Omega^{\prime}\left( \xi \right)\left( {2\beta \left( { - 2 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2} } \right. \\ & \quad \left. {\left. {\left. { + \left( { - 3 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2}^{2} + d_{3} } \right)} \right) - \Omega^{\prime}\left( \xi \right)d_{4} } \right)\varepsilon_{n}^{8} + O\left( {\varepsilon_{n}^{9} } \right) \\ \end{aligned} $$
(14)

where \(d_{j} = \frac{1}{j!}\frac{{\Omega^{\left( j \right)} \left( \xi \right)}}{{\Omega^{\prime}\left( \xi \right)}},\;j = 2,3, \ldots\), and \(\varepsilon_{n} = s_{n} - \xi\) is the error at the \(n\)th iteration.

Proof

We construct and apply the following self-explanatory Mathematica code to establish the optimal eighth order of convergence of (13).

$$ \Omega \left[ {\varepsilon_{ - } } \right]: = \Omega^{\prime}\left( \xi \right)\left( {\varepsilon + d_{2} \varepsilon^{2} + d_{3} \varepsilon^{3} + d_{4} \varepsilon^{4} + d_{5} \varepsilon^{5} + d_{6} \varepsilon^{6} + d_{7} \varepsilon^{7} + d_{8} \varepsilon^{8} } \right); $$
$$ \varepsilon_{w} = \varepsilon + \alpha \Omega \left[ \varepsilon \right]\;\;\left( {*\varepsilon_{w} = w - \xi *} \right) $$
$$ {\text{Out}}[1]:(1 + \Omega^{\prime}(\xi )\alpha )\varepsilon + O[\varepsilon ]^{2} $$
$$ \Omega \left[ {x_{ - } ,y_{ - } } \right] : = \frac{\Omega \left[ x \right] - \Omega \left[ y \right]}{{x - y}}; $$
$$ \Omega \left[ {x_{ - } ,y_{ - } ,z_{ - } } \right]: = \frac{{\Omega \left[ {x,y} \right] - \Omega \left[ {y,z} \right]}}{x - z}; $$
$$ \varepsilon_{y} = {\text{Series}}\left[ {\varepsilon - \frac{\Omega \left[ \varepsilon \right]}{{\Omega \left[ {\varepsilon ,\varepsilon_{w} } \right] + \beta \Omega \left[ {\varepsilon_{w} } \right]}},\left\{ {\varepsilon ,0, 8} \right\}} \right]{\text{//Simplify}} $$
$$ {\text{Out}}\left[ 2 \right]:\left( {1 + \Omega^{\prime}\left( \xi \right) \alpha } \right)\left( {\beta + d_{2} } \right)\varepsilon^{2} + O\left[ \varepsilon \right]^{3} $$
$$ \begin{aligned} \varepsilon_{z} & = \varepsilon_{y} - \left( {1 + \frac{{\Omega \left[ {\varepsilon_{y} } \right]}}{\Omega \left[ \varepsilon \right]} \frac{{\Omega \left[ \varepsilon \right] + \left( {\lambda - 1} \right)\Omega \left[ {\varepsilon_{y} } \right]}}{{\Omega \left[ \varepsilon \right] - \Omega \left[ {\varepsilon_{y} } \right]}}} \right) \times \\ & \quad \frac{{\Omega \left[ {\varepsilon_{y} } \right]}}{{\Omega \left[ {\varepsilon_{y} ,\varepsilon_{w} } \right] + \beta \Omega \left[ {\varepsilon_{w} } \right] + \gamma \left( {\varepsilon_{y} - \varepsilon } \right)\left( {\varepsilon_{y} - \varepsilon_{w} } \right)}}{\text{//Simplify}} \\ \end{aligned} $$
$$ \begin{aligned} & {\text{Out}}\left[ 3 \right]:\frac{ - 1}{{\Omega^{\prime}\left( \xi \right)}} \left( {1 + \Omega^{\prime}\left( \xi \right)\alpha } \right)^{2} \left( {\beta + d_{2} } \right) \\ & \quad \left( { - \Omega^{\prime}\left( \xi \right)\beta^{2} - \Omega^{\prime}\left( \xi \right)^{2} \alpha \beta^{2} - \gamma + \Omega^{\prime}\left( \xi \right)\beta^{2} \lambda } \right. \\ & \quad + \Omega^{\prime}\left( \xi \right)^{2} \alpha \beta^{2} \lambda + 2\Omega^{\prime}\left( \xi \right)\beta \left( { - 2 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2} \\ & \quad \left. { + \Omega^{\prime}\left( \xi \right)\left( { - 3 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2}^{2} + \Omega^{\prime}\left( \xi \right)d_{3} } \right)\varepsilon^{4} + O\left[ \varepsilon \right]^{5} \\ \end{aligned} $$
$$ l_{3} = \Omega \left[ {\varepsilon ,\varepsilon_{y} ,\varepsilon_{z} ,\varepsilon_{w} } \right]; $$
$$ l_{2} = {\text{Series}}\left[ {\Omega \left[ {\varepsilon_{y} ,\varepsilon_{z} ,\varepsilon_{w} } \right] - l_{3} \left( {\varepsilon_{y} + \varepsilon_{w} - 2\varepsilon_{z} } \right),\left\{ {\varepsilon ,0, 8} \right\}} \right]{\text{//Simplify}}; $$
$$ Q^{\prime}\left( {\varepsilon_{z} } \right) = {\text{Series}}\left[ {\Omega \left[ {\varepsilon_{z} ,\varepsilon_{w} } \right] - l_{2} \left( {\varepsilon_{w} - \varepsilon_{z} } \right) - l_{3} \left( {\varepsilon_{w} - \varepsilon_{z} } \right)^{2} ,\left\{ {\varepsilon ,0, 8} \right\}} \right]{\text{//Simplify}}; $$
$$ \varepsilon_{n + 1} = {\text{Series}}\left[ {\varepsilon_{z} - \frac{{\Omega \left[ {\varepsilon_{z} } \right]}}{{Q^{\prime}\left( {\varepsilon_{z} } \right) + \delta \left( {\varepsilon_{z} - \varepsilon } \right)\left( {\varepsilon_{z} - \varepsilon_{y} } \right)\left( {\varepsilon_{z} - \varepsilon_{w} } \right)}},\left\{ {\varepsilon ,0, 8} \right\}} \right]{\text{//FullSimplify}} $$
$$ \begin{aligned} & {\text{Out}}\left[ 4 \right]:\frac{1}{{\Omega^{\prime}\left( \xi \right)^{2} }}\left( {1 + \Omega^{\prime}\left( \xi \right)\alpha } \right)^{4} \left( {\beta + d_{2} } \right)^{2} \\ & \quad \left( { - \gamma + \Omega^{\prime}\left( \xi \right)\left( {1 + \Omega^{\prime}\left( \xi \right)\alpha } \right)\beta^{2} \left( { - 1 + \lambda } \right)} \right. \\ & \quad + \Omega^{\prime}\left( \xi \right)\left( {2\beta \left( { - 2 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2} } \right. \\ & \quad \left. {\left. { + \left( { - 3 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2}^{2} + d_{3} } \right)} \right) \\ & \quad \left( {\delta + d_{2} \left( { - \gamma + \Omega^{\prime}\left( \xi \right)\left( {1 + \Omega^{\prime}\left( \xi \right)\alpha } \right)\beta^{2} \left( { - 1 + \lambda } \right)} \right.} \right. \\ & \quad + \Omega^{\prime}\left( \xi \right)\left( {2\beta \left( { - 2 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2} } \right. \\ & \quad \left. {\left. {\left. { + \left( { - 3 + \Omega^{\prime}\left( \xi \right)\alpha \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2}^{2} + d_{3} } \right)} \right) - \Omega^{\prime}\left( \xi \right)d_{4} } \right)\varepsilon^{8} + O\left[ \varepsilon \right]^{9} \\ \end{aligned} $$

which shows that the family (13) is of optimal eighth order. This completes the proof.

3 Four-Parametric Family of Four-Point with Memory Methods

The error Eq. (14) shows that the convergence order of the method (13) can be increased from 8 to 16 by choosing \(\alpha = - \frac{1}{{\Omega^{\prime}\left( \xi \right)}}\), \(\beta = - d_{2}\), \(\gamma = \Omega^{\prime}\left( \xi \right)d_{3}\) and \(\delta = \Omega^{\prime}\left( \xi \right)d_{4}\), since these choices annihilate the asymptotic error constant. However, the exact value of \(\xi\) is not available to us. We therefore use \(\alpha = \alpha_{n}\), \(\beta = \beta_{n}\), \(\gamma = \gamma_{n}\) and \(\delta = \delta_{n}\), where the accelerating parameters \(\alpha_{n}\), \(\beta_{n}\), \(\gamma_{n}\) and \(\delta_{n}\) are computed using the information available from the current and the previous iterations.

Now, to approximate the accelerating parameters \(\alpha_{n} , \beta_{n} , \gamma_{n}\) and \(\delta_{n}\), we use interpolation as follows.

$$ \alpha_{n} = - \frac{1}{{{\mathbb{N}}^{\prime}_{4} \left( {s_{n} } \right)}},\;\beta_{n} = - \frac{{{\mathbb{N}}^{\prime\prime}_{5} \left( {w_{n} } \right)}}{{2{\mathbb{N}}^{\prime}_{5} \left( {w_{n} } \right)}},\;\gamma_{n} = \frac{{{\mathbb{N}}^{\prime\prime\prime}_{6} \left( {y_{n} } \right)}}{6},\;\delta_{n} = \frac{{{\mathbb{N}}_{7}^{iv} \left( {z_{n} } \right)}}{24},\;\,n = 0, 1, 2, \ldots $$
(15)

where \({\mathbb{N}}_{j} \left( t \right), j = 4, 5, 6, 7\), is the Newton's interpolatory polynomial of degree \(j\) set through the \(j + 1\) best-saved points taken, in order of availability, from \(s_{n} , w_{n} , y_{n} , z_{n}\) of the current iteration and \(s_{n - 1} , y_{n - 1} , w_{n - 1} , z_{n - 1}\) of the previous one.

Now, by using (15) in the method (13), we obtain the following new derivative-free family of with memory methods. We shall denote it by FWM8.

For a given \(s_{0}\) and initial parameters \(\alpha_{0} , \beta_{0} , \gamma_{0} , \delta_{0}\), we have \(w_{0} = s_{0} + \alpha_{0} {\Omega }\left( {s_{0} } \right)\); the parameter updates in the first line of (16) apply for \(n \ge 1\), the initial values being used at \(n = 0\). Then,

$$ \begin{aligned} \alpha _{n} & = - \frac{1}{{\mathbb{N}^{\prime}_{4} \left( {s_{n} } \right)}}, \beta _{n} = - \frac{{\mathbb{N}^{\prime\prime}_{5} \left( {w_{n} } \right)}}{{2\mathbb{N}^{\prime}_{5} \left( {w_{n} } \right)}}, \gamma _{n} = \frac{{\mathbb{N}^{\prime\prime\prime}_{6} \left( {y_{n} } \right)}}{6}, \delta _{n} = \frac{{\mathbb{N}_{7}^{{iv}} \left( {z_{n} } \right)}}{{24}} \\ w_{n} & = s_{n} + \alpha _{n} \Omega \left( {s_{n} } \right) \\ y_{n} & = s_{n} - \frac{{\Omega \left( {s_{n} } \right)}}{{\Omega \left[ {s_{n} ,w_{n} } \right] + \beta _{n} \Omega \left( {w_{n} } \right)}} \\ z_{n} & = y_{n} - \left( {1 + \frac{{\Omega \left( {y_{n} } \right)}}{{\Omega \left( {s_{n} } \right)}}\frac{{\Omega \left( {s_{n} } \right) + \left( {\lambda - 1} \right)\Omega \left( {y_{n} } \right)}}{{\Omega \left( {s_{n} } \right) - \Omega \left( {y_{n} } \right)}}} \right) \times \\ & \quad \frac{{\Omega \left( {y_{n} } \right)}}{{\Omega \left[ {y_{n} ,w_{n} } \right] + \beta _{n} ~\Omega \left( {w_{n} } \right) + \gamma _{n} \left( {y_{n} - s_{n} } \right)\left( {y_{n} - w_{n} } \right)}} \\ s_{{n + 1}} & = z_{n} - \frac{{\Omega \left( {z_{n} } \right)}}{{Q^{\prime}\left( {z_{n} } \right) + \delta _{n} \left( {z_{n} - s_{n} } \right)\left( {z_{n} - y_{n} } \right)\left( {z_{n} - w_{n} } \right)}} \\ \end{aligned} $$
(16)
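In an implementation, the polynomials \({\mathbb{N}}_{j}\) in (15) can be produced with Mathematica's built-in InterpolatingPolynomial, which returns the Newton form. The sketch below is one possible realization under our own naming; prev holds the four points saved from the previous iteration, and each parameter uses only the nodes available at its step of the iteration.

(* Sketch of the parameter updates (15) via Newton interpolation. *)
newtonPoly[f_, pts_, t_] := InterpolatingPolynomial[{#, f[#]} & /@ pts, t];
alphaN[f_, prev_, s_] := Module[{t},
  -1/(D[newtonPoly[f, Append[prev, s], t], t] /. t -> s)];
betaN[f_, prev_, s_, w_] := Module[{t, p},
  p = newtonPoly[f, Join[prev, {s, w}], t];
  -(D[p, {t, 2}] /. t -> w)/(2 (D[p, t] /. t -> w))];
gammaN[f_, prev_, s_, w_, y_] := Module[{t, p},
  p = newtonPoly[f, Join[prev, {s, w, y}], t];
  (D[p, {t, 3}] /. t -> y)/6];
deltaN[f_, prev_, s_, w_, y_, z_] := Module[{t, p},
  p = newtonPoly[f, Join[prev, {s, w, y, z}], t];
  (D[p, {t, 4}] /. t -> z)/24];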

Lemma 1

If \(\alpha_{n} = - \frac{1}{{{\mathbb{N}}^{\prime}_{4} \left( {s_{n} } \right)}}\) , \(\beta_{n} = - \frac{{{\mathbb{N}}^{\prime\prime}_{5} \left( {w_{n} } \right)}}{{2{\mathbb{N}}^{\prime}_{5} \left( {w_{n} } \right)}}\) , \(\gamma_{n} = \frac{{{\mathbb{N}}^{\prime\prime\prime}_{6} \left( {y_{n} } \right)}}{6}\) and \(\delta_{n} = \frac{{{\mathbb{N}}_{7}^{iv} \left( {z_{n} } \right)}}{24}, n = 0, 1, 2, \ldots\) , then the following estimates

$$ 1 + \alpha_{n} \Omega^{\prime}\left( \xi \right) \sim \varepsilon_{n - 1} \varepsilon_{n - 1,y} \varepsilon_{n - 1,w} \varepsilon_{n - 1,z} $$
(17)
$$ \beta_{n} + d_{2} \sim \varepsilon_{n - 1} \varepsilon_{n - 1,y} \varepsilon_{n - 1,w} \varepsilon_{n - 1,z} $$
(18)
$$ K_{n} \sim \varepsilon_{n - 1} \varepsilon_{n - 1,y} \varepsilon_{n - 1,w} \varepsilon_{n - 1,z} $$
(19)
$$ L_{n} \sim \varepsilon_{n - 1} \varepsilon_{n - 1,y} \varepsilon_{n - 1,w} \varepsilon_{n - 1,z} $$
(20)

hold, where

$$ \begin{aligned} K_{n} & = - \gamma_{n} + \Omega^{\prime}\left( \xi \right)\left( {1 + \Omega^{\prime}\left( \xi \right)\alpha_{n} } \right)\beta_{n}^{2} \left( { - 1 + \lambda } \right) \\ & \quad + \Omega^{\prime}\left( \xi \right)\left( {2\beta_{n} \left( { - 2 + \Omega^{\prime}\left( \xi \right)\alpha_{n} \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2} } \right. \\ & \quad \left. { + \left( { - 3 + \Omega^{\prime}\left( \xi \right)\alpha_{n} \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2}^{2} + d_{3} } \right), \\ \end{aligned} $$
$$ \begin{aligned} L_{n} & = \delta_{n} + d_{2} \left( { - \gamma_{n} + \Omega^{\prime}\left( \xi \right)\left( {1 + \Omega^{\prime}\left( \xi \right)\alpha_{n} } \right)\beta_{n}^{2} \left( { - 1 + \lambda } \right)} \right. \\ & \quad + \Omega^{\prime}\left( \xi \right)\left( {2\beta_{n} \left( { - 2 + \Omega^{\prime}\left( \xi \right)\alpha_{n} \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2} } \right. \\ & \quad \left. {\left. { + \left( { - 3 + \Omega^{\prime}\left( \xi \right)\alpha_{n} \left( { - 1 + \lambda } \right) + \lambda } \right)d_{2}^{2} + d_{3} } \right)} \right) - \Omega^{\prime}\left( \xi \right)d_{4} , \\ \end{aligned} $$
$$ \varepsilon_{n} = s_{n} - \xi , \,\varepsilon_{n,y} = y_{n} - \xi , \,\varepsilon_{n,w} = w_{n} - \xi , \,\varepsilon_{n,z} = z_{n} - \xi . $$

Proof

As for the proof, see Lemma 1 of [12].

Now, we present the following theorem for analyzing the R-order of convergence [13] of the derivative-free four-parametric family of four-point with memory methods (16).

Theorem 2

If an initial guess \(s_{0}\) is sufficiently close to a root \(\xi\) of \(\Omega \left( s \right) = 0\) and the parameters \(\alpha_{n}\), \(\beta_{n}\), \(\gamma_{n}\) and \(\delta_{n}\) are calculated by the expressions in (15), then the R-order of convergence of the methods (16) is at least \(15.5156\).

Proof

Let the sequence of approximations \(\left\{ {s_{n} } \right\}\) produced by (16) converge to the root \(\xi\) with order \(r\). Then, we can write

$$ \varepsilon_{n + 1} \sim \varepsilon_{n}^{r} ,\;\;{\text{where}}\;\varepsilon_{n} = s_{n} - \xi. $$
(21)

Then,

$$ \varepsilon_{n} \sim \varepsilon_{n - 1}^{r} $$
(22)

Thus,

$$ \varepsilon_{n + 1} \sim \varepsilon_{n}^{r} = \left( {\varepsilon_{n - 1}^{r} } \right)^{r} = \varepsilon_{n - 1}^{{r^{2} }} $$
(23)

Let the iterative sequences \(\left\{ {w_{n} } \right\}\), \(\left\{ {y_{n} } \right\}\) and \(\left\{ {z_{n} } \right\}\) have orders \(r_{1} , r_{2}\) and \(r_{3}\), respectively. Then, using (21) and (22) gives

$$ \varepsilon_{n,w} \sim \varepsilon_{n}^{{r_{1} }} = \left( {\varepsilon_{n - 1}^{r} } \right)^{{r_{1} }} = \varepsilon_{n - 1}^{{rr_{1} }} $$
(24)
$$ \varepsilon_{n,y} \sim \varepsilon_{n}^{{r_{2} }} = \left( {\varepsilon_{n - 1}^{r} } \right)^{{r_{2} }} = \varepsilon_{n - 1}^{{rr_{2} }} $$
(25)
$$ \varepsilon_{n,z} \sim \varepsilon_{n}^{{r_{3} }} = \left( {\varepsilon_{n - 1}^{r} } \right)^{{r_{3} }} = \varepsilon_{n - 1}^{{rr_{3} }} $$
(26)

From Theorem 1, we have

$$ \varepsilon_{n,w} \sim \left( {1 + \alpha_{n} \Omega^{\prime}\left( \xi \right)} \right)\varepsilon_{n} $$
(27)
$$ \varepsilon_{n,y} \sim \left( {1 + \alpha_{n} \Omega^{\prime}\left( \xi \right)} \right)\left( {\beta_{n} + d_{2} } \right)\varepsilon_{n}^{2} $$
(28)
$$ \varepsilon_{n,z} \sim \left( {1 + \alpha_{n} \Omega^{\prime}\left( \xi \right)} \right)^{2} \left( {\beta_{n} + d_{2} } \right)K_{n} \varepsilon_{n}^{4} $$
(29)
$$ \varepsilon_{n + 1} \sim \left( {1 + \alpha_{n} \Omega^{\prime}\left( \xi \right)} \right)^{4} \left( {\beta_{n} + d_{2} } \right)^{2} K_{n} L_{n} \varepsilon_{n}^{8} $$
(30)

Using the above Lemma 1 and (27)–(30), we get

$$ \varepsilon_{n,w} \sim \left( {1 + \alpha_{n} \Omega^{\prime}\left( \xi \right)} \right)\varepsilon_{n} = \varepsilon_{n - 1}^{{r + r_{1} + r_{2} + r_{3} + 1}} $$
(31)
$$ \varepsilon_{n,y} \sim \left( {1 + \alpha_{n} \Omega^{\prime}\left( \xi \right)} \right)\left( {\beta_{n} + d_{2} } \right)\varepsilon_{n}^{2} = \varepsilon_{n - 1}^{{2r + 2r_{1} + 2r_{2} + 2r_{3} + 2}} $$
(32)
$$ \varepsilon_{n,z} \sim \left( {1 + \alpha_{n} \Omega^{\prime}\left( \xi \right)} \right)^{2} \left( {\beta_{n} + d_{2} } \right)K_{n} \varepsilon_{n}^{4} = \varepsilon_{n - 1}^{{4r + 4r_{1} + 4r_{2} + 4r_{3} + 4}} $$
(33)
$$ \varepsilon_{n + 1} \sim \left( {1 + \alpha_{n} \Omega^{\prime}\left( \xi \right)} \right)^{4} \left( {\beta_{n} + d_{2} } \right)^{2} K_{n} L_{n} \varepsilon_{n}^{8} = \varepsilon_{n - 1}^{{8r + 8r_{1} + 8r_{2} + 8r_{3} + 8}} $$
(34)

Now, comparing the corresponding powers of \(\varepsilon_{n - 1}\) on the right sides of (24) and (31), (25) and (32), (26) and (33), (23) and (34), we get

$$ \begin{aligned} & rr_{1} - r - r_{1} - r_{2} - r_{3} - 1 = 0 \\ & rr_{2} - 2r - 2r_{1} - 2r_{2} - 2r_{3} - 2 = 0 \\ & rr_{3} - 4r - 4r_{1} - 4r_{2} - 4r_{3} - 4 = 0 \\ & r^{2} - 8r - 8r_{1} - 8r_{2} - 8r_{3} - 8 = 0 \\ \end{aligned} $$
(35)

This system has the non-trivial solution \(r_{1} = 1.9394\), \(r_{2} = 3.8789\), \(r_{3} = 7.7578\) and \(r = \frac{{15 + \sqrt {257} }}{2} \approx 15.5156\). Hence, the R-order of convergence of the proposed family of methods (16) is at least \(15.5156\). The proof is complete.
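The asserted solution of (35) is easy to check: eliminating \(r_{1} , r_{2} , r_{3}\) reduces the system to \(r^{2} - 15r - 8 = 0\), whose positive root is \(\left( {15 + \sqrt {257} } \right)/2\). A short Mathematica verification, given here only as a sketch:

(* Verifying the non-trivial solution of the system (35). *)
NSolve[{r r1 == r + r1 + r2 + r3 + 1,
        r r2 == 2 (r + r1 + r2 + r3 + 1),
        r r3 == 4 (r + r1 + r2 + r3 + 1),
        r^2 == 8 (r + r1 + r2 + r3 + 1),
        r > 1}, {r, r1, r2, r3}, Reals]
(* {{r -> 15.5156, r1 -> 1.93945, r2 -> 3.8789, r3 -> 7.75781}} *)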

4 Numerical Results

In this section, we examine the performance and the computational efficiency of the newly developed with and without memory methods of Sects. 2 and 3 and compare them with some methods of similar nature available in the literature. In particular, we consider the following four-parametric methods for comparison: LAM8 ((3.31) in [8]), ZM8 (ZR1 from [9]) and ACM8 (M1 from [10]). All numerical tests are executed in Mathematica 12.2. Throughout the computations, we choose the same parameter values \(\alpha_{0} = \beta_{0} = \gamma_{0} = \delta_{0} = - 1\) and \(\lambda = 2\) for all test functions to start the initial iteration; the same values are used for the corresponding parameters of all compared methods to ensure a fair comparison. The numerical test functions, comprising standard academic examples and real-life chemical engineering problems, along with their simple roots \(\left( \xi \right)\) and initial guesses \(\left( {s_{0} } \right)\), are presented below.

Example 1

A standard academic test function given by:

$$ \Omega_{1} \left( s \right) = e^{{ - s^{2} }} \left( {1 + s^{3} + s^{6} } \right)\left( {s - 2} \right). $$
(36)

It has a simple root \(\xi = 2\). We start with the initial guess \(s_{0} = 2.3\).

Example 2

A standard academic test function given by:

$$ {\Omega }_{2} \left( s \right) = \log \left( {s^{2} + s + 2} \right) - s + 1. $$
(37)

It has a simple root \(\xi \approx 4.1525907367571583\). We start with the initial guess \(s_{0} = 4.5\).

Example 3

A standard academic test function given by

$$ {\Omega }_{3} \left( s \right) = \sin^{2} s + s. $$
(38)

It has a simple root \(\xi = 0\). We start with the initial guess \(s_{0} = 0.6\).

Example 4

The azeotropic point of a binary solution is given by the following nonlinear equation (for details, see [14]):

$$ \Omega_{4} \left( s \right) = \frac{{FG\left( {G\left( {1 - s} \right)^{2} - Fs^{2} } \right)}}{{\left( {s\left( {F - G} \right) + G} \right)^{2} }} + 0.14845, $$
(39)

where \(F = 0.38969\) and \(G = 0.55954\).

It has a simple root \(\xi \approx 0.69147373574714142\). We start with the initial guess \(s_{0} = 1.1\).
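As a usage illustration only, the following Mathematica fragment applies the fm8Step sketch from Sect. 2 to \(\Omega_{4}\) with the parameter values adopted in this section; it is not the code used to produce Tables 1 and 2.

(* Illustrative driver for Example 4 with fm8Step from Sect. 2. *)
FF = 0.38969`50; GG = 0.55954`50;
om4[s_] := (FF GG (GG (1 - s)^2 - FF s^2))/(s (FF - GG) + GG)^2 + 0.14845`50;
s4 = 1.1`50;
Do[s4 = fm8Step[om4, s4, {-1, -1, -1, -1, 2}], {3}];
s4  (* should approach the simple root 0.69147373574714142 *)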

In Tables 1 and 2, we display the absolute residual errors \(\left| { {\Omega }\left( {s_{n} } \right)} \right|\) at the first three iterations for each of the compared methods. We also include the computational order of convergence (COC) of each method, computed by the following formula [15]:

$$ {\text{COC}} = \frac{{\log \left| { \Omega \left( {s_{n} } \right)/\Omega \left( {s_{n - 1} } \right)} \right|}}{{\log \left| { \Omega \left( {s_{n - 1} } \right)/\Omega \left( {s_{n - 2} } \right)} \right|}}. $$
(40)
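For completeness, (40) is immediate to compute from the stored residuals; the helper below is a sketch with our own argument names, where r0, r1, r2 stand for \(\left| {\Omega \left( {s_{n} } \right)} \right|\), \(\left| {\Omega \left( {s_{n - 1} } \right)} \right|\) and \(\left| {\Omega \left( {s_{n - 2} } \right)} \right|\).

(* COC per (40) from the residuals of three consecutive iterates. *)
coc[r0_, r1_, r2_] := Log[Abs[r0/r1]]/Log[Abs[r1/r2]];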
Table 1 Comparison of the without memory methods on the test functions
Table 2 Comparison of the with memory methods on the test functions

The results in Tables 1 and 2 affirm the robust performance and high efficiency of the proposed with and without memory methods, in agreement with the theoretical results. The proposed methods achieve better accuracy than the other methods, in the sense of smaller residual errors after three iterations. Further, the COC values support the theoretical convergence orders of the new with and without memory methods on the test functions.

5 Concluding Remarks

We have presented in this paper new derivative-free families of with and without memory methods for finding the solutions of nonlinear equations. The use of four accelerating parameters in the with memory methods has enabled us to increase the convergence order of the without memory methods from 8 to 15.5156, giving a very high efficiency index of \(15.5156^{\frac{1}{4}} \approx 1.9847\) without extra function evaluations. The numerical results confirm the good performance, validity and applicability of the proposed methods: compared with the existing methods considered, they produce smaller residual errors after three iterations while converging to the required simple roots.