Introduction

This paper is devoted to the problem of approximating a locally unique solution \(x^*\) of equation

$$\begin{aligned} F(x)=0, \end{aligned}$$
(1)

where \(F:D\subseteq {\mathbb {R}}\rightarrow {\mathbb {R}}\) is a differentiable nonlinear function and D is a convex subset of \({\mathbb {R}}.\) Newton-like methods are widely used for finding solutions of (1). These methods are studied on the basis of semi-local and local convergence [1–34].

Methods such as Euler’s, Halley’s, super-Halley’s and Chebyshev’s [1–34] require the evaluation of the second derivative \(F''\) at each step, which in general is very expensive. To avoid this expensive computation, many authors have used higher order multi-point methods [1–34].

Newton’s method is undoubtedly the most popular method for approximating a locally unique solution \(x^*,\) provided that the initial point is close enough to the solution. In order to obtain a higher order of convergence, Newton-like methods such as the Potra–Pták, Chebyshev, Cauchy, Halley and Ostrowski methods have been studied. The number of function evaluations per step increases with the order of convergence. In the scalar case the efficiency index [29, 34] \(\textit{EI}=p^{\frac{1}{m}}\) provides a measure of balance, where p is the order of the method and m is the number of function evaluations.

It is well known that, according to the Kung–Traub conjecture, the convergence order of any multi-point method without memory cannot exceed the upper bound \(2^{m-1}\) [29, 34] (called the optimal order). Hence the optimal order for a method with three function evaluations per step is 4. The corresponding efficiency index is \(\textit{EI}= 4^{\frac{1}{3}}=1.58740\ldots ,\) which is better than that of Newton’s method, \(\textit{EI}= 2^{\frac{1}{2}}=1.414\ldots .\) Therefore, the study of new optimal methods of order four is important.
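For a quick check of these numbers, the index \(p^{\frac{1}{m}}\) can be evaluated directly (Python; a purely illustrative helper, not part of the analysis):

```python
def efficiency_index(p, m):
    """Efficiency index EI = p**(1/m) for a method of order p
    using m function evaluations per step."""
    return p ** (1.0 / m)

ei_optimal4 = efficiency_index(4, 3)   # ~1.5874
ei_newton = efficiency_index(2, 2)     # ~1.4142
```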

We study the local convergence analysis of the three-step King-like method with a parameter, defined for each \(n=0,1,2,\ldots \) by

$$\begin{aligned} y_n= & {} x_n-\frac{F(x_n)}{F'(x_n)}\nonumber \\ z_n= & {} y_n-\frac{(F(x_n)+\beta F(y_n))F'(x_n)^{-1}F(y_n)}{F(x_n)-(\beta -2)F(y_n)}\nonumber \\ x_{n+1}= & {} z_n-\frac{H(\mu _n)F(z_n)}{F[z_n, x_n]+F[z_n, y_n, x_n](z_n-y_n)}, \end{aligned}$$
(2)

where \(x_0\in D \) is an initial point, \(\beta \in {\mathbb {R}},\) \(\mu _n=\frac{F(y_n)}{F(x_n)},\) \(H:{\mathbb {R}}\rightarrow {\mathbb {R}}\) and F[., .], F[., ., .] are divided differences of order one and order two, respectively, for function F [3, 4, 29, 30, 34]. Method (2) uses the evaluations \(F(x_n), F(y_n), F(z_n)\) and \(F'(x_n)\) per step. According to the Kung–Traub conjecture [29, 34] the optimal convergence order is therefore \(2^{4-1}=8.\) Ren et al. [7] showed the eighth order of convergence of method (2) using Taylor expansions and hypotheses reaching up to the sixth derivative of function F. These hypotheses limit the applicability of method (2).
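Since every substep of (2) is explicit, the method can be sketched in a few lines (Python). The second substep below uses the King-type correction with \(F(y_n);\) the defaults \(\beta =2,\) \(H\equiv 1,\) the stopping rule and all helper names are illustrative assumptions, not prescribed by the analysis:

```python
def king_like(f, fp, x0, beta=2.0, H=lambda mu: 1.0, tol=1e-12, maxit=50):
    """Scalar sketch of the three-step King-like method (2).

    Illustrative assumptions (not fixed by the text): beta = 2,
    weight function H == 1, and a residual-based stopping rule.
    """
    # First- and second-order divided differences F[a, b], F[a, b, c]
    dd1 = lambda a, b: (f(a) - f(b)) / (a - b)
    dd2 = lambda a, b, c: (dd1(a, b) - dd1(b, c)) / (a - c)
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            return x
        y = x - fx / fp(x)                      # Newton substep
        fy = f(y)
        # King-type correction with parameter beta
        z = y - (fx + beta * fy) * fy / (fp(x) * (fx - (beta - 2.0) * fy))
        if z == y:                              # correction underflowed
            return z
        fz = f(z)
        # Third substep built from divided differences and the weight H
        denom = dd1(z, x) + dd2(z, y, x) * (z - y)
        x = z - H(fy / fx) * fz / denom
    return x

root = king_like(lambda x: x ** 3 - 8.0, lambda x: 3.0 * x ** 2, 2.5)
```

For example, for \(f(x)=x^3-8\) with \(x_0=2.5\) the iteration settles on the root 2 within a few steps.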

As a motivational example, let us define function F on \(X=[-\frac{1}{2},\frac{5}{2}]\) by

$$\begin{aligned} F(x)=\left\{ \begin{array}{lll} x^3\ln x^2+ x^5-x^4,\,\,\,\, &{} x\ne 0\\ 0 ,\,\,\,\, &{} x=0 \end{array}\right. \end{aligned}$$

Choose \(x^*=1.\) We have that

$$\begin{aligned} F'(x)= & {} 3x^2\ln x^2+5x^4-4x^3+2x^2,\quad F'(1)=3,\\ F''(x)= & {} 6x\ln x^2+20x^3-12x^2+10x\\ F'''(x)= & {} 6\ln x^2+60x^2-24x+22. \end{aligned}$$
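The term \(6\ln x^2\) drives \(F'''\) to \(-\infty \) as \(x\rightarrow 0.\) A quick numerical check (Python, using only the closed form above):

```python
import math

def F3(x):
    """Third derivative of the motivational F, valid for x != 0."""
    return 6.0 * math.log(x ** 2) + 60.0 * x ** 2 - 24.0 * x + 22.0

# |F'''| blows up near x = 0 because of the 6*ln(x^2) term
samples = [F3(10.0 ** -k) for k in (1, 3, 6)]
```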

Then, obviously, function F does not have a bounded third derivative in X. Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations defined on \({\mathbb {R}}\) [1–34]. These results show that if the initial point \(x_0\) is sufficiently close to the solution \(x^*,\) then the sequence \(\{x_n\}\) converges to \(x^*.\) But how close to the solution \(x^*\) should the initial guess \(x_0\) be? These local results give no information on the radius of the convergence ball for the corresponding method. We address this question for method (2) in the “Local convergence analysis” section. The same technique can be applied to other methods. In the present study we extend the applicability of these methods by using hypotheses only up to the first derivative of function F, together with contractions. Moreover, we avoid Taylor expansions and use Lipschitz parameters instead. This way we do not need higher order derivatives to show the convergence of these methods.

The paper is organized as follows. In “Local convergence analysis” section we present the local convergence analysis. We also provide a radius of convergence, computable error bounds and uniqueness result not given in the earlier studies using Taylor expansions. Special cases and numerical examples are presented in the concluding “Numerical examples” section.

Local Convergence Analysis

We present the local convergence analysis of method (2) in this section. Let \(L_0 >0, L >0, K>0, M \ge 1\) and \(\beta \in {\mathbb {R}}\) be given parameters and let \(\varphi :[0, \frac{1}{L_0})\rightarrow [0, +\infty )\) be a continuous and non-decreasing function. It is convenient for the local convergence analysis of method (2) that follows to introduce some scalar functions and parameters. Define functions \(g_1, \, p,\, h_p\) on the interval \([0, \frac{1}{L_0})\) by

$$\begin{aligned}&\displaystyle g_1(t)=\frac{Lt}{2(1-L_0t)},\\&\displaystyle p(t)=\frac{L_0t}{2}+|\beta -2|Mg_1(t),\\&\displaystyle h_p(t)=p(t)-1\\ \end{aligned}$$

and parameter \(r_1\) by

$$\begin{aligned} r_1=\frac{2}{2L_0+L}. \end{aligned}$$

We have that \(h_p(0)=-1 <0\) and \(h_p(t)\rightarrow +\infty \) as \(t\rightarrow \frac{1}{L_0}^-.\) It follows from the intermediate value theorem that function \(h_p\) has zeros in the interval \((0, \frac{1}{L_0}).\) Denote by \(r_p\) the smallest such zero. Moreover, define functions \(g_2, h_2, q\) and \(h_q\) on the interval \([0, r_p)\) by

$$\begin{aligned}&\displaystyle g_2(t)=\left[ 1+\frac{M}{1-L_0t}+\frac{2|1-\beta |M^2g_1(t)}{(1-L_0t)(1-p(t))}\right] g_1(t),\\&\displaystyle h_2(t)=g_2(t)-1,\\&\displaystyle q(t)=K(3+g_1(t)+2g_2(t))t\\ \end{aligned}$$

and

$$\begin{aligned} h_q(t)=q(t)-1. \end{aligned}$$

We have that \(h_2(0)=h_q(0)=-1 < 0\) and \(h_2(t)\rightarrow +\infty , h_q(t)\rightarrow +\infty \) as \(t\rightarrow r_p^-.\) Denote by \(r_2, r_q\) the smallest zeros of functions \(h_2\) and \(h_q\) in the interval \((0, r_p).\) Furthermore, define functions \(g_3\) and \(h_3\) on the interval \([0, r_q)\) by

$$\begin{aligned} g_3(t)=\left( 1+\frac{M\varphi (t)}{1-q(t)}\right) g_2(t) \end{aligned}$$

and

$$\begin{aligned} h_3(t)=g_3(t)-1. \end{aligned}$$

We have that \(h_3(0)=-1 < 0\) and \(h_3(t)\rightarrow +\infty \) as \(t\rightarrow r_q^-.\) Denote by \(r_3\) the smallest zero of function \(h_3\) in the interval \((0, r_q).\) Set

$$\begin{aligned} r=\min \{r_1, r_2, r_3\}. \end{aligned}$$
(3)

Then, we have that

$$\begin{aligned} 0 < r \le r_1 \end{aligned}$$
(4)

and for each \( t\in [0, r)\)

$$\begin{aligned} 0\le g_1(t)< & {} 1,\end{aligned}$$
(5)
$$\begin{aligned} 0\le p(t)< & {} 1,\end{aligned}$$
(6)
$$\begin{aligned} 0\le g_2(t)< & {} 1,\end{aligned}$$
(7)
$$\begin{aligned} 0\le q(t)< & {} 1 \end{aligned}$$
(8)

and

$$\begin{aligned} 0\le g_3(t) < 1. \end{aligned}$$
(9)

Let \(U(v,\rho )\) and \(\bar{U}(v,\rho )\) stand, respectively, for the open and closed balls in \({\mathbb {R}}\) with center \(v\in {\mathbb {R}}\) and radius \(\rho >0.\) Next, we present the local convergence analysis of method (2) using the preceding notation.

Theorem 2.1

Let \(F: D\subset {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a differentiable function; F[., .], F[., ., .] be divided differences of order one and two for function F and let \(H:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a continuous function. Suppose there exist \(x^*\in D,\, L_0 > 0,\, L >0,\,K > 0,\, M \ge 1\) and a continuous and nondecreasing function \(\varphi :[0, \frac{1}{L_0})\rightarrow [0, +\infty )\) such that \( F(x^*)=0,\,\, F'(x^*)\ne 0,\)

$$\begin{aligned}&\displaystyle |H(x)|\le \varphi (|x-x^*|),\end{aligned}$$
(10)
$$\begin{aligned}&\displaystyle \left| F'(x^*)^{-1}\left( F'(x)-F'(x^*)\right) \right| \le L_0|x-x^*|,\end{aligned}$$
(11)
$$\begin{aligned}&\displaystyle \left| F'(x^*)^{-1}\left( F'(x)-F'(y)\right) \right| \le L|x-y|,\end{aligned}$$
(12)
$$\begin{aligned}&\displaystyle \left| F'(x^*)^{-1}F'(x)\right| \le M,\end{aligned}$$
(13)
$$\begin{aligned}&\displaystyle \left| F'(x^*)^{-1}\left( F[x,y]-F'(x^*)\right) \right| \le K\left( |x-x^*|+|y-x^*|\right) , \end{aligned}$$
(14)

and

$$\begin{aligned} \bar{U}(x^*, r)\subseteq D; \end{aligned}$$
(15)

where the radius r is defined by (3). Then, the sequence \(\{x_n\}\) generated by method (2) for \(x_0\in U(x^*, r)-\{x^*\}\) is well defined, remains in \(U(x^*, r)\) for each \(n=0,1,2,\ldots \) and converges to \(x^*.\) Moreover, the following estimates hold

$$\begin{aligned}&\displaystyle \left| y_n-x^*\right| \le g_1\left( |x_n-x^*|\right) |x_n-x^*| < |x_n-x^*| < r,\end{aligned}$$
(16)
$$\begin{aligned}&\displaystyle |z_n-x^*|\le g_2(|x_n-x^*|)|x_n-x^*| < |x_n-x^*|, \end{aligned}$$
(17)

and

$$\begin{aligned} |x_{n+1}-x^*|\le g_3(|x_n-x^*|)|x_n-x^*| < |x_n-x^*|, \end{aligned}$$
(18)

where the “g” functions are defined above Theorem 2.1. Furthermore, for \(T\in [r, \frac{2}{L_0})\) the limit point \(x^*\) is the only solution of equation \(F(x)=0\) in \(\bar{U}(x^*,T)\cap D.\)

Proof

We shall show estimates (16)–(18) using mathematical induction. By hypothesis \(x_0\in {U}(x^*, r)-\{x^*\},\) (3) and (11), we have that

$$\begin{aligned} \left| F'(x^*)^{-1}\left( F'(x_0)-F'(x^*)\right) \right| \le L_0|x_0-x^*| < L_0r < 1. \end{aligned}$$
(19)

It follows from (19) and Banach Lemma on invertible functions [3, 4, 29, 30] that \(F'(x_0)\ne 0\) and

$$\begin{aligned} \left| F'(x^*)^{-1}F'(x_0)\right| \le \frac{1}{1-L_0|x_0-x^*|}. \end{aligned}$$
(20)

Hence, \(y_0\) is well defined by the first substep of method (2) for \(n=0.\) Using (3), (5), (12) and (20) we get that

$$\begin{aligned} |y_0-x^*|= & {} \left| x_0-x^*-F'(x_0)^{-1}F(x_0)\right| \nonumber \\\le & {} \left| F'(x_0)^{-1}F'(x^*)\right| \left| \int _0^1F'(x^*)^{-1}\left( F'(x^*+\theta (x_0-x^*))-F'(x_0)\right) (x_0-x^*)d\theta \right| \nonumber \\\le & {} \frac{L|x_0-x^*|^2}{2\left( 1-L_0|x_0-x^*|\right) }\nonumber \\= & {} g_1\left( |x_0-x^*|\right) |x_0-x^*| < |x_0-x^*| < r, \end{aligned}$$
(21)

which shows (16) for \(n=0\) and \(y_0\in U(x^*, r).\) We can write that

$$\begin{aligned} F(x_0)=F(x_0)-F(x^*)=\int _0^1F'(x^*+\theta (x_0-x^*))(x_0-x^*)d\theta . \end{aligned}$$
(22)

Notice that \(|x^*+\theta (x_0-x^*)-x^*|=\theta |x_0-x^*| < r.\) Hence, we get that \(x^*+\theta (x_0-x^*)\in U(x^*, r).\) Then, by (13) and (22), we obtain that

$$\begin{aligned} \left| F'(x^*)^{-1}F(x_0)\right|\le & {} M\left| x_0-x^*\right| . \end{aligned}$$
(23)

We also have, by (21) and (23) (with \(x_0\) replaced by \(y_0\)), that

$$\begin{aligned} \left| F'(x^*)^{-1}F(y_0)\right|\le & {} M|y_0-x^*|\le Mg_1(|x_0-x^*|)|x_0-x^*|, \end{aligned}$$
(24)

since \(y_0\in U(x^*, r).\) Next, we shall show \( F(x_0)-(\beta -2)F(y_0)\ne 0.\) Using (3), (6), (11), (24) and the hypothesis \(x_0\ne x^*,\) we have in turn that

$$\begin{aligned}&\left| \left( F'(x^*)(x_0-x^*)\right) ^{-1}\left[ F(x_0)-(\beta -2)F(y_0)-F'(x^*)(x_0-x^*)\right] \right| \nonumber \\&\quad \le |x_0-x^*|^{-1}\left[ \left| F'(x^*)^{-1}\left( F(x_0)-F(x^*)-F'(x^*)(x_0-x^*)\right) \right| \right. \nonumber \\&\qquad \left. +\,|\beta -2|\left| F'(x^*)^{-1}F(y_0)\right| \right] \nonumber \\&\quad \le |x_0-x^*|^{-1}\left[ \frac{L_0}{2}|x_0-x^*|^2+\,|\beta -2|Mg_1(|x_0-x^*|)|x_0-x^*|\right] \nonumber \\&\quad =p(|x_0-x^*|) < p(r) < 1. \end{aligned}$$
(25)

It follows from (25) and the Banach lemma that

$$\begin{aligned} \left| \left( F(x_0)-(\beta -2)F(y_0)\right) ^{-1}F'(x^*)\right| \le \frac{1}{|x_0-x^*|(1-p(|x_0-x^*|))}. \end{aligned}$$
(26)

Hence, \(z_0\) is well defined by the second substep of method (2) for \(n=0.\) Then, we can write

$$\begin{aligned} z_0-x^*= & {} y_0-x^*-F'(x_0)^{-1}F(y_0)\nonumber \\&\quad +\,\left[ 1-\frac{F(x_0)+\beta F(y_0)}{F(x_0)-(\beta -2)F(y_0)}\right] F'(x_0)^{-1}F(y_0)\nonumber \\= & {} y_0-x^*-F'(x_0)^{-1}F(y_0)\nonumber \\&\quad +\,\frac{2(1-\beta )F(y_0)F'(x_0)^{-1}F(y_0)}{F(x_0)-(\beta -2)F(y_0)}. \end{aligned}$$
(27)

Using (3), (7), (20), (21), (24), (26) and (27) we get in turn that

$$\begin{aligned} \left| z_0-x^*\right|\le & {} \left| y_0-x^*\right| +\left| F'(x_0)^{-1}F'(x^*)\right| \left| F'(x^*)^{-1}F(y_0)\right| \nonumber \\&+\,2|1-\beta |\left| \left( F(x_0)-(\beta -2)F(y_0)\right) ^{-1}F'(x^*)\right| \nonumber \\&\times \, \left| F'(x^*)^{-1}F(y_0)\right| \left| F'(x_0)^{-1}F'(x^*)\right| \left| F'(x^*)^{-1}F(y_0)\right| \nonumber \\\le & {} |y_0-x^*|+\frac{M|y_0-x^*|}{1-L_0|x_0-x^*|}\nonumber \\&+\,\frac{2|1-\beta |M^2|y_0-x^*|^2}{(1-L_0|x_0-x^*|)(1-p(|x_0-x^*|))|x_0-x^*|}\nonumber \\\le & {} \left[ 1+\frac{M}{1-L_0|x_0-x^*|}+\frac{2|1-\beta |M^2g_1(|x_0-x^*|)}{(1-L_0|x_0-x^*|)(1-p(|x_0-x^*|))}\right] |y_0-x^*|\nonumber \\\le & {} g_2(|x_0-x^*|)|x_0-x^*| < |x_0-x^*| < r, \end{aligned}$$
(28)

which shows (17) for \(n=0\) and \(z_0\in U(x^*, r).\) Then, we also have by (23) (with \(x_0\) replaced by \(z_0\)) that

$$\begin{aligned} |F'(x^*)^{-1}F(z_0)|\le M|z_0-x^*|, \end{aligned}$$
(29)

since \(z_0\in U(x^*, r).\) We can write by the properties of the divided differences that

$$\begin{aligned}&F[z_0,x_0]+F[z_0, y_0, x_0](z_0-y_0)\nonumber \\&\quad =F[z_0, x_0]+F[z_0, x_0, y_0](z_0-y_0)\nonumber \\&\quad =F[z_0, x_0]+F[z_0, x_0]-F[x_0, y_0]. \end{aligned}$$
(30)

Next, we shall show that \(F[z_0, x_0]+F[z_0, y_0, x_0](z_0-y_0)\ne 0.\) Using (3), (8),(14), (21), (28) and (30), we obtain in turn that

$$\begin{aligned}&|F'(x^*)^{-1}(2F[z_0, x_0]-F[x_0, y_0]-F'(x^*))|\nonumber \\&\le |F'(x^*)^{-1}(F[z_0, x_0]-F'(x^*))|\nonumber \\&\quad +\,|F'(x^*)^{-1}(F[z_0, x_0]-F'(x^*))|\nonumber \\&\quad +\,|F'(x^*)^{-1}(F[x_0, y_0]-F'(x^*))|\nonumber \\&\le K(|z_0-x^*|+|x_0-x^*|)\nonumber \\&\quad +\,K(|z_0-x^*|+|x_0-x^*|)+K(|x_0-x^*|+|y_0-x^*|)\nonumber \\&=K(2|z_0-x^*|+3|x_0-x^*|+\,|y_0-x^*|)\nonumber \\&\le K[g_1(|x_0-x^*|)+3+2g_2(|x_0-x^*|)]|x_0-x^*|\nonumber \\&=q(|x_0-x^*|) < q(r) < 1. \end{aligned}$$
(31)

Then, we get by (31) that

$$\begin{aligned} \big |(F[z_0, x_0]+F[z_0, y_0, x_0](z_0-y_0))^{-1}F'(x^*)\big |\le \frac{1}{1-q(|x_0-x^*|)}. \end{aligned}$$
(32)

Hence, \(x_1\) is well defined by the third substep of method (2) for \(n=0.\) Then, using (3), (9), (10), (28), (29) and (32) we obtain in turn that

$$\begin{aligned} |x_1-x^*|\le & {} |z_0-x^*|+|H(\mu _0)|\nonumber \\&\times \, \left| \left( F[z_0, x_0]+F[z_0, y_0, x_0](z_0-y_0)\right) ^{-1}F'(x^*)\right| \left| F'(x^*)^{-1}F(z_0)\right| \nonumber \\\le & {} |z_0-x^*|+ \frac{M\varphi (|x_0-x^*|)|z_0-x^*|}{1-q(|x_0-x^*|)}\nonumber \\\le & {} \left[ 1+\frac{M\varphi (|x_0-x^*|)}{1-q(|x_0-x^*|)}\right] g_2(|x_0-x^*|)|x_0-x^*| \nonumber \\= & {} g_3(|x_0-x^*|)|x_0-x^*| < |x_0-x^*| < r, \end{aligned}$$
(33)

which shows (18) for \(n=0\) and \(x_1\in U(x^*, r).\) By simply replacing \(x_0, y_0, z_0,\, x_1\) by \(x_k, y_k, z_k, x_{k+1}\) in the preceding estimates we arrive at estimates (16)–(18). Then, from the estimate \(|x_{k+1}-x^*| < |x_k-x^*| < r,\) we deduce that \(x_{k+1}\in U(x^*, r)\) and \(\lim _{k\rightarrow \infty }x_k=x^*.\) To show the uniqueness part, let \(Q=\int _0^1F'(y^*+\theta (x^*-y^*))d\theta \) for some \(y^*\in \bar{U}(x^*, T)\) with \(F(y^*)=0.\) Using (11), we get that

$$\begin{aligned} \left| F'(x^*)^{-1}\left( Q-F'(x^*)\right) \right|\le & {} \int _0^1L_0\left| y^*+\theta (x^*-y^*)-x^*\right| d\theta \nonumber \\= & {} \int _0^1L_0(1-\theta )|x^*-y^*|d\theta \le \frac{L_0}{2}T < 1. \end{aligned}$$
(34)

It follows from (34) and the Banach Lemma on invertible functions that Q is invertible. Finally, from the identity \(0=F(x^*)-F(y^*)=Q(x^*-y^*),\) we deduce that \(x^*=y^*.\) \(\square \)

Remark 2.2

  1.

    In view of (11) and the estimate

    $$\begin{aligned} \left| F'(x^*)^{-1}F'(x)\right|= & {} \left| F'(x^*)^{-1}\left( F'(x)-F'(x^*)\right) +I\right| \\\le & {} 1+\left| F'(x^*)^{-1}\left( F'(x)-F'(x^*)\right) \right| \le 1+L_0|x-x^*| \end{aligned}$$

    condition (13) can be dropped and M can be replaced by

    $$\begin{aligned} M(t)=1+L_0 t \end{aligned}$$

    or

    $$\begin{aligned} M(t)=M=2, \end{aligned}$$

    since \(t\in [0, \frac{1}{L_0}).\)

  2.

    The results obtained here can be used for operators F satisfying autonomous differential equations [3] of the form

    $$\begin{aligned} F'(x)=P(F(x)) \end{aligned}$$

    where P is a continuous operator. Then, since \(F'(x^*)=P(F(x^*))=P(0),\) we can apply the results without actually knowing \(x^*.\) For example, let \(F(x)=e^x-1.\) Then, we can choose: \(P(x)=x+1.\)

  3.

    The radius \(r_1\) was shown by us to be the convergence radius of Newton’s method [3, 4]

    $$\begin{aligned} x_{n+1}=x_n-F'(x_n)^{-1}F(x_n)\text { for each } n=0,1,2,\ldots \end{aligned}$$
    (35)

    under conditions (11) and (12). It follows from the definition of r that the convergence radius r of method (2) cannot be larger than the convergence radius \(r_1\) of the second order Newton’s method (35). As already noted in [3, 4], \(r_1\) is at least as large as the convergence ball given by Rheinboldt [32]

    $$\begin{aligned} r_R=\frac{2}{3L}.\end{aligned}$$
    (36)

    In particular, for \(L_0 < L\) we have that

    $$\begin{aligned} r_R < r_1 \end{aligned}$$

    and

    $$\begin{aligned} \frac{r_R}{r_1}\rightarrow \frac{1}{3}\quad \text {as}\quad \frac{L_0}{L}\rightarrow 0. \end{aligned}$$

    That is, our convergence radius \(r_1\) is at most three times larger than Rheinboldt’s. The same value for \(r_R\) was given by Traub [34].

  4.

    It is worth noticing that method (2) does not change when we use the conditions of Theorem 2.1 instead of the stronger conditions used in [7]. Moreover, we can compute the computational order of convergence (COC) defined by

    $$\begin{aligned} \xi = \ln \left( \frac{|x_{n+1}-x^*|}{|x_n-x^*|}\right) \Bigg /\ln \left( \frac{|x_{n}-x^*|}{|x_{n-1}-x^*|}\right) \end{aligned}$$

    or the approximate computational order of convergence

    $$\begin{aligned} \xi _1= \ln \left( \frac{|x_{n+1}-x_n|}{|x_n-x_{n-1}|}\right) \Bigg /\ln \left( \frac{|x_{n}-x_{n-1}|}{|x_{n-1}-x_{n-2}|}\right) . \end{aligned}$$

    This way we obtain in practice the order of convergence without resorting to bounds involving estimates higher than the first Fréchet derivative of operator F.
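For illustration, \(\xi _1\) can be computed from any four consecutive iterates. The sketch below (Python) applies it to a Newton sequence for \(x^2-2;\) the test function and starting point are assumptions made only for this illustration:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence xi_1 from the
    last four iterates xs[-4:]."""
    x0, x1, x2, x3 = xs[-4:]
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

# Newton iterates for f(x) = x**2 - 2 starting at x0 = 1.5
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

order = acoc(xs)   # close to 2 for a quadratically convergent sequence
```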

Numerical Examples

We present numerical examples in this section. It follows from the proof of Theorem 2.1 that since \(\mu _n=\frac{F(y_n)}{F(x_n)},\) we can choose

$$\begin{aligned} \varphi (t)=\frac{Mg_2(t)t}{1-L_0t}. \end{aligned}$$

Example 3.1

Let \(D=(-\infty , +\infty ).\) Define function f on D by

$$\begin{aligned} f(x)=\sin (x). \end{aligned}$$
(37)

Then we have for \(x^*=0\) that \(L_0=L=M=1, K=\frac{L_0}{2}.\) The parameters are

$$\begin{aligned} r_1=0.6667,\, \quad {r_p}=0.3139,\quad r_2=0.2371,\quad r_q=0.2710,\quad r_3=0.2016=r. \end{aligned}$$
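The radii above are the zeros of the corresponding \(h\) functions and can be approximated numerically. The sketch below (Python) scans for the first sign change and then bisects, taking \(\varphi (t)=\frac{Mg_2(t)t}{1-L_0t}\) as chosen at the start of this section; the value \(\beta =1\) used in the check is our assumption (the example does not restate \(\beta \)), so only \(r_1,\) which is independent of \(\beta ,\) is compared with the value quoted above.

```python
def smallest_root(h, lo, hi, grid=20000, it=80):
    """Smallest zero of h in (lo, hi): coarse scan for the first
    sign change of h (negative to non-negative), then bisection."""
    step = (hi - lo) / grid
    a = lo
    while a + step < hi:
        b = a + step
        if h(a) < 0.0 <= h(b):
            for _ in range(it):
                m = 0.5 * (a + b)
                if h(m) < 0.0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        a = b
    return None

def convergence_radii(L0, L, M, K, beta):
    """Radii r1, r_p, r_2, r_q, r_3 from the g-, p- and q-functions,
    with phi(t) = M*g2(t)*t/(1 - L0*t); assumes h_p has a zero
    (so beta != 2 when M*g1 stays bounded)."""
    g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
    p = lambda t: L0 * t / 2.0 + abs(beta - 2.0) * M * g1(t)
    r1 = 2.0 / (2.0 * L0 + L)
    rp = smallest_root(lambda t: p(t) - 1.0, 0.0, (1.0 - 1e-9) / L0)
    g2 = lambda t: (1.0 + M / (1.0 - L0 * t)
                    + 2.0 * abs(1.0 - beta) * M ** 2 * g1(t)
                    / ((1.0 - L0 * t) * (1.0 - p(t)))) * g1(t)
    r2 = smallest_root(lambda t: g2(t) - 1.0, 0.0, rp)
    q = lambda t: K * (3.0 + g1(t) + 2.0 * g2(t)) * t
    rq = smallest_root(lambda t: q(t) - 1.0, 0.0, rp)
    phi = lambda t: M * g2(t) * t / (1.0 - L0 * t)
    g3 = lambda t: (1.0 + M * phi(t) / (1.0 - q(t))) * g2(t)
    r3 = smallest_root(lambda t: g3(t) - 1.0, 0.0, rq)
    return r1, rp, r2, rq, r3

# Example 3.1 data with an assumed beta = 1
r1, rp, r2, rq, r3 = convergence_radii(1.0, 1.0, 1.0, 0.5, 1.0)
```

Since \(g_3(t)\ge g_2(t),\) the computed radii always satisfy \(r_3\le r_2.\)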

Example 3.2

Let \(D=[-1,1].\) Define function f on D by

$$\begin{aligned} f(x)=e^x-1. \end{aligned}$$
(38)

Using (38) and \(x^*=0,\) we get that \(L_0=e-1 < L=e, M=2, K=\frac{L_0}{2}.\) The parameters are

$$\begin{aligned} r_1=0.3249,\quad {r_p}=0.1337,\quad r_2=0.0711,\quad r_q=0.1043,\, \quad r_3=0.0609=r. \end{aligned}$$

Example 3.3

Returning to the motivational example from the introduction of this study, we have \(L_0=L=146.6629073,\,M=2, K=\frac{L_0}{2}.\) The parameters are

$$\begin{aligned} r_1=0.0045,\, \quad {r_p}=0.0018,\, \quad r_2=0.0011,\, \quad r_q=0.0015,\,\quad r_3=0.0011=r. \end{aligned}$$