1 Introduction

It is well known that one of the most important and challenging problems in numerical analysis is the search for solutions of nonlinear equations in Banach spaces. The need to solve systems of nonlinear equations appears in different scientific areas, such as physics, engineering, biology or chemistry, where we find systems that model various phenomena whose solution is unknown and must therefore be approximated. Most commonly, these problems appear in the form of nonlinear equations or systems of nonlinear equations

$$\begin{aligned} F(x)=0 \end{aligned}$$
(1)

where \(F:D \subseteq X\rightarrow Y\) is a nonlinear Fréchet differentiable operator defined on an open convex domain D, with D a subset of a Banach space X and values in another Banach space Y. This equation can represent different problems: integral equations, differential equations or systems of nonlinear equations.

In most cases, the analytical solutions of nonlinear equations are not known, so the best way to solve these equations or systems of equations is to use numerical iterative methods, which can approximate the desired solution to any prescribed tolerance that could not otherwise be achieved.

On the other hand, Mathematics is always changing and, with the appearance and growth of STEM as presented in [1,2,3], applications to other disciplines are used to teach them. Moreover, in advanced mathematics we need to use different alternatives, given the well-known difficulties that students face.

After studying and working with various higher order iterative methods for solving nonlinear equations, the main difference we find when carrying out the convergence analysis of iterative methods in Banach spaces is whether we focus on the initial data or on the solution obtained. The convergence analysis can thus be studied from two points of view: taking into account the vicinity of the solution, which is known as local analysis, or taking into account the initial approximations and the domain, which is called semilocal analysis. In this paper we carry out the local analysis.

Many researchers [4, 5] have discussed the local convergence of iterative methods under different continuity conditions. Singh et al. [6] discussed the semilocal and local convergence of a fifth order iterative method under a Hölder continuity condition in Banach spaces. The local convergence of many optimal eighth order iterative methods under a Lipschitz continuity condition is discussed in [7]. For an initial approximation \(x_0\), Argyros et al. [8, 9] discussed the local convergence of fifth and sixth order iterative methods. Other studies can be seen in [10,11,12,13,14,15,16].

We consider the fourth and fifth order iterative methods defined, for \(n=0,1,2,\ldots \), by

$$\begin{aligned}&\left\{ \begin{aligned}&y_n = x_n - \theta F'(x_n)^{-1}F(x_n)\\&z_n = x_n - \frac{1}{\theta ^{2}}F'(x_n)^{-1}\left( (\theta ^{2}+\theta -1)F(x_n) + F(y_n)\right) \\&x_{n+1} = z_n - F'(x_n)^{-1} F(z_n) \end{aligned} \right. \end{aligned}$$
(2)
$$\begin{aligned}&\left\{ \begin{aligned}&y_n = x_n - \theta F'(x_n)^{-1}F(x_n)\\&z_n = x_n - \frac{1}{\theta ^{2}}F'(x_n)^{-1}\left( (\theta ^{2}+\theta -1)F(x_n) + F(y_n)\right) \\&x_{n+1} = z_n - F'(y_n)^{-1} F(z_n) \end{aligned} \right. \end{aligned}$$
(3)

The semilocal convergence study of (2) is performed in [17]. It can easily be observed that, for \(\theta =1\), it reduces to the fifth order method of [18]; the semilocal convergence analysis of the latter method for \(\theta =1\) can be seen in [19].
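Before analyzing them, the two schemes can be sketched in a few lines of Python (an illustrative translation of (2) and (3) of our own, not code from the cited references); every application of an inverse \(F'(\cdot )^{-1}\) becomes a linear solve.

```python
import numpy as np

def step_method2(F, J, x, theta):
    """One iteration of the fourth order method (2).

    F returns F(x) as a 1-D array and J returns the Jacobian F'(x).
    """
    Jx = J(x)
    y = x - theta * np.linalg.solve(Jx, F(x))
    rhs = (theta**2 + theta - 1.0) * F(x) + F(y)
    z = x - np.linalg.solve(Jx, rhs) / theta**2
    return z - np.linalg.solve(Jx, F(z))        # last correction with F'(x_n)

def step_method3(F, J, x, theta):
    """One iteration of the fifth order method (3): same first two steps,
    but the last correction uses F'(y_n) instead of F'(x_n)."""
    Jx = J(x)
    y = x - theta * np.linalg.solve(Jx, F(x))
    rhs = (theta**2 + theta - 1.0) * F(x) + F(y)
    z = x - np.linalg.solve(Jx, rhs) / theta**2
    return z - np.linalg.solve(J(y), F(z))      # last correction with F'(y_n)
```

Applied, for instance, to a small diagonal system with solution at the origin, with \(\theta =1\) and starting point \((0.5,0.5,0.5)^T\), a handful of iterations of either variant reaches the solution to machine precision.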

Next, we study the local convergence of the iterative methods (2) and (3). Local convergence is based on a neighborhood of the solution, and we only need to assume that the first order Fréchet derivative satisfies a Lipschitz continuity condition. As a result of the study, we obtain theorems of existence and uniqueness in which the convergence balls of both methods are established. In addition, we present several examples in which we compute the radii of the convergence balls using the established theorems.

It can easily be observed that different values of the parameter \(\theta \) give different methods, so it is reasonable to look for a particular \(\theta \) that enlarges the convergence balls. The relation between \(\theta \) and the convergence ball is presented so that one can choose it for practical purposes. Next, taking some existing methods from the literature, the convergence balls are compared, and it is observed that our approach achieves improved convergence balls in several numerical examples.

Finally, the dynamical study of the method applied to a generic degree-two polynomial is performed: we first show the parameter spaces and then, choosing different values of the parameter with strange behaviors, we represent the basins of attraction.

This paper is arranged in six sections. The first section is the introduction. In Sect. 2, the local convergence of the fourth order iterative method (2) is discussed. The local convergence of the fifth order iterative method (3) is studied in Sect. 3. Several examples are discussed in Sect. 4. Section 5 is the study of real and complex dynamics. Finally, conclusions and references are provided in Sect. 6.

2 Local convergence of method (2)

In this section, we discuss the local convergence of the iterative method (2) for solving the nonlinear equation (1). The order of convergence is shown to be four by means of Taylor expansions. We discuss the local convergence under the following assumptions, which must hold for all \(x,y \in D\) and for some \(M>0, M_0>0\):

$$\begin{aligned}&F(\beta ) = 0, \quad \exists \, F'(\beta )^{-1} \in B(Y,X) \end{aligned}$$
(4)
$$\begin{aligned}&\Vert (F'(x) - F'(y)) F'(\beta )^{-1}\Vert \le M\Vert x-y\Vert \end{aligned}$$
(5)
$$\begin{aligned}&\Vert (F'(x) - F'(\beta )) F'(\beta )^{-1}\Vert \le M_{0}\Vert x-\beta \Vert \end{aligned}$$
(6)

where B(Y, X) denotes the set of bounded linear operators from Y into X. Next, we state a preliminary lemma that will help us to prove the local convergence of the fourth order iterative method under these assumptions.

Lemma 1

Let F be the operator defined in (1), satisfying assumptions (4)–(6). Then the following hold,

$$\begin{aligned}&\Vert F'(\beta )^{-1}F'(x)\Vert \le 1 + M_0 \Vert x-\beta \Vert \quad \forall x \in D\\&\Vert F'(\beta )^{-1}F'(\beta + t(x-\beta ))\Vert \le 1 + M_0 \Vert x-\beta \Vert \quad \forall t \in ]0,1[\\&\Vert F'(\beta )^{-1}F(x)\Vert \le (M_0 \Vert x-\beta \Vert + 1)\Vert x-\beta \Vert \end{aligned}$$

Proof

The proof of this lemma can be found in [4], so we will not repeat it in this article and refer readers to the original reference. \(\square \)

Theorem 1

Let \(F:D\subseteq X\rightarrow Y\) be the nonlinear Fréchet differentiable operator of (1), let \(\beta \) be a solution such that \(\overline{\mathcal {B}}(\beta ,\rho )\subseteq D\), and let \(\theta \in (2-\sqrt{2},2)\), where \(\rho \) is the radius to be determined. We assume that assumptions (4)–(6) hold. Then the sequence \(\{x_n\}\) generated by (2) from an initial approximation \(x_0\in \mathcal {B}(\beta ,\rho )\) is well defined for \(n=0,1,2,\ldots \), remains in \(\mathcal {B}(\beta ,\rho )\) and converges to \(\beta \). Furthermore, the following three inequalities hold for \(n=0,1,2,\ldots \)

$$\begin{aligned} \Vert y_n-\beta \Vert&\le g_1(\Vert x_n-\beta \Vert )\Vert x_n-\beta \Vert<\Vert x_n-\beta \Vert<\rho \\ \Vert z_n-\beta \Vert&\le g_2(\Vert x_n-\beta \Vert )\Vert x_n-\beta \Vert<\Vert x_n-\beta \Vert<\rho \\ \Vert x_{n+1}-\beta \Vert&\le g_3(\Vert x_n-\beta \Vert )\Vert x_n-\beta \Vert<\Vert x_n-\beta \Vert <\rho \end{aligned}$$

The functions \(g_i,\,i=1,2,3\), are defined in the proof. Additionally, if there exists \(R\in [0,2/M_0)\) such that \(\overline{\mathcal {B}}(\beta ,R)\subseteq D\), then the limit point \(\beta \) is the unique solution of (1) in \(\overline{\mathcal {B}}(\beta ,R)\).

Proof

Taking an initial approximation \(x_0\in \mathcal {B}(\beta ,\rho ) \) and taking into account the assumptions of the theorem, we have \(\Vert x_0-\beta \Vert <\frac{1}{M_0}\), and hence, using (6),

$$\begin{aligned} \Vert F'(\beta )^{-1} (F'(x_0) - F'(\beta ))\Vert \le M_0\Vert x_0-\beta \Vert <1 \end{aligned}$$

Now, using the Banach Lemma we get to

$$\begin{aligned} \Vert F'(x_0)^{-1}F'(\beta )\Vert \le \frac{1}{1-M_0\Vert x_0-\beta \Vert } \end{aligned}$$

which shows that \(F'(x_0)^{-1}\) exists, so \(y_0\) and \(z_0\) are well defined. Applying the first step of method (2) for \( n = 0 \), we arrive at

$$\begin{aligned} y_0 - \beta&= x_0 - \beta -\theta F'(x_0)^{-1}F(x_0) \\&= x_0-\beta -F'(x_0)^{-1}F(x_0) +F'(x_0)^{-1}F(x_0)- \theta F'(x_0)^{-1}F(x_0) \\&= x_0-\beta -F'(x_0)^{-1}F(x_0) +(1-\theta )F'(x_0)^{-1}F(x_0) \\&=-F'(x_0)^{-1}(F(x_0)-F'(x_0)(x_0-\beta ))+(1-\theta )F'(x_0)^{-1}F(x_0)\\&=-F'(x_0)^{-1}F'(\beta ) \int _{0}^{1} F'(\beta )^{-1} [F'(\beta + t(x_0 - \beta )) - F'(x_0)](x_0 - \beta ) dt\\&\quad +\, (1-\theta )F'(x_0)^{-1}F'(\beta )\int _{0}^{1}F'(\beta )^{-1}F'(\beta +t(x_0-\beta ))(x_0-\beta ) dt \end{aligned}$$

Next, we take norms on both sides,

$$\begin{aligned} \Vert y_0-\beta \Vert \le \frac{1}{1-M_0 e_{0}} \left( \frac{M}{2} e_{0}^{2} +|1-\theta |(1+M_0 e_0)e_0\right) = g_1(e_{0}) e_{0} \end{aligned}$$

where \(e_0=\Vert x_0-\beta \Vert \) and

$$\begin{aligned} g_1(t)= \frac{\frac{M}{2} t+|1-\theta |(1+M_0 t)}{(1- M_0 t)} \end{aligned}$$

Now, we consider the function \(h_1(t)=g_1(t)-1\). Since \(h_1(0)=|1-\theta |-1<0\) for \(\theta \in (0,2)\) and \(h_1(t)\rightarrow \infty \) as \(t\rightarrow (1/M_0)^{-}\), the Intermediate Value Theorem shows that \(h_1(t)\) has at least one root in the interval \((0,1/M_0).\) If \(\rho _1\) is the smallest root of \(h_1(t)\) in the interval \((0,1/M_0)\), we obtain

$$\begin{aligned} 0<\rho _1<1/M_0 \end{aligned}$$

and \(0\le g_1(t)<1\) for all \(t \in [0,\rho _1).\) Thus, we reach

$$\begin{aligned} \Vert y_0 - \beta \Vert \le g_1(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert < \Vert x_0-\beta \Vert \end{aligned}$$

From the second step of the proposed methods, we get

$$\begin{aligned} z_0 - \beta= & \, x_0 - \beta - \frac{1}{\theta ^{2}}F'(x_0)^{-1}\left( (\theta ^{2}+\theta -1)F(x_0) + F(y_0)\right) \\= & \, x_0-\beta -F'(x_0)^{-1}\left[ \left( 1+\frac{1}{\theta }-\frac{1}{\theta ^{2}}\right) F(x_0) + \frac{1}{\theta ^{2}}F(y_0)\right] \\= & \, -F'(x_0)^{-1}\left[ F(x_0)-F'(x_0)(x_0-\beta )+\left( \frac{1}{\theta }-\frac{1}{\theta ^{2}}\right) F(x_0)+\frac{1}{\theta ^{2}}F(y_0)\right] \end{aligned}$$

Apply norm on both sides,

$$\begin{aligned} \Vert z_0 - \beta \Vert\le & \, \Vert F'(x_0)^{-1}F'(\beta )\Vert \left[ \int _{0}^{1}\Vert F'(\beta )^{-1}\left( F'(\beta +t(x_0-\beta )) - F'(x_0)\right) \Vert dt\, \Vert x_0 - \beta \Vert \right. \\&\left. +\, \left| \frac{1}{\theta }-\frac{1}{\theta ^{2}}\right| \Vert F'(\beta )^{-1}F(x_0)\Vert +\frac{1}{\theta ^{2}}\Vert F'(\beta )^{-1} F(y_0)\Vert \right] \\\le & \, \frac{1}{1-M_0\Vert x_0-\beta \Vert }\left[ \frac{M}{2}\Vert x_0-\beta \Vert ^{2}+\left| \frac{1}{\theta }-\frac{1}{\theta ^{2}}\right| (1+M_0\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert \right. \\&\left. +\,\frac{1}{\theta ^{2}} (1+M_0\Vert y_0-\beta \Vert )\Vert y_0-\beta \Vert \right] \\\le & \, \frac{1}{1-M_0\Vert x_0-\beta \Vert }\left[ \frac{M}{2}\Vert x_0-\beta \Vert +\left| \frac{1}{\theta }-\frac{1}{\theta ^{2}}\right| (1+M_0\Vert x_0-\beta \Vert )\right. \\&\left. +\,\frac{1}{\theta ^{2}} (1+M_0g_1(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert )g_1(\Vert x_0-\beta \Vert )\right] \Vert x_0-\beta \Vert = g_2(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert \end{aligned}$$

where,

$$\begin{aligned} g_2(t)= \frac{ 1}{1-M_0 t}\left( \frac{Mt}{2}+\left| \frac{1}{\theta }-\frac{1}{\theta ^{2}}\right| (1+M_0 t)+\frac{1}{\theta ^{2}}(1+M_0g_1(t)t)g_1(t)\right) \end{aligned}$$

Then, we consider the function \(h_2(t)=g_2(t)-1\). Since \(h_2(0)=2|1-\theta |/\theta ^{2}-1<0\) if \(\theta \in (2-\sqrt{2},2) \) and \(h_2(\rho _1)>0\), the Intermediate Value Theorem shows that \(h_2(t)\) has at least one root in the interval \((0,\rho _1).\) Let \(\rho _2\) be the smallest root of \(h_2(t)\) in \((0,\rho _1).\) Now we obtain

$$\begin{aligned} 0<\rho _2<\rho _1 \end{aligned}$$

and \(0\le g_2(t)<1\) for all \(t \in [0,\rho _2).\) Applying the third step of the method, we have

$$\begin{aligned} x_1 - \beta = z_0 - \beta - F'(x_0)^{-1}F'(\beta )F'(\beta )^{-1}F(z_0) \quad \end{aligned}$$

Then, taking norms on both sides and using Lemma 1, we obtain

$$\begin{aligned} \Vert x_1 - \beta \Vert\le & \, \Vert z_0 - \beta \Vert + \frac{1}{1 - M_0\Vert x_0-\beta \Vert }(1 + M_0\Vert z_0 - \beta \Vert ) \Vert z_0 - \beta \Vert \\\le & \, \left( 1 + \frac{1 + M_0 g_2(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert }{1 - M_0\Vert x_0-\beta \Vert }\right) g_2(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert \\= & \, g_3(\Vert x_0-\beta \Vert ) \Vert x_0-\beta \Vert \end{aligned}$$

where

$$\begin{aligned} g_3(t)= \left( 1 + \frac{1 + M_0 g_2(t)t}{1 - M_0 t}\right) g_2(t) \end{aligned}$$

Considering the function \(h_3(t)=g_3(t)-1\), since \(h_3(0)=4|\theta -1|/\theta ^{2}-1<0\) if \(\theta \in (2/3,2)\) and \(h_3(\rho _2)>0\), the Intermediate Value Theorem shows that \(h_3(t)\) has at least one root in the interval \((0,\rho _2).\) Let \(\rho \) be the smallest root of \(h_3(t)\) in \((0,\rho _2).\) Then we get

$$\begin{aligned} 0<\rho<\rho _2<\rho _1<1/M_0 \end{aligned}$$

and \(0\le g_3(t)<1\forall \,\, t \in [0,\rho ).\) Furthermore, we have,

$$\begin{aligned} \Vert x_1 - \beta \Vert \le g_3(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert< \Vert x_0-\beta \Vert <\rho \end{aligned}$$

As a result, the inequalities hold for \(n=0\). Replacing \(x_0,y_0,z_0\) by \(x_n,y_n,z_n\), mathematical induction yields the inequalities for every \(n\). From the estimate \(\Vert x_{n+1}-\beta \Vert \le \Vert x_n-\beta \Vert <\rho \), we have \(x_{n+1}\in \mathcal {B}(\beta ,\rho ).\) Moreover,

$$\begin{aligned} \Vert x_{n+1} - \beta \Vert\le & \, g_3(\Vert x_n-\beta \Vert )\Vert x_n-\beta \Vert \le g_3(\Vert x_{n-1}-\beta \Vert )\Vert x_{n-1}-\beta \Vert \\\le & \, g_3(\Vert x_{n-2}-\beta \Vert )\Vert x_{n-2}-\beta \Vert \le \cdots \le g_3(\Vert x_{0}-\beta \Vert )^{n+1}\Vert x_{0}-\beta \Vert \end{aligned}$$

Then, since \(0\le g_3(\Vert x_0-\beta \Vert )<1\), taking limits we get \(\displaystyle {\lim _{n\rightarrow \infty }g_3(\Vert x_0-\beta \Vert )^{n+1}=0}.\) Hence \(\lim _{n\rightarrow \infty }x_n=\beta \), and we can affirm that the method converges to the solution.

Next we will prove the uniqueness. Let \(\alpha \) be another solution in \(\mathcal {B}(\beta ,\rho )\), \(\alpha \ne \beta \) with \(F(\alpha )=0.\)

Let \(K=\int _{0}^{1}F'(\alpha +t(\beta -\alpha ))dt\). Then, taking into account (6), we obtain

$$\begin{aligned} \Vert F'(\beta )^{-1}(K-F'(\beta ))\Vert \le \int _{0}^{1}M_0\Vert \alpha +t(\beta -\alpha )-\beta \Vert dt = \frac{M_0}{2}\Vert \beta -\alpha \Vert \le \frac{M_0}{2}R<1 \end{aligned}$$

Therefore, by the Banach lemma, \(K^{-1}\) exists. Now, since

$$\begin{aligned} 0=F(\beta )-F(\alpha )=K(\beta -\alpha ), \end{aligned}$$

we deduce that \(\beta =\alpha \). \(\square \)
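In practice, the radii are obtained by solving \(g_i(t)=1\) numerically. The following sketch (our own; the constants \(M=M_0=1\), \(\theta =1\) are the illustrative values of Example 4 below) finds \(\rho _1\), \(\rho _2\) and \(\rho \) by bisection:

```python
# Convergence radii for method (2): rho_1 > rho_2 > rho are the smallest
# positive roots of g_1(t) = 1, g_2(t) = 1, g_3(t) = 1 on (0, 1/M0).
# Illustrative constants: M = M0 = 1, theta = 1.
M = M0 = 1.0
theta = 1.0

def g1(t):
    return (M*t/2 + abs(1 - theta)*(1 + M0*t)) / (1 - M0*t)

def g2(t):
    return (M*t/2 + abs(1/theta - 1/theta**2)*(1 + M0*t)
            + (1 + M0*g1(t)*t)*g1(t)/theta**2) / (1 - M0*t)

def g3(t):
    return (1 + (1 + M0*g2(t)*t)/(1 - M0*t)) * g2(t)

def smallest_root(g, a, b, tol=1e-12):
    """Bisection for g(t) = 1 on (a, b), assuming g(a) < 1 < g(b)."""
    while b - a > tol:
        m = 0.5*(a + b)
        a, b = (m, b) if g(m) < 1 else (a, m)
    return 0.5*(a + b)

rho1 = smallest_root(g1, 0.0, 1/M0 - 1e-9)
rho2 = smallest_root(g2, 0.0, rho1)
rho  = smallest_root(g3, 0.0, rho2)
print(rho, rho2, rho1)     # 0 < rho < rho2 < rho1 < 1/M0, as in the proof
```

For these constants the chain \(0<\rho<\rho _2<\rho _1<1/M_0\) established in the proof can be checked directly on the computed values.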

3 Local convergence of method (3)

In this section we discuss the local convergence of the fifth order iterative method (3). The order of convergence is shown to be five by means of Taylor expansions. First, note that the first two steps of the proposed method are the same as those of method (2); hence, the first two inequalities are the same as in the previous theorem. From the third step of the method, we get

$$\begin{aligned} \Vert x_1-\beta \Vert&\le \Vert z_0-\beta \Vert +\Vert F'(y_0)^{-1}F'(\beta )\Vert \Vert F'(\beta )^{-1}F(z_0)\Vert \\&\le \Vert z_0 - \beta \Vert + \frac{(1+M_0\Vert z_0-\beta \Vert )\Vert z_0-\beta \Vert }{1 -M_0 \Vert y_0-\beta \Vert }\\&\le g_2(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert \\&\quad +\frac{(1+M_0 g_2(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert )g_2(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert }{1-M_0g_1(\Vert x_0-\beta \Vert )\Vert x_0-\beta \Vert }\\&= g_4(\Vert x_0-\beta \Vert ) \Vert x_0-\beta \Vert \end{aligned}$$

where

$$\begin{aligned} g_4(t)= \left( g_2(t) + \frac{(1 + M_0 g_2(t)t)g_2(t)}{1 - M_0g_1(t)t}\right) \end{aligned}$$

We consider the function \(h_4(t)=g_4(t)-1\). Since \(h_4(0)=4|\theta -1|/\theta ^{2}-1<0\) if \(\theta \in (2/3,2)\) and \(h_4(\rho _2)>0\), the Intermediate Value Theorem shows that \(h_4(t)\) has at least one root in the interval \((0,\rho _2).\) Let \(R_1\) be the smallest root of \(h_4(t)\) in \((0,\rho _2).\) So we have

$$\begin{aligned} 0<R_1<\rho _2<\rho _1<1/M_0 \end{aligned}$$

and \(0\le g_4(t)<1\,\forall \,t \in [0,R_1)\). Next, we establish the local convergence theorem for the iterative method (3).

Theorem 2

Assume that assumptions (4)–(6) hold. Then, for an initial approximation \(x_0\in \mathcal {B}(\beta ,R_1)\), the sequence \(\{x_n\}\) generated by (3) is well defined for \(n=0,1,2,\ldots \), remains in \(\mathcal {B}(\beta ,R_1)\) and converges to \(\beta \). Moreover, the following inequalities hold for \(n=0,1,2,\ldots \)

$$\begin{aligned} \Vert y_n-\beta \Vert&\le g_1(\Vert x_n-\beta \Vert )\Vert x_n-\beta \Vert<\Vert x_n-\beta \Vert<R_1 \\ \Vert z_n-\beta \Vert&\le g_2(\Vert x_n-\beta \Vert )\Vert x_n-\beta \Vert<\Vert x_n-\beta \Vert<R_1 \\ \Vert x_{n+1}-\beta \Vert&\le g_4(\Vert x_n-\beta \Vert )\Vert x_n-\beta \Vert<\Vert x_n-\beta \Vert <R_1 \end{aligned}$$

where the functions \(g_i,\,i=1,2,4\), are defined above. Also, if there exists \(R\in [0,2/M_0)\) such that \(\overline{\mathcal {B}}(\beta ,R)\subseteq D\), then the limit point \(\beta \) is unique in \(\overline{\mathcal {B}}(\beta ,R)\).

Proof

The proof of this theorem follows the same lines as that of Theorem 1, hence we omit it. \(\square \)
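The radius \(R_1\) of method (3) can be computed in the same way from \(g_4\). A self-contained sketch of our own, again with the illustrative constants \(M=M_0=1\) and \(\theta =1\):

```python
# Radius R1 for the fifth order method (3): smallest positive root of
# g_4(t) = 1, with illustrative constants M = M0 = 1 and theta = 1.
M = M0 = 1.0
theta = 1.0
g1 = lambda t: (M*t/2 + abs(1 - theta)*(1 + M0*t)) / (1 - M0*t)
g2 = lambda t: (M*t/2 + abs(1/theta - 1/theta**2)*(1 + M0*t)
                + (1 + M0*g1(t)*t)*g1(t)/theta**2) / (1 - M0*t)
g4 = lambda t: g2(t) + (1 + M0*g2(t)*t)*g2(t) / (1 - M0*g1(t)*t)

a, b = 0.0, 0.5              # g4(0) = 0 and g4(0.5) > 1 for these constants
while b - a > 1e-12:         # bisection for g4(t) = 1
    m = 0.5*(a + b)
    a, b = (m, b) if g4(m) < 1 else (a, m)
R1 = 0.5*(a + b)
print(R1)
```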

4 Numerical examples

In this section, several numerical examples are given to demonstrate the efficacy of the studied methods and analysis. All the numerical examples have been worked out with the same software and computer. For the comparison of the radii of the convergence balls, we consider an existing fifth order Halley-like method described in [8], denoted by \(HLM_5\), and a sixth order Chebyshev–Halley type method described in [9], denoted by \(CHM_6\), given respectively, for \(n\ge 0\), by

$$\begin{aligned} y_n= & \, x_n - F'(x_n)^{-1}F(x_n)\nonumber \\ u_n= & \, x_n-a F'(x_n)^{-1}F(x_n)\nonumber \\ z_n= & \, y_n-\gamma A_{a,n}F'(x_n)^{-1}F(x_n)\nonumber \\ x_{n+1}= & \, z_n-\delta B_{a,n}F'(x_n)^{-1}F(z_n) \end{aligned}$$
(7)

and,

$$\begin{aligned} y_n= & \, x_n-F'(x_n)^{-1}F(x_n)\nonumber \\ z_n= & \, y_n-(1+(F(x_n)-2\delta F(y_n))^{-1}F(y_n))F'(x_n)^{-1}F(x_n)\nonumber \\ x_{n+1}= & \, z_n-(F'(x_n)+\overline{F}''(x_n)(z_n-x_n))^{-1}F(z_n). \end{aligned}$$
(8)

where \(\overline{F}''(x_n)=2F(y_n)F'(x_n)^2F(x_n)\) and \(a, \gamma , \delta \) are real parameters. From a computational point of view, we denote our proposed fourth order iterative method (2) by \(PM1_4\) and our fifth order iterative method (3) by \(PM2_5\). We use the following five examples; the first four are taken from [4].

Example 1

Consider the function f on \(\mathcal {D}=\left[ \frac{-1}{2},\frac{5}{2}\right] \), given by

$$\begin{aligned} f(x) = \left\{ \begin{array} {l c r} x^3\ln x^2+x^5-x^4 &\,\text {if } x \ne 0,\\ 0 &\,\text {if } x=0 \end{array}\right. \end{aligned}$$

Now, for the solution \(\alpha =1\), it can easily be seen that \(f'''\) is unbounded on \(\mathcal {D}\), so conditions involving higher order derivatives would be more restrictive for the local convergence in this case. However, all the conditions of Theorem 1 hold and, from a simple calculation, we get \(M=M_0=96.6628\).

Example 2

In this example, we consider \(X=Y={\mathbb {R}}^3,\,\mathcal {D}=\overline{\mathcal {B}}(0,1)\) and consider the function F on \(\mathcal {D}\) for \(u=(x,y,z)^T\) by

$$\begin{aligned} F(u) = \left( e^x-1,\frac{e-1}{2}y^2+y,z\right) ^T. \end{aligned}$$

Making the derivative:

$$\begin{aligned} F'(u) = \left( \begin{array}{ccc} e^x &\, 0 &\,0 \\ 0 &\, (e-1)y+1 &\, 0 \\ 0 &\, 0 &\, 1 \\ \end{array} \right) \end{aligned}$$

Now, for \(\alpha =(0,0,0)^T,\,F'(\alpha )=F'(\alpha )^{-1}=diag(1,1,1)\). From a simple calculation we get \(M_0=e-1\) and \(M=e\).
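A quick numerical sanity check of the constant \(M_0=e-1\) can be run by sampling the unit ball (a script of our own; it uses the Euclidean norm on \({\mathbb {R}}^3\) and the fact that \(F'(u)-F'(\alpha )\) is diagonal, so its operator norm is the largest diagonal entry in absolute value):

```python
# Sample points of the closed unit ball and check the center-Lipschitz bound
# ||(F'(u) - F'(alpha)) F'(alpha)^{-1}|| <= (e - 1) ||u - alpha||, alpha = 0.
import numpy as np

rng = np.random.default_rng(0)
M0 = np.e - 1.0
for _ in range(1000):
    u = rng.uniform(-1.0, 1.0, 3)
    u /= max(1.0, float(np.linalg.norm(u)))   # project into the closed unit ball
    # F'(u) - F'(0) is diagonal and F'(0)^{-1} is the identity
    diag = np.array([np.exp(u[0]) - 1.0, (np.e - 1.0) * u[1], 0.0])
    assert np.max(np.abs(diag)) <= M0 * np.linalg.norm(u) + 1e-12
print("bound with M0 = e - 1 verified on 1000 samples")
```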

Next, we consider a Hammerstein type integral equation. This type of equation arises in practical problems involving chemistry and electromagnetic fluid dynamics [20].

Example 3

Consider \(X=Y={\mathbb {C}}[0,1]\), the space of all continuous functions defined on [0, 1]. Define F on \(\mathcal {D}=\overline{\mathcal {B}}(0,1)\), by

$$\begin{aligned} F(x(s))=x(s) -5\int _{0}^{1} {s t x(t)^3}dt. \end{aligned}$$

where \(x(s)\in {\mathbb {C}}[0,1]\), equipped with the max norm \(\displaystyle {\Vert u\Vert =\max _{t\in [0,1]}|u(t)|}\).

It can be easily seen that

$$\begin{aligned} F'(x(s))y(s)=y(s) -15\int _{0}^{1} {s t x(t)^2 y(t)}dt. \end{aligned}$$

Now, for the trivial solution \(\alpha =0\), we can obtain \(M_0=7.5\) and \(M=15\).
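The Fréchet derivative \(F'(x)y\,(s)=y(s)-15\int _0^1 s\,t\,x(t)^2y(t)\,dt\) can be verified by finite differences on a discretization of the integral operator (a sketch of our own; grid size, quadrature and test functions are arbitrary choices):

```python
# Trapezoidal discretization of F(x)(s) = x(s) - 5 s * int_0^1 t x(t)^3 dt
# and a central finite-difference check of the Frechet derivative
# F'(x)y (s) = y(s) - 15 s * int_0^1 t x(t)^2 y(t) dt.
import numpy as np

n = 201
t = np.linspace(0.0, 1.0, n)                 # grid: plays the role of s and t
w = np.full(n, 1.0/(n - 1)); w[0] = w[-1] = 0.5/(n - 1)   # trapezoid weights

def F(x):
    return x - 5.0 * t * np.dot(w, t * x**3)

def dF(x, y):
    return y - 15.0 * t * np.dot(w, t * x**2 * y)

x, y, eps = t.copy(), t**2, 1e-6             # arbitrary test functions
fd = (F(x + eps*y) - F(x - eps*y)) / (2.0 * eps)
err = np.max(np.abs(fd - dF(x, y)))
print(err)     # O(eps^2): the derivative formula is consistent
```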

Example 4

Let \(X=Y={\mathbb {R}}^2,\,\mathcal {D}=\overline{\mathcal {B}}(0,1)\). Define F on \(\mathcal {D}\) for \(u=(x,y)^T\), by

$$\begin{aligned} F(u) = \left( \sin x,\frac{1}{3}(e^y + 2y - 1)\right) ^T \end{aligned}$$

Making the derivative:

$$\begin{aligned} F'(u) = \left( \begin{array}{cc} \cos x &\, 0 \\ 0 &\, \frac{1}{3}(e^y+2) \end{array} \right) . \end{aligned}$$

It can be easily seen that for \(\alpha =(0,0)^T\), we get \(M=M_0=1\).

Example 5

[15] Now, in order to show the applicability of our theory to a real problem, we consider the following quartic equation, which describes the fraction of the nitrogen–hydrogen feed that is converted to ammonia (the so-called fractional conversion). At 250 atm and 500 °C, this equation takes the form:

$$\begin{aligned} f(x) = x^4 - 7.79075x^3 + 14.7445x^2 + 2.511x - 1.674. \end{aligned}$$

Consider the function f on \(\mathcal {D}=[0,1]\).

From a simple calculation, we get \(M=7.33210369\) and \(M_0=3.66605148\).
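As an illustration, the scalar version of method (2) with \(\theta =1\) reaches the physically meaningful root of this quartic in a few iterations (an experiment of our own; the starting point \(x_0=0.3\) is our choice):

```python
# Method (2), scalar form, applied to the fractional-conversion quartic.
f  = lambda x: x**4 - 7.79075*x**3 + 14.7445*x**2 + 2.511*x - 1.674
df = lambda x: 4*x**3 - 3*7.79075*x**2 + 2*14.7445*x + 2.511

theta, x = 1.0, 0.3
for _ in range(6):
    y = x - theta * f(x) / df(x)
    z = x - ((theta**2 + theta - 1.0) * f(x) + f(y)) / (theta**2 * df(x))
    x = z - f(z) / df(x)
print(x, f(x))   # converges to the fractional conversion near x = 0.2777
```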

Table 1 Value of parameters for \(HLM_5\) and \(CHM_6\) for different examples
Table 2 Comparison of the radii of the convergence balls for the different methods
Table 3 Radii balls for different \(\theta \) of \(PM1_4\)
Table 4 Radii balls for different \(\theta \) of \(PM2_5\)

In Table 1 we can see the values of the parameters of the \(HLM_5\) and \(CHM_6\) methods described above for each of the proposed examples. In Table 2 we can see that the convergence radii obtained with our methods are larger than those obtained with the other two methods described above, for each of the proposed examples.

In Table 3 we can see how, for different values of the parameter \(\theta \), we obtain different values of the convergence radii. In addition, the value \(\theta =1\) maximizes the radius for this fourth order method.

Similarly, in Table 4 we observe how, for different values of the parameter \(\theta \), we obtain different values of the convergence radii for this fifth order method. As in the previous case, the convergence radius is maximized at \(\theta =1\) for the five examples studied.

5 Dynamics

In this section, we study the real and the complex dynamics of method (3) when it is applied to quadratic polynomials.

5.1 Complex dynamics

First of all, we refer the reader to [5, 10, 11, 21] in order to review the principal concepts of complex dynamics, which will be assumed known in this article.

In this subsection we carry out the complex dynamics study of method (3). For this purpose, we apply it to a generic polynomial \(p(z)=(z-A)(z-B)\) and use the Möbius map \(h(z)=\frac{z-A}{z-B}\), with the following properties

$$\begin{aligned} \text{ i) } \ \ h\left( \infty \right) =1, \quad \text{ ii) } \ \ h\left( A \right) =0, \quad \text{ iii) } \ \ h\left( B \right) =\infty , \end{aligned}$$

As a consequence of applying the method to a generic degree-two polynomial and conjugating with the Möbius map, we obtain the rational operator

$$\begin{aligned} G(z,\theta )=-\frac{z^4 (z+2) \left( -2 \theta +z^3-2 \theta z^2+4 z^2-4 \theta z+6 z+2\right) }{(2 z+1) \left( 2 \theta z^3-2 z^3+4 \theta z^2-6 z^2+2 \theta z-4 z-1\right) }. \end{aligned}$$
(9)

5.2 Fixed points and their stability

It is easy to see that both \(z=0\) and \(z=\infty \) are fixed points of the operator \(G(z,\theta )\). They come from applying the Möbius map to the roots A and B, respectively.

But we also want to study the strange fixed points, that is, fixed points of the operator that are not solutions of the equation \(p(z)=0\). First of all, we observe that \(z=1\) is a strange fixed point: it is the image of \(\infty \) under the Möbius map, so it corresponds to divergence of the original method. There are more strange fixed points, namely all the solutions of the polynomial

$$\begin{aligned} p(z)=z^6-2 \theta z^5+7 z^5-10 \theta z^4+21 z^4-16 \theta z^3+31 z^3-10 \theta z^2+21 z^2-2 \theta z+7 z+1 \end{aligned}$$

Naturally, these strange fixed points depend on the value of the parameter \( \theta \).
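For any given \(\theta \), the fixed points can be computed numerically: writing \(G=-N/D\), the finite fixed points are the roots of \(N(z)+zD(z)\). A sketch of our own construction from (9); \(\theta =2\) is an arbitrary sample value:

```python
# Fixed points of the rational operator G(z, theta) = -N(z)/D(z) of (9):
# the finite fixed points solve N(z) + z D(z) = 0.
import numpy as np

theta = 2.0
# N(z) = z^4 (z + 2) (z^3 + (4 - 2*theta) z^2 + (6 - 4*theta) z + (2 - 2*theta))
N = np.polymul([1, 2, 0, 0, 0, 0],
               [1, 4 - 2*theta, 6 - 4*theta, 2 - 2*theta])
# D(z) = (2z + 1) ((2*theta - 2) z^3 + (4*theta - 6) z^2 + (2*theta - 4) z - 1)
D = np.polymul([2, 1], [2*theta - 2, 4*theta - 6, 2*theta - 4, -1])

G = lambda z: -np.polyval(N, z) / np.polyval(D, z)
fixed = np.roots(np.polyadd(N, np.polymul([1, 0], D)))
print(np.sort_complex(fixed))   # contains z = 0 and the strange fixed point z = 1
```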

5.3 Critical points and parameter spaces

In this subsection, we compute the critical points, specifically the free critical points, and show the parameter spaces associated with each of them. Recall that there is at least one critical point in every invariant Fatou component.

We start with the computation. The critical points of the family are the solutions of \(G'(z,\theta )=0\), where

$$\begin{aligned} G'(z,\theta )=-\frac{2 z^3 (z+1)^4 \left( 8 \theta +8 \theta z^4-8 z^4-12 \theta ^2 z^3+41 \theta z^3-39 z^3-24 \theta ^2 z^2+76 \theta z^2-62 z^2-12 \theta ^2 z+41 \theta z-39 z-8\right) }{(2 z+1)^2 \left( 2 \theta z^3-2 z^3+4 \theta z^2-6 z^2+2 \theta z-4 z-1\right) ^2} \end{aligned}$$

Solving the equation, we find the first two critical points, \(z=0\) and \(z=\infty \). We recall that both are related to the roots A and B of the original polynomial p(z), since we have applied the Möbius map, and each of them has its own associated Fatou component.

But we also find critical points other than the two related to the roots. These are called free critical points; in this particular case, they are the point \(z=-1\) and the solutions of the polynomial

$$\begin{aligned} q(z)= & \, 8 \theta +8 \theta z^4-8 z^4-12 \theta ^2 z^3+41 \theta z^3-39 z^3-24 \theta ^2 z^2\\&+\,76 \theta z^2-62 z^2-12 \theta ^2 z+41 \theta z-39 z-8 \end{aligned}$$

This polynomial has four different roots, but not all of them are independent: only two of the four solutions are independent.

At this point, the question we must ask is: why is it important to study the orbits of the critical points? Because the dynamical behavior of an iterative method is characterized by the orbits of its free critical points.

Since, as we have seen, the critical points depend on the value of the parameter, to determine, for example, whether there is an attracting periodic orbit, we must look for the values of the parameter for which the orbits of the free critical points tend to attracting periodic orbits.

For this reason, we draw the parameter spaces associated with the free critical points obtained. In these parameter spaces we use the critical point as the initial estimate and, for each value of the parameter (recall that these values are precisely what we are looking for), the color of the point informs us about where the orbit converges: to an attracting periodic orbit, to a fixed point, or even to infinity.

In Figs. 1 and 2, the parameter spaces associated to the free critical points \(cr_1(\theta )\) and \(cr_2(\theta )\) are shown. In these figures, a point painted in cyan means that the iteration of the critical point converges to 0 or to \(\infty \), and a point painted in yellow means that the iteration converges to 1. Other colors represent different dynamical behaviors, such as convergence to strange fixed points or to n-cycles. So, when choosing a value of the parameter \(\theta \), we must select points of the plane that are colored in cyan in order to obtain good numerical behavior.

Fig. 1

Parameter planes associated to free critical point \(cr_1(\theta )\)

Fig. 2

Parameter planes associated to free critical point \(cr_2(\theta )\)

The anomalies are now easy to locate: any point of the parameter planes associated with the two independent free critical points that is not cyan. We therefore select values of the parameter that appear in the parameter plane in a color other than cyan and show their basins of attraction in Figs. 3, 4, 5 and 6. In Fig. 3 we can see the basins of attraction for \(\theta =2\); we observe large regions of magenta or cyan color, which imply convergence to one of the two roots, together with areas of divergence. In Fig. 4 we can see the basins of attraction for \(\theta =1.8\); in this figure there are many more areas of convergence to the roots than in the previous one. Figures 5 and 6 show the basins of attraction for the parameter values \(\theta =2.3\) and \(\theta =2.6\), respectively.

Fig. 3

Basins of attraction associated to the method with \(\theta =2\)

Fig. 4

Basins of attraction associated to the method with \(\theta =1.8\)

Fig. 5

Basins of attraction associated to the method with \(\theta =2.3\)

Fig. 6

Basins of attraction associated to the method with \(\theta =2.6\)

5.4 Real dynamics

We refer the reader to [12,13,14] for the principal concepts of real dynamics. We apply method (3) to the polynomial \(p(z)=z^2-1\) and use the convergence plane, taking the horizontal axis for the values of the starting point x and the vertical axis for the values of the parameter; that is, each point of the plane corresponds to a value of the starting point and a value of the parameter \(\theta \). We can see the convergence planes in Figs. 7, 8, 9 and 10. In these convergence planes, a point \((x_0,y_0)\) is painted in cyan if method (3) with \(\theta =y_0\) and starting point \(x_0\) converges to \(-1\), in magenta if it converges to 1, in yellow if it diverges to \(\infty \), and in other colors if it has a “problematic” behavior such as cycles, convergence to strange fixed points, or even chaos. With this tool we can view the behavior of the method for each value of the parameter and each starting point, and compare the results obtained.
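The classification behind these planes can be reproduced with a few lines of code (a minimal sketch of our own; the iteration budget and tolerance are arbitrary choices):

```python
# Classification kernel of the convergence plane: method (3) applied to
# p(x) = x^2 - 1, for one (starting point, theta) pair.
f, df = lambda x: x*x - 1.0, lambda x: 2.0*x

def attractor(x, theta, iters=50, tol=1e-8):
    for _ in range(iters):
        try:
            y = x - theta * f(x) / df(x)
            z = x - ((theta**2 + theta - 1.0) * f(x) + f(y)) / (theta**2 * df(x))
            x = z - f(z) / df(y)
        except ZeroDivisionError:
            return None                  # derivative vanished along the orbit
        if abs(x - 1.0) < tol:
            return 1                     # magenta points of the plane
        if abs(x + 1.0) < tol:
            return -1                    # cyan points of the plane
    return None                          # cycles, chaos or divergence
```

Sweeping the starting point and \(\theta \) over a grid and coloring each pair by the returned label reproduces the structure of the convergence planes.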

Fig. 7

Detail of the convergence plane

Fig. 8

Detail of the convergence plane

Fig. 9

Detail of the convergence plane

Fig. 10

Detail of the convergence plane

6 Conclusions

In this article, we have discussed the local convergence analysis of iterative methods of fourth and fifth order with the intention of approximating a locally unique solution of a given equation. The analysis is carried out only under the premise that the first order Fréchet derivative satisfies a Lipschitz continuity condition. It should be noted that this is of considerable importance, since it avoids the computation of higher order derivatives, which appear in several problems and may be an inconvenience.

We have established for each method a theorem of existence and uniqueness that provides the convergence ball. In addition, several numerical examples have been worked out to demonstrate the effectiveness of our analysis, and improved radii of the convergence balls have been obtained for different values of the parameter \(\theta \).

It is noted that increasing the value of the parameter \(\theta \) improves the radii of the convergence balls up to \(\theta =1\), where the radius is maximized; increasing the parameter beyond \(\theta =1\) reduces the radii of the convergence balls.

From the complex dynamics study we can conclude that, although we have large regions of convergence, depending on the value of the parameter we also find regions with strange behaviors. From the study of the real dynamics, we observe that, as might be expected, both dynamics are different and the complex dynamics does not contain the real dynamics, so in the studies we carry out it is not enough to consider the complex dynamics alone. In this case we have performed the dynamical study using a generic polynomial of degree two, to which we have applied the Möbius map \(h(z)=\frac{z-A}{z-B}\). For future studies we leave open the application of both methods to generic polynomials of degree greater than two, or even to polynomials depending on one or more parameters.