Abstract
Based on the third-order Traub’s method, two iterative schemes with memory are introduced. The proper inclusion of accelerating parameters allows the introduction of memory, so the order of convergence of the iterative methods increases from 3 up to 3.73 without new functional evaluations. One of them includes derivatives and the other one is derivative-free. The stability of the methods with memory is analyzed and their basins of attraction are compared to check the differences between them. The methods are applied to solve two nonlinear problems in Chemistry: the fractional conversion of the nitrogen-hydrogen feed that is converted to ammonia and the Colebrook–White equation.
1 Introduction
In recent decades there has been extensive interest in the solution of the nonlinear equation \(f(x)=0\), where the function \(f:I\subseteq \mathbb {R}\longrightarrow \mathbb {R}\) is defined in an open interval I, by means of iterative methods. Most of these schemes are based on the well-known Newton’s method, and different techniques help in the design of new iterative methods.
These kinds of schemes can be sorted according to several criteria. Based on their expression, they can be with or without derivatives, with or without memory, single-step or multistep, etc. Another classification depends on the intrinsic features of the method, such as the order of convergence, the R-order of convergence [1], the efficiency index [2] or the optimality based on the Kung and Traub conjecture [3].
The main goal in the research on iterative schemes, apart from solving the equation \(f(x)=0\), is to increase the order of convergence and to reach optimality. Multistep methods help to increase this order. They can be constructed using different techniques, such as composition or weight functions. The former composes different steps of known methods with a later treatment to reduce the number of functional evaluations (see for instance [4, 5]). The latter introduces a weight function depending on previous functional evaluations in specific steps of the method (see for instance [6, 7]). Moreover, a common guideline to improve the order of convergence of a method is to include more than one previous iterate to generate the following one, resulting in methods with memory, where the conjecture of Kung and Traub does not apply. Several papers with this purpose can be found in the literature, highlighting those of Petković and Sharma [8], Chun and Neta [9] and Soleymani et al. [10], among others. Furthermore, interesting overviews can be found in [11, 12].
In this paper, two iterative schemes are introduced. Based on the two-step third-order Traub’s method [13], the new methods keep the two-step structure. However, the main difference is the inclusion of memory. With the proper selection of the accelerating parameters, this fact will increase the order of convergence in comparison to the original method. The design of the methods, the inclusion of memory and the proof of their order of convergence cover Sect. 2. In order to analyze the stability of the schemes, in Sect. 3 a dynamical study based on multidimensional real dynamics is performed. Section 4 shows how the proposed methods can be used for solving common equations in Chemistry. Finally, Sect. 5 collects the main conclusions.
2 Improvement of the order of convergence
The third-order convergent Traub’s method [13] has the iterative expression
From method (1), some changes are introduced. The main idea is the modification of the original method to check its performance over two actions: the inclusion of parameters as a way to generate methods with memory and the replacement of the derivatives to analyze a derivative-free Traub-type method. These actions will generate two different iterative methods with memory. In order to analyze their order of convergence, the following statement will be applied [11].
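As a computational reference, the classical two-step Traub scheme consists of a Newton predictor \(y_k=x_k-f(x_k)/f'(x_k)\) followed by the corrector \(x_{k+1}=y_k-f(y_k)/f'(x_k)\), which reuses the derivative already evaluated. A minimal sketch (the test equation is ours, chosen only for illustration):

```python
def traub(f, fp, x0, tol=1e-12, maxit=100):
    """Traub's two-step method: Newton predictor, then a corrector
    that reuses the derivative evaluated at the current iterate."""
    x = x0
    for _ in range(maxit):
        d = fp(x)
        y = x - f(x) / d        # Newton predictor
        x_new = y - f(y) / d    # corrector: same derivative, no new f'
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Real root of x^3 + 4x^2 - 10 = 0, a classical test equation
root = traub(lambda x: x**3 + 4*x**2 - 10,
             lambda x: 3*x**2 + 8*x, x0=1.5)
```

Each iteration needs three functional evaluations (\(f(x_k)\), \(f'(x_k)\), \(f(y_k)\)), which is the budget the memory schemes below keep unchanged.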
Let \(\{g_k\}\) and \(\{h_k\}\) be two nonzero sequences. In this work, the notation \(g_k=\mathcal {O}(h_k)\), or equivalently \(g_k\sim h_k\), is used to indicate
where C is a nonzero constant.
2.1 Inclusion of memory
The first modification in method (1) consists of the inclusion of a parameter in the first step, resulting in
Method (2) has order of convergence 3, for every value of the parameter, and its error equation is
where \(\alpha \) is the solution of \(f(x)=0\), \(e_k=x_k-\alpha \) and \(c_j=\frac{f^{(j)}(\alpha )}{j!f'(\alpha )}, j\ge 2\). Let us remark that for \(\beta =-c_2\), the method is at least fourth-order convergent, but this value cannot be reached because the value of \(\alpha \) is unknown. Therefore, to increase the order of convergence, we need an approximation of \(c_2=\frac{f''(\alpha )}{2f'(\alpha )}\). Using Newton’s interpolation polynomial of second degree \(N(t)=f(x_k)+f[x_k,x_{k-1}](t-x_k)+f[x_k,x_{k-1},y_{k-1}](t-x_k)(t-x_{k-1})\), the value of \(\beta \) is approximated by
Replacing \(\beta \) by \(\beta _k\) in method (2), the obtained expression is
Method (4) is called MM1. Its order of convergence is analyzed in the following result.
Theorem 1
Let \(f:I\subset \mathbb {R}\longrightarrow \mathbb {R}\) be a real function sufficiently differentiable in an open interval I. If \(\alpha \in I\) is a simple root of \(f(x)=0\) and \(x_0\) and \(x_1\) are initial estimations close enough to \(\alpha \), then the iterative method MM1 converges to \(\alpha \) with order of convergence \(p\approx 3.30\).
Proof
From the error equation (3), the following relation holds:
Let us denote \(e_{k,y}=y_k-\alpha \), for all k. Let N(t) be the Newton’s interpolation polynomial of second degree which interpolates in \(x_k\), \(x_{k-1}\) and \(y_{k-1}\). By using the following Taylor series expansions around \(\alpha \),
it is verified
Let us suppose that the R-order of the method is at least p, so it is verified
such that
where \(D_p\) is the asymptotic error constant.
In the same way,
By using relation (9) in (8), we obtain
Let us consider that sequence \(\{y_k\}\) has R-order of convergence at least \(p_1\). Then,
On the other hand, from (2) and the Taylor series expansion around \(\alpha \)
it is obtained
And the use of (7) and (9) gives
Then, from (7) and (9) the error relation (5) becomes
By matching the exponents in (11) and (12), with the exponents in (10) and (13), we obtain the following system of two equations
whose positive solution \(p\approx 3.30\) is the order of convergence of method MM1. \(\square \)
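The scheme and the order just proved can be checked numerically. The sketch below is ours and rests on assumptions where formulas are not reproduced above: the parametric first step is taken as \(y_k=x_k-f(x_k)/(f'(x_k)+\beta _k f(x_k))\) and the accelerator as \(\beta _k=-N''(x_k)/(2N'(x_k))\), computed from the quoted Newton polynomial. Note also that \(p\approx 3.30\) is consistent with the positive root \((3+\sqrt{13})/2\) of \(p^2=3p+1\), the relation that arises when \(e_{k+1}\sim e_k^3\,e_{k-1}\).

```python
import math

def mm1(f, fp, x0, x1, tol=1e-10, maxit=100):
    """Sketch of a Traub-type method with memory (assumed form):
    y_k     = x_k - f(x_k)/(f'(x_k) + beta_k f(x_k)),
    x_{k+1} = y_k - f(y_k)/f'(x_k),
    with beta_k = -N''(x_k)/(2 N'(x_k)) from the Newton polynomial
    interpolating x_k, x_{k-1}, y_{k-1}."""
    dd = lambda a, b: (f(a) - f(b)) / (a - b)
    x_prev, x = x0, x1
    y_prev = x0 - f(x0) / fp(x0)          # bootstrap y_0 with a Newton step
    for _ in range(maxit):
        if abs(f(x)) < tol or x in (x_prev, y_prev):
            return x
        d1 = dd(x, x_prev)                               # f[x_k, x_{k-1}]
        d2 = (d1 - dd(x_prev, y_prev)) / (x - y_prev)    # f[x_k, x_{k-1}, y_{k-1}]
        beta = -d2 / (d1 + d2 * (x - x_prev))            # -N''(x_k)/(2 N'(x_k))
        y = x - f(x) / (fp(x) + beta * f(x))             # accelerated first step
        x_prev, y_prev, x = x, y, y - f(y) / fp(x)       # Traub-type second step
    return x

root = mm1(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.0, 1.2)
p = (3 + math.sqrt(13)) / 2   # positive root of p^2 = 3p + 1, ~3.3028
```

The per-iteration cost is the same as Traub's method; the divided differences reuse values already computed in the previous step.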
2.2 Derivative-free methods and inclusion of memory
As mentioned in the previous section, the interest in derivative-free methods lies in the fact that not every function has an available derivative. Accordingly, the second modification introduced in (1) is the replacement of the derivative by a divided difference. To allow the inclusion of memory, a parameter is also introduced in the divided difference. The proposed scheme is
where \(v_k=x_k+\delta f(x_k)\), and its error equation is
Note that for \(\delta f'(\alpha )\in \{-1,-2\}\) the method has, at least, order of convergence 4. However, as in the previous method, the value of \(\alpha \) is unknown. In an analogous way to the MM1 case, the parameter \(\delta \) is approximated in terms of N(t), resulting in
and replacing this value in (14), the final method with memory has the iterative expression
The order of convergence of (17), denoted by MM2, is 3.73, as the following result establishes.
Theorem 2
Let \(f:I\subset \mathbb {R}\longrightarrow \mathbb {R}\) be a real function sufficiently differentiable in an open interval I. If \(\alpha \in I\) is a simple root of \(f(x)=0\) and \(x_0\) and \(x_1\) are initial approximations close enough to \(\alpha \), then the iterative method MM2 converges to \(\alpha \) with order of convergence \(p\approx 3.73\).
Proof
First, from the error equation (15)
Let us denote \(e_{k,y}=y_k-\alpha \), for all k. By using Taylor series expansions (6), the definition of \(\delta _k\) given by (16) satisfies
Then, we have the relation
Analogously,
Let us suppose that the R-order of the method is at least p, and for sequence \(\{y_k\}\) is at least \(p_1\). Then, from the proof of Theorem 1, the following relations are satisfied:
As \(v_k=x_k+\delta _k f(x_k)\), using the development of \(f(x_k)\) in (6) and
it is obtained from the iterative scheme of method MM2
And using relations (19), (21) and (23), we have
Now, from (20) and (21), (18) satisfies
Finally, by matching the exponents in (24) and (25) with those in (22) and (26), the positive solution of the system of equations
is \(p\approx 3.73\), \(p_1\approx 2.73\). Then, the order of convergence of method MM2 is 3.73. \(\square \)
Let us note that both methods, MM1 and MM2, increase the order of convergence of Traub’s method by means of memory without any additional functional evaluations. Moreover, method MM2 is a derivative-free scheme which reaches higher convergence order than method MM1.
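As with MM1, the scheme and the stated orders admit a quick numerical check. The sketch below is ours and makes explicit assumptions where the formulas are not reproduced above: the accelerator is taken as \(\delta _k=-1/N'(x_k)\), which drives \(\delta _k f'(\alpha )\) to \(-1\). Moreover, \(p=2+\sqrt{3}\approx 3.73\) and \(p_1=1+\sqrt{3}\approx 2.73\) solve exactly a system of the form \(p^2=3p+p_1\), \(p\,p_1=2p+p_1\) (our reconstruction, consistent with Theorem 2).

```python
import math

def mm2(f, x0, x1, tol=1e-10, maxit=100):
    """Sketch of the derivative-free variant (assumed form): the
    derivative is replaced by the divided difference f[v_k, x_k],
    with v_k = x_k + delta_k f(x_k), and the memory enters through
    delta_k ~ -1/N'(x_k), so that delta_k f'(alpha) tends to -1."""
    dd = lambda a, b: (f(a) - f(b)) / (a - b)
    x = x1
    delta = -1.0 / dd(x1, x0)             # initial accelerator from a secant slope
    for _ in range(maxit):
        if abs(f(x)) < tol:
            return x
        v = x + delta * f(x)
        d = dd(v, x)                      # divided difference replacing f'(x_k)
        y = x - f(x) / d
        x_next = y - f(y) / d
        if x_next in (x, y):
            return x_next
        d1 = dd(x_next, x)
        d2 = (d1 - dd(x, y)) / (x_next - y)
        delta = -1.0 / (d1 + d2 * (x_next - x))   # delta_{k+1} = -1/N'(x_{k+1})
        x = x_next
    return x

root = mm2(lambda x: x**3 + 4*x**2 - 10, 1.0, 1.2)
p, p1 = 2 + math.sqrt(3), 1 + math.sqrt(3)   # ~3.7321 and ~2.7321
```

Eliminating \(p_1\) from the reconstructed system gives \(p^2-4p+1=0\), whose positive root is \(2+\sqrt{3}\).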
3 Dynamical analysis
In this section, we are going to present a dynamical study of the proposed methods with memory, based on multidimensional real dynamics. Some fundamentals about real dynamics are introduced below. Further information can be found in [6, 14, 15].
3.1 Basics on multidimensional real dynamics
The standard form of an iterative method with memory which uses two previous iterates to calculate the following one is
where \(x_0\) and \(x_1\) are the initial guesses. Since a function defined from \(\mathbb {R}^2\) to \(\mathbb {R}\) cannot have fixed points, an auxiliary vectorial function \(\varPhi \) is defined by means of
If \((x_{k-1},x_k)\) is a fixed point of \(\varPhi \), \(\varPhi (x_{k-1},x_k)=(x_{k-1},x_k)\) so, \(x_{k+1}=x_k\) and \(x_k=x_{k-1}\). Then, the discrete dynamical system \(\varPhi :\mathbb {R}^2\rightarrow \mathbb {R}^2\) is defined as
where \(\phi \) is the operator of the iterative scheme with memory.
The orbit of a point \(\mathbf{{x}}\) is defined as the set \(\{\mathbf{{x}},\varPhi (\mathbf{{x}}),\varPhi ^2(\mathbf{{x}}),\ldots ,\varPhi ^n(\mathbf{{x}}),\ldots \}\). A point \(\mathbf{{x}}=(z,x)\) is a fixed point \(\mathbf{{x}}^F=(z,x)^F\) of \(\varPhi \) if \(z=x\) and \(x=\phi (z,x)\). If a fixed point \(\mathbf{{x}}^F\) of operator \(\varPhi \) is different from \((x^r,x^r)\), where \(x^r\) satisfies \(f(x^r)=0\), it is called strange fixed point. A point \(\mathbf{{x}}_T\) is T-periodic if \(\varPhi ^T(\mathbf{{x}}_T)=\mathbf{{x}}_T\) and \(\varPhi ^t(\mathbf{{x}}_T)\ne \mathbf{{x}}_T\), for \(t<T\).
In [16], the stability of a periodic point \(\mathbf{{x}}_T\) is defined from its asymptotical behavior in the following result.
Theorem 3
Let \(\varPhi \) from \(\mathbb {R}^n\) to \(\mathbb {R}^n\) be \(\mathcal {C}^2\). Assume \(\mathbf{{x}}_T\) is a T-periodic point. Let \(\lambda _1,\lambda _2,\ldots ,\lambda _n\) be the eigenvalues of \(\varPhi '(\mathbf{{x}}_T)\), where \(\varPhi '\) denotes the Jacobian matrix of \(\varPhi \). Then,
1. If all the eigenvalues \(\lambda _j\) satisfy \(|\lambda _j|<1\), then \(\mathbf{{x}}_T\) is attracting.
2. If one eigenvalue \(\lambda _{j_0}\) satisfies \(|\lambda _{j_0}|>1\), then \(\mathbf{{x}}_T\) is unstable, that is, repelling or saddle.
3. If all the eigenvalues \(\lambda _j\) satisfy \(|\lambda _j|>1\), then \(\mathbf{{x}}_T\) is repelling.
In addition, if all the eigenvalues \(\lambda _j\) satisfy \(|\lambda _j|\ne 1\), the T-periodic point is called hyperbolic. In particular, if there exists an eigenvalue \(\lambda _i\) such that \(|\lambda _i|<1\) and another eigenvalue \(\lambda _j\) such that \(|\lambda _j|>1\), then the hyperbolic point is called a saddle point. Moreover, if all the eigenvalues are equal to zero, the T-periodic point is superattracting.
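Theorem 3 can be applied numerically: for a scheme written as \(\varPhi (z,x)=(x,\phi (z,x))\), a finite-difference Jacobian and the eigenvalue moduli of the resulting \(2\times 2\) matrix classify a fixed point. The sketch below uses Newton's operator on \(f(x)=x^2-2\) as a self-contained example (our choice, not one of the operators studied in this paper):

```python
def jacobian(phi, z, x, h=1e-6):
    """Finite-difference Jacobian of Phi(z, x) = (x, phi(z, x))
    at the point (z, x); the first row is d(x)/d(z, x) = (0, 1)."""
    dpdz = (phi(z + h, x) - phi(z - h, x)) / (2 * h)
    dpdx = (phi(z, x + h) - phi(z, x - h)) / (2 * h)
    return [[0.0, 1.0], [dpdz, dpdx]]

def eigen_moduli(J):
    """Moduli of the eigenvalues of a 2x2 matrix via its
    characteristic polynomial lambda^2 - tr*lambda + det."""
    (a, b), (c, d) = J
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = disc ** 0.5
        return sorted([abs((tr + r) / 2), abs((tr - r) / 2)])
    m = det ** 0.5          # complex pair: |lambda| = sqrt(det)
    return [m, m]

def classify(J):
    m1, m2 = eigen_moduli(J)
    if m2 < 1:
        return "attracting"
    if m1 > 1:
        return "repelling"
    if m2 > 1:
        return "unstable"
    return "non-hyperbolic"

# Newton's operator for f(x) = x^2 - 2, seen as a map on (z, x);
# its fixed point (sqrt(2), sqrt(2)) should be (super)attracting.
phi = lambda z, x: x - (x * x - 2) / (2 * x)
label = classify(jacobian(phi, 2 ** 0.5, 2 ** 0.5))
```

Here both eigenvalue moduli are numerically zero at the root, in agreement with the superattracting character of a simple root under Newton's method.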
A critical point \(\mathbf{{x}}^C\) satisfies \(\det (\varPhi '(\mathbf{{x}}^C))=0\). The basin of attraction of a T-periodic point \(\mathbf{{x}}^*\), is defined as the set of pre-images of any order such that
3.2 Basins of attraction
The dynamical plane represents the basins of attraction of each method. Several implementations can be found in the literature, such as [17, 18], where codes in different software packages devoted to complex dynamics are provided. For the real dynamics case, there are two variations of the complex dynamical planes. On the one hand, the real dynamical plane with memory is very similar to the complex dynamical plane. The current iterate \(x_k\) is represented on the abscissa axis and the previous iterate \(x_{k-1}\) on the ordinate axis, and the method is analyzed over a mesh of values of \(x_k\) and \(x_{k-1}\) as initial guesses. On the other hand, there are methods with memory whose final rational function does not include memory. In this case, the real dynamical plane with memory turns into the real dynamical line [19], and the method is analyzed over a set of initial guesses on the real line.
When the rational function includes a parameter, two useful drawing tools are the convergence plane [20] and the bifurcation diagram. The convergence plane covers an interval of the parameter on the ordinate axis and a set of initial guesses on the real line as abscissae. The bifurcation diagram shows the advanced state of the orbit starting from a small perturbation of a strange fixed point, for a set of values of the parameter.
In the dynamical lines, the dynamical planes and the convergence planes, each attracting point is mapped to a non-black color. If the orbit of an initial guess \((x_k,x_{k-1})\) or \(x_k\) tends to an attracting fixed point, the initial guess is depicted in the corresponding color; otherwise, it is depicted in black.
3.3 Analysis of the rational functions
The rational functions obtained when each method of Sect. 2 is applied on quadratic or cubic polynomials are analyzed below.
There are three quadratic polynomials under consideration: \(p_2^{\bullet }(x)=\{p_2^0(x)=x^2, p_2^+(x)=x^2+\lambda , p_2^-(x)=x^2-\lambda \}, \lambda >0\). Note that polynomial \(p_2^+(x)\) does not have real roots and the only real root of polynomial \(p_2^0(x)\) is \(x^r=0\), while for \(p_2^-(x)\) the two real roots are \(x^r_{1,2}=\pm \sqrt{\lambda }\).
The cubic polynomials under analysis, following the guideline of [21], are \(p_3^{\bullet }(x)=\{p_3^0(x)=x^3, p_3^+(x)=x^3+x,p_3^-(x)=x^3-x,p_3^{\gamma }(x)=x^3+\gamma x+1,\gamma \in \mathbb {R}\}\). Note that for \(p_3^0(x)\) and \(p_3^+(x)\), the only real root is \(x^r=0\), while \(p_3^-(x)\) has three real roots, namely \(x^r_1=0, x^r_{2,3}=\pm 1\). The real roots of \(p_3^{\gamma }(x)\) depend on the value of \(\gamma \). If we denote \(\gamma ^*=-3/\root 3 \of {4}\approx -1.8899\), when \(\gamma < \gamma ^*\) the polynomial has three real roots \(x_{1-3}^r(\gamma )\), but for \(\gamma > \gamma ^*\) the only real root of \(p_3^{\gamma }(x)\) is \(x_1^r(\gamma )\).
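The threshold \(\gamma ^*\) follows from the discriminant of the depressed cubic: for \(t^3+pt+q\) the discriminant is \(\Delta =-4p^3-27q^2\), and three distinct real roots require \(\Delta >0\). For \(x^3+\gamma x+1\) this gives \(\gamma ^3<-27/4\), that is, \(\gamma <-3/\root 3 \of {4}\). A quick check:

```python
# Discriminant of the depressed cubic x^3 + gamma*x + 1
# (p = gamma, q = 1): three distinct real roots iff it is positive.
disc = lambda g: -4 * g**3 - 27

gamma_star = -3 / 4 ** (1 / 3)   # ~ -1.8899: the discriminant vanishes here
```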
Regarding the study of the fixed points of (27), they must satisfy \(z=x\) and \(x=\phi (z,x)\), so the real dynamics of \(\varPhi \) reduces to the study of the one-dimensional operator \(\tilde{\varPhi }(x)=\left. \varPhi (z,x)\right| _{z=x}\).
3.3.1 Methods on \(p_2^{\bullet }(x)\)
The dynamical systems that result from the application of methods MM1 and MM2 on the proposed polynomials do not depend on the previous iterate z. Therefore, as \(\tilde{\varPhi }(x)=\left. \varPhi (z,x)\right| _{z=x}\), the expressions of \(\tilde{\varPhi }(x)\) match the second component of the dynamical systems \(\varPhi (z,x)\).
When method MM1 is applied on \(p_2^{\bullet }(x)\), the one-dimensional operators get the expressions
for \(p_2^0(x)\), \(p_2^+(x)\) and \(p_2^-(x)\), respectively.
Lemma 1
The only fixed points of the operators associated to method MM1 on the quadratic polynomials \(p_2^{\bullet }(x)\) are the roots of their respective polynomials.
By applying method MM2 on polynomials \(p_2^{\bullet }(x)\) the one-dimensional operators are the following:
for \(p_2^0(x)\), \(p_2^+(x)\) and \(p_2^-(x)\), respectively.
Lemma 2
The fixed points for the operators of method MM2, when it is applied on polynomials \(p_2^{\bullet }(x)\), agree with the roots of the corresponding polynomials. In addition, the rational functions corresponding to polynomials \(p_2^+(x)\) and \(p_2^-(x)\) have the strange fixed point \(x_1^F=0\), which is repelling.
Table 1 gathers the strange fixed points that are real for every value of \(\lambda >0\) for MM2, classified in terms of the quadratic polynomials.
Figure 1 shows the bifurcation diagrams of the methods that have strange fixed points. The points in the bifurcation diagrams represent the 500th to 700th iterates for each value of the parameter, so the behavior of the advanced orbit can be observed. Method MM2 has one strange fixed point \(x^F=0\) for \(p_2^+(x)\), whose bifurcation diagram is represented in Fig. 1a. For the same method applied on \(p_2^-(x)\), the bifurcation diagram corresponding to the strange fixed point \(x^F=0\) is represented in Fig. 1b.
A chaotic behavior can be observed in Fig. 1a: the application of method MM2 on \(p_2^+(x)\) results in an unstable behavior, as this polynomial does not have real roots. Regarding Fig. 1b, there is convergence to the roots of the polynomial, confirming that the orbit does not converge to any other point.
Figure 2 shows the dynamical lines of both methods when they are applied on quadratic polynomials. Orange represents convergence either to \(x^*=0\), for \(p_2^0(x)\), or to \(x^*_1=\sqrt{\lambda }\), for \(p_2^-(x)\). The blue basin represents convergence to \(x^*_2=-\sqrt{\lambda }\) for \(p_2^-(x)\). The initial guesses cover the values \(x_0\in [-30,30]\), with a mesh of 500 points. The iterations stop when the difference between the current iterate and an attracting point is lower than \(10^{-3}\) or the number of iterations reaches 50.
The dynamical lines of the methods over \(p_2^+(x)\) are black, since they do not converge to any real root, as expected. Regarding the application of MM1 and MM2 on \(p_2^-(x)\), the real lines split into two regions, and every initial guess converges to the nearest root.
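The dynamical-line computation just described (a mesh of 500 guesses in \([-30,30]\), tolerance \(10^{-3}\), at most 50 iterations) can be sketched as follows. Since the full MM1/MM2 operators are not reproduced here, the example uses Newton's map on \(p_2^-(x)=x^2-\lambda \) as a stand-in; any one-dimensional step function can be plugged in:

```python
def dynamical_line(step, attractors, lo=-30.0, hi=30.0, n=500,
                   tol=1e-3, maxit=50):
    """Color code per initial guess: 1 + index of the attractor it
    reaches, or 0 ('black') if none is reached within maxit steps."""
    colors = []
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        code = 0
        for _ in range(maxit):
            try:
                x = step(x)
            except ZeroDivisionError:
                break
            hits = [j for j, a in enumerate(attractors) if abs(x - a) < tol]
            if hits:
                code = 1 + hits[0]
                break
        colors.append(code)
    return colors

lam = 4.0                                     # lambda = 4: roots +/- 2
newton = lambda x: x - (x * x - lam) / (2 * x)
line = dynamical_line(newton, [lam ** 0.5, -lam ** 0.5])
```

For this stand-in map the line splits into two basins separated at the origin, matching the behavior described above for \(p_2^-(x)\).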
A way to visualize the behavior for different values of the parameter is the convergence plane. Figure 3 represents, with the same map of colors, the convergence plane of the methods on polynomial \(p_2^-(x)\) for \(\lambda \in [0,30]\). In addition, black and white lines represent the superattracting fixed points and the strange fixed points, respectively, for each value of \(\lambda \).
The behavior of the convergence planes is similar to the behavior of the real line in Fig. 2 for \(p_2^-(x)\) cases.
3.3.2 Methods on \(p_3^{\bullet }(x)\)
The dynamical systems that result from the application of MM1 and MM2 on cubic polynomials include both the previous iterate z and the current one x. Therefore, for the analysis of the fixed points, it is necessary to obtain \(\tilde{\varPhi }(x)=\left. \varPhi (z,x)\right| _{z=x}\).
When method MM1 is applied on cubic polynomials, the one-dimensional operators are
for \(p_3^0(x)\) and \(p_3^\pm (x)\), respectively.
Lemma 3
The only fixed points of every operator of method MM1 on the cubic polynomials \(p_3^{0}(x)\), \(p_3^{-}(x)\) and \(p_3^{+}(x)\) are the roots of these polynomials.
Figure 4 represents the dynamical planes of method MM1 for \(p_3^0(x)\), \(p_3^+(x)\) and \(p_3^-(x)\). Orange is devoted to \((z,x)^*=(0,0)\), blue represents the basin of attraction of \((z,x)^*=(1,1)\) and green is the basin of \((z,x)^*=(-1,-1)\). Note that the dynamical planes in the bottom row are a zoom of the dynamical planes in the top row. The dynamical planes of this work have been generated with a mesh of \(500 \times 500\) points and the same conditions as in the previous dynamical lines.
There are specific iterative methods for finding multiple roots, since the usual iterative methods tend to fail on them. Figure 4a, d illustrate this fact. In the remaining figures, the stability is good in a neighborhood of the roots, with convergence to the superattracting points. Let us remark that the orbit of each initial guess represented in black diverges to infinity.
Regarding the application of MM1 on \(p_3^{\gamma }(x)=x^3+\gamma x+1\), the corresponding one-dimensional operator is
The following result collects the number of fixed points depending on the value of \(\gamma \).
Lemma 4
When method MM1 is applied on polynomial \(p_3^{\gamma }(x)\), the fixed points of the rational operator are the real roots of the polynomial and two strange repelling fixed points \(x_{4,5}^F(\gamma )\) when \(\gamma \in [\gamma ^*, \gamma ^+]\), where \(\gamma ^+ \approx 1.483004\).
The dynamical planes of \(\varPhi _{3,MM1}^{\gamma }\) are depicted in Fig. 5. As detailed in the analysis of the rational function, there are three regions of \(\gamma \) where the behavior can differ. For \(\gamma <\gamma ^*\) there are three real roots and no strange fixed points. In the interval \(\gamma ^*<\gamma <\gamma ^+\), there is only one real root and there are two strange fixed points. For \(\gamma >\gamma ^+\), only one real root is present. Therefore, the dynamical planes of Fig. 5 represent one case in each interval.
The expected behavior is observed in Fig. 5 for every value of \(\gamma \). In Fig. 5d, there is convergence to each of the three real roots in a neighborhood of them. Figure 5a, d show wide black regions which correspond to periodic orbits with no convergence to the roots. A quite different behavior is observed in Fig. 5b, c, e, f, where there is only convergence to the unique real root.
The one-dimensional operators when method MM2 is applied on cubic polynomials are
for \(p_3^0(x)\), \(p_3^+(x)\) and \(p_3^-(x)\), respectively, where \(P_{25}(x)\) and \(Q_{25}(x)\) are polynomials of degree 25.
Lemma 5
The roots of polynomials \(p_3^{0}(x)\), \(p_3^{-}(x)\) and \(p_3^{+}(x)\) are fixed points of their respective operators for method MM2. Furthermore, \(\tilde{\varPhi }_{3,MM2}^-(x)\) has 8 strange fixed points: \(x_{1,2}^F=\mp \frac{1}{\sqrt{3}}\), \(x_{3,4}^F\approx \pm 0.494627\), \(x_{5,6}^F\approx \pm 0.518495\) and \(x_{7,8}^F\approx \pm 0.49701\). The fixed points \(x^F_{1,2}\) are neutral, while \(x^F_{3-8}\) are repelling.
The strange fixed points when MM2 is applied over the three cubic polynomials under consideration are collected in Table 2.
Figure 6 represents the dynamical planes of MM2 for \(p_3^0(x)\), \(p_3^+(x)\) and \(p_3^-(x)\). The dynamical planes of the top and the bottom rows are applied over the same polynomial. The difference is the magnification of the axes.
In comparison with the dynamical planes of MM1 represented in Fig. 4, in the case of method MM2 there is only convergence to the roots for \(p_3^0(x)\). For the other two polynomials, we can observe a stability behavior similar to the MM1 case.
When method MM2 is applied over \(p_3^{\gamma }(x)\), the one-dimensional operator is of the form
where \(H_{25}^{\gamma }(x)\) denotes a polynomial of degree 25 with the parameter \(\gamma \).
Lemma 6
The number of real fixed points of method MM2 when it is applied on \(p_3^{\gamma }(x)\) depends on the value of \(\gamma \). The real fixed points agree with the roots of the polynomial in the corresponding interval. Furthermore, when \(\gamma < \gamma _1\) there are eight strange fixed points. For \(\gamma \in [\gamma _1,\gamma _2]\) there are six strange fixed points. For \(\gamma \in [\gamma _2,0]\) the number of strange fixed points is four and, finally, when \(\gamma \in [0,\gamma _3]\) there are two strange fixed points, where \(\gamma _1\approx -31.4326\), \(\gamma _2\approx -0.7976\) and \(\gamma _3\approx 0.5969\).
Table 3 shows all the fixed points of the one-dimensional operator depending on \(\gamma \) in different intervals. As in method MM1, the roots of \(p_3^{\gamma }(x)\) are denoted by \(x_{1-3}^*\). Let us remark that the strange fixed points \(x_{4,5}^F\), with \(x_4^F=-x_5^F\), are attracting or repelling depending on small subintervals of \(\gamma \).
Figure 7 shows the dynamical planes of \(\varPhi _{3,MM2}^{\gamma }\) taking different values for \(\gamma \) according to the intervals used in Table 3. Since there are two strange fixed points that are attracting for different values of \(\gamma \), their basin of attraction is represented in white.
Figure 7 shows the good qualities of method MM2 for a generic cubic polynomial. Although small initial guesses may be attracted to a strange fixed point, the usual behavior is convergence to the roots.
4 Numerical performance
This section is devoted to demonstrating the features of the introduced methods MM1 and MM2. They are also compared with two well-known iterative schemes: Newton’s and Traub’s methods. To carry out this study, some nonlinear problems with applications in Chemistry are solved.
For each case, we show a table that gathers the main results of each iterative method. The stopping criterion is either \(|f(x_{k+1})|<10^{-500}\) or \(|x_{k+1}-x_{k}|<10^{-500}\), and both values are displayed in each table. Moreover, the number of iterations needed to converge and the approximated computational order of convergence ACOC [22] are also shown.
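The ACOC of [22] estimates the order from three consecutive differences, \(\hat{p}\approx \ln \left( |x_{k+1}-x_k|/|x_k-x_{k-1}|\right) /\ln \left( |x_k-x_{k-1}|/|x_{k-1}-x_{k-2}|\right) \). A minimal sketch, illustrated with Newton's method (our example) so that the computed value can be checked against the known order 2:

```python
import math

def acoc(xs):
    """Approximated computational order of convergence (ACOC) from
    the last four iterates, using successive differences as error
    estimates: ln|e3/e2| / ln|e2/e1|."""
    x0, x1, x2, x3 = xs[-4:]
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

# Illustration with Newton's method on f(x) = x^2 - 2: the computed
# ACOC should be close to the theoretical order 2.
xs = [1.0]
while abs(xs[-1] ** 2 - 2) > 1e-9:
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
order = acoc(xs)
```

In practice the estimate is taken before the iterates hit the working precision, since beyond that point the differences are dominated by rounding.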
4.1 Fractional conversion
The fractional conversion describes the fraction of the nitrogen-hydrogen feed that gets converted to ammonia. For 250 atm and 227K, the expression can be described by [23, 24]
Figure 8 represents the fractional conversion.
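Equation (29) is not reproduced above; in the literature (e.g. Shacham [23]) this problem reduces, at the stated operating conditions, to the quartic \(f(x)=x^4-7.79075x^3+14.7445x^2+2.511x-1.674\), whose physically meaningful root is \(x\approx 0.2777\). Assuming that form, a plain Newton solve from \(x_0=0.5\) serves as a sanity check of the tabulated solutions:

```python
def newton(f, fp, x, tol=1e-12, maxit=100):
    """Classical Newton iteration, used here only as a baseline."""
    for _ in range(maxit):
        x_new = x - f(x) / fp(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Reduced fractional-conversion equation (quartic form assumed from [23])
f  = lambda x: x**4 - 7.79075*x**3 + 14.7445*x**2 + 2.511*x - 1.674
fp = lambda x: 4*x**3 - 23.37225*x**2 + 29.489*x + 2.511
x_star = newton(f, fp, 0.5)   # physically meaningful conversion, ~0.2777
```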
In Table 4 we can see the results obtained with the different iterative schemes to find the solution of the nonlinear problem defined in (29), where the initial guesses are \(x_0=\{0.1,0.5\}\). The obtained values confirm the expected behavior: the methods with memory converge in fewer iterations than the original scheme of Traub, and their ACOC is close to the theoretical values.
4.2 Friction coefficients
The friction coefficients used when calculating resistance or pressure loss in ducts, tubes or pipes can be calculated with the Colebrook–White equation as [25]
where \(\theta \) is the ratio between the roughness of the surface and the hydraulic diameter, and Re is the Reynolds number. For the test cases, \(\theta =10^{-4}\) and \(Re=4\times 10^{3}\).
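A hedged sketch, writing the Colebrook–White equation in its usual implicit form \(1/\sqrt{x}=-2\log _{10}\left( \theta /3.7+2.51/(Re\sqrt{x})\right) \) and assuming the turbulent-regime value \(Re=4\times 10^{3}\) (for which a physical root exists); it is solved here with the secant method rather than the paper's schemes:

```python
import math

# Assumed data: relative roughness and a turbulent-regime Reynolds number
theta, Re = 1e-4, 4e3

# Residual of the Colebrook-White equation in its usual implicit form:
# 1/sqrt(x) = -2 log10( theta/3.7 + 2.51/(Re sqrt(x)) )
g = lambda x: 1 / math.sqrt(x) + 2 * math.log10(theta / 3.7
                                                + 2.51 / (Re * math.sqrt(x)))

def secant(g, x0, x1, tol=1e-12, maxit=100):
    """Secant iteration: a simple derivative-free baseline solver."""
    for _ in range(maxit):
        g0, g1 = g(x0), g(x1)
        if g1 == g0:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x1 - x0) < tol:
            break
    return x1

friction = secant(g, 0.02, 0.05)   # Darcy friction factor, ~0.04
```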
Figure 9 represents the function (30). Table 5 collects the data from the application of the different iterative methods to the problem defined by (30). The method with memory MM2 converges in far fewer iterations than the other three methods, and its ACOC is close to the theoretical value.
5 Conclusions
Two new iterative methods with memory have been introduced. In comparison with Traub’s method, they have higher orders of convergence: MM1, which includes derivatives, has order 3.30, while MM2 is derivative-free and has order 3.73. The stability of the methods has been verified via multidimensional real dynamics, showing the good performance of both methods for quadratic polynomials, of MM1 for a generic family of cubic polynomials with three roots and of MM2 for specific cubic polynomials with a multiple root, three simple roots or a single root. In addition, the capacity of the methods to solve common chemical problems has been shown.
References
J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables (Academic Press, Cambridge, 1970)
A.M. Ostrowski, Solutions of Equations and Systems of Equations (Academic Press, Cambridge, 1966)
H.T. Kung, J.F. Traub, Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 21, 643–651 (1974)
F. Soleymani, T. Lotfi, P. Bakhtiari, A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 8, 1001–1015 (2014)
A. Cordero, J.L. Hueso, E. Martínez, J.R. Torregrosa, A modified Newton–Jarratt’s composition. Numer. Algorithm 55, 87–99 (2010)
F.I. Chicharro, A. Cordero, J.R. Torregrosa, Dynamics of iterative families with memory based on weight functions procedure. J. Comput. Appl. Math. (2018). https://doi.org/10.1016/j.cam.2018.01.019
J.R. Sharma, H. Arora, Efficient higher order derivative-free multipoint methods with and without memory for systems of nonlinear equations. Int. J. Comput. Math. 95, 920–938 (2018)
M.S. Petković, J.R. Sharma, On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations. Numer. Algorithm 71, 457–474 (2016)
C. Chun, B. Neta, How good are methods with memory for the solution of nonlinear equations? SeMA Journal 74, 613–625 (2017)
F. Soleymani, T. Lotfi, E. Tavakoli, F.K. Haghani, Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 254, 452–458 (2015)
M.S. Petković, B. Neta, L.D. Petković, J. Džunić, Multipoint Methods for Solving Nonlinear Equations (Elsevier, New York, 2013)
S. Amat, S. Busquier, Advances in Iterative Methods for Nonlinear Equations (Springer, Berlin, 2016)
J.F. Traub, Iterative Methods for the Solution of Equations (Prentice-Hall, Upper Saddle River, 1964)
B. Campos, A. Cordero, J.R. Torregrosa, P. Vindel, A multidimensional dynamical approach to iterative methods with memory. Appl. Math. Comput. 271, 701–715 (2015)
B. Campos, A. Cordero, J.R. Torregrosa, P. Vindel, Stability of King’s family of iterative methods with memory. J. Comput. Appl. Math. 318, 504–514 (2017)
R.C. Robinson, An Introduction to Dynamical Systems: Continuous and Discrete (American Mathematical Society, Providence, 2012)
F.I. Chicharro, A. Cordero, J.R. Torregrosa, Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 780513, 1–11 (2013)
J.L. Varona, Graphic and numerical comparison between iterative methods. Math. Intell. 24, 37–46 (2002)
F.I. Chicharro, A. Cordero, J.R. Torregrosa, M.P. Vassileva, King-type derivative-free iterative families: real and memory dynamics. Complexity 2713145, 1–15 (2017)
Á.A. Magreñán, A new tool to study real dynamics: the convergence plane. Appl. Math. Comput. 248, 215–224 (2014)
S. Amat, S. Busquier, S. Plaza, Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 366, 24–32 (2010)
A. Cordero, J.R. Torregrosa, Variants of Newton’s method using fifth order quadrature formulas. Appl. Math. Comput. 190, 686–698 (2007)
M. Shacham, An improved memory method for the solution of a nonlinear equation. Chem. Eng. Sci. 44, 1495–1501 (1989)
I.K. Argyros, Á.A. Magreñán, L. Orcos, Local convergence and a chemical application of derivative free root finding methods with one parameter based on interpolation. J. Math. Chem. 54, 1404–1416 (2016)
C.F. Colebrook, C.M. White, Experiments with fluid friction in roughened pipes. Proc. R. Soc. Lond. 161, 367–381 (1937)
This research was partially supported by Ministerio de Economía y Competitividad MTM2014-52016-C2-2-P and Generalitat Valenciana PROMETEO/2016/089.
Chicharro, F.I., Cordero, A., Garrido, N. et al. Stability and applicability of iterative methods with memory. J Math Chem 57, 1282–1300 (2019). https://doi.org/10.1007/s10910-018-0952-z