1 Introduction

The solution of nonlinear equations and systems of nonlinear equations by iterative methods is one of the most attractive fields in numerical analysis. A good amount of research has been contributed to this area, but there is still room for improvement. The quadratically convergent Newton's method [1, 2, 14] is the classic work in this regard.

Let the system of nonlinear equations be

$$\begin{aligned} F(x)=(f_1(x),f_2(x),\ldots ,f_n(x))^T=0. \end{aligned}$$
(1)

where each function \(f_i\) maps a vector \(x=(x_1,x_2,\ldots ,x_n)^T\) of the n-dimensional space \({\mathrm{I{\!}\mathrm R}}^n\) into the real line \( {\mathrm{I{\!}\mathrm R}}\). System (1) involves n nonlinear equations in n unknowns.

We need to find a vector \({{\alpha }}=({{\alpha }}_1,{{\alpha }}_2,\ldots ,{{\alpha }}_n)^T\) such that \(F({{\alpha }})=0\). Newton’s method to solve (1) involves the scheme

$$\begin{aligned} x_{n+1}=x_n-\frac{F(x_n)}{F'(x_n)}\,\,\,\,\, n= 0, 1, 2, \ldots , \end{aligned}$$
(2)

where \(F'(x_n)\) is the first Fréchet derivative of F at \(x_n\).
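For a system, the quotient in (2) is understood as multiplication by the inverse Jacobian, i.e., \(x_{n+1}=x_n-[F'(x_n)]^{-1}F(x_n)\). A minimal Python sketch of this scheme (the 2×2 test system and all names here are illustrative, not taken from the paper):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method (2) for F(x) = 0: solve J(x_n) d = F(x_n), then x_{n+1} = x_n - d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(J(x), F(x))   # Newton correction
        x = x - d
        if np.linalg.norm(d) < tol:
            break
    return x

# Illustrative 2x2 system: x^2 + y^2 = 2, x - y = 0, with root (1, 1).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
root = newton_system(F, J, [2.0, 0.5])
```

In practice one solves the linear system rather than forming \([F'(x_n)]^{-1}\) explicitly.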

Subsequently, many Newton-type iterative methods with third-order convergence have been developed by applying different quadrature rules to this scheme; see [3,4,5,6,7,8,9,10]. To mention a few of them:

Noor and Waseem [4] established two cubically convergent methods using open quadrature formulas:

$$\begin{aligned} x_{n+1}=x_{n}-\Bigg [F'({x_{n}})+3F'\bigg (\frac{x_{n}+2y_{n}}{3}\bigg )\Bigg ]^{-1}4F(x_n),\,\,\,\,\, n= 0, 1, 2, \ldots \end{aligned}$$
(3)

and

$$\begin{aligned} x_{n+1}=x_{n}-\Bigg [3F'\bigg (\frac{2x_{n}+y_{n}}{3}\bigg )+F'({y_{n}})\Bigg ]^{-1}4F(x_n),\,\,\,\,\, n= 0, 1, 2, \ldots \end{aligned}$$
(4)

where \(y_{n}=x_n-\frac{F(x_n)}{F'(x_n)},\,\,\,\,\, n= 0, 1, 2, \ldots \)

Further, Liu et al. [3] developed a cubically convergent method using the two-point Gauss quadrature formula:

$$\begin{aligned} x_{n+1}=x_{n}-\frac{2F(x_n)}{\Bigg [F'({\frac{x_{n}+y_{n}}{2}}-{\frac{y_{n}-x_{n}}{2\sqrt{3}}})+F'({\frac{x_{n}+y_{n}}{2}}+{\frac{y_{n}-x_{n}}{2\sqrt{3}}})\Bigg ]},\,\,\,\,\, n= 0, 1, 2, \ldots \end{aligned}$$
(5)

where \(y_{n}=x_n-\frac{F(x_n)}{F'(x_n)},\,\,\,\,\, n= 0, 1, 2, \ldots \)

They also constructed a fifth-order convergent method using the two-point Gauss quadrature formula:

$$\begin{aligned} z_{n}=x_{n}-\frac{2F(x_n)}{\Bigg [F'({\frac{x_{n}+y_{n}}{2}}-{\frac{y_{n}-x_{n}}{2\sqrt{3}}})+F'({\frac{x_{n}+y_{n}}{2}}+{\frac{y_{n}-x_{n}}{2\sqrt{3}}})\Bigg ]},\end{aligned}$$
(6)
$$\begin{aligned} x_{n+1}=z_{n}-\frac{F(z_{n})}{F'(y_{n})},\,\,\,\,\, n= 0, 1, 2, \ldots \end{aligned}$$
(7)

where \(y_{n}=x_n-\frac{F(x_n)}{F'(x_n)},\,\,\,\,\, n= 0, 1, 2, \ldots \).

Some multi-step iterative methods with fifth-order convergence have also been developed using quadrature rules; for details, see [11,12,13].

In this paper, we develop a couple of new iterative schemes to solve the system of nonlinear equations. These schemes possess third- and fifth-order convergence, respectively. Further, we estimate the efficiency index \(E.I=p^{1/w}\), where p is the order of convergence and w is the sum of the number of functional evaluations and derivative evaluations per iteration. Additionally, the computational order of convergence has been established.

This paper is organized as follows. Section 2 deals with the development of the new methods. In Sect. 3 the convergence of these methods is established. Finally, in Sect. 4, several numerical examples, including nonlinear equations and boundary value problems of nonlinear ODEs, are tested; the numerical results are consistent with the theoretical findings and are compared with some existing methods of the same class.

2 Development of the Methods

Modified Newton’s method—(I)

We propose the new Newton-type iterative method

$$\begin{aligned} x_{n+1}=x_{n}-\frac{NF(x_n)}{\sum _{k=1}^{N} F'\Bigg [x_n+\frac{(y_n-x_n)(k-0.5)}{N}\Bigg ]},\,\,\,\,\, n= 0, 1, 2, \ldots \end{aligned}$$
(8)

where \(y_{n}=x_n-\frac{F(x_n)}{F'(x_n)},\,\,\,\,\, n= 0, 1, 2, \ldots \)

In particular, choosing \(N=2\), we have

$$\begin{aligned} x_{n+1}=x_{n}-\frac{2F(x_n)}{\Bigg [F'(\frac{3x_{n}+y_{n}}{4})+F'(\frac{x_{n}+3y_{n}}{4})\Bigg ]},\,\,\,\,\, n= 0, 1, 2, \ldots \end{aligned}$$
(9)

where \(y_{n}=x_n-\frac{F(x_n)}{F'(x_n)},\,\,\,\,\, n= 0, 1, 2, \ldots \)

The scheme (8) is called the generalized NH-1 method, and its particular case (9) is called the NH-1 method.
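A scalar Python sketch of the NH-1 scheme (9), where the predictor \(y_n\) is a Newton step and the denominator averages \(f'\) at the two midpoint-rule nodes (the names nh1, f, fprime are ours, not from the paper):

```python
def nh1(f, fprime, x0, tol=1e-12, max_iter=50):
    """NH-1 scheme (9): average of f' at (3x+y)/4 and (x+3y)/4 replaces f'(x)."""
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / fprime(x)                         # Newton predictor y_n
        denom = fprime((3*x + y) / 4) + fprime((x + 3*y) / 4)
        x_new = x - 2 * f(x) / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative test: f(x) = x^3 - 2, root 2^(1/3)
r = nh1(lambda x: x**3 - 2, lambda x: 3*x**2, 1.0)
```

For systems, the two derivative evaluations become Jacobians and the division becomes a linear solve, exactly as in (2).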

Modified Newton’s method—(II)

Let

$$\begin{aligned} y_{n}=x_n-\frac{F(x_n)}{F'(x_n)},\,\,\,\,\, n= 0, 1, 2, \ldots \end{aligned}$$
(10)

be the nth iterate of Newton's method, and let \(s_n\) be the (n+1)th iterate of the generalized NH-1 method (8), i.e.,

$$\begin{aligned} s_{n}=x_{n}-\frac{NF(x_n)}{\sum _{k=1}^{N} F'\Bigg [x_n+\frac{(y_n-x_n)(k-0.5)}{N}\Bigg ]}. \end{aligned}$$
(11)

We can use the iterates from (10) and (11) to construct a new method with a higher order of convergence:

$$\begin{aligned} \left\{ \begin{array}{ll} s_{n}=x_{n}-\frac{NF(x_n)}{\sum _{k=1}^{N} F'\Bigg [x_n+\frac{(y_n-x_n)(k-0.5)}{N}\Bigg ]} &{} ;\\ x_{n+1}=s_n-\frac{F(s_n)}{F'(y_n)} &{} n= 0, 1, 2, \ldots .\\ \end{array} \right. \end{aligned}$$
(12)

The scheme (12) is called the generalized NH-2 method.

In particular, choosing \(N=2\), we have

$$\begin{aligned} \left\{ \begin{array}{ll} s_{n}=x_{n}-\frac{2F(x_n)}{\Bigg [F'(\frac{3x_{n}+y_{n}}{4})+F'(\frac{x_{n}+3y_{n}}{4})\Bigg ]} &{} ;\\ x_{n+1}=s_n-\frac{F(s_n)}{F'(y_n)} &{} n= 0, 1, 2, \ldots . \end{array} \right. \end{aligned}$$
(13)

The scheme (13) is called the NH-2 method.
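A scalar sketch of the NH-2 scheme (13): one NH-1 step for \(s_n\), then the corrector \(x_{n+1}=s_n-f(s_n)/f'(y_n)\), which reuses the already computed Newton point \(y_n\) (the names are ours):

```python
def nh2(f, fprime, x0, tol=1e-12, max_iter=50):
    """NH-2 scheme (13): NH-1 predictor s_n followed by corrector f(s_n)/f'(y_n)."""
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / fprime(x)                         # Newton point y_n
        s = x - 2 * f(x) / (fprime((3*x + y) / 4) + fprime((x + 3*y) / 4))
        x_new = s - f(s) / fprime(y)                     # corrector step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative test: f(x) = x^2 - 3, root sqrt(3)
r = nh2(lambda x: x**2 - 3, lambda x: 2*x, 2.0)
```

Per iteration this costs two function values and four derivative values, one more function value than NH-1.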

3 Convergence Analysis

Lemma 3.1

Let \(F:A \subset {\mathrm{I{\!}\mathrm R}}^n \rightarrow {\mathrm{I{\!}\mathrm R}}^n\) be sufficiently Fréchet differentiable in a convex set A. For any \(x_0, t \in A\), the Taylor expansion is as follows:

$$\begin{aligned} F(x_{0}+t)=F(x_{0})+t F'(x_{0})+\frac{t^2}{2!}F''(x_{0})+ \frac{t^3}{3!}F'''(x_{0})+ \cdots +\frac{t^{m-1}}{(m-1)!}F^{(m-1)}(x_{0})+R_m, \end{aligned}$$
(14)

where \(\Vert R_{m}\Vert \le {\frac{1}{m!}}\sup {\Vert F^{(m)}(x_{0}+{{\beta }}t)\Vert \Vert t\Vert ^m}\) and \(0\le {{\beta }}\le 1\).

Theorem 3.1

Let \(F:A \subset {\mathrm{I{\!}\mathrm R}}^n \rightarrow {\mathrm{I{\!}\mathrm R}}^n\) be sufficiently Fréchet differentiable in a convex set A. Let \(F({{\alpha }})=0\) for some \({{\alpha }} \in A\). If the initial guess \(x_0\) is sufficiently close to \({{\alpha }}\), then the iterative method NH-1 converges cubically to \({{\alpha }}\) and the error equation is

$$\begin{aligned} e_{n+1}=({c_{2}}^2-\frac{c_3}{4N^2}){e_n}^3+({c_{2}}^3+3{c_{2}}{c_{3}}+\frac{3{c_{2}}{c_{3}}}{4N^2}-\frac{2c_4}{N^2}-\frac{3c_4}{2N^3}){e_n}^4+O(\Vert e_{n}\Vert ^5), \end{aligned}$$
(15)

where \(e_n=x_n-{{\alpha }}\) and \(c_k=(\frac{1}{k!})\times \frac{F^{(k)}({{\alpha }})}{F'({{\alpha }})}\).

Proof

Expanding \(F(x_n)\) by Taylor's series about \(x={{\alpha }}\) and using \(F({{\alpha }})=0\) and \(e_n=x_n-{{\alpha }}\), we get

$$\begin{aligned} F(x_n)&=F'({{\alpha }})(x_n-{{\alpha }})+\frac{F''({{\alpha }})}{2!}(x_n-{{\alpha }})^2+\frac{F'''({{\alpha }})}{3!}(x_n-{{\alpha }})^3+\frac{F^{(iv)}({{\alpha }})}{4!}(x_n-{{\alpha }})^4+O(\Vert x_{n}-{{\alpha }}\Vert ^5)\nonumber \\&= F'({{\alpha }})\Bigg [ e_n+c_2e_{n}^2+c_3e_{n}^3+c_4e_{n}^4+O(\Vert e_{n}\Vert ^5)\Bigg ]. \end{aligned}$$
(16)

Similarly, expanding \(F'(x_n)\) about \(x={{\alpha }}\), we get

$$\begin{aligned} F'(x_n)=F'({{\alpha }})\Bigg [1+2c_2e_{n}+3c_3e_{n}^2+4c_4e_{n}^3+5c_5e_{n}^4+O(\Vert e_{n}\Vert ^5)\Bigg ]. \end{aligned}$$
(17)

From the above two equations

$$\begin{aligned} \frac{F(x_n)}{F'(x_n)}= e_n-c_2e_{n}^2+2(c_{2}^2-c_3)e_{n}^3+(7c_2c_3-3c_4-4c_{2}^3)e_{n}^4+O(\Vert e_{n}\Vert ^5). \end{aligned}$$
(18)

Again, setting \(P_k=\frac{k-0.5}{N}\), we have

$$\begin{aligned} x_n-\Bigg [\frac{(k-0.5)}{N}\times \frac{F(x_n)}{F'(x_n)}\Bigg ]= & {} x_n-P_k\Big [e_n-c_2e_{n}^2+2(c_{2}^2-c_3)e_{n}^3+(7c_2c_3-3c_4-4c_{2}^3)e_{n}^4 \nonumber \\+ & {} O(\Vert e_{n}\Vert ^5)\Big ]\nonumber \\= & {} {{\alpha }}+(1-P_k)e_n+c_2P_ke_{n}^2-2(c_{2}^2-c_3)P_ke_{n}^3\nonumber \\- & {} (7c_2c_3-3c_4-4c_{2}^3)P_ke_{n}^4+O(\Vert e_{n}\Vert ^5). \end{aligned}$$
(19)

Then

$$\begin{aligned} F'\Bigg (x_n-\Bigg [\frac{(k-0.5)}{N}\times \frac{F(x_n)}{F'(x_n)}\Bigg ]\Bigg )= & {} F'({{\alpha }})\Big [1+(2c_{2}-2c_{2}P_k)e_{n}+(2c_{2}^2P_k+3c_{3}+3c_{3}P_{k}^2-6c_3P_k)e_{n}^2\nonumber \\+ & {} (-4c_{2}^3P_k+10c_2c_3P_k-6c_2c_3P_{k}^2+4c_4-12c_4P_k+12c_4P_{k}^2-4c_4P_{k}^3)e_{n}^3\nonumber \\+ & {} (-26c_{2}^2c_3P_k+6c_2c_4P_k+8c_{2}^4P_k+18c_{2}^2c_3P_{k}^2+12c_2c_4P_k+12c_2c_4P_{k}^3\nonumber \\- & {} 24c_2c_4P_{k}^2+5c_5-20c_5P_k+30c_5P_{k}^2-20c_5P_{k}^3+5c_5P_{k}^4)e_{n}^4\nonumber \\+ & {} O(\Vert e_{n}\Vert ^5)\Big ]. \end{aligned}$$
(20)

Now using (16) and (20) in (8), we get

$$\begin{aligned} x_{n+1}= & {} x_n-\frac{N\times {F(x_n)}}{\sum _{k=1}^{N}F'\Bigg [x_n-\Bigg [\frac{(k-0.5)}{N}\times \frac{F(x_n)}{F'(x_n)}\Bigg ]\Bigg ]}\nonumber \\= & {} x_n-\Big [e_n+\Big (-c_{2}^2+\frac{c_3}{4N^2}\Big )e_{n}^3+\Big (-c_{2}^3-3c_2c_3-\frac{3c_2c_3}{4N^2}+\frac{2c_4}{N^2}+\frac{3c_4}{2N^3}\Big )e_{n}^4\nonumber \\+ & {} O(\Vert e_{n}\Vert ^5)\Big ]. \end{aligned}$$
(21)
$$\begin{aligned} e_{n+1}=\Big (c_{2}^2-\frac{c_3}{4N^2}\Big )e_{n}^3+\Big (c_{2}^3+3c_2c_3+\frac{3c_2c_3}{4N^2}-\frac{2c_4}{N^2}-\frac{3c_4}{2N^3}\Big )e_{n}^4+O(\Vert e_{n}\Vert ^5). \end{aligned}$$
(22)

Thus, (22) shows that the iterative scheme NH-1 is cubically convergent. This completes the proof. \(\Box \)

Theorem 3.2

Let the system \(F(x)=0\) satisfy all the conditions of Theorem 3.1. Then the iterative scheme NH-2 possesses fifth-order convergence. Moreover, the error equation is

$$\begin{aligned} e_{n+1}=\Bigg [2\Bigg [c_{2}^4-\frac{c_{2}^2c_3}{4N^2}\Bigg ]e_{n}^5\Bigg ]+O(\Vert e_{n}\Vert ^6), \end{aligned}$$
(23)

where \(e_n=x_n-{{\alpha }}\) and \(c_k=(\frac{1}{k!})\times \frac{F^{(k)}({{\alpha }})}{F'({{\alpha }})}\).

Proof

Using the third-order error equation of NH-1, we have

$$\begin{aligned} e_{n+1}=\Big (c_{2}^2-\frac{c_3}{4N^2}\Big )e_{n}^3+\Big (c_{2}^3+3c_2c_3+\frac{3c_2c_3}{4N^2}-\frac{2c_4}{N^2}-\frac{3c_4}{2N^3}\Big )e_{n}^4+O(\Vert e_{n}\Vert ^5),\nonumber \end{aligned}$$

that is

$$\begin{aligned} x_{n+1}={{\alpha }}+\Big (c_{2}^2-\frac{c_3}{4N^2}\Big )e_{n}^3+\Big (c_{2}^3+3c_2c_3+\frac{3c_2c_3}{4N^2}-\frac{2c_4}{N^2}-\frac{3c_4}{2N^3}\Big )e_{n}^4+O(\Vert e_{n}\Vert ^5). \end{aligned}$$
(24)

Replacing \(x_{n+1}\) by \(s_n\) in (24), we have

$$\begin{aligned} s_{n}= & {} {{\alpha }}+\Big (c_{2}^2-\frac{c_3}{4N^2}\Big )e_{n}^3+\Big (c_{2}^3+3c_2c_3+\frac{3c_2c_3}{4N^2}-\frac{2c_4}{N^2}-\frac{3c_4}{2N^3}\Big )e_{n}^4+O(\Vert e_{n}\Vert ^5)\nonumber \\= & {} {{\alpha }}+h_n, \end{aligned}$$
(25)

where \(h_n=\Big (c_{2}^2-\frac{c_3}{4N^2}\Big )e_{n}^3+\Big (c_{2}^3+3c_2c_3+\frac{3c_2c_3}{4N^2}-\frac{2c_4}{N^2}-\frac{3c_4}{2N^3}\Big )e_{n}^4+O(\Vert e_{n}\Vert ^5)\).

Now

$$\begin{aligned} F(s_n)= & {} F({{\alpha }})+F'({{\alpha }})h_n+\frac{F''({{\alpha }})}{2!}h_{n}^2+\frac{F'''({{\alpha }})}{3!}h_{n}^3+O(\Vert h_{n}\Vert ^4)\qquad \nonumber \\= & {} F'({{\alpha }})[h_{n}+c_2h_{n}^2+c_3h_{n}^3+O(\Vert h_{n}\Vert ^4)]\qquad \nonumber \\= & {} F'({{\alpha }})\Big [\Big (c_{2}^2-\frac{c_3}{4N^2}\Big )e_{n}^3+\Big (c_{2}^3+3c_2c_3+\frac{3c_2c_3}{4N^2}-\frac{2c_4}{N^2}-\frac{3c_4}{2N^3}\Big )e_{n}^4+O(\Vert e_{n}\Vert ^5)\Big ].\qquad \end{aligned}$$
(26)

Again \(y_n=x_n-\frac{F(x_n)}{F'(x_n)}\).

So by using (18), we obtain

$$\begin{aligned} y_n= & {} x_n-[ e_n-c_2e_{n}^2+2(c_{2}^2-c_3)e_{n}^3+(7c_2c_3-3c_4-4c_{2}^3)e_{n}^4+O(\Vert e_{n}\Vert ^5)]\nonumber \\= & {} {{\alpha }}+c_2e_{n}^2-2(c_{2}^2-c_3)e_{n}^3-(7c_2c_3-3c_4-4c_{2}^3)e_{n}^4+O(\Vert e_{n}\Vert ^5). \end{aligned}$$
(27)

And

$$\begin{aligned} F'(y_n)= & {} F'[{{\alpha }}+c_2e_{n}^2-2(c_{2}^2-c_3)e_{n}^3-(7c_2c_3-3c_4-4c_{2}^3)e_{n}^4+O(\Vert e_{n}\Vert ^5)]\nonumber \\= & {} F'({{\alpha }})+F''({{\alpha }})[c_2e_{n}^2-2(c_{2}^2-c_3)e_{n}^3+O(\Vert e_{n}\Vert ^4)]+O(\Vert e_{n}\Vert ^4)\nonumber \\= & {} F'({{\alpha }})[1+2c_{2}^2e_{n}^2+4(c_2c_3-c_{2}^3)e_{n}^3+O(\Vert e_{n}\Vert ^4)]. \end{aligned}$$
(28)

From (11), we get

$$\begin{aligned} (x_{n+1}-{{\alpha }})=(s_n-{{\alpha }})-\frac{F(s_n)}{F'(y_n)} \end{aligned}$$

and

$$\begin{aligned} (x_{n+1}-{{\alpha }})F'(y_n)=(s_n-{{\alpha }})(F'(y_n))-{F(s_n)}. \end{aligned}$$
(29)

Using (26) and (28) in (29) yields

$$\begin{aligned} KUL=KUV-KV, \end{aligned}$$
(30)

where     \(K=F'({{\alpha }})\), \(L=e_{n+1}\),

\(U=[1+2c_{2}^2e_{n}^2+4(c_2c_3-c_{2}^3)e_{n}^3+O(\Vert e_{n}\Vert ^4)]\), and

\(V=[(c_{2}^2-\frac{c_3}{4N^2})e_{n}^3+(c_{2}^3+3c_2c_3+\frac{3c_2c_3}{4N^2}-\frac{2c_4}{N^2}-\frac{3c_4}{2N^3})e_{n}^4+O(\Vert e_{n}\Vert ^5)]\).

Simplifying (30), we obtain

$$\begin{aligned} UL=\Bigg [2\Big (c_{2}^4-\frac{c_{2}^2c_3}{4N^2}\Big )e_{n}^5\Bigg ]+O(\Vert e_{n}\Vert ^6) \end{aligned}$$

and

$$\begin{aligned} L&=\Bigg [2\Big (c_{2}^4-\frac{c_{2}^2c_3}{4N^2}\Big )e_{n}^5\Bigg ][U]^{-1}+O(\Vert e_{n}\Vert ^6)\nonumber \\&=\Bigg [2\Big (c_{2}^4-\frac{c_{2}^2c_3}{4N^2}\Big )e_{n}^5\Bigg ][1-\{2c_{2}^2e_{n}^2+4(c_2c_3-c_{2}^3)e_{n}^3\}+\cdots ]+O(\Vert e_{n}\Vert ^6). \end{aligned}$$
(31)

Further

$$\begin{aligned} e_{n+1}=\Bigg [2\Big (c_{2}^4-\frac{c_{2}^2c_3}{4N^2}\Big )e_{n}^5\Bigg ]+O(\Vert e_{n}\Vert ^6). \end{aligned}$$
(32)

Hence, from (32) it is seen that the iterative scheme NH-2 possesses fifth-order convergence. This completes the proof. \(\Box \)

4 Numerical Examples

Efficiency index In each of the methods of this class, functional values as well as derivatives are evaluated at every iteration. The efficiency index is defined as \(E.I=p^{1/w}\), where p is the order of convergence and w is the sum of the number of functional evaluations and derivative evaluations per iteration. For a system of n nonlinear equations in n unknowns, evaluating the function F requires n functional values \(f_i, i=1,2,\ldots ,n\), and evaluating a derivative \(F'\) requires \(n^2\) derivative values \(\frac{\partial f_i}{\partial x_j}, i=1,2,\ldots ,n\) and \(j=1,2,\ldots ,n\). The efficiency indices of the various convergent methods are as follows:

NR-1: \(E.I.=3^\frac{1}{n+2n^{2}}\), NR-2: \(E.I.=3^\frac{1}{n+3n^{2}}\), NGM: \(E.I.=3^\frac{1}{n+3n^{2}}\) and M: \(E.I.=5^\frac{1}{2n+4n^{2}}\), while the efficiency indices of the proposed methods are:

NH-1: \(E.I.=3^\frac{1}{n+3n^{2}}\) and NH-2: \(E.I.=5^\frac{1}{n+4n^{2}}\).
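These indices are straightforward to evaluate; a small Python sketch for a system of size n = 5 (as in Example 4.1), using the evaluation counts stated in the text:

```python
def efficiency_index(p, w):
    """E.I = p**(1/w): order p, w total function-plus-derivative values per iteration."""
    return p ** (1.0 / w)

n = 5  # system size, matching Example 4.1
ei_nh1 = efficiency_index(3, n + 3 * n**2)   # NH-1: one F (n values), three F' (3n^2 values)
ei_nh2 = efficiency_index(5, n + 4 * n**2)   # NH-2: evaluation count as stated in the text
```

For this n, the NH-2 index exceeds the NH-1 index, reflecting the higher order per unit work.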

We have taken some examples to examine the convergence of the constructed methods, and we compare the results with the same class of methods discussed earlier, namely NR-1 [4], NR-2 [4], NGM [3] and M [3]. For our constructed methods NH-1 and NH-2 we have evaluated the number of iterations k, the error of the approximate solution \(x_{(k)}\), the computational order of convergence (COC1/COC2), the computational asymptotic convergence constant (CACC), and the approximate value of the function \(F(x_{(k)})\), and compared them with those of the other methods. The comparisons are shown in the respective tables.

To compute the computational order of convergence (COC), we have used the following formulae:

For the nonlinear system with unknown exact solution (\({{\alpha }}\))

$$\begin{aligned} COC1=\frac{\log (\Vert x_{(k+2)}-x_{(k+1)}\Vert /\Vert x_{(k+1)}-x_{(k)}\Vert )}{\log (\Vert x_{(k+1)}-x_{(k)}\Vert /\Vert x_{(k)}-x_{(k-1)}\Vert )}. \end{aligned}$$

For the nonlinear system with known exact solution (\({{\alpha }}\)):

$$\begin{aligned} COC2=\frac{\log (\Vert x_{(k+2)}-{{\alpha }}\Vert /\Vert x_{(k+1)}-{{\alpha }}\Vert )}{\log (\Vert x_{(k+1)}-{{\alpha }}\Vert /\Vert x_{(k)}-{{\alpha }}\Vert )}. \end{aligned}$$

For the computation of the above factors, we have used the last approximations of the corresponding iterations: COC1 requires the last four iterates and COC2 the last three.

Additionally, we have evaluated the computational asymptotic convergence constant (CACC) for pth order convergence as defined below

$$\begin{aligned} CACC=\frac{\Vert x_{(k+1)}-{{\alpha }}\Vert }{\Vert x_{(k)}-{{\alpha }}\Vert ^p}. \end{aligned}$$
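The COC and CACC formulas above can be sketched in Python for scalar iterates; the synthetic quadratic sequence below (with \(e_{k+1}=e_k^2\) toward \({{\alpha }}=0\)) is illustrative, not from the paper:

```python
import math

def coc1(x):
    """COC1 from the last four iterates, exact solution unknown."""
    num = math.log(abs(x[-1] - x[-2]) / abs(x[-2] - x[-3]))
    den = math.log(abs(x[-2] - x[-3]) / abs(x[-3] - x[-4]))
    return num / den

def cacc(x, alpha, p):
    """Computational asymptotic convergence constant for order p."""
    return abs(x[-1] - alpha) / abs(x[-2] - alpha) ** p

# Synthetic second-order sequence: errors square at each step
seq = [1e-1, 1e-2, 1e-4, 1e-8, 1e-16]
c1 = coc1(seq)          # should be close to 2
cc = cacc(seq, 0.0, 2)  # should be close to 1
```

For vector iterates, replace `abs` by a norm, in line with the \(\Vert \cdot \Vert \) in the formulas above.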

Example 4.1

Consider the following system with five equations and five unknowns:

$$\begin{aligned}&4(x_{1}-x_{2}^2)+x_{2}-x_{3}^2=0,\nonumber \\&x_{2}(x_{2}^2-x_{1})-2(1-x_{2})+4(x_{2}-x_{3}^2)+x_{3}-x_{4}^2=0,\nonumber \\&x_{3}(x_{3}^2-x_{2})-2(1-x_{3})+4(x_{3}-x_{4}^2)+x_{2}^2-x_{1}+x_{4}-x_{5}^2=0,\nonumber \\&x_{4}(x_{4}^2-x_{3})-2(1-x_{4})+4(x_{4}-x_{5}^2)+x_{3}^2-x_{2}=0,\nonumber \\&x_{5}(x_{5}^2-x_{4})-2(1-x_{5})+x_{4}^2-x_{3}=0, \end{aligned}$$
(33)

where \(x_{(0)}=(1.5,1.5,1.5,1.5,1.5)^T\) is the initial trial solution and \({\alpha }\!=\!(1,1,1,1,1)^T\) is the exact solution. The numerical comparison of the results of the system (33) is shown in Table 1.

Table 1 Comparison of various methods for the system (33)
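As a sketch of how such a system is set up numerically, the code below assembles (33) and solves it with plain Newton's method using a forward-difference Jacobian (the solver choice and the helper jac_fd are ours, only for validating the system; the paper's tables use the quadrature-based methods):

```python
import numpy as np

def F(x):
    """Left-hand sides of system (33)."""
    x1, x2, x3, x4, x5 = x
    return np.array([
        4*(x1 - x2**2) + x2 - x3**2,
        x2*(x2**2 - x1) - 2*(1 - x2) + 4*(x2 - x3**2) + x3 - x4**2,
        x3*(x3**2 - x2) - 2*(1 - x3) + 4*(x3 - x4**2) + x2**2 - x1 + x4 - x5**2,
        x4*(x4**2 - x3) - 2*(1 - x4) + 4*(x4 - x5**2) + x3**2 - x2,
        x5*(x5**2 - x4) - 2*(1 - x5) + x4**2 - x3,
    ])

def jac_fd(F, x, eps=1e-7):
    """Forward-difference Jacobian (an approximation, not the analytic one)."""
    f0, m = F(x), len(x)
    J = np.empty((m, m))
    for j in range(m):
        xp = x.copy(); xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

x = np.full(5, 1.5)          # initial trial solution from Example 4.1
for _ in range(50):
    x = x - np.linalg.solve(jac_fd(F, x), F(x))
```

From this starting point the iteration settles at the exact solution \((1,1,1,1,1)^T\).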

Example 4.2

Consider the following system with two equations and two unknowns:

$$\begin{aligned}&(x_{1}-1)^4+e^{-x_{2}}-x_{2}^2+3x_{2}+1=0,\nonumber \\&4\sin (x_{1}-1)-\ln (x_{1}^2-x_{1}+1)-x_{2}^2=0, \end{aligned}$$
(34)

where \(x_{(0)}=(0.5,-0.5)^T\) is the initial trial solution, and the approximate numerical solution of the above nonlinear system is \((1.271384307950132,-0.880819073102661)^T\). The numerical comparison of the results of the system (34) is shown in Table 2.

Table 2 Comparison of various methods for the system (34)

Example 4.3

Consider the following nonlinear equation:

$$\begin{aligned} 2e^{x-4}-5x+18=0, \end{aligned}$$
(35)

where \(x_{(0)}=2\) is the initial trial solution and \({\alpha }=4\) is the exact solution. The numerical comparison of the results of the Eq. (35) is shown in Table 3.

Table 3 Comparison of various methods for Eq. (35)
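Since the exact root of (35) is known, it offers a quick scalar check of scheme (9); a sketch (the names f, fp are ours):

```python
import math

def f(x):   # left-hand side of Eq. (35)
    return 2 * math.exp(x - 4) - 5 * x + 18

def fp(x):  # its derivative
    return 2 * math.exp(x - 4) - 5

# NH-1 iteration (9), starting from the trial solution x0 = 2
x = 2.0
for _ in range(20):
    y = x - f(x) / fp(x)
    x = x - 2 * f(x) / (fp((3 * x + y) / 4) + fp((x + 3 * y) / 4))
```

The iterates approach the exact solution \({\alpha }=4\), consistent with \(2e^{0}-20+18=0\).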

Example 4.4

Consider the following nonlinear boundary value problem:

$$\begin{aligned}&y''(t)+y^{1+p}(t)=0,\,\,t \in [0,1],(p>0) \nonumber \\&y(0)=0, y(1)=1. \end{aligned}$$
(36)

Let the interval [0, 1] be partitioned as \(t_0 = 0< t_1<t_2< \cdots<t_{n-1} < t_n=1,\) with \(t_i=t_0+ih\) and \(h=1/n.\)

Let \(y_{0}=y(t_0)=y(0)=0, y_{1}=y(t_1), \ldots , y_{n-1}=y(t_{n-1})\) and \(y_n=y(t_{n})=y(1)=1\). Now using the central difference formula, we have \(y_{i}''\approx \frac{y_{i-1}-2y_i+y_{i+1}}{h^2}\), \(i=1,2, \ldots ,(n-1)\). Choosing \(p=\sqrt{3}\) and \(n=10\), we obtain the following system of nonlinear equations involving nine variables:

$$\begin{aligned}&2y_{1}-h^2y_{1}^{\sqrt{3}+1}-y_{2}=0, \nonumber \\&-y_{i-1}+2y_{i}-h^2y_{i}^{\sqrt{3}+1}-y_{i+1}=0,\quad i=2,3, \ldots ,8,\nonumber \\&-y_{8}+2y_{9}-h^2y_{9}^{\sqrt{3}+1}-1=0, \end{aligned}$$
(37)

where \(y_{(0)}=(1,1,1,1,1,1,1,1,1)^T\) is the initial trial solution and the approximate numerical solution of the above nonlinear system is \((0.10630974179490, 0.21259757834295, 0.31873991751981, 0.42444234737133, 0.52918276231997, 0.63216575267634, 0.73229207302272, 0.82814954506063, 0.91803296518217 )^T\). The numerical comparison of the results of the system (37) is shown in Table 4.

Table 4 Comparison of various methods for the system (37)
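The discretization above can be assembled and solved as sketched below; we use plain Newton steps with the analytic tridiagonal Jacobian only to validate system (37), while the paper's tables use the quadrature-based methods (all names here are ours):

```python
import numpy as np

n, h, p = 10, 0.1, 3 ** 0.5                 # h = 1/n, exponent p = sqrt(3)

def F(y):
    """Residual of the discretized system (37); y holds the nine interior values."""
    yy = np.concatenate(([0.0], y, [1.0]))  # boundary values y(0) = 0, y(1) = 1
    return np.array([-yy[i-1] + 2*yy[i] - h**2 * yy[i]**(p + 1) - yy[i+1]
                     for i in range(1, n)])

def J(y):
    d = 2.0 - h**2 * (p + 1) * y**p         # diagonal of the tridiagonal Jacobian
    return np.diag(d) - np.diag(np.ones(n - 2), 1) - np.diag(np.ones(n - 2), -1)

y = np.ones(n - 1)                          # initial trial solution (1,...,1)^T
for _ in range(30):
    y = y - np.linalg.solve(J(y), F(y))
```

The computed interior values agree with the approximate solution quoted above, and they lie slightly above the linear ramp \(t_i\), as expected for a concave solution with \(y''\le 0\).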

Example 4.5

Consider the following system of nonlinear equations:

$$\begin{aligned} {\sum _{j=1,j \ne i}^{n}x_j-(n-1){x_{i}}^2}=0,\,\,\,1\le i \le n , n=5, \end{aligned}$$
(38)

where \(x_{(0)}=(3.5,3.5,3.5,3.5,3.5)^T\) is the initial trial solution and \({\alpha }\!=\!(1,1,1,1,1)^T\) is the exact solution. The numerical comparison of the results of the system (38) is shown in Table 5.

Table 5 Comparison of various methods for the system (38)

5 Conclusion

We have constructed a pair of iterative methods, with third- and fifth-order convergence respectively, for the solution of systems of nonlinear equations. The computational order of convergence (COC1/COC2), the computational asymptotic convergence constant (CACC), and the efficiency index (E.I) of these methods are comparable to those of the discussed methods of the same class. The numerical results also agree with the theoretical claims. Above all, these methods are very handy for solving systems of nonlinear equations as well as boundary value problems of nonlinear ordinary differential equations. The proposed methods can be seen as alternatives to the existing methods of the same class.