Abstract
In this paper, we adapt the accelerated generalized successive overrelaxation (AGSOR) method to block two-by-two complex linear systems, and use it as the inner iteration for the modified Newton equations to solve nonlinear systems whose Jacobian matrices are block two-by-two complex symmetric. The new method is named the modified Newton-AGSOR (MN-AGSOR) method. Because the generalized successive overrelaxation (GSOR) method is a special case of the AGSOR method, the modified Newton-GSOR (MN-GSOR) method is also covered by the discussion. We then analyze and prove the local convergence properties of the MN-AGSOR method under the Hölder continuity condition instead of the Lipschitz assumption. Finally, numerical experiments verify the efficiency of the MN-AGSOR method and show, in several respects, that it is superior to some other recently proposed methods.
1 Introduction
In this paper, we mainly consider the problem of solving a nonlinear system
where \(F:\mathbb {D}\subset \mathbb {C}^{2n}\rightarrow \mathbb {C}^{2n}\) is nonlinear and continuously differentiable. As studied in [6, 19], suppose that the Jacobian matrix of the above nonlinear system is a block two-by-two complex symmetric matrix in the form of
Here the matrices \(W(x),T(x)\in \mathbb {R}^{n\times n}\) are symmetric, and at least one of them is positive definite. The symbol \(i=\sqrt{-1}\) represents the imaginary unit, throughout the paper.
In general, the Newton method is widely used for solving the nonlinear system (1). By applying the Newton method, the solution of the nonlinear system is equivalently obtained by solving the corresponding Newton equation
with \(s_k\) being the unknown vector. To improve the convergence speed of the Newton method, Darvishi and Barati [7] proposed the modified Newton method in 2007:
Unlike the Newton method, the modified Newton method requires only one more evaluation of F per step, yet its order of convergence is at least three. For the nonlinear system (1) mentioned above, the corresponding Newton equations can be written in the following form:
where \(d_{x1}\) and \(d_{x2}\) are two unknown vectors of dimension n. As the previous discussion shows, the main task in solving the nonlinear system (1) is to solve the linear system (3), so we turn our attention to the block two-by-two complex linear systems
where the matrices \(W,T\in \mathbb {R}^{n\times n}\) are symmetric and W is positive definite. Systems of this type arise in many fields, especially in finite element discretizations of elliptic PDE problems [1, 12, 14, 16, 17].
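Before turning to the linear solvers, the two-step modified Newton scheme of Darvishi and Barati [7] described above can be illustrated in code. This is a minimal sketch: the function `modified_newton` and the cubic test problem are our own illustrations, not from the paper.

```python
import numpy as np

def modified_newton(F, J, x0, tol=1e-12, max_it=50):
    """Two-step modified Newton iteration of Darvishi and Barati [7]:
    each step reuses one Jacobian for two F-evaluations, giving at
    least third-order convergence."""
    x = np.array(x0, dtype=float)
    F0 = np.linalg.norm(F(x))
    for k in range(max_it):
        Jk = J(x)
        y = x - np.linalg.solve(Jk, F(x))     # Newton predictor step
        x = y - np.linalg.solve(Jk, F(y))     # corrector with the same Jacobian
        if np.linalg.norm(F(x)) <= tol * F0:  # relative residual stopping rule
            break
    return x

# Illustrative test problem (not from the paper): F(x) = x^3 - c.
c = np.array([8.0, 27.0])
root = modified_newton(lambda x: x**3 - c,
                       lambda x: np.diag(3.0 * x**2),
                       x0=[1.5, 2.5])
```

In practice the Jacobian factorization, not the Jacobian itself, is reused across the two substeps, which is where the inner AGSOR iteration enters later in the paper.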
Since Bai, Golub and Ng proposed the Hermitian/skew-Hermitian splitting (HSS) method [4] in 2003, many novel and effective methods have been proposed for solving the linear system (4), such as the modified HSS (MHSS) [2], preconditioned MHSS (PMHSS) [3] and generalized PMHSS (GPMHSS) [8] methods. These methods are based on the Hermitian/skew-Hermitian splitting of the coefficient matrix and can efficiently solve non-Hermitian positive definite systems of linear equations. Recently, some single-step iterative methods based on the HSS method have been extensively studied, and they are also quite effective for complex symmetric linear systems [13, 20, 22].
In 2015, Salkuyeh et al. proposed the generalized successive overrelaxation (GSOR) method [18] for solving such linear systems; it performs very well in terms of both convergence speed and solution accuracy. In the same year, Edalatpour et al. proposed the accelerated generalized successive overrelaxation (AGSOR) method [9], an extension of the GSOR method. Subsequently, the preconditioned GSOR [11] and shifted GSOR [10] methods were proposed. Inspired by these ideas, we apply the AGSOR method to block two-by-two complex linear systems. Using the AGSOR method as the inner iteration and the modified Newton method as the outer iteration, we obtain the modified Newton-AGSOR method for solving nonlinear systems with block two-by-two complex Jacobian matrices.
Finally, we briefly outline the structure of this paper. In Sect. 2, we carry out the convergence analysis of the AGSOR method for block two-by-two complex linear systems. In Sect. 3, we use the AGSOR method of Sect. 2 as the inner iteration and describe the modified Newton-AGSOR (MN-AGSOR) method, including its algorithm and iterative formula. In Sect. 4, the local convergence of the MN-AGSOR method is analyzed and proved under the Hölder continuity condition. In Sect. 5, numerical results for the MN-AGSOR method are shown, and comparisons with several recently proposed methods confirm its effectiveness. At the end of the paper, we summarize the results of the entire article.
2 The AGSOR method for block two-by-two complex linear systems
In this section, we apply the AGSOR method proposed by Edalatpour et al. in [9] to solve block two-by-two complex linear systems; the resulting scheme is slightly different from the standard AGSOR method.
Consider the linear system of equations whose coefficient matrix has complex block two-by-two form, i.e.
where \(W \in \mathbb {R}^{n \times n}\) is symmetric positive definite and \(T \in \mathbb {R}^{n \times n}\) is symmetric. The difference between this linear system and the one solved by the AGSOR method in [9] is that the matrix on the skew-diagonal is complex symmetric.
Inspired by the AGSOR method, we can easily establish the iteration algorithm for solving the system (5).
In fact, the iterative scheme (6) can be rewritten in the following form:
where
and
with \(S= W^{-1}T\) and \(0\ne \alpha \beta \in \mathbb {R}\).
So far, we have completed the construction of the AGSOR method adapted to solve the block two-by-two complex linear systems.
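For intuition, the original AGSOR sweep of [9] for the real equivalent form \(Wx - Ty = p\), \(Tx + Wy = q\) of \((W+iT)(x+iy)=p+iq\) can be sketched as follows. This is only an illustration under our stated assumptions: the iteration (6) adapted to the complex block system (5) differs slightly, and the example data are ours.

```python
import numpy as np

def agsor_real(W, T, p, q, alpha, beta, its=60):
    """AGSOR sweep of [9] for the real equivalent form
        W x - T y = p,   T x + W y = q
    of (W + iT)(x + iy) = p + iq; alpha = beta recovers GSOR [18].
    Each half-sweep solves one subsystem with coefficient matrix W."""
    n = W.shape[0]
    x, y = np.zeros(n), np.zeros(n)
    for _ in range(its):
        x = (1 - alpha) * x + alpha * np.linalg.solve(W, T @ y + p)
        y = (1 - beta) * y + beta * np.linalg.solve(W, q - T @ x)
    return x, y

# Small SPD example with known solution x* = (1, 0), y* = (0, 1).
W, T = 2.0 * np.eye(2), np.eye(2)
xs, ys = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y = agsor_real(W, T, W @ xs - T @ ys, T @ xs + W @ ys,
                  alpha=0.8, beta=0.8)
```

Note that the fixed point of the sweep satisfies the two block equations exactly, regardless of the parameter choice; the parameters only control the contraction rate.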
Obviously, the GSOR method introduced in [18] is a special case of the AGSOR method [9], obtained under the condition \(\alpha =\beta\).
Lemma 1
Let \(\alpha\) and \(\beta\) be two real numbers with \(\alpha \beta \ne 0\) and \(\mathcal {G}_{\alpha ,\beta }\) be the iteration matrix of the AGSOR method adapted to solve the block two-by-two complex linear systems (5). Then, for every eigenvalue \(\lambda\) of \(\mathcal {G}_{\alpha ,\beta }\) there is an eigenvalue \(\gamma\) of \(S=W^{-1}T\) which satisfies
Proof
Let \(\alpha \ne \beta\) and \(\lambda\) be an eigenvalue of \(\mathcal {G}_{\alpha ,\beta }\) with \(x=(z^{T},y^{T})^{T}\) being the eigenvector of \(\lambda\). Then, we have
which is equivalent to
Therefore, we can obtain the following equations:
First we show that \(y\ne 0\) if \(\lambda \ne 1 -\alpha\). In fact, when \(\lambda \ne 1 -\alpha\), assume \(y=0\); then according to the first equation of (8), we have \(z=y=0\), which contradicts the fact that \(x=(z^{T},y^{T})^{T}\) is an eigenvector.
Now, if \(\lambda \ne 1 -\alpha\), then
which means that for every eigenvalue \(\lambda\) of \(\mathcal {G}_{\alpha ,\beta }\), there is an eigenvalue \(\gamma\) of S with its corresponding eigenvector y, i.e.
thus
If \(\lambda = 1 -\alpha\), then \(\alpha \ne \beta\) implies \(\lambda \ne 1 -\beta\), and the conclusion still holds. In addition, if \(\alpha =\beta\), the above process yields a simpler equation:
It remains to prove equation (9) for \(\lambda = 1 -\alpha =1-\beta\), in which case \(Sy=0\) and \(\lambda Sz=0\) according to (8). Therefore, S has an eigenvalue \(\gamma =0\) or \(\lambda =0\), and (9) holds true. \(\square\)
Remark 1
We can see that Lemma 1 above is consistent with Lemma 2 in [9], and the conditions on the matrices W and T are the same as those given in [9]. Therefore, the AGSOR method for solving the linear system (5) has the same convergence condition as the AGSOR method for solving the block two-by-two real linear system, i.e.
Its proof can be found in Theorem 1 of [9], and the selection of the optimal parameters \(\alpha _\star\) and \(\beta _\star\) of the AGSOR method is given in Theorem 2 of [9]. We state the results directly:
where
Here \(\lambda _{max}\) and \(\lambda _{min}\) are the largest and smallest moduli of the eigenvalues of the matrix \(S=W^{-1}T\), respectively. Moreover, it has been mentioned in [9] that the optimal parameters \(\alpha _\star\) and \(\beta _\star\) of the AGSOR method are the optimal values of \(\alpha\) and \(\beta\) in the set
where \(b= \alpha + \beta\) and \(c = \alpha \beta\).
3 The modified Newton-AGSOR method
Based on the extension of the AGSOR method introduced in the previous section, next we will use the AGSOR method as an inner iteration method for the modified Newton method to solve a class of large sparse nonlinear systems with block two-by-two complex symmetric Jacobian matrices.
Now, we focus on the nonlinear system described as
where \(F:\mathbb {D} \subset \mathbb {C}^{2n} \rightarrow \mathbb {C}^{2n}\) is continuously differentiable and its Jacobian matrix is large sparse and complex symmetric with the following form
In the above expression, \(W(x) \in \mathbb {R}^{n \times n}\) and \(T(x) \in \mathbb {R}^{n \times n}\) are symmetric matrices and W(x) is positive definite. It should be noted that the condition on T(x) is weaker than that required by the modified Newton-DPMHSS (MN-DPMHSS) [19] and modified Newton-MDPMHSS (MN-MDPMHSS) [6] methods.
For simplicity of the later discussion, we decompose F(x) as follows:
where \(P(x), Q(x) \in \mathbb {C}^{n}\).
When we use the AGSOR method as the inner iteration for the modified Newton method, it is equivalent to solving, by the AGSOR method, the two Newton equations of the following form:
With the basic structure and specific form of the modified Newton-AGSOR (MN-AGSOR) method in hand, we can apply it to solve the nonlinear system (10).
From the above iteration table, the MN-AGSOR method can easily be rewritten in the following equivalent form:
where
and
with \(S(x)= W(x)^{-1}T(x)\) and \(0 \ne \alpha \beta \in \mathbb {R}\).
Notice that the matrix W(x) is positive definite, so in general we can use the Cholesky factorization or the conjugate gradient method to solve the two linear subsystems in (6).
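A minimal sketch of the factor-once, solve-many pattern for the subsystems with coefficient matrix W(x), assuming SciPy is available; the example matrix and right-hand sides are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# W(x) is symmetric positive definite, so its Cholesky factorization can
# be computed once per outer (modified Newton) step and reused for every
# inner AGSOR solve with coefficient matrix W(x).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
W = A @ A.T + 4.0 * np.eye(4)      # SPD by construction (illustrative)
factor = cho_factor(W)             # factor once: W = L L^T
b1 = rng.standard_normal(4)
b2 = rng.standard_normal(4)
x1 = cho_solve(factor, b1)         # each inner solve: two triangular sweeps
x2 = cho_solve(factor, b2)
```

For very large sparse W(x), the conjugate gradient method mentioned above replaces the dense factorization, trading exact subsystem solves for cheap matrix-vector products.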
In addition, we define matrices \(\mathcal {M}_{\alpha ,\beta }(x)\) and \(\mathcal {N}_{\alpha ,\beta }(x)\) by
Then the Jacobian matrix \({F'}(x)\) can be split as
and the following formulas hold true
Using the above equations, we can equivalently convert expression (12) to
4 Convergence analysis
The main task of this section is to analyze and prove the local convergence properties of the MN-AGSOR iteration method. We carry out this work under the Hölder continuity condition, similar to [5], which is weaker than the Lipschitz continuity assumption [21].
Assume that the mapping \(F:\mathbb {D}\subset \mathbb {C}^{2n}\rightarrow \mathbb {C}^{2n}\) is G-differentiable in an open domain \(\mathbb {D}_0 \subset \mathbb {D}\), \(F^{'}({x})\) is symmetric and continuous, and there exists a point \({x}_{*}\in \mathbb {D}_0\) satisfying \(F({x}_*)=0\). To analyze the convergence properties of the MN-AGSOR method and to facilitate the symbolic manipulations in the proofs, we need some assumptions and conventions.
Convention I Throughout the article, \(\Vert z\Vert\) represents the 2-norm of a matrix or a vector z. We use the symbols \(\varDelta =\min \left\{ \alpha ,\beta \right\}\) and \(\varLambda =\max \left\{ \left| 1-\alpha \right| ,\left| 1-\beta \right| \right\}\).
Assumption 1
Suppose \({x}_{*}\in \mathbb {D}_0\) is a solution to \(F({x})=0\), and there exists a positive constant r such that for any \(u\in \mathbb {N}(x_*,r)\subset \mathbb {D}_0\), the following conditions hold.
- (A1) (The Bounded Condition) There exist positive constants \(\eta\) and \(\gamma\) satisfying
  $$\begin{aligned} \max \left\{ \; ||T(x_*)||\;,\;||W(x_*)|| \; \right\} \le \eta \quad \text {and} \quad ||{F'}(x_*)^{-1}||\le \gamma . \end{aligned}$$
- (A2) (The Hölder Condition) For some \(p\in (0,1]\), there exist nonnegative constants \(H_t\) and \(H_\omega\) satisfying
  $$\begin{aligned} ||T(x_*)-T(x)||&\le H_t||x_*-x||^p,\\ ||W(x_*)-W(x)||&\le H_\omega ||x_*-x||^p. \end{aligned}$$
The following perturbation lemma is useful for later discussion and analysis of convergence; see Lemma 2.3.2 in [15].
Lemma 2
Assume that \(M,N\in \mathbb {C}^{n \times n}\), with M being nonsingular and \(||M^{-1}||\le \xi\). If \(||M-N||\le \zeta\) and \(\zeta \xi < 1\), then \(N^{-1}\) exists and its norm satisfies
$$\begin{aligned} ||N^{-1}||\le \frac{\xi }{1-\zeta \xi }. \end{aligned}$$
Lemma 3
If \(r\in (0,1/(\gamma H)^{\frac{1}{p}})\) with \(H=H_\omega +H_t\) and Assumption 1 holds, then for any \(x,v\in \mathbb {N}(x_*,r)\subset \mathbb {D}_0\), the matrix \({F'}(x)\) is nonsingular and the following four inequalities hold for \(p\in (0,1]\):
- (1) \(||{F'}(x_*)-{F'}(x)||\le H||x_*-x||^{p}\),
- (2) \(||{F'}(x)^{-1}||\le \frac{\gamma }{1-\gamma H||x_*-x||^{p}}\),
- (3) \(||{F}(v)||\le \frac{H}{p+1}||v-x_*||^{p+1}+2\eta ||v-x_*||\),
- (4) \(S(v)\le \frac{H\gamma }{1-\gamma H||x-x_*||^{p}}\left( \frac{1}{p+1}||v-x_*||^{p}+||x-x_*||^{p}\right) ||v-x_*||\),

where \(S(v)=||v-x_*-{F'}(x)^{-1}{F}(v)||\).
Proof
Using Assumption (A2), we obtain
Then the first formula of Lemma 3 is proved.
Moreover, the condition \(r\in (0,1/(\gamma H)^\frac{1}{p})\) implies \(\gamma H||x-x_*||^{p}<1\). Hence by using \(||{F'}(x_*)^{-1}||\le \gamma\) and Lemma 2, we know that \({F'}(x)^{-1}\) exists and satisfies
Thus the second formula of Lemma 3 is true.
Since
and
then it holds that
So the third formula of Lemma 3 is correct.
Next, because
we have
Now the proof of Lemma 3 is complete. \(\square\)
Theorem 1
Let \(\varDelta =\min \left\{ \alpha ,\beta \right\}\) and \(\varLambda =\max \left\{ \left| 1-\alpha \right| ,\left| 1-\beta \right| \right\}\). Under the conditions of Lemma 3, assume \(r\in (0,r_0)\), with \(r_0=\min \left\{ r_1,r_2,r_3\right\}\),
where \(\mu _*=\min \left\{ m_*,l_*\right\}\), \(m_*=\liminf \nolimits _{k\rightarrow \infty } m_k\), \(l_*=\liminf \nolimits _{k\rightarrow \infty } l_k\), and the constant \(\mu _*\) satisfies
Here the symbol \(\lceil x \rceil\) denotes the ceiling function, representing the smallest integer no less than the corresponding real number x; the number \(\sigma \in (0,\sigma _0)\) is a prescribed positive constant with \(\sigma _0=\frac{1-\theta }{\theta }\), and
with \(\alpha , \beta\) satisfying \(0<\alpha \beta<\alpha +\beta <\alpha \beta \frac{1-\rho (W(x_*)^{-1}T(x_*)) }{2}+2\). Then for any \(x_0\in \mathbb {N}(x_*,r)\) and any positive integer sequences \(\left\{ {l_k}\right\} ^\infty _{k=0}\) and \(\left\{ {m_k}\right\} ^\infty _{k=0}\), the iteration sequence \(\left\{ {x_k}\right\} ^\infty _{k=0}\) generated by the MN-AGSOR method is well-defined and converges to the solution \(x_*\). Moreover, it holds that
where
Proof
The idea to prove this theorem is to find some r, such that for any vector \(x\in \mathbb {N}(x_*,r)\), it holds that
which is satisfied if we can show \(||\mathcal {G}_{\alpha ,\beta }(x_*)-\mathcal {G}_{\alpha ,\beta }(x)||<\sigma \theta\) since
By using (16) and Assumption (A1), we obtain
In addition, \(r\in (0,1/(\gamma H)^\frac{1}{p})\) implies \(\gamma H||x-x_*||^{p}<1\), then by Assumption (A2), we have
and
Thus, from (18) and Lemma 2, for any \(x \in \mathbb {N}(x_*,r)\), we obtain
given that
which is satisfied since \(r<r_1\).
On the one hand, by direct calculations, we have
On the other hand, \(r<r_1\) implies
Furthermore, in order to make \(||\mathcal {G}_{\alpha ,\beta }(x)-\mathcal {G}_{\alpha , \beta }(x_*)||<\sigma \theta\), we only need to show
which is equivalent to
and it is true since \(r<r_2\). Therefore, when \(r<r_1\) and \(r<r_2\), we have
Hence, for any \(u\in \mathbb {N}(x_*,r)\), with \(r<\min \{r_1,r_2\}\), we obtain
since \(\sigma <\sigma _0=\frac{1-\theta }{\theta }\).
Now, we can estimate the error of the iteration sequence \(\left\{ {x_k}\right\} ^\infty _{k=0}\) generated by MN-AGSOR method. Using (14) and Lemma 3, we get
where
and \((1+\sigma )\theta <1\).
Set \(\mu _*=\min \left\{ {m_*, l_*}\right\}\), \(m_*=\liminf \limits _{k\rightarrow \infty } m_k\), and \(l_*=\liminf \limits _{k\rightarrow \infty } l_k\). It is clear that the function \(g(s,\lambda )\) is strictly monotonically decreasing with respect to \(\lambda\). Additionally, by direct calculations, we have
which implies that \(g(s,\lambda )\) is strictly monotone increasing with respect to s. Then, for \(x_k\in \mathbb {N}(x_*,r)\), we get
under the conditions that
Actually, the above two inequalities are correct with \(\mu _*\) satisfying (15) and \(r<r_3\), thus, we obtain
Similarly, it holds that
Therefore, for any \(x_0\in \mathbb {N}(x_*,r)\subset \mathbb {D}_0\), since
then we know that the iteration sequence \(\{x_k\}_{k=0}^\infty\) generated by the modified Newton-AGSOR method is well-defined and converges to the solution \(x_*\). Moreover, \(\Vert x_{k+1}- x_*\Vert <g(r,x_*)^2\Vert x_k- x_*\Vert\) directly leads to \(\Vert x_k- x_*\Vert <g(r_0,x_*)^{2k}\Vert x_0- x_*\Vert\), or equivalently,
Then, letting \(k\rightarrow \infty\), we establish (17), and the proof of the theorem is complete. \(\square\)
5 Numerical examples
Next, we compare the modified Newton-AGSOR (MN-AGSOR) method with the modified Newton-DPMHSS (MN-DPMHSS) method [19] and the modified Newton-MDPMHSS (MN-MDPMHSS) method [6] in several numerical experiments to show the validity and superiority of the MN-AGSOR method. As mentioned earlier, the AGSOR method reduces to the GSOR method when the two parameters are equal, so MN-GSOR is a special case of MN-AGSOR; in the numerical experiments, however, we treat MN-GSOR as an independent method.
Consider the following nonlinear equations [6, 19]
where \(\mathbb {D}=\left( 0,1\right] \times \varOmega\) with \((x,y)\in \varOmega =(0,1)\times (0,1)\), and \(\partial \varOmega\) being the boundary of \(\varOmega\). The constant \(\zeta >0\) represents the magnitude of the reaction term. Discretizing the above problem on an equidistant grid with \(\Delta t=h=1/(N+1)\), at each time step of the implicit scheme we must solve a system of nonlinear equations
where
and
with \(n=N \times N\) and \(A_N = tridiag(-1,2,-1)\). Here \(\otimes\) denotes the Kronecker product.
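The grid matrices above can be assembled as follows. This is a sketch: the exact scalings of the blocks W and T follow the experiments of [6, 19], and `N = 8` is only for illustration.

```python
import numpy as np

N = 8                                # interior grid points per direction
h = 1.0 / (N + 1)                    # mesh width, Delta t = h = 1/(N+1)
# A_N = tridiag(-1, 2, -1): the 1-D second-difference matrix.
A_N = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
I_N = np.eye(N)
# Kronecker assembly of the 2-D discrete Laplacian of size n = N^2; the
# blocks W and T of the Jacobian are built from matrices of this form.
L2d = np.kron(A_N, I_N) + np.kron(I_N, A_N)
```

For larger N, sparse storage (`scipy.sparse.kron` with tridiagonal factors) is the natural choice, since `L2d` has only O(n) nonzeros.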
All the numerical tests were performed in Matlab (R2016a) on an Intel quad-core processor (2.79 GHz, 8 GB RAM). We choose the initial guess \({x}_0=1\) for all the considered iteration methods, and the termination condition for the outer modified Newton iteration is
We set the tolerances of the inner iterations to \(\delta _k=\tilde{\delta }_k=\delta\) for the four considered methods.
It is easy to see that the solution of (23) is \(\hat{u}_\star =0\), and
thus \(F'(u_\star )=M\). In addition, we can get that
for any vector \(u \in N(u_\star ,r)\).
Hence, the solution sequence \(\{u_k\}\) generated by the MN-AGSOR method converges to the solution \(u_\star = 0\) according to the previous theoretical analysis. We take the optimal experimental parameters of the MN-DPMHSS and MN-MDPMHSS methods in the various settings from the articles [6, 19].
First we choose \(\alpha _1=\beta _1=1\) and \(\alpha _2=\beta _2=1\). The optimal experimental parameters of the MN-AGSOR and MN-GSOR methods are obtained by experimental tests; the detailed data are shown in Tables 1, 2, 3 and 4.
In Tables 5, 6, 7 and 8, we compare our MN-AGSOR and MN-GSOR methods with the MN-DPMHSS and MN-MDPMHSS methods in the following four aspects: the number of inner iteration steps ("In Step"), the number of outer iteration steps ("Out Step"), the elapsed CPU time in seconds ("CPU(s)"), and the error estimate ("RES").
Next we choose \(\alpha _1=\alpha _2=1\), \(\beta _1 = 1/2\) and \(\beta _2 = -2\) for further comparison. The experimental optimal parameters of the four considered methods are listed in Table 9 for \(N=50\), and numerical results of the four methods are presented in Tables 10 and 11, respectively.
According to the numerical results in Tables 5, 6, 7 and 8 and Tables 10 and 11, the inner and outer iteration counts of the MN-AGSOR method are significantly smaller than those of the MN-DPMHSS and MN-MDPMHSS methods, and its CPU time is significantly lower, which implies that the MN-AGSOR method is more efficient than the MN-DPMHSS and MN-MDPMHSS methods. On the other hand, the performance of the MN-GSOR method is similar to that of the MN-AGSOR method, but the MN-GSOR method requires only one parameter; hence, for scientific and engineering problems, it may be preferable to choose the MN-GSOR method instead of the MN-AGSOR method.
6 Conclusions
For solving nonlinear systems with block two-by-two complex symmetric Jacobian matrices, we have introduced a modified Newton-AGSOR (MN-AGSOR) method based on the AGSOR algorithm. In the theoretical analysis, the local convergence properties of the MN-AGSOR method have been established under the Hölder continuity condition instead of the stronger Lipschitz assumption. The numerical results confirm that the MN-AGSOR method has an advantage over the modified Newton-DPMHSS and modified Newton-MDPMHSS methods in both CPU time and iteration counts. Because the performance of the MN-GSOR method is very close to that of the MN-AGSOR method, we prefer the MN-GSOR method in applications. Furthermore, the MN-AGSOR method imposes weaker conditions on the Jacobian splitting matrices than the MN-DPMHSS and MN-MDPMHSS methods.
References
Bai, Z.Z.: Block preconditioners for elliptic PDE-constrained optimization problems. Computing 91(4), 379–395 (2011)
Bai, Z.Z., Benzi, M., Chen, F.: Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87(3–4), 93–111 (2010)
Bai, Z.Z., Benzi, M., Chen, F.: On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algorithms 56(2), 297–317 (2011)
Bai, Z.Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24(3), 603–626 (2003)
Chen, M., Lin, R., Wu, Q.: Convergence analysis of the modified Newton-HSS method under the Hölder continuous condition. J. Comput. Appl. Math. 264, 115–130 (2014)
Chen, M.H., Wu, Q.B.: Modified Newton-MDPMHSS method for solving nonlinear systems with block two-by-two complex symmetric Jacobian matrices. Numer. Algorithms 80(2), 355–375 (2019)
Darvishi, M., Barati, A.: A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 187(2), 630–635 (2007)
Dehghan, M., Dehghani-Madiseh, M., Hajarian, M.: A generalized preconditioned MHSS method for a class of complex symmetric linear systems. Math. Model. Anal. 18(4), 561–576 (2013)
Edalatpour, V., Hezari, D., Khojasteh Salkuyeh, D.: Accelerated generalized SOR method for a class of complex systems of linear equations. Math. Commun. 20(1), 37–52 (2015)
Edalatpour, V., Hezari, D., Salkuyeh, D.: Two efficient inexact algorithms for a class of large sparse complex linear systems. Mediterr. J. Math. 13(4), 2301–2318 (2016)
Hezari, D., Edalatpour, V., Salkuyeh, D.: Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations. Numer. Linear Algebra Appl. 22(4), 761–776 (2015)
Karlsson, H.O.: The quasi-minimal residual algorithm applied to complex symmetric linear systems in quantum reactive scattering. J. Chem. Phys. 103(12), 4914–4919 (1995)
Li, C.X., Wu, S.L.: A single-step HSS method for non-Hermitian positive definite linear systems. Appl. Math. Lett. 44, 26–29 (2015)
Lions, J.L.: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)
Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)
Papp, D., Vizvari, B.: Effective solution of linear Diophantine equation systems with an application in chemistry. J. Math. Chem. 39(1), 15–31 (2006)
Rees, T., Dollar, H.S., Wathen, A.J.: Optimal solvers for PDE-constrained optimization. SIAM J. Sci. Comput. 32(1), 271–298 (2010)
Salkuyeh, D.K., Hezari, D., Edalatpour, V.: Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 92(4), 802–815 (2015)
Wang, J., Guo, X.P., Zhong, H.X.: MN-DPMHSS iteration method for systems of nonlinear equations with block two-by-two complex Jacobian matrices. Numer. Algorithms 77(1), 167–184 (2018)
Wang, X., Xiao, X.Y., Zheng, Q.Q.: A single-step iteration method for non-Hermitian positive definite linear systems. J. Comput. Appl. Math. 346, 471–482 (2019)
Wu, Q., Chen, M.: Convergence analysis of modified Newton-HSS method for solving systems of nonlinear equations. Numer. Algorithms 64(4), 659–683 (2013)
Xiao, X.Y., Wang, X., Yin, H.W.: Efficient single-step preconditioned HSS iteration methods for complex symmetric linear systems. Comput. Math. Appl. 74(10), 2269–2280 (2017)
Qi, X., Wu, HT. & Xiao, XY. Modified Newton-AGSOR method for solving nonlinear systems with block two-by-two complex symmetric Jacobian matrices. Calcolo 57, 14 (2020). https://doi.org/10.1007/s10092-020-00362-w
Keywords
- Complex systems of nonlinear equations
- Convergence analysis
- Modified Newton-DPMHSS
- Modified Newton-AGSOR
- Modified Newton-GSOR
- Modified Newton-MDPMHSS