Abstract
We propose in this paper the preconditioned accelerated generalized successive overrelaxation (PAGSOR) iteration method for efficiently solving large complex symmetric linear systems. To solve nonlinear systems whose Jacobian matrices are complex and symmetric, we take the PAGSOR method as the inner iteration and construct a modified Newton-PAGSOR (MN-PAGSOR) method, which provides an effective approach for a wide range of problems in science and engineering. Under a Hölder continuity condition, we present the theoretical framework of the modified method, establish its local convergence properties, and provide numerical experiments that validate its effectiveness on a class of nonlinear systems.
1 Introduction
Let us consider the solution of a complex nonlinear system of equations
$$\begin{aligned} F(u)=0, \end{aligned}$$
where \(F{:}\,\mathbb {D}\subset \mathbb {C}^n\rightarrow \mathbb {C}^n\) is a continuously differentiable nonlinear mapping defined on an open convex subset of the complex linear space \(\mathbb {C}^n\), with \(F=(F_1,\cdots ,F_n)^{\textrm{T}}\), \(F_j=F_j(u)\), \(j=1,\cdots ,n\), and \(u=(u_1,\cdots ,u_n)^{\textrm{T}}\). The Jacobian matrix \(F'(u)\) is large, sparse, and complex symmetric. This class of nonlinear systems has widespread applications in various fields, such as quantum mechanics [31], fluid mechanics [24, 25, 41], structural mechanics [1, 27], chemical reactions [32, 38], nonlinear waves [47], mechanical engineering [27], and so on. We assume that the complex symmetric Jacobian matrix \(F'(u)\) has the form
$$\begin{aligned} F'(u)=W(u)+\textrm{i}\,T(u), \end{aligned}$$
where W(u) and T(u) are real symmetric and positive semi-definite matrices, and \(\textrm{i}=\sqrt{-1}\) is the imaginary unit. A classical and effective method for solving the nonlinear system (1) is the Newton method [22, 36, 44], with the iteration
$$\begin{aligned} u_{k+1}=u_k-F'(u_k)^{-1}F(u_k),\quad k=0,1,2,\cdots . \end{aligned}$$
With a good choice of the initial guess \(u_0\) close to the exact solution \(u_*\) of equation (1), the Newton method converges quadratically. However, it becomes challenging to solve the Newton equation \(F'(u)s=-F(u)\) exactly, especially when the dimension n is large.
To address this issue and raise the efficiency, on the one hand, the inexact Newton method [21] was proposed and widely studied. It takes the form
$$\begin{aligned} \Vert F(u_k)+F'(u_k)s_k\Vert \leqslant \eta _k\Vert F(u_k)\Vert ,\quad u_{k+1}=u_k+s_k,\quad k=0,1,2,\cdots , \end{aligned}$$
where the parameter \(\eta _k\in [0,1)\) is the forcing term used to control the accuracy of the inner solve, and \(F'(u_k)\) is the Jacobian matrix of F evaluated at the iterate \(u_k\). On the other hand, to speed up the Newton process, the modified Newton method was presented and studied in [20]; it is formulated as
$$\begin{aligned} v_k&=u_k-F'(u_k)^{-1}F(u_k),\\ u_{k+1}&=v_k-F'(u_k)^{-1}F(v_k),\quad k=0,1,2,\cdots . \end{aligned}$$
Theoretical and numerical analysis shows that the modified Newton method (5) achieves significant improvements: compared with the Newton method, it requires only one additional evaluation of F per step, yet attains at least R-order three convergence. Over the years, inner-outer methods have attracted much attention for nonlinear systems due to their superior numerical performance; see [12, 18, 28, 34, 50, 53] and the references therein. Therefore, for the nonlinear system (1), building on the efficiency of the modified Newton method as the outer iteration, we search for inner methods better suited to different forms of nonlinear problems. To make the solver faster, we focus on finding efficient inner iterative methods for the linear system described next.
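To make the two half-steps of the modified Newton scheme (5) concrete, here is a minimal scalar sketch; the function `f` and the starting point are illustrative only, not taken from the paper.

```python
# Minimal sketch of the modified Newton method: each outer step reuses
# one Jacobian evaluation f'(u_k) for two correction half-steps.
def modified_newton(f, fprime, u0, steps=8):
    u = u0
    for _ in range(steps):
        J = fprime(u)        # one Jacobian evaluation per outer step
        v = u - f(u) / J     # first half-step (classical Newton)
        u = v - f(v) / J     # second half-step reuses the same Jacobian
    return u

# Illustrative scalar example: solve u^2 - 2 = 0 starting from u0 = 1.5.
root = modified_newton(lambda u: u * u - 2.0, lambda u: 2.0 * u, 1.5)
```

At the cost of one extra evaluation of f per step, the scheme attains the higher R-order of convergence mentioned above while keeping a single Jacobian factorization per step.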
Specifically, we consider the linear system of equations
$$\begin{aligned} Av=b,\quad A\in \mathbb {C}^{n\times n},\quad v,b\in \mathbb {C}^n, \end{aligned}$$
where A is a complex matrix with \(A=W+\textrm{i}T \), and \(W, T \in \mathbb {R}^{n\times n}\) are real symmetric matrices, W being positive definite and T positive semi-definite. It is known that (6) is a special case of the generalized saddle-point problem [16, 17], which has broad applications, for instance in the finite element discretization of constrained optimization problems for elliptic partial differential equations or in distributed control problems [2, 42, 43].
Methods for solving real linear systems with block two-by-two structure have garnered significant attention. Among them, several preconditioned Krylov subspace methods [19, 45] have shown remarkable performance on large sparse matrices, effectively exploiting their structure to accelerate the solution process. Since 2003, the Hermitian and skew-Hermitian splitting (HSS) method introduced by Bai et al. [9] has gained considerable attention for its effectiveness on large linear systems with non-Hermitian positive definite matrices. Researchers have devoted much effort to studying and refining HSS-based methods [6, 8, 11, 55], drawn by their elegant formulation and reliable convergence properties. To circumvent complex arithmetic, Bai et al. designed the modified HSS (MHSS) iteration method [3] to solve equation (6) without resorting to complex linear system computations. Subsequently, many improved methods have been proposed, both HSS-type, such as the preconditioned MHSS (PMHSS) method [4] and the double-parameter generalized PMHSS (DGPMHSS) method [33], and SOR-type. For the SOR-type methods, the original idea and analysis of the generalized successive overrelaxation (GSOR) method were presented as early as 2005 [13] for solving augmented linear systems. Further generalizations applied the GSOR method to large sparse generalized saddle-point problems, as exemplified by the parameterized Uzawa method [14], and GSOR-type (SOR-accelerated HSS) methods were established in [10]. Nearly ten years later, direct applications of the GSOR method, together with its accelerated and preconditioned variants, were studied in [23, 29, 46]: two variants of the GSOR method, the accelerated GSOR (AGSOR) and the preconditioned GSOR (PGSOR), were developed for linear systems. Inspired also by the preconditioning technique, Huang et al. introduced the preconditioned AGSOR (PAGSOR) method [30], which aims to decrease the optimal convergence factor of the AGSOR method and to enhance its convergence efficiency. All the above studies lead us to adopt these methods as excellent inner iterations.
As for the nonlinear system (1), by using HSS-type iterations as the inner solver for the inexact Newton or modified Newton method, researchers have contributed many excellent algorithms; see, for example, [12, 15, 28, 35, 48, 49, 53, 54] and the references therein. In recent years, the modified Newton-DGPMHSS (MN-DGPMHSS) method [18] and the modified Newton-DSS (MN-DSS) method [52] were also proposed. (Note that the "D" in these two methods stands for "double" or "double-step"; do not confuse it with the "D" standing for "deteriorated" used in the initial study of HSS-type iteration methods [37]. Abbreviations always have limitations!) Owing to the advantages of easy programming, efficient computation, and low storage for SOR-type iterations and their variants, the modified Newton-GSOR (MN-GSOR) and modified Newton-AGSOR (MN-AGSOR) methods were introduced [39, 40]. The local convergence of these methods was established under a Hölder condition, and their numerical efficiency is comparable with that of the MN-DPMHSS and MN-MPPMHSS methods. In 2021, the modified Newton-PGSOR (MN-PGSOR) method was proposed to solve (1) with block two-by-two complex symmetric Jacobian matrices; a similar conclusion holds when it is applied to a 2-torsion free block upper triangular matrix algebra [26]. All these methods have advanced the study of solving nonlinear systems of equations.
To improve the computational efficiency and to broaden the range of tractable problems when the Jacobian matrix is symmetric indefinite, we apply the modified Newton method to the nonlinear system while utilizing the preconditioned accelerated generalized successive overrelaxation (PAGSOR) method to solve the Newton equations. Thus, we construct the modified Newton-PAGSOR (MN-PAGSOR) method to solve equations (1). Under proper conditions, a local convergence theorem is provided. We then use numerical results to show the feasibility and effectiveness of the method compared with several existing algorithms.
The article is organized as follows. In Sect. 2, we introduce the MN-PAGSOR method for solving (1), and in Sect. 3 we elaborate on its local convergence properties. Section 4 presents numerical results that validate the practicality and efficiency of the MN-PAGSOR method. Finally, Sect. 5 provides a succinct conclusion summarizing the key findings and contributions of this article.
2 The Modified Newton-PAGSOR (MN-PAGSOR) Method
Consider the following complex linear system:
$$\begin{aligned} Au=b. \end{aligned}$$
The complex symmetric linear system is structured in the form
$$\begin{aligned} (W+\textrm{i}T)(x+\textrm{i}y)=p+\textrm{i}q, \end{aligned}$$
where \(\textrm{i}=\sqrt{-1}\) is the imaginary unit, \(W,T\in \mathbb {R}^{n\times n}\) are real, symmetric, and positive semi-definite matrices, and \(x, y, p, q\in \mathbb {R}^n\) are real vectors. To avoid complex arithmetic, we can derive the equivalent real system in two-by-two block form:
$$\begin{aligned} \begin{pmatrix} W &{} -T \\ T &{} W \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} p \\ q \end{pmatrix}. \end{aligned}$$
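As a quick illustration with hypothetical scalar data (n = 1), the complex solve and its real two-by-two reformulation produce the same answer:

```python
# Sketch: for n = 1, (W + iT)(x + iy) = p + iq and its real block form
# [[W, -T], [T, W]] [x, y]^T = [p, q]^T have the same solution.
W, T = 2.0, 0.5          # W > 0, T >= 0 (scalars stand in for matrices)
p, q = 1.0, 3.0

# Complex solve.
u = complex(p, q) / complex(W, T)

# Real block solve via the explicit 2x2 inverse (Cramer's rule).
det = W * W + T * T      # determinant of [[W, -T], [T, W]]
x = (W * p + T * q) / det
y = (W * q - T * p) / det
```

The agreement of `u.real` with `x` and `u.imag` with `y` is exactly the equivalence used above to trade complex arithmetic for a real block system.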
2.1 The PAGSOR Iteration Method
Multiplying (9) from the left by the matrix
$$\begin{aligned} \begin{pmatrix} \omega I &{} I \\ -I &{} \omega I \end{pmatrix}, \end{aligned}$$
we obtain
$$\begin{aligned} \begin{pmatrix} \omega W+T &{} -(\omega T-W) \\ \omega T-W &{} \omega W+T \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \omega p+q \\ \omega q-p \end{pmatrix}. \end{aligned}$$
Denote \(\widetilde{W}=\omega W + T\), \(\widetilde{T}=\omega T - W\), \(\widetilde{p}=\omega p+q\), \(\widetilde{q}= \omega q-p\), and
$$\begin{aligned} \widetilde{A}=\begin{pmatrix} \widetilde{W} &{} -\widetilde{T} \\ \widetilde{T} &{} \widetilde{W} \end{pmatrix},\quad u=\begin{pmatrix} x \\ y \end{pmatrix},\quad \widetilde{b}=\begin{pmatrix} \widetilde{p} \\ \widetilde{q} \end{pmatrix}. \end{aligned}$$
Then, (9) can be rewritten as
$$\begin{aligned} \widetilde{A}u=\widetilde{b}. \end{aligned}$$
Note that the coefficient matrix \(\widetilde{A}\) can be naturally split as
$$\begin{aligned} \widetilde{A}= \begin{pmatrix} \widetilde{W} &{} \varvec{O} \\ \varvec{O} &{} \widetilde{W} \end{pmatrix} - \begin{pmatrix} \varvec{O} &{} \varvec{O} \\ -\widetilde{T} &{} \varvec{O} \end{pmatrix} - \begin{pmatrix} \varvec{O} &{} \widetilde{T} \\ \varvec{O} &{} \varvec{O} \end{pmatrix}, \end{aligned}$$
where the notation \(\varvec{O}\) is used to denote the zero matrix.
By [30], the PAGSOR method for \(\widetilde{A}u=\widetilde{b}\) is established as follows:
$$\begin{aligned} {\left\{ \begin{array}{ll} \widetilde{W}x^{(k+1)}=(1-\alpha )\widetilde{W}x^{(k)}+\alpha \big (\widetilde{T}y^{(k)}+\widetilde{p}\big ),\\ \widetilde{W}y^{(k+1)}=(1-\beta )\widetilde{W}y^{(k)}+\beta \big (-\widetilde{T}x^{(k+1)}+\widetilde{q}\big ). \end{array}\right. } \end{aligned}$$
The definitions of \(\widetilde{W}\), \(\widetilde{T}\), \(\widetilde{p}\), \(\widetilde{q}\) are the same as those in (11), \(\alpha \), \(\beta \) are two real parameters, and \(\omega > 0\).
Furthermore, the iterative method (12) can be equivalently rewritten as follows:
or
where
Obviously, the matrix \(Q_\omega (\alpha ,\beta )\) is referred to as the iteration matrix of the PAGSOR method. Now, let us consider another decomposition of matrix \(\widetilde{A}\),
with
Evidently, we get
Specifically, the AGSOR method, as outlined in [23], represents a particular form of the PAGSOR iterative method when \(\omega =1\).
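To see the iteration in action, here is a minimal sketch of a PAGSOR-style solve. It assumes the two-parameter relaxed sweep of the GSOR family applied to the \(\omega \)-preconditioned blocks, and uses diagonal W and T so the inner solves with \(\widetilde{W}\) are trivial; it is an illustration under these assumptions, not the algorithm box from [30].

```python
# Sketch of a PAGSOR-style solve for (W + iT)(x + iy) = p + iq.
# Diagonal W > 0, T >= 0 are stored as lists of diagonal entries, so
# "solving" with W~ = omega*W + T is an elementwise division.
def pagsor(W, T, p, q, omega, alpha, beta, sweeps=60):
    n = len(W)
    # Preconditioned blocks: W~ = omega*W + T, T~ = omega*T - W,
    # p~ = omega*p + q, q~ = omega*q - p.
    Wt = [omega * W[i] + T[i] for i in range(n)]
    Tt = [omega * T[i] - W[i] for i in range(n)]
    pt = [omega * p[i] + q[i] for i in range(n)]
    qt = [omega * q[i] - p[i] for i in range(n)]
    x = [0.0] * n
    y = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):   # x-update: W~ x = T~ y + p~, relaxed by alpha
            x[i] = (1 - alpha) * x[i] + alpha * (Tt[i] * y[i] + pt[i]) / Wt[i]
        for i in range(n):   # y-update: W~ y = -T~ x + q~, relaxed by beta
            y[i] = (1 - beta) * y[i] + beta * (qt[i] - Tt[i] * x[i]) / Wt[i]
    return x, y

# Illustrative diagonal data.
W = [2.0, 3.0, 4.0]; T = [0.5, 0.5, 1.0]
p = [1.0, 2.0, 3.0]; q = [3.0, 1.0, 2.0]
x, y = pagsor(W, T, p, q, omega=1.0, alpha=1.0, beta=1.0)
```

Because the preconditioner is nonsingular, the computed (x, y) also solves the original real block system \(Wx-Ty=p\), \(Tx+Wy=q\); for this diagonal data the contraction factor per sweep is at most \((\Vert \widetilde{W}^{-1}\widetilde{T}\Vert )^2\approx 0.51\), so 60 sweeps reach machine accuracy.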
2.2 The Modified Newton-PAGSOR Method
For convenience, we introduce the following notation:
where \(\textrm{Re}(u)\) and \(\textrm{Im}(u)\) represent the real and imaginary parts of any complex vector or matrix u, respectively. Naturally, F(u) has the form \(F(u)=P(u)+\textrm{i} Q(u)\), where \(P(u)=\textrm{Re}(F(u))\), \(Q(u)=\textrm{Im}(F(u))\). We assume that the Jacobian matrix \(F'(u)\) can be expressed in the form
where \(W(u)=\textrm{Re}(F'(u))\in \mathbb {R}^{n\times n}\), \(T(u)=\textrm{Im}(F'(u))\in \mathbb {R}^{n\times n}\) are real symmetric matrices, with W(u) being positive definite and T(u) positive semi-definite.
Our MN-PAGSOR iteration method is developed to solve the complex nonlinear system (1), in which the PAGSOR method is applied to solve the following Newton equations:
where
and
Hence, we can use the following expression for the MN-PAGSOR method:
with \(\widetilde{W}(u_k)=\omega W(u_k)+T(u_k)\), \(\widetilde{T}(u_k)=\omega T(u_k)-W(u_k)\), \(u_k=x_k+\textrm{i}y_k\), \(\widetilde{P}(u_k)=\omega P(u_k)+Q(u_k)\), \(\widetilde{Q}(u_k)=\omega Q(u_k)-P(u_k)\).
As with (16), (17), and (18), we need to simplify (22) and provide an approximate method to generate the next iterate \(u_{k+1}\):
Thus, we can construct the MN-PAGSOR method. The algorithm process is as follows.
According to (13) and (20), direct calculation leads to the expressions for \(d_{k,l_k}\), \(h_{k,m_k}\),
which are in the formula mentioned above and
Therefore, the MN-PAGSOR method can be expressed as
Here, the matrix is split into
with
Since these matrices have the following relationship with the iterative matrix \(Q_\omega (\alpha ,\beta , u)\):
we obtain finally an equivalent form of the MN-PAGSOR method,
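To make the two-level structure concrete, here is a scalar sketch of the MN-PAGSOR loop for a hypothetical problem \(F(u)=(2+\textrm{i})u+u^3\) with root \(u_*=0\); the problem, parameters, and sweep counts are illustrative assumptions, not the paper's test data, and the inner sweep follows the GSOR-family update on the preconditioned real form.

```python
# Scalar sketch of MN-PAGSOR: the outer modified Newton step evaluates
# F and F' = W + iT once, and the inner loop runs a few PAGSOR-style
# sweeps on the real 2x2 form of F'(u_k) d = -F(...).
def F(u):
    return (2 + 1j) * u + u ** 3          # illustrative nonlinear map

def Fp(u):
    return (2 + 1j) + 3 * u ** 2          # its complex symmetric Jacobian

def inner_solve(J, rhs, omega=1.0, alpha=1.0, beta=1.0, sweeps=12):
    # Approximately solve J*d = rhs (scalar complex) by relaxed sweeps
    # on the omega-preconditioned real form.
    W, T = J.real, J.imag
    p, q = rhs.real, rhs.imag
    Wt, Tt = omega * W + T, omega * T - W
    pt, qt = omega * p + q, omega * q - p
    x = y = 0.0
    for _ in range(sweeps):
        x = (1 - alpha) * x + alpha * (Tt * y + pt) / Wt
        y = (1 - beta) * y + beta * (qt - Tt * x) / Wt
    return complex(x, y)

u = 0.3 + 0.2j                            # initial guess near the root
for _ in range(10):                       # outer modified Newton steps
    J = Fp(u)                             # one Jacobian per outer step
    u = u + inner_solve(J, -F(u))         # first half-step
    u = u + inner_solve(J, -F(u))         # second half-step, same J
```

Only F-evaluations and cheap inner sweeps are needed per outer step; for this toy data the iterates contract rapidly toward \(u_*=0\).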
3 Local Convergence of MN-PAGSOR Method
In this section, we derive the local convergence of the MN-PAGSOR method, i.e., we show that the iterates \(u_k\) generated by the MN-PAGSOR method converge to \(u_*\) for any sufficiently good initial guess \(u_0\). Our analysis is based on the Hölder condition, which is weaker than the Lipschitz one.
Definition 1
Let \(F{:}\,\,\mathbb {D}\subseteq \mathbb {C}^n\rightarrow \mathbb {C}^n \) be a nonlinear mapping. If there exists a linear operator \(A\in \mathbb {L}(\mathbb {C}^n,\mathbb {C}^n)\) that satisfies
$$\begin{aligned} \lim _{t\rightarrow 0}\frac{\Vert F(u+th)-F(u)-tAh\Vert }{t}=0 \end{aligned}$$
for any \(h\in \mathbb {C}^n\), then F is said to be Gâteaux differentiable (or G-differentiable) at the interior point \(u\in \mathbb {D}\). Additionally, for an open set \(\mathbb {D}_0\subset \mathbb {D}\), if F is G-differentiable at every point of \(\mathbb {D}_0\), it is said to be G-differentiable on the open set \(\mathbb {D}_0\).
Lemma 1
(Perturbation Lemma [36]) Let \(A,C\in \mathbb {C}^{n\times n}\), and assume that A is nonsingular with \(\Vert A^{-1}\Vert \leqslant p\). If \(\Vert A-C\Vert \leqslant q\) and \(pq< 1 \), then C is also nonsingular and
$$\begin{aligned} \Vert C^{-1}\Vert \leqslant \frac{p}{1-pq}. \end{aligned}$$
We further require that \(F{:}\,\mathbb {D}\subset \mathbb {C}^n\rightarrow \mathbb {C}^n\) is G-differentiable on an open neighborhood \(\mathbb {D}_0\subset \mathbb {D}\) centered on \(u_*\in \mathbb {D}\) with \(F(u_*)=0\), and that the Jacobian matrix \(F'(u)\) is continuous and symmetric. Let \(\mathbb {N}(u_*, r)\) denote an open ball centered at \(u_*\) with radius r. We find that the perturbation lemma plays a crucial role in the proofs of Lemma 2 and Theorem 1.
Assumption 1
For all \(u\in \mathbb {N}(u_*,r)\subset \mathbb {D}_0\), the following conditions are established.
- (A1) (The bounded condition) There exist positive constants \(\delta \) and \(\gamma \) such that
$$\begin{aligned} \max \left\{ \Vert W(u_*)\Vert ,\Vert T(u_*)\Vert \right\} \leqslant \delta ,\quad \Vert A(u_*)^{-1}\Vert \leqslant \gamma . \end{aligned}$$
- (A2) (The Hölder condition) There exist nonnegative constants \(K_w\) and \(K_t\) such that
$$\begin{aligned} \Vert W(u)-W(u_*)\Vert \leqslant K_w\Vert u-u_*\Vert ^p,\quad \Vert T(u)-T(u_*)\Vert \leqslant K_t\Vert u-u_*\Vert ^p \end{aligned}$$
with the exponent \(p\in (0,1]\).
Lemma 2
Under Assumption 1, if \(r\in (0,(\frac{1}{\gamma K})^{\frac{1}{p}})\), then \(A(u)^{-1}\) exists for \(u\in \mathbb {N}(u_*,r)\). Besides, the following inequalities hold with \( K{:}=K_t+K_w\) for all \(u,y\in \mathbb {N}(u_*,r)\subset \mathbb {D}_0\):
- (i) \(\Vert A(u)-A(u_*)\Vert \leqslant K\Vert u-u_*\Vert ^{p}\),
- (ii) \(\Vert A(u)^{-1}\Vert \leqslant \frac{\gamma }{1-\gamma K\Vert u-u_*\Vert ^{p}}\),
- (iii) \(\Vert H(y)\Vert \leqslant \frac{K}{p+1}\Vert y-u_*\Vert ^{p+1}+2\delta \Vert y-u_*\Vert \),
- (iv) \(\Vert y - u _*-A( u )^{-1} H ( y )\Vert \leqslant \frac{\gamma }{1-\gamma K\Vert u - u_*\Vert ^{p}} \left( \frac{K}{p+1}\Vert y - u _*\Vert ^p +K\Vert u - u_*\Vert ^p \right) \Vert y-u_*\Vert \).
Proof
(i) The Hölder condition directly implies
(ii) Making use of this inequality
and the perturbation lemma, we know that \(A(u)^{-1}\) exists and satisfies
(iii) Furthermore, the known bounded condition leads to
and
Consequently, we find
(iv) Due to the following facts:
we have finally,
The proof of Lemma 2 is completed.
Theorem 1
Under the assumptions of Lemma 2, denote \(\Delta =\min \left\{ \alpha ,\beta \right\} \) and \(\Lambda =\max \left\{ \vert 1-\alpha \vert ,\vert 1-\beta \vert \right\} \), define \(r_0:=\min \left\{ r_1,r_2, r_3\right\} \), and suppose \(r\in (0,r_0)\), where
with \(\nu _*=\min \left\{ l_*, m_*\right\} , l_*=\lim \inf _{k\rightarrow \infty }l_k,\ m_*=\lim \inf _{k\rightarrow \infty }m_k\), and the quantity \(\nu _*\) satisfies
where the symbol \(\lceil \cdot \rceil \) denotes the smallest integer no less than the corresponding real number, \(\tau \in (0,\frac{1-\theta }{\theta })\) is a predetermined positive constant, and
with \(\alpha , \beta \) satisfying \(0<\alpha \beta<\alpha +1<\alpha \beta \frac{1-\rho (\widetilde{W}(u_*)^{-1}\widetilde{T}(u_*))}{2}+2\). Then, for any \(u_0\in \mathbb {N}(u_*,r)\) and any sequences \(\left\{ l_k\right\} _{k=0}^{\infty }\), \(\left\{ m_k\right\} _{k=0}^{\infty }\) of positive integers, the iteration sequence \(\left\{ u_k\right\} _{k=0}^{\infty }\) generated by the MN-PAGSOR method is well-defined and converges to \(u_*\). In addition, the following inequality holds:
where, for \(z\in (0, r)\) and \( \iota > \nu _*\),
Proof
First of all, by making use of the inverse matrix
we have
Based on (26) and the inequality \(\Vert Q_\omega (\alpha , \beta , u_*)\Vert \leqslant \vartheta (\alpha ,\beta ,u_*)< 1\) [30], the boundedness condition implies
From the Hölder condition, we obtain
Furthermore, according to Assumption (A2), the following two estimates are obtained:
If r satisfies
then, by Lemma 1, we can further obtain
Since \(r< r_1\), the following inequality holds:
Therefore,
Moreover, to establish the inequality \(\Vert Q_\omega (\alpha ,\beta ,u)-Q_\omega (\alpha ,\beta ,u_*)\Vert \leqslant \tau \theta \), we require the following inequality:
so it follows that
Thus, when \(r< \min \left\{ r_1, r_2\right\} \), for any \(u\in \mathbb {N}(u_*,r)\), we have
Next, the error of the iterative sequence \(\left\{ u_k\right\} _{k=0}^\infty \) generated by the MN-PAGSOR method can be estimated using (27) and Lemma 2, yielding
Here, we use the notation
Considering the assumption that
which is equivalent to
From the fact that \(u_k\in \mathbb {N}(u_*, r)\), we can conclude that the following inequality holds:
Therefore, when \(r< r_3\)
Similarly,
That is
For arbitrary \(u_0\in \mathbb {N}(u_*, r)\subset \mathbb {D}_0\), the following inequality holds:
Consequently, the iterative sequence \(\left\{ u_k\right\} _{k=0}^{\infty }\) is well-posed and converges to \(u_*\). Additionally, from \(\Vert u _{k+1}-u_*\Vert < g(r_0^{p}, \nu _*)^2\Vert u_k-u_*\Vert \), it follows that \(\Vert u_k-u_*\Vert < g(r_0^{p}, \nu _*)^{2k}\Vert u_0-u_*\Vert \), the following inequality is correct:
as \(k\rightarrow \infty \), \({\lim \sup }_{k \rightarrow \infty }\Vert u_k-u_*\Vert ^{\frac{1}{k}}\leqslant g(r_0^{p}, \nu _*)^2\).
4 Numerical Results
In this section, we consider two frequently encountered complex nonlinear systems and compute their numerical solutions by the respective methods. The experimental results are then compared to verify the effectiveness and feasibility of our approach. The known methods we compare against are the MN-GSOR, MN-DGPMHSS, and MN-AGSOR methods, which use the modified Newton method as the outer iteration and GSOR, DGPMHSS, and AGSOR, respectively, as the inner iterations. To ensure the reliability of the results, we compare not only the CPU time consumption of the algorithms but also their inner and outer iteration steps. The optimal parameters of each method and detailed computational results are listed below for comparison. In the tables below, the error estimate is denoted as Error, CPU time in seconds as CPU time (s), outer iteration steps as Outer IT, and inner iteration steps as Inner IT. The numerical experiments demonstrate that our method outperforms the MN-GSOR, MN-DGPMHSS, and MN-AGSOR methods. All experiments are implemented in MATLAB [version 9.12.0.2039608 (R2022a)] and run on a personal computer with an 8-core central processing unit [Apple M2] and 16.00 GB of memory.
Example 1
Consider the following nonlinear equations:
where \(\Omega = (0, 1)\times (0, 1)\), and \(\partial \Omega \) is its boundary. The coefficients are \(\alpha _1=\alpha _2=1,\ \beta _1 =\beta _2=0.5\), and \(\rho \) is a positive constant used to control the reaction term. Discretizing the above problem on equidistant grids with \(\Delta t=h=\frac{1}{N+1}\) (N a given positive integer), at each temporal step of the implicit scheme we obtain a system of nonlinear equations of the form (1):
where
and \(A_N\) is a tridiagonal matrix of the following form:
Here, the vector \(u=(u_1, u_2,\cdots ,u_n)^\textrm{T}\), \(n=N^2\), and \(\otimes \) represents the Kronecker product.
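Kronecker-product assembly of such two-dimensional discretization matrices can be sketched in a few lines of pure Python. The tridiagonal stencil values below are the generic 1-D Laplacian entries chosen for illustration, not the paper's exact \(A_N\):

```python
# Sketch: Kronecker product and a generic tridiagonal matrix, as used to
# assemble 2-D discretization operators like I (x) A_N + A_N (x) I.
def kron(A, B):
    # Kronecker product of dense matrices stored as lists of lists.
    ra, ca = len(A), len(A[0])
    rb, cb = len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]

def tridiag(n, lo, d, up):
    # n x n tridiagonal matrix with constant sub-, main, and superdiagonal.
    return [[d if i == j else lo if i == j + 1 else up if i + 1 == j else 0.0
             for j in range(n)] for i in range(n)]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

N = 3
AN = tridiag(N, -1.0, 2.0, -1.0)                  # illustrative stencil
A2d = add(kron(eye(N), AN), kron(AN, eye(N)))     # N^2 x N^2 operator
```

The resulting matrix is symmetric with the familiar five-point coupling pattern, matching the \(n=N^2\) dimension used in the experiments.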
We choose \(u_0=1\) to start the numerical computation and use the following stopping criterion for the outer iteration:
To control the accuracy of the inner iteration, both \(\sigma _k\) and \(\widetilde{\sigma _k}\) are set to be \(\sigma \). As a result, we need the following inequalities:
In Table 1, we list the optimal parameter \(\alpha \) for the MN-GSOR method, (\(\alpha , \beta \)) for the MN-DGPMHSS and MN-AGSOR methods, and (\(\alpha , \beta , \omega \)) for the MN-PAGSOR method. These optimal parameters, obtained experimentally for each method, are used to compare their performance. For one-parameter methods, theoretical analysis of the optimal parameter is available; see, for example, [6, 7, 9] and the references therein. For multi-parameter methods, however, the theoretical analysis of optimal parameters is either too complicated to be practical or simply unavailable, so we determine the optimal parameters directly by numerical experimentation. Furthermore, the numerical results show that, as the size of the nonlinear problem increases, the optimal parameters of the MN-PAGSOR method remain relatively stable.
In our numerical experiments, the size of the discretization N, the control parameter \(\rho \), and the inner tolerance \(\sigma \) are chosen uniformly as \(N=30, 40, 50, 100\), \(\rho =1, 10, 200\), and \(\sigma =0.1, 0.2, 0.4\).
In Tables 2–10, the four methods MN-GSOR, MN-DGPMHSS, MN-AGSOR, and MN-PAGSOR are compared for efficiency. The numerical results in these tables demonstrate that the proposed MN-PAGSOR method requires less computation time than the other three methods. Additionally, the MN-PAGSOR method requires fewer inner and outer iteration steps, and its iteration counts remain relatively stable as the problem size increases. In other words, the MN-PAGSOR method converges more quickly than the other methods; it is an effective improvement and a promising approach.
Example 2
Consider the following two-dimensional complex nonlinear convection-diffusion equation:
where \(\Omega = (0, 1)\times (0, 1)\), and \(\partial \Omega \) is its boundary. The coefficients are \(\alpha _1=1, \alpha _2=-1,\ \beta _1 =\beta _2=0.5\), and \(\rho \) is a positive constant used to control the reaction term. Discretizing the above problem on equidistant grids with \(h=\frac{1}{N+1}\) (N a given positive integer), we obtain a system of nonlinear equations of the form (1):
where
with the vector \(u=(u_1, u_2,\cdots , u_n)^\textrm{T},\) \( n= N^2\). The symbols \(A_N,\) N, and \(\otimes \) are consistent with Example 1.
Table 11 lists the optimal parameters for the four methods we mentioned before.
In our numerical experiments, the size of the discretization N, the control parameter \(\rho \), and the inner tolerance \(\sigma \) are chosen uniformly as \(N=120, 130\), \(\rho =10, 200\), and \(\sigma =0.1, 0.2, 0.4\).
In Tables 12–17, the four methods MN-GSOR, MN-DGPMHSS, MN-AGSOR, and MN-PAGSOR are compared for efficiency. We observe that our MN-PAGSOR method consistently outperforms the MN-GSOR, MN-DGPMHSS, and MN-AGSOR methods in terms of inner iteration steps, outer iteration steps, and CPU time under the same problem size. This highlights the superior performance of our proposed MN-PAGSOR algorithm in solving a class of complex nonlinear systems with complex symmetric Jacobians. Furthermore, the numerical results indicate that the iteration steps of the MN-PAGSOR method remain relatively stable as the problem size varies, demonstrating its robust stability. These findings reinforce the effectiveness and promising potential of the MN-PAGSOR method.
5 Conclusion
In this study, we propose the modified Newton-PAGSOR method for a class of large sparse systems of nonlinear equations with complex symmetric Jacobian matrices. Additionally, we investigate the local convergence of the MN-PAGSOR method under the Hölder continuity conditions. Numerical results demonstrate the feasibility and effectiveness of the new method, showing that MN-PAGSOR outperforms other methods in terms of computational time and number of iterations. Moreover, our numerical experiments reveal that the optimal parameters selected for MN-PAGSOR are more stable compared to other methods. However, further research is still needed to investigate the semi-local convergence analysis and optimal selection of multiple parameters, which remains an interesting problem for future study.
Data Availability
The data are available; they are mainly expressed through the numerical results reported in this article.
References
Axelsson, O., Kucherov, A.: Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 7, 197–218 (2000)
Bai, Z.-Z.: Block preconditioners for elliptic PDE-constrained optimization problems. Computing 91, 379–395 (2011)
Bai, Z.-Z., Benzi, M., Chen, F.: Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87, 93–111 (2010)
Bai, Z.-Z., Benzi, M., Chen, F.: On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algorithms 56, 297–317 (2011)
Bai, Z.-Z., Benzi, M., Chen, F., Wang, Z.-Q.: Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems. IMA J. Numer. Anal. 33, 343–369 (2013)
Bai, Z.-Z., Golub, G.H.: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 27, 1–23 (2007)
Bai, Z.-Z., Golub, G.H., Li, C.-K.: Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices. SIAM J. Sci. Comput. 28, 584–603 (2006)
Bai, Z.-Z., Golub, G.H., Lu, L.-Z., Yin, J.-F.: Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. 26, 844–863 (2005)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603–626 (2003)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations. Numer. Linear Algebra Appl. 14, 319–335 (2007)
Bai, Z.-Z., Golub, G.H., Pan, J.-Y.: Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numerische Mathematik 98, 1–32 (2004)
Bai, Z.-Z., Guo, X.-P.: On Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices. J. Comput. Math. 28, 235–260 (2010)
Bai, Z.-Z., Parlett, B.N., Wang, Z.-Q.: On generalized successive overrelaxation methods for augmented linear systems. Numerische Mathematik 102, 1–38 (2005)
Bai, Z.-Z., Wang, Z.-Q.: On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl. 428, 2900–2932 (2008)
Bai, Z.-Z., Yang, X.: On HSS-based iteration methods for weakly nonlinear systems. Appl. Numer. Math. 59, 2923–2936 (2009)
Benzi, M., Golub, G.H.: A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 26, 20–41 (2004)
Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numerica 14, 1–137 (2005)
Chen, M.-H., Wu, Q.-B.: On modified Newton-DGPMHSS method for solving nonlinear systems with complex symmetric Jacobian matrices. Comput. Math. Appl. 76, 45–57 (2018)
da Cunha, R.D., Becker, D.: Dynamic block GMRES: an iterative method for block linear systems. Adv. Comput. Math. 27, 423–448 (2007)
Darvishi, M.T., Barati, A.: A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 187, 630–635 (2007)
Dembo, R., Eisenstat, S., Steihaug, T.: Inexact Newton method. SIAM J. Numer. Anal. 19, 400–408 (1982)
Deuflhard, P.: Newton Methods for Nonlinear Problems. Springer-Verlag, Berlin, Heidelberg (2004)
Edalatpour, V., Hezari, D., Salkuyeh, D.K.: Accelerated generalized SOR method for a class of complex systems of linear equations. Math. Commun. 20, 37–52 (2015)
Elman, H.C.: Preconditioners for saddle point problems arising in computational fluid dynamics. Appl. Numer. Math. 43, 75–89 (2002)
Elman, H.C., Silvester, D.J., Wathen, A.: Finite Elements and Fast Iterative Solvers: with Applications in Incompressible Fluid Dynamics. Oxford University Press, Oxford (2014)
Feng, Y.-Y., Wu, Q.-B.: MN-PGSOR method for solving nonlinear systems with block two-by-two complex symmetric Jacobian matrices. J. Math. 2021, 1–18 (2021)
Feriani, A., Perotti, F., Simoncini, V.: Iterative system solvers for the frequency analysis of linear mechanical systems. Comp. Methods Appl. Mech. Eng. 190, 1719–1739 (2000)
Guo, X.-P., Duff, I.S.: Semilocal and global convergence of the Newton-HSS method for systems of nonlinear equations. Numer. Linear Algebra Appl. 18, 299–315 (2011)
Hezari, D., Edalatpour, V., Salkuyeh, D.K.: Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations. Numer. Linear Algebra Appl. 22, 761–776 (2015)
Huang, Z.-G., Wang, L.-G., Xu, Z., Cui, J.-J.: Preconditioned accelerated generalized successive overrelaxation method for solving complex symmetric linear systems. Comput. Math. Appl. 77, 1902–1916 (2019)
Karlsson, H.O.: The quasi-minimal residual algorithm applied to complex symmetric linear systems in quantum reactive scattering. J. Chem. Phys. 103, 4914–4919 (1995)
Kuramoto, Y.: Chemical Oscillations, Waves, and Turbulence. Dover, Mineola (2003)
Li, C.-X., Wu, S.-L.: A double-parameter GPMHSS method for a class of complex symmetric linear systems from Helmholtz equation. Math. Prob. Eng. 2014, 1–7 (2014)
Li, Y., Guo, X.-P.: Semilocal convergence analysis for MMN-HSS methods under the Hölder conditions. East Asia J. Appl. Math. 7, 396–416 (2017)
Li, Y., Guo, X.-P.: Multi-step modified Newton-HSS methods for systems of nonlinear equations with positive definite Jacobian matrices. Numer. Algorithms 75, 55–80 (2017)
Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Society for Industrial and Applied Mathematics, Philadelphia, PA (2000)
Pan, J.-Y., Ng, M.K., Bai, Z.-Z.: New preconditioners for saddle point problems. Appl. Math. Comput. 172, 762–771 (2006)
Papp, D., Vizvari, B.: Effective solution of linear Diophantine equation systems with an application in chemistry. J. Math. Chem. 39, 15–31 (2006)
Qi, X., Wu, H.-T., Xiao, X.-Y.: Modified Newton-GSOR method for solving complex nonlinear systems with symmetric Jacobian matrices. Comput. Appl. Math. 39, 1–18 (2020)
Qi, X., Wu, H.-T., Xiao, X.-Y.: Modified Newton-AGSOR method for solving nonlinear systems with block two-by-two complex symmetric Jacobian matrices. Calcolo 57, 1–23 (2020)
Raviart, P.A., Girault, V.: Finite Element Approximation of the Navier-Stokes Equations. Springer-Verlag, Berlin, New York (1979)
Rees, T., Dollar, H.S., Wathen, A.J.: Optimal solvers for PDE-constrained optimization. SIAM J. Sci. Comput. 32, 271–298 (2010)
Rees, T., Stoll, M.: Block-triangular preconditioners for PDE-constrained optimization. Numer. Linear Algebra Appl. 17, 977–996 (2010)
Rheinboldt, W.C.: Methods for Solving Systems of Nonlinear Equations. Society for Industrial and Applied Mathematics, Philadelphia, PA (1998)
Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. Society for Industrial and Applied Mathematics, Philadelphia, PA (2003)
Salkuyeh, D.K., Hezari, D., Edalatpour, V.: Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comp. Math. 92, 802–815 (2015)
Sulem, C., Sulem, P.L.: The Nonlinear Schrödinger Equation: Self-Focusing and Wave Collapse. Springer, New York (2007)
Wang, J., Guo, X.-P., Zhong, H.-X.: Accelerated GPMHSS method for solving complex systems of linear equations. East Asia J. Appl. Math. 7, 143–155 (2017)
Wang, J., Guo, X.-P., Zhong, H.-X.: DPMHSS iterative method for systems of nonlinear equations with block two-by-two complex Jacobian matrices. Numer. Algorithms 77, 167–184 (2018)
Wu, Q.-B., Chen, M.-H.: Convergence analysis of modified Newton-HSS method for solving systems of nonlinear equations. Numer. Algorithms 64, 659–683 (2013)
Xiao, X.-Y., Wang, X., Yin, H.-W.: Efficient single-step preconditioned HSS iteration methods for complex symmetric linear systems. Comput. Math. Appl. 74, 2269–2280 (2017)
Xie, F., Lin, R.-F., Wu, Q.-B.: Modified Newton-DSS method for solving a class of systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algorithms 85, 951–975 (2020)
Yang, A.-L., Wu, Y.-J.: Newton-MHSS methods for solving systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algebra Control Optim. 2, 839–853 (2012)
Zhong, H.-X., Chen, G.-L., Guo, X.-P.: On preconditioned modified Newton-MHSS method for systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algorithms 69, 553–567 (2015)
Zhu, M.-Z., Zhang, G.-F.: A class of iteration methods based on HSS for Toeplitz systems of weakly nonlinear equations. J. Comput. Appl. Math. 290, 433–444 (2015)
Acknowledgements
The authors would like to thank the two anonymous referees for their responsible and valuable comments, which eventually led to an improved presentation of this article.
Funding
This work is supported by the National Natural Science Foundation of China with Grant Nos. 12161030 and 12171216.
Ethics declarations
Conflict of Interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Cite this article
Ma, R., Wu, YJ. & Song, LJ. Modified Newton-PAGSOR Method for Solving Nonlinear Systems with Complex Symmetric Jacobian Matrices. Commun. Appl. Math. Comput. (2024). https://doi.org/10.1007/s42967-024-00410-0
Keywords
- Preconditioned accelerated generalized successive overrelaxation (PAGSOR)
- Complex symmetric Jacobian matrix
- Large sparse nonlinear systems
- Modified Newton-PAGSOR (MN-PAGSOR) method
- Local convergence