Abstract
The double-step scale splitting (DSS) iteration method has been proved to be unconditionally convergent, and it is efficient and robust for solving a class of large sparse complex symmetric systems of linear equations. In this paper, by making use of the DSS iteration technique as the inner solver to approximately solve the Newton equations, we establish a new modified Newton-DSS method for solving systems of nonlinear equations whose Jacobian matrices are large, sparse, and complex symmetric. We then investigate the local and semilocal convergence properties of our method under some proper assumptions. Finally, numerical results on several problems illustrate the superiority of our method over some previous methods.
1 Introduction
We assume that \(F: \mathbb {D}\subset \mathbb {C}^{n}\rightarrow \mathbb {C}^{n}\) is a continuously differentiable mapping defined on an open convex subset \(\mathbb{D}\) of the n-dimensional complex linear space \(\mathbb {C}^{n}\) and consider the iterative solution of the large sparse system of nonlinear equations
$$ F(x)=0. \qquad (1.1) $$
The Jacobian matrix of F(x) is large, sparse, and complex symmetric, i.e.,
$$ F^{\prime}(x)=W(x)+iT(x), \qquad (1.2) $$
where the matrices W(x) and T(x) are both symmetric and real positive definite, which implies that the complex matrix \(F^{\prime }(x)\) is nonsingular. Here \(i=\sqrt {-1}\) is the imaginary unit. Such nonlinear equations arise in many practical applications, such as nonlinear waves, quantum mechanics, chemical oscillations, and turbulence (see [1,2,3,4]).
To our knowledge, the inexact Newton method [5] is the most classic and popular iteration method for solving systems of nonlinear equations; it can be formulated as
$$ x_{k+1}=x_{k}+s_{k}, \qquad F^{\prime}(x_{k})s_{k}=-F(x_{k})+r_{k}, \qquad k=0,1,2,\ldots, $$
where \(x_{0}\in \mathbb {D}\) is a given initial vector and \(r_{k}\) is the residual yielded by the inner iteration. Obviously, it is the variant of Newton's method in which the so-called Newton equation
$$ F^{\prime}(x_{k})s_{k}=-F(x_{k}) $$
is solved approximately at each iteration. In particular, when the scale of problems is large, linear iterative methods are commonly applied to compute the approximation solution. For example, the Newton-Krylov subspace methods [6], which make use of Krylov subspace methods as inner iterations to solve the Newton equations, have been widely studied and successfully used.
Recently, based on the Hermitian and skew-Hermitian splitting (HSS) iteration method [7] and the special structure of the complex matrix, Bai et al. proposed a modified Hermitian and skew-Hermitian splitting (MHSS) iteration method [8] and its preconditioned version, the PMHSS iteration method [9], for complex linear systems. Because of their elegant properties and high efficiency, HSS-like iteration methods for complex linear systems have been extended in many papers (see [10,11,12,13,14,15,16]). Among them, a double-step scale splitting (DSS) iteration scheme [16] was established for solving the complex symmetric linear system
$$ Ax=b, \qquad A=W+iT, $$
with the matrices W and T both symmetric and real positive definite. It is not only unconditionally convergent but also behaves better than the PMHSS iteration method.
By utilizing the HSS iteration method as the inner iteration, Bai and Guo [17] presented the Newton-HSS method for solving systems of nonlinear equations with non-Hermitian positive definite Jacobian matrices, and proved its local convergence theorem. Later, Guo and Duff [18] analyzed the semilocal convergence properties of the Newton-HSS method. Focusing on the efficiency of the outer iteration, Wu and Chen [19] established the modified Newton-HSS method by utilizing the modified Newton method as the outer iteration instead of the Newton method, and proved its local and semilocal convergence properties. Subsequently, Chen et al. [20] gave a new convergence theorem for the modified Newton-HSS method under the Hölder continuity condition, which is weaker than the Lipschitz continuity condition. Up to now, a series of papers has demonstrated the feasibility of combining the modified Newton method with other HSS-like iteration methods (see [21,22,23,24,25,26]).
Nevertheless, when the Jacobian matrices (1.2) are complex, the convergence speed of the Newton-HSS method degrades significantly, since solving the linear systems requires complex arithmetic. To overcome this deficiency, Yang and Wu [27] and Zhong et al. [28] presented the Newton-MHSS method and the modified Newton-PMHSS method, respectively. Inspired by the above ideas, it is natural to combine the DSS iteration method, as the inner solver, with the modified Newton method, as the outer solver. As a result, we construct a modified Newton-DSS method for systems of nonlinear equations with complex symmetric Jacobian matrices. Under some reasonable assumptions, we discuss the local and semilocal convergence theorems of the modified Newton-DSS method. Finally, we examine the feasibility and efficiency of our method through several numerical examples.
The organization of this paper is as follows. In Section 2, we introduce the modified Newton-DSS method. The local and semilocal convergence properties of the MNDSS method are established under some suitable assumptions in Sections 3 and 4, respectively. Some numerical results are given in Section 5 to illustrate the advantages of our method over the modified Newton-MHSS method and even the modified Newton-PMHSS method. Finally, in Section 6, we give some brief conclusions.
2 The modified Newton-DSS method
Firstly, let us review some standard facts about the double-step scale splitting (DSS) iteration method [16]. Consider the iterative solution of the following linear system
$$ Ax=b, \qquad (2.1) $$
where A is a complex symmetric matrix of the form
$$ A=W+iT, \qquad (2.2) $$
and \(W,T\in \mathbb {R}^{n\times n}\) are both positive definite and symmetric. Based on the special structure of the coefficient matrix A, Bai et al. [8] designed a modification of the Hermitian and skew-Hermitian splitting (HSS) iteration method [7], called the MHSS iteration method. Subsequently, a preconditioned MHSS (PMHSS) iteration method was derived in [9]. To improve the convergence rate of the PMHSS method, many researchers have developed further efficient iteration methods.
Recently, by picking up the idea of symmetry behind the PMHSS method and using a scaling technique to reconstruct the complex linear system, Zheng et al. [16] designed a double-step scale splitting (DSS) iteration method. In this method, multiplying both sides of the complex linear system (2.1) by the parameters (α − i) and (1 − iα), respectively, yields two fixed-point equations, i.e.,
$$ (\alpha W+T)x=i(W-\alpha T)x+(\alpha-i)b $$
and
$$ (W+\alpha T)x=i(\alpha W-T)x+(1-i\alpha)b. $$
Based on the above matrix splittings, two iteration schemes can be constructed:
$$ (\alpha W+T)x^{(k+1)}=i(W-\alpha T)x^{(k)}+(\alpha-i)b $$
and
$$ (W+\alpha T)x^{(k+1)}=i(\alpha W-T)x^{(k)}+(1-i\alpha)b. $$
Similar to the construction of the PMHSS iteration method, the DSS iteration method is obtained by alternating between the above iterations for solving the complex symmetric linear system (2.1). It is described as follows:
$$ \left\{\begin{array}{l} (\alpha W+T)x^{(k+1/2)}=i(W-\alpha T)x^{(k)}+(\alpha-i)b,\\ (W+\alpha T)x^{(k+1)}=i(\alpha W-T)x^{(k+1/2)}+(1-i\alpha)b. \end{array}\right. $$
Since W and T are symmetric positive definite and \(\alpha \in \mathbb {R}\) is positive, the matrices αW + T and αT + W are both symmetric and positive definite. Therefore, the two subsystems involved in each step of the DSS iteration method can be solved effectively, either exactly by a sparse Cholesky factorization or inexactly by a preconditioned conjugate gradient (PCG) scheme. Theoretical analysis has established the unconditional convergence of the DSS iteration method and provided two reciprocal optimal iteration parameters. Moreover, the DSS iteration method is superior to the PMHSS iteration method in terms of iteration counts and CPU time in some numerical examples (for details, see [16]).
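As a concrete illustration, the following minimal dense-matrix sketch implements the two half-steps above; the function name `dss_solve` and the dense `np.linalg.solve` calls are our own illustrative choices (in practice, the two real SPD subsystems would be solved by sparse Cholesky or PCG, as described above).

```python
import numpy as np

def dss_solve(W, T, b, alpha=1.0, tol=1e-10, maxit=500):
    """DSS iteration for (W + iT) x = b with W, T symmetric positive
    definite and alpha > 0.  Each step solves two real SPD systems:
        (alpha*W + T) x_{k+1/2} = i (W - alpha*T) x_k       + (alpha - i) b
        (W + alpha*T) x_{k+1}   = i (alpha*W - T) x_{k+1/2} + (1 - i*alpha) b
    Dense solves stand in for the sparse Cholesky / PCG solves of the text."""
    A = W + 1j * T
    M1 = alpha * W + T      # real symmetric positive definite
    M2 = W + alpha * T      # real symmetric positive definite
    x = np.zeros(len(b), dtype=complex)
    bnorm = np.linalg.norm(b)
    for k in range(maxit):
        x_half = np.linalg.solve(M1, 1j * (W - alpha * T) @ x + (alpha - 1j) * b)
        x = np.linalg.solve(M2, 1j * (alpha * W - T) @ x_half + (1 - 1j * alpha) * b)
        if np.linalg.norm(A @ x - b) <= tol * bnorm:
            break
    return x, k + 1
```

Since both subsystem matrices are real symmetric positive definite, a single Cholesky factorization of each can be reused across all iterations at a fixed α.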
After straightforward operations, the above DSS iteration method can be equivalently reformulated in the standard form
$$ x^{(k+1)}=M(\alpha)x^{(k)}+N(\alpha)b, \qquad k=0,1,2,\ldots, $$
where
$$ M(\alpha)=-(W+\alpha T)^{-1}(\alpha W-T)(\alpha W+T)^{-1}(W-\alpha T) $$
and N(α) is the corresponding constant matrix determined by the two half-steps.
Hence, the unconditional convergence property of the DSS iteration method given in [16] can be described as follows.
Theorem 2.1
Let \(A=W+iT \in \mathbb {C}^{n\times n}\) be a nonsingular matrix with W and T both symmetric and positive definite, and let α be a positive constant. Then the spectral radius of the DSS iteration matrix satisfies
$$ \rho(M(\alpha))\le\max_{\lambda\in sp(W^{-1}T)}\frac{|\alpha-\lambda|\,|\alpha\lambda-1|}{(\alpha+\lambda)(\alpha\lambda+1)}, $$
where sp(W− 1T) denotes the spectrum of the matrix W− 1T. Consequently,
$$ \rho(M(\alpha))<1 \quad \text{for any}~\alpha>0, $$
so the DSS iteration method converges to the unique solution of the linear system (2.1) for any initial guess.
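The unconditional convergence in Theorem 2.1 is easy to check numerically. The sketch below forms the iteration matrix M(α) obtained by eliminating the half-step (our own derivation from the two half-steps of the DSS scheme) and verifies ρ(M(α)) < 1 for several positive α on a randomly generated SPD pair W, T.

```python
import numpy as np

def dss_iteration_matrix(W, T, alpha):
    # Eliminating the half-step gives
    #   M(alpha) = (W+aT)^{-1} i(aW-T) (aW+T)^{-1} i(W-aT)
    #            = -(W+aT)^{-1}(aW-T)(aW+T)^{-1}(W-aT),  a real matrix.
    return -np.linalg.solve(
        W + alpha * T,
        (alpha * W - T) @ np.linalg.solve(alpha * W + T, W - alpha * T))

# Random SPD test pair W, T (shifted Wishart matrices)
rng = np.random.default_rng(1)
n = 12
B = rng.standard_normal((n, n)); W = B @ B.T + n * np.eye(n)
C = rng.standard_normal((n, n)); T = C @ C.T + n * np.eye(n)

for alpha in (0.1, 0.5, 1.0, 2.0, 10.0):
    rho = np.abs(np.linalg.eigvals(dss_iteration_matrix(W, T, alpha))).max()
    assert rho < 1.0   # spectral radius below 1 for every alpha > 0 tried
```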
Now, inspired by the MN-HSS method [19], which utilizes the modified Newton iteration
$$ \left\{\begin{array}{l} y_{k}=x_{k}-F^{\prime}(x_{k})^{-1}F(x_{k}),\\ x_{k+1}=y_{k}-F^{\prime}(x_{k})^{-1}F(y_{k}), \end{array}\right. \qquad k=0,1,2,\ldots, $$
as the outer iteration and has R-order of convergence at least three, we can establish a new method, named the modified Newton-DSS method, in which the DSS iteration method is applied as the inner iteration. That is, we employ the DSS iteration method to solve the following linear systems:
$$ F^{\prime}(x_{k})d_{k}=-F(x_{k}), \qquad F^{\prime}(x_{k})h_{k}=-F(y_{k}). $$
Then, the MN-DSS method for solving the nonlinear system (1.1) with complex symmetric Jacobian matrices is obtained, as Algorithm 2 shows.
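For readers who wish to experiment, here is a hedged sketch of the outer modified Newton iteration. The helper `modified_newton` and the exact dense inner solve are illustrative stand-ins: in the MN-DSS method of the paper, the two linear solves per step are carried out inexactly by the DSS inner iteration.

```python
import numpy as np

def modified_newton(F, J, x0, inner_solve=None, tol=1e-12, maxit=50):
    """Modified Newton outer iteration (two solves per Jacobian evaluation):
        y_k     = x_k - J(x_k)^{-1} F(x_k)
        x_{k+1} = y_k - J(x_k)^{-1} F(y_k)
    inner_solve(Jk, rhs) plays the role of the (inexact) inner linear
    solver; a dense direct solve stands in for the DSS iteration here."""
    if inner_solve is None:
        inner_solve = np.linalg.solve
    x = np.asarray(x0, dtype=complex)
    for k in range(maxit):
        Jk = J(x)                      # Jacobian frozen for both half-steps
        y = x - inner_solve(Jk, F(x))
        x = y - inner_solve(Jk, F(y))
        if np.linalg.norm(F(x)) <= tol:
            break
    return x, k + 1
```

A small usage example (the test problem F(x) = Ax + 0.01 x² − b with A = W + iT is our own, not one from the paper): choose a random SPD pair W, T, pick a target solution, and run the iteration from the zero vector.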
From the iterative scheme (2.3), the modified Newton-DSS method can be rewritten in the following equivalent form after some uncomplicated derivations:
where
Define matrices B(α;x) and C(α;x) by
An easy computation shows that the Jacobian matrix \(F^{\prime }(x)\) possesses a new expression as
and
By (2.7) and the Neumann lemma, the modified Newton-DSS method can be equivalently expressed in the following form:
3 Local convergence theorem of the modified Newton-DSS method
In this section, we analyze the local convergence property of the modified Newton-DSS method and prove the local convergence theorem. First of all, we summarize without proofs the relevant definitions and lemmas.
Definition 3.1
A nonlinear mapping \(F:\mathbb {D} \subset \mathbb {C}^{n} \rightarrow \mathbb {C}^{n}\) is Gateaux differentiable (or G-differentiable) at an interior point x of \(\mathbb {D}\) if there exists a linear operator \(A\in L(\mathbb {C}^{n})\) such that
$$ \lim_{t\rightarrow 0}\frac{1}{t}\|F(x+th)-F(x)-tAh\|=0 $$
for any \(h\in \mathbb {C}^{n}\). Moreover, \(F: \mathbb {D}\subset \mathbb {C}^{n}\rightarrow \mathbb {C}^{n}\) is said to be G-differentiable on an open set \(\mathbb {D}_{0} \subset \mathbb {D}\) if it is G-differentiable at every point of \(\mathbb {D}_{0}\).
Lemma 3.1
(Neumann Lemma) Let \(A\in \mathbb {C}^{n\times n}\) satisfy ∥A∥ < 1. Then (I − A)− 1 exists and
$$ \|(I-A)^{-1}\|\le\frac{1}{1-\|A\|}. $$
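A small numeric illustration of the Neumann lemma (the matrix and truncation length below are arbitrary choices of ours): for ∥A∥ < 1, the truncated Neumann series approaches (I − A)⁻¹ and the norm bound 1/(1 − ∥A∥) holds.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8))
A *= 0.5 / np.linalg.norm(A, 2)            # rescale so that ||A||_2 = 0.5 < 1
inv = np.linalg.inv(np.eye(8) - A)

S, P = np.zeros((8, 8)), np.eye(8)         # running series sum and power A^k
for _ in range(60):
    S += P                                 # S = sum_{k=0}^{m} A^k
    P = P @ A
assert np.linalg.norm(S - inv, 2) < 1e-12  # series converges to the inverse
assert np.linalg.norm(inv, 2) <= 1 / (1 - 0.5) + 1e-12   # norm bound holds
```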
Lemma 3.2
(Banach Lemma) Let \(A, B\in \mathbb {C}^{n\times n}\) satisfy ∥I − BA∥ < 1. Then the matrices A,B are nonsingular. Moreover,
and
Assume that \(F:\mathbb {D}\subset \mathbb {C}^{n}\rightarrow \mathbb {C}^{n}\) is G-differentiable on an open neighborhood \(N_{0}\subset \mathbb {D}\) of a point \(x_{*}\in \mathbb {D}\) at which the Jacobian matrix \(F^{\prime }(x)\) is continuous, positive definite, and complex symmetric, and F(x∗) = 0. Split \(F^{\prime }(x)\) as \(F^{\prime }(x)=W(x)+iT(x),\) where W(x) and T(x) are both real positive definite and symmetric for any \(x\in \mathbb {D}\). Denote by \(\mathbb {N}(x_{*},r)\) the open ball centered at x∗ with radius r > 0.
Assumption 3.1
For all \(x\in \mathbb {N}(x_{*},r)\subset \mathbb {N}_{0}\), suppose that the following conditions hold. (THE BOUNDED CONDITION) there exist positive constants β, γ and δ such that
(THE LIPSCHITZ CONDITION) there exist nonnegative constants Lw and Lt such that
In the following, we will prove the local convergence of our method.
Lemma 3.3
If \(r\in (0,\frac {1}{\gamma L})\) and Assumption 3.1 holds, then \(F^{\prime }(x)^{-1}\) exists for any \(x\in \mathbb {N}(x_{*},r)\subset \mathbb {N}_{0}\). Moreover, the following inequalities hold with L := Lw + Lt for all \(x,y\in \mathbb {N}(x_{*},r)\):
Proof
From the Lipschitz condition, it is directly implied that
Moreover, the condition r ∈ (0,1/γL) suggests that
It follows from Lemma 3.2 that \(F^{\prime }(x)^{-1}\) exists, and
In addition, since the definition of the integral shows
and the bounded condition results in
we obtain
Clearly, it holds that
Hence,
The proof of Lemma 3.3 is completed. □
Lemma 3.4
Under the conditions of Lemma 3.3, define \(r_{0}:=\min \limits \{r_{1},r_{2}\}\) and let r ∈ (0,r0), where r1 is the minimal positive solution of the quadratic equation
and
with \(u=\min \limits \{l_{*},m_{*}\},~l_{*}=\lim \inf _{k\rightarrow \infty }l_{k}, ~m_{*}=\lim \inf _{k\rightarrow \infty }m_{k}\), and the constant u satisfies
where the symbol ⌈∙⌉ denotes the smallest integer no less than the corresponding real number, \(\tau \in (0,\frac {1-\theta }{\theta })\) is a prescribed positive constant, and
In addition, we utilize the notation
Then, for any \(x\in \mathbb {N}(x_{*},r)\subset \mathbb {N}_{0}, t\in (0,r)\) and v > u, it holds that
Proof
According to the bounded condition, the equality (2.8) and the fact
under some moderate conditions, it holds that
Firstly, from the bounded condition, we have
It follows from the Assumption 3.1 that we can further obtain
Moreover, we have
and
Since
we see at once that
provided r is small enough that γL∥x − x∗∥ < 1, which results in ∥I − (W(x∗) − iT(x∗))− 1(W(x) − iT(x))∥ < 1. It follows immediately that
On account of the above proof, we can easily get
Hence,
Likewise, we have
Consequently, by making use of the Banach lemma, i.e., Lemma 3.2, it holds that
for all \(x\in \mathbb {N}(x_{*},r)\), provided r is small enough such that
which results in ∥I − B(α;x∗)− 1B(α;x)∥ < 1. From (2.8), we immediately get the equality
Based on (3.1), (3.2), and (3.3), we can easily obtain that
Meanwhile, r < r1 implies that
Hence
Furthermore, since t ∈ (0,r) and r < r2, it is obvious that
□
Theorem 3.1
Under the conditions of Lemmas 3.3 and 3.4, for any \(x_{0}\in \mathbb {N}(x_{*},r)\) and any sequences \(\{l_{k}\}^{\infty }_{k=0}, \{m_{k}\}^{\infty }_{k=0}\) of positive integers, the iteration sequence \(\{x_{k}\}^{\infty }_{k=0}\) generated by the modified Newton-DSS method is well-defined and converges to x∗. Moreover, it holds that
Proof
From Lemma 3.3, Lemma 3.4, and (2.7), we can easily obtain that
and
We can further prove that \(\{x_{k}\}^{\infty }_{k=0}\subset \mathbb {N}(x_{*},r)\) converges to x∗ by induction. In fact, for k = 0, we can obtain ∥x0 − x∗∥ < r < r0 and
as \(x_{0}\in \mathbb {N}(x_{*},r)\). Hence \(x_{1}\in \mathbb {N}(x_{*},r)\). Now, suppose that \(x_{n}\in \mathbb {N}(x_{*},r)\) for some positive integer n; then we can straightforwardly deduce the estimate
which shows that \(x_{n+1}\in \mathbb {N}(x_{*},r)\). Moreover, \(x_{n+1}\rightarrow x_{*}\) as \(n\rightarrow \infty \).
The proof of theorem is completed. □
4 Semilocal convergence theorem of the modified Newton-DSS method
In this section, we prove a Kantorovich-type semilocal convergence theorem for the modified Newton-DSS method by utilizing majorizing functions. That is, if we impose some conditions on the initial vector x0, then without requiring prior knowledge of the existence of a solution, an exact solution x∗ of the nonlinear system must exist in some neighborhood of x0.
Assume that \(F:\mathbb {D}\subset \mathbb {C}^{n}\rightarrow \mathbb {C}^{n}\) is G-differentiable on an open neighborhood \(N_{0}\subset \mathbb {D}\) of a point \(x_{0}\in \mathbb {D}\) at which the Jacobian matrix \(F^{\prime }(x)\) is continuous, positive definite, and complex symmetric. Suppose \(F^{\prime }(x)=W(x)+iT(x),\) where W(x) and T(x) are both real positive definite and symmetric for any \(x\in \mathbb {D}\). Denote by \(\mathbb {N}(x_{0},r)\) the open ball centered at x0 with radius r > 0.
Assumption 4.1
Let \(x_{0}\in \mathbb {C}^{n}\) and suppose that the following conditions hold. (THE BOUNDED CONDITION) there exist positive constants β, γ and 𝜖 such that
(THE LIPSCHITZ CONDITION) there exist nonnegative constants Lw and Lt such that for all \(x, y\in \mathbb {N}(x_{0},r)\subset \mathbb {N}_{0}\),
From Assumption 4.1, the Banach lemma, and the integral mean-value theorem, with L := Lw + Lt, we obtain Lemma 4.1 below without detailed proof.
Lemma 4.1
Under Assumption 4.1, for all \(x,y\in \mathbb {N}(x_{0},r)\), if \(r\in (0,\frac {1}{\gamma L})\), then \(F^{\prime }(x)^{-1}\) exists, and the following inequalities hold:
Define
The iterative sequences {tk}, {sk} are generated by the following formulas
where
Now, we claim that the sequences {tk}, {sk} increase monotonically and converge to the same limit, as shown by the following lemma.
Lemma 4.2
Assume that the constants satisfy
Denote \(t_{*}=\frac {b-\sqrt {b^{2}-2ac}}{a}\); then the sequences {tk}, {sk} generated by the formulas (4.1) increase and converge to t∗. Moreover,
Proof
For details, see Lemmas 4.2 and 4.3 in [19]. □
Theorem 4.1
Under the assumptions of Lemmas 4.1 and 4.2, define \(r:=\min \limits (r_{1},r_{2})\), where r1 is the minimal positive solution of the quadratic equation
and
and define \(u=\min \limits \{l_{*},m_{*}\}\), with \(l_{*}=\lim \inf _{k\rightarrow \infty }l_{k},~m_{*}=\lim \inf _{k\rightarrow \infty }m_{k}\), and the constant u satisfies
where the symbol ⌈∙⌉ denotes the smallest integer no less than the corresponding real number, \(\tau \in (0,\frac {1-\theta }{\theta })\) is a prescribed positive constant, and
Then the iteration sequence \(\{x_{k}\}^{\infty }_{k=0}\) generated by the modified Newton-DSS method is well-defined and converges to x∗, which satisfies F(x∗) = 0.
Proof
Firstly, analysis similar to that in the proof of Lemma 3.4 gives the following estimate for the iteration matrix M(α;x) of the linear solver: if \(x\in \mathbb {N}(x_{0},r)\), then
Now we prove the following inequalities by induction:
Since
the inequalities (4.2) hold for k = 0. Suppose that (4.2) holds for all nonnegative integers less than k. We need to prove that it holds for k. For the first inequality in (4.2), we have
Since \(x_{k-1}, y_{k-1}\in \mathbb {N}(x_{0},r)\), by the inequality (2.4) and the inequalities in Lemma 4.1, we have
It follows that
and then
Likewise, we can prove that
and
Hence, the inequalities (4.2) hold for all k. Since the sequences {tk},{sk} converge to t∗ and
the sequence {xk} also converges, say to x∗. Since ∥M(α;x∗)∥ < 1, we have F(x∗) = 0 from the iteration (2.9).
The proof of theorem is completed. □
5 Numerical examples
In this section, we present two examples of complex nonlinear systems of the form (1.1) whose Jacobian matrices have the form (1.2). Using these examples, we illustrate the efficiency of our modified Newton-DSS method (MNDSS) compared with that of the modified Newton-MHSS method (MNMHSS) and the modified Newton-PMHSS method (MNPMHSS), in terms of both the number of iteration steps (denoted by "IT") and the elapsed CPU time in seconds (denoted by "CPU time"). The optimal parameters used in the actual computations are obtained experimentally by minimizing the corresponding iteration counts and the error relative to the exact solution. The experimental results lead to the conclusion that the MNDSS method outperforms both the MNMHSS and MNPMHSS methods.
Example 1
We consider the following nonlinear equations [27]:
where Ω = [0,1] × [0,1] and ∂Ω is the boundary of Ω. The coefficients are α1 = β1 = 1, α2 = β2 = 1, and ϱ is a positive constant used to control the magnitude of the reaction term. By discretizing this equation with the centered finite difference scheme on an equidistant grid with Δt = h = 1/(N + 1), at each temporal step of the implicit scheme we obtain a system of nonlinear equations F(x) = 0 of the following form:
where
with AN = tridiag(− 1,2,− 1) and n = N × N. Here ⊗ is the Kronecker product symbol.
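As a concrete aid, the Kronecker-product coefficient matrix above can be assembled as follows (the helper name `laplacian_2d` is ours; dense NumPy is used for clarity, whereas a sparse format would be used at the problem sizes tested below).

```python
import numpy as np

def laplacian_2d(N):
    """M = I (x) A_N + A_N (x) I with A_N = tridiag(-1, 2, -1) and n = N*N,
    as in the centered finite-difference discretization above."""
    A_N = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    return np.kron(np.eye(N), A_N) + np.kron(A_N, np.eye(N))

M = laplacian_2d(4)   # a small 16 x 16 instance
```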
In our computations, we choose the initial guess to be u0 = 1, the stopping criterion for the outer Newton iteration is set to be
and the prescribed tolerance ηk and \(\tilde {\eta }_{k}\) for controlling the accuracy of the iteration methods are set to be the same value η.
Firstly, we carried out some preliminary work on the parameter α. How to select the optimal parameter α theoretically is an interesting topic for future research; here, we base the choice on experiments. In Tables 1, 2, and 3, we present the experimentally optimal parameters α for the MNMHSS, MNPMHSS, and MNDSS methods, respectively. Subsequently, we adopt these experimentally optimal parameters α for the three methods to solve the nonlinear equation.
In Tables 4, 5, 6, 7, and 8, we display the experimental results for the modified Newton method combined with MHSS, PMHSS, and DSS, corresponding to problem sizes \(N=2^{5},2^{6},2^{7}\), inner tolerances η = 0.1, 0.2, 0.4, and problem parameters ϱ = 1, 10, 200, respectively. From these tables, we observe that all of these iteration methods can compute an approximate solution of the system of nonlinear equations. In particular, the modified Newton-DSS method remarkably outperforms the modified Newton-MHSS and even the modified Newton-PMHSS method from the point of view of the number of iterations and CPU time. Here, the number of outer iterations and the total number of inner iterations are denoted by Outer IT and Inner IT, respectively.
Moreover, we see that the Inner IT counts for the MNDSS method remain almost constant as the problem size grows, which indicates the same scalability that the MNPMHSS method possesses. In fact, its CPU time and Inner IT counts are both almost half those of the MNPMHSS method. Although both methods handle the problems more efficiently than the MNMHSS method, the MNDSS method works considerably better than the MNPMHSS method in terms of both iteration counts and CPU time.
Example 2
The second test is for the complex nonlinear Helmholtz equation:
with σ1 and σ2 being real coefficient functions. Here, u is subject to homogeneous Dirichlet boundary conditions on the square Ω = [0,1] × [0,1]. We discretize the problem with finite differences on an N × N grid with mesh size h = 1/(N + 1). This leads to a system of nonlinear equations F(x) = 0 of the following form:
where
with the matrix \(K\in \mathbb {R}^{n\times n}\) possessing the tensor-product form
For the numerical tests, we set σ1 = 100 and σ2 = 1000. In addition, the initial guess is chosen as x0 = 0 and the iteration is terminated once the current xk satisfies
The prescribed tolerances are \(\eta _{k}=\tilde {\eta }_{k}\equiv \eta =0.1, 0.2, 0.4\) and the problem sizes are N = 30, 60, 90, respectively. We now solve the nonlinear problem by the MNMHSS, MNPMHSS, and MNDSS methods and report the experimental results, comparing the elapsed CPU times and the numbers of outer and inner iterations.
We reselect the experimentally optimal parameters α for the three iteration methods (see Table 9). The numerical results are displayed in Tables 10, 11, and 12. From these tables, we draw the same conclusion as in the previous example: MNDSS and MNPMHSS are far superior to the MNMHSS method, while our method performs more efficiently than the MNPMHSS method in terms of CPU time and the number of iterations.
6 Conclusion
In this paper, by utilizing the double-step scale splitting (DSS) iteration method as the inner iteration and employing the modified Newton method as the outer iteration, we have established a modified Newton-DSS (MNDSS) method for the solution of nonlinear systems whose Jacobian matrices are large, sparse, and complex symmetric. Many feasible inner solvers exist for complex symmetric linear systems, such as MHSS, PMHSS, and related variants; among them, the DSS method is competitive, with the result that its combination with the modified Newton method, i.e., the MNDSS method, works better. We have also proved local and semilocal convergence theorems for the modified Newton-DSS method. Finally, numerical experiments with experimentally chosen parameters demonstrate its effectiveness. In fact, throughout the example section we found that determining the experimentally optimal parameters α takes considerable time. In future work, we intend to address this challenging problem, i.e., how to choose the optimal parameters theoretically.
References
Bohr, T., Hensen, M.H., Paladin, G., Vulpiani, A.: Dynamical Systems Approach to Turbulence. Cambridge University Press, Cambridge (1998)
Sulem, C., Sulem, P.L.: The Nonlinear Schrödinger Equation, Self-focusing and Wave Collapse. Springer, New York (1999)
Aranson, I.S., Kramer, L.: The world of the complex Ginzburg-Landau equation. Rev. Mod. Phys. 74, 99–143 (2002)
Kuramoto, Y.: Oscillations, Chemical Waves, and Turbulence. Dover, Mineola (2003)
Dembo, R.S., Eisenstat, S.C., Steihaug, T.: Inexact Newton methods. SIAM J. Numer. Anal. 19, 400–408 (1982)
Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. SIAM, Philadelphia (2003)
Bai, Z.Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603–626 (2003)
Bai, Z.Z., Benzi, M., Chen, F.: Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87, 93–111 (2010)
Bai, Z.Z., Benzi, M., Chen, F.: On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algor. 56, 297–317 (2011)
Hezari, D., Salkuyeh, D.K., Edalatpour, V.: A new iterative method for solving a class of complex symmetric system of linear equations. Numer. Algor. 73, 927–955 (2016)
Wang, T., Zheng, Q.Q., Lu, L.Z.: A new iteration method for a class of complex symmetric linear systems. J. Comput. Appl. Math. 325, 188–197 (2017)
Xiao, X.Y., Yin, H.W.: Efficient parameterized HSS iteration methods for complex symmetric linear systems. Comput. Math. Appl. 73, 87–95 (2017)
Huang, Z.G., Wang, L.G., Xu, Z., Cui, J.J.: An efficient two-step iterative method for solving a class of complex symmetric linear systems. Comput. Math. Appl. 75, 2473–2498 (2018)
Li, C.L., Ma, C.F.: On Euler-extrapolated Hermitian/skew-Hermitian splitting method for complex symmetric linear systems. Appl. Math. Lett. 86, 42–48 (2018)
Xiao, X.Y., Wang, X.: A new single-step iteration method for solving complex symmetric linear systems. Numer. Algor. 78, 643–660 (2018)
Zheng, Z., Huang, F.L., Peng, Y.C.: Double-step scale splitting iteration method for a class of complex symmetric linear systems. Appl. Math. Lett. 73, 91–97 (2017)
Bai, Z.Z., Guo, X.P.: On Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices. J. Comput. Math. 28, 235–260 (2010)
Guo, X.P., Duff, I.S.: Semilocal and global convergence of the Newton-HSS method for systems of nonlinear equations. Numer. Linear Algebra Appl. 18, 299–315 (2011)
Wu, Q.B., Chen, M.H.: Convergence analysis of modified Newton-HSS method for solving systems of nonlinear equations. Numer. Algor. 64, 659–683 (2013)
Chen, M.H., Lin, R.F., Wu, Q.B.: Convergence analysis of the modified Newton-HSS method under the Hölder continuous condition. J. Comput. Appl. Math. 264, 115–130 (2014)
Li, Y., Guo, X.P.: Multi-step modified Newton-HSS methods for systems of nonlinear equations with positive definite Jacobian matrices. Numer. Algor. 75, 55–80 (2017)
Wang, J., Guo, X.P., Zhong, H.X.: MN-DPMHSS iteration method for systems of nonlinear equations with block two-by-two complex Jacobian matrices. Numer. Algor. 77, 167–184 (2018)
Dai, P.F., Wu, Q.B., Chen, M.H.: Modified Newton-NSS method for solving systems of nonlinear equations. Numer. Algor. 77, 1–21 (2018)
Li, Y.M., Guo, X.P.: On the accelerated modified Newton-HSS method for systems of nonlinear equations. Numer. Algor. 79, 1049–1073 (2018)
Chen, M.H., Wu, Q.B.: Modified Newton-MDPMHSS method for solving nonlinear systems with block two-by-two complex symmetric Jacobian matrices. Numer. Algor. 80, 355–375 (2019)
Xie, F., Wu, Q.B., Dai, P.F.: Modified Newton-SHSS method for a class of systems of nonlinear equations. Comp. Appl. Math. 38, 19 (2019). https://doi.org/10.1007/s40314-019-0793-9
Yang, A.L., Wu, Y.J.: Newton-MHSS methods for solving systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algebra, Control Optim. 2, 839–853 (2012)
Zhong, H.X., Chen, G.L., Guo, X.P.: On preconditioned modified Newton-MHSS method for systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algor. 69, 553–567 (2015)
Funding
This work is supported by the National Natural Science Foundation of China (Grant Nos. 11771393, 11632015), Zhejiang Natural Science Foundation (Grant No. LZ14A010002), and Science Foundation of Taizhou University (Grant No. 2017PY028).
Xie, F., Lin, RF. & Wu, QB. Modified Newton-DSS method for solving a class of systems of nonlinear equations with complex symmetric Jacobian matrices. Numer Algor 85, 951–975 (2020). https://doi.org/10.1007/s11075-019-00847-y