Abstract
In this paper, a two-step modulus-based matrix splitting iteration method is established for solving horizontal linear complementarity problems. The convergence analysis of the proposed method is presented, including the case of accelerated overrelaxation splittings. Numerical examples are reported to show the efficiency of the proposed method.
1 Introduction
Given \(A,B\in \mathbb {R}^{n\times {n}}\) and \(q\in \mathbb {R}^{n}\), the horizontal linear complementarity problem (abbreviated as HLCP) is to find vectors \(z,r\in \mathbb {R}^{n}\) such that
\[ Az-Br=q,\qquad z\geq 0,\quad r\geq 0,\quad z^{T}r=0, \qquad (1) \]
where for two matrices \(F=(f_{ij}), G=(g_{ij})\in \mathbb {R}^{m\times {n}}\) the order F ≥ (>)G means fij ≥ (>)gij for any i and j.
Clearly, when either A or B is an n × n identity matrix, the HLCP reduces to the well-known linear complementarity problem (abbreviated as LCP) [9]. On the other hand, the HLCP can be viewed as the simplest case of the extended HLCP [8], and it has wide applications in the fields of control theory, hydrodynamic lubrication, nonlinear networks (see [12,13,14,15, 26, 27] for details).
Recently, some matrix splitting-based methods were introduced for solving HLCPs. In particular, projected splitting methods were introduced in [22], while modulus-based matrix splitting (MMS) methods were introduced in [23]. Notably, the MMS method was shown to be more efficient than the interior-point method [21] and the globally convergent projected splitting method. Weaker assumptions and a larger convergence domain for the MMS method were given in [41] when the system matrices are H+-matrices. The convergence of the modulus-based Jacobi (MJ) and modulus-based Gauss-Seidel (MGS) methods was discussed in [25] for the HLCP arising from hydrodynamic lubrication. In recent years, the MMS technique has also been used as a successful tool for LCPs [3,4,5, 10, 18,19,20, 30, 34, 39, 42] and for some classes of nonlinear complementarity problems (NCPs) [17, 29, 32, 35, 36, 38, 40]. On the other hand, without matrix splitting, the HLCP was solved by a nonsmooth Newton method based on the equivalent modulus equation (see [24]). The nonsmooth Newton iteration technique has also been successfully applied to LCPs [33] and NCPs [37], with a locally fast convergence rate.
In this paper, we aim at further accelerating the MMS method for the HLCP. To achieve higher computing efficiency and to make full use of the information contained in the system matrices, the technique of two-step splitting has been used for LCPs and NCPs (see [30, 36, 39] for details). It is therefore interesting to study a two-step MMS method for the HLCP as well. Thus, we first propose the two-step MMS method for the HLCP by employing two-step matrix splittings in Section 2. In Section 3, we present the convergence analysis of the proposed method. Specifically, the accelerated overrelaxation (AOR) splitting is analyzed. The proposed method and theorems are shown to generalize the existing theory of the MMS method. Next, in Section 4, we show by numerical examples that the proposed method is more efficient than the MMS method. Finally, Section 5 concludes this work.
Next, some needed notations, definitions and known results are given.
Let en be the n × 1 vector whose elements are all equal to 1. Let \(A=(a_{ij})\in \mathbb {R}^{n\times {n}}\) and A = DA − LA − UA = DA − CA, where DA, −LA, −UA and −CA denote the diagonal, the strictly lower-triangular, the strictly upper-triangular and the nondiagonal parts of A, respectively. tridiag(a;b;c) denotes a tridiagonal matrix with the vectors a, b, c forming its subdiagonal, main diagonal and superdiagonal entries, respectively, while blktridiag(A;B;C) denotes a block tridiagonal matrix with block matrices I ⊗ A, I ⊗ B, I ⊗ C as its corresponding block entries, where “⊗” is the Kronecker product and I is the identity matrix.
By |A| we denote |A| = (|aij|), and the comparison matrix of A is 〈A〉 = (〈aij〉), where 〈aij〉 = |aij| if i = j and 〈aij〉 = −|aij| if i≠j. A is called a Z-matrix if aij ≤ 0 for any i≠j; a nonsingular M-matrix if it is a nonsingular Z-matrix with A− 1 ≥ 0; and an H-matrix if 〈A〉 is a nonsingular M-matrix (e.g., see [6]). A is called an H+-matrix if A is an H-matrix with aii > 0 for every i (e.g., see [2]). If \(|a_{ii}|>\sum \limits _{j\neq i}|a_{ij}|\) for all 1 ≤ i ≤ n, A is called a strictly diagonally dominant (s.d.d.) matrix. By ρ(A) we denote the spectral radius of A. A = M − N is called an H-splitting if 〈M〉−|N| is a nonsingular M-matrix, and an H-compatible splitting if 〈A〉 = 〈M〉−|N| (e.g., see [31]).
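The definitions above are easy to verify numerically. The following sketch (our own illustration, not part of the paper) builds the comparison matrix 〈A〉 and checks the nonsingular-M-matrix property via the classical characterization A = sI − N, N ≥ 0, ρ(N) < s:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: diagonal |a_ii|, off-diagonal -|a_ij|."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_nonsingular_M_matrix(A, tol=1e-12):
    """A Z-matrix A = s*I - N (N >= 0) is a nonsingular M-matrix iff rho(N) < s."""
    s = np.max(np.diag(A))
    N = s * np.eye(A.shape[0]) - A        # N >= 0 when A is a Z-matrix
    return np.max(np.abs(np.linalg.eigvals(N))) < s - tol

A = np.array([[4.0, -1.0], [-2.0, 3.0]])
print(is_nonsingular_M_matrix(comparison_matrix(A)))  # True: A is an H-matrix
```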
2 Two-step method
First, we introduce the MMS method for solving the HLCP.
Let A = MA − NA,B = MB − NB be two splittings of A and B, respectively. Then, with \(z=\frac {1}{\gamma }(|x|+x)\) and \(r=\frac {1}{\gamma }{\Omega }(|x|-x)\), the HLCP can be equivalently transformed into a system of fixed-point equations
\[ (M_{B}{\Omega}+M_{A})\,x=(N_{B}{\Omega}+N_{A})\,x+(B{\Omega}-A)|x|+\gamma q, \qquad (2) \]
where Ω is a positive diagonal parameter matrix and γ is a positive constant (see [23] for more details). Based on (2), the MMS method is presented as follows:
Method 1
[23] Let \({\Omega }\in \mathbb {R}^{n\times n}\) be a positive diagonal matrix, γ be a positive constant and A = MA − NA,B = MB − NB be two splittings of the matrix \(A\in \mathbb {R}^{n\times n}\) and \(B\in \mathbb {R}^{n\times n}\), respectively. Given an initial vector \(x^{(0)}\in \mathbb {R}^{n}\), compute \(x^{(k+1)}\in \mathbb {R}^{n}\) by solving the linear system
\[ (M_{B}{\Omega}+M_{A})\,x^{(k+1)}=(N_{B}{\Omega}+N_{A})\,x^{(k)}+(B{\Omega}-A)|x^{(k)}|+\gamma q. \]
Then set
\[ z^{(k+1)}=\frac{1}{\gamma}\left(|x^{(k+1)}|+x^{(k+1)}\right),\qquad r^{(k+1)}=\frac{1}{\gamma}\,{\Omega}\left(|x^{(k+1)}|-x^{(k+1)}\right) \]
for k = 0,1,2,…, until the iteration sequence \(\{(z^{(k)},r^{(k)})\}_{k=1}^{+\infty }\) is convergent.
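As an illustration of Method 1, the following minimal NumPy sketch (our own, not the paper's implementation) iterates the fixed-point equation derived from (2), assuming the HLCP form \(Az-Br=q\), Gauss-Seidel-type splittings \(M_A=D_A-L_A\), \(M_B=D_B-L_B\), and a dense direct solve:

```python
import numpy as np

def mms(A, B, q, Omega, gamma=1.0, tol=1e-10, maxit=1000):
    """Modulus-based matrix splitting (Method 1) sketch for Az - Br = q,
    z, r >= 0, z'r = 0, via z = (|x|+x)/gamma, r = Omega(|x|-x)/gamma."""
    MA = np.tril(A); NA = MA - A          # A = MA - NA (Gauss-Seidel type)
    MB = np.tril(B); NB = MB - B          # B = MB - NB
    M = MB @ Omega + MA                   # coefficient matrix of each solve
    x = np.ones_like(q)
    for _ in range(maxit):
        rhs = (NB @ Omega + NA) @ x + (B @ Omega - A) @ np.abs(x) + gamma * q
        x_new = np.linalg.solve(M, rhs)
        done = np.linalg.norm(x_new - x, np.inf) < tol
        x = x_new
        if done:
            break
    z = (np.abs(x) + x) / gamma
    r = Omega @ (np.abs(x) - x) / gamma
    return z, r
```

For a small H+-pair such as \(A=\mathrm{tridiag}(-1;4;-1)\), \(B=I\) with \({\Omega}=D_AD_B^{-1}\), the iteration recovers a complementary pair \((z,r)\) to the prescribed tolerance.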
With the same idea used in LCPs and NCPs, in order to achieve high computing efficiency, we consider making use of the information in the matrices A and B. The two-step modulus-based matrix splitting (TMMS) iteration method for solving the HLCP is presented as follows:
Method 2
Two-step modulus-based matrix splitting iteration method for the HLCP. For any given positive diagonal matrix \({\Omega }\in \mathbb {R}^{n\times n}\) and γ > 0, let \(A=M_{A_{1}}-N_{A_{1}}=M_{A_{2}}-N_{A_{2}}\) be two splittings of the matrix \(A\in \mathbb {R}^{n\times n}\), and let \(B=M_{B_{1}}-N_{B_{1}}=M_{B_{2}}-N_{B_{2}}\) be two splittings of the matrix \(B\in \mathbb {R}^{n\times n}\). Given an initial vector \(x^{(0)}\in \mathbb {R}^{n}\), compute \(x^{(k+1)}\in \mathbb {R}^{n}\) by solving the two linear systems
\[ \left(M_{B_{1}}{\Omega}+M_{A_{1}}\right)x^{(k+\frac{1}{2})}=\left(N_{B_{1}}{\Omega}+N_{A_{1}}\right)x^{(k)}+(B{\Omega}-A)|x^{(k)}|+\gamma q, \]
\[ \left(M_{B_{2}}{\Omega}+M_{A_{2}}\right)x^{(k+1)}=\left(N_{B_{2}}{\Omega}+N_{A_{2}}\right)x^{(k+\frac{1}{2})}+(B{\Omega}-A)|x^{(k+\frac{1}{2})}|+\gamma q. \qquad (3) \]
Then set
\[ z^{(k+1)}=\frac{1}{\gamma}\left(|x^{(k+1)}|+x^{(k+1)}\right),\qquad r^{(k+1)}=\frac{1}{\gamma}\,{\Omega}\left(|x^{(k+1)}|-x^{(k+1)}\right) \]
for k = 0,1,2,…, until the iteration sequence \(\{(z^{(k)},r^{(k)})\}_{k=1}^{+\infty }\) is convergent.
Note that if we take \(M_{A_{1}}=M_{A_{2}}, N_{A_{1}}=N_{A_{2}}, M_{B_{1}}=M_{B_{2}}\) and \(N_{B_{1}}=N_{B_{2}}\), Method 2 reduces to Method 1 by combining the two iterations in (3).
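Method 2 can be sketched analogously to Method 1. The following illustration (our own, under the same assumed HLCP form \(Az-Br=q\)) uses one plausible choice of two-step splittings, a forward then a backward Gauss-Seidel sweep, i.e., the TMGS-type variant:

```python
import numpy as np

def tmms(A, B, q, Omega, gamma=1.0, tol=1e-10, maxit=1000):
    """Two-step MMS (Method 2) sketch: the first sweep uses lower-triangular
    splittings, the second sweep upper-triangular ones (a TMGS-type choice)."""
    MA1, MB1 = np.tril(A), np.tril(B)     # first splittings
    MA2, MB2 = np.triu(A), np.triu(B)     # second splittings
    NA1, NB1 = MA1 - A, MB1 - B
    NA2, NB2 = MA2 - A, MB2 - B
    M1 = MB1 @ Omega + MA1
    M2 = MB2 @ Omega + MA2
    x = np.ones_like(q)
    for _ in range(maxit):
        y = np.linalg.solve(M1, (NB1 @ Omega + NA1) @ x
                            + (B @ Omega - A) @ np.abs(x) + gamma * q)
        x_new = np.linalg.solve(M2, (NB2 @ Omega + NA2) @ y
                                + (B @ Omega - A) @ np.abs(y) + gamma * q)
        done = np.linalg.norm(x_new - x, np.inf) < tol
        x = x_new
        if done:
            break
    z = (np.abs(x) + x) / gamma
    r = Omega @ (np.abs(x) - x) / gamma
    return z, r
```

Each outer iteration performs two triangular solves, which is why the iteration counts of the two-step method are roughly half those of the one-step method in the experiments of Section 4.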
Furthermore, using the same definitions of two-step matrix splittings as in [30], taking
\[ M_{A_{1}}=\frac{1}{\alpha}\left(D_{A}-\beta L_{A}\right),\qquad N_{A_{1}}=\frac{1}{\alpha}\left[(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\right], \]
\[ M_{A_{2}}=\frac{1}{\alpha}\left(D_{A}-\beta U_{A}\right),\qquad N_{A_{2}}=\frac{1}{\alpha}\left[(1-\alpha)D_{A}+(\alpha-\beta)U_{A}+\alpha L_{A}\right], \qquad (4) \]
with the analogous splittings \(M_{B_{1}}, N_{B_{1}}, M_{B_{2}}, N_{B_{2}}\) of \(B\),
we can obtain the two-step modulus-based accelerated overrelaxation (TMAOR) iteration method. Taking α = β and α = β = 1, the TMAOR iteration method reduces to the two-step modulus-based successive overrelaxation (TMSOR) iteration method and the two-step modulus-based Gauss-Seidel (TMGS) iteration method, respectively. Furthermore, to achieve a faster convergence rate, one can generalize the MAOR iteration method by allowing more choices of the relaxation parameters than (4) in the two-step matrix splittings, e.g.,
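The AOR-type two-step splittings can be assembled mechanically. The following sketch (our own; it follows the standard AOR definition with relaxation parameters α, β, which we believe matches the construction referenced from [30]) builds both splitting pairs and verifies A = M − N:

```python
import numpy as np

def aor_splittings(A, alpha, beta):
    """Two-step AOR splittings of A: the first sweep relaxes the strictly
    lower-triangular part, the second the strictly upper-triangular part."""
    D = np.diag(np.diag(A))
    L = -(np.tril(A) - D)                 # A = D - L - U
    U = -(np.triu(A) - D)
    M1 = (D - beta * L) / alpha
    N1 = ((1 - alpha) * D + (alpha - beta) * L + alpha * U) / alpha
    M2 = (D - beta * U) / alpha
    N2 = ((1 - alpha) * D + (alpha - beta) * U + alpha * L) / alpha
    return (M1, N1), (M2, N2)
```

Setting α = β recovers the SOR splittings and α = β = 1 the Gauss-Seidel splittings, mirroring the TMSOR and TMGS special cases above.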
3 Convergence analysis
It is well known that the HLCP has a unique solution for any vector q if and only if {A,B} satisfies the \(\mathcal {W}\)-property [28], i.e., all column-representative matrices of {A,B} have determinants that are either all positive or all negative. As remarked in [23], the case where both A and B are H+-matrices belongs to this class. In the following discussion, the convergence analysis of Method 2 is presented under the assumption that both A and B are H+-matrices, so that the HLCP always has a unique solution.
We present some useful lemmas first.
Lemma 1
[11] Let A be an H-matrix. Then |A− 1|≤〈A〉− 1.
Lemma 2
[16] Let \(B\in {\mathbb {R}^{n\times n}}\) be an s.d.d. matrix. Then, \(\forall C\in {\mathbb {R}^{n\times n}}\), \(||{B^{-1}C}||_{\infty }\leq {\max \limits _{1\leq i\leq n}\frac {(|C|e)_{i}}{(\langle {B}\rangle e)_{i}}}\) holds.
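The bound of Lemma 2 is easy to check numerically. In the following sketch (illustrative data of our own choosing), B is s.d.d. and C is arbitrary:

```python
import numpy as np

B = np.array([[5.0, -1.0], [2.0, 6.0]])   # strictly diagonally dominant
C = np.array([[1.0, 2.0], [0.0, 3.0]])    # arbitrary
e = np.ones(2)

cmpB = -np.abs(B)                         # comparison matrix <B>
np.fill_diagonal(cmpB, np.abs(np.diag(B)))

bound = np.max((np.abs(C) @ e) / (cmpB @ e))        # max_i (|C|e)_i / (<B>e)_i
lhs = np.linalg.norm(np.linalg.inv(B) @ C, np.inf)  # ||B^{-1}C||_inf
# lhs = 0.65625 <= bound = 0.75, consistent with Lemma 2
```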
Lemma 3
[6] Assume that A is a Z-matrix. Then the following statements are equivalent:
- A is an M-matrix;
- there exists a positive diagonal matrix D such that AD is an s.d.d. matrix with positive diagonal entries;
- if A = M − N satisfies \(M^{-1}\geq 0\) and N ≥ 0, then \(\rho (M^{-1}N)<1\).
Let (z∗,r∗) be a solution of the HLCP. Then, by (2) and straightforward computation, we can get that \(x^{\ast } = \frac {\gamma }{2} (z^{\ast } - {\Omega }^{-1}r^{\ast })\) satisfies the implicit fixed-point equations
\[ \left(M_{B_{1}}{\Omega}+M_{A_{1}}\right)x^{\ast}=\left(N_{B_{1}}{\Omega}+N_{A_{1}}\right)x^{\ast}+(B{\Omega}-A)|x^{\ast}|+\gamma q, \]
\[ \left(M_{B_{2}}{\Omega}+M_{A_{2}}\right)x^{\ast}=\left(N_{B_{2}}{\Omega}+N_{A_{2}}\right)x^{\ast}+(B{\Omega}-A)|x^{\ast}|+\gamma q. \qquad (6) \]
By subtracting (6) from (3), we have
\[ x^{(k+\frac{1}{2})}-x^{\ast}=\left(M_{B_{1}}{\Omega}+M_{A_{1}}\right)^{-1}\!\left[\left(N_{B_{1}}{\Omega}+N_{A_{1}}\right)\!\left(x^{(k)}-x^{\ast}\right)+(B{\Omega}-A)\left(|x^{(k)}|-|x^{\ast}|\right)\right], \]
\[ x^{(k+1)}-x^{\ast}=\left(M_{B_{2}}{\Omega}+M_{A_{2}}\right)^{-1}\!\left[\left(N_{B_{2}}{\Omega}+N_{A_{2}}\right)\!\left(x^{(k+\frac{1}{2})}-x^{\ast}\right)+(B{\Omega}-A)\left(|x^{(k+\frac{1}{2})}|-|x^{\ast}|\right)\right]. \qquad (7) \]
To prove \(\lim \limits _{k\rightarrow +\infty } {z^{(k)}}= z^{\ast }\) and \(\lim \limits _{k\rightarrow +\infty } {r^{(k)}}= r^{\ast }\), we need only to prove \(\lim \limits _{k\rightarrow +\infty } {x^{(k)}}= x^{\ast }\).
Theorem 1
Let Ω = (ωjj) be an n × n positive diagonal matrix and \(A,B\in \mathbb {R}^{n\times n}\) be two H+-matrices. Let D be a positive diagonal matrix such that 〈A〉D is an s.d.d. matrix. Assume that:
- (I) \(A=M_{A_{1}}-N_{A_{1}}=M_{A_{2}}-N_{A_{2}}\) and \(B=M_{B_{1}}-N_{B_{1}}=M_{B_{2}}-N_{B_{2}}\) are H-compatible splittings of A and B, respectively;
- (II) |bij|ωjj ≤ |aij| (i≠j) and sign(bij) = sign(aij) (bij≠ 0), for all i, j.
Then the iteration sequence \(\{(z^{(k)},r^{(k)})\}_{k=1}^{+\infty }\) generated by Method 2 converges to the unique solution (z∗,r∗) of the HLCP for any initial vector \(x^{(0)}\in \mathbb {R}^{n}\) provided
Proof
By the assumption (I), we have
On the other hand, assumption (II) implies that
If \({\Omega }\geq D_{B}^{-1}D_{A}\), by (9) and (10), we get
If \(D_{B}^{-1}D_{A}e>{\Omega } e>D_{B}^{-1}D_{A}e-D_{B}^{-1}D^{-1}\langle A\rangle De\) holds, we have
Thus, if (8) holds, by (11) and (13), we have that \(\langle M_{B_{1}}{\Omega }+ M_{A_{1}}\rangle D\) is an s.d.d. matrix, which implies that \(M_{B_{1}}{\Omega }+ M_{A_{1}}\) is an H-matrix. Then, by Lemma 1 and the first equality of (7), we have
where
By Lemma 2, we have
If (8) holds, we obtain
Then by (11), (12) and (14), we have \(||D^{-1}\mathcal {P}_{1}D||_{\infty }<1\).
Similarly, by the second equality of (7), we have
where
Thus, we get
Proceeding with the same deduction, we can show that \(\langle {\mathscr{M}}_{2}\rangle D\) is an s.d.d. matrix. By the same discussion, if (8) holds, we have \(||D^{-1}\mathcal {P}_{2}D||_{\infty }<1\) as well. Hence, the following inequality holds:
which implies that \(\lim \limits _{k\rightarrow +\infty } {x^{(k)}}= x^{\ast }\) by (15), proving the claim. □
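The error bound used in the proof can be evaluated for a concrete pair. The following sketch (our own, with illustrative data and Gauss-Seidel-type first splittings, taking D = I in the scaling of Theorem 1) computes \(\|\mathcal{P}_{1}\|_{\infty}\) with \(\mathcal{P}_{1}=\langle M_{B_{1}}{\Omega}+M_{A_{1}}\rangle^{-1}\big(|N_{B_{1}}{\Omega}+N_{A_{1}}|+|B{\Omega}-A|\big)\):

```python
import numpy as np

def contraction_bound(A, B, Omega):
    """Infinity-norm of the error-propagation matrix P1 of the first sweep."""
    MA1, MB1 = np.tril(A), np.tril(B)
    NA1, NB1 = MA1 - A, MB1 - B
    M = MB1 @ Omega + MA1
    cmpM = -np.abs(M)
    np.fill_diagonal(cmpM, np.abs(np.diag(M)))        # comparison matrix <M>
    P1 = np.linalg.inv(cmpM) @ (np.abs(NB1 @ Omega + NA1)
                                + np.abs(B @ Omega - A))
    return np.linalg.norm(P1, np.inf)

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
B = np.eye(2)
Omega = np.diag(np.diag(A))               # Omega = D_A D_B^{-1}
print(contraction_bound(A, B, Omega))     # 0.25 < 1: the sweep contracts
```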
Next, we present the convergence results for the TMAOR.
Lemma 4
Let A,B be two H+-matrices. If \(0<\alpha <\frac {1}{\rho \left [(D_{A}+D_{B}{\Omega })^{-1}(D_{B}{\Omega }+|C_{A}|)\right ]}\), there exists a positive diagonal matrix \(\bar {D}\), such that
is an s.d.d. matrix.
Proof
Since A is an H+-matrix, 〈A〉 = DA −|CA| = (DA + DBΩ) − (DBΩ + |CA|) is an M-matrix. Hence, by Lemma 3, we get \(\rho \left [(D_{A}+D_{B}{\Omega })^{-1}(D_{B}{\Omega }+|C_{A}|)\right ]<1\).
If 0 < α ≤ 1, then \(\frac {1+\alpha -|1-\alpha |}{\alpha }D_{A}+\frac {1-\alpha -|1-\alpha |}{\alpha }D_{B}{\Omega }-2|C_{A}|=2\langle A\rangle \) is an M-matrix.
If \(1<\alpha <\frac {1}{\rho \left [(D_{A}+D_{B}{\Omega })^{-1}(D_{B}{\Omega }+|C_{A}|)\right ]}\), then we have \(\frac {1}{\alpha }I-(D_{A}+D_{B}{\Omega })^{-1}(D_{B}{\Omega }+|C_{A}|)\) is an M-matrix, which implies that
is an M-matrix. Therefore, by Lemma 3, such \(\bar {D}\) in the conclusion exists. □
Remark 1
For computing the positive diagonal matrix \(\bar {D}\) given in the assumption of Lemma 4, readers can refer to [1, 7, 18].
Theorem 2
Let \(A,B\in \mathbb {R}^{n\times n}\) be two H+-matrices and \({\Omega }\in \mathbb {R}^{n\times n}\) be a positive diagonal matrix satisfying \({\Omega }\geq D_{A}D_{B}^{-1}\). Furthermore, for i,j = 1,2,…,n, let |bij|ωjj ≤|aij| (i≠j) and sign(bij) = sign(aij) (bij≠ 0). Then, the iteration sequence \(\{(z^{(k)},r^{(k)})\}_{k=1}^{+\infty }\) generated by the TMAOR method converges to the unique solution (z∗,r∗) of (1) for any initial vector \(x^{(0)}\in \mathbb {R}^{n}\) provided
Proof
With the same notation and discussion as in the proof of Theorem 1, let \(\bar {D}\) be the positive diagonal matrix given by Lemma 4. If \({\Omega }\geq D_{A}D_{B}^{-1}\), then by Lemma 4 and (16), we have
Hence \({\mathscr{M}}_{1}\bar {D}\) is an s.d.d. matrix. Then, by Lemma 2, we have \(||\bar {D}^{-1}\mathcal {P}_{1}\bar {D}||_{\infty }<1\). Furthermore, by (4) and the same discussion as above, it is easy to get \(||\bar {D}^{-1}\mathcal {P}_{2}\bar {D}||_{\infty }<1\) as well. Therefore we have
which implies the TMAOR method is convergent. □
Note that there is no assumption on the two-step splittings in Theorem 2 or its proof. Thus, we have the next corollary.
Corollary 1
With the same notation and assumptions as in Theorem 2, the MAOR method is convergent.
Remark 2
It is well known that, when the system matrix is an H+-matrix, an H-splitting or an H-compatible splitting, including the classic AOR splitting, is usually a sufficient condition to guarantee the convergence of matrix splitting iteration methods (see [6] for linear equations and the references mentioned in Section 1 for LCPs and NCPs). Note that Corollary 1 gives convergence conditions for the MAOR method for the HLCP for the first time. Thus, Corollary 1 provides a more explicit theoretical analysis of Method 1 than those given in [23] and [41].
Remark 3
In Theorems 1 and 2, the entries of A and B are linked by assumption (II); together with DBΩ ≥ DA, this condition was called “B is more or equally diagonally dominant than A” in [23]. The condition even excludes the simple case A = I. However, if we denote the HLCP given in (1) by HLCP(A,B,q), it is easy to see that (z,r) is a solution of HLCP(A,B,q) if and only if (r,z) is a solution of HLCP(B,A,−q). Therefore, for HLCP(B,A,−q), by the same discussion as that given in [23], the proposed method and convergence analysis can be reformulated by replacing A,B,q by B,A,−q, respectively. For example, the equivalent modulus equation of HLCP(B,A,−q) is
and the assumption (II) should be changed to “|aij|ωjj ≥|bij| (i≠j) and sign(bij) = sign(aij) (aij≠ 0)”.
4 Numerical examples
In this section, numerical examples are given to show the efficiency of the proposed method. The computations were run on a PC with an Intel® Core™ CPU at 2.50 GHz and 4.00 GB of memory.
Let In be the identity matrix of order n. Consider the following three examples.
Example 1
[23, 41] Let \(n=m^{2}\), \(A=\hat {A}+\mu I_{n}\), \(B=\hat {B}+\nu I_{n}\) and q = Az∗− Br∗, where μ,ν are real parameters, \(\hat {A}=\text {blktridiag}(-I_{m}; S; -I_{m})\), \(\hat {B}=I_{m}\otimes S\) and \(S=\text {tridiag}(-e_{m};4e_{m};-e_{m})\in \mathbb {R}^{m\times m}\).
Example 1 can come from the discretization of a 2-D boundary problem like
by the five-point difference scheme with suitable boundary conditions, where z(u,v), r(u,v) and q(u,v) are three 2-D mappings. Similar examples have been analyzed in [4] and [17] for the LCP and the NCP, respectively. It is easy to see that both A and B are symmetric positive definite matrices.
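As a quick check, the matrices of Example 1 can be assembled with Kronecker products (an illustration of our own; the function name is ours):

```python
import numpy as np

def example1_matrices(m, mu=0.0, nu=0.0):
    """Build A, B of Example 1 (n = m^2): S = tridiag(-1; 4; -1) of order m,
    A = blktridiag(-I_m; S; -I_m) + mu*I_n, B = I_m kron S + nu*I_n."""
    n = m * m
    S = 4 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    T = np.eye(m, k=1) + np.eye(m, k=-1)   # block sub/super-diagonal pattern
    A = np.kron(np.eye(m), S) - np.kron(T, np.eye(m)) + mu * np.eye(n)
    B = np.kron(np.eye(m), S) + nu * np.eye(n)
    return A, B
```

Both matrices are symmetric, and their eigenvalues \(4-2\cos a-2\cos b+\mu\) (resp. \(4-2\cos b+\nu\)) are positive for \(\mu ,\nu \geq 0\), confirming positive definiteness.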
Example 2
[23, 41] Let \(n=m^{2}\), \(A=\hat {A}+\mu I_{n}\), \(B=\hat {B}+\nu I_{n}\) and q = Az∗− Br∗, where μ,ν are real parameters, \(\hat {A}=\text {blktridiag}(-1.5I_{m}; S; -0.5I_{m})\), \(\hat {B}=I_{m}\otimes S\) and \(S=\text {tridiag}(-1.5e_{m};4e_{m};-0.5e_{m})\in \mathbb {R}^{m\times m}\).
Note that the system matrices A and B in Example 2 are nonsymmetric positive definite.
Example 3
Consider Problem 1 in the numerical experiments of [21], where the HLCP comes from the discretization of the Reynolds equation [15], which models a 1-D test problem in hydrodynamic lubrication with a convergent-divergent profile.
For this example, it was shown in [25] that assumption (II) of Theorem 1 cannot be satisfied. Although the convergence of the MJ and MGS methods was proved in [25] by a different approach, it is difficult to prove the convergence of the MSOR or MAOR methods. Nevertheless, we still report the numerical results of Method 2 based on the MJ and MGS methods for Example 3.
Besides Method 1, the accelerated modulus-based matrix splitting (AMMS) method is also included in the numerical experiments for comparison with Method 2. By applying a pair of suitable matrix splittings to the matrices A and B in (2), the AMMS method was proposed in [23] and shown to converge faster than the MMS method. Specifically, let \(A=M_{A_{1}}-N_{A_{1}}=M_{A_{2}}-N_{A_{2}}\) and \(B=M_{B_{1}}-N_{B_{1}}=M_{B_{2}}-N_{B_{2}}\) be two pairs of matrix splittings of A and B, respectively, where
The accelerated modulus-based accelerated overrelaxation (AMAOR) iteration method is based on solving the next equation in the k th iteration:
In the following, we first compare the TMAOR method with the MAOR and AMAOR methods with different pairs of relaxation parameters for Examples 1 and 2. In addition, besides the TMAOR method, more choices of the two-step matrix splittings in the TMMS method given by (5) are tested for Example 3. The abbreviations of the corresponding methods are shown in Table 1. Note that the AOR splitting reduces to the SOR splitting when α = β, and to the Gauss-Seidel splitting (the MSOR2, the AMSOR2 and the TMSOR2) when α = β = 1, respectively. Let γ = 1 and \({\Omega }=\tau D_{A}D_{B}^{-1}\), where τ = 0.8, 1.0, 1.2. All initial iteration vectors are chosen as x(0) = en, the tolerance is set to 10− 10, and the maximum number of iteration steps is 30,000.
To compare with the theoretical results, the upper bounds of α in (16) are presented in Table 2 for Examples 1 and 2, while those for Example 3 are not presented because the assumption of Theorem 1 is not satisfied. Note that \({\Omega }\geq D_{A}D_{B}^{-1}\) is an assumption of Theorem 2. Thus, we only show the upper bounds of α for the cases τ = 1 and τ = 1.2. We remark that, although the upper bounds for Example 1 with μ = ν = 0 listed in Table 2 all equal 1 to two decimal digits, the exact bounds are all larger than 1, as guaranteed by the proof of Lemma 4.
Numerical results are reported in Tables 3, 4, 5, and 6, where “IT” and “CPU” denote the number of iteration steps and the elapsed CPU time in seconds, respectively.
For Examples 1 and 2, Tables 3, 4, and 5 show that all methods are convergent. In each comparison, the number of iteration steps of the MAOR is nearly twice that of the TMAOR. This is due to the two linear systems solved in each iteration of the TMAOR. Regarding efficiency, we notice that the TMAOR always converges faster than the MAOR. On the other hand, the TMAOR converges faster than the AMAOR in most cases, except the TMAOR1 with τ = 0.8, μ = ν = 0 for Example 1 and the TMAOR2 with τ = 0.8, 1.2 for Example 2. We now turn to the relaxation parameters (α,β). By the results in Table 2, (16) can be satisfied with the choices of (α,β) in Table 1 for both examples when μ = ν = 4. On the other hand, for Example 1 with μ = ν = 0, (16) cannot be satisfied when α > 1. Meanwhile, when τ = 0.8, the assumption \({\Omega }\geq D_{A}D_{B}^{-1}\) in Theorem 2 does not hold. However, the TMAOR still converges and also performs better than the MAOR in these cases.
For Example 3, we focus on Table 6, where the iteration steps of some cases of the MJ and the AMJ exceed 30,000; these, together with their CPU times, are denoted by “–”. One can see that the MJ and the MGS perform better than the AMJ and the AMGS, respectively. Starting from Jacobi splittings in the first step and introducing another pair of matrix splittings as the second step, all five methods (the TMJ, the TMJSOR1, the TMJSOR2, the TMJSOR3 and the TMJSOR4) converge faster than the MJ. On the other hand, when the first step uses Gauss-Seidel splittings, the TMGSJ, the TMGSSOR1 and the TMGS take more CPU time than the MGS, while when the relaxation parameters in the matrix splittings of the second step are larger than 1, the TMGSSOR2 and the TMGSSOR3 converge faster than the MGS. Thus, the numerical results in Table 6 show that Method 2 can also accelerate Method 1 with suitable choices of the two-step matrix splittings.
5 Conclusions
By employing two-step matrix splittings, we have constructed the two-step MMS method for the HLCP. The proposed method extends the MMS method in [23]. We have also given convergence theorems, which include and generalize some existing results. In particular, the convergence analysis of the TMAOR method is given, which enriches the theory of the MMS method. Finally, the effectiveness of the proposed method is shown by numerical examples. Note that, in view of the numerical results in the previous section, assumption (II) in Theorem 1 may be weakened and the convergence range of α may be enlarged. How to improve the convergence theorems is worth studying in the future.
References
Alanelli, M., Hadjidimos, A.: A new iterative criterion for H-matrices. SIAM J. Matrix Anal. Appl. 29, 160–176 (2006)
Bai, Z. -Z.: On the convergence of the multisplitting methods for the linear complementarity problem. SIAM J. Matrix Anal. Appl. 21, 67–78 (1999)
Bai, Z. -Z.: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17, 917–933 (2010)
Bai, Z. -Z., Zhang, L. -L.: Modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 20, 425–439 (2013)
Bai, Z. -Z., Zhang, L. -L.: Modulus-based synchronous two-stage multisplitting iteration methods for linear complementarity problems. Numer. Algorithm. 62, 59–77 (2013)
Berman, A., Plemmons, R. J.: Nonnegative Matrices in the Mathematical Sciences. SIAM Publisher, Philadelphia (1994)
Bru, R., Giménez, I., Hadjidimos, A.: Is \(A\in \mathbb {C}^{n\times n}\) a general H-matrix? Linear Algebra Appl. 436, 364–380 (2012)
Cottle, R. W., Dantzig, G. B.: A generalization of the linear complementarity problem. J. Comb. Theory 8, 79–90 (1970)
Cottle, R. W., Pang, J. S., Stone, R. E.: The Linear Complementarity Problem. Academic, San Diego (1992)
Fang, X. -M., Zhu, Z. -w.: The modulus-based matrix double splitting iteration method for linear complementarity problems. Comput. Math. Appl. 78(11), 3633–3643 (2019)
Frommer, A., Mayer, G.: Convergence of relaxed parallel multisplitting methods. Linear Algebra Appl. 119, 141–152 (1989)
Fujisawa, T., Kuh, E. S.: Piecewise-linear theory of nonlinear networks. SIAM J. Appl. Math. 22, 307–328 (1972)
Fujisawa, T., Kuh, E. S., Ohtsuki, T.: A sparse matrix method for analysis of piecewise-linear resistive networks. IEEE Trans. Circ. Theory 19, 571–584 (1972)
Gao, X., Wang, J.: Analysis and application of a one-layer neural network for solving horizontal linear complementarity problems. Int. J. Comput. Int. Sys. 7(4), 724–732 (2014)
Giacopini, M., Fowell, M., Dini, D., Strozzi, A.: A mass-conserving complementarity formulation to study lubricant films in the presence of cavitation. J. Tribol. 132, 041702 (2010)
Hu, J. -G.: Estimates of \(\|B^{-1}C\|_{\infty }\) and their applications. Math. Numer. Sin. 4, 272–282 (1982)
Huang, N., Ma, C. -F.: The modulus-based matrix splitting algorithms for a class of weakly nonlinear complementarity problems. Numer. Linear Algebra Appl. 23, 558–569 (2016)
Li, W.: A general modulus-based matrix splitting method for linear complementarity problems of H-matrices. Appl. Math. Lett. 26, 1159–1164 (2013)
Li, W., Zheng, H.: A preconditioned modulus-based iteration method for solving linear complementarity problems of H-matrices. Linear Multilinear Algebra 64, 1390–1403 (2016)
Liu, S. -M., Zheng, H., Li, W.: A general accelerated modulus-based matrix splitting iteration method for solving linear complementarity problems. Calcolo 53, 189–199 (2016)
Mezzadri, F., Galligani, E.: An inexact Newton method for solving complementarity problems in hydrodynamic lubrication. Calcolo 55, 1 (2018)
Mezzadri, F., Galligani, E.: Splitting methods for a class of horizontal linear complementarity problems. J. Optim. Theory Appl. 180(2), 500–517 (2019)
Mezzadri, F., Galligani, E.: Modulus-based matrix splitting methods for horizontal linear complementarity problems. Numer. Algorithm. 83, 201–219 (2020)
Mezzadri, F., Galligani, E.: A modulus-based nonsmooth Newton’s method for solving horizontal linear complementarity problems. Optim. Lett., published online. https://doi.org/10.1007/s11590-019-01515-9
Mezzadri, F., Galligani, E.: On the convergence of modulus-based matrix splitting methods for horizontal linear complementarity problems in hydrodynamic lubrication. Math. Comput. Simul., published online. https://doi.org/10.1016/j.matcom.2020.01.014
Sun, M.: Monotonicity of Mangasarian’s iterative algorithm for generalized linear complementarity problems. J. Math. Anal. Appl. 144, 474–485 (1989)
Sun, M.: Singular control problems in bounded intervals. Stochastics 21, 303–344 (1987)
Sznajder, R., Gowda, M. S.: Generalizations of P0- and P-properties; extended vertical and horizontal linear complementarity problems. Linear Algebra Appl. 223–224, 695–715 (1995)
Xia, Z. -C., Li, C. -L.: Modulus-based matrix splitting iteration methods for a class of nonlinear complementarity problem. Appl. Math. Comput. 271, 34–42 (2015)
Zhang, L. -L.: Two-step modulus-based matrix splitting iteration method for linear complementarity problems. Numer. Algorithm. 57, 83–99 (2011)
Zhang, L. -L., Ren, Z. -R.: Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems. Appl. Math. Lett. 26, 638–642 (2013)
Zheng, H.: Improved convergence theorems of modulus-based matrix splitting iteration method for nonlinear complementarity problems of H-matrices. Calcolo 54, 1481–1490 (2017)
Zheng, H., Li, W.: The modulus-based nonsmooth Newton’s method for solving linear complementarity problems. J. Comput. Appl. Math. 288, 116–126 (2015)
Zheng, H., Li, W., Vong, S.: A relaxation modulus-based matrix splitting iteration method for solving linear complementarity problems. Numer. Algorithm. 74, 137–152 (2017)
Zheng, H., Li, W., Vong, S.: An iteration method for nonlinear complementarity problems. J. Comput. Appl. Math. 372, 112681 (2020)
Zheng, H., Liu, L.: A two-step modulus-based matrix splitting iteration method for solving nonlinear complementarity problems of H+-matrices. Comput. Appl. Math. 37(4), 5410–5423 (2018)
Zheng, H., Vong, S.: The modulus-based nonsmooth Newton’s method for solving a class of nonlinear complementarity problems of P-matrices. Calcolo. 55, 37 (2018)
Zheng, H., Vong, S., Liu, L.: A direct precondition modulus-based matrix splitting iteration method for solving linear complementarity problems. Appl. Math. Comput. 353, 396–405 (2019)
Zheng, H., Vong, S.: Improved convergence theorems of the two-step modulus-based matrix splitting and synchronous multisplitting iteration methods for solving linear complementarity problems. Linear Multilinear Algebra. 67 (9), 1773–1784 (2019)
Zheng, H., Vong, S.: A modified modulus-based matrix splitting iteration method for solving implicit complementarity problems. Numer. Algorithm. 82(2), 573–592 (2019)
Zheng, H., Vong, S.: On convergence of the modulus-based matrix splitting iteration method for horizontal linear complementarity problems of H+-matrices. Appl. Math. Comput. 369, 124890 (2020)
Zheng, N., Yin, J. -F.: Accelerated modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Algorithm. 64, 245–262 (2013)
Acknowledgments
The authors would like to thank the reviewers for their helpful suggestions.
Funding
This work was supported by the Major Projects of Guangdong Education Department for Foundation Research and Applied Research (No. 2018KZDXM065); University of Macau with (No. MYRG2018-00047-FST); the Science and Technology Development Fund, Macau SAR (No. 0005/2019/A); Guangdong provincial Natural Science Foundation (Grant No. 2018A0303100015); and the Young Innovative Talents Project from Guangdong Provincial Department of Education (Nos. 2018KQNCX230, 2018KQNCX233).
Zheng, H., Vong, S. A two-step modulus-based matrix splitting iteration method for horizontal linear complementarity problems. Numer Algor 86, 1791–1810 (2021). https://doi.org/10.1007/s11075-020-00954-1