1 Introduction

Given \(A,B\in \mathbb {R}^{n\times {n}}\) and \(q\in \mathbb {R}^{n}\), the horizontal linear complementarity problem (abbreviated as HLCP) is to find vectors \(z,r\in \mathbb {R}^{n}\) such that

$$ Az-Br+q=0,~z,r\geq 0 ~\text{and}~z^{T}r=0, $$
(1)

where, for two matrices \(F=(f_{ij}), G=(g_{ij})\in \mathbb {R}^{m\times {n}}\), the order F ≥ (>) G means \(f_{ij}\geq (>)\,g_{ij}\) for all i and j.
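To make the definition concrete, the following minimal sketch (Python with NumPy; the instance and the pair (z,r) are hypothetical, chosen only for illustration) verifies the three conditions in (1):

```python
# A minimal sketch verifying (1) on a toy instance, assuming NumPy.
# The matrices and the pair (z, r) below are hypothetical illustrations.
import numpy as np

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
B = np.array([[3.0, 0.0], [0.0, 3.0]])
z = np.array([1.0, 0.0])        # candidate solution pair
r = np.array([0.0, 2.0])
q = B @ r - A @ z               # choose q so that Az - Br + q = 0

assert np.allclose(A @ z - B @ r + q, 0)   # the equation holds
assert (z >= 0).all() and (r >= 0).all()   # nonnegativity
assert np.isclose(z @ r, 0)                # complementarity z^T r = 0
```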

Clearly, when either A or B is the n × n identity matrix, the HLCP reduces to the well-known linear complementarity problem (abbreviated as LCP) [9]. On the other hand, the HLCP can be viewed as the simplest case of the extended HLCP [8], and it has wide applications in the fields of control theory, hydrodynamic lubrication and nonlinear networks (see [12,13,14,15, 26, 27] for details).

Recently, some matrix splitting-based methods have been introduced for solving HLCPs. In particular, projected splitting methods were introduced in [22], while modulus-based matrix splitting (MMS) methods were introduced in [23]. Notably, the MMS method was shown to be more efficient than the interior-point method [21] and the projected splitting method, while retaining global convergence. Weaker assumptions and a larger convergence domain for the MMS method were given in [41] when the system matrices are assumed to be H+-matrices. The convergence of the modulus-based Jacobi (MJ) and modulus-based Gauss-Seidel (MGS) methods was discussed in [25] for the HLCP arising from hydrodynamic lubrication. In recent years the MMS technique has also been used as a successful tool for LCPs [3,4,5, 10, 18,19,20, 30, 34, 39, 42] and for some classes of nonlinear complementarity problems (NCPs) [17, 29, 32, 35, 36, 38, 40]. On the other hand, without matrix splitting, the HLCP was solved by a nonsmooth Newton method based on the equivalent modulus equation (see [24]). Nonsmooth Newton iterations have also been successfully used for LCPs [33] and NCPs [37], with locally fast convergence rates.

In this paper, we aim at further accelerating the MMS method for the HLCP. To achieve higher computing efficiency, and to make full use of the information contained in the system matrices, the technique of two-step splitting has been used for LCPs and NCPs (see [30, 36, 39] for details). It is therefore interesting to study a two-step MMS method for the HLCP as well. Thus, we first propose the two-step MMS method for the HLCP by employing two-step matrix splittings in Section 2. In Section 3, we present the convergence analysis for the proposed method; specifically, the accelerated overrelaxation (AOR) splitting is analyzed. The proposed method and theorems are shown to generalize the existing theory of the MMS method. Next, by numerical examples, we show that the proposed method is more efficient than the MMS method in Section 4. Finally, Section 5 concludes this work.

Next, some needed notations, definitions and known results are given.

Let \(e_{n}\) be an n × 1 vector whose elements are all equal to 1. Let \(A=(a_{ij})\in \mathbb {R}^{n\times {n}}\) and \(A=D_{A}-L_{A}-U_{A}=D_{A}-C_{A}\), where \(D_{A}\), \(-L_{A}\), \(-U_{A}\) and \(-C_{A}\) denote the diagonal, the strictly lower-triangular, the strictly upper-triangular and the nondiagonal matrices of A, respectively. tridiag(a;b;c) denotes a tridiagonal matrix with vectors a, b, c forming its subdiagonal, main diagonal and superdiagonal entries, respectively, while blktridiag(A;B;C) denotes a block tridiagonal matrix with \(I\otimes A\), \(I\otimes B\) and \(I\otimes C\) as its corresponding block entries, where “⊗” is the Kronecker product and I is the identity matrix.

By |A| we denote |A| = (|aij|) and the comparison matrix of A is 〈A〉 = (〈aij〉), where 〈aij〉 = |aij| if i = j and 〈aij〉 = −|aij| if i ≠ j. A is called a Z-matrix if aij ≤ 0 for any i ≠ j, a nonsingular M-matrix if it is a nonsingular Z-matrix with \(A^{-1}\geq 0\), and an H-matrix if 〈A〉 is a nonsingular M-matrix (e.g., see [6]). A is called an H+-matrix if A is an H-matrix with aii > 0 for every i (e.g., see [2]). If \(|a_{ii}|>\sum \limits _{j\neq i}|a_{ij}|\) for all 1 ≤ i ≤ n, A is called a strictly diagonally dominant (s.d.d.) matrix. By ρ(A) we denote the spectral radius of A. \(A=M-N\) is called an H-splitting if 〈M〉−|N| is a nonsingular M-matrix, and an H-compatible splitting if 〈A〉 = 〈M〉−|N| (e.g., see [31]).
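These definitions can be checked mechanically. Below is a minimal sketch (Python with NumPy; the function names are ours) that forms the comparison matrix and tests the H+-property via the equivalent criterion rho(D^{-1}|C|) < 1 for 〈A〉 = D − |C| with positive diagonal D:

```python
# A sketch of the notation above, assuming NumPy; function names are ours.
import numpy as np

def comparison_matrix(A):
    """<A>: keep |a_ii| on the diagonal, negate |a_ij| off the diagonal."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_h_plus_matrix(A):
    """A is an H+-matrix iff a_ii > 0 and <A> is a nonsingular M-matrix.
    For <A> = D - |C| with positive diagonal D, the latter holds iff
    rho(D^{-1}|C|) < 1 (cf. Lemma 3 below)."""
    D = np.diag(np.abs(np.diag(A)))
    C = np.abs(A - np.diag(np.diag(A)))      # |C_A|, the off-diagonal part
    rho = max(abs(np.linalg.eigvals(np.linalg.solve(D, C))))
    return (np.diag(A) > 0).all() and rho < 1

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
print(is_h_plus_matrix(A))   # True: A is s.d.d. with positive diagonal
```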

2 Two-step method

First, we introduce the MMS method for solving the HLCP.

Let \(A=M_{A}-N_{A}\), \(B=M_{B}-N_{B}\) be two splittings of A and B, respectively. Then, with \(z=\frac {1}{\gamma }(|x|+x)\) and \(r=\frac {1}{\gamma }{\Omega }(|x|-x)\), the HLCP can be equivalently transformed into a system of fixed-point equations

$$ (M_{B}{\Omega}+M_{A})x=(N_{B}{\Omega} +N_{A})x+(B{\Omega}-A)|x|-\gamma q, $$
(2)

where Ω is a positive diagonal parameter matrix and γ is a positive constant (see [23] for more details).
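Indeed, the complementarity conditions in (1) hold automatically under this transformation, since \(z,r\geq 0\) and \(z_{i}r_{i}=\frac {\omega _{ii}}{\gamma ^{2}}(|x_{i}|+x_{i})(|x_{i}|-x_{i})=0\) for every i. Substituting z and r into the first equation of (1) gives

$$ A(|x|+x)-B{\Omega}(|x|-x)+\gamma q=0~\Longleftrightarrow~(B{\Omega}+A)x=(B{\Omega}-A)|x|-\gamma q, $$

which becomes (2) once the splittings \(A=M_{A}-N_{A}\) and \(B=M_{B}-N_{B}\) are inserted. Based on (2), the MMS method is presented as follows: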

Method 1

[23] Let \({\Omega }\in \mathbb {R}^{n\times n}\) be a positive diagonal matrix, γ be a positive constant and \(A=M_{A}-N_{A}\), \(B=M_{B}-N_{B}\) be splittings of the matrices \(A\in \mathbb {R}^{n\times n}\) and \(B\in \mathbb {R}^{n\times n}\), respectively. Given an initial vector \(x^{(0)}\in \mathbb {R}^{n}\), compute \(x^{(k+1)}\in \mathbb {R}^{n}\) by solving the linear system

$$ (M_{B}{\Omega}+M_{A})x^{(k+1)}=(N_{B}{\Omega}+ N_{A})x^{(k)}+(B{\Omega}-A)|x^{(k)}|-\gamma q. $$

Then set

$$ z^{(k+1)}=\frac{1}{\gamma }(|x^{(k+1)}|+x^{(k+1)})~~~\text{and}~~~r^{(k+1)}=\frac{1}{\gamma }{\Omega}(|x^{(k+1)}|-x^{(k+1)}) $$

for k = 0,1,2,…, until the iteration sequence \(\{(z^{(k)},r^{(k)})\}_{k=1}^{+\infty }\) is convergent.
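For illustration, a dense-matrix sketch of Method 1 might look as follows (Python with NumPy; this is not the authors' code, and the stopping rule on the step size is a simplification of the residual-based criteria used in the literature):

```python
# A sketch of Method 1 (MMS), assuming NumPy; dense solves are used for
# clarity, so this is illustrative rather than performance-oriented.
import numpy as np

def mms(A, B, M_A, M_B, q, Omega, gamma=1.0, tol=1e-10, max_it=30000):
    """Modulus-based matrix splitting iteration for the HLCP (1)."""
    N_A, N_B = M_A - A, M_B - B       # A = M_A - N_A, B = M_B - N_B
    lhs = M_B @ Omega + M_A           # fixed coefficient matrix of each solve
    x = np.ones(q.shape[0])           # initial vector x^(0) = e_n
    for k in range(max_it):
        rhs = (N_B @ Omega + N_A) @ x + (B @ Omega - A) @ np.abs(x) - gamma * q
        x_new = np.linalg.solve(lhs, rhs)
        if np.linalg.norm(x_new - x, np.inf) < tol:   # simple stopping rule
            x = x_new
            break
        x = x_new
    z = (np.abs(x) + x) / gamma
    r = Omega @ (np.abs(x) - x) / gamma
    return z, r, k + 1
```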

Following the same idea as used for LCPs and NCPs, in order to achieve higher computing efficiency, we consider making fuller use of the information contained in the matrices A and B. The two-step modulus-based matrix splitting (TMMS) iteration method for solving the HLCP is presented as follows:

Method 2

Two-step modulus-based matrix splitting iteration method for the HLCP. For any given positive diagonal matrix \({\Omega }\in \mathbb {R}^{n\times n}\) and γ > 0, let \(A=M_{A_{1}}-N_{A_{1}}=M_{A_{2}}-N_{A_{2}}\) be two splittings of the matrix \(A\in \mathbb {R}^{n\times n}\), and let \(B=M_{B_{1}}-N_{B_{1}}=M_{B_{2}}-N_{B_{2}}\) be two splittings of the matrix \(B\in \mathbb {R}^{n\times n}\). Given an initial vector \(x^{(0)}\in \mathbb {R}^{n}\), compute \(x^{(k+1)}\in \mathbb {R}^{n}\) by solving the two linear systems

$$ \left\{ \begin{array}{l} (M_{B_{1}}{\Omega}+M_{A_{1}})x^{(k+\frac{1}{2})}=(N_{B_{1}}{\Omega}+N_{A_{1}})x^{(k)}+(B{\Omega}-A)|x^{(k)}|-\gamma q, \\ (M_{B_{2}}{\Omega}+M_{A_{2}})x^{(k+1)}=(N_{B_{2}}{\Omega}+N_{A_{2}})x^{(k+\frac{1}{2})}+(B{\Omega}-A)|x^{(k+\frac{1}{2})}|-\gamma q. \end{array} \right. $$
(3)

Then set

$$ z^{(k+1)}=\frac{1}{\gamma }(|x^{(k+1)}|+x^{(k+1)})~~~\text{and}~~~r^{(k+1)}=\frac{1}{\gamma }{\Omega}(|x^{(k+1)}|-x^{(k+1)}) $$

for k = 0,1,2,…, until the iteration sequence \(\{(z^{(k)},r^{(k)})\}_{k=1}^{+\infty }\) is convergent.
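A corresponding sketch of Method 2 only changes the loop body, alternating between the two splittings (same assumptions as the Method 1 sketch above):

```python
# A sketch of Method 2 (TMMS) under the same assumptions as the MMS
# sketch above; the two half-steps implement the linear systems in (3).
import numpy as np

def tmms(A, B, M_A1, M_A2, M_B1, M_B2, q, Omega,
         gamma=1.0, tol=1e-10, max_it=30000):
    """Two-step modulus-based matrix splitting iteration (3) for the HLCP."""
    N_A1, N_A2 = M_A1 - A, M_A2 - A
    N_B1, N_B2 = M_B1 - B, M_B2 - B
    lhs1, lhs2 = M_B1 @ Omega + M_A1, M_B2 @ Omega + M_A2
    G = B @ Omega - A                 # shared coefficient of |x|
    x = np.ones(q.shape[0])           # initial vector x^(0) = e_n
    for k in range(max_it):
        rhs = (N_B1 @ Omega + N_A1) @ x + G @ np.abs(x) - gamma * q
        x_half = np.linalg.solve(lhs1, rhs)          # first half-step
        rhs = (N_B2 @ Omega + N_A2) @ x_half + G @ np.abs(x_half) - gamma * q
        x_new = np.linalg.solve(lhs2, rhs)           # second half-step
        if np.linalg.norm(x_new - x, np.inf) < tol:
            x = x_new
            break
        x = x_new
    z = (np.abs(x) + x) / gamma
    r = Omega @ (np.abs(x) - x) / gamma
    return z, r, k + 1
```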

Note that if we take \(M_{A_{1}}=M_{A_{2}}, N_{A_{1}}=N_{A_{2}}, M_{B_{1}}=M_{B_{2}}\) and \(N_{B_{1}}=N_{B_{2}}\), Method 2 reduces to Method 1 by combining the two iterations in (3).

Furthermore, adopting the same definitions of the two-step matrix splittings as those in [30], i.e., taking

$$ \left\{ \begin{array}{l} M_{A_{1}}=\frac{1}{\alpha}(D_{A}-\beta{L_{A}}),M_{A_{2}}=\frac{1}{\alpha}(D_{A}-\beta{U_{A}}), \\ M_{B_{1}}=\frac{1}{\alpha}(D_{B}-\beta{L_{B}}),M_{B_{2}}=\frac{1}{\alpha}(D_{B}-\beta{U_{B}}). \end{array} \right. $$
(4)

we can obtain the two-step modulus-based accelerated overrelaxation (TMAOR) iteration method. Taking α = β and α = β = 1, the TMAOR iteration method reduces to the two-step modulus-based successive overrelaxation (TMSOR) iteration method and the two-step modulus-based Gauss-Seidel (TMGS) iteration method, respectively. Furthermore, to achieve a faster convergence rate, one can generalize the MAOR iteration method by allowing more choices of the relaxation parameters in the two-step matrix splittings than in (4), e.g.,

$$ \left\{ \begin{array}{l} M_{A_{1}}=\frac{1}{\alpha_{1}}(D_{A}-\beta_{1}{L_{A}}),M_{A_{2}}=\frac{1}{\alpha_{1}}(D_{A}-\beta_{1}{U_{A}}), \\ M_{B_{1}}=\frac{1}{\alpha_{2}}(D_{B}-\beta_{2}{L_{B}}),M_{B_{2}}=\frac{1}{\alpha_{2}}(D_{B}-\beta_{2}{U_{B}}). \end{array} \right. $$
(5)
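A sketch of how the splittings (4) and (5) can be formed (Python with NumPy; the toy matrices and parameter values are illustrative only):

```python
# A sketch of the splittings (4)-(5), assuming NumPy; D_A, -L_A, -U_A are
# the parts of A defined in Section 1, and the parameters are illustrative.
import numpy as np

def aor_splitting(A, alpha, beta):
    """Return (M_1, M_2) = ((D - beta*L)/alpha, (D - beta*U)/alpha)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)              # A = D - L - U, so L is minus the strict lower part
    U = -np.triu(A, 1)
    return (D - beta * L) / alpha, (D - beta * U) / alpha

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
B = 3.0 * np.eye(3)                  # toy matrices for illustration
M_A1, M_A2 = aor_splitting(A, alpha=1.1, beta=1.0)   # (5) with (alpha_1, beta_1)
M_B1, M_B2 = aor_splitting(B, alpha=1.0, beta=1.0)   # (5) with (alpha_2, beta_2)
```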

3 Convergence analysis

It is well known that the HLCP has a unique solution for any vector q if and only if {A,B} satisfies the \(\mathcal {W}\)-property [28], i.e., all column-representative matrices of {A,B} have determinants that are either all positive or all negative. As remarked in [23], the case where both A and B are H+-matrices is such a case. In the following discussion, the convergence analysis of Method 2 is presented under the assumption that both A and B are H+-matrices, so that the HLCP always has a unique solution.

We present some useful lemmas first.

Lemma 1

[11] Let A be an H-matrix. Then \(|A^{-1}|\leq \langle A\rangle ^{-1}\).

Lemma 2

[16] Let \(B\in {\mathbb {R}^{n\times n}}\) be an s.d.d. matrix. Then, \(\forall C\in {\mathbb {R}^{n\times n}}\), \(||{B^{-1}C}||_{\infty }\leq {\max \limits _{1\leq i\leq n}\frac {(|C|e)_{i}}{(\langle {B}\rangle e)_{i}}}\) holds.

Lemma 3

[6] Assume that A is a Z-matrix. Then the following statements are equivalent:

  • A is an M-matrix;

  • There exists a positive diagonal matrix D, such that AD is an s.d.d. matrix with positive diagonal entries;

  • If \(A=M-N\) satisfies \(M^{-1}\geq 0\) and \(N\geq 0\), then \(\rho (M^{-1}N)<1\).

Let \((z^{\ast },r^{\ast })\) be a solution of the HLCP. Then, by (2) and straightforward computation, we can get that \(x^{\ast } = \frac {\gamma }{2} (z^{\ast } - {\Omega }^{-1}r^{\ast })\) satisfies the implicit fixed-point equations

$$ \left\{ \begin{array}{l} (M_{B_{1}}{\Omega}+M_{A_{1}})x^{\ast}=(N_{B_{1}}{\Omega} +N_{A_{1}})x^{\ast}+(B{\Omega}-A)|x^{\ast}|-\gamma q, \\ (M_{B_{2}}{\Omega}+M_{A_{2}})x^{\ast}=(N_{B_{2}}{\Omega} +N_{A_{2}})x^{\ast}+(B{\Omega}-A)|x^{\ast}|-\gamma q. \end{array} \right. $$
(6)

By subtracting (6) from (3), we have

$$ \left\{ \begin{array}{ll} (M_{B_{1}}{\Omega}+M_{A_{1}})(x^{(k+\frac{1}{2})}-x^{\ast})=&(N_{B_{1}}{\Omega} +N_{A_{1}})(x^{(k)}-x^{\ast})\\ &+(B{\Omega}-A)(|x^{(k)}|-|x^{\ast }|)\\ (M_{B_{2}}{\Omega}+M_{A_{2}})(x^{(k+1)}-x^{\ast})=&(N_{B_{2}}{\Omega} +N_{A_{2}})(x^{(k+\frac{1}{2})}-x^{\ast})\\ &+(B{\Omega}-A)(|x^{(k+\frac{1}{2})}|-|x^{\ast }|). \end{array} \right. $$
(7)

To prove \(\lim \limits _{k\rightarrow +\infty } {z^{(k)}}= z^{\ast }\) and \(\lim \limits _{k\rightarrow +\infty } {r^{(k)}}= r^{\ast }\), we need only prove \(\lim \limits _{k\rightarrow +\infty } {x^{(k)}}= x^{\ast }\).

Theorem 1

Let Ω = (ωjj) be an n × n positive diagonal matrix and \(A,B\in \mathbb {R}^{n\times n}\) be two H+-matrices. Let D be a positive diagonal matrix such that 〈A〉D is an s.d.d. matrix. Assume that:

  (I)

    \(A=M_{A_{1}}-N_{A_{1}}=M_{A_{2}}-N_{A_{2}}\) and \(B=M_{B_{1}}-N_{B_{1}}=M_{B_{2}}-N_{B_{2}}\) are two H-compatible splittings of A and B, respectively;

  (II)

    \(|b_{ij}|\omega _{jj}\leq |a_{ij}|\) (i ≠ j) and sign(bij) = sign(aij) (bij ≠ 0), for all i,j.

Then the iteration sequence \(\{(z^{(k)},r^{(k)})\}_{k=1}^{+\infty }\) generated by Method 2 converges to the unique solution \((z^{\ast },r^{\ast })\) of the HLCP for any initial vector \(x^{(0)}\in \mathbb {R}^{n}\) provided

$$ {\Omega} e>D_{B}^{-1}D_{A}e-D_{B}^{-1}D^{-1}\langle A\rangle De. $$
(8)

Proof

By the assumption (I), we have

$$ \begin{array}{@{}rcl@{}} &&\langle M_{B_{1}}{\Omega}+ M_{A_{1}}\rangle De\\ &\geq&(\langle M_{B_{1}}\rangle{\Omega}+ \langle M_{A_{1}}\rangle)De\\ &=&(\langle A\rangle+\langle B\rangle{\Omega}+|N_{A_{1}}|+|N_{B_{1}}|{\Omega})De\\ &\geq&(D_{A}+D_{B}{\Omega}-|C_{A}|-|C_{B}|{\Omega})De. \end{array} $$
(9)

On the other hand, assumption (II) implies that

$$ |C_{A}|\geq|C_{B}|{\Omega}. $$
(10)

If \({\Omega }\geq D_{B}^{-1}D_{A}\), by (9) and (10), we get

$$ \langle M_{B_{1}}{\Omega}+ M_{A_{1}}\rangle De\geq 2(D_{A}-|C_{A}|)De=2\langle A\rangle De>0. $$
(11)

If \(D_{B}^{-1}D_{A}e>{\Omega } e>D_{B}^{-1}D_{A}e-D_{B}^{-1}D^{-1}\langle A\rangle De\) holds, we have

$$ \begin{array}{@{}rcl@{}} &&DD_{B}\left[{\Omega} -(D_{B}^{-1}D_{A}-D_{B}^{-1}D^{-1}\langle A\rangle D)\right]e>0\\ &\Rightarrow&\left[DD_{B}{\Omega} D^{-1} -(DD_{A}D^{-1}-\langle A\rangle)\right]De>0\\ &\Rightarrow&\big{(}D_{B}{\Omega}-|C_{A}|\big{)}De>0. \end{array} $$
(12)

By (9), (10) and (12), we get

$$ \langle M_{B_{1}}{\Omega}+ M_{A_{1}}\rangle De\geq (D_{A}-|C_{B}|{\Omega})De\geq(D_{A}-|C_{A}|)De=\langle A\rangle De>0. $$
(13)

Thus, if (8) holds, by (11) and (13), we have that \(\langle M_{B_{1}}{\Omega }+ M_{A_{1}}\rangle D\) is an s.d.d. matrix, which implies that \(M_{B_{1}}{\Omega }+ M_{A_{1}}\) is an H-matrix. Then, by Lemma 1 and the first equality of (7), we have

$$ \begin{array}{@{}rcl@{}} &&|x^{(k+\frac{1}{2})}-x^{\ast}|\\ &=&\Big{|}(M_{B_{1}}{\Omega}+M_{A_{1}})^{-1}\left[(N_{B_{1}}{\Omega}+N_{A_{1}})(x^{(k)}-x^{\ast})+(B{\Omega}-A)(|x^{(k)}|-|x^{\ast}|)\right]\Big{|}\\ &\leq&\langle M_{B_{1}}{\Omega}+ M_{A_{1}}\rangle^{-1}(|N_{B_{1}}{\Omega}+N_{A_{1}}|+|B{\Omega}-A|)|x^{(k)}-x^{\ast}|\\ &\doteq&\mathcal{P}_{1}|x^{(k)}-x^{\ast}|, \end{array} $$

where

$$ \mathcal{P}_{1}=\mathcal{M}_{1}^{-1}\mathcal{N}_{1}, \mathcal{M}_{1}=\langle M_{B_{1}}{\Omega}+ M_{A_{1}}\rangle, \mathcal{N}_{1}=|N_{B_{1}}{\Omega}+N_{A_{1}}|+|B{\Omega}-A|. $$

By Lemma 2, we have

$$ ||D^{-1}\mathcal{P}_{1}D||_{\infty}=||(\mathcal{M}_{1}D)^{-1}(\mathcal{N}_{1}D)||_{\infty}\leq \max\limits_{1\leq i\leq n}\frac{(\mathcal{N}_{1}De)_{i}}{(\mathcal{M}_{1}De)_{i}}. $$
(14)

If (8) holds, we obtain

$$ \begin{array}{@{}rcl@{}} &&\mathcal{M}_{1}De-\mathcal{N}_{1}De\\ &=&(\langle M_{B_{1}}{\Omega}+ M_{A_{1}}\rangle-|N_{B_{1}}{\Omega}+N_{A_{1}}|-|B{\Omega}-A|)De\\ &\geq&(\langle M_{B_{1}}{\Omega}\rangle+\langle M_{A_{1}}\rangle-|N_{B_{1}}{\Omega}|-|N_{A_{1}}|-|D_{B}{\Omega}-D_{A}|-|C_{A}|+|C_{B}{\Omega}|)De\\ &=&(D_{B}{\Omega}+\langle A\rangle-|C_{A}|-|D_{B}{\Omega}-D_{A}|)De\\ &=&\left\{ \begin{array}{ll} 2\langle A\rangle De,&\text{if } {\Omega}\geq D_{A}D_{B}^{-1}; \\ 2\big{(}D_{B}{\Omega}-|C_{A}|\big{)}De,& \text{if } D_{B}^{-1}D_{A}e>{\Omega} e>D_{B}^{-1}D_{A}e-D_{B}^{-1}D^{-1}\langle A\rangle De. \end{array} \right. \end{array} $$

Then by (11), (12) and (14), we have \(||D^{-1}\mathcal {P}_{1}D||_{\infty }<1\).

Similarly, by the second equality of (7), we have

$$ |x^{(k+1)}-x^{\ast}|\leq\mathcal{P}_{2}|x^{(k+\frac{1}{2})}-x^{\ast}|, $$

where

$$ \mathcal{P}_{2}=\mathcal{M}_{2}^{-1}\mathcal{N}_{2}, \mathcal{M}_{2}=\langle M_{B_{2}}{\Omega}+ M_{A_{2}}\rangle, \mathcal{N}_{2}=|N_{B_{2}}{\Omega}+N_{A_{2}}|+|B{\Omega}-A|. $$

Thus, we get

$$ |x^{(k+1)}-x^{\ast}|\leq\mathcal{P}_{2}\mathcal{P}_{1}|x^{(k)}-x^{\ast}|. $$
(15)

Proceeding with the same deduction, we can show that \(\mathcal {M}_{2}D\) is an s.d.d. matrix. Then, by the same discussion, if (8) holds, we have \(||D^{-1}\mathcal {P}_{2}D||_{\infty }<1\), too. Hence, the next inequality holds:

$$ \begin{array}{@{}rcl@{}} \rho(\mathcal{P}_{2}\mathcal{P}_{1})&=&\rho({D}^{-1}\mathcal{P}_{2}\mathcal{P}_{1}D)\\ &\leq&||{D}^{-1}\mathcal{P}_{2}DD^{-1}\mathcal{P}_{1}D||_{\infty}\\ &\leq&||{D}^{-1}\mathcal{P}_{2}D||_{\infty}||D^{-1}\mathcal{P}_{1}D||_{\infty}<1, \end{array} $$

which implies that \(\lim \limits _{k\rightarrow +\infty } {x^{(k)}}= x^{\ast }\) by (15), proving the claim. □
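In practice, the hypotheses of Theorem 1 can be tested numerically before running Method 2. The sketch below (Python with NumPy; the helper is ours) checks assumption (II) and the bound (8); assumption (I) depends on the chosen splittings, and the s.d.d. property of 〈A〉D should be verified separately:

```python
# A sketch checking assumption (II) and the bound (8), assuming NumPy;
# assumption (I) (H-compatible splittings) and the s.d.d. requirement on
# <A>D are not checked here.
import numpy as np

def check_theorem1(A, B, Omega, D):
    """Return (assumption_II_holds, bound_8_holds) for diagonal Omega, D."""
    n = A.shape[0]
    w = np.diag(Omega)
    off = ~np.eye(n, dtype=bool)
    dominance = ((np.abs(B) * w)[off] <= np.abs(A)[off]).all()   # |b_ij| w_jj <= |a_ij|
    nz = B != 0
    signs = (np.sign(B[nz]) == np.sign(A[nz])).all()             # sign(b_ij) = sign(a_ij)
    comp_A = -np.abs(A)                                          # comparison matrix <A>
    np.fill_diagonal(comp_A, np.abs(np.diag(A)))
    dA, dB, d = np.diag(A), np.diag(B), np.diag(D)
    bound = (w > dA / dB - (comp_A @ d) / (dB * d)).all()        # condition (8)
    return dominance and signs, bound
```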

Next, we present the convergence results for the TMAOR.

Lemma 4

Let A,B be two H+-matrices. If \(0<\alpha <\frac {1}{\rho \left [(D_{A}+D_{B}{\Omega })^{-1}(D_{B}{\Omega }+|C_{A}|)\right ]}\), there exists a positive diagonal matrix \(\bar {D}\), such that

$$ \left[\frac{1+\alpha-|1-\alpha|}{\alpha}D_{A}+\frac{1-\alpha-|1-\alpha|}{\alpha}D_{B}{\Omega}-2|C_{A}|\right]\bar{D} $$

is an s.d.d. matrix.

Proof

Since A is an H+-matrix, the comparison matrix 〈A〉 = DA −|CA| = (DA + DBΩ) − (DBΩ + |CA|) is an M-matrix. By Lemma 3, we can get \(\rho \left [(D_{A}+D_{B}{\Omega })^{-1}(D_{B}{\Omega }+|C_{A}|)\right ]<1\).

If 0 < α ≤ 1, then \(\frac {1+\alpha -|1-\alpha |}{\alpha }D_{A}+\frac {1-\alpha -|1-\alpha |}{\alpha }D_{B}{\Omega }-2|C_{A}|=2\langle A\rangle \), which is an M-matrix.

If \(1<\alpha <\frac {1}{\rho \left [(D_{A}+D_{B}{\Omega })^{-1}(D_{B}{\Omega }+|C_{A}|)\right ]}\), then \(\frac {1}{\alpha }I-(D_{A}+D_{B}{\Omega })^{-1}(D_{B}{\Omega }+|C_{A}|)\) is an M-matrix, which implies that

$$ \begin{array}{@{}rcl@{}} &\frac{1+\alpha-|1-\alpha|}{\alpha}D_{A}+\frac{1-\alpha-|1-\alpha|}{\alpha}D_{B}{\Omega}-2|C_{A}|\\ &=2(D_{A}+D_{B}{\Omega})\left[\frac{1}{\alpha}I-(D_{A}+D_{B}{\Omega})^{-1}(D_{B}{\Omega}+|C_{A}|)\right] \end{array} $$

is an M-matrix. Therefore, by Lemma 3, such \(\bar {D}\) in the conclusion exists. □

Remark 1

For computing the positive diagonal matrix \(\bar {D}\) given in the conclusion of Lemma 4, readers can refer to [1, 7, 18].

Theorem 2

Let \(A,B\in \mathbb {R}^{n\times n}\) be two H+-matrices and \({\Omega }\in \mathbb {R}^{n\times n}\) be a positive diagonal matrix satisfying \({\Omega }\geq D_{A}D_{B}^{-1}\). Furthermore, for i,j = 1,2,…,n, let \(|b_{ij}|\omega _{jj}\leq |a_{ij}|\) (i ≠ j) and sign(bij) = sign(aij) (bij ≠ 0). Then, the iteration sequence \(\{(z^{(k)},r^{(k)})\}_{k=1}^{+\infty }\) generated by the TMAOR method converges to the unique solution \((z^{\ast },r^{\ast })\) of (1) for any initial vector \(x^{(0)}\in \mathbb {R}^{n}\) provided

$$ 0<\beta\leq\alpha<\frac{1}{\rho\left[(D_{A}+D_{B}{\Omega})^{-1}(D_{B}{\Omega}+|C_{A}|)\right]}. $$
(16)

Proof

With the same notations and discussion as the proof of Theorem 1, let \(\bar {D}\) be the positive diagonal matrix given by Lemma 4. If \({\Omega }\geq D_{A}D_{B}^{-1}\), by Lemma 4 and (16), we have

$$ \begin{array}{@{}rcl@{}} &&\mathcal{M}_{1}\bar{D}e-\mathcal{N}_{1}\bar{D}e\\ &=&\left[\langle M_{A_{1}}+M_{B_{1}}{\Omega}\rangle-|N_{A_{1}}+N_{B_{1}}{\Omega}|-|B{\Omega}-A|\right]\bar{D}e\\ &=&\left[\frac{1}{\alpha}(D_{A}+D_{B}{\Omega})-\frac{\beta}{\alpha}|L_{A}+L_{B}{\Omega}| -\frac{|1-\alpha|}{\alpha}(D_{A}+D_{B}{\Omega})-\frac{|\beta-\alpha|}{\alpha}|L_{A}+L_{B}{\Omega}|\right.\\ &&\left.-|U_{A}+U_{B}{\Omega}|-|D_{B}{\Omega}-D_{A}|-|C_{B}{\Omega}-C_{A}|\right]\bar{D}e\\ &=&\left[\frac{1+\alpha-|1-\alpha|}{\alpha}D_{A}+\frac{1-\alpha-|1-\alpha|}{\alpha}D_{B}{\Omega}-|C_{B} {\Omega}+C_{A}|-|C_{B}{\Omega}-C_{A}|\right]\bar{D}e\\ &=&\left[\frac{1+\alpha-|1-\alpha|}{\alpha}D_{A}+\frac{1-\alpha-|1-\alpha|}{\alpha}D_{B}{\Omega}-2|C_{A}|\right]\bar{D}e\\ &>&0. \end{array} $$

Hence \(\mathcal {M}_{1}\bar {D}\) is an s.d.d. matrix. Then, by Lemma 2, we have \(||\bar {D}^{-1}\mathcal {P}_{1}\bar {D}||_{\infty }<1\). Furthermore, by (4) and the same discussion as above, it is easy to get \(||\bar {D}^{-1}\mathcal {P}_{2}\bar {D}||_{\infty }<1\) too. Therefore we have

$$ \begin{array}{@{}rcl@{}} \rho(\mathcal{P}_{2}\mathcal{P}_{1})&=&\rho(\bar{D}^{-1}\mathcal{P}_{2}\mathcal{P}_{1}\bar{D})\\ &\leq&||{\bar{D}}^{-1}\mathcal{P}_{2}\bar{D}\bar{D}^{-1}\mathcal{P}_{1}\bar{D}||_{\infty}\\ &\leq&||{\bar{D}}^{-1}\mathcal{P}_{2}\bar{D}||_{\infty}||\bar{D}^{-1}\mathcal{P}_{1}\bar{D}||_{\infty}<1, \end{array} $$

which implies the TMAOR method is convergent. □

Note that the proof of Theorem 2 does not require the two splittings in (3) to be distinct. Thus, we have the next corollary.

Corollary 1

With the same notations and assumptions as Theorem 2, the MAOR method is convergent.

Remark 2

It is well known that, when the system matrix is an H+-matrix, an H-splitting or an H-compatible splitting, including the classic AOR splitting, is usually a sufficient condition guaranteeing the convergence of matrix splitting iteration methods (see [6] for linear systems and the references mentioned in Section 1 for LCPs and NCPs). Note that Corollary 1 discusses the convergence conditions of the MAOR method for the HLCP for the first time. Thus, Corollary 1 provides a more explicit theoretical analysis of Method 1 than those given in [23] and [41].

Remark 3

In Theorems 1 and 2, the entries of A and B are linked through assumption (II); this assumption, together with DBΩ ≥ DA, was described in [23] as “B is more or equally diagonally dominant than A”. This condition even excludes the simple case A = I. However, if we denote the HLCP given in (1) by HLCP(A,B,q), it is easy to see that (z,r) is a solution of HLCP(A,B,q) if and only if (r,z) is a solution of HLCP(B,A,−q). Therefore, for HLCP(B,A,−q), with the same discussion as that given in [23], the proposed method and convergence analysis can be reformulated by replacing A,B,q by B,A,−q, respectively. For example, the equivalent modulus equation of HLCP(B,A,−q) is

$$ (M_{B}+M_{A} {\Omega})x=(N_{B} +N_{A} {\Omega})x+(A{\Omega}-B)|x|+\gamma q, $$

and the assumption (II) should be changed to “\(|a_{ij}|\omega _{jj}\leq |b_{ij}|\) (i ≠ j) and sign(bij) = sign(aij) (aij ≠ 0)”.

4 Numerical examples

In this section, numerical examples are given to show the efficiency of the proposed method. The computations were run on a PC with an Intel® Core™ CPU (2.50 GHz) and 4.00 GB of memory.

Let In be the identity matrix of order n. Consider the following three examples.

Example 1

[23, 41] Let \(n=m^{2}\), \(A=\hat {A}+\mu I_{n}, B=\hat {B}+\nu I_{n}\) and \(q=Az^{\ast }-Bw^{\ast }\), where μ,ν are real parameters, \(\hat {A}=\text {blktridiag}(-I_{m}; S; -I_{m}), \hat {B}=I_{m}\otimes S\) and \(S=\text {tridiag}(-e_{m};4e_{m};-e_{m})\in \mathbb {R}^{m\times m}\).

Example 1 can be obtained from the discretization of a 2-D boundary-value problem of the form

$$ \bigtriangleup z+\frac{\partial^{2} r}{\partial u^{2}}+\mu z+\nu r-q=0,~z\geq0,~r\geq0~~\text{and}~~z^{T}r=0, $$

by the five-point difference scheme with suitable boundary conditions, where z(u,v), r(u,v) and q(u,v) are three 2-D mappings. Similar examples have been analyzed in [4] and [17] for the LCP and the NCP, respectively. It is easy to see that both A and B are symmetric positive definite matrices.
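For reference, the matrices of Example 1 can be assembled with Kronecker products as follows (Python with NumPy; the function name is ours, and q is omitted since the generating vectors are specified in the cited references):

```python
# A sketch constructing A and B of Example 1, assuming NumPy; m, mu and nu
# are inputs, and q is left to the choices made in the cited references.
import numpy as np

def example1(m, mu=0.0, nu=0.0):
    """Return A = A_hat + mu*I_n and B = B_hat + nu*I_n with n = m^2."""
    S = 4.0 * np.eye(m) - np.eye(m, k=-1) - np.eye(m, k=1)   # tridiag(-e;4e;-e)
    E = np.eye(m, k=-1) + np.eye(m, k=1)                     # sub/super block pattern
    A_hat = np.kron(np.eye(m), S) - np.kron(E, np.eye(m))    # blktridiag(-I;S;-I)
    B_hat = np.kron(np.eye(m), S)                            # I_m kron S
    n = m * m
    return A_hat + mu * np.eye(n), B_hat + nu * np.eye(n)
```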

Example 2

[23, 41] Let \(n=m^{2}\), \(A=\hat {A}+\mu I_{n}, B=\hat {B}+\nu I_{n}\) and \(q=Az^{\ast }-Bw^{\ast }\), where μ,ν are real parameters, \(\hat {A}=\text {blktridiag}(-1.5I_{m}; S;-0.5I_{m}), \hat {B}=I_{m}\otimes S\) and \(S=\text {tridiag}(-1.5e_{m};4e_{m};-0.5e_{m})\in \mathbb {R}^{m\times m}\).

Note that the system matrices A and B in Example 2 are nonsymmetric positive definite.

Example 3

Consider Problem 1 in the numerical experiments of [21], where the HLCP arises from the discretization of the Reynolds equation [15], which models a 1-D test problem in hydrodynamic lubrication with a convergent-divergent profile.

For this example, it was shown in [25] that assumption (II) of Theorem 1 cannot be satisfied. Although the convergence of the MJ and MGS methods was proved by other means in [25], it is difficult to prove the convergence of the MSOR method or the MAOR method. Nevertheless, we still report the numerical results of Method 2 based on the MJ and MGS methods for Example 3.

Besides Method 1, the accelerated modulus-based matrix splitting (AMMS) method is also included in the numerical experiments for comparison with Method 2. By applying suitable pairs of matrix splittings to the matrices A and B in (2), the AMMS method was proposed in [23] and shown to converge faster than the MMS method. Specifically, let \(A=M_{A_{1}}-N_{A_{1}}=M_{A_{2}}-N_{A_{2}}\) and \(B=M_{B_{1}}-N_{B_{1}}=M_{B_{2}}-N_{B_{2}}\) be two pairs of matrix splittings of A and B, respectively, where

$$ \left\{ \begin{array}{l} M_{A_{1}}=\frac{1}{\alpha}(D_{A}-\beta{L_{A}}), M_{B_{1}}=\frac{1}{\alpha}(D_{B}-\beta{L_{B}}),\\ M_{A_{2}}=D_{A}-U_{A}, M_{B_{2}}=D_{B}-U_{B}. \end{array} \right. $$

The accelerated modulus-based accelerated overrelaxation (AMAOR) iteration method is based on solving the following equation at the kth iteration:

$$ (M_{B_{1}}{\Omega}+M_{A_{1}})x^{(k+1)}+(N_{B_{2}}{\Omega}-N_{A_{2}})|x^{(k+1)}| = (N_{B_{1}}{\Omega}+ N_{A_{1}})x^{(k)}+(M_{B_{2}}{\Omega}-M_{A_{2}})|x^{(k)}|-\gamma q. $$

In the following, we first compare the TMAOR method with the MAOR method and the AMAOR method with different pairs of relaxation parameters for Example 1 and Example 2. On the other hand, besides the TMAOR method, more choices of the two-step matrix splittings in the TMMS method given by (5) are tested for Example 3. The abbreviations of the corresponding terminologies are shown in Table 1. Note that the AOR splitting reduces to the SOR splitting when α = β, and to the Gauss-Seidel splitting when α = β = 1 (corresponding to the MSOR2, the AMSOR2 and the TMSOR2). Let γ = 1 and \({\Omega }=\tau D_{A}D_{B}^{-1}\), where τ = 0.8,1.0,1.2. All initial iteration vectors are chosen to be \(x^{(0)}=e_{n}\), the tolerance is set to \(10^{-10}\) and the maximum number of iteration steps is 30,000.
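For reproducibility, the setup just described can be assembled as follows (Python with NumPy, reusing the example1, aor_splitting and tmms sketches given earlier; the vectors used to generate q are hypothetical stand-ins, since the exact choice is given in the cited references):

```python
# A sketch of the experimental setup above, assuming NumPy and the earlier
# example1/aor_splitting/tmms sketches; the random vectors forming q are
# hypothetical stand-ins for the fixed choices in the cited references.
import numpy as np

A, B = example1(m=16, mu=4.0, nu=4.0)
n = A.shape[0]
tau, gamma = 1.0, 1.0
Omega = tau * np.diag(np.diag(A) / np.diag(B))     # Omega = tau * D_A * D_B^{-1}
M_A1, M_A2 = aor_splitting(A, alpha=1.1, beta=1.0)
M_B1, M_B2 = aor_splitting(B, alpha=1.1, beta=1.0)
rng = np.random.default_rng(0)
q = A @ rng.random(n) - B @ rng.random(n)          # hypothetical q for illustration
z, r, steps = tmms(A, B, M_A1, M_A2, M_B1, M_B2, q, Omega,
                   gamma=gamma, tol=1e-10, max_it=30000)
print(steps, np.linalg.norm(A @ z - B @ r + q))    # iteration count and residual
```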

Table 1 Abbreviations of testing methods

In order to compare with the theoretical result, the upper bounds of α in (16) are presented in Table 2 for Example 1 and Example 2, while those for Example 3 are not presented because the assumption of Theorem 1 is not satisfied. Note that \({\Omega }\geq D_{A}D_{B}^{-1}\) is one assumption of Theorem 2. Thus, we only show the upper bounds of α for the cases τ = 1 and τ = 1.2. It is remarked that, although the upper bounds for Example 1 with μ = ν = 0 listed in Table 2 all equal 1 to two decimal digits, the exact bounds are all larger than 1, which is also guaranteed by the proof of Lemma 4.
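The upper bound in (16) is straightforward to compute; a minimal sketch (Python with NumPy) of the quantity reported in Table 2 follows:

```python
# A sketch computing the upper bound of alpha in (16), assuming NumPy;
# this is the quantity reported in Table 2.
import numpy as np

def alpha_upper_bound(A, B, Omega):
    """Return 1 / rho((D_A + D_B*Omega)^{-1}(D_B*Omega + |C_A|))."""
    dA, dB, w = np.diag(A), np.diag(B), np.diag(Omega)
    C_abs = np.abs(A - np.diag(dA))                 # |C_A|
    J = (np.diag(dB * w) + C_abs) / (dA + dB * w)[:, None]
    return 1.0 / max(abs(np.linalg.eigvals(J)))
```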

Table 2 Upper bounds of α in (16)

Numerical results are reported in Tables 3, 4, 5, and 6, where “IT” and “CPU” denote the number of iteration steps and the elapsed CPU time in seconds, respectively.

Table 3 Numerical results of Example 1 when μ = ν = 4 and \({\Omega }=\tau D_{A}D_{B}^{-1}\)
Table 4 Numerical results of Example 1 when μ = ν = 0 and \({\Omega }=\tau D_{A}D_{B}^{-1}\)
Table 5 Numerical results of Example 2 when μ = ν = 4 and \({\Omega }=\tau D_{A}D_{B}^{-1}\)
Table 6 Numerical results of Example 3

For Example 1 and Example 2, Tables 3, 4, and 5 show that all methods are convergent. In each comparison, the number of iteration steps of the MAOR is nearly twice that of the TMAOR. This fact is due to the two linear systems solved in each iteration of the TMAOR. In terms of efficiency, we notice that the TMAOR always converges faster than the MAOR. On the other hand, the TMAOR converges faster than the AMAOR in most cases, except for the TMAOR1 with τ = 0.8, μ = ν = 0 for Example 1 and the TMAOR2 with τ = 0.8,1.2 for Example 2. We now turn to the relaxation parameters (α,β). By the results in Table 2, (16) can be satisfied with the choices of (α,β) in Table 1 for both examples when μ = ν = 4. On the other hand, for Example 1 with μ = ν = 0, (16) cannot be satisfied when α > 1. Meanwhile, when τ = 0.8, the assumption \({\Omega }\geq D_{A}D_{B}^{-1}\) of Theorem 2 does not hold. However, the TMAOR still converges and also performs better than the MAOR in these cases.

For Example 3, we focus on Table 6, where the iteration steps of some cases of the MJ and the AMJ exceed 30,000; these cases and their CPU times are denoted by “–”. One can see that the MJ and the MGS perform better than the AMJ and the AMGS, respectively. When the first-step iteration is based on the Jacobi splitting, introducing another pair of matrix splittings as the second-step iteration makes all five methods (the TMJ, the TMJSOR1, the TMJSOR2, the TMJSOR3, and the TMJSOR4) converge faster than the MJ. On the other hand, for the cases where the first-step iteration is based on the Gauss-Seidel splitting, the TMGSJ, the TMGSSOR1 and the TMGS take more CPU time than the MGS, while when the relaxation parameters in the matrix splittings of the second-step iteration are larger than 1, the TMGSSOR2 and the TMGSSOR3 converge faster than the MGS. Thus, the numerical results in Table 6 show that Method 2 can also accelerate Method 1 under suitable choices of the two-step matrix splittings.

5 Conclusions

By employing two-step matrix splittings, we have constructed the two-step MMS method for the HLCP. The proposed method extends the MMS method in [23]. We have also given convergence theorems, which include and generalize some existing results. In particular, the convergence analysis of the TMAOR method is given, which enriches the theory of the MMS method. Finally, the effectiveness of the proposed method is shown by numerical examples. The numerical results in the previous section suggest that assumption (II) of Theorem 1 may be weakened and the convergence range of α may be enlarged; how to improve the convergence theorems in this direction is worth studying in the future.