1 Introduction

The linear complementarity problem is to find a vector \(x \in R^{n}\) such that

$$ x\geq0,\qquad Ax+q\geq0,\qquad x^{T}(Ax+q)=0, $$
(1.1)

or to show that no such vector x exists, where A = [aij] ∈ Rn×n and \(q \in R^{n}\). We abbreviate the problem by LCP(A,q) and denote its solution by x∗. Many problems can be described in the form (1.1), such as problems in linear and quadratic programming, the problem of finding a Nash equilibrium point of a bimatrix game, some free boundary problems of fluid mechanics, and the American options pricing problem (see [1,2,3,4]).

It is well-known that the LCP(A,q) has a unique solution for any \(q \in R^{n}\) if and only if A is a P-matrix [2]. When the matrix involved is a P-matrix, Chen and Xiang [5] gave the following error bound for the LCP(A,q):

$$ \parallel{x-x^{*}}\parallel_{\infty}\leq\max\limits_{d\in[0,1]^{n}}\parallel{(I-{\Lambda} +{\Lambda} A)^{-1}}\parallel_{\infty}\parallel{r(x)}\parallel_{\infty}, $$
(1.2)

where x∗ is the solution of LCP(A,q), r(x) = min{x, Ax + q} denotes the componentwise minimum of the two vectors, and Λ = diag(di) with 0 ≤ di ≤ 1. Since real H-matrices with positive diagonal entries form a subclass of P-matrices, special subclasses of H-matrices have attracted many scholars' interest, such as Nekrasov matrices [10], Ostrowski matrices [13], QN matrices [7], and S-strictly diagonally dominant (S-SDD) matrices [14]. Meanwhile, error bounds for LCPs of these subclasses of H-matrices have been presented. For example, the corresponding error bounds for LCPs of QN matrices are achieved by Dai et al. in [8] and Gao et al. in [9]. More error bounds for the subclasses of H-matrices can be found in [12, 16, 17].

The paper uses the following basic notation. Denote by N := {1,…,n} the set of indices, by S a nonempty subset of N, by \(\bar S: = N\backslash S\) the complement of S in N, and by e := (1,…,1)T the all-ones column vector of dimension n. We also denote by \({r_{i}}\left (A \right ): = \sum \limits _{k \in N,k \ne i} {\left | {{a_{ik}}} \right |}\) the i th row sum and by \({r_{i}^{S}}\left (A \right ): = \sum \limits _{k \in S,k \ne i} {\left | {{a_{ik}}} \right |}\) the part of the i th row sum corresponding to the subset S of the index set N. Obviously, for any subset S of N and any index i ∈ N, we have

$${r_{i}}\left( A \right) = {r_{i}^{S}}\left( A \right) + r_{i}^{\bar S}\left( A \right).$$

A matrix A = [aij] ∈ Cn×n is called a strictly diagonally dominant (SDD) matrix if, for each i ∈ N, it holds that |aii| > ri(A). A matrix A = [aij] ∈ Cn×n is called an H-matrix if its comparison matrix \(\mathcal {M}(A) = \left [ \tilde a_{ij} \right ] \in {R^{n \times n}}\), defined by the formula

$$ \tilde a_{ij} = \left\{\begin{array}{ll} \left| {a_{ii}} \right|,&{i = j,}\\ { - \left| {{a_{ij}}} \right|,}&{i \ne j,} \end{array} \right. $$

is a nonsingular M-matrix. It is well-known that a square matrix A is an H-matrix if and only if there exists a positive diagonal matrix X such that AX is SDD [15].
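
For concreteness, both definitions translate directly into a few lines of code. The following is a minimal sketch (the helper names comparison_matrix and is_sdd are ours, not taken from the references):

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix M(A): |a_ii| on the diagonal, -|a_ij| off it."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_sdd(A):
    """Strict diagonal dominance by rows: |a_ii| > r_i(A) for every i."""
    d = np.abs(np.diag(A))
    return bool(np.all(d > np.abs(A).sum(axis=1) - d))
```

By the scaling characterization just quoted, A is an H-matrix exactly when is_sdd(A @ X) holds for some positive diagonal matrix X.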

This paper is organized as follows. In Section 2, we put forward a new subclass of H-matrices, called S-QN matrices. Moreover, by discussing the relationships among the subclasses of H-matrices, we find that the set of QN matrices is a proper subset of the set of S-QN matrices; that is, the concept of an S-QN matrix properly generalizes that of a QN matrix. In Section 3, we present two kinds of error bounds for LCP(A,q) when the involved matrix A belongs to the set of S-QN matrices. The Error Bound I generalizes the error bound for LCPs of QN matrices in [8] by finding a positive diagonal matrix P such that AP is an SDD matrix. However, the Error Bound I cannot be used when its hypotheses are not satisfied. To avoid this case, we propose a new error bound, called the Error Bound II. The main ingredient of the Error Bound II is a bound on the inverse of an S-QN matrix, which we obtain by extending the corresponding bound for the inverse of a QN matrix. Numerical examples show that the Error Bound I is better than previous error bounds for H-matrices in some cases and that the Error Bound II can improve on the Error Bound I in some special cases for S-QN matrices. The conclusion is provided in the last section.

2 S-QN matrix

The class of H-matrices plays an important role in linear complementarity problems. Many convergence analyses of iterative algorithms for LCPs have been carried out under the assumption that the coefficient matrix is an H-matrix. Dehghan and Hajarian [18] gave a convergence analysis of SSOR methods for LCP(A,q) when the matrix A is an H-matrix. In [19], Li and Dai presented generalized AOR methods and the corresponding convergence analysis for LCPs of H-matrices. Similarly, Hadjidimos and Tzoumas discussed the convergence of the matrix analogue of the accelerated overrelaxation iterative method for LCPs whose coefficient matrices are H+-matrices [20]. At the same time, error bounds for LCPs of H-matrices and their subclasses have also attracted many researchers' interest [6, 8, 11, 13]. Inspired by S-SDD matrices and QN matrices, we first put forward a new subclass of H-matrices, called S-QN matrices, which properly generalizes the class of QN matrices.

Definition 2.1

[10] Let A = [aij] ∈ Cn×n, n ≥ 2, and let S be a nonempty proper subset of N. Then A is called an S-strictly diagonally dominant (S-SDD) matrix if

  1. \(\left | {{a_{ii}}} \right | > {r_{i}^{S}}\left (A \right )\) for all i ∈ S;

  2. \(\left ({\left | {{a_{ii}}} \right | - {r_{i}^{S}}\left (A \right )} \right )\left ({\left | {{a_{jj}}} \right | - r_{j}^{\bar S}\left (A \right )} \right ) > r_{i}^{\bar S}\left (A\right ){r_{j}^{S}}\left (A \right )\) for all i ∈ S, \(j\in {\bar {S}}\).

S-SDD matrices can be characterized in different ways; the following Theorem 2.2 gives a general characterization in terms of the corresponding diagonal scaling matrices, which can be more convenient in some applications. Moreover, these scaling characterizations give us another insight into the relation between various subclasses of the class of H-matrices and the class of SDD matrices. The corresponding diagonal scaling matrix W belongs to the set \(\mathcal {W}\), defined as the set of all diagonal matrices whose diagonal entries are either 1 or γ, where γ is an arbitrary positive number, i.e.,

$$\begin{array}{l} {\mathcal{W}} = \underset{S \subset N}{\cup} {W^{S}},\\ {W^{S}} = \left\{ {W = diag\left( {{w_{1}}, {\ldots} ,{w_{n}}} \right): {w_{i}} = \gamma > 0 \ \text{for} \ i \in S \ \text{and} \ {w_{i}} = 1 \ \text{for} \ i \in \bar S} \right\}. \end{array}$$

Theorem 2.2

[10] A matrix A is an S-SDD matrix if and only if there exists a matrix \(W\in {\mathcal {W}}\) such that AW is an SDD matrix.

Numerically, if a matrix A is an S-SDD matrix, we can construct the diagonal matrix W by choosing the scaling parameter γ from the interval (γ1 (A),γ2 (A)) with

$$0 \le {\gamma_{1}}\left( A \right): = \max\limits_{i \in S} \frac{{r_{i}^{\bar S}\left( A \right)}}{{\left| {{a_{ii}}} \right| - {r_{i}^{S}}\left( A \right)}},\qquad {\gamma_{2}}\left( A \right): = \min\limits_{j \in \bar S} \frac{{\left| {{a_{jj}}} \right| - r_{j}^{\bar S}\left( A \right)}}{{{r_{j}^{S}}\left( A \right)}},$$

where, if \({r_{j}^{S}}\left (A \right ) = 0\) for some \(j\in {\bar {S}}\), the corresponding fraction is defined to be +∞.
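
A small sketch of this construction (the function names are ours; S is a 0-based index set in the code):

```python
import numpy as np

def gamma_interval(A, S):
    """The interval (gamma_1(A), gamma_2(A)) for a given index set S."""
    n = A.shape[0]
    absA = np.abs(A)
    Sbar = [j for j in range(n) if j not in S]
    r = lambda i, idx: sum(absA[i, k] for k in idx if k != i)      # r_i^idx(A)
    g1 = max(r(i, Sbar) / (absA[i, i] - r(i, S)) for i in S)
    quotients = [(absA[j, j] - r(j, Sbar)) / r(j, S) for j in Sbar if r(j, S) > 0]
    g2 = min(quotients) if quotients else np.inf                   # +inf by convention
    return g1, g2

def scaling_matrix(A, S):
    """A diagonal W in W^S such that AW is SDD (Theorem 2.2)."""
    g1, g2 = gamma_interval(A, S)
    assert g1 < g2, "A is not S-SDD for this choice of S"
    gamma = (g1 + g2) / 2 if np.isfinite(g2) else g1 + 1.0         # any value in (g1, g2)
    w = np.ones(A.shape[0])
    w[list(S)] = gamma
    return np.diag(w)
```

For instance, for the matrix of Example 2.3 below with S = {1,2} (indices {0,1} in the code), the interval is (0, 1/6), and the midpoint γ = 1/12 yields an SDD matrix AW.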

Assume that A = D + L + U is the standard splitting of a matrix A ∈ Cn×n into its diagonal (D), strictly lower triangular (L), and strictly upper triangular (U) parts, respectively. Next, we give a characterization of QN matrices.

Lemma 2.3

[7] Let A = [aij] ∈ Cn×n, n ≥ 2, with aii ≠ 0, i ∈ N. Then A is a QN matrix if and only if

$$ e > {M^{- 1}}\left| L \right|{\left| D \right|^{- 1}}\left| U \right|e, $$
(2.1)

where

$$ M = \left( {\left| D \right| - \left| L \right|} \right){\left| D \right|^{- 1}}\left( {\left| D \right| - \left| U \right|} \right)=\mathcal{M}(A)+|L||D|^{-1}|U|. $$
(2.2)

Obviously, the matrix M is monotone, i.e., M− 1 is a nonnegative matrix. If we denote

$$ G=I - {M^{- 1}}\left| L \right|{\left| D \right|^{- 1}}\left| U \right|={M^{- 1}}{\mathcal{M}}\left( A \right) , $$
(2.3)

then G is a Z-matrix (i.e., its off-diagonal entries are nonpositive), and condition (2.1) amounts to G being strictly diagonally dominant by rows with positive diagonal entries. Thus, A is a QN matrix if and only if, for i = 1,2,…,n,

$$ [Ge]_{i}=[{\mathcal{M}}(G)e]_{i}>0. $$
(2.4)

Theorem 3.2 in [7] shows that a QN matrix is an H-matrix; however, an H-matrix need not be a QN matrix. In this paper, we introduce a new subclass of H-matrices, called S-QN matrices, which properly generalizes the class of QN matrices. Next, we give the definition of an S-QN matrix.

Definition 2.4

Let A = [aij] ∈ Cn×n, n ≥ 2, with aii ≠ 0 for i ∈ N, and let S be any given nonempty proper subset of N. Then A is called an S-QN matrix if the matrix G defined by (2.3) is an S-SDD matrix with positive diagonal entries.

The following result gives a characterization of S-QN matrices.

Theorem 2.5

A matrix A is an S-QN matrix if and only if there exists a matrix \(W\in {\mathcal {W}}\) such that AW is a QN matrix.

Proof

First, we prove the necessity. Let A = D + L + U be an S-QN matrix. By Definition 2.4, the matrix G defined by (2.3) is an S-SDD matrix and \(G={\mathcal {M}}(G)\). According to Theorem 2.2, there exists a positive diagonal matrix \(W\in {\mathcal {W}}\) such that GW is an SDD matrix, i.e., for i = 1,2,…,n, we have

$$ [(GW)e]_{i}=[{\mathcal{M}}\left( {GW} \right)e]_{i} > 0.$$

We now show that AW is a QN matrix. According to (2.4), we only need to prove that \( {\bar G}e = {\mathcal {M}}(\bar G)e>0\), where \(\bar {G}={{\bar M}^{- 1}}{\mathcal {M}}\left ({AW} \right ) = {W^{- 1}}{M^{- 1}}{\mathcal {M}}\left (A \right )W = {W^{- 1}}GW\) and \(\bar M = \left ({\left | {DW} \right | - \left | {LW} \right |} \right ){\left | {DW} \right |^{- 1}}\left ({\left | {DW} \right | - \left | {UW} \right |} \right )= MW\). For i = 1,2,…,n,

$$[\bar{G}e]_{i}=[{\mathcal{M}}\left( {\bar G} \right)e]_{i} = [{\mathcal{M}}\left( {{W^{- 1}}GW} \right)e]_{i} = [{W^{- 1}}{\mathcal{M}}\left( {GW} \right)e]_{i}> 0.$$

Hence, AW is a QN matrix. The necessity is proved.

Next, we prove the sufficiency. Suppose that there exists a positive diagonal matrix \(W\in {\mathcal {W}}\) such that AW is a QN matrix; we need to prove that the matrix A is an S-QN matrix, that is, that the matrix G is an S-SDD matrix with positive diagonal entries.

Since AW is a QN matrix, we have that \( \bar G e= {\mathcal {M}}(\bar G)e={{\bar M}^{- 1}}{\mathcal {M}}\left ({AW} \right )e = W^{-1}GWe=(W^{-1}G)We>0\). By Theorem 2.2, W− 1G is an S-SDD matrix. Moreover, W− 1G can be expressed as

$$ {W^{- 1}}G = \left[ {\begin{array}{*{20}{c}} {\frac{1}{{{w_{1}}}}{g_{11}}}& {\cdots} &{\frac{1}{{{w_{1}}}}{g_{1n}}}\\ {\vdots} & {\ddots} & {\vdots} \\ {\frac{1}{{{w_{n}}}}{g_{n1}}}& {\cdots} &{\frac{1}{{{w_{n}}}}{g_{nn}}} \end{array}} \right]. $$
(2.5)

From (2.5), W−1G is obtained from G by multiplying each row by the positive scalar \(w_{i}^{-1}\); since such row scaling preserves conditions (1) and (2) of Definition 2.1, the matrix G is also an S-SDD matrix with positive diagonal entries. The sufficiency is proved. □

Remark 2.6

From the above, we conclude that an S-QN matrix is an H-matrix. Indeed, when the matrix A is an S-QN matrix, there exists \(W\in {\mathcal {W}}\) such that AW is a QN matrix. It is well-known that a QN matrix is also an H-matrix; that is, there exists a positive diagonal matrix X such that (AW)X is an SDD matrix. Hence A(WX) is SDD for the positive diagonal matrix WX, and an S-QN matrix is also an H-matrix.

Remark 2.7

It is easy to see that the scaling matrix W for an S-QN matrix A is not unique. Since A is an S-QN matrix, the matrix G is an S-SDD matrix. From the proof of Theorem 2.5, the scaling matrix W for A satisfies that GW is an SDD matrix. Therefore, the parameter γ of the matrix W can be taken from the interval (γ1(G),γ2(G)); any such choice yields a scaling matrix W such that AW is a QN matrix.

The following Example 2.1 shows that the class of QN matrices is a proper subset of the class of S-QN matrices.

Example 2.1

The matrix

$$A = \left[\begin{array}{ccc} 1&-2&0\\ -1&3&-1\\ 0&-1&4 \end{array} \right] $$

is an S-QN matrix with S = {2,3}, and it is not a QN matrix.

The following Examples 2.2 and 2.3 illustrate that the set of QN matrices and the set of S-SDD matrices do not contain each other.

Example 2.2

The matrix

$$A = \left[ {\begin{array}{cccc} 1&{0.5}&0&{0.4}\\ {0.5}&1&{0.5}&0\\ 0&{0.5}&1&{0.5}\\ {0.5}&0&{0.5}&1 \end{array}} \right] $$

is a QN matrix, but not an S-SDD matrix for any subset S.

Example 2.3

The matrix

$$A = \left[\begin{array}{cccc} 1&0.5&0&0\\ 0.5&1&0&0\\ 1&2&1&0.5\\ 1&2&0.3&1 \end{array} \right]$$

is an S-SDD matrix for S = {1,2}, but not a QN matrix.

By Theorem 2.2, for an arbitrary S-SDD matrix A, there exists a diagonal matrix \(W\in \mathcal {W}\), such that AW is an SDD matrix. It is well-known that an SDD matrix is also a QN matrix, so matrix AW is a QN matrix. Then, according to Theorem 2.5, the matrix A is an S-QN matrix. Therefore, the class of S-QN matrices contains the class of S-SDD matrices.

Figure 1 displays the relationship among these subclasses of H-matrices.

Fig. 1 The relationship among the subclasses of H-matrices

3 Error bounds for LCPs of S-QN matrices

When the matrix involved is a P-matrix, Chen and Xiang proposed the error bound (1.2) for LCP(A,q). A major difficulty in applying (1.2) lies in evaluating \(\max _{d\in [0,1]^{n}}\parallel {(I-{\Lambda } +{\Lambda } A)^{-1}}\parallel _{\infty }\), where Λ = diag(di) with 0 ≤ di ≤ 1. However, if the involved matrix belongs to a subclass of P-matrices, for example, the H-matrices with positive diagonals, then many computable error bounds for the LCP(A,q) can be derived. For S-QN matrices, a subclass of H-matrices, we present two kinds of error bounds for LCP(A,q).

3.1 Error bound I for LCPs with an S-QN matrix

García-Esnaola and Peña provided an error bound for LCP(A,q) of H-matrices with positive diagonal entries in [6]; it requires knowing in advance a diagonal matrix X such that AX is strictly diagonally dominant (SDD). Based on this idea, we first find a diagonal matrix P for an S-QN matrix A such that AP is SDD. Then the error bound of LCP(A,q) when the involved matrix is an S-QN matrix with positive diagonal entries is presented. The following Theorem 3.1 can be found in [6].

Theorem 3.1

[6] Assume that A = [aij] ∈ Rn×n is an H-matrix with positive diagonal entries. Let \(\bar D = diag\left ({{{\bar d}_{i}}} \right )\) with \({\bar d_{i}} > 0, i\in {N}\), be a diagonal matrix such that \(A\bar {D}\) is strictly diagonally dominant by rows. For any i ∈ N, let \({\bar \beta _{i}} = {a_{ii}}{\bar d_{i}} - \sum \limits _{j \ne i} {\left | a_{ij} \right |\bar d}_{j}\). Then

$$ \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} } \le \max \left\{ {\frac{\max\limits_{i} \left\{ {{{\bar d}_{i}}} \right\}}{{ \min\limits_{i} \left\{ {{{\bar \beta }_{i}}} \right\}}},\frac{{ \max\limits_{i} \left\{ {{{\bar d}_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{{\bar d}_{i}}} \right\}}}} \right\}, $$
(3.1)

where Λ = diag(di) with 0 ≤ di ≤ 1.

In [8], Dai et al. found a diagonal matrix K for a QN matrix A such that AK is SDD. Similarly, we can find the corresponding diagonal matrix P = diag(pi) for S-QN matrices.

Theorem 3.2

[8] Let A = [aij] ∈ Rn×n be a QN matrix with positive diagonal entries such that, for each i = 1,…,n − 1, aij ≠ 0 for some j > i, and, for each i = 2,…,n, aij ≠ 0 for some j < i. Let ξ := M−1 |L| |D|−1 |U|e, where M is defined in (2.2). Then the matrix K = diag(k1,…,kn) with k1 := ξ1 + 𝜖, \(\epsilon \in \left ({0,\min \left \{ {1 - {\xi _{1}}, \min \limits _{2 \le i \le n} \frac {{{{\left [ {{\mathcal {M}}\left (A \right )\xi } \right ]}_{i}}}}{{\left | {{a_{i1}}} \right |}}} \right \}} \right )\), where \(\frac {{{{\left [ {{\mathcal {M}}\left (A \right )\xi } \right ]}_{i}}}}{{\left | {{a_{i1}}} \right |}} = \infty \) when ai1 = 0, and ki := ξi for i = 2,…,n, has positive diagonal entries and satisfies that AK is SDD.

Corollary 3.3

Let A = [aij] ∈ Rn×n be an S-QN matrix with positive diagonal entries such that, for each i = 1,…,n − 1, aij ≠ 0 for some j > i, and, for each i = 2,…,n, aij ≠ 0 for some j < i. Let \(W=diag(w_{i})\in \mathcal {W}\) be such that AW is a QN matrix, let \(\bar \xi = {W^{- 1}}{M^{- 1}}\left | L \right |{\left | D \right |^{ - 1}}\left | U \right |We\), where M is defined in (2.2), and let K = diag(k1,…,kn) with \({k_{1}}: = {\bar \xi _{1}} + \epsilon \), \(\epsilon \in \left ({0,\min \left \{ {1 - {{\bar \xi }_{1}}, \min \limits _{2 \le i \le n} \frac {{{{\left [ {{\mathcal {M}}\left ({AW} \right )\bar \xi } \right ]}_{i}}}}{{\left | {{a_{i1}}} \right |{w_{1}}}}} \right \}} \right )\), where \(\frac {{{{\left [ {{\mathcal {M}}\left ({AW} \right )\bar \xi } \right ]}_{i}}}}{{\left | {{a_{i1}}} \right |{w_{1}}}} = \infty \) when ai1 = 0, and \({k_{i}}: ={\bar \xi _{i}}\) for i = 2,…,n. Then P = WK = diag(pi) is a positive diagonal matrix such that AP is SDD.
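
The construction in Corollary 3.3 is mechanical once W is known. A sketch (the names are ours), taking 𝜖 as the midpoint of its admissible interval:

```python
import numpy as np

def bound_I_matrix(A, w):
    """xi_bar, epsilon and the diagonal of P = WK per Corollary 3.3."""
    n = A.shape[0]
    W, e = np.diag(w), np.ones(n)
    aD = np.diag(np.abs(np.diag(A)))
    aL, aU = np.abs(np.tril(A, -1)), np.abs(np.triu(A, 1))
    M = (aD - aL) @ np.linalg.inv(aD) @ (aD - aU)                  # (2.2)
    # xi_bar = W^{-1} M^{-1} |L||D|^{-1}|U| W e = (MW)^{-1} |L||D|^{-1}|U| W e
    xi = np.linalg.solve(M @ W, aL @ np.linalg.inv(aD) @ aU @ (W @ e))
    MAW = -np.abs(A @ W)                                           # comparison matrix of AW
    np.fill_diagonal(MAW, np.abs(np.diag(A @ W)))
    v = MAW @ xi                                                   # [M(AW) xi_bar]_i
    caps = [v[i] / (abs(A[i, 0]) * w[0]) if A[i, 0] != 0 else np.inf
            for i in range(1, n)]                                  # +inf when a_i1 = 0
    eps = min(1.0 - xi[0], min(caps)) / 2.0                        # midpoint of (0, eps_max)
    k = xi.copy()
    k[0] += eps
    return xi, eps, w * k                                          # p_i = w_i * k_i
```

For the matrix of Example 3.1 below with w = (1, 0.4, 0.4), this midpoint choice gives 𝜖 ≈ 0.0347 and P ≈ diag(0.9653, 0.3722, 0.1167), matching the values reported there.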

The following result presents an error bound for the linear complementarity problem of S-QN matrices, which generalizes the error bound for QN matrices in [8]. This bound can improve the previous bounds for H-matrices.

Theorem 3.4

Let A = [aij] ∈ Rn×n be an S-QN matrix with positive diagonal entries such that, for each i = 1,…,n − 1, aij ≠ 0 for some j > i, and, for each i = 2,…,n, aij ≠ 0 for some j < i. Then

$$ \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} } \le \max \left\{ {\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{t_{i}}} \right\}}},\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{p_{i}}} \right\}}}} \right\}, $$
(3.2)

where t1 := 𝜖a11w1, \({t_{i}}: = {a_{ii}}{w_{i}}{\bar \xi _{i}} - \sum \limits _{j \in N\backslash \left \{ i \right \}} {\left | {{a_{ij}}} \right |{w_{j}}{{\bar \xi }_{j}}} - \epsilon \left | {{a_{i1}}} \right |{w_{1}}\) for each i = 2,…,n; and \(\bar {\xi }\), W, P and 𝜖 are defined in Corollary 3.3.

Proof

By Remark 2.6, an S-QN matrix is an H-matrix, so we can apply the bound (3.1) for H-matrices to S-QN matrices with \(\bar D = P\), i.e., \({\bar d_{i}} = {p_{i}}\) for all i ∈ N. It then remains to prove that \({t_{i}} = {\bar \beta _{i}}\) for all i ∈ N.

By Theorem 2.5, AW is a QN matrix, and

$$ \begin{array}{@{}rcl@{}} \bar M &=& {\mathcal{M}}\left( {AW} \right) + \left| L \right|{\left| D \right|^{- 1}}\left| U \right|W = MW, \\ \bar \xi &=& {W^{- 1}}{M^{- 1}}\left| L \right|{\left| D \right|^{- 1}}\left| U \right|We, \end{array} $$

we obtain \({\mathcal {M}}\left ({AW} \right )\bar \xi + \left | L \right |{\left | D \right |^{- 1}}\left | U \right |W\bar \xi = \left | L \right |{\left | D \right |^{- 1}}\left | U \right |We\); in other words, \({\mathcal {M}}\left ({AW} \right )\bar \xi = \left | L \right |{\left | D \right |^{- 1}}\left | U \right |W\left ({e - \bar \xi } \right )\).

Denote ζ = (𝜖,0,…,0)T; then \(Ke = {\left ({{k_{1}}, {\ldots } ,{k_{n}}} \right )^{T}} = {\left ({{{\bar \xi }_{1}} + \epsilon ,{{\bar \xi }_{2}}, {\ldots } ,{{\bar \xi }_{n}}} \right )^{T}} = \bar \xi + \zeta \) and

$$ \begin{array}{@{}rcl@{}} {\mathcal{M}}\left( {AP} \right)e &=& {\mathcal{M}}\left( {AWK} \right)e\\ &=& {\mathcal{M}}\left( {AW} \right)Ke = {\mathcal{M}}\left( {AW} \right) \cdot \left( {\bar \xi + \zeta } \right)\\ &=& \left| L \right|{\left| D \right|^{- 1}}\left| U \right|W\left( {e - \bar \xi } \right) + {\mathcal{M}}\left( {AW} \right) \cdot \zeta. \end{array} $$

It is easy to get that

$$\left| L \right|{\left| D \right|^{- 1}}\left| U \right| = \left[ {\begin{array}{cccc} 0&0&{\cdots} &0\\ 0&{\frac{{\left| {{a_{21}}} \right|}}{{\left| {{a_{11}}} \right|}}\left| {{a_{12}}} \right|}&{\cdots} &{\frac{{\left| {{a_{21}}} \right|}}{{\left| {{a_{11}}} \right|}}\left| {{a_{1n}}} \right|}\\ {\vdots} &{\vdots} &{\ddots} &{\vdots} \\ 0&{\frac{{\left| {{a_{n1}}} \right|}}{{\left| {{a_{11}}} \right|}}\left| {{a_{12}}} \right|}&{\cdots} &{\frac{{\left| {{a_{n1}}} \right|}}{{\left| {{a_{11}}} \right|}}\left| {{a_{1n}}} \right| + {\cdots} + \frac{{\left| {{a_{n,n - 1}}} \right|}}{{\left| {{a_{n - 1,n - 1}}} \right|}}\left| {{a_{n - 1,n}}} \right|} \end{array}} \right].$$

For i = 1, \({\bar \beta _{1}} = {\left [ {{\mathcal {M}}\left ({AP} \right )e} \right ]_{1}} = {\left [ {{\mathcal {M}}\left ({AWK} \right )e} \right ]_{1}} = {\left [ {{\mathcal {M}}\left ({AW} \right ) \cdot \zeta } \right ]_{1}} = \epsilon {a_{11}}{w_{1}} = {t_{1}}.\) For i = 2,…,n,

$$ \begin{array}{@{}rcl@{}} {{\bar \beta }_{i}} &=& {\left[ {{\mathcal{M}}\left( {AP} \right)e} \right]_{i}} = {\left[ {{\mathcal{M}}\left( {AWK} \right)e} \right]_{i}}\\ &=& {\left[ {{\mathcal{M}}\left( {AW} \right){{\left( {{{\bar \xi }_{1}} + \epsilon ,{{\bar \xi }_{2}}, {\ldots} ,{{\bar \xi }_{n}}} \right)}^{T}}} \right]_{i}}\\ &=& {a_{ii}}{w_{i}}{{\bar \xi }_{i}} - \sum\limits_{j \in N\backslash \{ i \}} {\left| {{a_{ij}}} \right|{w_{j}}{{\bar \xi }_{j}}} - \epsilon \left| {{a_{i1}}} \right|{w_{1}}\\ &=& {t_{i}}. \end{array} $$

Hence, we can get

$$ \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} } \le \max \left\{ {\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{t_{i}}} \right\}}},\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{p_{i}}} \right\}}}} \right\}. $$ □

The error bound given in [5] for the LCP(A,q) of an H-matrix A = [aij] ∈ Rn×n with positive diagonal entries is the following:

$$ \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} } \le {\left\| {{\mathcal{M}}{{\left( A \right)}^{- 1}}\max \left( { {\Gamma} ,I} \right)} \right\|_{\infty} }, $$
(3.3)

where \({\mathcal {M}}\left (A \right )\) is the comparison matrix of A, Γ = diag(a11,a22,…,ann) is the diagonal part of A and max (Γ,I) = diag(max {a11,1},⋯,max {ann,1}).

The following Example 3.1 illustrates that the bound (3.2) obtained by Theorem 3.4 is better than the bound (3.3) given in [5].

Example 3.1

Consider the following matrix

$$A = \left[ {\begin{array}{*{20}{c}} 1&{ - 2.5}&0\\ { - 1}&3&{ - 1}\\ 0&{ - 1}&4 \end{array}} \right].$$

It is easy to check that the matrix A is an S-QN matrix for S = {2,3} and W = diag(1,0.4,0.4). By Theorem 3.4, we can get \(\bar \xi = {\left ({0.9306,0.9306,0.2917} \right )^{T}}\); if we choose 𝜖 = 0.0347, then

$$K=diag(0.9653,0.9306,0.2917),$$
$$P=diag(0.9653,0.3722,0.1167),$$
$$(t_{1},t_{2},t_{3})^{T}={{\mathcal{M}}({AP})e}=(0.0347,0.0347,0.0945)^{T}.$$

So the bound (3.2) in Theorem 3.4 is

$$\max \left\{ {\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{t_{i}}} \right\}}},\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{p_{i}}} \right\}}}} \right\} = 27.8184,$$

which is smaller than the bound (3.3)

$${\left\| {{\mathcal{M}}{{\left( A \right)}^{- 1}}\max \left( { {\Gamma} ,I} \right)} \right\|_{\infty} } = 51.$$
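
These numbers are easy to reproduce. A short standalone check of both bounds (with the rounded values of P from the text):

```python
import numpy as np

A = np.array([[1.0, -2.5, 0.0],
              [-1.0, 3.0, -1.0],
              [0.0, -1.0, 4.0]])
MA = -np.abs(A)
np.fill_diagonal(MA, np.diag(A))                     # comparison matrix; here M(A) = A
p = np.array([0.9653, 0.3722, 0.1167])               # diagonal of P from above
t = MA @ p                                           # t = M(AP)e ~ (0.0347, 0.0347, 0.0945)
bound_32 = max(p.max() / t.min(), p.max() / p.min()) # bound (3.2), ~ 27.8 up to rounding
Gamma_I = np.diag(np.maximum(np.diag(A), 1.0))       # max(Gamma, I)
bound_33 = np.linalg.norm(np.linalg.inv(MA) @ Gamma_I, np.inf)   # bound (3.3), = 51
print(bound_32, bound_33)
```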

3.2 Error bound II for LCPs with an S-QN matrix

In Theorem 3.4, we obtained an error bound for LCPs of S-QN matrices. Although the bound (3.2) can improve the previous error bound for H-matrices, it is still worth improving. For example, for a given S-QN matrix A = [aij] ∈ Rn×n with positive diagonals, if there exists some i ∈ N such that aij = 0 for all j > i or for all j < i, then the bound (3.2) cannot be used to estimate \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\). Besides, as 𝜖 → 0,

$${t_{1}} = \epsilon {a_{11}}{w_{1}} \to 0 \quad \text{and hence} \quad \min\limits_{i \in N} \left\{ {{t_{i}}} \right\} \to 0,$$

which implies that

$$\max \left\{ {\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{t_{i}}} \right\}}},\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{p_{i}}} \right\}}}} \right\} \to + \infty .$$

These facts show that when the assumptions of Theorem 3.4 are not satisfied, or when 𝜖 → 0, using the bound (3.2) to estimate \(\max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\) is inappropriate. To overcome these drawbacks, we propose a new error bound for S-QN matrices, which is better than the bound (3.2) in some cases. Some useful lemmas are listed here.

Lemma 3.5

[11] Let α > 0 and η > 0. Then, for any x ∈ [0,1],

$$\frac{1}{{1 - x + \alpha x}} \le \frac{1}{{\min \left\{ {\alpha ,1} \right\}}},$$

and

$$\frac{{\eta x}}{{1 - x + \alpha x}} \le \frac{\eta }{\alpha }.$$

Lemma 3.6

[12] Let A = [aij] ∈ Cn×n be a matrix with aii ≠ 0 for i ∈ N and \(\hat A = I - {\Lambda } + {\Lambda } A = \left [ {{{\hat a}_{ij}}} \right ]\), where Λ = diag(di) with 0 ≤ di ≤ 1. Then

$${z_{i}}({\hat A} ) \le {\eta_{i}}\left( A \right)$$

and

$$\frac{{{z_{i}}({\hat A} )}}{{{{\hat a}_{ii}}}} \le \frac{{{\eta_{i}}\left( A \right)}}{{\min \left\{ {{a_{ii}},1} \right\}}},$$

where\({z_{1}}({\hat A} ) = {\eta _{1}}\left (A \right ) = 1,{z_{i}}({\hat A} ) = \sum \limits _{j = 1}^{i - 1} {\frac {{| {{{\hat a}_{ij}}}|}}{{| {{{\hat a}_{jj}}} |}}{z_{j}}({\hat A} )} + 1\), and

$${\eta_{i}}\left( A \right) = \sum\limits_{j = 1}^{i - 1} {\frac{{\left| {{a_{ij}}} \right|}}{{\min \left\{ {\left| {{a_{jj}}} \right|,1} \right\}}}{\eta_{j}}\left( A \right)} + 1,i = 2,3, {\ldots} ,n.$$

Lemma 3.7

[7] Let A = [aij] ∈ Cn×n, n ≥ 2, be a QN matrix. Then

$$ {\left\| {{A^{- 1}}} \right\|_{\infty} } \le \max\limits_{i \in N} \frac{{{{\left\{ {{M^{- 1}}e} \right\}}_{i}}}}{{{{\left\{ {{M^{- 1}}{\mathcal{M}}\left( A \right)e} \right\}}_{i}}}}, $$
(3.4)

where M is defined in (2.2).

Next, we introduce the new error bound for LCPs when the involved matrix is an S-QN matrix. To estimate \(\max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\), we first prove that the matrix \(\hat A = I - {\Lambda } + {\Lambda } A\) is also an S-QN matrix whenever the matrix A is an S-QN matrix. Then, the error bound is obtained from a bound on the inverse of the S-QN matrix \(\hat {A}\).

Lemma 3.8

Let A = [aij] ∈ Cn×n be an S-QN matrix with positive diagonal entries, and let \(\hat A = I - {\Lambda } + {\Lambda } A = \left [ {{{\hat a}_{ij}}} \right ]\), where Λ = diag(di) with 0 ≤ di ≤ 1. Then \(\hat {A}\) is an S-QN matrix.

Proof

Since A is an S-QN matrix, there exists a diagonal matrix \(W =diag(w_{i})\in \mathcal {W}\) such that \(\bar {A}=AW\) is a QN matrix. Let \(\bar \xi = {W^{- 1}}{M^{- 1}}\left | L \right |{\left | D \right |^{- 1}}\left | U \right |We=(\bar {\xi }_{1},\bar {\xi }_{2},\ldots ,\bar {\xi }_{n})^{T}\), where M is defined in (2.2), then

$$ e > \bar \xi. $$
(3.5)

Denote

$${\hat a_{ij}} = \left\{ \begin{array}{ll} 1 - {d_{i}} + {d_{i}}{a_{ij}}, &i = j,\\ {d_{i}}{a_{ij}},& i \ne j. \end{array} \right.$$

Since A = D + L + U, the matrices \(\hat {A}\) and \(\bar {A}\) can be split as \(\hat A = \hat D + \hat L + \hat U\) and \(\bar {A}=\bar {D}+\bar {L}+\bar {U}\), respectively.

To prove \(\hat {A}\) is an S-QN matrix, by Theorem 2.5, we only need to prove that \(\hat {A}W\) is a QN matrix. If we denote

$$ \hat{\xi}=W^{-1}\hat{M}^{-1}|\hat{L}||\hat{D}|^{-1}|\hat{U}|We=(\hat{\xi}_{1},\hat{\xi}_{2},\ldots,\hat{\xi}_{n})^{T}, $$
(3.6)

where \(\hat {M} = (|\hat{D}|-|\hat{L}|)|\hat{D}|^{-1}(|\hat{D}|-|\hat{U}|)\), then we only need to prove \(e > \hat \xi \). Denote

$$ \hat {v} = |\hat{L}||\hat{D}|^{-1}|\hat{U}|We=(\hat{v}_{1},\hat{v}_{2},\ldots,\hat{v}_{n})^{T}, $$
(3.7)

where

$$|\hat{L}||\hat{D}|^{-1}|\hat{U}|W = \left[ {\begin{array}{cccc} 0&0&{\cdots} &0\\ 0&{\frac{{\left| {{{\hat a}_{21}}} \right|}}{{\left| {{{\hat a}_{11}}} \right|}}\left| {{{\hat a}_{12}}} \right|{w_{2}}}&{\cdots} &{\frac{{\left| {{{\hat a}_{21}}} \right|}}{{\left| {{{\hat a}_{11}}} \right|}}\left| {{{\hat a}_{1n}}} \right|{w_{n}}}\\ {\vdots} &{\vdots} &{\ddots} &{\vdots} \\ 0&{\frac{{\left| {{{\hat a}_{n1}}} \right|}}{{\left| {{{\hat a}_{11}}} \right|}}\left| {{{\hat a}_{12}}} \right|{w_{2}}}&{\cdots} &{\frac{{\left| {{{\hat a}_{n1}}} \right|}}{{\left| {{{\hat a}_{11}}} \right|}}\left| {{{\hat a}_{1n}}} \right|{w_{n}} + {\cdots} + \frac{{\left| {{{\hat a}_{n,n - 1}}} \right|}}{{\left| {{{\hat a}_{n - 1,n - 1}}} \right|}}\left| {{{\hat a}_{n - 1,n}}} \right|{w_{n}}} \end{array}} \right].$$

Then we can get

$$ {\hat v_{1}} = 0, \qquad {\hat v_{i}} = \sum\limits_{j = 2}^{n} {\left( {\sum\limits_{k = 1}^{i - 1} {\frac{{\left| {{{\hat a}_{ik}}} \right|}}{{\left| {{{\hat a}_{kk}}} \right|}}\left| {{{\hat a}_{kj}}} \right|{w_{j}}} } \right)} = \sum\limits_{k = 1}^{i - 1} {\left( {\frac{{\left| {{{\hat a}_{ik}}} \right|}}{{\left| {{{\hat a}_{kk}}} \right|}}\sum\limits_{j = k + 1}^{n} {\left| {{{\hat a}_{kj}}} \right|{w_{j}}} } \right)}, \quad i=2,\ldots,n. $$
(3.8)

From (3.6) and (3.7), we have \(\hat \xi = {W^{- 1}}{{\hat M}^{- 1}}\hat v = {W^{- 1}}(|\hat {D}|-|\hat {U}|)^{-1}|\hat {D}|(|\hat {D}|-|\hat {L}|)^{-1}\hat {v}\). If we denote \(\hat \lambda = (|\hat {D}|-|\hat {L}|)^{-1}\hat {v}={({{{\hat \lambda }_{1}},\ldots ,{{\hat \lambda }_{n}}} )^{T}} ,\) then we get

$$ \hat \xi = {W^{- 1}}(|\hat{D}|-|\hat{U}|)^{-1}|\hat{D}|\hat {\lambda} , $$
(3.9)

and

$$ (|\hat{D}|-|\hat{L}|)\hat {\lambda} = \hat{ v}. $$
(3.10)

By (3.8) and (3.10), we can obtain the entries of \(\hat {\lambda }\) as

$$ {\hat \lambda_{1}} = 0, \qquad {\hat \lambda_{i}} = \frac{{{{\hat v}_{i}}}}{{\left| {{{\hat a}_{ii}}} \right|}} + \sum\limits_{j = 1}^{i - 1} {\frac{{\left| {{{\hat a}_{ij}}} \right|}}{{\left| {{{\hat a}_{ii}}} \right|}}} {\hat \lambda_{j}}, \quad i=2,\ldots,n. $$
(3.11)

Moreover, from (3.9), we have \((|\hat {D}|-|\hat {U}|)W\hat {\xi } = |\hat {D}|\hat {\lambda }\), which implies the following recursive relations:

$$ \begin{array}{@{}rcl@{}} {\hat \xi_{n}} &=& \frac{{{{\hat \lambda }_{n}}}}{{{w_{n}}}},\\ {\hat \xi_{i}} &=& \frac{{{{\hat \lambda }_{i}}}}{{{w_{i}}}} + \sum\limits_{j = i+1}^{n} {\frac{{\left| {{{\hat a}_{ij}}} \right|{w_{j}}}}{{\left| {{{\hat a}_{ii}}} \right|{w_{i}}}}} {\hat \xi_{j}}, \quad i=n-1,\ldots,1. \end{array} $$
(3.12)

Similarly, if we denote \(\bar {v}=|{L}||{D}|^{-1}|{U}|We=(\bar {v}_{1},\bar {v}_{2},\ldots,\bar {v}_{n})^{T}\) and \(\bar {\lambda }=(|{D}|-|{L}|)^{-1}\bar {v}=(\bar {\lambda }_{1},\bar {\lambda }_{2},\ldots ,\bar {\lambda }_{n})^{T}\), then

$$ {\bar \lambda_{1}} = 0, \qquad {\bar \lambda_{i}} = \frac{{\bar{v}_{i}}}{{\left| {{a_{ii}}} \right|}} + \sum\limits_{j = 1}^{i - 1} {\frac{{\left| {{a_{ij}}} \right|}}{{\left| {{a_{ii}}} \right|}}} {\bar \lambda_{j}}, \quad i=2,\ldots,n, $$

where

$$ {\bar v_{i}} = \sum\limits_{k = 1}^{i - 1} {\left( {\frac{{\left| {{a_{ik}}} \right|}}{{\left| {{a_{kk}}} \right|}}\sum\limits_{j = k + 1}^{n} {\left| {{a_{kj}}} \right|{w_{j}}} } \right)}, \quad i = 2,\ldots,n, $$
(3.13)

and

$$ \begin{array}{@{}rcl@{}} {\bar \xi_{n}} &=& \frac{{{{\bar \lambda }_{n}}}}{{{w_{n}}}},\\ {\bar \xi_{i}} &=& \frac{{{{\bar \lambda }_{i}}}}{{{w_{i}}}} + \sum\limits_{j = i+1}^{n} {\frac{{\left| {{a_{ij}}} \right|{w_{j}}}}{{\left| {{a_{ii}}} \right|{w_{i}}}}} {\bar \xi_{j}}, \quad i=n-1,\ldots,1. \end{array} $$
(3.14)

Next, we use mathematical induction to prove \({\hat \lambda _{i}} \le {\bar \lambda _{i}}\) for any iN.

  1. The conclusion holds for i = 1 and i = 2, since \({\hat \lambda _{1}} = {\bar \lambda _{1}}=0\) and

    $$ \begin{array}{@{}rcl@{}} {{\hat \lambda }_{2}} &=& \frac{{{{\hat v}_{2}}}}{{\left| {{{\hat a}_{22}}} \right|}} + \frac{{\left| {{{\hat a}_{21}}} \right|}}{{\left| {{{\hat a}_{22}}} \right|}}{{\hat \lambda }_{1}} =\frac{{{{\hat v}_{2}}}}{{\left| {{{\hat a}_{22}}} \right|}}\\ &=& \frac{1}{{\left| {{{\hat a}_{22}}} \right|}} \cdot \frac{{\left| {{{\hat a}_{21}}} \right|}}{{\left| {{{\hat a}_{11}}} \right|}}\sum\limits_{j = 2}^{n} {\left| {{{\hat a}_{1j}}} \right|{w_{j}}} \\ &=& \frac{{{d_{2}}\left| {{a_{21}}} \right|}}{{1 - {d_{2}} + {d_{2}}|{a_{22}}|}} \cdot \frac{1}{{1 - {d_{1}} + {d_{1}}|{a_{11}}|}} \cdot \sum\limits_{j = 2}^{n} {{d_{1}}|{a_{1j}}|{w_{j}}} \\ &\le& \frac{{\left| {{a_{21}}} \right|}}{{|{a_{22}}|}} \cdot \frac{1}{{|{a_{11}}|}} \cdot \sum\limits_{j = 2}^{n} {|{a_{1j}}|{w_{j}}} \\ &=& \frac{{{{\bar v}_{2}}}}{{|{a_{22}}|}} = {{\bar \lambda }_{2}}, \end{array} $$
  2. Assume that \({\hat \lambda _{i}} \leq {\bar \lambda _{i}}\) holds for all i ≤ k; then, by (3.8),

    $$ \begin{array}{@{}rcl@{}} {{\hat \lambda }_{k + 1}} &=& \frac{{{{\hat v}_{k + 1}}}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}} + \sum\limits_{j = 1}^{k} {\frac{{\left| {{{\hat a}_{k + 1,j}}} \right|}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}} {{\hat \lambda }_{j}}\\ &=& \frac{1}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}\left( {\sum\limits_{l = 1}^{k} {\left( {\frac{{\left| {{{\hat a}_{k + 1,l}}} \right|}}{{\left| {{{\hat a}_{ll}}} \right|}}\sum\limits_{j = l + 1}^{n} {\left| {{{\hat a}_{lj}}} \right|{w_{j}}} } \right)} } \right) + \sum\limits_{j = 1}^{k} {\frac{{\left| {{{\hat a}_{k + 1,j}}} \right|}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}} {{\hat \lambda }_{j}}\\ &=& \sum\limits_{l = 1}^{k} {\left( {\frac{{\left| {{{\hat a}_{k + 1,l}}} \right|}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}\sum\limits_{j = l + 1}^{n} {\frac{{\left| {{{\hat a}_{lj}}} \right|}}{{\left| {{{\hat a}_{ll}}} \right|}}{w_{j}}} } \right)} + \sum\limits_{j = 1}^{k} {\frac{{\left| {{{\hat a}_{k + 1,j}}} \right|}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}} {{\hat \lambda }_{j}}\\ &=& \sum\limits_{l = 1}^{k} {\left( {\frac{{{d_{k + 1}}\left| {{a_{k + 1,l}}} \right|}}{{1 - {d_{k + 1}} + {d_{k + 1}}\left| {{a_{k + 1,k + 1}}} \right|}}\sum\limits_{j = l + 1}^{n} {\frac{{{d_{l}}\left| {{a_{lj}}} \right|}}{{1 - {d_{l}}+ {d_{l}}\left| {{a_{ll}}} \right|}}{w_{j}}} } \right)}\\ &&+ \sum\limits_{j = 1}^{k} {\frac{{{d_{k + 1}}\left| {{a_{k + 1,j}}} \right|}}{{1 - {d_{k + 1}} + {d_{k + 1}}\left| {{a_{k + 1,k + 1}}} \right|}}} {{\hat \lambda }_{j}}\\ &\le& \sum\limits_{l = 1}^{k} {\left( {\frac{{\left| {{a_{k + 1,l}}} \right|}}{{\left| {{a_{k + 1,k + 1}}} \right|}}\sum\limits_{j = l + 1}^{n} {\frac{{\left| {{a_{lj}}} \right|}}{{\left| {{a_{ll}}} \right|}}{w_{j}}} } \right)} + \sum\limits_{j = 1}^{k} {\frac{{\left| {{a_{k + 1,j}}} \right|}}{{\left| {{a_{k + 1,k + 1}}} \right|}}} {{\bar \lambda }_{j}}\\ &=& \frac{{{{\bar v}_{k + 1}}}}{{\left| {{a_{k + 1,k + 1}}} \right|}} + \sum\limits_{j = 1}^{k} {\frac{{\left| {{a_{k + 1,j}}} \right|}}{{\left| {{a_{k + 1,k + 1}}} \right|}}} {{\bar \lambda }_{j}}\\ &=& {{\bar \lambda }_{k + 1}}. \end{array} $$

So \({\hat \lambda _{i}} \leq {\bar \lambda _{i}}\) holds for all i ∈ N.

By (3.12), (3.14), and Lemma 3.5, we get \(\hat \xi \le \bar \xi \). From (3.5), we then get \(e > \hat \xi \), i.e., \(\hat AW\) is a QN matrix. Therefore, \(\hat A = I - {\Lambda } + {\Lambda } A\) is an S-QN matrix. □

The following lemma extends the bound (3.4) on the inverse of a QN matrix, given by Lemma 3.7, to S-QN matrices.

Lemma 3.9

Let A = [aij] ∈ Cn×n, n ≥ 2, be an S-QN matrix. If \(W=diag(w_{i})\in \mathcal {W}\) is such that AW is a QN matrix and \(\delta = \min \limits_{i} \{ {w^{-1}_{i}} \}\), then

$$ {\| {{A^{- 1}}} \|_{\infty} } \le \frac{1}{\delta } \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}{M^{- 1}}e} \right\}}_{i}}}}{{{{\left\{ {{W^{- 1}}{M^{- 1}}{\mathcal{M}}\left( A \right)We} \right\}}_{i}}}}, $$
(3.15)

where M is defined in (2.2).

Proof

Since A is an S-QN matrix, AW is a QN matrix. By Lemma 3.7 applied to AW, for which \(\bar M = MW\) and hence \({\bar M}^{-1} = W^{-1}M^{-1}\),

$${\| {{{({AW} )}^{- 1}}}\|_{\infty} } \le \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}{M^{- 1}}e} \right\}}_{i}}}}{{{{\left\{ {{W^{- 1}}{M^{- 1}}{\mathcal{M}}\left( A \right)We} \right\}}_{i}}}}.$$

Denote \(A^{-1}=[a_{ij}^{(-1)}]\) and \(R_{i}(A^{-1})=\sum \limits _{j=1}^{n} |a_{ij}^{(-1)}|\) (the i th absolute row sum); then ∥A−1∥∞ and ∥(AW)−1∥∞ can be expressed as

$${\| {{A^{- 1}}} \|_{\infty} } = \max\limits_{i} \{ {R_{i}(A^{-1})} \}, \qquad {\| {{{({AW} )}^{- 1}}} \|_{\infty} } = \max\limits_{i} \{ {w_{i}^{- 1}R_{i}(A^{-1})} \},$$

respectively. Let \(\delta = {\kern 1pt} \min \limits _{i} {\kern 1pt} \{ {w_{i}^{- 1}} \}\), then

$$\delta {\| {{A^{- 1}}} \|_{\infty} } = \delta \max\limits_{i} \{ {R_{i}(A^{-1})} \} \le \max\limits_{i} \{ {w_{i}^{- 1}R_{i}(A^{-1})} \} = {\| {{{({AW} )}^{- 1}}} \|_{\infty} }.$$

Hence

$${\| {{A^{- 1}}} \|_{\infty} } \le \frac{1}{\delta } \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}{M^{- 1}}e} \right\}}_{i}}}}{{{{\left\{ {{W^{- 1}}{M^{- 1}}{\mathcal{M}}\left( A \right)We} \right\}}_{i}}}}{\kern 1pt} .$$

Theorem 3.10

Let A = [aij] ∈ Rn×n, n ≥ 2, be an S-QN matrix with positive diagonal entries. If \(W=diag(w_{i})\in \mathcal {W}\) is such that AW is a QN matrix and \(\delta = \min \limits_{i} \{ {w^{-1}_{i}} \}\), then

$$ \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} } \le \frac{1}{\delta } \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}\beta } \right\}}_{i}}}}{{{{\left\{ {e - \bar \xi } \right\}}_{i}}}}, $$
(3.16)

where \(\bar \xi = {W^{- 1}}{M^{- 1}}\left | L \right |{\left | D \right |^{- 1}}\left | U \right |We\), M is defined in (2.2), and β = (β1,β2,…,βn)T with \({\beta _{n}} = \frac {{{\eta _{n}}\left (A \right )}}{{\min \left \{ {{a_{nn}},1} \right \}}}\), \({\beta _{i}} = \frac {{{\eta _{i}}\left (A \right )}}{{\min \left \{ {{a_{ii}},1} \right \}}} + \sum \limits _{j = i + 1}^{n} {\frac {{\left | {{a_{ij}}} \right |}}{{\left | {{a_{ii}}} \right |}}} {\beta _{j}}\), i = n−1,…,1, and ηi(A) is defined in Lemma 3.6.

Proof

By Lemmas 3.8 and 3.9, \(\hat A = I - {\Lambda } + {\Lambda } A\) is an S-QN matrix, and

$$ {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} } \le \frac{1}{\delta } \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}{{\hat M}^{- 1}}e} \right\}}_{i}}}}{{{{\left\{ {{W^{- 1}}{{\hat M}^{- 1}}{\mathcal{M}}({\hat A} )We} \right\}}_{i}}}}. $$
(3.17)

By (2.3)

$$ \begin{array}{@{}rcl@{}} W^{-1}\hat{M}^{-1}{\mathcal{M}}(\hat{A})We&=&W^{-1}(I-\hat{M}^{-1}|\hat{L}||\hat{D}|^{-1}|\hat{U}|)We\\ &=&(I-W^{-1}\hat{M}^{-1}|\hat{L}||\hat{D}|^{-1}|\hat{U}|W)e\\ &=&e - \hat {\xi }. \end{array} $$

Denote \(z({\hat A} ) = {({{z_{1}}({\hat A}),{z_{2}}({\hat A} ), {\ldots } ,{z_{n}}({\hat A} )} )^{T}}\) given in Lemma 3.6. It follows that \(| {\hat D} |{({| {\hat D} | - | {\hat L} |} )^{- 1}}e = z({\hat A} )\), and \({\hat M^{- 1}}e = {({| {\hat D}| - | {\hat U} |} )^{- 1}}| {\hat D} |{({| {\hat D} | - | {\hat L} |} )^{- 1}}e = {({| {\hat D} | - | {\hat U}|} )^{- 1}}z({\hat A} ),\) then

$$ {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} } \le \frac{1}{\delta } \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}{{({| {\hat D} | - | {\hat U}|})}^{- 1}}z({\hat A} )} \right\}}_{i}}}}{{{{\left\{ {e - \hat{\xi} } \right\}}_{i}}}}. $$
(3.18)

Denote \(y = {({| {\hat D} | - | {\hat U} |} )^{- 1}}z({\hat A} ) = {({{y_{1}},{y_{2}}, {\ldots } ,{y_{n}}} )^{T}}\), then \(({| {\hat D} | - | {\hat U} |} )y = z({\hat A} ),\) and we can obtain

$${y_{n}} = \frac{{{z_{n}}({\hat A} )}}{{\left| {{{\hat a}_{nn}}} \right|}}, \qquad {y_{i}} = \frac{{{z_{i}}({\hat A} )}}{{\left| {{{\hat a}_{ii}}} \right|}} + \sum\limits_{j = i + 1}^{n} {\frac{{\left| {{{\hat a}_{ij}}} \right|}}{{\left| {{{\hat a}_{ii}}} \right|}}{y_{j}}}, \quad i = n-1, {\ldots} ,1.$$

Next, we prove that yi ≤ βi for each i ∈ N. In fact, by Lemma 3.6, for i = n,

$${y_{n}} = \frac{{{z_{n}}({\hat A} )}}{{{{\hat a}_{nn}}}} \le \frac{{{\eta_{n}}\left( A \right)}}{{\min \left\{ {{a_{nn}},1} \right\}}} = {\beta_{n}}.$$

For i = n − 1

$${y_{n - 1}}=\frac{{{z_{n - 1}}({\hat A} )}}{{{{\hat a}_{n - 1,n - 1}}}} + \frac{{\left| {{{\hat a}_{n - 1,n}}} \right|}}{{\left| {{{\hat a}_{n - 1,n - 1}}} \right|}} \cdot {y_{n}} \!\le\!\frac{{{\eta_{n - 1}}\left( A \right)}}{{\min \left\{ {{a_{n - 1,n - 1}},1} \right\}}} + \frac{{\left| {{a_{n - 1,n}}} \right|}}{{\left| {{a_{n - 1,n - 1}}} \right|}} \cdot {\beta_{n}} = {\beta_{n - 1}}.$$

Similarly, for i = n − 2,…,2,1, we also have

$${y_{i}} = \frac{{{z_{i}}({\hat A} )}}{{{{\hat a}_{ii}}}} + \sum\limits_{j = i + 1}^{n} {\frac{{\left| {{{\hat a}_{ij}}} \right|}}{{\left| {{{\hat a}_{ii}}} \right|}}{y_{j}}} \le \frac{{{\eta_{i}}\left( A \right)}}{{\min \left\{ {{a_{ii}},1} \right\}}} + \sum\limits_{j = i + 1}^{n} {\frac{{\left| {{a_{ij}}} \right|}}{{\left| {{a_{ii}}} \right|}}{\beta_{j}}} = {\beta_{i}}.$$

So yi ≤ βi holds for each i ∈ N. With β = (β1,β2,…,βn)T as in the statement, we then have

$${W^{- 1}}{{\hat M}^{- 1}}e = {W^{- 1}}{({| {\hat D} | - | {\hat U} |})^{- 1}}z({\hat A} ) \le {W^{- 1}}\beta,$$

therefore, by (3.18), we have

$$ \begin{array}{@{}rcl@{}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} } &\le& \frac{1}{\delta } \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}{{\hat M}^{- 1}}e} \right\}}_{i}}}}{{{{\left\{ {{W^{- 1}}{{\hat M}^{- 1}}{\mathcal{M}}({\hat A} )We} \right\}}_{i}}}}\\ &\le& \frac{1}{\delta } \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}\beta } \right\}}_{i}}}}{{{{\left\{ {e - \hat \xi } \right\}}_{i}}}} \le \frac{1}{\delta } \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}\beta } \right\}}_{i}}}}{{{{\left\{ {e - \bar \xi } \right\}}_{i}}}}. \end{array} $$
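
Bound (3.16) is straightforward to evaluate once W is known. The following sketch (the function name is ours) implements the \(\bar \xi \), η and β recursions; it reproduces the values 20 and 10 of Examples 3.3 and 3.4 below:

```python
import numpy as np

def error_bound_II(A, w):
    """Evaluate (3.16): (1/delta) * max_i {W^{-1} beta}_i / {e - xi_bar}_i."""
    n = A.shape[0]
    W, e = np.diag(w), np.ones(n)
    absA = np.abs(A)
    aD = np.diag(np.diag(absA))
    aL, aU = np.tril(absA, -1), np.triu(absA, 1)
    M = (aD - aL) @ np.linalg.inv(aD) @ (aD - aU)                       # (2.2)
    xi = np.linalg.solve(M @ W, aL @ np.linalg.inv(aD) @ aU @ (W @ e))  # xi_bar
    eta = np.ones(n)                                 # Lemma 3.6, forward recursion
    for i in range(1, n):
        eta[i] = 1.0 + sum(absA[i, j] / min(absA[j, j], 1.0) * eta[j] for j in range(i))
    beta = np.zeros(n)                               # Theorem 3.10, backward recursion
    for i in range(n - 1, -1, -1):
        beta[i] = eta[i] / min(A[i, i], 1.0) \
                  + sum(absA[i, j] / absA[i, i] * beta[j] for j in range(i + 1, n))
    delta = (1.0 / w).min()
    return float(max((beta / w) / (e - xi)) / delta)
```

For instance, error_bound_II(np.array([[1., -2, 0], [0.5, 2, 0.5], [0, -1, 1]]), np.array([1.0, 0.5, 0.5])) returns 20.0, as in Example 3.3.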

Remark 3.11

From Theorems 3.4 and 3.10, the bounds (3.2) and (3.16) both require a scaling matrix \(W\in {\mathcal {W}}\), which is closely related to the choice of the index set S. Hence, for an S-QN matrix, different choices of S lead to different scaling matrices and, consequently, to different bounds.

The following example shows that the error bound (3.2), with a properly chosen parameter 𝜖, may be smaller than the error bound (3.16). However, it is hard to determine the optimal 𝜖.

Example 3.2

Consider the matrix

$$A = \left[ {\begin{array}{*{20}{c}} 1& -{\frac{1}{4}} & -{\frac{1}{4}} & {-{\frac{1}{4}}}\\ {\frac{5}{7}}&1&{\frac{4}{7}}&-\frac{3}{7}\\ 0&{-\frac{1}{2}}&\frac{9}{8}&{\frac{1}{8}}\\ {-\frac{1}{3}}&0&{0}&\frac{4}{3} \end{array}} \right].$$

It is easy to check that A is an S-QN matrix for S = {1,3,4} and W = diag(0.6,1,0.6,0.6). By Theorem 3.4, we can get \(\bar \xi = (0.5445, 0.7127,0.7609, 0.2292 )^{T}\), 𝜖 ∈ (0,0.2233), and

$$P=diag(0.3267+0.6\epsilon,0.7127,0.4565,0.1375).$$

Then, the error bound (3.2) in Theorem 3.4 with respect to 𝜖 is drawn in Fig. 2. Furthermore, by Theorem 3.10, we can calculate that β = (2.6926,3.4316,2.0053,1.3333)T and δ = 1, so the new error bound (3.16) is

$$ \max\limits_{i \in N} \frac{1}{\delta } \cdot \frac{{{{\left\{ {{W^{- 1}}\beta } \right\}}_{i}}}} {{{{\left\{ {e - \bar \xi } \right\}}_{i}}}} = 13.9783.$$
Fig. 2 The bounds of \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\)

Example 3.3 illustrates that the error bound (3.16) can improve the error bound (3.2) in some special cases.

Example 3.3

Consider the matrix

$$A = \left[ {\begin{array}{*{20}{c}} 1&-2&0\\ {0.5}&2&{0.5}\\ 0&-1&1 \end{array}} \right].$$

Observe that A is an S-QN matrix for S = {2,3} and W = diag(1,0.5,0.5). By Theorem 3.4, we can get \(\bar \xi = {\left ({0.6875,0.6875,0.75} \right )^{T}}\), 𝜖 ∈ (0,0.3125), and

$$P=diag(0.6875+\epsilon,0.3438,0.3750).$$

Then the error bound (3.2) in Theorem 3.4 with respect to 𝜖 is drawn in Fig. 3. By Theorem 3.10, we can calculate that β = (5.2500,2.1250,2.500)T and δ = 1, so the new error bound (3.16) is

$$ \max\limits_{i \in N} \frac{1}{\delta } \cdot \frac{{{{\left\{ {{W^{- 1}}\beta } \right\}}_{i}}}}{{{{\left\{ {e - \bar \xi } \right\}}_{i}}}} = 20,$$

which is smaller than the bound (3.2) for each 𝜖 ∈ (0,0.3125). So, the bound (3.16) is better than the bound (3.2).

Fig. 3 The bounds of \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\)

For an S-QN matrix A = (aij), if there exists some i ∈ N such that aij = 0 for all j > i or for all j < i, then the bound (3.2) cannot be used. However, the bound (3.16) may still perform well, as the following example shows.

Example 3.4

Consider the matrix

$$A = \left[ {\begin{array}{*{20}{c}} 1&0&{ - 2}\\ 0&1&{ - 1}\\ 0&{ - 0.5}&1 \end{array}} \right].$$

It is easy to verify that A is an S-QN matrix for S = {2,3} and W = diag(1,0.6,0.6), and we can calculate that \(\bar \xi = {\left ({0.6000,0.5000,0.5000} \right )^{T}}\). Since a21 = 0, A does not satisfy the hypothesis of Theorem 3.4, and the error bound (3.2) cannot be used to estimate \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\). However, by Theorem 3.10, we get β = (4,2.5,1.5)T and δ = 1, so the error bound (3.16) is

$$ \max\limits_{i \in N} \frac{1}{\delta } \cdot \frac{{{{\left\{ {{W^{- 1}}\beta } \right\}}_{i}}}}{{{{\left\{ {e - \bar \xi } \right\}}_{i}}}} = 10,$$

which implies that the bound (3.16) in Theorem 3.10 can be more effective and has wider applicability than the bound (3.2).

It is obvious that the bounds we obtained can be applied to the subclasses of S-QN matrices. We consider an SDD matrix in the following example and compare the previous bounds with the bound (3.2) and the bound (3.16). The results illustrate that the bounds we obtained have nice performance.

Example 3.5

Consider the matrix

$$A = \left[ {\begin{array}{*{20}{c}} 3&-1.9&-1\\ -1&3&-1.9\\ -2&0&3 \end{array}} \right].$$

Observe that A is an SDD matrix, and hence also a Nekrasov matrix, a QN matrix, and an S-QN matrix for any index set S. Here, we choose S = {1,2} and W = diag(1.2,1.2,1). Since A is an SDD matrix, I −Λ + ΛA is still an SDD matrix. By Lemma 3.5, applying the Varah bound [21] to the inverse of I −Λ + ΛA gives

$$ \begin{array}{@{}rcl@{}} \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty} }&\leq\frac{1}{\min\limits_{i\in N}\{1-d_{i}+d_{i}|a_{ii}|-d_{i}r_{i}(A)\}}\\ &\leq \frac{1}{\min\limits_{i\in N}\{|a_{ii}|-r_{i}(A)\}}=10. \end{array} $$

Since A is a Nekrasov matrix, aii > hi(A) for all i ∈ N, where h(A) = (2.9,2.8667,1.9333)T. By Lemma 3.6, we get η(A) = (1,1.3333,1.6667)T. Then the bound in [12] for Nekrasov matrices is

$$\max\limits_{i\in N}\frac{\eta_{i}(A)}{\min\{a_{ii}-h_{i}(A),1\}}=10.0023.$$

Moreover, we can calculate that ξ = (0.6774,0.7304,0.6444)T and β = (3.0685,2.3889,1.6667)T. Then, the bound in [9] for QN matrices is

$$ \max\limits_{i \in N} \frac{{\beta_{i}}}{{{{\left\{ {e - \xi } \right\}}_{i}}}} = 9.5118.$$

By Theorem 3.10, we get that \(\bar \xi = {\left ({0.6385,0.6884,0.7289} \right )^{T}}\) and \(\delta =\frac {1}{1.2}\). Then the error bound (3.16) is

$$ \max\limits_{i \in N} \frac{1}{\delta } \cdot \frac{{{{\left\{ {{W^{- 1}}\beta } \right\}}_{i}}}}{{{{\left\{ {e - \bar \xi } \right\}}_{i}}}} = 8.4882.$$

By Theorem 3.4, if we choose 𝜖 = 0.1808, then

$$P=diag(0.9832,0.8261,0.7289), (t_{1},t_{2},t_{3})^{T}=(0.6509,0.8764,1.7528)^{T}.$$

So the bound (3.2) is

$$\max \left\{ {\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{t_{i}}} \right\}}},\frac{{ \max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{ \min\limits_{i} \left\{ {{p_{i}}} \right\}}}} \right\} = 1.5105.$$

Obviously, the bound (3.2) and the bound (3.16) are smaller than the other bounds, which means that the bounds we obtained not only apply to a wider class of matrices but also give sharper estimates for some well-known classes of matrices.

4 Conclusions

In this paper, we focus on some special classes of matrices and on error bounds for linear complementarity problems. First, we put forward a new subclass of H-matrices, called S-QN matrices, which properly generalizes the class of QN matrices. Then, for an S-QN matrix A, we present two kinds of error bounds for LCP(A,q) to estimate the upper bound of \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\), called the Error Bound I and the Error Bound II:

  • the Error Bound I: \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\leq \max \left \{ {\frac {{ \max \limits _{i} \left \{ {{p_{i}}} \right \}}}{{ \min \limits _{i} \left \{ {{t_{i}}} \right \}}},\frac {{ \max \limits _{i} \left \{ {{p_{i}}} \right \}}}{{ \min \limits _{i} \left \{ {{p_{i}}} \right \}}}} \right \}\) (in Theorem 3.4);

  • the Error Bound II: \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\leq \frac {1}{\delta } \max \limits _{i \in N} \frac {{{{\left \{ {{W^{- 1}}\beta } \right \}}_{i}}}}{{{{\left \{ {e - \bar \xi } \right \}}_{i}}}}\) (in Theorem 3.10).

The Error Bound I generalizes the error bound for LCPs of QN matrices but applies only when its additional hypotheses hold. The Error Bound II can be used for every S-QN matrix. Numerical examples illustrate that the two error bounds perform well in some cases.