Abstract
The linear complementarity problem (LCP) presents many nice properties when the associated matrix belongs to certain special matrix classes, especially H-matrices. In this paper, we put forward a new subclass of H-matrices, called S-QN matrices, which properly generalizes the class of QN matrices. We prove that for a given S-QN matrix A, there exists a diagonal scaling matrix W such that AW is a QN matrix. We then present two kinds of error bounds for the LCP of S-QN matrices. The Error Bound I generalizes the error bound for the LCP of QN matrices. The Error Bound II overcomes the limitation that the Error Bound I cannot always be applied. Numerical examples illustrate that the Error Bound I is better than previous bounds for H-matrices in some cases. Moreover, in some special cases, the Error Bound II can considerably improve the Error Bound I.
1 Introduction
The linear complementarity problem is to find a vector x ∈ Rn such that
$$ x \ge 0,\qquad Ax + q \ge 0,\qquad {x^{T}}\left( {Ax + q} \right) = 0, \qquad\qquad (1.1) $$
or to show that no such vector x exists, where A = [aij] ∈ Rn×n and q ∈ Rn. We abbreviate the problem by LCP(A,q) and denote its solution by x∗. Many problems can be described in the form (1.1), such as problems in linear and quadratic programming, the problem of finding a Nash equilibrium point of a bimatrix game, some free boundary problems of fluid mechanics, and the American options pricing problem (see [1,2,3,4]).
It is well-known that the LCP(A,q) has a unique solution for any q ∈ Rn if and only if A is a P-matrix [2]. When the matrix involved is a P-matrix, Chen and Xiang [5] gave the following error bound for the LCP(A,q):
$$ {\left\| {x - {x^{*}}} \right\|_{\infty}} \le \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty}}{\left\| {r\left( x \right)} \right\|_{\infty}}, \qquad\qquad (1.2) $$
where x∗ is the solution of LCP(A,q), r (x) = min {x,Ax + q}, Λ = diag(di) with 0 ≤ di ≤ 1, and the min operator in r(x) denotes the componentwise minimum of two vectors. Since real H-matrices with positive diagonal entries form a subclass of P-matrices, special subclasses of H-matrices have attracted the interest of many scholars, such as Nekrasov matrices [10], Ostrowski matrices [13], QN matrices [7], and S-strictly diagonally dominant (S-SDD) matrices [14]. Meanwhile, error bounds for these subclasses of H-matrices have been presented. For example, the corresponding error bounds for LCPs of QN matrices were obtained by Dai et al. in [8] and Gao et al. in [9]. More error bounds for subclasses of H-matrices can be found in [12, 16, 17], etc.
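The residual r(x) is straightforward to evaluate. For readers who wish to experiment, the following Python/NumPy sketch (illustrative only, not part of the original analysis) computes it for a tiny LCP whose solution is known:

```python
import numpy as np

def natural_residual(x, A, q):
    """Natural residual r(x) = min(x, Ax + q), taken componentwise.
    r(x) = 0 exactly at a solution x* of LCP(A, q)."""
    return np.minimum(x, A @ x + q)

# Tiny LCP with A = I and q = (-1, 1): the solution is x* = (1, 0),
# since x* >= 0, Ax* + q = (0, 1) >= 0, and x*^T (Ax* + q) = 0.
A = np.eye(2)
q = np.array([-1.0, 1.0])
x_star = np.array([1.0, 0.0])
```

At any non-solution point the residual is nonzero, and the bound (1.2) turns its size into a distance estimate to x∗.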
The paper uses some basic notation. We denote by N := {1,…,n} the set of indices, by S a nonempty subset of N, by \(\bar S: = N\backslash S\) the complement of S in N, and by e := (1,…,1)T the column vector of n ones, respectively. We also denote by \({r_{i}}\left (A \right ): = \sum \limits _{k \in N,k \ne i} {\left | {{a_{ik}}} \right |}\) the i th (deleted) row sum and by \({r_{i}^{S}}\left (A \right ): = \sum \limits _{k \in S,k \ne i} {\left | {{a_{ik}}} \right |}\) the part of the i th row sum corresponding to the subset S of the index set N. Obviously, for any subset S of N and any index i ∈ N, we have
$$ {r_{i}}\left( A \right) = {r_{i}^{S}}\left( A \right) + r_{i}^{\bar S}\left( A \right). $$
A matrix A = [aij] ∈ Cn×n is called a strictly diagonally dominant (SDD) matrix if for each i ∈ N it holds that |aii| > ri (A). A matrix A = [aij] ∈ Cn×n is called an H-matrix if its comparison matrix \(\mathcal {M}(A) = \left [ \tilde a_{ij} \right ] \in {R^{n \times n}}\), defined by the formula
$$ {\tilde a_{ij}} = \left\{ \begin{array}{ll} \left| {{a_{ij}}} \right|, & i = j,\\ - \left| {{a_{ij}}} \right|, & i \ne j, \end{array} \right. $$
is a nonsingular M-matrix. It is well-known that a square matrix A is an H-matrix if and only if there exists a positive diagonal matrix X such that AX is SDD [15].
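These two definitions translate directly into code. The following Python/NumPy sketch (illustrative only) builds the comparison matrix and tests strict diagonal dominance by rows:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix M(A): keeps |a_ii| on the diagonal and
    negates the moduli of the off-diagonal entries."""
    A = np.asarray(A, dtype=float)
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_sdd(A):
    """Strict diagonal dominance by rows: |a_ii| > r_i(A) for every i."""
    B = np.abs(np.asarray(A, dtype=float))
    return bool(np.all(np.diag(B) > B.sum(axis=1) - np.diag(B)))
```

By the scaling characterization quoted above, A is an H-matrix exactly when `is_sdd(A @ X)` holds for some positive diagonal X.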
This paper is organized as follows. In Section 2, we put forward a new subclass of H-matrices, called S-QN matrices. Moreover, by discussing the relationship among the subclasses of H-matrices, we find that the set of QN matrices is a proper subset of the set of S-QN matrices; that is, the concept of an S-QN matrix properly generalizes that of a QN matrix. In Section 3, we present two kinds of error bounds for LCP(A,q) when the involved matrix A belongs to the set of S-QN matrices. The Error Bound I generalizes the error bound for LCPs of QN matrices in [8] by finding a positive diagonal matrix P such that AP is an SDD matrix. However, the Error Bound I cannot be used when its hypotheses are not satisfied. To avoid this case, we propose a new error bound, called the Error Bound II. The main idea of the Error Bound II is a bound on the inverse of an S-QN matrix, which we obtain by extending the corresponding bound for the inverse of a QN matrix. Numerical examples show that the Error Bound I is better than previous error bounds for H-matrices in some cases and that the Error Bound II can improve on the Error Bound I in some special cases. The conclusion is provided in the last section.
2 S-QN matrix
The class of H-matrices plays an important role in the linear complementarity problem. Many convergence analyses of iterative algorithms for the LCP have been carried out when the coefficient matrix of the LCP is an H-matrix. Dehghan and Hajarian [18] gave the convergence analysis of SSOR methods for LCP(A,q) when the matrix A is an H-matrix. In [19], Li and Dai presented generalized AOR methods and the corresponding convergence analysis for LCPs of H-matrices. Similarly, Hadjidimos and Tzoumas discussed the convergence of the matrix analogue of the accelerated overrelaxation iterative method for the LCP when the coefficient matrices are H+-matrices [20]. At the same time, error bounds for LCPs of H-matrices and their subclasses have also attracted many researchers' interest [6, 8, 11, 13]. Inspired by S-SDD matrices and QN matrices, we first put forward a new subclass of H-matrices, called S-QN matrices, which properly generalizes the class of QN matrices.
Definition 2.1
[10] For any given matrix A = [aij] ∈ Cn×n, n ≥ 2, and any given nonempty proper subset S of N, A is called an S-strictly diagonally dominant (S-SDD) matrix if
- (1)
\(\left | {{a_{ii}}} \right | > {r_{i}^{s}}\left (A \right )\) for all i ∈ S;
- (2)
\(\left ({\left | {{a_{ii}}} \right | - {r_{i}^{s}}\left (A \right )} \right )\left ({\left | {{a_{jj}}} \right | - r_{j}^{\bar s}\left (A \right )} \right ) > r_{i}^{\bar s}\left (A\right ){r_{j}^{s}}\left (A \right )\) for all i ∈ S, \(j\in {\bar {S}}\).
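Definition 2.1 can be checked directly. A short Python/NumPy sketch (illustrative only, using 0-based indices) tests both conditions:

```python
import numpy as np

def r_part(A, i, T):
    """r_i^T(A) = sum_{k in T, k != i} |a_ik|: the part of the i-th
    deleted row sum belonging to the index set T."""
    return sum(abs(A[i, k]) for k in T if k != i)

def is_s_sdd(A, S):
    """Check the two conditions of Definition 2.1 directly."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    S = list(S)
    Sbar = [k for k in range(n) if k not in S]
    # (1): |a_ii| > r_i^S(A) for all i in S
    if not all(abs(A[i, i]) > r_part(A, i, S) for i in S):
        return False
    # (2): the cross condition for all i in S, j in S-bar
    return all(
        (abs(A[i, i]) - r_part(A, i, S)) * (abs(A[j, j]) - r_part(A, j, Sbar))
        > r_part(A, i, Sbar) * r_part(A, j, S)
        for i in S for j in Sbar
    )
```

Note that an SDD matrix satisfies both conditions for every proper S, while the converse fails: the test matrix below is S-SDD although its first row is not strictly dominant.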
S-SDD matrices can be characterized in different ways. The following Theorem 2.2 gives a general characterization in terms of the corresponding diagonal scaling matrices, which can be more convenient in some applications. Moreover, these scaling characterizations give us another insight into the relation between various subclasses of the class of H-matrices and the class of SDD matrices. The corresponding diagonal scaling matrix W belongs to the set \(\mathcal {W}\), defined as the set of all diagonal matrices whose diagonal entries are either 1 or γ, where γ is an arbitrary positive number, i.e.,
Theorem 2.2
[10] A matrix A is an S-SDD matrix if and only if there exists a matrix \(W\in {\mathcal {W}}\) such that AW is an SDD matrix.
Numerically, if a matrix A is an S-SDD matrix, we can construct the diagonal matrix W by choosing the scaling parameter γ from the interval (γ1 (A),γ2 (A)) with
where, if \({r_{j}^{s}}\left (A \right ) = 0\) for \(j\in {\bar {S}}\), the final fraction is defined to be + ∞.
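The endpoints γ1(A) and γ2(A) appear in the paper only as a displayed formula not reproduced here; the sketch below uses the standard S-SDD endpoint formulas from the literature (an assumption on our part, consistent with the +∞ convention above) and then verifies Theorem 2.2 numerically:

```python
import numpy as np

def gamma_interval(A, S):
    """Admissible interval (gamma1, gamma2) for the scaling parameter.
    Assumed standard S-SDD endpoints:
      gamma1 = max_{i in S}    r_i^{Sbar}(A) / (|a_ii| - r_i^S(A)),
      gamma2 = min_{j in Sbar} (|a_jj| - r_j^{Sbar}(A)) / r_j^S(A),
    reading the final fraction as +inf when r_j^S(A) = 0."""
    B = np.abs(np.asarray(A, dtype=float))
    n = B.shape[0]
    Sbar = [k for k in range(n) if k not in S]
    r = lambda i, T: sum(B[i, k] for k in T if k != i)
    g1 = max(r(i, Sbar) / (B[i, i] - r(i, S)) for i in S)
    g2 = min((B[j, j] - r(j, Sbar)) / r(j, S) if r(j, S) > 0 else np.inf
             for j in Sbar)
    return g1, g2

def is_sdd(A):
    B = np.abs(np.asarray(A, dtype=float))
    return bool(np.all(np.diag(B) > B.sum(axis=1) - np.diag(B)))

A = np.array([[2.0, -1.0, -1.0], [-1.0, 2.0, 0.0], [-1.0, 0.0, 2.0]])
S = [0]                             # A is S-SDD for this S (0-based)
g1, g2 = gamma_interval(A, S)       # the interval works out to (1, 2) here
gamma = 0.5 * (g1 + g2)
W = np.diag([gamma if i in S else 1.0 for i in range(3)])
```

Scaling the columns indexed by S by any γ strictly inside the interval restores strict diagonal dominance, exactly as Theorem 2.2 promises.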
Assume that A = D + L + U is the standard splitting of a matrix A ∈ Cn×n into its diagonal (D), strictly lower triangular (L), and strictly upper triangular (U) parts, respectively. Next, we give a characterization of the QN matrices.
Lemma 2.3
[7] Let A = [aij] ∈ Cn×n, n ≥ 2, with aii≠ 0, i ∈ N. Then A is a QN matrix if and only if
where
Obviously, the matrix M is monotone, i.e., M− 1 is a nonnegative matrix. If we denote
then G is a Z-matrix (i.e., its off-diagonal entries are nonpositive), and condition (2.1) amounts to G being strictly diagonally dominant by rows with positive diagonal entries. Thus, A is a QN matrix if and only if, for i = 1,2,…,n,
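This characterization is easy to test numerically. The sketch below (illustrative only) forms M = (|D| − |L|)|D|⁻¹(|D| − |U|) as in (2.2) (the expression can be read off from the proof of Theorem 2.5) and checks ξ = M⁻¹|L||D|⁻¹|U|e < e, which is equivalent to Ge = e − ξ > 0:

```python
import numpy as np

def qn_xi(A):
    """xi = M^{-1} |L| |D|^{-1} |U| e with M = (|D|-|L|) |D|^{-1} (|D|-|U|);
    since M(A) = M - |L||D|^{-1}|U|, we have Ge = e - xi, so A is QN
    exactly when every entry xi_i < 1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aD = np.diag(np.abs(np.diag(A)))
    aL = np.abs(np.tril(A, -1))
    aU = np.abs(np.triu(A, 1))
    M = (aD - aL) @ np.linalg.inv(aD) @ (aD - aU)
    return np.linalg.solve(M, aL @ np.linalg.inv(aD) @ aU @ np.ones(n))

def is_qn(A):
    return bool(np.all(qn_xi(A) < 1.0))
```

As noted later in this section, an SDD matrix is always QN, which gives a convenient sanity check.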
Theorem 3.2 in [7] shows that a QN matrix is an H-matrix; however, an H-matrix may not be a QN matrix. In this paper, we introduce a new subclass of H-matrices, called S-QN matrices, which also properly generalizes the class of QN matrices. Next, we give the definition of an S-QN matrix.
Definition 2.4
Let A = [aij] ∈ Cn×n, n ≥ 2, with aii≠ 0, i ∈ N, and let S be any given nonempty proper subset of N. Then A is called an S-QN matrix if the matrix G defined by (2.3) is an S-SDD matrix with positive diagonal entries.
The following result gives a characterization of S-QN matrices.
Theorem 2.5
A matrix A is an S-QN matrix if and only if there exists a matrix \(W\in {\mathcal {W}}\) such that AW is a QN matrix.
Proof
At first, we prove the necessity. Let A = D + L + U be an S-QN matrix. By Definition 2.4, the matrix G defined by (2.3) is an S-SDD matrix and \(G={\mathcal {M}}(G)\). According to Theorem 2.2, there exists a positive diagonal matrix \(W\in {\mathcal {W}}\) such that GW is an SDD matrix, i.e., for i = 1,2,…,n, we have
Next, we show that AW is a QN matrix. According to (2.4), we only need to prove that \( {\bar G}e = {\mathcal {M}}(\bar G)e>0\), where \(\bar {G}={{\bar M}^{- 1}}{\mathcal {M}}\left ({AW} \right ) = {W^{- 1}}{M^{- 1}}{\mathcal {M}}\left (A \right )W = {W^{- 1}}GW\) and \(\bar M = \left ({\left | {DW} \right | - \left | {LW} \right |} \right ){\left | {DW} \right |^{- 1}}\left ({\left | {DW} \right | - \left | {UW} \right |} \right )= MW\). For i = 1,2,…,n,
Hence, AW is a QN-matrix. The necessity is proved.
Next, we prove the sufficiency. If there exists a positive diagonal matrix \(W\in {\mathcal {W}}\) such that AW is a QN matrix, we need to prove that the matrix A is an S-QN matrix. That is, we need to show the matrix G is an S-SDD matrix with positive diagonal entries.
Since AW is a QN matrix, we have that \( \bar G e= {\mathcal {M}}(\bar G)e={{\bar M}^{- 1}}{\mathcal {M}}\left ({AW} \right )e = W^{-1}GWe=(W^{-1}G)We>0\). By Theorem 2.2, W− 1G is an S-SDD matrix. Moreover, W− 1G can be expressed as
Obviously, the matrix G is also an S-SDD matrix with positive diagonal entries. The sufficiency is proved. □
Remark 2.6
From the above, we conclude that an S-QN matrix is an H-matrix. Indeed, when the matrix A is an S-QN matrix, there exists \(W\in {\mathcal {W}}\) such that AW is a QN matrix. It is well-known that a QN matrix is an H-matrix; that is, there exists a positive diagonal matrix X such that (AW)X is an SDD matrix, i.e., there exists a positive diagonal matrix WX such that A(WX) is an SDD matrix. Hence, an S-QN matrix is an H-matrix.
Remark 2.7
It is easy to see that the scaling matrix W for an S-QN matrix A is not unique. Since A is an S-QN matrix, the matrix G is an S-SDD matrix. From the proof of Theorem 2.5, the scaling matrix W for A satisfies that GW is an SDD matrix. Therefore, the parameter γ of the matrix W can be taken from the interval (γ1(G),γ2(G)); that is, any choice of γ from this interval yields a scaling matrix W such that AW is a QN matrix.
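Remark 2.7 suggests a concrete recipe: form G, pick γ inside (γ1(G), γ2(G)), and scale. A Python/NumPy sketch of the whole pipeline follows (illustrative only; the γ endpoint formulas are the standard S-SDD ones, assumed here since the paper displays them as an image):

```python
import numpy as np

def split_abs(A):
    A = np.asarray(A, dtype=float)
    return (np.diag(np.abs(np.diag(A))),
            np.abs(np.tril(A, -1)), np.abs(np.triu(A, 1)))

def G_of(A):
    """G = M^{-1} M(A) with M = (|D|-|L|)|D|^{-1}(|D|-|U|), as in (2.2)-(2.3)."""
    aD, aL, aU = split_abs(A)
    M = (aD - aL) @ np.linalg.inv(aD) @ (aD - aU)
    MA = -np.abs(np.asarray(A, dtype=float))
    np.fill_diagonal(MA, np.diag(aD))
    return np.linalg.solve(M, MA)

def scaling_W(G, S):
    """W = diag(w_i), w_i = gamma for i in S and 1 otherwise, with gamma
    strictly inside (gamma1(G), gamma2(G)) (assumed standard endpoints)."""
    B = np.abs(G)
    n = B.shape[0]
    Sbar = [k for k in range(n) if k not in S]
    r = lambda i, T: sum(B[i, k] for k in T if k != i)
    g1 = max(r(i, Sbar) / (B[i, i] - r(i, S)) for i in S)
    g2 = min((B[j, j] - r(j, Sbar)) / r(j, S) if r(j, S) > 0 else np.inf
             for j in Sbar)
    gamma = 0.5 * (g1 + min(g2, 2.0 * g1 + 1.0))   # some point of (g1, g2)
    return np.diag([gamma if i in S else 1.0 for i in range(n)])

def is_qn(A):
    aD, aL, aU = split_abs(A)
    M = (aD - aL) @ np.linalg.inv(aD) @ (aD - aU)
    xi = np.linalg.solve(M, aL @ np.linalg.inv(aD) @ aU @ np.ones(aD.shape[0]))
    return bool(np.all(xi < 1.0))

A = np.array([[4.0, -1.0, -1.0], [-1.0, 4.0, -1.0], [-1.0, -1.0, 4.0]])
G = G_of(A)            # SDD here, hence S-SDD for any proper S
W = scaling_W(G, [0])
```

For this SDD (hence S-QN) test matrix, Theorem 2.5 guarantees that AW is again a QN matrix.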
The following Example 2.1 shows that the class of QN matrices is a proper subset of the class of S-QN matrices.
Example 2.1
The matrix
is an S-QN matrix with S = {2,3}, and it is not a QN matrix.
The following Examples 2.2 and 2.3 illustrate that the set of QN matrices and the set of S-SDD matrices do not contain each other.
Example 2.2
The matrix
is a QN matrix, but not an S-SDD matrix for any subset S.
Example 2.3
The matrix
is an S-SDD matrix for S = {1,2}, but not a QN matrix.
By Theorem 2.2, for an arbitrary S-SDD matrix A, there exists a diagonal matrix \(W\in \mathcal {W}\), such that AW is an SDD matrix. It is well-known that an SDD matrix is also a QN matrix, so matrix AW is a QN matrix. Then, according to Theorem 2.5, the matrix A is an S-QN matrix. Therefore, the class of S-QN matrices contains the class of S-SDD matrices.
Figure 1 displays the relationship among these subclasses of H-matrices.
3 Error bounds for LCPs of S-QN matrices
When the matrix involved is a P-matrix, Chen and Xiang proposed the corresponding error bound (1.2) for LCP(A,q). A major difficulty with (1.2) is the computation of \(\max _{d\in [0,1]^{n}}\parallel {(I-{\Lambda } +{\Lambda } A)^{-1}}\parallel _{\infty }\), where Λ = diag(di) with 0 ≤ di ≤ 1. However, if the involved matrix belongs to a subclass of P-matrices, for example, H-matrices with positive diagonals, then many computable error bounds for the LCP(A,q) can be derived. Since S-QN matrices form a subclass of H-matrices, we present two kinds of error bounds for LCP(A,q) with an S-QN matrix.
3.1 Error bound I for LCPs with an S-QN matrix
García-Esnaola and Peña provided an error bound for LCP(A,q) of H-matrices with positive diagonal entries in [6]; it requires a diagonal matrix X to be known in advance such that AX is strictly diagonally dominant (SDD). Based on this idea, we first find a diagonal matrix P for an S-QN matrix A such that AP is SDD. Then the error bound for LCP(A,q), when the involved matrix is an S-QN matrix with positive diagonal entries, is presented. The following Theorem 3.1 can be found in [6].
Theorem 3.1
[6] Assume that A = [aij] ∈ Rn×n is an H-matrix with positive diagonal entries. Let \(\bar D = diag\left ({{{\bar d}_{i}}} \right )\) with \({\bar d_{i}} > 0, i\in {N}\), be a diagonal matrix such that \(A\bar {D}\) is strictly diagonally dominant by rows. For any i ∈ N, let \({\bar \beta _{i}} = {a_{ii}}{\bar d_{i}} - \sum \limits _{j \ne i} {\left | a_{ij} \right |\bar d}_{j}\). Then
$$ \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty}} \le \max\left\{ {\frac{{\max\limits_{i} \left\{ {{{\bar d}_{i}}} \right\}}}{{\min\limits_{i} \left\{ {{{\bar \beta }_{i}}} \right\}}},\frac{{\max\limits_{i} \left\{ {{{\bar d}_{i}}} \right\}}}{{\min\limits_{i} \left\{ {{{\bar d}_{i}}} \right\}}}} \right\}, \qquad\qquad (3.1) $$
where Λ = diag(di) with 0 ≤ di ≤ 1.
In [8], Dai et al. found a diagonal matrix K for a QN matrix A such that AK is SDD. Similarly, we can find the corresponding diagonal matrix P = diag(pi) for S-QN matrices.
Theorem 3.2
[8] Let A = [aij] ∈ Rn×n be a QN matrix with positive diagonal entries such that, for each i = 1,…,n − 1, aij≠ 0 for some j > i, and, for each i = 2,…,n, aij≠ 0 for some j < i. Let ξ := M− 1 |L| |D|− 1 |U|e, where M is defined in (2.2). Then the matrix K = diag (k1,…,kn) with k1 := ξ1 + 𝜖, \(\epsilon \in \left ({0,\min \left \{ {1 - {\xi _{1}}, \min \limits _{2 \le i \le n} \frac {{{{\left [ {{\mathcal {M}}\left (A \right )\xi } \right ]}_{i}}}}{{\left | {{a_{i1}}} \right |}}} \right \}} \right )\), where \(\frac {{{{\left [ {{\mathcal {M}}\left (A \right )\xi } \right ]}_{i}}}}{{\left | {{a_{i1}}} \right |}} = \infty \) when ai1 = 0, and ki := ξi for i = 2,…,n, has positive diagonal entries and satisfies that AK is SDD.
Corollary 3.3
Let A = [aij] ∈ Rn×n be an S-QN matrix with positive diagonal entries such that, for each i = 1,…,n − 1, aij≠ 0 for some j > i, and, for each i = 2,…,n, aij≠ 0 for some j < i. Let \(W=diag(w_{i})\in \mathcal {W}\) be such that AW is a QN matrix, let \(\bar \xi = {W^{- 1}}{M^{- 1}}\left | L \right |{\left | D \right |^{ - 1}}\left | U \right |We\), where M is defined in (2.2), and let K = diag (k1,…,kn) with \({k_{1}}: = {\bar \xi _{1}} + \epsilon \), \(\epsilon \in \left ({0,\min \left \{ {1 - {{\bar \xi }_{1}}, \min \limits _{2 \le i \le n} \frac {{{{\left [ {{\mathcal {M}}\left ({AW} \right )\bar \xi } \right ]}_{i}}}}{{\left | {{a_{i1}}} \right |{w_{1}}}}} \right \}} \right )\), where \(\frac {{{{\left [ {{\mathcal {M}}\left ({AW} \right )\bar \xi } \right ]}_{i}}}}{{\left | {{a_{i1}}} \right |{w_{1}}}} = \infty \) when ai1 = 0, and \({k_{i}}: ={\bar \xi _{i}}\) for i = 2,…,n. Then the matrix P = WK = diag(pi) is a positive diagonal matrix such that AP is SDD.
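To make Corollary 3.3 concrete, the sketch below (illustrative only) runs the construction on a small matrix that is QN but not SDD, taking W = I for simplicity (γ = 1, admissible because G itself is SDD whenever A is QN), so that ξ̄ reduces to the ξ of Theorem 3.2; the choice of ε as half its admissible upper limit is arbitrary:

```python
import numpy as np

def is_sdd(B):
    B = np.abs(np.asarray(B, dtype=float))
    return bool(np.all(np.diag(B) > B.sum(axis=1) - np.diag(B)))

A = np.array([[2.0, -1.0, -1.0],
              [-1.0, 2.0, 0.0],
              [-1.0, 0.0, 2.0]])   # QN but not SDD (row 0: 2 = 1 + 1)
n = A.shape[0]
aD = np.diag(np.abs(np.diag(A)))
aL = np.abs(np.tril(A, -1))
aU = np.abs(np.triu(A, 1))
M = (aD - aL) @ np.linalg.inv(aD) @ (aD - aU)            # (2.2)
W = np.eye(n)                                            # gamma = 1
xi = np.linalg.solve(M, aL @ np.linalg.inv(aD) @ aU @ W @ np.ones(n))

# comparison matrix of AW, then the admissible upper limit for epsilon
MAW = -np.abs(A @ W)
np.fill_diagonal(MAW, np.diag(A @ W))
caps = [MAW[i] @ xi / (abs(A[i, 0]) * W[0, 0])
        for i in range(1, n) if A[i, 0] != 0]
eps = 0.5 * min([1.0 - xi[0]] + caps)

K = np.diag(np.concatenate(([xi[0] + eps], xi[1:])))
P = W @ K
```

Although A itself is not SDD, the scaled matrix AP is, exactly as the corollary asserts.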
The following results present an error bound for the linear complementarity problem of S-QN matrices, which generalizes the error bound for QN matrices in [8]. This bound can improve the previous bounds for H-matrices.
Theorem 3.4
Let A = [aij] ∈ Rn×n be an S-QN matrix with positive diagonal entries such that, for each i = 1,…,n − 1, aij ≠ 0 for some j > i, and, for each i = 2,…,n, aij ≠ 0 for some j < i. Then
$$ \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty}} \le \max\left\{ {\frac{{\max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{\min\limits_{i} \left\{ {{t_{i}}} \right\}}},\frac{{\max\limits_{i} \left\{ {{p_{i}}} \right\}}}{{\min\limits_{i} \left\{ {{p_{i}}} \right\}}}} \right\}, \qquad\qquad (3.2) $$
where t1 := 𝜖a11w1, \({t_{i}}: = {a_{ii}}{w_{i}}{\bar \xi _{i}} - \sum \limits _{j \in N\backslash \left \{ i \right \}} {\left | {{a_{ij}}} \right |{w_{j}}{{\bar \xi }_{j}}} - \epsilon \left | {{a_{i1}}} \right |{w_{1}}\) for each i = 2,…,n; \(\bar {\xi }\), W, P and 𝜖 are defined in Corollary 3.3.
Proof
By Remark 2.6, an S-QN matrix is an H-matrix, so we can apply the bound (3.1) for H-matrices to S-QN matrices. Then we need to prove that \({\bar d_{i}} = {p_{i}}\) and \({t_{i}} = {\bar \beta _{i}}\) for all i ∈ N. Obviously, \({\bar d_{i}} = {p_{i}}\) for all i ∈ N, so we only need to prove that \({t_{i}} = {\bar \beta _{i}}\) for all i ∈ N.
By Theorem 2.5, AW is a QN matrix, and
we have \({\mathcal {M}}\left ({AW} \right )\bar \xi + \left | L \right |{\left | D \right |^{- 1}}\left | U \right |W\bar \xi = \left | L \right |{\left | D \right |^{- 1}}\left | U \right |We\); in other words, \({\mathcal {M}}\left ({AW} \right )\bar \xi = \left | L \right |{\left | D \right |^{- 1}}\left | U \right |W\left ({e - \bar \xi } \right )\).
Denote ζ = (𝜖,0,…,0)T, then \(Ke = {\left ({{k_{1}}, {\ldots } ,{k_{n}}} \right )^{T}} = {\left ({{{\bar \xi }_{1}} + \epsilon ,{{\bar \xi }_{2}}, {\ldots } ,{{\bar \xi }_{n}}} \right )^{T}} = \bar \xi + \zeta \),
It is easy to get that
For i = 1, \({\bar \beta _{1}} = {\left [ {{\mathcal {M}}\left ({AP} \right )e} \right ]_{1}} = {\left [ {{\mathcal {M}}\left ({AWK} \right )e} \right ]_{1}} = {\left [ {{\mathcal {M}}\left ({AW} \right ) \cdot \zeta } \right ]_{1}} = \epsilon {a_{11}}{w_{1}} = {t_{1}}.\) For i = 2,…,n,
Hence, we can get
□
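Putting the pieces together, the Error Bound I is computable in a few lines. The sketch below (illustrative only) evaluates max{maxᵢ pᵢ / minᵢ tᵢ, maxᵢ pᵢ / minᵢ pᵢ} on a small QN matrix with W = I (so ξ̄ = ξ) and an admissible ε chosen by hand:

```python
import numpy as np

A = np.array([[2.0, -1.0, -1.0],
              [-1.0, 2.0, 0.0],
              [-1.0, 0.0, 2.0]])   # a QN (hence S-QN) matrix; take W = I
n = A.shape[0]
w = np.ones(n)                     # diagonal of W
aD = np.diag(np.abs(np.diag(A)))
aL = np.abs(np.tril(A, -1))
aU = np.abs(np.triu(A, 1))
M = (aD - aL) @ np.linalg.inv(aD) @ (aD - aU)
xi = np.linalg.solve(M, aL @ np.linalg.inv(aD) @ aU @ w)  # xi_bar for W = I
eps = 0.25                         # an admissible epsilon from Corollary 3.3

a = np.diag(A)
t = np.empty(n)
t[0] = eps * a[0] * w[0]
for i in range(1, n):
    t[i] = (a[i] * w[i] * xi[i]
            - sum(abs(A[i, j]) * w[j] * xi[j] for j in range(n) if j != i)
            - eps * abs(A[i, 0]) * w[0])

p = w * np.concatenate(([xi[0] + eps], xi[1:]))           # p_i = w_i k_i
bound_I = max(p.max() / t.min(), p.max() / p.min())
```

The positivity of every tᵢ confirms that AP is SDD, and the resulting value is the right-hand side of (3.2).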
The error bound of LCP(A,q) given in [5] for an H-matrix A = [aij] ∈ Rn×n with positive diagonal entries is the following:
where \({\mathcal {M}}\left (A \right )\) is the comparison matrix of A, Γ = diag(a11,a22,…,ann) is the diagonal part of A and max (Γ,I) = diag(max {a11,1},⋯,max {ann,1}).
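This bound is directly computable once the comparison matrix is formed; a minimal sketch (illustrative only):

```python
import numpy as np

def h_matrix_bound(A):
    """Bound (3.3): || M(A)^{-1} max(Gamma, I) ||_inf, where Gamma is the
    diagonal part of A (A an H-matrix with positive diagonal entries)."""
    A = np.asarray(A, dtype=float)
    MA = -np.abs(A)
    np.fill_diagonal(MA, np.diag(A))     # positive diagonal assumed
    G = np.diag(np.maximum(np.diag(A), 1.0))
    return np.linalg.norm(np.linalg.inv(MA) @ G, np.inf)
```

For the SDD matrix below, M(A)⁻¹ has constant absolute row sums 0.5 and max(Γ, I) = 4I, so the bound evaluates to 2.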
The following Example 3.1 illustrates that the bound (3.2) obtained by Theorem 3.4 is better than the bound (3.3) given in [5].
Example 3.1
Consider the following matrix
It is easy to check that the matrix A is an S-QN matrix for S = {2,3} and W = diag(1,0.4,0.4). By Theorem 3.4, we can get \(\bar \xi = {\left ({0.9306,0.9306,0.2917} \right )^{T}}\), if we choose 𝜖 = 0.0347, then
So the bound (3.2) in Theorem 3.4 is
which is smaller than the bound (3.3)
3.2 Error bound II for LCPs with an S-QN matrix
In Theorem 3.4, we obtained an error bound for LCPs of S-QN matrices. Although the bound (3.2) can improve the previous error bound for H-matrices, it is still worth improving. For example, for a given S-QN matrix A = [aij] ∈ Rn×n with positive diagonals, if there exists some i ∈ N such that aij = 0 for all j > i or for all j < i, then the bound (3.2) cannot be used to estimate \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\). Besides, when 𝜖 → 0,
which implies that
These facts show that when the assumptions of Theorem 3.4 are not satisfied, or when 𝜖→0, using the bound (3.2) to estimate \(\max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\) is inappropriate. To overcome these drawbacks, we propose a new error bound for S-QN matrices, which is better than the bound (3.2) in some cases. Some useful lemmas are listed first.
Lemma 3.5
[11] Let α > 0 and η > 0. Then for any x ∈ [0,1],
and
Lemma 3.6
[12] Let A = [aij] ∈ Cn×n be a matrix with aii≠ 0 for i ∈ N and \(\hat A = I - {\Lambda } + {\Lambda } A = \left [ {{{\hat a}_{ij}}} \right ]\), where Λ = diag(di) with 0 ≤ di ≤ 1. Then
and
where\({z_{1}}({\hat A} ) = {\eta _{1}}\left (A \right ) = 1,{z_{i}}({\hat A} ) = \sum \limits _{j = 1}^{i - 1} {\frac {{| {{{\hat a}_{ij}}}|}}{{| {{{\hat a}_{jj}}} |}}{z_{j}}({\hat A} )} + 1\), and
Lemma 3.7
[7] Let A = [aij] ∈ Cn×n, n ≥ 2, be a QN matrix. Then
where M is defined in (2.2).
Next, we introduce the new error bound for LCPs when the involved matrix is an S-QN matrix. To estimate \(\max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\), we first prove that the matrix \(\hat A = I - {\Lambda } + {\Lambda } A\) is also an S-QN matrix whenever the matrix A is. Then, the error bound is obtained by using the bound on the inverse of the S-QN matrix \(\hat {A}\).
Lemma 3.8
Let A = [aij] ∈ Cn×n be an S-QN matrix with positive diagonal entries, and let \(\hat A = I - {\Lambda } + {\Lambda } A = \left [ {{{\hat a}_{ij}}} \right ]\), where Λ = diag(di) with 0 ≤ di ≤ 1. Then \(\hat {A}\) is an S-QN matrix.
Proof
Since A is an S-QN matrix, there exists a diagonal matrix \(W =diag(w_{i})\in \mathcal {W}\) such that \(\bar {A}=AW\) is a QN matrix. Let \(\bar \xi = {W^{- 1}}{M^{- 1}}\left | L \right |{\left | D \right |^{- 1}}\left | U \right |We=(\bar {\xi }_{1},\bar {\xi }_{2},\ldots ,\bar {\xi }_{n})^{T}\), where M is defined in (2.2), then
Denote
Since A = D + L + U, it follows that \(\hat {A}\) and \(\bar {A}\) can be split in the form of \(\hat A = \hat D + \hat L + \hat U\), \(\bar {A}=\bar {D}+\bar {L}+\bar {U}\), respectively.
To prove \(\hat {A}\) is an S-QN matrix, by Theorem 2.5, we only need to prove that \(\hat {A}W\) is a QN matrix. If we denote
where \(\hat {M} = (|\hat {D}|-|\hat {L}|)|\hat {D}|^{-1}(|\hat {D}|-|\hat {U}|),\) then we only need to prove \(e > \hat \xi \). Denote
where
Then we can get
From (3.6) and (3.7), we have \(\hat \xi = {W^{- 1}}{{\hat M}^{- 1}}\hat v = {W^{- 1}}(|\hat {D}|-|\hat {U}|)^{-1}|\hat {D}|(|\hat {D}|-|\hat {L}|)^{-1}\hat {v}\). If we denote \(\hat \lambda = (|\hat {D}|-|\hat {L}|)^{-1}\hat {v}={({{{\hat \lambda }_{1}},\ldots ,{{\hat \lambda }_{n}}} )^{T}} ,\) then we get
and
By (3.8) and (3.10), we can obtain the entries of \(\hat {\lambda }\) as
Moreover, followed by (3.9), we have \((|\hat {D}|-|\hat {U}|)W\hat {\xi } = |\hat { D}|\hat {\lambda },\) this implies the following recursive relations
Similarly, if we denote \(\bar {v}=|\bar {L}||\bar {D}^{-1}||\bar {U}|We=(\bar {v}_{1},\bar {v}_{2},...,\bar {v}_{n})^{T}, \bar {\lambda }=(|\bar {D}|-|\bar {L}|)^{-1}\bar {v}=(\bar {\lambda }_{1},\bar {\lambda }_{2},\ldots ,\bar {\lambda }_{n})^{T},\) then
where
and
Next, we use mathematical induction to prove \({\hat \lambda _{i}} \le {\bar \lambda _{i}}\) for any i ∈ N.
- (1)
The conclusion holds for i = 1 and i = 2, since \({\hat \lambda _{1}} = {\bar \lambda _{1}}=0\) and
$$ \begin{array}{@{}rcl@{}} {{\hat \lambda }_{2}} &=& \frac{{{{\hat v}_{2}}}}{{\left| {{{\hat a}_{22}}} \right|}} + \frac{{\left| {{{\hat a}_{21}}} \right|}}{{\left| {{{\hat a}_{22}}} \right|}}{{\hat \lambda }_{1}} =\frac{{{{\hat v}_{2}}}}{{\left| {{{\hat a}_{22}}} \right|}}\\ &=& \frac{1}{{\left| {{{\hat a}_{22}}} \right|}} \cdot \frac{{\left| {{{\hat a}_{21}}} \right|}}{{\left| {{{\hat a}_{11}}} \right|}}\sum\limits_{j = 2}^{n} {\left| {{{\hat a}_{1j}}} \right|{w_{j}}} \\ &=& \frac{{{d_{2}}\left| {{a_{21}}} \right|}}{{1 - {d_{2}} + {d_{2}}\left|{a_{22}}\right|}} \cdot \frac{1}{{1 - {d_{1}} + {d_{1}}\left|{a_{11}}\right|}} \cdot \sum\limits_{j = 2}^{n} {{d_{1}}\left|{a_{1j}}\right|{w_{j}}} \\ &\le& \frac{{\left| {{a_{21}}} \right|}}{{\left|{a_{22}}\right|}} \cdot \frac{1}{{\left|{a_{11}}\right|}} \cdot \sum\limits_{j = 2}^{n} {\left|{a_{1j}}\right|{w_{j}}} \\ &=& \frac{{{v_{2}}}}{{\left|{a_{22}}\right|}} = {{\bar \lambda }_{2}}, \end{array} $$ - (2)
we assume that \({\hat \lambda _{i}} \leq {\bar \lambda _{i}}\) holds for all i ≤ k; then, by (3.8),
$$ \begin{array}{@{}rcl@{}} {{\hat \lambda }_{k + 1}} &=& \frac{{{{\hat v}_{k + 1}}}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}} + \sum\limits_{j = 1}^{k} {\frac{{\left| {{{\hat a}_{k + 1,j}}} \right|}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}} {{\hat \lambda }_{j}}\\ &=& \frac{1}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}\left( {\sum\limits_{l = 1}^{k} {\left( {\frac{{\left| {{{\hat a}_{k + 1,l}}} \right|}}{{\left| {{{\hat a}_{ll}}} \right|}}\sum\limits_{j = l + 1}^{n} {\left| {{{\hat a}_{lj}}} \right|{w_{j}}} } \right)} } \right) + \sum\limits_{j = 1}^{k} {\frac{{\left| {{{\hat a}_{k + 1,j}}} \right|}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}} {{\hat \lambda }_{j}}\\ &=& \sum\limits_{l = 1}^{k} {\left( {\frac{{\left| {{{\hat a}_{k + 1,l}}} \right|}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}\sum\limits_{j = l + 1}^{n} {\frac{{\left| {{{\hat a}_{lj}}} \right|}}{{\left| {{{\hat a}_{ll}}} \right|}}{w_{j}}} } \right)} + \sum\limits_{j = 1}^{k} {\frac{{\left| {{{\hat a}_{k + 1,j}}} \right|}}{{\left| {{{\hat a}_{k + 1,k + 1}}} \right|}}} {{\hat \lambda }_{j}}\\ &=& \sum\limits_{l = 1}^{k} {\left( {\frac{{{d_{k + 1}}\left| {{a_{k + 1,l}}} \right|}}{{1 - {d_{k + 1}} + {d_{k + 1}}\left| {{a_{k + 1,k + 1}}} \right|}}\sum\limits_{j = l + 1}^{n} {\frac{{{d_{l}}\left| {{a_{lj}}} \right|}}{{1 - {d_{l}}+ {d_{l}}\left| {{a_{ll}}} \right|}}{w_{j}}} } \right)}\\ &&+ \sum\limits_{j = 1}^{k} {\frac{{{d_{k + 1}}\left| {{a_{k + 1,j}}} \right|}}{{1 - {d_{k + 1}} + {d_{k + 1}}\left| {{a_{k + 1,k + 1}}} \right|}}} {{\hat \lambda }_{j}}\\ &\le& \sum\limits_{l = 1}^{k} {\left( {\frac{{\left| {{a_{k + 1,l}}} \right|}}{{\left| {{a_{k + 1,k + 1}}} \right|}}\sum\limits_{j = l + 1}^{n} {\frac{{\left| {{a_{lj}}} \right|}}{{\left| {{a_{ll}}} \right|}}{w_{j}}} } \right)} + \sum\limits_{j = 1}^{k} {\frac{{\left| {{a_{k + 1,j}}} \right|}}{{\left| {{a_{k + 1,k + 1}}} \right|}}} {{\bar \lambda }_{j}}\\ &=& \frac{{{v_{k + 1}}}}{{\left| {{a_{k + 1,k + 1}}} \right|}} + \sum\limits_{j = 1}^{k} {\frac{{\left| {{a_{k + 1,j}}} \right|}}{{\left| {{a_{k + 1,k + 1}}} \right|}}} {{\bar \lambda }_{j}}\\ &=& {{\bar \lambda }_{k + 1}}. \end{array} $$
So \({\hat \lambda _{i}} \leq {\bar \lambda _{i}}\) holds for any i ∈ N.
By (3.12), (3.14) and Lemma 3.5, we can get \(\hat \xi \le \bar \xi \). From (3.5), we can get \(e > \hat \xi \), i.e., \(\hat AW\) is a QN matrix. Therefore, \(\hat A = I - {\Lambda } + {\Lambda } A\) is an S-QN matrix. □
The following lemma extends the bound of the inverse of the QN matrix given by Lemma 3.7 to the S-QN matrix.
Lemma 3.9
Let A = [aij] ∈ Cn×n, n ≥ 2, be an S-QN matrix, let \(W=diag(w_{i})\in \mathcal {W}\) be such that AW is a QN matrix, and let \(\delta = \min \limits_{i}\{ {w^{-1}_{i}} \}\). Then
where M is defined in (2.2).
Proof
Since A is an S-QN matrix, AW is a QN matrix. By Lemma 3.7
Denote \(A^{-1}=[a_{ij}^{(-1)}]\) and \(R_{i}(A^{-1})=\sum \limits _{j=1}^{n} |a_{ij}^{(-1)}|\) (the i th absolute row sum). Then ∥A− 1∥∞ and ∥(AW)− 1∥∞ can be expressed as
respectively. Let \(\delta = {\kern 1pt} \min \limits _{i} {\kern 1pt} \{ {w_{i}^{- 1}} \}\), then
Hence
□
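The scaling step in this proof is easy to confirm numerically: since A⁻¹ = W(AW)⁻¹, each absolute row sum of A⁻¹ is at most maxᵢ wᵢ = 1/δ times the corresponding row sum of (AW)⁻¹. A small check (matrix and W chosen arbitrarily, for illustration only):

```python
import numpy as np

A = np.array([[4.0, -1.0], [-1.0, 3.0]])
W = np.diag([1.0, 0.5])
delta = np.min(1.0 / np.diag(W))          # delta = min_i w_i^{-1}

# infinity norm of the inverse = max absolute row sum of the inverse
norm_inv = lambda B: np.linalg.norm(np.linalg.inv(B), np.inf)
lhs = norm_inv(A)                         # ||A^{-1}||_inf
rhs = norm_inv(A @ W) / delta             # (1/delta) ||(AW)^{-1}||_inf
```

Here A⁻¹ = (1/11)[[3, 1], [1, 4]], so the left-hand side is 5/11, comfortably below the right-hand side.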
Theorem 3.10
Let A = [aij] ∈ Rn×n, n ≥ 2, be an S-QN matrix with positive diagonal entries, let \(W=diag(w_{i})\in \mathcal {W}\) be such that AW is a QN matrix, and let \(\delta = \min \limits_{i}\{ {w^{-1}_{i}} \}\). Then
$$ \max\limits_{d \in {{\left[ {0,1} \right]}^{n}}} {\left\| {{{\left( {I - {\Lambda} + {\Lambda} A} \right)}^{- 1}}} \right\|_{\infty}} \le \frac{1}{\delta} \max\limits_{i \in N} \frac{{{{\left\{ {{W^{- 1}}\beta } \right\}}_{i}}}}{{{{\left\{ {e - \bar \xi } \right\}}_{i}}}}, \qquad\qquad (3.16) $$
where \(\bar \xi = {W^{- 1}}{M^{- 1}}\left | L \right |{\left | D \right |^{- 1}}\left | U \right |We\), M is defined in (2.2), β = (β1,β2,…,βn)T with \({\beta _{n}} = \frac {{{\eta _{n}}\left (A \right )}}{{\min \left \{ {{a_{nn}},1} \right \}}}\), \({\beta _{i}} = \frac {{{\eta _{i}}\left (A \right )}}{{\min \left \{ {{a_{ii}},1} \right \}}} + \sum \limits _{j = i + 1}^{n} {\frac {{\left | {{a_{ij}}} \right |}}{{\left | {{a_{ii}}} \right |}}} {\beta _{j}}, i = n-1, {\ldots } , 1,\) and ηi (A) is defined in Lemma 3.6.
Proof
By Lemmas 3.8 and 3.9, \(\hat A = I - {\Lambda } + {\Lambda } A\) is an S-QN matrix, and
By (2.3)
Denote \(z({\hat A} ) = {({{z_{1}}({\hat A}),{z_{2}}({\hat A} ), {\ldots } ,{z_{n}}({\hat A} )} )^{T}}\) given in Lemma 3.6. It follows that \(| {\hat D} |{({| {\hat D} | - | {\hat L} |} )^{- 1}}e = z({\hat A} )\), and \({\hat M^{- 1}}e = {({| {\hat D}| - | {\hat U} |} )^{- 1}}| {\hat D} |{({| {\hat D} | - | {\hat L} |} )^{- 1}}e = {({| {\hat D} | - | {\hat U}|} )^{- 1}}z({\hat A} ),\) then
Denote \(y = {({| {\hat D} | - | {\hat U} |} )^{- 1}}z({\hat A} ) = {({{y_{1}},{y_{2}}, {\ldots } ,{y_{n}}} )^{T}}\), then \(({| {\hat D} | - | {\hat U} |} )y = z({\hat A} ),\) and we can obtain
Next, we prove yi ≤ βi for each i ∈ N. In fact, by Lemma 3.6, for i = n,
For i = n − 1
Similarly, for i = n − 2,…,2,1, we also have
So yi ≤ βi holds for each i ∈ N. If we let β = (β1,β2,…,βn)T, then
therefore, by (3.18), we have
□
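The recursion for β is a simple backward substitution. The sketch below (illustrative only) takes the vector η(A) of Lemma 3.6 as a given input, since its defining recursion is displayed in the paper only as an image, and evaluates the right-hand side of (3.16):

```python
import numpy as np

def error_bound_II(A, w, eta):
    """Error Bound II (3.16): (1/delta) * max_i (W^{-1} beta)_i / (e - xi_bar)_i.
    `eta` stands for (eta_1(A), ..., eta_n(A)) from Lemma 3.6, passed in as
    data; A must be S-QN with positive diagonal, W = diag(w)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    a = np.diag(A)
    # xi_bar = W^{-1} M^{-1} |L| |D|^{-1} |U| W e, with M as in (2.2)
    aD = np.diag(np.abs(a))
    aL = np.abs(np.tril(A, -1))
    aU = np.abs(np.triu(A, 1))
    M = (aD - aL) @ np.linalg.inv(aD) @ (aD - aU)
    xi = np.linalg.solve(M, aL @ np.linalg.inv(aD) @ aU @ w) / w
    # beta by backward substitution (Theorem 3.10)
    beta = np.empty(n)
    beta[n - 1] = eta[n - 1] / min(a[n - 1], 1.0)
    for i in range(n - 2, -1, -1):
        beta[i] = (eta[i] / min(a[i], 1.0)
                   + np.sum(np.abs(A[i, i + 1:]) / a[i] * beta[i + 1:]))
    delta = np.min(1.0 / w)
    return (1.0 / delta) * np.max((beta / w) / (1.0 - xi))
```

In the test below the η values are purely illustrative placeholders; with them, ξ̄ = (0.125, 0.25), β = (2, 2), and the bound is 8/3.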
Remark 3.11
From Theorems 3.4 and 3.10, the bound (3.2) and the bound (3.16) both require a scaling matrix \(W\in {\mathcal {W}}\), which is closely related to the set of indices S. Hence, for an S-QN matrix, different choices of the index set S lead to different scaling matrices, and the resulting bounds will differ accordingly.
The following example shows that the error bound (3.2) with the parameter 𝜖 may be smaller than the error bound (3.16) if a proper 𝜖 is taken. However, it is hard to determine the optimal 𝜖.
Example 3.2
Consider the matrix
It is easy to check that A is an S-QN matrix for S = {1,3,4} and W = diag(0.6,1,0.6,0.6). By Theorem 3.4, we can get \(\bar \xi = (0.5445, 0.7127,0.7609, 0.2292 )^{T}\), 𝜖 ∈ (0,0.2233), and
Then, the error bound (3.2) in Theorem 3.4 with respect to 𝜖 is drawn in Fig. 2. Furthermore, by Theorem 3.10, we can calculate that β = (2.6926,3.4316,2.0053,1.3333)T and δ = 1, so the new error bound (3.16) is
Example 3.3 illustrates that the error bound (3.16) can improve the error bound (3.2) in some special cases.
Example 3.3
Consider the matrix
Observe that A is an S-QN matrix for S = {2,3} and W = diag(1,0.5,0.5). By Theorem 3.4, we can get \(\bar \xi = {\left ({0.6875,0.6875,0.75} \right )^{T}}\), 𝜖 ∈ (0,0.3125), and
Then the error bound (3.2) in Theorem 3.4 with respect to 𝜖 is drawn in Fig. 3. By Theorem 3.10, we can calculate that β = (5.2500,2.1250,2.500)T and δ = 1, so the new error bound (3.16) is
which is smaller than the bound (3.2) for each 𝜖 ∈ (0,0.3125). So, the bound (3.16) is better than the bound (3.2).
For an S-QN matrix A = (aij), if there exists some i ∈ N such that aij = 0 for all j > i or for all j < i, then the bound (3.2) cannot be used. However, the bound (3.16) may still perform well, as the following example shows.
Example 3.4
Consider the matrix
It is easy to verify that A is an S-QN matrix for S = {2,3} and W = diag(1,0.6,0.6), and we can calculate that \(\bar \xi = {\left ({0.6000,0.5000,0.5000} \right )^{T}}\). Since a21 = 0, A does not satisfy the hypotheses of Theorem 3.4, so the error bound (3.2) cannot be used to estimate \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\). However, by Theorem 3.10, we get that β = (4,2.5,1.5)T and δ = 1, and then the error bound (3.16) is
which implies that the bound (3.16) in Theorem 3.10 can be more effective and has wider application than the bound (3.2).
Obviously, the bounds we obtained apply to every subclass of S-QN matrices. In the following example, we take an SDD matrix and compare the previous bounds with the bound (3.2) and the bound (3.16). The results illustrate that the bounds we obtained perform well.
Example 3.5
Consider the matrix
Observe that A is an SDD matrix, and hence also a Nekrasov matrix, a QN matrix, and an S-QN matrix for any set of indices S. Here, we choose S = {1,2} and W = diag(1.2,1.2,1). Since A is an SDD matrix, it is obvious that I −Λ + ΛA is still an SDD matrix. By Lemma 3.5, applying the Varah bound [21] to the inverse of I −Λ + ΛA gives the bound
Since A is a Nekrasov matrix, aii > hi(A) for all i ∈ N, where h(A) = (2.9,2.8667,1.9333)T. By Lemma 3.6, we get η(A) = (1,1.3333,1.6667)T. Then the bound in [12] for Nekrasov matrices is
Moreover, we can calculate that ξ = (0.6774,0.7304,0.6444)T and β = (3.0685,2.3889,1.6667)T. Then, the bound in [9] for QN matrices is
By Theorem 3.10, we get that \(\bar \xi = {\left ({0.6385,0.6884,0.7289} \right )^{T}}\) and \(\delta =\frac {1}{1.2}\). Then the error bound (3.16) is
By Theorem 3.4, if we choose 𝜖 = 0.1808, then
So the bound (3.2) is
Obviously, the bound (3.2) and the bound (3.16) are smaller than the other bounds, which means that the bounds we obtained not only apply to a wider class of matrices but also give sharper estimates for some well-known types of matrices.
4 Conclusions
In this paper, we focus on some classes of special matrices and the error bound for linear complementarity problems. First, we put forward a new subclass of H-matrices, called S-QN matrices, which properly generalizes the class of QN matrices. Then, for an S-QN matrix A, we present two kinds of error bounds for LCP(A,q) to estimate the upper bound of \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\), called the Error Bound I and the Error Bound II :
the Error Bound I: \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\leq \max \left \{ {\frac {{ \max \limits _{i} \left \{ {{p_{i}}} \right \}}}{{ \min \limits _{i} \left \{ {{t_{i}}} \right \}}},\frac {{ \max \limits _{i} \left \{ {{p_{i}}} \right \}}}{{ \min \limits _{i} \left \{ {{p_{i}}} \right \}}}} \right \}\) (in Theorem 3.4);
the Error Bound II: \( \max \limits _{d \in {{\left [ {0,1} \right ]}^{n}}} {\left \| {{{\left ({I - {\Lambda } + {\Lambda } A} \right )}^{- 1}}} \right \|_{\infty } }\leq \frac {1}{\delta } \max \limits _{i \in N} \frac {{{{\left \{ {{W^{- 1}}\beta } \right \}}_{i}}}}{{{{\left \{ {e - \bar \xi } \right \}}_{i}}}}\) (in Theorem 3.10).
The Error Bound I generalizes the error bound for LCPs of QN matrices but applies only in some special cases. The Error Bound II can be used for every S-QN matrix. Numerical examples illustrate that both error bounds perform well in some cases.
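These estimates feed into the Chen–Xiang bound recalled in Section 1: for any x, \(\|x - x^{*}\|_{\infty } \le \max _{d \in [0,1]^{n}} \|(I - {\Lambda } + {\Lambda } A)^{-1}\|_{\infty } \, \|r(x)\|_{\infty }\) with r(x) = min{x,Ax + q}. The following sketch verifies this inequality on a small hypothetical SDD LCP; the bound on the maximum is the elementary Varah-type estimate \(1/\min _{i} \min \{1, a_{ii} - r_{i}(A)\}\) (obtained by applying the Varah bound to I −Λ + ΛA rowwise), not one of the bounds of this paper:

```python
import numpy as np

# Hypothetical SDD LCP: find x >= 0 with Ax + q >= 0 and x^T(Ax + q) = 0.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, -1.0])

# Here the unconstrained solution of Ax + q = 0 is nonnegative,
# so it solves the LCP: x* = (1/3, 1/3).
x_star = np.linalg.solve(A, -q)
assert np.all(x_star >= 0)

# Varah-type bound on max_d ||(I - Lambda + Lambda A)^{-1}||_inf.
d = np.diag(A)
r = np.sum(np.abs(A), axis=1) - np.abs(d)   # deleted absolute row sums
bound = 1.0 / np.min(np.minimum(1.0, d - r))

# Chen-Xiang estimate: ||x - x*||_inf <= bound * ||min(x, Ax + q)||_inf.
x = np.array([0.5, 0.2])                    # an arbitrary trial point
res = np.minimum(x, A @ x + q)              # natural residual r(x)
err = np.linalg.norm(x - x_star, np.inf)
print(err, bound * np.linalg.norm(res, np.inf))
```

The residual norm thus gives a computable a posteriori estimate of the distance to the solution without solving the LCP.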
References
Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)
Cottle, R.W., Pang, J.S., Stone, R.E.: The Linear Complementarity Problem. Academic Press, San Diego (1992)
Schäfer, U.: A linear complementarity problem with a P-matrix. SIAM Rev. 46(2), 189–201 (2004)
Feng, L., Linetsky, V., Morales, J.L., et al.: On the solution of complementarity problems arising in American options pricing. Optim. Methods Softw. 26(4–5), 813–825 (2011)
Chen, X., Xiang, S.: Computation of error bounds for P-matrix linear complementarity problems. Math. Program. 106(3), 513–525 (2006)
García-Esnaola, M., Peña, J.M.: A comparison of error bounds for linear complementarity problems of H-matrices. Linear Algebra Appl. 433, 956–964 (2010)
Kolotilina, L.Y.: Bounds for the inverses of generalized Nekrasov matrices. J. Math. Sci. 207(5), 786–794 (2015)
Dai, P.F., Li, C.J., Li, Y.T., Zhang, C.Y.: Error bounds for linear complementarity problems of QN-matrices. Calcolo 53, 647–657 (2016)
Gao, L., Wang, Y., Li, C.: New error bounds for the linear complementarity problem of QN-matrices. Numer. Algorithms, pp. 1–14 (2017)
Szulc, T., Cvetković, L., Nedović, M.: Scaling technique for Partition-Nekrasov matrices. Appl. Math. Comput. 271, 201–208 (2015)
Li, C.Q., Li, Y.T.: Note on error bounds for linear complementarity problems for B-matrices. Appl. Math. Lett. 57, 108–113 (2016)
Li, C.Q., Dai, P.F., Li, Y.T.: New error bounds for linear complementarity problems of Nekrasov matrices and B-Nekrasov matrices. Numer. Algorithms, pp. 1–13 (2016)
Cvetković, L., Kostić, V., Rauski, S.: A new subclass of H-matrices. Appl. Math. Comput. 208(1), 206–210 (2009)
Cvetković, L., Kostić, V., Varga, R.S.: A new Geršgorin-type eigenvalue inclusion set. Electron. Trans. Numer. Anal. 18, 73–80 (2004)
Fiedler, M., Ptak, V.: Generalized norms of matrices and the location of the spectrum. Czechoslov. Math. J. 12(87), 558–571 (1962)
Dai, P., Li, Y.T., Lu, C.J.: Erratum to: Error bounds for linear complementarity problems for SB-matrices. Numer. Algorithms 61(1), 187 (2012)
García-Esnaola, M., Peña, J.M.: Error bounds for linear complementarity problems of Nekrasov matrices. Numer. Algorithms 67, 655–667 (2014)
Dehghan, M., Hajarian, M.: Convergence of SSOR methods for linear complementarity problems. Oper. Res. Lett. 37(3), 219–223 (2009)
Li, Y., Dai, P.: Generalized AOR methods for linear complementarity problem. Appl. Math. Comput. 188(1), 7–18 (2007)
Hadjidimos, A., Tzoumas, M.: The solution of the linear complementarity problem by the matrix analogue of the accelerated overrelaxation iterative method. Numer. Algorithms, pp. 1–20 (2016)
Varah, J.M.: A lower bound for the smallest singular value of a matrix. Linear Algebra Appl. 11(1), 3–5 (1975)
Acknowledgments
The authors are thankful to the anonymous referees for their valuable comments to improve the paper.
Funding
The work was supported by the National Natural Science Foundation of China (11671318).
Li, J., Li, G. Error bounds for linear complementarity problems of S-QN matrices. Numer Algor 83, 935–955 (2020). https://doi.org/10.1007/s11075-019-00710-0