Abstract
Let A and B be two M-matrices, \(A^{-1}\) be the inverse of A, and \(\tau (B\circ A^{-1})\) be the minimum eigenvalue of the Hadamard product of B and \(A^{-1}\). Firstly, by using the theories of Schur complements, a lower bound of the main diagonal entries of \(A^{-1}\) is derived and used to present two types of lower bounds of \(\tau (B\circ A^{-1})\). Secondly, in order to obtain bigger lower bounds of \(\tau (B\circ A^{-1})\), two types of lower bounds of \(\tau (B\circ A^{-1})\) with non-negative parameters are constructed. Thirdly, by finding the optimal values of parameters, two preferable lower bounds of \(\tau (B\circ A^{-1})\) are yielded. Finally, numerical examples show the effectiveness of the new methods.
1 Introduction
Let m and n be two positive integers, \(m,n\ge 2\), \(N=\{1, 2, \ldots , n\}\), \({\mathbb {R}}^n\) be the set of all n-dimensional real vectors, \({\mathbb {R}}^{m\times n}\) (resp. \({\mathbb {C}}^{m\times n}\)) be the set of all \({m\times n}\) real (resp. complex) matrices, and \({\textbf {0}}\) be a vector (or matrix) whose entries are all zero, called the zero vector (or matrix). Let \(x=(x_1,\ldots ,x_n)^\top \in {\mathbb {R}}^n\). We write \(x\ge 0\) (resp. \(x> 0\)) if \(x_i\ge 0\) (resp. \(x_i> 0\)) for \(i\in N\) and call it a nonnegative (resp. positive) vector. Let \(A=(a_{ij})\in {\mathbb {R}}^{m\times n}\) and \(B=(b_{ij})\in {\mathbb {R}}^{m\times n}\). We write \(A\ge B\) (resp. \(A> B\)) if \(a_{ij}\ge b_{ij}\) (resp. \(a_{ij}> b_{ij}\)) for \(i,j\in N\), and write \(A\ge 0\) (resp. \(A> 0\)) if \(a_{ij}\ge 0\) (resp. \(a_{ij}> 0\)) for \(i,j\in N\) and call it a nonnegative (resp. positive) matrix.
Definition 1
([1, 15]) Let \(A=(a_{ij})\in {\mathbb {C}}^{n\times n}\). Then, A is called
-
(i)
A strictly diagonally dominant (SDD) matrix, if \(|a_{ii}|> R_i(A)=\sum \limits _{j\ne i}|a_{ij}|\) for \(i\in N\);
-
(ii)
A weakly chained diagonally dominant matrix, if \(|a_{ii}|\ge R_i(A)\) for \(i\in N\), \(J(A)=\{i\in N: |a_{ii}|>R_i(A)\}\ne \emptyset \), and for \(i\in N\), \(i\notin J(A)\), there exist indices \(i_1,\ldots ,i_k\) in N with \(a_{i_r,i_{r+1}}\ne 0\), \(0\le r\le k-1\), where \(i_0=i\) and \(i_k\in J(A)\).
-
(iii)
A reducible matrix if there exists a nonempty proper subset \(I\subsetneqq N\) such that \(a_{ij}=0\) for \(i\in I\) and \(j\not \in I.\) If A is not reducible, then A is irreducible.
Let \(Z_n=\{A=(a_{ij})\in {\mathbb {R}}^{n\times n}:a_{ij}\le 0, i,j\in N, i\ne j\}\). A matrix \(A\in Z_n\) is called an M-matrix [1, 6] if A is nonsingular and the inverse of A, denoted by \(A^{-1}\), is nonnegative. Denote by \(M_n\) the set of all \(n\times n\) M-matrices. If \(A\in M_n\), then \(\tau (A)\equiv \rho (A^{-1})^{-1}\) is a positive eigenvalue of A corresponding to a nonnegative eigenvector x, where \(\rho (A^{-1})\) is the Perron eigenvalue of \(A^{-1}\), and \(\tau (A)\) is called the minimum eigenvalue of A [6]. In addition, if A is irreducible, then \(\tau (A)\) is simple and x is a positive vector [1]. It is proved in [5] that \(\tau (A)=\min \{|\lambda |:\lambda \in \sigma (A)\},\) where \(\sigma (A)\) is the spectrum of A.
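As a numerical illustration of these definitions, the following sketch (with a hypothetical \(3\times 3\) strictly diagonally dominant Z-matrix, not one from this paper) checks that the two characterisations of \(\tau (A)\) agree: the eigenvalue of smallest modulus of A equals the reciprocal of the Perron eigenvalue of \(A^{-1}\).

```python
import numpy as np

# Hypothetical 3x3 SDD Z-matrix with positive diagonal; such a matrix is an M-matrix.
A = np.array([[4.0, -1.0, -1.0],
              [-2.0, 5.0, -1.0],
              [-1.0, -1.0, 3.0]])

Ainv = np.linalg.inv(A)
assert np.all(Ainv >= -1e-12)          # inverse is (numerically) nonnegative

# tau(A) = min |lambda| over the spectrum of A ...
tau_spec = min(abs(lam) for lam in np.linalg.eigvals(A))
# ... and equals 1 / rho(A^{-1}), the reciprocal of the Perron eigenvalue of A^{-1}.
tau_perron = 1.0 / max(abs(lam) for lam in np.linalg.eigvals(Ainv))

assert abs(tau_spec - tau_perron) < 1e-8
```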
Let \(\alpha \) and \(\beta \) be two nonempty proper subsets of N, \({\bar{\alpha }}=N-\alpha \) be the complement of \(\alpha \) in N, \(A(\alpha ,\beta )\) be the submatrix of \(A\in {\mathbb {R}}^{n\times n}\) whose rows are indexed by \(\alpha \) and columns by \(\beta \), and \(A(\alpha ,\alpha )\) be abbreviated to \(A(\alpha )\). If \(A(\alpha )\) is nonsingular, then
$$\begin{aligned} A/A(\alpha )=A({\bar{\alpha }})-A({\bar{\alpha }},\alpha )[A(\alpha )]^{-1}A(\alpha ,{\bar{\alpha }}) \end{aligned}$$
is called the Schur complement [20, 21] of A with respect to \(A(\alpha )\).
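As a numerical sanity check, the Schur complement can be formed directly from submatrices. The sketch below (hypothetical \(4\times 4\) matrix, 0-based indices) also verifies Schur's classical determinant identity \(\det A=\det A(\alpha )\,\det (A/A(\alpha ))\), a standard fact about Schur complements.

```python
import numpy as np

# Hypothetical 4x4 matrix; Schur complement with respect to alpha = {0, 1}.
A = np.array([[4.0, -1.0, -1.0, 0.0],
              [-1.0, 5.0, 0.0, -1.0],
              [-1.0, 0.0, 3.0, -1.0],
              [0.0, -2.0, -1.0, 4.0]])
a = [0, 1]          # alpha (0-based)
b = [2, 3]          # complement of alpha

A_aa = A[np.ix_(a, a)]
S = A[np.ix_(b, b)] - A[np.ix_(b, a)] @ np.linalg.inv(A_aa) @ A[np.ix_(a, b)]

# Schur's determinant identity: det A = det A(alpha) * det(A / A(alpha)).
assert abs(np.linalg.det(A) - np.linalg.det(A_aa) * np.linalg.det(S)) < 1e-8
```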
The Hadamard product of \(A=(a_{ij})\in {\mathbb {R}}^{m\times n}\) and \(B=(b_{ij})\in {\mathbb {R}}^{m\times n}\) is the matrix \(A\circ B=(a_{ij}b_{ij})\in {\mathbb {R}}^{m\times n}\). It is proved in [10] that if \(A\in M_n\) and \(B\in M_n\), then \(B\circ A^{-1}\in M_n\).
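Johnson's closure result can be observed numerically. The sketch below (two hypothetical \(2\times 2\) M-matrices) forms \(B\circ A^{-1}\) entrywise and checks the M-matrix properties: Z-pattern and nonnegative inverse.

```python
import numpy as np

# Two hypothetical M-matrices (SDD Z-matrices with positive diagonals).
A = np.array([[3.0, -1.0], [-1.0, 2.0]])
B = np.array([[4.0, -2.0], [-1.0, 3.0]])

H = B * np.linalg.inv(A)   # entrywise (Hadamard) product B o A^{-1}

# Johnson's theorem [10]: H is again an M-matrix.
assert np.all(H[np.eye(2) == 0] <= 1e-12)      # off-diagonal entries <= 0
assert np.all(np.linalg.inv(H) >= -1e-12)      # H^{-1} >= 0
```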
For an M-matrix A, an interesting question is raised in [4]: Whether there is a positive diagonal matrix X such that AX is symmetric or not? This question is also answered in [4, Theorem 4]: When A is irreducible and the minimum eigenvalue, denoted by \(\tau (A\circ A^{-1})\), of \(A\circ A^{-1}\) is 1, such an X exists. Hence, this question can be reduced to the estimation of \(\tau (A\circ A^{-1})\) for an M-matrix A. Fiedler et al. in [4] suggested and took \(\tau (A\circ A^{-1})\) as a measure of the symmetrizability of A, and proved that
Subsequently, Fiedler and Markham [5] proposed the following conjecture, which is independently proved by Yong [18, Theorem 3] and Chen [2, Corollary 2.6]:
Because the lower bound (2) depends only on the dimension n of A and is too small when n is large, many lower bounds of \(\tau (A\circ A^{-1})\) that depend on the entries of A rather than on its dimension have been derived in order to obtain bigger estimates [3, 7,8,9, 11,12,13,14, 17, 19, 22,23,25].
Li et al. in [11, Theorem 3.5] obtained the result: If \(A=(a_{ij})\in M_n\) is SDD, then
where
Li et al. in [14, Theorem 3.4] obtained the result: If \(A=(a_{ij})\in M_n\), then
where
Subsequently, Zhou et al. in [24, Theorem 4.8] gave the following result: If \(A=(a_{ij})\in M_n\) is SDD, \(B=(b_{ij})\in M_n\), then
and
Huang et al. in [9, Theorem 3.3] obtained the following result: If \(A=(a_{ij})\in M_n\) is SDD, then
where
Next, in Sect. 2, by using the theories of Schur complements, we derive a lower bound of the main diagonal entries \({\tilde{a}}_{ii}\) of the inverse \(A^{-1}\) for an M-matrix A. In Sect. 3, we present two types of lower bounds for \(\tau (B\circ A^{-1})\), which are proved to be bigger than those in (3), (4), (5), (6) and (7). Subsequently, we construct two types of lower bounds of \(\tau (B\circ A^{-1})\) with parameters and then yield two types of preferable lower bounds for \(\tau (B\circ A^{-1})\) by finding the optimal values of those parameters. In addition, we compare the two types of lower bounds and show their effectiveness by numerical examples. Finally, some concluding remarks are given to end this paper in Sect. 4.
2 A Lower Bound for the Main Diagonal Entries of the Inverse of an M-Matrix
Let \(A\in M_n\) be SDD and \(A^{-1}=({\tilde{a}}_{ij})\in {\mathbb {R}}^{n\times n}\). In this section, a lower bound of \({\tilde{a}}_{ii}\) for \(i\in N\) is derived by using the theories of Schur complements. Before that, some lemmas are listed.
Lemma 1
([6, Theorem 2.5.4]) Let \(A\in M_n\), \(B\in Z_n\) and \(A\le B\). Then \(B\in M_n\), \(A^{-1}\ge B^{-1}\ge 0\) and \(\det B\ge \det A>0\).
Lemma 2
[21, Theorem 2.4] Let \(M=\left( \begin{array}{cc}A&{}B\\ C&{}D\end{array}\right) \) be invertible and A be a nonsingular principal submatrix of M. Then
Lemma 3
[20, Theorem 1.1] Let \(A\in {\mathbb {R}}^{n\times n}\), \(\emptyset \ne \alpha \subsetneqq N\) and \(A(\alpha )\) be nonsingular. Then
Lemma 4
[21, p. 13] Let \(A\in {\mathbb {R}}^{n\times n}\) be nonsingular and A(i|j) be the submatrix of A obtained by deleting row i and column j of A. Then the inverse of A can be obtained from the adjoint matrix of A, written as \(\textrm{adj}(A)\), whose (i, j)-entry is the cofactor of \(a_{ji}\), that is, \((-1)^{j+i}\det A(j|i)\). In symbols,
Lemma 5
Let \(A=(a_{ij})\in M_n\) be SDD. Then, for \(A^{-1}=({\tilde{a}}_{ij})\), we have
where
Proof
Since \(A\in M_n\), \(A^{-1}\ge 0\). Let \(A_i\) be the submatrix of A obtained by deleting the i-th row and the i-th column of A, \({\textbf {a}}_i=(a_{i1},\ldots ,a_{i,i-1},a_{i,i+1},\ldots ,a_{in})^\top \in {\mathbb {R}}^{n-1}\) and \({\textbf {b}}_i=(a_{1i},\ldots ,a_{i-1,i},a_{i+1,i},\ldots ,a_{ni})^{\top }\in {\mathbb {R}}^{n-1}\) for \(i\in N\). Then \(A_i\in M_{n-1}\) is SDD and
where \(B_{11}=diag (a_{11},\ldots ,a_{i-1,i-1})\), \(B_{12}\in {\mathbb {R}}^{(i-1)\times (n-i)}\) is a zero matrix, \(B_{21}=A(\alpha ,\beta )\in {\mathbb {R}}^{(n-i)\times (i-1)}\) with \(\alpha =\{i+1,\ldots ,n\}\) and \(\beta =\{1,\ldots ,i-1\}\), and \(B_{22}=diag (a_{i+1,i+1},\ldots ,a_{nn})\). By Lemmas 1 and 2 and \(B_i\in Z_{n-1}\), it follows that \(B_i\in M_{n-1}\) and
By (9), we have
Similarly, by Lemmas 1 and 2 and
where \(C_i\in Z_{n-1}\), it follows that \(C_i\in M_{n-1}\) and
which implies that
By \(A\in M_n\), \(A_i\in M_{n-1}\) and Lemma 1, we have \(\det A>0\) and \(\det A_i>0\). By (10) and (13), it follows that
Furthermore, from \(a_{ii}>0\) and \(a_{ij}\le 0\) for \(i\ne j\), \(i,j\in N\), it is easy to see that
and, consequently, that
The conclusion follows. \(\square \)
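A quick numerical check of the flavor of Lemma 5 can be made through the weaker classical bound \({\tilde{a}}_{ii}\ge 1/a_{ii}\), which follows from \(1=(AA^{-1})_{ii}=a_{ii}{\tilde{a}}_{ii}+\sum _{j\ne i}a_{ij}{\tilde{a}}_{ji}\ge a_{ii}{\tilde{a}}_{ii}\cdot 1\) being impossible to undershoot since \(a_{ij}\le 0\) and \({\tilde{a}}_{ji}\ge 0\); the \(\delta _i\le a_{ii}\) of (8) sharpens this. The matrix below is a hypothetical SDD M-matrix, not one from the paper.

```python
import numpy as np

# Hypothetical SDD M-matrix; check the classical bound a_ii^{-1} <= \tilde{a}_{ii},
# which Lemma 5 refines via delta_i (since 0 < delta_i <= a_ii, 1/delta_i >= 1/a_ii).
A = np.array([[5.0, -1.0, -2.0],
              [-1.0, 4.0, -1.0],
              [-2.0, -1.0, 6.0]])

Ainv = np.linalg.inv(A)
for i in range(3):
    assert Ainv[i, i] >= 1.0 / A[i, i]
```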
3 Two Types of Lower Bounds for \(\tau (B\circ {A^{-1}})\)
In this section, two types of lower bounds for \(\tau (B\circ {A^{-1}})\) are presented. Before that, some notations and lemmas are listed. For any \(i,j\in N\), \(i\ne j\), let
Similar to the proof of Lemmas 1 and 2 of [23], the following lemma is obtained easily.
Lemma 6
Let \(A=(a_{ij})\in M_n\) be SDD and \(A^{-1}=({\tilde{a}}_{ij})\). Then
Lemma 7
Let \(A=(a_{ij})\in M_n\). Then
and
Proof
Assume that A is irreducible. Then \(\tau (A)\) is an eigenvalue of A corresponding to a positive eigenvector \(x=(x_1,\ldots ,x_n)^\top \), i.e., \(Ax=\tau (A)x\). For each \(i\in N\), by \(\tau (A) x_i=\sum \limits _{j\in N}a_{ij}x_j\), we have
which implies that \(\tau (A)\le a_{ii}\) and, consequently, that \(\tau (A)\le \min \limits _{i\in N}{a_{ii}}\). Let \(x_s=\min \limits _{i\in N}x_i\). Then \(x_s>0\). Furthermore, from (16), we have
which implies that \(a_{ss}-\tau (A) \ge \sum \limits _{j\ne s}(-a_{sj})\), and hence \(\tau (A)\le \sum \limits _{j\in N}a_{sj}\le \max \limits _{i\in N}\sum \limits _{j\in N}a_{ij}\). Let \(x_t=\max \limits _{i\in N}x_i\). Similarly, \(\tau (A)\ge \min \limits _{i\in N}\sum \limits _{j\in N}a_{ij}\) can be given. By using \(A^{-1}x=\tau (A)^{-1}x=\rho (A^{-1})x\), the two-sided bounds (15) can be similarly proved.
Assume that A is reducible. For \(\varepsilon >0\), replace \(a_{ij}\) with \(a_{ij}-\varepsilon \) for \(i\ne j\) and \(a_{ii}\) with \(a_{ii}+n\varepsilon \); the resulting matrix is irreducible. Letting \(\varepsilon \rightarrow 0^+\), the conclusions (14) and (15) follow by continuity. \(\square \)
Note here that Shivakumar et al. in [15, Theorem 1.1] also obtained (14) and (15) but restricted A to be a weakly chained diagonally dominant M-matrix. Hence, Lemma 7 can be viewed as a generalization of Theorem 1.1 of [15].
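The row-sum bounds of Lemma 7 are easy to test numerically. The sketch below uses a hypothetical SDD M-matrix and checks that \(\tau (A)\) lies between the smallest and largest row sums and does not exceed the smallest diagonal entry.

```python
import numpy as np

# Hypothetical SDD M-matrix (row sums 2, 2, 3; smallest diagonal entry 4).
A = np.array([[4.0, -1.0, -1.0],
              [-1.0, 5.0, -2.0],
              [0.0, -1.0, 4.0]])

tau = min(abs(lam) for lam in np.linalg.eigvals(A))
row_sums = A.sum(axis=1)

# Lemma 7: min row sum <= tau(A) <= max row sum, and tau(A) <= min diagonal entry.
assert row_sums.min() - 1e-9 <= tau <= row_sums.max() + 1e-9
assert tau <= A.diagonal().min() + 1e-9
```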
Lemma 8
([6, Lemma 5.1.2]) If \(A,B\in {\mathbb {R}}^{m\times n}\) and if \(X\in {\mathbb {R}}^{m\times m}\) and \(Y\in {\mathbb {R}}^{n\times n}\) are diagonal matrices, then
Lemma 9
([16, Corollary 1.14]) Let \(A=(a_{ij})\in {\mathbb {C}}^{n\times n}\). Then
Lemma 10
[5, Proposition 1] If \(A\in M_n\) is irreducible and \(Ax \ge \kappa x\) for a nonnegative nonzero vector x, then \(\tau (A)\ge \kappa \).
Lemma 11
[1, (E17) of Theorem 6.2.3] Let \(A\in Z_n\). Then \(A\in M_n\) if and only if all the leading principal minors of A are positive.
3.1 The First Type of Lower Bounds for \(\tau (B\circ {A^{-1}})\)
Theorem 1
Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then
where
Proof
Let \(A^{-1}=({\tilde{a}}_{ij})\). Since \(A\in M_n\), there is a positive diagonal matrix X such that \(X^{-1}AX\in M_n\) is SDD. Furthermore, by Lemma 8 we have
Therefore, we may assume that \(A\in M_n\) is SDD.
Because \(\tau (B\circ A^{-1})\) is an eigenvalue of \(B\circ A^{-1}\), by Lemmas 6 and 9, there is an index \(i\in N\) such that
By Lemma 7, we have \(\tau (B\circ A^{-1})\le b_{ii}\alpha _{ii}\), \(i\in N\). Furthermore, by (18) and Lemma 5, we have
where \(\delta _i\) is defined in (8). Hence, the conclusion (17) follows. \(\square \)
Remark 1
-
(i)
Let \(A=B\in M_n\) be SDD. By Lemmas 5 and 6, it is not difficult to see that
$$\begin{aligned} \varGamma _s=\min \limits _{i\in N}\Bigg \{\frac{a_{ii}+\sum \limits _{j\ne i}a_{ji}p_{ji}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{a_{ii}-\sum \limits _{j\ne i}|a_{ji}|u_{ji}}{a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{a_{ii}-\sum \limits _{j\ne i}|a_{ji}|s_{ji}}{a_{ii}}\Bigg \}, \end{aligned}$$which implies that the lower bound in (17) is bigger than those in (3) and (7).
-
(ii)
Let \(A=B\in M_n\). By Lemmas 5 and 6, it is not difficult to see that
$$\begin{aligned} \varGamma _s=\min \limits _{i\in N}\Bigg \{\frac{a_{ii}+\sum \limits _{j\ne i}a_{ji}p_{ji}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{a_{ii}-\sum \limits _{j\ne i}|a_{ji}|m_{ji}}{a_{ii}}\Bigg \}, \end{aligned}$$which implies that the lower bound in (17) is bigger than that in (4).
-
(iii)
Let \(A\in M_n\) be SDD, \(A=A^\top \) and \(B\in M_n\). Then, by Lemmas 5 and 6, we have \(a_{ii}\ge \delta _i>0\), \(s_i=\max \limits _{j\ne i}\{s_{ij}\}\ge s_{ij}=s_{ji}\ge p_{ji}\ge 0\) and \(m_i=\max \limits _{j\ne i}\{m_{ij}\}\ge m_{ij}=m_{ji}\ge p_{ji}\ge 0\) for \(i,j\in N, i\ne j\), and consequently,
$$\begin{aligned} \varGamma _s=\min \limits _{i\in N}\Bigg \{\frac{b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{b_{ii}-s_i\sum \limits _{j\ne i}|b_{ji}|}{a_{ii}}\Bigg \} \end{aligned}$$and
$$\begin{aligned} \varGamma _s=\min \limits _{i\in N}\Bigg \{\frac{b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{b_{ii}-m_i\sum \limits _{j\ne i}|b_{ji}|}{a_{ii}}\Bigg \}, \end{aligned}$$which implies that the lower bound in (17) is bigger than those in (5) and (6).
\(\square \)
Next, a lower bound of \(\tau (B\circ A^{-1})\) with a nonnegative parameter \(\varepsilon \), which is bigger than that in Theorem 1, is constructed.
Theorem 2
Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then
where s is the index such that \(\varGamma _i\) is minimum for \(i\in N\), that is, \(\varGamma _s=\min \limits _{i\in N}\varGamma _i\),
and
Proof
Similar to the proof of Theorem 1, we may assume that \(A\in M_n\) is SDD. Let \(A^{-1}=({\tilde{a}}_{ij})\) and \(\varGamma _s=\min \limits _{i\in N}\varGamma _i\). Taking
and \(\varepsilon \) is a nonnegative number such that
Case I. Assume that \(B\circ A^{-1}\) is irreducible. We next prove that \((B\circ A^{-1})^\top x \ge \kappa x\). By
we have \(((B\circ A^{-1})^\top x)_s\ge \kappa x_s\). For each \(i\in N\), \(i\ne s\), by
we have \(((B\circ A^{-1})^\top x)_i\ge \kappa x_i\), \(i\ne s\). Consequently, \((B\circ A^{-1})^\top x \ge \kappa x\) follows for a positive vector x. Furthermore, by Lemma 10 and \(\tau ((B\circ A^{-1})^\top )=\tau (B\circ A^{-1})\), we have \(\tau (B\circ A^{-1})\ge \kappa \). By the arbitrariness of \(\varepsilon \), the conclusion (19) follows.
Case II. Assume that \(B\circ A^{-1}\) is reducible. Let \(C=(c_{ij})\in {\mathbb {R}}^{n\times n}\) with \(c_{12}=c_{23}=\cdots =c_{n-1,n}=c_{n1}=1\) and other \(c_{ij}=0\). Let \(\vartheta \) be a sufficiently small positive number such that all the leading principal minors of \(B\circ A^{-1}-\vartheta C\) are positive. Then \(B\circ A^{-1}-\vartheta C\) is an irreducible M-matrix by Lemma 11. Substituting \(B\circ A^{-1}-\vartheta C\) for \(B\circ A^{-1}\) in Case I and letting \(\vartheta \rightarrow 0\), (19) holds by continuity. \(\square \)
Next, a remark is given to indicate that the lower bound in Theorem 2 is more precise than that in Theorem 1.
Remark 2
-
(i)
Taking \(\varepsilon =0\) in Theorem 2, then \(\varGamma _s(0)=\varGamma _s\) and \(\varGamma _i(0)=\varGamma _i\) for \(i\in N, i\ne s\), which implies that
$$\begin{aligned} \max \limits _{\varepsilon =0}\min \left\{ \varGamma _s(0),\min \limits _{i\ne s}\varGamma _i(0)\right\} =\min \left\{ \varGamma _s,\min \limits _{i\ne s}\varGamma _i\right\} =\varGamma _s=\min \limits _{i\in N}\varGamma _i \end{aligned}$$and, consequently, that Theorem 1 is a special case of Theorem 2.
-
(ii)
For \(\varepsilon >0\), it is not difficult to see that \(\varGamma _s(\varepsilon )\) is a monotone increasing continuous function of \(\varepsilon \) and that \(\varGamma _i(\varepsilon )\) is a monotone decreasing continuous function of \(\varepsilon \) for \(i\in N, i\ne s\), which implies that there exists a positive number \(\varepsilon ^*\) such that
$$\begin{aligned} \varGamma _s=\varGamma _s(0)\le \varGamma _s(\varepsilon ^*)=\min \limits _{i\ne s}\varGamma _i(\varepsilon ^*) =\max \limits _{\varepsilon \ge 0}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} , \end{aligned}$$(24)consequently, the lower bound in Theorem 2 is greater than or equal to that in Theorem 1. Specifically, we can use the positive number \(\varepsilon ^*\) to fine-tune \(\varGamma _s\) and \(\varGamma _i\) for \(i\ne s\): \(\varGamma _s\) is increased to \(\varGamma _s(\varepsilon ^*)\) and \(\varGamma _i\) is decreased to \(\varGamma _i(\varepsilon ^*)\) for \(i\in N, i\ne s\).
\(\square \)
Next, by finding the optimal value of the nonnegative parameter \(\varepsilon \), the specific lower bound for \(\tau (B\circ A^{-1})\) in Theorem 2 is given.
Theorem 3
Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then
where s is the index such that \(\varGamma _i\) is minimum for \(i\in N\), that is, \(\varGamma _s=\min \limits _{i\in N}\varGamma _i\), and \(\varepsilon ^*=\min \limits _{i\ne s}\varepsilon _i\), where \(\varepsilon _i\) is the unique positive number such that \(\varGamma _s(\varepsilon _i)=\varGamma _i(\varepsilon _i)\).
Proof
For each \(i\in N\) and \(i\ne s\), there exists only one positive real number \(\varepsilon _i\) such that \(\varGamma _s(\varepsilon )=\varGamma _i(\varepsilon )\). Solving, respectively, \(\varGamma _s(\varepsilon )=\varGamma _i(\varepsilon )\) for each \(i\in N\) and \(i\ne s\), then \(\varepsilon _1,\ldots ,\varepsilon _{s-1},\varepsilon _{s+1},\ldots ,\varepsilon _n\) are obtained. For convenience and without loss of generality, assume that \(\varepsilon _1\le \cdots \le \varepsilon _{s-1}\le \varepsilon _{s+1}\le \cdots \le \varepsilon _n\). We next prove that (24) holds when \(\varepsilon ^*=\varepsilon _1\), which implies that \(\varepsilon _1\) is the optimal parameter such that the lower bound \(\max \limits _{\varepsilon \ge 0}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} \) in Theorem 2 reaches its maximum.
Since \(\varGamma _s(\varepsilon )\) is a monotone increasing continuous function of \(\varepsilon \), then we have
For each \(i\in N\), \(i\ne s\), since \(\varGamma _i(\varepsilon )\) is a monotone decreasing continuous function of \(\varepsilon \), we have \(\varGamma _i(\varepsilon _1)\ge \varGamma _i(\varepsilon _i).\) Furthermore, by \(\varGamma _i(\varepsilon _i)=\varGamma _s(\varepsilon _i)\) and (26), we have \(\varGamma _i(\varepsilon _1)\ge \varGamma _s(\varepsilon _i)\ge \varGamma _s(\varepsilon _1)\), which implies that \(\min \left\{ \varGamma _s(\varepsilon _1),\min \limits _{i\ne s}\varGamma _i(\varepsilon _1)\right\} =\varGamma _s(\varepsilon _1)\) and, consequently, that \(\tau (B\circ A^{-1})\ge \varGamma _s(\varepsilon _1)\).
Next, we prove that
When \(\varepsilon \in [0,\varepsilon _1]\), as \(\varGamma _s(\varepsilon )\) is an increasing function of \(\varepsilon \), it is clear that \(\varGamma _s(\varepsilon _1)\ge \varGamma _s(\varepsilon )\ge \min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} ,\) which implies that \(\varGamma _s(\varepsilon _1)\ge \max \limits _{0\le \varepsilon \le \varepsilon _1}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} .\) When \(\varepsilon \in [\varepsilon _1,+\infty )\), since \(\varGamma _1(\varepsilon )\) is a decreasing function of \(\varepsilon \) and \(\varGamma _1(\varepsilon _1)=\varGamma _s(\varepsilon _1)\), we have
which implies that \(\varGamma _s(\varepsilon _1)\ge \max \limits _{\varepsilon \ge \varepsilon _1}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} .\) Hence, (27) holds, which implies that (25) follows. \(\square \)
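The mechanism behind Theorems 2 and 3 — one monotone increasing curve \(\varGamma _s(\varepsilon )\) meeting a family of monotone decreasing curves \(\varGamma _i(\varepsilon )\), with the maximin attained at the smallest crossing — can be sketched generically. The functions below are toy stand-ins (hypothetical, not the actual \(\varGamma \)'s, whose closed forms involve (8)); the crossing is located by bisection.

```python
# Toy stand-ins for Gamma_s (increasing) and one Gamma_i (decreasing); hypothetical.
def gamma_s(eps):
    return 0.7 + eps / (1.0 + eps)

def gamma_i(eps):
    return 0.9 - 0.2 * eps

def crossing(f_up, f_down, lo=0.0, hi=10.0, iters=80):
    """Bisection for the unique eps with f_up(eps) = f_down(eps)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f_up(mid) < f_down(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps_star = crossing(gamma_s, gamma_i)
# At eps_star the curves meet, so min{gamma_s, gamma_i} is maximised there,
# and the resulting bound gamma_s(eps_star) improves on gamma_s(0).
assert abs(gamma_s(eps_star) - gamma_i(eps_star)) < 1e-9
assert gamma_s(eps_star) > gamma_s(0.0)
```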
3.2 The Second Type of Lower Bounds for \(\tau (B\circ {A^{-1}})\)
Theorem 4
Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then
where
Proof
Similar to the proof of Theorem 1, we may assume that \(A\in M_n\) is SDD. Let \(A^{-1}=({\tilde{a}}_{ij})\), \(y=({\tilde{a}}_{11}^{-1},\ldots ,{\tilde{a}}_{nn}^{-1})^\top \in {\mathbb {R}}^n\), and
where \(\delta _i\) is defined in (8). By (29), we have \(b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}\ge \gamma \delta _i\), \(i\in N\). By Lemma 5, we have \(0<{\tilde{a}}_{ii}^{-1}\le \delta _i\), \(i\in N\). Consequently, \(y>0\) and \(\gamma y_i =\gamma {\tilde{a}}_{ii}^{-1}\le \gamma \delta _i\) for \(i\in N\). By Lemma 6, we have \({\tilde{a}}_{ij}{\tilde{a}}_{jj}^{-1}\le p_{ij}\) for \(i,j\in N, i\ne j\).
Next, the inequality \((B\circ A^{-1}) y \ge \gamma y\) will be proved. Assume that \(B\circ A^{-1}\) is irreducible. By
it follows that \((B\circ A^{-1}) y\ge \gamma y\) holds for a positive vector y. Furthermore, by Lemma 10, we have \(\tau (B\circ A^{-1})\ge \gamma \), which implies that (28) follows. Assume that \(B\circ A^{-1}\) is reducible. By using the similar proof to Case II of Theorem 2, the conclusion (28) follows immediately. \(\square \)
Remark 3
Let \(A=(a_{ij})\in M_n\) be SDD and \(B=(b_{ij})\in M_n\) with \(R_i(B)=R_i(B^\top )\) for \(i\in N\). By Lemmas 5 and 6, we have \(a_{ii}\ge \delta _i>0\), \(s_i=\max \limits _{j\ne i}\{s_{ij}\}\ge s_{ij}\ge p_{ij}\ge 0\) and \(m_i=\max \limits _{j\ne i}\{m_{ij}\}\ge m_{ij}\ge p_{ij}\ge 0\) for \(i,j\in N, i\ne j\), and consequently,
and
which implies that the lower bound in (28) is bigger than those in (5) and (6). \(\square \)
Next, another lower bound of \(\tau (B\circ A^{-1})\) with a nonnegative parameter \(\theta \), which is bigger than that in Theorem 4, is constructed.
Theorem 5
Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then
where t is the index such that \(\varOmega _i\) is minimum for \(i\in N\), that is, \(\varOmega _t=\min \limits _{i\in N}\varOmega _i\),
and
Proof
Similar to the proof of Theorem 1, we assume that \(A\in M_n\) is SDD. Let \(A^{-1}=({\tilde{a}}_{ij})\), \(\varOmega _t=\min \limits _{i\in N}\varOmega _i\),
and \(\theta \) be a nonnegative number such that
By Lemma 5, we have \(\delta _i\ge {\tilde{a}}_{ii}^{-1}>0\) for \(i\in N\), where \(\delta _i\) is defined in (8), and hence z is a positive vector. By (31), we have \(\varOmega _t(\theta )=\Big (b_{tt}+\frac{1}{1+\theta }\sum \limits _{j\ne t}b_{tj}p_{tj}\Big )/\delta _t\ge \omega \), which implies that
For \(i\in N, i\ne t\), by (31), we have \(\varOmega _i(\theta )=\Big (b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}+\theta b_{it}p_{it}\Big )/\delta _i\ge \omega ,\) which implies that
By Lemma 6, we have \({\tilde{a}}_{ij}{\tilde{a}}_{jj}^{-1}\le p_{ij}\) for \(i,j\in N, j\ne i\).
Next, we prove that (30) holds. Assume that \(B\circ A^{-1}\) is irreducible. By (32) and (33), we have
and
which implies that \((B\circ A^{-1})z \ge \omega z\) for a positive vector z. Furthermore, by Lemma 10, we have \(\tau (B\circ A^{-1})\ge \omega \). By the arbitrariness of \(\theta \), the conclusion (30) follows. Assume that \(B\circ A^{-1}\) is reducible. By using the similar proof to Case II of Theorem 2, the conclusion (30) follows immediately. \(\square \)
It is easy to see that \(\varOmega _t(\theta )\) is a monotone increasing continuous function of \(\theta \) and \(\varOmega _i(\theta )\) is a monotone decreasing continuous function of \(\theta \) for \(i\in N, i\ne t\) when \(\theta \ge 0\). Next, by finding the optimal value of the nonnegative parameter \(\theta \) and using the same technique as the proof of Theorem 3, the specific lower bound for \(\tau (B\circ A^{-1})\) in Theorem 5 is presented.
Theorem 6
Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then
where t is the index such that \(\varOmega _i\) is minimum for \(i\in N\), that is, \(\varOmega _t=\min \limits _{i\in N}\varOmega _i\), and \(\theta ^*=\min \limits _{i\ne t}\theta _i\), where \(\theta _i\) is the unique positive number such that \(\varOmega _t(\theta _i)=\varOmega _i(\theta _i)\).
3.3 Comparison on Two Types of Lower Bounds for \(\tau (B\circ A^{-1})\)
From Theorems 1 and 4, it is not difficult to see that if \(A=A^\top \) and \(B=B^\top \), then \(\varGamma _i=\varOmega _i\) for \(i\in N\) and hence \(\min \limits _{i\in N}\varGamma _i=\min \limits _{i\in N}\varOmega _i\), so the lower bound in Theorem 1 coincides with that in Theorem 4. For general M-matrices A and B, however, neither bound dominates the other: sometimes the lower bound in Theorem 1 is bigger than that in Theorem 4, and sometimes the opposite holds. Hence, the following result is obtained easily.
Theorem 7
Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then
where \(\varGamma _i=\Big (b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}\Big )/\delta _i\), \(\varOmega _i=\Big (b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}\Big )/\delta _i\) and \(\delta _i\) is defined in (8);
3.4 Numerical Examples
In this subsection, two examples are given to show the effectiveness of Theorems 1, 3, 4 and 6.
Example 1
Consider the following strictly diagonally dominant M-matrix
Now, the lower bounds of \(\tau (A\circ A^{-1})\) are considered. By (5) (i.e., (4.6) in [24, Theorem 4.8]) with \(A=B\), we have
By (6) (i.e., (4.5) in [24, Theorem 4.8]) with \(A=B\), we have
By (3), i.e., Theorem 3.5 of [11], we have
By (2), i.e., Theorem 3 of [18], we have
By Theorem 3 of [23] (taking the number of iterations as 10), we have
By (4), i.e., Theorem 3.4 of [14], we have
By Theorem 3.4 of [3], we have
By (7), i.e., Theorem 3.3 of [9], we have
Next, we use Theorems 1, 2 and 3 to obtain lower bounds of \(\tau (A\circ A^{-1})\). By Theorem 1 with \(A=B\), we have \(\varGamma _1=0.8465\), \(\varGamma _2=0.7328\), \(\varGamma _3=0.8802\), \(\varGamma _4=0.8789\), and hence
By Theorem 2, we have \(s=2\) and
By Theorem 3, we have \(\varepsilon _1=0.1817\), \(\varepsilon _3=0.3751\), \(\varepsilon _4=0.3724\), and hence
Finally, we use Theorems 4, 5 and 6 to obtain lower bounds of \(\tau (A\circ A^{-1})\). By Theorem 4 with \(A=B\), we have \(\varOmega _1=0.8201\), \(\varOmega _2=0.8609\), \(\varOmega _3=0.8666\), \(\varOmega _4=0.8375\), and hence
By Theorem 5, we have \(t=1\) and
By Theorem 6, we have \(\theta _2=0.0567\), \(\theta _3=0.0870\), \(\theta _4=0.0339\), and hence
In fact, \(\tau (A\circ A^{-1})=0.9787.\) This example shows that the lower bounds obtained by Theorems 1, 3, 4 and 6 are bigger than those in [3, 8, 9, 11, 14, 18, 23, 24]. \(\square \)
Example 2
Let \(A=\left( \begin{array}{ll}2&{}-1\\ -1&{}2\end{array}\right) \in {\mathbb {R}}^{2\times 2}\). Apparently, \(A\in M_2\) is SDD and \(\tau (A\circ A^{-1})=1\) by (1) and (2). By Theorems 1, 3, 4 and 6, we all have \(\tau (A\circ A^{-1})\ge 1\), which shows that the lower bounds in Theorems 1, 3, 4 and 6 can reach the exact value of \(\tau (A\circ A^{-1})\) in some cases. \(\square \)
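Example 2 is small enough to verify directly in floating point: for this \(2\times 2\) matrix, \(A\circ A^{-1}=\frac{1}{3}\left( \begin{array}{cc}4&{}-1\\ -1&{}4\end{array}\right) \), whose eigenvalues are \(5/3\) and 1.

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
H = A * np.linalg.inv(A)          # A o A^{-1} = (1/3) * [[4, -1], [-1, 4]]

tau = min(abs(lam) for lam in np.linalg.eigvals(H))
assert abs(tau - 1.0) < 1e-9      # tau(A o A^{-1}) = 1, matching Example 2
```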
4 Conclusions
Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then \(A^{-1}=({\tilde{a}}_{ij})\ge 0\). In this paper, by using the theories of Schur complements for matrices, we derived in Lemma 5 a lower bound of \({\tilde{a}}_{ii}\), namely \({\tilde{a}}_{ii}\ge 1/\delta _i>0\), which was used to obtain two types of lower bounds of \(\tau (B\circ A^{-1})\) in Theorems 1 and 4. Subsequently, by using the results in Theorems 1 and 4, we constructed the first type of lower bounds of \(\tau (B\circ A^{-1})\) with a nonnegative parameter \(\varepsilon \) and the second type with a nonnegative parameter \(\theta \). Finally, we gave methods for finding the optimal values of the parameters \(\varepsilon \) and \(\theta \) to obtain preferable lower bounds of \(\tau (B\circ A^{-1})\). In [23], some convergent sequences of lower bounds of \(\tau (B\circ A^{-1})\) are presented to approximate the exact value of \(\tau (B\circ A^{-1})\); using the methods in [23], a bigger lower bound may be obtained.
References
Berman, A., Plemmons, R.J.: Nonnegative matrices in the mathematical sciences. SIAM, Philadelphia (1994)
Chen, S.: A lower bound for the minimum eigenvalue of the Hadamard product of matrices. Linear Algebra Appl. 378, 159–166 (2004)
Cheng, G., Tan, Q., Wang, Z.: Some inequalities for the minimum eigenvalue of the Hadamard product of an \(M\)-matrix and its inverse. J. Inequal. Appl. 2013, 65 (2013)
Fiedler, M., Johnson, C.R., Markham, T.L., Neumann, M.: A trace inequality for \(M\)-matrices and the symmetrizability of a real matrix by a positive diagonal matrix. Linear Algebra Appl. 71, 81–94 (1985)
Fiedler, M., Markham, T.L.: An inequality for the Hadamard product of an \(M\)-matrix and an inverse \(M\)-matrix. Linear Algebra Appl. 101, 1–8 (1988)
Horn, R.A., Johnson, C.R.: Topics in matrix analysis. Cambridge University Press, Cambridge (1991)
Huang, R.: Some inequalities for the Hadamard product and the fan product of matrices. Linear Algebra Appl. 428(7), 1551–1559 (2008)
Huang, Z., Wang, L., Xu, Z.: Some new estimations for the Hadamard product of a nonsingular \(M\)-matrix and its inverse. Math. Inequal. Appl. 20(3), 661–682 (2017)
Huang, Z., Xu, Z., Lu, Q.: Some new inequalities for the Hadamard product of a nonsingular \(M\)-matrix and its inverse. Linear Multilinear Algebra 64(7), 1362–1378 (2016)
Johnson, C.R.: A Hadamard product involving \(M\)-matrices. Linear Multilinear Algebra 4(4), 261–264 (1977)
Li, H.B., Huang, T.Z., Shen, S.Q., Li, H.: Lower bounds for the minimum eigenvalue of Hadamard product of an \(M\)-matrix and its inverse. Linear Algebra Appl. 420(1), 235–247 (2007)
Li, J., Hai, H.: Some new inequalities for the Hadamard product of nonnegative matrices. Linear Algebra Appl. 606, 159–169 (2020)
Li, Y., Liu, X., Yang, X., Li, C.: Some new lower bounds for the minimum eigenvalue of the Hadamard product of an \(M\)-matrix and its inverse. Electron. J. Linear Algebra 22, 630–643 (2011)
Li, Y.T., Chen, F.B., Wang, D.F.: New lower bounds on eigenvalue of the Hadamard product of an \(M\)-matrix and its inverse. Linear Algebra Appl. 430(4), 1423–1431 (2009)
Shivakumar, P.N., Williams, J.J., Ye, Q., Marinov, C.A.: On two-sided bounds related to weakly diagonally dominant \(M\)-matrices with application to digital circuit dynamics. SIAM J. Matrix Anal. Appl. 17(2), 298–312 (1996)
Varga, R.S.: Geršgorin and his circles. Springer, Berlin (2004)
Wang, F., Zhao, J.X., Li, C.Q.: Some new inequalities involving the Hadamard product of an \(M\)-matrix and its inverse. Acta Math. Appl. Sin. Engl. Ser. 33(2), 505–514 (2017)
Yong, X.: Proof of a conjecture of Fiedler and Markham. Linear Algebra Appl. 320(1–3), 167–171 (2000)
Zeng, W., Liu, J.: Lower bound estimation of the minimum eigenvalue of Hadamard product of an M-matrix and its inverse. Bull. Iran. Math. Soc. 48, 1075–1091 (2022)
Zhang, F.: The Schur complement and its applications. Springer, Berlin (2006)
Zhang, F.: Matrix theory: basic results and techniques, 2nd edn. Springer, New York (2011)
Zhao, J., Sang, C.: Some new bounds of the minimum eigenvalue for the Hadamard product of an \(M\)-matrix and an inverse \(M\)-matrix. Open Math. 14(1), 81–88 (2016)
Zhao, J., Wang, F., Sang, C.: Some inequalities for the minimum eigenvalue of the Hadamard product of an \(M\)-matrix and an inverse \(M\)-matrix. J. Inequal. Appl. 2015, 92 (2015)
Zhou, D., Chen, G., Wu, G., Zhang, X.: On some new bounds for eigenvalues of the Hadamard product and the Fan product of matrices. Linear Algebra Appl. 438(3), 1415–1426 (2013)
Zhou, D., Chen, G., Wu, G., Zhang, X.: Some inequalities for the Hadamard product of an \(M\)-matrix and an inverse \(M\)-matrix. J. Inequal. Appl. 2013, 16 (2013)
Acknowledgements
The author is very grateful to the anonymous referees and Editor-in-Chief Prof. Rosihan M. Ali for their insightful comments and constructive suggestions, which considerably improved this manuscript. This work is supported by Guizhou Provincial Science and Technology Projects (Grant Nos. QKHJC-ZK[2021]YB013; QKHJC-ZK[2022]YB215), and Natural Science Research Project of Department of Education of Guizhou Province (Grant Nos. QJJ[2022]015; QJJ[2022]047).
Ethics declarations
Conflict of interest
The author declares that he has no competing interests.
Communicated by Fuad Kittaneh.
Zhao, J. Lower Bounds for the Minimum Eigenvalue of Hadamard Product of M-Matrices. Bull. Malays. Math. Sci. Soc. 46, 18 (2023). https://doi.org/10.1007/s40840-022-01432-8