1 Introduction

Let m and n be two positive integers, \(m,n\ge 2\), \(N=\{1, 2, \ldots , n\}\), \({\mathbb {R}}^n\) be the set of all real vectors of dimension n, and \({\mathbb {R}}^{m\times n}\) (resp. \({\mathbb {C}}^{m\times n}\)) be the set of all \({m\times n}\) real (resp. complex) matrices. Let \({\textbf {0}}\) denote a vector (or matrix) whose entries are all zero, called the zero vector (or matrix). Let \(x=(x_1,\ldots ,x_n)^\top \in {\mathbb {R}}^n\). We write \(x\ge 0\) (resp. \(x> 0\)) if \(x_i\ge 0\) (resp. \(x_i> 0\)) for \(i\in N\) and call x a nonnegative (resp. positive) vector. Let \(A=(a_{ij})\in {\mathbb {R}}^{m\times n}\) and \(B=(b_{ij})\in {\mathbb {R}}^{m\times n}\). We write \(A\ge B\) (resp. \(A> B\)) if \(a_{ij}\ge b_{ij}\) (resp. \(a_{ij}> b_{ij}\)) for all i and j, and write \(A\ge 0\) (resp. \(A> 0\)) if \(a_{ij}\ge 0\) (resp. \(a_{ij}> 0\)) for all i and j, in which case A is called a nonnegative (resp. positive) matrix.

Definition 1

([1, 15]) Let \(A=(a_{ij})\in {\mathbb {C}}^{n\times n}\). Then A is called

  1. (i)

    A strictly diagonally dominant (SDD) matrix, if \(|a_{ii}|> R_i(A)=\sum \limits _{j\ne i}|a_{ij}|\) for \(i\in N\);

  2. (ii)

    A weakly chained diagonally dominant matrix, if \(|a_{ii}|\ge R_i(A)\) for \(i\in N\), \(J(A)=\{i\in N: |a_{ii}|>R_i(A)\}\ne \emptyset \), and for \(i\in N\), \(i\notin J(A)\), there exist indices \(i_1,\ldots ,i_k\) in N with \(a_{i_r,i_{r+1}}\ne 0\), \(0\le r\le k-1\), where \(i_0=i\) and \(i_k\in J(A)\).

  3. (iii)

A reducible matrix, if there exists a nonempty proper subset \(I\subsetneqq N\) such that \(a_{ij}=0\) for all \(i\in I\) and \(j\not \in I\). If A is not reducible, then A is called irreducible.

Let \(Z_n=\{A=(a_{ij})\in {\mathbb {R}}^{n\times n}:a_{ij}\le 0, i,j\in N, i\ne j\}\). A matrix \(A\in Z_n\) is called an M-matrix [1, 6] if A is nonsingular and the inverse of A, denoted by \(A^{-1}\), is nonnegative. Denote by \(M_n\) the set of all \(n\times n\) M-matrices. If \(A\in M_n\), then \(\tau (A)\equiv \rho (A^{-1})^{-1}\) is a positive eigenvalue of A corresponding to a nonnegative eigenvector x, where \(\rho (A^{-1})\) is the Perron eigenvalue of \(A^{-1}\), and \(\tau (A)\) is called the minimum eigenvalue of A [6]. In addition, if A is irreducible, then \(\tau (A)\) is simple and x is a positive vector [1]. It is proved in [5] that \(\tau (A)=\min \{|\lambda |:\lambda \in \sigma (A)\},\) where \(\sigma (A)\) is the spectrum of A.
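As a minimal numerical sketch of these definitions (the example matrix is chosen here for illustration and is not from the paper), one can check that an SDD Z-matrix with positive diagonal is an M-matrix and that \(\tau (A)=\rho (A^{-1})^{-1}=\min \{|\lambda |:\lambda \in \sigma (A)\}\):

```python
# Illustrative check (example matrix chosen here, not from the paper):
# for an M-matrix A, tau(A) = rho(A^{-1})^{-1} = min{|lambda| : lambda in sigma(A)}.
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])   # SDD Z-matrix with positive diagonal => M-matrix

Ainv = np.linalg.inv(A)
assert np.all(Ainv >= 0)                       # A^{-1} >= 0 characterizes M-matrices in Z_n

tau = 1.0 / max(abs(np.linalg.eigvals(Ainv)))  # tau(A) = rho(A^{-1})^{-1}
assert np.isclose(tau, min(abs(np.linalg.eigvals(A))))
```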

Let \(\alpha \) and \(\beta \) be two nonempty proper subsets of N, \({\bar{\alpha }}=N-\alpha \) be the complement of \(\alpha \) in N, \(A(\alpha ,\beta )\) be the submatrix of \(A\in {\mathbb {R}}^{n\times n}\) whose rows are indexed by \(\alpha \) and whose columns are indexed by \(\beta \), and let \(A(\alpha ,\alpha )\) be abbreviated to \(A(\alpha )\). If \(A(\alpha )\) is nonsingular, then

$$\begin{aligned} A/A(\alpha )=A({\bar{\alpha }})-A({\bar{\alpha }},\alpha )\left( A(\alpha )\right) ^{-1}A(\alpha ,{\bar{\alpha }}) \end{aligned}$$

is called the Schur complement [20, 21] of A with respect to \(A(\alpha )\).
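For concreteness, the following sketch (with an example matrix and index set chosen here) checks this definition numerically, together with the classical determinant identity \(\det A=\det A(\alpha )\det (A/A(\alpha ))\) that appears as Lemma 3 below:

```python
# Numerical sanity check of the Schur complement (example data chosen here):
# A/A(alpha) = A(bar) - A(bar,alpha) A(alpha)^{-1} A(alpha,bar), and
# det A = det A(alpha) * det(A/A(alpha)).
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0,  0.0],
              [-1.0,  5.0,  0.0, -2.0],
              [-2.0,  0.0,  6.0, -1.0],
              [ 0.0, -1.0, -1.0,  3.0]])
alpha = [0, 1]                      # alpha = {1, 2} in the paper's 1-based notation
bar   = [2, 3]                      # the complement of alpha

A_aa = A[np.ix_(alpha, alpha)]
S = A[np.ix_(bar, bar)] - A[np.ix_(bar, alpha)] @ np.linalg.inv(A_aa) @ A[np.ix_(alpha, bar)]

assert np.isclose(np.linalg.det(A), np.linalg.det(A_aa) * np.linalg.det(S))
```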

The Hadamard product of \(A=(a_{ij})\in {\mathbb {R}}^{m\times n}\) and \(B=(b_{ij})\in {\mathbb {R}}^{m\times n}\) is the matrix \(A\circ B=(a_{ij}b_{ij})\in {\mathbb {R}}^{m\times n}\). It is proved in [10] that if \(A\in M_n\) and \(B\in M_n\), then \(B\circ A^{-1}\in M_n\).
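The closure result cited from [10] can be illustrated on a concrete pair of matrices (both chosen here as examples): the Hadamard product \(B\circ A^{-1}\) is again in \(Z_n\) and has a nonnegative inverse.

```python
# Illustrates the closure result of [10] on example matrices chosen here:
# for M-matrices A and B, B o A^{-1} is again an M-matrix.
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
B = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  4.0, -2.0],
              [ 0.0, -1.0,  2.0]])

H = B * np.linalg.inv(A)                    # the Hadamard product B o A^{-1}
off = H - np.diag(np.diag(H))
assert np.all(off <= 1e-12)                 # H is in Z_n
assert np.all(np.linalg.inv(H) >= -1e-12)   # inverse is nonnegative, so H is an M-matrix
```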

For an M-matrix A, an interesting question is raised in [4]: does there exist a positive diagonal matrix X such that AX is symmetric? This question is answered in [4, Theorem 4]: such an X exists when A is irreducible and the minimum eigenvalue of \(A\circ A^{-1}\), denoted by \(\tau (A\circ A^{-1})\), equals 1. Hence, the question reduces to estimating \(\tau (A\circ A^{-1})\) for an M-matrix A. Fiedler et al. in [4] took \(\tau (A\circ A^{-1})\) as a measure of the symmetrizability of A and proved that

$$\begin{aligned} 0<\tau (A\circ A^{-1})\le 1. \end{aligned}$$
(1)

Subsequently, Fiedler and Markham [5] proposed the following conjecture, which was proved independently by Yong [18, Theorem 3] and Chen [2, Corollary 2.6]:

$$\begin{aligned} \tau (A\circ A^{-1})\ge \frac{2}{n}. \end{aligned}$$
(2)

Because the lower bound (2) depends only on the dimension n of A and is too small when n is large, many bigger lower bounds of \(\tau (A\circ A^{-1})\), depending on the entries of A instead of its dimension, have been obtained [3, 7,8,9, 11,12,13,14, 17, 19, 22,23,25].

Li et al. in [11, Theorem 3.5] obtained the result: If \(A=(a_{ij})\in M_n\) is SDD, then

$$\begin{aligned} \tau (A\circ A^{-1})\ge \min \limits _{i\in N}\Big \{1-\frac{1}{a_{ii}}\sum \limits _{j\ne i}|a_{ji}|s_{ji}\Big \}, \end{aligned}$$
(3)

where

$$\begin{aligned} d_i=\frac{\sum \limits _{j\ne i}|a_{ij}|}{a_{ii}}, \quad s_{ji}=\frac{|a_{ji}|+\sum \limits _{k\ne j,i}|a_{jk}|d_k}{a_{jj}}~~\text{ and }~~ s_i=\max \limits _{j\ne i}\{s_{ij}\}. \end{aligned}$$
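The bound (3) can be evaluated directly; the following sketch (with an example SDD M-matrix chosen here) computes the quantities \(d_i\) and \(s_{ji}\) and checks that the resulting bound lies below \(\tau (A\circ A^{-1})\), which in turn does not exceed 1, consistent with (1):

```python
# Numerical check of the bound (3) (example SDD M-matrix chosen here):
# tau(A o A^{-1}) >= min_i { 1 - (1/a_ii) * sum_{j != i} |a_ji| s_ji }.
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
n = len(A)
Aabs = np.abs(A)

d = np.array([(Aabs[i].sum() - Aabs[i, i]) / A[i, i] for i in range(n)])

def s(j, i):
    # s_ji = (|a_ji| + sum_{k != j,i} |a_jk| d_k) / a_jj
    return (Aabs[j, i] + sum(Aabs[j, k] * d[k]
            for k in range(n) if k not in (i, j))) / A[j, j]

bound = min(1.0 - sum(Aabs[j, i] * s(j, i) for j in range(n) if j != i) / A[i, i]
            for i in range(n))
tau = min(abs(np.linalg.eigvals(A * np.linalg.inv(A))))
assert bound <= tau + 1e-12 and tau <= 1.0 + 1e-12
```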

Li et al. in [14, Theorem 3.4] obtained the result: If \(A=(a_{ij})\in M_n\), then

$$\begin{aligned} \tau (A\circ A^{-1})\ge \min \limits _{i\in N}\Big \{1-\frac{1}{a_{ii}}\sum \limits _{j\ne i}|a_{ji}|m_{ji}\Big \}, \end{aligned}$$
(4)

where

$$\begin{aligned}&r_{ji}=\frac{|a_{ji}|}{|a_{jj}|-\sum \limits _{k \ne j,i}|a_{jk}|}, \quad r_i=\max \limits _{j\ne {i}}\{r_{ji}\}, \quad m_{ji}=\frac{|a_{ji}|+\sum \limits _{k\ne {j,i}}|a_{jk}|r_i}{a_{jj}},\\&\quad m_i=\max \limits _{j\ne i}\{m_{ij}\}. \end{aligned}$$

Subsequently, Zhou et al. in [24, Theorem 4.8] gave the following result: If \(A=(a_{ij})\in M_n\) is SDD and \(B=(b_{ij})\in M_n\), then

$$\begin{aligned} \tau (B\circ A^{-1})\ge \min \limits _{i\in N}\Bigg \{\frac{b_{ii}-s_i\sum \limits _{j\ne i}|b_{ji}|}{a_{ii}}\Bigg \} \end{aligned}$$
(5)

and

$$\begin{aligned} \tau (B\circ A^{-1})\ge \min \limits _{i\in N}\Bigg \{\frac{b_{ii}-m_i\sum \limits _{j\ne i}|b_{ji}|}{a_{ii}}\Bigg \}. \end{aligned}$$
(6)

Huang et al. in [9, Theorem 3.3] obtained the following result: If \(A=(a_{ij})\in M_n\) is SDD, then

$$\begin{aligned} \tau (A\circ A^{-1})\ge \min \limits _{i\in N}\Bigg \{\frac{a_{ii}-\sum \limits _{j\ne i}|a_{ji}|u_{ji}}{a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}}\Bigg \}, \end{aligned}$$
(7)

where

$$\begin{aligned} v_{ji}=\frac{|a_{ji}|+\sum \limits _{k\ne {j,i}}|a_{jk}|s_{ki}}{a_{jj}}\quad \text{ and }\quad u_{ji}=\frac{|a_{ji}|+\sum \limits _{k\ne {j,i}}|a_{jk}|v_{ki}}{a_{jj}}. \end{aligned}$$

Next, in Sect. 2, by using the theory of Schur complements, we derive a lower bound for the main diagonal entries \({\tilde{a}}_{ii}\) of the inverse \(A^{-1}\) of an M-matrix A. In Sect. 3, we present two types of lower bounds for \(\tau (B\circ A^{-1})\), which are proved to be bigger than those in (3), (4), (5), (6) and (7). Subsequently, we construct two types of lower bounds of \(\tau (B\circ A^{-1})\) with parameters and then obtain two types of sharper lower bounds for \(\tau (B\circ A^{-1})\) by finding the optimal values of those parameters. In addition, we compare the two types of lower bounds and demonstrate their effectiveness by numerical examples. Finally, some concluding remarks are given in Sect. 4.

2 A Lower Bound for the Main Diagonal Entries of the Inverse of an M-Matrix

Let \(A\in M_n\) be SDD and \(A^{-1}=({\tilde{a}}_{ij})\in {\mathbb {R}}^{n\times n}\). In this section, a lower bound of \({\tilde{a}}_{ii}\) for \(i\in N\) is derived by using the theory of Schur complements. Before that, some lemmas are listed.

Lemma 1

([6, Theorem 2.5.4]) Let \(A\in M_n\), \(B\in Z_n\) and \(A\le B\). Then \(B\in M_n\), \(A^{-1}\ge B^{-1}\ge 0\) and \(\det B\ge \det A>0\).

Lemma 2

([21, Theorem 2.4]) Let \(M=\left( \begin{array}{cc}A&{}B\\ C&{}D\end{array}\right) \) be invertible and A be a nonsingular principal submatrix of M. Then

$$\begin{aligned} M^{-1}=\left( \begin{array}{cc} A^{-1}+A^{-1}B(D-CA^{-1}B)^{-1}CA^{-1}&{}-A^{-1}B(D-CA^{-1}B)^{-1}\\ -(D-CA^{-1}B)^{-1}CA^{-1} &{}(D-CA^{-1}B)^{-1} \end{array}\right) . \end{aligned}$$

Lemma 3

([20, Theorem 1.1]) Let \(A\in {\mathbb {R}}^{n\times n}\), \(\emptyset \ne \alpha \subsetneqq N\) and \(A(\alpha )\) be nonsingular. Then

$$\begin{aligned} \det A=\det A(\alpha )\det A/\alpha . \end{aligned}$$

Lemma 4

([21, p. 13]) Let \(A\in {\mathbb {R}}^{n\times n}\) be nonsingular and A(i|j) be the submatrix of A obtained by deleting row i and column j of A. Then the inverse of A can be obtained from the adjoint matrix of A, written as \(\textrm{adj}(A)\), whose (i, j)-entry is the cofactor of \(a_{ji}\), that is, \((-1)^{j+i}\det A(j|i)\). In symbols,

$$\begin{aligned} A^{-1}=\frac{\textrm{adj}(A)}{\det A}. \end{aligned}$$

Lemma 5

Let \(A=(a_{ij})\in M_n\) be SDD. Then, for \(A^{-1}=({\tilde{a}}_{ij})\), we have

$$\begin{aligned} {\tilde{a}}_{ii} \ge \frac{1}{\delta _i} \ge \frac{1}{a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}} \ge \frac{1}{a_{ii}} >0,\quad i\in N, \end{aligned}$$

where

$$\begin{aligned} \delta _i= & {} \min \left\{ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ij}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{jk}a_{ki}}{a_{kk}}, a_{ii}\right. \nonumber \\{} & {} \left. -\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ji}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{ik}a_{kj}}{a_{kk}} \right\} .~~ \end{aligned}$$
(8)

Proof

Since \(A\in M_n\), \(A^{-1}\ge 0\). Let \(A_i\) be the submatrix of A obtained by deleting the i-th row and the i-th column of A, \({\textbf {a}}_i=(a_{i1},\ldots ,a_{i,i-1},a_{i,i+1},\ldots ,a_{in})^\top \in {\mathbb {R}}^{n-1}\) and \({\textbf {b}}_i=(a_{1i},\ldots ,a_{i-1,i},a_{i+1,i},\ldots ,a_{ni})^{\top }\in {\mathbb {R}}^{n-1}\) for \(i\in N\). Then \(A_i\in M_{n-1}\) is SDD and

$$\begin{aligned} A_i\le B_i=\left( \begin{array}{cc}B_{11}&{}B_{12}\\ B_{21}&{}B_{22}\end{array}\right) , \end{aligned}$$

where \(B_{11}=diag (a_{11},\ldots ,a_{i-1,i-1})\), \(B_{12}\in {\mathbb {R}}^{(i-1)\times (n-i)}\) is a zero matrix, \(B_{21}=A(\alpha ,\beta )\in {\mathbb {R}}^{(n-i)\times (i-1)}\) with \(\alpha =\{i+1,\ldots ,n\}\) and \(\beta =\{1,\ldots ,i-1\}\), and \(B_{22}=diag (a_{i+1,i+1},\ldots ,a_{nn})\). By Lemmas 1 and 2 and \(B_i\in Z_{n-1}\), it follows that \(B_i\in M_{n-1}\) and

$$\begin{aligned} A_i^{-1}\ge B_i^{-1}=\left( \begin{array}{cc} B_{11}^{-1}&{}{\textbf {0}}\\ -B_{22}^{-1}B_{21}B_{11}^{-1}&{}B_{22}^{-1} \end{array}\right) \ge 0. \end{aligned}$$

(9)

By Lemmas 3 and 4, we have

$$\begin{aligned} {\tilde{a}}_{ii}=\frac{\det A_i}{\det A}=\frac{\det A_i}{\det A_i\det A/A_i}=\frac{1}{\det A/A_i}. \end{aligned}$$
(10)

By (9), we have

$$\begin{aligned} \det A/A_i= & {} a_{ii}-{\textbf {a}}_i^\top A_i^{-1}{} {\textbf {b}}_i\nonumber \\\le & {} a_{ii}-{\textbf {a}}_i^\top B_i^{-1} {\textbf {b}}_i = a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ij}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{jk}a_{ki}}{a_{kk}}. \end{aligned}$$
(11)

Similarly, by Lemmas 1 and 2 and

$$\begin{aligned} A_i\le C_i=\left( \begin{array}{cc}B_{11}&{}C_{12}\\ C_{21}&{}B_{22}\end{array}\right) , \end{aligned}$$

where \(C_{12}=A(\beta ,\alpha )\in {\mathbb {R}}^{(i-1)\times (n-i)}\), \(C_{21}\in {\mathbb {R}}^{(n-i)\times (i-1)}\) is a zero matrix and \(C_i\in Z_{n-1}\), it follows that \(C_i\in M_{n-1}\) and

$$\begin{aligned} A_i^{-1}\ge C_i^{-1}=\left( \begin{array}{cc} B_{11}^{-1}&{}-B_{11}^{-1}C_{12}B_{22}^{-1}\\ {\textbf {0}}&{}B_{22}^{-1} \end{array}\right) \ge 0, \end{aligned}$$

which implies that

$$\begin{aligned} \det A/A_i={} & {} a_{ii}-{\textbf {a}}_i^\top A_i^{-1} {\textbf {b}}_i\nonumber \\{} & {} \le a_{ii}-{\textbf {a}}_i^\top C_i^{-1} {\textbf {b}}_i = a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ji}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{ik}a_{kj}}{a_{kk}}.\qquad \end{aligned}$$
(12)

From (11) and (12), we have

$$\begin{aligned} \det A/A_i\le & {} \min \left\{ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ij}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{jk}a_{ki}}{a_{kk}}, a_{ii}\right. \nonumber \\{} & {} \quad \left. -\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ji}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{ik}a_{kj}}{a_{kk}} \right\} \nonumber \\= & {} \delta _i. \end{aligned}$$
(13)

By \(A\in M_n\), \(A_i\in M_{n-1}\) and Lemma 1, we have \(\det A>0\) and \(\det A_i>0\). By (10) and (13), it follows that

$$\begin{aligned} {\tilde{a}}_{ii}=\frac{\det A_i}{\det A}=\frac{1}{\det A/A_i}\ge \frac{1}{\delta _i}>0. \end{aligned}$$

Furthermore, from \(a_{ii}>0\) and \(a_{ij}\le 0\) for \(i\ne j\), \(i,j\in N\), it is easy to see that

$$\begin{aligned} 0<\delta _i \le a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}} \le a_{ii} \end{aligned}$$

and, consequently, that

$$\begin{aligned} \frac{1}{\delta _i}\ge \frac{1}{a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}} \ge \frac{1}{a_{ii}}>0. \end{aligned}$$

The conclusion follows. \(\square \)
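The chain of inequalities in Lemma 5 can be verified numerically; the sketch below (with an example SDD M-matrix chosen here) computes \(\delta _i\) from (8) and checks \({\tilde{a}}_{ii}\ge 1/\delta _i\ge 1/(a_{ii}-\sum _{j\ne i}a_{ij}a_{ji}/a_{jj})\ge 1/a_{ii}>0\):

```python
# Numerical check of Lemma 5 (example SDD M-matrix chosen here).
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
n = len(A)
Ainv = np.linalg.inv(A)

def delta(i):
    # delta_i of (8), written with 0-based indices
    base = A[i, i] - sum(A[i, j] * A[j, i] / A[j, j] for j in range(n) if j != i)
    c1 = sum(A[i, j] / A[j, j] * sum(A[j, k] * A[k, i] / A[k, k] for k in range(i))
             for j in range(i + 1, n))
    c2 = sum(A[j, i] / A[j, j] * sum(A[i, k] * A[k, j] / A[k, k] for k in range(i))
             for j in range(i + 1, n))
    return min(base + c1, base + c2)

for i in range(n):
    base = A[i, i] - sum(A[i, j] * A[j, i] / A[j, j] for j in range(n) if j != i)
    assert Ainv[i, i] >= 1.0 / delta(i) - 1e-12
    assert 1.0 / delta(i) >= 1.0 / base - 1e-12 >= 1.0 / A[i, i] - 1e-12 > 0.0
```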

3 Two Types of Lower Bounds for \(\tau (B\circ {A^{-1}})\)

In this section, two types of lower bounds for \(\tau (B\circ {A^{-1}})\) are presented. Before that, some notations and lemmas are listed. For any \(i,j\in N\), \(i\ne j\), let

$$\begin{aligned} t_{ji}=\min \{s_{ji},m_{ji}\},\quad q_{ji}=\frac{|a_{ji}|+\sum \limits _{k\ne {j,i}}|a_{jk}|t_{ki}}{a_{jj}}\quad \text{ and }\quad p_{ji}=\frac{|a_{ji}|+\sum \limits _{k\ne {j,i}}|a_{jk}|q_{ki}}{a_{jj}}. \end{aligned}$$

Similar to the proof of Lemmas 1 and 2 of [23], the following lemma is obtained easily.

Lemma 6

Let \(A=(a_{ij})\in M_n\) be SDD and \(A^{-1}=({\tilde{a}}_{ij})\). Then

$$\begin{aligned}&(a)\quad 1> d_j \ge s_{ji} \ge v_{ji} \ge u_{ji} \ge 0,\quad v_{ji}\ge q_{ji}\ge p_{ji} \ge 0,\quad u_{ji} \ge p_{ji},\\&\quad j,i \in N,~ j\ne i; \\&(b)\quad 1> r_i \ge m_{ji} \ge t_{ji} \ge q_{ji} \ge p_{ji} \ge 0,\quad j,i \in N,~ j\ne i;\\&(c)\quad {\tilde{a}}_{ji}\le p_{ji}{\tilde{a}}_{ii},\quad j,i \in N,~ j\ne i. \end{aligned}$$

Lemma 7

Let \(A=(a_{ij})\in M_n\). Then

$$\begin{aligned} 0\le \tau (A)\le \min \limits _{i\in N}{a_{ii}},\quad \min \limits _{i\in N}\sum \limits _{j\in N}a_{ij}\le \tau (A)\le \max \limits _{i\in N}\sum \limits _{j\in N}a_{ij}, \end{aligned}$$
(14)

and

$$\begin{aligned} \frac{1}{\Vert A^{-1}\Vert _\infty }=\frac{1}{\max \limits _{i\in N}\sum \limits _{j\in N}{\tilde{a}}_{ij}}\le \tau (A)\le \frac{1}{\min \limits _{i\in N}\sum \limits _{j\in N}{\tilde{a}}_{ij}}. \end{aligned}$$
(15)

Proof

Assume that A is irreducible. Then \(\tau (A)\) is an eigenvalue of A corresponding to a positive eigenvector \(x=(x_1,\ldots ,x_n)^\top \), i.e., \(Ax=\tau (A)x\). For each \(i\in N\), by \(\tau (A) x_i=\sum \limits _{j\in N}a_{ij}x_j\), we have

$$\begin{aligned} (a_{ii}-\tau (A)) x_i=\sum \limits _{j\ne i}(-a_{ij})x_j\ge 0, \end{aligned}$$
(16)

which implies that \(\tau (A)\le a_{ii}\) and, consequently, that \(\tau (A)\le \min \limits _{i\in N}{a_{ii}}\). Let \(x_s=\min \limits _{i\in N}x_i\). Then \(x_s>0\). Furthermore, from (16), we have

$$\begin{aligned} (a_{ss}-\tau (A)) x_s=\sum \limits _{j\ne s}(-a_{sj})x_j\ge \sum \limits _{j\ne s}(-a_{sj})x_s, \end{aligned}$$

which implies that \(a_{ss}-\tau (A) \ge \sum \limits _{j\ne s}(-a_{sj})\), and hence \(\tau (A)\le \sum \limits _{j\in N}a_{sj}\le \max \limits _{i\in N}\sum \limits _{j\in N}a_{ij}\). Letting \(x_t=\max \limits _{i\in N}x_i\), the bound \(\tau (A)\ge \min \limits _{i\in N}\sum \limits _{j\in N}a_{ij}\) is obtained similarly. By using \(A^{-1}x=\tau (A)^{-1}x=\rho (A^{-1})x\), the two-sided bounds (15) can be proved in the same way.

Assume that A is reducible. For \(\varepsilon >0\), replace \(a_{ij}\) with \(a_{ij}-\varepsilon \) for \(i\ne j\) and \(a_{ii}\) with \(a_{ii}+n\varepsilon \); the resulting matrix is an irreducible M-matrix. Letting \(\varepsilon \rightarrow 0^+\), the conclusions (14) and (15) follow by continuity. \(\square \)
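The bounds (14) and (15) of Lemma 7 are easy to test numerically; the sketch below uses an example M-matrix chosen here:

```python
# Numerical check of Lemma 7 (example M-matrix chosen here): the row-sum
# bounds (14) and the inverse-row-sum bounds (15) on tau(A).
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
tau = min(abs(np.linalg.eigvals(A)))
rows = A.sum(axis=1)
inv_rows = np.linalg.inv(A).sum(axis=1)

assert 0.0 <= tau <= min(np.diag(A))                        # (14), first part
assert rows.min() - 1e-12 <= tau <= rows.max() + 1e-12      # (14), second part
assert 1.0 / inv_rows.max() - 1e-12 <= tau <= 1.0 / inv_rows.min() + 1e-12  # (15)
```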

Note here that Shivakumar et al. in [15, Theorem 1.1] also obtained (14) and (15) but restricted A to be a weakly chained diagonally dominant M-matrix. Hence, Lemma 7 can be viewed as a generalization of Theorem 1.1 of [15].

Lemma 8

([6, Lemma 5.1.2]) If \(A,B\in {\mathbb {R}}^{m\times n}\) and if \(X\in {\mathbb {R}}^{m\times m}\) and \(Y\in {\mathbb {R}}^{n\times n}\) are diagonal matrices, then

$$\begin{aligned} X(A\circ B)Y=(XAY)\circ B =(XA)\circ (BY)=(AY)\circ (XB)=A\circ (XBY). \end{aligned}$$

Lemma 9

([16, Corollary 1.14]) Let \(A=(a_{ij})\in {\mathbb {C}}^{n\times n}\). Then

$$\begin{aligned} \sigma (A)\subseteq \bigcup \limits _{i=1}^{n}\Big \{z\in {\mathbb {C}}: |z-a_{ii}|\le \sum \limits _{j\ne i}|a_{ji}|\Big \}. \end{aligned}$$

Lemma 10

([5, Proposition 1]) If \(A\in M_n\) is irreducible and \(Ax \ge \kappa x\) for a nonnegative nonzero vector x, then \(\tau (A)\ge \kappa \).

Lemma 11

([1, (E17) of Theorem 6.2.3]) Let \(A\in Z_n\). Then \(A\in M_n\) if and only if all the leading principal minors of A are positive.

3.1 The First Type of Lower Bounds for \(\tau (B\circ {A^{-1}})\)

Theorem 1

Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then

$$\begin{aligned} \tau (B\circ A^{-1})\ge \varGamma _s=\min \limits _{i\in N}\varGamma _i, \end{aligned}$$
(17)

where

$$\begin{aligned} \varGamma _i= \frac{b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}}{\min \bigg \{ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ij}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{jk}a_{ki}}{a_{kk}},~ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ji}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{ik}a_{kj}}{a_{kk}} \bigg \}}. \end{aligned}$$

Proof

Let \(A^{-1}=({\tilde{a}}_{ij})\). Since \(A\in M_n\), there is a positive diagonal matrix X such that \(X^{-1}AX\in M_n\) is SDD. Furthermore, by Lemma 8 we have

$$\begin{aligned} \tau (B\circ A^{-1})=\tau (X^{-1}(B\circ A^{-1})X)=\tau (B\circ (X^{-1}AX)^{-1}). \end{aligned}$$

Therefore, we may assume that \(A\in M_n\) is SDD.

Because \(\tau (B\circ A^{-1})\) is an eigenvalue of \(B\circ A^{-1}\), by Lemmas 6 and 9, there is an index \(i\in N\) such that

$$\begin{aligned} |\tau (B\circ A^{-1})-b_{ii}{\tilde{a}}_{ii}| \le \sum \limits _{j\ne i}|b_{ji}{\tilde{a}}_{ji}| \le \sum \limits _{j\ne i}|b_{ji}|p_{ji}{\tilde{a}}_{ii} =-{\tilde{a}}_{ii}\sum \limits _{j\ne i}b_{ji}p_{ji}. \end{aligned}$$
(18)

By Lemma 7, we have \(\tau (B\circ A^{-1})\le b_{ii}{\tilde{a}}_{ii}\), \(i\in N\). Furthermore, by (18) and Lemma 5, we have

$$\begin{aligned} \tau (B\circ A^{-1}) \ge&b_{ii}{\tilde{a}}_{ii}+{\tilde{a}}_{ii}\sum \limits _{j\ne i}b_{ji}p_{ji}=(b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}){\tilde{a}}_{ii}\\ \ge&\Big (b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}\Big )/\delta _i=\varGamma _i\ge \min \limits _{i\in N}\varGamma _i, \end{aligned}$$

where \(\delta _i\) is defined in (8). Hence, the conclusion (17) follows. \(\square \)
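Theorem 1 can be checked end to end on a small example; the sketch below (example SDD M-matrix chosen here, with B = A) builds the weights \(p_{ji}\) through the chain \(d\rightarrow s\), \(r\rightarrow m\), \(t\rightarrow q\rightarrow p\), the quantities \(\delta _i\) of (8), and verifies \(\varGamma _s\le \tau (B\circ A^{-1})\):

```python
# Numerical check of Theorem 1 (example SDD M-matrix chosen here, B = A).
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
B = A.copy()
n = len(A)
Aabs = np.abs(A)

d = np.array([(Aabs[j].sum() - Aabs[j, j]) / A[j, j] for j in range(n)])

def step(w):
    """One refinement sweep: out_ji = (|a_ji| + sum_{k != j,i} |a_jk| w_ki) / a_jj."""
    out = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                out[j, i] = (Aabs[j, i] + sum(Aabs[j, k] * w[k, i]
                             for k in range(n) if k not in (i, j))) / A[j, j]
    return out

s = step(np.tile(d, (n, 1)).T)             # s_ji uses d_k in place of w_ki
r = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        if i != j:
            r[j, i] = Aabs[j, i] / (A[j, j] - sum(Aabs[j, k]
                      for k in range(n) if k not in (i, j)))
ri = np.array([max(r[j, i] for j in range(n) if j != i) for i in range(n)])
m = step(np.tile(ri, (n, 1)))              # m_ji uses r_i in place of w_ki
t = np.minimum(s, m)
p = step(step(t))                          # q = step(t), p = step(q)

def delta(i):
    # delta_i of (8), written with 0-based indices
    base = A[i, i] - sum(A[i, j] * A[j, i] / A[j, j] for j in range(n) if j != i)
    c1 = sum(A[i, j] / A[j, j] * sum(A[j, k] * A[k, i] / A[k, k] for k in range(i))
             for j in range(i + 1, n))
    c2 = sum(A[j, i] / A[j, j] * sum(A[i, k] * A[k, j] / A[k, k] for k in range(i))
             for j in range(i + 1, n))
    return min(base + c1, base + c2)

Gamma = [(B[i, i] + sum(B[j, i] * p[j, i] for j in range(n) if j != i)) / delta(i)
         for i in range(n)]
tau = min(abs(np.linalg.eigvals(B * np.linalg.inv(A))))
assert min(Gamma) <= tau + 1e-12           # the bound (17)
```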

Remark 1

  1. (i)

    Let \(A=B\in M_n\) be SDD. By Lemmas 5 and 6, it is not difficult to see that

    $$\begin{aligned} \varGamma _s=\min \limits _{i\in N}\Bigg \{\frac{a_{ii}+\sum \limits _{j\ne i}a_{ji}p_{ji}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{a_{ii}-\sum \limits _{j\ne i}|a_{ji}|u_{ji}}{a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{a_{ii}-\sum \limits _{j\ne i}|a_{ji}|s_{ji}}{a_{ii}}\Bigg \}, \end{aligned}$$

    which implies that the lower bound in (17) is bigger than those in (3) and (7).

  2. (ii)

    Let \(A=B\in M_n\). By Lemmas 5 and 6, it is not difficult to see that

    $$\begin{aligned} \varGamma _s=\min \limits _{i\in N}\Bigg \{\frac{a_{ii}+\sum \limits _{j\ne i}a_{ji}p_{ji}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{a_{ii}-\sum \limits _{j\ne i}|a_{ji}|m_{ji}}{a_{ii}}\Bigg \}, \end{aligned}$$

    which implies that the lower bound in (17) is bigger than that in (4).

  3. (iii)

    Let \(A\in M_n\) be SDD, \(A=A^\top \) and \(B\in M_n\). Then, by Lemmas 5 and 6, we have \(a_{ii}\ge \delta _i>0\), \(s_i=\max \limits _{j\ne i}\{s_{ij}\}\ge s_{ij}=s_{ji}\ge p_{ji}\ge 0\) and \(m_i=\max \limits _{j\ne i}\{m_{ij}\}\ge m_{ij}=m_{ji}\ge p_{ji}\ge 0\) for \(i,j\in N, i\ne j\), and consequently,

    $$\begin{aligned} \varGamma _s=\min \limits _{i\in N}\Bigg \{\frac{b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{b_{ii}-s_i\sum \limits _{j\ne i}|b_{ji}|}{a_{ii}}\Bigg \} \end{aligned}$$

    and

$$\begin{aligned} \varGamma _s=\min \limits _{i\in N}\Bigg \{\frac{b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{b_{ii}-m_i\sum \limits _{j\ne i}|b_{ji}|}{a_{ii}}\Bigg \}, \end{aligned}$$

    which implies that the lower bound in (17) is bigger than those in (5) and (6).

\(\square \)

Next, a lower bound of \(\tau (B\circ A^{-1})\) with a nonnegative parameter \(\varepsilon \), which is bigger than that in Theorem 1, is constructed.

Theorem 2

Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then

$$\begin{aligned} \tau (B\circ A^{-1})\ge \max \limits _{\varepsilon \ge 0}\min \left\{ \varGamma _s(\varepsilon ),~\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} , \end{aligned}$$
(19)

where s is the index such that \(\varGamma _i\) is minimum for \(i\in N\), that is, \(\varGamma _s=\min \limits _{i\in N}\varGamma _i\),

$$\begin{aligned} \varGamma _s(\varepsilon )= \frac{b_{ss}+\frac{1}{1+\varepsilon }\sum \limits _{j\ne s}b_{js}p_{js}}{\min \bigg \{ a_{ss}-\sum \limits _{j\ne s}\frac{a_{sj}a_{js}}{a_{jj}}+\sum \limits _{j=s+1}^n\frac{a_{sj}}{a_{jj}}\sum \limits _{k=1}^{s-1}\frac{a_{jk}a_{ks}}{a_{kk}},~ a_{ss}-\sum \limits _{j\ne s}\frac{a_{sj}a_{js}}{a_{jj}}+\sum \limits _{j=s+1}^n\frac{a_{js}}{a_{jj}}\sum \limits _{k=1}^{s-1}\frac{a_{sk}a_{kj}}{a_{kk}} \bigg \}},\nonumber \\ \end{aligned}$$
(20)

and

$$\begin{aligned} \varGamma _i(\varepsilon )= & {} \frac{b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}+\varepsilon b_{si}p_{si}}{\min \bigg \{ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ij}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{jk}a_{ki}}{a_{kk}},~ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ji}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{ik}a_{kj}}{a_{kk}} \bigg \}},\nonumber \\{} & {} \quad i\ne s. \end{aligned}$$
(21)

Proof

Similar to the proof of Theorem 1, we may assume that \(A\in M_n\) is SDD. Let \(A^{-1}=({\tilde{a}}_{ij})\) and \(\varGamma _s=\min \limits _{i\in N}\varGamma _i\). Take

$$\begin{aligned}{} & {} x=(x_1,\ldots ,x_{s-1},x_s,x_{s+1},\ldots ,x_n)^\top \in {\mathbb {R}}^n,\nonumber \\{} & {} \quad \text{ where }~ x_i=\left\{ \begin{array}{ll} 1+\varepsilon ,&{}\textrm{if}\ i=s,\\ 1,&{}\textrm{if}\ i\ne s,\ i\in N, \\ \end{array} \right. \quad \end{aligned}$$
(22)

where \(\varepsilon \) is a nonnegative number such that

$$\begin{aligned} \kappa =\min \left\{ \varGamma _s(\varepsilon ),~\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} >0. \end{aligned}$$
(23)

Case I. Assume that \(B\circ A^{-1}\) is irreducible. We next prove that \((B\circ A^{-1})^\top x \ge \kappa x\). By

$$\begin{aligned} ((B\circ A^{-1})^\top x)_s&=(1+\varepsilon )b_{ss}{\tilde{a}}_{ss}+\sum \limits _{j\ne s}b_{js}{\tilde{a}}_{js}\\&\ge \Big ((1+\varepsilon )b_{ss}+\sum \limits _{j\ne s}b_{js}p_{js}\Big ){\tilde{a}}_{ss} \qquad \text{(by Lemma 6)}\\&\ge \Big ((1+\varepsilon )b_{ss}+\sum \limits _{j\ne s}b_{js}p_{js}\Big )/\delta _{s} \qquad \text{(by Lemma 5)}\\&=(1+\varepsilon )\varGamma _s(\varepsilon ) \qquad \text{(by (20))}\\&\ge (1+\varepsilon )\kappa \qquad \text{(by (23))}\\&=\kappa x_s \qquad \text{(by (22))}, \end{aligned}$$

we have \(((B\circ A^{-1})^\top x)_s\ge \kappa x_s\). For each \(i\in N\), \(i\ne s\), by

$$\begin{aligned} ((B\circ A^{-1})^\top x)_i&=b_{ii}{\tilde{a}}_{ii}+\sum \limits _{j\ne i}b_{ji}{\tilde{a}}_{ji}+\varepsilon b_{si}{\tilde{a}}_{si}\\&\ge (b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}+\varepsilon b_{si}p_{si}){\tilde{a}}_{ii} \qquad \text{(by Lemma 6)}\\&\ge (b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}+\varepsilon b_{si}p_{si})/\delta _{i} \qquad \text{(by Lemma 5)}\\&=\varGamma _i(\varepsilon )\ge \kappa \qquad \text{(by (21) and (23))}\\&=\kappa x_i \qquad \text{(by (22))}, \end{aligned}$$

we have \(((B\circ A^{-1})^\top x)_i\ge \kappa x_i\), \(i\ne s\). Consequently, \((B\circ A^{-1})^\top x \ge \kappa x\) follows for a positive vector x. Furthermore, by Lemma 10 and \(\tau ((B\circ A^{-1})^\top )=\tau (B\circ A^{-1})\), we have \(\tau (B\circ A^{-1})\ge \kappa \). By the arbitrariness of \(\varepsilon \), the conclusion (19) follows.

Case II. Assume that \(B\circ A^{-1}\) is reducible. Let \(C=(c_{ij})\in {\mathbb {R}}^{n\times n}\) with \(c_{12}=c_{23}=\cdots =c_{n-1,n}=c_{n1}=1\) and all other \(c_{ij}=0\). Let \(\vartheta \) be a sufficiently small positive number such that all the leading principal minors of \(B\circ A^{-1}-\vartheta C\) are positive. Then \(B\circ A^{-1}-\vartheta C\) is an irreducible M-matrix by Lemma 11. Substituting \(B\circ A^{-1}-\vartheta C\) for \(B\circ A^{-1}\) in Case I and letting \(\vartheta \rightarrow 0^+\), (19) holds by continuity. \(\square \)

Next, a remark is given to indicate that the lower bound in Theorem 2 is more precise than that in Theorem 1.

Remark 2

  1. (i)

    Taking \(\varepsilon =0\) in Theorem 2 gives \(\varGamma _s(0)=\varGamma _s\) and \(\varGamma _i(0)=\varGamma _i\) for \(i\in N, i\ne s\), which implies that

    $$\begin{aligned} \max \limits _{\varepsilon =0}\min \left\{ \varGamma _s(0),\min \limits _{i\ne s}\varGamma _i(0)\right\} =\min \left\{ \varGamma _s,\min \limits _{i\ne s}\varGamma _i\right\} =\varGamma _s=\min \limits _{i\in N}\varGamma _i \end{aligned}$$

    and, consequently, that Theorem 1 is a special case of Theorem 2.

  2. (ii)

    For \(\varepsilon >0\), it is not difficult to see that \(\varGamma _s(\varepsilon )\) is a monotone increasing continuous function of \(\varepsilon \) and that \(\varGamma _i(\varepsilon )\) is a monotone decreasing continuous function of \(\varepsilon \) for \(i\in N, i\ne s\), which implies that there exists a positive number \(\varepsilon ^*\) such that

    $$\begin{aligned} \varGamma _s=\varGamma _s(0)\le \varGamma _s(\varepsilon ^*)=\min \limits _{i\ne s}\varGamma _i(\varepsilon ^*) =\max \limits _{\varepsilon \ge 0}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} , \end{aligned}$$
    (24)

    consequently, the lower bound in Theorem 2 is greater than or equal to that in Theorem 1. Specifically, we can use the positive number \(\varepsilon ^*\) to fine-tune \(\varGamma _s\) and \(\varGamma _i\) for \(i\ne s\): \(\varGamma _s\) is increased to \(\varGamma _s(\varepsilon ^*)\) and \(\varGamma _i\) is decreased to \(\varGamma _i(\varepsilon ^*)\) for \(i\in N, i\ne s\).

\(\square \)

Next, by finding the optimal value of the nonnegative parameter \(\varepsilon \), the specific lower bound for \(\tau (B\circ A^{-1})\) in Theorem 2 is given.

Theorem 3

Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then

$$\begin{aligned} \tau (B\circ A^{-1})\ge \varGamma ^*:=\varGamma _s(\varepsilon ^*), \end{aligned}$$
(25)

where s is the index such that \(\varGamma _i\) attains its minimum over \(i\in N\), that is, \(\varGamma _s=\min \limits _{i\in N}\varGamma _i\), \(\varepsilon ^*=\min \limits _{i\ne s}\varepsilon _i\), and \(\varepsilon _i\) is the unique positive number such that \(\varGamma _s(\varepsilon _i)=\varGamma _i(\varepsilon _i)\).

Proof

For each \(i\in N\) with \(i\ne s\), there exists exactly one positive real number \(\varepsilon _i\) such that \(\varGamma _s(\varepsilon _i)=\varGamma _i(\varepsilon _i)\). Solving \(\varGamma _s(\varepsilon )=\varGamma _i(\varepsilon )\) for each \(i\ne s\) yields \(\varepsilon _1,\ldots ,\varepsilon _{s-1},\varepsilon _{s+1},\ldots ,\varepsilon _n\). For convenience and without loss of generality, assume that \(\varepsilon _1\le \cdots \le \varepsilon _{s-1}\le \varepsilon _{s+1}\le \cdots \le \varepsilon _n\). We next prove that (24) holds when \(\varepsilon ^*=\varepsilon _1\), which implies that \(\varepsilon _1\) is the optimal parameter, at which the lower bound \(\max \limits _{\varepsilon \ge 0}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} \) in Theorem 2 attains its maximum.

Since \(\varGamma _s(\varepsilon )\) is a monotone increasing continuous function of \(\varepsilon \), we have

$$\begin{aligned} \varGamma _s(\varepsilon _1)\le \cdots \le \varGamma _s(\varepsilon _{s-1})\le \varGamma _s(\varepsilon _{s+1})\le \cdots \le \varGamma _s(\varepsilon _n). \end{aligned}$$
(26)

For each \(i\in N\), \(i\ne s\), since \(\varGamma _i(\varepsilon )\) is a monotone decreasing continuous function of \(\varepsilon \), we have \(\varGamma _i(\varepsilon _1)\ge \varGamma _i(\varepsilon _i).\) Furthermore, by \(\varGamma _i(\varepsilon _i)=\varGamma _s(\varepsilon _i)\) and (26), we have \(\varGamma _i(\varepsilon _1)\ge \varGamma _s(\varepsilon _i)\ge \varGamma _s(\varepsilon _1)\), which implies that \(\min \left\{ \varGamma _s(\varepsilon _1),\min \limits _{i\ne s}\varGamma _i(\varepsilon _1)\right\} =\varGamma _s(\varepsilon _1)\) and, consequently, that \(\tau (B\circ A^{-1})\ge \varGamma _s(\varepsilon _1)\).

Next, we prove that

$$\begin{aligned} \varGamma _s(\varepsilon _1)=\max \limits _{\varepsilon \ge 0}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} . \end{aligned}$$
(27)

When \(\varepsilon \in [0,\varepsilon _1]\), as \(\varGamma _s(\varepsilon )\) is an increasing function of \(\varepsilon \), it is clear that \(\varGamma _s(\varepsilon _1)\ge \varGamma _s(\varepsilon )\ge \min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} ,\) which implies that \(\varGamma _s(\varepsilon _1)\ge \max \limits _{0\le \varepsilon \le \varepsilon _1}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} .\) When \(\varepsilon \in [\varepsilon _1,+\infty )\), since \(\varGamma _1(\varepsilon )\) is a decreasing function of \(\varepsilon \) and \(\varGamma _1(\varepsilon _1)=\varGamma _s(\varepsilon _1)\), we have

$$\begin{aligned} \varGamma _s(\varepsilon _1)=\varGamma _1(\varepsilon _1)\ge \varGamma _1(\varepsilon )\ge \min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} , \end{aligned}$$

which implies that \(\varGamma _s(\varepsilon _1)\ge \max \limits _{\varepsilon \ge \varepsilon _1}\min \left\{ \varGamma _s(\varepsilon ),\min \limits _{i\ne s}\varGamma _i(\varepsilon )\right\} .\) Hence, (27) holds, which implies that (25) follows. \(\square \)
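In practice, each \(\varepsilon _i\) is the crossing point of an increasing function \(\varGamma _s(\varepsilon )\) and a decreasing function \(\varGamma _i(\varepsilon )\), so it can be located by bisection. The sketch below illustrates only this root-finding step; the two functions use hypothetical placeholder coefficients chosen here, not values computed from any matrix in the paper.

```python
# Illustrative sketch of computing eps_i in Theorem 3 by bisection.
# Gamma_s(eps) is increasing and Gamma_i(eps) is decreasing, so their
# crossing is found by bracketing. Coefficients below are hypothetical.
def crossing(f, g, lo=0.0, hi=1.0, tol=1e-12):
    """Find eps with f(eps) = g(eps), for f increasing, g decreasing, f(lo) <= g(lo)."""
    while g(hi) > f(hi):          # grow the bracket until the curves have crossed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) <= g(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical shapes of Gamma_s(eps) and one Gamma_i(eps):
Gs = lambda eps: (4.0 - 1.5 / (1.0 + eps)) / 3.2   # increasing in eps
Gi = lambda eps: (4.2 - 0.36 * eps) / 4.0          # decreasing in eps

eps_i = crossing(Gs, Gi)
assert abs(Gs(eps_i) - Gi(eps_i)) < 1e-6
assert Gs(eps_i) > Gs(0.0)                         # the bound strictly improves
```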

3.2 The Second Type of Lower Bounds for \(\tau (B\circ {A^{-1}})\)

Theorem 4

Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then

$$\begin{aligned} \tau (B\circ A^{-1})\ge \varOmega _t=\min \limits _{i\in N}\varOmega _i, \end{aligned}$$
(28)

where

$$\begin{aligned} \varOmega _i= \frac{b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}}{\min \bigg \{ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ij}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{jk}a_{ki}}{a_{kk}},~ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ji}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{ik}a_{kj}}{a_{kk}} \bigg \}}. \end{aligned}$$

Proof

Similar to the proof of Theorem 1, we may assume that \(A\in M_n\) is SDD. Let \(A^{-1}=({\tilde{a}}_{ij})\), \(y=({\tilde{a}}_{11}^{-1},\ldots ,{\tilde{a}}_{nn}^{-1})^\top \in {\mathbb {R}}^n\), and

$$\begin{aligned} \gamma =\min \limits _{i\in N}\Big \{\big (b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}\big )/\delta _i\Big \}, \end{aligned}$$
(29)

where \(\delta _i\) is defined in (8). By (29), we have \(b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}\ge \gamma \delta _i\), \(i\in N\). By Lemma 5, we have \(0<{\tilde{a}}_{ii}^{-1}\le \delta _i\), \(i\in N\). Consequently, \(y>0\) and \(\gamma y_i =\gamma {\tilde{a}}_{ii}^{-1}\le \gamma \delta _i\) for \(i\in N\). By Lemma 6, we have \({\tilde{a}}_{ij}{\tilde{a}}_{jj}^{-1}\le p_{ij}\) for \(i,j\in N, i\ne j\).

Next, the inequality \((B\circ A^{-1}) y \ge \gamma y\) will be proved. Assume that \(B\circ A^{-1}\) is irreducible. By

$$\begin{aligned} ((B\circ A^{-1}) y)_i =b_{ii}+\sum \limits _{j\ne i}b_{ij}{\tilde{a}}_{ij}{\tilde{a}}_{jj}^{-1} \ge b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij} \ge \gamma \delta _i \ge \gamma {\tilde{a}}_{ii}^{-1} = \gamma y_i,\quad i\in N, \end{aligned}$$

it follows that \((B\circ A^{-1}) y\ge \gamma y\) holds for the positive vector y. Furthermore, by Lemma 10, we have \(\tau (B\circ A^{-1})\ge \gamma \), which implies that (28) follows. Now assume that \(B\circ A^{-1}\) is reducible. By an argument similar to that for Case II in the proof of Theorem 2, the conclusion (28) follows immediately. \(\square \)
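As a numerical sanity check of Lemma 5, the sketch below (in Python; it assumes, following Remark 3, that the denominator in Theorem 4 equals \(\delta _i\) defined in (8), and it uses the SDD M-matrix of Example 1 below) computes \(\delta _i\) and verifies that \(1/{\tilde{a}}_{ii}\le \delta _i\le a_{ii}\).

```python
import numpy as np

def delta(A):
    # delta_i: the minimum of the two Schur-complement-style expressions
    # appearing in the denominator of Omega_i (indices are 1-based in the text).
    n = A.shape[0]
    d = np.empty(n)
    for i in range(n):
        base = A[i, i] - sum(A[i, j] * A[j, i] / A[j, j]
                             for j in range(n) if j != i)
        c1 = sum(A[i, j] / A[j, j] * sum(A[j, k] * A[k, i] / A[k, k]
                                         for k in range(i))
                 for j in range(i + 1, n))
        c2 = sum(A[j, i] / A[j, j] * sum(A[i, k] * A[k, j] / A[k, k]
                                         for k in range(i))
                 for j in range(i + 1, n))
        d[i] = min(base + c1, base + c2)
    return d

A = np.array([[17., -3, -5, -5],
              [-5, 11, -1, -1],
              [-4, -4, 16, -2],
              [-3, -4, -3, 16]])
inv = np.linalg.inv(A)
d = delta(A)
# Lemma 5 gives 1/\tilde{a}_{ii} <= delta_i, and Remark 3 gives delta_i <= a_{ii}.
print(np.all(1.0 / np.diag(inv) <= d), np.all(d <= np.diag(A)))
```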

Remark 3

Let \(A=(a_{ij})\in M_n\) be SDD and \(B=(b_{ij})\in M_n\) with \(R_i(B)=R_i(B^\top )\) for \(i\in N\). By Lemmas 5 and 6, we have \(a_{ii}\ge \delta _i>0\), \(s_i=\max \limits _{j\ne i}\{s_{ij}\}\ge s_{ij}\ge p_{ij}\ge 0\) and \(m_i=\max \limits _{j\ne i}\{m_{ij}\}\ge m_{ij}\ge p_{ij}\ge 0\) for \(i,j\in N, i\ne j\), and consequently,

$$\begin{aligned} \varOmega _t=\min \limits _{i\in N}\Bigg \{\frac{b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{b_{ii}-s_i\sum \limits _{j\ne i}|b_{ji}|}{a_{ii}}\Bigg \} \end{aligned}$$

and

$$\begin{aligned} \varOmega _t=\min \limits _{i\in N}\Bigg \{\frac{b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}}{\delta _i}\Bigg \} \ge \min \limits _{i\in N}\Bigg \{\frac{b_{ii}-m_i\sum \limits _{j\ne i}|b_{ji}|}{a_{ii}}\Bigg \}, \end{aligned}$$

which implies that the lower bound in (28) is no smaller than those in (5) and (6). \(\square \)

Next, another lower bound of \(\tau (B\circ A^{-1})\) with a nonnegative parameter \(\theta \), which is at least as large as the bound in Theorem 4, is constructed.

Theorem 5

Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then

$$\begin{aligned} \tau (B\circ A^{-1})\ge \max \limits _{\theta \ge 0}\min \left\{ \varOmega _t(\theta ),~\min \limits _{i\ne t}\varOmega _i(\theta )\right\} , \end{aligned}$$
(30)

where t is the index such that \(\varOmega _i\) is minimum for \(i\in N\), that is, \(\varOmega _t=\min \limits _{i\in N}\varOmega _i\),

$$\begin{aligned} \varOmega _t(\theta )= \frac{b_{tt}+\frac{1}{1+\theta }\sum \limits _{j\ne t}b_{tj}p_{tj}}{\min \bigg \{ a_{tt}-\sum \limits _{j\ne t}\frac{a_{tj}a_{jt}}{a_{jj}}+\sum \limits _{j=t+1}^n\frac{a_{tj}}{a_{jj}}\sum \limits _{k=1}^{t-1}\frac{a_{jk}a_{kt}}{a_{kk}},~ a_{tt}-\sum \limits _{j\ne t}\frac{a_{tj}a_{jt}}{a_{jj}}+\sum \limits _{j=t+1}^n\frac{a_{jt}}{a_{jj}}\sum \limits _{k=1}^{t-1}\frac{a_{tk}a_{kj}}{a_{kk}} \bigg \}}, \end{aligned}$$

and

$$\begin{aligned} \varOmega _i(\theta )= & {} \frac{b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}+\theta b_{it}p_{it}}{\min \bigg \{ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ij}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{jk}a_{ki}}{a_{kk}},~ a_{ii}-\sum \limits _{j\ne i}\frac{a_{ij}a_{ji}}{a_{jj}}+\sum \limits _{j=i+1}^n\frac{a_{ji}}{a_{jj}}\sum \limits _{k=1}^{i-1}\frac{a_{ik}a_{kj}}{a_{kk}} \bigg \}},\\{} & {} \quad i\ne t. \end{aligned}$$

Proof

Similar to the proof of Theorem 1, we assume that \(A\in M_n\) is SDD. Let \(A^{-1}=({\tilde{a}}_{ij})\), \(\varOmega _t=\min \limits _{i\in N}\varOmega _i\),

$$\begin{aligned} z=({\tilde{a}}_{11}^{-1},\ldots ,{\tilde{a}}_{t-1,t-1}^{-1},(1+\theta ){\tilde{a}}_{tt}^{-1},{\tilde{a}}_{t+1,t+1}^{-1},\ldots ,{\tilde{a}}_{nn}^{-1})^\top \in {\mathbb {R}}^n \end{aligned}$$

and \(\theta \) be a nonnegative number such that

$$\begin{aligned} \omega =\min \left\{ \varOmega _t(\theta ),~\min \limits _{i\ne t}\varOmega _i(\theta )\right\} >0. \end{aligned}$$
(31)

By Lemma 5, we have \(\delta _i\ge {\tilde{a}}_{ii}^{-1}>0\) for \(i\in N\), where \(\delta _i\) is defined in (8), and hence z is a positive vector. By (31), we have \(\varOmega _t(\theta )=\Big (b_{tt}+\frac{1}{1+\theta }\sum \limits _{j\ne t}b_{tj}p_{tj}\Big )/\delta _t\ge \omega \), which implies that

$$\begin{aligned} (1+\theta )b_{tt}+\sum \limits _{j\ne t}b_{tj}p_{tj}\ge (1+\theta )\omega \delta _t. \end{aligned}$$
(32)

For \(i\in N, i\ne t\), by (31), we have \(\varOmega _i(\theta )=\Big (b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}+\theta b_{it}p_{it}\Big )/\delta _i\ge \omega ,\) which implies that

$$\begin{aligned} b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}+\theta b_{it}p_{it}\ge \omega \delta _i. \end{aligned}$$
(33)

By Lemma 6, we have \({\tilde{a}}_{ij}{\tilde{a}}_{jj}^{-1}\le p_{ij}\) for \(i,j\in N, j\ne i\).

Next, we prove that (30) holds. Assume that \(B\circ A^{-1}\) is irreducible. By (32) and (33), we have

$$\begin{aligned} ((B\circ A^{-1}) z)_t =&(1+\theta )b_{tt}+\sum \limits _{j\ne t}b_{tj}{\tilde{a}}_{tj}{\tilde{a}}_{jj}^{-1} \ge (1+\theta )b_{tt}+\sum \limits _{j\ne t}b_{tj}p_{tj}\\&\ge (1+\theta )\omega \delta _t \ge (1+\theta )\omega {\tilde{a}}_{tt}^{-1} = \omega z_t \end{aligned}$$

and

$$\begin{aligned} ((B\circ A^{-1}) z)_i =&b_{ii}+\sum \limits _{j\ne i}b_{ij}{\tilde{a}}_{ij}{\tilde{a}}_{jj}^{-1}+\theta b_{it}{\tilde{a}}_{it}{\tilde{a}}_{tt}^{-1} \ge b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}+\theta b_{it}p_{it}\\&\ge \omega \delta _i \ge \omega {\tilde{a}}_{ii}^{-1} = \omega z_i,\quad i\ne t, \end{aligned}$$

which implies that \((B\circ A^{-1})z \ge \omega z\) for the positive vector z. Furthermore, by Lemma 10, we have \(\tau (B\circ A^{-1})\ge \omega \). Since \(\theta \ge 0\) is arbitrary, the conclusion (30) follows. Now assume that \(B\circ A^{-1}\) is reducible. By an argument similar to that for Case II in the proof of Theorem 2, the conclusion (30) follows immediately. \(\square \)

It is easy to see that, for \(\theta \ge 0\), \(\varOmega _t(\theta )\) is a monotonically increasing continuous function of \(\theta \), and \(\varOmega _i(\theta )\) is a monotonically decreasing continuous function of \(\theta \) for \(i\in N, i\ne t\). Next, by finding the optimal value of the nonnegative parameter \(\theta \) and using the same technique as in the proof of Theorem 3, the specific lower bound for \(\tau (B\circ A^{-1})\) in Theorem 5 is presented.

Theorem 6

Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then

$$\begin{aligned} \tau (B\circ A^{-1})\ge \varOmega ^*:=\varOmega _t(\theta ^*), \end{aligned}$$

where t is the index such that \(\varOmega _i\) is minimum for \(i\in N\), that is, \(\varOmega _t=\min \limits _{i\in N}\varOmega _i\), \(\theta ^*=\min \limits _{i\ne t}\theta _i\), and \(\theta _i\) is the positive number such that \(\varOmega _t(\theta _i)=\varOmega _i(\theta _i)\).

3.3 Comparison on Two Types of Lower Bounds for \(\tau (B\circ A^{-1})\)

From Theorems 1 and 4, it is not difficult to see that if \(A=A^\top \) and \(B=B^\top \), then \(\varGamma _i=\varOmega _i\) for \(i\in N\) and, consequently, \(\min \limits _{i\in N}\varGamma _i=\min \limits _{i\in N}\varOmega _i\); that is, the lower bound in Theorem 1 coincides with that in Theorem 4. For general M-matrices A and B, however, either lower bound may be the larger one. Hence, the following results follow easily.

Theorem 7

Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then

$$\begin{aligned} (a)\tau (B\circ A^{-1})\ge \max \Big \{\min \limits _{i\in N}\varGamma _i, \min \limits _{i\in N}\varOmega _i\Big \}, \end{aligned}$$

where \(\varGamma _i=\Big (b_{ii}+\sum \limits _{j\ne i}b_{ji}p_{ji}\Big )/\delta _i\), \(\varOmega _i=\Big (b_{ii}+\sum \limits _{j\ne i}b_{ij}p_{ij}\Big )/\delta _i\) and \(\delta _i\) is defined in (8);

$$\begin{aligned} (b)\tau (B\circ A^{-1})\ge \max \big \{\varGamma ^*, \varOmega ^*\big \}. \end{aligned}$$

3.4 Numerical Examples

In this subsection, two examples are given to show the effectiveness of Theorems 1, 3, 4 and 6.

Example 1

Consider the following strictly diagonally dominant M-matrix

$$\begin{aligned}A=\left( \begin{array}{rrrr} 17&{}-3&{}-5&{}-5\\ -5&{}11&{}-1&{}-1\\ -4&{}-4&{}16&{}-2\\ -3&{}-4&{}-3&{}16\\ \end{array} \right) .\end{aligned}$$

Now, the lower bounds of \(\tau (A\circ A^{-1})\) are considered. By (5) (i.e., (4.6) in [24, Theorem 4.8]) with \(A=B\), we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.4318. \end{aligned}$$

By (6) (i.e., (4.5) in [24, Theorem 4.8]) with \(A=B\), we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.4444. \end{aligned}$$

By (3), i.e., Theorem 3.5 of [11], we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.4771. \end{aligned}$$

By (2), i.e., Theorem 3 of [18], we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.5000. \end{aligned}$$

By Theorem 3 of [23] (taking the number of iterations as 10), we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.5674. \end{aligned}$$

By (4), i.e., Theorem 3.4 of [14], we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.5844. \end{aligned}$$

By Theorem 3.4 of [3], we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.5893. \end{aligned}$$

By (7), i.e., Theorem 3.3 of [9], we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.6526. \end{aligned}$$

By Theorem 3 of [8], we have

$$\begin{aligned} \tau (A\circ A^{-1})\ge 0.6767. \end{aligned}$$

Next, we use Theorems 1, 2 and 3 to obtain lower bounds of \(\tau (A\circ A^{-1})\). By Theorem 1 with \(A=B\), we have \(\varGamma _1=0.8465\), \(\varGamma _2=0.7328\), \(\varGamma _3=0.8802\), \(\varGamma _4=0.8789\), and hence

$$\begin{aligned} \tau (A\circ A^{-1})\ge \min \{\varGamma _1,\varGamma _2,\varGamma _3,\varGamma _4\}=\varGamma _2=0.7328. \end{aligned}$$

By Theorem 2, we have \(s=2\) and

$$\begin{aligned}&\varGamma _2(\varepsilon )=1.2384-\frac{0.5056}{1+\varepsilon },\quad \varGamma _1(\varepsilon )=0.8465-0.1979\varepsilon , \quad \\&\varGamma _3(\varepsilon )=0.8802-0.0253\varepsilon , \quad \varGamma _4(\varepsilon )=0.8789-0.0239\varepsilon . \end{aligned}$$

By Theorem 3, we have \(\varepsilon _1=0.1817\), \(\varepsilon _3=0.3751\), \(\varepsilon _4=0.3724\), and hence

$$\begin{aligned} \varepsilon ^*=\min \{\varepsilon _1, \varepsilon _3, \varepsilon _4\}=0.1817 ~~\text{ and }~~ \tau (A\circ A^{-1})\ge \varGamma ^*:=\varGamma _2(\varepsilon ^*)=0.8105. \end{aligned}$$
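Since \(\varGamma _2(\varepsilon )\) is increasing while the remaining \(\varGamma _i(\varepsilon )\) are linear and decreasing, each \(\varepsilon _i\) is the unique crossing point of a pair of curves and can be located by bisection. A minimal Python sketch with the rounded coefficients displayed above (so the results match only to that accuracy):

```python
def bisect(f, lo=0.0, hi=10.0, tol=1e-10):
    # f is increasing with f(lo) < 0 < f(hi); return the root of f.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

g2 = lambda e: 1.2384 - 0.5056 / (1 + e)      # Gamma_2(eps), increasing
others = [lambda e: 0.8465 - 0.1979 * e,      # Gamma_1(eps)
          lambda e: 0.8802 - 0.0253 * e,      # Gamma_3(eps)
          lambda e: 0.8789 - 0.0239 * e]      # Gamma_4(eps)
eps = [bisect(lambda e, g=g: g2(e) - g(e)) for g in others]
eps_star = min(eps)
print(round(eps_star, 4), round(g2(eps_star), 4))  # 0.1817 0.8105
```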

Finally, we use Theorems 4, 5 and 6 to obtain lower bounds of \(\tau (A\circ A^{-1})\). By Theorem 4 with \(A=B\), we have \(\varOmega _1=0.8201\), \(\varOmega _2=0.8609\), \(\varOmega _3=0.8666\), \(\varOmega _4=0.8375\), and hence

$$\begin{aligned} \tau (A\circ A^{-1})\ge \min \{\varOmega _1,\varOmega _2,\varOmega _3,\varOmega _4\}=\varOmega _1=0.8201. \end{aligned}$$

By Theorem 5, we have \(t=1\) and

$$\begin{aligned}&\varOmega _1(\theta )=1.2640-\frac{0.4439}{1+\theta },\quad \varOmega _2(\theta )=0.8609-0.2996\theta , \quad \\&\varOmega _3(\theta )=0.8666-0.1262\theta , \quad \varOmega _4(\theta )=0.8375-0.0844\theta . \end{aligned}$$

By Theorem 6, we have \(\theta _2=0.0567\), \(\theta _3=0.0870\), \(\theta _4=0.0339\), and hence

$$\begin{aligned} \theta ^*=\min \{\theta _2, \theta _3, \theta _4\}=0.0339 ~~\text{ and }~~ \tau (A\circ A^{-1})\ge \varOmega ^*:=\varOmega _1(\theta ^*)=0.8346. \end{aligned}$$
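Each \(\theta _i\) can also be obtained in closed form: writing \(u=1+\theta \) and \(\varOmega _i(\theta )=c_i-d_i\theta \), the equation \(\varOmega _1(\theta )=\varOmega _i(\theta )\) becomes the quadratic \(d_iu^2+(1.2640-c_i-d_i)u-0.4439=0\), whose positive root gives \(\theta _i=u-1\). A Python sketch with the rounded coefficients above:

```python
import numpy as np

a, b = 1.2640, 0.4439                      # Omega_1(t) = a - b/(1 + t)
cd = [(0.8609, 0.2996),                    # (c_i, d_i) for i = 2, 3, 4
      (0.8666, 0.1262),
      (0.8375, 0.0844)]
thetas = []
for c, d in cd:
    # the quadratic has one positive and one negative real root
    u = max(r.real for r in np.roots([d, a - c - d, -b]) if r.real > 0)
    thetas.append(u - 1.0)                 # theta_i
theta_star = min(thetas)
omega_star = a - b / (1 + theta_star)
print(round(theta_star, 4), round(omega_star, 4))  # 0.0339 0.8346
```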

In fact, \(\tau (A\circ A^{-1})=0.9787.\) This example shows that the lower bounds obtained by Theorems 1, 3, 4 and 6 are bigger than those in [3, 8, 9, 11, 14, 18, 23, 24]. \(\square \)
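The exact value reported above can be reproduced directly from \(\tau (A\circ A^{-1})=\min \{|\lambda |:\lambda \in \sigma (A\circ A^{-1})\}\); a short Python sketch:

```python
import numpy as np

A = np.array([[17., -3, -5, -5],
              [-5, 11, -1, -1],
              [-4, -4, 16, -2],
              [-3, -4, -3, 16]])
H = A * np.linalg.inv(A)                # Hadamard product A o A^{-1}
tau = min(abs(np.linalg.eigvals(H)))    # minimum-modulus eigenvalue
print(round(float(tau), 4))  # 0.9787, the value reported above
```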

Example 2

Let \(A=\left( \begin{array}{ll}2&{}-1\\ -1&{}2\end{array}\right) \in {\mathbb {R}}^{2\times 2}\). Clearly, \(A\in M_2\) is SDD and \(\tau (A\circ A^{-1})=1\) by (1) and (2). By Theorems 1, 3, 4 and 6, we obtain \(\tau (A\circ A^{-1})\ge 1\) in every case, which shows that the lower bounds in Theorems 1, 3, 4 and 6 can attain the exact value of \(\tau (A\circ A^{-1})\) in some cases. \(\square \)
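This can be checked by hand or numerically: \(A^{-1}\) has diagonal entries \(2/3\) and off-diagonal entries \(1/3\), so \(A\circ A^{-1}\) has diagonal entries \(4/3\) and off-diagonal entries \(-1/3\), with eigenvalues \(1\) and \(5/3\). A short Python check:

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
H = A * np.linalg.inv(A)                # [[4/3, -1/3], [-1/3, 4/3]]
tau = min(abs(np.linalg.eigvals(H)))    # eigenvalues are 1 and 5/3
print(round(float(tau), 6))  # 1.0
```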

4 Conclusions

Let \(A=(a_{ij})\in M_n\) and \(B=(b_{ij})\in M_n\). Then \(A^{-1}=({\tilde{a}}_{ij})\ge 0\). In this paper, by using the theory of Schur complements for matrices, we derived in Lemma 5 a lower bound of \({\tilde{a}}_{ii}\), namely \({\tilde{a}}_{ii}\ge 1/\delta _i>0\), which was used to obtain two types of lower bounds of \(\tau (B\circ A^{-1})\) in Theorems 1 and 4. Subsequently, by using the results in Theorems 1 and 4, we constructed the first type of lower bounds of \(\tau (B\circ A^{-1})\) with a nonnegative parameter \(\varepsilon \) and the second type of lower bounds of \(\tau (B\circ A^{-1})\) with a nonnegative parameter \(\theta \). Finally, we gave methods for finding the optimal values of the nonnegative parameters \(\varepsilon \) and \(\theta \) to obtain the preferable lower bounds of \(\tau (B\circ A^{-1})\). In [23], some convergent sequences of lower bounds of \(\tau (B\circ A^{-1})\) are presented to approximate the exact value of \(\tau (B\circ A^{-1})\). Using the methods in [23], a bigger lower bound may be obtained.