1 Introduction

Eigenvalue problems of higher-order tensors have become an important topic in numerical multilinear algebra, a new branch of applied mathematics, and they have a wide range of practical applications [1,2,3,4,5].

The class of \(\mathcal {M}\)-tensors introduced in [6, 7] generalizes the class of M-matrices [8]. Some important properties of \(\mathcal {M}\)-tensors and nonsingular \(\mathcal {M}\)-tensors have been established in [7, 9]. It is noteworthy that some applications of \(\mathcal {M}\)-tensors [6, 7, 9, 10] are related to the eigenvalue problems of \(\mathcal {M}\)-tensors. In [11,12,13,14], some bounds for the minimum H-eigenvalue of nonsingular \(\mathcal {M}\)-tensors have been proposed. The main aim of this paper is to present some new bounds for the minimum H-eigenvalue of weakly irreducible nonsingular \(\mathcal {M}\)-tensors; these bounds improve some existing ones.

Let \(\mathbb {C}(\mathbb {R})\) denote the complex (real) field and \(N=\{1,2,\ldots ,n\}\). We consider an m-order n-dimensional tensor \(\mathcal {A}=(a_{i_{1}i_{2}\ldots i_{m}})\) consisting of \(n^{m}\) entries, denoted by \(\mathcal {A}\in \mathbb {C}^{[m,n]}(\mathbb {R}^{[m,n]})\), if

$$\begin{aligned} a_{i_{1}i_{2}\ldots i_{m}}\in \mathbb {C}(\mathbb {R}),\nonumber \end{aligned}$$

where \(i_{j}=1,2,\ldots ,n\) for \(j=1,2,\ldots ,m\) [9, 15]. Obviously, a vector is a tensor of order 1 and a matrix is a tensor of order 2. Moreover, an m-order n-dimensional tensor \(\mathcal {I}=(\delta _{i_{1}i_{2}\ldots i_{m}})\) is called the unit tensor [16], if its entries are \(\delta _{i_{1}\ldots i_{m}}\) for \(i_{1}, \ldots , i_{m}\in N\), where

$$\begin{aligned} \delta _{i_{1}i_{2}\ldots i_{m}}= \left\{ \begin{array}{l} 1, \qquad \mathrm {if}\,\ i_{1}=\cdots =i_{m},\\ 0,\quad \ \mathrm {otherwise}. \end{array} \right. \nonumber \end{aligned}$$

Let \(\mathcal {A}\in \mathbb {C}^{[m,n]}\). If there exist a number \(\lambda \in \mathbb {C}\) and a nonzero vector \(x=(x_{1},x_{2},\ldots ,x_{n})^\mathrm{T}\in \mathbb {C}^{n}\) satisfying the following homogeneous polynomial equations:

$$\begin{aligned} \mathcal {A}x^{m-1}=\lambda x^{[m-1]}, \end{aligned}$$

then \(\lambda \) is an eigenvalue of \(\mathcal {A}\) and x is the eigenvector of \(\mathcal {A}\) associated with \(\lambda \) [1, 15, 17, 18], where \(\mathcal {A}x^{m-1}\) and \(\lambda x^{[m-1]}\) are vectors, whose ith components are

$$\begin{aligned} (\mathcal {A}x^{m-1})_{i}=\sum \limits _{i_{2},\ldots ,i_{m}\in N}a_{ii_{2}\ldots i_{m}}x_{i_{2}}\ldots x_{i_{m}} \end{aligned}$$

and

$$\begin{aligned} (x^{[m-1]})_{i}=x_{i}^{m-1}. \end{aligned}$$

Furthermore, if \(\lambda \) and x are restricted to the real field, then we call \(\lambda \) an H-eigenvalue of \(\mathcal {A}\) and x an H-eigenvector of \(\mathcal {A}\) associated with \(\lambda \) [15].
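For concreteness, the eigenvalue equation above can be checked numerically. The following Python sketch uses a made-up order-3 (\(m=3\)) diagonal tensor; the names `tensor_apply` and `x_power` are ours, not from the paper.

```python
import numpy as np

# Hypothetical example: an order-3 (m = 3), dimension-2 tensor.
m, n = 3, 2
A = np.zeros((n, n, n))
A[0, 0, 0] = A[1, 1, 1] = 2.0   # diagonal entries

def tensor_apply(A, x):
    """ith entry of A x^{m-1}: sum over i_2, i_3 of a_{i i_2 i_3} x_{i_2} x_{i_3}."""
    return np.einsum('ijk,j,k->i', A, x, x)

def x_power(x, p):
    """Entrywise power x^{[p]}."""
    return x ** p

x = np.ones(n)
lhs = tensor_apply(A, x)          # A x^{m-1}
rhs = 2.0 * x_power(x, m - 1)     # lambda x^{[m-1]} with lambda = 2
print(np.allclose(lhs, rhs))      # True: (2, x) is an H-eigenpair of this diagonal tensor
```

Since \(\lambda = 2\) and \(x\) are real here, \(\lambda \) is an H-eigenvalue in the sense above.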

Let \(\Gamma \) be a digraph with vertex set V and arc set E. If for any \(i,j\in V\ (i\ne j)\) there exist directed paths from i to j and from j to i, then \(\Gamma \) is called strongly connected. If for each vertex \(i\in V\) there exists a circuit to which i belongs, then \(\Gamma \) is called weakly connected. For a tensor \(\mathcal {A}=(a_{i_{1}\ldots i_{m}})\in \mathbb {C}^{[m,n]}\), we associate \(\mathcal {A}\) with a digraph \(\Gamma _{\mathcal {A}}\) as follows. The vertex set of \(\Gamma _{\mathcal {A}}\) is \(V(\mathcal {A})=\{1,\ldots , n\}\), and the arc set of \(\Gamma _{\mathcal {A}}\) is \( E=\{(i,j)|a_{ii_{2}\ldots i_{m}}\ne 0,j\in \{i_{2},\ldots ,i_{m}\}\ne \{i,\ldots ,i\}\}\). Let \(C(\mathcal {A})\) denote the set of circuits of \(\Gamma _{\mathcal {A}}\). A tensor \(\mathcal {A}\) is called weakly irreducible if \(\Gamma _{\mathcal {A}}\) is strongly connected [19,20,21]. The tensor \(\mathcal {A}\) is called reducible if there exists a nonempty proper index subset \(J\subset N\) such that \(a_{i_{1}i_{2}\ldots i_{m}}=0, \forall i_{1}\in J, \forall i_{2},\ldots , i_{m}\notin J\). If \(\mathcal {A}\) is not reducible, then we call \(\mathcal {A}\) irreducible [22].

Let \(\rho (\mathcal {A})=\max \{|\lambda |:\lambda ~\mathrm {is~an~eigenvalue~of}~\mathcal {A}\}\), where \(|\lambda |\) denotes the modulus of \(\lambda \). We call \(\rho (\mathcal {A})\) the spectral radius of the tensor \(\mathcal {A}\) [23]. An m-order n-dimensional tensor \(\mathcal {A}\) is called nonnegative [1, 2, 16, 23, 24] if each of its entries is nonnegative. We call a tensor \(\mathcal {A}\) a \(\mathcal {Z}\)-tensor if all of its off-diagonal entries are nonpositive, which is equivalent to writing \(\mathcal {A} = s\mathcal {I}-\mathcal {B}\), where \(s>0\) and \(\mathcal {B}\) is a nonnegative tensor \((\mathcal {B}\ge 0)\); the set of m-order n-dimensional \(\mathcal {Z}\)-tensors is denoted by \(\mathbb {Z}\). A \(\mathcal {Z}\)-tensor \(\mathcal {A} = s\mathcal {I}-\mathcal {B}\) is an \(\mathcal {M}\)-tensor if \(s\ge \rho (\mathcal {B})\), and it is a nonsingular (strong) \(\mathcal {M}\)-tensor if \(s>\rho (\mathcal {B})\) [6, 7, 9].

Denote by \(\tau (\mathcal {A})\) the minimum value of the real part of all eigenvalues of the tensor \(\mathcal {A}\). Let \(\mathcal {A}=(a_{i_{1}i_{2}\ldots i_{m}})\in \mathbb {R}^{[m,n]}\). For \(i,j\in N, j\ne i\), we denote

$$\begin{aligned}&R_{i}(\mathcal {A})=\sum \limits _{i_{2},\ldots , i_{m}=1}^{n}a_{ii_{2}\ldots i_{m}}, \quad R_{\max }(\mathcal {A}) =\max \limits _{i\in N}R_{i}(\mathcal {A}), \quad R_{\min }(\mathcal {A})=\min \limits _{i\in N}R_{i}(\mathcal {A}),\nonumber \\&r_{i}(\mathcal {A})=\sum \limits _{\delta _{ii_{2}\ldots i_{m}}=0}|a_{ii_{2}\ldots i_{m}}|, \quad r_{i}^{j}(\mathcal {A}) =\sum \limits _{\begin{array}{c} \delta _{ii_{2}\ldots i_{m}}=0,\\ \delta _{ji_{2}\ldots i_{m}}=0 \end{array}}|a_{ii_{2}\ldots i_{m}}|=r_{i}(\mathcal {A})-|a_{ij\ldots j}|.\nonumber \end{aligned}$$
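The row quantities \(R_{i}(\mathcal {A})\), \(r_{i}(\mathcal {A})\) and \(r_{i}^{j}(\mathcal {A})\) can be illustrated numerically. The Python sketch below uses a made-up order-3 tensor; the function names `R`, `r` and `r_j` are ours.

```python
import numpy as np

# A made-up order-3, dimension-2 example; A[i, i2, i3] plays a_{i i2 i3}.
n = 2
A = np.array([[[ 3.0, -1.0], [-1.0, 0.0]],
              [[ 0.0, -1.0], [-1.0, 4.0]]])

def R(A, i):                       # R_i(A): sum of all entries in row i
    return A[i].sum()

def r(A, i):                       # r_i(A): sum of |off-diagonal| entries in row i
    row = np.abs(A[i]).copy()
    row[i, i] = 0.0                # exclude the diagonal entry a_{i i ... i}
    return row.sum()

def r_j(A, i, j):                  # r_i^j(A) = r_i(A) - |a_{i j ... j}|
    return r(A, i) - abs(A[i, j, j])

print(R(A, 0), r(A, 0), r_j(A, 0, 1))
```

For this example \(R_{0}(\mathcal {A})=1\), \(r_{0}(\mathcal {A})=2\) and \(r_{0}^{1}(\mathcal {A})=2\), since \(a_{011}=0\).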

In recent years, much literature has focused on the bounds of the minimum H-eigenvalue of nonsingular \(\mathcal {M}\)-tensors. In [11], He and Huang first proposed the upper and lower bounds for the minimum H-eigenvalue of irreducible nonsingular \(\mathcal {M}\)-tensors as follows.

Lemma 1.1

[11] Let \(\mathcal {A}\in \mathbb {R}^{[m,n]}\) be an irreducible nonsingular \(\mathcal {M}\)-tensor. Then

$$\begin{aligned} R_{\min }(\mathcal {A})\le \tau (\mathcal {A})\le R_{\max }(\mathcal {A}).\nonumber \end{aligned}$$

Lemma 1.2

[11] Let \(\mathcal {A}\in \mathbb {R}^{[m,n]}\) be an irreducible nonsingular \(\mathcal {M}\)-tensor. Then

$$\begin{aligned}&\min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\frac{1}{2}\left\{ a_{i\ldots i}+a_{j\ldots j}-r_{i}^{j}(\mathcal {A}) -\Delta _{ij}^{\frac{1}{2}}(\mathcal {A})\right\} \le \tau (\mathcal {A})\\&\qquad \qquad \quad \le \max \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\frac{1}{2} \left\{ a_{i\ldots i}+a_{j\ldots j}-r_{i}^{j}(\mathcal {A})-\Delta _{ij}^{\frac{1}{2}}(\mathcal {A})\right\} ,\nonumber \end{aligned}$$

where

$$\begin{aligned} \Delta _{ij}(\mathcal {A})=(a_{i\ldots i}-a_{j\ldots j}+r_{i}^{j}(\mathcal {A}))^{2}-4a_{ij\ldots j}r_{j}(\mathcal {A}).\nonumber \end{aligned}$$

Recently, Zhao and Sang in [13] pointed out that there are some errors in the calculation process of Lemma 1.2, and the correction is as follows:

Lemma 1.3

[13] Let \(\mathcal {A}\in \mathbb {R}^{[m,n]}\) be an irreducible nonsingular \(\mathcal {M}\)-tensor. Then

$$\begin{aligned} \tau (\mathcal {A})\ge \min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(A),\nonumber \end{aligned}$$

where

$$\begin{aligned} L_{ij}(A)=\frac{1}{2}\left\{ a_{i\ldots i}+a_{j\ldots j}-r_{i}^{j}(\mathcal {A})-\left[ (a_{i\ldots i}-a_{j\ldots j}-r_{i}^{j}(\mathcal {A}))^{2} -4a_{ij\ldots j}r_{j}(\mathcal {A})\right] ^{\frac{1}{2}}\right\} . \end{aligned}$$
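The corrected lower bound \(\min _{i\ne j}L_{ij}(\mathcal {A})\) is easy to evaluate in practice. The following Python sketch does so for a made-up order-3 nonsingular \(\mathcal {M}\)-tensor (strictly diagonally dominant \(\mathcal {Z}\)-tensor); the helper names are ours.

```python
import numpy as np

# Made-up order-3 Z-tensor with dominant positive diagonal, hence a
# nonsingular M-tensor; A[i, i2, i3] plays a_{i i2 i3}.
A = np.array([[[ 3.0, -1.0], [-1.0, 0.0]],
              [[ 0.0, -1.0], [-1.0, 4.0]]])

def r(A, i):                        # r_i(A): sum of |off-diagonal| row entries
    row = np.abs(A[i]).copy()
    row[i, i] = 0.0
    return row.sum()

def L(A, i, j):                     # L_ij(A) from Lemma 1.3
    a_i, a_j = A[i, i, i], A[j, j, j]
    r_ij = r(A, i) - abs(A[i, j, j])          # r_i^j(A)
    disc = (a_i - a_j - r_ij) ** 2 - 4 * A[i, j, j] * r(A, j)
    return 0.5 * (a_i + a_j - r_ij - np.sqrt(disc))

n = A.shape[0]
tau_lower = min(L(A, i, j) for i in range(n) for j in range(n) if j != i)
print(tau_lower)   # a lower bound on the minimum H-eigenvalue tau(A)
```

For this tensor the bound evaluates to \(1.0\).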

In addition, Wang and Wei [12] presented the following upper and lower bounds on \(\tau (\mathcal {A})\) for weakly irreducible nonsingular \(\mathcal {M}\)-tensors.

Lemma 1.4

[12] Let \(\mathcal {A}\in \mathbb {R}^{[m,n]}\) be a weakly irreducible nonsingular \(\mathcal {M}\)-tensor. Then

$$\begin{aligned}&\min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\frac{1}{2}\left\{ a_{i\ldots i}+a_{j\ldots j}-\widetilde{r}_{i}(\mathcal {A})-\widetilde{\Delta }_{ij}^{\frac{1}{2}}(\mathcal {A})\right\} \le \tau (\mathcal {A})\\&\quad \quad \quad \le \max \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\frac{1}{2}\left\{ a_{i\ldots i}+a_{j\ldots j}-\widetilde{r}_{i}(\mathcal {A}) -\widetilde{\Delta }_{ij}^{\frac{1}{2}}(\mathcal {A})\right\} , \end{aligned}$$

where

$$\begin{aligned} (M(\mathcal {A}))_{ij}= \left\{ \begin{array}{l} r_{i}(\mathcal {A}),\qquad \ i=j,\\ |a_{ij\ldots j}|, \quad \ i\ne j, \end{array} \right. \end{aligned}$$

is a nonnegative matrix and \(\widetilde{\Delta }_{ij}(\mathcal {A})=(a_{i\ldots i}-a_{j\ldots j}-\widetilde{r}_{i}(\mathcal {A}))^{2}+4r_{i}(M(\mathcal {A}))r_{j}(\mathcal {A})\), with \(r_{i}(M(\mathcal {A}))=\sum \nolimits _{j\ne i}(M(\mathcal {A}))_{ij},\widetilde{r}_{i}(\mathcal {A})=r_{i}(\mathcal {A})-r_{i}(M(\mathcal {A}))\).

In this paper, we continue this research on estimates of the minimum H-eigenvalue for weakly irreducible nonsingular \(\mathcal {M}\)-tensors. Inspired by the ideas of [25, 26], we obtain two new estimates of the minimum H-eigenvalue for weakly irreducible nonsingular \(\mathcal {M}\)-tensors, which are proved to be tighter than the bounds of Lemma 1.1 and of Lemma 1.2 in corrected form. Finally, we derive a sharper bound in the Ky Fan theorem for nonsingular \(\mathcal {M}\)-tensors.

The remainder of the paper is organized as follows. In Sect. 2, we recollect some useful lemmas on tensors which are utilized in the subsequent proofs, and then establish some new bounds for \(\tau (\mathcal {A})\). In Sect. 3, a sharper bound in the Ky Fan theorem is obtained. Finally, some conclusions are given in Sect. 4.

2 Several New Estimates of the Minimum H-eigenvalue

In this section, we give several new estimates of the minimum H-eigenvalue for weakly irreducible nonsingular \(\mathcal {M}\)-tensors.

Lemma 2.1

[12] If a tensor \(\mathcal {A}\) is irreducible, then \(\mathcal {A}\) is weakly irreducible.

Lemma 2.2

[11] Let \(\mathcal {A}\) be a nonsingular \(\mathcal {M}\)-tensor and denote by \(\tau (\mathcal {A})\) the minimum value of the real part of all eigenvalues of \(\mathcal {A}\). Then \(\tau (\mathcal {A})\) is an eigenvalue of \(\mathcal {A}\) with a nonnegative eigenvector. Moreover, if \(\mathcal {A}\) is irreducible, then \(\tau (\mathcal {A})\) is the unique eigenvalue with a positive eigenvector.

Zhang et al. [6] obtained some results similar to those of Lemma 2.2 for weakly irreducible nonsingular \(\mathcal {M}\)-tensors in the following lemma.

Lemma 2.3

[6] Let \(\mathcal {A}\) be a nonsingular \(\mathcal {M}\)-tensor and denote by \(\tau (\mathcal {A})\) the minimum value of the real part of all eigenvalues of \(\mathcal {A}\). Then \(\tau (\mathcal {A})\) is an H-eigenvalue of \(\mathcal {A}\) with a nonnegative eigenvector. Moreover, if \(\mathcal {A}\) is a weakly irreducible \(\mathcal {Z}\)-tensor, then \(\tau (\mathcal {A})\) is the unique eigenvalue with a positive eigenvector.

Lemma 2.4

[27] Let \(\mathcal {A}\) be a weakly irreducible nonsingular \(\mathcal {M}\)-tensor. Then \(\tau (\mathcal {A})<\min \limits _{i\in N}\{a_{ii\ldots i}\}\).

For any given nonsingular diagonal matrix \(D=\mathrm {diag}(d_{1},\ldots , d_{n})\), we define a tensor \(\mathcal {A}_{D}\) as follows:

$$\begin{aligned} \mathcal {A}_{D}=\mathcal {A}\times _{1}D^{1-m}\times _{2}D\times _{3}\cdots \times _{m}D,\nonumber \end{aligned}$$

where \(\times _{k}\) denotes the k-mode product of the tensor \(\mathcal {A}\) with the matrix D [28]. The entries of \(\mathcal {A}_{D}\) are given in [9] as follows:

$$\begin{aligned} (\mathcal {A}_{D})_{i_{1}i_{2}\ldots i_{m}}=a_{i_{1}i_{2}\ldots i_{m}}d_{i_{1}}^{1-m}d_{i_{2}}\ldots d_{i_{m}},\quad 1\le i_{1},i_{2},\ldots ,i_{m}\le n.\nonumber \end{aligned}$$
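The entrywise formula for \(\mathcal {A}_{D}\) can be realized directly. The sketch below (made-up order-3 tensor, \(d\) chosen arbitrarily) shows in particular that diagonal entries are unchanged, since \(d_{i}^{1-m}d_{i}^{m-1}=1\).

```python
import numpy as np

# Sketch of the diagonal similarity (A_D)_{i1 i2 i3} = a_{i1 i2 i3} d_{i1}^{1-m} d_{i2} d_{i3}
# for an order-3 tensor; A and d are made-up examples.
m, n = 3, 2
A = np.array([[[ 3.0, -1.0], [-1.0, 0.0]],
              [[ 0.0, -1.0], [-1.0, 4.0]]])
d = np.array([1.0, 2.0])           # diagonal entries of the nonsingular matrix D

A_D = np.einsum('ijk,i,j,k->ijk', A, d ** (1 - m), d, d)

print(A_D[0, 0, 0], A_D[1, 1, 1])  # diagonal entries preserved: 3.0 4.0
```

By Lemma 2.5 below, such a transformation preserves the eigenvalues of \(\mathcal {A}\).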

Lemma 2.5

[23] The tensors \(\mathcal {A}_{D}\) and \(\mathcal {A}\) have the same set of eigenvalues.

Lemma 2.6

Let \(f(x)=a_{1}x^{2}+b_{1}x+c_{1}\) and \(g(x)=a_{2}x^{2}+b_{2}x+c_{2}\), where \(a_{1}>0\) and \(a_{2}>0\). Assume that \(x_{1}\le x_{2}\) and \(\widetilde{x}_{1}\le \widetilde{x}_{2}\) are the real roots of \(f(x)=0\) and \(g(x)=0\), respectively, so that the solution set of \(f(x)\le 0\) is \([x_{1},x_{2}]\) and that of \(g(x)\le 0\) is \([\widetilde{x}_{1},\widetilde{x}_{2}]\). If \(g(x)\le 0\) whenever \(f(x)\le 0\), then \([x_{1},x_{2}]\subseteq [\widetilde{x}_{1},\widetilde{x}_{2}]\).

Proof

It is obvious that \([x_{1},x_{2}]\) and \([\widetilde{x}_{1},\widetilde{x}_{2}]\) are the solution sets of \(f(x)\le 0\) and \(g(x)\le 0\), respectively. Since \(g(x)\le 0\) whenever \(f(x)\le 0\), we get \(g(x)\le 0\) for any \(x\in [x_{1},x_{2}]\). Because \([\widetilde{x}_{1},\widetilde{x}_{2}]\) is the solution set of \(g(x)\le 0\), it follows that \(x\in [\widetilde{x}_{1},\widetilde{x}_{2}]\), i.e., \([x_{1},x_{2}]\subseteq [\widetilde{x}_{1},\widetilde{x}_{2}]\). \(\square \)
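Lemma 2.6 can be illustrated numerically; the two quadratics below are made-up examples satisfying its hypothesis.

```python
import numpy as np

# f(x) = x^2 - 3x + 2 has roots 1, 2; g(x) = x^2 - 4x + 3 has roots 1, 3.
# On [1, 2] (where f <= 0) we also have g <= 0, so Lemma 2.6 predicts
# [1, 2] is contained in [1, 3].
x1, x2 = sorted(np.roots([1.0, -3.0, 2.0]))   # roots of f
y1, y2 = sorted(np.roots([1.0, -4.0, 3.0]))   # roots of g

assert all(x**2 - 4*x + 3 <= 1e-12 for x in np.linspace(x1, x2, 50))
assert y1 <= x1 and x2 <= y2                  # [x1, x2] subset of [y1, y2]
print([x1, x2], "subset of", [y1, y2])
```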

Lemma 2.7

([29], Lemmas 2.2 and  2.3) Let \(a,b,c \ge 0\) and \(d >0\).

(I) If \(\frac{a}{b+c+d}\le 1\), then

$$\begin{aligned} \frac{a-(b+c)}{d}\le \frac{a-b}{c+d}\le \frac{a}{b+c+d}.\nonumber \end{aligned}$$

(II) If \(\frac{a}{b+c+d}\ge 1\), then

$$\begin{aligned} \frac{a-(b+c)}{d}\ge \frac{a-b}{c+d}\ge \frac{a}{b+c+d}.\nonumber \end{aligned}$$
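Both cases of Lemma 2.7 can be sanity-checked numerically; the values of a, b, c, d below are made up.

```python
# Case (I): a/(b+c+d) <= 1.
a, b, c, d = 2.0, 1.0, 1.0, 2.0          # a/(b+c+d) = 0.5
assert (a - (b + c)) / d <= (a - b) / (c + d) <= a / (b + c + d)

# Case (II): a/(b+c+d) >= 1.
a, b, c, d = 10.0, 1.0, 1.0, 2.0         # a/(b+c+d) = 2.5
assert (a - (b + c)) / d >= (a - b) / (c + d) >= a / (b + c + d)
print("Lemma 2.7 cases verified on sample values")
```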

2.1 The New Brauer-Type Estimates of Minimum H-eigenvalue

In this subsection, we present new Brauer-type estimates of the minimum H-eigenvalue for weakly irreducible nonsingular \(\mathcal {M}\)-tensors, which are tighter than the results of Lemma 1.1 and of Lemma 1.2 in corrected form.

We denote

$$\begin{aligned}&\Delta _{i}=\{(i_{2},i_{3},\ldots ,i_{m}): i_{j}=i~\mathrm {for~some}~j\in \{2,\ldots ,m\},~\mathrm {where}~i,i_{2},\ldots ,i_{m}\in N\},\nonumber \\&\overline{\Delta }_{i}=\{(i_{2},i_{3},\ldots ,i_{m}): i_{j}\ne i~\mathrm {for~any}~j\in \{2,\ldots ,m\},~\mathrm {where}~i,i_{2},\ldots ,i_{m}\in N\},\nonumber \end{aligned}$$

and let

$$\begin{aligned} r_{i}^{\Delta _{i}}(\mathcal {A})=\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \Delta _{i},\\ \delta _{ii_{2}\ldots i_{m}}=0 \end{array}}|a_{ii_{2}\ldots i_{m}}|, \quad r_{i}^{\overline{\Delta }_{i}}(\mathcal {A})=\sum \limits _{(i_{2},\ldots ,i_{m})\in \overline{\Delta }_{i}}|a_{ii_{2}\ldots i_{m}}|.\nonumber \end{aligned}$$

Then, \(r_{i}(\mathcal {A})=r_{i}^{\Delta _{i}}(\mathcal {A})+r_{i}^{\overline{\Delta } _{i}}(\mathcal {A}).\)
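The splitting \(r_{i}(\mathcal {A})=r_{i}^{\Delta _{i}}(\mathcal {A})+r_{i}^{\overline{\Delta }_{i}}(\mathcal {A})\) can be verified on an example. The Python sketch below uses a made-up order-3 tensor; the function name `split` is ours.

```python
import numpy as np
from itertools import product

# Made-up order-3 example; A[i, i2, i3] plays a_{i i2 i3}.
n = 2
A = np.array([[[ 3.0, -1.0], [-1.0, 0.0]],
              [[ 0.0, -1.0], [-1.0, 4.0]]])

def split(A, i):
    """Return (r_i^{Delta_i}, r_i^{Delta_i-bar}): |entries| of row i summed over
    tuples containing i (diagonal excluded) vs. tuples avoiding i entirely."""
    in_delta = out_delta = 0.0
    for i2, i3 in product(range(n), repeat=2):
        if (i2, i3) == (i, i):
            continue                       # skip the diagonal entry a_{i i i}
        if i in (i2, i3):
            in_delta += abs(A[i, i2, i3])
        else:
            out_delta += abs(A[i, i2, i3])
    return in_delta, out_delta

r_in, r_out = split(A, 0)
print(r_in + r_out)    # equals r_0(A)
```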

Theorem 2.1

Let \(\mathcal {A}=(a_{i_{1}i_{2}\ldots i_{m}})\in \mathbb {R}^{[m,n]}\) be a weakly irreducible nonsingular \(\mathcal {M}\)-tensor with \(n\ge 2\). Then

$$\begin{aligned} \Lambda _{\min }\le \tau (\mathcal {A})\le \overline{\Lambda }_{\max },\nonumber \end{aligned}$$

where

$$\begin{aligned}&\Lambda _{\min }=\min \{\widetilde{\Lambda }_{\min },\overline{\Lambda }_{\min }\},\quad \widetilde{\Lambda }_{\min } =\min \limits _{i\in N}\{a_{ii\ldots i}-r_{i}^{\Delta _{i}}(\mathcal {A})\},\nonumber \\&\overline{\Lambda }_{\min }=\min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\max \{\frac{1}{2} (a_{ii\ldots i}+a_{j j\ldots j}-r_{i}^{\Delta _{i}}(\mathcal {A})-r_{j}^{\overline{\Delta }_{i}}(\mathcal {A}) -\Omega _{i,j}^{\frac{1}{2}}),R_{i}(\mathcal {A})\},\nonumber \\&\overline{\Lambda }_{\max }=\max \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\min \{\frac{1}{2} (a_{ii\ldots i}+a_{j j\ldots j}-r_{i}^{\Delta _{i}}(\mathcal {A})-r_{j}^{\overline{\Delta }_{i}}(\mathcal {A}) -\Omega _{i,j}^{\frac{1}{2}}),R_{i}(\mathcal {A})\},\nonumber \\&\Omega _{i,j}=(a_{ii\ldots i}-a_{jj\ldots j}-r_{i}^{\Delta _{i}}(\mathcal {A})+ r_{j}^{\overline{\Delta }_{i}}(\mathcal {A}))^{2}+4r_{i}^{\overline{\Delta }_{i}}(\mathcal {A}) r_{j}^{\Delta _{i}}(\mathcal {A}).\nonumber \end{aligned}$$

Proof

Since \(\mathcal {A}\) is a weakly irreducible nonsingular \(\mathcal {M}\)-tensor, by Lemma 2.3, there exists \(x=(x_{1},x_{2},\ldots , x_{n})^\mathrm{T}>0\) such that

$$\begin{aligned} \mathcal {A}x^{m-1}=\tau (\mathcal {A})x^{[m-1]}. \end{aligned}$$
(2.1)

Now, the proof proceeds in two steps.

(i) Let \(x_{t}\ge x_{l}\ge \max \{x_{k}:k\in N,k\ne t,k\ne l\}\) (where the last term above is defined to be zero if \(n=2\)). From (2.1), we have

$$\begin{aligned} (a_{tt\ldots t}-\tau (\mathcal {A}))x_{t}^{m-1}= & {} -\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \Delta _{t},\\ \delta _{ti_{2}\ldots i_{m}}=0 \end{array}}a_{ti_{2}\ldots i_{m}}x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\\&- \sum \limits _{(i_{2},\ldots ,i_{m})\in \overline{\Delta }_{t}}a_{ti_{2}\ldots i_{m}}x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}.\nonumber \end{aligned}$$

Taking absolute values (the off-diagonal entries of \(\mathcal {A}\) are nonpositive) and using the ordering of the entries of x gives

$$\begin{aligned} (a_{tt\ldots t}-\tau (\mathcal {A}))x_{t}^{m-1}= & {} \sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \Delta _{t},\\ \delta _{ti_{2}\ldots i_{m}}=0 \end{array}}|a_{ti_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}+ \sum \limits _{(i_{2},\ldots ,i_{m})\in \overline{\Delta }_{t}}|a_{ti_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\nonumber \\\le & {} \sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \Delta _{t},\\ \delta _{ti_{2}\ldots i_{m}}=0 \end{array}} |a_{ti_{2}\ldots i_{m}}|x_{t}^{m-1}+\sum \limits _{(i_{2},\ldots ,i_{m})\in \overline{\Delta }_{t}} |a_{ti_{2}\ldots i_{m}}|x_{l}^{m-1}\nonumber \\= & {} r_{t}^{\Delta _{t}}(\mathcal {A})x_{t}^{m-1}+r_{t}^{\overline{\Delta }_{t}}(\mathcal {A})x_{l}^{m-1}. \end{aligned}$$

Equivalently

$$\begin{aligned} (a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\Delta _{t}}(\mathcal {A}))x_{t}^{m-1}\le r_{t}^{\overline{\Delta }_{t}}(\mathcal {A})x_{l}^{m-1}.\nonumber \end{aligned}$$

If \(a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\Delta _{t}}(\mathcal {A})\le 0\), then

$$\begin{aligned} \tau (\mathcal {A})\ge a_{tt\ldots t}-r_{t}^{\Delta _{t}}(\mathcal {A})\ge \min \limits _{i\in N}\{a_{ii\ldots i} -r_{i}^{\Delta _{i}}(\mathcal {A})\}. \end{aligned}$$
(2.2)

Otherwise, we have \(a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\Delta _{t}}(\mathcal {A})> 0\), which means that

$$\begin{aligned} 0<(a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\Delta _{t}}(\mathcal {A}))x_{t}^{m-1}\le r_{t}^{\overline{\Delta }_{t}}(\mathcal {A})x_{l}^{m-1}. \end{aligned}$$
(2.3)

On the other hand, by (2.1) we can get

$$\begin{aligned} (a_{ll\ldots l}-\tau (\mathcal {A}))x_{l}^{m-1}= & {} \sum \limits _{(i_{2},\ldots ,i_{m})\in \Delta _{t}}|a_{li_{2}\ldots i_{m}}|x_{i_{2}} x_{i_{3}}\ldots {x_{i_{m}}}+\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \overline{\Delta }_{t},\\ \delta _{li_{2}\ldots i_{m}}=0 \end{array}}|a_{li_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\nonumber \\\le & {} \sum \limits _{(i_{2},\ldots ,i_{m})\in \Delta _{t}}|a_{li_{2}\ldots i_{m}}|x_{t}^{m-1}+ \sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \overline{\Delta }_{t},\\ \delta _{li_{2}\ldots i_{m}}=0 \end{array}}|a_{li_{2}\ldots i_{m}}|x_{l}^{m-1}\nonumber \\= & {} r_{l}^{\Delta _{t}}(\mathcal {A})x_{t}^{m-1}+r_{l}^{\overline{\Delta }_{t}}(\mathcal {A})x_{l}^{m-1}, \end{aligned}$$

i.e.,

$$\begin{aligned} (a_{ll\ldots l}-\tau (\mathcal {A})-r_{l}^{\overline{\Delta }_{t}}(\mathcal {A}))x_{l}^{m-1}\le r_{l}^{\Delta _{t}}(\mathcal {A})x_{t}^{m-1}. \end{aligned}$$
(2.4)

Multiplying Inequalities (2.3) and (2.4) yields

$$\begin{aligned}&(a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\Delta _{t}}(\mathcal {A}))(a_{ll\ldots l} -\tau (\mathcal {A})-r_{l}^{\overline{\Delta }_{t}}(\mathcal {A}))x_{t}^{m-1}x_{l}^{m-1}\\&\quad \le r_{t}^{\overline{\Delta }_{t}}(\mathcal {A}){r_{l}^{\Delta _{t}}(\mathcal {A})}x_{t}^{m-1}x_{l}^{m-1}. \end{aligned}$$

Note that \(x_{t}^{m-1}x_{l}^{m-1}>0\), thus

$$\begin{aligned} (a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\Delta _{t}}(\mathcal {A}))(a_{ll\ldots l} -\tau (\mathcal {A})-r_{l}^{\overline{\Delta }_{t}}(\mathcal {A}))\le r_{t}^{\overline{\Delta }_{t}}(\mathcal {A}){r_{l}^{\Delta _{t}}(\mathcal {A})}, \end{aligned}$$

which is equivalent to

$$\begin{aligned}&\tau (\mathcal {A})^{2}-(a_{tt\ldots t}+a_{ll\ldots l}-r_{t}^{\Delta _{t}}(\mathcal {A}) -r_{l}^{\overline{\Delta }_{t}}(\mathcal {A}))\tau (\mathcal {A})\\&\quad +\, (a_{tt\ldots t}-r_{t}^{\Delta _{t}}(\mathcal {A})) (a_{ll\ldots l}-r_{l}^{\overline{\Delta }_{t}}(\mathcal {A})) -r_{t}^{\overline{\Delta }_{t}}(\mathcal {A})r_{l}^{\Delta _{t}}(\mathcal {A})\le 0. \end{aligned}$$

Solving this quadratic inequality in \(\tau (\mathcal {A})\) gives the following lower bound:

$$\begin{aligned} \tau (\mathcal {A})\ge \frac{1}{2}\left( a_{tt\ldots t}+a_{ll\ldots l} -r_{t}^{\Delta _{t}}(\mathcal {A})-r_{l}^{\overline{\Delta }_{t}}(\mathcal {A})-\Omega _{t,l}^{\frac{1}{2}}\right) , \end{aligned}$$
(2.5)

where

$$\begin{aligned} \Omega _{t,l}=(a_{tt\ldots t}-a_{ll\ldots l}-r_{t}^{\Delta _{t}}(\mathcal {A})+ r_{l}^{\overline{\Delta }_{t}}(\mathcal {A}))^{2}+4r_{t}^{\overline{\Delta }_{t}}(\mathcal {A}) r_{l}^{\Delta _{t}}(\mathcal {A}).\nonumber \end{aligned}$$

Furthermore, by Inequality (2.3), we can get that

$$\begin{aligned} a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\Delta _{t}}(\mathcal {A})\le r_{t}^{\overline{\Delta }_{t}}(\mathcal {A});\nonumber \end{aligned}$$

consequently,

$$\begin{aligned} \tau (\mathcal {A})\ge a_{tt\ldots t}-r_{t}^{\Delta _{t}}(\mathcal {A}) -r_{t}^{\overline{\Delta }_{t}}(\mathcal {A})=a_{tt\ldots t}-r_{t}(\mathcal {A})=R_{t}(\mathcal {A}). \end{aligned}$$
(2.6)

Combining Inequalities (2.5) and (2.6), we have

$$\begin{aligned} \tau (\mathcal {A})\ge & {} \max \left\{ \frac{1}{2}(a_{tt\ldots t}+a_{ll\ldots l} -r_{t}^{\Delta _{t}}(\mathcal {A})-r_{l}^{\overline{\Delta }_{t}}(\mathcal {A}) -\Omega _{t,l}^{\frac{1}{2}}),R_{t}(\mathcal {A})\right\} \nonumber \\\ge & {} \min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\max \left\{ \frac{1}{2}(a_{ii\ldots i} +a_{j j\ldots j}-r_{i}^{\Delta _{i}}(\mathcal {A})-r_{j}^{\overline{\Delta }_{i}}(\mathcal {A}) -\Omega _{i,j}^{\frac{1}{2}}),R_{i}(\mathcal {A})\right\} .\qquad \quad \end{aligned}$$
(2.7)

The first inequality in Theorem 2.1 follows from Inequalities (2.2) and (2.7).

(ii) Let \(x_{p}\le x_{q}\le \min \{x_{k}:k\in N,k\ne p,k\ne q\}\). By (2.1), we derive that

$$\begin{aligned} (a_{pp\ldots p}-\tau (\mathcal {A}))x_{p}^{m-1}= & {} -\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \Delta _{p},\\ \delta _{pi_{2}\ldots i_{m}}=0 \end{array}} a_{pi_{2}\ldots i_{m}}x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\\&-\sum \limits _{(i_{2},\ldots ,i_{m})\in \overline{\Delta }_{p}} a_{pi_{2}\ldots i_{m}}x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\nonumber \end{aligned}$$

and

$$\begin{aligned} (a_{qq\ldots q}-\tau (\mathcal {A}))x_{q}^{m-1}= & {} -\sum \limits _{(i_{2},\ldots ,i_{m})\in \Delta _{p}} a_{qi_{2}\ldots i_{m}}x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\\&-\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \overline{\Delta }_{p},\\ \delta _{qi_{2}\ldots i_{m}}=0 \end{array}} a_{qi_{2}\ldots i_{m}}x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}.\nonumber \end{aligned}$$

Taking absolute values and using the ordering of the entries of x gives

$$\begin{aligned} (a_{pp\ldots p}-\tau (\mathcal {A}))x_{p}^{m-1}= & {} \sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \Delta _{p},\\ \delta _{pi_{2}\ldots i_{m}}=0 \end{array}} |a_{pi_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}+\sum \limits _{(i_{2},\ldots ,i_{m})\in \overline{\Delta }_{p}} |a_{pi_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\nonumber \\\ge & {} \sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \Delta _{p},\\ \delta _{pi_{2}\ldots i_{m}}=0 \end{array}} |a_{pi_{2}\ldots i_{m}}|x_{p}^{m-1}+\sum \limits _{(i_{2},\ldots ,i_{m})\in \overline{\Delta }_{p}} |a_{pi_{2}\ldots i_{m}}|x_{q}^{m-1}\nonumber \\= & {} r_{p}^{\Delta _{p}}(\mathcal {A})x_{p}^{m-1}+r_{p}^{\overline{\Delta }_{p}}(\mathcal {A})x_{q}^{m-1} \end{aligned}$$
(2.8)

and

$$\begin{aligned} (a_{qq\ldots q}-\tau (\mathcal {A}))x_{q}^{m-1}= & {} \sum \limits _{(i_{2},\ldots ,i_{m})\in \Delta _{p}} |a_{qi_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}} +\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \overline{\Delta }_{p},\\ \delta _{qi_{2}\ldots i_{m}}=0 \end{array}} |a_{qi_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\nonumber \\\ge & {} \sum \limits _{(i_{2},\ldots ,i_{m})\in \Delta _{p}} |a_{qi_{2}\ldots i_{m}}|x_{p}^{m-1}+\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \overline{\Delta }_{p},\\ \delta _{qi_{2}\ldots i_{m}}=0 \end{array}} |a_{qi_{2}\ldots i_{m}}|x_{q}^{m-1}\nonumber \\= & {} r_{q}^{\Delta _{p}}(\mathcal {A})x_{p}^{m-1}+r_{q}^{\overline{\Delta }_{p}}(\mathcal {A})x_{q}^{m-1}. \end{aligned}$$
(2.9)

Combining Inequalities (2.8) and (2.9) and using the same method as in the proof of step (i), we deduce the following result:

$$\begin{aligned} \tau (\mathcal {A})\le & {} \min \left\{ \frac{1}{2}\left( a_{pp\ldots p}+a_{qq\ldots q} -r_{p}^{\Delta _{p}}(\mathcal {A})-r_{q}^{\overline{\Delta }_{p}}(\mathcal {A}) -\Omega _{p,q}^{\frac{1}{2}}\right) ,R_{p}(\mathcal {A})\right\} \nonumber \\\le & {} \max \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\min \left\{ \frac{1}{2}\left( a_{ii\ldots i} +a_{j j\ldots j}-r_{i}^{\Delta _{i}}(\mathcal {A})-r_{j}^{\overline{\Delta }_{i}}(\mathcal {A}) -\Omega _{i,j}^{\frac{1}{2}}\right) ,R_{i}(\mathcal {A})\right\} . \end{aligned}$$

This completes our proof of Theorem 2.1. \(\square \)
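The two-sided bound of Theorem 2.1 can be evaluated directly. The following Python sketch does so for a made-up order-3 strictly diagonally dominant \(\mathcal {Z}\)-tensor with positive diagonal (hence a nonsingular \(\mathcal {M}\)-tensor, and weakly irreducible here); all function names are ours.

```python
import numpy as np
from itertools import product

n = 2
A = np.array([[[ 3.0, -1.0], [-1.0, 0.0]],
              [[ 0.0, -1.0], [-1.0, 4.0]]])

def split(row, i):
    """(sum over Delta_i, sum over Delta_i-bar) of |entries| of the given row,
    always excluding that row's own diagonal entry."""
    s_in = s_out = 0.0
    for i2, i3 in product(range(n), repeat=2):
        if (i2, i3) == (row, row):
            continue
        if i in (i2, i3):
            s_in += abs(A[row, i2, i3])
        else:
            s_out += abs(A[row, i2, i3])
    return s_in, s_out

def K(i, j):
    """Smaller root (1/2)(a_ii + a_jj - r_i^{Delta_i} - r_j^{Delta_i-bar} - Omega_ij^{1/2})."""
    ri_in, ri_out = split(i, i)
    rj_in, rj_out = split(j, i)
    a_i, a_j = A[i, i, i], A[j, j, j]
    omega = (a_i - a_j - ri_in + rj_out) ** 2 + 4 * ri_out * rj_in
    return 0.5 * (a_i + a_j - ri_in - rj_out - np.sqrt(omega))

R = A.reshape(n, -1).sum(axis=1)                       # row sums R_i(A)
pairs = [(i, j) for i in range(n) for j in range(n) if j != i]
tilde_min = min(A[i, i, i] - split(i, i)[0] for i in range(n))
bar_min = min(max(K(i, j), R[i]) for (i, j) in pairs)
lower = min(tilde_min, bar_min)                        # Lambda_min
upper = max(min(K(i, j), R[i]) for (i, j) in pairs)    # Lambda_bar_max
print(lower, upper)   # Lambda_min <= tau(A) <= Lambda_bar_max
```

For this example the bounds evaluate to \([1, 2]\), consistent with Lemma 2.4 since \(\min _{i}a_{ii\ldots i}=3\).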

We now give the following comparison theorem for Theorem 2.1 and Lemma 1.2 in corrected form. First, we prove that the lower bound of Theorem 2.1 is better than that of Lemma 1.2 in corrected form.

Theorem 2.2

Let \(\mathcal {A}=(a_{i_{1}i_{2}\ldots i_{m}})\in \mathbb {R}^{[m,n]}\) be a weakly irreducible nonsingular \(\mathcal {M}\)-tensor with \(n\ge 2\). Then

$$\begin{aligned} \Lambda _{\min }\ge \min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A}). \end{aligned}$$

Proof

From the proof of Lemma 1.3, we can see that \(\tau (\mathcal {A})\ge \min \nolimits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A})\) is obtained by solving the following quadratic inequality:

$$\begin{aligned} (a_{ii\ldots i}-\tau (\mathcal {A})-r_{i}^{j}(\mathcal {A}))(a_{jj\ldots j}-\tau (\mathcal {A}))\le -a_{ij\ldots j}r_{j}(\mathcal {A}). \end{aligned}$$

Let \(g^{ij}(\tau (\mathcal {A}))=(a_{ii\ldots i}-\tau (\mathcal {A})-r_{i}^{j}(\mathcal {A}))(a_{jj\ldots j}-\tau (\mathcal {A}))-(-a_{ij\ldots j})r_{j}(\mathcal {A})\); the smaller root of \(g^{ij}(\tau (\mathcal {A}))=0\) is \(L_{ij}(\mathcal {A})\). If \(\Lambda _{\min }=\widetilde{\Lambda }_{\min } =\min \nolimits _{i\in N}\{a_{ii\ldots i}-r_{i}^{\Delta _{i}}(\mathcal {A})\}\), then there exists \(i_{0}\in N\) such that

$$\begin{aligned} \Lambda _{\min }=\widetilde{\Lambda }_{\min }=a_{i_{0}\ldots i_{0}}-r_{i_{0}}^{\Delta _{i_{0}}}(\mathcal {A}).\nonumber \end{aligned}$$

From Theorem 2.1, we get

$$\begin{aligned} \tau (\mathcal {A})\ge \Lambda _{\min }=a_{i_{0}\ldots i_{0}}-r_{i_{0}}^{\Delta _{i_{0}}}(\mathcal {A}),\nonumber \end{aligned}$$

which together with Lemma 2.4 results in

$$\begin{aligned} g^{i_{0}j}(\tau (\mathcal {A}))=(a_{i_{0}\ldots i_{0}}-\tau (\mathcal {A})-r_{i_{0}}^{j}(\mathcal {A}))(a_{jj\ldots j}-\tau (\mathcal {A})) -(-a_{i_{0}j\ldots j})r_{j}(\mathcal {A})\le 0.\nonumber \end{aligned}$$

By Lemma 2.6, we derive that

$$\begin{aligned} \Lambda _{\min }=a_{i_{0}\ldots i_{0}}-r_{i_{0}}^{\Delta _{i_{0}}}(\mathcal {A})\ge L_{i_{0}j}(\mathcal {A})\ge \min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A}). \end{aligned}$$
(2.10)

If \(\Lambda _{\min }=\overline{\Lambda }_{\min }=\min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}\max \{\frac{1}{2} (a_{ii\ldots i}+a_{j j\ldots j}-r_{i}^{\Delta _{i}}(\mathcal {A})-r_{j}^{\overline{\Delta }_{i}}(\mathcal {A}) -\Omega _{i,j}^{\frac{1}{2}}),R_{i}(\mathcal {A})\}\), then there exist \(i_{1},j_{1}\in N\) such that

$$\begin{aligned} \tau (\mathcal {A})\ge \Lambda _{\min }=\max \left\{ \frac{1}{2} \left( a_{i_{1}\ldots i_{1}}+a_{j_{1}\ldots j_{1}}-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A}) -\Omega _{i_{1},j_{1}}^{\frac{1}{2}}\right) ,R_{i_{1}}(\mathcal {A})\right\} ,\nonumber \\ \end{aligned}$$
(2.11)

which means that

$$\begin{aligned} \tau (\mathcal {A})\ge R_{i_{1}}(\mathcal {A}) \end{aligned}$$
(2.12)

and

$$\begin{aligned} \tau (\mathcal {A})\ge \frac{1}{2} \left( a_{i_{1}\ldots i_{1}}+a_{j_{1}\ldots j_{1}}-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A}) -\Omega _{i_{1},j_{1}}^{\frac{1}{2}}\right) . \end{aligned}$$
(2.13)

By the proof of Theorem 2.1, we see that \(K_{i_{1}j_{1}}(\mathcal {A}):=\frac{1}{2} (a_{i_{1}\ldots i_{1}}+a_{j_{1}\ldots j_{1}}-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A}) -\Omega _{i_{1},j_{1}}^{\frac{1}{2}})\) is the smaller root of the following equation:

$$\begin{aligned} (a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A}))(a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A}))- r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A}){r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}=0,\nonumber \end{aligned}$$

so, we let

$$\begin{aligned} f^{i_{1}j_{1}}(\tau (\mathcal {A})):= & {} \left( a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})\right) \left( a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})\right) \\&- r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A}).\nonumber \end{aligned}$$

By Lemma 2.6, if \(g^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\) under the condition \(f^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\), then \(K_{i_{1}j_{1}}(\mathcal {A})\ge L_{i_{1}j_{1}}(\mathcal {A})\ge \min \nolimits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A})\). Combining this with (2.11), we can derive that \(\Lambda _{\min }\ge \min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A})\). Therefore, it remains only to prove that \(g^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\) under the condition \(f^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\).

When \(a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})\le 0\), noting that \(r_{i_{1}}^{j_{1}}(\mathcal {A})\ge r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})\), it is not difficult to verify that

$$\begin{aligned} g^{i_{1}j_{1}}(\tau (\mathcal {A}))=(a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A}))(a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})) -(-a_{i_{1}j_{1}\ldots j_{1}})r_{j_{1}}(\mathcal {A})\le 0.\nonumber \end{aligned}$$

Otherwise, we have \(a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})> 0\). From the condition \(f^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\), we have

$$\begin{aligned} (a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A}))(a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A}))\le r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A}).\nonumber \\ \end{aligned}$$
(2.14)

If \(r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})=0\), then

$$\begin{aligned} a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})\le 0\le r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A}), \end{aligned}$$

which leads to

$$\begin{aligned} a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})\le r_{j_{1}}(\mathcal {A}). \end{aligned}$$
(2.15)

In addition, by (2.12) we have

$$\begin{aligned} \tau (\mathcal {A})\ge a_{i_{1}\ldots i_{1}}-(r_{i_{1}}^{j_{1}}(\mathcal {A})+(-a_{i_{1}j_{1}\ldots j_{1}})), \end{aligned}$$

i.e.,

$$\begin{aligned} a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A})\le -a_{i_{1}j_{1}\ldots j_{1}}. \end{aligned}$$
(2.16)

Since \(\tau (\mathcal {A})<a_{j_{1}\ldots j_{1}}\), multiplying Inequality (2.15) by Inequality (2.16) gives

$$\begin{aligned} ({a_{i_{1}\ldots i_{1}}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A}))(a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})) \le (-a_{i_{1}j_{1}\ldots j_{1}})r_{j_{1}}(\mathcal {A}), \end{aligned}$$

which implies that \(g^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\).

If \(r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})>0\), then by dividing Inequality (2.14) by \(r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})\), we get

$$\begin{aligned} \frac{a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}{r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}\frac{a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}{r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}\le 1. \end{aligned}$$
(2.17)

By (2.12), we have

$$\begin{aligned} \frac{a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}{r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}\le 1. \end{aligned}$$
(2.18)

Then it follows from Inequality (2.17) that

$$\begin{aligned} \frac{a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}{r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}\ge 1,\nonumber \end{aligned}$$

or

$$\begin{aligned} \frac{a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}{r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}\le 1.\nonumber \end{aligned}$$

When \(-a_{i_{1}j_{1}\ldots j_{1}}>0\), from part (I) of Lemma 2.7 and Inequality (2.18), we have

$$\begin{aligned} \frac{a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A})}{-a_{i_{1}j_{1}\ldots j_{1}}} \le \frac{a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}{r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}. \end{aligned}$$
(2.19)

Furthermore, if \(\frac{a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}{r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}\ge 1\), it follows from part (II) of Lemma 2.7 that

$$\begin{aligned} \frac{a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})}{r_{j_{1}}(\mathcal {A})}\le \frac{a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}{r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}. \end{aligned}$$
(2.20)

Multiplying Inequality (2.19) by Inequality (2.20) and combining with (2.17) gives

$$\begin{aligned}&\frac{a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A})}{-a_{i_{1}j_{1}\ldots j_{1}}}\frac{a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})}{r_{j_{1}}(\mathcal {A})}\\&\quad \le \frac{a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}{r_{i_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}\frac{a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}{r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}\le 1, \end{aligned}$$

equivalently,

$$\begin{aligned} (a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A}))(a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})) \le (-a_{i_{1}j_{1}\ldots j_{1}})r_{j_{1}}(\mathcal {A}),\nonumber \end{aligned}$$

that is, \(g^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\). If instead \(\frac{a_{j_{1}\ldots j_{1}} -\tau (\mathcal {A})-r_{j_{1}}^{\overline{\Delta }_{i_{1}}}(\mathcal {A})}{r_{j_{1}}^{\Delta _{i_{1}}}(\mathcal {A})}\le 1\), then

$$\begin{aligned} a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})\le r_{j_{1}}(\mathcal {A}). \nonumber \end{aligned}$$

In addition, Inequalities (2.18) and (2.19) imply

$$\begin{aligned} a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A})\le -a_{i_{1}j_{1}\ldots j_{1}}.\nonumber \end{aligned}$$

The above two inequalities lead to

$$\begin{aligned} (a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A}))(a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})) \le (-a_{i_{1}j_{1}\ldots j_{1}})r_{j_{1}}(\mathcal {A}),\nonumber \end{aligned}$$

i.e., \(g^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\).

When \(a_{i_{1}j_{1}\ldots j_{1}}=0\), from (2.18), we easily get

$$\begin{aligned} a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A})\le 0=-a_{i_{1}j_{1}\ldots j_{1}}.\nonumber \end{aligned}$$

Hence,

$$\begin{aligned} (a_{i_{1}\ldots i_{1}}-\tau (\mathcal {A})-r_{i_{1}}^{j_{1}}(\mathcal {A}))(a_{j_{1}\ldots j_{1}}-\tau (\mathcal {A})) \le 0=(-a_{i_{1}j_{1}\ldots j_{1}})r_{j_{1}}(\mathcal {A}),\nonumber \end{aligned}$$

i.e., \(g^{i_{1}j_{1}}(\tau (\mathcal {A}))\le 0\).\(\square \)

By using the technique in the proof of Theorem 2.2, we can show that \(\overline{\Lambda }_{\max }\le \max \nolimits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A})\). Combining this with Theorem 5 in [13], we see that the bounds in Theorem 2.1 are sharper than those in Lemmas 1.1 and 1.2 in corrected form.

We now give an example to illustrate the sharpness of the bounds established in Theorem 2.1.

Example 2.1

Let \(\mathcal {A}=(a_{ijk})\in \mathbb {R}^{[3,3]}\) be a weakly irreducible \(\mathcal {M}\)-tensor with entries defined as follows:

$$\begin{aligned} \mathcal {A}=[A(1,:,:),A(2,:,:),A(3,:,:)]\in \mathbb {R}^{[3,3]},\nonumber \end{aligned}$$

where

$$\begin{aligned} A(1,:,:)= & {} \left( \begin{array}{c@{\quad }c@{\quad }c} 15 &{} 0 &{} 0 \\ 0&{} -0.5 &{} -0.2 \\ 0 &{} -1 &{} -2 \\ \end{array} \right) , A(2,:,:)=\left( \begin{array}{c@{\quad }c@{\quad }c} -1 &{} -5.8 &{} -2 \\ 0 &{} 55 &{} 0 \\ 0 &{} 0 &{} -0.5 \\ \end{array} \right) ,\\ A(3,:,:)= & {} \left( \begin{array}{c@{\quad }c@{\quad }c} -1 &{} -2 &{} 0 \\ 0 &{} -1 &{} -3 \\ 0 &{} -3 &{} 15 \\ \end{array} \right) . \end{aligned}$$

We compare the results derived in Theorem 2.1 with those of Lemmas 1.1 and 1.2 in corrected form and Lemma 1.4. By Lemma 1.1, we have

$$\begin{aligned} 5\le \tau (\mathcal {A})\le 45.7.\nonumber \end{aligned}$$

By Lemma 1.2 in the corrected form, we get

$$\begin{aligned} 5.4256\le \tau (\mathcal {A})\le 14.8406.\nonumber \end{aligned}$$

By Lemma 1.4, we obtain

$$\begin{aligned} 5.8038\le \tau (\mathcal {A})\le 14.7458.\nonumber \end{aligned}$$

By Theorem 2.1, we have

$$\begin{aligned} 8.4610\le \tau (\mathcal {A})\le 10.4580.\nonumber \end{aligned}$$

This example shows that the upper and lower bounds in Theorem 2.1 are sharper than those in Lemmas 1.1, 1.2 and 1.4.
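For readers who wish to check such bounds numerically, \(\tau (\mathcal {A})\) can be estimated by writing \(\tau (\mathcal {A})=s-\rho (s\mathcal {I}-\mathcal {A})\) with \(s\ge \max _{i}a_{i\ldots i}\), so that \(s\mathcal {I}-\mathcal {A}\) is nonnegative, and approximating the spectral radius by the Ng–Qi–Zhou (NQZ) power-type iteration. The following Python sketch (NumPy assumed; the function names are ours, and the iteration is written for third-order tensors only) illustrates this for the tensor of Example 2.1:

```python
import numpy as np

def tensor_apply(T, x):
    # (T x^{m-1})_i for an order-3 tensor T: sum_{j,k} T[i,j,k] * x[j] * x[k]
    return np.einsum('ijk,j,k->i', T, x, x)

def min_h_eigenvalue(A, tol=1e-10, max_iter=100000):
    """Estimate tau(A) for a weakly irreducible order-3 M-tensor A.

    Writes tau(A) = s - rho(B), where B = s*I - A is nonnegative, and
    estimates rho(B) by the NQZ power-type iteration applied to B + I
    (the unit-tensor shift guards against cycling).
    """
    n, m = A.shape[0], A.ndim
    s = max(A[i, i, i] for i in range(n))
    B = -A.copy()
    for i in range(n):
        B[i, i, i] += s                          # B = s*I - A >= 0
    x = np.ones(n)
    lam = np.ones(n)
    for _ in range(max_iter):
        y = tensor_apply(B, x) + x ** (m - 1)    # (B + I) x^{m-1}
        y = y ** (1.0 / (m - 1))
        y /= y.max()
        lam = (tensor_apply(B, y) + y ** (m - 1)) / y ** (m - 1)
        if lam.max() - lam.min() < tol:
            break
        x = y
    # rho(B + I) = rho(B) + 1 for nonnegative tensors
    return s - (0.5 * (lam.max() + lam.min()) - 1.0)

# Tensor of Example 2.1 (zero-based indices)
A = np.zeros((3, 3, 3))
A[0] = [[15, 0, 0], [0, -0.5, -0.2], [0, -1, -2]]
A[1] = [[-1, -5.8, -2], [0, 55, 0], [0, 0, -0.5]]
A[2] = [[-1, -2, 0], [0, -1, -3], [0, -3, 15]]
tau = min_h_eigenvalue(A)
```

The computed value should lie inside each of the intervals listed above, which gives an easy sanity check on the successive bounds.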

2.2 The New S-type Estimates of Minimum H-eigenvalue

In this subsection, new S-type estimates of the minimum H-eigenvalue for weakly irreducible nonsingular \(\mathcal {M}\)-tensors are derived, which are sharper than the ones in Lemmas 1.1 and 1.2 in corrected form.

Given a nonempty proper subset S of N, we denote

$$\begin{aligned}&\Delta ^{N}=\{(i_{2},i_{3},\ldots ,i_{m}): ~\mathrm {each}~i_{j}\in N~\mathrm {for}~ j=2,3,\ldots , m\},\nonumber \\&\Delta ^{S}=\{(i_{2},i_{3},\ldots ,i_{m}): ~\mathrm {each}~i_{j}\in S~\mathrm {for}~ j=2,3,\ldots , m\}, \quad \overline{\Delta ^{S}}= \Delta ^{N}\backslash \Delta ^{S}.\nonumber \end{aligned}$$

This implies that for \(i\in S\), we have

$$\begin{aligned} r_{i}(\mathcal {A})=r_{i}^{\Delta ^{S}}(\mathcal {A})+r_{i}^{\overline{\Delta ^{S}}}(\mathcal {A}),\nonumber \end{aligned}$$

where

$$\begin{aligned} r_{i}^{\Delta ^{S}}(\mathcal {A})=\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \Delta ^{S},\\ \delta _{ii_{2}\ldots i_{m}}=0 \end{array}}|a_{ii_{2}\ldots i_{m}}|, \quad r_{i}^{\overline{\Delta ^{S}}}(\mathcal {A})=\sum \limits _{(i_{2},\ldots ,i_{m})\in \overline{\Delta ^{S}}}|a_{ii_{2}\ldots i_{m}}|.\nonumber \end{aligned}$$
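These index-set splittings are straightforward to enumerate in code. As a small illustrative sketch (Python with NumPy assumed; the function name is ours, with zero-based indices), the split \(r_{i}(\mathcal {A})=r_{i}^{\Delta ^{S}}(\mathcal {A})+r_{i}^{\overline{\Delta ^{S}}}(\mathcal {A})\) can be computed as:

```python
import itertools
import numpy as np

def r_split(A, i, S):
    """Return (r_i^{Delta^S}(A), r_i^{bar Delta^S}(A)) for an order-m,
    n-dimensional tensor A and a subset S of {0, ..., n-1}."""
    n, m = A.shape[0], A.ndim
    r_S = r_barS = 0.0
    for idx in itertools.product(range(n), repeat=m - 1):
        if all(j == i for j in idx):
            continue                      # diagonal entry, delta = 1: excluded
        val = abs(A[(i,) + idx])
        if all(j in S for j in idx):      # (i_2, ..., i_m) in Delta^S
            r_S += val
        else:                             # (i_2, ..., i_m) in the complement
            r_barS += val
    return r_S, r_barS
```

The full radius \(r_{i}(\mathcal {A})\) is then recovered as the sum of the two returned values.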

Theorem 2.3

Let \(\mathcal {A}=(a_{i_{1}i_{2}\ldots i_{m}})\in \mathbb {R}^{[m,n]}\) be a weakly irreducible nonsingular \(\mathcal {M}\)-tensor with \(n\ge 2\), and S be a nonempty proper subset of N. Then

$$\begin{aligned} \Upsilon _{\min }(\mathcal {A})\le \tau (\mathcal {A})\le \Upsilon _{\max }(\mathcal {A}),\nonumber \end{aligned}$$

where

$$\begin{aligned}&\Upsilon _{\min }(\mathcal {A})=\min \{\overline{\Upsilon }^{S}(\mathcal {A}),\overline{\Upsilon }^{\overline{S}}(\mathcal {A})\},~ \Upsilon _{\max }(\mathcal {A})=\max \{\widetilde{\Upsilon }^{S}(\mathcal {A}),\widetilde{\Upsilon }^{\overline{S}}(\mathcal {A})\},\nonumber \\&\overline{\Upsilon }^{S}(\mathcal {A})=\min \limits _{\begin{array}{c} i\in S,\\ j\in \overline{S} \end{array}}\max \{\frac{1}{2} (a_{jj\ldots j}+a_{i i\ldots i}-r_{j}^{\overline{\Delta ^{S}}}(\mathcal {A}) -(\Psi ^{S}_{i,j})^{\frac{1}{2}}),R_{j}(\mathcal {A})\},\nonumber \\&\widetilde{\Upsilon }^{S}(\mathcal {A})=\max \limits _{\begin{array}{c} i\in S,\\ j\in \overline{S} \end{array}}\min \{\frac{1}{2} (a_{ii\ldots i}+a_{j j\ldots j}-r_{j}^{\overline{\Delta ^{S}}}(\mathcal {A}) -(\Psi ^{S}_{i,j})^{\frac{1}{2}}),R_{j}(\mathcal {A})\},\nonumber \\&\Psi ^{S}_{i,j}=(a_{jj\ldots j}-a_{ii\ldots i}-r_{j}^{\overline{\Delta ^{S}}}(\mathcal {A}))^{2} +4r_{j}^{\Delta ^{S}}(\mathcal {A})r_{i}(\mathcal {A}).\nonumber \end{aligned}$$

Proof

Since \(\mathcal {A}\) is a weakly irreducible nonsingular \(\mathcal {M}\)-tensor, by Lemma 2.3, there exists \(x=(x_{1},x_{2},\ldots ,x_{n})^\mathrm{T}>0\) such that

$$\begin{aligned} \mathcal {A}x^{m-1}=\tau (\mathcal {A})x^{[m-1]}. \end{aligned}$$
(2.21)

(i) Let \(x_{l}=\max \nolimits _{i\in S}x_{i}\) and \(x_{t}=\max \nolimits _{i\in \overline{S}}x_{i}\). We distinguish two cases.

Case I: \(x_{t}\ge x_{l}\), that is, \(x_{t}=\max \nolimits _{i\in N}x_{i}\). From (2.21), we have

$$\begin{aligned} (\tau (\mathcal {A})-a_{tt\ldots t})x_{t}^{m-1}= & {} \sum \limits _{(i_{2},\ldots ,i_{m})\in \Delta ^{S}}a_{ti_{2}\ldots i_{m}}x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\\&+ \sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \overline{\Delta ^{S}},\\ \delta _{ti_{2}\ldots i_{m}}=0 \end{array}}a_{ti_{2}\ldots i_{m}}x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}.\nonumber \end{aligned}$$

Taking moduli (recall that the off-diagonal entries of \(\mathcal {A}\) are nonpositive), together with \(\tau (\mathcal {A})< a_{tt\ldots t}\), gives

$$\begin{aligned} (a_{tt\ldots t}-\tau (\mathcal {A}))x_{t}^{m-1}= & {} \sum \limits _{(i_{2},\ldots ,i_{m})\in \Delta ^{S}}|a_{ti_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\\&+ \sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \overline{\Delta ^{S}},\\ \delta _{ti_{2}\ldots i_{m}}=0 \end{array}}|a_{ti_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\nonumber \\\le & {} \sum \limits _{(i_{2},\ldots ,i_{m})\in \Delta ^{S}} |a_{ti_{2}\ldots i_{m}}|x_{l}^{m-1}+\sum \limits _{\begin{array}{c} (i_{2},\ldots ,i_{m})\in \overline{\Delta ^{S}},\\ \delta _{ti_{2}\ldots i_{m}}=0 \end{array}} |a_{ti_{2}\ldots i_{m}}|x_{t}^{m-1}\nonumber \\= & {} r_{t}^{\Delta ^{S}}(\mathcal {A})x_{l}^{m-1}+r_{t}^{\overline{\Delta ^{S}}}(\mathcal {A})x_{t}^{m-1};\nonumber \end{aligned}$$

hence,

$$\begin{aligned} (a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\overline{\Delta ^{S}}}(\mathcal {A}))x_{t}^{m-1} \le r_{t}^{\Delta ^{S}}(\mathcal {A})x_{l}^{m-1}. \end{aligned}$$
(2.22)

On the other hand, by (2.21), we also get that

$$\begin{aligned} (a_{ll\ldots l}-\tau (\mathcal {A}))x_{l}^{m-1}= & {} \sum \limits _{\begin{array}{c} i_{2},\ldots ,i_{m}\in N,\\ \delta _{li_{2}\ldots i_{m}}=0 \end{array}}|a_{li_{2}\ldots i_{m}}|x_{i_{2}}x_{i_{3}}\ldots {x_{i_{m}}}\nonumber \\\le & {} r_{l}(\mathcal {A})x_{t}^{m-1}. \end{aligned}$$
(2.23)

Multiplying (2.22) by (2.23) gives

$$\begin{aligned} (a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\overline{\Delta ^{S}}}(\mathcal {A}))(a_{ll\ldots l}-\tau (\mathcal {A}))\le r_{t}^{\Delta ^{S}}(\mathcal {A})r_{l}(\mathcal {A}).\nonumber \end{aligned}$$

Solving the above quadratic inequality yields

$$\begin{aligned} \tau (\mathcal {A})\ge \frac{1}{2}(a_{tt\ldots t}+a_{ll\ldots l} -r_{t}^{\overline{\Delta ^{S}}}(\mathcal {A})-({\Psi ^{S}_{l,t}})^{\frac{1}{2}}), \end{aligned}$$
(2.24)

with

$$\begin{aligned} {\Psi ^{S}_{l,t}}=(a_{tt\ldots t}-a_{ll\ldots l}-r_{t}^{\overline{\Delta ^{S}}}(\mathcal {A}))^{2} +4r_{t}^{\Delta ^{S}}(\mathcal {A})r_{l}(\mathcal {A}).\nonumber \end{aligned}$$

Furthermore, since \(x_{l}\le x_{t}\), Inequality (2.22) yields

$$\begin{aligned} a_{tt\ldots t}-\tau (\mathcal {A})-r_{t}^{\overline{\Delta ^{S}}}(\mathcal {A})\le r_{t}^{\Delta ^{S}}(\mathcal {A}),\nonumber \end{aligned}$$

i.e.,

$$\begin{aligned} \tau (\mathcal {A})\ge a_{tt\ldots t}-r_{t}^{\Delta ^{S}}(\mathcal {A}) -r_{t}^{\overline{\Delta ^{S}}}(\mathcal {A})=a_{tt\ldots t}-r_{t}(\mathcal {A})=R_{t}(\mathcal {A}). \end{aligned}$$
(2.25)

It follows from Inequalities (2.24) and (2.25) that

$$\begin{aligned} \tau (\mathcal {A})\ge & {} \max \{\frac{1}{2}(a_{tt\ldots t}+a_{ll\ldots l} -r_{t}^{\overline{\Delta ^{S}}}(\mathcal {A}) -(\Psi ^{S}_{l,t})^{\frac{1}{2}}),R_{t}(\mathcal {A})\}\nonumber \\\ge & {} \min \limits _{\begin{array}{c} i\in S,\\ j\in \overline{S} \end{array}}\max \{\frac{1}{2}(a_{ii\ldots i} +a_{j j\ldots j}-r_{j}^{\overline{\Delta ^{S}}}(\mathcal {A}) -(\Psi ^{S}_{i,j})^{\frac{1}{2}}),R_{j}(\mathcal {A})\}. \end{aligned}$$
(2.26)

Case II: \(x_{l}\ge x_{t}\), that is, \(x_{l}=\max \limits _{i\in N}x_{i}\). In a similar manner to the proof of Case I, we have

$$\begin{aligned} (a_{ll\ldots l}-\tau (\mathcal {A})-r_{l}^{\overline{\Delta ^{\overline{S}}}}(\mathcal {A}))x_{l}^{m-1} \le r_{l}^{\Delta ^{\overline{S}}}(\mathcal {A})x_{t}^{m-1}\nonumber \end{aligned}$$

and

$$\begin{aligned} (a_{tt\ldots t}-\tau (\mathcal {A}))x_{t}^{m-1}\le r_{t}(\mathcal {A})x_{l}^{m-1}.\nonumber \end{aligned}$$

Note that \(x_{t}x_{l}>0\). Thus,

$$\begin{aligned} (a_{ll\ldots l}-\tau (\mathcal {A})-r_{l}^{\overline{\Delta ^{\overline{S}}}}(\mathcal {A}))(a_{tt\ldots t}-\tau (\mathcal {A})) \le r_{l}^{\Delta ^{\overline{S}}}(\mathcal {A})r_{t}(\mathcal {A})\nonumber \end{aligned}$$

and

$$\begin{aligned} \tau (\mathcal {A})\ge a_{ll\ldots l}-r_{l}^{\Delta ^{\overline{S}}}(\mathcal {A}) -r_{l}^{\overline{\Delta ^{\overline{S}}}}(\mathcal {A}){=R_{l}(\mathcal {A})}.\nonumber \end{aligned}$$

Then, solving for \(\tau (\mathcal {A})\) as before gives

$$\begin{aligned} \tau (\mathcal {A})\ge & {} \max \left\{ \frac{1}{2}\left( a_{tt\ldots t}+a_{ll\ldots l} -r_{l}^{\overline{\Delta ^{\overline{S}}}}(\mathcal {A}) -(\Psi ^{\overline{S}}_{t,l})^{\frac{1}{2}}\right) ,R_{l}(\mathcal {A})\right\} \nonumber \\\ge & {} \min \limits _{\begin{array}{c} i\in \overline{S},\\ j\in S \end{array}}\max \left\{ \frac{1}{2}\left( a_{ii\ldots i} +a_{j j\ldots j}-r_{j}^{\overline{\Delta ^{\overline{S}}}}(\mathcal {A}) -(\Psi ^{\overline{S}}_{i,j})^{\frac{1}{2}}\right) ,R_{j}(\mathcal {A})\right\} .\quad \end{aligned}$$
(2.27)

Combining (2.26) and (2.27) yields the first inequality of Theorem 2.3.

(ii) Let \(x_{p}=\min \nolimits _{i\in S}x_{i}\) and \(x_{q}=\min \nolimits _{i\in \overline{S}}x_{i}\). Considering the two cases \(x_{p}\ge x_{q}\) and \(x_{q}\ge x_{p}\) and arguing as in (i), we can prove the second inequality of Theorem 2.3.\(\square \)

Next, we show that the bounds of Theorem 2.3 are sharper than those of Lemma 1.2 in corrected form. We first prove that the lower bound of Theorem 2.3 is greater than or equal to that of Lemma 1.2 in corrected form.

Theorem 2.4

Let \(\mathcal {A}=(a_{i_{1}i_{2}\ldots i_{m}})\in \mathbb {R}^{[m,n]}\) be a weakly irreducible nonsingular \(\mathcal {M}\)-tensor with \(n\ge 2\). Then

$$\begin{aligned} \Upsilon _{\min }(\mathcal {A})\ge \min \limits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A}).\nonumber \end{aligned}$$

Proof

By Theorem 2.3, we have \(\Upsilon _{\min }(\mathcal {A})=\overline{\Upsilon }^{S}(\mathcal {A})\) or \(\Upsilon _{\min }(\mathcal {A}) =\overline{\Upsilon }^{\overline{S}}(\mathcal {A})\). Without loss of generality, we suppose that \(\Upsilon _{\min }(\mathcal {A})=\overline{\Upsilon }^{S}(\mathcal {A})\) (we can prove it similarly if \(\Upsilon _{\min }(\mathcal {A})=\overline{\Upsilon }^{\overline{S}}(\mathcal {A})\)). Then there are \(i_{2}\in S,j_{2}\in \overline{S}\) such that

$$\begin{aligned} \Upsilon _{\min }(\mathcal {A})=\overline{\Upsilon }^{S}(\mathcal {A})=\max \left\{ \frac{1}{2} \left( a_{j_{2}\ldots j_{2}}+a_{i_{2}\ldots i_{2}}-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A}) -(\Psi ^{S}_{i_{2},j_{2}})^{\frac{1}{2}}\right) ,R_{j_{2}}(\mathcal {A})\right\} ,\nonumber \end{aligned}$$

which leads to

$$\begin{aligned} \tau (\mathcal {A})\ge R_{j_{2}}(\mathcal {A}) \end{aligned}$$
(2.28)

and

$$\begin{aligned} \tau (\mathcal {A})\ge \frac{1}{2} \left( a_{j_{2}\ldots j_{2}}+a_{i_{2}\ldots i_{2}}-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A}) -(\Psi ^{S}_{i_{2},j_{2}})^{\frac{1}{2}}\right) . \end{aligned}$$
(2.29)

From the proof of Theorem 2.3, Inequality (2.29) is derived by solving the following quadratic inequality:

$$\begin{aligned} \left( a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})\right) \left( a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A})\right) \le r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})r_{i_{2}}(\mathcal {A}).\nonumber \end{aligned}$$

Let \(h^{i_{2}j_{2}}(\tau (\mathcal {A}))=(a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A}))(a_{i_{2}\ldots i_{2}} -\tau (\mathcal {A}))-r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})r_{i_{2}}(\mathcal {A})\), and note that \(W_{i_{2}j_{2}}(\mathcal {A}):=\frac{1}{2} (a_{j_{2}\ldots j_{2}}+a_{i_{2}\ldots i_{2}}-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A}) -(\Psi ^{S}_{i_{2},j_{2}})^{\frac{1}{2}})\) is the smaller root of the equation \(h^{i_{2}j_{2}}(\tau (\mathcal {A}))=0\). By Lemma 2.6, if \(g^{j_{2}i_{2}}(\tau (\mathcal {A}))\le 0\) under the condition \(h^{i_{2}j_{2}}(\tau (\mathcal {A}))\le 0\), then \(W_{i_{2}j_{2}}(\mathcal {A}) \ge L_{j_{2}i_{2}}(\mathcal {A})\ge \min \nolimits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A})\), that is, \(\Upsilon _{\min }(\mathcal {A})\ge \min \nolimits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A})\). It therefore remains to prove that \(g^{j_{2}i_{2}}(\tau (\mathcal {A}))\le 0\) under the condition \(h^{i_{2}j_{2}}(\tau (\mathcal {A}))\le 0\). From the condition \(h^{i_{2}j_{2}}(\tau (\mathcal {A}))\le 0\), we have

$$\begin{aligned} \left( a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})\right) \left( a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A})\right) \le r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})r_{i_{2}}(\mathcal {A}). \end{aligned}$$
(2.30)

If \(r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})r_{i_{2}}(\mathcal {A})=0\), then \(r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})=0\) or \(r_{i_{2}}(\mathcal {A})=0\). When \(r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})=0\), we get \(-a_{j_{2}i_{2}\ldots i_{2}}=0, r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})=r_{j_{2}}^{i_{2}}(\mathcal {A})\). Therefore,

$$\begin{aligned} \left( a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{i_{2}}(\mathcal {A})\right) \left( a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A})\right)= & {} \left( a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A}) -r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})\right) \left( a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A})\right) \nonumber \\\le & {} r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})r_{i_{2}}(\mathcal {A})\nonumber \\= & {} 0\nonumber \\= & {} (-a_{j_{2}i_{2}\ldots i_{2}})r_{i_{2}}(\mathcal {A});\nonumber \end{aligned}$$

consequently, \(g^{j_{2}i_{2}}(\tau (\mathcal {A}))\le 0\). When \(r_{i_{2}}(\mathcal {A})=0\),

$$\begin{aligned} \left( a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{i_{2}}(\mathcal {A})\right) \left( a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A})\right)\le & {} \left( a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})\right) \left( a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A})\right) \nonumber \\\le & {} r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})r_{i_{2}}(\mathcal {A})\nonumber \\= & {} 0\nonumber \\= & {} \left( -a_{j_{2}i_{2}\ldots i_{2}}\right) r_{i_{2}}(\mathcal {A}). \end{aligned}$$

This leads to \(g^{j_{2}i_{2}}(\tau (\mathcal {A}))\le 0\).

If \(r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})r_{i_{2}}(\mathcal {A})>0\), then we can equivalently express Inequality (2.30) as

$$\begin{aligned} \frac{a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})}{r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})} \frac{a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A})}{r_{i_{2}}(\mathcal {A})}\le 1. \end{aligned}$$
(2.31)

By (2.28), we have \(\frac{a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})}{r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})}\le 1\). When \(-a_{j_{2}i_{2}\ldots i_{2}}>0\), from part (I) of Lemma 2.7 we have

$$\begin{aligned} \frac{a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{i_{2}}(\mathcal {A})}{-a_{j_{2}i_{2}\ldots i_{2}}} \le \frac{a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})}{r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})},\nonumber \end{aligned}$$

together with Inequality (2.31), we can derive that

$$\begin{aligned} \frac{a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{i_{2}}(\mathcal {A})}{-a_{j_{2}i_{2}\ldots i_{2}}}\frac{a_{i_{2}\ldots i_{2}} -\tau (\mathcal {A})}{r_{i_{2}}(\mathcal {A})}\le \frac{a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{\overline{\Delta ^{S}}}(\mathcal {A})}{r_{j_{2}}^{\Delta ^{S}}(\mathcal {A})}\frac{a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A})}{r_{i_{2}}(\mathcal {A})}\le 1, \end{aligned}$$

i.e., \(g^{j_{2}i_{2}}(\tau (\mathcal {A}))\le 0\). When \(a_{j_{2}i_{2}\ldots i_{2}}=0\), by (2.28) we easily get

$$\begin{aligned} a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{i_{2}}(\mathcal {A})\le 0=-a_{j_{2}i_{2}\ldots i_{2}}.\nonumber \end{aligned}$$

Hence,

$$\begin{aligned} (a_{j_{2}\ldots j_{2}}-\tau (\mathcal {A})-r_{j_{2}}^{i_{2}}(\mathcal {A}))(a_{i_{2}\ldots i_{2}}-\tau (\mathcal {A}))\le 0 =-a_{j_{2}i_{2}\ldots i_{2}}r_{i_{2}}(\mathcal {A}).\nonumber \end{aligned}$$

This also implies \(g^{j_{2}i_{2}}(\tau (\mathcal {A}))\le 0\). This completes our proof of Theorem 2.4. \(\square \)

By using the technique in the proof of Theorem 2.4, we can get \(\Upsilon _{\max }(\mathcal {A})\le \max \nolimits _{\begin{array}{c} i,j\in N,\\ j\ne i \end{array}}L_{ij}(\mathcal {A})\). Together with Theorem 5 in [13], we see that the bounds in Theorem 2.3 are sharper than those in Lemmas 1.1 and 1.2 in corrected form.

The following simple example shows the advantage of Theorem 2.3 over the corrected forms of Lemmas 1.1 and 1.2, Lemma 1.4, and the bounds newly derived by Huang et al. [14].

Example 2.2

Let \(\mathcal {A}=(a_{ijk})\in \mathbb {R}^{[3,4]}\) be a weakly irreducible \(\mathcal {M}\)-tensor with entries defined as follows:

$$\begin{aligned} \mathcal {A}=[A(1,:,:),A(2,:,:),A(3,:,:),A(4,:,:)]\in \mathbb {R}^{[3,4]},\nonumber \end{aligned}$$

where

$$\begin{aligned} A(1,:,:)=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 37 &{} -2 &{} -1 &{} -4\\ -1&{} -3 &{} -3 &{} -2\\ -1 &{} -1 &{} -3 &{} -2\\ -2 &{} -3 &{} -3 &{} -3\\ \end{array} \right) , A(2,:,:)=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} -2 &{} -4 &{} -2 &{} -3\\ -1&{} 39 &{} -2 &{} -1\\ -3 &{} -3 &{} -4 &{} -2\\ -2 &{} -3 &{} -1 &{} -4\\ \end{array} \right) ,\nonumber \\ A(3,:,:)=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} -4 &{} -1 &{} -1 &{} -1\\ -1&{} 0 &{} -2 &{} -3\\ -1 &{} -1 &{} 35 &{} -1\\ -2 &{} -2 &{} -4 &{} -3\\ \end{array} \right) , A(4,:,:)=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} -2 &{} -4 &{} 0 &{} -1\\ -4&{} -4 &{} -2 &{} -4\\ -3 &{} 0 &{} -3 &{} -3\\ -3 &{} -3 &{} -4 &{} 49\\ \end{array} \right) . \end{aligned}$$

We now compute the bounds for \(\tau (\mathcal {A})\). Let \(S=\{1,2\}\), then \(\overline{S}=\{3,4\}\). By Lemma 1.1, we have

$$\begin{aligned} 2\le \tau (\mathcal {A})\le 9. \end{aligned}$$

By Lemma 1.2 in the corrected form, we get

$$\begin{aligned} 2.0541\le \tau (\mathcal {A})\le 8.8969. \end{aligned}$$

By Lemma 1.4, we obtain

$$\begin{aligned} 2.2233\le \tau (\mathcal {A})\le 8.7447.\nonumber \end{aligned}$$

By Theorem 3.5 in [14], we get

$$\begin{aligned} 2.6604\le \tau (\mathcal {A})\le 8.1955.\nonumber \end{aligned}$$

By Theorem 2.3, we have

$$\begin{aligned} 3.5550\le \tau (\mathcal {A})\le 7.1629.\nonumber \end{aligned}$$

Obviously, the bounds given in Theorem 2.3 are sharper than the aforementioned existing results.
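The quantities in Theorem 2.3 are directly computable. The sketch below (Python with NumPy assumed; function names are ours, with zero-based indices) evaluates \(\Upsilon _{\min }(\mathcal {A})\) and \(\Upsilon _{\max }(\mathcal {A})\) for a given subset S:

```python
import itertools
import numpy as np

def radii(A, i, S):
    """r_i(A) and its split r_i^{Delta^S}, r_i^{bar Delta^S} (zero-based S)."""
    n, m = A.shape[0], A.ndim
    r = r_S = 0.0
    for idx in itertools.product(range(n), repeat=m - 1):
        if all(j == i for j in idx):
            continue                           # diagonal entry, delta = 1
        v = abs(A[(i,) + idx])
        r += v
        if all(j in S for j in idx):
            r_S += v
    return r, r_S, r - r_S

def upsilon_bounds(A, S):
    """(Upsilon_min(A), Upsilon_max(A)) of Theorem 2.3 for the subset S."""
    n, m = A.shape[0], A.ndim
    Sbar = set(range(n)) - set(S)

    def d(i):                                  # diagonal entry a_{ii...i}
        return A[(i,) * m]

    def piece(S1, S2, outer):
        # outer=min gives bar-Upsilon^{S1}; outer=max gives tilde-Upsilon^{S1}
        inner = max if outer is min else min
        vals = []
        for i in S1:
            r_i = radii(A, i, S1)[0]
            for j in S2:
                _, rj_S, rj_bar = radii(A, j, S1)
                R_j = d(j) - rj_S - rj_bar
                psi = (d(j) - d(i) - rj_bar) ** 2 + 4.0 * rj_S * r_i
                vals.append(inner(
                    0.5 * (d(i) + d(j) - rj_bar - psi ** 0.5), R_j))
        return outer(vals)

    lo = min(piece(set(S), Sbar, min), piece(Sbar, set(S), min))
    hi = max(piece(set(S), Sbar, max), piece(Sbar, set(S), max))
    return lo, hi
```

Applied to the tensor of Example 2.2 with \(S=\{1,2\}\), the resulting interval should fall inside the Lemma 1.1 interval \([2,9]\) reported above; exact agreement with the printed figures depends on the entries as typeset.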

3 Ky Fan Theorem

In [11], He and Huang gave the Ky Fan theorem for nonsingular \(\mathcal {M}\)-tensors as follows:

Lemma 3.1

[11] Let \(\mathcal {A}\), \(\mathcal {B}\) be m-order n-dimensional tensors, suppose that \(\mathcal {B}\) is a nonsingular \(\mathcal {M}\)-tensor and \(|b_{i_{1}\ldots i_{m}}|\ge |a_{i_{1}\ldots i_{m}}|\) for any \(i_{1},\ldots ,i_{m} \in N\) with \(\delta _{i_{1}\ldots i_{m}}= 0\). Then, for any eigenvalue \(\lambda \) of \(\mathcal {A}\), there exists \(i\in N\) such that

$$\begin{aligned} |\lambda -a_{i\ldots i}|\le b_{i\ldots i}-\tau (\mathcal {B}). \end{aligned}$$
(3.1)

In [19], Bu et al. derived the following Brualdi-type eigenvalue inclusion sets of tensors.

Lemma 3.2

[19] Let \(\mathcal {A}=(a_{i_{1}\ldots i_{m}})\in \mathbb {C}^{[m,n]}\) be a tensor such that \(\Gamma _{\mathcal {A}}\) is weakly connected. Then,

$$\begin{aligned} \sigma (\mathcal {A})\subseteq \bigcup \limits _{\gamma \in C(\mathcal {A})}\left\{ z\in \mathbb {C}: \prod \limits _{i\in \gamma }|z-a_{ii\ldots i}| \le \prod \limits _{i\in \gamma }r_{i}(\mathcal {A})\right\} .\nonumber \end{aligned}$$

Based on Lemma 3.2, we derive a new set in Ky Fan theorem, which is sharper than the one in (3.1).

Theorem 3.1

Let \(\mathcal {A}, \mathcal {B}\) be m-order n-dimensional tensors such that \(\Gamma _{\mathcal {A}}\) is weakly connected, \(\mathcal {B}\) is a nonsingular \(\mathcal {M}\)-tensor, and \(|b_{i_{1}\ldots i_{m}}|\ge |a_{i_{1}\ldots i_{m}}|\) for all \(i_{1}\ne \ldots \ne i_{m}\). Then, there exists a circuit \(\gamma \in C(\mathcal {A})\), such that

$$\begin{aligned} \prod \limits _{i\in \gamma }|\lambda -a_{ii\ldots i}|\le \prod \limits _{i\in \gamma }(b_{ii\ldots i}-\tau (\mathcal {B})).\nonumber \end{aligned}$$

Proof

We first suppose that \(\mathcal {B}\) is irreducible. By Lemma 2.2, there exists \(x=(x_{1},x_{2},\ldots , x_{n})^\mathrm{T}>0\) such that

$$\begin{aligned} \mathcal {B}x^{m-1}=\tau (\mathcal {B})x^{[m-1]}. \end{aligned}$$
(3.2)

Let \(D=\mathrm {diag}(x_{1},\ldots ,x_{n})\), \(\mathcal {A}_{D}=\mathcal {A}D^{1-m}\overbrace{D\ldots D}^{m-1}\), and let \(y=(y_{1},\ldots ,y_{n})^\mathrm{T}\) be an eigenvector of \(\mathcal {A}_{D}\) corresponding to \(\lambda \). Then

$$\begin{aligned} \mathcal {A}_{D}y^{m-1}=\lambda y^{[m-1]}.\nonumber \end{aligned}$$

By Lemma 2.5, we have

$$\begin{aligned} \sigma (\mathcal {A})=\sigma (\mathcal {A}_{D}).\nonumber \end{aligned}$$

Equation (3.2) implies that for any i,

$$\begin{aligned} (b_{i\ldots i}-\tau (\mathcal {B}))x_{i}^{m-1}=-\sum \limits _{\delta _{ii_{2}\ldots i_{m}}=0}b_{ii_{2}\ldots i_{m}}{x_{i_{2}}\ldots x_{i_{m}}} =\sum \limits _{\delta _{ii_{2}\ldots i_{m}}=0}|b_{ii_{2}\ldots i_{m}}|{x_{i_{2}}\ldots x_{i_{m}}} ,\nonumber \end{aligned}$$

which is equivalent to

$$\begin{aligned} b_{i\ldots i}-\tau (\mathcal {B}) =\sum \limits _{\delta _{ii_{2}\ldots i_{m}}=0}|b_{ii_{2}\ldots i_{m}}|x_{i}^{1-m}{x_{i_{2}}} \ldots {x_{i_{m}}} .\nonumber \end{aligned}$$

Since \(\Gamma _{\mathcal {A}}\) is weakly connected, so is \(\Gamma _{\mathcal {A}_{D}}\). From Lemma 3.2 and the above equation, for any eigenvalue \(\lambda \) of \(\mathcal {A}_{D}\), there exists a circuit \(\gamma \in C(\mathcal {A})\), such that

$$\begin{aligned} \prod \limits _{i\in \gamma }|\lambda -a_{ii\ldots i}|\le & {} \prod \limits _{i\in \gamma }r_{i}(\mathcal {A}_{D})\nonumber \\= & {} \prod \limits _{i\in \gamma } \left( \sum \limits _{\delta _{ii_{2}\ldots i_{m}}=0}|a_{ii_{2}\ldots i_{m}}|x_{i}^{1-m}x_{i_{2}}\ldots x_{i_{m}}\right) \nonumber \\\le & {} \prod \limits _{i\in \gamma } \left( \sum \limits _{\delta _{ii_{2}\ldots i_{m}}=0}|b_{ii_{2}\ldots i_{m}}|x_{i}^{1-m}x_{i_{2}}\ldots x_{i_{m}}\right) \nonumber \\= & {} \prod \limits _{i\in \gamma }(b_{i\ldots i}-\tau (\mathcal {B})).\nonumber \end{aligned}$$

When the tensor \(\mathcal {B}\) is reducible, we replace the zero entries of \(\mathcal {B}\) with \(-\frac{1}{k}\), where k is a positive integer; the resulting Z-tensor \(\mathcal {B}_{k}\) is irreducible and satisfies \(|(\mathcal {B}_{k})_{i_{1}\ldots i_{m}}|\ge |a_{i_{1}\ldots i_{m}}|\). Then there exists a circuit \(\gamma \in C(\mathcal {A})\) such that

$$\begin{aligned} \prod \limits _{i\in \gamma }|\lambda -a_{ii\ldots i}|\le \prod \limits _{i\in \gamma }(b_{ii\ldots i}-\tau (\mathcal {B}_{k})). \end{aligned}$$
(3.3)

From the proof of Theorem 3.6 in [14], we have

$$\begin{aligned} \lim _{k\rightarrow \infty }\tau (\mathcal {B}_{k})=\tau (\mathcal {B}).\nonumber \end{aligned}$$

Letting \(k\rightarrow \infty \) in Inequality (3.3) results in

$$\begin{aligned} \prod \limits _{i\in \gamma }|\lambda -a_{ii\ldots i}|\le \prod \limits _{i\in \gamma }(b_{ii\ldots i}-\tau (\mathcal {B})).\nonumber \end{aligned}$$

This completes our proof of Theorem 3.1.\(\square \)

Denote

$$\begin{aligned}&G(\mathcal {A})= \bigcup \limits _{i\in N}\left\{ z\in \mathbb {C}: |z-a_{ii\ldots i}|\le (b_{i\ldots i}-\tau (\mathcal {B}))\right\} ,\nonumber \\&S(\mathcal {A})= \bigcup \limits _{\gamma \in C(\mathcal {A})}\left\{ z\in \mathbb {C}: \prod \limits _{i\in \gamma }|z-a_{ii\ldots i}|\le \prod \limits _{i\in \gamma }(b_{i\ldots i}-\tau (\mathcal {B}))\right\} .\nonumber \end{aligned}$$

It follows from Lemma 3.1 and Theorem 3.1 that \(\sigma (\mathcal {A})\subseteq G(\mathcal {A})\) and \(\sigma (\mathcal {A})\subseteq S(\mathcal {A})\). Next, we compare the sets \(S(\mathcal {A})\) and \(G(\mathcal {A})\) in the following theorem, showing that Theorem 3.1 is sharper than the Ky Fan theorem.

Theorem 3.2

Let \(\mathcal {A}, \mathcal {B}\) be m-order n-dimensional tensors such that \(\Gamma _{\mathcal {A}}\) is weakly connected, \(\mathcal {B}\) is a nonsingular \(\mathcal {M}\)-tensor, and \(|b_{i_{1}\ldots i_{m}}|\ge |a_{i_{1}\ldots i_{m}}|\) for all \(i_{1}\ne \ldots \ne i_{m}\). Then

$$\begin{aligned} S(\mathcal {A})\subseteq G(\mathcal {A}).\nonumber \end{aligned}$$

Proof

For any \(z\in S(\mathcal {A})\), if \(z\notin G(\mathcal {A})\), then \(|z-a_{ii\ldots i}|>b_{ii\ldots i}-\tau (\mathcal {B})~(i=1,2,\ldots ,n)\). In this case, \(\prod \nolimits _{i\in \gamma }|z-a_{ii\ldots i}|>\prod \nolimits _{i\in \gamma }(b_{ii\ldots i}-\tau (\mathcal {B}))\) for any \(\gamma \in C(\mathcal {A})\), a contradiction to \(z\in S(\mathcal {A})\). Hence \(z\in G(\mathcal {A})\), i.e., \(S(\mathcal {A})\subseteq G(\mathcal {A})\).\(\square \)
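The containment argument above is purely set-theoretic and can be mimicked numerically. In the sketch below (Python; the function names are ours), the diagonal entries \(a_{ii\ldots i}\), the radii \(b_{ii\ldots i}-\tau (\mathcal {B})\), and the list of circuits of \(\Gamma _{\mathcal {A}}\) are all assumed to be given:

```python
import math

def in_G(z, a, rad):
    """z lies in the union of the Ky Fan discs |z - a[i]| <= rad[i],
    where a[i] = a_{ii...i} and rad[i] = b_{ii...i} - tau(B)."""
    return any(abs(z - a[i]) <= rad[i] for i in range(len(a)))

def in_S(z, a, rad, circuits):
    """z lies in the Brualdi-type union over circuits of Gamma_A;
    each circuit is a tuple of vertex indices."""
    return any(
        math.prod(abs(z - a[i]) for i in g) <= math.prod(rad[i] for i in g)
        for g in circuits
    )
```

For any such data with nonnegative radii, a point outside every disc has every factor strictly larger than the corresponding radius, so membership in \(S(\mathcal {A})\) forces membership in \(G(\mathcal {A})\), exactly as in the proof.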

4 Conclusions

In this paper, several new estimates of the minimum H-eigenvalue for weakly irreducible nonsingular \(\mathcal {M}\)-tensors are presented, which are proved to be sharper than those of [11, 12]. We have also established a new Ky Fan-type theorem. It should be noted that this theorem requires \(\Gamma _{\mathcal {A}}\) to be weakly connected and \(\mathcal {B}\) to be a nonsingular \(\mathcal {M}\)-tensor, and it improves the one in [11].

However, the new S-type estimates of the minimum H-eigenvalue depend on the set S, so an interesting problem is how to choose S to make the bounds exhibited in Theorem 2.3 as tight as possible. This is very difficult when the dimension of the tensor \(\mathcal {A}\) is large. Therefore, future work will include numerical and theoretical studies on finding the best choice of S.
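For modest n, the best S can simply be brute-forced over all \(2^{n}-2\) nonempty proper subsets. A minimal sketch (Python; the function name and the `bounds_for` callback, which is assumed to return the interval of Theorem 2.3 for a given subset, are ours):

```python
from itertools import combinations

def best_subset(n, bounds_for):
    """Enumerate all nonempty proper subsets S of {0, ..., n-1} and return
    the (lo, hi, S) whose interval from bounds_for(S) is tightest."""
    best = None
    for k in range(1, n):
        for S in combinations(range(n), k):
            lo, hi = bounds_for(set(S))
            if best is None or hi - lo < best[1] - best[0]:
                best = (lo, hi, set(S))
    return best
```

Since \(\Upsilon _{\min }\) and \(\Upsilon _{\max }\) are symmetric in S and \(\overline{S}\), it would suffice to enumerate only the subsets containing a fixed index, halving the work; the exponential cost in n is exactly the difficulty noted above.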