1 Introduction

In this note, we consider the following absolute value equation (AVE)

$$\begin{aligned} Ax-B|x|=b, \end{aligned}$$
(1)

where \(A, B\in \mathbb {R}^{n\times n}\) and \(b\in \mathbb {R}^{n}\). We show that if matrices A and B satisfy

$$\begin{aligned} \sigma _{\max }(B) < \sigma _{\min }(A), \end{aligned}$$

then the AVE (1) is uniquely solvable for any b, where \(\sigma _{\max }\) and \(\sigma _{\min }\) denote the maximal and minimal singular values, respectively. This condition is weaker than the condition

$$\begin{aligned} \sigma _{\max }(|B|) < \sigma _{\min }(A), \end{aligned}$$

which was provided by Rohn in [1]; see [1] for more details.
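For readers who wish to test this condition on concrete data, the following minimal sketch (in Python with NumPy; the helper name condition_holds is ours, not from [1]) computes both quantities directly from the singular value decomposition.

```python
import numpy as np

def condition_holds(A, B):
    """Check sigma_max(B) < sigma_min(A), the unique-solvability
    condition for the AVE Ax - B|x| = b studied in this note."""
    sigma_min_A = np.linalg.svd(A, compute_uv=False).min()
    sigma_max_B = np.linalg.svd(B, compute_uv=False).max()
    return sigma_max_B < sigma_min_A
```

For the matrices of Example 2.1 below, condition_holds(A, B) returns True, while the analogous check with |B| in place of B fails.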

At present, the AVE (1) has attracted considerable attention because it serves as a useful tool in optimization, arising from problems such as the linear complementarity problem, linear programming and convex quadratic programming.

Recently, the AVE (1) has been studied from two aspects: theoretical analysis and the development of efficient numerical methods. The former focuses on the theorem of alternatives, various equivalent reformulations, and the existence and nonexistence of solutions, see [1,2,3,4,5,6,7]. In particular, in [6], the authors presented some necessary and sufficient conditions for the unique solvability of the AVE (1) with \(B=I\), where I denotes the identity matrix. The latter focuses on numerical methods for solving the AVE (1), such as the smoothing Newton method [8], the generalized Newton method [9], the sign accord method [10], the Picard-HSS method [11], the relaxed nonlinear PHSS-like method [12], the Levenberg–Marquardt method [13], the finite succession of linear programs [14], the modified generalized Newton method [15, 16], the preconditioned AOR method [17] and the modified Newton-type method [18].

2 The main result

In this section, we give our main result. The following lemma is required.

Lemma 2.1

If matrices A and B satisfy

$$\begin{aligned} \sigma _{\max }(B) < \sigma _{\min }(A), \end{aligned}$$

then the matrix \((A-B)^{-1}(A+B)\) is positive definite.

Proof

Since \(\sigma _{\max }(B) < \sigma _{\min }(A)\), we have \(\lambda _{\min }(AA^{T})=\sigma _{\min }(A)^{2}>\sigma _{\max }(B)^{2}=\lambda _{\max }(BB^{T})\). Hence, for all nonzero \(x \in {\mathbb {R}}^{n}\),

$$\begin{aligned} x^{T}AA^{T}x\ge \lambda _{\min }(AA^{T})\,x^{T}x>\lambda _{\max }(BB^{T})\,x^{T}x\ge x^{T}BB^{T}x. \end{aligned}$$

Clearly,

$$\begin{aligned} x^{T}(AA^{T}-BB^{T})x>0. \end{aligned}$$

Noting that \(x^{T}BA^{T}x=x^{T}AB^{T}x\), we further have

$$\begin{aligned} 0<x^{T}(AA^{T}-BB^{T}+BA^{T}-AB^{T})x=x^{T}(A+B)(A^{T}-B^{T})x. \end{aligned}$$
(2)

Since \(\sigma _{\max }(B) < \sigma _{\min }(A)\), the matrix \(A-B\) is nonsingular; indeed, \(\sigma _{\min }(A-B)\ge \sigma _{\min }(A)-\sigma _{\max }(B)>0\). Let \(y=(A^{T}-B^{T})x\); as x ranges over all nonzero vectors, so does y. Since \(x^{T}=y^{T}(A-B)^{-1}\), a direct computation gives

$$\begin{aligned} x^{T}(A+B)(A^{T}-B^{T})x=y^{T}(A-B)^{-1}(A+B)y. \end{aligned}$$

From Eq. (2) it follows that \(y^{T}(A-B)^{-1}(A+B)y>0\) for all nonzero y, which implies that the matrix \((A-B)^{-1}(A+B)\) is positive definite. \(\square \)
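Lemma 2.1 can be checked numerically on a random instance. A nonsymmetric matrix M satisfies \(x^{T}Mx>0\) for all nonzero x exactly when its symmetric part \((M+M^{T})/2\) has only positive eigenvalues, which is what the following sketch tests (the helper and the test data are ours).

```python
import numpy as np

def is_positive_definite(M, tol=0.0):
    """x^T M x > 0 for all nonzero x iff the symmetric part
    (M + M^T)/2 of M has only positive eigenvalues."""
    return np.linalg.eigvalsh((M + M.T) / 2).min() > tol

# A random instance satisfying sigma_max(B) < sigma_min(A).
rng = np.random.default_rng(0)
A = 3.0 * np.eye(3) + 0.1 * rng.standard_normal((3, 3))
B = 0.2 * rng.standard_normal((3, 3))
assert np.linalg.svd(B, compute_uv=False).max() < np.linalg.svd(A, compute_uv=False).min()

M = np.linalg.solve(A - B, A + B)   # (A - B)^{-1} (A + B)
print(is_positive_definite(M))      # expected: True, per Lemma 2.1
```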

Theorem 2.1

If matrices A and B satisfy

$$\begin{aligned} \sigma _{\max }(B) < \sigma _{\min }(A), \end{aligned}$$

then the AVE (1) has a unique solution for any b.

Proof

Let \(x_{+}=\frac{|x|+x}{2}\) and \(x_{-}=\frac{|x|-x}{2}\). Then

$$\begin{aligned} x=x_{+}-x_{-}, |x|=x_{+}+x_{-}. \end{aligned}$$
(3)

Substituting (3) into (1), we obtain \((A-B)x_{+}-(A+B)x_{-}=b\), i.e.,

$$\begin{aligned} x_{+}=(A-B)^{-1}(A+B)x_{-}+(A-B)^{-1}b. \end{aligned}$$
(4)

Since \(x_{+}\ge 0\), \(x_{-}\ge 0\) and \(x_{+}^{T}x_{-}=0\), Eq. (4) is a linear complementarity problem in the pair \((x_{+},x_{-})\). By Lemma 2.1, the matrix \((A-B)^{-1}(A+B)\) is positive definite and hence a P-matrix. Therefore, by [19], the linear complementarity problem (4) is uniquely solvable for any b, and so is the AVE (1). \(\square \)
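As a sanity check on this reformulation, the sketch below (our own helper, not taken from [19]) verifies that a solution x of the AVE induces a pair \((x_{+},x_{-})\) satisfying Eq. (4) together with the complementarity condition; nonnegativity of \(x_{+}\) and \(x_{-}\) holds by construction.

```python
import numpy as np

def check_lcp_reformulation(A, B, b, x, tol=1e-10):
    """Given a solution x of Ax - B|x| = b, verify that z = x_minus,
    w = x_plus satisfy w = M z + q and w^T z = 0, with
    M = (A - B)^{-1}(A + B) and q = (A - B)^{-1} b as in (4)."""
    x_plus = (np.abs(x) + x) / 2    # nonnegative part of x
    x_minus = (np.abs(x) - x) / 2   # nonnegative part of -x
    M = np.linalg.solve(A - B, A + B)
    q = np.linalg.solve(A - B, b)
    residual = np.linalg.norm(x_plus - M @ x_minus - q)
    return residual < tol and abs(x_plus @ x_minus) < tol
```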

On the unique solvability of the AVE (1), the following result was provided in [1].

Theorem 2.2

[1] If \(A,B\in \mathbb {R}^{n\times n}\) satisfy

$$\begin{aligned} \sigma _{\max }(|B|) < \sigma _{\min }(A), \end{aligned}$$

then the AVE (1) has a unique solution for any b.

Remark 2.1

It is noted that the condition in Theorem 2.1 is weaker than the condition in Theorem 2.2. In fact, for \(B\in \mathbb {R}^{n\times n}\), we have the entrywise inequalities

$$\begin{aligned} B^{T}B\le |B^{T}B|\le |B^{T}|\cdot |B|. \end{aligned}$$

Based on Theorem 8.1.18 in [20], we have

$$\begin{aligned} \rho (B^{T}B)\le \rho (|B^{T}B|)\le \rho (|B^{T}|\cdot |B|), \end{aligned}$$

where \(\rho (\cdot )\) denotes the spectral radius of a matrix. Since \(\sigma _{\max }(B)^{2}=\rho (B^{T}B)\) and \(\rho (|B^{T}|\cdot |B|)=\rho (|B|^{T}|B|)=\sigma _{\max }(|B|)^{2}\), it follows that \(\sigma _{\max }(B)\le \sigma _{\max }(|B|)\).
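The inequality \(\sigma _{\max }(B)\le \sigma _{\max }(|B|)\) can also be spot-checked empirically; the following loop over random matrices is of course an illustration, not a proof (the proof is the argument above).

```python
import numpy as np

# Empirical check of sigma_max(B) <= sigma_max(|B|) on random matrices.
rng = np.random.default_rng(1)
for _ in range(1000):
    B = rng.standard_normal((4, 4))
    s_B = np.linalg.svd(B, compute_uv=False).max()
    s_abs = np.linalg.svd(np.abs(B), compute_uv=False).max()
    assert s_B <= s_abs + 1e-12
print("sigma_max(B) <= sigma_max(|B|) held in all trials")
```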

Remark 2.2

When \(B=I\), Theorem 2.1 reduces to Proposition 3 (i) in [2]. That is to say, Theorem 2.1 is a generalization of Proposition 3 (i) in [2].

The following example shows that Theorem 2.2 in [1] may fail to verify the unique solvability of a certain AVE, whereas Theorem 2.1 succeeds.

Example 2.1

Consider the following AVE

$$\begin{aligned} \underbrace{\left[ \begin{array}{cc} 2.5 &\quad 0\\ 0 &\quad 2.5 \end{array}\right] }_{A} \left[ \begin{array}{c} x_{1}\\ x_{2} \end{array}\right] -\underbrace{\left[ \begin{array}{cc} 2 &\quad -1\\ 1 &\quad 1 \end{array}\right] }_{B} \left[ \begin{array}{c} |x_{1}|\\ |x_{2}| \end{array}\right] =\left[ \begin{array}{c} 1.5\\ 0.5 \end{array}\right] . \end{aligned}$$
(5)

By simple computations, \(\sigma _{\min }(A)=\sigma _{\max }(A)=2.5\), \(\sigma _{\max }(B)=2.3028\) and \(\sigma _{\max }(|B|)=2.6180\). Since \(\sigma _{\min }(A)<\sigma _{\max }(|B|)\), Theorem 2.2 in [1] cannot be used to establish the unique solvability of the AVE (5). In contrast, \(\sigma _{\min }(A)>\sigma _{\max }(B)\), so Theorem 2.1 guarantees that the AVE (5) has a unique solution; in fact, the unique solution is \(x_{1}=x_{2}=1\). This further shows that Theorem 2.1 is applicable in cases where Theorem 2.2 in [1] is not.
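The quantities in Example 2.1 can be reproduced with a few lines of NumPy; the script below also confirms that \(x=(1,1)^{T}\) solves (5) by evaluating the residual directly.

```python
import numpy as np

# Reproduce the quantities in Example 2.1.
A = np.array([[2.5, 0.0], [0.0, 2.5]])
B = np.array([[2.0, -1.0], [1.0, 1.0]])
b = np.array([1.5, 0.5])

print(np.linalg.svd(B, compute_uv=False).max())          # ~2.3028
print(np.linalg.svd(np.abs(B), compute_uv=False).max())  # ~2.6180

x = np.array([1.0, 1.0])
print(A @ x - B @ np.abs(x))                             # [1.5, 0.5] = b
```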