1 Introduction

Consider the absolute value equation (AVE)

$$\begin{aligned} Ax-\vert x\vert =b, \end{aligned}$$
(1)

where \(A\in {\mathbb {R}}^{n\times n}\) and \(b\in {\mathbb {R}}^n\) are given, \(x\in {\mathbb {R}}^n\) is the unknown vector to be determined, and \(\vert x\vert \) denotes the vector whose components are the absolute values of the corresponding components of x. System (1) is a special case of the generalized absolute value equation (GAVE) [16]:

$$\begin{aligned} Ax-B\vert x\vert =b, \end{aligned}$$
(2)

where \(B\in {\mathbb {R}}^{n \times n}\), which was first introduced by Rohn [16] and studied in a more general setting in [12, 14, 15]. AVE (1) arises in a variety of scientific computing and engineering applications, such as linear programming [10, 12], quasi-complementarity problems [18], quadratic programming and the general linear complementarity problem [3].

Recently, many scholars have studied the unique solvability of AVE (1) and GAVE (2); for example, Wu and Li [19] presented two necessary and sufficient conditions and some sufficient conditions for the unique solvability of AVE (1). More solvability conditions for AVE (1) can be found in [7] and the references therein. To approximate its numerical solution, a large number of methods have been proposed for solving AVE (1) or GAVE (2), including the modified or generalized Newton method [15, 20], matrix splitting iteration methods [1], Picard-type methods [17], neural network methods [4, 13], and methods based on the equivalent two-by-two block form, such as the SOR-like method [6, 8], the fixed point iteration (FPI) method [9], the modified fixed point iteration (MFPI) method [21] and the shift-splitting fixed point iteration method [11].

By reformulating the AVE (1) as an equivalent two-by-two block system, Ke [9] proposed the FPI method for solving the AVE (1), which can be described as follows.

Method 1

(FPI Method [9]) Let \(A\in {\mathbb {R}}^{n\times n}\) be a nonsingular matrix and \(b\in {\mathbb {R}}^n\). Given the initial vectors \(x^{(0)}\in {\mathbb {R}}^n\) and \(y^{(0)}\in {\mathbb {R}}^n\), for \(k=0,1,2,\cdots \) until the iteration sequence \(\{x^{(k)},y^{(k)}\}_{k=0}^{+\infty }\) is convergent, compute

$$\begin{aligned} \left\{ \begin{array} {l} x^{(k+1)}=A^{-1}(y^{(k)}+b), \\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega \vert x^{(k+1)}\vert , \end{array}\right. \end{aligned}$$
(3)

where the relaxation parameter \(\omega >0\).
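For concreteness, a minimal MATLAB sketch of Method 1 is given below; the function name fpi and the stopping controls tol and kmax are our own illustrative choices, not part of [9], and the linear solve simply uses MATLAB's backslash operator rather than a prefactored A.

```matlab
function [x, k, res] = fpi(A, b, omega, tol, kmax)
% Minimal sketch of the FPI method (3); the name fpi and the
% stopping controls tol, kmax are hypothetical, not from [9].
x = zeros(size(b));  y = zeros(size(b));   % zero initial guesses
for k = 1:kmax
    x = A \ (y + b);                       % x^{(k+1)} = A^{-1}(y^{(k)} + b)
    y = (1 - omega)*y + omega*abs(x);      % relaxation step
    res = norm(A*x - abs(x) - b);          % residual of AVE (1)
    if res <= tol, break; end
end
end
```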

Note that a linear system with coefficient matrix A needs to be solved at each step of the FPI method. Since A is usually large and sparse, we prefer to approximate its solution by an iterative method. If we split A as

$$\begin{aligned} A=\dfrac{1}{2}(\alpha I+A)-\dfrac{1}{2}(\alpha I-A), \end{aligned}$$

where \(\alpha \) is a positive parameter, and approximate \(x^{(k+1)}\) in the FPI method by the shift-splitting method [2], then we obtain the following shift-splitting FPI method (abbreviated as the FPI-SS method) for solving AVE (1).

Method 2

(FPI-SS Method for AVE (1)) Let \(A\in {\mathbb {R}}^{n\times n}\) be a nonsingular matrix and \(b\in {\mathbb {R}}^n\). Let \(\alpha \) be a positive constant such that \(\alpha I+A\in {\mathbb {R}}^{n\times n}\) is nonsingular. Given the initial vectors \(x^{(0)}\in {\mathbb {R}}^n\) and \(y^{(0)}\in {\mathbb {R}}^n\), compute \((x^{(k+1)}, y^{(k+1)})\) for \(k=0, 1, 2, \cdots \) until the iteration sequence \(\{x^{(k)}, y^{(k)}\}_{k=0}^{+\infty }\) is convergent:

$$\begin{aligned} \left\{ \begin{array} {l} x^{(k+1)}=(\alpha I+A)^{-1}(\alpha I-A)x^{(k)}+2(\alpha I+A)^{-1}(y^{(k)}+b), \\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega \vert x^{(k+1)}\vert , \end{array}\right. \end{aligned}$$
(4)

where the parameter \(\omega \) is a positive constant.

The FPI-SS method is an inexact FPI method, and it was first proposed for solving the GAVE (2) in [11]. When \(B=I\), the GAVE (2) reduces to the AVE (1), and Algorithm 3 in [11] reduces to Method 2. In this paper, based on the preconditioned shift-splitting technique, we propose another inexact FPI method for solving the AVE (1). This paper is organized as follows. In Sect. 2, the new inexact FPI method for solving the AVE (1) is established. The convergence analysis of the proposed method is given in Sect. 3. In Sect. 4, numerical experiments are presented to illustrate the effectiveness and feasibility of the proposed method. Finally, a brief conclusion is given in Sect. 5.

2 The FPI-PSS method

Similarly to [5], assume that A is split as

$$\begin{aligned} A=\dfrac{1}{2}(\alpha P+A)-\dfrac{1}{2}(\alpha P-A) \end{aligned}$$

where \(\alpha \) is a positive parameter and P is a symmetric positive definite matrix. Then \(x^{(k+1)}\) in the FPI method (3) can be approximated by the following preconditioned shift-splitting (PSS) iteration:

$$\begin{aligned} x^{(k+1)}=(\alpha P+A)^{-1}(\alpha P-A)x^{(k)}+2(\alpha P+A)^{-1}(y^{(k)}+b). \end{aligned}$$

Hence, we obtain the following inexact FPI method, termed the FPI-PSS method, for solving the AVE (1).

Method 3

(FPI-PSS Method for AVE (1)) Let \(A\in {\mathbb {R}}^{n\times n}\), \(b\in {\mathbb {R}}^{n}\). Given the initial vectors \(x^{(0)}\) \(\in \) \({\mathbb {R}}^{n}\) and \(y^{(0)}\in {\mathbb {R}}^{n}\), compute \(\{x^{(k+1)},y^{(k+1)}\}\) for \(k=0,1,2,\ldots \) using the following iteration scheme until \(\{x^{(k)},y^{(k)}\}_{k=0}^{+\infty }\) satisfies the stopping criterion:

$$\begin{aligned} \left\{ \begin{array} {l} x^{(k+1)}=(\alpha P+A)^{-1}(\alpha P-A)x^{(k)}+2(\alpha P+A)^{-1}(y^{(k)}+b),\\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega \vert x^{(k+1)}\vert , \end{array}\right. \end{aligned}$$
(5)

where \(\alpha \) is a positive iteration parameter and P is a symmetric positive definite matrix.
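As a minimal MATLAB sketch (the function name fpi_pss and its argument list are our own illustrative choices, not code from this paper), scheme (5) can be implemented by factorizing \(\alpha P+A\) once outside the loop:

```matlab
function [x, k, res] = fpi_pss(A, P, b, alpha, omega, tol, kmax)
% Minimal sketch of the FPI-PSS method (5); names are hypothetical.
x = zeros(size(b));  y = zeros(size(b));   % zero initial guesses
S = decomposition(alpha*P + A);            % factor alpha*P + A once
N = alpha*P - A;
for k = 1:kmax
    x = S \ (N*x + 2*(y + b));             % PSS step for x^{(k+1)}
    y = (1 - omega)*y + omega*abs(x);      % relaxation step
    res = norm(A*x - abs(x) - b);          % residual of AVE (1)
    if res <= tol, break; end
end
end
```

Setting P = speye(n) in this sketch recovers the FPI-SS method (4).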

Clearly, the iteration matrix of the FPI-PSS method is

$$\begin{aligned} M= \left[ \begin{array}{cc} \alpha P+A &{} 0\\ -\omega D(x) &{} I \end{array} \right] ^{-1} \left[ \begin{array} {cc} \alpha P-A &{} 2I \\ 0 &{} (1-\omega )I \end{array}\right] , \end{aligned}$$

where D(x) is a diagonal matrix of the form \(D(x)=\textrm{diag}(\textrm{sign}(x))\) wherein \(\textrm{sign}(x)\) denotes a vector with components equal to 1, 0 or \(-1\) depending on whether the corresponding component of x is positive, zero or negative, respectively.
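For small instances one can form M explicitly and check \(\rho (M)<1\) numerically. The following self-contained MATLAB fragment is our own illustration; the instance, and the point x at which D(x) is evaluated, are chosen arbitrarily.

```matlab
% Hypothetical small instance for checking rho(M) < 1 numerically.
n = 4;  alpha = 1;  omega = 1;
A = 4*eye(n) - diag(ones(n-1,1),1) - diag(ones(n-1,1),-1);
P = eye(n);                        % P = I corresponds to the FPI-SS case
x = -ones(n,1);                    % point at which D(x) is evaluated
Dx = diag(sign(x));
M = [alpha*P + A, zeros(n); -omega*Dx, eye(n)] \ ...
    [alpha*P - A, 2*eye(n); zeros(n), (1-omega)*eye(n)];
rho = max(abs(eig(M)))             % spectral radius; < 1 indicates convergence
```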

In particular, when \(P=I\), the FPI-PSS method reduces to the FPI-SS method. Therefore, the proposed FPI-PSS method is a generalization of Method 2. Moreover, the FPI-PSS method has the same computational framework as the shift-splitting fixed point iteration method in [11], so it can also be used to solve the GAVE (2).

3 Convergence of the FPI-PSS method

In this section, the convergence of the FPI-PSS method for solving the AVE (1) is studied. Let \(\rho (M)\) denote the spectral radius of the iteration matrix M; then the FPI-PSS method is convergent if and only if \(\rho (M)<1\). Assume that \(\lambda \) is an eigenvalue of the matrix M and \([u,v]^{T}\) is a corresponding eigenvector. Then we have

$$\begin{aligned} M \left[ \begin{array} {c} u \\ v \end{array} \right] = \lambda \left[ \begin{array} {c} u \\ v \\ \end{array} \right] , \end{aligned}$$

which is equivalent to

$$\begin{aligned} \left\{ \begin{array} {l} (\alpha P-A)u+2v=\lambda (\alpha P+A)u, \\ (1-\omega )v=\lambda (-\omega D(x)u+v). \end{array}\right. \end{aligned}$$
(6)

To study the convergence of the FPI-PSS method, several useful lemmas are presented first.

Lemma 1

[6] Let \(A \in {\mathbb {R}}^{n \times n}\). If the smallest singular value of A exceeds 1 and \(\eta \) is an eigenvalue of the matrix \(D(x)A^{-1}\), then \(\vert \eta \vert <1\).

Lemma 2

[22] Consider the real quadratic equation \(x^{2}+bx+d=0\), where b and d are real numbers. Both roots of the equation are less than one in modulus if and only if \(\vert d\vert <1\) and \(\vert b\vert <1+d\).
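Lemma 2 is easy to check numerically; for instance (our own arbitrary choice of b and d satisfying both conditions):

```matlab
% Check Lemma 2 for an arbitrary pair (b, d): |d| < 1 and |b| < 1 + d.
b = 0.5;  d = 0.2;
r = roots([1, b, d]);              % roots of x^2 + b*x + d = 0
assert(all(abs(r) < 1))            % both roots lie inside the unit circle
```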

Lemma 3

Let \(A \in {\mathbb {R}}^{n \times n}\). If the smallest singular value of A exceeds 1 and \(\lambda \) is an eigenvalue of the matrix M, then \(\lambda \ne 1\).

Proof

Suppose that \(\lambda =1\) is an eigenvalue of the matrix M. Then (6) becomes

$$\begin{aligned} \left\{ \begin{array} {l} (\alpha P-A)u+2v=(\alpha P+A)u, \\ (1-\omega )v=-\omega D(x)u+v. \end{array}\right. \end{aligned}$$
(7)

From (7), we obtain \(v=Au\) and \(v=D(x)u\), which yields \(Au=D(x)A^{-1}(Au)\), that is,

$$\begin{aligned} (I-D(x)A^{-1})Au=0. \end{aligned}$$

It follows from Lemma 1 that \(I-D(x)A^{-1}\) is nonsingular, so \(Au=0\), and hence \(u=0\) since A is nonsingular. Then, from (7), we get \(v=0\). This contradicts the fact that \([u,v]^{T}\) is an eigenvector. Hence \(\lambda \ne 1\). \(\square \)

Lemma 4

Let \(A \in {\mathbb {R}}^{n \times n}\) be a nonsingular matrix and \(\omega >0\). If \(\lambda \) and some nonzero vector u satisfy

$$\begin{aligned} (\lambda -1)(\lambda +\omega -1)\alpha Pu+(\lambda +1)(\lambda +\omega -1)Au=2\lambda \omega D(x)u, \end{aligned}$$
(8)

then \(\lambda \) is an eigenvalue of the matrix M. Conversely, if \(\lambda \) is an eigenvalue of the matrix M such that \(\lambda \ne 1-\omega \), then \(\lambda \) satisfies (8).

Proof

Let \([u,v]^{T}\) be an eigenvector of M corresponding to the eigenvalue \(\lambda \). Then it follows from (6) that

$$\begin{aligned} \left\{ \begin{array} {l} (\lambda -1)\alpha Pu+(\lambda +1)Au=2v, \\ (\lambda +\omega -1)v=\lambda \omega D(x)u. \end{array}\right. \end{aligned}$$
(9)

Combining the two equalities in (9), we obtain (8). The converse assertion can be proved by reversing this process. \(\square \)

Theorem 1

Let A be a symmetric positive definite matrix. Assume that \(\lambda \) is an eigenvalue of the iteration matrix M and \([u,v]^{T}\in {\mathbb {C}}^{2n}\) is a corresponding eigenvector. Denote \(a=\dfrac{u^{*}Au}{u^{*}Pu}\) and \(c=\dfrac{u^{*}D(x)u}{u^{*}Pu}\). Then the FPI-PSS method is convergent if and only if one of the following conditions is satisfied:

$$\begin{aligned} \left\{ \begin{array} {ll} 0<\omega<\dfrac{2\alpha }{\alpha -c},&{} \alpha>a>c, \\ 0<\omega<\dfrac{2a}{a-\alpha },&{} c<\alpha<\sqrt{ac}<a, \\ 0<\omega<\dfrac{2\alpha }{\alpha -c},&{} c<\sqrt{ac}<\alpha< a, \\ 0<\omega<\dfrac{2a}{a-\alpha },&{} \alpha<c<a. \end{array}\right. \end{aligned}$$

Proof

From Lemma 4 we know that \(\lambda \) satisfies (8). Multiplying both sides of (8) from the left by \(\dfrac{u^{*}}{u^{*}Pu}\), we get

$$\begin{aligned} (\lambda -1)(\lambda +\omega -1)\alpha \dfrac{u^{*}Pu}{u^{*}Pu}+(1+\lambda )(\lambda +\omega -1)\dfrac{u^{*}Au}{u^{*}Pu}-2\lambda \omega \dfrac{u^{*}D(x)u}{u^{*}Pu}=0, \end{aligned}$$

that is

$$\begin{aligned} (\lambda -1)(\lambda +\omega -1)\alpha +(1+\lambda )(\lambda +\omega -1)a-2\lambda \omega c=0, \end{aligned}$$

or equivalently,

$$\begin{aligned} \lambda ^{2}+\dfrac{(\omega -2)\alpha +\omega a-2\omega c}{a+\alpha }\lambda +\dfrac{(1-\omega )(\alpha -a)}{a+\alpha }=0. \end{aligned}$$

From Lemma 2 and Lemma 3, we know that the FPI-PSS method is convergent if and only if

$$\begin{aligned} \left| \dfrac{(1-\omega )(\alpha -a)}{a+\alpha } \right| <1 \end{aligned}$$

and

$$\begin{aligned} \left| \dfrac{(\omega -2)\alpha +\omega a-2\omega c}{a+\alpha } \right| <1+\dfrac{(1-\omega )(\alpha -a)}{a+\alpha }. \end{aligned}$$

In what follows, we divide our discussion into three cases to solve the above inequalities.

Case 1: \(c=0\)

In this case, the two roots of the quadratic equation are \(\lambda =\dfrac{\alpha -a}{\alpha +a}\) and \(\lambda =1-\omega \). Since \(\alpha>0\) and \(a>0\), we have \(\left| \dfrac{\alpha -a}{\alpha +a}\right| <1\), and \(\vert 1-\omega \vert <1\) holds for \(0<\omega <2\).

Case 2: \(c>0\)

Now, when \(\alpha>a>c\), we get that

$$\begin{aligned} 0<\omega <\dfrac{2\alpha }{\alpha -c}, \end{aligned}$$

while when \(a>\alpha >c\), we obtain that

$$\begin{aligned} \left\{ \begin{array} {ll} 0<\omega<\dfrac{2a}{a-\alpha },&{}c<\alpha<\sqrt{ac}<a, \\ 0<\omega<\dfrac{2\alpha }{\alpha -c},&{}c<\sqrt{ac}<\alpha <a. \end{array}\right. \end{aligned}$$

and when \(\alpha<c<a\), we have

$$\begin{aligned} 0<\omega <\dfrac{2a}{a-\alpha }. \end{aligned}$$

Case 3: \(c<0\)

In this case, we obtain the same results as in Case 2.

Combining Cases 1, 2 and 3 completes the proof. \(\square \)
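Theorem 1 can also be spot-checked numerically. The fragment below is our own illustration: for arbitrarily chosen a, c, \(\alpha \) and \(\omega \) from the first regime \(\alpha>a>c\), it verifies that both roots of the quadratic in \(\lambda \) lie inside the unit circle.

```matlab
% Spot-check of Theorem 1 at a sample point in the regime alpha > a > c
% with 0 < omega < 2*alpha/(alpha - c); all values are hypothetical.
a = 2;  c = 1;  alpha = 3;
omega = 0.9 * 2*alpha/(alpha - c);        % inside the admissible interval
q1 = ((omega - 2)*alpha + omega*a - 2*omega*c)/(a + alpha);
q0 = (1 - omega)*(alpha - a)/(a + alpha);
lambda = roots([1, q1, q0]);              % roots of the quadratic in lambda
assert(all(abs(lambda) < 1))              % both roots inside the unit circle
```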

4 Numerical experiments

In this section, three examples are given to illustrate the feasibility and efficiency of the FPI-PSS method proposed in this work. To this end, we compare the FPI-PSS method with the FPI method [9], the FPI-SS method (4) and two new fixed point iteration methods [1] in terms of the number of iteration steps (denoted as “IT”), the elapsed CPU time in seconds (denoted as “CPU”) and the residual error (denoted as “RES”), which is defined by

$$\begin{aligned} \textrm{RES}:={\Vert Ax^{(k)}-\vert x^{(k)}\vert -b\Vert _2}. \end{aligned}$$

In our implementation, we choose \(P=I+H\) with \(H=\frac{A+A^T}{2}\), all initial guess vectors \(x^{(0)}\) and \(y^{(0)}\) are set to zero vectors, and all iterations are terminated if RES \(\le 10^{-6}\) or the number of iteration steps exceeds \(k_{\max }=500\). All computations are performed in MATLAB R2022b on a personal computer with a 1.80 GHz central processing unit (Intel(R) Core(TM) i7-8550U) and 8 GB memory.

Example 1

Let the coefficient matrix \(A\in {\mathbb {R}}^{n\times n}\) of AVE (1) be defined by \(A={\widehat{A}}+\mu I\in {\mathbb {R}}^{n\times n}\), where

$$\begin{aligned} {\widehat{A}}=\textrm{Tridiag}(-I,S,-I)=\left[ \begin{array}{cccccc} S &{}-I&{}0&{}\cdots &{}0&{}0\\ -I&{}S&{}-I&{}\cdots &{}0&{}0\\ 0 &{}-I&{}S&{}\cdots &{}0&{}0\\ \vdots &{}\vdots &{}&{}\ddots &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}\cdots &{}S&{}-I\\ 0&{}0&{}\cdots &{}\cdots &{}-I&{}S\\ \end{array} \right] \in {\mathbb {R}}^{n\times n} \end{aligned}$$

is a block-tridiagonal matrix,

$$\begin{aligned} S=\textrm{tridiag}(-1,4,-1)=\left[ \begin{array}{cccccc} 4&{}-1&{}0&{}\cdots &{}0&{}0\\ -1&{}4&{}-1&{}\cdots &{}0&{}0\\ 0&{}-1&{}4&{}\cdots &{}0&{}0\\ \vdots &{}\vdots &{}&{}\ddots &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}\cdots &{}4&{}-1\\ 0&{}0&{}\cdots &{}\cdots &{}-1&{}4\\ \end{array} \right] \in {\mathbb {R}}^{m\times m} \end{aligned}$$

is a tridiagonal matrix, and \(n=m^2\). Let \(x^*=(-0.5,-1,-0.5,\ldots ,-0.5,-1,\ldots )^T\in {\mathbb {R}}^n\) be the exact solution of the AVE (1).
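The construction of this test problem, together with the experimental setup of this section, can be sketched in MATLAB as follows; fpi_pss refers to the hypothetical sketch given after Method 3, and the values of \(\alpha \) and \(\omega \) are sample inputs rather than the optimal parameters reported in the tables.

```matlab
% Hypothetical script for Example 1 with mu = 1 and m = 32.
m  = 32;  n = m^2;  mu = 1;
S  = gallery('tridiag', m, -1, 4, -1);               % tridiag(-1, 4, -1)
T  = gallery('tridiag', m, -1, 0, -1);               % block coupling pattern
Ah = kron(speye(m), S) + kron(T, speye(m));          % Tridiag(-I, S, -I)
A  = Ah + mu*speye(n);                               % A = Ahat + mu*I
xs = repmat([-0.5; -1], n/2, 1);                     % exact solution x*
b  = A*xs - abs(xs);                                 % b chosen so x* solves (1)
P  = speye(n) + (A + A')/2;                          % P = I + (A + A')/2
[x, k, res] = fpi_pss(A, P, b, 1.0, 1.0, 1e-6, 500); % sample alpha, omega
```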

Table 1 Numerical results for Example 1 with \(\mu =1\)
Table 2 Numerical results for Example 1 with \(\mu =4\)

For different problem scales \(n=m^{2}\), the experimentally optimal parameters, IT, CPU and RES of the FPI, FPI-SS and FPI-PSS methods for Example 1 are listed in Tables 1 and 2 for \(\mu =1\) and \(\mu =4\), respectively. The optimal parameters are determined through numerical experiments as those that minimize the number of iteration steps for each method.

From Tables 1 and 2, we can see that each of the tested methods successfully converges to the exact solution of AVE (1), and that the number of iteration steps decreases as \(\mu \) increases. Among all tested iteration methods, the FPI-PSS method is the most efficient one, as it requires the fewest iteration steps and the least computation time to reach the stopping criterion.

Example 2

Let the coefficient matrix \(A\in {\mathbb {R}}^{n\times n}\) of AVE (1) be defined by \(A={\widehat{A}}+\mu I\in {\mathbb {R}}^{n\times n}\), where

$$\begin{aligned} {\widehat{A}}=\textrm{Tridiag}(-1.5I,S,-0.5I)=\left[ \begin{array}{cccccc} S&{}-0.5I&{}0&{}\cdots &{}0&{}0\\ -1.5I&{}S&{}-0.5I&{}\cdots &{}0&{}0\\ 0&{}-1.5I&{}S&{}\cdots &{}0&{}0\\ \vdots &{}\vdots &{}&{}\ddots &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}\cdots &{}S&{}-0.5I\\ 0&{}0&{}\cdots &{}\cdots &{}-1.5I&{}S\\ \end{array} \right] \in {\mathbb {R}}^{n\times n} \end{aligned}$$

is a block-tridiagonal matrix,

$$\begin{aligned} S=\textrm{tridiag}(-1.5,4,-0.5)=\left[ \begin{array}{cccccc} 4&{}-0.5&{}0&{}\cdots &{}0&{}0\\ -1.5&{}4&{}-0.5&{}\cdots &{}0&{}0\\ 0&{}-1.5&{}4&{}\cdots &{}0&{}0\\ \vdots &{}\vdots &{}&{}\ddots &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}\cdots &{}4&{}-0.5\\ 0&{}0&{}\cdots &{}\cdots &{}-1.5&{}4\\ \end{array} \right] \in {\mathbb {R}}^{m\times m} \end{aligned}$$

is a tridiagonal matrix, and \(n=m^2\). Let \(x^*=(-0.5,-1,-0.5,\ldots ,-0.5,-1,\ldots )^T\in {\mathbb {R}}^n\) be the exact solution of the AVE (1).

Table 3 Numerical results for Example 2 with \(\mu =1\)
Table 4 Numerical results for Example 2 with \(\mu =4\)

In Tables 3 and 4, we report the numerical results for Example 2 with \(\mu =1\) and \(\mu =4\), respectively. Notably, the FPI-PSS method requires fewer iteration steps and less computing time than the FPI method and the FPI-SS method.

Example 3

[1] Let the coefficient matrix \(A\in {\mathbb {R}}^{n\times n}\) of AVE (1) be defined by \(A={\widehat{A}}+\mu I\in {\mathbb {R}}^{n\times n}\), where

$$\begin{aligned} {\widehat{A}}=\textrm{Tridiag}(-1.5I,S,-0.5I)=\left[ \begin{array}{cccccc} S&{}-0.5I&{}0&{}\cdots &{}0&{}0\\ -1.5I&{}S&{}-0.5I&{}\cdots &{}0&{}0\\ 0&{}-1.5I&{}S&{}\cdots &{}0&{}0\\ \vdots &{}\vdots &{}&{}\ddots &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}\cdots &{}S&{}-0.5I\\ 0&{}0&{}\cdots &{}\cdots &{}-1.5I&{}S\\ \end{array} \right] \in {\mathbb {R}}^{n\times n} \end{aligned}$$

is a block-tridiagonal matrix,

$$\begin{aligned} S=\textrm{tridiag}(-1.5,8,-0.5)=\left[ \begin{array}{cccccc} 8&{}-0.5&{}0&{}\cdots &{}0&{}0\\ -1.5&{}8&{}-0.5&{}\cdots &{}0&{}0\\ 0&{}-1.5&{}8&{}\cdots &{}0&{}0\\ \vdots &{}\vdots &{}&{}\ddots &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}\cdots &{}8&{}-0.5\\ 0&{}0&{}\cdots &{}\cdots &{}-1.5&{}8\\ \end{array} \right] \in {\mathbb {R}}^{m\times m} \end{aligned}$$

is a tridiagonal matrix, and \(n=m^2\). Let \(x^*=(-1,1,-1,\ldots ,-1,1,\ldots )^T\in {\mathbb {R}}^n\) be the exact solution of the AVE (1).

Table 5 Numerical results for Example 3 with \(\mu =4\)

Table 5 presents the numerical results of the FPI-PSS method and the two methods from [1], where Method I uses parameter 1 and Method II uses parameter 0.97; see [1] for more details. From Table 5, we can see that the number of iteration steps of Method I is the same as that of the FPI-PSS method, but the FPI-PSS method requires less time than the other two methods to reach the stopping criterion. Thus, the proposed FPI-PSS method is more effective for solving the AVE (1).

We end this section with the following remark. From the numerical results above, we can see that the experimentally optimal parameter \(\omega \) of the FPI-PSS method is \(\omega _{opt}=1\) in all three tested examples with different problem scales. If \(\omega =1\), then \(y^{(k)}=\vert x^{(k)}\vert \) for \(k\ge 1\), and the iterative scheme of the FPI-PSS method becomes

$$\begin{aligned} x^{(k+1)}=(\alpha P+A)^{-1}(\alpha P-A)x^{(k)}+2(\alpha P+A)^{-1}(\vert x^{(k)}\vert +b), \end{aligned}$$

which is an inexact Picard method for solving the AVE (1).

5 Conclusions

In this paper, we propose an inexact fixed point iteration method, termed the FPI-PSS method, for solving the absolute value equation. The FPI-PSS method is constructed by combining the preconditioned shift-splitting iteration with the fixed point iteration method. Some convergence conditions for the FPI-PSS method are given. In addition, three examples show that the FPI-PSS method is superior to the other tested methods in terms of iteration steps and computing time. However, how to choose the optimal parameters in the FPI-PSS method needs further study.