1 Introduction

In this paper, we investigate iterative methods for solving the absolute value equations (AVEs)

$$\begin{aligned} Ax-B\vert x\vert =b, \end{aligned}$$
(1)

where \(A, B \in R^{n \times n}\) with \(B\not =0\), \(b,x \in R^{n}\), and \(\vert x\vert \) denotes the vector whose components are the absolute values of the components of x. When \(B=I\), the AVEs (1) reduces to the special form

$$\begin{aligned} Ax-\vert x\vert =b. \end{aligned}$$
(2)

If B is nonsingular, then the AVEs (1) can be rewritten as \(B^{-1}Ax-\vert x\vert =B^{-1}b\), which is of the form (2).

The system (1) was first introduced in Rohn (2004). For the existence and uniqueness of the solution of (1) and (2), one can see (Hu and Huang 2010; Mangasarian 2009b; Mangasarian and Meyer 2006; Rohn 2004, 2009; Wu 2021; Wu and Li 2018). In Mangasarian (2007), Mangasarian and Meyer (2006), Mezzadri (2020), Rohn (2004), it has been proved that the AVEs is equivalent to a linear complementarity problem (LCP), which is NP-hard in general; hence, solving the AVEs is NP-hard.

In this paper, we always assume that the AVEs is solvable.

In recent years, under conditions ensuring the existence and uniqueness of the solution of the AVEs (1) and (2), a variety of efficient iterative methods have been presented. For the AVEs (1), these include the Picard method (Rohn et al. 2014), the generalized Newton method (Hu et al. 2011; Miao et al. 2015; Wang et al. 2019), a smoothing Newton method (Miao et al. 2017), the Newton-based matrix splitting iterative method (Zhou et al. 2021), and the AOR and preconditioned AOR methods (Li 2017). For the AVEs (2), they include the generalized Newton method (Li 2016a; Mangasarian 2009a; Moosaei et al. 2015; Zhang and Wei 2009), the generalized Traub's method (Haghani 2015), the fixed point iteration method (Ke 2020; Yu et al. 2020), the SOR-like methods (Dong et al. 2020; Ke and Ma 2017; Li and Wu 2020), the Picard-HSS iteration method (Salkuyeh 2014), the nonlinear MHSS-like method (Li 2016b), the Picard-CSCS iteration method and the nonlinear CSCS-like iteration method (Gu et al. 2017), the Picard-HSS-SOR iteration method (Zheng 2020), and the generalized and preconditioned generalized Gauss-Seidel methods (Edalatpour et al. 2017), among others.

In this paper, to improve computing efficiency, we present a relaxed-based matrix splitting (RMS) method for solving the AVEs (1). As special cases, we propose a relaxed-based Picard (RP) method, a relaxed-based AOR (RAOR) method, and a relaxed-based SOR (RSOR) method. These methods include some known methods as special cases, such as the Newton-based matrix splitting iterative method (NMS method) (Zhou et al. 2021), the modified Newton type iteration method (MN method) (Wang et al. 2019), the Picard method (Rohn et al. 2014), the new SOR-like method (Dong et al. 2020), the fixed point iteration (FPI) method (Ke 2020; Yu et al. 2020), the SOR-like method (Ke and Ma 2017), the AOR method (Li 2017), the modified SOR-like (MSOR-like) method (Li and Wu 2020), etc. Moreover, we prove convergence theorems for the proposed methods. Finally, we use numerical examples to demonstrate the theoretical analysis and the superiority of the RMS method.

This paper is organized as follows. In Sect. 2, some notations and lemmas are reviewed. In Sect. 3, we propose the RMS, RP, RAOR, and RSOR methods for solving the AVEs (1). In Sect. 4, the convergence analysis of the proposed method is presented. Numerical examples and conclusions are given in Sects. 5 and 6, respectively.

2 Preliminaries

In this section, some notations and auxiliary results are presented.

Let \(R^{n \times n}\) denote the set of \(n \times n\) matrices with real entries and \(R^{n}=R^{n \times 1}\). The ith component of a vector \(x \in R^{n}\) is denoted by \(x_{i}\). Denote by \(\vert x\vert \) the vector whose ith component equals \(\vert x_{i}\vert \). For a matrix A, an expression \(A=Q-R\) is called a splitting of A when Q is nonsingular. \(\Vert A\Vert \) denotes the spectral norm defined by \(\Vert A\Vert =\max \{\Vert Ax\Vert :x \in R^{n},\Vert x\Vert =1\}\), where \(\Vert x\Vert \) is the 2-norm. For matrices \(A=(a_{ij}), B=(b_{ij}) \in R^{n \times n}\), we write \(A \ge B\) if \(a_{ij} \ge b_{ij}\), \(i,j = 1,\cdots , n\).

Lemma 1

(Young 1971, Lemma 6-2.1) If b and c are real, then both roots of the quadratic equation

$$\begin{aligned} x^{2}+bx+c=0 \end{aligned}$$

are less than one in modulus if and only if

$$\begin{aligned} \vert c \vert<1, \vert b \vert <1+c. \end{aligned}$$

The following results are obvious.

Lemma 2

For \(x,y \in R^{n}\), the following results hold:

  (i) \(\Vert \vert x \vert -\vert y \vert \Vert \le \Vert x-y\Vert \);

  (ii) if \(0 \le x \le y\), then \(\Vert x\Vert \le \Vert y\Vert \);

  (iii) if \(x \le y\) and \(Q \ge 0\), then \(Qx \le Qy\).

3 Relaxed-based matrix splitting method

In this section, some new methods for solving the AVEs (1) are presented.

Let \(y=\vert x\vert \). Then, the AVEs (1) is equivalent to the system

$$\begin{aligned} \left\{ \begin{array}{l} Ax-By=b,\\ y=\vert x\vert . \end{array}\right. \end{aligned}$$

We split A into

$$\begin{aligned} A = Q - R, \end{aligned}$$
(3)

where Q is nonsingular. Then, we define a relaxed-based matrix splitting (RMS) method for solving the AVEs (1) as

$$\begin{aligned} \left\{ \begin{array}{l} x^{k+1}=Q^{-1}(Rx^{k}+ By^{k}+ b), \\ y^{k+1}=(1-\tau )y^{k}+\tau \vert x^{k+1}\vert , \end{array}\right. k=0,1,2,\cdots , \end{aligned}$$
(4)

where \(\tau \) is a positive constant.
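For illustration, a minimal Matlab sketch of the iteration (4) is given below; the function name rms_ave, the zero initial guesses, and the relative-residual stopping rule are our own choices and not part of the method itself.

    % Minimal sketch of the RMS iteration (4), assuming a splitting A = Q - R with Q nonsingular.
    function [x, k] = rms_ave(A, B, b, Q, R, tau, tol, maxit)
        x = zeros(size(b));  y = zeros(size(b));        % zero initial guesses
        for k = 1:maxit
            x = Q \ (R*x + B*y + b);                    % x-update in (4)
            y = (1 - tau)*y + tau*abs(x);               % relaxed y-update in (4)
            if norm(A*x - B*abs(x) - b) <= tol*norm(b)  % relative residual test
                return
            end
        end
    end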

The RMS method has a general form and it contains many existing iterative methods as its special cases.

Let \(Q=\frac{1}{\omega }A\), \(R=\frac{1-\omega }{\omega }A\) with \(\omega \not =0\). Then, the RMS method defined by (4) reduces to a relaxed-based Picard (RP) method defined as

$$\begin{aligned} \left\{ \begin{array}{l} x^{k+1}=(1-\omega )x^{k}+\omega A^{-1}(By^{k}+b),\\ y^{k+1}=(1-\tau )y^{k}+\tau \vert x^{k+1}\vert . \end{array}\right. \end{aligned}$$
(5)

The RP method has a simple form and is especially efficient when \(A^{-1}\) is easy to apply. In particular, when A is a strictly diagonally dominant matrix, the RP method is a natural choice for solving the AVEs.
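As a usage illustration, the RP iteration (5) corresponds to calling the rms_ave sketch above with the splitting \(Q=\frac{1}{\omega }A\) and \(R=\frac{1-\omega }{\omega }A\); the parameter values below are illustrative only.

    % RP method (5) as a special case of the RMS sketch above.
    omega = 0.9;  tau = 1.1;                    % illustrative values, not optimal parameters
    Q = A/omega;
    R = ((1 - omega)/omega)*A;
    [x, it] = rms_ave(A, B, b, Q, R, tau, 1e-6, 500);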

For \(\tau =1\), the RMS method (4) reduces to the Newton-based matrix splitting iterative method (NMS method) (Zhou et al. 2021)

$$\begin{aligned} x^{k+1}=Q^{-1}(Rx^{k}+ B\vert x^{k}\vert + b), \end{aligned}$$
(6)

the modified Newton type iteration method (MN method) whenever R is a positive semi-definite matrix (Wang et al. 2019), and a Picard method for \(R=0\) (Rohn et al. 2014)

$$\begin{aligned} x^{k+1}=A^{-1}(B\vert x^{k}\vert + b). \end{aligned}$$
(7)

Compared with the NMS method (6), the RMS method (4) has an additional relaxation factor, and hence can be more effective.

We decompose A into

$$\begin{aligned} A = D - L - U, \end{aligned}$$

where D is the diagonal part of A, and \(-L\) and \(-U\) are its strictly lower and strictly upper triangular parts, respectively, as usual.
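In Matlab, this splitting can be formed as in the sketch below (the sign convention matches \(A = D - L - U\)).

    D = diag(diag(A));      % diagonal part of A
    L = -tril(A, -1);       % -L is the strictly lower triangular part of A
    U = -triu(A,  1);       % -U is the strictly upper triangular part of A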

In Wang et al. (2019) and Zhou et al. (2021), starting from the NMS method, the authors propose some special iterative methods, which include a special MN method

$$\begin{aligned} x^{k+1}=(A + \Omega )^{-1}(\Omega x^k + B\vert x^{k}\vert + b), \end{aligned}$$
(8)

the Newton-based Gauss-Seidel (NGS) method

$$\begin{aligned} x^{k+1}=(D + \Omega - L)^{-1}[(\Omega +U)x^k + B\vert x^{k}\vert + b], \end{aligned}$$
(9)

and the Newton-based SOR (NSOR) method

$$\begin{aligned} x^{k+1}=(D + \alpha \Omega - \alpha L)^{-1}\{[\alpha \Omega +(1-\alpha )D + \alpha U]x^k + \alpha (B\vert x^{k}\vert + b)\}. \end{aligned}$$
(10)

For \(\omega \in R \setminus \{0\}\) and \(\gamma \in R \), let

$$\begin{aligned} A = Q_{\gamma ,\omega } - R_{\gamma ,\omega }, \end{aligned}$$

where

$$\begin{aligned} Q_{\gamma ,\omega } = \frac{1}{\omega }(D - \gamma L), \; R_{\gamma ,\omega } = \frac{1}{\omega } \left[ (1-\omega )D + (\omega -\gamma )L + \omega U \right] . \end{aligned}$$

Then, the RMS method defined by (4) reduces to the relaxed-based AOR (RAOR) method defined as

$$\begin{aligned} \left\{ \begin{array}{l} x^{k+1}=(D-\gamma L)^{-1}\{[(1-\omega )D+(\omega -\gamma )L+\omega U]x^{k}+\omega (By^{k}+b)\}, \\ y^{k+1}=(1-\tau )y^{k}+\tau \vert x^{k+1}\vert . \end{array}\right. \end{aligned}$$
(11)
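A sketch of one RAOR sweep in (11), assuming D, L, and U from the splitting \(A=D-L-U\), current iterates x and y, given parameters omega, gamma, tau, and that \(D-\gamma L\) is nonsingular:

    % One step of the RAOR iteration (11).
    Qg = D - gamma*L;
    Rg = (1 - omega)*D + (omega - gamma)*L + omega*U;
    x_new = Qg \ (Rg*x + omega*(B*y + b));
    y_new = (1 - tau)*y + tau*abs(x_new);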

For \(\tau =1\), the RAOR method (11) turns into the AOR method (Li 2017):

$$\begin{aligned} x^{k+1}=(D-\gamma L)^{-1}\{[(1-\omega )D+(\omega -\gamma )L+\omega U]x^{k}+\omega (B\vert x^{k}\vert +b)\}. \end{aligned}$$
(12)

Compared with the AOR method (12), the RAOR method (11) has an additional relaxation factor, and hence can be more effective.

When \(\omega = \gamma \), the RAOR method reduces to the relaxed-based SOR (RSOR) method

$$\begin{aligned} \left\{ \begin{array}{l} x^{k+1}=(D-\omega L)^{-1}\{[(1-\omega )D+\omega U]x^{k}+\omega (By^{k}+b)\}, \\ y^{k+1}=(1-\tau )y^{k}+\tau \vert x^{k+1}\vert . \end{array}\right. \end{aligned}$$
(13)

For the AVEs (2), that is, for \(B=I\), some known methods can be derived as follows.

In this case, the RP method (5) is equivalent to the new SOR-like method (Dong et al. 2020), where \(\tau = \omega /\sigma \) with \(\sigma >0\).

The RP method (5) reduces to the fixed point iteration (FPI) method whenever \(\omega =1\) (Ke 2020; Yu et al. 2020)

$$\begin{aligned} \left\{ \begin{array}{l} x^{k+1}= A^{-1}(y^{k}+b),\\ y^{k+1}=(1-\tau )y^{k}+\tau \vert x^{k+1}\vert , \end{array}\right. \end{aligned}$$
(14)

and the SOR-like method whenever \(\tau =\omega \) (Ke and Ma 2017)

$$\begin{aligned} \left\{ \begin{array}{l} x^{k+1}=(1-\omega )x^{k}+\omega A^{-1}(y^{k}+b),\\ y^{k+1}=(1-\omega )y^{k}+\omega \vert x^{k+1}\vert . \end{array}\right. \end{aligned}$$
(15)

Compared with the FPI and SOR-like methods, the RP method has more degrees of freedom, so it may be more effective.

The RSOR method (13) reduces to the modified SOR-like (MSOR-like) method whenever \(\tau =\omega \) (Li and Wu 2020)

$$\begin{aligned} \left\{ \begin{array}{l} x^{k+1}=(D-\omega L)^{-1}\{[(1-\omega )D+\omega U]x^{k}+\omega (y^{k}+b)\}, \\ y^{k+1}=(1-\omega )y^{k}+\omega \vert x^{k+1}\vert . \end{array}\right. \end{aligned}$$
(16)

Compared with the MSOR-like method, the RSOR method has an additional degree of freedom, so it may be more effective.

4 Convergence analysis

In this section, we discuss the convergence of the methods proposed in Sect. 3.

For the AVEs (1) and the splitting (3), denote

$$\begin{aligned} \mu =\vert \vert Q^{-1}R\vert \vert , \; \nu =\vert \vert Q^{-1}B\vert \vert . \end{aligned}$$
(17)

Lemma 3

If

$$\begin{aligned} \mu \vert 1-\tau \vert<1, \; \tau \nu <(1-\mu )(1-\vert 1-\tau \vert ), \end{aligned}$$
(18)

then the RMS method (4) for solving the AVEs (1) is convergent.

Proof

Let \(x^{*}\) be a solution of the AVEs (1) and set \(y^{*}=\vert x^{*}\vert \). Then, it holds that

$$\begin{aligned} \left\{ \begin{array}{l} x^{*}=Q^{-1}Rx^{*}+Q^{-1}By^{*}+Q^{-1}b, \\ y^{*}=(1-\tau )y^{*}+\tau \vert x^{*}\vert . \end{array}\right. \end{aligned}$$
(19)

Denote

$$\begin{aligned} \left\{ \begin{array}{l} \varepsilon _{k}^{x}=x^{k}-x^{*},\\ \varepsilon _{k}^{y}=y^{k}-y^{*}, \end{array}\right. k=0,1,2,\cdots \end{aligned}$$

Subtracting (19) from (4), we have

$$\begin{aligned} \left\{ \begin{array}{l} \varepsilon _{k+1}^{x}=Q^{-1}R\varepsilon _{k}^{x}+Q^{-1}B\varepsilon _{k}^{y},\\ \varepsilon _{k+1}^{y}=(1-\tau )\varepsilon _{k}^{y}+\tau (\vert x^{k+1}\vert -\vert x^{*}\vert ), \end{array}\right. \end{aligned}$$

so that

$$\begin{aligned} \vert \varepsilon _{k+1}^{x}\vert\le & {} \vert Q^{-1}R\vert \vert \varepsilon _{k}^{x}\vert +\vert Q^{-1}B\vert \vert \varepsilon _{k}^{y}\vert ,\\ \vert \varepsilon _{k+1}^{y}\vert\le & {} \vert 1-\tau \vert \vert \varepsilon _{k}^{y}\vert +\tau \vert \vert x^{k+1}\vert -\vert x^{*}\vert \vert \le \vert 1-\tau \vert \vert \varepsilon _{k}^{y}\vert +\tau \vert \varepsilon _{k+1}^{x}\vert . \end{aligned}$$

By Lemma 2, we have

$$\begin{aligned} \vert \vert \varepsilon _{k+1}^{x}\vert \vert \le \mu \vert \vert \varepsilon _{k}^{x}\vert \vert + \nu \vert \vert \varepsilon _{k}^{y}\vert \vert , \; \vert \vert \varepsilon _{k+1}^{y}\vert \vert \le \vert 1-\tau \vert \vert \vert \varepsilon _{k}^{y}\vert \vert +\tau \vert \vert \varepsilon _{k+1}^{x}\vert \vert . \end{aligned}$$

This can be rewritten as

$$\begin{aligned} \left[ \begin{array}{cc} -\tau &{} 1 \\ 1 &{} 0 \end{array}\right] \left[ \begin{array}{c} \vert \vert \varepsilon _{k+1}^{x}\vert \vert \\ \vert \vert \varepsilon _{k+1}^{y}\vert \vert \end{array}\right] \le \left[ \begin{array}{cc} 0 &{} \vert 1-\tau \vert \\ \mu &{} \nu \end{array}\right] \left[ \begin{array}{c} \vert \vert \varepsilon _{k}^{x}\vert \vert \\ \vert \vert \varepsilon _{k}^{y}\vert \vert \end{array}\right] , \end{aligned}$$

which, since \(\left[ \begin{array}{cc} -\tau &{} 1 \\ 1 &{} 0 \end{array}\right] ^{-1}=\left[ \begin{array}{cc} 0 &{} 1 \\ 1 &{} \tau \end{array}\right] \ge 0\) for \(\tau >0\), together with Lemma 2(iii) implies that

$$\begin{aligned} \left[ \begin{array}{c} \vert \vert \varepsilon _{k+1}^{x}\vert \vert \\ \vert \vert \varepsilon _{k+1}^{y}\vert \vert \end{array}\right]\le & {} \left[ \begin{array}{cc} 0 &{} 1 \\ 1 &{} \tau \end{array}\right] \left[ \begin{array}{ccc} 0 &{}&{} \vert 1-\tau \vert \\ \mu &{}&{} \nu \end{array}\right] \left[ \begin{array}{c} \vert \vert \varepsilon _{k}^{x}\vert \vert \\ \vert \vert \varepsilon _{k}^{y}\vert \vert \end{array}\right] \nonumber \\\le & {} \left[ \begin{array}{ccc} \mu &{}&{} \nu \\ \tau \mu &{}&{} \vert 1-\tau \vert +\tau \nu \end{array}\right] \left[ \begin{array}{c} \vert \vert \varepsilon _{k}^{x}\vert \vert \\ \vert \vert \varepsilon _{k}^{y}\vert \vert \end{array}\right] \nonumber \\\le & {} \cdots \le \left[ \begin{array}{ccc} \mu &{}&{} \nu \\ \tau \mu &{}&{} \vert 1-\tau \vert +\tau \nu \end{array}\right] ^{k+1}\left[ \begin{array}{c} \vert \vert \varepsilon _{0}^{x}\vert \vert \\ \vert \vert \varepsilon _{0}^{y}\vert \vert \end{array}\right] . \end{aligned}$$
(20)

Let

$$\begin{aligned} W=\left[ \begin{array}{ccc} \mu &{}&{} \nu \\ \tau \mu &{}&{} \vert 1-\tau \vert +\tau \nu \end{array}\right] , \end{aligned}$$

and let \(\lambda \) be an eigenvalue of W. Then, we have

$$\begin{aligned} \det (\lambda I - W) = \det \left[ \begin{array}{ccc} \lambda -\mu &{}&{} -\nu \\ -\tau \mu &{}&{} \lambda -\vert 1-\tau \vert -\tau \nu \end{array}\right] =0. \end{aligned}$$

That is

$$\begin{aligned} \lambda ^{2}-(\vert 1-\tau \vert +\tau \nu +\mu )\lambda +\mu \vert 1-\tau \vert =0. \end{aligned}$$

By (18), it holds that

$$\begin{aligned} \mu \vert 1-\tau \vert<1, \; \vert 1-\tau \vert +\tau \nu +\mu <1+\mu \vert 1-\tau \vert . \end{aligned}$$

It follows by Lemma 1 that \(\vert \lambda \vert <1\), which implies \(\rho (W)<1\).

Hence, \(W^{k} \rightarrow 0\) (\(k \rightarrow \infty \)), so that, by (20), \(\varepsilon _{k}^{x} \rightarrow 0\), \(\varepsilon _{k}^{y} \rightarrow 0\) (\(k \rightarrow \infty \)). This proves that the RMS method (4) is convergent. \(\square \)

Theorem 4

Suppose that \(\mu + \nu <1\). If

$$\begin{aligned} 0< \tau < \frac{2(1-\mu )}{1-\mu +\nu }, \end{aligned}$$
(21)

then the RMS method (4) for solving the AVEs (1) is convergent.
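In practice, the sufficient condition of Theorem 4 can be checked numerically for a given splitting; the following Matlab sketch computes \(\mu \) and \(\nu \) of (17) and the admissible range (21) for \(\tau \) (using the matrix 2-norm).

    % Check the condition of Theorem 4 and compute the admissible range (21) for tau.
    mu = norm(full(Q \ R));                   % ||Q^{-1}R|| in (17)
    nu = norm(full(Q \ B));                   % ||Q^{-1}B|| in (17)
    if mu + nu < 1
        tau_max = 2*(1 - mu)/(1 - mu + nu);   % any tau in (0, tau_max) yields convergence
    end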

Proof

If \(0 < \tau \le 1\), then \(\mu \vert 1-\tau \vert = \mu (1-\tau ) \le \mu < 1\) and, since \(\nu < 1-\mu \), \(\tau \nu < \tau (1-\mu ) = (1-\mu )(1-\vert 1-\tau \vert )\). Hence, the inequalities in (18) hold.

For \(1\le \tau < 2(1-\mu )/(1-\mu +\nu )\), on the one hand, we obtain \(\tau \nu <(1-\mu )(2-\tau ) = (1-\mu )(1-\vert 1-\tau \vert )\), which shows that the second inequality in (18) holds. On the other hand, since \(2\mu (1-\mu )/(1-\mu +\nu ) < 1+\mu \), it follows that \(\tau \mu < 1+\mu \), so that \(\mu \vert \tau -1\vert = \mu (\tau -1)<1\), which shows that the first inequality in (18) holds.

In summary, we have proved that if (21) is satisfied, then the inequality (18) holds. It follows by Lemma 3 that the RMS method (4) for solving the AVEs (1) is convergent. \(\square \)

Theorem 5

Let A be nonsingular and \(\Vert A^{-1}B\Vert < 1\). Then, the RP method (5) for solving the AVEs (1) is convergent, whenever either

$$\begin{aligned} 0< \omega \le 1, \; 0< \tau < \frac{2}{1 + \Vert A^{-1}B\Vert } \end{aligned}$$

or

$$\begin{aligned} 1< \omega< \frac{2}{1 + \Vert A^{-1}B\Vert }, \; 0< \tau < \frac{2(2- \omega )}{2-\omega + \omega \Vert A^{-1}B\Vert }. \end{aligned}$$

Proof

For the RP method, \(\mu \) and \(\nu \) defined by (17) reduce to \(\mu =\vert 1-\omega \vert \) and \(\nu =\vert \omega \vert \Vert A^{-1}B\Vert \).

When \(\Vert A^{-1}B\Vert < 1\) and \(0< \omega < 2/(1 + \Vert A^{-1}B\Vert )\), it is easy to verify that \(\mu + \nu = \vert 1-\omega \vert + \omega \Vert A^{-1}B\Vert < 1\). Moreover,

$$\begin{aligned} \frac{2(1-\mu )}{1-\mu +\nu } = \frac{2(1-\vert 1-\omega \vert )}{1-\vert 1-\omega \vert +\omega \Vert A^{-1}B\Vert }. \end{aligned}$$

Now, by Theorem 4, the required result follows directly. \(\square \)

When \(\omega =1\), by Theorem 5, the following corollary is obvious.

Corollary 6

Let A be nonsingular. If

$$\begin{aligned} \Vert A^{-1}\Vert< 1, \; 0< \tau < \frac{2}{1+\Vert A^{-1}\Vert }, \end{aligned}$$

then the FPI method (14) for solving the AVEs (2) is convergent.

This convergence result is better than the corresponding one given by (Ke 2020, Theorem 2.1).

It is easy to see that if \(\tau = \omega \), then the condition in Theorem 5 reduces to

$$\begin{aligned} \Vert A^{-1}B\Vert< 1, \; 0< \omega < \frac{2}{1+\sqrt{\Vert A^{-1}B\Vert }}. \end{aligned}$$
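Indeed, writing \(a=\Vert A^{-1}B\Vert <1\) and setting \(\tau =\omega \) with \(\omega >1\) in the second condition of Theorem 5 gives

$$\begin{aligned} \omega< \frac{2(2-\omega )}{2-\omega +\omega a} \;\Longleftrightarrow \; (1-a)\omega ^{2}-4\omega +4> 0 \;\Longleftrightarrow \; \omega< \frac{2}{1+\sqrt{a}} \;\text { or }\; \omega > \frac{2}{1-\sqrt{a}}, \end{aligned}$$

and since \(2/(1+a) < 2/(1-\sqrt{a})\), only the branch \(\omega < 2/(1+\sqrt{a})\) is compatible with \(\omega < 2/(1+a)\); for \(0<\omega \le 1\), the first condition of Theorem 5 holds automatically because \(2/(1+a)>1\).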

Hence, by Theorem 5, the following corollary is direct.

Corollary 7

Let A be nonsingular. If

$$\begin{aligned} \Vert A^{-1}\Vert< 1, \; 0< \omega < \frac{2}{1+\sqrt{\Vert A^{-1}\Vert }}, \end{aligned}$$

then the SOR-like method (15) for solving the AVEs (2) is convergent.

This convergence condition is simpler than the corresponding one given by Ke and Ma (2017).

Lemma 8

(Li 2017, Lemma 4.1) If \(\rho (\vert Q^{-1}R\vert + \vert Q^{-1}B\vert ) < 1\), then the NMS method (6) for solving the AVEs (1) is convergent.

This lemma implies (Zhou et al. 2021, Theorem 4.1) and is better than (Wang et al. 2019, Theorem 3.1).

Since the Picard method (7) is a special case of the NMS method, from Lemma 8, the following corollary is obvious.

Corollary 9

If A is nonsingular and \(\rho (\vert A^{-1}B\vert ) < 1\), then the Picard method (7) for solving the AVEs (1) is convergent.

This corollary is consistent with (Moosaei et al. 2015, Theorem 4.3) and is better than (Wang et al. 2019, Corollary 3.2). Moreover, the convergence result given in (Rohn et al. 2014, Theorem 2) can be derived directly from this corollary.

From Theorem 4, the following convergence for the RAOR method (11) is direct.

Theorem 10

Let \(\tilde{\mu }=\vert \vert (D-\gamma L)^{-1}[(1-\omega )D+(\omega -\gamma )L+\omega U]\vert \vert \), \(\tilde{\nu }=\vert \vert (D-\gamma L)^{-1} B\vert \vert \). Assume that \(\tilde{\mu } + \omega \tilde{\nu } < 1\). If

$$\begin{aligned} 0< \tau < \frac{2(1-\tilde{\mu })}{1-\tilde{\mu }+\omega \tilde{\nu }}, \end{aligned}$$

then the RAOR method (11) for solving the AVEs (1) is convergent.

When \(\omega =\gamma \), we can obtain the convergence of the RSOR method (13).

For the MSOR-like method (16), this theorem is consistent with (Li and Wu 2020, Theorem 1).

It is easy to prove that if \(\tilde{\mu } + \omega \tilde{\nu } < 1\) then

$$\begin{aligned} \frac{2(1-\tilde{\mu })}{1-\tilde{\mu }+\omega \tilde{\nu }} > 1. \end{aligned}$$

Hence, the following result follows immediately from Theorem 10.

Corollary 11

Let \(\tilde{\mu }=\vert \vert (D-\gamma L)^{-1}[(1-\omega )D+(\omega -\gamma )L+\omega U]\vert \vert \), \(\tilde{\nu }=\vert \vert (D-\gamma L)^{-1} B\vert \vert \). Assume that \(\tilde{\mu } + \omega \tilde{\nu } < 1\). Then, the AOR method (12) for solving the AVEs (1) is convergent.

5 Numerical examples

In this section, three numerical examples are tested to demonstrate the effectiveness and feasibility of our methods for solving the AVEs (1) and (2) in terms of the number of iterations, the elapsed computation time, and the relative residual error.

For convenience, the number of iterations, the elapsed computation time (in seconds), and the relative residual error are denoted by ‘IT’, ‘CPU’, and ‘RES’, respectively. In all computations, the initial vectors are taken as zero vectors, and the iterations are terminated once the relative residual satisfies \(\vert \vert Ax_{k}-B\vert x_{k}\vert -b\vert \vert _{2}/\vert \vert b\vert \vert _{2} \le 10^{-6}\) or the CPU time exceeds 500 s. All tests are performed in Matlab 2016b on a personal computer with a 2.30 GHz central processing unit (Intel(R) Core(TM) i5-8259U), 8 GB memory, and the Windows 10 operating system.

Example 1

(Guo et al. 2019) Let m be a prescribed positive integer and \(n=m^{2}\). Consider the AVEs (2) with

$$\begin{aligned}{} & {} A=tridiag(-I,S,-I)=\left[ \begin{array}{cccccc} S &{} -I &{} 0 &{} \cdots &{} 0 &{} 0\\ -I &{} S &{} -I &{} \cdots &{} 0 &{} 0\\ 0 &{} -I &{} S &{} \cdots &{} 0 &{} 0\\ \vdots &{} \vdots &{} \vdots &{}\ddots &{} \vdots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} S &{} -I\\ 0 &{} 0 &{} 0 &{} \cdots &{} -I &{} S\\ \end{array}\right] \in R^{n \times n},\\{} & {} S=tridiag(-1,8,-1)=\left[ \begin{array}{cccccc} 8 &{} -1 &{} 0 &{} \cdots &{} 0 &{} 0\\ -1 &{} 8 &{} -1 &{} \cdots &{} 0 &{} 0\\ 0 &{} -1 &{} 8 &{} \cdots &{} 0 &{} 0\\ \vdots &{} \vdots &{} \vdots &{}\ddots &{} \vdots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} 8 &{} -1\\ 0 &{} 0 &{} 0 &{} \cdots &{} -1 &{} 8\\ \end{array}\right] \in R^{m \times m},\\{} & {} x^{*}=(-1,1,-1,1,\cdots ,-1,1)^{\top } \in R^{n}, \; b=Ax^{*}-\vert x^{*}\vert . \end{aligned}$$
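For reference, the test data of Example 1 can be generated in Matlab via Kronecker products; the sketch below is our own construction (m is taken even so that \(x^{*}\) alternates between \(-1\) and 1).

    % Build the Example 1 data (AVEs (2)) for a given even m, with n = m^2.
    m = 64;  n = m^2;
    S = spdiags(ones(m,1)*[-1 8 -1], -1:1, m, m);     % S = tridiag(-1,8,-1)
    T = spdiags(ones(m,2), [-1 1], m, m);             % ones on the sub/super diagonals
    A = kron(speye(m), S) - kron(T, speye(m));        % A = tridiag(-I,S,-I)
    xs = repmat([-1; 1], n/2, 1);                     % exact solution x^*
    b  = A*xs - abs(xs);                              % right-hand side of (2)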

We test the RP method (5) with the experimental optimal parameters \((\omega ,\tau )\), the SOR-like method (15) with the optimal parameter given in Guo et al. (2019), the RAOR method (11) with the parameters \((\omega ,\gamma ,\tau )=(0.9,0.85,1.6)\), the AOR method (12) with the parameters \((\omega ,\gamma )=(0.9,0.85)\), the MN method (8), the NGS method (9), and the NSOR method (10) with the experimental optimal parameter, respectively. For the MN, NGS, and NSOR methods, \(\Omega \) is taken as two common cases. The experimental results are listed in Table 1.

Table 1 Numerical results of Example 1

From Table 1, we can see that all seven tested methods converge quickly to the solution \(x^{*}\) for different problem sizes. As the matrix dimension increases, IT remains almost constant, which shows that these seven tested methods are stable. Furthermore, the RP, SOR-like, and RAOR methods require less CPU time and fewer iterations than the other methods. In terms of CPU time and IT, the RP method requires the least computing time and the fewest iterations.

In Mangasarian and Meyer (2006), the linear complementarity problem (LCP)

$$\begin{aligned} Mz+q \ge 0, z \ge 0, z^{\top }(Mz+q)=0 \end{aligned}$$
(22)

with \(M \in R^{n \times n}\), \(\det (I-M) \ne 0\) and \(q \in R^n\), can be reduced to the AVEs

$$\begin{aligned} (M + I )x - (M - I )\vert x\vert = q \end{aligned}$$
(23)

with \(x = \frac{1}{2}[(M - I )z + q]\).
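A Matlab sketch of this reduction is given below; the variable names are ours, and the recovery \(z=\vert x\vert -x\) follows from \(x = \frac{1}{2}[(M - I )z + q]\) together with the complementarity of z and \(Mz+q\).

    % Reduce the LCP (22) to the AVEs (23) and recover the LCP solution afterwards.
    Aave = M + eye(n);
    Bave = M - eye(n);
    bave = q;
    % ... solve Aave*x - Bave*abs(x) = bave for x (e.g., with the RMS sketch above) ...
    z = abs(x) - x;          % solution of the LCP (22)
    w = abs(x) + x;          % complementary vector Mz + q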

Example 2

(Zhou et al. 2021) Consider the LCP (22), where \(M=\hat{M}+ 4 I\) with

$$\begin{aligned} \hat{M}=tridiag(-I,S,-I) \in R^{n \times n}, \; S=tridiag(-1,4,-1) \in R^{m \times m}, \end{aligned}$$

and \(q=-Mz^{*}\) with \(z^{*}= (1.2,1.2,\cdots ,1.2)^{\top } \in R^{n}\) being its unique solution. In this case, the unique solution of the corresponding AVEs (23) is

$$\begin{aligned} x^{*}= (-0.6,-0.6,\cdots ,-0.6)^{\top } \in R^{n}. \end{aligned}$$

We test the RP method (5) with the parameters \((\omega ,\tau ) = (0.6,0.998)\) and \((\omega ,\tau ) = (0.998,0.598)\), the Picard method (7), the MN method (8), the NGS method (9), and the NSOR method (10) with the optimal parameter \(\alpha = 0.9\). For the MN, NGS, and NSOR methods, set \(\Omega =\hat{M}\).

Some iterative methods for solving the LCP (22) have been proposed. In Huang and Cui (2022), the authors proposed a class of RMMS methods, among which the RMSOR method is the most effective one. Here, we test the RMSOR method with \(\Omega =0.5I\) and \(\gamma =1\), where the parameter in the SOR splitting and the relaxation parameter \(\theta \) are selected as the experimental optimal values 0.6 and 0.98, respectively.

The experimental results are listed in Table 2, where ‘–’ denotes that the CPU time exceeds 500 s.

Table 2 Numerical results of Example 2

From Table 2, we can see that the Picard method does not converge within 500 s, while the other six tested methods converge quickly to the solution \(x^{*}\) for different problem sizes. As the matrix dimension increases, IT remains constant for all of these methods except the MN method, which shows that they are effective and stable. Further, the RP, NGS, and NSOR methods require less CPU time and fewer iterations than the other methods. In terms of CPU time and IT, the RP method requires the least computing time and the fewest iterations.

Example 3

Consider the AVEs (1), where the matrices A and B and the vector b are generated by the following Matlab procedure: \(A=rand(n,n)+n*eye(n) \in R^{n \times n}\), \(B=rand(n,n)+\frac{n}{2}* eye(n) \in R^{n \times n}\), \(b=ones(n,1) \in R^{n}\).

We test the RAOR method (11), the Picard method (7), and the MSOR-like method (16), where we choose five increasing sizes: \(n=4000, 5000, 6000, 7000, 8000\). The experimental results are listed in Table 3, where ‘–’ denotes that the CPU time exceeds 500 s.

Table 3 Numerical results of Example 3

From Table 3, we can see that the MSOR-like method does not converge within 500 s, and that the RAOR method requires less CPU time and fewer iterations than the Picard method. This shows that the RAOR method is superior to the Picard and MSOR-like methods under some conditions.

6 Conclusions

In this paper, we investigate iterative methods for solving the AVEs (1). Using matrix splitting and the relaxation technique, a relaxed-based matrix splitting (RMS) method is presented. As special cases, we propose the RP, RAOR, and RSOR methods. These methods include some known methods as special cases, such as the Newton-based matrix splitting iterative method (NMS method) (Zhou et al. 2021), the modified Newton type iteration method (MN method) (Wang et al. 2019), the Picard method (Rohn et al. 2014), the new SOR-like method (Dong et al. 2020), the fixed point iteration (FPI) method (Ke 2020; Yu et al. 2020), the SOR-like method (Ke and Ma 2017), the AOR method (Li 2017), the modified SOR-like (MSOR-like) method (Li and Wu 2020), etc.

We prove convergence theorems for the proposed methods. Numerical examples show that our methods are superior to the Picard, SOR-like, MN, NGS, AOR, NSOR, and MSOR-like methods, as well as to the RMSOR method given in Huang and Cui (2022), under some conditions.

For the RMS method, how to construct more effective splitting and make the method converge faster still needs further research.

With reference to Li (2017), we can construct a preconditioned relaxed-based matrix splitting method.

For the RMS, RP, RAOR, and RSOR methods, some parameters are involved, and in this paper they are mainly chosen by trial calculation. How to select better parameters is an interesting, important, and admittedly difficult problem, which deserves further study.