1 Introduction

Consider the generalized system of absolute value equations (GAVEs) of the form

$$\begin{aligned} Ax + B\!\left| x \right| = b, \end{aligned}$$
(1)

where \(A,B\in \mathbb {R}^{n\times n}\) and \(b\in \mathbb {R}^n\) are given and \(\left| x \right| = {(\left| {{x_1}} \right| ,\left| {{x_2}} \right| ,\cdots ,\left| {{x_n}} \right| )^\textrm{T}}\in \mathbb {R}^n\) denotes the componentwise absolute value of the unknown vector \(x\in \mathbb {R}^n\). The GAVE (1) was formally introduced by Rohn [29] and further investigated in [8, 19, 25]. Moreover, the GAVE (1) is a special case of the system of weakly nonlinear equations, which has been studied extensively in earlier works [1, 2, 6, 27]. When B is chosen to be the negative identity matrix, i.e., \(B=-I\), the GAVE reduces to the system of absolute value equations (AVE) of the form

$$\begin{aligned} Ax - \left| x \right| = b. \end{aligned}$$
(2)

The existence and uniqueness of the solution of (2) were discussed in Refs. [25, 29]. In the following, we assume that the solution of (2) exists and is unique.

It is well known that the AVE (2) can be derived from the linear complementarity problem (LCP) [3, 9, 10, 25], which has broad applications in scientific computing and engineering, such as linear programming, quadratic programming, bimatrix games, quasi-complementarity problems, and so on [12, 22, 25, 29, 35]. Owing to the nonlinear and non-differentiable term \(\left| x \right| \), the AVE (2) is an NP-hard problem. If the term \(\left| x \right| \) is absent, the AVE (2) reduces to a system of linear equations, which can be solved by direct or iterative methods; see [7].

In the past two decades, the AVE (2) has attracted increasing interest due to its simple and special structure. Many iteration methods, such as the successive linearization method [22], the sign accord method [30], the hybrid method [24], the optimization method [28], and so on, have been employed to approximate the solution of the AVE (2). In 2009, motivated by the Newton method, Mangasarian [23] introduced a subgradient of the non-differentiable term \(\left| x \right| \) and presented the generalized Newton (GN) iteration method of the form

$$\begin{aligned} (A - D({x^{(k)}})){x^{(k + 1)}} = b, \end{aligned}$$

where \(D(x) = \mathrm{{diag}}(\mathrm{{sgn}}(x))\) for \(x\in \mathbb {R}^n\), and \(\mathrm{{sgn}}(x)\) denotes the vector whose components equal \(-1\), 0, or 1 depending on whether the corresponding component of x is negative, zero, or positive. Thereafter, some scholars established more efficient iteration methods based on the GN iteration, for example, the generalized Traub method [15], the modified GN method [20], and the relaxed GN method [11]. However, these iteration methods incur high computational costs in practice because the coefficient matrix changes from step to step. In 2014, Rohn et al. [31] proposed a more practical Picard iteration method for solving the AVE (2), that is,
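As an illustration, a minimal sketch of the GN iteration in Python/NumPy is given below; the stopping rule, tolerance, and iteration cap are illustrative choices and not part of the method in [23].

```python
import numpy as np

def generalized_newton(A, b, x0, tol=1e-6, max_iter=100):
    """Minimal sketch of the generalized Newton (GN) iteration for Ax - |x| = b.

    Each step solves (A - D(x_k)) x_{k+1} = b with D(x) = diag(sgn(x)),
    so the coefficient matrix changes and must be refactorized at every step.
    """
    x = x0.copy()
    for k in range(max_iter):
        D = np.diag(np.sign(x))                     # subgradient of |x| at the current iterate
        x = np.linalg.solve(A - D, b)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```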

$$\begin{aligned} A{x^{(k + 1)}} = |{x^{(k)}}| + b,\quad k = 0,1,2,\cdots . \end{aligned}$$
(3)

In contrast to the GN methods, the coefficient matrix of the Picard iteration scheme (3) does not change with the iteration steps. Hence, the cost per step of the Picard iteration is lower than that of the GN methods. Further Picard-type iteration methods for solving the AVE can be found in Refs. [13, 26, 32]. In addition, one of the important sources of the problem (2) is the modulus-based fixed-point reformulation introduced and analyzed in [3]. The iteration scheme (3) and most of the schemes mentioned above are special cases of the methods in [2].
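Because the coefficient matrix in (3) is fixed, it can be factorized once and reused in every step. A minimal Python sketch follows, where the LU factorization and the relative-residual stopping rule are illustrative implementation choices.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def picard(A, b, x0, tol=1e-6, max_iter=500):
    """Minimal sketch of the Picard iteration A x_{k+1} = |x_k| + b for the AVE (2)."""
    lu = lu_factor(A)                               # factorize the fixed matrix A only once
    x = x0.copy()
    for k in range(max_iter):
        x = lu_solve(lu, np.abs(x) + b)             # each step costs a pair of triangular solves
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```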

Recently, by equivalently reformulating the AVE (2) as a block two-by-two nonlinear equation, Ke and Ma [18] proposed an SOR-like iteration method for solving the AVE (2). Subsequently, other iteration methods for solving the equivalent block two-by-two form of the AVE (2) have also been established, for example, the fixed point (FP) iteration method [17, 38], the block-diagonal and anti-block-diagonal splitting (BAS) iteration method [21], and so on. The BAS is closely related to the Hermitian and skew-Hermitian splitting (HSS) [5] and the modified HSS (MHSS) [4]. In this work, to further improve the efficiency of the BAS iteration method, a new iteration method is proposed by introducing a dynamic control parameter into the BAS iteration scheme. Since the control parameter is determined by minimizing the residual norm, the new iteration method is named the minimum residual BAS (abbreviated as MRBAS) iteration. This acceleration idea comes from the minimum residual HSS (MRHSS) iteration method for solving non-Hermitian positive definite linear systems [36]. Other similar and important acceleration techniques, such as the minimum residual smoothing, can be found in Refs. [14, 34].

The remainder of this paper is organized as follows. In Sect. 2, we define the MRBAS iteration method by introducing a control parameter into the BAS iteration scheme. In Sect. 3, the convergence of the MRBAS iteration method for solving the system of absolute value equations (2) is carefully discussed under suitable restrictions. In Sect. 4, numerical experiments are employed to verify the feasibility and effectiveness of the MRBAS iteration method. Finally, in Sect. 5, we give a brief conclusion.

2 The MRBAS Iteration Method

In this section, by introducing a control parameter for the BAS iteration scheme, the MRBAS iteration method is proposed to solve the AVE (2).

Let \(y = |x|\). Then, the AVE (2) can be rewritten as

$$\begin{aligned} \left\{ {\begin{array}{*{20}{c}} {Ax - y = b,}\\ { - |x| + y = 0,} \end{array}} \right. \end{aligned}$$

which is equivalent to

$$\begin{aligned} \bar{A}z = \left[ {\begin{array}{*{20}{c}} A&{}{ - I}\\ { - \hat{D}}&{}I \end{array}} \right] \left[ {\begin{array}{*{20}{c}} x\\ y \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} b\\ 0 \end{array}} \right] = \bar{b}, \end{aligned}$$
(4)

where \(\hat{D} = D(x) = \mathrm{{diag}}(\mathrm{{sgn}}(x))\), \(x\in \mathbb {R}^n\). The coefficient matrix \(\bar{A}\) can be split into the sum of a block-diagonal matrix and an anti-block-diagonal matrix, i.e.,

$$\begin{aligned} \bar{A} = \left[ {\begin{array}{*{20}{c}} A&{}{ - I}\\ { - \hat{D}}&{}I \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} A&{}0\\ 0&{}I \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} 0&{}{ - I}\\ { - \hat{D}}&{}0 \end{array}} \right] . \end{aligned}$$

It is easy to see that the matrix \(\bar{A}\) admits the splitting

$$\begin{aligned} \bar{A} = \left[ {\begin{array}{*{20}{c}} {\alpha I + A}&{}0\\ 0&{}{\alpha I + I} \end{array}} \right] - \left[ {\begin{array}{*{20}{c}} {\alpha I}&{}I\\ {\hat{D}}&{}{\alpha I} \end{array}} \right] , \end{aligned}$$
(5)

where \(\alpha \) is a given positive constant such that \(\alpha I + A\) is nonsingular.
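The splitting (5) is easily verified numerically; the following small sketch builds \(\bar{A}\) and the two matrices in (5) for a random instance (the size, the sample matrix A, and the value of \(\alpha \) are arbitrary illustrations).

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 5, 0.5
A = rng.standard_normal((n, n)) + 4.0 * np.eye(n)    # sample matrix; alpha*I + A nonsingular
x = rng.standard_normal(n)
D_hat = np.diag(np.sign(x))                           # D_hat = diag(sgn(x))
I, O = np.eye(n), np.zeros((n, n))

A_bar = np.block([[A, -I], [-D_hat, I]])              # coefficient matrix of (4)
block_diag = np.block([[alpha * I + A, O], [O, (alpha + 1.0) * I]])
anti_block = np.block([[alpha * I, I], [D_hat, alpha * I]])

print(np.allclose(A_bar, block_diag - anti_block))    # True: exactly the splitting (5)
```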

Based on this block-diagonal and anti-block-diagonal splitting (BAS), the BAS iteration method for solving the nonlinear equation (4) can be defined as follows.

Method 1

(The BAS iteration method) [21] Let \(\alpha \) be a given positive constant such that \(\alpha I + A\) is nonsingular. Given the initial vector \({x^{(0)}}\in \mathbb {R}^n\) and \({y^{(0)}}= |x^{(0)} |\). For \(k =0,1,2,\cdots ,\) until the iteration sequence \(\left\{ (x^{(k)}, y^{(k)}) \right\} _{k = 0}^{ + \infty }\) converges, compute

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {{x^{(k + 1)}} = {{(\alpha I + A)}^{ - 1}}(\alpha {x^{(k)}} + {y^{(k)}} + b),}\\ {{y^{(k + 1)}} = \frac{1}{{1 + \alpha }}(\hat{D}{x^{(k)}} + \alpha {y^{(k)}}),} \end{array}} \right. \end{aligned}$$

or

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {\alpha I + A}&{}0\\ 0&{}{\alpha I + I} \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {{x^{(k + 1)}}}\\ {{y^{(k + 1)}}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\alpha I}&{}I\\ {\hat{D}}&{}{\alpha I} \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {{x^{(k)}}}\\ {{y^{(k)}}} \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} b\\ 0 \end{array}} \right] . \end{aligned}$$
(6)
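A minimal Python sketch of Method 1 is given below. Following the reformulation \(y=|x|\), the sketch evaluates \(\hat{D}\) at the current iterate \(x^{(k)}\), so that \(\hat{D}x^{(k)} = |x^{(k)}|\); this interpretation, together with the relative-residual stopping rule, is an assumption of the sketch rather than a prescription of [21].

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def bas(A, b, alpha, x0, tol=1e-6, max_iter=500):
    """Minimal sketch of the BAS iteration (Method 1) for the AVE (2)."""
    n = A.shape[0]
    lu = lu_factor(alpha * np.eye(n) + A)                  # alpha*I + A is fixed, factorize once
    x, y = x0.copy(), np.abs(x0)                           # y^(0) = |x^(0)|
    for k in range(max_iter):
        x_new = lu_solve(lu, alpha * x + y + b)            # x-update of (6)
        y_new = (np.abs(x) + alpha * y) / (1.0 + alpha)    # y-update of (6), using D_hat x = |x|
        x, y = x_new, y_new
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```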

Numerical results in Ref. [21] show that the BAS iteration method is feasible and robust. To further improve the efficiency of the BAS method, we will derive a modified version of the BAS iteration method in the following. First, denote

$$\begin{aligned} M = \left[ {\begin{array}{*{20}{c}} A&{}0\\ 0&{}I \end{array}} \right] ,\quad N = \left[ {\begin{array}{*{20}{c}} 0&{}I\\ {\hat{D}}&{}0 \end{array}} \right] ,\quad z = \left[ {\begin{array}{*{20}{c}} x\\ y \end{array}} \right] ,\quad \bar{b} = \left[ {\begin{array}{*{20}{c}} b\\ 0 \end{array}} \right] . \end{aligned}$$

Then, the splitting (5) becomes \(\bar{A} = (\alpha I + M) - (\alpha I + N)\). Hence, the BAS iteration scheme (6) can be rewritten as

$$\begin{aligned} (\alpha I + M){z^{(k + 1)}} = (\alpha I + N){z^{(k)}} + \bar{b}, \end{aligned}$$

or, equivalently,

$$\begin{aligned} (\alpha I + M){z^{(k + 1)}} = (\alpha I + M){z^{(k)}} + \bar{b} - \bar{A}{z^{(k)}}. \end{aligned}$$
(7)

Multiplying both sides of (7) from the left by \({(\alpha I + M)^{ - 1}}\) gives

$$\begin{aligned} {z^{(k + 1)}} = {z^{(k)}} + {(\alpha I + M)^{ - 1}}(\bar{b} - \bar{A}{z^{(k)}}). \end{aligned}$$
(8)

Denoting \({r^{(k)}} = \bar{b} - \bar{A}{z^{(k)}}\) and \({\delta ^{(k)}} = {(\alpha I + M)^{ - 1}}{r^{(k)}}\), the BAS iteration scheme (8) can be rewritten as

$$\begin{aligned} {z^{(k + 1)}} = {z^{(k)}} + {\delta ^{(k)}}, \end{aligned}$$
(9)

where \(\delta ^{(k)}\) can be viewed as the search direction from \({z^{(k)}}\) to \({z^{(k + 1)}}\). The step size equals \(\Vert \delta ^{(k)}\Vert \), where \(\Vert \cdot \Vert \) denotes the 2-norm of a given vector.

To further improve the efficiency of the BAS iteration scheme (9), we introduce a parameter \({\omega _k}\) to control the step size, in the hope that the step size at each iteration is optimal. This leads to a relaxed iteration scheme of the form

$$\begin{aligned} {z^{(k + 1)}} = {z^{(k)}} + {\omega _k}{\delta ^{(k)}}. \end{aligned}$$
(10)

In what follows, we determine the optimal value of \({\omega _k}\).

Noting that \({r^{(k)}} = \bar{b} - \bar{A}{z^{(k)}}\) and \({\delta ^{(k)}} = {(\alpha I + M)^{ - 1}}{r^{(k)}}\), and denoting \(S = \bar{A}{(\alpha I + M)^{ - 1}}\), the residual form of the iteration scheme (10) is easily derived as

$$\begin{aligned} {r^{(k + 1)}} = {r^{(k)}} - {\omega _k}S {r^{(k)}}. \end{aligned}$$
(11)

In the following, we determine the value of \({\omega _k}\) by minimizing the residual norm \(\Vert {{r^{(k + 1)}}}\Vert \). A simple calculation gives

$$\begin{aligned} \Vert r^{(k + 1)}\Vert ^2 = \Vert {r^{(k)}}\Vert ^2 - 2{\omega _k}({r^{(k)}},S {r^{(k)}}) + \omega _k^{\,2}\Vert S {r^{(k)}}\Vert ^2. \end{aligned}$$

Hence, \(\Vert r^{(k + 1)}\Vert ^2\) can be viewed as a quadratic function of \({\omega _k}\). Its minimum value is achieved at

$$\begin{aligned} {\omega _k} = \frac{{({r^{(k)}},S {r^{(k)}})}}{{\Vert S {r^{(k)}}\Vert ^2}}. \end{aligned}$$
(12)
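As a quick sanity check, the minimizer (12) can be compared with a direct one-dimensional minimization of \(\Vert r^{(k)} - \omega S r^{(k)}\Vert ^2\); the small sketch below does so for random data (the matrix S and the residual r are arbitrary illustrations).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
m = 8
S = rng.standard_normal((m, m))                      # stand-in for A_bar (alpha*I + M)^{-1}
r = rng.standard_normal(m)                           # stand-in for the residual r^(k)

Sr = S @ r
omega_analytic = np.dot(r, Sr) / np.dot(Sr, Sr)      # formula (12)
omega_numeric = minimize_scalar(lambda w: np.linalg.norm(r - w * Sr) ** 2).x

print(np.isclose(omega_analytic, omega_numeric))     # True (up to solver tolerance)
```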

Since \(S = \bar{A}{(\alpha I + M)^{ - 1}}\) and \({\delta ^{(k)}} = {(\alpha I + M)^{ - 1}}{r^{(k)}}\), the parameter \({\omega _k}\) can be equivalently rewritten as

$$\begin{aligned} {\omega _k} = \frac{{({r^{(k)}},\bar{A}{{(\alpha I + M)}^{ - 1}}{r^{(k)}})}}{{\Vert \bar{A}{(\alpha I + M)}^{ - 1}r^{(k)}\Vert ^2}} = \frac{{({r^{(k)}},\bar{A}{\delta ^{(k)}})}}{\Vert \bar{A}\delta ^{(k)}\Vert ^2}. \end{aligned}$$

Therefore, the MRBAS iteration method can be defined as follows.

Method 2

(The MRBAS iteration method) Let \(\alpha \) be a positive parameter such that \(\alpha I + A\) is nonsingular. Given an initial vector \(x^{(0)}\in \mathbb {R}^n\), define \(z^{(0)}=\left( (x^{(0)})^\textrm{T},|x^{(0)}|^\textrm{T}\right) ^\textrm{T}\). For \(k=0,1,2,\cdots \), until the iteration sequence \(\left\{ z^{(k)} \right\} \) converges, compute

$$\begin{aligned} {z^{(k + 1)}} = {z^{(k)}} + {\omega _k}{\delta ^{(k)}}, \end{aligned}$$

where

$$\begin{aligned} {\omega _k} = \frac{{({r^{(k)}},\bar{A}{\delta ^{(k)}})}}{{{{\left\| {\bar{A}{\delta ^{(k)}}} \right\| }^2}}},\quad {\delta ^{(k)}} = {(\alpha I + M)^{ - 1}}{r^{(k)}},\quad {r^{(k)}} = \bar{b} - \bar{A}{z^{(k)}}. \end{aligned}$$

Remark 1

The MRBAS iteration method is reduced to the BAS iteration method if we choose \({\omega _k} = 1\).
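A minimal Python sketch of Method 2 is given below. As in the BAS sketch, \(\hat{D}\) is evaluated at the current iterate (so \(\hat{D}x=|x|\)); the relative-residual stopping rule and the fallback \(\omega _k = 1\) when the denominator vanishes are illustrative choices, not prescriptions of the method.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mrbas(A, b, alpha, x0, tol=1e-6, max_iter=500):
    """Minimal sketch of the MRBAS iteration (Method 2) for the AVE (2).

    z_{k+1} = z_k + omega_k * delta_k, with r_k = b_bar - A_bar z_k,
    delta_k = (alpha*I + M)^{-1} r_k and omega_k = (r_k, A_bar delta_k)/||A_bar delta_k||^2.
    """
    n = A.shape[0]
    lu = lu_factor(alpha * np.eye(n) + A)            # (alpha*I + A), factorized once
    x, y = x0.copy(), np.abs(x0)                     # z^(0) = (x^(0); |x^(0)|)
    for k in range(max_iter):
        # residual r_k = b_bar - A_bar z_k, using D_hat x = |x|
        r_x = b - (A @ x - y)
        r_y = np.abs(x) - y
        # delta_k = (alpha*I + M)^{-1} r_k: a block-diagonal solve
        d_x = lu_solve(lu, r_x)
        d_y = r_y / (1.0 + alpha)
        # A_bar * delta_k, with D_hat = diag(sgn(x)) at the current iterate
        Ad_x = A @ d_x - d_y
        Ad_y = -np.sign(x) * d_x + d_y
        denom = np.dot(Ad_x, Ad_x) + np.dot(Ad_y, Ad_y)
        omega = (np.dot(r_x, Ad_x) + np.dot(r_y, Ad_y)) / denom if denom > 0 else 1.0
        x, y = x + omega * d_x, y + omega * d_y
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```

Setting omega to 1.0 unconditionally in this sketch recovers the BAS iteration, in line with Remark 1.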

3 The Convergence Property of the MRBAS Iteration Method

In order to derive the convergence property of the MRBAS iteration method used for solving the AVE (2), we first give several useful lemmas.

Lemma 1

[33] For any matrices \(A, B\in \mathbb {R}^{n\times n}\), the following results hold:

  • \(\Vert A\Vert \geqslant 0\), where “=” holds if and only if \(A = O\);

  • if k is a scalar, then \(\Vert k A\Vert = |k|\, \Vert A\Vert \);

  • \(\Vert A + B\Vert \leqslant \Vert A\Vert + \Vert B\Vert \);

  • \(\Vert A B\Vert \leqslant \Vert A\Vert \, \Vert B\Vert \).

Lemma 2

[16] For any matrices \(A=(a_{ij})\in \mathbb {R}^{n\times n}\) and \(B=(b_{ij})\in \mathbb {R}^{n\times n}\), if \(O \leqslant A \leqslant B\), then \(\Vert A\Vert _p \leqslant \Vert B\Vert _p\), where \({\left\| \cdot \right\| _p}\) stands for the p-norm of a matrix, and \(A\leqslant B\) means \(a_{ij}\leqslant b_{ij}\) for \(i,j=1,2,\cdots ,n\).

Lemma 3

[37] Both roots of the real quadratic equation \({x^2} - ax + b = 0\) are less than one in modulus if and only if \(\left| b \right| < 1\) and \(\left| a \right| < 1 + b\).

With the help of the above three lemmas, the convergence property of the MRBAS iteration method can be derived in the following.

Theorem 1

Let \(A\in \mathbb {R}^{n\times n}\), \(b\in \mathbb {R}^n\), and let \(\alpha \) be a positive constant such that \(\alpha I + A\) is nonsingular. Denote \(\eta = \Vert {\left( {\alpha I + A} \right) ^{ - 1}}\Vert \). Then, we have

$$\begin{aligned} \Vert r^{(k + 1)}\Vert \leqslant \Vert L\Vert \cdot \Vert r^{(k)}\Vert , \end{aligned}$$
(13)

where

$$\begin{aligned} {L = \left( {\begin{array}{*{20}{c}} {\alpha \eta }&{}{\frac{1}{{\alpha + 1}}}\\ \eta &{}{\frac{\alpha }{{\alpha + 1}}} \end{array}} \right) }. \end{aligned}$$

Moreover, it follows that \(\Vert L\Vert < 1\) if and only if the parameter \(\alpha \) satisfies \((\alpha +1)\eta < 1\). This means that if the condition \((\alpha +1)\eta < 1\) holds, the iteration sequence \(\left\{ {{z^{(k)}}} \right\} \) generated by the MRBAS iteration method converges to the exact solution of the equivalent system (4) for the initial vector \(z^{(0)}=\left( (x^{(0)})^\textrm{T},|x^{(0)}|^\textrm{T}\right) ^\textrm{T}\) with \(x^{(0)}\) being any vector in \(\mathbb {R}^n\).

Proof

Taking the 2-norm on both sides of (11) gives

$$\begin{aligned} \Vert {r^{(k + 1)}}\Vert = \Vert {r^{(k)}} - {\omega _k}S {r^{(k)}}\Vert = \Vert \left( {I - {\omega _k}S } \right) {r^{(k)}}\Vert . \end{aligned}$$

From the derivation of the MRBAS iteration method in Sect. 2, the parameter \({\omega _k}\) defined by (12) is the minimum point of the residual norm \(\Vert {{r^{(k + 1)}}} \Vert \), which means

$$\begin{aligned} \Vert {r^{(k + 1)}}\Vert = \Vert (I - {\omega _k}S ){r^{(k)}}\Vert = \mathop {\min }\limits _{\omega \in \mathbb {R}} \Vert (I - \omega S ){r^{(k)}}\Vert \leqslant \Vert (I - S ){r^{(k)}}\Vert . \end{aligned}$$
(14)

Noting that \(S=\bar{A}{(\alpha I + M)^{ - 1}}\), the matrix \(I - S \) can be rewritten as

$$\begin{aligned} I - S = (\alpha I + N){(\alpha I + M)^{ - 1}}= \left( {\begin{array}{*{20}{c}} {\alpha {{(\alpha I + A)}^{ - 1}}}&{}{\frac{1}{{\alpha + 1}}I}\\ {\hat{D}{{(\alpha I + A)}^{ - 1}}}&{}{\frac{\alpha }{{\alpha + 1}}I} \end{array}} \right) . \end{aligned}$$

Denote \(\tilde{r}^{(k + 1)}=(I - S )r^{(k)}\). If we write \(r^{(k)}\) as \(r^{(k)}=((r^{(k)}_x)^\textrm{T},(r^{(k)}_y)^\textrm{T})^\textrm{T}\) with \(r^{(k)}_x,r^{(k)}_y\in \mathbb {R}^n\), then the components of \(\tilde{r}^{(k + 1)}=((\tilde{r}^{(k+1)}_x)^\textrm{T},(\tilde{r}^{(k+1)}_y)^\textrm{T})^\textrm{T}\) satisfy

$$\begin{aligned} \tilde{r}_x^{(k + 1)}= \alpha {(\alpha I + A)^{ - 1}}r_x^{(k)} + \frac{1}{{\alpha + 1}}r_y^{(k)}, \quad \tilde{r}_y^{(k + 1)}=\hat{D}{(\alpha I + A)^{ - 1}}r_x^{(k)} + \frac{\alpha }{{\alpha + 1}}r_y^{(k)}. \end{aligned}$$

Using Lemma 1 and noticing that \(\eta = \Vert {(\alpha I + A)^{ - 1}}\Vert \) and \(\Vert \hat{D}\Vert \leqslant 1\), the 2-norms of \(\tilde{r}_x^{(k + 1)}\) and \(\tilde{r}_y^{(k + 1)}\), respectively, satisfy

$$\begin{aligned} {\Vert \tilde{r}_x^{(k + 1)}\Vert \leqslant \alpha \eta \Vert r_x^{(k)}\Vert + \frac{1}{{\alpha + 1}}\Vert r_y^{(k)}\Vert } \end{aligned}$$
(15)

and

$$\begin{aligned} {\Vert \tilde{r}_y^{(k + 1)}\Vert \leqslant \eta \Vert r_x^{(k)}\Vert + \frac{\alpha }{{\alpha + 1}}\Vert r_y^{(k)}\Vert }. \end{aligned}$$
(16)

The inequalities (15) and (16) can be written in the following matrix-vector form:

$$\begin{aligned} {\tilde{R}^{(k + 1)}} \leqslant L \cdot {R^{(k)}}, \end{aligned}$$
(17)

where

$$\begin{aligned} {\tilde{R}^{(k + 1)}} = \left( {\begin{array}{*{20}{c}} {\Vert \tilde{r}_x^{(k + 1)}\Vert }\\ {\Vert \tilde{r}_y^{(k + 1)}\Vert } \end{array}} \right) ,\quad {L = \left( {\begin{array}{*{20}{c}} {\alpha \eta }&{}{\frac{1}{{\alpha + 1}}}\\ \eta &{}{\frac{\alpha }{{\alpha + 1}}} \end{array}} \right) },\quad {R^{(k)}} = \left( {\begin{array}{*{20}{c}} {\Vert r_x^{(k)}\Vert }\\ {\Vert r_y^{(k)}\Vert } \end{array}} \right) . \end{aligned}$$

Taking the 2-norm on both sides of (17) and applying Lemmas 1 and 2, we get

$$\begin{aligned} \Vert \tilde{R}^{(k + 1)}\Vert \leqslant \Vert L \cdot {R^{(k)}}\Vert \leqslant \Vert L\Vert \cdot \Vert {R^{(k)}}\Vert . \end{aligned}$$
(18)

Note that \(\Vert \tilde{r}^{(k + 1)}\Vert \!=\!\sqrt{\Vert \tilde{r}_x^{(k+1)}\Vert ^2 \!+\! \Vert \tilde{r}_y^{(k+1)}\Vert ^2}\!=\!\Vert \tilde{R}^{(k + 1)}\Vert \) and \(\Vert {r^{(k)}}\Vert \!=\!\sqrt{\Vert r_x^{(k)}\Vert ^2 \!+\! \Vert r_y^{(k)}\Vert ^2}=\Vert {R^{(k)}}\Vert \), the inequality (18) is equivalent to

$$\begin{aligned} \Vert \tilde{r}^{(k + 1)}\Vert \leqslant \Vert L\Vert \cdot \Vert {r^{(k)}}\Vert . \end{aligned}$$
(19)

From the definition of \(\tilde{r}^{(k + 1)}\), i.e., \(\tilde{r}^{(k + 1)}=(I - S )r^{(k)}\), and the inequality (14), we have \(\Vert r^{(k + 1)}\Vert \leqslant \Vert \tilde{r}^{(k + 1)}\Vert \). Thus, using (19), we finally obtain the inequality (13).

In the following, we derive the convergence condition of the MRBAS iteration method. Since

$$\begin{aligned} {L^\textrm{T} L = \left( {\begin{array}{*{20}{c}} {\alpha \eta }&{}\eta \\ {\frac{1}{{\alpha + 1}}}&{}{\frac{\alpha }{{\alpha + 1}}} \end{array}} \right) \cdot \left( {\begin{array}{*{20}{c}} {\alpha \eta }&{}{\frac{1}{{\alpha + 1}}}\\ \eta &{}{\frac{\alpha }{{\alpha + 1}}} \end{array}} \right) = \left( {\begin{array}{*{20}{c}} {({\alpha ^2} + 1){\eta ^2}}&{}{\frac{{2\alpha \eta }}{{\alpha + 1}}}\\ {\frac{{2\alpha \eta }}{{\alpha + 1}}}&{}{\frac{{{\alpha ^2} + 1}}{{{{(\alpha + 1)}^2}}}} \end{array}} \right) }, \end{aligned}$$

we have

$$\begin{aligned} {\mathrm{{tr}}(L^\textrm{T} L) = ({\alpha ^2} + 1)\left( {{\eta ^2} + \frac{1}{{{{(\alpha + 1)}^2}}}} \right) }, \end{aligned}$$

and

$$\begin{aligned} {\det (L^\textrm{T} L) = {(\alpha - 1)^2}{\eta ^2}}. \end{aligned}$$

Let \(\lambda \) be any eigenvalue of the matrix \(L^\textrm{T} L\). Then \(\lambda \) satisfies the following real quadratic equation:

$$\begin{aligned} {{\lambda ^2} - ({\alpha ^2} + 1)\left( {{\eta ^2} + \frac{1}{{{{(\alpha + 1)}^2}}}} \right) \lambda + {(\alpha - 1)^2}{\eta ^2} = 0}. \end{aligned}$$

From Lemma 3, it follows that \(\Vert L\Vert < 1\) if and only if

$$\begin{aligned} {{(\alpha - 1)}^2\eta ^2 < 1} \end{aligned}$$
(20)

and

$$\begin{aligned} ({\alpha ^2} + 1)\left( {{\eta ^2} + \frac{1}{{{{(\alpha + 1)}^2}}}} \right) - {(\alpha - 1)^2}{\eta ^2} - 1 < 0. \end{aligned}$$
(21)

After a simple calculation (expanding (21) gives \(2\alpha \eta ^2 - \frac{2\alpha }{(\alpha + 1)^2} < 0\)), and since \(\alpha >0\), the condition (21) can be simplified to

$$\begin{aligned} {(\alpha + 1)\eta <1}. \end{aligned}$$
(22)

The condition (20) is equivalent to

$$\begin{aligned} {\max \{(1-\alpha )\eta ,(\alpha - 1)\eta \} < 1}. \end{aligned}$$
(23)

Since \(\alpha ,\eta >0\), the left-hand side of (23) is less than that of (22), i.e., \(\max \{(1-\alpha )\eta ,(\alpha - 1)\eta \}<(\alpha + 1)\eta \). This means that once (22) holds, (23) also holds. Therefore, we can conclude that \(\Vert L\Vert < 1\) if and only if the parameter \(\alpha \) satisfies (22).

Using the inequality (13) successively, we conclude

$$\begin{aligned} \Vert {r^{(k)}}\Vert \leqslant \Vert L\Vert \cdot \Vert {r^{(k - 1)}}\Vert \leqslant \Vert L\Vert ^2 \cdot \Vert {r^{(k - 2)}}\Vert \leqslant \cdots \leqslant \Vert L\Vert ^k \cdot \Vert {r^{(0)}}\Vert . \end{aligned}$$

Since \(\Vert L\Vert < 1\), it follows that \(\mathop {\lim }\limits _{k \rightarrow \infty } \Vert {r^{(k)}}\Vert = 0\), which implies that the iteration sequence \(\left\{ {{z^{(k)}}} \right\} \) converges to the exact solution of the equivalent system (4) whenever \(\alpha \) satisfies \((\alpha + 1)\eta <1\). This completes the proof.
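Before turning to the numerical experiments, we note that the sufficient condition (22) is straightforward to check numerically for a concrete matrix; the sketch below does so for the tridiagonal matrix used in Example 1 of Sect. 4 (the size n and the value of \(\alpha \) are illustrative).

```python
import numpy as np

n, alpha = 100, 0.5
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))                              # tridiag(-1, 4, -1)

eta = np.linalg.norm(np.linalg.inv(alpha * np.eye(n) + A), 2)     # eta = ||(alpha*I + A)^{-1}||_2
L = np.array([[alpha * eta, 1.0 / (alpha + 1)],
              [eta,         alpha / (alpha + 1)]])                # bounding matrix L of Theorem 1

print((alpha + 1) * eta < 1, np.linalg.norm(L, 2) < 1)            # both True when (22) holds
```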

4 Numerical Experiments

In this section, we use two examples to verify the feasibility and effectiveness of the MRBAS iteration method for solving the AVE (2).

The numerical results, including the numbers of iteration steps (denoted as IT) and the elapsed CPU times (in seconds, denoted as CPU), of the MRBAS iteration method are compared with those of the BAS [21], the SOR-like [18], and the FP [17] iteration methods. The iteration parameters \(\alpha \) involved in these iteration methods are selected as the experimentally found optimal ones, which lead to the least number of iteration steps. If the optimal iteration parameters form an interval, the parameter within this interval that leads to the least CPU time is selected. The optimal iteration parameter determined in such a manner is denoted as \({\alpha _{\exp }}\).

In the computations, we choose the initial vector to be the zero vector, i.e., \(x^{(0)}=0\). All iterations are stopped when the number of iteration steps exceeds 500 or \(\mathrm{RES} \leqslant 10^{-6}\), where RES denotes the relative residual error of the form

$$\begin{aligned} \mathrm{RES} = \frac{{\Vert A{x^{(k)}} - |{x^{(k)}}| - b\Vert }}{{\Vert b\Vert }}. \end{aligned}$$

In addition, all the computations are implemented in MATLAB [version 9.12.0.1884302 (R2022a)] on a personal computer with a 1.6 GHz central processing unit [Intel(R) Core(TM) i5-8250U] and 8.0 GB memory.

Example 1

Let the coefficient matrix A in (2) be

$$\begin{aligned} A = \mathrm{{tridiag}}( - 1,4, - 1) = \left( {\begin{array}{*{20}{c}} 4&{}{ - 1}&{}0&{} \cdots &{}0&{}0\\ { - 1}&{}4&{}{ - 1}&{} \cdots &{}0&{}0\\ 0&{}{ - 1}&{}4&{} \cdots &{}0&{}0\\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ 0&{}0&{}0&{} \cdots &{}4&{}{ - 1}\\ 0&{}0&{}0&{} \cdots &{}{ - 1}&{}4 \end{array}} \right) \in \mathbb {R}^{n\times n}. \end{aligned}$$

The vector b is chosen as \(b = A{x^*} - |{x^*}|\), where \(x^* = (1,-2,1,-2,\cdots )^\textrm{T} \in \mathbb {R}^{n}\) is the exact solution.
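A possible Python setup for this test problem is sketched below; the problem size n = 1000 and the value \(\alpha =0.5\) are illustrative only (the experiments use \(n=60^2,\cdots ,100^2\) with the experimentally optimal \(\alpha _{\exp }\)), and mrbas refers to the sketch of Method 2 in Sect. 2.

```python
import numpy as np

def example1(n):
    """Build Example 1: A = tridiag(-1, 4, -1), b = A x* - |x*|, x* = (1, -2, 1, -2, ...)^T."""
    A = (np.diag(4.0 * np.ones(n))
         + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))
    x_star = np.tile([1.0, -2.0], (n + 1) // 2)[:n]
    b = A @ x_star - np.abs(x_star)
    return A, b, x_star

A, b, x_star = example1(1000)                          # small dense instance for illustration
x, iters = mrbas(A, b, alpha=0.5, x0=np.zeros(1000))   # mrbas: sketch from Sect. 2
print(iters, np.linalg.norm(x - x_star, np.inf))
```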

The MRBAS, together with the BAS, the SOR-like, and the FP iteration methods, is used to approximate the solution of this problem. In Table 1, the experimentally found optimal parameter values, denoted by \(\alpha _{\exp }\), of all the tested iteration methods are listed for different problem sizes n. Using these optimal parameter values, the numerical results of the four iteration methods used for solving (2) with different problem sizes, i.e., \(n=60^2,70^2,80^2,90^2\), and \(100^2\), are also listed in Table 1.

From the numerical results, we can see that all the tested iteration methods with their respective optimal parameters are convergent for different problem sizes n. Among the four iteration methods, the MRBAS method is the most efficient one, as it requires the fewest iteration steps and the least CPU time compared with the BAS, the SOR-like, and the FP iteration methods.

Table 1 The numerical results of the tested iteration methods with experimentally found optimal parameter values for Example 1

Example 2

[39] Consider the AVE in (2) with

$$\begin{aligned} A = \mathrm{{tridiag}}( - 0.5I,T, - 1.5I) = \left( {\begin{array}{*{20}{c}} T&{}{ - 1.5I}&{}0&{} \cdots &{}0&{}0\\ { - 0.5I}&{}T&{}{ - 1.5I}&{} \cdots &{}0&{}0\\ 0&{}{ - 0.5I}&{}T&{} \cdots &{}0&{}0\\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ 0&{}0&{}0&{} \cdots &{}T&{}{ - 1.5I}\\ 0&{}0&{}0&{} \cdots &{}{ - 0.5I}&{}T \end{array}} \right) \in \mathbb {R}^{n\times n} \end{aligned}$$

and

$$\begin{aligned} T = \mathrm{{tridiag}}( - 0.5,8, - 1.5) = \left( {\begin{array}{*{20}{c}} 8&{}{ - 1.5}&{}0&{} \cdots &{}0&{}0\\ { - 0.5}&{}8&{}{ - 1.5}&{} \cdots &{}0&{}0\\ 0&{}{ - 0.5}&{}8&{} \cdots &{}0&{}0\\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ 0&{}0&{}0&{} \cdots &{}8&{}{ - 1.5}\\ 0&{}0&{}0&{} \cdots &{}{ - 0.5}&{}8 \end{array}} \right) \in \mathbb {R}^{m\times m}. \end{aligned}$$

The vector b is chosen as \(b = A{x^*} - |{x^*}|\), where \(x^* = (1,-2,1,-2,\cdots )^\textrm{T} \in \mathbb {R}^{n}\) is the exact solution with \(n = {m^2}.\)
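Likewise, the block tridiagonal matrix of Example 2 can be assembled with Kronecker products; in this sketch the block size m and the value \(\alpha =0.5\) are illustrative, and mrbas again refers to the sketch of Method 2 in Sect. 2.

```python
import numpy as np

def example2(m):
    """Build Example 2: A = tridiag(-0.5*I, T, -1.5*I) with T = tridiag(-0.5, 8, -1.5), n = m^2."""
    n = m * m
    T = (np.diag(8.0 * np.ones(m))
         + np.diag(-1.5 * np.ones(m - 1), 1)
         + np.diag(-0.5 * np.ones(m - 1), -1))
    Im = np.eye(m)
    lower = np.diag(np.ones(m - 1), -1)                  # sub-diagonal block pattern
    upper = np.diag(np.ones(m - 1), 1)                   # super-diagonal block pattern
    A = np.kron(Im, T) + np.kron(lower, -0.5 * Im) + np.kron(upper, -1.5 * Im)
    x_star = np.tile([1.0, -2.0], (n + 1) // 2)[:n]
    b = A @ x_star - np.abs(x_star)
    return A, b, x_star

A, b, x_star = example2(30)                              # n = 900 for a quick illustration
x, iters = mrbas(A, b, alpha=0.5, x0=np.zeros(A.shape[0]))
print(iters, np.linalg.norm(x - x_star, np.inf))
```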

Similar to Example 1, the experimentally found optimal parameter values \(\alpha _{\exp }\), the numbers of iteration steps, and the elapsed CPU times of the four iteration methods, i.e., the MRBAS, BAS, SOR-like, and FP, used for solving Example 2 are listed in Table 2. Numerical results show that the MRBAS method always performs better than the BAS, the SOR-like, and the FP iteration methods for all the tested cases. Moreover, the numbers of iteration steps required by the MRBAS, the BAS, the SOR-like, and the FP iteration methods are all independent of n.

Therefore, we can conclude that the MRBAS iteration method proposed in this work is robust and efficient for the tested problems.

Table 2 The numerical results of the tested iteration methods with experimentally found optimal parameter values for Example 2

5 Conclusion

In this work, based on the BAS iteration scheme, we propose an accelerated non-stationary iteration method, named the MRBAS iteration, to solve the system of AVEs. The new iteration method is convergent under suitable conditions. Numerical results show that the proposed MRBAS iteration method is robust and efficient for the two examples. However, the implementation of the MRBAS iteration method relies on experimentally found optimal values of the parameter \(\alpha \). How to obtain an easily computed and efficient parameter value remains an open problem, which may be considered in future work.