Abstract
In this work, by applying the minimum residual technique to the block-diagonal and anti-block-diagonal splitting (BAS) iteration scheme, an iteration method named minimum residual BAS (MRBAS) is proposed to solve a two-by-two block system of nonlinear equations arising from the reformulation of the system of absolute value equations (AVEs). The theoretical analysis shows that the MRBAS iteration method is convergent under suitable conditions. Numerical results demonstrate the feasibility and the effectiveness of the MRBAS iteration method.
1 Introduction
Consider the generalized system of absolute value equations (GAVEs) of the form
where \(A,B\in \mathbb {R}^{n\times n}\) and \(b\in \mathbb {R}^n\) are given and \(\left| x \right| = {(\left| {{x_1}} \right| ,\left| {{x_2}} \right| ,\cdots ,\left| {{x_n}} \right| )^\textrm{T}}\in \mathbb {R}^n\) denotes the componentwise absolute value of the unknown vector \(x\in \mathbb {R}^n\). The GAVE (1) was formally introduced by Rohn [29] and further investigated in [8, 19, 25]. Moreover, the GAVE (1) is a special case of the system of weakly nonlinear equations, which has been studied extensively and deeply in [1, 2, 6, 27]. When B is chosen as the negative identity matrix, i.e., \(B=-I\), the GAVE reduces to the system of absolute value equations (AVE) of the form
The existence and uniqueness of the solution of (2) were discussed in Refs. [25, 29]. In the following, we suppose that the solution of (2) exists and is unique.
It is well known that the AVE (2) can be derived from the linear complementarity problem (LCP) [3, 9, 10, 25], which has broad applications in many areas of scientific computing and engineering, such as linear programming, quadratic programming, bimatrix games, quasi-complementarity problems, and so on [12, 22, 25, 29, 35]. Owing to the existence of the nonlinear and non-differentiable term \(\left| x \right| \), solving the AVE (2) is an NP-hard problem. If the term \(\left| x \right| \) is absent, the AVE (2) reduces to a system of linear equations, which can be solved by direct or iterative methods; see [7].
In the past two decades, the AVE (2) has attracted increasing interest from various researchers due to its simple and special structure. Many iteration methods, such as the successive linearization method [22], the sign accord method [30], the hybrid method [24], the optimization method [28], and so on, are employed to approximate the solution of the AVE (2). In 2009, stimulated by the idea of the Newton method, Mangasarian [23] introduced the subgradient for the non-differentiable term \(\left| x \right| \) and presented a generalized Newton (GN) iteration method of the form
where \(D(x) = \mathrm{{diag}}(\mathrm{{sgn}}(x))\), \(x\in \mathbb {R}^n\), and \(\mathrm{{sgn}}(x)\) denotes the vector whose components equal \(-1,0\), or 1 depending on whether the corresponding component of the vector x is negative, zero, or positive. Thereafter, some scholars established more efficient iteration methods based on the GN iteration, for example, the generalized Traub's method [15], the modified GN method [20], and the relaxed GN method [11]. However, these iteration methods incur high computing costs in actual computations because the coefficient matrix changes with the iteration steps. In 2014, Rohn et al. [31] proposed a more practical Picard iteration method to solve the AVE (2). That is
Compared with the GN methods, the coefficient matrix of the Picard iteration scheme (3) no longer changes with the iteration steps. Hence, the computing cost of each step of the Picard iteration is cheaper than that of the GN methods. More Picard-type iteration methods for solving the AVE can be found in Refs. [13, 26, 32]. In addition, one of the important sources of the problem (2) is the modulus-based fixed-point reformulation introduced and analyzed in [3]. The iteration scheme (3), as well as most of the schemes mentioned above, is a special case of the methods in [2].
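To make the cost contrast concrete, the two iteration schemes discussed above can be sketched as follows (an illustrative Python sketch for small dense problems; the function names and test data are ours, not from the cited works):

```python
import numpy as np

def generalized_newton(A, b, tol=1e-8, max_iter=100):
    """GN iteration: (A - D(x^{(k)})) x^{(k+1)} = b with D(x) = diag(sgn(x)).

    The coefficient matrix changes with x^{(k)}, so every step needs
    a fresh factorization -- the cost criticized in the text.
    """
    x = np.zeros_like(b)
    for k in range(max_iter):
        D = np.diag(np.sign(x))
        x_new = np.linalg.solve(A - D, b)  # refactorized at every step
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def picard(A, b, tol=1e-8, max_iter=500):
    """Picard iteration (3): A x^{(k+1)} = |x^{(k)}| + b.

    A is fixed, so in practice it is factorized once; the sketch
    simply calls solve each step for clarity.
    """
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_new = np.linalg.solve(A, np.abs(x) + b)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

For a diagonally dominant test matrix both sketches converge quickly; the practical difference is that the GN variant must refactorize \(A - D(x^{(k)})\) at every step, whereas the Picard variant can reuse a single factorization of \(A\).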
Recently, by equivalently reformulating the AVE (2) as a block two-by-two nonlinear equation, Ke and Ma [18] proposed an SOR-like iteration method for solving the AVE (2). Since then, other iteration methods for solving the equivalent block two-by-two form of the AVE (2) have also been established, for example, the fixed point (FP) iteration method [17, 38], the block-diagonal and anti-block-diagonal splitting (BAS) iteration method [21], and so on. The BAS is closely related to the Hermitian and skew-Hermitian splitting (HSS) [5] and the modified HSS (MHSS) [4]. In this work, to further improve the efficiency of the BAS iteration method, a new iteration method is proposed by introducing a dynamic control parameter into the BAS iteration scheme. Since the control parameter is determined by minimizing the residual norm, the new iteration method is named the minimum residual BAS (abbreviated as MRBAS) iteration. This acceleration idea comes from the minimum residual HSS (MRHSS) iteration method for solving non-Hermitian positive definite linear systems [36]. Other similar and important acceleration techniques, such as minimum residual smoothing, can be found in Refs. [14, 34].
The remainder of this paper is organized as follows. In Sect. 2, we define the MRBAS iteration method by introducing a control parameter into the BAS iteration scheme. In Sect. 3, the convergence of the MRBAS iteration method for solving the system of absolute value equations (2) is discussed under suitable restrictions. In Sect. 4, numerical experiments are employed to verify the feasibility and effectiveness of the MRBAS iteration method. Finally, in Sect. 5, we give a brief conclusion.
2 The MRBAS Iteration Method
In this section, by introducing a control parameter for the BAS iteration scheme, the MRBAS iteration method is proposed to solve the AVE (2).
Let \(y = |x|\). Then, the AVE (2) can be rewritten as
which is equivalent to
where \(\hat{D} = D(x) = \mathrm{{diag}}(\mathrm{{sgn}}(x))\), \(x\in \mathbb {R}^n\). The coefficient matrix \(\bar{A}\) can be split into the sum of a block-diagonal matrix and an anti-block-diagonal matrix, i.e.,
It is easy to see that matrix \(\bar{A}\) yields
where \(\alpha \) is a given positive constant such that \(\alpha I + A\) is nonsingular.
Based on the above splitting, the BAS iteration method for solving the nonlinear equation (4) can be defined as follows.
Method 1
(The BAS iteration method) [21] Let \(\alpha \) be a given positive constant such that \(\alpha I + A\) is nonsingular. Given the initial vector \({x^{(0)}}\in \mathbb {R}^n\) and \({y^{(0)}}= |x^{(0)} |\). For \(k =0,1,2,\cdots ,\) until the iteration sequence \(\left\{ (x^{(k)}, y^{(k)}) \right\} _{k = 0}^{ + \infty }\) converges, calculate
or
Numerical results in Ref. [21] show that the BAS iteration method is feasible and robust. To further improve its efficiency, we derive a modified version of the BAS iteration method in the following. First, denote
Then, the splitting (5) becomes \(\bar{A} = (\alpha I + M) - (\alpha I + N)\). Hence, the BAS iteration scheme (6) can be rewritten as
or, equivalently,
Multiplying both sides of (7) from the left by \({(\alpha I + M)^{ - 1}}\) gives
Denoting \({r^{(k)}} = \bar{b} - \bar{A}{z^{(k)}}\) and \({\delta ^{(k)}} = {(\alpha I + M)^{ - 1}}{r^{(k)}}\), the BAS iteration scheme (8) can be rewritten as
where \(\delta ^{(k)}\) can be viewed as the search direction from \({z^{(k)}}\) to \({z^{(k + 1)}}\). The step size equals \(\Vert \delta ^{(k)}\Vert \), where \(\Vert \cdot \Vert \) denotes the 2-norm of a given vector.
To further improve the efficiency of the BAS iteration scheme (9), we introduce a parameter \({\omega _k}\) to control the step size, and hope that the step size in each iteration step is optimal. This leads to a relaxed iteration scheme of the form
Next, we determine the optimal value of \({\omega _k}\).
Recalling that \({r^{(k)}} = \bar{b} - \bar{A}{z^{(k)}}\) and \({\delta ^{(k)}} = {(\alpha I + M)^{ - 1}}{r^{(k)}}\), and denoting \(S = \bar{A}{(\alpha I + M)^{ - 1}}\), the residual form of the iteration scheme (10) can be derived directly. That is
In the following, we determine the value of \({\omega _k}\) by minimizing the residual norm \(\Vert {{r^{(k + 1)}}}\Vert \). A simple calculation gives
Hence, \(\Vert r^{(k + 1)}\Vert ^2\) can be viewed as a quadratic function of \({\omega _k}\). Its minimum value is achieved at
Owing to \(S = \bar{A}{(\alpha I + M)^{ - 1}}\) and \({\delta ^{(k)}} = {(\alpha I + M)^{ - 1}}{r^{(k)}}\), the parameter \({\omega _k}\) can be equivalently rewritten as
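Written out with the quantities defined above, the expansion of \(\Vert r^{(k+1)}\Vert ^2\) and its minimizer take the following explicit form (a reconstruction consistent with the minimum residual derivation in [36]):

```latex
\left\| r^{(k+1)} \right\|^2
  = \left\| r^{(k)} \right\|^2
  - 2\omega_k \left\langle r^{(k)},\, S r^{(k)} \right\rangle
  + \omega_k^2 \left\| S r^{(k)} \right\|^2 ,
\qquad
\omega_k
  = \frac{\left\langle r^{(k)},\, S r^{(k)} \right\rangle}{\left\| S r^{(k)} \right\|^2}
  = \frac{\left\langle r^{(k)},\, \bar{A}\,\delta^{(k)} \right\rangle}{\left\| \bar{A}\,\delta^{(k)} \right\|^2} ,
```

where the second equality uses \(S r^{(k)} = \bar{A}{(\alpha I + M)^{-1}} r^{(k)} = \bar{A}\delta ^{(k)}\).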
Therefore, the MRBAS iteration method can be defined as follows.
Method 2
(The MRBAS iteration method) Let \(\alpha \) be a positive parameter such that \(\alpha I + A\) is nonsingular. Given an initial vector \(x^{(0)}\in \mathbb {R}^n\), define \(z^{(0)}=\left( (x^{(0)})^\textrm{T},|x^{(0)}|^\textrm{T}\right) ^\textrm{T}\). For \(k=0,1,2,\cdots \), until the iteration sequence \(\left\{ z^{(k)} \right\} \) converges, compute
where
Remark 1
The MRBAS iteration method is reduced to the BAS iteration method if we choose \({\omega _k} = 1\).
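As a concrete illustration, Method 2 can be sketched as follows (illustrative Python for small dense problems; the blockwise formulas follow the splitting \(M=\textrm{blkdiag}(A,I)\) of Sect. 2, while the function name and test data are ours, not from the paper):

```python
import numpy as np

def mrbas(A, b, alpha, tol=1e-6, max_iter=500):
    """Sketch of the MRBAS iteration for A x - |x| = b.

    Block form (4): [[A, -I], [-D, I]] [x; y] = [b; 0] with y = |x|
    and D = diag(sgn(x)).  Since M = blkdiag(A, I), the splitting
    matrix alpha*I + M is block diagonal and is applied blockwise:
    (alpha*I + A) on the x-block and (alpha + 1)*I on the y-block.
    """
    n = A.shape[0]
    inv_aA = np.linalg.inv(alpha * np.eye(n) + A)  # factorized once
    x = np.zeros(n)
    y = np.abs(x)
    norm_b = np.linalg.norm(b)
    for k in range(max_iter):
        # residual r^{(k)} = bar(b) - bar(A) z^{(k)}
        r_x = b - (A @ x - y)
        r_y = np.abs(x) - y            # = D x - y, since D x = |x|
        if np.sqrt(r_x @ r_x + r_y @ r_y) <= tol * norm_b:
            return x, k
        # search direction delta^{(k)} = (alpha I + M)^{-1} r^{(k)}
        d_x = inv_aA @ r_x
        d_y = r_y / (alpha + 1.0)
        # S r^{(k)} = bar(A) delta^{(k)}, computed blockwise
        s = np.sign(x)
        Sd_x = A @ d_x - d_y
        Sd_y = -s * d_x + d_y          # -D d_x + d_y with D diagonal
        # omega_k minimizes ||r^{(k)} - omega * bar(A) delta^{(k)}||
        omega = (Sd_x @ r_x + Sd_y @ r_y) / (Sd_x @ Sd_x + Sd_y @ Sd_y)
        x = x + omega * d_x
        y = y + omega * d_y
    return x, max_iter
```

Setting `omega = 1.0` recovers the BAS iteration of Method 1, in line with Remark 1.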
3 The Convergence Property of the MRBAS Iteration Method
In order to derive the convergence property of the MRBAS iteration method used for solving the AVE (2), we first give several useful lemmas.
Lemma 1
[33] For any matrices \(A\in \mathbb {R}^{n\times n}\) and \(B\in \mathbb {R}^{n\times n}\), the following results hold:
-
\(\Vert A\Vert \geqslant 0\), where “=” holds if and only if \(A = O\);
-
if k is a scalar, then \(\Vert k A\Vert = |k|\, \Vert A\Vert \);
-
\(\Vert A + B\Vert \leqslant \Vert A\Vert + \Vert B\Vert \);
-
\(\Vert A B\Vert \leqslant \Vert A\Vert \, \Vert B\Vert \).
Lemma 2
[16] For any matrices \(A=(a_{ij})\in \mathbb {R}^{n\times n}\) and \(B=(b_{ij})\in \mathbb {R}^{n\times n}\), if \(O \leqslant A \leqslant B\), then \(\Vert A\Vert _p \leqslant \Vert B\Vert _p\), where \({\left\| \cdot \right\| _p}\) stands for the p-norm of a matrix, and \(A\leqslant B\) means \(a_{ij}\leqslant b_{ij}\) for \(i,j=1,2,\cdots ,n\).
Lemma 3
[37] Both roots of the real quadratic equation \({x^2} - ax + b = 0\) are less than one in modulus if and only if \(\left| b \right| < 1\) and \(\left| a \right| < 1 + b\).
With the help of the above three lemmas, the convergence property of the MRBAS iteration method can be derived in the following.
Theorem 1
Let \(A\in \mathbb {R}^{n\times n}\), \(b\in \mathbb {R}^n\), \(\alpha \) be a positive constant such that \(\alpha I + A\in \mathbb {R}^{n\times n}\) is a nonsingular matrix. Denote \(\eta = \Vert {\left( {\alpha I + A} \right) ^{ - 1}}\Vert \). Then, we have
where
Moreover, it follows that \(\Vert L\Vert < 1\) if and only if the parameter \(\alpha \) satisfies \((\alpha +1)\eta < 1\). That is, if the condition \((\alpha +1)\eta < 1\) holds, the iteration sequence \(\left\{ {{z^{(k)}}} \right\} \) generated by the MRBAS iteration method converges to the exact solution of the AVE (4) for the initial vector \(z^{(0)}=\left( (x^{(0)})^\textrm{T},|x^{(0)}|^\textrm{T}\right) ^\textrm{T}\) with \(x^{(0)}\) being any vector in \(\mathbb {R}^n\).
Proof
Taking the 2-norm on both sides of (11) gives
From the derivation of the MRBAS iteration method in Sect. 2, the parameter \({\omega _k}\) defined by (12) is the minimum point of the residual norm \(\Vert {{r^{(k + 1)}}} \Vert \), which means
Noting that \(S=\bar{A}{(\alpha I + M)^{ - 1}}\), the matrix \(I - S \) can be rewritten as
Denote \(\tilde{r}^{(k + 1)}=(I - S )r^{(k)}\). If we write \(r^{(k)}\) as \(r^{(k)}=((r^{(k)}_x)^\textrm{T},(r^{(k)}_y)^\textrm{T})^\textrm{T}\) with \(r^{(k)}_x,r^{(k)}_y\in \mathbb {R}^n\), then the components of \(\tilde{r}^{(k + 1)}=((\tilde{r}^{(k+1)}_x)^\textrm{T},(\tilde{r}^{(k+1)}_y)^\textrm{T})^\textrm{T}\) satisfy
Using Lemma 1 and noticing that \(\eta = \Vert {(\alpha I + A)^{ - 1}}\Vert \) and \(\Vert \hat{D}\Vert \leqslant 1\), the 2-norms of \(\tilde{r}_x^{(k + 1)}\) and \(\tilde{r}_y^{(k + 1)}\), respectively, satisfy
and
The inequalities (15) and (16) can be written as the following matrix-vector form:
where
Taking the 2-norm on both sides of (17) and applying Lemmas 1 and 2, we get
Noting that \(\Vert \tilde{r}^{(k + 1)}\Vert =\sqrt{\Vert \tilde{r}_x^{(k+1)}\Vert ^2 + \Vert \tilde{r}_y^{(k+1)}\Vert ^2}=\Vert \tilde{R}^{(k + 1)}\Vert \) and \(\Vert {r^{(k)}}\Vert =\sqrt{\Vert r_x^{(k)}\Vert ^2 + \Vert r_y^{(k)}\Vert ^2}=\Vert {R^{(k)}}\Vert \), the inequality (18) is equivalent to
From the definition of \(\tilde{r}^{(k + 1)}\), i.e., \(\tilde{r}^{(k + 1)}=(I - S )r^{(k)}\), we can derive from (14) that \(\Vert r^{(k + 1)}\Vert \leqslant \Vert \tilde{r}^{(k + 1)}\Vert \). Thus, using (19), we finally obtain the inequality (13).
In the following, we derive the convergence condition of the MRBAS iteration method. Due to
we have
and
Let \(\lambda \) be any eigenvalue of the matrix \(L^\textrm{T} L\). Then, it yields the following real quadratic equation:
From Lemma 3, it follows that \(\Vert L\Vert < 1\) if and only if
and
After simple calculation, the condition (21) can be simplified as
The condition (20) is equivalent to
Since \(\alpha ,\eta >0\), the left-hand side of (23) is smaller than that of (22), i.e., \(\max \{(1-\alpha )\eta ,(\alpha - 1)\eta \}<(\alpha + 1)\eta \). This means that once (22) holds, (23) also holds. Therefore, we conclude that \(\Vert L\Vert < 1\) if and only if the parameter \(\alpha \) satisfies (22).
Using the inequality (13) successively, we conclude
Due to \(\Vert L\Vert < 1\), it follows that \(\mathop {\lim }\limits _{k \rightarrow \infty } \Vert {r^{(k)}}\Vert = 0\), which implies that the iteration sequence \(\left\{ {{z^{(k)}}} \right\} \) converges to the exact solution of the AVE (4) whenever \(\alpha \) satisfies \((\alpha + 1)\eta <1\). This completes the proof.
4 Numerical Experiments
In this section, we use two examples to verify the feasibility and effectiveness of the MRBAS iteration method for solving the AVE (2).
The numerical results including numbers of iteration steps (denoted as IT) and elapsed CPU times (in seconds, denoted as CPU) of the MRBAS iteration method are compared with those of the BAS [21], the SOR-like [18], and the FP [17] iteration methods. The iteration parameters \(\alpha \) involved in these iteration methods are selected as the experimentally found optimal ones, which lead to the least number of iteration steps. If the optimal iteration parameters form an interval, then we select the optimal parameter as the one that belongs to this interval and leads to the least CPU time. The optimal iteration parameter determined in such a manner is denoted as \({\alpha _{\exp }}\).
In the computations, we choose the initial vector to be the zero vector, i.e., \(x^{(0)}=0\). All iterations are stopped when the number of iteration steps exceeds 500 or \(\mathrm{{RES}} \leqslant {10^{-6}}\), where RES denotes the relative residual error of the form
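Assuming, as in the related works, that RES is the 2-norm relative residual of the original AVE (2), it can be computed as:

```python
import numpy as np

def res(A, x, b):
    # Relative residual of the AVE (2): ||A x - |x| - b|| / ||b|| in the 2-norm.
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
```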
In addition, all the computations are implemented in MATLAB [version 9.12.0.1884302 (R2022a)] on a personal computer with a 1.6 GHz central processing unit [Intel(R) Core(TM) i5-8250U] and 8.0 GB memory.
Example 1
Let the coefficient matrix A in (2) be
The vector b is chosen as \(b = A{x^*} - |{x^*}|\), where \(x^* = (1,-2,1,-2,\cdots )^\textrm{T} \in \mathbb {R}^{n}\) is the exact solution.
The MRBAS, BAS, SOR-like, and FP iteration methods are used to approximate the solution of this problem. In Table 1, the experimentally found optimal parameter values, denoted by \(\alpha _{\exp }\), of all the tested iteration methods are listed for different problem sizes n. Using these optimal parameter values, the numerical results of the four iteration methods for solving (2) with different problem sizes, i.e., \(n=60^2,70^2,80^2,90^2\), and \(100^2\), are also listed in Table 1.
From the numerical results, we can see that all the tested iteration methods with respective optimal parameters are convergent for different problem sizes n. Among the four iteration methods, the MRBAS method is the most efficient one as it costs the least iteration steps and CPU times, compared with the BAS, the SOR-like, and the FP iteration methods.
Example 2
[39] Consider the AVE in (2) with
and
The vector b is chosen as \(b = A{x^*} - |{x^*}|\), where \(x^* = (1,-2,1,-2,\cdots )^\textrm{T} \in \mathbb {R}^{n}\) is the exact solution with \(n = {m^2}.\)
Similar to Example 1, the experimentally found optimal parameter values \(\alpha _{\exp }\), the numbers of iteration steps, and the elapsed CPU times of the four iteration methods, i.e., the MRBAS, BAS, SOR-like, and FP, used for solving Example 2 are listed in Table 2. Numerical results show that the MRBAS method always performs better than the BAS, the SOR-like, and the FP iteration methods for all the tested cases. Moreover, the numbers of iteration steps required by the MRBAS, the BAS, the SOR-like, and the FP iteration methods are all independent of n.
Therefore, we can conclude that the MRBAS iteration method proposed in this work is robust and efficient for the tested problems.
5 Conclusion
In this work, based on the BAS iteration scheme, we propose an accelerated non-stationary iteration method named the MRBAS iteration to solve the system of AVEs. This new iteration method is convergent under suitable conditions. Numerical results show that the proposed MRBAS iteration method is robust and efficient for the two examples. However, we use the experimentally found optimal values of parameter \(\alpha \) in the implementation of the MRBAS iteration method. How to obtain an easily calculated and efficient parameter value is still an open problem, which may be considered in the future.
Availability of Data and Materials
Not applicable.
References
Bai, Z.-Z.: Parallel multisplitting two-stage iterative methods for large sparse systems of weakly nonlinear equations. Numer. Algorithms 15(3/4), 347–372 (1997)
Bai, Z.-Z.: A class of two-stage iterative methods for systems of weakly nonlinear equations. Numer. Algorithms 14, 295–319 (1997)
Bai, Z.-Z.: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17, 917–933 (2010)
Bai, Z.-Z., Benzi, M., Chen, F.: Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87, 93–111 (2010)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603–626 (2003)
Bai, Z.-Z., Migallón, V., Penadés, J., Szyld, D.B.: Block and asynchronous two-stage methods for mildly nonlinear systems. Numer. Math. 82, 1–20 (1999)
Bai, Z.-Z., Pan, J.-Y.: Matrix Analysis and Computations. SIAM, Philadelphia (2021)
Bai, Z.-Z., Yang, X.: On HSS-based iteration methods for weakly nonlinear systems. Appl. Numer. Math. 59, 2923–2936 (2009)
Bai, Z.-Z., Zhang, L.-L.: Modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 20, 425–439 (2013)
Caccetta, L., Qu, B., Zhou, G.-L.: A globally and quadratically convergent method for absolute value equations. Comput. Optim. Appl. 48, 45–58 (2011)
Cao, Y., Shi, Q., Zhu, S.-L.: A relaxed generalized Newton iteration method for generalized absolute value equations. AIMS Math. 6, 1258–1275 (2021)
Cottle, R.W., Pang, J.-S., Stone, R.E.: The Linear Complementarity Problem. SIAM, Philadelphia (2009)
Dehghan, M., Shirilord, A.: Matrix multisplitting Picard-iterative method for solving generalized absolute value matrix equation. Appl. Numer. Math. 158, 425–438 (2020)
Gutknecht, M.H., Rozložník, M.: Residual smoothing techniques: do they improve the limiting accuracy of iterative solvers? BIT Numer. Math. 41, 86–114 (2001)
Haghani, F.K.: On generalized Traub’s method for absolute value equations. J. Optim. Theory Appl. 166, 619–625 (2015)
Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (2012)
Ke, Y.-F.: The new iteration algorithm for absolute value equation. Appl. Math. Lett. 99, 105990 (2020)
Ke, Y.-F., Ma, C.-F.: SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 311, 195–202 (2017)
Ketabchi, S., Moosaei, H.: An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side. Comput. Math. Appl. 64, 1882–1885 (2012)
Li, C.-X.: A modified generalized Newton method for absolute value equations. J. Optim. Theory Appl. 170, 1055–1059 (2016)
Li, C.-X., Wu, S.-L.: Block-diagonal and anti-block-diagonal splitting iteration method for absolute value equation. In: Song, H., Jiang, D. (eds.) Simulation Tools and Techniques, vol. 369, pp. 572–581. Springer, Cham (2021)
Mangasarian, O.L.: Absolute value programming. Comput. Optim. Appl. 36, 43–53 (2007)
Mangasarian, O.L.: A generalized Newton method for absolute value equations. Optim. Lett. 3, 101–108 (2009)
Mangasarian, O.L.: A hybrid algorithm for solving the absolute value equation. Optim. Lett. 9, 1469–1474 (2015)
Mangasarian, O.L., Meyer, R.R.: Absolute value equations. Linear Algebra Appl. 419, 359–367 (2006)
Miao, S.-X., Xiong, X.-T., Wen, J.: On Picard-SHSS iteration method for absolute value equation. AIMS Math. 6, 1743–1753 (2021)
Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. SIAM, Philadelphia (2000)
Prokopyev, O.: On equivalent reformulations for absolute value equations. Comput. Optim. Appl. 44, 363–372 (2009)
Rohn, J.: A theorem of the alternatives for the equation \(Ax+B|x|=b\). Linear Multilinear Algebra 52, 421–426 (2004)
Rohn, J.: An algorithm for solving the absolute value equations. Electron. J. Linear Algebra 18, 589–599 (2009)
Rohn, J., Hooshyarbakhsh, V., Farhadsefat, R.: An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim. Lett. 8, 35–44 (2014)
Salkuyeh, D.K.: The Picard-HSS iteration method for absolute value equations. Optim. Lett. 8, 2191–2202 (2014)
Varga, R.S.: Matrix Iterative Analysis. Springer, Berlin (2000)
Walker, H.F.: Residual smoothing and peak/plateau behavior in Krylov subspace methods. Appl. Numer. Math. 19, 279–286 (1995)
Wu, S.-L., Guo, P.: Modulus-based matrix splitting algorithms for the quasi-complementarity problems. Appl. Numer. Math. 132, 127–137 (2018)
Yang, A.-L., Wu, Y.-J., Cao, Y.: Minimum residual Hermitian and skew-Hermitian splitting iteration method for non-Hermitian positive definite linear systems. BIT Numer. Math. 59, 299–319 (2019)
Young, D.M.: Iterative Solution of Large Linear Systems. Academic Press, New York (1971)
Yu, D.-M., Chen, C.-R., Han, D.-R.: A modified fixed point iteration method for solving the system of absolute value equations. Optimization 71, 449–461 (2022)
Zhang, J.-L., Zhang, G.-F., Liang, Z.-Z.: A modified generalized SOR-like method for solving an absolute value equation. Linear Multilinear Algebra 71, 1578–1595 (2023)
Acknowledgements
This work is supported by the National Natural Science Foundation of China [Grant no. 12161030] and the Natural Science Foundation of Hainan Province China [Grant nos. 523MS039 and 121MS030].
Ethics declarations
Conflict of Interest
The authors have no conflict of interest to declare.
Ethics Approval
Not applicable.
Cite this article
Dai, YX., Yan, RY. & Yang, AL. Minimum Residual BAS Iteration Method for Solving the System of Absolute Value Equations. Commun. Appl. Math. Comput. (2024). https://doi.org/10.1007/s42967-024-00403-z
Keywords
- Absolute value equations (AVEs)
- Block-diagonal and anti-block-diagonal splitting (BAS)
- Minimum residual
- Minimum residual BAS (MRBAS) iteration
- Convergence analysis