Abstract
In this paper, we investigate iterative methods for solving the absolute value equations (AVEs). Using matrix splitting and a relaxation technique, a relaxed-based matrix splitting (RMS) method is presented. As special cases, we propose a relaxed-based Picard (RP) method, a relaxed-based AOR (RAOR) method, and a relaxed-based SOR (RSOR) method. These methods include some known methods as special cases, such as the Newton-based matrix splitting iterative method, the modified Newton type iteration method, the Picard method, a new SOR-like method, the fixed point iteration method, the SOR-like method, the AOR method, and the modified SOR-like method. Some convergence conditions of the proposed methods are presented. Numerical examples verify the theoretical results and the advantages of the new methods.
1 Introduction
In this paper, we investigate the iterative methods for solving the absolute value equations (AVEs)

\(Ax-B\vert x\vert =b, \qquad (1)\)

where \(A, B \in R^{n \times n}\) with \(B\not =0\), \(b,x \in R^{n}\), and \(\vert x\vert \) denotes the vector with absolute values of components of x. When \(B=I\), the AVEs (1) reduce to a special form

\(Ax-\vert x\vert =b. \qquad (2)\)
If B is nonsingular, then the AVEs (1) can be rewritten as \(B^{-1}Ax-\vert x\vert =B^{-1}b\), which has the form (2).
The system (1) was first introduced in Rohn (2004). For the existence and uniqueness of the solution of (1) and (2), one can see (Hu and Huang 2010; Mangasarian 2009b; Mangasarian and Meyer 2006; Rohn 2004, 2009; Wu 2021; Wu and Li 2018). In Mangasarian (2007), Mangasarian and Meyer (2006), Mezzadri (2020), and Rohn (2004), it has been proved that the AVEs are equivalent to the linear complementarity problem (LCP); since the general LCP is NP-hard, solving the AVEs is NP-hard as well.
In this paper, we always assume that the AVEs is solvable.
In recent years, under conditions ensuring the existence and uniqueness of the solution of the AVEs (1) and (2), a variety of efficient iterative methods have been presented. For the AVEs (1), these include a Picard method (Rohn et al. 2014), the generalized Newton method (Hu et al. 2011; Miao et al. 2015; Wang et al. 2019), a smoothing Newton method (Miao et al. 2017), the Newton-based matrix splitting iterative method (Zhou et al. 2021), and the AOR method and a preconditioned AOR method (Li 2017). For the AVEs (2), they include the generalized Newton method (Li 2016a; Mangasarian 2009a; Moosaei et al. 2015; Zhang and Wei 2009), the generalized Traub's method (Haghani 2015), the fixed point iteration method (Ke 2020; Yu et al. 2020), the SOR-like methods (Dong et al. 2020; Ke and Ma 2017; Li and Wu 2020), the Picard-HSS iteration method (Salkuyeh 2014), the nonlinear MHSS-like method (Li 2016b), the Picard-CSCS iteration method and the nonlinear CSCS-like iteration method (Gu et al. 2017), the Picard-HSS-SOR iteration method (Zheng 2020), a generalized and a preconditioned generalized Gauss-Seidel method (Edalatpour et al. 2017), and so on.
In this paper, to improve computing efficiency, we present a relaxed-based matrix splitting method (RMS) for solving the AVEs (1). As special cases, we propose a relaxed-based Picard (RP) method, a relaxed-based AOR (RAOR) method, and a relaxed-based SOR (RSOR) method. These methods include some known methods as special cases, such as the Newton-based matrix splitting iterative method (NMS method) (Zhou et al. 2021), the modified Newton type iteration method (MN method) (Wang et al. 2019), the Picard method (Rohn et al. 2014), the new SOR-like method (Dong et al. 2020), the fixed point iteration (FPI) method (Ke 2020; Yu et al. 2020), the SOR-like method (Ke and Ma 2017), the AOR method (Li 2017), the modified SOR-like (MSOR-like) method (Li and Wu 2020), etc. Moreover, we prove convergence theorems of the proposed methods. Finally, we use numerical examples to demonstrate our theoretical analysis and the superiority of the RMS method.
This paper is organized as follows. In Sect. 2, some notations and lemmas are reviewed. In Sect. 3, we propose the RMS, RP, RAOR, and RSOR methods for solving the AVEs (1). In Sect. 4, the convergence analysis of the proposed method is presented. Numerical examples and conclusions are given in Sects. 5 and 6, respectively.
2 Preliminaries
In this section, some notations and auxiliary results are presented.
Let \(R^{n \times n}\) denote the set of \(n \times n\) matrices with real entries and \(R^{n}=R^{n \times 1}\). The ith component of a vector \(x \in R^{n}\) is denoted by \(x_{i}\), and \(\vert x\vert \) denotes the vector with ith component \(\vert x_{i}\vert \). For a matrix A, an expression \(A=Q-R\) is called a splitting of A when Q is nonsingular. \(\Vert A\Vert \) denotes the spectral norm defined by \(\Vert A\Vert =\max \{\Vert Ax\Vert :x \in R^{n},\Vert x\Vert =1\}\), where \(\Vert x\Vert \) is the 2-norm. For matrices \(A=(a_{ij}), B=(b_{ij}) \in R^{n \times n}\), we write \(A \ge B\) if \(a_{ij} \ge b_{ij}\) for \(i,j = 1,\cdots , n\).
Lemma 1
(Young 1971, Lemma 6-2.1) If b and c are real, then both roots of the quadratic equation

\(\lambda ^{2}-b\lambda +c=0\)

are less than one in modulus if and only if

\(\vert c\vert <1 \quad \text {and} \quad \vert b\vert <1+c.\)
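Since this root criterion underpins the convergence analysis, it is easy to check numerically. The helper below is our own illustration (not from the paper); it takes the quadratic as \(\lambda ^{2}-b\lambda +c=0\) with the condition \(\vert c\vert <1\) and \(\vert b\vert <1+c\):

```python
import numpy as np

def roots_in_unit_disk(b, c):
    # Roots of lambda^2 - b*lambda + c = 0, computed numerically.
    return bool(np.all(np.abs(np.roots([1.0, -b, c])) < 1.0))

def young_condition(b, c):
    # The algebraic condition of Lemma 1: |c| < 1 and |b| < 1 + c.
    return abs(c) < 1 and abs(b) < 1 + c

# The two tests agree on sample coefficients away from the boundary.
for b, c in [(0.5, 0.2), (1.9, 0.95), (2.5, 1.5), (0.0, -0.5)]:
    assert roots_in_unit_disk(b, c) == young_condition(b, c)
```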
The following results are obvious.
Lemma 2
For \(x,y \in R^{n}\), the following results hold:
(i) \(\Vert \vert x \vert -\vert y \vert \Vert \le \Vert x-y\Vert \);

(ii) if \(0 \le x \le y\), then \(\Vert x\Vert \le \Vert y\Vert \);

(iii) if \(x \le y\) and \(Q \ge 0\), then \(Qx \le Qy\).
3 Relaxed-based matrix splitting method
In this section, some new methods for solving the AVEs (1) are presented.
Let \(y=\vert x\vert \). Then, the AVEs (1) is equivalent to the system

\(Ax-By=b, \quad y=\vert x\vert .\)

We split A into

\(A=Q-R, \qquad (3)\)

where Q is nonsingular. Then, we define a relaxed-based matrix splitting (RMS) method for solving the AVEs (1) as

\(\begin{cases} x^{(k+1)}=Q^{-1}(Rx^{(k)}+By^{(k)}+b),\\ y^{(k+1)}=(1-\tau )y^{(k)}+\tau \vert x^{(k+1)}\vert , \end{cases} \qquad (4)\)

where \(\tau \) is a positive relaxation parameter.
The RMS method has a general form and it contains many existing iterative methods as its special cases.
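In, e.g., Python/NumPy, an RMS-type iteration can be sketched as follows. This is an illustrative implementation with our own naming, combining the x-step of the splitting with a relaxed y-step; it is not code from the paper:

```python
import numpy as np

def rms(A, B, b, Q, tau, tol=1e-6, max_iter=500):
    """Relaxed-based matrix splitting sketch for A x - B|x| = b, with A = Q - R."""
    R = Q - A                      # splitting residual matrix
    x = np.zeros_like(b)
    y = np.abs(x)                  # auxiliary variable y ~ |x|
    for k in range(1, max_iter + 1):
        x = np.linalg.solve(Q, R @ x + B @ y + b)   # x-update
        y = (1 - tau) * y + tau * np.abs(x)         # relaxed y-update
        if np.linalg.norm(A @ x - B @ np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

# Small strictly diagonally dominant test problem.
n = 6
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = 0.2 * np.eye(n)
x_true = np.array([1.0, -1, 1, -1, 1, -1])
b = A @ x_true - B @ np.abs(x_true)
x, iters = rms(A, B, b, Q=A, tau=0.9)
assert np.allclose(x, x_true, atol=1e-5)
```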
Let \(Q=\frac{1}{\omega }A\), \(R=\frac{1-\omega }{\omega }A\) with \(\omega \not =0\). Then, the RMS method defined by (4) reduces to a relaxed-based Picard (RP) method defined as

\(\begin{cases} x^{(k+1)}=(1-\omega )x^{(k)}+\omega A^{-1}(By^{(k)}+b),\\ y^{(k+1)}=(1-\tau )y^{(k)}+\tau \vert x^{(k+1)}\vert . \end{cases} \qquad (5)\)
The RP method has a simple form and is efficient when linear systems with A are easy to solve; in particular, when A is strictly diagonally dominant, it is usually a good choice for solving the AVEs.
For \(\tau =1\), the RMS method (4) reduces to the Newton-based matrix splitting iterative method (NMS method) (Zhou et al. 2021)

\(x^{(k+1)}=Q^{-1}(Rx^{(k)}+B\vert x^{(k)}\vert +b), \qquad (6)\)

which becomes the modified Newton type iteration method (MN method) whenever R is a positive semi-definite matrix (Wang et al. 2019), and a Picard method for \(R=0\) (Rohn et al. 2014)

\(x^{(k+1)}=A^{-1}(B\vert x^{(k)}\vert +b). \qquad (7)\)
Compared with the NMS method (6), the RMS method (4) has an additional relaxation factor, which can make it more effective.
We decompose A into

\(A=D-L-U,\)

where D is the diagonal part of A, and L and U are the strictly lower and strictly upper triangular parts of A, respectively, as usual.
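In NumPy, this standard splitting can be formed as follows (an illustrative helper with our own naming):

```python
import numpy as np

def dlu_split(A):
    """Return D, L, U with A = D - L - U (D diagonal; L, U strictly triangular)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)   # strictly lower part, negated
    U = -np.triu(A, k=1)    # strictly upper part, negated
    return D, L, U

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
D, L, U = dlu_split(A)
assert np.allclose(A, D - L - U)
```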
In Wang et al. (2019) and Zhou et al. (2021), from the NMS method, the authors propose some special iterative methods, which include a special MN method

\(x^{(k+1)}=(A+\Omega )^{-1}(\Omega x^{(k)}+B\vert x^{(k)}\vert +b), \qquad (8)\)

the Newton-based Gauss-Seidel (NGS) method

\(x^{(k+1)}=(D-L+\Omega )^{-1}[(U+\Omega )x^{(k)}+B\vert x^{(k)}\vert +b], \qquad (9)\)

and the Newton-based SOR (NSOR) method

\(x^{(k+1)}=(D-\alpha L+\alpha \Omega )^{-1}\{[(1-\alpha )D+\alpha U+\alpha \Omega ]x^{(k)}+\alpha (B\vert x^{(k)}\vert +b)\}, \qquad (10)\)

where \(\Omega \) is a given positive semi-definite matrix and \(\alpha >0\) is a parameter.
For \(\omega \in \Re \setminus \{0\}\) and \(\gamma \in \Re \), let

\(Q=\frac{1}{\omega }(D-\gamma L), \quad R=\frac{1}{\omega }[(1-\omega )D+(\omega -\gamma )L+\omega U],\)

where D, L, and U are given by the decomposition of A above. Then, the RMS method defined by (4) is called the relaxed-based AOR (RAOR) method, defined as

\(\begin{cases} x^{(k+1)}=(D-\gamma L)^{-1}\{[(1-\omega )D+(\omega -\gamma )L+\omega U]x^{(k)}+\omega (By^{(k)}+b)\},\\ y^{(k+1)}=(1-\tau )y^{(k)}+\tau \vert x^{(k+1)}\vert . \end{cases} \qquad (11)\)
For \(\tau =1\), the RAOR method (11) turns into the AOR method (Li 2017):

\(x^{(k+1)}=(D-\gamma L)^{-1}\{[(1-\omega )D+(\omega -\gamma )L+\omega U]x^{(k)}+\omega (B\vert x^{(k)}\vert +b)\}. \qquad (12)\)
Compared with the AOR method (12), the RAOR method (11) has an additional relaxation factor, which can make it more effective.
For the case \(\omega =\gamma \), the RAOR method is called the relaxed-based SOR (RSOR) method

\(\begin{cases} x^{(k+1)}=(D-\omega L)^{-1}\{[(1-\omega )D+\omega U]x^{(k)}+\omega (By^{(k)}+b)\},\\ y^{(k+1)}=(1-\tau )y^{(k)}+\tau \vert x^{(k+1)}\vert . \end{cases} \qquad (13)\)
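A sketch of an RAOR-type iteration in NumPy follows; this is our illustration (not code from the paper), with the parameter names \(\omega ,\gamma ,\tau \) as in the text:

```python
import numpy as np

def raor(A, B, b, omega, gamma, tau, tol=1e-6, max_iter=500):
    """Relaxed-based AOR sketch for A x - B|x| = b."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)
    U = -np.triu(A, k=1)
    M = D - gamma * L                                   # solved against each step
    N = (1 - omega) * D + (omega - gamma) * L + omega * U
    x = np.zeros_like(b)
    y = np.abs(x)
    for k in range(1, max_iter + 1):
        x = np.linalg.solve(M, N @ x + omega * (B @ y + b))
        y = (1 - tau) * y + tau * np.abs(x)
        if np.linalg.norm(A @ x - B @ np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

n = 6
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = 0.2 * np.eye(n)
x_true = np.ones(n)
b = A @ x_true - B @ np.abs(x_true)
x, iters = raor(A, B, b, omega=0.9, gamma=0.85, tau=1.0)
assert np.allclose(x, x_true, atol=1e-5)
```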
For the AVEs (2), some known methods can be derived as follows.
In this case, the RP method (5) is equivalent to the new SOR-like method (Dong et al. 2020), where \(\tau = \omega /\sigma \) with \(\sigma >0\).
The RP method (5) reduces to the fixed point iteration (FPI) method whenever \(\omega =1\) (Ke 2020; Yu et al. 2020)

\(\begin{cases} x^{(k+1)}=A^{-1}(y^{(k)}+b),\\ y^{(k+1)}=(1-\tau )y^{(k)}+\tau \vert x^{(k+1)}\vert , \end{cases} \qquad (14)\)
and the SOR-like method whenever \(\tau =\omega \) (Ke and Ma 2017)

\(\begin{cases} x^{(k+1)}=(1-\omega )x^{(k)}+\omega A^{-1}(y^{(k)}+b),\\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega \vert x^{(k+1)}\vert . \end{cases} \qquad (15)\)
Compared with the FPI and SOR-like methods, the RP method has more degrees of freedom, so it may be more effective.
The RSOR method (13) reduces to the modified SOR-like (MSOR-like) method whenever \(\tau =\omega \) (Li and Wu 2020)

\(\begin{cases} x^{(k+1)}=(D-\omega L)^{-1}\{[(1-\omega )D+\omega U]x^{(k)}+\omega (y^{(k)}+b)\},\\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega \vert x^{(k+1)}\vert . \end{cases} \qquad (16)\)
Compared with the MSOR-like method, the RSOR method has more degrees of freedom, so it may be more effective.
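For the AVEs (2) (i.e., \(B=I\)), the FPI special case can be sketched as follows (illustrative code with our own naming; for \(\Vert A^{-1}\Vert <1\) the iteration contracts):

```python
import numpy as np

def fpi(A, b, tau=1.0, tol=1e-6, max_iter=500):
    """Fixed point iteration sketch for A x - |x| = b."""
    x = np.zeros_like(b)
    y = np.abs(x)
    for k in range(1, max_iter + 1):
        x = np.linalg.solve(A, y + b)
        y = (1 - tau) * y + tau * np.abs(x)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

n = 6
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # here ||A^{-1}|| < 1
x_true = np.array([2.0, -1, 2, -1, 2, -1])
b = A @ x_true - np.abs(x_true)
x, iters = fpi(A, b)
assert np.allclose(x, x_true, atol=1e-5)
```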
4 Convergence analysis
In this section, we discuss the convergence of the methods proposed in Sect. 3.
For the AVEs (1) and the splitting (3), denote

\(\mu =\Vert Q^{-1}R\Vert , \quad \nu =\Vert Q^{-1}B\Vert . \qquad (17)\)
Lemma 3
If

\(\mu \vert 1-\tau \vert <1 \quad \text {and} \quad \tau \nu <(1-\mu )(1-\vert 1-\tau \vert ), \qquad (18)\)
then the RMS method (4) for solving the AVEs (1) is convergent.
Proof
Let \(\{x^{*},y^{*}\}\) be a solution pair of the AVEs (1). Then, it holds that

\(\begin{cases} x^{*}=Q^{-1}(Rx^{*}+By^{*}+b),\\ y^{*}=(1-\tau )y^{*}+\tau \vert x^{*}\vert . \end{cases} \qquad (19)\)

Denote \(\varepsilon _{k}^{x}=x^{(k)}-x^{*}\) and \(\varepsilon _{k}^{y}=y^{(k)}-y^{*}\).
Subtracting (19) from (4), we have

\(\varepsilon _{k+1}^{x}=Q^{-1}R\varepsilon _{k}^{x}+Q^{-1}B\varepsilon _{k}^{y} \quad \text {and} \quad \varepsilon _{k+1}^{y}=(1-\tau )\varepsilon _{k}^{y}+\tau (\vert x^{(k+1)}\vert -\vert x^{*}\vert ),\)

so that

\(\Vert \varepsilon _{k+1}^{x}\Vert \le \mu \Vert \varepsilon _{k}^{x}\Vert +\nu \Vert \varepsilon _{k}^{y}\Vert .\)

By Lemma 2, we have

\(\Vert \varepsilon _{k+1}^{y}\Vert \le \vert 1-\tau \vert \Vert \varepsilon _{k}^{y}\Vert +\tau \Vert \varepsilon _{k+1}^{x}\Vert \le \tau \mu \Vert \varepsilon _{k}^{x}\Vert +(\vert 1-\tau \vert +\tau \nu )\Vert \varepsilon _{k}^{y}\Vert .\)

This can be rewritten as

\(\begin{pmatrix} \Vert \varepsilon _{k+1}^{x}\Vert \\ \Vert \varepsilon _{k+1}^{y}\Vert \end{pmatrix}\le \begin{pmatrix} \mu & \nu \\ \tau \mu & \vert 1-\tau \vert +\tau \nu \end{pmatrix}\begin{pmatrix} \Vert \varepsilon _{k}^{x}\Vert \\ \Vert \varepsilon _{k}^{y}\Vert \end{pmatrix}, \qquad (20)\)

which implies that

\(\begin{pmatrix} \Vert \varepsilon _{k}^{x}\Vert \\ \Vert \varepsilon _{k}^{y}\Vert \end{pmatrix}\le W^{k}\begin{pmatrix} \Vert \varepsilon _{0}^{x}\Vert \\ \Vert \varepsilon _{0}^{y}\Vert \end{pmatrix},\)

where W denotes the \(2\times 2\) matrix in (20).
Let

\(W=\begin{pmatrix} \mu & \nu \\ \tau \mu & \vert 1-\tau \vert +\tau \nu \end{pmatrix},\)

and let \(\lambda \) be an eigenvalue of W. Then, we have

\(\det (\lambda I-W)=(\lambda -\mu )(\lambda -\vert 1-\tau \vert -\tau \nu )-\tau \mu \nu =0.\)

That is,

\(\lambda ^{2}-(\mu +\vert 1-\tau \vert +\tau \nu )\lambda +\mu \vert 1-\tau \vert =0.\)

By (18), it holds that

\(\mu \vert 1-\tau \vert <1 \quad \text {and} \quad \mu +\vert 1-\tau \vert +\tau \nu <1+\mu \vert 1-\tau \vert .\)
It follows by Lemma 1 that \(\vert \lambda \vert <1\), which implies \(\rho (W)<1\).
Hence, \(W^{k} \rightarrow 0\) as \(k \rightarrow \infty \), so that, by (20), \(\varepsilon _{k}^{x} \rightarrow 0\) and \(\varepsilon _{k}^{y} \rightarrow 0\). This proves that the RMS method (4) is convergent. \(\square \)
Theorem 4
Suppose that \(\mu + \nu <1\). If

\(0<\tau <\frac{2(1-\mu )}{1-\mu +\nu }, \qquad (21)\)
then the RMS method (4) for solving the AVEs (1) is convergent.
Proof
If \(0 < \tau \le 1\), then \(\mu \vert 1-\tau \vert = \mu (1-\tau ) < 1\) and \(\tau \nu < \tau (1-\mu ) = (1-\mu )(1-\vert 1-\tau \vert )\). Hence, the inequality (18) holds.

For \(1\le \tau < 2(1-\mu )/(1-\mu +\nu )\), on the one hand, we obtain \(\tau \nu <(1-\mu )(2-\tau ) = (1-\mu )(1-\vert 1-\tau \vert )\), which shows that the second inequality in (18) holds. On the other hand, since \(2\mu (1-\mu )/(1-\mu +\nu ) < 1+\mu \), we deduce that \(\tau \mu < 1+\mu \), so that \(\mu \vert \tau -1\vert = \mu (\tau -1)<1\), which shows that the first inequality in (18) holds.

In summary, if (21) is satisfied, then the inequality (18) holds. It follows by Lemma 3 that the RMS method (4) for solving the AVEs (1) is convergent. \(\square \)
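The quantities \(\mu \), \(\nu \) and the admissible interval for \(\tau \) are easy to evaluate for a concrete splitting. The helper below is an illustration (our own naming); the upper bound it returns is \(2(1-\mu )/(1-\mu +\nu )\), the one used in the proof of Theorem 4, with \(\mu =\Vert Q^{-1}R\Vert \) and \(\nu =\Vert Q^{-1}B\Vert \):

```python
import numpy as np

def tau_upper_bound(A, B, Q):
    """Return mu, nu, and the bound 2(1-mu)/(1-mu+nu) when mu + nu < 1 (else None)."""
    Qi = np.linalg.inv(Q)
    mu = np.linalg.norm(Qi @ (Q - A), 2)   # mu = ||Q^{-1} R||, with R = Q - A
    nu = np.linalg.norm(Qi @ B, 2)         # nu = ||Q^{-1} B||
    bound = 2 * (1 - mu) / (1 - mu + nu) if mu + nu < 1 else None
    return mu, nu, bound

n = 6
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = 0.2 * np.eye(n)
mu, nu, bound = tau_upper_bound(A, B, Q=A)   # Picard-type splitting: mu = 0
assert mu < 1e-12 and 1.0 < bound < 2.0
```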
Theorem 5
Let A be nonsingular and \(\Vert A^{-1}B\Vert < 1\). Then, the RP method (5) for solving the AVEs (1) is convergent whenever either

\(0<\omega \le 1 \quad \text {and} \quad 0<\tau <\frac{2}{1+\Vert A^{-1}B\Vert },\)

or

\(1<\omega <\frac{2}{1+\Vert A^{-1}B\Vert } \quad \text {and} \quad 0<\tau <\frac{2(2-\omega )}{2-\omega +\omega \Vert A^{-1}B\Vert }.\)
Proof
For the RP method, \(\mu \) and \(\nu \) defined by (17) reduce to \(\mu =\vert 1-\omega \vert \) and \(\nu =\vert \omega \vert \Vert A^{-1}B\Vert \).
When \(\Vert A^{-1}B\Vert < 1\) and \(0< \omega < 2/(1 + \Vert A^{-1}B\Vert )\), it is easy to prove that \(\mu + \nu = \vert 1-\omega \vert + \omega \Vert A^{-1}B\Vert < 1\), and that

\(\frac{2(1-\mu )}{1-\mu +\nu }=\begin{cases} \dfrac{2}{1+\Vert A^{-1}B\Vert }, & 0<\omega \le 1,\\ \dfrac{2(2-\omega )}{2-\omega +\omega \Vert A^{-1}B\Vert }, & 1<\omega <\dfrac{2}{1+\Vert A^{-1}B\Vert }. \end{cases}\)
Now, by Theorem 4, the required result follows directly. \(\square \)
When \(\omega =1\), by Theorem 5, the following corollary is obvious.
Corollary 6
Let A be nonsingular. If

\(\Vert A^{-1}\Vert <1 \quad \text {and} \quad 0<\tau <\frac{2}{1+\Vert A^{-1}\Vert },\)
then the FPI method (14) for solving the AVEs (2) is convergent.
This convergence result is better than the corresponding one given by (Ke 2020, Theorem 2.1).
It is easy to see that if \(\tau = \omega \), then the condition in Theorem 5 reduces to

\(\Vert A^{-1}B\Vert <1 \quad \text {and} \quad 0<\omega <\frac{2}{1+\sqrt{\Vert A^{-1}B\Vert }}.\)

Hence, by Theorem 5, the following corollary is direct.
Corollary 7
Let A be nonsingular. If

\(\Vert A^{-1}\Vert <1 \quad \text {and} \quad 0<\omega <\frac{2}{1+\sqrt{\Vert A^{-1}\Vert }},\)
then the SOR-like method (15) for solving the AVEs (2) is convergent.
This convergence condition is simpler than the corresponding one given by Ke and Ma (2017).
Lemma 8
(Li 2017, Lemma 4.1) If \(\rho (\vert Q^{-1}R\vert + \vert Q^{-1}B\vert ) < 1\), then the NMS method (6) for solving the AVEs (1) is convergent.
This lemma implies (Zhou et al. 2021, Theorem 4.1) and is better than (Wang et al. 2019, Theorem 3.1).
Since the Picard method (7) is a special case of the NMS method, from Lemma 8, the following corollary is obvious.
Corollary 9
If A is nonsingular and \(\rho (\vert A^{-1}B\vert ) < 1\), then the Picard method (7) for solving the AVEs (1) is convergent.
This corollary is consistent with (Moosaei et al. 2015, Theorem 4.3) and is better than (Wang et al. 2019, Corollary 3.2). And the convergence given in (Rohn et al. 2014, Theorem 2) can be derived from this corollary directly.
From Theorem 4, the following convergence for the RAOR method (11) is direct.
Theorem 10
Let \(\tilde{\mu }=\Vert (D-\gamma L)^{-1}[(1-\omega )D+(\omega -\gamma )L+\omega U]\Vert \) and \(\tilde{\nu }=\Vert (D-\gamma L)^{-1} B\Vert \). Assume that \(\tilde{\mu } + \omega \tilde{\nu } < 1\). If

\(0<\tau <\frac{2(1-\tilde{\mu })}{1-\tilde{\mu }+\omega \tilde{\nu }},\)
then the RAOR method (11) for solving the AVEs (1) is convergent.
When \(\omega =\gamma \), we can obtain the convergence of the RSOR method (13).
For the MSOR-like method (16), this theorem is consistent with (Li and Wu 2020, Theorem 1).
It is easy to prove that if \(\tilde{\mu } + \omega \tilde{\nu } < 1\), then

\(1<\frac{2(1-\tilde{\mu })}{1-\tilde{\mu }+\omega \tilde{\nu }},\)

i.e., \(\tau =1\) satisfies the condition of Theorem 10. Hence, by Theorem 10, the following result is immediate.
Corollary 11
Let \(\tilde{\mu }=\Vert (D-\gamma L)^{-1}[(1-\omega )D+(\omega -\gamma )L+\omega U]\Vert \) and \(\tilde{\nu }=\Vert (D-\gamma L)^{-1} B\Vert \). Assume that \(\tilde{\mu } + \omega \tilde{\nu } < 1\). Then, the AOR method (12) for solving the AVEs (1) is convergent.
5 Numerical examples
In this section, three numerical examples are tested to demonstrate the effectiveness and feasibility of our methods for solving the AVEs (1) and (2) in terms of the number of iterations, the elapsed CPU time, and the relative residual error.

For convenience, the number of iterations, the elapsed CPU time (in seconds), and the relative residual error are denoted by 'IT', 'CPU', and 'RES', respectively. In the computation, all initial vectors are taken as zero vectors, and the iterations are terminated once the relative residual satisfies \(\Vert Ax_{k}-B\vert x_{k}\vert -b\Vert _{2}/\Vert b\Vert _{2} \le 10^{-6}\) or the CPU time exceeds 500 s. All tests are performed in Matlab 2016b on a personal computer with a 2.30 GHz central processing unit (Intel(R) Core(TM) i5-8259U), 8 GB memory, and the Windows 10 operating system.
Example 1
(Guo et al. 2019) Let m be a prescribed positive integer and \(n=m^{2}\). Consider the AVEs (2) with
We test the RP method (5) with the experimental optimal parameters \((\omega ,\tau )\), the SOR-like method (15) with the optimal parameter given in Guo et al. (2019), the RAOR method (11) with the parameters \((\omega ,\gamma ,\tau )=(0.9,0.85,1.6)\), the AOR method (12) with the parameters \((\omega ,\gamma )=(0.9,0.85)\), and the MN method (8), the NGS method (9), and the NSOR method (10), each with its experimental optimal parameter. For the MN, NGS, and NSOR methods, \(\Omega \) is taken as two common cases. The experimental results are listed in Table 1.
From Table 1, we can see that all seven tested methods converge quickly to the solution \(x^{*}\) for different sizes. As the matrix dimension increases, IT remains almost constant, which shows that these methods are stable. Furthermore, the RP, SOR-like, and RAOR methods require less CPU time and fewer iterations than the other methods. In terms of CPU time and IT, the RP method requires the least.
In Mangasarian and Meyer (2006), the linear complementarity problem (LCP)

\(z \ge 0, \quad Mz+q \ge 0, \quad z^{\top }(Mz+q)=0, \qquad (22)\)

with \(M \in R^{n \times n}\), \(\det (I-M) \ne 0\) and \(q \in R^n\), can be reduced to the AVEs

\((M+I)x-(M-I)\vert x\vert =q, \qquad (23)\)

with \(x = \frac{1}{2}[(M - I )z + q]\).
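This reduction can be verified numerically. The snippet below is an illustration (our own construction), taking the AVE in the form \((M+I)x-(M-I)\vert x\vert =q\), which is consistent with the substitution \(x=\frac{1}{2}[(M-I)z+q]\):

```python
import numpy as np

# Illustration of the LCP -> AVE reduction of Mangasarian and Meyer (2006).
n = 5
rng = np.random.default_rng(2)
M = 4 * np.eye(n) + 0.1 * rng.random((n, n))
z_star = np.full(n, 1.2)          # prescribed LCP solution
q = -M @ z_star                   # then w = M z + q = 0, so complementarity holds

x = 0.5 * ((M - np.eye(n)) @ z_star + q)                # substitution from the text
residual = (M + np.eye(n)) @ x - (M - np.eye(n)) @ np.abs(x) - q
assert np.allclose(residual, 0)                          # x solves the AVE
assert np.allclose(np.abs(x) - x, z_star)                # z is recovered from x
```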
Example 2
(Zhou et al. 2021) Consider the LCP (22), where \(M=\hat{M}+ 4 I\) with
and \(q=-Mz^{*}\) with \(z^{*}= (1.2,1.2,\cdots ,1.2)^{\top } \in R^{n}\) being its unique solution. In this case, the unique solution of the corresponding AVEs (23) is

\(x^{*}=\frac{1}{2}[(M-I)z^{*}+q]=(-0.6,-0.6,\cdots ,-0.6)^{\top }.\)
We test the RP method (5) with the parameters \((\omega ,\tau ) = (0.6,0.998)\) and \((\omega ,\tau ) = (0.998,0.598)\), the Picard method (7), the MN method (8), the NGS method (9), and the NSOR method (10) with the optimal parameter \(\alpha = 0.9\). For the MN, NGS, and NSOR methods, set \(\Omega =\hat{M}\).
Some iterative methods for solving the LCP (22) have been proposed. In Huang and Cui (2022), the authors give a class of relaxation modulus-based matrix splitting (RMMS) methods, among which the RMSOR method is the most effective. Here, we test the RMSOR method with \(\Omega =0.5I\) and \(\gamma =1\); the parameter in the SOR splitting and the relaxation parameter \(\theta \) are selected as the experimental optimal values 0.6 and 0.98, respectively.
The experimental results are listed in Table 2, where '-' denotes that the CPU time is larger than 500 s.
From Table 2, we can see that the Picard method does not converge within 500 s, while the other six tested methods converge quickly to the solution \(x^{*}\) for different sizes. As the matrix dimension increases, IT remains constant, which shows that, except for the MN method, the tested methods are all effective and stable. Furthermore, the RP, NGS, and NSOR methods require less CPU time and fewer iterations than the other methods. In terms of CPU time and IT, the RP method requires the least.
Example 3
Consider the AVEs (1), where the matrices A and B are randomly generated by the following Matlab procedure: \(A=rand(n,n)+n*eye(n) \in R^{n \times n}\), \(B=rand(n,n)+\frac{n}{2}* eye(n) \in R^{n \times n}\), \(b=ones(n,1) \in R^{n \times 1} \).
We test the RAOR method (11), the Picard method (7), and the MSOR-like method (16), where we choose five increasing sizes: \(n=4000, 5000, 6000, 7000, 8000\). The experimental results are listed in Table 3, where '–' denotes that the CPU time is larger than 500 s.
From Table 3, we can see that the MSOR-like method does not converge within 500 s, and the RAOR method has less CPU and IT than the Picard method. This shows that the RAOR method is superior to the Picard and MSOR-like methods under some conditions.
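The Matlab generation of Example 3 translates directly to NumPy, and a plain Picard iteration suffices for this class of problems, since \(\Vert A^{-1}B\Vert \) is typically well below 1 for this construction (a smaller size and our own naming are used here for illustration):

```python
import numpy as np

n = 200
rng = np.random.default_rng(0)
A = rng.random((n, n)) + n * np.eye(n)          # analogue of rand(n,n) + n*eye(n)
B = rng.random((n, n)) + (n / 2) * np.eye(n)    # analogue of rand(n,n) + (n/2)*eye(n)
b = np.ones(n)

# Picard iteration x^{k+1} = A^{-1}(B|x^k| + b), stopped on the relative residual.
x = np.zeros(n)
for k in range(500):
    x = np.linalg.solve(A, B @ np.abs(x) + b)
    res = np.linalg.norm(A @ x - B @ np.abs(x) - b) / np.linalg.norm(b)
    if res <= 1e-6:
        break
assert res <= 1e-6
```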
6 Conclusions
In this paper, we investigate the iterative methods for solving the AVEs (1). Using matrix splitting and the relaxation technique, a relaxed-based matrix splitting (RMS) method is presented. As special cases, we propose the RP, RAOR, and RSOR methods. These methods include some known methods as special cases, such as the Newton-based matrix splitting iterative method (NMS method) (Zhou et al. 2021), the modified Newton type iteration method (MN method) (Wang et al. 2019), the Picard method (Rohn et al. 2014), the new SOR-like method (Dong et al. 2020), the fixed point iteration (FPI) method (Ke 2020; Yu et al. 2020), the SOR-like method (Ke and Ma 2017), the AOR method (Li 2017), the modified SOR-like (MSOR-like) method (Li and Wu 2020), etc.
We prove convergence theorems of the proposed methods. Numerical examples show that our methods are superior to the Picard, SOR-like, MN, NGS, AOR, NSOR, and MSOR-like methods, as well as the RMSOR method given in Huang and Cui (2022), under some conditions.
For the RMS method, how to construct more effective splitting and make the method converge faster still needs further research.
With reference to Li (2017), we can construct a preconditioned relaxed-based matrix splitting method.
For the RMS, RP, RAOR, and RSOR methods, several parameters are involved, and in this paper they are mainly selected by trial calculation. How to select better parameters is an interesting and important, though difficult, problem that deserves further study.
References
Dong X, Shao X-H, Shen H-L (2020) A new SOR-like method for solving absolute value equations. Appl Numer Math 156:410–421
Edalatpour V, Hezari D, Salkuyeh DK (2017) A generalization of the Gauss–Seidel iteration method for solving absolute value equations. Appl Math Comput 293:156–167
Guo P, Wu S-L, Li C-X (2019) On the SOR-like method for solving absolute value equations. Appl Math Lett 97:107–113
Gu X, Huang T, Li H, Wang S, Li L (2017) Two CSCS-based iteration methods for solving absolute value equations. J Appl Anal Comput 7:1336–1356
Haghani FK (2015) On generalized Traub's method for absolute value equations. J Optim Theory Appl 166:619–625
Hu S-L, Huang Z-H (2010) A note on absolute value equations. Optim Lett 4:417–424
Hu S-L, Huang Z-H, Zhang Q (2011) A generalized Newton method for absolute value equations associated with second order cones. J Comput Appl Math 235:1490–1501
Huang Z-G, Cui J-J (2022) The relaxation modulus-based matrix splitting iteration method for horizontal linear complementarity problems. Bull Iran Math Soc. https://doi.org/10.1007/s41980-022-00695-y
Ke Y-F (2020) The new iteration algorithm for absolute value equation. Appl Math Lett 99:105990
Ke Y-F, Ma C-F (2017) SOR-Like iteration method for solving absolute value equations. Appl Math Comput 311:195–202
Li C-X (2016a) A modified generalized Newton method for absolute value equations. J Optim Theory Appl 170:1055–1059

Li C-X (2016b) On the modified Hermitian and skew-Hermitian splitting iteration methods for a class of weakly absolute value equations. J Inequal Appl 260:1–10
Li C-X (2017) A preconditioned AOR iterative method for the absolute value equations. Int J Comput Methods 14:1–12
Li C-X, Wu S-L (2020) Modified SOR-Like method for absolute value equations. Math Probl Eng 2020:9231639
Mangasarian OL (2007) Absolute value programming. Comput Optim Appl 36:43–53
Mangasarian OL (2009a) A generalized Newton method for absolute value equations. Optim Lett 3:101–108

Mangasarian OL (2009b) On unique solvability of the absolute value equation. Optim Lett 3:603–606
Mangasarian OL, Meyer RR (2006) Absolute value equations. Linear Algebra Appl 419:359–367
Mezzadri F (2020) On the solution of general absolute value equations. Appl Math Lett 107:106462
Miao X-H, Yang J-T, Hu S (2015) A generalized Newton method for absolute value equations associated with circular cones. Appl Math Comput 269:155–168
Miao X-H, Yang J-T, Saheya B, Chen J-S (2017) A smoothing Newton method for absolute value equation associated with second-order cone. Appl Numer Math 120:82–96
Moosaei H, Ketabchi S, Noor MA, Iqbal J, Hooshyarbakhsh V (2015) Some techniques for solving absolute value equations. Appl Math Comput 268:696–705
Rohn J (2004) A theorem of alternatives for the equation \(Ax+B\vert x\vert =b\). Linear Multilinear Algebra 52:421–426
Rohn J (2009) On unique solvability of the absolute value equation. Optim Lett 3:603–606
Rohn J, Hooshyarbakhsh V, Farhadsefat R (2014) An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim Lett 8:35–44
Salkuyeh DK (2014) The Picard-HSS iteration method for absolute value equations. Optim Lett 8:2191–2202
Wang A, Cao Y, Chen J-X (2019) Modified Newton-type iteration methods for generalized absolute value equations. J Optim Theory Appl 181:216–230
Wu S-L (2021) The unique solution of a class of the new generalized absolute value equation. Appl Math Lett 116:107029
Wu S-L, Li C-X (2018) The unique solution of the absolute value equations. Appl Math Lett 76:195–200
Young DM (1971) Iterative solution of large linear systems. Academic Press, New York
Yu D, Chen C, Han D (2020) A modified fixed point iteration method for solving the system of absolute value equations. Optimization 71:449–461
Zhang C, Wei Q (2009) Global and finite convergence of a generalized Newton method for absolute value equations. J Optim Theory Appl 143:391–403
Zheng L (2020) The Picard-HSS-SOR iteration method for absolute value equations. J Inequal Appl 258:1–9
Zhou H-Y, Wu S-L, Li C-X (2021) Newton-based matrix splitting method for generalized absolute value equation. J Comput Appl Math 394:113578
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant Nos. 11771213 and 11971242), the Priority Academic Program Development of Jiangsu Higher Education Institutions, and Wuxi University Research Start-up Fund for Introduced Talents. The authors would like to thank the referees for their many valuable suggestions and comments which led us to improve this paper.
Communicated by Jinyun Yuan.
Song, J., Song, Y. Relaxed-based matrix splitting methods for solving absolute value equations. Comp. Appl. Math. 42, 19 (2023). https://doi.org/10.1007/s40314-022-02157-x
Keywords
- Absolute value equations
- Relaxed-based matrix splitting method
- Relaxed-based Picard method
- Relaxed-based AOR method
- Relaxed-based SOR method
- Convergence