Abstract
The fixed point iteration (FPI) method is an effective method for solving the absolute value equation via its equivalent two-by-two block form. To further improve its computational efficiency, we propose an inexact fixed point iteration method for solving the absolute value equation, based on the preconditioned shift-splitting strategy. We establish several convergence conditions for the proposed method and demonstrate its effectiveness on three examples.
1 Introduction
Consider the absolute value equation (AVE)
$$ Ax-\vert x\vert =b, \qquad (1) $$
where \(A\in {\mathbb {R}}^{n\times n}\) and \(b\in {\mathbb {R}}^n\), \(x\in {\mathbb {R}}^n\) is an unknown vector to be determined, and \(\vert x\vert \) denotes the vector whose components are the absolute values of the components of x. System (1) is a special case of the generalized absolute value equation (GAVE) [16]:
$$ Ax-B\vert x\vert =b, \qquad (2) $$
where \(B\in {\mathbb {R}}^{n \times n}\). The GAVE was first introduced by Rohn [16] and studied in a more general setting in [12, 14, 15]. AVE (1) arises in a variety of scientific computing and engineering applications, such as linear programming [10, 12], the quasi-complementarity problems [18], quadratic programming, the general linear complementarity problem [3], and so on.
Recently, many scholars have studied the unique solvability of AVE (1) and GAVE (2). For example, Wu and Li [19] presented two necessary and sufficient conditions, as well as some sufficient conditions, for the unique solvability of AVE (1); more solvability conditions can be found in [7] and the references therein. To approximate its numerical solution, a large number of methods have been proposed for AVE (1) or GAVE (2), including modified or generalized Newton methods [15, 20], matrix splitting iteration methods [1], Picard-type methods [17], neural network models [4, 13], and methods based on the equivalent two-by-two block form, such as the SOR-like method [6, 8], the fixed point iteration (FPI) method [9], the modified fixed point iteration (MFPI) method [21], and the shift-splitting fixed point iteration method [11].
By reformulating the AVE (1) into an equivalent two-by-two block form, Ke [9] proposed the FPI method for solving the AVE (1), which can be described as follows.
Method 1
(FPI Method [9]) Let \(A\in {\mathbb {R}}^{n\times n}\) be a nonsingular matrix and \(b\in {\mathbb {R}}^n\). Given initial vectors \(x^{(0)}\in {\mathbb {R}}^n\) and \(y^{(0)}\in {\mathbb {R}}^n\), for \(k=0,1,2,\ldots \) until the iteration sequence \(\{x^{(k)},y^{(k)}\}_{k=0}^{+\infty }\) is convergent, compute
$$ \left\{ \begin{array}{l} x^{(k+1)}=A^{-1}(y^{(k)}+b),\\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega \vert x^{(k+1)}\vert , \end{array}\right. \qquad (3) $$
where the relaxation parameter \(\omega >0\).
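As an illustration, the FPI method can be sketched in a few lines of NumPy. The function name `fpi` and the dense solver `np.linalg.solve` are illustrative choices (in practice A is large and sparse, so an inner iterative solver would be used); each sweep solves \(Ax^{(k+1)}=y^{(k)}+b\) and then relaxes \(y\) toward \(\vert x^{(k+1)}\vert \).

```python
import numpy as np

def fpi(A, b, omega=1.0, tol=1e-6, kmax=500):
    """Sketch of the FPI method for the AVE  Ax - |x| = b.

    At each step the linear system A x = y + b is solved (here exactly,
    with a dense solver); y is then relaxed toward |x|."""
    n = len(b)
    x = np.zeros(n)
    y = np.zeros(n)
    for k in range(kmax):
        x = np.linalg.solve(A, y + b)            # x^{(k+1)} = A^{-1}(y^{(k)} + b)
        y = (1 - omega) * y + omega * np.abs(x)  # relaxation step on y
        res = np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
        if res <= tol:
            break
    return x, k + 1
```

When the smallest singular value of A exceeds 1, the map \(x\mapsto A^{-1}(\vert x\vert +b)\) is a contraction, so the sketch converges for well-conditioned test problems.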
Note that a linear system with coefficient matrix A needs to be solved at each step of the FPI method; since A is typically large and sparse, we prefer to approximate its solution by an iterative method. If we split A as
$$ A=\frac{1}{2}(\alpha I+A)-\frac{1}{2}(\alpha I-A), $$
where \(\alpha \) is a positive parameter, and approximate \(x^{(k+1)}\) in the FPI method by the shift-splitting method [2], then we obtain the following shift-splitting FPI method (abbreviated as the FPI-SS method) for solving AVE (1).
Method 2
(FPI-SS Method for AVE (1)) Let \(A\in {\mathbb {R}}^{n\times n}\) be a nonsingular matrix and \(b\in {\mathbb {R}}^n\). Let \(\alpha \) be a positive constant such that \(\alpha I+A\in {\mathbb {R}}^{n\times n}\) is nonsingular. Given initial vectors \(x^{(0)}\in {\mathbb {R}}^n\) and \(y^{(0)}\in {\mathbb {R}}^n\), for \(k=0, 1, 2, \ldots \) until the iteration sequence \(\{x^{(k)}, y^{(k)}\}_{k=0}^{+\infty }\) is convergent, compute
$$ \left\{ \begin{array}{l} x^{(k+1)}=(\alpha I+A)^{-1}\left[ (\alpha I-A)x^{(k)}+2(y^{(k)}+b)\right] ,\\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega \vert x^{(k+1)}\vert , \end{array}\right. \qquad (4) $$
where the parameter \(\omega \) is a positive constant.
The FPI-SS method is an inexact FPI method and was first proposed for solving the GAVE (2) in [11]. When \(B=I\), the GAVE (2) reduces to the AVE (1), and Algorithm 3 in [11] reduces to Method 2. In this paper, based on the preconditioned shift-splitting technique, we propose another inexact FPI method for solving the AVE (1). This paper is organized as follows. In Sect. 2, the new inexact FPI method for solving the AVE (1) is established. The convergence analysis of the proposed method is given in Sect. 3. In Sect. 4, numerical experiments are presented to illustrate the effectiveness and feasibility of the proposed method. Finally, a brief conclusion is given in Sect. 5.
2 The FPI-PSS method
Similar to [5], assume that A is split as
$$ A=\frac{1}{2}(\alpha P+A)-\frac{1}{2}(\alpha P-A), $$
with a positive parameter \(\alpha \) and a symmetric positive definite matrix P. Then \(x^{(k+1)}\) in the FPI method (3) can be approximated by the following preconditioned shift-splitting (PSS) step:
$$ x^{(k+1)}=(\alpha P+A)^{-1}\left[ (\alpha P-A)x^{(k)}+2(y^{(k)}+b)\right] . $$
Hence, we have the following inexact FPI method, termed the FPI-PSS method, for solving the AVE (1).
Method 3
(FPI-PSS Method for AVE (1)) Let \(A\in {\mathbb {R}}^{n\times n}\) and \(b\in {\mathbb {R}}^{n}\). Given initial vectors \(x^{(0)}\in {\mathbb {R}}^{n}\) and \(y^{(0)}\in {\mathbb {R}}^{n}\), compute \(\{x^{(k+1)},y^{(k+1)}\}\) for \(k=0,1,2,\ldots \) using the following iteration scheme until \(\{x^{(k)},y^{(k)}\}_{k=0}^{+\infty }\) satisfies the stopping criterion:
$$ \left\{ \begin{array}{l} x^{(k+1)}=(\alpha P+A)^{-1}\left[ (\alpha P-A)x^{(k)}+2(y^{(k)}+b)\right] ,\\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega \vert x^{(k+1)}\vert , \end{array}\right. \qquad (5) $$
where \(\omega \) and \(\alpha \) are positive iteration parameters and P is a symmetric positive definite matrix.
Clearly, the iteration matrix of the FPI-PSS method is
$$ M=\begin{pmatrix} (\alpha P+A)^{-1}(\alpha P-A) & 2(\alpha P+A)^{-1}\\ \omega D(x)(\alpha P+A)^{-1}(\alpha P-A) & (1-\omega )I+2\omega D(x)(\alpha P+A)^{-1} \end{pmatrix}, $$
where D(x) is a diagonal matrix of the form \(D(x)=\textrm{diag}(\textrm{sign}(x))\) wherein \(\textrm{sign}(x)\) denotes a vector with components equal to 1, 0 or \(-1\) depending on whether the corresponding component of x is positive, zero or negative, respectively.
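The identity behind this notation is \(\vert x\vert =D(x)x\), which lets the absolute value enter the iteration matrix linearly. A one-line NumPy check (the vector x below is an arbitrary illustrative example):

```python
import numpy as np

x = np.array([3.0, -2.0, 0.0, 5.0])
D = np.diag(np.sign(x))          # D(x) = diag(sign(x))
# |x| = D(x) x componentwise: sign(x_i) * x_i = |x_i| for every i
assert np.array_equal(D @ x, np.abs(x))
```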
In particular, when \(P=I\), the FPI-PSS method reduces to the FPI-SS method; therefore, the proposed FPI-PSS method is a generalization of Method 2. Moreover, the FPI-PSS method has the same computational framework as the shift-splitting fixed point iteration method in [11], so it can also be used to solve the GAVE (2).
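A minimal NumPy sketch of the FPI-PSS sweep follows; the function name `fpi_pss` and the dense inner solve are illustrative assumptions (a sparse factorization of \(\alpha P+A\) would be used in practice). Each outer step performs one preconditioned shift-splitting update of x followed by the relaxation of y, and passing `P = np.eye(n)` recovers the FPI-SS method.

```python
import numpy as np

def fpi_pss(A, b, P, alpha, omega=1.0, tol=1e-6, kmax=500):
    """Sketch of the FPI-PSS method for Ax - |x| = b.

    P: symmetric positive definite preconditioner, alpha > 0.
    One PSS step replaces the exact solve with coefficient matrix A."""
    n = len(b)
    x = np.zeros(n)
    y = np.zeros(n)
    Mp = alpha * P + A   # splitting matrices, formed once
    Np = alpha * P - A
    for k in range(kmax):
        x = np.linalg.solve(Mp, Np @ x + 2.0 * (y + b))
        y = (1 - omega) * y + omega * np.abs(x)
        res = np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
        if res <= tol:
            return x, k + 1
    return x, kmax
```

Setting `omega=1.0` gives the inexact Picard variant remarked on in Sect. 4.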
3 Convergence of the FPI-PSS method
In this section, the convergence of the FPI-PSS method for solving the AVE (1) is studied. Let \(\rho (M)\) denote the spectral radius of the iteration matrix M; then the FPI-PSS method is convergent if and only if \(\rho (M)<1\). Assume that \(\lambda \) is an eigenvalue of the matrix M and \([u,v]^{T}\) is a corresponding eigenvector; then we have
$$ M\begin{pmatrix} u\\ v \end{pmatrix}=\lambda \begin{pmatrix} u\\ v \end{pmatrix}, \qquad (6) $$
which is equivalent to
$$ \left\{ \begin{array}{l} (\alpha P-A)u+2v=\lambda (\alpha P+A)u,\\ \omega \lambda D(x)u+(1-\omega )v=\lambda v. \end{array}\right. $$
Next, we will study the convergence of the FPI-PSS method. For this purpose, several helpful lemmas are presented as follows.
Lemma 1
[6] Let \(A\in {\mathbb {R}}^{n \times n}\). If the smallest singular value of A exceeds 1 and \(\eta \) is an eigenvalue of the matrix \(D(x)A^{-1}\), then \(\vert \eta \vert <1\).
Lemma 2
[22] Consider the real quadratic equation \(x^{2}+bx+d=0\), where b and d are real numbers. Both roots of the equation are less than one in modulus if and only if \(\vert d\vert <1\) and \(\vert b\vert <1+d\).
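Lemma 2 is the classical root-location criterion for real quadratics; the helper below (an illustrative name, not from [22]) encodes it and spot-checks it against roots computed numerically.

```python
import numpy as np

def roots_inside_unit_disk(b, d):
    """Lemma 2 criterion for x^2 + b x + d = 0 with real b, d:
    both roots have modulus < 1 iff |d| < 1 and |b| < 1 + d."""
    return abs(d) < 1 and abs(b) < 1 + d

# Spot-check the criterion against the actual roots on random coefficients.
rng = np.random.default_rng(0)
for _ in range(1000):
    b, d = rng.uniform(-3, 3, size=2)
    roots = np.roots([1.0, b, d])
    assert roots_inside_unit_disk(b, d) == bool(np.all(np.abs(roots) < 1))
```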
Lemma 3
Let \(A\in {\mathbb {R}}^{n \times n}\). If the smallest singular value of A exceeds 1 and \(\lambda \) is an eigenvalue of the matrix M, then \(\lambda \ne 1\).
Proof
If \(\lambda =1\) is an eigenvalue of the matrix M, then (6) is equivalent to
$$ \left\{ \begin{array}{l} v=Au,\\ v=D(x)u. \end{array}\right. \qquad (7) $$
From (7), we can get that
$$ \left( I-D(x)A^{-1}\right) Au=0. $$
It follows from Lemma 1 that \(I-D(x)A^{-1}\) is nonsingular, so \(Au=0\), and hence \(u=0\) since A is nonsingular. Then, from (7), we get \(v=0\), which contradicts the fact that \([u,v]^{T}\) is an eigenvector. Hence \(\lambda \ne 1\). \(\square \)
Lemma 4
Let \(A\in {\mathbb {R}}^{n \times n}\) be a nonsingular matrix and \(\omega >0\). If \(\lambda \) satisfies
$$ (\lambda -1+\omega )\left[ \alpha (\lambda -1)P+(\lambda +1)A\right] u=2\omega \lambda D(x)u \qquad (8) $$
for some nonzero vector \(u\in {\mathbb {C}}^{n}\),
then \(\lambda \) is an eigenvalue of the matrix M. Conversely, if \(\lambda \) is an eigenvalue of the matrix M such that \(\lambda \ne 1-\omega \), then \(\lambda \) satisfies (8).
Proof
Let \([u,v]^{T}\) be an eigenvector of M corresponding to the eigenvalue \(\lambda \). Then it follows from (6) that
$$ \left\{ \begin{array}{l} (\lambda -1)\alpha Pu+(\lambda +1)Au=2v,\\ (\lambda -1+\omega )v=\omega \lambda D(x)u. \end{array}\right. \qquad (9) $$
Combining the two equalities in (9), we obtain (8). The converse assertion can be proved by reversing the process. \(\square \)
Theorem 1
Let A be a symmetric positive definite matrix. Assume that \(\lambda \) is an eigenvalue of the iteration matrix M and \([u,v]^{T}\in {\mathbb {C}}^{2n}\) is the corresponding eigenvector. Denote \(a=\dfrac{u^{*}Au}{u^{*}Pu}, c=\dfrac{u^{*}D(x)u}{u^{*}Pu}.\) Then the FPI-PSS method is convergent if and only if the following conditions are satisfied:
$$ \left| \frac{(\omega -1)(a-\alpha )}{\alpha +a}\right| <1 \quad \text{and}\quad \left| (\omega -1)(\alpha +a)+(a-\alpha )-2\omega c\right| <(\alpha +a)+(\omega -1)(a-\alpha ). $$
Proof
From Lemma 4 we know that \(\lambda \) satisfies (8). Multiplying both sides of (8) by \(\dfrac{u^{*}}{u^{*}Pu}\), we get
$$ (\lambda -1+\omega )\left[ \alpha (\lambda -1)+(\lambda +1)a\right] =2\omega \lambda c, $$
that is,
$$ (\alpha +a)\lambda ^{2}+\left[ (\omega -1)(\alpha +a)+(a-\alpha )-2\omega c\right] \lambda +(\omega -1)(a-\alpha )=0, $$
or equivalently,
$$ \lambda ^{2}+\frac{(\omega -1)(\alpha +a)+(a-\alpha )-2\omega c}{\alpha +a}\,\lambda +\frac{(\omega -1)(a-\alpha )}{\alpha +a}=0. $$
From Lemma 2 and Lemma 3, we know that the FPI-PSS method is convergent if and only if
$$ \left| \frac{(\omega -1)(a-\alpha )}{\alpha +a}\right| <1 $$
and
$$ \left| \frac{(\omega -1)(\alpha +a)+(a-\alpha )-2\omega c}{\alpha +a}\right| <1+\frac{(\omega -1)(a-\alpha )}{\alpha +a}. $$
In what follows, we divide our discussion into three cases to solve the above inequalities.
Case 1: \(c=0\)
In this case, we have \(\lambda =\dfrac{\alpha -a}{\alpha +a}\); since \(\alpha >0\) and \(a>0\) (A being symmetric positive definite), obviously \(\vert \lambda \vert <1\).
Case 2: \(c>0\)
Now, when \(\alpha>a>c\), we get that
while when \(a>\alpha >c\), we obtain that
and when \(\alpha<c<a\), we have
Case 3: \(c<0\)
In this case, we have the same results as in Case 2.
According to Cases 1, 2 and 3, the proof is completed. \(\square \)
4 Numerical experiments
In this section, three examples are given to illustrate the feasibility and efficiency of the FPI-PSS method proposed in this work. To this end, we compare the FPI-PSS method with the FPI method [9], the FPI-SS method (4) and the two new fixed point iteration methods [1] in terms of the number of iteration steps (denoted "IT"), the elapsed CPU time in seconds (denoted "CPU"), and the relative residual error (denoted "RES"), which is defined by
$$ \textrm{RES}=\frac{\Vert Ax^{(k)}-\vert x^{(k)}\vert -b\Vert _{2}}{\Vert b\Vert _{2}}. $$
In our implementation, we choose \(P=I+H\) with \(H=\frac{A+A^T}{2}\), all initial guess vectors \(x^{(0)}\) and \(y^{(0)}\) are set to zero vectors, and all iterations are terminated if RES \(\le 10^{-6}\) or the number of iteration steps exceeds \(k_{\max }=500\). All computations are performed in MATLAB R2022b on a personal computer with a 1.80 GHz central processing unit (Intel(R) Core(TM) i7-8550U) and 8 GB memory.
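The experimental setup (preconditioner and stopping quantity) can be expressed directly; the helper names below are illustrative, and NumPy stands in for the MATLAB environment actually used.

```python
import numpy as np

def relative_residual(A, x, b):
    """RES = ||A x - |x| - b||_2 / ||b||_2, the stopping quantity above."""
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)

def preconditioner(A):
    """P = I + H with H = (A + A^T)/2, the choice used in the experiments."""
    n = A.shape[0]
    return np.eye(n) + 0.5 * (A + A.T)
```

By construction P is symmetric, and it is positive definite whenever the symmetric part H has eigenvalues greater than -1.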
Example 1
Let the coefficient matrix \(A\in {\mathbb {R}}^{n\times n}\) of AVE (1) be defined by \(A={\widehat{A}}+\mu I\in {\mathbb {R}}^{n\times n}\), where
is a block-tridiagonal matrix,
is a tridiagonal matrix, \(n=m^2\). Let \(x^*=(-0.5,-1,-0.5,\ldots ,-0.5,-1,\ldots )^T\in {\mathbb {R}}^n\) be the exact solution of the AVE (1).
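The displayed matrices of this example are not recoverable here, so the sketch below is a hypothetical reconstruction: it assumes the choice common in the AVE literature, \(S=\textrm{tridiag}(-1,4,-1)\) and \({\widehat{A}}=\textrm{tridiag}(-I,S,-I)\), which may differ from the paper's actual blocks. The right-hand side is built so that the stated vector \(x^*\) is the exact solution.

```python
import numpy as np

def example_matrix(m, mu):
    """Hypothetical Example-1-style matrix: A = Ahat + mu*I of order n = m^2,
    with S = tridiag(-1, 4, -1) and Ahat = tridiag(-I, S, -I) (assumed blocks)."""
    I = np.eye(m)
    S = 4 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    Ahat = (np.kron(np.eye(m), S)
            - np.kron(np.eye(m, k=1), I)
            - np.kron(np.eye(m, k=-1), I))
    return Ahat + mu * np.eye(m * m)

def rhs_for_exact_solution(A, xs):
    """Choose b so that xs solves Ax - |x| = b."""
    return A @ xs - np.abs(xs)
```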
For different problem scales \(n=m^{2}\), the optimal experimental parameters, IT, CPU and RES of the FPI, FPI-SS and FPI-PSS methods for Example 1 are listed in Tables 1 and 2 for \(\mu =1\) and \(\mu =4\), respectively. The optimal parameters were determined experimentally as the values yielding the least number of iteration steps for each method.
From Tables 1 and 2, we can see that each of the tested methods successfully converges to the exact solution of AVE (1), and the number of iteration steps decreases as \(\mu \) increases. Among all tested iteration methods, the FPI-PSS method is the most efficient, as it requires the fewest iteration steps and the least computation time to reach the termination criterion.
Example 2
Let the coefficient matrix \(A\in {\mathbb {R}}^{n\times n}\) of AVE (1) be defined by \(A={\widehat{A}}+\mu I\in {\mathbb {R}}^{n\times n}\), where
is a block-tridiagonal matrix,
is a tridiagonal matrix, \(n=m^2\). Let \(x^*=(-0.5,-1,-0.5,\ldots ,-0.5,-1,\ldots )^T\in {\mathbb {R}}^n\) be the exact solution of the AVE (1).
In Tables 3 and 4, we report the numerical results for Example 2 with \(\mu =1\) and \(\mu =4\), respectively. Notably, the FPI-PSS method requires fewer iteration steps and less computing time than the FPI method and the FPI-SS method.
Example 3
[1] Let the coefficient matrix \(A\in {\mathbb {R}}^{n\times n}\) of AVE (1) be defined by \(A={\widehat{A}}+\mu I\in {\mathbb {R}}^{n\times n}\), where
is a block-tridiagonal matrix,
is a tridiagonal matrix, \(n=m^2\). Let \(x^*=(-1,1,-1,\ldots ,-1,1,\ldots )^T\in {\mathbb {R}}^n\) be the exact solution of the AVE (1).
Table 5 presents the numerical results of the FPI-PSS method and the two methods from [1], where Method I uses parameter 1 and Method II uses parameter 0.97; see [1] for more details. From Table 5, we can see that the number of iteration steps for Method I is the same as for the FPI-PSS method, but the FPI-PSS method requires less time than the other two methods to reach the termination criterion. Thus, the proposed FPI-PSS method is more effective and feasible for solving the AVE (1).
At the end of this section, we give the following remark. From the numerical results of this section, we can see that the numerically optimal parameter \(\omega \) of the FPI-PSS method is \(\omega _{opt}=1\) in all three tested examples with different problem scales. If \(\omega =1\), the iterative scheme of the FPI-PSS method becomes
$$ x^{(k+1)}=(\alpha P+A)^{-1}\left[ (\alpha P-A)x^{(k)}+2(\vert x^{(k)}\vert +b)\right] , $$
which is an inexact Picard method for solving the AVE (1).
5 Conclusions
In this paper, we propose an inexact fixed point iteration method, termed the FPI-PSS method, to solve the absolute value equation. The FPI-PSS method is constructed by combining the preconditioned shift-splitting iteration method with the fixed point iteration method. Some convergence conditions for the FPI-PSS method are given. In addition, three examples show that the FPI-PSS method is superior to the other compared methods in terms of iteration steps and computing time. However, how to choose the optimal parameters in the FPI-PSS method needs further study.
Data availability
Data sharing is not applicable to this article as no datasets were generated or analyzed in this study.
References
Ali, R., Pan, K.-J.: Two new fixed point iteration schemes for absolute value equations. Jpn. J. Ind. Appl. Math. 40, 303–314 (2023)
Bai, Z.-Z., Yin, J.-F., Su, Y.-F.: A shift-splitting preconditioner for non-Hermitian positive definite matrices. J. Comput. Math. 24, 539–552 (2006)
Cottle, R.W., Pang, J.-S., Stone, R.E.: The Linear Complementarity Problem. Academic Press (1992)
Cui, L.-B., Hu, Q.: A chord-Zhang neural network model for solving absolute value equations. Pac. J. Optim. 18, 77–89 (2022)
Dou, Y., Yang, A.-L., Wu, Y.-J.: A new Uzawa-type iteration method for non-Hermitian saddle-point problems. East Asian J. Appl. Math. 7, 211–226 (2017)
Guo, P., Wu, S.-L., Li, C.-X.: On the SOR-like iteration method for solving absolute value equations. Appl. Math. Lett. 97, 107–113 (2019)
Hladik, M., Moosaei, H.: Some notes on the solvability conditions for absolute value equations. Optim. Lett. 17, 211–218 (2023)
Ke, Y.-F., Ma, C.-F.: SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 311, 195–202 (2017)
Ke, Y.-F.: The new iteration algorithm for absolute value equation. Appl. Math. Lett. 99, 105990 (2020)
Ketabchi, S., Moosaei, H.: An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side. Comput. Math. Appl. 64, 1882–1885 (2012)
Li, X., Li, Y.-X., Dou, Y.: Shift-splitting fixed point iteration method for solving generalized absolute value equations. Numer. Algorithms 93, 695–710 (2023)
Mangasarian, O.L.: Absolute value programming. Comput. Optim. Appl. 36, 43–53 (2007)
Mansoori, A., Eshaghnezhad, M., Effati, S.: An efficient neural network model for solving the absolute value equations. IEEE Trans. Circuits Syst. II Express Briefs 65, 391–395 (2018)
Mangasarian, O.L.: Absolute value equation solution via concave minimization. Optim. Lett. 1, 3–8 (2007)
Noor, M.A., Iqbal, J., Noor, K.I., Al-Said, E.: On an iterative method for solving absolute value equations. Optim. Lett. 6, 1027–1033 (2012)
Rohn, J.: A theorem of the alternatives for the equation \(Ax+B\vert x\vert =b\). Linear Multilinear Algebra 52, 421–426 (2004)
Salkuyeh, D.K.: The Picard-HSS iteration method for absolute value equations. Optim. Lett. 8, 2191–2202 (2014)
Wu, S.-L., Guo, P.: Modulus-based matrix splitting algorithms for the quasi-complementarity problems. Appl. Numer. Math. 132, 127–137 (2018)
Wu, S.-L., Li, C.-X.: The unique solution of the absolute value equations. Appl. Math. Lett. 76, 195–200 (2018)
Wang, A., Cao, Y., Chen, J.-X.: Modified Newton-type iteration methods for generalized absolute value equations. J. Optim. Theory Appl. 181, 216–230 (2019)
Yu, D.-M., Chen, C.-R., Han, D.-R.: A modified fixed point iteration method for solving the system of absolute value equations. Optimization 71, 449–461 (2022)
Young, D.M.: Iterative Solution of Large Linear Systems. Academic Press (1971)
Acknowledgements
The authors thank the editor and the anonymous referees for their constructive suggestions and helpful comments, which greatly improved the quality of this paper. The first author is supported by the Excellent Postgraduate Innovation Star Scientific Research Project of Gansu Province (No. 2023CXZX-327).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Lv, XM., Miao, SX. An inexact fixed point iteration method for solving absolute value equation. Japan J. Indust. Appl. Math. 41, 1137–1148 (2024). https://doi.org/10.1007/s13160-023-00641-3