Abstract
Absolute value equations (AVEs) arise in many engineering, management science, and operations research problems. This paper proposes two new iterative schemes for solving \(Ax-{|x|}= b\), where A is an M-matrix. These methods depend on a splitting of the coefficient matrix. Convergence conditions for both methods are given, and numerical examples demonstrate that the iterative schemes are valid and efficient. The results are encouraging and may stimulate further study in this direction.
1 Introduction
In recent years, AVEs have been recognized as an NP-hard, non-differentiable problem class that arises in several other mathematical problems, including linear programming, circuit simulation, contact problems, bimatrix games, quadratic programming, and journal bearings [1, 9, 14, 18, 26].
The AVE consists of determining an \(x \in {\mathbb {R}}^{k}\) such that
$$Ax-|x|=b, \qquad (1)$$
where \(A \in {\mathbb {R}}^{k\times k}\) is an M-matrix, \(b \in {\mathbb {R}}^{k}\), and \({|x|=(|x_{1}|, |x_{2}|,\ldots ,|x_{k}|)^{T} }\). Moreover, Eq. (1) is a special case of the general AVE
$$Ax-B|x|=b, \qquad (2)$$
where \(B \in {\mathbb {R}}^{k\times k}\); this form was introduced in [26].
The AVE can also be equivalently reformulated as a mixed-integer program [25, 30] or a linear complementarity problem (LCP) [5, 7]. Take the LCP \((\tilde{M},b)\) as an example: it requires determining a vector \(z \in {\mathbb {R}}^{k}\) such that
$$z\ge 0,\qquad \tilde{M}z+b\ge 0,\qquad z^{T}(\tilde{M}z+b)=0, \qquad (3)$$
where \(\tilde{M} \in {\mathbb {R}}^{k\times k}\) and \(b \in {\mathbb {R}}^{k}\). Equation (3) can be written as the AVE
$$Ax-B|x|=b \qquad (4)$$
with
$$z=|x|-x,$$
where \(B=(\tilde{M}-I)\) and \(A=(\tilde{M}+I)\). Abdallah et al. [3] described the AVE system as an LCP and solved it using a smoothing method. Mezzadri [21] established the equivalence between AVEs and horizontal LCPs. In addition, the unique-solvability conditions of the AVE, as well as its connection to the LCP, have been analyzed by Hu and Huang [14].
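The LCP-to-AVE link above can be checked numerically. The sketch below is illustrative only (the sample matrix and the substitution \(z=|x|-x\) follow the reconstruction given here, not code from the paper): for any solution x of \((\tilde{M}+I)x-(\tilde{M}-I)|x|=b\), the vector \(z=|x|-x\) satisfies the complementarity conditions, since \(\tilde{M}z+b=|x|+x\).

```python
import numpy as np

# Illustrative check of the LCP <-> AVE link: with A = M~ + I and
# B = M~ - I, a solution x of Ax - B|x| = b yields z = |x| - x solving
# the LCP(M~, b):  z >= 0,  M~ z + b >= 0,  z^T (M~ z + b) = 0.
rng = np.random.default_rng(0)
k = 6
M = 4 * np.eye(k) - np.abs(rng.uniform(0, 0.3, (k, k)))  # sample matrix (assumption)
A, B = M + np.eye(k), M - np.eye(k)

x = rng.uniform(-1, 1, k)          # pick a solution, then construct b from it
b = A @ x - B @ np.abs(x)

z = np.abs(x) - x                  # candidate LCP solution
w = M @ z + b                      # equals |x| + x by the identity above

assert np.all(z >= -1e-12) and np.all(w >= -1e-12)
assert abs(z @ w) < 1e-10
print("LCP complementarity verified")
```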
In recent years, the problem of solving AVEs has received considerable attention and has been extensively discussed in the literature. Ke [15] presented a new iterative algorithm for AVE (1) and established its convergence theory under suitable conditions. In [31], Zhang and Wei investigated a generalized Newton technique for (1) and established finite and global convergence when the interval matrix \([A-I, A+I]\) is regular. Gu et al. [10] proposed nonlinear Picard-CSCS and Picard-like techniques for (1) based on Toeplitz matrices. Chen et al. [8] described an optimal-parameter SOR-like iterative technique for AVE (1) and established its convergence properties under appropriate conditions. Nguyen et al. [24] demonstrated that system (1) can be solved using unified smoothing functions associated with a second-order cone. Using the shift splitting procedure, Wu and Li [28] investigated a shift splitting iterative method (SSM) for system (1). Caccetta et al. [6] introduced a smoothing Newton strategy for the AVE system and showed that the technique is globally convergent when \(\Vert A^{-1}\Vert <1\); see [12, 13, 19, 20, 32, 33] for more details.
Recently, Miao and Zhang [22], Li et al. [17], and Mao et al. [23] introduced different techniques for solving LCPs based on the fixed point principle. In this work, we apply this idea to the system of AVEs and propose effective iterative procedures for solving system (1). The contributions of this article are as follows: we split the matrix A into three parts (diagonal, strictly lower triangular, and strictly upper triangular matrices) and combine them with the fixed point formula to derive the new iterative methods. Moreover, we establish convergence results for the newly formulated methods under different conditions.
The paper is organized as follows. Section 2 describes the proposed methods, along with their convergence analysis for solving system (1). Sections 3 and 4 present numerical tests and concluding remarks, respectively.
2 Proposed iterative methods
Here, we present the two new iterative approaches for solving system (1). We begin by recalling some notation, a lemma, and the definition of an M-matrix.
Suppose \(A=(a_{ij})\in {\mathbb {R}}^{k\times k}\); we write \(A\ge 0\) if \(a_{ij}\ge 0\) for all \(1\le i, j\le k\). In addition, we denote the spectral radius and the absolute value of A by \(\rho (A)\) and \({|A|=(|a_{ij}|)}\), respectively.
Lemma 2.1
[29] Suppose x and \(z \in {\mathbb {R}}^{k}\), then \(|x- z|\ge ||x|-|z||\).
Definition 2.2
[23] The matrix A is known as
1. a Z-matrix if all of its off-diagonal elements are non-positive;
2. an M-matrix if \(A^{-1}\ge 0\) and A is a Z-matrix.
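Definition 2.2 can be checked numerically. The sketch below (an illustrative helper, not from the paper) verifies the Z-matrix sign pattern and the entrywise non-negativity of the inverse; the test matrix tridiag(-1, 4, -1) reappears later in Example 3.2.

```python
import numpy as np

def is_Z_matrix(A, tol=1e-12):
    """Off-diagonal entries must be non-positive."""
    off = A - np.diag(np.diag(A))
    return bool(np.all(off <= tol))

def is_M_matrix(A, tol=1e-12):
    """Z-matrix with an entrywise non-negative inverse (Definition 2.2)."""
    if not is_Z_matrix(A, tol):
        return False
    try:
        return bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:
        return False  # singular matrices cannot be M-matrices

# tridiag(-1, 4, -1) is diagonally dominant with the Z-sign pattern
k = 8
A = 4 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)
print(is_M_matrix(A))   # True
```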
To present and analyze the new iterative methods, let the matrix A be split into the following two parts:
with
where \({D}_{A}\), \({L}_{A}\), \({U}_{A}\) are respectively the diagonal, strictly lower triangular and strictly upper triangular parts of A, and \(U^{T}_{A}\) denotes the transpose of \(U_{A}\). Based on [4], the AVE (1) corresponds to the following fixed point problem:
where
Here, \(\lambda >0\) and \(E \in {\mathbb {R}}^{k\times k}\) is a diagonal matrix consisting of positive diagonal components (see [2, 4]). Based on (5), we introduce the following two iterative schemes for solving the AVEs:
Method I:
Method II:
Here \(0< \lambda \le 1\). Next, we examine the convergence of the above two iterative algorithms.
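Before turning to the convergence analysis, the following sketch illustrates the general shape of such a splitting-based, \(\lambda\)-damped fixed-point step. It is not the paper's Method I verbatim: the splitting \(M_A=-\mathrm{tril}(A,-1)\), \(N_A=A+M_A\), and the choice \(E=D_A^{-1}\) are assumptions chosen only to be consistent with the error recursion in Theorem 2.1 (where \(M_A\) is strictly lower triangular).

```python
import numpy as np

def method1_like(A, b, lam=1.0, tol=1e-10, max_iter=500):
    """Sketch of a lambda-damped, Gauss-Seidel-type fixed-point step
    of the form analyzed in Theorem 2.1 (assumed form:
    (I - lam*E*M_A) x_{m+1} = (I - lam*E*N_A) x_m + lam*E*(|x_m| + b),
    with M_A = -tril(A,-1), N_A = A + M_A, E = D_A^{-1} -- all assumptions)."""
    k = A.shape[0]
    I = np.eye(k)
    M_A = -np.tril(A, -1)          # strictly lower; non-negative for an M-matrix
    N_A = A + M_A                  # upper triangular part incl. diagonal
    E = np.diag(1.0 / np.diag(A))  # assumed scaling matrix
    L = I - lam * E @ M_A          # unit lower triangular, hence invertible
    x = np.zeros(k)
    for _ in range(max_iter):
        rhs = (I - lam * E @ N_A) @ x + lam * E @ (np.abs(x) + b)
        x = np.linalg.solve(L, rhs)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol:
            break
    return x

# AVE with A = tridiag(-1, 4, -1) and known solution x* = ((-1)^w)
k = 10
A = 4 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)
x_star = np.array([(-1.0) ** w for w in range(1, k + 1)])
b = A @ x_star - np.abs(x_star)
x = method1_like(A, b)
print(np.linalg.norm(A @ x - np.abs(x) - b))   # small residual after convergence
```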
Theorem 2.1
Let \(A= N_{A}-M_{A}\) be non-singular and assume \(\rho (R^{-1}{\bar{S}})<1\) where \({R=I-\lambda E|M_{A}|}\) and \({\bar{S}}= \lambda E+|I-\lambda E N_{A}|\), then for any initial guess \(x^{0} \in {\mathbb {R}}^{k}\), the sequence \(\{x^{m}\}_{m=0}^\infty\) produced by Method I converges to the unique solution of the AVE (1).
Proof
From Eq. (6), we have
By subtracting (8) from (6), we get
By considering the absolute values on each side and applying Lemma 2.1, we obtain
which can be rewritten as
Since \(M_{A}\) is a strictly lower triangular matrix and E is a diagonal matrix with positive diagonal components, \(I- \lambda E | M_{A} |\) is a lower triangular matrix with all diagonal entries equal to one. Hence, \(I- \lambda E | M_{A} |\) is invertible. It follows from Eq. (9) that
where
In this case, the matrix \(R^{-1}{\bar{S}}\) is non-negative. According to Theorem 2.1 in [2, 4], when \(\rho (R^{-1}{\bar{S}})<1,\) the sequence \(\{x^{m}\}_{m=0}^\infty\) generated by Method I converges to the solution \({\bar{x}}\) of Eq. (1).
To establish uniqueness, suppose \({\bar{z}}\) is another solution of the AVE. Repeating the argument above with \({\bar{z}}\) in place of \(x^{m}\) gives
$$R\,|{\bar{x}}-{\bar{z}}| \le {\bar{S}}\,|{\bar{x}}-{\bar{z}}|, \quad \text{i.e.,}\quad |{\bar{x}}-{\bar{z}}| \le R^{-1}{\bar{S}}\,|{\bar{x}}-{\bar{z}}|.$$
Since \(\rho (R^{-1}{\bar{S}})<1\) and \(R^{-1}{\bar{S}}\ge 0\), the matrix \(I-R^{-1}{\bar{S}}\) is a non-singular M-matrix, so \((I-R^{-1}{\bar{S}})^{-1}\ge 0\) and therefore \(|{\bar{x}}-{\bar{z}}|\le 0\). Hence \({\bar{x}}={\bar{z}}\), and the proof is complete.
Theorem 2.2
If \(\rho (G^{-1} J)<1\), where \(G= I-D_{A}^{-1}| M_{A}|\) and \(J= I+\lambda E-|\lambda EA+D_{A}^{-1} M_{A}|\), then for any initial guess \(x^{0}\in {\mathbb {R}}^{k}\), the sequence \(\{x^{m}\}_{m=0}^\infty\) generated by Method II converges to the unique solution of the AVE (1).
Proof
From Eq. (7), we have
By subtracting (10) from (7), we have
Taking absolute values on both sides and applying Lemma 2.1, we obtain
which can be rewritten as
So,
Evidently, according to Theorem 2.1 in [2, 4], if \(\rho (G^{-1}J)<1\), the iterative sequence \(\lbrace x^{m}\rbrace _{m=0}^{\infty }\) generated by Method II converges to the solution of the AVE (1).
The uniqueness proof is the same as in Theorem 2.1 and is omitted here.
3 Numerical experiments
In this section, we present three examples that demonstrate the performance of the proposed techniques from three standpoints:

- the number of iterations (denoted Iter);
- the computation time in seconds (denoted Time);
- the residual vectors (denoted RES).
The stopping criterion is as follows:
In addition, all tests are executed in Matlab (R2016a) on an Intel Core i5 CPU @ 1.80 GHz with 4 GB of RAM, and the null vector is used as the initial guess in Example 3.1.
Numerical investigations are conducted in order to verify the convergence conditions \(\rho (R^{-1}{\bar{S}}) <1\) and \(\rho (G^{-1}J)<1\). These outcomes are outlined in Table 1.
As shown in Table 1, we numerically studied the convergence requirements for both theorems. Our analysis clearly demonstrates that both strategies satisfy these conditions. In addition, we perform the following tests to determine the effectiveness of the new techniques.
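The two spectral-radius conditions can be evaluated directly from the theorem statements. The sketch below is illustrative: since the splitting and the matrix E are not fixed here, it assumes \(M_A=-\mathrm{tril}(A,-1)\), \(N_A=A+M_A\), \(E=D_A^{-1}\), and \(\lambda=1\), and checks the conditions for the tridiag(-1, 4, -1) matrix of Example 3.2.

```python
import numpy as np

def spectral_radius(T):
    return np.max(np.abs(np.linalg.eigvals(T)))

k = 8
A = 4 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)   # tridiag(-1, 4, -1)
I = np.eye(k)
lam = 1.0
M_A = -np.tril(A, -1)            # assumed splitting: strictly lower part
N_A = A + M_A
D_inv = np.diag(1.0 / np.diag(A))
E = D_inv                        # assumed choice of E

# Theorem 2.1 condition: rho(R^{-1} S_bar) < 1
R = I - lam * E @ np.abs(M_A)
S_bar = lam * E + np.abs(I - lam * E @ N_A)
rho1 = spectral_radius(np.linalg.solve(R, S_bar))

# Theorem 2.2 condition: rho(G^{-1} J) < 1
G = I - D_inv @ np.abs(M_A)
J = I + lam * E - np.abs(lam * E @ A + D_inv @ M_A)
rho2 = spectral_radius(np.linalg.solve(G, J))

print(rho1 < 1, rho2 < 1)
```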
Example 3.1
Suppose \(A=M+\Psi I\in {\mathbb {R}}^{k\times k}\) and \(Ax^{\star }-| x^{\star } |=b \in {\mathbb {R}}^{k}\) with
\(M=tridiag (-1.5I,S,-0.5I) \in {\mathbb {R}}^{k\times k}\), \(x^{\star }=((-1)^{w}, w=1,2,\ldots ,k)^{T} \in {\mathbb {R}}^{k}\),
where \(S=tridiag(-1.5,8,-0.5) \in {\mathbb {R}}^{\zeta \times \zeta }\), \(I \in {\mathbb {R}}^{\zeta \times \zeta }\) represents the identity or unit matrix, and \(k=\zeta ^{2}\). The results are explained in Table 2, while the graphic expressions are illustrated in Figs. 1 and 2.
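The block-tridiagonal matrix of Example 3.1 can be assembled with Kronecker products. The sketch below is illustrative: \(\Psi\) is a free parameter in the example, so \(\Psi=1\) is an arbitrary placeholder, and the Picard scheme from the Appendix is used only as a sanity check of the construction.

```python
import numpy as np

def example_31(zeta, psi=1.0):
    """Build A = M + psi*I from Example 3.1 with k = zeta^2
    (psi = 1 is an arbitrary illustrative choice)."""
    I = np.eye(zeta)
    S = 8 * I - 1.5 * np.eye(zeta, k=-1) - 0.5 * np.eye(zeta, k=1)
    # M = tridiag(-1.5 I, S, -0.5 I) as a block tridiagonal matrix
    M = (np.kron(I, S)
         - 1.5 * np.kron(np.eye(zeta, k=-1), I)
         - 0.5 * np.kron(np.eye(zeta, k=1), I))
    A = M + psi * np.eye(zeta * zeta)
    x_star = np.array([(-1.0) ** w for w in range(1, zeta * zeta + 1)])
    b = A @ x_star - np.abs(x_star)
    return A, b, x_star

A, b, x_star = example_31(8)

# sanity check via the Picard scheme of the Appendix: x_{m+1} = A^{-1}(|x_m| + b);
# A is strongly diagonally dominant here, so the iteration contracts
x = np.zeros_like(b)
Ainv = np.linalg.inv(A)
for _ in range(100):
    x = Ainv @ (np.abs(x) + b)
print(np.linalg.norm(x - x_star) < 1e-8)   # True
```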
Table 2 presents iterative results illustrating the performance of the proposed approaches. We find that both approaches converge faster as the value of \(\lambda\) increases. The curves in Figs. 1 and 2 illustrate the performance of the recommended techniques for various values of \(\lambda\).
Moreover, we extend Example 3.1 and compare the newly designed methods with the method presented in [16] (denoted SD), the scheme of [4] (denoted RA), and the technique of [11] (denoted MG). The results are reported in Table 3.
Based on Table 3, all tested techniques can efficiently solve system (1). We observe that the iteration counts (Iter) and computation times (Time) of the presented techniques are lower than those of the other techniques.
Example 3.2
Suppose \(A=tridiag (-1,4,-1) \in {\mathbb {R}}^{k\times k}\), \(x^{\star }=((-1)^{w}, w=1,2,\ldots ,k)^{T} \in {\mathbb {R}}^{k}\),
and \(b=Ax^{\star }-| x^{\star } | \in {\mathbb {R}}^{k}\). In Examples 3.2 and 3.3, we use the same stopping criterion and initial guess as in [28]. Furthermore, we compare the newly designed methods with the process given in [15] (denoted NM), the approach of [8] (denoted SOR), and the special shift splitting iterative method of [28] (denoted SSM). Table 4 provides the outcomes of the analysis.
Based on Table 4, all techniques can efficiently and accurately solve problem (1). Our methods (Method I and Method II) outperform the NM, SOR, and SSM approaches in terms of iterations (Iter) and computation time (Time).
Example 3.3
Suppose the matrix \(A \in {\mathbb {R}}^{k\times k}\) is presented by
where \(S=tridiag(-1,\theta ,-1) \in {\mathbb {R}}^{ \zeta \times \zeta }\), \(I \in {\mathbb {R}}^{\zeta \times \zeta }\) represents the unit matrix, and \(k=\zeta ^{2}\). Compute \(b=Ax^{\star }-| x^{\star } | \in {\mathbb {R}}^{k}\) using \(x^{\star }=(x_{1},x_{2},x_{3},\ldots , x_{k})^{T}\in {\mathbb {R}}^{k}\) with \(x_{w}=(-1)^{w}\), \(w=1,2,\ldots ,k\). The outcomes are listed in Table 5.
Based on Table 5, all tested techniques can quickly determine the solution of system (1). We observe that the iteration counts (Iter) and computation times (Time) of the presented techniques are lower than those of the other approaches. Overall, we conclude that the proposed techniques are feasible and effective for AVEs.
4 Conclusions
We have presented two new iterative methods for computing AVEs and shown that these procedures converge to the solution of the AVE (1) for a suitable choice of the parameter \(\lambda\). The theoretical analysis and numerical investigations demonstrate that the proposed approaches are promising for solving AVEs.
We have successfully developed two new iterative approaches to solve Eq. (1) when the coefficient matrix A is an M-matrix. The next issue to be addressed is the extension to more general classes of coefficient matrices.
References
Amin, M., Erfanian, M.: A dynamic model to solve the absolute value equations. J. Comput. Appl. Math. 333, 28–35 (2018)
Ahn, B.H.: Solution of nonsymmetric linear complementarity problems by iterative methods. J. Optim. Theory Appl. 33, 185–197 (1981)
Abdallah, L., Haddou, M., Migot, T.: Solving absolute value equation using complementarity and smoothing functions. J. Comput. Appl. Math. 327, 196–207 (2018)
Ali, R., Pan, K.: The new iteration methods for solving absolute value equations. Appl. Math. 20, 1–14 (2021)
Bai, Z.Z.: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17, 917–933 (2010)
Caccetta, L., Qu, B., Zhou, G.L.: A globally and quadratically convergent method for absolute value equations. Comput. Optim. Appl. 48, 45–58 (2011)
Cottle, R.W., Pang, J.S., Stone, R.E.: The Linear Complementarity Problem. SIAM, Philadelphia (2009)
Chen, C., Yu, D., Han, D.: Optimal parameter for the SOR-like iteration method for solving the system of absolute value equations. J. Appl. Anal. Comput. (2020). arXiv:2001.05781
Dehghan, M., Shirilord, A.: Matrix multisplitting Picard-iterative method for solving generalized absolute value matrix equation. Appl. Numer. Math. 158, 425–438 (2020)
Gu, X.M., Huang, T.Z., Li, H.B., Wang, S.F., Li, L.: Two-CSCS based iteration methods for solving absolute value equations. J. Appl. Math. Comput. 7, 1336–1356 (2017)
Guo, P., Wu, S.L.: A modified Gauss–Seidel iteration method for solving absolute value equations. In: Song H., Jiang D. (eds) Simulation Tools and Techniques. SIMUtools 2020 (2021)
Hashemi, F., Ketabchi, S.: Numerical comparisons of smoothing functions for optimal correction of an infeasible system of absolute value equations. Numer. Algebra Control. Optim. 10, 13–21 (2020)
Hashemi, F., Ketabchi, S.: Optimal correction of infeasible equations system as \(Ax + B | x | = b\) using \(l_{p}\)-norm regularization. Bol. Soc. Paran. Mat. 20, 1–16 (2019)
Hu, S.L., Huang, Z.H.: A note on absolute value equations. Optim. Lett. 4, 417–424 (2010)
Ke, Y.F.: The new iteration algorithm for absolute value equation. Appl. Math. Lett. 99, 1–7 (2020)
Ke, Y.-F., Ma, C.-F.: SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 311, 195–202 (2017)
Li, S.-G., Jiang, H., Cheng, L.-Z., Liao, X.-K.: IGAOR and multisplitting IGAOR methods for linear complementarity problems. J. Comput. Appl. Math. 235, 2904–2912 (2011)
Moosaei, H., Ketabchi, S., Jafari, H.: Minimum norm solution of the absolute value equations via simulated annealing algorithm. Afr. Math. 26, 1221–1228 (2015)
Moosaei, H., Ketabchi, S.: Optimal correcting of absolute value equations by using smoothing techniques. J. Interdiscip. Math. 22, 531–538 (2019)
Mangasarian, O.L.: A generalized Newton method for absolute value equations. Optim. Lett. 3, 101–108 (2009)
Mezzadri, F.: On the solution of general absolute value equations. Appl. Math. Lett. 99, 1–7 (2020)
Miao, S.-X., Zhang, D.: On the preconditioned GAOR method for a linear complementarity problem with an M-matrix. J. Inequal. Appl. 2018, 195 (2018)
Mao, X., Wang, X.W., Edalatpanah, S.A., Fallah, M.: The monomial preconditioned SSOR method for linear complementarity problem. IEEE Access 7, 73649–73655 (2019)
Nguyen, C.T., Saheyab, B., Changa, Y.L., Chen, J.S.: Unified smoothing functions for absolute value equation associated with second-order cone. Appl. Numer. Math. 135, 206–227 (2019)
Prokopyev, O.: On equivalent reformulations for absolute value equations. Comput. Optim. Appl. 44, 363–372 (2009)
Rohn, J.: A theorem of the alternatives for the equation \(Ax + B |x|= b\). Linear Multilinear Algebra 52, 421–426 (2004)
Rohn, J., Hooshyarbakhsh, V., Farhadsefat, R.: An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim. Lett. 8, 35–44 (2014)
Wu, S.L., Li, C.X.: A special shift splitting iteration method for absolute value equation. AIMS. Math. 5, 5171–5183 (2020)
Varga, R.S.: Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs (1962)
Yong, L.: Mixed integer linear programming method for absolute value equations. In: Proceedings of the 2009 International Symposium on Information Processing, pp 316–318 (2009)
Zhang, C., Wei, Q.J.: Global and finite convergence of a generalized Newton method for absolute value equations. J. Optim. Theory Appl. 143, 391–403 (2009)
Zhang, M., Huang, Z.-H., Li, Y.-F.: The sparsest solution to the system of absolute value equations. J. Oper. Res. Soc. China 3(1), 31–51 (2015)
Zainali, N., Lotfi, T.: On developing a stable and quadratic convergent method for solving absolute value equation. J. Comput. Appl. Math. 330, 742–747 (2018)
Acknowledgements
The authors would like to thank the editor and the anonymous reviewers for their valuable comments.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
This Appendix demonstrates how to execute the proposed iterative methods. Method I for the AVE:
Method II for the AVE:
Both iterative schemes contain \(x^{m+1}\) inside the absolute value on the right-hand side, where it is unknown. Based on \(Ax-|x|=b,\) we obtain
$$x=A^{-1}(|x|+b).$$
Thus, \(x^{m+1}\) can be approximated as follows:
$$x^{m+1}\approx A^{-1}(|x^{m}|+b).$$
This approximation is known as the Picard scheme [27]. The next step is to state the algorithm for Method I. Algorithm for Method I:
Method II follows a similar procedure.
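The Picard substitution described above can be sketched as follows. This is an illustrative helper (function names are ours): the substitution approximates the implicit \(x^{m+1}\) on the right-hand side, and, when \(\Vert A^{-1}\Vert <1\), iterating the substitution alone already solves the AVE.

```python
import numpy as np

def picard_substitute(A, b, x_m):
    """One Picard step (Rohn et al. [27]) used to approximate the
    unknown x_{m+1} appearing inside |.|:  x_{m+1} ~ A^{-1}(|x_m| + b)."""
    return np.linalg.solve(A, np.abs(x_m) + b)

def picard_solver(A, b, tol=1e-10, max_iter=200):
    """Iterate the substitution until successive iterates agree."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = picard_substitute(A, b, x)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# AVE with A = tridiag(-1, 4, -1) and known solution x* = ((-1)^w)
k = 10
A = 4 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)
x_star = np.array([(-1.0) ** w for w in range(1, k + 1)])
b = A @ x_star - np.abs(x_star)
x = picard_solver(A, b)
print(np.linalg.norm(A @ x - np.abs(x) - b))   # small residual
```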
Ali, R., Pan, K. Two new fixed point iterative schemes for absolute value equations. Japan J. Indust. Appl. Math. 40, 303–314 (2023). https://doi.org/10.1007/s13160-022-00526-x