Abstract
Using the shift-splitting strategy, we propose a shift-splitting fixed point iteration (FPI-SS) method for solving large sparse generalized absolute value equations (GAVEs). The FPI-SS method is based on reformulating the GAVE as a two-by-two block nonlinear equation. Several types of convergence conditions for the FPI-SS method are presented under suitable restrictions. Through numerical experiments, we demonstrate that the FPI-SS method is superior to the fixed point iteration method and the SOR-like iteration method in computational efficiency.
1 Introduction
The generalized absolute value equation (GAVE) is formulated as
$$Ax-B|x |=b, \qquad (1)$$
where \(A, \ B\in \mathbb {R}^{n\times n}\) are given large sparse matrices, \(b\in \mathbb {R}^{n}\), and \(|x |=(|x_{1}|, \ldots , |x_{n}|)^{\text {T}}\in \mathbb {R}^{n}\) denotes the componentwise absolute value of the unknown \(x\in \mathbb {R}^{n}\). If \(B = I\), where I stands for the identity matrix of suitable dimension, the GAVE (1) simplifies to the following absolute value equation (AVE)
$$Ax-|x |=b. \qquad (2)$$
GAVEs arise in various scientific and engineering fields and have found numerous applications since they were first introduced by Rohn [1]. Among the many important applications, a well-known example is the linear complementarity problem (LCP) [2,3,4,5,6]. Besides LCPs, many other optimization problems can be transformed into the GAVE (1), including linear programming and convex quadratic programming [1, 7].
Due to the existence of the nonlinear term \(B|x |\), the GAVE (1) can be regarded as a weakly nonlinear system
$$Ax=B|x |+b. \qquad (3)$$
For solving the general weakly nonlinear systems
$$Ax=G(x), \qquad (4)$$
where the nonlinear function \(G:\mathbb {R}^{n} \rightarrow \mathbb {R}^{n}\) is B-differentiable, through the two-stage splitting \(A=E-F\) and \(E=M-N\), Bai for the first time introduced and studied the following two-stage iterative method [8]:
$$Mx^{(k, l+1)}=Nx^{(k, l)}+Fx^{(k)}+G(x^{(k)}), \quad l=0,1,\ldots ,l_{k}-1, \qquad (5)$$
with \(x^{(k, 0)}:=x^{(k)}\) and \(x^{(k+1)}:=x^{(k, l_{k})}\). See also [9,10,11] for related methods. It is noted that the two-stage iterative method provides a general framework of matrix splitting iteration methods for solving the weakly nonlinear systems (4). For the GAVE (1), i.e., the case when \(G(x)=B|x |+b\), the two-stage iterative method includes a series of existing matrix splitting iteration methods [12,13,14,15,16] as its special cases. For example, when \(E=A\), \(F=0\), \(M=E\), \(N=0\), and \(l_{k} \equiv 1\), the two-stage iterative method reduces to the well-known Picard iteration method [12]
$$x^{(k+1)}=A^{-1}\left( B|x^{(k)}|+b\right) , \quad k=0,1,2, \ldots . \qquad (6)$$
Recently, by reformulating the AVE (2) as a two-by-two block nonlinear equation, Ke et al. proposed an SOR-like iteration method [17] for solving the AVE (2). This method was also analyzed in [18]. The SOR-like iteration method has received wide attention in recent years. Using a similar technique, other SOR-like-based methods [19,20,21] have been presented to solve the AVE (2). In order to further improve computational efficiency, Ke proposed an efficient fixed point iteration (FPI) method [22] to solve the AVE (2), which can be described as
Algorithm 1
(The FPI Method for AVE). Let \(A\in \mathbb {R}^{n\times n}\) be a nonsingular matrix and \(b\in \mathbb {R}^{n}\). Given the initial vectors \(x^{(0)}, y^{(0)}\in \mathbb {R}^{n}\), compute \((x^{(k+1)},y^{(k+1)})\) for \(k=0,1,2, \ldots\) using the following iteration scheme until \(\{(x^{(k)},y^{(k)})\}_{k=0}^{+\infty }\) satisfies the stopping criterion:
$$\begin{cases} x^{(k+1)}=A^{-1}\left( y^{(k)}+b\right) , \\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega |x^{(k+1)}|, \end{cases}$$
where \(\omega\) is a positive constant.
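For concreteness, the FPI iteration can be sketched in a few lines of Python. This is an illustration, not the authors' implementation: we assume the scheme reads \(x^{(k+1)}=A^{-1}(y^{(k)}+b)\) followed by \(y^{(k+1)}=(1-\omega )y^{(k)}+\omega |x^{(k+1)}|\), and the small test AVE with known solution \(x^{*}=(1,-2)^{\mathrm {T}}\) is our own choice.

```python
import numpy as np

def fpi_ave(A, b, omega=1.0, tol=1e-10, max_iter=500):
    """Sketch of the FPI method for the AVE  Ax - |x| = b:
    the x-update solves A x = y + b, the y-update relaxes |x|."""
    n = len(b)
    x, y = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        x = np.linalg.solve(A, y + b)            # x^(k+1) = A^{-1}(y^(k) + b)
        y = (1 - omega) * y + omega * np.abs(x)  # y^(k+1)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            break
    return x

# Illustrative 2x2 AVE constructed so that x* = (1, -2)^T is the solution.
A = np.array([[4.0, 1.0], [0.0, 3.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
x = fpi_ave(A, b, omega=1.0)
```

Here \(\Vert A^{-1}\Vert <1\), so the map \(x\mapsto A^{-1}(|x |+b)\) is a contraction and the iterates converge to \(x^{*}\).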
Note that the FPI method reduces to the Picard iteration method for \(\omega = 1\). Owing to the simplicity and effectiveness of the FPI method for solving the AVE (2), Yu et al. developed a modified FPI (MFPI) method [23], which is a generalized version of the FPI method.
Clearly, at each step of the FPI method, a linear system \(Au = f\) needs to be solved. Since A is typically large and sparse, a computationally efficient way is to use matrix splitting iteration methods to obtain an approximate solution of this linear system. For solving non-Hermitian positive definite linear systems, Bai et al. first proposed the shift-splitting (SS) iteration method [24]. Motivated by its promising performance, the SS method was extended to many linear systems with special structure, such as saddle point problems [25], block \(3\times 3\) saddle point problems [26], and time-harmonic eddy current problems [27]. In this paper, using the shift-splitting [24] of the coefficient matrix A, we propose a shift-splitting fixed point iteration (FPI-SS) method for solving the GAVE (1). Compared with the FPI method, the coefficient matrix of the first sub-iteration scheme of our method is more diagonally dominant. Our numerical experiments show that the FPI-SS method is more efficient than the FPI method and the SOR-like iteration method.
In what follows, some notations used in this work are described. For \(x\in \mathbb {R}^{n}\), \(x_{i}\) stands for the ith entry of the vector x for all \(i = 1, 2, \ldots , n\). \(\mathrm {sgn}(x)\in \mathbb {R}^{n}\) denotes a vector with components equal to 1, 0, or \(-1\) depending on whether the corresponding component of the vector x is positive, zero, or negative, respectively. Let \(\mathrm {diag}(x)\in \mathbb {R}^{n\times n}\) represent a diagonal matrix with \(x_{i}\) as its ith diagonal entry for \(i = 1, 2, \ldots , n\). For a matrix \(M\in \mathbb {R}^{n\times n}\), \(\Vert M\Vert\) denotes the spectral norm defined by \(\Vert M\Vert :=\mathrm {max}\{\Vert Mx\Vert :x\in \mathbb {R}^{n}, \Vert x\Vert =1\}\), where \(\Vert x\Vert\) is the 2-norm.
The organization of the remaining parts is the following. In Section 2, we present a brief introduction of the FPI method and establish the FPI-SS method for solving the GAVE (1). In Section 3, the convergence theories for the FPI-SS method are presented in detail. In Section 4, we give two numerical examples to verify the effectiveness of our method. Finally, conclusions are given in Section 5.
2 The shift-splitting fixed point iteration (FPI-SS) method
Let \(y=|x |\); then the GAVE (1) is equivalent to
$$Ax-By=b, \quad y=|x |,$$
which can be reformulated as the following two-by-two block nonlinear equation
$$\begin{pmatrix} A & -B \\ -H(x) & I \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}, \qquad (9)$$
where \(H(x)=\mathrm {diag}(\mathrm {sgn}(x))\), so that \(|x |=H(x)x\).
If A is a nonsingular matrix, (9) yields the following fixed point equation
$$\begin{cases} x=A^{-1}(By+b), \\ y=(1-\omega )y+\omega |x |, \end{cases}$$
where the relaxation parameter \(\omega >0\).
Then, we can obtain the following fixed point iteration (FPI) method for the GAVE (1).
Algorithm 2
(The FPI Method for GAVE). Let \(A\in \mathbb {R}^{n\times n}\) be nonsingular, \(B\in \mathbb {R}^{n\times n}\) and \(b\in \mathbb {R}^{n}\). Given the initial vectors \(x^{(0)}, y^{(0)}\in \mathbb {R}^{n}\), compute \((x^{(k+1)},y^{(k+1)})\) for \(k=0,1,2, \ldots\) using the following iteration scheme until \(\{(x^{(k)},y^{(k)})\}_{k=0}^{+\infty }\) satisfies the stopping criterion:
$$\begin{cases} x^{(k+1)}=A^{-1}\left( By^{(k)}+b\right) , \\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega |x^{(k+1)}|, \end{cases}$$
where \(\omega\) is a positive constant.
It is evident that Algorithm 2 reduces to Algorithm 1 when we take \(B=I\). Similarly, if we set \(\omega = 1\) in Algorithm 2, the Picard iteration method (6) for solving the GAVE (1) can be obtained. Since the convergence analyses of Algorithm 2 are analogous to those of Algorithm 1 discussed in detail in [22], we do not give them here.
Importantly, by employing the following shift-splitting of the matrix A [24]
$$A=\frac{1}{2}(\alpha I+A)-\frac{1}{2}(\alpha I-A), \qquad (11)$$
where the parameter \(\alpha\) is a positive constant and the matrix \(\alpha I+A\) is invertible, we get the following fixed point equation from (9)
$$\begin{cases} x=(\alpha I+A)^{-1}\left[ (\alpha I-A)x+2(By+b)\right] , \\ y=(1-\omega )y+\omega |x |, \end{cases} \qquad (12)$$
which leads to the following FPI-SS method for the GAVE (1).
Algorithm 3
(The FPI-SS Method for GAVE). Let \(A, \ B \in \mathbb {R}^{n\times n}\) and \(b\in \mathbb {R}^{n}\). Let \(\alpha\) be a positive constant such that \(\alpha I+A\in \mathbb {R}^{n\times n}\) is nonsingular. Given the initial vectors \(x^{(0)}, y^{(0)}\in \mathbb {R}^{n}\), compute \((x^{(k+1)},y^{(k+1)})\) for \(k=0,1,2, \ldots\) using the following iteration scheme until \(\{(x^{(k)},y^{(k)})\}_{k=0}^{+\infty }\) satisfies the stopping criterion:
$$\begin{cases} x^{(k+1)}=(\alpha I+A)^{-1}\left[ (\alpha I-A)x^{(k)}+2\left( By^{(k)}+b\right) \right] , \\ y^{(k+1)}=(1-\omega )y^{(k)}+\omega |x^{(k+1)}|, \end{cases} \qquad (13)$$
where \(\omega\) is a positive constant.
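A minimal Python sketch of the FPI-SS method follows. It assumes the scheme obtained from the shift-splitting of [24], namely \(x^{(k+1)}=(\alpha I+A)^{-1}[(\alpha I-A)x^{(k)}+2(By^{(k)}+b)]\) followed by the relaxed update of y; the small test GAVE with known solution \(x^{*}=(1,-2)^{\mathrm {T}}\) is our own construction.

```python
import numpy as np

def fpi_ss(A, B, b, alpha, omega=1.0, tol=1e-10, max_iter=500):
    """Sketch of the FPI-SS method for the GAVE  Ax - B|x| = b,
    based on the splitting A = (alpha*I + A)/2 - (alpha*I - A)/2."""
    n = len(b)
    I = np.eye(n)
    x, y = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        # (alpha*I + A) x^(k+1) = (alpha*I - A) x^(k) + 2 (B y^(k) + b)
        x = np.linalg.solve(alpha * I + A,
                            (alpha * I - A) @ x + 2.0 * (B @ y + b))
        y = (1 - omega) * y + omega * np.abs(x)
        if np.linalg.norm(A @ x - B @ np.abs(x) - b) <= tol * np.linalg.norm(b):
            break
    return x

# Illustrative 2x2 GAVE constructed so that x* = (1, -2)^T is the solution.
A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = 0.5 * np.eye(2)
x_star = np.array([1.0, -2.0])
b = A @ x_star - B @ np.abs(x_star)
x = fpi_ss(A, B, b, alpha=4.0, omega=1.0)
```

For this data the iteration map is a contraction, so the iterates converge to \(x^{*}\); the fixed point indeed satisfies \(2Ax^{*}=2(B|x^{*}|+b)\), i.e., the GAVE itself.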
Remark 1
If the matrix A is positive semi-definite, the condition that \(\alpha I+A\) is nonsingular naturally holds. Even if the matrix A is singular, we can always find a sufficiently large parameter \(\alpha\) to ensure that \(\alpha I+A\) is nonsingular. Therefore, the FPI-SS method has a broader range of application than the FPI method. In addition, owing to the positive scalar matrix \(\alpha I\), the matrix \(\alpha I+A\) is expected to be strictly diagonally dominant and better conditioned than the matrix A. Thus, our FPI-SS method may have better computing efficiency than the FPI method.
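The conditioning claim in Remark 1 is easy to illustrate numerically. The following Python sketch uses the one-dimensional Laplacian as an illustrative symmetric positive definite matrix (our choice, not from the paper) and compares \(\mathrm {cond}(A)\) with \(\mathrm {cond}(\alpha I+A)\):

```python
import numpy as np

# Tridiagonal 1-D Laplacian: symmetric positive definite but ill-conditioned.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

alpha = 1.0
cond_A = np.linalg.cond(A)
cond_shifted = np.linalg.cond(alpha * np.eye(n) + A)
# For SPD A: cond(alpha*I + A) = (alpha + l_max)/(alpha + l_min)
#            < l_max/l_min = cond(A),  for any alpha > 0.
```

For this matrix, \(\mathrm {cond}(A)\) exceeds 1000, while \(\mathrm {cond}(I+A)\) is only about 5.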
3 Convergence of the FPI-SS method
We first give some lemmas that will be used in convergence analysis of the FPI-SS method for solving the GAVE (1).
Lemma 1
[28,29,30] For any vectors \(x\in \mathbb {R}^{n}\) and \(y\in \mathbb {R}^{n}\), the following results hold:
(1) \(\Vert |x |-|y |\Vert \le \Vert x-y\Vert\);
(2) if \(0 \le x \le y\), then \(\Vert x\Vert _{p}\le \Vert y\Vert _{p}\), with \(\Vert \cdot \Vert _{p}\) standing for the p-norm of a vector;
(3) if \(x \le y\) and P is a nonnegative matrix, then \(P x \le P y\).
Lemma 2
[28, 29] For any matrices \(A, B\in \mathbb {R}^{n \times n}\), if \(0 \le A \le B\), then \(\Vert A\Vert _{p}\le \Vert B\Vert _{p}\), with \(\Vert \cdot \Vert _{p}\) standing for the p-norm of a matrix.
Lemma 3
[28, 31] Both roots of the real quadratic equation \(x^{2}-ax+b=0\) are less than one in modulus if and only if \(|b |<1\) and \(|a |<1+b\).
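Lemma 3 can be spot-checked numerically. The following Python sketch compares the criterion \(|b |<1\) and \(|a |<1+b\) against the moduli of the roots computed by NumPy, on a grid of (a, b) values chosen to stay away from the boundary cases where the strict inequalities become equalities:

```python
import numpy as np

# Lemma 3: both roots of x^2 - a*x + b = 0 lie strictly inside the unit
# circle iff |b| < 1 and |a| < 1 + b.
def roots_inside(a, b):
    return np.max(np.abs(np.roots([1.0, -a, b]))) < 1.0

def criterion(a, b):
    return abs(b) < 1.0 and abs(a) < 1.0 + b

samples = [(a, b) for a in np.linspace(-2.4, 2.4, 17)   # step 0.3
                  for b in np.linspace(-1.2, 1.2, 9)]   # step 0.3
agree = all(roots_inside(a, b) == criterion(a, b) for a, b in samples)
```

On this grid the two predicates agree at every sample point.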
In the remainder of this section, we assume that the GAVE (1) has a unique solution. Let \((x^{*},y^{*})\) be the solution pair of (12) and \((x^{(k)},y^{(k)})\) be generated by the FPI-SS iteration (13). The iteration errors are denoted by
Then, we can get the following convergence theorem by estimating the above two iteration errors.
Theorem 1
Let \(A, B\in \mathbb {R}^{n\times n}\) and \(b\in \mathbb {R}^{n}\). Let \(\alpha\) be a positive constant such that \(\alpha I+A\in \mathbb {R}^{n\times n}\) is nonsingular. Denote
and
Then, we have
where \(\Vert \cdot \Vert _{\infty }\) denotes the \(\infty\)-norm of vector or matrix and
Furthermore, \(\Vert L(\alpha ,\omega )\Vert _{\infty }<1\) if and only if parameters \(\alpha\) and \(\omega\) satisfy
i.e., if the conditions (15) hold, the iteration sequence \(\{x^{(k)}\}_{k=0}^{+\infty }\) generated by the FPI-SS iteration converges to the unique solution \(x^{*}\) of the GAVE (1) for any initial vector.
Proof
Subtracting (13) from (12), we get
According to (16), we can obtain
From (17) and Lemma 1, we have
Rearranging (18) and (19), we find
Let
Multiplying (20) from left by the nonnegative matrix P and according to Lemma 1, we have
which can be rewritten as
Taking the \(\infty\)-norm on both sides of inequality (22) and according to (2) of Lemma 1, the estimate (14) is obtained. Since
we have
From (14), we deduce that
Hence if the conditions (15) are satisfied, then we have \(\lim \limits _{k \rightarrow \infty }\Vert E^{(k)}\Vert _{\infty }=0.\)
As
it follows that
which means that the iteration sequence \(\{(x^{(k)},y^{(k)})\}_{k=0}^{+\infty }\) converges to \((x^{*},y^{*})\) under the conditions (15). This proves the theorem.
Using a different error estimate by a new weighted norm, we can obtain another convergence theorem as follows.
Theorem 2
Let the assumptions of Theorem 1 hold, \(\delta\), \(\beta\), and \(\gamma\) be defined as in Theorem 1. Denote
Then, we have
where
Furthermore, \(\Vert T(\alpha ,\omega )\Vert <1\) if and only if parameters \(\alpha\) and \(\omega\) satisfy
and
i.e., if the conditions (24)–(25) hold, the iteration sequence \(\{x^{(k)}\}_{k=0}^{+\infty }\) generated by the FPI-SS iteration converges to the unique solution \(x^{*}\) of the GAVE (1) for any initial vector.
Proof
Denote
According to Lemma 1, we multiply (21) from the left by the matrix D to obtain
which can be rewritten as
From the above, it follows that (23) holds.
Let \(\lambda\) be an eigenvalue of the matrix \(Q:=T(\alpha ,\omega )^{\mathrm {T}}T(\alpha ,\omega )\). Since
we get
and
Thus, \(\lambda\) is the root of the following real quadratic equation
From Lemma 3, it follows that \(\Vert T(\alpha ,\omega )\Vert <1\) if and only if
and
From (23), we conclude that
Hence, we have \(\lim \limits _{k \rightarrow \infty }\Vert E_{\omega }^{(k)}\Vert =0\) when the conditions (24)–(25) are satisfied.
From the definition
we get
which means that the iteration sequence \(\{(x^{(k)},y^{(k)})\}_{k=0}^{+\infty }\) converges to \((x^{*},y^{*})\) under the conditions (24)–(25). This completes the proof.
Theorem 2 shows that in order to obtain the convergence of the FPI-SS method, we need to find conditions under which \(\Vert T(\alpha ,\omega )\Vert <1\) holds. Here, we give convergence conditions that are simpler than those in Theorem 2.
Corollary 1
Let the assumptions of Theorem 1 hold, \(\delta\), \(\beta\), and \(\gamma\) be defined as in Theorem 1, \(E_{\omega }^{(k+1)}\) and \(T(\alpha ,\omega )\) be defined as in Theorem 2. If
and
then \(\Vert T(\alpha ,\omega )\Vert <1\), i.e., the FPI-SS method is convergent when the conditions (27)–(28) hold.
Proof
Let \(\eta =\mathrm {max}\{\delta , ~\omega \beta , ~\gamma \}\). Then we can get
where
From Lemma 2, we obtain
Let \(\theta =\displaystyle \frac{3-\sqrt{5}}{2}\). Hence, we have \(\Vert T(\alpha ,\omega )\Vert <1\) if \(\eta <\theta\). Then,
Therefore, if the conditions (27)–(28) are satisfied, the iteration sequence \(\{(x^{(k)},y^{(k)})\}_{k=0}^{+\infty }\) is convergent to \((x^{*},y^{*})\).
4 Numerical experiments
In this section, two examples from LCPs are presented to show the feasibility and effectiveness of the FPI-SS method. We compare the FPI-SS method with the FPI method [22] and the SOR-like iteration method [17, 18] in terms of the number of iteration steps (denoted as “IT”), elapsed CPU time in seconds (denoted as “CPU”), and the relative residual error (denoted as “RES”), which is defined by
$$\mathrm {RES}:=\frac{\Vert Ax^{(k)}-B|x^{(k)}|-b\Vert }{\Vert b\Vert }.$$
In our implementation, all initial guess vectors \(x^{(0)}\) and \(y^{(0)}\) are chosen to be zero vectors, and all iterations are terminated once \(\mathrm {RES}\le 10^{-6}\) or the number of iteration steps exceeds \(k_{\mathrm {max}}=500\). All computations are performed in MATLAB R2018b on a personal computer with a 2.40 GHz central processing unit (Intel(R) Core(TM) i5-6200U) and 8 GB memory.
Consider the following LCP(q, M) [2]: find two real vectors \(z, w\in \mathbb {R}^{n}\) such that
where \(M\in \mathbb {R}^{n\times n}\) and \(q\in \mathbb {R}^{n}\) are given. From [3,4,5,6], the LCP(q, M) (29) can be formulated as the following GAVE:
$$(M+I)x-(M-I)|x |=q, \qquad (30)$$
with \(x=\frac{1}{2}(w-z)\), i.e., \(A=M+I\), \(B=M-I\), and \(b=q\) in (1).
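As a concrete check of this transformation, the following Python sketch builds a small LCP(q, M), forms the GAVE data under the reading \(x=\frac{1}{2}(w-z)\) (so that \(z=|x |-x\), \(w=|x |+x\), and hence \(A=M+I\), \(B=M-I\), \(b=q\); this explicit form is our assumption based on [4, 5]), and verifies that the induced \(x^{*}\) satisfies the GAVE:

```python
import numpy as np

# A small LCP(q, M): find z >= 0, w = M z + q >= 0 with z^T w = 0.
M = np.array([[2.0, 1.0], [0.0, 2.0]])
z_star = np.array([1.0, 0.0])   # chosen solution component
w_star = np.array([0.0, 3.0])   # complementary to z_star: z_i * w_i = 0
q = w_star - M @ z_star         # then (z_star, w_star) solves LCP(q, M)

# GAVE data read off the transformation x = (w - z)/2:
A = M + np.eye(2)
B = M - np.eye(2)
b = q
x_star = 0.5 * (w_star - z_star)

residual = np.linalg.norm(A @ x_star - B @ np.abs(x_star) - b)
```

The residual vanishes, confirming that the LCP solution maps to a GAVE solution under this transformation.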
Example 1
([5, 6]) The matrix \(M\in \mathbb {R}^{n \times n}\) is defined by \(M=\widehat{M}+\mu I\in \mathbb {R}^{n \times n}\) and \(q\in \mathbb {R}^{n}\) is defined by \(q=-Mz^{*}\), where
is a block-tridiagonal matrix,
is a tridiagonal matrix, \(n=m^{2}\), and \(z^{*}=(1,2,1,2,\ldots ,1,2,\ldots )^{\mathrm {T}}\in \mathbb {R}^{n}\) is the unique solution of the LCP(q, M) (29). It can be derived that \(x^{*}=(-0.5,-1,-0.5,-1,\ldots ,-0.5,-1,\ldots )^{\mathrm {T}}\in \mathbb {R}^{n}\) is the exact solution after formulating the LCP(q, M) (29) as the GAVE (30).
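The construction above can be reproduced numerically. Since the displayed definitions of \(\widehat{M}\) and S are not reproduced here, the following Python sketch assumes the block-tridiagonal test matrix \(\widehat{M}=\mathrm {Tridiag}(-I, S, -I)\) with \(S=\mathrm {tridiag}(-1, 4, -1)\) commonly used in [5, 6], and verifies that \(x^{*}=-\frac{1}{2}z^{*}\) exactly satisfies the resulting GAVE (30) with \(A=M+I\), \(B=M-I\), and \(b=q\):

```python
import numpy as np

# Assumed Example 1 data: M_hat = Tridiag(-I, S, -I), S = tridiag(-1, 4, -1).
m = 8
n = m * m
S = 4.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
I_m = np.eye(m)
M_hat = (np.kron(np.eye(m), S)
         - np.kron(np.eye(m, k=1), I_m)
         - np.kron(np.eye(m, k=-1), I_m))

mu = 4.0
M = M_hat + mu * np.eye(n)
z_star = np.tile([1.0, 2.0], n // 2)   # z* = (1, 2, 1, 2, ...)^T
q = -M @ z_star                        # so that w* = M z* + q = 0

# GAVE (30): A = M + I, B = M - I, b = q, with exact solution x* = -z*/2.
A, B, b = M + np.eye(n), M - np.eye(n), q
x_star = -0.5 * z_star

residual = np.linalg.norm(A @ x_star - B @ np.abs(x_star) - b)
```

Note that the identity \((M+I)(-\frac{z^{*}}{2})-(M-I)\frac{z^{*}}{2}=-Mz^{*}=q\) holds for any M, so the check is independent of the assumed form of \(\widehat{M}\).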
For various problem sizes n, the experimentally optimal parameters, the iteration steps, the CPU times, and the relative residual errors of the three methods in the cases \(\mu =1\) and \(\mu =4\) are listed in Tables 1 and 2, respectively.
We find that each tested method converges to the exact solution and that the number of iteration steps becomes smaller as \(\mu\) increases. Notably, among these methods, the FPI-SS method requires the fewest iteration steps and costs the least computing time.
Example 2
([5]) Consider the LCP(q, M) (29). The matrix \(M\in \mathbb {R}^{n \times n}\) is defined by \(M=\widehat{M}+\mu I\in \mathbb {R}^{n \times n}\) and \(q\in \mathbb {R}^{n}\) is defined by \(q=-Mz^{*}\), where
is a block-tridiagonal matrix,
is a tridiagonal matrix, \(n=m^{2}\), and \(z^{*}=(1,2,1,2,\ldots ,1,2,\ldots )^{\mathrm {T}}\in \mathbb {R}^{n}\) is the unique solution of the LCP(q, M) (29). It can be derived that \(x^{*}=(-0.5,-1,-0.5,-1,\ldots ,-0.5,-1,\ldots )^{\mathrm {T}}\in \mathbb {R}^{n}\) is the exact solution after formulating the LCP(q, M) (29) as the GAVE (30).
In Tables 3 and 4, we list the numerical results of the three methods obtained with the experimentally optimal parameters in the cases \(\mu =1\) and \(\mu =4\), respectively. From these results, we draw the same conclusions as in Example 1.
5 Conclusion
In this paper, by combining the shift-splitting of the coefficient matrix with the fixed point iteration (FPI) method, we proposed a shift-splitting fixed point iteration (FPI-SS) method to solve the generalized absolute value equation (GAVE). We have given several different types of convergence conditions of the FPI-SS method by introducing two different norms of the iteration error. Furthermore, using two numerical examples from linear complementarity problems, we have demonstrated that the FPI-SS method outperforms the FPI method and the SOR-like iteration method in terms of iteration steps and computing times.
Finally, we should mention that the FPI-SS method can be seen as an inexact version of the FPI method. If we replace the shift-splitting in the FPI-SS algorithm with other matrix splittings, such as SOR-based splittings [28, 29, 31] and HSS-based splittings [28, 32,33,34], we can establish a series of inexact FPI methods which may have similar convergence results. In real applications of inexact FPI algorithms, how to choose the optimal (or quasi-optimal) parameters is an interesting and practical topic, which is left as future work.
Data availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
References
Rohn, J.: A theorem of the alternatives for the equation \(Ax+B|x|=b\). Linear Multilinear Algebra 52(6), 421–426 (2004)
Cottle, R.W., Pang, J.-S., Stone, R.E.: The Linear Complementarity Problem. Academic Press (1992)
Schäfer, U.: On the modulus algorithm for the linear complementarity problem. Oper. Res. Lett. 32(4), 350–354 (2004)
Mangasarian, O.L., Meyer, R.R.: Absolute value equations. Linear Algebra Appl. 419(2–3), 359–367 (2006)
Bai, Z.-Z.: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17(6), 917–933 (2010)
Bai, Z.-Z., Zhang, L.-L.: Modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 20(3), 425–439 (2013)
Mangasarian, O.L.: Absolute value programming. Comput. Optim. Appl. 36(1), 43–53 (2007)
Bai, Z.-Z.: A class of two-stage iterative methods for systems of weakly nonlinear equations. Numer. Algorithms 14(4), 295–319 (1997)
Bai, Z.-Z.: Parallel multisplitting two-stage iterative methods for large sparse systems of weakly nonlinear equations. Numer. Algorithms 15(3–4), 347–372 (1997)
Bai, Z.-Z., Migallón, V., Penadés, J., Szyld, D.B.: Block and asynchronous two-stage methods for mildly nonlinear systems. Numer. Math. 82(1), 1–20 (1999)
Bai, Z.-Z., Yang, X.: On HSS-based iteration methods for weakly nonlinear systems. Appl. Numer. Math. 59(12), 2923–2936 (2009)
Rohn, J., Hooshyarbakhsh, V., Farhadsefat, R.: An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim. Lett. 8(1), 35–44 (2014)
Wang, A., Cao, Y., Chen, J.-X.: Modified Newton-type iteration methods for generalized absolute value equations. J. Optim. Theory Appl. 181(1), 216–230 (2019)
Zhou, H.-Y., Wu, S.-L., Li, C.-X.: Newton-based matrix splitting method for generalized absolute value equation. J. Comput. Appl. Math. 394, 113578–15 (2021)
Li, C.-X., Wu, S.-L.: A shift splitting iteration method for generalized absolute value equations. Comput. Methods Appl. Math. 21(4), 863–872 (2021)
Salkuyeh, D.K.: The Picard-HSS iteration method for absolute value equations. Optim. Lett. 8(8), 2191–2202 (2014)
Ke, Y.-F., Ma, C.-F.: SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 311, 195–202 (2017)
Guo, P., Wu, S.-L., Li, C.-X.: On the SOR-like iteration method for solving absolute value equations. Appl. Math. Lett. 97, 107–113 (2019)
Huang, B., Li, W.: A modified SOR-like method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 400, 113745–20 (2022)
Dong, X., Shao, X.-H., Shen, H.-L.: A new SOR-like method for solving absolute value equations. Appl. Numer. Math. 156, 410–421 (2020)
Zhang, J.-L., Zhang, G.-F., Liang, Z.-Z.: A modified generalized SOR-like method for solving an absolute value equation. Linear Multilinear Algebra (2022). https://doi.org/10.1080/03081087.2022.2066614
Ke, Y.-F.: The new iteration algorithm for absolute value equation. Appl. Math. Lett. 99, 105990–7 (2020)
Yu, D., Chen, C., Han, D.: A modified fixed point iteration method for solving the system of absolute value equations. Optimization 71(3), 449–461 (2022)
Bai, Z.-Z., Yin, J.-F., Su, Y.-F.: A shift-splitting preconditioner for non-Hermitian positive definite matrices. J. Comput. Math. 24(4), 539–552 (2006)
Cao, Y., Du, J., Niu, Q.: Shift-splitting preconditioners for saddle point problems. J. Comput. Appl. Math. 272, 239–250 (2014)
Cao, Y.: Shift-splitting preconditioners for a class of block three-by-three saddle point problems. Appl. Math. Lett. 96, 40–46 (2019)
Cao, Y.: A general class of shift-splitting preconditioners for non-Hermitian saddle point problems with applications to time-harmonic eddy current models. Comput. Math. Appl. 77(4), 1124–1143 (2019)
Bai, Z.-Z., Pan, J.-Y.: Matrix Analysis and Computations. SIAM (2021)
Varga, R.S.: Matrix Iterative Analysis. Springer (2000)
Bai, Z.-Z., Evans, D.J.: Matrix multisplitting relaxation methods for linear complementarity problems. Int. J. Comput. Math. 63(3–4), 309–326 (1997)
Young, D.M.: Iterative Solution of Large Linear Systems. Academic Press (1971)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24(3), 603–626 (2003)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations. Numer. Linear Algebra Appl. 14(4), 319–335 (2007)
Bai, Z.-Z., Golub, G.H., Lu, L.-Z., Yin, J.-F.: Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. 26(3), 844–863 (2005)
Acknowledgements
The authors would like to thank the two referees for their constructive suggestions which greatly improve the presentation of the paper. This research was funded by the Natural Science Foundation of Gansu Province (No. 20JR5RA464) and the National Natural Science Foundation of China (Nos. 11501272 and 11901267).
Funding
The research of Xu Li was supported by the Natural Science Foundation of Gansu Province (No. 20JR5RA464) and the National Natural Science Foundation of China (No. 11501272). The research of Yi-Xin Li was supported by the Natural Science Foundation of Gansu Province (No. 20JR5RA464). The research of Yan Dou was supported by the National Natural Science Foundation of China (No. 11901267).
Author information
Contributions
Xu Li and Yi-Xin Li wrote the main manuscript text and Yi-Xin Li performed the numerical experiments. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethics approval and consent to participate
Not applicable.
Consent for publication
The authors agree to publication of the article in English by Springer in Springer’s corresponding English-language journal.
Human and animal ethics
Not applicable.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Cite this article
Li, X., Li, YX. & Dou, Y. Shift-splitting fixed point iteration method for solving generalized absolute value equations. Numer Algor 93, 695–710 (2023). https://doi.org/10.1007/s11075-022-01435-3