Abstract
In this paper, we extend the \(\mathrm{SOR}\)-like iteration method to a new generalized absolute value equation and establish its convergence properties. Moreover, the optimal parameter of the \(\mathrm{SOR}\)-like iteration is obtained. Numerical experiments show that the proposed method is reliable and feasible.
1. Introduction
In this paper, we focus on the numerical solution of the new generalized absolute value equation (NGAVE)
\[Ax-|Bx|=b,\qquad(1.1)\]
where \(A,B\in\mathbb R^{n \times n}\), \(b\in\mathbb R^n\), and \(|x|\) denotes the vector whose components are the absolute values of the components of the vector \(x\). Obviously, when \(B=I\), the NGAVE (1.1) reduces to the standard absolute value equation (AVE), see [1]:
\[Ax-|x|=b.\qquad(1.2)\]
The AVE (1.2) has many applications in the field of optimization, such as the linear complementarity problem, linear programming, convex quadratic programming, and bimatrix games. By now, the existence and uniqueness of the solution of the AVE have been studied comprehensively. For example, Mangasarian [1] proved that the AVE (1.2) has a unique solution if the smallest singular value of \(A\) exceeds \(1\). In [2], it was established that the AVE (1.2) has a unique solution if and only if \(A+(I-2D)\) is nonsingular, where \(D=\operatorname{diag}(d_i)\), \(d_i\in[-1,1]\). Wu [3] gave some necessary and sufficient conditions for the unique solvability of the NGAVE (1.1). For solving the AVE (1.2), many algorithms have been proposed, such as the Newton method and its variants in [4] and [5], the NINA method of Noor, Iqbal, and Noor in [6], and the \(\mathrm{SOR}\)-like iteration method in [7].
To the best of our knowledge, no numerical method for solving the NGAVE (1.1) has been proposed so far, and this is our motivation for writing this paper. In this paper, we extend the \(\mathrm{SOR}\)-like iterative method to solve the NGAVE (1.1) and discuss the convergence conditions of the method. Finally, the efficiency of the proposed method is verified by numerical experiments.
2. Preliminaries
In this section, we give some symbol descriptions and lemmas to help the discussion later.
We introduce the following notation:
• \(I\) is the identity matrix.
• \(\rho(\,\cdot\,)\) is the spectral radius.
• \(\|\cdot\|\) is the 2-norm.
• \(\sigma(\,\cdot\,)\) is a singular value.
• \(\operatorname{diag}(x)\) is the diagonal matrix whose diagonal elements are the components of the vector \(x\).
Lemma 2.1 [3].
Assume that \(A\) and \(B\) are nonsingular and \(\sigma_{\min}(AB^{-1})>1\). Then the NGAVE (1.1) has a unique solution for any \(b\in\mathbb R^n\), where \(\sigma_{\min}\) denotes the smallest singular value.
Lemma 2.2 [8].
For the real quadratic equation \(x^2-bx+c=0\), where \(b,c\in\mathbb R\), both roots are less than one in absolute value if and only if \(|c|<1\) and \(|b|<1+c\).
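As a quick numerical illustration of Lemma 2.2 (a sketch for the reader, not part of the original argument), one can sample random coefficients and compare the moduli of the roots with the stated criterion:

```python
import numpy as np

# Illustrative check of Lemma 2.2: for x^2 - b*x + c = 0, both roots lie
# strictly inside the unit circle iff |c| < 1 and |b| < 1 + c.
rng = np.random.default_rng(0)
for _ in range(1000):
    b_coef, c_coef = rng.uniform(-3.0, 3.0, size=2)
    roots = np.roots([1.0, -b_coef, c_coef])
    inside = bool(np.all(np.abs(roots) < 1.0))
    criterion = (abs(c_coef) < 1.0) and (abs(b_coef) < 1.0 + c_coef)
    assert inside == criterion
```

With continuous random sampling, coefficients falling exactly on the boundary of the criterion (where a root has modulus one) occur with probability zero, so the equivalence holds for every sampled pair.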
3. \(\mathrm{SOR}\)-Like Iteration Method
First, let \(y=|Bx|\). Then we find that the NGAVE (1.1) is equivalent to the system
\[\begin{cases}Ax-y=b,\\ -|Bx|+y=0.\end{cases}\qquad(3.1)\]
Since \(|Bx|=D(x)x\), where \(D(x):=\operatorname{diag}(\operatorname{sgn}(Bx))B\), we can rewrite (3.1) as
\[\begin{pmatrix}A&-I\\-D(x)&I\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}b\\0\end{pmatrix}.\qquad(3.2)\]
Let \(\overline A=D-L-U\) be the splitting of the coefficient matrix of (3.2), where
\[D=\begin{pmatrix}A&0\\0&I\end{pmatrix},\qquad L=\begin{pmatrix}0&0\\D(x)&0\end{pmatrix},\qquad U=\begin{pmatrix}0&I\\0&0\end{pmatrix}.\qquad(3.3)\]
Thus, applying the \(\mathrm{SOR}\) splitting to (3.2), we obtain
\[(D-\omega L)\begin{pmatrix}x^{k+1}\\y^{k+1}\end{pmatrix}=\bigl[(1-\omega)D+\omega U\bigr]\begin{pmatrix}x^{k}\\y^{k}\end{pmatrix}+\omega\begin{pmatrix}b\\0\end{pmatrix},\qquad(3.4)\]
where \(\omega>0\). Equation (3.4) can be expanded as follows:
\[\begin{cases}Ax^{k+1}=(1-\omega)Ax^{k}+\omega(y^{k}+b),\\ y^{k+1}=(1-\omega)y^{k}+\omega|Bx^{k+1}|.\end{cases}\qquad(3.5)\]
Based on (3.5), we can easily obtain the \(\mathrm{SOR}\)-like method, which is described below:
\[\begin{cases}x^{k+1}=(1-\omega)x^{k}+\omega A^{-1}(y^{k}+b),\\ y^{k+1}=(1-\omega)y^{k}+\omega|Bx^{k+1}|,\end{cases}\qquad(3.6)\]
where \(k=0,1,2,\dots\).
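A minimal Python sketch of the two update formulas of the \(\mathrm{SOR}\)-like method (the function name and the demo matrices are illustrative, not from the paper; an explicit inverse is used only because the demo is small):

```python
import numpy as np

def sor_like_ngave(A, B, b, omega=1.0, tol=1e-10, max_iter=500):
    """Sketch of the SOR-like iteration for Ax - |Bx| = b:
        x^{k+1} = (1-w) x^k + w A^{-1}(y^k + b),
        y^{k+1} = (1-w) y^k + w |B x^{k+1}|.
    """
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    A_inv = np.linalg.inv(A)  # fine for a small demo; factorize A in practice
    for k in range(max_iter):
        x = (1.0 - omega) * x + omega * A_inv @ (y + b)
        y = (1.0 - omega) * y + omega * np.abs(B @ x)
        # relative residual of the NGAVE at the current iterate
        res = np.linalg.norm(A @ x - np.abs(B @ x) - b) / np.linalg.norm(b)
        if res <= tol:
            break
    return x, k + 1

# Demo: b is chosen so that x* = (1, -1)^T solves Ax - |Bx| = b; here
# sigma_min(A B^{-1}) = 3 > 1, so the solution is unique by Lemma 2.1.
A = np.array([[4.0, 1.0], [1.0, 4.0]])
B = np.eye(2)
x_star = np.array([1.0, -1.0])
b = A @ x_star - np.abs(B @ x_star)
x, its = sor_like_ngave(A, B, b, omega=1.0)
```

The stopping rule mirrors the one used in the numerical experiments below: iterate until the relative residual drops below a tolerance or a maximum iteration count is reached.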
To study the convergence property of (3.6), let
\[M_\omega=\begin{pmatrix}(1-\omega)I&\omega A^{-1}\\ \omega(1-\omega)D(x)&(1-\omega)I+\omega^{2}D(x)A^{-1}\end{pmatrix}\]
be the iteration matrix of the method (3.6).
Now we note that for convergence every eigenvalue of \(M_\omega\) must be less than \(1\) in modulus, and we need to find the values of the parameter \(\omega\) for which this condition holds. To solve this problem, we need some lemmas.
Lemma 3.1.
Let \(AB^{-1}\in\mathbb R^{n\times n}\) satisfy the assumptions of Lemma 2.1. If \(\mu\) is an eigenvalue of the matrix \(D(x)A^{-1}\), then \(|\mu|<1\).
Proof.
From Lemma 2.1 we know that \(\sigma_{\min}(AB^{-1})>1\) is equivalent to \(\|BA^{-1}\|<1\). Further, since \(D(x)=\operatorname{diag}(\operatorname{sgn}(Bx))B\) and \(\|\operatorname{diag}(\operatorname{sgn}(Bx))\|\le 1\), we have
\[|\mu|\le\rho(D(x)A^{-1})\le\|D(x)A^{-1}\|\le\|\operatorname{diag}(\operatorname{sgn}(Bx))\|\,\|BA^{-1}\|\le\|BA^{-1}\|<1.\]
This completes the proof.
Lemma 3.2.
Let \(AB^{-1}\in\mathbb R^{n\times n}\) satisfy the assumptions of Lemma 2.1. If \(\lambda\) is an eigenvalue of the matrix \(M_\omega\), then \(\lambda\ne 1\).
Proof.
Assume that a vector
\[\begin{pmatrix}x\\y\end{pmatrix}\ne0\]
is an eigenvector of the matrix \(M_\omega\) with the corresponding eigenvalue \(\lambda=1\). Then
\[M_\omega\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}x\\y\end{pmatrix},\qquad(3.7)\]
that is,
\[\begin{cases}\omega x-\omega A^{-1}y=0,\\ \omega(1-\omega)D(x)x+\bigl[\omega^{2}D(x)A^{-1}-\omega I\bigr]y=0.\end{cases}\qquad(3.8)\]
Substituting \(x=A^{-1}y\) from the first equation of (3.8) into the second and dividing by \(\omega\), we obtain
\[(I-D(x)A^{-1})y=0.\qquad(3.9)\]
From Lemma 3.1 it is clear that the matrix \(I-D(x)A^{-1}\) is nonsingular. So we have \(y=0\) and \(x=A^{-1}y=0\). This contradicts the fact that
\[\begin{pmatrix}x\\y\end{pmatrix}\ne0\]
is an eigenvector of the matrix \(M_\omega\). Hence \(\lambda\ne 1\).
Lemma 3.3.
Let \(A\in\mathbb R^{n\times n}\) be nonsingular, and let \(\omega>0\). Assume that \(\lambda\) is an eigenvalue of the matrix \(M_\omega\) and \(\lambda\ne1-\omega\). If \(\mu\) satisfies the condition
\[(\lambda+\omega-1)^{2}=\lambda\omega^{2}\mu,\qquad(3.10)\]
then \(\mu\) is an eigenvalue of \(D(x)A^{-1}\). This conclusion can also be reversed: if \(\mu\) is an eigenvalue of \(D(x)A^{-1}\) and \(\lambda\) satisfies (3.10), then \(\lambda\) is an eigenvalue of the matrix \(M_\omega\).
Proof.
By Lemma 3.2, assume that
\[\begin{pmatrix}x\\y\end{pmatrix}\ne0\]
is an eigenvector of the matrix \(M_\omega\) with the corresponding eigenvalue \(\lambda\). Then
\[M_\omega\begin{pmatrix}x\\y\end{pmatrix}=\lambda\begin{pmatrix}x\\y\end{pmatrix},\qquad(3.11)\]
that is,
\[\begin{cases}(\lambda+\omega-1)x=\omega A^{-1}y,\\ (\lambda+\omega-1)y=\omega(1-\omega)D(x)x+\omega^{2}D(x)A^{-1}y.\end{cases}\qquad(3.12)\]
Let \(t=Ax\). Then from (3.12) we obtain
\[(\lambda+\omega-1)^{2}t=\lambda\omega^{2}D(x)A^{-1}t,\qquad(3.13)\]
and so \(\mu=(\lambda+\omega-1)^2/(\lambda\omega^2)\) is an eigenvalue of the matrix \(D(x)A^{-1}\).
We can use a similar method to prove the second assertion.
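The eigenvalue correspondence (3.10) can be checked numerically. The sketch below assumes the iteration matrix has the block form obtained by eliminating \(x^{k+1}\) from the two updates of the \(\mathrm{SOR}\)-like method, with \(D(x)\) frozen at a constant matrix \(C\) so that the check is purely linear-algebraic (both assumptions are made only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, omega = 5, 1.2
A = 4.0 * np.eye(n) + rng.uniform(-1.0, 1.0, (n, n))  # well-conditioned demo matrix
C = rng.uniform(-0.2, 0.2, (n, n))                    # stand-in for the fixed D(x)
A_inv = np.linalg.inv(A)

# Block iteration matrix obtained by eliminating x^{k+1} from the updates
M = np.block([
    [(1.0 - omega) * np.eye(n),  omega * A_inv],
    [omega * (1.0 - omega) * C,  (1.0 - omega) * np.eye(n) + omega**2 * C @ A_inv],
])

mus = np.linalg.eigvals(C @ A_inv)
for lam in np.linalg.eigvals(M):
    if abs(lam - (1.0 - omega)) < 1e-10:
        continue  # the excluded case lambda = 1 - omega of Lemma 3.3
    mu = (lam + omega - 1.0) ** 2 / (lam * omega**2)
    # relation (3.10) maps each remaining eigenvalue of M to one of C A^{-1}
    assert np.min(np.abs(mus - mu)) < 1e-6
```

Every eigenvalue of the block matrix (other than the excluded case) is mapped by \(\mu=(\lambda+\omega-1)^2/(\lambda\omega^2)\) onto an eigenvalue of \(CA^{-1}\), in agreement with Lemma 3.3.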
Theorem 3.1.
Let \(AB^{-1}\in\mathbb R^{n \times n}\) satisfy the assumptions of Lemma 2.1. Assume that \(\mu\) is an eigenvalue of the matrix \(D(x)A^{-1}\). If \(\mu>0\), then the \(\mathrm{SOR}\)-like iteration method converges for \(0<\omega<2\).
Proof.
First, from (3.10) we obtain
\[\lambda^{2}-\bigl[2(1-\omega)+\omega^{2}\mu\bigr]\lambda+(1-\omega)^{2}=0.\qquad(3.14)\]
By Lemma 2.2, \(|\lambda|<1\) if and only if
\[(1-\omega)^{2}<1\quad\text{and}\quad\bigl|2(1-\omega)+\omega^{2}\mu\bigr|<1+(1-\omega)^{2}.\qquad(3.15)\]
Conditions (3.15) are equivalent to the following form:
\[0<\omega<2,\qquad \omega^{2}\mu<\omega^{2},\qquad \omega^{2}\mu>-(2-\omega)^{2}.\qquad(3.16)\]
Since \(0<\mu<1\) by Lemma 3.1 and the assumption \(\mu>0\), the last two inequalities in (3.16) hold automatically. Obviously, from (3.16) we find \(0<\omega<2\).
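Theorem 3.1 can be illustrated numerically (a sketch, not a proof): for several values of \(\mu\in(0,1)\), the roots of the quadratic (3.14) stay strictly inside the unit circle over the whole interval \(0<\omega<2\):

```python
import numpy as np

# Roots of lambda^2 - [2(1-w) + w^2 mu] lambda + (1-w)^2 = 0, cf. (3.14)
for mu in (0.1, 0.5, 0.9):
    for omega in np.linspace(0.05, 1.95, 50):
        coeffs = [1.0, -(2.0 * (1.0 - omega) + omega**2 * mu), (1.0 - omega) ** 2]
        roots = np.roots(coeffs)
        assert np.all(np.abs(roots) < 1.0), (mu, omega)
```

Near the endpoints \(\omega\to0\) and \(\omega\to2\) the moduli approach \(1\), which is consistent with the convergence interval being exactly \((0,2)\).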
To minimize the number of iterations of the \(\mathrm{SOR}\)-like method, it is necessary to find the optimal relaxation parameters of the iterative equation. To this end, we have the following theorem.
Theorem 3.2.
Let \(AB^{-1}\in\mathbb R^{n\times n}\) satisfy the assumptions of Lemma 2.1, and let \(\rho=\rho(D(x)A^{-1})\). Assume also that every eigenvalue \(\mu\) of the matrix \(D(x)A^{-1}\) is positive. Then the parameter \(\omega_0=2/(1+\sqrt{1-\rho})\) is optimal.
Proof.
From (3.10) we obtain
\[\lambda=\frac{2(1-\omega)+\omega^{2}\mu\pm\omega\sqrt{\mu}\sqrt{\omega^{2}\mu-4(\omega-1)}}{2}.\qquad(3.17)\]
Calculating the modulus of \(\lambda\) and having in mind that the range of \(\omega\) is \((0,2)\), we have
\[|\lambda|=\begin{cases}\biggl(\dfrac{\omega\sqrt{\mu}+\sqrt{\omega^{2}\mu-4(\omega-1)}}{2}\biggr)^{2},&\omega^{2}\mu-4(\omega-1)\ge0,\\ \omega-1,&\omega^{2}\mu-4(\omega-1)<0.\end{cases}\qquad(3.18)\]
The graph of the function (3.18) is shown in Fig. 1.
In Fig. 1, \(|\lambda|\) is plotted along the vertical axis and \(\omega\) along the horizontal axis. Different values of \(\mu\) give different branches of the piecewise function (3.18): the larger \(\mu\) is, the higher the corresponding curve lies. It is clear that \(|\lambda|\) is maximal when \(\mu=\mu_{\max}=\rho\). Hence we have
\[\rho(M_\omega)=\begin{cases}\biggl(\dfrac{\omega\sqrt{\rho}+\sqrt{\omega^{2}\rho-4(\omega-1)}}{2}\biggr)^{2},&\omega^{2}\rho-4(\omega-1)\ge0,\\ \omega-1,&\omega^{2}\rho-4(\omega-1)<0.\end{cases}\qquad(3.19)\]
For a given \(\rho\), \(\rho(M_\omega)\) is a function of \(\omega\), and we visualize (3.19) in Fig. 2. Obviously, the minimum of \(\rho(M_\omega)\) is attained at the intersection of the two branches, which corresponds to \(\omega_0=2/(1+\sqrt{1-\rho})\).
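The formula of Theorem 3.2 and the piecewise expression (3.19) can be sketched as follows (the helper names are illustrative):

```python
import numpy as np

def optimal_omega(rho):
    """Theorem 3.2: omega_0 = 2 / (1 + sqrt(1 - rho)) for rho in (0, 1)."""
    return 2.0 / (1.0 + np.sqrt(1.0 - rho))

def sor_spectral_radius(rho, omega):
    """Piecewise expression (3.19) for rho(M_omega)."""
    disc = omega**2 * rho - 4.0 * (omega - 1.0)
    if disc >= 0.0:
        return ((omega * np.sqrt(rho) + np.sqrt(disc)) / 2.0) ** 2
    return omega - 1.0

# With rho = 0.0977 as in Example 4.1, omega_0 is about 1.0257, and no
# other omega in (0, 2) yields a smaller spectral radius.
rho = 0.0977
omega_0 = optimal_omega(rho)
grid = [sor_spectral_radius(rho, w) for w in np.linspace(0.05, 1.95, 400)]
assert abs(omega_0 - 1.0257) < 1e-3
assert sor_spectral_radius(rho, omega_0) <= min(grid) + 1e-12
```

At \(\omega_0\) the discriminant \(\omega^2\rho-4(\omega-1)\) vanishes, so the two branches of (3.19) meet there; to the left the real-root branch decreases, to the right the branch \(\omega-1\) increases, which is why the intersection gives the minimizer.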
4. Numerical Experiments
In this section, we test the efficiency of the optimal parameter by some numerical experiments. In these experiments, the initial vector is the zero vector, \(\rho=\rho(BA^{-1})\), and the error is measured by the relative residual
\[RES:=\frac{\|Ax^{k}-|Bx^{k}|-b\|}{\|b\|}.\qquad(4.1)\]
The \(\mathrm{SOR}\)-like iterative process is terminated when \(RES\le 10^{-6}\) or after 500 iterations. All the tests are executed in MATLAB 2020b.
Example 4.1 [9].
Let \(A,B\in\mathbb R^{n\times n}\) and \(d\in\mathbb R^{n\times 1}\), where
We can calculate \(\rho=0.0977\) and \(\omega_0=1.0257\) easily, and the solution of the corresponding equation is
Then we draw the corresponding plot as shown below.
In Fig. 3, the optimal parameter can be seen to lie between \(1.1\) and \(1.2\); this is consistent with our theoretical optimal parameter \(\omega_0\).
Example 4.2.
Let \(A,B\in\mathbb R^{n\times n}\) and \(d\in\mathbb R^{n\times 1}\), where
In Table 1, we display results for different sizes of \(n\). In this table, ‘IT’ denotes the number of iteration steps, and ‘CPU’ denotes the elapsed CPU time in seconds. From Table 1 we see that the number of iteration steps grows very slowly with the dimension. The accuracy of the GN method is an order of magnitude higher than that of the other methods, and its number of iterations is the smallest. The NINA method shows no advantage over the other methods. Nevertheless, the \(\mathrm{SOR}\)-like method converges rapidly under the optimal parameter, and in terms of elapsed CPU time it outperforms the GN and NINA methods.
Conclusions
In this paper, we mainly discuss the \(\mathrm{SOR}\)-like iterative method for solving the new generalized absolute value equation and further consider the convergence conditions. For the problem of the optimal parameter, we give the formula to calculate it. Finally, numerical experiments confirm our theory, showing that our algorithm is efficient and feasible.
References
O. L. Mangasarian and R. R. Meyer, “Absolute value equations,” Linear Algebra Appl. 419 (2–3), 359–367 (2006).
S.-L. Wu and C.-X. Li, “The unique solution of the absolute value equations,” Appl. Math. Lett. 76, 195–200 (2018).
S.-L. Wu, “The unique solution of a class of the new generalized absolute value equation,” Appl. Math. Lett. 116, 107029 (2021).
O. L. Mangasarian, “A generalized Newton method for absolute value equations,” Optim. Lett. 3 (1), 101–108 (2009).
H.-Y. Zhou, S.-L. Wu, and C.-X. Li, “Newton-based matrix splitting method for generalized absolute value equation,” J. Comput. Appl. Math. 394, 113578 (2021).
M. Noor, J. Iqbal, and K. Noor, “On an iterative method for solving absolute value equations,” Optim. Lett. 6 (5), 1027–1033 (2012).
P. Guo, S.-L. Wu, and C.-X. Li, “On the \(\mathrm{SOR}\)-like iteration method for solving absolute value equations,” Appl. Math. Lett. 97, 107–113 (2019).
D. M. Young, Iterative Solution of Large Linear Systems (Academic Press, New York, 2014).
Y.-F. Ke, “The new iteration algorithm for absolute value equation,” Appl. Math. Lett. 99, 105990 (2020).
Acknowledgments
The author wishes to express gratitude to the anonymous referees for their helpful comments.
Funding
This work was supported by the National Natural Science Foundation of China under grant no. 11961082.
Translated from Matematicheskie Zametki, 2023, Vol. 113, pp. 596–603 https://doi.org/10.4213/mzm13852.
Yang, S., Wu, SL. \(\mathrm{SOR}\)-Like Method for a New Generalized Absolute Value Equation. Math Notes 113, 567–573 (2023). https://doi.org/10.1134/S0001434623030276