1 Introduction

In recent years, absolute value equations (AVEs) have been recognized as NP-hard, non-differentiable problems that arise in a variety of mathematical and engineering applications, including linear programming, quadratic programming, bimatrix games, circuit simulation, contact problems, and journal bearings [1, 9, 14, 18, 26].

The AVE consists of determining an \(x \in {\mathbb {R}}^{k}\) such that

$$\begin{aligned} Ax-{|x|}= b, \end{aligned}$$
(1)

where \(A \in {\mathbb {R}}^{k\times k}\) is an M-matrix, \(b \in {\mathbb {R}}^{k}\) and \({|x|=(|x_{1}|, |x_{2}|,\ldots ,|x_{k}|)^{T} }\) denotes the componentwise absolute value. Moreover, Eq. (1) is a special case of the more general equation

$$\begin{aligned} Ax-B|x|= b, \end{aligned}$$
(2)

where \(B \in {\mathbb {R}}^{k\times k}\); this generalized form was introduced in [26].

The AVE can also be equivalently reformulated as a mixed-integer program [25, 30] or as a linear complementarity problem (LCP) [5, 7]. Consider, for example, the LCP \((\tilde{M},b)\): it requires finding a vector \(z \in {\mathbb {R}}^{k}\) such that

$$\begin{aligned} z\ge 0,\Psi =(\tilde{M}z+b)\ge 0,z^{T}\Psi =0, \end{aligned}$$
(3)

where \(\tilde{M} \in {\mathbb {R}}^{k\times k}\) and \(b \in {\mathbb {R}}^{k}\). Equation (3) can be written as the AVE

$$\begin{aligned} Ax-B|x|=b, \end{aligned}$$
(4)

with

$$\begin{aligned} \begin{array}{l} x=\frac{1}{2}(Bz+b), \end{array} \end{aligned}$$

where \(B=(\tilde{M}-I)\) and \(A=(\tilde{M}+I)\). Abdullah et al. [3] formulated the AVE system as an LCP and solved it using a smoothing method. Mezzadri [21] established the equivalence between AVEs and horizontal LCPs. In addition, conditions for the unique solvability of the AVE, as well as its connection to the LCP, have been analyzed by Hu and Huang [14].
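To illustrate the LCP-to-AVE conversion above, the following sketch (an illustrative example of ours, not taken from the cited works) constructs a small LCP with a known complementary pair \((z,\Psi )\), converts it via \(A=\tilde{M}+I\), \(B=\tilde{M}-I\) and \(x=\frac{1}{2}(Bz+b)\), and checks that x solves the AVE (4):

```python
import numpy as np

# A small LCP (M, b): find z >= 0 with Psi = M z + b >= 0 and z^T Psi = 0.
M = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  3.0]])

# Choose a complementary pair (z_i * Psi_i = 0 componentwise) and back out b.
z   = np.array([1.0, 0.0, 2.0])
Psi = np.array([0.0, 4.0, 0.0])
b   = Psi - M @ z                 # so that Psi = M z + b by construction

# Convert to the AVE  A x - B|x| = b  with A = M + I, B = M - I.
A = M + np.eye(3)
B = M - np.eye(3)
x = 0.5 * (B @ z + b)

residual = A @ x - B @ np.abs(x) - b
print(np.max(np.abs(residual)))   # 0.0
```

The residual vanishes because, under complementarity, \(x=\frac{1}{2}(\Psi -z)\) and \(|x|=\frac{1}{2}(\Psi +z)\), so \(z=|x|-x\) and \(\Psi =|x|+x\) recover the LCP.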

In recent years, the problem of solving AVEs has received considerable attention and has been extensively discussed in the literature. Ke [15] proposed a new iterative algorithm for AVE (1) and established its convergence under suitable conditions. In [31], Zhang and Wei investigated a generalized Newton method for (1) and established finite and global convergence when the interval matrix \([A-I, A+I]\) is regular. Gu et al. [10] proposed nonlinear Picard-CSCS and Picard-like techniques for (1) based on Toeplitz matrices. Chen et al. [8] described an optimal-parameter SOR-like iterative technique for AVE (1) and derived its convergence properties under appropriate conditions. Nguyen et al. [24] demonstrated that the system (1) can be solved using unified smoothing functions associated with a second-order cone. Using the shift splitting procedure, Wu and Li [28] investigated a shift splitting iterative method (SSM) for the system (1). Caccetta et al. [6] introduced a smoothing Newton strategy for the system of AVEs and showed that the technique is globally convergent when \(\Vert A^{-1}\Vert <1\); for further details, see [12, 13, 19, 20, 32, 33].

Recently, Miao and Zhang [22], Li et al. [17] and Mao et al. [23] introduced different techniques for solving LCPs based on the fixed point principle. In this work, we apply this idea to the system of AVEs and propose effective iterative procedures for solving the system (1). The contributions of this article are as follows: we split the matrix A into its diagonal, strictly lower triangular and strictly upper triangular parts and combine them with the fixed point formulation to derive new iterative methods. Moreover, we establish convergence results for the proposed methods under suitable conditions.

The paper is organized as follows. Section 2 describes the proposed methods, along with their convergence analysis for solving the system (1). Sects. 3 and 4 present numerical experiments and concluding remarks, respectively.

2 Proposed iterative methods

Here, we present two new iterative approaches for solving the system (1). We begin by recalling some notation, a lemma, and the definition of an M-matrix.

Suppose \(A=(a_{ij})\in {\mathbb {R}}^{k\times k}\); we write \(A\ge 0\) if \(a_{ij}\ge 0\) for all \(1\le i, j\le k\). In addition, we denote the spectral radius and the absolute value of A by \(\rho (A)\) and \({|A|=(|a_{ij}|)}\), respectively.

Lemma 2.1

[29] For any x and \(z \in {\mathbb {R}}^{k}\), we have \(|x- z|\ge ||x|-|z||\).

Definition 2.2

[23] The matrix A is known as

  1. Z-matrix if all of its off-diagonal elements are non-positive;

  2. M-matrix if \(A^{-1}\ge 0\) and A is a Z-matrix.

To present and investigate the new iterative methods, let the matrix A be divided into the following two parts:

$$\begin{aligned} A= N_{A}-M_{A}, \end{aligned}$$
(5)

with

$$\begin{aligned} \begin{array}{l} N_{A}= D_{A}- U_{A}+ U^{T}_{A} \quad \text {and}\quad M_{A}= L_{A}+ U^{T}_{A}, \end{array} \end{aligned}$$

where \({D}_{A}\) is the diagonal part of A, and \(-{L}_{A}\) and \(-{U}_{A}\) are its strictly lower triangular and strictly upper triangular parts, respectively, so that \(A=D_{A}-L_{A}-U_{A}\), and \(U^{T}_{A}\) denotes the transpose of \(U_{A}\). Based on [4], the AVE (1) corresponds to the following fixed point problem:

$$\begin{aligned} x=F(x), \end{aligned}$$

where

$$\begin{aligned} F(x)=x-\lambda E[ Ax-|x|-b ]. \end{aligned}$$

Here, \(\lambda >0\) and \(E \in {\mathbb {R}}^{k\times k}\) is a diagonal matrix with positive diagonal entries (see [2, 4]). Based on (5), we introduce the following two iterative schemes for solving the AVE:

Method I:

$$\begin{aligned} x^{m+1}=x^{m}-\lambda E[-M_{A} x^{m+1}+N_{A}x^{m}-(|x^{m}|+b)]. \end{aligned}$$
(6)

Method II:

$$\begin{aligned} x^{m+1}=x^{m}+D_{A}^{-1}M_{A}x^{m+1}-\lambda E\left[ Ax^{m}-|x^{m}|-b \right] -D_{A}^{-1}M_{A}x^{m}. \end{aligned}$$
(7)

Here \(0< \lambda \le 1\). Next, we examine the convergence of the above two iterative algorithms.
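To make both schemes concrete, the following Python/NumPy sketch is our own illustrative implementation of Methods I and II. The choices \(E=D_{A}^{-1}\) and the test matrix \(A=tridiag(-1,8,-1)\) are assumptions for illustration, not prescribed by the analysis; since \(M_{A}\) is strictly lower triangular, each step solves a unit lower triangular system:

```python
import numpy as np

def split(A):
    """Splitting (5): A = N_A - M_A with M_A = L_A + U_A^T, using the
    convention A = D_A - L_A - U_A for the triangular parts."""
    L = -np.tril(A, -1)            # strictly lower part of A, negated
    U = -np.triu(A, 1)             # strictly upper part of A, negated
    M_A = L + U.T                  # strictly lower triangular
    return A + M_A, M_A            # N_A = A + M_A

def method_I(A, b, lam=0.97, tol=1e-6, max_iter=1000):
    # (I - lam*E*M_A) x_new = (I - lam*E*N_A) x + lam*E*(|x| + b)
    k = A.shape[0]
    N_A, M_A = split(A)
    E = np.diag(1.0 / np.diag(A))  # positive diagonal E (our choice)
    I = np.eye(k)
    x = np.zeros(k)
    for m in range(1, max_iter + 1):
        rhs = (I - lam * E @ N_A) @ x + lam * (E @ (np.abs(x) + b))
        x = np.linalg.solve(I - lam * E @ M_A, rhs)  # unit lower triangular
        if np.linalg.norm(b + np.abs(x) - A @ x) <= tol:
            break
    return x, m

def method_II(A, b, lam=0.97, tol=1e-6, max_iter=1000):
    # (I - D^{-1}M_A) x_new = (I - D^{-1}M_A) x - lam*E*(A x - |x| - b)
    k = A.shape[0]
    _, M_A = split(A)
    Dinv = np.diag(1.0 / np.diag(A))
    E = Dinv                       # again take E = D_A^{-1}
    C = np.eye(k) - Dinv @ M_A     # unit lower triangular
    x = np.zeros(k)
    for m in range(1, max_iter + 1):
        x = np.linalg.solve(C, C @ x - lam * (E @ (A @ x - np.abs(x) - b)))
        if np.linalg.norm(b + np.abs(x) - A @ x) <= tol:
            break
    return x, m

# Small test problem with known solution x* = ((-1)^w).
k = 100
A = 8 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)
x_star = np.array([(-1.0) ** w for w in range(1, k + 1)])
b = A @ x_star - np.abs(x_star)
results = {}
for solver in (method_I, method_II):
    x, iters = solver(A, b)
    results[solver.__name__] = (x, iters)
    print(solver.__name__, iters, float(np.linalg.norm(x - x_star)))
```

For this strongly diagonally dominant A, both iterations contract (the norm bound \(\Vert (I-\lambda E M_{A})^{-1}\Vert _{\infty }(\Vert I-\lambda E N_{A}\Vert _{\infty }+\lambda \Vert E\Vert _{\infty })<1\) holds), so both runs terminate well before the iteration cap.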

Theorem 2.1

Let \(A= N_{A}-M_{A}\) be non-singular and assume \(\rho (R^{-1}{\bar{S}})<1\), where \({R=I-\lambda E|M_{A}|}\) and \({\bar{S}}= \lambda E+|I-\lambda E N_{A}|\). Then, for any initial guess \(x^{0} \in {\mathbb {R}}^{k}\), the sequence \(\{x^{m}\}_{m=0}^\infty\) produced by Method I converges to the unique solution of the AVE (1).

Proof

From Eq. (6), we have

$$\begin{aligned} x^{m}=x^{m-1}-\lambda E[-M_{A} x^{m}+N_{A}x^{m-1}-(|x^{m-1}|+b)]. \end{aligned}$$
(8)

By subtracting (8) from (6), we get

$$\begin{aligned} x^{m+1}-x^{m}&=x^{m}-x^{m-1}-\lambda E[-M_{A} (x^{m+1}-x^{m})+N_{A}(x^{m}-x^{m-1})\\&\quad -|x^{m}|+ |x^{m-1}|],\\&=x^{m}-x^{m-1}+\lambda E M_{A} (x^{m+1}-x^{m})-\lambda E N_{A}(x^{m}-x^{m-1})\\&\quad +\lambda E(|x^{m}|-|x^{m-1}|), \\&=(I-\lambda E N_{A})(x^{m}-x^{m-1})+\lambda E M_{A} (x^{m+1}-x^{m})+\lambda E(|x^{m}|\\&\quad - |x^{m-1}|). \end{aligned}$$

By considering the absolute values on each side and applying Lemma 2.1, we obtain

$$\begin{aligned} | x^{m+1}-x^{m} |&\le | (I-\lambda E N_{A})(x^{m}-x^{m-1})|+ | \lambda E M_{A} (x^{m+1}-x^{m})|\\&\quad + | \lambda E(| x^{m} |- | x^{m-1} |)|,\\&\le | I-\lambda E N_{A}| | x^{m}-x^{m-1}|+ \lambda E | M_{A} | | x^{m+1}-x^{m}|\\&\quad + \lambda E | | x^{m} |- | x^{m-1} ||,\\&\le | I-\lambda E N_{A}| | x^{m}-x^{m-1}|+ \lambda E | M_{A} | | x^{m+1}-x^{m}|\\&\quad + \lambda E | x^{m} - x^{m-1} |, \end{aligned}$$

which can be rewritten as

$$\begin{aligned} (I - \lambda E | M_{A} |) | x^{m+1}-x^{m}| \le (\lambda E+| I-\lambda E N_{A}|) | x^{m}-x^{m-1}|. \end{aligned}$$
(9)

Since \(M_{A}\) is a strictly lower triangular matrix, and E is a diagonal matrix with positive diagonal components, we know that \(I- \lambda E | M_{A} |\) is a lower triangular matrix with diagonal entries being one. Hence, \((I- \lambda E | M_{A} | )\) is invertible. It follows from Eq. (9) that

$$\begin{aligned} \begin{array}{l} | x^{m+1}-x^{m}| \le R^{-1} {\bar{S}} | x^{m}-x^{m-1} |, \end{array} \end{aligned}$$

where

$$\begin{aligned} \begin{array}{l} R= I- \lambda E | M_{A} | \quad \text {and}\quad {\bar{S}}= \lambda E+| I-\lambda E N_{A} |. \end{array} \end{aligned}$$

In this case, the matrix \(R^{-1}{\bar{S}}\) is non-negative. According to Theorem 2.1 in [2, 4], if \(\rho (R^{-1}{\bar{S}})<1,\) then the sequence \(\{x^{m}\}_{m=0}^\infty\) generated by Method I converges to a solution \({\bar{x}}\) of Eq. (1).

To establish uniqueness of the solution, suppose \(\bar{\tilde{z}}\) is another solution of the AVE. Based on the equations

$$\begin{aligned} \begin{array}{l} \bar{x}=\bar{x}-\lambda E[-M_{A} \bar{x}+N_{A}\bar{x}-(| \bar{x}|+b)], \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{l} \bar{\tilde{z}}=\bar{\tilde{z}}-\lambda E[-M_{A} \bar{\tilde{z}}+N_{A}\bar{\tilde{z}}-(| \bar{\tilde{z}} |+b)], \end{array} \end{aligned}$$

we have

$$\begin{aligned} \begin{array}{l} | \bar{x}-\bar{\tilde{z}}| \le R^{-1} {\bar{S}} | \bar{x}-\bar{\tilde{z}} |. \end{array} \end{aligned}$$

Since \(\rho (R^{-1}{\bar{S}})<1\), iterating this inequality yields \(| \bar{x}-\bar{\tilde{z}}| \le (R^{-1} {\bar{S}})^{m} | \bar{x}-\bar{\tilde{z}} |\rightarrow 0\) as \(m\rightarrow \infty\), and hence

$$\begin{aligned} \begin{array}{l} \bar{x}=\bar{\tilde{z}}. \end{array} \end{aligned}$$

The proof is completed.

Theorem 2.2

If \(\rho (G^{-1} J)<1\) where \(G= I-D_{A}^{-1}| M_{A}|\) and \(J= I+\lambda E-|\lambda EA+D_{A}^{-1} M_{A}|\), then for any starting guess \(x^{0}\in {\mathbb {R}}^{k}\), the sequence \(\{x^{m}\}_{m=0}^\infty\) created through Method II converges to the unique solution of the AVE (1).

Proof

From Eq. (7), we have

$$\begin{aligned} x^{m}=x^{m-1}+D_{A}^{-1}M_{A}x^{m}-\lambda E[ Ax^{m-1}-| x^{m-1}|-b ]-D_{A}^{-1}M_{A}x^{m-1}. \end{aligned}$$
(10)

By subtracting (10) from (7), we have

$$\begin{aligned} \begin{aligned} x^{m+1}-x^{m}&= x^{m}-x^{m-1}+D_{A}^{-1}M_{A}(x^{m+1}-x^{m})-\lambda EA(x^{m}-x^{m-1}) \\&\quad +\lambda E(| x^{m}|-| x^{m-1} |)- D_{A}^{-1}M_{A}(x^{m}-x^{m-1}). \end{aligned} \end{aligned}$$

Taking absolute values on both sides and applying Lemma 2.1, we obtain

$$\begin{aligned} | x^{m+1}-x^{m} |&\le | x^{m}-x^{m-1} |+D_{A}^{-1} | M_{A} | | x^{m+1}-x^{m}|\\&\quad - | \lambda EA+D_{A}^{-1}M_{A} | | x^{m}-x^{m-1} |+ \lambda E | | x^{m}|-| x^{m-1} | |,\\&\le | x^{m}-x^{m-1} |+D_{A}^{-1} | M_{A} | | x^{m+1}-x^{m}| \\&\quad - | \lambda EA+D_{A}^{-1}M_{A} | | x^{m}-x^{m-1} |+ \lambda E | x^{m}- x^{m-1} |, \end{aligned}$$

which can be rewritten as

$$\begin{aligned} \begin{array}{l} | x^{m+1}-x^{m} | \le (I+\lambda E-| \lambda EA+D_{A}^{-1}M_{A} |)| x^{m}-x^{m-1} |+D_{A}^{-1} | M_{A} | | x^{m+1}-x^{m}|, \end{array} \\ \begin{array}{l} (I-D_{A}^{-1} | M_{A}|)| x^{m+1}-x^{m} | \le (I+\lambda E-| \lambda EA+D_{A}^{-1}M_{A} |)| x^{m}-x^{m-1}|. \end{array} \end{aligned}$$

So,

$$\begin{aligned} \begin{array}{l} | x^{m+1}-x^{m} | \le G^{-1} J| x^{m}-x^{m-1} |. \end{array} \end{aligned}$$

Evidently, according to Theorem 2.1 in [2, 4], if \(\rho (G^{-1}J)<1\), the iterative sequence \(\lbrace x^{m}\rbrace _{m=0}^{\infty }\) generated by Method II converges to the solution of the AVE (1).

The proof of uniqueness is analogous to that of Theorem 2.1 and is omitted here.
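The hypotheses of Theorems 2.1 and 2.2 are easy to check numerically for a given A, E and \(\lambda\). The sketch below is our own illustration, in the spirit of the verification reported in Sect. 3; the choice \(E=D_{A}^{-1}\) and the diagonally dominant test matrix are assumptions for the example:

```python
import numpy as np

def theorem_radii(A, lam):
    """Spectral radii rho(R^{-1} S_bar) (Theorem 2.1) and rho(G^{-1} J)
    (Theorem 2.2), taking E = D_A^{-1} as one admissible diagonal choice."""
    k = A.shape[0]
    I = np.eye(k)
    Dinv = np.diag(1.0 / np.diag(A))
    L = -np.tril(A, -1)            # convention A = D_A - L_A - U_A
    U = -np.triu(A, 1)
    M_A = L + U.T
    N_A = A + M_A
    E = Dinv

    R = I - lam * E @ np.abs(M_A)
    S_bar = lam * E + np.abs(I - lam * E @ N_A)
    G = I - Dinv @ np.abs(M_A)
    J = I + lam * E - np.abs(lam * E @ A + Dinv @ M_A)

    rho1 = float(np.max(np.abs(np.linalg.eigvals(np.linalg.solve(R, S_bar)))))
    rho2 = float(np.max(np.abs(np.linalg.eigvals(np.linalg.solve(G, J)))))
    return rho1, rho2

A = 8 * np.eye(60) - np.eye(60, k=1) - np.eye(60, k=-1)  # tridiag(-1, 8, -1)
rho1, rho2 = theorem_radii(A, lam=0.8)
print(rho1 < 1, rho2 < 1)   # True True
```

For this matrix both radii stay below one (they are bounded by the corresponding infinity-norm products), so both convergence conditions hold.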

3 Numerical experiments

In this section, we present three examples that illustrate the performance of the proposed techniques from three distinct standpoints:

  • The number of iterations (denoted Iter)

  • The computation time in seconds (denoted Time)

  • The residual norm (denoted RES)

The stopping criterion is as follows:

$$\begin{aligned} {RES=\Vert b+| x^{m} |-Ax^{m}\Vert \le 10^{-6}.} \end{aligned}$$

In addition, all tests are executed on an Intel(R) Core(TM) i5 CPU @ 1.80 GHz with 4 GB of RAM using Matlab R2016a, and the zero vector is used as the initial guess in Example 3.1.

Numerical investigations are conducted in order to verify the convergence conditions \(\rho (R^{-1}{\bar{S}}) <1\) and \(\rho (G^{-1}J)<1\). These outcomes are outlined in Table 1.

Table 1 Numerical verification of the convergence conditions for Theorems 2.1 and 2.2 with \(\lambda =0.8\)

As shown in Table 1, the convergence conditions of both theorems are verified numerically, and both methods satisfy these conditions. In addition, we perform the following tests to assess the effectiveness of the new techniques.

Example 3.1

Suppose \(A=M+\Psi I\in {\mathbb {R}}^{k\times k}\) and \(Ax^{\star }-| x^{\star } |=b \in {\mathbb {R}}^{k}\) with

\(M=tridiag (-1.5I,S,-0.5I) \in {\mathbb {R}}^{k\times k}\), \(x^{\star }=((-1)^{w}, w=1,2,\ldots ,k)^{T} \in {\mathbb {R}}^{k}\),

where \(S=tridiag(-1.5,8,-0.5) \in {\mathbb {R}}^{\zeta \times \zeta }\), \(I \in {\mathbb {R}}^{\zeta \times \zeta }\) is the identity matrix, and \(k=\zeta ^{2}\). The results are reported in Table 2, and the corresponding convergence curves are shown in Figs. 1 and 2.
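The coefficient matrix of this example is conveniently assembled with Kronecker products. The following sketch is illustrative; the values \(\zeta =8\) and \(\Psi =4\) are our choices for a small instance:

```python
import numpy as np

def example_31_matrix(zeta, psi=4.0):
    """A = M + psi*I with M = tridiag(-1.5 I, S, -0.5 I) block tridiagonal,
    S = tridiag(-1.5, 8, -0.5), and total size k = zeta**2."""
    I_z = np.eye(zeta)
    S = 8.0 * I_z - 1.5 * np.eye(zeta, k=-1) - 0.5 * np.eye(zeta, k=1)
    M = (np.kron(I_z, S)
         - 1.5 * np.kron(np.eye(zeta, k=-1), I_z)   # sub-diagonal blocks
         - 0.5 * np.kron(np.eye(zeta, k=1), I_z))   # super-diagonal blocks
    return M + psi * np.eye(zeta * zeta)

zeta = 8
k = zeta * zeta
A = example_31_matrix(zeta)

# A is a strictly diagonally dominant Z-matrix, hence an M-matrix.
assert np.all(2 * np.diag(A) > np.sum(np.abs(A), axis=1))

# Right-hand side from the known solution x* = ((-1)^w).
x_star = np.array([(-1.0) ** w for w in range(1, k + 1)])
b = A @ x_star - np.abs(x_star)
print(A.shape)   # (64, 64)
```

By construction, \(x^{\star }\) solves \(Ax-|x|=b\) exactly, so the reported residual RES measures how closely an iterate recovers this known solution.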

Table 2 Experimental calculations for Example 3.1 with \(\Psi =4\)
Fig. 1
figure 1

Analysis of the convergence curves for the suggested methods with different \(\lambda\) values

Fig. 2
figure 2

Analysis of the convergence curves for the suggested methods with different \(\lambda\) values

Table 2 presents iterative results illustrating the performance of the proposed approaches. We find that the methods converge faster for larger values of \(\lambda\). The corresponding curves in Figs. 1 and 2 illustrate the behavior of the recommended techniques for various values of \(\lambda\).

Moreover, we extend Example 3.1 and compare the newly designed methods with the method presented in [16] (denoted SD), the scheme stated in [4] (denoted RA), and the technique described in [11] (denoted MG). These results are reported in Table 3.

Table 3 Experimental calculations for Example 3.1 with \(\Psi =4\)

Based on Table 3, all tested techniques efficiently determine the solution of the system (1). We observe that the iteration counts (Iter) and computation times (Time) of the presented techniques are lower than those of the other techniques.

Example 3.2

Suppose \(A=tridiag (-1,4,-1) \in {\mathbb {R}}^{k\times k}\), \(x^{\star }=((-1)^{w}, w=1,2,\ldots ,k)^{T} \in {\mathbb {R}}^{k}\),

and \(b=Ax^{\star }-| x^{\star } | \in {\mathbb {R}}^{k}\). In Examples 3.2 and 3.3, we use the same stopping criterion and initial guess as provided in [28]. Furthermore, we compare the newly designed methods with the process given in [15] (denoted NM), the approach shown in [8] (denoted SOR), and the special shift splitting iterative method suggested in [28] (denoted SSM). Table 4 provides the outcomes of the analysis.
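As a simple point of reference for this example (distinct from the NM, SOR, and SSM schemes compared in Table 4), the plain fixed-point iteration \(x \leftarrow x-\lambda E(Ax-|x|-b)\) from Sect. 2 can be run directly; the choices \(E=D_{A}^{-1}\) and \(\lambda =0.9\) below are our own illustrative assumptions:

```python
import numpy as np

# Example 3.2 data: A = tridiag(-1, 4, -1), x* = ((-1)^w), b = A x* - |x*|.
k = 200
A = 4 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)
x_star = np.array([(-1.0) ** w for w in range(1, k + 1)])
b = A @ x_star - np.abs(x_star)

# Plain fixed-point iteration x <- x - lam*E*(A x - |x| - b).
lam = 0.9
E = np.diag(1.0 / np.diag(A))   # E = D_A^{-1}, a positive diagonal matrix
x = np.zeros(k)
for m in range(1, 1001):
    res = A @ x - np.abs(x) - b
    if np.linalg.norm(res) <= 1e-6:
        break
    x = x - lam * (E @ res)
print(m, float(np.linalg.norm(x - x_star)))
```

With these choices the update map is a contraction (its Lipschitz constant is bounded by \(\Vert I-\lambda EA\Vert _{2}+\lambda \Vert E\Vert _{2}<1\) here), so the baseline converges, though more slowly than the splitting-based Methods I and II.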

Table 4 Experimental calculations for Example 3.2 with \(\lambda =0.97\)

Based on Table 4, all techniques efficiently and accurately solve the problem (1). Our methods (Method I and Method II) outperform the NM, SOR, and SSM approaches in terms of both iteration count (Iter) and computation time (Time).

Example 3.3

Suppose the matrix \(A \in {\mathbb {R}}^{k\times k}\) is given in block form, with blocks \(A_{ij}\in {\mathbb {R}}^{\zeta \times \zeta }\), by

$$\begin{aligned} A_{ij} = {\left\{ \begin{array}{ll} S, &{} j=i,\\ -I, &{} j=i+1,\; i=1,2,\ldots ,\zeta -1, \text { or } j=i-1,\; i=2,\ldots ,\zeta ,\\ 0, &{} \text {otherwise}, \end{array}\right. } \end{aligned}$$

where \(S=tridiag(-1,\theta ,-1) \in {\mathbb {R}}^{ \zeta \times \zeta }\), \(I \in {\mathbb {R}}^{\zeta \times \zeta }\) is the identity matrix, and \(k=\zeta ^{2}\). Calculate \(b=Ax^{\star }-| x^{\star } | \in {\mathbb {R}}^{k}\) using \(x^{\star }=(x_{1},x_{2},x_{3},\ldots , x_{k})^{T}\in {\mathbb {R}}^{k}\) with \(x_{w}=(-1)^{w}\) for \(w=1,2,\ldots ,k\). The outcomes are listed in Table 5.

Table 5 Experimental calculations for Example 3.3 with \(\lambda =0.97\) and \(\theta =8\)

Based on Table 5, all tested techniques quickly determine the solution of the system (1). We observe that the iteration counts (Iter) and computation times (Time) of the presented techniques are lower than those of the other approaches. Overall, we conclude that the suggested techniques are feasible and effective for AVEs.

4 Conclusions

We have presented two new iterative methods for computing AVEs and shown that these procedures converge to the solution of the AVE (1) for a suitable choice of the parameter \(\lambda\). The theoretical analyses as well as the numerical investigations demonstrate that the proposed approaches are promising for solving AVEs.

We have successfully developed two new iterative approaches to solve Eq. (1) when the coefficient matrix A is an M-matrix. Extending them to more general classes of coefficient matrices remains a topic for future work.