1 Introduction

Given two square matrices \(M,N\in \mathbb {R}^{n\times n}\) and a vector \(q\in \mathbb {R}^{n}\), the horizontal linear complementarity problem (HLCP) consists in finding a pair \(x,y\in \mathbb {R}^{n}\) such that

$$\begin{aligned} x\ge 0,\,y\ge 0,\,Ny-Mx=q,\,\,x^{T}y=0. \end{aligned}$$
(1)

It is worth noting that the HLCP reduces to the standard LCP if N is nonsingular. The HLCP also includes linear optimization (LO) and convex quadratic optimization (CQO) as special cases, and finds many applications in economic equilibrium problems, traffic assignment problems, and optimization problems [7]. A variety of solution approaches for the HLCP have been studied intensively. Among them, path-following interior-point methods (IPMs) have gained much more attention than other methods [14, 17]. These methods are powerful tools for solving a wide range of mathematical problems such as LO [14, 18, 20], CQO [1, 18], LCP [2, 7, 18, 20], linearly constrained convex optimization (LCCO) [3], semidefinite optimization (SDO) [5, 12], and the semidefinite linear complementarity problem (SDLCP) [4]. Theoretically, the HLCP can be solved by any algorithm for the LCP, but solving the HLCP directly is a better choice. Owing to the close connection between the HLCP and LO, CQO, and LCP problems, many IPMs have been successfully extended to the HLCP. For instance, Bonnans and Gonzaga [6] studied the HLCP and derived general properties for algorithms based on Newton iterations. Huang et al. [9] proposed a high-order feasible interior-point method for the HLCP with polynomial complexity, namely, \(\mathcal {O}(\sqrt{n}\log \frac{\epsilon ^{0}}{\epsilon })\). Monteiro et al. [13] studied the limiting behavior of the derivatives of certain trajectories associated with the monotone HLCP. Zhang [14] presented a class of infeasible IPMs for the HLCP and showed, under certain conditions, that the algorithm requires at most \(O(n^{2}\log \frac{1}{\epsilon })\) iterations. Some other relevant references can be found in [5, 7, 14, 15]. Earlier, Achache [2] presented a short-step feasible IPM for the monotone standard LCP. He showed that the algorithm enjoys the iteration bound \(\mathcal {O}(\sqrt{n}\log \frac{n}{\epsilon })\).
He also reported some numerical results. Besides, Wang et al. [17] analyzed IPMs for \(P_{*}(\kappa )\)-HLCP based on a new eligible parametric kernel function; they established the complexity of these methods and presented some numerical results. Kheirfam [11] presented an infeasible interior-point method for the HLCP in which he improved the iteration bound of an earlier proposed version. Very recently, Darvay et al. [8] designed a feasible IPM for LO by using a new reformulation of the central-path based on an algebraically equivalent transformation of the nonlinear equations defining the central-path. They established its iteration bound and reported some numerical tests to validate the effectiveness of their algorithm. Also, Yong [19] proposed an iterative method based on the fixed point principle for solving the monotone LCP. He showed its global convergence to a solution of the LCP after a finite number of iterations, and reported some numerical results to demonstrate the ability of his method.

The goal of this paper is to present a full-Newton step feasible interior-point algorithm for the monotone HLCP. The idea is to follow the centers of the perturbed HLCP by using only full-Newton steps, with the advantage that no line search is required, and to restrict the iterates to a small neighborhood of the central-path by introducing a suitable proximity measure during the solution process. We then prove, through a new appropriate choice of the default values of the threshold parameter \(\tau \), which defines the size of the neighborhood of the central-path, and of the barrier update parameter \(\theta \), that our algorithm is well-defined and that the full-Newton step is locally quadratically convergent to the central-path. Moreover, we derive its complexity bound, namely, \(\mathcal {O}(\sqrt{n}\log \frac{n}{\epsilon })\), which coincides with the best known iteration bound for such feasible IPMs. Finally, we report some numerical results to show the ability of this approach. Our test problems are reformulated from the so-called NP-hard absolute value equations and from CQO problems. Moreover, to confirm the performance of our method in practice, we compare our numerical results with those obtained by an available non-interior-point iterative method.

The paper is organized as follows. In Sect. 2, the full-Newton step feasible interior-point algorithm for HLCP is presented. In Sect. 3, the complexity analysis and the currently best known iteration bound for short-step methods are established. In Sect. 4, numerical results are reported. Finally, conclusions and future works are drawn in the last section of the paper.

The following notations are used throughout the paper. For a vector \(x\in \mathbb {R}^{n}\), the Euclidean norm and the infinity norm are denoted by \(\Vert x\Vert \) and \(\Vert x\Vert _{\infty }\), respectively. Given two vectors \(x\) and \(y\) in \(\mathbb {R}^{n}\), \(xy=(x_{i}y_{i})_{1\le i\le n}\) denotes their coordinate-wise product, and similarly \(x/y=(x_{i}/y_{i})_{1\le i\le n}\), \(\sqrt{x}=(\sqrt{x_{i}})_{1\le i\le n}\) and \(x^{-1}=(1/x_{i})_{1\le i\le n}\). For \(x\in \mathbb {R}^{n}\), X denotes the diagonal matrix having the components of x as diagonal entries, i.e., \(X={\hbox {Diag}} (x)\). The identity matrix and the vector of all ones are denoted by I and e, respectively. If \(f(x)\ge 0\) is a real-valued function of a real non-negative variable, then \(f(x)=\mathcal {O}(x)\) means that \(f(x)\le cx\) for some positive constant c. Finally, for a matrix \(A\in \mathbb {R}^{n\times n} \), \(\sigma _{\min }(A)\) and \(\sigma _{\max }(A)\) denote the smallest and the largest singular values of A, respectively.

2 A full-Newton step interior-point algorithm for HLCP

In this section, we study first the central-path of HLCP and the search directions. Finally, we state the generic full-Newton step feasible interior-point algorithm for HLCP.

2.1 Central-path for HLCP

Throughout this paper, we assume that (1) satisfies the following assumptions [6, 21].

  • (Interior-point-condition (IPC)). There exists a pair of vectors \((x^0,y^0)\) such that

    $$\begin{aligned} Ny^0-Mx^0=q,\,x^0>0,\,y^0>0. \end{aligned}$$
  • (Monotonicity of (1)). The pair of matrices \([N,\,M]\) satisfies

    $$\begin{aligned} Nu-Mv=0 \Rightarrow u^{T}v\ge 0,\;\text{ for } \text{ any }\; u,v\in \mathbb {R}^{n}. \end{aligned}$$

These two assumptions imply the existence of a solution for HLCP. Finding an approximate solution of HLCP is equivalent to solving the following system:

$$\begin{aligned} \left\{ \begin{array}{ll} Ny-Mx=q,\\ xy=0,\,x\ge 0,\,y\ge 0.\\ \end{array} \right. \end{aligned}$$
(2)

The basic idea of path-following interior-point algorithms is to replace the second equation in (2), the so-called complementarity condition of (1), by the parameterized equation \(xy=\mu e\) with \(\mu >0\). Thus one may consider the following system:

$$\begin{aligned} \left\{ \begin{array}{ll} Ny-Mx=q, \\ xy=\mu e. \\ \end{array} \right. \end{aligned}$$
(3)

The system (3) has a unique solution denoted by \((x(\mu ),y(\mu ))\) for each \(\mu >0\) [18]. We call \((x(\mu ),y(\mu ))\) the \(\mu \)-center of the HLCP. The set of \(\mu \)-centers (with \(\mu \) running through all positive real numbers) is called the central-path of the HLCP. As \(\mu \rightarrow 0\), the limit of the central-path exists, and since the limit point satisfies the complementarity condition \(xy=0\), it yields a solution of the HLCP [6].

2.2 Search directions for HLCP

Now, we want to define search directions \((\varDelta x,\varDelta y)\) that move toward the \(\mu \)-center \((x(\mu ),y(\mu ))\). Applying Newton's method to (3) at a given strictly feasible point (x, y), i.e., a point for which the IPC holds, we get the following linear system:

$$\begin{aligned} \left\{ \begin{array}{ll} N\varDelta y - M\varDelta x=0,\\ X \varDelta y+Y \varDelta x=\mu e-xy, \end{array} \right. \end{aligned}$$
(4)

where \(X= \hbox {Diag}\,(x)\) and \(Y= \hbox {Diag}\,(y)\). The existence of a unique solution \((\varDelta x,\,\varDelta y)\) of (4) is guaranteed by our assumptions since the block matrix

$$\begin{aligned} \left( \begin{array}{cc} -M &{} N\\ Y &{} X\\ \end{array} \right) \end{aligned}$$

is nonsingular (cf Proposition 3.1 in [21]). Therefore, the new iterate is obtained by taking a full-Newton step according to:

$$\begin{aligned} x_{+}:= x+\varDelta x,\,\,y_{+}:=y+\varDelta y. \end{aligned}$$
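To make the step concrete, for \(n=1\) the Newton system (4) reduces to two scalar equations with a closed-form solution: \(\varDelta y=(M/N)\varDelta x\) and \(\varDelta x=N(\mu -xy)/(Mx+Ny)\). The following Python sketch uses illustrative scalar data of our own choosing (M = 2, N = 1), not an example from the paper:

```python
# One full-Newton step for a scalar (n = 1) monotone HLCP:
#   N*y - M*x = q,  target  x*y = mu.
# System (4) reads:  N*dy - M*dx = 0,  x*dy + y*dx = mu - x*y,
# with closed-form solution dx = N*(mu - x*y)/(M*x + N*y), dy = (M/N)*dx.
M, N = 2.0, 1.0            # illustrative data, not from the paper
x, y = 1.0, 1.5
q = N * y - M * x          # choose q so that (x, y) is strictly feasible
mu = 0.5

dx = N * (mu - x * y) / (M * x + N * y)
dy = (M / N) * dx

x_new, y_new = x + dx, y + dy   # full-Newton step: no line search

# Newton equations hold and feasibility N*y - M*x = q is preserved.
assert abs(N * dy - M * dx) < 1e-12
assert abs(x * dy + y * dx - (mu - x * y)) < 1e-12
assert abs(N * y_new - M * x_new - q) < 1e-12
assert x_new > 0 and y_new > 0
```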

Denote

$$\begin{aligned} v=\sqrt{\frac{xy}{\mu }}, \end{aligned}$$
(5)

and

$$\begin{aligned} d_{x}=\frac{v\varDelta x}{x},\,\,d_{y}=\frac{v\varDelta y}{y}. \end{aligned}$$
(6)

One can easily check that

$$\begin{aligned} \mu d_{x}d_{y}=\varDelta x \varDelta y,\,\,\text{ and }\,\,x\varDelta y+y\varDelta x = \mu v(d_{x}+d_{y}). \end{aligned}$$
(7)

Now due to (6) and (7), system (4) becomes

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{N}d_{y}-\bar{M}d_{x}=0,\\ d_{x}+d_{y} = p_{v}, \end{array} \right. \end{aligned}$$
(8)

where \(\bar{M}=MXV^{-1}\), \(\bar{N}=NYV^{-1}\) with \(V:= {\hbox {Diag}} (v)\) and,

$$\begin{aligned} p_{v}=v^{-1}-v. \end{aligned}$$

For the analysis of the algorithm, we use a norm-based proximity measure \(\delta (v)\) defined by:

$$\begin{aligned} \delta :=\delta (xy;\mu )=\frac{1}{2}\Vert p_{v} \Vert . \end{aligned}$$
(9)

Clearly,

$$\begin{aligned} \delta (v)=0 \Leftrightarrow v=e \Leftrightarrow xy=\mu e. \end{aligned}$$

Hence, the value of \(\delta (v)\) can be considered as a measure for the distance between the given pair (xy) and the corresponding \(\mu \)-center \((x(\mu ),y(\mu ))\).
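In code, the quantities (5) and (9) are immediate to evaluate. The following Python sketch (with illustrative vectors of our own, not data from the paper) checks that \(\delta \) vanishes exactly when \(xy=\mu e\):

```python
import math

def proximity(x, y, mu):
    """delta(xy; mu) = 0.5 * ||v^{-1} - v|| with v = sqrt(xy/mu), cf. (5), (9)."""
    v = [math.sqrt(xi * yi / mu) for xi, yi in zip(x, y)]
    p_v = [1.0 / vi - vi for vi in v]          # p_v = v^{-1} - v
    return 0.5 * math.sqrt(sum(p * p for p in p_v))

# delta = 0 exactly on the central-path, i.e. when xy = mu*e:
assert proximity([2.0, 0.5], [0.5, 2.0], 1.0) == 0.0
# off the central-path the measure is positive:
assert proximity([1.0, 1.0], [2.0, 1.0], 1.0) > 0.0
```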

2.3 Algorithm for HLCP

The primal-dual interior-point algorithm for the HLCP works as follows. First, we choose a threshold (default) value \(\tau \) with \(0<\tau <1\), and we suppose that a strictly feasible initial point \((x^{0},y^{0})\) is available such that \(\delta (x^{0}y^{0};\mu _{0}) \le \tau \) for a certain known \(\mu _{0}>0\). Using the search directions \((\varDelta x,\varDelta y)\) obtained from (4) and taking a full-Newton step, the algorithm produces a new iterate \((x_{+},y_{+})=(x+\varDelta x, y+\varDelta y)\). Then it updates the barrier parameter \(\mu \) to \((1-\theta )\mu \) with \(0<\theta <1\), solves the Newton system (4) again, targets the new \(\mu \)-center, and so on. This procedure is repeated until the stopping criterion \(n\mu \le \epsilon \) is satisfied for a given accuracy parameter \(\epsilon \). The generic feasible full-Newton step interior-point algorithm for HLCP is presented in Fig. 1.

Fig. 1 Algorithm 2.3: the generic full-Newton step feasible interior-point algorithm for HLCP
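The steps above can be sketched in plain Python. The sketch below is only an illustration, not the MATLAB implementation used in the paper: it solves the Newton system (4) by Gaussian elimination and is run on a toy monotone HLCP of our own choosing (\(N=M=I\), \(q=0\)), started on the central-path so that \(\delta =0\le \tau \):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a dense linear system."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]; b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

def full_newton_ipm(M, N, x, y, mu, eps):
    """Generic full-Newton step feasible IPM with the default theta of Fig. 1."""
    n = len(x)
    theta = math.sqrt(6.0 / (23.0 * n))
    while n * mu > eps:
        mu *= 1.0 - theta                        # barrier update
        # assemble the block system [[-M, N], [Y, X]] [dx; dy] = [0; mu e - xy]
        A = [[-M[i][j] for j in range(n)] + [N[i][j] for j in range(n)]
             for i in range(n)]
        A += [[y[i] if j == i else 0.0 for j in range(n)] +
              [x[i] if j == i else 0.0 for j in range(n)] for i in range(n)]
        rhs = [0.0] * n + [mu - x[i] * y[i] for i in range(n)]
        d = solve(A, rhs)
        x = [x[i] + d[i] for i in range(n)]      # full-Newton step
        y = [y[i] + d[n + i] for i in range(n)]
    return x, y, mu

# Toy monotone HLCP: N = I, M = I, q = 0; (x0, y0) = (e, e) lies on the
# central-path for mu0 = 1, so delta(x0 y0; mu0) = 0 <= tau.
n = 2
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
x, y, mu = full_newton_ipm(I, I, [1.0] * n, [1.0] * n, 1.0, 1e-6)

gap = sum(xi * yi for xi, yi in zip(x, y))
assert all(xi > 0 for xi in x) and all(yi > 0 for yi in y)  # strict feasibility
assert gap <= 2 * n * mu                                    # Lemma 5 bound
```

Note that the iterates remain strictly feasible throughout and no line search is performed, as claimed for Algorithm 2.3.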

3 Complexity analysis of the algorithm

In this section, we show, for the new default values of \(\theta \) and \(\tau \) described in Fig. 1, that Algorithm 2.3 is well-defined and solves the HLCP in polynomial time.

3.1 Feasibility and local quadratic convergence of the full-Newton step

We first investigate the feasibility of a full-Newton step and then prove that it is locally quadratically convergent. We begin with the following technical lemma, which will be used later.

Lemma 1

Let \(\delta >0\) and \((d_{x},\,d_{y})\) be a solution of system (8). Then, we have

$$\begin{aligned} 0\le d_{x}^{T}d_{y}\le 2\delta ^{2}, \end{aligned}$$
(10)

and

$$\begin{aligned} \Vert d_{x}d_{y}\Vert _{\infty }\le \delta ^{2},\,\Vert d_{x}d_{y}\Vert \le \sqrt{2}\delta ^{2}. \end{aligned}$$
(11)

Proof

For the first part of the first statement, let \(d_{x}\) and \(d_{y}\) be the unique solution of system (8); from its first equation we have \(\bar{N}d_{y}-\bar{M}d_{x}=0\). Substituting \(\bar{N}=NYV^{-1}\) and \(\bar{M}=MXV^{-1}\) gives \(N(YV^{-1} d_{y})-M(XV^{-1}d_{x})=0\). Since the pair [N, M] satisfies the monotonicity assumption, we conclude that

$$\begin{aligned} (YV^{-1}d_{y})^{T}(XV^{-1}d_{x})=\mu \,d_{x}^{T}d_{y}\ge 0, \end{aligned}$$

and hence \(d_{x}^{T}d_{y}\ge 0\).

The second part follows trivially from the equality

$$\begin{aligned} 4\delta ^{2}=\Vert p_{v}\Vert ^{2}=\Vert d_{x}+d_{y}\Vert ^{2}=\Vert d_{x}\Vert ^{2}+\Vert d_{y}\Vert ^{2}+2d_{x}^{T}d_{y}\ge 2d_{x}^{T}d_{y}. \end{aligned}$$

For the second statement, we have

$$\begin{aligned} d_{x}d_{y}=\frac{1}{4}((d_{x}+d_{y})^{2}-(d_{x}-d_{y})^{2}), \end{aligned}$$

and

$$\begin{aligned} \Vert d_{x}+d_{y}\Vert ^{2}=\Vert d_{x}-d_{y}\Vert ^{2}+4d_{x}^{T}d_{y}. \end{aligned}$$

But since \(d_{x}^{T}d_{y}\ge 0\), it follows that

$$\begin{aligned} \Vert d_{x}-d_{y}\Vert \le \Vert d_{x}+d_{y}\Vert . \end{aligned}$$

On the other hand,

$$\begin{aligned} \Vert d_{x}d_{y}\Vert _{\infty }= & {} \frac{1}{4}\Vert (d_{x}+d_{y})^{2}-(d_{x}-d_{y})^{2}\Vert _{\infty }\\\le & {} \frac{1}{4} \max (\Vert d_{x}+d_{y}\Vert ^{2}_{\infty },\Vert d_{x}-d_{y}\Vert ^{2}_{\infty })\\\le & {} \frac{1}{4} \max (\Vert d_{x}+d_{y}\Vert ^{2},\Vert d_{x}-d_{y}\Vert ^{2})\\\le & {} \frac{1}{4}\Vert d_{x}+d_{y}\Vert ^{2}=\frac{1}{4}\Vert p_{v}\Vert ^{2}=\delta ^{2}. \end{aligned}$$

Therefore, \(\Vert d_{x}d_{y}\Vert _{\infty }\le \delta ^{2}\). For the last part, we have

$$\begin{aligned} \Vert d_{x}d_{y}\Vert ^{2}= & {} e^{T}(d_{x}d_{y})^{2}=\frac{1}{16}e^{T}( (d_{x}+d_{y})^{2}-(d_{x}-d_{y})^{2} )^{2}\\= & {} \frac{1}{16}\Vert (d_{x}+d_{y})^{2}-(d_{x}-d_{y})^{2}\Vert ^{2}\\\le & {} \frac{1}{16} (\Vert (d_{x}+d_{y})^{2}\Vert ^{2}+\Vert (d_{x}-d_{y})^{2}\Vert ^{2})\\\le & {} \frac{1}{16} (\Vert d_{x}+d_{y}\Vert ^{4}+\Vert d_{x}-d_{y}\Vert ^{4})\\\le & {} \frac{1}{8} \Vert d_{x}+d_{y}\Vert ^{4}=\frac{1}{8}\Vert p_{v}\Vert ^{4}=2\delta ^{4}. \end{aligned}$$

Hence, \(\Vert d_{x}d_{y}\Vert \le \sqrt{2}\delta ^{2}\). This completes the proof. \(\square \)

Lemma 2

The full-Newton step is strictly feasible, i.e., \(x_{+}>0\) and \(y_{+}>0\), if and only if \(e+d_{x}d_{y}>0.\)

Proof

We have,

$$\begin{aligned} x_{+}y_{+}=(x+\varDelta x)(y+\varDelta y)=xy+x\varDelta y+y\varDelta x+\varDelta x\varDelta y. \end{aligned}$$

Using (6) and (7), we get

$$\begin{aligned} x_{+}y_{+}=\mu (e+d_{x}d_{y}). \end{aligned}$$

If \(x_{+}>0\) and \( y_{+}>0\), then \( x_{+}y_{+}>0\) and so \( e+d_{x}d_{y}>0.\) Conversely, let \(0\le \alpha \le 1\) and define \(x(\alpha ):=x+\alpha \varDelta x\), \(y(\alpha ):=y+\alpha \varDelta y\). We have

$$\begin{aligned} x(\alpha )y(\alpha )=xy+\alpha (\mu e-xy)+\alpha ^{2}\varDelta x\varDelta y. \end{aligned}$$

If \( e+d_{x}d_{y}>0\), then \(\mu e+\varDelta x\varDelta y>0\) (recall from (7) that \(\varDelta x\varDelta y=\mu d_{x}d_{y}\)), i.e., \(\varDelta x\varDelta y>-\mu e\). Hence

$$\begin{aligned} x(\alpha )y(\alpha )>(1-\alpha )xy+(\alpha -\alpha ^{2})\mu e\ge 0. \end{aligned}$$

So \(x(\alpha )y(\alpha )>0\) for each \(0\le \alpha \le 1\). Since \(x(\alpha )\) and \(y(\alpha )\) are linear functions of \(\alpha \) and \(x(0)=x>0\) and \(y(0)=y>0\), then \(x(1)=x_{+}>0\) and \(y(1)=y_{+}>0\). This completes the proof. \(\square \)

For convenience, we may write

$$\begin{aligned} v_{+}^{2}=\frac{x_{+}y_{+}}{\mu }. \end{aligned}$$

It is easy to deduce that

$$\begin{aligned} v_{+}^{2}=e+d_{x}d_{y}\Leftrightarrow x_{+}y_{+}=\mu (e+d_{x}d_{y}). \end{aligned}$$
(12)

Lemma 3

If \(\delta <1\), then \(x_{+}>0\) and \(y_{+}>0\); hence the pair \((x_{+},y_{+})\) is strictly feasible for (1).

Proof

From Lemma 2, we know that \(x_{+}>0\), \(y_{+}>0\) if and only if \(e+d_{x}d_{y}>0\). Because

$$\begin{aligned} 1+(d_{x}d_{y})_{i} \ge 1-\vert (d_{x}d_{y})_{i}\vert \ge 1-\Vert d_{x}d_{y}\Vert _{\infty }\,\text{ for } \text{ all } i, \end{aligned}$$

so, by (11), it follows that \(1+(d_{x}d_{y})_{i} \ge 1-\delta ^{2}\). Thus \(e+d_{x}d_{y}> 0\) if \(\delta <1\). This completes the proof. \(\square \)

The next lemma shows the influence of the full-Newton step on the proximity measure.

Lemma 4

If \(\delta <1\), then

$$\begin{aligned} \delta _{+}:=\delta (v_{+};\mu )\le \frac{{\delta }^{2}}{\sqrt{2(1-\delta ^{2})}}. \end{aligned}$$

Proof

We have

$$\begin{aligned} 2\delta _{+}=\Vert v_{+}-v_{+}^{-1}\Vert . \end{aligned}$$

Due to (12), we have \(v_{+}=\sqrt{e+d_{x}d_{y}}\) and \(v_{+}^{-1}=\frac{e}{\sqrt{e+d_{x}d_{y}}}\). This implies that

$$\begin{aligned} 2\delta _{+}=\left\| \frac{d_{x}d_{y}}{\sqrt{e+d_{x}d_{y}}}\right\| \le \frac{\Vert d_{x}d_{y}\Vert }{\sqrt{1-\Vert d_{x}d_{y}\Vert _{\infty }}}. \end{aligned}$$

It follows from (11) that \(\delta _{+}\le \frac{\delta ^{2}}{\sqrt{2(1-\delta ^{2})}}\). This completes the proof. \(\square \)

Corollary 1

If \(\delta \le \frac{2}{\sqrt{10}}\), then \(\delta _{+}\le \delta ^{2}\), which establishes the local quadratic convergence of the full-Newton step to the central-path.

3.2 Updating the barrier parameter

The following lemma gives an upper bound for the duality gap after a full-Newton step.

Lemma 5

Let \(\delta \le \frac{2}{\sqrt{10}}\). Then after a full-Newton step the duality gap for the pair \((x_{+},y_{+})\) satisfies

$$\begin{aligned} x_{+}^{T}y_{+}\le 2\mu n. \end{aligned}$$
(13)

Proof

Using (12), we have

$$\begin{aligned} x_{+}^{T}y_{+}=\mu e^{T}(e+d_{x}d_{y})=\mu (n+d_{x}^{T}d_{y}). \end{aligned}$$

Using (10), it follows that

$$\begin{aligned} x_{+}^{T}y_{+}\le \mu (n+2\delta ^{2}). \end{aligned}$$

Now, let \(\delta \le \frac{2}{\sqrt{10}}\), then

$$\begin{aligned} x_{+}^{T}y_{+}\le \mu \left( n+\frac{4}{5}\right) . \end{aligned}$$

But since for all \(n\ge 1\), \(n+\frac{4}{5} \le 2n\), this gives the required result. \(\square \)

In the next lemma, we investigate the effect on the proximity of a full-Newton step followed by an update of the barrier parameter \(\mu \).

Lemma 6

Let \(\delta \le \frac{2}{\sqrt{10}}\) and \(\mu _{+}=(1-\theta )\mu \), where \(0\le \theta \le 1\). Then

$$\begin{aligned} \delta ^{2}(x_{+}y_{+};\mu _{+})\le \frac{2}{15}+\frac{\theta ^{2}(n+\frac{4}{5})}{4(1-\theta )}+\frac{4\theta }{15}. \end{aligned}$$

In addition, if \(\delta \le \frac{2}{\sqrt{10}}\), \(\theta =(\frac{6}{23n})^{1/2}\) and \(n\ge 2\), then \(\delta (x_{+}y_{+};\mu _{+})\le \frac{2}{\sqrt{10}}\).

Proof

We have,

$$\begin{aligned} 4\delta ^{2}(x_{+}y_{+};\mu _{+})= & {} \left\| \sqrt{1-\theta }v_{+}^{-1}-\frac{1}{\sqrt{1-\theta }}v_{+}\right\| ^{2}\\= & {} \left\| \sqrt{1-\theta }(v_{+}^{-1}-v_{+})-\frac{\theta }{\sqrt{1-\theta }}v_{+}\right\| ^{2}\\= & {} (1-\theta )\Vert v_{+}^{-1}-v_{+}\Vert ^{2}+\frac{\theta ^{2}}{1-\theta }\Vert v_{+}\Vert ^{2}-2\theta ( v_{+}^{-1}- v_{+})^{T} v_{+}\\= & {} (1-\theta )\Vert v_{+}^{-1}{-} v_{+}\Vert ^{2}{+}\frac{\theta ^{2}}{1-\theta }\Vert v_{+}\Vert ^{2}-2\theta ( v_{+}^{-1})^{T} v_{+}{+}2\theta ( v_{+})^{T} v_{+}. \end{aligned}$$

As \((v_{+}^{-1})^{T}v_{+}=n\) and (Lemma 5)

$$\begin{aligned} (v_{+})^{T} v_{+}=\Vert v_{+}\Vert ^{2}=\frac{1}{\mu }x_{+}^{T}y_{+}\le \left( n+\frac{4}{5}\right) , \end{aligned}$$

it follows that

$$\begin{aligned} \delta ^{2}(x_{+}y_{+};\mu _{+})\le (1-\theta )\delta _{+}^{2}+\frac{\theta ^{2}(n+\frac{4}{5})}{4(1-\theta )}+\frac{2\theta }{5}. \end{aligned}$$

Let \(\delta \le \frac{2}{\sqrt{10}}\), then \(\delta _{+}^{2}\le \frac{2}{15}\), and we deduce that

$$\begin{aligned} \delta ^{2}(x_{+}y_{+};\mu _{+})\le \frac{2(1-\theta )}{15}+\frac{\theta ^{2}(n+\frac{4}{5})}{4(1-\theta )}+\frac{2\theta }{5}. \end{aligned}$$

Now, taking \(\theta =(\frac{6}{23n})^{1/2}\) then \(\theta ^{2}=\frac{6}{23n}\), hence, we get

$$\begin{aligned} \delta ^{2}(x_{+}y_{+};\mu _{+})\le \frac{2(1-\theta )}{15}+\frac{\frac{6}{23n}(n+\frac{4}{5})}{4(1-\theta )}+\frac{2\theta }{5}. \end{aligned}$$

Now, as \(\frac{6(n+\frac{4}{5})}{23n}\le \frac{42}{115}\) for all \(n\ge 2\), then

$$\begin{aligned} \delta ^{2}(x_{+}y_{+};\mu _{+})\le \frac{2}{15}+\frac{21}{230(1-\theta )}+\frac{4\theta }{15}. \end{aligned}$$

Again, for \(n\ge 2\), we have \(\theta \in [0, (\frac{3}{23})^{1/2}]\). Define

$$\begin{aligned} f(\theta )= \frac{2}{15}+\frac{21}{230(1-\theta )}+\frac{4\theta }{15}. \end{aligned}$$

The function \(f(\theta )\) is continuous and monotone increasing on \( [0,(\frac{3}{23})^{1/2}]\). Therefore,

$$\begin{aligned} f(\theta )\le f\left( \left( \frac{3}{23}\right) ^{1/2}\right) =0.37256 <\frac{4}{10},\, \text{ for } \text{ all }\,\theta \in \left[ 0,\left( \frac{3}{23}\right) ^{1/2}\right] . \end{aligned}$$

Hence, after the barrier parameter is updated to \(\mu _{+}=(1-\theta )\mu \) with \(\theta =(\frac{6}{23n})^{1/2}\), and if \(\delta \le \frac{2}{\sqrt{10}}\), then

$$\begin{aligned} \delta (x_{+}y_{+};\mu _{+})\le \frac{2}{\sqrt{10}}. \end{aligned}$$

This completes the proof. \(\square \)

From Lemma 6, we deduce that for the default values \(\tau =\frac{2}{\sqrt{10}}\) and \(\theta =(\frac{6}{23n})^{1/2}\), Algorithm 2.3 is well-defined, since the conditions \(x>0\), \(y>0\) and \(\delta (x_{+}y_{+};\mu _{+})\le \frac{2}{\sqrt{10}}\) are maintained during the solution process.

3.3 Iteration bound

We conclude this section with a theorem that gives the iteration bound of Algorithm 2.3. Before doing so, we apply the results obtained in the previous subsections to get the following lemma.

Lemma 7

Assume that \((x^{0},y^{0})\) is a strictly feasible starting point such that \(\delta (x^{0}y^{0};\mu _{0})\le \frac{2}{\sqrt{10}}\) for a certain \(\mu _{0}>0\). Moreover, let \((x^{k},y^{k})\) be the iterates produced by Algorithm 2.3 after k iterations. Then the inequality \((x^{k})^{T} y^{k}\le \epsilon \) is satisfied for

$$\begin{aligned} k\ge \frac{1}{\theta }\log \left( \frac{2n\mu _{0}}{\epsilon }\right) . \end{aligned}$$

Proof

From (13), it follows that

$$\begin{aligned} (x^{k})^{T} y^{k}\le 2n\mu _{k}=2n(1-\theta )^{k}\mu _{0}. \end{aligned}$$

Then the inequality \((x^{k})^{T} y^{k}\le \epsilon \) holds if

$$\begin{aligned} 2n(1-\theta )^{k}\mu _{0}\le \epsilon . \end{aligned}$$

Taking logarithms, we obtain

$$\begin{aligned} k \log (1-\theta )\le \log \epsilon -\log (2n\mu _{0}) \end{aligned}$$

and using \(-\log (1-\theta )\ge \theta \) for \(0\le \theta \le 1\), then we observe that the above inequality holds if

$$\begin{aligned} k \theta \ge \log (2n\mu _{0})-\log \epsilon =\log \left( \frac{2n\mu _{0}}{\epsilon }\right) . \end{aligned}$$

This implies the lemma. \(\square \)
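The bound of Lemma 7 is easy to check numerically. The Python sketch below uses illustrative values of our own (n = 25, \(\mu _{0}=1/2\), \(\epsilon =10^{-6}\); not from the paper's experiments) and verifies that the smallest k satisfying the bound indeed drives the duality-gap estimate \(2n(1-\theta )^{k}\mu _{0}\) below \(\epsilon \):

```python
import math

n, mu0, eps = 25, 0.5, 1e-6          # illustrative values, not from the paper
theta = math.sqrt(6.0 / (23.0 * n))  # default barrier update of Fig. 1

# smallest k with k >= (1/theta) * log(2 n mu0 / eps), cf. Lemma 7
k = math.ceil(math.log(2 * n * mu0 / eps) / theta)

# since -log(1 - theta) >= theta, this k already forces the gap below eps
assert 2 * n * mu0 * (1 - theta) ** k <= eps
```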

Theorem 1

If \(\theta =(\frac{6}{23n})^{1/2}\) and \(\mu _{0}=\frac{1}{2}\), then Algorithm 2.3 requires at most

$$\begin{aligned} \mathcal {O}\left( \sqrt{n}\log \frac{n}{\epsilon }\right) \end{aligned}$$

iterations for getting an \(\epsilon \)-approximate solution of (1).

Proof

Let \(\theta =(\frac{6}{23n})^{1/2}\) and \(\mu _{0}=\frac{1}{2}\); the result then follows directly from Lemma 7. \(\square \)

4 Numerical results

In this section, we test Algorithm 2.3 on some monotone HLCPs which are reformulated from three problems of interest, namely the absolute value equation (AVE) (see, e.g., [10, 12, 15]), convex quadratic optimization programs, and the monotone standard LCP.

The AVE considered here, is of the type:

$$\begin{aligned} Ax-B\vert x\vert =b, \end{aligned}$$
(14)

where \(A,\,B\in \mathbb {R}^{n\times n}\), \(b\in \mathbb {R}^{n}\), and \(\vert x\vert \) denotes the vector in \(\mathbb {R}^{n}\) whose components are the absolute values of the components of x. Indeed, for \(x\in \mathbb {R}^{n}\), we define \(x^{+}= \max (x,0)\) and \(x^{-}=\max (-x,0)\). Then it is easy to conclude that

$$\begin{aligned} x^{+}\ge 0,\,\,x^{-}\ge 0,\,\,x =x^{+}-x^{-},\,\,\left| x\right| = x^{+}+x^{-}\,\,\text{ and }\,\,x^{+T}x^{-}=0. \end{aligned}$$

Therefore, the AVE is equivalent to the following HLCP: find \(x^{+}\ge 0\) and \(x^{-}\ge 0\) such that

$$\begin{aligned} \left\{ \begin{array}{ll} Nx^{+}-Mx^{-}=q \\ x^{+T}x^{-}=0, \end{array} \right. \end{aligned}$$
(15)

with \(N=A-B\), \(M=A+B\) and \(q=b\). It is shown, under the condition \(\sigma _{\min }(A)>\sigma _{\max }(B)\) (cf. Proposition 2.3 in [10]), that the HLCP in (15) reduces to a P-matrix standard LCP, and therefore the HLCP has a unique solution \((x^{+},x^{-})\) for every b (cf. Theorem 3.3.7 in [7]). Hence, the AVE has \(x=x^{+}-x^{-}\) as its unique solution. For the implementation of Algorithm 2.3, the accuracy is set to \(\epsilon =10^{-6}\) and we use different values of the parameters \(\mu _{0}\) and \(\theta \) in order to improve its performance. Algorithm 2.3 is implemented in MATLAB 7.9 and run on a PC with a 2.67 GHz CPU, 4 GB of RAM and double precision arithmetic. Four monotone HLCP test problems are constructed: two from AVE problems, one from a convex quadratic program, and one from a monotone standard LCP. Finally, we compare Algorithm 2.3 with an iterative fixed point method.
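The reduction from (14) to (15) is mechanical: set \(N=A-B\), \(M=A+B\), \(q=b\), and recover the AVE solution as \(x=x^{+}-x^{-}\). A scalar Python sketch with illustrative data of our own (A = 3, B = 1, b = 5, chosen so that \(\sigma _{\min }(A)=3>1=\sigma _{\max }(B)\)):

```python
# Scalar AVE:  A*x - B*|x| = b, with illustrative data (not from the paper).
A, B, b = 3.0, 1.0, 5.0
N, M, q = A - B, A + B, b            # HLCP data of (15)

# For this instance the HLCP solution has x_minus = 0, so N*x_plus = q:
x_plus, x_minus = q / N, 0.0
assert x_plus >= 0 and x_minus >= 0
assert abs(N * x_plus - M * x_minus - q) < 1e-12   # feasibility in (15)
assert x_plus * x_minus == 0.0                     # complementarity

x = x_plus - x_minus                 # recovered AVE solution
assert abs(A * x - B * abs(x) - b) < 1e-12         # x solves the AVE
```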

Problem 1 Consider the AVE problem of type (14), where A, B and b are given by

$$\begin{aligned} A = \left( \begin{array}{ccccc} 8 &{}\quad 0 &{}\quad -1 &{}\quad 1 &{}\quad -20 \\ 1 &{}\quad 1 &{}\quad 1 &{}\quad 4 &{}\quad 25 \\ 1 &{}\quad -5 &{}\quad 0 &{}\quad 8 &{}\quad -10 \\ 0 &{}\quad 8 &{}\quad 1 &{} \quad -6 &{} \quad 1 \\ 3 &{} \quad 5 &{} \quad -3 &{} \quad 0 &{}\quad 10 \end{array} \right) , B = \left( \begin{array}{ccccc} -1.5 &{}\quad 0 &{}\quad 1.5 &{}\quad 0.5 &{}\quad 0.1 \\ 0 &{}\quad 0.25 &{}\quad 1 &{}\quad 0 &{}\quad 0.5 \\ 1 &{}\quad 0.6 &{}\quad 1 &{}\quad 0.4 &{}\quad 0.5 \\ 0 &{} \quad 0.3 &{} \quad 1 &{}\quad 1 &{}\quad 0 \\ 1 &{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \end{array} \right) ,\,b=e\in \mathbb {R}^{5}. \end{aligned}$$

One can easily check that the AVE in Problem 1 has a unique solution since \(\sigma _{\min }(A)=2.8215>\sigma _{\max }(B)=2.7434\). Therefore the corresponding HLCP in (15) is strictly monotone, and its data are given by

$$\begin{aligned} N=\left( \begin{array}{ccccc} 9.5 &{} 0 &{} -2.5 &{} 0.5 &{} -20.1 \\ 1 &{} 0.75 &{} 0 &{} 4 &{} 24.5 \\ 0 &{} -5.6 &{} -1 &{} 7.6 &{} -10.5 \\ 0 &{} 7.7 &{} 0 &{} -7 &{} 1 \\ 2 &{} 5 &{} -4 &{} 0 &{} 10 \end{array} \right) ,\,\,M=\left( \begin{array}{ccccc} 6.5 &{} 0 &{} 0.5 &{} 1.5 &{} -19.9 \\ 1 &{} 1.25 &{} 2 &{} 4 &{} 25.5 \\ 2 &{} -4.4 &{} 1 &{} 8.4 &{} -9.5 \\ 0 &{} 8.3 &{} 2 &{} -5 &{} 1 \\ 4 &{} 5 &{} -2 &{} 0 &{} 10 \end{array} \right) , \end{aligned}$$

and \(q=b\). The strictly feasible starting point for Algorithm 2.3 is

$$\begin{aligned} x_{0}^{-}=(2.6677,\,0.4111,\,1.3168,\,0.3506,\,1.6744)^{T}, \end{aligned}$$

and

$$\begin{aligned} x_{0}^{+}=(1.3825,\,4.9548,\,2.7173,\,4.6145,\,1.1166)^{T}. \end{aligned}$$

The unique solution of the corresponding HLCP is

$$\begin{aligned} x^{-}=(0,\,0,\,0,\,0,\,0.0753)^{T}, \end{aligned}$$

and

$$\begin{aligned} x^{+}=(0.0286,\,0.6808,\,0.4270,\,0.5953,\,0)^{T}. \end{aligned}$$

Therefore, the unique solution of the AVE is given by

$$\begin{aligned} x=(0.0286,\,0.6808,\,0.4270,\,0.5953,\,-0.0753)^{T}. \end{aligned}$$

The numerical results with different theoretical and relaxed barrier values of \(\mu _{0}\) and \(\theta \) are shown in Table 1.

Table 1 Numerical results for Problem 1

Problem 2 The matrices \(A,\,B\in \mathbb {R}^{n\times n}\), and the vector \(b\in \mathbb {R}^{n}\) of the AVE problem are given by

$$\begin{aligned} A=\left( \begin{array}{cccccc} 6 &{}\quad 0.5 &{}\quad 0.5 &{} \quad \ldots &{} \quad 0.5 &{}\quad 0 \\ 0.5 &{}\quad 6 &{} \quad 0.5 &{}\quad \ldots &{}\quad 0.5 &{}\quad 0 \\ 0.5 &{}\quad 0.5 &{}\quad 6 &{} \quad \cdots &{}\quad 0.5 &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \ddots &{} \quad 0.5 &{} \quad 0 \\ 0.5 &{}\quad 0.5 &{}\quad 0.5 &{}\quad \ldots &{}\quad 6 &{}\quad 0 \\ 0 &{}\quad 0 &{} \quad \ldots &{}\quad 0 &{} \quad 0 &{}\quad 6 \end{array} \right) \end{aligned}$$

and

$$\begin{aligned} B=\left( \begin{array}{cccccc} -1 &{}\quad 0.5 &{}\quad 0.5 &{}\quad \ldots &{} \quad 0.5 &{}\quad 0 \\ 0.5 &{}\quad -1 &{} \quad 0.5 &{}\quad \ldots &{}\quad 0.5 &{}\quad 0 \\ 0.5 &{}\quad 0.5 &{}\quad -1 &{}\quad \cdots &{}\quad 0.5 &{} \quad \vdots \\ \vdots &{}\quad \vdots &{} \quad \ddots &{}\quad \ddots &{}\quad 0.5 &{}\quad 0 \\ 0.5 &{}\quad 0.5 &{} \quad 0.5 &{} \quad \ldots &{}\quad -1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \ldots &{} \quad 0 &{}\quad 0 &{} \quad -1 \end{array} \right) \end{aligned}$$

and

$$\begin{aligned} b=(21,\,28,\ldots ,28,\,21)^{T}. \end{aligned}$$

The numerical results for different sizes n are summarized in Tables 2 and 3. For the initialization of the corresponding HLCP, we take

$$\begin{aligned} x_{0}^{-}=\frac{1}{2}e\,\,\text{ and }\,\,x_{0}^{+}=(3.4286,\,4.5,\ldots ,4.5,\,3.4286)^{T}. \end{aligned}$$

The unique solution of HLCP is given by

$$\begin{aligned} x^{+}=(3,\,4,\ldots ,\,4,\,3)^{T}\,\,\text{ and }\,\,x^{-}=(0,\ldots ,0)^{T}. \end{aligned}$$

Therefore the unique solution of the AVE is

$$\begin{aligned} x=(3,\,4,\ldots ,4,\,3)^{T}. \end{aligned}$$
Table 2 Numerical results with \(\theta =(\frac{6}{23n})^{1/2}\)
Table 3 Numerical results with \(\theta =\frac{1}{2\sqrt{n}}\)

The next example is constructed from a convex quadratic program.

Problem 3 Let us consider the following convex quadratic program:

$$\begin{aligned} (\mathcal {P})\ \ \ \ \min _{x}\ c^{T}x+\frac{1}{2}x^{T}Qx:\ Ax\le b,\ x\ge 0, \end{aligned}$$

where Q is a symmetric positive semidefinite matrix in \(\mathbb {R}^{n\times n}\), \(c\in \mathbb {R}^{n}\), \(b\in \mathbb {R}^{m}\), \(x\in \mathbb {R}^{n}\) and \(A\in \mathbb {R}^{m\times n}\) with \(\hbox {rank}(A)=m\). Cottle [7] showed, by invoking the KKT optimality conditions, that x is an optimal solution of \(\mathcal {P}\) if and only if there exist \(y\in \mathbb {R}_{+}^{m}\) and \(\lambda \in \mathbb {R}_{+}^{n}\) such that

$$\begin{aligned} \left\{ \begin{array}{l} Qx+c+A^{T}y-\lambda =0 \\ y^{T}(b-Ax)=0, \\ \lambda ^{T}x=0 \end{array} \right. \Leftrightarrow \left\{ \begin{array}{l} \lambda =Qx+c+A^{T}y \\ \mu =-Ax+0y+b, \\ \lambda ^{T}x+\mu ^{T}y =0,\ x,y,\lambda ,\mu \ge 0. \end{array} \right. \end{aligned}$$

Therefore the KKT optimality conditions are equivalent to the following monotone HLCP

$$\begin{aligned} Nw-Mz=q,w^{T}z=0,w,z\ge 0 \end{aligned}$$

where

$$\begin{aligned} N=I,\ \ M=\left( \begin{array}{cc} Q &{} A^{T} \\ -A &{} 0 \end{array} \right) ,q=\left( \begin{array}{c} c \\ b \end{array} \right) ,w=\left( \begin{array}{c} \lambda \\ \mu \end{array} \right) ,\ z=\left( \begin{array}{c} x \\ y \end{array} \right) . \end{aligned}$$

Let us consider the following convex quadratic program \(\mathcal {P}\) with the following data:

$$\begin{aligned} A = \left( \begin{array}{ccc} 3 &{}4&{}-2\\ -3&{}2&{}1 \end{array} \right) ,\; Q = \left( \begin{array}{ccc} 2&{}1&{}0\\ 1&{}4&{}0\\ 0&{}0&{}6 \end{array} \right) ,\;b =(10,\,2)^{T},\;c =(1,\,-2,\,4)^{T}. \end{aligned}$$

Therefore the data of the corresponding monotone HLCP is given by

$$\begin{aligned} M= \left( \begin{array}{ccccc} 2&{}1&{}0&{}3&{}-3\\ 1&{}4&{}0&{}4&{}2\\ 0&{}0&{}6&{}-2&{}1\\ -3&{}-4&{}2&{}0&{}0\\ 3&{}-2&{}-1&{}0&{}0 \end{array} \right) ,\,N=I,q=(1,\,-2,\,4,\,10,\,2)^{T}. \end{aligned}$$
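The block data above can be assembled mechanically from Q, A, b and c. A small Python sketch reproducing the M and q of Problem 3:

```python
# Assemble the HLCP data M = [[Q, A^T], [-A, 0]], N = I, q = (c; b)
# from the convex quadratic program of Problem 3.
A = [[3, 4, -2],
     [-3, 2, 1]]
Q = [[2, 1, 0],
     [1, 4, 0],
     [0, 0, 6]]
b = [10, 2]
c = [1, -2, 4]

m, n = len(A), len(A[0])
M = [Q[i] + [A[k][i] for k in range(m)] for i in range(n)] \
  + [[-A[i][j] for j in range(n)] + [0] * m for i in range(m)]
q = c + b

expected_M = [[2, 1, 0, 3, -3],
              [1, 4, 0, 4, 2],
              [0, 0, 6, -2, 1],
              [-3, -4, 2, 0, 0],
              [3, -2, -1, 0, 0]]
assert M == expected_M          # matches the matrix displayed above
assert q == [1, -2, 4, 10, 2]
```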

The starting point for Algorithm 2.3 is given by

$$\begin{aligned} z^{0}=e\in \mathbb {R}^{5}\,\,\text{ and }\,\,w^{0}=(4,\,9,\,9,\,5,\,2)^{T}. \end{aligned}$$

The unique solution of the HLCP is given by

$$\begin{aligned} z^{*}=(0,\,0.5,\,0,\,0,\,0)^{T} \end{aligned}$$

and

$$\begin{aligned} w^{*}=(1.5,\,0,\,4,\,8,\,1)^{T}. \end{aligned}$$

So the unique minimum \(x^{*}\) of \(\mathcal {P}\) is attained at the point

$$\begin{aligned} x^{*}=(0,\,0.5,\,0)^{T}. \end{aligned}$$

The numerical results with different theoretical and relaxed barrier values of \(\mu _{0}\) and \(\theta \) are shown in Table 4.

Table 4 Numerical results for Problem 3

In the next example, we compare the performance of Algorithm 2.3 with an available non-interior-point method, namely, the iterative fixed point algorithm developed earlier by Yong [19]. In fact, if we take \(y=\left| z\right| -z \ge 0\) and \(x=\left| z\right| +z\ge 0\) in (1), then the HLCP can be equivalently transformed into the system of fixed point equations

$$\begin{aligned} (N+M)z=(N-M)\left| z\right| -q. \end{aligned}$$
(16)

Based on (16), and assuming that the matrix \((N+M)\) is nonsingular, the iterative fixed point algorithm can be described as follows.

Fixed Point Algorithm.

Given an initial point \(z^{0}\in \mathbb {R}^{n}\);

compute \(z^{k+1}=(N+M)^{-1}(N-M)\left| z^{k}\right| -(N+M)^{-1}q\);

set \(x^{k+1}=z^{k+1}+\left| z^{k+1}\right| \); repeat until the iteration sequence \(\{x^{k}\}_{k\ge 0}\) converges.
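A minimal Python sketch of this fixed point iteration, run on a down-sized 2×2 instance of the LCP of Problem 4 (N = I, tridiagonal M, q = −e; our illustrative example, for which the exact solution of \(Mx=e\) is \(x=(1/3,1/3)\), i.e. \(z=(1/6,1/6)\)):

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# 2x2 instance in the pattern of Problem 4: N = I, q = -e.
N = [[1.0, 0.0], [0.0, 1.0]]
M = [[4.0, -1.0], [-1.0, 4.0]]
q = [-1.0, -1.0]

NpM = [[N[i][j] + M[i][j] for j in range(2)] for i in range(2)]
NmM = [[N[i][j] - M[i][j] for j in range(2)] for i in range(2)]

# fixed point iteration  z^{k+1} = (N+M)^{-1}((N-M)|z^k| - q), cf. (16)
z = [0.0, 0.0]
for _ in range(100):
    rhs = [r - qi for r, qi in zip(matvec(NmM, [abs(zi) for zi in z]), q)]
    z = solve2(NpM, rhs)

x = [zi + abs(zi) for zi in z]       # x = z + |z|
y = [abs(zi) - zi for zi in z]       # y = |z| - z
# at the solution, y = 0 and x solves M x = e:
assert all(abs(zi - 1.0 / 6.0) < 1e-8 for zi in z)
assert all(abs(xi - 1.0 / 3.0) < 1e-8 for xi in x)
assert all(abs(yi) < 1e-8 for yi in y)
```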

Yong considered the HLCP with \(N=I\), i.e., the monotone standard LCP, and proved under suitable conditions that the iterative fixed point method converges globally to a solution of the HLCP. For more details, the interested reader may consult [19].

Problem 4 [19] Let us consider the following monotone standard LCP with the following data:

$$\begin{aligned} M= \left( \begin{array}{ccccc} 4&{}-1&{}0&{}\ldots &{}0\\ -1&{}4&{}-1&{}\ldots &{}0\\ 0&{}-1&{}4&{}\ldots &{}0\\ \vdots &{}\vdots &{} \vdots &{}\ddots &{}\vdots \\ 0&{}0&{}0&{}\ldots &{}4 \end{array} \right) ,\,N=I,\,q=-e. \end{aligned}$$

The starting point for Algorithm 2.3 is

$$\begin{aligned} x^{0}=e\,\,\text{ and }\,\,y^{0}=(2,1,\ldots ,1,2)^{T}. \end{aligned}$$

The unique solution of the HLCP is given by

$$\begin{aligned}&x^{*}=( 0.3660,\,0.4641,\,0.4904,\,0.4974,\,0.5,\,\ldots ,\\&0.5,\, 0.4974,\,0.4904,\, 0.4641,\,0.3660)^{T} \end{aligned}$$

and

$$\begin{aligned} y^{*}=(0,\,\ldots ,\,0)^{T}. \end{aligned}$$

The corresponding unique solution \(z^{*}\) obtained by the iterative fixed point method is

$$\begin{aligned}&z^{*}=(0.1830,\, 0.2321,\, 0.2452,\, 0.2487,\, 0.25,\, \ldots ,\\&\quad 0.25,\, 0.2487,\, 0.2452,\, 0.2321,\, 0.1830)^{T}. \end{aligned}$$

The numerical results obtained by these two algorithms are shown in Table 5.

Table 5 Numerical results for Problem 4

Comments. The numerical results reported in the above tables show that Algorithm 2.3, with these new default values, provides a solution for monotone HLCPs. Moreover, Algorithm 2.3 obtains better numerical results with the relaxed parameter \(\mu _{0}=0.00005\) and the barrier update \(\theta =(\frac{6}{23n})^{1/2}\), since the number of iterations is significantly reduced. Our numerical results also compete well with those obtained with the classical barrier update strategy, namely, \(\theta =\frac{1}{2\sqrt{n}}\). Finally, the numerical results obtained for Problem 4 confirm that Algorithm 2.3 performs well in practice in comparison with the iterative fixed point method, since the iteration counts and CPU times produced by Algorithm 2.3 are always lower than those of the latter.

5 Conclusion and future works

We have presented an interior-point algorithm of primal-dual type for solving the monotone HLCP. We proved that the complexity of this short-step algorithm is \(\mathcal {O}\left( \sqrt{n}\log \frac{n}{\epsilon }\right) \). Moreover, the analysis is based on a new neighborhood of the central-path of size \(\tau =\frac{2}{\sqrt{10}}\), with the barrier update parameter \(\theta =(\frac{6}{23n})^{1/2}\). Through the HLCP reformulation, Algorithm 2.3 also provides solutions for an important class of NP-hard absolute value equations and for convex quadratic programs.

Some interesting topics remain for further research. First, the extension of Algorithm 2.3 to linear complementarity problems over symmetric cones. Second, the development of an infeasible interior-point algorithm based on the feasibility analysis given in this paper appears to be an interesting topic.