1 Introduction

Since Karmarkar’s ground-breaking paper [1], interior-point methods (IPMs) have become one of the most active research areas in optimization. They have been widely used to obtain strong theoretical results for linear optimization (LO), linear complementarity problems (LCPs), semidefinite optimization (SDO) and many other problems. In [2], the authors proved that different formulations of the LCP, such as the horizontal LCP (HLCP) and the mixed LCP, can be transformed into a standard LCP. In 1995, Miao [3] extended the Mizuno–Todd–Ye (MTY) predictor–corrector method [4] to \(P_{*}(\kappa)\)-LCPs. The class of all \(P_{*}(\kappa)\) matrices is called \(P_{*}\); this class was extensively studied in the 1991 monograph of Kojima et al. [5]. Gurtuna et al. [6] generalized the MTY method to the HLCP and presented a class of corrector–predictor IPMs for solving sufficient HLCPs. Wang and Bai [7] presented a class of polynomial interior-point algorithms for the \(P_{*}(\kappa)\)-HLCP based on parametric kernel functions. Darvay [8] proposed a full-Newton step primal–dual path-following interior-point algorithm for LO. Later on, Achache [9] and Asadi and Mansouri [10] extended Darvay’s algorithm to the LCP and the \(P_{*}(\kappa)\)-HLCP, respectively. Recently, Kheirfam [11] presented a predictor–corrector interior-point algorithm for the \(P_{*}(\kappa)\)-HLCP based on Darvay’s technique. Infeasible IPMs (IIPMs) start with an arbitrary positive point, and feasibility is reached as optimality is approached. In 1990, Lustig [12] proposed an IIPM for the first time. Kojima et al. [13] proved global convergence, whereas Zhang [14] and Mizuno [15] presented polynomial iteration-complexity results for variants of this algorithm. Potra and Sheng [16] proposed an infeasible interior-point algorithm for the \(P_{*}(\kappa)\)-matrix LCP. This algorithm does not depend on the handicap κ of the problem; it operates in a larger neighborhood of the central path than the algorithm of [3] and has high-order convergence for non-degenerate problems. For more details about IIPMs, the reader is referred to [17].
In 2006, Roos [18] proposed a full-Newton step IIPM for LO and proved that the complexity of the algorithm coincides with the best-known iteration bound for IIPMs. Mansouri et al. [19] extended the algorithm from LO to LCP and obtained similar polynomial complexity results. Later on, Kheirfam presented some variants of the algorithm for SDO [20, 21] and for LCP [22]. Recently, Kheirfam and Mahdavi-Amiri [23] extended the algorithm from LO to symmetric cone linear complementarity problems and obtained similar polynomial complexity results. Zhang et al. [24] presented a full-Newton step IIPM for SDO based on a new proximity measure and simplified the complexity analysis of the algorithm.

The goal of this paper is to present a full-Newton step IPM and an IIPM for the \(P_{*}(\kappa)\)-HLCP. Using the proximity measure introduced in [24], we develop an analysis different from those of the works mentioned above for full-Newton step IPMs and IIPMs. We provide the search directions and show that the iteration bounds coincide with the best known bounds for IPMs and IIPMs, while admitting a simple and interesting analysis.

The remainder of our work is organized as follows: Section 2 is devoted to the analysis of full-Newton step IPMs; we provide some tools and obtain the best known iteration bound for IPMs. Section 3 describes the IIPM for the \(P_{*}(\kappa)\)-HLCP in more detail. Section 4 deals with the analysis of the feasibility step of the IIPM and obtains the best known complexity bound for IIPMs. We conclude the paper in Sect. 5.

2 Feasible Full-Newton Step Interior-Point Method

In this section, we introduce a full-Newton step feasible interior-point algorithm for the \(P_{*}(\kappa)\)-HLCP based on the proximity measure introduced in [24].

2.1 The \(P_{*}(\kappa)\)-HLCP and Its Central Path

Given two matrices \(Q, R\in{\mathbb{R}}^{n\times n}\), and a vector \(q\in \mathbb{R}^{n}\), the HLCP consists in finding a pair of vectors \(({ x}, {s})\in\mathbb{R}^{2n}\) such that

$$\begin{aligned} ({\rm P})\quad Qx+Rs=q,\quad x^Ts=0,\quad(x, s)\geq0. \end{aligned}$$

The standard monotone linear complementarity problem (LCP) is obtained by taking R=−I and a positive semidefinite matrix Q. There are other formulations of the LCP as well but, as shown in [25], all popular formulations are equivalent, and the behavior of a large class of IPMs is identical on those formulations. Let κ≥0 be a given constant. We say that (P) is a \(P_{*}(\kappa)\)-HLCP iff

$$\begin{aligned} Qx+Rs=0 \quad\mbox{implies}\quad x^Ts\geq -4\kappa\sum _{i\in I_+}x_is_i,\quad I_+= \{i:x_is_i\geq0\}. \end{aligned}$$

If the above condition is satisfied, then we say that (Q,R) is a \(P_{*}(\kappa)\)-pair. Finding an approximate solution of the \(P_{*}(\kappa)\)-HLCP is equivalent to solving the following system:

$$\begin{aligned} Qx+Rs=q,\quad xs=0,\quad x, s\geq0. \end{aligned}$$

The basic idea of primal–dual IPMs is to replace the complementarity condition xs=0 by the parameterized equation xs=μe with μ>0 and with e=(1,1,…,1)T. This leads to the following system

$$\begin{aligned} Qx+Rs=q,\quad xs=\mu e,\quad x, s\geq0. \end{aligned}$$
(1)

Under the assumption that the \(P_{*}(\kappa)\)-HLCP satisfies the interior-point condition (IPC), i.e., there exists \((x^{0}, s^{0})>0\) such that \(Qx^{0}+Rs^{0}=q\), system (1) has a unique solution for each μ>0 [5, 14]. This solution is denoted as (x(μ),s(μ)) and is called the μ-center of the \(P_{*}(\kappa)\)-HLCP. The set of μ-centers is called the central path of the \(P_{*}(\kappa)\)-HLCP. If μ→0, then the limit of the central path exists and, since the limit points satisfy the complementarity condition, the limit yields a solution of the \(P_{*}(\kappa)\)-HLCP [5, 14].

2.2 The Classic Newton Step

A natural way to define a search direction is to follow Newton’s approach and linearize the second equation in (1). This leads to the following system

$$\begin{aligned} Q\Delta x+R\Delta s=0,\quad\quad x\Delta s+s\Delta x=\mu e-xs. \end{aligned}$$
(2)

We use the following notations:

$$\begin{aligned} v:=\sqrt{\frac{xs}{\mu}}, \quad\quad d_x:= \frac{v\Delta x}{x},\quad\quad d_s:=\frac{v\Delta s}{s}. \end{aligned}$$
(3)

It follows from (3) that the system (2) reduces to

$$\begin{aligned} {\overline{Q}}d_x+{\overline{R}}d_s=0,\quad \quad d_x+d_s=v^{-1}-v, \end{aligned}$$
(4)

where \({\overline{Q}}:=QXV^{-1}, {\overline{R}}:=RSV^{-1}, X:={\rm diag}(x), S:={\rm diag}(s)\) and \(V:={\rm diag}(v)\). System (4) uniquely defines the search direction (d x ,d s ), and the Newton direction (Δxs) can be easily obtained from (3). The new iterates are given by

$$\begin{aligned} x^+:=x+\Delta x,\quad\quad s^+:=s+\Delta s. \end{aligned}$$

From the second equation of system (4), we have

$$\begin{aligned} v(d_x+d_s)=e-v^2. \end{aligned}$$
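For concreteness, the Newton system (2) can be assembled and solved as one block linear system. The following minimal Python sketch does so for an illustrative monotone instance (the data Q, R and the iterate (x, s) are assumptions chosen only for demonstration):

```python
import numpy as np

def newton_step(Q, R, x, s, mu):
    """Solve system (2): Q*dx + R*ds = 0 and s*dx + x*ds = mu*e - x*s."""
    n = len(x)
    J = np.block([[Q, R],
                  [np.diag(s), np.diag(x)]])   # Jacobian of (1) at (x, s)
    rhs = np.concatenate([np.zeros(n), mu * np.ones(n) - x * s])
    d = np.linalg.solve(J, rhs)
    return d[:n], d[n:]

# illustrative monotone HLCP data: R = -I, Q symmetric positive definite
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
R = -np.eye(2)
x, s, mu = np.array([1.0, 2.0]), np.array([2.0, 1.0]), 1.0
dx, ds = newton_step(Q, R, x, s, mu)   # the full step is x + dx, s + ds
```

The scaled directions of (4) are then recovered from (3) as \(d_x=v\Delta x/x\) and \(d_s=v\Delta s/s\).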

2.3 The Proximity Measure and Its Properties

We use the following proximity measure σ(x,s;μ) to measure the closeness of the iterates to the central path; it was first considered, for SDO, in [24]:

$$\begin{aligned} \sigma(v):=\sigma(x, s; \mu):=\|e-v^2\|. \end{aligned}$$
(5)

The proof of the following lemma is similar to the proof of Lemma 3.1 in [24].

Lemma 2.1

Let σ:=σ(x,s;μ). Then

$$\begin{aligned} \sqrt{1-\sigma}\leq v_i\leq\sqrt{1+\sigma},\quad i=1, 2, \dots, n. \end{aligned}$$

2.4 The Full-Newton Step Feasible IPM for P (κ)-HLCP

Here, we obtain lower and upper bounds for the inner product of the scaled directions, \(d_{x}^{T}d_{s}\). For this purpose, we consider system (4). Since \((\overline{Q}, \overline{R})\) is a \(P_{*}(\kappa)\)-pair, we have

$$\begin{aligned} d_x^Td_s \geq&-4\kappa\sum _{i\in I^+}[d_x]_i[d_s]_i \geq-\kappa\sum_{i\in I^+} \bigl([d_x]_i+[d_s]_i \bigr)^2 \\ \geq&-\kappa\sum_{i=1}^n \bigl([d_x]_i+[d_s]_i \bigr)^2=-\kappa\|d_x+d_s\|^2 \\ =&-\kappa \big\| v^{-1}\bigl(e-v^2\bigr) \big\| ^2\geq- \kappa\frac{\sigma^2}{v_{\min}^2}\geq-\kappa\frac{\sigma^2}{1-\sigma}, \end{aligned}$$

where \(I^{+}:=\{i:[d_{x}]_{i}[d_{s}]_{i}\geq0\}\); the second inequality uses \(4ab\leq(a+b)^{2}\), and the last inequality follows from Lemma 2.1. On the other hand, we have

$$\begin{aligned} \|d_x+d_s\|^2=\|d_x \|^2+\|d_s\|^2+2d_x^Td_s= \|d_x-d_s\|^2+4d_x^Td_s, \end{aligned}$$

which implies that

$$\begin{aligned} d_x^Td_s\leq\frac{1}{4} \|d_x+d_s\|^2=\frac{1}{4} \|v^{-1}-v\|^2\leq\frac{1}{4v^2_{\min}}\|e-v^2 \|^2\leq\frac{\sigma^2}{4(1-\sigma)}. \end{aligned}$$
(6)

We describe the generic primal–dual IPM for the \(P_{*}(\kappa)\)-HLCP as follows:

[Algorithm 1: the generic full-Newton step feasible IPM for the \(P_{*}(\kappa)\)-HLCP; figure not reproduced]
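Since the algorithm figure is not reproduced in this extraction, the loop can be sketched in Python as follows. This is a rough illustration only: the monotone test instance (so κ = 0), the dense block solve, and the stopping test \(x^{T}s<\epsilon\) are all assumptions of the sketch.

```python
import numpy as np

def feasible_ipm(Q, R, q, x, s, theta, eps=1e-8):
    """Generic full-Newton step feasible IPM: mu-update, then one full Newton step."""
    n = len(x)
    mu = x @ s / n
    while x @ s >= eps:
        mu *= 1.0 - theta                     # mu-update
        J = np.block([[Q, R], [np.diag(s), np.diag(x)]])
        rhs = np.concatenate([np.zeros(n), mu * np.ones(n) - x * s])
        d = np.linalg.solve(J, rhs)
        x, s = x + d[:n], s + d[n:]           # full Newton step, no line search
    return x, s

# illustrative monotone instance: (x0, s0) = (e, e) lies on the central path
n = 4
Q, R = np.eye(n), -np.eye(n)
x0, s0 = np.ones(n), np.ones(n)
q = Q @ x0 + R @ s0
theta = 2.0 / (11.0 * np.sqrt(n))             # the kappa = 0 choice from the text
x, s = feasible_ipm(Q, R, q, x0, s0, theta)
```

Note that every Newton direction satisfies QΔx+RΔs=0, so the iterates remain feasible for (P) throughout.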

2.5 Some Properties of the Classic Full-Newton Step

The following lemmas describe the effect of a full-Newton step on the duality gap, the condition for its feasibility, and its effect on σ(x,s;μ). Using (3), we have

$$\begin{aligned} x^+=x+\Delta x=\frac{x}{v}(v+d_x), \quad\quad s^+=s+\Delta s= \frac{s}{v}(v+d_s). \end{aligned}$$

Moreover, using the second equation of (4), we obtain

$$\begin{aligned} x^+s^+=\frac{xs}{v^2}(v+d_x) (v+d_s)= \mu \bigl(v^2+v(d_x+d_s)+d_xd_s \bigr)=\mu(e+d_xd_s). \end{aligned}$$
(7)

Lemma 2.2

(Lemma II.48 in [26])

The full-Newton step is strictly feasible if

$$\begin{aligned} e+d_xd_s>0. \end{aligned}$$

Lemma 2.3

If \(\sigma:=\sigma(x, s;\mu)<\frac{2}{1+\sqrt{3+4\kappa}}\), then (x +,s +) is strictly feasible. Furthermore, we have

$$\begin{aligned} \sigma\bigl(x^+, s^+; \mu\bigr)\leq(1+2\kappa)\frac{\sigma^2}{2(1-\sigma)}. \end{aligned}$$

Proof

Using (5) and (7), we have

$$\begin{aligned} \sigma\bigl(x^+, s^+; \mu\bigr) :=& \bigg\| \frac{x^+s^+}{\mu}-e \bigg\| = \|d_xd_s\| \\ \leq&\frac{1}{2} \bigl(\|d_x\|^2+ \|d_s\|^2 \bigr)=\frac{1}{2} \bigl( \|d_x+d_s\|^2-2d_x^Td_s \bigr) \\ =&\frac{1}{2} \bigl(\|v^{-1}-v\|^2-2d_x^Td_s \bigr) \\ \leq&\frac{1}{2} \biggl(\frac{1}{v_{\min}^2}\|e-v^2 \|^2+2\kappa\frac{\sigma^2}{1-\sigma} \biggr) \\ \leq&(1+2\kappa)\frac{\sigma^2}{2(1-\sigma)}. \end{aligned}$$

This completes the second part of the proof of the lemma. As for the first part, we know that if \(\sigma<\frac{2}{1+\sqrt{3+4\kappa}}\), then

$$\begin{aligned} \|d_xd_s\|\leq(1+2\kappa)\frac{\sigma^2}{2(1-\sigma)}<1, \end{aligned}$$

which means that the new iterates are strictly feasible by Lemma 2.2. □
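Lemma 2.3 can be checked numerically. In the sketch below (an illustrative monotone instance, hence κ = 0, with an arbitrarily chosen strictly feasible iterate), one full Newton step is taken and the resulting proximity is compared with the bound \(\sigma^{2}/(2(1-\sigma))\):

```python
import numpy as np

def sigma(x, s, mu):
    """Proximity measure (5): || e - xs/mu ||."""
    return np.linalg.norm(1.0 - x * s / mu)

# illustrative monotone instance (kappa = 0): Q = I, R = -I
n = 3
Q, R = np.eye(n), -np.eye(n)
x = np.array([1.10, 0.90, 1.05])
s = np.array([0.95, 1.08, 1.00])
mu = 1.0
sig = sigma(x, s, mu)                          # proximity before the step

J = np.block([[Q, R], [np.diag(s), np.diag(x)]])
d = np.linalg.solve(J, np.concatenate([np.zeros(n), mu * np.ones(n) - x * s]))
xp, sp = x + d[:n], s + d[n:]                  # full Newton step
```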

Corollary 2.1

If \(\sigma:=\sigma(x, s;\mu)\leq\frac{1}{1+\sqrt{3+4\kappa}}\), then

$$\begin{aligned} \sigma\bigl(x^+, s^+; \mu\bigr)\leq(\sqrt{1+2\kappa} \sigma)^2, \end{aligned}$$

which shows the quadratic convergence of the algorithm.

Proof

From Lemma 2.3, it follows that

$$\begin{aligned} \sigma\bigl(x^+, s^+; \mu\bigr)\leq{(1+2\kappa)\sigma^2} \frac{1+\sqrt{3+4\kappa}}{2\sqrt{3+4\kappa}}\leq(\sqrt{1+2\kappa} \sigma)^2. \end{aligned}$$

This proves the corollary. □

Lemma 2.4

Let x,s be strictly feasible and σ:=σ(x,s;μ). If μ +:=(1−θ)μ for 0<θ<1, then

$$\begin{aligned} \sigma\bigl(x, s; \mu^+\bigr)\leq\frac{\sigma+\theta\sqrt{n}}{1-\theta}. \end{aligned}$$

Proof

Using σ +:=σ(x,s;μ +), the definition of σ and the triangle inequality, we have

$$\begin{aligned} \sigma^+= \bigg\| \frac{v^2}{1-\theta}-e \bigg\| =\frac{1}{1-\theta} \big\| v^2-(1-\theta)e \big\| \leq\frac{1}{1-\theta} (\sigma+\theta\sqrt{n} ). \end{aligned}$$
(8)

This proves the lemma. □

From Lemmas 2.3 and 2.4, one can conclude that after one full-Newton step and one μ update, an upper bound for the proximity measure is

$$\begin{aligned} \sigma\bigl(x^+, s^+; \mu^+\bigr)\leq \frac{\sigma(x^+, s^+; \mu)+\theta\sqrt{n}}{1-\theta} \leq\frac{1}{1-\theta} \biggl((1+2\kappa)\frac{\sigma^2}{2(1-\sigma)}+\theta\sqrt{n} \biggr). \end{aligned}$$

Suppose that the initial iterate (x,s) satisfies σ(x,s;μ)≤τ. To restore the condition σ(x +,s +;μ +)≤τ, the following condition for θ should be satisfied

$$\begin{aligned} \frac{1}{1-\theta} \biggl((1+2\kappa)\frac{\tau^2}{2(1-\tau)}+\theta \sqrt{n} \biggr)\leq\tau. \end{aligned}$$
(9)

Note that in the process of iteration, all the iterates must be strictly feasible. By Lemma 2.3, an upper bound for τ is as follows:

$$\begin{aligned} \tau<\frac{2}{1+\sqrt{3+4\kappa}}. \end{aligned}$$

In particular, if \(\tau=\frac{1}{3(1+2\kappa)}\) and \(\theta=\frac{2}{11(1+2\kappa)\sqrt{n}}\), then (9) holds.
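This claim is easy to verify numerically; the following sketch checks condition (9) for these values of τ and θ over an (arbitrarily chosen) grid of n and κ:

```python
import numpy as np

def lhs_of_9(tau, theta, n, kappa):
    """Left-hand side of condition (9)."""
    return ((1 + 2 * kappa) * tau**2 / (2 * (1 - tau)) + theta * np.sqrt(n)) / (1 - theta)

ok = all(
    lhs_of_9(1 / (3 * (1 + 2 * k)), 2 / (11 * (1 + 2 * k) * np.sqrt(n)), n, k)
    <= 1 / (3 * (1 + 2 * k))
    for k in (0.0, 0.5, 2.0, 10.0)
    for n in (1, 10, 100, 10**4)
)
```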

Lemma 2.5

Suppose that x 0 and s 0 are strictly feasible, and \(\sigma(x^{0}, s^{0}; \mu^{0})\leq\frac{1}{1+\sqrt{3+4\kappa}}\) with \(\mu^{0}=\frac{(x^{0})^{T}s^{0}}{n}\). Let x k and s k be the iterates obtained after k iterations of Algorithm 1. Then the inequality (x k)T s k≤ϵ is satisfied after at most

$$\begin{aligned} \frac{1}{\theta}\log\frac{\frac{21}{20}n\mu^0}{\epsilon} \end{aligned}$$

iterations.

Proof

From (6), (7) and the fact that \(\sigma\leq\frac{1}{1+\sqrt{3+4\kappa}}\), it follows that

$$\begin{aligned} \bigl(x^+\bigr)^Ts^+ =&\mu e^T(e+d_xd_s)= \mu \bigl(n+d_x^Td_s\bigr) \\ \leq&\mu \biggl(n+\frac{\sigma^2}{4(1-\sigma)} \biggr)\leq\mu\biggl(n+\frac{1}{20} \biggr)\leq\frac{21}{20}n\mu. \end{aligned}$$

Thus, we have

$$\begin{aligned} \bigl(x^k\bigr)^Ts^k\leq\frac{21}{20} n \mu^k=\frac{21}{20} n(1-\theta)^k\mu^0. \end{aligned}$$

Then, the inequality (x k)T s kϵ holds if

$$\begin{aligned} \frac{21}{20} n(1-\theta)^k\mu^0\leq\epsilon. \end{aligned}$$

Taking logarithms of both sides of the above inequality, we obtain

$$\begin{aligned} k\log(1-\theta)+\log\biggl(\frac{21}{20}n\mu^0\biggr)\leq \log\epsilon. \end{aligned}$$

Using the fact that −log(1−θ)≥θ, we observe that the above inequality holds if

$$\begin{aligned} k\theta\geq\log \biggl(\frac{21}{20}n\mu^0 \biggr)-\log\epsilon= \log\frac{\frac{21}{20}n\mu^0}{\epsilon}, \end{aligned}$$

and the lemma follows. □

The following theorem gives an upper bound for the total number of iterations produced by Algorithm 1.

Theorem 2.1

Let \(\tau=\frac{1}{3(1+2\kappa)}\) and \(\theta=\frac{2}{11(1+2\kappa)\sqrt{n}}\). Then, Algorithm 1 requires at most

$$\begin{aligned} O \biggl((1+2\kappa)\sqrt{n}\log\frac{n\mu^0}{\epsilon} \biggr), \end{aligned}$$

iterations to obtain a solution of the \(P_{*}(\kappa)\)-HLCP with \(x^{T}s\leq\epsilon\).

3 Full-Newton Step Infeasible Interior-Point Method

In the case of an IIPM, we call the pair (x,s) an ϵ-solution of the \(P_{*}(\kappa)\)-HLCP if the norm of the residual vector q−Qx−Rs does not exceed ϵ, and also x T s≤ϵ. As usual for IIPMs, it is assumed that the initial iterates are

$$\begin{aligned} \bigl(x^0, s^0\bigr)=(\rho_pe, \rho_de)\quad \mbox{and}\quad \mu^0=\rho_d \rho_p, \end{aligned}$$
(10)

where ρ p and ρ d are positive constants such that

$$\begin{aligned} \|x^*\|\leq\rho_p,\quad\quad\|s^*\|\leq \rho_d, \end{aligned}$$
(11)

for some optimal solution \((x^*, s^*)\).

3.1 The Perturbed Problem

Denote the initial residual vector \(r_{q}^{0}\) as \(r_{q}^{0}:=q-Qx^{0}-Rs^{0}\). In general, \(r_{q}^{0}\neq0\). For any ν with 0<ν≤1, we consider the perturbed problem

$$\begin{aligned} (\mathrm{P}_{\nu})\quad q-Qx-Rs=\nu r_q^0,\quad (x, s) \geq0. \end{aligned}$$

Note that, for ν=1, (x,s)=(x 0,s 0) is a strictly feasible solution of (P ν ); hence (P 1) satisfies the IPC. More generally, we have the following lemma, whose proof is similar to the proof of Lemma 3.1 in [18].

Lemma 3.1

Let the original problem (P) be feasible and 0<ν≤1. Then, the perturbed problem (P ν ) satisfies the IPC.

Assuming that problem (P) is feasible, it follows from Lemma 3.1 that the problem (P ν ) satisfies the IPC, for each ν∈(0,1], and hence its central path exists. This means that the system

$$\begin{aligned} q-Q x-Rs=\nu r_q^{0},\quad xs=\mu e,\quad (x, s)\geq0, \end{aligned}$$
(12)

has a unique solution, for any μ>0. For ν∈(0,1] and μ=νμ 0, we denote this unique solution as (x(μ,ν),s(μ,ν)). It is the μ-center of (P ν ). In what follows, the parameters μ and ν will always be in one-to-one correspondence, according to μ=νμ 0. Therefore, we omit one of the parameters and denote the unique solution by (x(μ),s(μ)).

We measure the proximity to the μ-center of the perturbed problem by the quantity σ(x,s;μ) as defined in (5). Thus, initially x 0=ρ p e,s 0=ρ d e and μ 0=ρ p ρ d , whence v 0=e and σ(x 0,s 0;μ 0)=0. In the sequel, it is assumed that at the start of each iteration, σ(x,s;μ)≤τ, where τ is a positive threshold value. This certainly holds at the start of the first iteration.

3.2 An Iteration of the Algorithm

Now, we describe one main iteration of the full-Newton step IIPM; in essence, we follow the same chain of arguments as Roos [18].

3.2.1 The Main Iteration of the Algorithm

Every main iteration consists of a feasibility step, a μ-update and a few centering steps. The algorithm begins with an infeasible interior point (x,s) that is feasible for the perturbed problem (P ν ), with μ=νμ 0 and σ(x,s;μ)≤τ. With ν replaced by ν +=(1−θ)ν, the feasibility step serves to get iterates (x f,s f) that are strictly feasible for \((\mathrm{P}_{\nu^{+}})\) and close to the μ +-center. In fact, the feasibility step is designed in such a way that \(\sigma(x^{f}, s^{f}; \mu^{+})\leq\frac{1}{1+\sqrt{3+4\kappa}}\), that is, (x f,s f) lies in the quadratic convergence neighborhood with respect to the μ +-center of \((\mathrm{P}_{\nu^{+}})\). Then, just by performing a few centering steps starting at (x f,s f) and targeting the μ +-center of \((\mathrm{P}_{\nu^{+}})\), one can easily get iterates (x +,s +) that are strictly feasible for \((\mathrm{P}_{\nu^{+}})\) and satisfy σ(x +,s +;μ +)≤τ.

3.2.2 The Feasibility Step

Here, we describe the feasibility step in detail. Suppose that we have a strictly feasible iterate (x,s) for (P ν ). This means that (x,s) satisfies the first equation in (12). With ν replaced by ν +=(1−θ)ν, to find new iterates (x f,s f) feasible for \((\mathrm{P}_{\nu^{+}})\), we need search directions Δf x and Δf s that satisfy the first equation in the following system:

$$\begin{aligned} \begin{aligned} Q\Delta^fx+R\Delta^fs&=\theta \nu r_q^{0}, \\ x\Delta^f s+s\Delta^f x&=(1-\theta)\mu e-xs. \end{aligned} \end{aligned}$$
(13)

The second equation is inspired by the second equation in the system (2) that we used to define search directions for the feasible case, except that we target the μ +-centers of \((\mathrm{P}_{\nu^{+}})\).

After the feasibility step the iterates are given by

$$\begin{aligned} x^f:=x+\Delta^fx,\quad\quad s^f:=s+\Delta^fs. \end{aligned}$$
(14)

We conclude that after the feasibility step, we have iterates (x f,s f) that satisfy the affine equation of the new perturbed problem \((\mathrm{P}_{\nu^{+}})\). In the analysis, we should also guarantee that x f and s f are positive and belong to the region of quadratic convergence of the μ +-center. In other words, we must have \(\sigma(x^{f}, s^{f}; \mu^{+})\leq\frac{1}{1+\sqrt{3+4\kappa}}\). Proving this is the crucial part of the analysis of the algorithm.

3.3 Generic Infeasible Interior-Point Algorithm

Here, we summarize the steps of Sect. 3.2 as Algorithm 2 below.

[Algorithm 2: the generic full-Newton step infeasible IPM for the \(P_{*}(\kappa)\)-HLCP; figure not reproduced]
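As with Algorithm 1, the figure is omitted in this extraction. Purely as an illustration, the main iteration (feasibility step, μ-update, and a few centering steps) might be sketched in Python as below; the small monotone instance (κ = 0), the bounds ρ p = ρ d = 1, and the fixed number of centering steps are assumptions of the sketch.

```python
import numpy as np

def solve_block(Q, R, x, s, rhs1, rhs2):
    """Solve Q*u + R*w = rhs1 and s*u + x*w = rhs2 for (u, w)."""
    n = len(x)
    J = np.block([[Q, R], [np.diag(s), np.diag(x)]])
    d = np.linalg.solve(J, np.concatenate([rhs1, rhs2]))
    return d[:n], d[n:]

def infeasible_ipm(Q, R, q, rho_p, rho_d, theta, eps=1e-8, n_center=3):
    n = Q.shape[0]
    x, s = rho_p * np.ones(n), rho_d * np.ones(n)   # initial iterates (10)
    mu, nu = rho_p * rho_d, 1.0
    r0 = q - Q @ x - R @ s                          # initial residual
    while max(x @ s, np.linalg.norm(q - Q @ x - R @ s)) >= eps:
        # feasibility step (13): target the (1 - theta)*mu center of the
        # new perturbed problem, removing a theta*nu fraction of the residual
        dx, ds = solve_block(Q, R, x, s, theta * nu * r0,
                             (1 - theta) * mu * np.ones(n) - x * s)
        x, s = x + dx, s + ds
        mu *= 1 - theta
        nu *= 1 - theta
        for _ in range(n_center):                   # a few centering steps
            dx, ds = solve_block(Q, R, x, s, np.zeros(n), mu * np.ones(n) - x * s)
            x, s = x + dx, s + ds
    return x, s

# illustrative monotone instance with optimal solution x* = (1, 0), s* = (0, 1)
Q, R = np.eye(2), -np.eye(2)
q = np.array([1.0, -1.0])
x, s = infeasible_ipm(Q, R, q, rho_p=1.0, rho_d=1.0, theta=1.0 / (20 * 2))
```

In this sketch the residual after each main iteration equals ν r 0 exactly, so feasibility and optimality are indeed reduced at the same rate 1−θ.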

4 Analysis of the Method

According to (13), after the feasibility step the iterates satisfy the feasibility condition for \((\mathrm{P}_{\nu^{+}})\). The hard part in the analysis will be to guarantee that x f,s f are positive and to guarantee that the new iterate satisfies \(\sigma(x^{f}, s^{f}; \mu^{+})\leq\frac{1}{1+\sqrt{3+4\kappa}}\).

4.1 Effect of the Feasibility Step

Let (x,s) be the iterate at the start of an iteration. We define

$$\begin{aligned} d_x^f:=\dfrac{v\Delta^fx}{x},\quad\quad d_s^f:=\dfrac{v\Delta^fs}{s}, \end{aligned}$$
(15)

where v is defined in (3). One can easily check that the system (13), which defines the search directions Δf x and Δf s, can be expressed in terms of the scaled search directions \(d_{x}^{f}\) and \(d_{s}^{f}\) as follows:

$$\begin{aligned} \begin{aligned} {\overline{Q}}d_x^f+{\overline{R}}d^f_s&=\theta \nu r_q^0, \\ d_x^f+d_s^f&=(1- \theta)v^{-1}-v, \end{aligned} \end{aligned}$$
(16)

where

$$\begin{aligned} {\overline{Q}}=QXV^{-1},\quad\quad {\overline{R}}=RSV^{-1}. \end{aligned}$$

Using (3) and (15), we may write

$$\begin{aligned} x^f=x+\Delta^fx=\dfrac{x}{v}\bigl(v+d_x^f \bigr),\quad\quad s^f=s+\Delta^fs=\dfrac{s}{v} \bigl(v+d_s^f\bigr). \end{aligned}$$

Therefore,

$$\begin{aligned} x^fs^f =&\mu\bigl(v+d_x^f \bigr) \bigl(v+d_s^f\bigr)=\mu \bigl(v^2+v \bigl(d^f_x+d^f_s \bigr)+d_x^fd_s^f \bigr) \\ =&\mu \bigl((1-\theta)e+d_x^fd_s^f \bigr). \end{aligned}$$
(17)

The proof of the following lemma is similar to the proof of Lemma 4.1 in [27].

Lemma 4.1

The new iterates (x f,s f) are strictly feasible if \((1-\theta)e+d^{f}_{x}d^{f}_{s}>0\).

Corollary 4.1

The new iterates (x f,s f) are strictly feasible if \(\|d_{x}^{f}d_{s}^{f}\|_{\infty}<1-\theta\).

Proof

By Lemma 4.1, (x f,s f) is strictly feasible if \((1-\theta)e+d_{x}^{f}d_{s}^{f}>0\). Since the last inequality holds if \(\|d_{x}^{f}d_{s}^{f}\|_{\infty}<1-\theta\), the corollary follows. □

In the sequel, we denote

$$\begin{aligned} w(v):=\frac{1}{2}\sqrt{\big\| d_x^f \big\| ^2+\big\| d_s^f\big\| ^2}, \end{aligned}$$

which implies \(\|d_{x}^{f}\|\leq2w(v)\) and \(\|d_{s}^{f}\|\leq2w(v)\). Moreover, we have

$$\begin{aligned} \big\| d_x^fd_s^f\big\| \leq& \big\| d^f_x\big\| \big\| d^f_s\big\| \leq \frac{1}{2}\bigl(\big\| d_x^f\big\| ^2+ \big\| d_s^f\big\| ^2\bigr)=2w(v)^2, \end{aligned}$$
(18)
$$\begin{aligned} \big\| d_x^fd_s^f\big\| _{\infty} \leq&\big\| d_x^f\big\| \big\| d_s^f\big\| \leq \frac{1}{2}\bigl(\big\| d_x^f\big\| ^2+ \big\| d_s^f\big\| ^2\bigr)=2w(v)^2. \end{aligned}$$
(19)

Lemma 4.2

The new iterates (x f,s f) are strictly feasible if \(w(v)<\sqrt{\frac{1-\theta}{2}}\).

Proof

By Corollary 4.1 and (19), the result easily follows. □

4.2 Upper Bound of σ(v f)

The following lemma gives an upper bound for σ(x f,s f;μ +). Recall from definition (5) that

$$\begin{aligned} \sigma\bigl(v^f\bigr):=\sigma\bigl(x^f, s^f; \mu^+\bigr)= \big\| e-\bigl(v^f\bigr)^2 \big\| , \quad\mbox{where } v^f:=\sqrt{\frac{x^fs^f}{\mu^+}}. \end{aligned}$$
(20)

Lemma 4.3

Let (x f,s f) be strictly feasible. Then we have

$$\begin{aligned} \sigma\bigl(v^f\bigr)\leq\frac{2}{1-\theta} w(v)^2. \end{aligned}$$

Proof

Dividing (17) by μ +=(1−θ)μ, we have

$$\begin{aligned} \frac{x^fs^f}{\mu^+}=\frac{(1-\theta)e+d_x^fd_s^f}{1-\theta}=e+\frac{d_x^fd_s^f}{1-\theta}. \end{aligned}$$

Hence

$$\begin{aligned} \sigma\bigl(v^f\bigr)= \big\| e-\bigl(v^f\bigr)^2 \big\| = \bigg\| e- \biggl(e+\frac{d_x^fd_s^f}{1-\theta} \biggr) \bigg\| =\frac{1}{1-\theta} \big\| d_x^fd_s^f \big\| \leq \frac{2 w(v)^2}{1-\theta}, \end{aligned}$$

where the last inequality follows by (18), and the proof is complete. □

We want the new iterate (x f,s f) to be within the neighborhood where the Newton process targeting the μ +-center of \((\mathrm{P}_{\nu^{+}})\) is quadratically convergent, that is, \(\sigma(v^{f})\leq\frac{1}{1+\sqrt{3+4\kappa}}\). According to Lemma 4.3, it suffices to have

$$\begin{aligned} w(v)^2\leq\frac{1-\theta}{2(1+\sqrt{3+4\kappa})}. \end{aligned}$$
(21)

4.3 Upper Bound for w(v)

In this section, we obtain an upper bound for w(v) which will enable us to find a default value for θ. For this purpose, consider system (16) which defines the search directions Δf x and Δf s in terms of the scaled search directions \(d_{x}^{f}\) and \(d_{s}^{f}\).

Lemma 4.4

(Lemma 3.3 in [28])

If the HLCP is \(P_{*}(\kappa)\), then for any \(a, {\tilde{b}}\in \mathbb{R}^{n}\) and any \(z=(x^{T}, s^{T})^{T}\in \mathbb{R}^{2n}_{++}\) the linear system

$$\begin{aligned} Qu+Rv={\tilde{b}},\quad\quad su+xv=a, \end{aligned}$$

has a unique solution w:=(u T,v T)T and the following inequality is satisfied:

$$\begin{aligned} \|w\|_z\leq\sqrt{1+2\kappa}\|\tilde{a}\|_2+(1+\sqrt{2+4 \kappa}) \zeta(z, \tilde{b}), \end{aligned}$$

where

$$\begin{aligned} {\tilde{a}}=(xs)^{-\frac{1}{2}}a,\quad\quad D=X^{-\frac{1}{2}}S^{\frac{1}{2}},\quad\quad\|w \|^2_z=\big\| \bigl(u^T, v^T \bigr)^T\big\| ^2_z=\|Du\|_2^2+ \|D^{-1}v\|_2^2, \end{aligned}$$

and

$$\begin{aligned} \zeta(z, \tilde{b})^2=\min\bigl\{ \big\| \bigl({\tilde{u}}^T, { \tilde{v}}^T\bigr)^T\big\| ^2_z:~Q{ \tilde{u}}+R{\tilde{v}}={\tilde{b}}\bigr\} ={\tilde{b}}^T \bigl(QD^{-2}Q^T+RD^2R^T \bigr)^{-1}{\tilde{b}}. \end{aligned}$$

Due to Lemma 4.4, from system (13) we have

$$\begin{aligned} &\|D\Delta^fx\|^2_2+ \|D^{-1}\Delta^fs\|_2^2 \\ &\quad\leq \bigl(\sqrt{1+2\kappa} \big\| (xs)^{-\frac{1}{2}}\bigl((1-\theta)\mu e-xs \bigr)\big\| _2+\bigl(1+\sqrt{2(1+2\kappa)}\bigr)\zeta\bigl(z, \theta \nu r_q^0\bigr) \bigr)^2 \\ &\quad= \bigl(\sqrt{1+2\kappa}\big\| \sqrt{\mu}\bigl((1-\theta)v^{-1}-v \bigr)\big\| _2+\bigl(1+\sqrt{2(1+2\kappa)}\bigr)\theta\nu\zeta\bigl(z, r_q^0\bigr) \bigr)^2 \\ &\quad\leq \biggl(\sqrt{\mu(1+2\kappa)}\frac{\theta\sqrt{n}+\sigma}{\sqrt{1-\sigma}}+\bigl(1+\sqrt{2(1+2 \kappa)}\bigr)\theta\nu\zeta\bigl(z, r_q^0\bigr) \biggr)^2. \end{aligned}$$
(22)

Let \((x^*, s^*)\) be an optimal solution of (P) satisfying (11), and let the algorithm start with \((x^{0}, s^{0})=(\rho_{p}e, \rho_{d}e)\). Then, we have

$$\begin{aligned} x^*-x^0\leq\rho_p e,\quad\quad s^*-s^0\leq\rho_d e, \end{aligned}$$
(23)

and also

$$\begin{aligned} r^0_q=q-Qx^0-Rs^0=Q \bigl(x^*-x^0\bigr)+R\bigl(s^*-s^0\bigr). \end{aligned}$$
(24)

Now, by the definition of \(\zeta(z, r^{0}_{q})\), (23) and (24), we have

$$\begin{aligned} \zeta\bigl(z, r^0_q\bigr)^2 \leq&\big\| D\bigl(x^*-x^0\bigr)\big\| ^2+\big\| D^{-1} \bigl(s^*-s^0\bigr)\big\| ^2 \\ \leq& \rho_p^2\|De\|^2+ \rho_d^2\|D^{-1}e\|^2 = \rho_p^2 \bigg\| \sqrt{\frac{s}{x}} \bigg\| ^2+ \rho_d^2 \bigg\| \sqrt{\frac{x}{s}} \bigg\| ^2 \\ \leq&\frac{\rho_p^2}{\mu} \bigg\| \frac{s}{v} \bigg\| ^2_1+ \frac{\rho_d^2}{\mu} \bigg\| \frac{x}{v} \bigg\| _1^2 \leq \frac{1}{\mu (1-\sigma)} \bigl(\rho_p^2\|s\|^2_1+ \rho_d^2\|x\|^2_1 \bigr). \end{aligned}$$
(25)

Using \(D\Delta^{f}x=\sqrt{\mu} d^{f}_{x}, D^{-1}\Delta^{f}s=\sqrt{\mu} d^{f}_{s}\), and substituting (25) into (22) and using the definition of w(v), we obtain

$$\begin{aligned} w(v)\leq\frac{1}{2} \biggl(\sqrt{1+2\kappa} \frac{\theta\sqrt{n}+\sigma}{\sqrt{1-\sigma}}+\frac{(1+\sqrt{2(1+2\kappa)})\theta}{\mu^0 }\sqrt{\frac{\rho_p^2\|s\|^2_1+\rho_d^2\|x\|^2_1}{1-\sigma}} \biggr). \end{aligned}$$
(26)

In the sequel, we obtain upper bounds for ∥x1 and ∥s1.

Lemma 4.5

(Lemma 12 in [29])

Let (x,s) be feasible for the perturbed problem \(({\rm \mathrm{P}_{\nu}})\) and (x 0,s 0)=(ρ p e,ρ d e). Then, for any optimal solution \((x^*, s^*)\), we have

$$\begin{aligned} \nu \bigl(s^Tx^0+x^Ts^0 \bigr) \leq(1+4\kappa) \bigl(\nu^2(x^0)^Ts^0+\nu(1-\nu)\bigl((x^0)^Ts^*+(s^0)^Tx^*\bigr)+x^Ts \bigr). \end{aligned}$$

Lemma 4.6

Let (x,s) be feasible for the perturbed problem (P ν ), let \((x^*, s^*)\) be as in (11), and let (x 0,s 0)=(ρ p e,ρ d e). Then, we have

$$\begin{aligned} \|x\|_1\leq(1+4\kappa) (3+\sigma )n\rho_p,\quad\quad \|s\|_1\leq(1+4\kappa) (3+\sigma )n\rho_d. \end{aligned}$$

Proof

Since \(x^{0}=\rho_{p}e\), \(s^{0}=\rho_{d}e\), \(\|x^*\|_{\infty}\leq\|x^*\|\leq\rho_{p}\) and \(\|s^*\|_{\infty}\leq\|s^*\|\leq\rho_{d}\), we have

$$\begin{aligned} \bigl(x^0\bigr)^Ts^*+\bigl(s^0 \bigr)^Tx^*\leq 2n\rho_p\rho_d,\quad\quad \bigl(x^0\bigr)^Ts^0=n\rho_p \rho_d. \end{aligned}$$

Hence, by Lemma 4.5, we get

$$\begin{aligned} s^Tx^0+x^Ts^0 \leq&(1+4\kappa) \biggl(n\nu\rho_p\rho_d+2(1-\nu)n\rho_p \rho_d+\dfrac{x^Ts}{\nu} \biggr) \\ \leq&(1+4\kappa) \biggl(2n\rho_p\rho_d+ \dfrac{x^Ts}{\nu} \biggr) =(1+4\kappa) \bigl(2n\rho_p \rho_d+\mu^0\|v\|^2 \bigr) \\ \leq&(1+4\kappa) \bigl(2+(1+\sigma) \bigr)n\rho_p \rho_d. \end{aligned}$$

Since x and s are positive, we obtain

$$\begin{aligned} \|x\|_1\leq(1+4\kappa) (3+\sigma )n\rho_p,\quad\quad \|s\|_1\leq(1+4\kappa) (3+\sigma )n\rho_d. \end{aligned}$$

This completes the proof. □

Substituting the bounds of ∥x1 and ∥s1 into (26), we have

$$\begin{aligned} w(v)\leq\frac{1}{2} \biggl(\sqrt{1+2\kappa} \frac{\theta\sqrt{n}+\sigma}{ \sqrt{1-\sigma}}+\frac{(1+\sqrt{2(1+2\kappa)})\sqrt{2}(1+4\kappa)(3+\sigma)n\theta}{ \sqrt{1-\sigma}} \biggr). \end{aligned}$$
(27)

4.4 Fixing θ

We know from Sect. 4.2 that if inequality (21) is satisfied, then \(\sigma(v^{f})\leq\frac{1}{1+\sqrt{3+4\kappa}}\) certainly holds. It follows from (27) that if

$$\begin{aligned} &\frac{1}{2} \biggl(\sqrt{1+2\kappa}\frac{\theta\sqrt{n}+\sigma}{\sqrt{1-\sigma}}+ \frac{(1+\sqrt{2(1+2\kappa)})\sqrt{2}(1+4\kappa)(3+\sigma)n\theta}{ \sqrt{1-\sigma}} \biggr) \\ &\quad\leq\sqrt{\frac{1-\theta}{2(1+\sqrt{3+4\kappa})}}, \end{aligned}$$

then inequality (21) certainly holds. Note that the left-hand side of the above inequality is monotonically increasing with respect to σ. Given a threshold τ with \(\sigma\leq\tau\leq\frac{1}{1+\sqrt{3+4\kappa}}\), the above inequality holds if

$$\begin{aligned} &\frac{1}{2} \biggl(\sqrt{1+2\kappa}\frac{\theta\sqrt{n}+\tau}{\sqrt{1-\tau}}+ \frac{(1+\sqrt{2(1+2\kappa)})\sqrt{2}(1+4\kappa)(3+\tau)n\theta}{ \sqrt{1-\tau}} \biggr) \\ &\quad\leq\sqrt{\frac{1-\theta}{2(1+\sqrt{3+4\kappa})}}, \end{aligned}$$

holds; in that case, the new iterates (x f,s f) lie within the quadratic convergence region. At this stage, we set \(\tau=\frac{1}{6({1+2\kappa})}\). Substituting this value of τ into the above inequality, after some calculations, the inequality holds for

$$\begin{aligned} \theta=\frac{1}{20 n(1+2\kappa)^2}. \end{aligned}$$

Note that, by Lemma 4.2, to keep the iterate (x f,s f) feasible, the following condition must be satisfied:

$$\begin{aligned} w(v)<\sqrt{\frac{1-\theta}{2}}. \end{aligned}$$

It follows from (27) that the above inequality certainly holds if

$$\begin{aligned} \frac{1}{2} \biggl(\sqrt{1+2\kappa}\frac{\theta\sqrt{n}+\sigma}{\sqrt{1-\sigma}}+\frac{(1+\sqrt{2(1+2\kappa)})\sqrt{2}(1+4\kappa)(3+\sigma)n\theta}{ \sqrt{1-\sigma}} \biggr)<\sqrt{\frac{1-\theta}{2}}. \end{aligned}$$

Using \(\theta=\frac{1}{20 n(1+2\kappa)^{2}}\) and \(\tau=\frac{1}{6({1+2\kappa})}\), the left-hand side of the above inequality is bounded above by 0.4148, while the right-hand side is bounded below by 0.6892. We conclude that the iterates (x f,s f) are strictly feasible.
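These two constants can be reproduced numerically; the sketch below scans an (arbitrary) grid of n and κ, and the worst case of the left-hand side occurs at n = 1, κ = 0:

```python
import numpy as np

def lhs(theta, sigma, n, kappa):
    """Left-hand side of the strict-feasibility condition, built from (27)."""
    t1 = np.sqrt(1 + 2 * kappa) * (theta * np.sqrt(n) + sigma) / np.sqrt(1 - sigma)
    t2 = ((1 + np.sqrt(2 * (1 + 2 * kappa))) * np.sqrt(2)
          * (1 + 4 * kappa) * (3 + sigma) * n * theta / np.sqrt(1 - sigma))
    return 0.5 * (t1 + t2)

worst = 0.0
for kappa in (0.0, 0.5, 1.0, 5.0, 50.0):
    for n in (1, 2, 10, 100, 10**6):
        theta = 1.0 / (20 * n * (1 + 2 * kappa)**2)
        sigma = 1.0 / (6 * (1 + 2 * kappa))     # sigma <= tau
        worst = max(worst, lhs(theta, sigma, n, kappa))
```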

4.5 Iteration Bound

Let \(\sigma(x^{f}, s^{f}; \mu^{+})\leq\frac{1}{3(1+2\kappa)}\), which is in agreement with Corollary 2.1. Starting at (x f,s f), we perform centering steps in order to get iterates (x +,s +) that satisfy σ(x +,s +;μ +)≤τ. We can estimate the required number of centering steps by using Corollary 2.1. Hence, after k centering steps we will have the iterate (x +,s +) that is still feasible for \((\mathrm{P}_{\nu^{+}})\) and satisfies

$$\begin{aligned} \sigma\bigl(x^+, s^+; \mu^+\bigr) :=&\sigma\bigl(v^k\bigr)\leq \bigl(\sqrt{1+2\kappa}\,\sigma\bigl(v^{k-1}\bigr) \bigr)^2 \leq \bigl(\sqrt{1+2\kappa} \bigl(\sqrt{1+2\kappa}\,\sigma\bigl(v^{k-2}\bigr) \bigr)^2 \bigr)^2 \\ \leq&\cdots\leq (\sqrt{1+2\kappa} )^{2+2^2+\cdots+2^k}\sigma\bigl(v^f \bigr)^{2^k}= (\sqrt{1+2\kappa} )^{2^{k+1}-2}\sigma \bigl(v^f\bigr)^{2^k} \\ =&\frac{1}{1+2\kappa} \bigl((1+2\kappa)\sigma\bigl(v^f\bigr) \bigr)^{2^k} \leq \bigl((1+2\kappa)\sigma\bigl(v^f\bigr) \bigr)^{2^k}\leq \biggl(\frac{1}{3} \biggr)^{2^k}. \end{aligned}$$

Hence σ(x +,s +;μ +)≤τ will hold if k satisfies \((\frac{1}{3} )^{2^{k}}\leq\tau\). This implies that at most

$$\begin{aligned} \biggl\lceil \log_2 \biggl(\frac{\log_2\frac{1}{\tau}}{\log_23} \biggr) \biggr\rceil \end{aligned}$$
(28)

centering steps are needed. Substituting the value \(\tau=\frac{1}{6(1+2\kappa)}\) into (28) shows that the algorithm needs at most \(\log_{2} (\frac{\log_{2}6(1+2\kappa)}{\log_{2}3} )\) centering steps per main iteration.

In each main iteration, both the value of x T s and the norm of the residual vector are reduced by the factor 1−θ. Hence, the total number of main iterations is bounded above by

$$\begin{aligned} \frac{1}{\theta}\log\frac{\max\{(x^0)^Ts^0, \|r_q^0\|\}}{\epsilon}. \end{aligned}$$

Since \(\theta=\frac{1}{20 n(1+2\kappa)^{2}}\), the total number of inner iterations is bounded above by

$$\begin{aligned} 20 n(1+2\kappa)^2\log_2 \biggl(\frac{\log_26(1+2\kappa)}{\log_23} \biggr) \log\frac{\max\{(x^0)^Ts^0, \|r_q^0\|\}}{\epsilon}. \end{aligned}$$

In the following, we state our main result without further proof.

Theorem 4.1

If (P) has an optimal solution (x ,s ) such that

$$\begin{aligned} \|x^*\|\leq\rho_p,\quad\quad\|s^*\|\leq\rho_d, \end{aligned}$$

then after at most

$$\begin{aligned} 20 n(1+2\kappa)^2\log_2 \biggl(\frac{\log_26(1+2\kappa)}{\log_23} \biggr)\log\frac{\max\{(x^0)^Ts^0, \|r_q^0\|\}}{\epsilon}, \end{aligned}$$

inner iterations, the algorithm finds an ϵ-solution of (P).

5 Conclusions

Based on a proximity measure, we generalized the full-Newton step infeasible interior-point method of Roos [18] for linear optimization to horizontal linear complementarity problems. The proposed method has several good properties. First, it uses full steps, so no step-length calculation is needed. Second, the iterates always lie in the quadratic convergence neighborhood with respect to the central path of some perturbed problem. Third, during the solution process, feasibility and optimality are improved at the same rate. Finally, the iteration bound coincides with the currently best-known iteration bound for infeasible interior-point methods. An interesting topic for further research may be the extension of the analysis to linear complementarity problems over symmetric cones.