Abstract
In this paper, we first present a full-Newton step feasible interior-point algorithm for solving horizontal linear complementarity problems. We prove that the full-Newton step to the central path is quadratically convergent. Then, we generalize an infeasible interior-point method for linear optimization to horizontal linear complementarity problems based on new search directions. This algorithm starts from strictly feasible iterates on the central path of a perturbed problem that is produced by a suitable perturbation in the horizontal linear complementarity problem. We use the so-called feasibility steps that find strictly feasible iterates for the next perturbed problem. By using centering steps for the new perturbed problem, we obtain a strictly feasible iterate close enough to the central path of the new perturbed problem. The complexity of the algorithm coincides with the best known iteration bound for infeasible interior-point methods.
1 Introduction
Since Karmarkar’s ground-breaking paper [1], interior-point methods (IPMs) have become one of the most active research areas. They have been widely used to obtain strong theoretical results for solving linear optimization (LO), linear complementarity problems (LCPs), semidefinite optimization (SDO) and many other problems. In [2], the authors proved that different formulations of the LCP, such as the horizontal LCP (HLCP) and the mixed LCP, can be transformed into a standard LCP. In 1995, Miao [3] extended the Mizuno–Todd–Ye (MTY) predictor–corrector method [4] to P ∗(κ)-LCPs. The class of all P ∗(κ) matrices is called P ∗. This class was extensively studied in the 1991 monograph of Kojima et al. [5]. Gurtuna et al. [6] generalized the MTY method to HLCP and presented a class of corrector–predictor IPMs for solving sufficient HLCPs. Wang and Bai [7] presented a class of polynomial interior-point algorithms for P ∗(κ)-HLCP based on parametric kernel functions. Darvay [8] proposed a full-Newton step primal–dual path-following interior-point algorithm for LO. Later on, Achache [9] and Asadi and Mansouri [10] extended Darvay’s algorithm to LCP and P ∗(κ)-HLCP, respectively. Recently, Kheirfam [11] presented a predictor–corrector interior-point algorithm for P ∗(κ)-HLCP based on Darvay’s technique. Infeasible IPMs (IIPMs) start with an arbitrary positive point, and feasibility is reached as optimality is approached. In 1990, Lustig [12] proposed the first IIPM. Kojima et al. [13] proved its global convergence, whereas Zhang [14] and Mizuno [15] presented polynomial iteration complexity results for variants of this algorithm. Potra and Sheng [16] proposed an infeasible interior-point algorithm for the P ∗(κ)-matrix LCP. This algorithm does not depend on the handicap κ of the problem; it operates in a larger neighborhood of the central path than the algorithm of [3] and has high-order convergence for non-degenerate problems. For more details about IIPMs, the reader is referred to [17].
In 2006, Roos [18] proposed a full-Newton step IIPM for LO and proved that the complexity of the algorithm coincides with the best-known iteration bound for IIPMs. Mansouri et al. [19] extended the algorithm from LO to LCP, and obtained similar polynomial complexity results. Later on, Kheirfam [20, 21] presented some variants of the algorithm for SDO and for LCP [22]. Recently, Kheirfam and Mahdavi-Amiri [23] extended the algorithm from LO to symmetric cone linear complementarity problems, and obtained similar polynomial complexity results. Zhang et al. [24] presented a full-Newton step IIPM for SDO based on a new proximity measure and simplified the complexity analysis of the algorithm.
The goal of the paper is to present a full-Newton step IPM and an IIPM for P ∗(κ)-HLCP. We develop an analysis different from the aforementioned works for full-Newton step IPMs and IIPMs by using the proximity measure introduced in [24]. We provide search directions and show that the iteration bound coincides with the best known bound for IPMs and IIPMs, while admitting a simple and transparent analysis.
The remainder of our work is organized as follows: Section 2 is devoted to the analysis of full-Newton step IPMs. We provide some tools and obtain the best known iteration bound for IPMs. Section 3 describes the IIPM for P ∗(κ)-HLCP in more detail. Section 4 deals with the analysis of the feasibility step of the IIPM, and finally, we obtain the best known complexity bound for IIPMs. We conclude the paper in Sect. 5.
2 Feasible Full-Newton Step Interior-Point Method
In this section, we introduce a full-Newton step feasible interior-point algorithm for P ∗(κ)-HLCP based on the proximity measure introduced in [24].
2.1 The P ∗(κ)-HLCP and Its Central Path
Given two matrices \(Q, R\in{\mathbb{R}}^{n\times n}\), and a vector \(q\in \mathbb{R}^{n}\), the HLCP consists in finding a pair of vectors \(({ x}, {s})\in\mathbb{R}^{2n}\) such that
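The display dropped here is presumably the standard HLCP system (P); the following is an editorial reconstruction consistent with the rest of the section, not a verbatim restoration:

```latex
% Reconstructed problem (P): find (x, s) with
Qx + Rs = q, \qquad xs = 0, \qquad x,\, s \ge 0.
```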
The standard monotone linear complementarity problem (LCP) is obtained by taking R=−I and a positive semidefinite matrix Q. There are other formulations of the LCP as well but, as shown in [25], all popular formulations are equivalent, and the behavior of a large class of IPMs is identical on those formulations. Let κ≥0 be a given constant. We say that (P) is a P ∗(κ)-HLCP iff
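The elided condition is presumably the standard P ∗(κ) condition for matrix pairs; a hedged reconstruction:

```latex
% Reconstructed P_*(kappa) condition for the pair (Q, R).
(1+4\kappa)\sum_{i\in I_{+}(u,v)} u_i v_i \;+\; \sum_{i\in I_{-}(u,v)} u_i v_i \;\ge\; 0
\quad \text{for all } (u,v)\in\mathbb{R}^{2n} \text{ with } Qu + Rv = 0,
```

where \(I_{+}(u,v):=\{i: u_i v_i>0\}\) and \(I_{-}(u,v)\) is its complement.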
If the above condition is satisfied, then we say that (Q,R) is a P ∗(κ)-pair. Finding an approximate solution of P ∗(κ)-HLCP is equivalent to solving the following system
The basic idea of primal–dual IPMs is to replace the complementarity condition xs=0 by the parameterized equation xs=μe with μ>0 and with e=(1,1,…,1)T. This leads to the following system
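The system referred to, system (1), is presumably the relaxed central-path system (an editorial reconstruction):

```latex
% Reconstruction of system (1): the mu-center conditions.
Qx + Rs = q, \qquad xs = \mu e, \qquad x,\, s > 0.
```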
Under the assumption that P ∗(κ)-HLCP satisfies the interior-point condition (IPC), i.e., there exists (x 0,s 0)>0 such that Qx 0+Rs 0=q, system (1) has a unique solution for each μ>0 [5, 14]. This solution is denoted as (x(μ),s(μ)) and we call (x(μ),s(μ)) the μ-center of P ∗(κ)-HLCP. The set of μ-centers is called the central path of P ∗(κ)-HLCP. If μ→0, then the limit of the central path exists and since the limit points satisfy the complementarity condition, the limit yields a solution for P ∗(κ)-HLCP [5, 14].
2.2 The Classic Newton Step
A natural way to define a search direction is to follow Newton’s approach and linearize the second equation in (1). This leads to the following system
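System (2), dropped in extraction, is presumably the linearized Newton system (an editorial reconstruction):

```latex
% Reconstruction of the Newton system (2).
Q\,\Delta x + R\,\Delta s = 0, \qquad
s\,\Delta x + x\,\Delta s = \mu e - xs.
```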
We use the following notations:
It follows from (3) that the system (2) reduces to
where \({\overline{Q}}:=QXV^{-1}, {\overline{R}}:=RSV^{-1}, X:={\rm diag}(x), S:={\rm diag}(s)\) and \(V:={\rm diag}(v)\). System (4) uniquely defines the search direction (d x ,d s ), and the Newton direction (Δx,Δs) can be easily obtained from (3). The new iterates are given by
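The displays around this paragraph can be plausibly reconstructed from the definitions of \(\overline{Q}\), \(\overline{R}\) and the full-step scheme; the following is an editorial reconstruction:

```latex
% scaling (cf. (3)):
v := \sqrt{xs/\mu}, \qquad d_x := v\,\Delta x / x, \qquad d_s := v\,\Delta s / s,
% scaled Newton system (cf. (4)):
\overline{Q}\,d_x + \overline{R}\,d_s = 0, \qquad d_x + d_s = v^{-1} - v,
% full-Newton step update:
x_{+} := x + \Delta x, \qquad s_{+} := s + \Delta s.
```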
From the second equation of system (4), we have
2.3 The Proximity Measure and Its Properties
We use the following proximity measure σ(x,s;μ) to show the closeness of iterates to the central path, which has been considered for SDO for the first time in [24]:
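The definition itself was lost in extraction. Judging from the thresholds used later (e.g., \(\sigma<\frac{2}{1+\sqrt{3+4\kappa}}\) in Lemma 2.3), a plausible form, stated here only as an editorial assumption, is the norm-based measure

```latex
% Assumed form of the proximity measure; the exact measure of [24] may differ.
\sigma(x, s;\mu) := \|e - v\|, \qquad v = \sqrt{xs/\mu}.
```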
The proof of the following lemma is similar to the proof of Lemma 3.1 in [24].
Lemma 2.1
Let σ:=σ(x,s;μ). Then
2.4 The Full-Newton Step Feasible IPM for P ∗(κ)-HLCP
Here, we obtain lower and upper bounds for the inner product of the scaled directions, \(d_{x}^{T}d_{s}\). For this purpose, we consider system (4). Since \((\overline{Q}, \overline{R})\) is a P ∗(κ)-pair, we have
where I +:={i:[d x ] i [d s ] i ≥0} and the last inequality follows by Lemma 2.1. On the other hand, we have
which implies that
We describe the generic primal–dual IPM for P ∗(κ)-HLCP as follows:
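The generic feasible method can be sketched numerically. The following is a minimal illustration, not the paper's exact Algorithm 1: it runs on an assumed monotone instance (κ = 0, a positive definite Q and R = −I, so (Q,R) is a P ∗(0)-pair), with the default θ from the text and an illustrative tolerance.

```python
# Sketch of a feasible full-Newton step IPM on an assumed monotone HLCP
# instance (kappa = 0).  The instance (Q, q) and the tolerance eps are
# illustrative choices; theta is the default value derived in the text.
import numpy as np

n = 2
kappa = 0.0
Q = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite
R = -np.eye(n)
x = np.ones(n)
s = np.ones(n)
q = Q @ x + R @ s            # choose q so that (x, s) is strictly feasible
mu = x @ s / n               # (x, s) is then the exact mu-center
theta = 2.0 / (11.0 * (1.0 + 2.0 * kappa) * np.sqrt(n))
eps = 1e-8

while x @ s > eps:
    mu *= 1.0 - theta        # mu-update
    # Newton system: Q*dx + R*ds = 0 and s*dx + x*ds = mu*e - x*s
    A = np.block([[Q, R], [np.diag(s), np.diag(x)]])
    rhs = np.concatenate([np.zeros(n), mu * np.ones(n) - x * s])
    d = np.linalg.solve(A, rhs)
    x, s = x + d[:n], s + d[n:]   # full Newton step: no line search

gap = x @ s
```

Because the first Newton equation is homogeneous, Qx+Rs=q is preserved exactly at every iterate, while the duality gap shrinks by roughly the factor 1−θ per iteration.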
2.5 Some Properties of the Classic Full-Newton Step
The following lemmas describe the effect of a full-Newton step on the duality gap, the condition for its feasibility, and its effect on σ(x,s;μ). Using (3), we have
Moreover, using the second equation of (4), we obtain
Lemma 2.2
(Lemma II.48 in [26])
The full-Newton step is strictly feasible if
Lemma 2.3
If \(\sigma:=\sigma(x, s;\mu)<\frac{2}{1+\sqrt{3+4\kappa}}\), then (x +,s +) is strictly feasible. Furthermore, we have
Proof
This completes the second part of the proof of the lemma. As for the first part, we know that if \(\sigma<\frac{2}{1+\sqrt{3+4\kappa}}\), then
which means that the new iterates are strictly feasible by Lemma 2.2. □
Corollary 2.1
If \(\sigma:=\sigma(x, s;\mu)\leq\frac{1}{1+\sqrt{3+4\kappa}}\), then
which shows the quadratic convergence of the algorithm.
Proof
From Lemma 2.3, it follows that
This proves the corollary. □
Lemma 2.4
Let x,s be strictly feasible and σ:=σ(x,s;μ). If μ +:=(1−θ)μ for 0<θ<1, then
Proof
Using σ +:=σ(x,s;μ +), the definition of σ and the triangle inequality, we have
This proves the lemma. □
From Lemmas 2.3 and 2.4, one can conclude that after one full-Newton step and one μ update, an upper bound for the proximity measure is
Suppose that the initial iterate (x,s) satisfies σ(x,s;μ)≤τ. To restore the condition σ(x +,s +;μ +)≤τ, the following condition for θ should be satisfied
Note that in the process of iteration, all the iterates must be strictly feasible. By Lemma 2.3, an upper bound for τ is as follows:
Under this condition, choosing \(\tau=\frac{1}{3(1+2\kappa)}\) and \(\theta=\frac{2}{11(1+2\kappa)\sqrt{n}}\) ensures that (9) holds.
Lemma 2.5
Suppose that x 0 and s 0 are strictly feasible, and \(\sigma(x^{0}, s^{0}; \mu^{0})\leq\frac{1}{1+\sqrt{3+4\kappa}}\) with \(\mu^{0}=\frac{(x^{0})^{T}s^{0}}{n}\). Let x k and s k be the iterates obtained after k iterations of Algorithm 1. Then the inequality (x k)T s k≤ϵ is satisfied after at most
iterations.
Proof
From (6), (7) and the fact that \(\sigma\leq\frac{1}{1+\sqrt{3+4\kappa}}\), it follows that
Thus, we have
Then, the inequality (x k)T s k≤ϵ holds if
Taking logarithms of both sides of the above inequality, we obtain
Using the fact that −log(1−θ)≥θ, we observe that the above inequality holds if
and the lemma follows. □
The following theorem gives an upper bound for the total number of iterations produced by Algorithm 1.
Theorem 2.1
Let \(\tau=\frac{1}{3(1+2\kappa)}\) and \(\theta=\frac{2}{11(1+2\kappa)\sqrt{n}}\). Then, Algorithm 1 requires at most
iterations to obtain a solution of P ∗(κ)-HLCP.
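The dropped bound is presumably of the familiar form obtained by combining Lemma 2.5 with these parameter choices (an editorial reconstruction, up to an absolute constant):

```latex
% With theta = 2 / (11 (1+2kappa) sqrt(n)), Lemma 2.5 gives roughly
\frac{11}{2}\,(1+2\kappa)\sqrt{n}\,\log\frac{n\mu^{0}}{\epsilon}
 \;=\; O\!\Bigl((1+2\kappa)\sqrt{n}\,\log\frac{n\mu^{0}}{\epsilon}\Bigr).
```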
3 Full-Newton Step Infeasible Interior-Point Method
In the case of an IIPM, we call the pair (x,s) an ϵ-solution of P ∗(κ)-HLCP if the norm of the residual vector q−Qx−Rs does not exceed ϵ, and also x T s≤ϵ. As usual for IIPMs, it is assumed that the initial iterates
where ρ p and ρ d are positive constants such that
for some optimal solution (x ∗,s ∗).
3.1 The Perturbed Problem
Denote the initial residual vector \(r_{q}^{0}\) as \(r_{q}^{0}:=q-Qx^{0}-Rs^{0}\). In general, \(r_{q}^{0}\neq0\). For any ν with 0<ν≤1, we consider the perturbed problem
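The perturbed problem \((\mathrm{P}_{\nu})\), whose display was dropped, is presumably the Roos-style perturbation of the affine constraint only (an editorial reconstruction):

```latex
% Reconstruction of (P_nu): find (x, s) >= 0 with
Qx + Rs = q - \nu r_q^{0}, \qquad xs = 0.
```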
Note that if ν=1, then (x,s)=(x 0,s 0) yields a strictly feasible solution of (P ν ). We conclude that if ν=1, then (P ν ) is strictly feasible, which means that the perturbed problem (P ν ) satisfies the IPC. More generally, we have the following lemma, whose proof is similar to the proof of Lemma 3.1 in [18].
Lemma 3.1
Let the original problem (P) be feasible and 0<ν≤1. Then, the perturbed problem (P ν ) satisfies the IPC.
Assuming that problem (P) is feasible, it follows from Lemma 3.1 that the problem (P ν ) satisfies the IPC, for each ν∈(0,1], and hence its central path exists. This means that the system
has a unique solution, for any μ>0. For ν∈(0,1] and μ=νμ 0, we denote this unique solution as (x(μ,ν),s(μ,ν)). It is the μ-center of (P ν ). In what follows, the parameters μ and ν will always be in one-to-one correspondence, according to μ=νμ 0. Therefore, we omit one of the parameters and denote the unique solution by (x(μ),s(μ)).
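The system referred to, (12), is then presumably the central-path system of the perturbed problem (an editorial reconstruction):

```latex
% Reconstruction of system (12).
Qx + Rs = q - \nu r_q^{0}, \qquad xs = \mu e, \qquad x,\, s > 0.
```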
We measure the proximity to the μ-center of the perturbed problem by the quantity σ(x,s;μ) as defined in (7). Thus, initially x 0=ρ p e,s 0=ρ d e and μ 0=ρ p ρ d , whence v 0=e and σ(x 0,s 0;μ)=0. In the sequel, it is assumed that at the start of each iteration, σ(x,s;μ)≤τ, where τ is a positive threshold value. This certainly holds at the start of the first iteration.
3.2 An Iteration of the Algorithm
Now, we describe one main iteration of the full-Newton step IIPM; in essence, we follow the same chain of arguments as Roos [18].
3.2.1 The Main Iteration of the Algorithm
Every main iteration consists of a feasibility step, a μ-update and a few centering steps. The algorithm begins with an infeasible interior-point (x,s) such that (x,s) is feasible for the perturbed problem (P ν ), with μ=νμ 0 and such that σ(x,s;μ)≤τ. With ν replaced by ν +=(1−θ)ν, the feasibility step serves to get iterates (x f,s f) that are strictly feasible for \((\mathrm{P}_{\nu^{+}})\), and close to μ +-centers. In fact, the feasibility step is designed in such a way that \(\sigma(x^{f}, s^{f}; \mu^{+})\leq\frac{1}{1+\sqrt{3+4\kappa}}\), that is, (x f,s f) lies in the quadratic convergence neighborhood with respect to the μ +-center of \((\mathrm{P}_{\nu^{+}})\). Then, just by performing a few centering steps starting at (x f,s f) and targeting at the μ +-center of \((\mathrm{P}_{\nu^{+}})\), one can easily get iterates (x +,s +) that are strictly feasible for \((\mathrm{P}_{\nu^{+}})\), and such that σ(x +,s +;μ +)≤τ.
3.2.2 The Feasibility Step
Here, we describe the feasibility step in detail. Suppose that we have a strictly feasible iterate (x,s) for (P ν ). This means that (x,s) satisfies the first equation in (12). With ν replaced by ν +=(1−θ)ν, to find new iterates (x f,s f) feasible for \((\mathrm{P}_{\nu^{+}})\), we need search directions Δf x and Δf s that satisfy the first equation in the following system
The second equation is inspired by the second equation in the system (2) that we used to define search directions for the feasible case, except that we target the μ +-centers of \((\mathrm{P}_{\nu^{+}})\).
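System (13), dropped in extraction, is presumably the feasibility-step system; the following reconstruction is an editorial assumption consistent with the surrounding description:

```latex
% Reconstruction of the feasibility-step system (13).
Q\,\Delta^{f}x + R\,\Delta^{f}s = \theta\nu r_q^{0}, \qquad
s\,\Delta^{f}x + x\,\Delta^{f}s = (1-\theta)\mu e - xs.
```

With this first equation, \(x^{f}=x+\Delta^{f}x\) and \(s^{f}=s+\Delta^{f}s\) satisfy \(Qx^{f}+Rs^{f}=q-(1-\theta)\nu r_q^{0}\), i.e., the affine equation of \((\mathrm{P}_{\nu^{+}})\).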
After the feasibility step the iterates are given by
We conclude that after the feasibility step, we have iterates (x f,s f) that satisfy the affine equation of the new perturbed problem \((\mathrm{P}_{\nu^{+}})\). In the analysis, we should also guarantee that x f and s f are positive and belong to the region of quadratic convergence of its μ +-center. In other words, we must have \(\sigma(x^{f}, s^{f}; \mu^{+})\leq\frac{1}{1+\sqrt{3+4\kappa}}\). Proving this is the crucial part in the analysis of the algorithm.
3.3 Generic Infeasible Interior-Point Algorithm
Here, we summarize the steps of Sect. 3.2 as Algorithm 2 below.
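The main loop of the infeasible method can be sketched numerically as well. The following is an illustration under stated assumptions, not the paper's exact Algorithm 2: it assumes the proximity measure σ = ∥e − v∥ with v = √(xs/μ) (the exact measure of [24] may differ), an assumed monotone instance (κ = 0, R = −I), and the default θ and τ fixed in Sect. 4.4.

```python
# Sketch of the full-Newton step IIPM on an assumed monotone HLCP
# instance (kappa = 0).  Assumptions: sigma = ||e - v||, and the
# illustrative data (Q, q, rho_p, rho_d) below.
import numpy as np

n = 2
kappa = 0.0
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
R = -np.eye(n)
q = np.array([1.0, 1.0])     # this instance has solution x* = Q^{-1} q, s* = 0
rho_p = rho_d = 1.0          # assumed bounds on some optimal solution
x = rho_p * np.ones(n)       # infeasible starting point
s = rho_d * np.ones(n)
mu, nu = rho_p * rho_d, 1.0
r0 = q - Q @ x - R @ s       # initial residual r_q^0
theta = 1.0 / (20.0 * n * (1.0 + 2.0 * kappa) ** 2)
tau = 1.0 / (6.0 * (1.0 + 2.0 * kappa))
eps = 1e-6

def newton_step(x, s, mu_target, rhs1):
    """Full step from: Q*dx + R*ds = rhs1, s*dx + x*ds = mu_target*e - x*s."""
    A = np.block([[Q, R], [np.diag(s), np.diag(x)]])
    d = np.linalg.solve(A, np.concatenate([rhs1, mu_target * np.ones(n) - x * s]))
    return x + d[:n], s + d[n:]

def sigma(x, s, mu):
    return np.linalg.norm(1.0 - np.sqrt(x * s / mu))  # assumed ||e - v||

for _ in range(5000):
    if max(x @ s, np.linalg.norm(q - Q @ x - R @ s)) <= eps:
        break
    # feasibility step: targets the mu+-center of the next perturbed problem
    x, s = newton_step(x, s, (1.0 - theta) * mu, theta * nu * r0)
    nu *= 1.0 - theta
    mu *= 1.0 - theta
    for _ in range(10):                   # a few centering steps
        if sigma(x, s, mu) <= tau:
            break
        x, s = newton_step(x, s, mu, np.zeros(n))

gap = x @ s
residual = np.linalg.norm(q - Q @ x - R @ s)
```

As the analysis predicts, the residual and the gap decrease at the same rate: after k main iterations, the residual equals ν∥r 0∥ with ν=(1−θ)^k.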
4 Analysis of the Method
According to (13), after the feasibility step the iterates satisfy the feasibility condition for \((\mathrm{P}_{\nu^{+}})\). The hard part in the analysis will be to guarantee that x f,s f are positive and to guarantee that the new iterate satisfies \(\sigma(x^{f}, s^{f}; \mu^{+})\leq\frac{1}{1+\sqrt{3+4\kappa}}\).
4.1 Effect of the Feasibility Step
Let (x,s) be the iterate at the start of an iteration. We define
where v is defined in (3). One can easily check that the system (13), which defines the search directions Δf x and Δf s, can be expressed in terms of the scaled search directions \(d_{x}^{f}\) and \(d_{s}^{f}\) as follows:
where
Using (3) and (15), we may write
Therefore,
The proof of the following lemma is similar to the proof of Lemma 4.1 in [27].
Lemma 4.1
The new iterates (x f,s f) are strictly feasible if \((1-\theta)e+d^{f}_{x}d^{f}_{s}>0\).
Corollary 4.1
The new iterates (x f,s f) are strictly feasible if \(\|d_{x}^{f}d_{s}^{f}\|_{\infty}<1-\theta\).
Proof
By Lemma 4.1, (x f,s f) is strictly feasible if \((1-\theta)e+d_{x}^{f}d_{s}^{f}>0\). Since the last inequality holds if \(\|d_{x}^{f}d_{s}^{f}\|_{\infty}<1-\theta\), the corollary follows. □
In the sequel, we denote
which implies \(\|d_{x}^{f}\|\leq2w(v)\) and \(\|d_{s}^{f}\|\leq2w(v)\). Moreover, we have
Lemma 4.2
The new iterates (x f,s f) are strictly feasible if \(w(v)<\sqrt{\frac{1-\theta}{2}}\).
Proof
By Corollary 4.1 and (19), the result easily follows. □
4.2 Upper Bound of σ(v f)
The following lemma gives an upper bound for σ(x f,s f,μ +). Recall from definition (5) that
Lemma 4.3
Let (x f,s f) be strictly feasible. Then we have
Proof
Dividing (17) by μ +=(1−θ)μ, we have
Hence
where the last inequality follows by (18), and the proof is complete. □
We want the new iterate (x f,s f) to be within the neighborhood where the Newton process targeting the μ +-center of \((\mathrm{P}_{\nu^{+}})\) is quadratically convergent, that is, \(\sigma(v^{f})\leq\frac{1}{1+\sqrt{3+4\kappa}}\). According to Lemma 4.3, it suffices to have
4.3 Upper Bound for w(v)
In this section, we obtain an upper bound for w(v) which will enable us to find a default value for θ. For this purpose, consider system (16) which defines the search directions Δf x and Δf s in terms of the scaled search directions \(d_{x}^{f}\) and \(d_{s}^{f}\).
Lemma 4.4
(Lemma 3.3 in [28])
If HLCP is P∗(κ), then for any \(a, {\tilde{b}}\in \mathbb{R}^{n}\) and any \(z=(x^{T}, s^{T})^{T}\in \mathbb{R}^{2n}_{++}\) the linear system
has a unique solution w:=(u T,v T)T and the following inequality is satisfied:
where
and
Due to Lemma 4.4, from system (13) we have
Let (x ∗,s ∗) be an optimal solution of (P) satisfying (11), and suppose that the algorithm starts with (x 0,s 0)=(ρ p e,ρ d e). Then, we have
and also
Now, by definition of \(\zeta(z, r^{0}_{q})\), (23) and (25), we have
Using \(D\Delta^{f}x=\sqrt{\mu} d^{f}_{x}, D^{-1}\Delta^{f}s=\sqrt{\mu} d^{f}_{s}\), and substituting (25) into (22) and using the definition of w(v), we obtain
In the sequel, we obtain upper bounds for ∥x∥1 and ∥s∥1.
Lemma 4.5
(Lemma 12 in [29])
Let (x,s) be feasible for the perturbed problem \(({\rm \mathrm{P}_{\nu}})\) and (x 0,s 0)=(ρ p e,ρ d e). Then, for any optimal solution (x ∗,s ∗), we have
Lemma 4.6
Let (x,s) be feasible for the perturbed problem (P ν ) and let (x ∗,s ∗) be as defined in (11) and (x 0,s 0)=(ρ p e,ρ d e). Then, we have
Proof
Since x 0=ρ p e,s 0=ρ d e,∥x ∗∥∞≤∥x ∗∥≤ρ p and ∥s ∗∥∞≤∥s ∗∥≤ρ d , we have
Hence, by Lemma 4.5, we get
Since x and s are positive, we obtain
This completes the proof. □
Substituting the bounds of ∥x∥1 and ∥s∥1 into (26), we have
4.4 Fixing θ
From the analysis of inequality (21), we know that if inequality (21) is satisfied then \(\sigma(v^{f})\leq\frac{1}{1+\sqrt{3+4\kappa}}\) certainly holds. It follows from (27) that if
then inequality (21) certainly holds. Note that the left-hand side of the above inequality is monotonically increasing with respect to σ. Given a threshold τ, for \(\sigma\leq\tau\leq\frac{1}{1+\sqrt{3+4\kappa}}\), the above inequality holds if
holds. In that case, the new iterates (x f,s f) are strictly feasible and lie within the quadratic convergence region. At this stage, we set \(\tau=\frac{1}{6({1+2\kappa})}\). Substituting this value of τ into the above inequality, a short calculation shows that the inequality holds for
Note that, by Lemma 4.2, to keep the iterate (x f,s f) feasible, the following condition must be satisfied:
It follows from (27) that the above inequality certainly holds if
Using \(\theta=\frac{1}{20 n(1+2\kappa)^{2}}\) and \(\tau=\frac{1}{6({1+2\kappa})}\), an upper bound for the left-hand side of this inequality is 0.4148, while a lower bound for its right-hand side is 0.6892. We therefore conclude that the iterates (x f,s f) are strictly feasible.
4.5 Iteration Bound
Let \(\sigma(x^{f}, s^{f}; \mu^{+})\leq\frac{1}{3(1+2\kappa)}\), which is in agreement with Corollary 2.1. Starting at (x f,s f), we perform centering steps in order to get iterates (x +,s +) that satisfy σ(x +,s +;μ +)≤τ. We can estimate the required number of centering steps by using Corollary 2.1. Hence, after k centering steps we will have an iterate (x +,s +) that is still feasible for \((\mathrm{P}_{\nu^{+}})\) and satisfies
Hence σ(x +,s +;μ +)≤τ will hold if k satisfies \((\frac{1}{3} )^{2^{k}}\leq\tau\). This implies that at most
centering steps are needed. Substituting the value \(\tau=\frac{1}{6(1+2\kappa)}\) into (28) gives that at most \(\log_{2} (\frac{\log_{2}6(1+2\kappa)}{\log_{2}3} )\) centering steps are needed in each main iteration.
In each main iteration, both the value of x T s and the norm of the residual vector are reduced by the factor 1−θ. Hence, the total number of main iterations is bounded above by
Since \(\theta=\frac{1}{20 n(1+2\kappa)^{2}}\), the total number of inner iterations is bounded above by
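With this value of θ, the dropped bound is presumably (an editorial reconstruction, up to the O(1) centering steps per main iteration):

```latex
% Total number of inner iterations:
20\,n\,(1+2\kappa)^{2}\,
\log\frac{\max\{(x^{0})^{T}s^{0},\;\|r_q^{0}\|\}}{\epsilon}.
```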
In the following, we state our main result without further proof.
Theorem 4.1
If (P) has an optimal solution (x ∗,s ∗) such that
then after at most
inner iterations, the algorithm finds an ϵ-optimal solution of (P).
5 Conclusions
Based on a proximity measure, we generalize the full-Newton step infeasible interior-point method for linear optimization of Roos [18] to horizontal linear complementarity problems. The presented method has several good properties. First, it uses full steps, so there is no need to compute a step length. Second, the iterates always lie in the quadratic convergence neighborhood with respect to some perturbed problem. Third, during the solution process, feasibility and optimality are improved at the same rate. Finally, the iteration bound coincides with the currently best-known iteration bound for infeasible interior-point methods. An interesting topic for further research is extending the analysis to linear complementarity problems over symmetric cones.
References
Karmarkar, N.K.: A new polynomial-time algorithm for linear programming. Combinatorica 4, 373–395 (1984)
Ai, W.B., Zhang, S.Z.: An \(O(\sqrt{n}L)\) iteration primal-dual path following method, based on wide neighborhoods and large updates, for monotone linear complementarity problems. SIAM J. Optim. 16(2), 400–417 (2005)
Miao, J.: A quadratically convergent \(\mathcal{O}((1+\kappa)\sqrt{n}L)\)-iteration algorithm for the P ∗(κ)-matrix linear complementarity problem. Math. Program. 69, 355–368 (1995)
Mizuno, S., Todd, M.J., Ye, Y.: On adaptive-step primal–dual interior-point algorithms for linear programming. Math. Oper. Res. 18(4), 964–981 (1993)
Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Comput. Sci., vol. 538. Springer, Berlin (1991)
Gurtuna, F., Petra, C., Potra, F., Shevchenko, O., Vancea, A.: Corrector-predictor methods for sufficient linear complementarity problems. Comput. Optim. Appl. 48, 453–485 (2011)
Wang, G.Q., Bai, Y.Q.: Polynomial interior-point algorithm for P ∗(κ) horizontal linear complementarity problem. J. Comput. Appl. Math. 233(2), 248–263 (2009)
Darvay, Z.: New interior-point algorithms in linear programming. Adv. Model. Optim. 5(1), 51–92 (2003)
Achache, M.: Complexity analysis and numerical implementation of a short-step primal–dual algorithm for linear complementarity problems. Appl. Math. Comput. 216(7), 1889–1895 (2010)
Asadi, S., Mansouri, H.: Polynomial interior-point algorithm for P ∗(κ)-horizontal linear complementarity problems. Numer. Algorithms 63(2), 385–398 (2013)
Kheirfam, B.: A predictor-corrector interior-point algorithm for P ∗(κ) horizontal linear complementarity problem. Numer. Algorithms (2013). doi:10.1007/s11075-013-9738-3
Lustig, I.J.: Feasibility issues in a primal-dual interior-point method. Math. Program. 67, 145–162 (1990)
Kojima, M., Megiddo, N., Mizuno, S.: A primal–dual infeasible-interior-point algorithm for linear programming. Math. Program. 61(3), 263–280 (1993)
Zhang, Y.: On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem. SIAM J. Optim. 4(1), 208–227 (1994)
Mizuno, S.: Polynomiality of infeasible-interior-point algorithms for linear programming. Math. Program. 67(1), 109–119 (1994)
Potra, F.A., Sheng, R.Q.: A large-step infeasible-interior-point method for the P ∗-matrix LCP. SIAM J. Optim. 7(2), 318–335 (1997)
Wright, S.J.: Primal–Dual Interior-Point Methods. SIAM, Philadelphia (1997)
Roos, C.: A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM J. Optim. 16(4), 1110–1136 (2006)
Mansouri, H., Zangiabadi, M., Pirhaji, M.: A full-Newton step O(n) infeasible-interior-point algorithm for linear complementarity problems. Nonlinear Anal., Real World Appl. 12(1), 545–561 (2011)
Kheirfam, B.: A full NT-step infeasible interior-point algorithm for semidefinite optimization based on a self-regular proximity. ANZIAM J. 53, 48–67 (2011)
Kheirfam, B.: Simplified infeasible interior-point algorithm for SDO using full Nesterov–Todd step. Numer. Algorithms 59(4), 589–606 (2012)
Kheirfam, B.: A full-Newton step infeasible interior-point algorithm for linear complementarity problems based on a kernel function. Algorithmic Oper. Res. 7, 103–110 (2013)
Kheirfam, B., Mahdavi-Amiri, N.: A full Nesterov–Todd step infeasible interior-point algorithm for symmetric cone linear complementarity problem. Bull. Iran. Math. Soc. (2013) in press
Zhang, L., Sun, L., Xu, Y.: Simplified analysis for full-Newton step infeasible interior-point algorithm for semidefinite programming. Optimization 62(2), 169–191 (2013)
Anitescu, M., Lesaja, G., Potra, F.A.: An infeasible-interior-point predictor–corrector algorithm for the P ∗-Geometric LCP. Appl. Math. Optim. 36(2), 203–228 (1997)
Roos, C., Terlaky, T., Vial, J.-P.: Theory and Algorithms for Linear Optimization. An Interior-Point Approach. Wiley, Chichester (1997)
Gu, G., Mansouri, H., Zangiabadi, M., Bai, Y.Q., Roos, C.: Improved full-Newton step O(nL) infeasible interior-point method for linear optimization. J. Optim. Theory Appl. 145, 271–288 (2010)
Potra, F., Stoer, J.: On a class of superlinearly convergent polynomial time interior point methods for sufficient LCP. SIAM J. Optim. 20(3), 1333–1363 (2009)
Kheirfam, B.: A full-Newton step infeasible interior-point algorithm for P ∗(κ)-horizontal linear complementarity problems. Submitted
Communicated by Florian A. Potra.
Kheirfam, B. A New Complexity Analysis for Full-Newton Step Infeasible Interior-Point Algorithm for Horizontal Linear Complementarity Problems. J Optim Theory Appl 161, 853–869 (2014). https://doi.org/10.1007/s10957-013-0457-7