Abstract
We introduce an interior-point algorithm for linear optimization, which is based on a new technique for determining search directions. The method relies on a new type of transformation of the centering equations of the system that characterizes the central path. It can be shown that the resulting directions cannot be derived from the usual kernel functions. Therefore, we extend the concept of kernel functions and establish an equivalence between this approach and the proposed method for obtaining search directions. Moreover, we prove the polynomial complexity of the algorithm.
1 Introduction
Linear optimization (LO) problems have been extensively studied since Karmarkar [19] introduced his well-known projective interior point method (IPM), which enjoyed polynomial complexity. In the last decades several researchers published new methods in this field. The most important results were summarized in the books written by Roos et al. [33], Wright [41] and Ye [42]. Terlaky [36] presented an introduction to the duality theory of LO from the point of view of the IPMs. In spite of the fact that the simplex method introduced by Dantzig [8] proved to be efficient in practice, Gondzio and Terlaky [13] and Andersen et al. [4] established that in several cases, especially when the size of the problem is large, IPMs can perform better. We mention that in the case of the simplex algorithm we usually keep primal or dual feasibility. The criss-cross methods represent other pivoting algorithms where the basic solutions can be both primal and dual infeasible as in the papers published by Klafszky and Terlaky [21] and Terlaky [35]. Illés and Terlaky [14] presented a detailed comparison between the pivot algorithms and IPMs. The determination of the starting point is an important question in the case of the IPMs. In theory, a good starting point can be chosen by using the self-dual embedding model introduced by Jansen et al. [18] and Ye et al. [43].
IPMs can be classified based on the length of the step and the neighbourhood used. The algorithms that generate the new iterates in a smaller neighbourhood of the central path are called small-update (or short-update) methods, and the ones that use a wider neighbourhood are called large-update algorithms. Large-update methods turned out to be more efficient in practice, while the small-update ones have better theoretical complexity. Peng et al. [28] developed the concept of self-regular proximities. Using this notion, they introduced a new class of search directions in order to define polynomial primal–dual IPMs for LO, semidefinite optimization (SDO), and second-order cone optimization (SOCO). Besides, they reduced the above-mentioned gap between the large-update and small-update methods. Later on, Potra [30] and Ai and Zhang [3] introduced large-update IPMs for linear complementarity problems (LCPs) that have the same complexity as the best small-update IPMs. Li and Terlaky [23] proposed the first IPM for SDO that works in a wide neighbourhood and has \(O(\sqrt{n}L)\) iteration complexity.
The determination of the search directions plays a key role in IPMs. A common way to obtain search directions is to use barrier functions based on their corresponding kernel functions. Bai et al. [6, 7] proposed a new type of barrier function which has a finite value at the boundary of the feasible region. Besides this, polynomial-time IPMs for convex optimization problems can be obtained by using the self-concordant barriers introduced by Nesterov and Nemirovskii [27].
Another method for defining search directions for LO was given by Darvay [9, 10]. The main idea of this approach consisted in considering an algebraically equivalent transformation of the system which characterizes the central path. Firstly, the centering equations were divided by the barrier update parameter. After that, the same continuously differentiable and invertible function \(\varphi \) was applied to both sides of the previously modified equation. In this way the equation \(\varphi (v) = \varphi (e)\) was obtained, where v is the variance vector \(v=\sqrt{\frac{xs}{\mu }}\); \(\mu \) is the barrier parameter; and e is the n-dimensional all-one vector (for details see Sect. 4). The new search directions were determined by applying Newton’s method to this equation. Achache [1] generalized this approach to convex quadratic optimization, while Zhang et al. [45] extended it to linearly constrained convex programming problems. Many IPMs were also extended to more general problems, namely to different kinds of LCPs. Kojima et al. [22] gave a unified approach to IPMs for LCPs. Potra and Sheng [31, 32] proposed predictor-corrector and infeasible IPMs for \(P^*(\kappa )\)-LCPs. Moreover, Illés et al. [15,16,17] introduced a polynomial-time IPM for general LCPs, which either gives an \(\epsilon \)-solution or proves that the matrix of the problem does not have the \(P^*(\kappa )\) property. Numerical results on LCPs based on Darvay’s technique were reported by Achache [2], Asadi and Mansouri [5], Mansouri and Pirhaji [24], and Kheirfam [20]; these results emphasize the efficiency of the method. Other extensions based on the above-mentioned technique were made to symmetric cone optimization (SCO) problems by Wang and Bai [40], to SDO problems by Wang and Bai [38] and Mansouri et al. [25], and to SOCO by Wang and Bai [39]. Furthermore, Wang [37] generalized this approach to monotone LCPs over symmetric cones.
Recently, Zhang and Xu [44] proposed a specific search direction for LO. They considered the equivalent form \(v^2=v\) of the centering equation, and they transformed it into the form \(xs = \mu v\). After that they assumed that the variance vector is fixed and they used Newton’s method.
In this paper we propose a new method for obtaining a class of search directions using a new type of transformation of the centering equations. First, we modify the nonlinear equation \(v^2=v\) by applying componentwise the function \(\varphi \) to both sides of this equation. Then, we apply Newton’s method in order to get the new search directions. It is worth mentioning that by using the map \(\varphi (t)=t\) we do not obtain the direction given in [44], because we do not consider the variance vector as constant. In order to facilitate our analysis we restrict ourselves to the case \(\varphi (t) = t^2\), which yields a new search direction. It turns out that the sum of the scaled directions is \(p_v=\frac{v-v^3}{2v^2-e}\). Therefore, the analysis of the algorithm becomes more difficult, because we have to ensure that the vector \(p_v\) is well defined. For this, during the whole process of the algorithm we have to guarantee an additional condition, namely the inequality \(v>\frac{1}{\sqrt{2}}e\). We establish the equivalence between the new technique and a proper approach based on kernel functions. It can be observed that the corresponding kernel function is not a usual one, because it is defined only on the interval \(\left( \frac{1}{\sqrt{2}},\infty \right) \). In spite of these facts we prove that the complexity of the algorithm coincides with the best known ones of IPMs for LO.
The paper is organized as follows. In the following section the LO problem and the notion of the central path are presented. In Sect. 3 we give an approach for the determination of the search directions based on kernel functions. Section 4 lays out the main idea of the new technique. Besides, the obtained generic algorithm is presented. Section 5 is devoted to the analysis of the complexity of the new method. Section 6 contains numerical results. In the last section some concluding remarks are enumerated.
The authors dedicate this paper to the honour of Tamás Terlaky on the occasion of his 60th birthday and to the memory of his PhD supervisor, Emil Klafszky.
2 The linear optimization problem
Let us consider the following primal problem
and its dual
where \(A \in \mathbb {R}^{m \times n}, \quad rank(A)=m, c \in \mathbb {R}^n \quad \) and \( b \in \mathbb {R}^m\).
Without loss of generality we can assume that the interior-point condition (IPC) holds for the primal and dual problems, which means that there exists \((x^0,y^0,s^0)\) such that
An optimal solution of the primal–dual problem is given by the following system:
The first two equations of system (1) are called feasibility conditions, and they serve to maintain feasibility. The third equation is called the complementarity condition. Primal–dual interior point methods usually replace the complementarity equation with a parameterized one. Using this, we obtain the following system:
where \(\mu >0\). If the interior-point condition holds, then for a fixed \(\mu >0\) the \(\mu \)-center or analytic center given by Sonnevend [34] is the unique solution of system (2). For the different values of \(\mu \) we obtain the central path, which converges to an optimal solution of the problem as \(\mu \rightarrow 0\). Let \(v=\sqrt{\frac{xs}{\mu }}\). Using this, the third equation of (2) can be written in the form: \(v = e\).
3 Search directions using kernel functions
Peng et al. [28] introduced the class of self-regular barrier functions and defined large-update IPMs for solving LO problems.
A function \(\psi \) is called a kernel function if the following conditions hold:

i. \(\psi : \mathbb {R}_{++} \rightarrow \mathbb {R}_+\) is twice continuously differentiable;

ii. \(\psi (1) = \psi ^\prime (1) = 0\);

iii. \(\psi ^{\prime \prime }(t) > 0\) for all \(t > 0\),

where \(\mathbb {R}_{+}=\{x \in \mathbb {R}| x \ge 0 \}\) and \(\mathbb {R}_{++}=\{x \in \mathbb {R}| x > 0 \}.\) The kernel function is called coercive if

iv. \(\lim _{t \downarrow 0} \psi (t) = \lim _{t \rightarrow \infty } \psi (t) = \infty \).
Assuming that x and (y, s) are feasible solutions of the primal–dual pair, we want to define the search directions \((\varDelta x, \varDelta y, \varDelta s)\) in order to get the new vectors determined by the algorithm.
The idea of introducing these new search directions is based on defining a generalized barrier function \(\varPsi (v)\), \(v \in \mathbb {R}_{++}^n\). It should be mentioned that \(\mathbb {R}_{++}^n=\{x \in \mathbb {R}^n| x_i > 0, i=1,\ldots ,n \}.\) We assume that \(\varPsi (v)\) is minimal at \(v = e\), \(\varPsi (e) = 0\) and \(\varPsi \) is a strictly convex differentiable function.
Using a kernel function \(\psi \), we can construct a barrier function in the following form:
and we obtain the search directions:
In the following section we propose another method for introducing search directions.
4 New search directions using an equivalent algebraic transformation
In this section we modify the technique introduced by Darvay [10] in order to get new search directions. Let us consider the function \(\varphi \) defined and continuously differentiable on the interval \((\kappa ^2,\infty )\), where \(0 \le \kappa < 1\), such that \(2t\varphi '(t^2)-\varphi '(t)> 0, \forall t>\kappa ^2\). Using this, system (2) can be written in the following form:
In this paper we propose a new class of directions starting from a simple equivalence. Using \(v>0\), we get:
Then, system (2) can be written in the following form:
If we consider the notation
then system (5) can be written in the form \(f(x,y,s)=0\). Applying Newton’s method to this system we get
where \(J_f(x,y,s)\) denotes the Jacobian matrix of f. After some calculations we obtain Newton’s system:
We introduce the following notations
and
hence we obtain
and
Using these notations we can give the scaled form of system (7):
where
If we use the function \(\varphi : \left( \frac{1}{2},\infty \right) \rightarrow \mathbb {R}\), \(\varphi (t)=t\), then we have
It can be mentioned that the condition \(2t\varphi '(t^2)-\varphi '(t)> 0, \forall t>\kappa ^2\) is satisfied in this case, where \(\kappa =\frac{1}{2}\). Darvay et al. [11] considered another equivalent transformation of the system which defines the central path. In system (5) we used the transformation \(\varphi (v^2)=\varphi (e)\). After that, we analysed the case where \(\varphi (t)=t-\sqrt{t}\). It should be mentioned that we obtained the same vector \(p_v\) as the one given in (11).
Furthermore, we will consider the case of the function \(\varphi : \left( \frac{1}{\sqrt{2}},\infty \right) \rightarrow \mathbb {R}\), \(\varphi (t)=t^2\), so we obtain
Observe that the condition \(2t\varphi '(t^2)-\varphi '(t)> 0, \forall t>\kappa ^2\) is satisfied in this case, where \(\kappa =\frac{1}{\sqrt{2}}\).
We use a proximity measure to the central path
Let \(q_v=d_x-d_s\). Then, \(d_x^Td_s=0 \Rightarrow \Vert p_v\Vert =\Vert q_v\Vert \).
which implies
Observe that if we use the approach based on kernel functions, the third equation of (3) is equivalent to
where \(\varPsi \) is the function defined in Sect. 3. Therefore, in case of applying the transformation \(\varphi (v^2)=\varphi (v)\), the appropriate kernel function can be defined as:
In the case of the function \(\varphi (t)=t^2\) we obtain the following kernel function:
Note that this kernel function is not a usual one, because it is not defined on the whole interval \((0,\infty )\).
Now we are able to define the algorithm, which is given in Fig. 1.
In the following section we analyse the complexity of the algorithm.
5 Analysis of the algorithm
Let \(x_+=x+\varDelta x\) and \(s_+=s+\varDelta s \) be the points generated after a full-Newton step. Below, we give a condition which guarantees the feasibility of these vectors.
Lemma 1
Let \(\delta =\delta (x,s;\mu )<1\) and \(v>\frac{1}{\sqrt{2}}e \). Then,
which means that the full-Newton step is strictly feasible.
Proof
For each \(0\le \alpha \le 1\) denote \( x_+(\alpha )= x+\alpha \varDelta x\) and \( s_+(\alpha )= s+\alpha \varDelta s\). Therefore,
From Eqs. (8) and (9), we obtain
Also using (10) and (13) we can write
Moreover, from (12) we obtain
On the other hand, \((v^2-e)^2 \ge 0\) implies
and for this reason
Using these, we may write
If \(\left\| (1-\alpha )\frac{ p_v^2}{4}+ \alpha \frac{ q_v^2}{4}\right\| _\infty <1,\) then the inequality \( x_+(\alpha ) s_+(\alpha )> 0\) holds, where \(\Vert \cdot \Vert _\infty \) denotes the Chebyshev norm (or \(l_\infty \) norm). In this way
Hence, for each \(0\le \alpha \le 1\) the inequality \( x_+(\alpha ) s_+(\alpha )> 0\) holds. This means that \( x_+(\alpha )\) and \( s_+(\alpha )\), which are linear functions of \(\alpha \), do not change sign on the interval [0, 1]. Consequently, \( x_+(0)= x> 0\) and \( s_+(0)= s> 0\) yield \( x_+(1)= x_+> 0\) and \( s_+(1)= s_+> 0\). This means that the full-Newton step is strictly feasible.\(\square \)
In the following lemma we analyse the conditions under which the Newton process is quadratically convergent.
Lemma 2
Let \(\delta =\delta (x,s;\mu )<\frac{1}{\sqrt{2}}\) and \(v>\frac{1}{\sqrt{2}}e\). Then, \(v_+>\frac{1}{\sqrt{2}}e\) and
which means that the full-Newton step ensures local quadratic convergence of the proximity measure.
Proof
We know from Lemma 1 that \( x_+> 0\) and \( s_+> 0\). We introduce the following notation
Also we know from (16) that \(v^2+ v p_v = \frac{v^4}{2v^2-e}\), which implies
We consider the substitution \(\alpha =1\) in (15) and we obtain
Using (12) and (19) we may write
We have \(\frac{9v^2-4e}{v^2} < 9 e\), for all \(v>\frac{1}{\sqrt{2}}e\). Thus,
Besides, we know that \(\frac{p_v^2}{4} >0 \) and we also have from (17) that \(v^2+vp_v \ge e \); these imply that
Therefore,
Using \(\delta < \frac{1}{\sqrt{2}}\) we have \(\sqrt{1-\delta ^2} > \frac{1}{\sqrt{2}}\). From (24) and this we have \(v_+ > \frac{1}{\sqrt{2}}e\), which plays a key role, because during the whole process of the algorithm we have to ensure that the variance vector v satisfies this condition. We have
Let us consider the function \(g(t)=\frac{t}{2t^2-1}\) for all \(t>\frac{1}{\sqrt{2}}\). Because \(g'(t)<0\), the function g is decreasing for all \(t>\frac{1}{\sqrt{2}}\). Using this, together with (24) and (23), we get
This proves the lemma.\(\square \)
The next lemma shows how the full-Newton step affects the duality gap.
Lemma 3
Let \(\delta =\delta ( x, s;\mu )\). Then,
Proof
We know from (16) that
This implies
which proves the lemma.\(\square \)
The fourth lemma shows that the algorithm is well defined.
Lemma 4
Let \(\delta =\delta ( x, s; \mu )<\frac{1}{\sqrt{2}}\), \(v>\frac{1}{\sqrt{2}} e\) and \(\mu _+=(1-\theta )\mu \), where \(0<\theta <1\). Furthermore, let \(v_\sharp =\sqrt{\frac{x_+s_+}{\mu _+}}\). Then, \(v_\sharp > \frac{1}{\sqrt{2}}e\) and
If \(\delta < \frac{1}{10}\) and \(\theta = \frac{1}{12 \sqrt{n}}\), then \(\delta ( x_+, s_+; \mu _+) < \frac{1}{10}\).
Proof
Using Lemma 2 we have \(v_+>\frac{1}{\sqrt{2}}e\). From \(v_\sharp =\sqrt{\frac{x_+s_+}{\mu _+}}\) it follows that \(v_\sharp =\frac{1}{\sqrt{1-\theta }} v_+\), \(v_\sharp ^2=\frac{1}{1-\theta } v_+^2\) and \(v_\sharp > \frac{1}{\sqrt{2}}e\). Besides, from the definition of \(\delta \) we may write
Then,
and
From (27) and (28) it follows that
Besides, \(\frac{t}{2t^2-(1-\theta )}\) is a decreasing function of t, for all \(t > \frac{1}{\sqrt{2}}\). Using this and (29) it follows that
Using (23) we may write
From (30) and (31) it follows that
which proves the first part of the lemma. For the proof of the second part suppose that \(\delta < \frac{1}{10}\) and \(\theta = \frac{1}{12\sqrt{n}}\). Using these and the inequalities \(\sqrt{1-\delta ^2}<1\) and \(\theta > 0\) we obtain \(\delta (v_\sharp )< \frac{\theta \sqrt{n}+10\delta ^2}{2\sqrt{1-\theta }(1-2\delta ^2)} < \frac{1}{2\sqrt{1-\theta }} \frac{50}{49}\left( \frac{1}{12}+\frac{1}{10} \right) \). Using \(n \ge 1\) we get \(2\sqrt{1-\theta } \ge \sqrt{\frac{11}{3}}\). Taking all these inequalities into consideration we conclude \(\delta ( x_+, s_+; \mu _+)< \frac{5 \sqrt{33}}{294} < \frac{1}{10}\). This proves the lemma.\(\square \)
The next lemma gives a bound on the number of iterations.
Lemma 5
We suppose that \((x^0,s^0)\) are strictly feasible, \(\mu ^0=\frac{(x^0)^T s^0}{n}\) and \(\delta (x^0, s^0;\mu ^0) < \frac{1}{\sqrt{2}}\). Let \(x^k\) and \(s^k\) be the vectors obtained after k iterations. Then, for every
we get \((x^k)^Ts^k\le \epsilon \).
Proof
From Lemma 3 and from \(\delta (x, s; \mu ) < \frac{1}{\sqrt{2}}\) we get
In this way \(( x^k)^T s^k\le \epsilon \) holds if
We take logarithms, so we may write
We know that \(-\log (1-\theta )\ge \theta \), so the inequality certainly holds if
This proves the lemma.\(\square \)
Theorem 1
We assume that \(x^0=s^0=e\). Using the default values for \(\theta \) and \(\tau \) we obtain that the algorithm given in Fig. 1 requires no more than
interior-point iterations.
Now we give a consequence of the previous lemmas when \(\tau = \frac{1}{10}\) and \(\theta = \frac{1}{12 \sqrt{n}}\). Note that (x, y, s) is an \(\epsilon \)-solution of the primal–dual pair if the following inequality holds
Corollary 1
Let \(x^0=s^0=e\). If \(\tau = \frac{1}{10}\) and \(\theta = \frac{1}{12 \sqrt{n}}\), then after at most
interior-point iterations the algorithm finds an \(\epsilon \)-solution for (P) and (D). The resulting vectors satisfy \(x^Ts\le \epsilon \).
6 Numerical results
In order to compare the efficiency of the algorithm with other methods, we implemented the algorithm given in Fig. 1 in the C++ programming language. We analysed the algorithm for five different search directions, including ours. Let SD1 be the method working with the search direction introduced in this paper. Furthermore, we denote by SD2 the algorithm which uses the classical search direction proposed by Roos, Terlaky and Vial [33]. Let SD3 and SD4 be the algorithms that are based on the search directions given in [10] and [11], respectively. Moreover, we denote by SD5 the method which works with the trigonometric search direction introduced by Peyghami and Hafshejani [29].
In each step we reduce the value of the barrier update parameter \(\mu \) by the factor \(1-\theta \). We considered the following different values for the update parameter \(\theta \): 0.1, 0.2, 0.3, 0.4. We set the value for the accuracy parameter \(\epsilon \) to \(10^{-4}\).
We analysed two small-sized feasible problems for which the starting points were already given. After that, we solved the problem AFIRO from the Netlib test collection [12]. In the following, we present the results obtained by solving these three problems.
6.1 Problem 1
Let us consider the problems (P) and (D), where
We consider the following starting points: \(x^0= \left[ 9,\; 9,\; 1,\; 5,\; 5,\; 2,\; 1 \right] ^T\),
\(y^0=\left[ 5,\; -3,\; 4,\; 2,\; -4 \right] ^T\) and \(s^0= \left[ 1,\; 5,\; 8,\; 2,\; 9,\; 9,\; 6 \right] ^T.\)
In Table 1 we summarize the obtained number of iterations for the specified search directions and update parameters.
6.2 Problem 2
Let us consider:
The starting points are the following: \(x^0= \left[ 6,\; 10,\; 6,\; 6,\; 5,\; 11,\; 1\right] ^T\), \(y^0=\left[ 10,\; -7,\; 10,\; -1,\; 9,\; -4\right] ^T\) and \(s^0= \left[ 4,\; 3,\; 4,\; 11,\; 6,\; 1,\; 8\right] ^T.\)
In Table 2 we present the obtained number of iterations that refer to Problem 2.
6.3 Problem AFIRO
In this case for the choice of the initial points we used Mehrotra’s heuristic method [26].
Table 3 contains the obtained results for the different cases.
We conclude that in some cases the proposed method solves the linear optimization problem more efficiently than the algorithms based on the other search directions presented in the literature.
7 Conclusion
We proposed an IPM for LO based on a new method for defining search directions. Using the variance vector v we found that the centering equation is equivalent to \(v^2=v\), and we applied componentwise the function \(\varphi (t)=t^2\) to both sides of this equation. After that, we used Newton’s method in order to get the search directions. We showed that the obtained algorithm solves the problem in polynomial time and has the same complexity as the best known IPMs for LO. Moreover, we presented numerical results that illustrate the efficiency of the proposed method.
References
Achache, M.: A new primal–dual path-following method for convex quadratic programming. Comput. Appl. Math. 25(1), 97–110 (2006)
Achache, M.: Complexity analysis and numerical implementation of a short-step primal–dual algorithm for linear complementarity problems. Appl. Math. Comput. 216(7), 1889–1895 (2010)
Ai, W., Zhang, S.: An O\((\sqrt{n} {L})\) iteration primal–dual path-following method, based on wide neighborhoods and large updates, for monotone LCP. SIAM J. Optim. 16(2), 400–417 (2005)
Andersen, E., Gondzio, J., Mészáros, C., Xu, X.: Implementation of interior point methods for large scale linear programs. In: Terlaky, T. (ed.) Interior Point Methods of Mathematical Programming, pp. 189–252. Kluwer Academic Publishers, Dordrecht (1996)
Asadi, S., Mansouri, H.: Polynomial interior-point algorithm for \({P}_*(\kappa )\) horizontal linear complementarity problems. Numer. Algorithms 63(2), 385–398 (2013)
Bai, Y., El Ghami, M., Roos, C.: A comparative study of kernel functions for primal–dual interior-point algorithms in linear optimization. SIAM J. Optim. 15(1), 101–128 (2004)
Bai, Y.Q., El Ghami, M., Roos, C.: A new efficient large-update primal–dual interior-point method based on a finite barrier. SIAM J. Optim. 13, 766–782 (2003)
Dantzig, G.: Linear Programming and Extension. Princeton University Press, Princeton, NJ (1963)
Darvay, Zs.: A new algorithm for solving self-dual linear optimization problems. Studia Univ. Babeş-Bolyai Ser. Inf. 47(1), 15–26 (2002)
Darvay, Zs.: New interior point algorithms in linear programming. Adv. Model. Optim. 5(1), 51–92 (2003)
Darvay, Zs., Papp, I.M., Takács, P.R.: Complexity analysis of a full-Newton step interior-point method for linear optimization. Period. Math. Hung. 73(1), 27–42 (2016)
Gay, D.: Electronic mail distribution of linear programming test problems. Math. Program. Soc. COAL Newsl. 3, 10–12 (1985)
Gondzio, J., Terlaky, T.: A computational view of interior point methods for linear programming. In: Beasley, J. (ed.) Advances in Linear and Integer Programming. Oxford University Press, Oxford (1995)
Illés, T., Terlaky, T.: Pivot versus interior point methods: pros and cons. Eur. J. Oper. Res. 140, 6–26 (2002)
Illés, T., Nagy, M., Terlaky, T.: EP theorem for dual linear complementarity problems. J. Optim. Theory Appl. 140(2), 233–238 (2009)
Illés, T., Nagy, M., Terlaky, T.: Polynomial interior point algorithms for general linear complementarity problems. Algoritm. Oper. Res. 5(1), 1–12 (2010)
Illés, T., Nagy, M., Terlaky, T.: A polynomial path-following interior point algorithm for general linear complementarity problems. J. Global Optim. 47(3), 329–342 (2010)
Jansen, B., Roos, C., Terlaky, T.: The theory of linear programming: skew symmetric self-dual problems and the central path. Optimization 29, 225–233 (1994)
Karmarkar, N.: A new polynomial-time algorithm for linear programming. Combinatorica 4(4), 373–395 (1984)
Kheirfam, B.: A predictor-corrector interior-point algorithm for \({P}_*(\kappa )\)-horizontal linear complementarity problem. Numer. Algorithms 66(2), 349–361 (2014)
Klafszky, E., Terlaky, T.: Some generalizations of the criss-cross method for the linear complementarity problem of oriented matroids. Combinatorica 9(2), 189–198 (1989)
Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science, vol. 538. Springer Verlag, Berlin, Germany (1991)
Li, Y., Terlaky, T.: A new class of large neighborhood path-following interior point algorithms for semidefinite optimization with \({O}(\sqrt{n}\log \frac{\mathrm{Tr}(x^0s^0)}{\epsilon })\) iteration complexity. SIAM J. Optim. 20(6), 2853–2875 (2010)
Mansouri, H., Pirhaji, M.: A polynomial interior-point algorithm for monotone linear complementarity problems. J. Optim. Theory Appl. 157(2), 451–461 (2013)
Mansouri, H., Siyavash, T., Zangiabadi, M.: A path-following infeasible interior-point algorithm for semidefinite programming. Iran. J. Oper. Res. 3(1), 11–30 (2012)
Mehrotra, S.: On the implementation of a primal–dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)
Nesterov, Y.E., Nemirovskii, A.S.: Interior Point Polynomial Methods in Convex Programming, SIAM Studies in Applied Mathematics, vol. 13. SIAM Publications, Philadelphia (1994)
Peng, J., Roos, C., Terlaky, T.: Self-Regular Functions: A New Paradigm for primal–dual Interior-Point Methods. Princeton University Press, Princeton (2002)
Peyghami, M., Hafshejani, S.: Complexity analysis of an interior-point algorithm for linear optimization based on a new proximity function. Numer. Algorithms 67(1), 33–48 (2014)
Potra, F.: A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with O\(\left(\sqrt{n}L\right)\) iteration complexity. Math. Program. 100(2), 317–337 (2004)
Potra, F.A., Sheng, R.: Predictor-corrector algorithm for solving \({P}_*(\kappa )\)-matrix LCP from arbitrary positive starting points. Math. Program. 76(1), 223–244 (1996)
Potra, F.A., Sheng, R.: A large-step infeasible-interior-point method for the \({P}^*\)-Matrix LCP. SIAM J. Optim. 7(2), 318–335 (1997)
Roos, C., Terlaky, T., Vial, J-Ph: Theory and Algorithms for Linear Optimization. Springer, New York (2005)
Sonnevend, G.: An “analytic center” for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In: Prékopa, A., Szelezsán, J., Strazicky, B. (eds.) System Modelling and Optimization: Proceedings of the 12th IFIP-Conference held in Budapest, Hungary, September 1985. Lecture Notes in Control and Information Sciences, vol. 84, pp. 866–876. Springer Verlag, Berlin, West-Germany (1986)
Terlaky, T.: A convergent criss-cross method. Optimization 16(5), 683–690 (1985)
Terlaky, T.: An easy way to teach interior-point methods. Eur. J. Oper. Res. 130(1), 1–19 (2001)
Wang, G.Q.: A new polynomial interior-point algorithm for the monotone linear complementarity problem over symmetric cones with full NT-steps. Asia-Pacific J. Oper. Res. 29(2), 1250,015 (2012)
Wang, G.Q., Bai, Y.Q.: A new primal–dual path-following interior-point algorithm for semidefinite optimization. J. Math. Anal. Appl. 353(1), 339–349 (2009)
Wang, G.Q., Bai, Y.Q.: A primal–dual interior-point algorithm for second-order cone optimization with full Nesterov-Todd step. Appl. Math. Comput. 215(3), 1047–1061 (2009)
Wang, G.Q., Bai, Y.Q.: A new full Nesterov–Todd step primal–dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 154(3), 966–985 (2012)
Wright, S.: Primal–Dual Interior-Point Methods. SIAM, Philadelphia (1997)
Ye, Y.: Interior Point Algorithms, Theory and Analysis. Wiley, Chichester (1997)
Ye, Y., Todd, M., Mizuno, S.: An \(O(\sqrt{n}L)\)-iteration homogeneous and self-dual linear programming algorithm. Math. Oper. Res. 19, 53–67 (1994)
Zhang, L., Xu, Y.: A full-Newton step interior-point algorithm based on modified Newton direction. Oper. Res. Lett. 39, 318–322 (2011)
Zhang, M., Bai, Y., Wang, G.: A new primal–dual path-following interior-point algorithm for linearly constrained convex optimization. J. Shanghai Univ. 12(6), 475–480 (2008)
Acknowledgements
The authors are thankful for the valuable suggestions of the editor and the reviewers that helped improve the clarity and overall presentation of the paper. The authors are also grateful for the support of Edutus College (Collegium Talentum). This work was supported by a Grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, Project Number PN-III-P4-ID-PCE-2016-0190.
Darvay, Z., Takács, PR. New method for determining search directions for interior-point algorithms in linear optimization. Optim Lett 12, 1099–1116 (2018). https://doi.org/10.1007/s11590-017-1171-4