Abstract
In this paper, a full-Newton step infeasible interior-point method for solving linear optimization problems is presented. In each iteration, the algorithm uses only one so-called feasibility step and computes the feasibility search directions by using a trigonometric kernel function with a double barrier term. Convergence of the algorithm is proved and it is shown that the complexity bound of the algorithm matches the currently best known iteration bound for infeasible interior-point methods. Finally, some numerical results are provided to illustrate the performance of the proposed algorithm.
1 Introduction
Among the approaches proposed for solving linear optimization (LO) problems, interior-point methods (IPMs) have gained more attention than others because of their success in both theory and practice. Primal-dual IPMs for LO problems were first suggested by Kojima et al. [17] and Megiddo [21]. Primal-dual IPMs can be classified into two categories: feasible IPMs and infeasible IPMs (IIPMs). The first category requires a strictly feasible starting point and maintains feasibility during the solution process. Usually such a starting point is not at hand; in that case the second category should be used, which starts with an arbitrary positive point and reaches feasibility as optimality is approached. The first IIPM for LO was proposed by Lustig [20]. Kojima et al. [16] gave the first theoretical results on IIPMs.
In 2006, Roos [23] introduced the first full-Newton step IIPM for LO and derived the currently best known iteration bound. The key feature of his method is that it uses only full-Newton steps at each iteration instead of damped steps obtained by line searches. Kheirfam and Mahdavi-Amiri [14] generalized the algorithm proposed in [23] to linear complementarity problems over symmetric cones (SCLCP) using Euclidean Jordan algebras. Some variants of Roos’ algorithm can be found in [5,6,7, 27].
El Ghami et al. [3] first introduced a trigonometric kernel function for primal-dual IPMs for LO problems. Kheirfam [8] proposed a primal-dual IPM for semidefinite optimization (SDO) problems based on a new trigonometric kernel function. Kheirfam [4] suggested a large-update IPM with a new trigonometric kernel function for SCLCP problems and showed that the proposed algorithm enjoys the best-known iteration bound for such methods. Based on a kernel function, Liu et al. [19] proposed a full-Newton step infeasible interior-point algorithm for LO problems and proved that the resulting complexity coincides with the best result for IIPMs. Recently, Kheirfam and Haghighi [13] presented a full-Newton step infeasible interior-point algorithm for LO problems based on the trigonometric kernel function introduced in [15].
All the above-mentioned infeasible interior-point algorithms consist of one so-called feasibility step and several centering steps to obtain an 𝜖-optimal solution of the underlying problem. Very recently, Roos [24] proposed an infeasible interior-point algorithm for LO that does not need centering steps and takes only one feasibility step to get a new iterate close enough to the central path. Kheirfam [9,10,11,12] extended this algorithm to HLCP, the Cartesian P∗(κ)-LCP, convex quadratic symmetric cone optimization (CQSCO), and symmetric optimization (SO) problems, respectively.
Motivated by Roos [24], Kheirfam [9,10,11,12], and Kheirfam and Haghighi [13], we propose a full-Newton step infeasible interior-point algorithm for solving LO problems. Each iteration of the algorithm consists of only one feasibility step and computes the search directions using the barrier function based on the trigonometric kernel function
Moreover, we prove that the proposed algorithm enjoys the best-known iteration complexity for IIPMs.
Let us consider the LO problem in the standard form
where c, x ∈ Rn, b ∈ Rm, and A ∈ Rm×n with rank(A) = m. The corresponding dual to (P) can be expressed as follows:
where s ∈ Rn and y ∈ Rm. Without loss of generality, we assume that both (P) and (D) satisfy the interior-point condition (IPC). In accordance with the available results on infeasible IPMs (IIPMs), see, e.g., [23], let (x∗, y∗, s∗) be an optimal solution of the problem pair (P) and (D) such that \(\|x^{*}+s^{*}\|_{\infty }\leq \zeta \), and consider the starting point (x0, y0, s0) = (ζe, 0, ζe), which satisfies x0s0 = μ0e with μ0 = ζ2, where e is the all-one vector. It is worth noting that since x∗, s∗≥ 0 and (x∗)Ts∗ = 0, \(\|x^{*}+s^{*}\|_{\infty }\leq \zeta \) holds if and only if
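As a small illustration, the cold start above can be sketched in a few lines. The sizes n = 4, m = 2 and the value ζ = 10 are hypothetical; in practice ζ must satisfy \(\|x^{*}+s^{*}\|_{\infty }\leq \zeta \).

```python
import numpy as np

def cold_start(n, m, zeta):
    """Standard infeasible cold start (x0, y0, s0) = (zeta*e, 0, zeta*e)."""
    e = np.ones(n)
    x0, y0, s0 = zeta * e, np.zeros(m), zeta * e
    mu0 = zeta ** 2  # componentwise, x0 * s0 = mu0 * e
    return x0, y0, s0, mu0

x0, y0, s0, mu0 = cold_start(n=4, m=2, zeta=10.0)
```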
The rest of the paper is structured as follows. Section 2 presents the infeasible full-Newton step IPM for solving LO problems. In Section 2.1, we recall the perturbed problem pair previously presented in [23]. Section 2.2 describes an iteration of our algorithm, and then we give new search directions in Section 2.3. In Section 2.4, a framework of the algorithm is presented. Section 3 is devoted to some technical results. Section 4 contains the analysis of the proposed algorithm. Section 4.1 gives an upper bound for the proximity measure after a main iteration of the algorithm. Section 4.2 serves to derive an upper bound for ω. In Section 4.3, we fix values for the parameters τ and 𝜃. In Section 4.4, we state that the algorithm is well defined for the chosen values of τ and 𝜃, provided that n ≥ 2. Some numerical results are presented in Section 5. Finally, we conclude the paper and outline some future research lines in Section 6.
2 Infeasible full-Newton step IPM
In the case of an infeasible method, we call the triple (x, y, s) an 𝜖-solution of (P) and (D) if the norms of the residual vectors do not exceed 𝜖 and xTs ≤ 𝜖.
2.1 The perturbed problem
Let \({r^{0}_{b}}=b-Ax^{0}\) and \({r^{0}_{c}}=c-A^{T}y^{0}-s^{0}\) denote the initial residual vectors of (P) and (D), respectively. For any ν with 0 < ν ≤ 1, we consider the perturbed problem pair (Pν) and (Dν)
and
Note that (x, y, s) = (x0, y0, s0) yields a strictly feasible solution of the perturbed problem pair (Pν) and (Dν) when ν = 1. We conclude that if ν = 1, then the perturbed problem pair (Pν) and (Dν) satisfies the interior-point condition (IPC).
Lemma 1
([26, Theorem 5.13]) The problems (P) and (D) are feasible if and only if the perturbed problems (Pν) and (Dν) satisfy the IPC for every ν satisfying 0 < ν ≤ 1.
Due to Lemma 1, we conclude that the central path of the perturbed problem pair (Pν) and (Dν) exists. That is, the following system
has a unique solution (x(μ, ν), y(μ, ν), s(μ, ν)) for every μ > 0 as the μ-center of the perturbed problem pair (Pν) and (Dν). In what follows, we assume that the parameters μ and ν always satisfy the relation μ = νμ0 and we denote (x(μ, ν), y(μ, ν), s(μ, ν)) = (x(ν), y(ν), s(ν)). We measure the distance of the iterate (x, y, s) to the μ-center of the perturbed pair (Pν) and (Dν) by the quantity
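A minimal sketch of this proximity measure, assuming the standard definition \(\delta (x,s;\mu )=\frac {1}{2}\|v-v^{-1}\|\) with \(v=\sqrt {xs/\mu }\) used in [23] (the displayed formula is assumed to be this quantity):

```python
import numpy as np

def delta(x, s, mu):
    """Proximity delta(x, s; mu) = 0.5 * ||v - v^{-1}||, v = sqrt(x*s/mu).

    Sketch under the standard definition from Roos [23]."""
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)

# On the central path x*s = mu*e, the proximity vanishes.
x = np.array([2.0, 0.5, 1.0])
s = np.array([0.5, 2.0, 1.0])
print(delta(x, s, mu=1.0))  # prints 0.0, since x*s = e here
```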
As an immediate consequence, we have the following lemma.
Lemma 2
([25, Lemma II.62]) Let δ := δ(x, s;μ) and \(\rho (\delta ):=\delta +\sqrt {1+\delta ^{2}}\). Then
2.2 An iteration of our algorithm
Here, we describe one (main) iteration of our algorithm. Suppose that for some μ ∈ (0, μ0], the algorithm starts from an iterate (x, y, s) that satisfies the feasibility conditions (2) and (3) for \(\nu =\frac {\mu }{\mu ^{0}}\) and is such that δ(x, s;μ) ≤ τ, where τ is a threshold value. This is certainly true at the start of the first iteration. The algorithm then finds a new iterate (x+, y+, s+) that satisfies the feasibility conditions (2) and (3) with ν replaced by ν+ = (1 − 𝜃)ν, where 𝜃 ∈ (0,1), μ is reduced to μ+ = (1 − 𝜃)μ, and δ(x+, s+;μ+) ≤ τ.
To generate the new iterate (x+, y+, s+) = (x + Δx, y + Δy, s + Δs), we need the displacement (Δx, Δy, Δs), which is obtained by solving the following system
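The specific right-hand sides of system (6) are not reproduced here, but the solve itself follows the usual elimination pattern for primal-dual Newton systems of the form AΔx = r_p, ATΔy + Δs = r_d, sΔx + xΔs = r_xs. A generic sketch via the normal equations (the names r_p, r_d, r_xs stand in for the paper's right-hand sides and are assumptions):

```python
import numpy as np

def newton_step(A, x, s, r_p, r_d, r_xs):
    """Solve  A dx = r_p,  A^T dy + ds = r_d,  s*dx + x*ds = r_xs
    by elimination. Generic sketch; the paper's actual right-hand
    sides (system (6)) involve the kernel-function term."""
    d = x / s                          # positive scaling vector
    M = A @ np.diag(d) @ A.T           # normal-equations matrix A D A^T
    rhs = r_p + A @ (d * r_d - r_xs / s)
    dy = np.linalg.solve(M, rhs)
    ds = r_d - A.T @ dy
    dx = (r_xs - x * ds) / s
    return dx, dy, ds
```

For dense problems this costs O(m^2 n) to form M plus O(m^3) for the solve; sparse factorizations are used in practice.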
2.3 New search directions
Defining the scaled search directions
where v is defined in (5), one can easily verify that the feasibility search direction system (6) can be written in terms of the scaled search directions dx and ds as follows:
where \(\bar A:=AV^{-1}X, V:=\text {diag}(v)\) and X := diag(x).
Definition 1
A twice differentiable function \(\psi (t):(0, \infty )\rightarrow (0, \infty )\) is called a kernel function if
Note that the right-hand side of the third equation in (8) is the negative gradient of the logarithmic barrier function \({\varPsi }(v)={\sum }_{i=1}^{n}\psi (v_{i})\) where \(\psi (t)=\frac {t^{2}-1}{2}-\log t\). In this paper, the new search direction is obtained by the following system
where the kernel function ϕ of the barrier Φ(v) is
The above kernel function was first introduced in [18] for feasible IPMs based on kernel functions. Since \(\phi ^{\prime }(t)=\frac {2t^{3}-t^{2}-1}{2t^{2}}+\frac {1}{4}h^{\prime }(t)\tan (h(t))\left (1+\tan ^{2}(h(t))\right )\), the third equation of (9) can be written as
We define
One easily verifies that
Thus, σ(v) is a suitable proximity measure.
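The displayed expression for ϕ′(t) can be evaluated directly. In the sketch below, h(t) = π(1 − t)/(2 + 4t) is reconstructed from h′(t) = −6π/(2 + 4t)2 (given in the proof of Lemma 3) together with the stated ranges of h, which force h(1) = 0; this reconstruction is an assumption.

```python
import math

def h(t):
    # Assumed form: integrates h'(t) = -6*pi/(2+4t)^2 with h(1) = 0,
    # consistent with h(t) in [0, pi/2) for t <= 1 and (-pi/4, 0] for t > 1.
    return math.pi * (1.0 - t) / (2.0 + 4.0 * t)

def phi_prime(t):
    """phi'(t) as displayed in the text."""
    th = math.tan(h(t))
    h_prime = -6.0 * math.pi / (2.0 + 4.0 * t) ** 2
    return (2 * t**3 - t**2 - 1) / (2 * t**2) + 0.25 * h_prime * th * (1 + th**2)
```

Note that ϕ′(1) = 0, so t = 1 minimizes ϕ, as a kernel function requires, and ϕ′(t) < 0 for t < 1, ϕ′(t) > 0 for t > 1.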
2.4 The algorithm
A more formal description of the algorithm is presented in Fig. 1.
3 Technical results
In this section, we give some lemmas which are used in the analysis later.
Lemma 3
For t > 0, we have
Proof
Since \(h^{\prime }(t)=\frac {-6\pi }{(2+4t)^{2}}\), for \(f(t):=\tan (h(t))\sec ^{2}(h(t))\) we have
That is, the function f(t) is monotonically decreasing with respect to t. So, it follows that
Hence, we obtain
The proof is complete. □
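The monotonicity claim in Lemma 3 can be checked numerically, using h(t) = π(1 − t)/(2 + 4t), which is reconstructed from h′(t) = −6π/(2 + 4t)2 with h(1) = 0 (an assumption consistent with the stated ranges of h):

```python
import math

def h(t):
    # Assumed closed form of h, matching h'(t) = -6*pi/(2+4t)^2 and h(1) = 0.
    return math.pi * (1.0 - t) / (2.0 + 4.0 * t)

def f(t):
    """f(t) = tan(h(t)) * sec(h(t))^2, claimed to be decreasing in t."""
    return math.tan(h(t)) / math.cos(h(t)) ** 2

# Check strict decrease on a grid in (0, 4.1).
ts = [0.1 + 0.05 * k for k in range(80)]
vals = [f(t) for t in ts]
assert all(a > b for a, b in zip(vals, vals[1:]))
```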
Lemma 4
For t > 0, we have
Proof
If 0 < t ≤ 1, then we have \(h(t)\in [0, \frac {\pi }{2})\) and \(\tan (h(t))>0\). Thus,
Using \(\cos \limits (x)\geq 1-\frac {2}{\pi }x\) and \(\sin \limits (x)\leq \frac {4}{\pi }x-\frac {4}{\pi ^{2}}x^{2}\) for \(x\in [0, \frac {\pi }{2}]\) [22], we get
Therefore,
The above inequality and (12) imply that
If t > 1, then \(h(t)\in (-\frac {\pi }{4},0]\) and \(\tan (h(t))<0\). Therefore, we obtain
From \(\cos \limits (x)\geq 1-\frac {2}{\pi }x, x\in \left [0, \frac {\pi }{2}\right ]\) and \(\tan (x)<\frac {\pi ^{2}x}{\pi ^{2}-4x^{2}}, x\in \left (0, \frac {\pi }{2}\right )\) [1], and the fact that \(\cos \limits (-x)=\cos \limits (x)\) and \(\tan (-x)=-\tan (x)\), we have
Thus, we get
These two inequalities imply that
Now, using (13) and (14) for t > 1, we obtain
This proves the lemma for t > 1. Therefore, the proof is complete. □
Lemma 5
For t > 0, we have
Proof
If 0 < t ≤ 1, then we have \(\frac {1}{t}-t\geq 0\) and 1 + t2 − 2t3 ≥ 0. Therefore, we get
If t > 1, then \(t-\frac {1}{t}>0\) and 1 + t2 − 2t3 < 0. In this case, we have
These two inequalities imply the desired result. □
4 Analysis of the algorithm
In this section, we show that the proposed algorithm in Fig. 1 is well defined. The main part of our analysis is to find values for the parameters τ and 𝜃 such that after a full-Newton step, the iterate (x+, y+, s+) is strictly feasible for the perturbed problem pair \({\text (P_{\nu ^{+}})}\) and \({\text (D_{\nu ^{+}})}\), and also δ(x+, s+;μ+) ≤ τ.
4.1 Upper bound for δ(v+)
The following lemma guarantees the strict feasibility of the iterate (x+, y+, s+) for the perturbed pair \((P_{\nu ^{+}})\) and \((D_{\nu ^{+}})\).
Lemma 6
The new iterate (x+, y+, s+) is strictly feasible if
Proof
Defining
and using (7) and (11), we have
where the last inequality is due to Lemma 3 and v > 0. Hence, none of the entries of x(α) and s(α) vanishes for 0 ≤ α ≤ 1. Since x(0) = x > 0, s(0) = s > 0, and x(α) and s(α) depend linearly on α, this implies that x(α) > 0 and s(α) > 0 for 0 ≤ α ≤ 1. In particular, x(1) = x+ and s(1) = s+ are positive. This proves the lemma. □
In the sequel, we use the notation \(\omega :=\frac {1}{2}(\Vert d_{x}\Vert ^{2} + \Vert d_{s}\Vert ^{2})\), and we have the following inequality:
Lemma 7
If \(\omega <\frac {1+\rho (\delta )^{2}}{2\rho (\delta )^{3}}-\frac {1}{4}\delta \rho (\delta )\), then the iterate (x+, y+, s+) is strictly feasible.
Proof
By Lemma 6, the iterate (x+, y+, s+) is strictly feasible if
The last inequality certainly holds if
Using Lemma 4, Lemma 2, and the definition of δ, we obtain
By Lemma 2, (17) and (15), we obtain
This means that (16) holds, and the proof is complete. □
Let v+ be the variance vector of the iterate (x+, y+, s+) with respect to μ+, i.e., \(v^{+}=\sqrt {\frac {x_{+}s_{+}}{\mu ^{+}}}\). The following lemma gives a lower bound for the components of v+.
Lemma 8
Let v+ be the variance vector related to the iterate (x+, y+, s+) with respect to μ+, then
Proof
Dividing both sides of the above equality by μ+ = (1 − 𝜃)μ, we obtain
which implies by Lemma 2, (15) and (17),
Taking square roots gives the desired result. □
We proceed by deriving an upper bound for δ(x+, s+;μ+). By definition (5), we have
Lemma 9
If \(\omega <\frac {1+\rho (\delta )^{2}}{2\rho (\delta )^{3}}-\frac {1}{4}\delta \rho (\delta )\), then
Proof
Using (18) and the triangle inequality, we have
where the last inequality follows from (15).
To obtain an upper bound for \(\frac {1}{2}\|v^{-1}\|_{\infty }\|e-v\|^{2}\), using \(|1-t|\leq |t^{-1}-t|\) for each t > 0, Lemma 2, and (5), we have
Using Lemma 4, we get
Substituting these two bounds into (19), and also using Lemma 8, we obtain the inequality in the lemma. □
4.2 Upper bound for ω
Let us denote the null space of the matrix \(\bar {A}\) as \({\mathscr{L}}\). So
Obviously, the affine space \(\{\xi \in \mathbb {R}^{n}:\bar {A}\xi =\theta \nu {r_{b}^{0}}\}\) equals \(d_{x}+{\mathscr{L}}\). The row space of \(\bar {A}\) equals the orthogonal complement \({\mathscr{L}}^{\perp }\) of \({\mathscr{L}}\), and \(d_{s}\in \theta \nu v s^{-1}{r_{c}^{0}}+{\mathscr{L}}^{\perp }\). Also note that \({\mathscr{L}}\cap {\mathscr{L}}^{\perp }=\{0\}\), and as a consequence the affine spaces \(d_{x}+{\mathscr{L}}\) and \(d_{s}+{\mathscr{L}}^{\perp }\) meet in a unique point q.
Lemma 10
Let q be the (unique) point in the intersection of the affine space \(d_{x}+{\mathscr{L}}\)and \(d_{s}+{\mathscr{L}}^{\perp }\). Then
Proof
The proof of the lemma is exactly the same as the proof of Lemma 3.4 in [24], except that we use −∇Φ(v) instead of \(v^{-1}-v\). □
Following the same argument as in Section 3.4 of [24], we obtain
The following lemma provides an upper bound for σ(v) in terms of the proximity measure δ.
Lemma 11
One has
Proof
Using the definition of σ(v), the triangle inequality, Lemma 5, and (21), we obtain
The last inequality is due to Lemma 2. This completes the proof. □
Using Lemma 10, (22), and Lemma 11, we obtain an upper bound for ω.
4.3 Values for 𝜃 and τ
Our aim is to find a positive number τ such that if δ(v) ≤ τ holds, then δ(v+) ≤ τ. By Lemma 9, this is certainly true if
Using (23), the inequality (24) holds if
One easily verifies that the left-hand side expression in the above inequality is monotonically increasing with respect to δ, whereas the right-hand side expression is monotonically decreasing. Hence, it suffices to have
One may easily check that the above inequality is satisfied if
This means that the inequality (24) holds, i.e., the iterate (x+, y+, s+) is strictly feasible, by Lemma 7. On the other hand, note that
provides an upper bound for ω, by (23). Moreover, \(g(n, \frac {1}{16})\leq g(2, \frac {1}{16})\leq 0.0533\). From Lemma 9, we have
This means that the inequality (25) is true. That is, the algorithm is well defined in the sense that the property δ(x, s;μ) ≤ τ with τ and 𝜃 defined in (27) is maintained in all iterations.
4.4 Complexity analysis
In the previous sections, we have found that if n ≥ 2 and at the start of an iteration the iterate (x, y, s) satisfies δ(x, s;μ) ≤ τ, with τ and 𝜃 as defined in (27), then after the full-Newton step, the new iterate (x+, y+, s+) is strictly feasible and δ(x+, s+;μ+) ≤ τ. This makes the algorithm well defined. After each iteration, the residuals and the duality gap are reduced by a factor 1 − 𝜃. Hence, using (x0)Ts0 = nζ2, the total number of main iterations is bounded above by
Using \(\theta =\frac {1}{22n}\), we have the following result.
Theorem 1
Let (P) and (D) be feasible and ζ > 0 such that \(\|x^{*}+ s^{*}\|_{\infty }\leq \zeta \)for some optimal solutions x∗ of (P) and (y∗, s∗) of (D). Then after at most
iterations the algorithm finds an 𝜖-solution of (P) and (D).
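The bound can be evaluated for concrete data, assuming the standard form \(\frac {1}{\theta }\log \frac {\max \{n\zeta ^{2},\|{r_{b}^{0}}\|,\|{r_{c}^{0}}\|\}}{\epsilon }\) used in Roos-style analyses [23, 24] with \(\theta =\frac {1}{22n}\) (the displayed bound is assumed to have this form; the sample inputs are hypothetical):

```python
import math

def iteration_bound(n, zeta, eps, rb0_norm, rc0_norm):
    """Worst-case main-iteration count, assuming the bound
    (1/theta) * log(max{n*zeta^2, ||rb0||, ||rc0||} / eps)
    with theta = 1/(22n)."""
    theta = 1.0 / (22 * n)
    return math.ceil(math.log(max(n * zeta**2, rb0_norm, rc0_norm) / eps) / theta)

# Hypothetical instance: n = 50 variables, zeta = 100, accuracy 1e-4.
print(iteration_bound(n=50, zeta=100.0, eps=1e-4, rb0_norm=1.0, rc0_norm=1.0))
```

As a worst-case estimate this grows linearly in n (up to a logarithmic factor); Section 5 shows that far fewer iterations are needed in practice.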
5 Numerical results
In this section, we report some computational results for the test problems given in Table 1, which are taken from the standard NETLIB test repository [2]. The numerical results were obtained using MATLAB R2009a (version 7.8.0.347) on a 32-bit Windows XP Enterprise operating system. They are summarized in Table 1, where “Iter.” denotes the required number of iterations and “CPU” denotes the CPU time (in seconds) required to obtain an 𝜖-solution of the underlying problem. Moreover, in Table 1, “bound” and “object. value” denote the bound obtained in Theorem 1 and the objective function value, respectively. We set 𝜖 = 10− 4 as the accuracy parameter and choose the initial starting point (x0, y0, s0) = (ζe, 0, ζe) such that \(\|x^{*}+s^{*}\|_{\infty }\leq \zeta \). Also, we set \(\theta =\frac {1}{22 n}\) for the proposed algorithm and \(\theta =\frac {1}{8n}\) for the algorithm in [24].
Note that the iteration bound in this paper is a worst-case bound, as is usual for theoretical bounds for IPMs (including IIPMs). When solving a particular problem, much smaller iteration numbers can usually be realized by taking 𝜃 larger than the theoretically justified value [24]. Next, we compare the algorithm proposed in this paper with the algorithm given in [13]. To this end, we again consider 𝜖 = 10− 4 and (x0, y0, s0) = (ζe, 0, ζe) such that \(\|x^{*}+s^{*}\|_{\infty }\leq \zeta \), now with 𝜃 = 0.2. In Table 2, “C. Iter.” denotes the required number of centering iterations. The numerical results are listed in Table 2.
The numerical results in Table 2 show that for each problem instance, the algorithm in [13] requires a number of centering iterations (C. Iter.), while the proposed algorithm does not need these iterations. Moreover, from the results in Table 2, we find that the execution time of the proposed algorithm is almost one-third of that of the algorithm in [13].
6 Concluding remarks
In this paper, we proposed a new infeasible interior-point algorithm for solving LO problems. In each iteration, the algorithm uses only one feasibility step and employs a trigonometric kernel function instead of the logarithmic kernel function in the system of scaled search directions. Moreover, the new algorithm does not need centering steps. The proposed algorithm uses only full steps, so no line searches are needed to generate the new iterates. The complexity bound of the proposed algorithm coincides with the currently best known iteration bound of infeasible IPMs for LO problems. We presented some numerical results to illustrate the performance of the algorithm on NETLIB test problems.
References
Becker, M., Stark, E.L.: On a hierarchy of quolynomial inequalities for tan x. Univ. Beograd. Publ. Elektrotehn. Fak. Ser. Mat. Fiz. 602-633, 133–138 (1978)
Browne, S., Dongarra, J., Grosse, E., Rowan, T.: The Netlib mathematical software repository. Corporation for National Research Initiatives (1995)
El Ghami, M., Guennoun, Z.A., Bouali, S., Steihaug, T.: Interior-point methods for linear optimization based on a kernel function with trigonometric barrier term. J. Comput. Appl. Math. 236(15), 3613–3623 (2012)
Kheirfam, B.: A generic interior-point algorithm for monotone symmetric cone linear complementarity problems based on a new kernel function. J. Math. Model. Algorithms Oper. Res. 13(4), 471–491 (2014)
Kheirfam, B.: A new complexity analysis for full-Newton step infeasible interior-point algorithm for P∗(κ)-horizontal linear complementarity problems. J. Optim. Theory Appl. 161(3), 853–869 (2014)
Kheirfam, B.: A full Nesterov-Todd step infeasible interior-point algorithm for symmetric optimization based on a specific kernel function. NACO 3(4), 601–614 (2013)
Kheirfam, B.: A new infeasible interior-point method based on Darvay’s technique for symmetric optimization. Ann. Oper. Res. 211(1), 209–224 (2013)
Kheirfam, B.: Primal-dual interior-point algorithm for semidefinite optimization based on a new kernel function with trigonometric barrier term. Numer. Algorithms 61(4), 659–680 (2012)
Kheirfam, B.: An improved full-Newton step O(n) infeasible interior-point method for horizontal linear complementarity problem. Numer. Algorithms 71(3), 491–503 (2016)
Kheirfam, B.: A full step infeasible interior-point method for Cartesian P∗(κ)-SCLCP. Optim. Lett. 10(3), 591–603 (2016)
Kheirfam, B.: An improved and modified infeasible interior-point method for symmetric optimization. Asian-Eur. J. Math. 9(2), 1650059 (2016). (13 pages)
Kheirfam, B.: An infeasible full-NT step interior point algorithm for CQSCO. Numer. Algorithms 74(1), 93–109 (2017)
Kheirfam, B., Haghighi, M.: A full-Newton step infeasible interior-point method for linear optimization based on a trigonometric kernel function. Optimization 65(4), 841–857 (2016)
Kheirfam, B., Mahdavi-Amiri, N.: A full Nesterov-Todd step infeasible interior-point algorithm for symmetric cone linear complementarity problem. Bull. Iranian Math. Soc. 40(3), 541–564 (2014)
Kheirfam, B., Moslemi, M.: A polynomial-time algorithm for linear optimization based on a new kernel function with trigonometric barrier term. Yugosl. J. Oper. Res. 25(2), 233–250 (2015)
Kojima, M., Megiddo, N., Mizuno, S.: A primal-dual infeasible-interior-point algorithm for linear programming. Math. Program. 61, 263–280 (1993)
Kojima, M., Mizuno, S., Yoshise, A.: A primal-dual interior-point algorithm for linear programming, Progress in Mathematical Programming (Pacific Grove, CA, 1987), pp. 29–47. Springer, New York (1989)
Li, X., Zhang, M.: Interior-point algorithm for linear optimization based on a new trigonometric kernel function. Oper. Res. Lett. 43(5), 471–475 (2015)
Liu, Z., Sun, W., Tian, F.: A full-Newton step O(n) infeasible interior-point algorithm for linear programming based on kernel function. Appl. Math. Optim. 60, 237–251 (2009)
Lustig, I.J.: Feasibility issues in a primal-dual interior-point method for linear programming. Math. Program. 49(1-3), 145–162 (1991)
Megiddo, N.: Pathways to the optimal set in linear programming, Progress in Mathematical Programming (Pacific Grove, CA, 1987), pp. 131–158. Springer, New York (1989)
Qi, F.: Jordan’s inequality: refinements, generalizations, applications and related problems. RGMIA Res. Rep. Coll. 2006;9:12 (16p) (electronic). Budengshi Yanjiu Tongxun (Communications in Studies on Inequalities) 13, 243–259 (2006)
Roos, C.: A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM J. Optim. 16(4), 1110–1136 (2006)
Roos, C.: An improved and simplified full-Newton step O(n) infeasible interior-point method for linear optimization. SIAM J. Optim. 25(1), 102–114 (2015)
Roos, C., Terlaky, T., Vial, J.-Ph.: Theory and Algorithms for Linear Optimization: An Interior-Point Approach. Wiley, Chichester (1997)
Ye, Y.: Interior Point Algorithms: Theory and Analysis. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, New York (1997)
Zhang, L., Sun, L., Xu, Y.: Simplified analysis for full-Newton step infeasible interior-point algorithm for semidefinite programming. Optimization 62(2), 169–191 (2013)
Kheirfam, B., Haghighi, M. A full-Newton step infeasible interior-point method based on a trigonometric kernel function without centering steps. Numer Algor 85, 59–75 (2020). https://doi.org/10.1007/s11075-019-00802-x
Keywords
- Linear optimization
- Infeasible interior-point method
- Full-Newton step
- Kernel functions
- Polynomial complexity