Abstract
In this paper, we propose a full-Newton step infeasible interior-point algorithm (IPA) for solving linear optimization problems based on a new kernel function (KF). The latter belongs to the newly introduced hyperbolic type (Guerdouh et al. in An efficient primal-dual interior point algorithm for linear optimization problems based on a novel parameterized kernel function with a hyperbolic barrier term, 2021; Touil and Chikouche in Acta Math Appl Sin Engl Ser 38:44–67, 2022; Touil and Chikouche in Filomat 34:3957–3969, 2020). Unlike feasible IPAs, our algorithm does not require a feasible starting point. In each iteration, the new feasibility search directions are computed using the newly introduced hyperbolic KF, whereas the centering search directions are obtained using the classical KF. A simple analysis of the primal-dual infeasible interior-point method (IIPM) based on the new KF shows that the iteration bound of the algorithm matches the currently best iteration bound for IIPMs. We consolidate these theoretical results with numerical experiments in which we compare our algorithm with the well-known SeDuMi solver. To our knowledge, this is the first full-Newton step IIPM based on a KF with a hyperbolic barrier term.
1 Introduction
The linear optimization (LO) problem has been increasingly addressed in the literature, from both theoretical and computational points of view. LO has been widely used to model problems arising in various areas such as economics, engineering and operations research.
The first formulation of an economic problem as an LO problem was given by the Russian mathematician Kantorovich (1939, [3]); the general formulation was given later by the American mathematician G.B. Dantzig in his work [1].
In this paper, we study the linear program in primal-dual standard form
\[ (P)\qquad \min \{c^Tx \;:\; Ax=b,\; x\ge 0\}, \]
\[ (D)\qquad \max \{b^Ty \;:\; A^Ty+s=c,\; s\ge 0\}, \]
with \(A\in {\mathbb {R}}^{m\times n},\) \(x,s,c\in {\mathbb {R}}^n\) and \(b,y\in {\mathbb {R}}^m.\)
Several methods are used to find an optimal solution for this pair of problems. Interior-point methods (IPMs) are among the most popular methods. IPMs were first developed by Karmarkar [4] for LO. These methods are based on using Newton’s method in a careful and controlled manner.
Initially, IPAs that required a feasible starting point were studied [7, 13]. However, the feasible initial point is not given in general. Lustig [10] proposed a solution to this when he introduced the first infeasible-start algorithm. His approach was further improved in the predictor-corrector algorithm of Mehrotra [12]. After that, Roos [16] introduced a new infeasible IPA, which uses only full-Newton steps. Some extensions on LO were carried out by Liu and Sun [8], Liu et al. [9] and Mansouri and Roos [11].
Salahi et al. [18] presented a new IIPM for LO based on a specific self-regular KF. Recently, Kheirfam and Haghighi [5, 6] and Moslemi and Kheirfam [14] discussed the complexity of trigonometric proximity based IIPMs for different types of problems.
An alternative method to determine new search directions was introduced by Rigó and Darvay [15] where they presented a new method based on an algebraic equivalent transformation on the centering equation of the system which characterizes the central path.
In this paper, our main contribution is an infeasible-start IPM for LO that builds on the works of Roos [16] and Kheirfam and Haghighi [5, 6]. The algorithm avoids a big-M or a two-phase approach. Furthermore, under general conditions we guarantee that our algorithm will converge to an optimal solution.
We now present some notation used throughout the paper. \(\Vert \cdot \Vert ,\) \(\Vert \cdot \Vert _1\) and \(\Vert \cdot \Vert _{\infty } \) denote the Euclidean norm, the 1-norm and the infinity norm, respectively. \({\mathbb {R}}^n_+\) and \({\mathbb {R}}^n_{++}\) denote the nonnegative and the positive orthants, respectively. For given vectors \(x,s\in {\mathbb {R}}^n,\) \(X=\mathrm{{diag}}(x)\) denotes the \(n\times n\) diagonal matrix whose diagonal entries are the components of x, and the vector xs denotes the component-wise product of x and s.
The remainder of this paper is organized as follows. In the next section we recall the basics of IIPMs. The complexity is analyzed in Sect. 3. Our computational experiments are presented in Sect. 4. Finally, we give the conclusion in the last section.
2 Preliminaries
In this section, we briefly describe the basics of IIPMs for LO. We outline some concepts and basic tools required in IIPMs, such as the starting point, the perturbed problems, the central path and the proximity measure. We start by recalling the standard LO problem
\[ (P)\qquad \min \{c^Tx \;:\; Ax=b,\; x\ge 0\} \]
and its dual problem
\[ (D)\qquad \max \{b^Ty \;:\; A^Ty+s=c,\; s\ge 0\}, \]
where \(A\in {\mathbb {R}}^{m\times n}\) with \(\mathrm{rank}(A)=m\le n,\) \(c\in {\mathbb {R}}^n\) and \(b\in {\mathbb {R}}^m\) are given. Unlike feasible IPMs, in IIPMs a triple (x, y, s) is called an \(\epsilon -\)solution of (P) and (D) if the norms of the residual vectors \(r_b=b-Ax\) and \(r_c=c-A^Ty-s\) do not exceed \(\epsilon \) and, in addition, \(x^Ts\le \epsilon .\)
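As an illustration, the \(\epsilon -\)solution test can be sketched in a few lines of NumPy; the function name `is_eps_solution` is ours and not part of the paper.

```python
import numpy as np

def is_eps_solution(A, b, c, x, y, s, eps):
    """Check the epsilon-solution criterion for the pair (P), (D):
    the residual norms ||r_b||, ||r_c|| and the duality gap x^T s
    must all be at most eps."""
    r_b = b - A @ x        # primal residual
    r_c = c - A.T @ y - s  # dual residual
    return (np.linalg.norm(r_b) <= eps
            and np.linalg.norm(r_c) <= eps
            and x @ s <= eps)
```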
Following the basics of IIPMs, we choose \(x^0>0\) and \(s^0>0\) such that \(x^0s^0=\mu ^0 e\) for some positive number \(\mu ^0,\) and we denote by \({r_b}^0\) and \({r_c}^0\) the initial residual vectors. In this work, we assume that
where \(\mu ^0\) is the initial duality gap and \(\xi \) satisfies the following inequality
For any \(\nu \) with \(0<\nu \le 1,\) we consider the perturbed problem pair \((P_\nu )\) and \((D_\nu )\)
Note that if \(\nu =1,\) then the triple \((x,y,s)=(x^0,y^0,s^0)\) yields a strictly feasible solution of \((P_\nu )\) and \((D_\nu ).\) This implies that for \(\nu =1,\) \((P_\nu )\) and \((D_\nu )\) satisfy the interior-point condition (IPC), i.e., \((P_\nu )\) has a feasible solution \(x>0\) and \((D_\nu )\) has a solution (y, s) with \(s>0.\)
Theorem 2.1
(Lemma 3.1 in [16]) The original problems (P) and (D) are feasible if and only if for each \(\nu \) satisfying \(0<\nu \le 1\) the perturbed problems \((P_\nu )\) and \((D_\nu )\) satisfy the IPC.
In what follows, we assume that (P) and (D) are feasible. Then Theorem 2.1 implies that for every \(0<\nu \le 1\) the system (2) has a unique solution
This means that the central path exists: the set \(\{(x(\mu ,\nu ),y(\mu , \nu ),s(\mu ,\nu )): \mu >0,\;0<\nu \le 1\}\) of unique solutions forms the central path, where \((x(\mu ,\nu ),y(\mu ,\nu ),s(\mu , \nu ))\) is the \(\mu -\)center of \((P_\nu )\) and \((D_\nu )\) and \(\mu =\nu \mu ^0.\) For simplicity, we write \((x(\mu ,\nu ),y(\mu ,\nu ),s(\mu ,\nu ))=(x(\nu ),y(\nu ),s(\nu )).\)
Now, we would like to compute an approximate solution of (2). Let (x, y, s) be a positive feasible triple of (2) and \(\mu >0.\) A direct application of Newton’s method produces the following system for the search direction \((\Delta x,\Delta y,\Delta s)\)
Following this centrality step, the new point \((x^+,y^+,s^+)\) is then computed according to:
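As a minimal sketch, and assuming the classical feasible centering system \(A\Delta x=0,\;A^T\Delta y+\Delta s=0,\;s\Delta x+x\Delta s=\mu e-xs\) (the displayed system (3) and update (omitted above) are assumed to be of this standard form), one full centering step can be computed by the usual normal-equations reduction; all function names are ours.

```python
import numpy as np

def centering_step(A, x, y, s, mu):
    """One full-Newton centering step for a primal-dual feasible iterate.
    Solves  A dx = 0,  A^T dy + ds = 0,  s*dx + x*ds = mu*e - x*s
    by block elimination (normal equations) and takes the full step."""
    d2 = x / s                                   # diagonal of D^2
    rhs = -A @ (mu / s - x)
    dy = np.linalg.solve(A @ (d2[:, None] * A.T), rhs)
    ds = -A.T @ dy
    dx = mu / s - x - d2 * ds
    return x + dx, y + dy, s + ds                # full step, no damping
```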
For convenience, we introduce the following scaled vector v and scaled search directions \(d_x\) and \(d_s\)
Using these notations, system (3) can be rewritten as follows:
where \({\overline{A}}=AV^{-1}X,\;\;V=\mathrm{{diag}}(v)\) and \(X=\mathrm{{diag}}(x).\) The third equation in (5) is called the scaled centering equation.
We define the proximity measure \(\delta \) as follows:
We use \(\delta \) to measure the distance between an iterate (x, y, s) and the \(\mu -\)center of the perturbed problem pair \((P_\nu )\) and \((D_\nu ).\)
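For concreteness, and since definition (6) is not reproduced above, the sketch below assumes the classical Roos-type proximity \(\delta (x,s;\mu )=\frac{1}{2}\Vert v-v^{-1}\Vert \) with \(v=\sqrt{xs/\mu };\) this exact form is an assumption on our part.

```python
import numpy as np

def delta(x, s, mu):
    """Proximity of (x, s) to the mu-center, assuming the classical
    choice delta(x, s; mu) = ||v - v^{-1}|| / 2 with v = sqrt(x*s/mu).
    It vanishes exactly when x*s = mu*e, i.e. at the mu-center."""
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)
```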
We now outline one iteration of the algorithm. Starting from the initialization defined in (1), each main iteration consists of a so-called feasibility step, a \(\mu -\)update and a few centrality steps. The feasibility step provides iterates \((x^f,y^f,s^f)\) that are strictly feasible for \((P_{\nu ^+})\) and \((D_{\nu ^+}),\) with \(\nu ^+=(1-\theta )\nu \) and \(\theta \in ]0,1[.\) These iterates belong to the quadratic convergence region with respect to the \(\mu ^+-\)center of \((P_{\nu ^+})\) and \((D_{\nu ^+}),\) with \(\mu ^+=\xi ^2\nu ^+.\) After that, we apply a limited number of centering steps (at most 5). These centering steps produce iterates \((x^+,y^+,s^+)\) that are strictly feasible for \((P_{\nu ^+})\) and \((D_{\nu ^+})\) and satisfy \(\delta (x^+,s^+;\mu ^+)\le \tau .\)
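The role of the \(\mu -\)update can be illustrated with a toy counter: shrinking \(\nu \) by the factor \(1-\theta \) with \(\theta =\alpha /(4\sqrt{n})\) until the scaled gap \(n\mu ^0\nu \) drops below a tolerance takes \(O(\sqrt{n}\,\log (n\mu ^0/\epsilon ))\) outer iterations. The sketch below counts only the \(\nu -\)updates and omits the feasibility and centering steps; the function name is ours.

```python
import math

def outer_iterations(n, mu0, eps, alpha=1.0):
    """Count the main iterations of the scheme above: each iteration
    shrinks nu by (1 - theta) with theta = alpha / (4*sqrt(n)), until
    the scaled duality gap n*mu0*nu drops below eps. Illustration only;
    the real algorithm also performs feasibility and centering steps."""
    theta = alpha / (4.0 * math.sqrt(n))
    nu, k = 1.0, 0
    while n * mu0 * nu > eps:
        nu *= 1.0 - theta
        k += 1
    return k
```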
To obtain iterates that are feasible for \((P_{\nu ^+})\) and \((D_{\nu ^+}),\) we need to solve the following system of equations
to compute the displacements \((\Delta ^f x,\Delta ^f y,\Delta ^f s).\) The feasible new iterates are then defined by
Let the scaled search directions \({d}^f_x\) and \({d}^f_s\) be defined as follows:
System (7) is then rewritten in the following form
where \({\overline{A}}=AV^{-1}X,\;\;V=\mathrm{{diag}}(v),\;\;X=\mathrm{{diag}}(x).\)
Observe that the right-hand side in the last equation of (8) is equal to minus the gradient of the classical logarithmic scaled barrier (proximity) function
where
is the so-called KF of the barrier function \(\Psi (v).\)
Returning to system (8), we can rewrite it as
For a different KF, one gets a different search direction. In this context, we consider a new univariate KF \(\psi \) defined as follows:
Based on our KF, let us define a new proximity measure as follows:
Note that
\(\sigma \) is thus a suitable proximity measure.
The generic primal-dual IIPM is summarized in Algorithm 1.
3 Analysis of the algorithm
In this section, we show that the IIPM based on the new KF is well defined. We first state some technical lemmas that we use in the complexity analysis of the algorithm. Then, we prove the strict feasibility of the iterates obtained after the feasibility step. After that, we derive an upper bound for the number of iterations required by the algorithm to obtain an optimal solution.
3.1 Technical lemmas
For convenience, we give the first two derivatives of \(\psi \):
We can easily verify that \(\psi \) defined in (10) is indeed a KF.
Let \(\delta \) be the proximity defined previously in (6). For simplicity, we write \(\delta (x,s;\mu )=\delta .\) Then, we can take advantage of the following well-known results.
Lemma 3.1
(Lemma II.60 in [17]) Let \(\rho (\delta ):=\delta +\sqrt{1+\delta ^2}.\) Then
Lemma 3.2
(Lemma II.51 in [17]) If \(\delta \le 1\), then the primal-dual Newton step is feasible, i.e. \(x^+\) and \(s^+\) are nonnegative and \((x^+)^Ts^+=n\mu .\) Moreover, if \(\delta <1,\) then
A direct consequence of this lemma is the following corollary.
Corollary 3.3
(Corollary 2.3 in [5]) If \(\delta \le \frac{1}{\root 4 \of {2}}\) then \(\delta (x^+,s^+;\mu )\le \delta ^2\) and
Remark 3.4
Using Corollary 3.3, we conclude that \(\delta (x^+,s^+;\mu ^+)\le \tau \) will hold after
centering steps.
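The quadratic decrease \(\delta \mapsto \delta ^2\) of Corollary 3.3 can be checked numerically: starting from \(\delta \le \frac{1}{\root 4 \of {2}}\) and targeting \(\tau =\frac{1}{16},\) the required number of centering steps stays within the bound of at most 5 quoted above. The helper below is ours.

```python
def centering_steps_needed(delta0=2 ** -0.25, tau=1 / 16):
    """Quadratic convergence of the centering phase (Corollary 3.3):
    each full centering step maps delta to at most delta**2. Count how
    many steps bring delta from delta0 down to at most tau."""
    delta, k = delta0, 0
    while delta > tau:
        delta = delta ** 2
        k += 1
    return k
```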
The following lemma provides an important feature of the hyperbolic cotangent function that enables us to prove that the new KF is efficient.
Lemma 3.5
One has
Proof
Let us define the function l as follows:
Differentiating l, we obtain
Recall that
Using this property of the hyperbolic functions, we can rewrite \(l^\prime \) as follows:
Moreover, the Taylor expansion of the hyperbolic sine function implies that
It follows that the function l is strictly increasing on the interval \(]0,+\infty [.\) Since
we conclude that \(t\coth (t)-1>0,\;\forall t>0.\) \(\square \)
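The inequality of Lemma 3.5 is easy to spot-check numerically; the helper name `l_gap` is ours.

```python
import numpy as np

def l_gap(t):
    """The quantity t*coth(t) - 1 from Lemma 3.5, with coth = cosh/sinh.
    It is positive for all t > 0 and tends to 0 as t -> 0+."""
    return t * np.cosh(t) / np.sinh(t) - 1.0
```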
The following lemma plays an important role in the complexity analysis of our algorithm.
Lemma 3.6
One has
Proof
It is clear that (14) is satisfied for \(t=1.\) Let g be the function defined for \(t>0\) as follows:
For this function, we have
An important observation is that \(\lim \limits _{t\rightarrow 0^+}g(t)=\lim \limits _{t\rightarrow +\infty }g(t)=-\infty \) and \(g(1)=0.\) So, to prove the inequality (14), it suffices to verify that
We start with the case \(t> 1.\) Since the function \(t\mapsto \sinh (t)\) is monotonically increasing, we have
Differentiating h and using inequality (13), we can easily show that \(h(t)<0,\;\forall t> 1.\)
We now turn to the second case, \(0<t< 1.\) We rewrite \(g^\prime \) as follows:
where the inequalities follow from the monotonicity of the hyperbolic cosine function and the fact that \(\sinh (1)\cosh (1)<2.\) This completes the proof. \(\square \)
A direct consequence of this lemma is the following result which gives an upper bound for \(\sigma \) in terms of \(\delta .\)
Lemma 3.7
One has
Proof
The proof is a direct consequence of Lemma 3.6 and definitions (11) and (6). \(\square \)
3.2 Analysis of the feasibility step
In this subsection, we show that after the feasibility step the new iterates are positive and lie within the region where the Newton process targeting the \(\mu ^+-\)centers of \((P_{\nu ^+})\) and \((D_{\nu ^+})\) is quadratically convergent, i.e.
where \(x^f,y^f\) and \(s^f\) denote the new iterates generated by the feasibility step, which satisfy the feasibility conditions of \((P_{\nu ^+})\) and \((D_{\nu ^+}).\) Using (12) and the third equation of (9), we have
The next lemma reveals a sufficient condition for the strict feasibility of the feasibility step.
Lemma 3.8
The new iterates \((x^f,y^f,s^f)\) are strictly feasible if
Proof
We first introduce a step length \(\alpha \in [0,1].\) Then, we define
Note that \(x^\alpha =x\) for \(\alpha =0\) and \(x^\alpha =x^f\) for \(\alpha =1\) (and similarly for \(s^\alpha \)), and that \(xs>0.\) Hence, using notations (4) and inequality (16), we get
Obviously \(\left( (1-\alpha )v^2+\alpha (1-\alpha )\dfrac{v\sinh (e)}{2\sinh (v)}\right) \ge 0\) since \(\dfrac{v\sinh (e)}{2\sinh (v)}\ge 0,\;\forall v>0.\) Thus, for every \(\alpha \in [0,1],\) \(x^\alpha s^\alpha >0\) which implies that none of the components of \(x^\alpha \) and \(s^\alpha \) vanishes. Taking \(\alpha =1,\) we obtain \(x^f>0\) and \(s^f>0.\) This completes the proof. \(\square \)
In the sequel, we denote
and
Remark 3.9
Using the previous notations, it follows that
The following lemma gives an upper bound for the proximity function after the feasibility step.
Lemma 3.10
If \(\frac{v^2}{2}+\frac{v\sinh (e)}{2\sinh (v)}+d^f_xd^f_s>0,\) then
Proof
Definition (6) implies that
Using (15) and Remark 3.9, we get
Moreover, using definition (6), Lemma 3.1 and Lemma 3.6 we obtain the following inequalities
and
The substitution of inequalities (19) and (20) into (18) yields the following inequality
Using (15) and the definition of \(v^f,\) we have
where the last inequality is due to Lemma 3.1 and (20). The substitution of (21) and (22) in (17) yields the desired inequality. \(\square \)
Corollary 3.11
Let \(n\ge 2,\) \(\delta \le \tau .\) Choosing \(\tau =\dfrac{1}{16},\;\omega \le \dfrac{1}{2\sqrt{2}},\) and \(\theta =\dfrac{\alpha }{4\sqrt{n}}\) with \(\alpha \le 1,\) we have
1. The iterates \((x^f,y^f,s^f)\) obtained after the feasibility step are strictly feasible, i.e., \(x^fs^f>0.\)
2. After the feasibility step, the new iterates \((x^f,y^f,s^f)\) lie within the region where the Newton process targeting the \(\mu ^+-\)centers of \((P_{\nu ^+})\) and \((D_{\nu ^+})\) is quadratically convergent, i.e., \(\delta (v^f)\le \dfrac{1}{\root 4 \of {2}}.\)
Proof
For the first item, using (22), we obtain
where the second inequality is due to the fact that \(\rho (\delta )\) is monotonically increasing with respect to \(\delta .\) The result then follows from Lemma 3.8. As for the second item, it is clear from the proof of the first item that \(\dfrac{v^2}{2}+\dfrac{v\sinh (e)}{2\sinh (v)}+d^f_xd^f_s>0.\) Therefore, using Lemma 3.10, we obtain
which implies that \(\delta (v^f)\le \dfrac{1}{\root 4 \of {2}}.\) This completes the proof. \(\square \)
In the rest of this section, we will assume that
3.3 Upper bounds for \(\omega \) and \(\Vert q \Vert \)
In this subsection, we follow the same procedure as in Section 4.3 of [16]. Let \({\mathcal {L}}\) be the null space of the matrix \({\bar{A}}.\) Note that the affine space
equals \(d_x+{\mathcal {L}}.\) Furthermore, the row space of \({\bar{A}}\) equals the orthogonal complement \({\mathcal {L}}^ \perp \) of \({\mathcal {L}},\) defined as follows:
An obvious observation is that \(d_s\in \theta \nu v s^{-1}{r_c}^0+{\mathcal {L}}^ \perp .\) In addition, \({\mathcal {L}}^ \perp \cap {\mathcal {L}}=\{0\}.\) As a consequence, the affine spaces \(d_x+{\mathcal {L}}\) and \(d_s+{\mathcal {L}}^ \perp \) meet in a unique point q, i.e., q is the solution of the system
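System (23) itself is not shown above, but the characterization of q as the unique point where \(d_x+{\mathcal {L}}\) and \(d_s+{\mathcal {L}}^\perp \) meet already determines it: since \(q-d_x\in {\mathcal {L}}\) and \(q-d_s\in {\mathcal {L}}^\perp ,\) we have \(q=P_{{\mathcal {L}}^\perp }d_x+P_{{\mathcal {L}}}d_s.\) A dense NumPy sketch (function name ours):

```python
import numpy as np

def intersection_point(Abar, dx, ds):
    """Unique point q in (dx + L) ∩ (ds + L_perp), where L is the null
    space of Abar. Uses q = P_{L_perp} dx + P_L ds, with P_{L_perp} the
    orthogonal projector onto the row space of Abar."""
    P_row = np.linalg.pinv(Abar) @ Abar      # projector onto L_perp
    P_null = np.eye(Abar.shape[1]) - P_row   # projector onto L
    return P_row @ dx + P_null @ ds
```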
Lemma 3.12
Let q be the unique solution of (23). Then,
Proof
The lemma is obtained using the same arguments as in Lemma 4.6 in [16] with \(r=-\nabla \Psi (v).\) \(\square \)
Lemma 3.13
(Lemma 4.7 in [16]) One has
Corollary 3.14
(Corollary 3.10 in [5]) Let \(\tau =\dfrac{1}{16}\) and \(\delta (v)\le \tau .\) Then
Recall that \(\delta (v^f)\le \frac{1}{\root 4 \of {2}}\) holds if \(\omega \le \frac{1}{2\sqrt{2}}.\) Using Lemma 3.12 and Lemma 3.7, this will certainly hold if
Alternatively, from Lemma 3.13 and Corollary 3.14, it follows that
As in [16], using \(\mu =\mu ^0\nu =\nu \xi ^2\) and \(\theta =\frac{\alpha }{4\sqrt{n}}\), we obtain the following upper bound for the norm of q
Let us denote
and
We rewrite the upper bound for the norm of q as follows:
After some calculations, we conclude that
Now, since \(\Vert q\Vert \le \frac{1}{2}\alpha {\bar{\kappa }}(\xi ),\) the latter inequality is certainly satisfied if
The following result gives a range for the parameter \({\bar{\kappa }}(\xi ).\) We follow the techniques of Section 4.6 in [16] and skip some details of the proof, since it does not depend on the proposed KF.
Lemma 3.15
One has
Proof
From initialization (1), we get \(\kappa (\xi )=1.\) This yields the left-hand side of the inequality. For the other side, let \(x^*\) be an optimal solution of (P) and \((y^*,s^*)\) an optimal solution of (D). For simplicity, we denote \(x=x(\mu ,\nu ),\) \(y=y(\mu ,\nu )\) and \(s=s(\mu ,\nu ).\) Hence, the triple (x, y, s) is the unique solution of the following system
Using the fact that the row space of a matrix and its null space are orthogonal, we get
Let us define
From (25), we get
Therefore, since \({x^*}^Ts^*=0,\) and \(x^*+s^*\le \xi e\) we can obtain
In addition, we can easily verify that
Using (26), (27) and (28) it follows that
Using the equivalence between the Euclidean norm and the 1-norm, we get the final inequality. \(\square \)
3.4 Iteration bound
We now derive the iteration bound. As shown in the previous subsections, starting from an iterate (x, y, s) satisfying \(\delta (x,s;\mu )\le \tau \) with \(\tau \) and \(\theta \) defined previously, the new iterate \((x^f,y^f,s^f)\) is strictly feasible and \(\delta (x^f,s^f;\mu ^+)\le \frac{1}{\root 4 \of {2}}.\) Moreover, according to Remark 3.4, the number of centrality steps needed to obtain iterates \((x^+,y^+,s^+)\) satisfying \(\delta (x^+,s^+;\mu ^+)\le \tau \) is at most 5. Therefore, the total number of main iterations is bounded by
Let us recall that \(\theta =\dfrac{\alpha }{4\sqrt{n}}.\) Thus, using (24) and the fact that \((x^0)^T s^0=n\xi ^2,\) we obtain the following upper bound for the total number of iterations
From Lemma 3.15, we can now state the final result of this section, which summarizes the complexity bound.
Theorem 3.16
Let (P) and (D) be feasible and \(\xi >0\) such that
for some optimal solutions \(x^*\) of (P) and \((y^*,s^*)\) of (D). Then, the algorithm finds an \(\epsilon -\)solution of (P) and (D) after at most
iterations.
4 Numerical tests
In this section, we showcase the performance of the proposed algorithm (Algorithm 1) through some preliminary numerical tests. Our experiments were implemented in MATLAB R2012b and performed on a Supermicro dual 2.80 GHz Intel Core i5 server with 4.00 GB RAM.
We compare our algorithm (Alg. 1) with the SeDuMi solver using the average CPU time, i.e., the time needed to obtain an optimal solution. The comparison was done on a set of seven Netlib problems. In our implementation, we set \(\epsilon = 10^{-4},\) \( \tau =\dfrac{1}{16}\) and \((x^0,y^0,s^0)=(\xi e,0,\xi e),\) with \(\xi \) chosen such that \(\Vert x^*+s^*\Vert _{\infty }\le \xi .\) As for \(\theta ,\) we use the fixed values \(\theta \in \{0.4,0.5,0.6,0.7\},\) because they perform better than the theoretical value \(\theta =\dfrac{1}{5\sqrt{2}n}.\) The results are summarized in Table 1; for each example, the best, i.e., shortest, CPU time is highlighted in bold.
Table 1 shows that our algorithm and the SeDuMi solver take similar times to obtain an optimal solution.
5 Conclusions and remarks
This paper proposed a full-Newton step infeasible IPA with new search directions. The centrality step is derived using the classical search direction, while the feasibility step is induced by a KF with a hyperbolic barrier term. The complexity analysis of the primal-dual infeasible IPA based on this KF shows that the proposed algorithm enjoys the currently best iteration bound for LO. Furthermore, numerical tests on a selected set of Netlib problems indicate that our algorithm is promising in practice.
References
Dantzig, G.B.: Maximization of a linear function of variables subject to linear inequalities. In: Koopmans, T.C. (ed.) Activity Analysis of Production and Allocation, pp. 339–347. Wiley, New York (1951)
Guerdouh, S., Chikouche, W., Touil, I.: An efficient primal-dual interior point algorithm for linear optimization problems based on a novel parameterized kernel function with a hyperbolic barrier term. halshs-03228790 (2021)
Kantorovich, L.V.: Mathematical methods in the organization and planning of production. Publication House of the Leningrad State University, 1939. Transl. Manag. Sci. 6, 366–422 (1960)
Karmarkar, N.K.: A new polynomial-time algorithm for linear programming. Combinatorica 4(4), 373–395 (1984)
Kheirfam, B., Haghighi, M.: A full-Newton step infeasible interior-point method for linear optimization based on a trigonometric kernel function. Optimization 65(4), 841–857 (2016)
Kheirfam, B., Haghighi, M.: A full-Newton step infeasible interior-point method based on a trigonometric kernel function without centering steps. Numer. Algorithms 85, 59–75 (2020)
Kojima, M., Mizuno, S., Yoshise, A.: A primal-dual interior point algorithm for linear programming. In: Megiddo, N. (ed.) Progress in Math. Program, pp. 29–47. Springer, New York (1989)
Liu, Z., Sun, W.: An infeasible interior-point algorithm with full-Newton step for linear optimization. Numer. Algorithms 46(2), 173–188 (2007)
Liu, Z., Sun, W., Tian, F.: A full-Newton step infeasible interior-point algorithm for linear programming based on a kernel function. Appl. Math. Optim. 60, 237–251 (2009)
Lustig, I.J.: Feasibility issues in a primal-dual interior-point method for linear programming. Math. Program. 49(1–3), 145–162 (1990)
Mansouri, H., Roos, C.: Simplified \({\cal{O} }(nL)\) infeasible interior-point algorithm for linear optimization using full-Newton step. Optim. Methods Softw. 22(3), 519–530 (2007)
Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)
Monteiro, R.D., Adler, I.: Interior path following primal-dual algorithms. Part I: Linear programming. Math. Program. 44(1), 27–41 (1989)
Moslemi, M., Kheirfam, B.: Complexity analysis of infeasible interior-point method for semidefinite optimization based on a new trigonometric kernel function. Optim. Lett. 13, 127–145 (2019)
Rigó, P.R., Darvay, Z.: Infeasible interior-point method for symmetric optimization using a positive-asymptotic barrier. Comput. Optim. Appl. 71, 483–508 (2018)
Roos, C.: A full-Newton step \({\cal{O} }(n)\) infeasible interior-point algorithm for linear optimization. SIAM J. Optim. 16(4), 1110–1136 (2006)
Roos, C., Terlaky, T., Vial, J.-Ph.: Theory and Algorithms for Linear Optimization: An Interior Point Approach. Wiley, Chichester (1997)
Salahi, M., Peyghami, M.R., Terlaky, T.: New complexity analysis of IIPMs for linear optimization based on a specific self-regular function. Eur. J. Oper. Res. 186(2), 466–485 (2008)
Touil, I., Chikouche, W.: Novel kernel function with a hyperbolic barrier term to primal-dual interior point algorithm for SDP problems. Acta Math. Appl. Sin. Engl. Ser. 38, 44–67 (2022)
Touil, I., Chikouche, W.: Primal-dual interior point methods for semidefinite programming based on a new type of kernel functions. Filomat 34(12), 3957–3969 (2020)
Acknowledgements
This work has been supported by the general direction of scientific research and technological development (DGRSDT) under project PRFU No. C00L03UN180120220008. Algeria. The authors would like to thank the anonymous reviewers for the valuable comments and suggestions which improved the quality of the paper.
Ethics declarations
Conflict of interest
No conflict of interest was reported by the authors.
Cite this article
Guerdouh, S., Chikouche, W. & Kheirfam, B. A full-Newton step infeasible interior-point algorithm based on a kernel function with a new barrier term. J. Appl. Math. Comput. 69, 2935–2953 (2023). https://doi.org/10.1007/s12190-023-01858-8
Keywords
- Linear optimization
- Infeasible interior-point methods
- Kernel function
- Full-Newton step
- Complexity analysis