Abstract
In this paper, we present a new corrector-predictor interior-point method for solving semidefinite optimization problems. We use an algebraic equivalent transformation of the centering equation of the system which defines the central path. The algebraic transformation plays an essential role in the calculation of the new search directions. We prove that the iteration complexity of the algorithm coincides with the best known one for interior-point methods (IPMs). To the best of our knowledge, this is the first corrector-predictor interior-point algorithm for semidefinite optimization that uses the search directions obtained from this algebraic transformation. Finally, some numerical experiments are provided to demonstrate the efficiency of the new algorithm.
1 Introduction
Semidefinite optimization (SDO) optimizes a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space. Many practical problems in combinatorial optimization can be modeled or approximated as SDO problems [1]. SDO is also used in control theory [4] and in eigenvalue optimization problems [23]. SDO problems can be solved efficiently by IPMs. Independently, Nesterov and Nemirovskii [27] and Alizadeh [2] generalized IPMs from linear optimization (LO) to SDO. Articles on IPMs for SDO have been published by Helmberg et al. [14], de Klerk [12], Halicka et al. [13], Kheirfam [19, 20] and Wang et al. [32].
The full-Newton step feasible IPM was first introduced for LO by Roos et al. [30]. They provided a novel convergence analysis of the method and obtained the best known iteration complexity for such methods. The extension of this method to SDO based on Nesterov–Todd (NT) directions is discussed by de Klerk [12]. In 2003, an algebraic equivalent transformation (AET) of the system that defines the central path was proposed by Darvay [5]. He applied Newton's method to the resulting system to obtain search directions based on the square root function, and provided a full-Newton step IPM for LO. The method was later generalized to SDO [32], second-order cone optimization (SOCO) [31], symmetric cone optimization (SCO) [33] and convex quadratic symmetric cone optimization (CQSCO) [3]. Subsequently, Wang et al. [34, 35] improved the complexity analysis of IPMs for SDO and SCO, respectively.
An important class of interior-point algorithms which have proven to be efficient in practice are the predictor-corrector (PC) IPMs (see [25, 26]). The first PC interior-point algorithm that uses the AET method for defining the search directions was proposed by Darvay [6] for LO. Later on, Kheirfam [16,17,18] developed corrector-predictor (CP) IPMs for the \(P_*(\kappa )\) linear complementarity problem, convex quadratic symmetric cone optimization and SOCO, which use the search directions based on the square root function. Darvay et al. [11] introduced a CP interior-point algorithm for LO. The authors used the AET method based on the difference of the identity map and the square root function, which was first proposed by Darvay et al. [10] for a full-Newton step IPM. Later on, Darvay et al. [7] and Kheirfam et al. [21] generalized this method to \(P_*(\kappa )\)-LCPs and SCO, respectively.
The aim of this paper is to give a new CP IPM for SDO. The method uses the AET technique based on the transformation \(\bar{\psi }(t)=\psi (\sqrt{t})\) with \(\psi (t)=t^2\), which was first introduced in [9]. To the best of our knowledge, this is the first CP interior-point algorithm for SDO that uses the aforementioned AET, and therefore its analysis differs from the existing ones. We prove that the complexity of the algorithm coincides with the best known one for IPMs.
The paper is organized as follows. In Sect. 2 the primal-dual pair of SDO problem, the notation of the central path and the main idea of the AET of the system defining the central path are given. In Sect. 3 we present the new CP interior-point algorithm. Section 4 is devoted to the convergence analysis of the proposed algorithm. In Sect. 5 we present some numerical results. Concluding remarks are given in Sect. 6.
2 The SDO Problem
Let us consider the following primal-dual pair of SDO problems
and
where \(C, X, S\in \mathbb {S}^n\), \(b_i\in {\mathbb R}\) and \(A_i\in {\mathbb S}^n, i=1, \ldots , m\) are linearly independent. Here, \(\mathbb {S}^n\) denotes the set of real symmetric \(n\times n\) matrices and \(X\succeq 0\) means that X is a symmetric positive semidefinite matrix. For convenience, we denote the feasible set of the primal-dual pair of (P) and (D) as
and its relative interior by
where \(X\succ 0\) means that X is a symmetric positive definite matrix. Throughout this paper, we assume that \(\mathcal {F}^0\ne \emptyset \). Under this assumption, it is well known that the problems (P) and (D) have optimal solutions and their optimal values coincide. Hence, the set of optimal solutions consists of all solutions \((X, y, S)\in {\mathbb S^n}\times \mathbb {R}^m\times \mathbb {S}^n\) of the following system [12]:
where the last equality is the so-called complementarity condition. Primal-dual path-following IPMs usually replace the complementarity condition with \(XS=\mu I\), where \(\mu \in \mathbb {R}, \mu >0\). This yields
It is also well known that the system (2) has a unique solution \((X(\mu ), y(\mu ), S(\mu ))\) for each \(\mu >0\) [22, 27]. The set of all such solutions is called the central path. The limit of the central path exists as \(\mu \downarrow 0\) and is a solution of (1) [13].
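For concreteness, the optimality system (1) and the central path system (2) take the following standard form (cf. [12]); the display below is a sketch of this standard formulation, consistent with the surrounding discussion:

```latex
\begin{aligned}
&\mathrm{tr}(A_iX)=b_i,\quad i=1,\dots,m, &&X\succeq 0,\\
&\sum_{i=1}^{m}y_iA_i+S=C, &&S\succeq 0,\\
&XS=0,
\end{aligned}
\qquad\text{and (2) replaces } XS=0 \text{ by } XS=\mu I,\ X,S\succ 0.
```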
Let
where
Note that the matrices D and V are symmetric and positive definite. Thus, we have
Now the central path problem (2) can be equivalently stated as
Let \(\bar{\psi }: (\xi ^2, \infty )\rightarrow \mathbb {R}\), with \(0 \le \xi < 1\), be a continuously differentiable, invertible and monotone increasing function. Now, by applying the AET to the central path problem in the form (2), we have
However, if the AET is applied to (4), using the continuously differentiable, invertible function \(\psi : (\xi ^2, \infty )\rightarrow \mathbb {R}\) such that \(2t\psi ^{'}(t^2)-\psi ^{'}(t)>0\) for all \(t>\xi \) with \(0 \le \xi < 1\) [8], then we have
Assuming that we are given a feasible solution (X, y, S) with \(X\succ 0\) and \(S\succ 0\) and applying Newton’s method to system (6) leads to the following system, which defines the search directions \(\Delta X, \Delta y\) and \(\Delta S\)
Applying Lemma 2.5 in [32], the third equation of system (7) can be written as
Therefore, we obtain the following system
Clearly, from the second equation of (8) we have \(\Delta S\in \mathbb {S}^n\). However, in general \(\Delta X\) does not belong to \(\mathbb {S}^n\), because \(X\Delta S S^{-1}\) may be non-symmetric. To remedy this situation, many researchers have proposed methods for symmetrizing the third equation of the system (8) such that the resulting new system has a unique symmetric solution. Here, we consider the symmetrization scheme that yields the Nesterov–Todd (NT) direction [28, 29]. In the NT scheme, the term \(X\Delta S S^{-1}\) is replaced by \(D^2\Delta S (D^2)^T\), where D is defined as in (3). Moreover, let us define
Using these notations, the Newton system (8) can be written as
where
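As a numerical sanity check on the NT scaling, the following sketch (helper names are ours, for illustration only) computes \(D=P^{1/2}\) with \(P=X^{1/2}(X^{1/2}SX^{1/2})^{-1/2}X^{1/2}\), the standard NT scaling point, and verifies the identity \(D^{-1}XD^{-1}=DSD\), so that the scaled matrix V is symmetric positive definite:

```python
import numpy as np

def sym_pow(A, power=0.5):
    """Matrix power of a symmetric positive definite matrix via eigendecomposition."""
    A = (A + A.T) / 2                  # guard against rounding asymmetry
    w, Q = np.linalg.eigh(A)
    return (Q * w**power) @ Q.T

def nt_scaling(X, S):
    """NT scaling matrix D = P^{1/2}, P = X^{1/2}(X^{1/2} S X^{1/2})^{-1/2} X^{1/2}."""
    Xh = sym_pow(X)
    P = Xh @ sym_pow(Xh @ S @ Xh, power=-0.5) @ Xh
    return sym_pow(P)

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); X = A @ A.T + n * np.eye(n)  # random SPD X
B = rng.standard_normal((n, n)); S = B @ B.T + n * np.eye(n)  # random SPD S

D = nt_scaling(X, S)
Dinv = np.linalg.inv(D)
mu = np.trace(X @ S) / n
V = Dinv @ X @ Dinv / np.sqrt(mu)      # scaled matrix V, as in (3)

assert np.allclose(Dinv @ X @ Dinv, D @ S @ D)   # the NT identity
assert np.allclose(V, V.T) and np.all(np.linalg.eigvalsh(V) > 0)
```

The identity holds because \(PSP=X\) for the NT scaling point, hence \(D^{-1}XD^{-1}=DSD\) and the two candidate definitions of V agree.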
3 Corrector-Predictor Algorithm
In this section, we present a corrector-predictor interior-point algorithm based on the search directions obtained by using the function \(\psi (t)=t^2, t>\frac{1}{\sqrt{2}}\) introduced in [9]. In this way, we have
and the third equation of (8) becomes
For analysis of our algorithm, we define a norm-based proximity measure \(\delta (X, S; \mu )\) as follows:
Using this, we give the \(\tau \)-neighborhood of the central path as follows:
where \(\tau \in (0, 1)\). Our algorithm starts with a given point \((X, y, S)\in \mathcal {N}(\tau )\) and performs corrector and predictor steps.
In a corrector step, the search direction \((D_X^c, \Delta ^cy, D^c_S)\) is obtained by solving the following scaled system:
Using the first two equations of the system (13), it follows that
Now, using (9), we have \(\Delta ^c X=\sqrt{\mu }DD^c_XD\) and \(\Delta ^cS=\sqrt{\mu }D^{-1}D^c_SD^{-1}\). In this way, the corrector iterate is calculated by considering a full-NT step as follows:
In general, the goal of the predictor step is to reach the optimal solution of the underlying problem in a greedy way. This corresponds to the case when we set \(\mu =0\) in (11), which leads to
or equivalently, in terms of scaled directions, we have
Therefore, after updating
where \(\mu =\frac{\textrm{tr }(XS)}{n}\), in the predictor step we compute the search direction \((D^p_X, \Delta ^py, D^p_S)\) by solving the following scaled Newton system
Then, we compute the predictor directions in the original space as
and the new predictor iterate is given by
where \(\theta \in (0, 1)\) is the update parameter and \(\mu ^p=(1-\frac{1}{2}\theta )\mu \). We expect the new iterate to belong to the same neighborhood, that is, \((X^p, y^p, S^p)\in \mathcal {N}(\tau ).\) The algorithm repeats corrector and predictor steps alternately until \(\textrm{tr}(XS)\le \varepsilon \) is satisfied.
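To illustrate the effect of the \(\mu \)-update, the following toy loop (a sketch only: the corrector and predictor direction computations are abstracted away) counts main iterations for the parameter choice \(\theta =\frac{1}{3\sqrt{n}}\) fixed later in the analysis, using \(n\mu \) as a proxy for the duality gap \(\textrm{tr}(XS)\):

```python
import math

def main_iterations(n, mu0, eps):
    """Count main iterations until n*mu <= eps, with mu <- (1 - theta/2)*mu per iteration."""
    theta = 1.0 / (3.0 * math.sqrt(n))
    k, mu = 0, mu0
    while n * mu > eps:
        # corrector step: mu is unchanged, the iterate is re-centered
        mu *= 1.0 - theta / 2.0   # predictor step: mu-update
        k += 1
    return k

n, mu0, eps = 100, 1.0, 1e-5
k = main_iterations(n, mu0, eps)
# consistent with an O(sqrt(n) log(n*mu0/eps)) bound, since 2/theta = 6*sqrt(n)
assert k <= math.ceil(6 * math.sqrt(n) * math.log(n * mu0 / eps))
```

The assertion mirrors the shape of the complexity bound established in Sect. 4; the constant 6 comes from \(2/\theta =6\sqrt{n}\).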
4 Analysis of the Algorithm
In this section, we will analyze the corrector and the predictor steps in detail, respectively. We first recall some results from [12] which are required in the rest of this paper.
Lemma 1
Suppose that \(X\succ 0\) and \(S\succ 0\). Moreover, let \(X(\alpha )=X+\alpha \Delta X\) and \(S(\alpha )=S+\alpha \Delta S\) for \(0\le \alpha \le 1\). If one has
then \(X(\bar{\alpha })\succ 0\) and \(S(\bar{\alpha })\succ 0.\)
Lemma 2
Suppose that \(Q\in \mathbb {S}^n_{++}\), and let \(M\in \mathbb {R}^{n\times n}\) be skew-symmetric. Then \(\det (Q+M)>0\). Moreover, if \(\lambda _i(Q+M)\in \mathbb {R}, i=1, \ldots , n\), then
which implies \(Q+M\succ 0\).
Lemma 3
Let \(D_X, D_S\in \mathbb {S}^n\) be such that \(\textrm{tr }(D_XD_S)=0\). Then
where \(D_{XS}:=\frac{1}{2}(D_XD_S+D_SD_X).\)
Let \(Q_V=D^c_X-D^c_S\). In this way, by using (14), we have
Thus, we can define
Furthermore, we have
hence
4.1 The Corrector Step
The next lemma gives a condition which guarantees the strict feasibility of the corrector step.
Lemma 4
Let \(\delta :=\delta (X, S; \mu )<1\) and \(\lambda _{\min }(V)>\frac{1}{\sqrt{2}}\). Then \(X_+\succ 0\) and \(S_+\succ 0\).
Proof
Let us consider \(0\le \alpha \le 1\) and denote \(X(\alpha )=X+\alpha \Delta ^cX\) and \(S(\alpha )=S+\alpha \Delta ^cS\). In this way, using (3) and (9), we have
We have
It is clear that \(M(\alpha )\) is skew-symmetric and also we have
From this last inequality and (17) we deduce that
In view of the above inequality, \(Q(\alpha )\) is positive definite if \(\alpha \le 1\) and
The last condition follows from
Thus, by Lemma 2, \(\det (X(\alpha )S(\alpha ))>0\) for all \(\alpha \in [0, 1]\); in addition, since \(X(0)=X\succ 0\) and \(S(0)=S\succ 0\), Lemma 1 implies that \(X(1)=X_+\succ 0\) and \(S(1)=S_+\succ 0\). This completes the proof. \(\square \)
Lemma 5
Let \(\delta :=\delta (X, S; \mu )<\frac{1}{\sqrt{2}}\) and \(\lambda _{\min }(V)>\frac{1}{\sqrt{2}}\). Then, \(\lambda _{\min }(V_+)>\frac{1}{\sqrt{2}}\) and
Proof
From (18), with \(\alpha =1\), we obtain
Therefore, we have
where the first inequality is due to the fact that M(1) is skew-symmetric. Besides, since \(\frac{P_V^2}{4}\succeq 0\) and \(V^2+VP_V\succeq I\), we have
Therefore,
Using \(\delta <\frac{1}{\sqrt{2}}\), we have \(\lambda _{\min }\big (V_+\big )\ge \sqrt{1-\delta ^2}>\frac{1}{\sqrt{2}}\). This proves the first part of the lemma. Moreover, we have
Let us consider the function \(f(t)=\frac{t}{2t^2-1}\) for \(t>\frac{1}{\sqrt{2}}\). Since \(f^{'}(t)<0\), the function f is decreasing for \(t>\frac{1}{\sqrt{2}}\). Using this, together with (22), (21) and (20), we obtain
This completes the proof. \(\square \)
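The monotonicity claim used in the proof above follows from \(f'(t)=-\frac{2t^2+1}{(2t^2-1)^2}<0\); a quick grid-based sanity check (illustrative only):

```python
import numpy as np

# f(t) = t/(2t^2 - 1) from the proof of Lemma 5 is decreasing for t > 1/sqrt(2)
t = np.linspace(1/np.sqrt(2) + 1e-3, 50.0, 10_000)
f = t / (2*t**2 - 1)
assert np.all(np.diff(f) < 0)   # strictly decreasing on the grid
```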
Lemma 6
Suppose that \(\delta :=\delta (X, S; \mu )<\frac{1}{\sqrt{2}}\) and \(\lambda _{\min }(V)>\frac{1}{2}\). Then
Proof
Since \(V^2_+\sim \frac{X_+S_+}{\mu }\), we have \(\textrm{tr}(X_+S_+)=\mu \textrm{tr}(V^2_+)\). Therefore, from (19) it follows that
where the first inequality is due to the fact that M(1) is skew-symmetric. \(\square \)
4.2 The Predictor Step
The next lemma gives a sufficient condition for the strict feasibility of the predictor step.
Lemma 7
Let \((X_+, y_+, S_+)\in \mathcal {F}^0\) and \(\mu >0\). Let
denote the iterates after a predictor step, where \(\theta \in [0, 1]\). Then \((X^p, y^p, S^p)\in \mathcal {F}^0\) if
where \(\delta _+=\delta (X_+, S_+; \mu )\) and \(\rho (\delta _+)=\delta _++\sqrt{1+\delta _+^2}\).
Proof
Let \(0\le \alpha \le 1\). We set \(X^p(\alpha )=X_++\alpha \theta \Delta ^pX\) and \(S^p(\alpha )=S_++\alpha \theta \Delta ^pS.\) In this way, we have
Note that
and
is skew-symmetric. Invoking Lemmas 2 and 3, and using (23) it follows that
where the third inequality follows from Lemma 3, and the last inequality follows from the third equation of (15) and the fact that the function \(g(\alpha )=\frac{\alpha ^2\theta ^2}{1-\alpha \theta /2}\) is increasing in \(\alpha \in [0,1]\) for each fixed \(0<\theta <1\), so that \(g(\alpha )\le g(1)\).
Now, we define \(\sigma _+=\frac{1}{2}\big \Vert V_+^{-1}-V_+\big \Vert _F\), a measure first introduced by Jiang [15] (without the factor \(\frac{1}{2}\)) and used with the factor \(\frac{1}{2}\) in [12]. In this case, for each \(1\le i\le n\) we have
On the other hand, we have
where the inequality follows from \(V_+\succ \frac{1}{\sqrt{2}}I\) and the fact that \(f(t)=\frac{t^2}{2t^2-1}\ge \frac{1}{2}\) for \(t>\frac{1}{\sqrt{2}}\). The above inequality implies that \(\sigma _+\le 2\delta _+\). Since \(\rho (\sigma _+)\) is increasing with respect to \(\sigma _+\), we obtain \(\rho (\sigma _+)\le \rho (2\delta _+)\). Hence, \(\lambda _{\min }(V_+)\ge \frac{1}{\rho (2\delta _+)}\). Therefore,
The above inequality implies that \(\det (X^p(\alpha )S^p(\alpha ))>0\) for each \(0\le \alpha \le 1\). Therefore, in view of Lemma 1 it follows that \(X^p(1)=X^p\succ 0\) and \(S^p(1)=S^p\succ 0\). The proof is completed. \(\square \)
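Two elementary facts used in the proof above can be checked on a grid (sanity check only): \(f(t)=\frac{t^2}{2t^2-1}\ge \frac{1}{2}\) and is decreasing for \(t>\frac{1}{\sqrt{2}}\), and \(\rho (\delta )=\delta +\sqrt{1+\delta ^2}\) is increasing:

```python
import numpy as np

t = np.linspace(1/np.sqrt(2) + 1e-3, 50.0, 10_000)
f = t**2 / (2*t**2 - 1)
assert np.all(f >= 0.5)          # f(t) >= 1/2 for t > 1/sqrt(2)
assert np.all(np.diff(f) < 0)    # f is decreasing there

d = np.linspace(0.0, 10.0, 1_000)
rho = d + np.sqrt(1 + d**2)      # rho(delta) = delta + sqrt(1 + delta^2)
assert np.all(np.diff(rho) > 0)  # rho increasing, so rho(sigma_+) <= rho(2*delta_+)
```

Indeed, \(f(t)-\frac{1}{2}=\frac{1}{2(2t^2-1)}>0\) on this interval, which confirms the bound analytically as well.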
Let \(V^p=\frac{1}{\sqrt{\mu ^p}}{(D^p)}^{-1}X^p{(D^p)}^{-1}=\frac{1}{\sqrt{\mu ^p}}D^pS^pD^p,\) where
In this way, we have
Invoking (26) with \(\alpha =1\), together with (27), we obtain
Lemma 8
Let \(X_+\succ 0\) and \(S_+\succ 0\) be a primal-dual feasible solution and \(\mu ^p=(1-\frac{\theta }{2})\mu \), where \(0<\theta <1\). Moreover, let \(h(\delta _+, \theta , n)>\frac{1}{2}\) and let \((X^p, y^p, S^p)\) denote the iterate after a predictor step. Then, \(\lambda _{\min }(V^p)>\frac{1}{\sqrt{2}}\) and
Proof
From the assumption \(h(\delta _+, \theta , n)>\frac{1}{2}\) and (28) it follows that \(\lambda _{\min }(V^p)>\frac{1}{\sqrt{2}}\). From Lemma 7, together with \(h(\delta _+, \theta , n)>\frac{1}{2}>0\), we deduce that the predictor step is strictly feasible; i.e., \((X^p, y^p, S^p)\in \mathcal {F}^0\).
By the definition of proximity measure at \((X^p, y^p, S^p)\) one finds that
Let us consider the function
Since \(g'(t)<0\), the function g is decreasing; thus, from (29) and \(\lambda _i(V^p)\ge \lambda _{\min }(V^p)\) it follows that
where the second inequality follows from the fact that g is decreasing together with (28), and the equality is due to (23) with \(\alpha =1\) and (27). Since M(1) is skew-symmetric, the third inequality is obtained in a similar fashion to the proof of Lemma 4.8 in [24] together with the triangle inequality; the fourth inequality follows from Lemma 3, the fifth inequality is obtained from the third equation of the system (15), and the last inequality is due to (20), (25) and \(\rho (\sigma _+)\le \rho (2\delta _+)\). This completes the proof. \(\square \)
In the next lemma, we give an upper bound for the duality gap after a main iteration.
Lemma 9
Suppose that \(\delta :=\delta (X, S; \mu )<\frac{1}{\sqrt{2}}\) and \(\lambda _{\min }(V)>\frac{1}{\sqrt{2}}\). Moreover, let \(0<\theta <1\). Then
Proof
Using (27) and (23) with \(\alpha =1\), we have
where the third equality is due to the fact that M(1) is skew-symmetric and \(\textrm{tr}(D^p_XD^p_S)=\textrm{tr}(D^p_SD^p_X)=0\), and the last inequality follows from Lemma 6. The proof is completed. \(\square \)
4.3 Fixing Parameters
In this subsection, we fix the parameters \(\theta \) and \(\tau \) to guarantee that after a main iteration, the proximity measure will not exceed the proximity parameter \(\tau \).
Let \((X, y, S)\in \mathcal {N}(\tau )\) be the iterate at the start of a main iteration with \(X\succ 0\) and \(S\succ 0\) such that \(\delta :=\delta (X, S; \mu )\le \tau <\frac{1}{\sqrt{2}}\). Then, after a corrector step, by Lemma 5, we have
One can easily verify that the right-hand side of the inequality above is increasing with respect to \(\delta \), thus we have
Following the predictor step and the \(\mu \)-update, by Lemma 8, we obtain
The function \(h(\delta _+, \theta , n)\) is decreasing with respect to \(\delta _+\), hence \(h(\delta _+, \theta , n)\ge h(\omega (\tau ), \theta , n)\). We have seen earlier that the function \(g(t)=\frac{t}{2(2t^2-1)}\) for \(t>\frac{1}{\sqrt{2}}\) is decreasing, thus we get
By invoking (31) and using (32) and the fact that \(\rho (\delta _+)\) is increasing with respect to \(\delta _+\), we deduce that
To keep \(\delta ^p\le \tau \), it suffices that
If we take \(\tau =\frac{1}{10}\) and \(\theta =\frac{1}{3\sqrt{n}}, n\ge 2\), then the above inequality holds and \(h(\delta _+, \theta , n)>\frac{1}{2}\). This means that \(X, S\succ 0\) and \(\delta (X, S; \mu )<\frac{1}{\sqrt{2}}\) are maintained during the algorithm. Therefore, the algorithm is well-defined.
4.4 Polynomial Complexity
The next lemma gives an upper bound for the total number of iterations produced by the algorithm.
Lemma 10
Let \(X^0\succ 0\) and \(S^0\succ 0\) be strictly feasible primal-dual solutions, \(\mu ^0=\frac{\textrm{tr}(X^0S^0)}{n}\) and \(\delta (X^0, S^0; \mu ^0)<\frac{1}{\sqrt{2}}\). Moreover, let \(X^k\) and \(S^k\) be the iterates obtained after k iterations. Then, \(\textrm{tr}\big (X^kS^k\big )\le \varepsilon \) if
Proof
Let \(\mu ^k\) denote the barrier parameter after the kth main iteration. From Lemma 9 we deduce that
This means that \(\textrm{tr}\big (X^kS^k\big )\le \varepsilon \) holds if
Taking logarithms, we obtain
Since \(\log (1+\beta )\le \beta \) for \(\beta >-1\), we obtain that the above inequality holds if
From this, we conclude that
This proves the lemma. \(\square \)
Using Lemma 10, we can easily conclude the main result of the paper.
Theorem 1
Let \(\theta =\frac{1}{3\sqrt{n}}\) and \(\tau =\frac{1}{10}\). Then, the proposed corrector-predictor interior-point algorithm is well-defined and requires at most
iterations. The output is a strictly feasible primal-dual pair (X, S) satisfying \(\textrm{tr}(XS)\le \varepsilon .\)
5 Numerical Results
In this section, we compare our new algorithm (new Algor.) with the algorithms presented in [17, 18]. Numerical results were obtained by using MATLAB R2014b on an Intel Core i7 PC with 8GB RAM under Windows 10 on some SDO test problems: the max-cut problem (Mc), the norm-min problem (Nm), the control problem (C) and the graph partitioning problem (Gp). The algorithms are stopped when \(\mu \le \varepsilon \mu _0\) with \(\varepsilon =10^{-5}\). In Table 1, we present the names of the test problems, the dimension of the blocks and the number of constraint equations (denoted by (n, m)), the number of iterations (iter) and the CPU time. The numerical results show that our algorithm is reliable and promising.
6 Concluding Remarks
We have presented a new CP interior-point algorithm for SDO. We used the AET method based on the function \(\bar{\psi }(t)=\psi (\sqrt{t})\) with \(\psi (t)=t^2\) for the system which defines the central path. We then used the symmetrization scheme that yields the NT directions and applied Newton's method to the transformed system in order to obtain the new search directions. Furthermore, we presented the convergence analysis of the proposed algorithm and obtained the iteration complexity bound \(\mathcal {O}\Big (\sqrt{n}\log \frac{5\textrm{tr}(X^0S^0)}{\varepsilon }\Big )\). We had to ensure that the smallest eigenvalue of the V-matrices in the scaled space remains greater than \(\frac{1}{\sqrt{2}}\). To the best of our knowledge, this is the first CP IPM for SDO in which the AET method based on the function \(\bar{\psi }(t)=\psi (\sqrt{t})\) is used to derive the search directions. According to our preliminary numerical results, the new algorithm performs efficiently.
Data Availability
Enquiries about data availability should be directed to the authors.
References
Alizadeh, F.: Interior point methods in semidefinite programming with applications to combinatorial optimization. SIAM J. Optim. 5(1), 13–51 (1995)
Alizadeh, F.: Combinatorial optimization with interior-point methods and semi-definite matrices. Ph.D. Thesis, Computer Science Department, University of Minnesota, Minneapolis (1991)
Bai, Y., Zhang, L.: A full-Newton step interior-point algorithm for symmetric cone convex quadratic optimization. J. Ind. Manag. Optim. 7(4), 891–906 (2011)
Boyd, S., El Ghaoui, L., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in System and Control Theory: Studies in Applied Mathematics. SIAM, Philadelphia (1994)
Darvay, Zs.: New interior-point algorithms in linear programming. Adv. Model. Optim. 5(1), 51–92 (2003)
Darvay, Zs.: A new predictor-corrector algorithm for linear programming. Alkalmaz. Mat. Lapok 22, 135–161 (2005). (in Hungarian)
Darvay, Zs., Illés, T., Povh, J., Rigó, P.R.: Feasible corrector-predictor interior-point algorithm for \(P_*(\kappa )\)-linear complementarity problems based on a new search direction. SIAM J. Optim. 30(3), 2628–2658 (2020)
Darvay, Zs., Illés, T., Rigó, P.R.: Predictor-corrector interior-point algorithm for \(P_*(\kappa )\)-linear complementarity problems based on a new type of algebraic equivalent transformation technique. Eur. J. Oper. Res. 298(1), 25–35 (2022)
Darvay, Zs., Takács, P. R.: New method for determining search directions for interior-point algorithms in linear optimization. Optim. Lett. 12(5), 1099–1116 (2018)
Darvay, Zs., Papp, I. M., Takács, P. R.: Complexity analysis of a full-Newton step interior-point method for linear optimization. Period. Math. Hung. 73(1), 27–42 (2016)
Darvay, Zs., Illés, T., Kheirfam, B., Rigó, P. R.: A corrector-predictor interior-point method with new search direction for linear optimization. Cent. Eur. J. Oper. Res. 28(3), 1123–1140 (2020)
de Klerk, E.: Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications. Kluwer Academic Publishers, Dordrecht (2002)
Halicka, M., De Klerk, E., Roos, C.: On the convergence of the central path in semidefinite optimization. SIAM J. Optim. 12(4), 1090–1099 (2002)
Helmberg, C., Rendl, F., Vanderbei, R.J., Wolkowicz, H.: An interior-point method for semidefinite programming. SIAM J. Optim. 6, 342–361 (1996)
Jiang, J.: A long step primal-dual path following method for semidefinite programming. Oper. Res. Lett. 23(1–2), 53–62 (1998)
Kheirfam, B.: A predictor-corrector interior-point algorithm for \(P_*(\kappa )\)-horizontal linear complementarity problem. Numer. Algorithms 66, 349–361 (2014)
Kheirfam, B.: A corrector-predictor path-following method for second-order cone optimization. Int. J. Comput. Math. 93(12), 2064–2078 (2016)
Kheirfam, B.: A corrector-predictor path-following method for convex quadratic symmetric cone optimization. J. Optim. Theory Appl. 164(1), 246–260 (2015)
Kheirfam, B.: New complexity analysis of a full Nesterov–Todd step interior-point method for semidefinite optimization. Asian-Eur. J. Math. 10(4), 1750070 (2017)
Kheirfam, B.: An infeasible interior point method for the monotone SDLCP based on a transformation of the central path. J. Appl. Math. Comput. 57(1), 685–702 (2018)
Kheirfam, B., Hosseinpour, N., Abedi, H.: A new corrector-predictor interior-point method for symmetric cone optimization. Period. Math. Hung. 85(2), 312–327 (2022)
Kojima, M., Shindoh, S., Hara, S.: Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices. SIAM J. Optim. 7, 86–125 (1997)
Lewis, A.S., Overton, M.L.: Eigenvalue optimization. Acta Numer. 5, 149–190 (1996)
Mansouri, H., Roos, C.: A new full-Newton step \(O(n)\) infeasible interior-point algorithm for semidefinite optimization. Numer. Algorithms 52(2), 225–255 (2009)
Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)
Mizuno, S., Todd, M.J., Ye, Y.: On adaptive-step primal-dual interior-point algorithms for linear programming. Math. Oper. Res. 18, 964–981 (1993)
Nesterov, Y.E., Nemirovskii, A.S.: Interior-Point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia (1994)
Nesterov, Y.E., Todd, M.J.: Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 22(1), 1–42 (1997)
Nesterov, Y.E., Todd, M.J.: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 8, 324–364 (1998)
Roos, C., Terlaky, T., Vial, J.-P.: Theory and Algorithms for Linear Optimization. An Interior-Point Approach, 2nd edn. Springer (2006)
Wang, G.Q., Bai, Y.Q.: A primal-dual interior-point algorithm for second-order cone optimization with full Nesterov–Todd step. Appl. Math. Comput. 215, 1047–1061 (2009)
Wang, G.Q., Bai, Y.Q.: A new primal-dual path-following interior-point algorithm for semidefinite optimization. J. Math. Anal. Appl. 353, 339–349 (2009)
Wang, G.Q., Bai, Y.Q.: A new full Nesterov-Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 154(3), 966–985 (2012)
Wang, G.Q., Bai, Y.Q., Gao, X.Y., Wang, D.Z.: Improved complexity analysis of full Nesterov–Todd step interior-point methods for semidefinite optimization. J. Optim. Theory Appl. 165(1), 242–262 (2015)
Wang, G.Q., Kong, L.C., Tao, J.Y., Lesaja, G.: Improved complexity analysis of full Nesterov–Todd step feasible interior-point method for symmetric optimization. J. Optim. Theory Appl. 166(2), 588–604 (2015)
Acknowledgements
The author would like to thank the editor and the anonymous referees for their careful reading of the paper, and for their constructive remarks that greatly helped to improve its presentation. The author is also grateful to N. Osmanpour for his assistance in the numerical results.
Funding
The authors have not disclosed any funding.
Ethics declarations
Conflict of interest
The authors have not disclosed any competing interests.
Kheirfam, B. Corrector-Predictor Interior-Point Method With New Search Direction for Semidefinite Optimization. J Sci Comput 95, 10 (2023). https://doi.org/10.1007/s10915-023-02137-1
Keywords
- Semidefinite optimization
- Interior-point methods
- Corrector-predictor methods
- New search directions
- Polynomial complexity