Abstract
In this paper, we propose a new complexity analysis of the full Nesterov–Todd step infeasible interior-point algorithm for semidefinite optimization. With a specific feasibility step and a centering step induced by a well-known kernel function, the exponential convexity of the kernel function underlying the matrix barrier function is crucial in the analysis and enables us to estimate easily the proximity of the iterates to the central path. The analysis of the algorithm is simplified, and the iteration bound obtained coincides with the currently best iteration bound for infeasible interior-point algorithms.
1 Introduction
Semidefinite optimization (SDO) problems are convex optimization problems over the intersection of an affine set and the cone of positive semidefinite matrices. They have received considerable attention and become one of the most active research areas in mathematical programming. Due to the success of interior-point methods (IPMs) in solving linear optimization (LO), primal-dual IPMs were extended to SDO, an important contribution made by Nesterov and Todd [16, 17]. One may distinguish different IPMs by whether they are feasible or infeasible. Feasible IPMs start with a strictly feasible point and maintain feasibility during the solution process. In contrast, interior-point algorithms that accept infeasible initial points are often referred to as infeasible IPMs (IIPMs); see [9, 12, 22] for more.
Earlier, all primal-dual IPMs used the Newton direction as the search direction. Peng et al. [19] presented a new class of feasible IPMs that use a direction determined by a so-called self-regular barrier function. Bai et al. [4] introduced a new class of kernel functions. Based on the properties of these kernel functions, they derived many new and tight estimates that greatly simplified the analysis of feasible IPMs. By using an algebraically equivalent transformation of the central path, Darvay [5] explored a new technique for finding a class of search directions. Pan et al. [20] found the connection between Darvay’s approach and the method based on kernel functions.
Roos [21] designed a new IIPM for LO based on using the perturbed problems. Since the algorithm uses only full-Newton steps, it has the advantage that no line searches are needed. Later on, Mansouri and Roos [14] extended the proposed algorithm [21] to SDO. Using a new proximity measure to overcome the difficulty of obtaining an upper bound of updated proximity after one full-Newton step, Zhang et al. [30] presented a simplified analysis of the full-Newton step IIPM for SDO. By another definition for feasibility step, Kheirfam [8] generalized the simplified full-Newton step IIPM for LO [13] to SDO.
The aim of this paper is to investigate the kernel function, which has a finite value at the boundary of the feasible region:
\(\psi (t)=(t-1)^2,\quad t>0. \qquad (1)\)
The search direction determined by the kernel function in (1) is the same as the one introduced by Darvay [5]. Liu and Sun [11] proposed a full Nesterov–Todd (NT) step IIPM for SDO, where the feasibility step was induced by a specific kernel function. The kernel function in (1) differs only by a constant from the one in [11]. It was also discussed by Wang and Bai [26] for a feasible IPM, and used by Zhang and Xu [28] for LO to determine the search directions and measure the proximity of iterates to the central path. However, the properties of the kernel function, such as exponential convexity, are not discussed in these papers.
Recently, Wang et al. [27] used the exponential convexity of the kernel function to simplify the analysis of full-Newton step feasible IPMs for LO problems. In this paper, based on the Nesterov–Todd search direction and the definition of the feasibility step by Kheirfam [8], and motivated by Wang et al. [27], we not only use the kernel function to determine the centering step, but also apply the exponential convexity of the kernel function to derive new estimates, which simplify the analysis of a full NT-step IIPM for SDO.
The article is organized as follows. In Sect. 2, we introduce the concept of some special matrix functions and basic results of feasible IPM for SDO. In Sect. 3, we present the search directions and describe our algorithm in detail. In Sect. 4, we introduce the kernel function and study the properties of barrier function. In Sect. 5, we analyze the algorithm and derive the complexity bound. In Sect. 6, we give a simple numerical example for the algorithm. Finally, some concluding remarks can be found in Sect. 7.
The following notation is used in the paper. \(\mathbb {R}^n\) denotes the n-dimensional Euclidean space. The set of all \(m\times n\) matrices with real entries is denoted by \(\mathbb {R}^{m\times n}\). \(A^T\) denotes the transpose of \(A\in \mathbb {R}^{m\times n}\). The set of all symmetric \(n\times n\) real matrices is denoted by \(\mathcal {S}^n\). \(\mathcal {S}^n_{++}~(\mathcal {S}^n_{+})\) denotes the set of all matrices in \(\mathcal {S}^n\) which are positive definite (positive semidefinite). I denotes the \(n\times n\) identity matrix. \(A\succ 0~(A\succeq 0)\) means that A is positive definite (positive semidefinite). For any \(V\in \mathcal {S}^n\), we denote by \(\lambda (V)\) the vector of eigenvalues of V arranged in non-increasing order, that is, \(\lambda _{\max }(V)=\lambda _1(V)\ge \lambda _2(V)\ge \cdots \ge \lambda _n(V)=\lambda _{\min }(V).\) For any matrix M, we denote by \(\sigma _1(M)\ge \sigma _2(M)\ge \cdots \ge \sigma _n(M)\) the singular values of M. For \(Q\in \mathcal {S}^n_{++}\), we use \(Q^{\frac{1}{2}}\) to denote the symmetric square root of Q. The sign \(\sim \) denotes similarity of two matrices. Given \(G,H\in \mathbb {R}^{m\times n}\), the inner product between them is defined as \(\langle G,H\rangle :={\mathrm{Tr}}(G^TH)\), the trace of the matrix \(G^TH\). The Frobenius norm of \(M\in \mathbb {R}^{n\times n}\) is \(\Vert M\Vert :=\langle M,M\rangle ^{1/2}\).
2 Preliminaries
2.1 Special matrix functions
We first review some facts about the special matrix functions that are used in the analysis of the algorithm.
Definition 2.1
[25, Definition 2.2] Let \(V\in S^n_{++}\) and
where Q is an arbitrary orthogonal matrix that diagonalizes V, and let \(\psi (t)\) be the function in (1). The matrix function \(\psi (V): S^n_{++}\rightarrow S^n\) is defined by
Definition 2.2
[25, Definition 2.3] The real-valued matrix function \(\varPsi (V): S^n_{++}\rightarrow R_{+}\) is defined by
Replacing \(\psi (\lambda _i(V))\) in (2) by \(\psi '(\lambda _i(V))\), we obtain the matrix function \(\psi '(V)\). We call \(\varPsi (V)\) the matrix barrier function and \(\psi (t)\) its kernel function.
Lemma 2.1
\(\varPsi (V)\) is strictly convex with respect to \(V\succ 0\) and vanishes at its global minimal point \(V=I\), i.e., \(\psi (I)=\psi '(I)=0_{n\times n}\). Moreover, \(\varPsi (I)=0\).
Proof
Since \(\psi (t)=(t-1)^2\), we have \(\psi '(t)=2(t-1)\) and \(\psi ''(t)=2>0\). Hence \(\psi (t)\) is strictly convex with respect to \(t > 0\) and vanishes at its global minimizer \(t=1\), i.e., \(\psi (1)=\psi '(1)=0\). By the strict convexity of \(\psi (t)\) and Definitions 2.1 and 2.2, the rest of the proof is similar to that of Proposition 5.2.6 (i) in [19]. \(\square \)
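To make Definitions 2.1 and 2.2 concrete, the following numpy sketch evaluates the matrix function \(\psi (V)\) and the barrier \(\varPsi (V)\) for the kernel \(\psi (t)=(t-1)^2\) via an eigendecomposition; the function names are ours, chosen for illustration only.

```python
import numpy as np

def psi(t):
    # Kernel function psi(t) = (t - 1)^2 from (1).
    return (t - 1.0) ** 2

def psi_matrix(V):
    # Matrix function psi(V) (Definition 2.1): eigendecompose
    # V = Q diag(lam) Q^T and apply psi to the eigenvalues.
    lam, Q = np.linalg.eigh(V)
    return Q @ np.diag(psi(lam)) @ Q.T

def Psi(V):
    # Matrix barrier function Psi(V) = sum_i psi(lambda_i(V)) = Tr(psi(V))
    # (Definition 2.2).
    return float(np.sum(psi(np.linalg.eigvalsh(V))))

# Example: Psi vanishes exactly at V = I and is positive elsewhere,
# in agreement with Lemma 2.1.
V = np.array([[1.5, 0.2], [0.2, 0.8]])
```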
2.2 The central path for SDO
We consider the standard form of the SDO problem:
where \(C, X\in \mathcal {S}^n, b\in \mathbb {R}^m\), and \(A_i\in \mathcal {S}^n,~i=1,2,\ldots ,m\). We call problem (P) the primal form of SDO problem, and X is the primal variable.
The dual problem of (P) is given by
where \(y\in \mathbb {R}^m\) and \(S\in \mathcal {S}^n\) are the dual variables.
The set of primal-dual feasible solutions is denoted by
and the relative interior of the primal-dual feasible set is
We assume that both (P) and (D) satisfy the interior point condition (IPC), i.e., \(\mathcal {F}^0\) is nonempty and the matrices \(A_i,~i=1,2,\ldots ,m\), are linearly independent. It is well known [6] that under the IPC the optimality conditions for (P) and (D) can be written as follows:
where the last equality is called the complementarity equation. The basic idea of primal-dual IPMs is to replace the complementarity condition \(XS=0\) by the parameterized equation \(XS=\mu I\) with \(\mu >0\). Then we get the perturbed system
It is proved in [10, 15] that there is a unique solution \((X(\mu ), y(\mu ), S(\mu ))\) to the central path equations (7) for any barrier parameter \(\mu >0\). Moreover, the limit point \((X^{*}, y^{*}, S^{*})\), as \(\mu \) goes to 0, is a primal-dual optimal solution of the corresponding SDO problem.
2.3 The classic NT search direction
We consider the symmetrization scheme yielding the Nesterov–Todd (NT) direction [17], or simply the Newton direction [23]. The direction is determined by the following system:
where \((\triangle X,\triangle y,\triangle S)\in \mathcal {S}^n\times \mathbb {R}^m\times \mathcal {S}^n\) is the search direction, and the choice of P is
Let \(D=P^{1/2}\). Then D can be used to scale X and S to the same matrix V defined by
Obviously
After we define
the system (8) is reduced to
which gives the usual scaled NT search direction.
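The NT scaling above can be sketched numerically as follows. The display formula for P did not survive extraction, so we assume the standard explicit choice \(P = X^{1/2}(X^{1/2}SX^{1/2})^{-1/2}X^{1/2}\) (consistent with [17, 23]); with \(D=P^{1/2}\), the defining property is \(D^{-1}XD^{-1}=DSD\), which makes the common scaled matrix V well defined.

```python
import numpy as np

def sym_sqrt(M):
    # Symmetric square root of a symmetric positive definite matrix.
    lam, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(lam)) @ Q.T

def nt_scaling(X, S, mu):
    # NT scaling point P = X^{1/2} (X^{1/2} S X^{1/2})^{-1/2} X^{1/2}
    # (assumed standard formula), D = P^{1/2}, and the scaled matrix
    # V = D^{-1} X D^{-1} / sqrt(mu).
    Xh = sym_sqrt(X)
    P = Xh @ np.linalg.inv(sym_sqrt(Xh @ S @ Xh)) @ Xh
    D = sym_sqrt(P)
    Dinv = np.linalg.inv(D)
    V = Dinv @ X @ Dinv / np.sqrt(mu)
    return P, D, V

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X = A @ A.T + 3 * np.eye(3)   # random SPD primal iterate
S = B @ B.T + 3 * np.eye(3)   # random SPD dual iterate
mu = np.trace(X @ S) / 3
P, D, V = nt_scaling(X, S, mu)
```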
3 Infeasible full NT step IPM
We call the triple (X, y, S) an \(\varepsilon \)-optimal solution of (P) and (D) if the norms of the residuals
and
do not exceed \(\varepsilon \) and the duality gap satisfies \(Tr(XS)\le \varepsilon \).
As usual, we assume (P) and (D) have an optimal solution \((X^{*},y^{*},S^{*})\) with \(Tr(X^{*}S^{*})=0\), and we choose the initial iterate \((X^0,y^0,S^0)\) with
where \(\zeta \) is a positive number such that
3.1 The perturbed problems
We denote the initial values of the primal and dual residuals by \(r^0_b\) and \(R^0_C\), respectively:
For any \(\nu \) with \(0<\nu \le 1\), we consider the perturbed problem \((P_\nu )\)
and its dual problem \((D_\nu )\)
Note that if \(\nu =1\), then \(X=X^0\) yields a strictly feasible solution of \((P_\nu )\), and \((y,S)=(y^0,S^0)\) yields a strictly feasible solution of \((D_\nu )\). We conclude that if \(\nu =1\), then \((P_\nu )\) and \((D_\nu )\) are strictly feasible. More generally, we have the following lemma.
Lemma 3.1
[14, Lemma 4.1] Let the original problems (P) and (D) be feasible. Then for each \(\nu \) satisfying \(0<\nu \le 1\), the perturbed problems \((P_\nu )\) and \((D_\nu )\) are strictly feasible.
Assuming that (P) and (D) are both feasible, for each \(\nu \) such that \(0<\nu \le 1\), Lemma 3.1 implies that the problems \((P_\nu )\) and \((D_\nu )\) satisfy the IPC, and hence their central paths exist. This means that the system
has a unique solution for every \(\mu >0\). We denote it by \((X(\mu ,\nu ),y(\mu ,\nu ),S(\mu ,\nu ))\), which is the \(\mu \)-center of \((P_\nu )\) and \((D_\nu )\). Next, we will always have \(\mu =\nu \zeta ^2\).
3.2 Search directions
3.2.1 Search directions for centering steps
We define
For the centering steps, by Definitions 2.1 and 2.2 and following [19], we replace the right-hand side \(V^{-1}-V\) in the third equation of system (13) by \(-\psi '(V)\); then the scaled search direction \((D^c_X,\triangle ^c y,D^c_S)\) is given by the system
with \(D_V=-\psi '(V)\). The search directions \(\varDelta ^c X\) and \(\varDelta ^c S\) are computed by (18).
3.2.2 Search directions for feasibility step
Following [8] the search direction \((\varDelta ^f X, \varDelta ^f y, \varDelta ^f S)\) is given by the following system:
where \(\theta \) is a fixed barrier update parameter.
We define
Then the system (20) can be written as
3.3 An iteration of the algorithm
Initially \(X^0=S^0=\zeta I\) and \(\mu ^0=\zeta ^2\), whence \(V^0=I\) and \(\varPsi (V^0)=0\). We assume that at the start of each iteration, just before a \(\mu \)-update, \(\varPsi (V)\) is smaller than or equal to a (small) threshold value \(\tau >0\). This obviously holds at the start of the first iteration.
Now we describe a main iteration of our algorithm. Suppose that for some \(\mu \in (0,1]\) the triple (X, y, S) is strictly feasible for the perturbed problems \((P_\nu )\) and \((D_\nu )\), where \(\mu =\nu \zeta ^2\), and that \(\varPsi (V)\le \tau \). We first perform a feasibility step to generate iterates \((X^f,y^f,S^f)\) which are strictly feasible for the perturbed problems \((P_{\nu ^+})\) and \((D_{\nu ^+})\), where \(\nu ^+=(1-\theta )\nu \) with \(\theta \in (0,1)\), and close to the \(\mu ^+\)-center of \((P_{\nu ^+})\) and \((D_{\nu ^+})\) with \(\mu ^+=\nu ^+\zeta ^2\), i.e., \(\varPsi (V^f)=\varPsi (X^f,S^f,\mu ^+)\le \tau ^f\) for some suitable value of \(\tau ^f\). After the feasibility step, we perform some centering steps to get a strictly feasible triple \((X^+,y^+,S^+)\) of \((P_{\nu ^+})\) and \((D_{\nu ^+})\) such that \(\varPsi (X^+,S^+,\mu ^+)\le \tau \). The algorithm stops if the norms of the residuals and the duality gap are less than the accuracy parameter \(\varepsilon \).
We now summarize the generic full NT-step IIPM for SDO in Fig. 1.
4 The properties of the barrier function
By \(\psi '(t)=2(t-1)\) we get \(D_V=-\psi '(V)=2(I-V)\). We also use the norm-based proximity measure \(\delta (V)\) defined by
By Lemma 2.1, we have
Lemma 4.1
\(\psi (t)=\psi '(t)^2/4\), and \(\varPsi (V)=\delta (V)^2\) with \(\delta (V)=\Vert D_V\Vert /2\).
Proof
Since \(\psi (t)=(t-1)^2\), it is straightforward that \(\psi (t)=\psi '(t)^2/4\). By \(\psi (t)=\psi '(t)^2/4\), Definition 2.2 and (23), we obtain
\(\square \)
In order to discuss the property of exponential convexity, we need the following lemma. It is a direct consequence of conclusion (d) of Theorem 3.3.14 in [7, pp. 176–177]. We cite it here without proof; see [7] for more details.
Lemma 4.2
Let \(M,N\in S^n\) be two nonsingular matrices and f(t) a real-valued function such that \(f(e^t)\) is a convex function. Then
where \(\sigma _i(M),\sigma _i(N)\) and \(\sigma _i(MN),i=1,2,\ldots ,n\), denote the singular values of M, N and MN arranged in non-increasing order, respectively.
Lemma 4.3
Let \(t_{1}\ge 1/2\) and \(t_{2}\ge 1/2\). Then
Proof
A simple calculation shows that the above inequality is equivalent to \(\sqrt{t_{1}}+\sqrt{t_{2}}\ge \sqrt{2}\), which holds whenever \(t_1,t_2\ge 1/2\). \(\square \)
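A quick numerical check of Lemma 4.3 is possible if we assume its display inequality (lost in extraction) reads \(\psi (\sqrt{t_1t_2})\le \tfrac{1}{2}(\psi (t_1)+\psi (t_2))\), which is what the proof's reduction to \(\sqrt{t_1}+\sqrt{t_2}\ge \sqrt{2}\) suggests. The sketch also exhibits a violation once the restriction \(t\ge 1/2\) is dropped, showing the hypothesis is not superfluous.

```python
import numpy as np

def psi(t):
    # Kernel function psi(t) = (t - 1)^2 from (1).
    return (t - 1.0) ** 2

# e-convexity: psi(sqrt(t1*t2)) <= (psi(t1) + psi(t2)) / 2 on a grid
# with t1, t2 >= 1/2.
ts = np.linspace(0.5, 5.0, 60)
T1, T2 = np.meshgrid(ts, ts)
gap = 0.5 * (psi(T1) + psi(T2)) - psi(np.sqrt(T1 * T2))

# Counterexample outside the domain: sqrt(t1) + sqrt(t2) < sqrt(2).
t1, t2 = 0.01, 0.5
```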
As in [3, 4], we say that \(\psi (t)\) is e-convex whenever \(t \ge 1/2\). It is easily verified that \(\psi (e^t)\) is convex for \(e^t\ge 1/2\), so we can apply Lemma 4.2 to \(\psi (t)\).
Lemma 4.4
Suppose that the matrices \(V_1,V_2\) are symmetric positive definite with \(\lambda _{min}(V_1)\ge 1/2\) and \(\lambda _{min}(V_2)\ge 1/2\). Then
Proof
By the definition of the singular values of a matrix, for any nonsingular matrix \(V\in S^n\),
Hence,
Since \(V_1,V_2\) are symmetric positive definite, we have
By Definition 2.2 and Lemmas 4.2 and 4.3, we obtain
\(\square \)
Lemma 4.5
\(\Vert V\Vert \le \sqrt{n}+\delta (V).\)
Proof
It follows from \(\Vert V\Vert =\Vert I-D_V/2\Vert \le \Vert I\Vert +\Vert D_V\Vert /2\). \(\square \)
Lemma 4.6
\(1-\delta (V)\le \lambda _i(V) \le 1+\delta (V), 1\le i \le n.\)
Proof
By Lemma 4.1 we have \(\delta (V)=\sqrt{\varPsi (V)}=\sqrt{\sum _{i=1}^{n}(1-\lambda _i(V))^2}\), then \(|1-\lambda _i(V)|\le \delta (V)\), so the lemma follows. \(\square \)
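Lemmas 4.1, 4.5 and 4.6 can be checked numerically for a random positive definite V; the sketch below uses \(D_V=2(I-V)\) and \(\delta (V)=\Vert D_V\Vert /2\) from this section (variable names are ours).

```python
import numpy as np

# Numerical check of Lemmas 4.1, 4.5 and 4.6 for a random SPD matrix V:
#   Psi(V) = delta(V)^2            (Lemma 4.1),
#   ||V||_F <= sqrt(n) + delta(V)  (Lemma 4.5),
#   1 - delta <= lambda_i(V) <= 1 + delta   (Lemma 4.6).
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
V = A @ A.T / n + 0.5 * np.eye(n)          # random SPD "scaled" matrix
lam = np.linalg.eigvalsh(V)
Psi = float(np.sum((lam - 1.0) ** 2))      # barrier value (Definition 2.2)
D_V = 2.0 * (np.eye(n) - V)
delta = np.linalg.norm(D_V, 'fro') / 2.0   # proximity measure (23)
```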
The following lemma shows the effect of a \(\mu \)-update on the value of \(\varPsi (V)\).
Lemma 4.7
Let \(0<\theta <1\). Then for any positive definite matrix V:
Proof
Let \(\beta \ge 1\) and define \(v_i:=\lambda _i(V),\ 1\le i\le n\); then \(v>0\). From the definition of \(\varPsi (V)\), we have
By the definition of \(\psi (t)\) in (1),
Let \(\beta =1/\sqrt{1-\theta }\), then
\(\square \)
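Although the display bound of Lemma 4.7 is not reproduced above, one bound of this type follows directly from the triangle inequality applied to the eigenvalue vector: \(\varPsi (\beta V)\le \big (\beta \sqrt{\varPsi (V)}+(\beta -1)\sqrt{n}\big )^2\) with \(\beta =1/\sqrt{1-\theta }\ge 1\). The sketch below verifies this inequality numerically; the exact constants in the paper's statement may differ.

```python
import numpy as np

def Psi_vec(v):
    # Barrier value from the eigenvalues v of V, with psi(t) = (t - 1)^2.
    return float(np.sum((v - 1.0) ** 2))

rng = np.random.default_rng(2)
n = 5
theta = 1.0 / (13 * n)
v = np.abs(rng.standard_normal(n)) + 0.3     # eigenvalues of some V > 0
beta = 1.0 / np.sqrt(1.0 - theta)            # mu-update rescales V by beta
lhs = Psi_vec(beta * v)                      # Psi(V / sqrt(1 - theta))
rhs = (beta * np.sqrt(Psi_vec(v)) + (beta - 1.0) * np.sqrt(n)) ** 2
```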
5 Analysis of the algorithm
5.1 Analysis of the centering step
After a centering step, the new iterates are given by
The new V-matrix is
The following results are crucial in the analysis of the algorithm. We list them without proof.
Lemma 5.1
[29, Lemma 4.2] The new iterates \((X^+,y^+,S^+)\) are strictly feasible if \(\varPsi (V)<1\).
Theorem 5.1
[29, Theorem 4.5] If \(\varPsi (V)<1\), then \(\varPsi (V^+)<\varPsi (V)^2\).
Theorem 5.1 implies that the centering step is quadratically convergent.
5.2 Analysis of the feasibility step
After a feasibility step, the new iterates are given by
By (20) and (28), it is easy to show that the iterates are feasible for the new perturbed problems. A key point in the analysis is to show that \(X^f\) and \(S^f\) are positive definite and satisfy \(\varPsi (V^f)\le \tau ^f\) with \(\tau ^f<1\).
By using (21), we obtain
It follows from (28) that
Therefore
which implies that
Let
Note that \(D^f_{XS}\) is a symmetric matrix and M a skew-symmetric matrix. From the third equation in system (22), by multiplying both sides from the left with V, we get
Therefore
The following lemma shows the sufficient condition of the strict feasibility of the new iterates \((X^f,y^f,S^f)\).
Lemma 5.2
Let \(X\succ 0\) and \(S\succ 0\). Then the iterates \((X^f,y^f,S^f)\) are strictly feasible if
Proof
By Lemma 10 in [8], \((X^f,y^f,S^f)\) are strictly feasible if \(V^2+D^f_{XS}\succ 0\). Using \(D^f_S=-D^f_X\) we have \(\lambda _{i}\left( D^f_{XS}\right) =-\lambda ^2_{i}\left( D^f_X\right) \), therefore \(V^2+D^f_{XS}\succ 0\) if \(\lambda _{min}(V)>\left| \lambda _i\left( D^f_X\right) \right| , 1 \le i\le n.\) This completes the proof. \(\square \)
Before \(\mu \)-update, by (10), the new V-matrix is given by
Now we want to find an upper bound for \(\varPsi (\hat{V})\) by applying exponential convexity, so we assume the eigenvalues of the matrices \(V+D^f_X\) and \(V+D^f_S\) are at least \(1/2\), i.e.
It is obvious that (36) holds if
By (35) we have \(\hat{V}^2\sim \frac{1}{\mu }X^fS^f\sim \frac{1}{\mu }\left( X^f\right) ^{\frac{1}{2}}S^f\left( X^f\right) ^{\frac{1}{2}}\), from (31),
Lemma 5.3
\(\varPsi (\hat{V})\le \varPsi (V)+\left\| D^f_X\right\| ^2.\)
Proof
Using (38), the definition of \(\psi (V)\) and \(\varPsi (V)\), Lemma 4.4 and \(D^f_S=-D^f_X\), we have
\(\square \)
5.3 Upper bound for \(\left\| D^f_X\right\| \)
By an argument similar to that given in [8, Section 5], we have
Next, we denote \(\delta (V)\) simply as \(\delta \).
Lemma 5.4
Proof
In [14, Lemma 5.15], it is shown that
By \(XS\sim \mu V^2\) and Lemma 4.5 we have
The lemma follows from (39) and Lemma 4.6. \(\square \)
By (37) and Lemma 4.6 we obtain that e-convexity holds if \(\left| \lambda _i(D^f_X)\right| \le \left\| D^f_X\right\| \le 1/2-\delta , 1\le i \le n \); then using Lemma 5.4 we have
Note that the right-hand side of the above inequality is monotonically decreasing with respect to \(\delta \).
5.4 The choice of \(\tau \), \(\theta \) and \(\tau ^f\)
Theorem 5.2
Let \(\mu ^+=(1-\theta )\mu \), where \(0<\theta < 1\) and \(V^f=\frac{\hat{V}}{\sqrt{1-\theta }}\), then
Particularly, if \(\varPsi (V)\le \tau =\frac{1}{16}\) and \(\theta =\frac{1}{13n}\), then
Proof
Before \(\mu \)-update, by Lemma 5.3 we have \(\varPsi (\hat{V})\le \varPsi (V)+\left\| D^f_X\right\| ^2.\) From (34) and (35),
Then
Since M is skew-symmetric, and \(D^f_S=-D^f_X\), we obtain
After \(\mu \)-update, by Lemma 4.7 we get
Since \(\left\| D^f_X\right\| ^2\le (1/2-\delta )^2\le 1/4\), by Lemmas 4.1 and 4.5, the inequality in (44) follows. If \(\varPsi (V)\le \tau =1/16\), then by Lemma 4.1, \(\delta (V)\le 1/4\); hence if we take \(\theta =1/(13n)\), the inequality (43) holds. Then \(\varPsi (V^f)\le \tau ^f=1/2\) follows from (44) by a simple calculation. \(\square \)
5.5 Complexity analysis
After the feasibility step, we have derived an upper bound for \(\varPsi (V^f)\), namely \(\varPsi (V^f)\le 1/2\). By the quadratic convergence property of the centering step (Theorem 5.1), at most
centering steps suffice to get iterates \((X^+,y^+,S^+)\) that satisfy \(\varPsi (X^+,S^+;\mu ^+)\le 1/16\). So each main iteration consists of at most 3 so-called inner iterations, in each of which we need to compute a new search direction.
In each main iteration both the duality gap and the norms of the residuals are reduced by the factor \(1-\theta \). Hence, using \(Tr(X^0S^0)=n\zeta ^2\), the total number of main iterations is bounded above by
Since
the total number of inner iterations is bounded above by
Now we state our main result without further proof.
Theorem 5.3
If (P) and (D) have an optimal solution \((X^*,y^*,S^*)\) such that \(X^*+S^*\preceq \zeta I\), then after at most
iterations, the algorithm finds an \(\varepsilon \)-solution of (P) and (D).
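The flavor of this bound can be reproduced by simulating the \(\mu \)-update loop alone. The sketch below counts the main iterations of \(\mu \leftarrow (1-\theta )\mu \) until the duality gap \(n\mu \) drops below \(\varepsilon \), and compares the count with the classical \((1/\theta )\log (n\zeta ^2/\varepsilon )\) estimate; only the gap criterion is simulated (an assumption of the sketch — the residual norms shrink by the same factor \(1-\theta \) per iteration).

```python
import math

# Simulate the mu-update loop: mu <- (1 - theta) * mu until n * mu <= eps.
n, zeta, eps = 4, 1.0, 1e-3
theta = 1.0 / (13 * n)                  # barrier update parameter of Theorem 5.2
mu, k = zeta ** 2, 0                    # initial mu^0 = zeta^2
while n * mu > eps:
    mu *= 1.0 - theta
    k += 1

# Classical estimate: (1 - theta)^k <= exp(-k * theta), so
# k = ceil(log(n * zeta^2 / eps) / theta) main iterations suffice.
bound = math.ceil(math.log(n * zeta ** 2 / eps) / theta)
```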
6 Numerical results
In this section we consider an application example and give some numerical results.
Maximum eigenvalue minimization Suppose the symmetric matrix A(x) depends affinely on \(x\in \mathbb {R}^{k}: A(x)=A_0+x_1A_1+\cdots +x_kA_k\), where \(A_i\in S^{n}.\) The problem of minimizing the maximum eigenvalue of the matrix A(x) can be cast as the semidefinite program
with variables \(x\in \mathbb {R}^{k}\) and \(t\in \mathbb {R}\). Problems of this type arise in control theory, structural optimization, graph theory and combinatorial optimization, and other fields (see [24] for details).
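As an illustration of this problem class (not of the paper's IIPM), \(\lambda _{\max }\) can also be minimized directly by a subgradient method, since a subgradient of \(x\mapsto \lambda _{\max }(A(x))\) in coordinate i is \(u^TA_iu\) for a unit eigenvector u of the largest eigenvalue. The sketch below uses random symmetric data; all names are ours.

```python
import numpy as np

def lam_max(M):
    # Largest eigenvalue of a symmetric matrix.
    return np.linalg.eigvalsh(M)[-1]

def min_max_eig(A0, As, iters=500):
    # Subgradient method for min_x lambda_max(A0 + sum_i x_i A_i):
    # a subgradient w.r.t. x_i is u^T A_i u, where u is a unit top
    # eigenvector; diminishing step sizes 1/it, keep the best iterate.
    x = np.zeros(len(As))
    best_x, best_val = x.copy(), lam_max(A0)
    for it in range(1, iters + 1):
        M = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
        _, U = np.linalg.eigh(M)
        u = U[:, -1]
        g = np.array([u @ Ai @ u for Ai in As])
        x = x - (1.0 / it) * g
        val = lam_max(A0 + sum(xi * Ai for xi, Ai in zip(x, As)))
        if val < best_val:
            best_val, best_x = val, x.copy()
    return best_x, best_val

rng = np.random.default_rng(3)
sym = lambda B: (B + B.T) / 2
A0 = sym(rng.standard_normal((4, 4)))
As = [sym(rng.standard_normal((4, 4))) for _ in range(3)]
x_star, val = min_max_eig(A0, As)
```

By construction the best value is never worse than the starting value \(\lambda _{\max }(A(0))=\lambda _{\max }(A_0)\); the SDO reformulation of the paper solves the same problem to high accuracy instead.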
Example 1
[1, Example 14.1] Find scalars \(y_1,y_2,\) and \(y_3\) such that the maximum eigenvalue of \(F=A_0+y_1A_1+y_2A_2+y_3A_3\) with
is minimized.
This problem can be formulated as the standard form of the SDO problem
where \(b=[0,0,0,1]^T\), \(y=[y_1,y_2,y_3,y_4]^T\), \(C=-A_0\), and \(A_4=I\). By (49) we observe that \(-y_4\) is the maximum eigenvalue of the matrix F.
The initial point is \((X^0,y^0,S^0)=(\zeta I, 0,\zeta I)\) with \(\zeta =1\). We set \(\tau =\frac{1}{16},\theta =\frac{1}{13n}\) and \(\varepsilon =10^{-3}\). A primal-dual optimal solution is given by
The algorithm reaches this solution in 271 iterations, taking 2.0836 seconds. It should be emphasized that the default value \(\theta =\frac{1}{13n}\) is theoretically justified only in the worst case. If a larger \(\theta \) is taken, for example \(\theta =\frac{1}{4n}\), the algorithm needs just 80 iterations, taking 0.5817 seconds.
By using the first three components of \(y^*\), that is \(y_1,y_2,\) and \(y_3\), the maximum eigenvalue of \(F=A_0+y_1A_1+y_2A_2+y_3A_3\) is found to be 2.9973.
7 Concluding remarks
Based on the exponential convexity of a simple kernel function, we propose a new complexity analysis of a full NT-step infeasible interior-point algorithm for SDO. The iteration bound of the algorithm coincides with the best known iteration bound for IIPMs. Moreover, the resulting analysis is relatively simple compared with that in [8, 11]. However, from a practical perspective, the full NT-step IIPM may not be very efficient. Recently, by applying the properties of the kernel function-based barrier function discussed in [18], Asadi and Roos [2] attempted to design a large-update version of Roos' algorithm [21] for LO problems. In future work, we may focus on designing a large-update IIPM involving the kernel function for SDO problems.
References
Antoniou, A., Lu, W.S.: Practical Optimization: Algorithms and Engineering Applications. Springer, Boston (2007)
Asadi, A., Roos, C.: Infeasible interior-point methods for linear optimization based on large neighborhood. J. Optim. Theory Appl. 170(2), 562–590 (2016)
Bai, Y.Q., El Ghami, M., Roos, C.: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 13(3), 766–782 (2003)
Bai, Y.Q., El Ghami, M., Roos, C.: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 15(1), 101–128 (2004)
Darvay, Zs: New interior-point algorithms in linear programming. Adv. Model. Optim. 5(1), 51–92 (2003)
De Klerk, E.: Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications. Kluwer, Dordrecht (2002)
Horn, R.A., Johnson, C.R.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991/1994)
Kheirfam, B.: Simplified infeasible interior-point algorithm for SDO using full Nesterov-Todd step. Numer. Algorithms 59(4), 589–606 (2012)
Kojima, M., Megiddo, N., Mizuno, S.: A primal-dual infeasible-interior-point algorithm for linear programming. Math. Program. 61, 263–280 (1993)
Kojima, M., Shindoh, S., Hara, S.: Interior point methods for the monotone semidefinite linear complementarity problem in symmetric matrices. SIAM J. Optim. 7, 86–125 (1997)
Liu, Z., Sun, W.: A full-NT-step infeasible interior-point algorithm for SDP based on kernel functions. Appl. Math. Comput. 217, 4990–4999 (2011)
Lustig, I.J.: Feasible issues in a primal-dual interior point method for linear programming. Math. Program. 49, 145–162 (1990/1991)
Mansouri, H., Roos, C.: Simplified \(O(nL)\) infeasible interior-point algorithm for linear optimization using full-Newton steps. Optim. Methods Softw. 22(3), 519–530 (2007)
Mansouri, H., Roos, C.: A new full-Newton step \(O(n)\) infeasible interior-point algorithm for semidefinite optimization. Numer. Algorithms 52, 225–255 (2009)
Nesterov, Y.E., Nemirovsky, A.S.: Interior Point Methods in Convex Programming: Theory and Applications. SIAM, Philadelphia (1994)
Nesterov, Y.E., Todd, M.J.: Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 22, 1–42 (1997)
Nesterov, Y.E., Todd, M.J.: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 8, 324–364 (1998)
Peng, J., Roos, C., Terlaky, T.: A new and efficient large-update interior-point method for linear optimization. J. Comput. Tech. 6, 61–80 (2001)
Peng, J., Roos, C., Terlaky, T.: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton (2002)
Pan, S., Li, X., He, S.: An infeasible primal-dual interior-point algorithm for linear programs based on logarithmic equivalent transformation. J. Math. Anal. Appl. 314(2), 644–660 (2006)
Roos, C.: A full-Newton step \(O(n)\) infeasible interior-point algorithm for linear optimization. SIAM J. Optim. 16(4), 1110–1136 (2006)
Tanabe, K.: Centered Newton method for linear programming: interior and exterior point method (in Japanese). In: Tone, K. (ed.) New Methods for Linear Programming, vol. 3, pp. 98–100. The Institute of Statistical Mathematics, Tokyo (1990)
Todd, M.J., Toh, K.C., Tütüncü, R.H.: On the Nesterov–Todd direction in semidefinite programming. SIAM J. Optim. 8(3), 769–796 (1998)
Vandenberghe, L., Boyd, S.: Semidefinite programming. SIAM Rev. 38(1), 49–95 (1996)
Wang, G.Q., Bai, Y.Q., Roos, C.: Primal-dual interior-point algorithm for semidefinite optimization based on a simple kernel function. J. Math. Model. Algorithms 4, 409–433 (2005)
Wang, G.Q., Bai, Y.Q.: A new primal-dual path-following interior-point algorithm for semidefinite optimization. J. Math. Anal. Appl. 353(1), 339–349 (2009)
Wang, W., Bi, H., Liu, H.: A full-Newton step interior-point algorithm for linear optimization based on a finite barrier. Oper. Res. Lett. 44(6), 750–753 (2016)
Zhang, L.P., Xu, Y.H.: A new infeasible interior-point algorithm with full step for linear optimization based on a simple function. Int. J. Comput. Math. 88(15), 3163–3185 (2011)
Zhang, L.P.: Full-Newton Step Infeasible Interior-point Algorithm for Conic Programming. Ph.D. Thesis. Shanghai University, Shanghai (2011)
Zhang, L.P., Sun, L.M., Xu, Y.H.: Simplified analysis for full-Newton step infeasible interior-point algorithm for semidefinite programming. Optim. 62(2), 169–191 (2013)
This work is supported by Natural Science Basis Research Plan in Shaanxi Province of China (Program No. 2017JM1014).
Wang, W., Liu, H. & Bi, H. Simplified full Nesterov–Todd step infeasible interior-point algorithm for semidefinite optimization based on a kernel function. J. Appl. Math. Comput. 59, 445–463 (2019). https://doi.org/10.1007/s12190-018-1187-7
Keywords
- Semidefinite optimization
- Infeasible interior-point algorithm
- Full Nesterov–Todd step
- Kernel function
- Exponential convexity