Abstract
After a brief introduction to Euclidean Jordan algebras, we present a new corrector–predictor path-following interior-point algorithm for convex quadratic symmetric cone optimization. Each iteration of the algorithm involves two kinds of steps: a predictor (affine-scaling) step and a full Nesterov–Todd (centring) step. Moreover, we derive the complexity of the algorithm and obtain the best-known iteration bound for small-update methods.
1 Introduction
This paper introduces a corrector–predictor path-following interior-point method (IPM) for convex quadratic optimization (CQO) problems over symmetric cones, formulated in the framework of Euclidean Jordan algebras.
This problem class includes symmetric cone optimization (SCO) as a special case; in particular, it covers CQO and semidefinite optimization (SDO). Li and Toh [1] presented an inexact primal–dual infeasible path-following algorithm for convex quadratic symmetric cone optimization (CQSCO). Nesterov and Todd [2] provided a theoretical foundation for efficient primal–dual IPMs on a special class of convex optimization problems, where the associated cone is self-scaled. Schmieta and Alizadeh [3, 4] studied primal–dual IPMs for SCO extensively in the framework of Euclidean Jordan algebras. Darvay [5] proposed a full-Newton step primal–dual path-following interior-point algorithm for linear optimization (LO). The search direction of his algorithm is obtained by applying an algebraically equivalent transformation to the centring equation, which defines the central path, and then applying Newton's method to the new system of equations. Recently, Wang and Bai [6] and Bai and Zhang [7] extended Darvay's algorithm from LO to SCO and CQSCO, respectively, by using Euclidean Jordan algebras. The first predictor–corrector interior-point algorithms based on Darvay's directions were introduced in [8, 9].
Using a high-order corrector–predictor approach, Liu and Potra [10] introduced an interior-point algorithm for sufficient horizontal linear complementarity problems that acts in a wide neighborhood of the central path, does not depend on the handicap of the problem, and has the best-known iteration complexity for sufficient LCPs. The algorithm reduces the duality gap in both the corrector and the predictor steps, and hence is more efficient. Potra [11] presented two corrector–predictor interior-point algorithms for monotone linear complementarity problems. The algorithms produce a sequence of iterates in the wide neighborhood of the central path, and the complexity is reduced by the use of higher-order information; the latter also ensures superlinear convergence even in the absence of strict complementarity. Mizuno et al. [12] presented a predictor–corrector interior-point algorithm for LO in which each predictor step is followed by a single corrector step and whose iteration complexity is the best known in the LO literature. This algorithm is often referred to as the MTY predictor–corrector algorithm; it was extended to sufficient LCPs by Miao [13] in 1995. This class of problems was studied extensively in the 1991 monograph by Kojima et al. [14], which provided a unified framework for studying IPMs. Recently, Kheirfam [15] presented a predictor–corrector interior-point algorithm for horizontal linear complementarity problems based on Darvay's technique. Gurtuna et al. [16] presented a corrector–predictor method for sufficient linear complementarity problems that does not depend on the handicap of the matrix and can therefore be applied to any sufficient LCP. Their algorithm is quadratically convergent for problems having a strictly complementary solution and has the same computational complexity as Miao's algorithm for sufficient LCPs.
Using higher-order predictors, the same authors extended this approach to a class of corrector–predictor methods that are superlinearly convergent even in the absence of strict complementarity and globally convergent for general positive starting points.
Motivated by these works, we propose a corrector–predictor path-following algorithm for solving CQSCO based on the new search directions. Our algorithm follows the central path by alternating a corrector step (which moves back toward the central path) and a predictor step (which moves along the path toward an optimal solution). The corrector step is a full Nesterov–Todd (NT) step, while the predictor step is a damped (affine-scaling) Newton step. We analyze the algorithm and obtain its complexity bound.
The paper is organized as follows: In Sect. 2, we briefly review Euclidean Jordan algebras and their associated symmetric cones. In Sect. 3, we propose the new search directions. In Sect. 4, the corrector–predictor algorithm based on the new directions is presented. In Sect. 5, we analyze the algorithm and derive the iteration bound. Finally, we give some conclusions in Sect. 6.
2 Euclidean Jordan Algebra
Here, we outline the main results on Euclidean Jordan algebras and symmetric cones that we need. For a comprehensive study, the reader is referred to [17–19].
A Euclidean Jordan algebra \({\mathcal {J}}\) is a finite-dimensional vector space endowed with a bilinear map \(\circ : {\mathcal {J}}\times {\mathcal {J}}\rightarrow {\mathcal {J}}\) such that, for all \({x}, {y}\in {\mathcal {J}}\), \({x}\circ { y}={ y}\circ { x}\) and \({ x}\circ ({ x}^2\circ { y})={ x}^2\circ ({ x}\circ { y})\), where \({ x}^2={ x}\circ { x}\), together with an inner product satisfying \(\langle x\circ y, z\rangle =\langle x, y\circ z\rangle \). A Jordan algebra has an identity element if there exists a (necessarily unique) element \(e\in {\mathcal {J}}\) such that \(x\circ e=e\circ x=x\) for all \(x\in {\mathcal {J}}\). The set \({\mathcal {K}}(\mathcal {J}):=\{x^2:~x\in {\mathcal {J}}\}\) is called the cone of squares of the Euclidean Jordan algebra \(({\mathcal {J}}, \circ , \langle \cdot , \cdot \rangle )\). A cone is symmetric if and only if it is the cone of squares of a Euclidean Jordan algebra [17].
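As a concrete (and purely illustrative) instance of these axioms, consider the algebra of \(2\times 2\) real symmetric matrices with \(x\circ y=(xy+yx)/2\) and \(\langle x, y\rangle =\mathrm{Tr}(xy)\); the sketch below, with sample elements of our own choosing, checks commutativity, the Jordan identity, and the associativity of the inner product numerically:

```python
def mul(A, B):
    """Ordinary 2x2 matrix product."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def jordan(A, B):
    """Jordan product x∘y = (xy + yx)/2."""
    AB, BA = mul(A, B), mul(B, A)
    return [[0.5*(AB[i][j] + BA[i][j]) for j in range(2)] for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def close(A, B, tol=1e-10):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

# arbitrary symmetric sample elements (our own choice)
x = [[2.0, 1.0], [1.0, 3.0]]
y = [[1.0, -1.0], [-1.0, 4.0]]
z = [[0.5, 0.2], [0.2, 1.0]]
x2 = jordan(x, x)

assert close(jordan(x, y), jordan(y, x))                          # commutativity
assert close(jordan(x, jordan(x2, y)), jordan(x2, jordan(x, y)))  # Jordan identity
# associativity of the inner product: <x∘y, z> = <x, y∘z>
assert abs(tr(mul(jordan(x, y), z)) - tr(mul(x, jordan(y, z)))) < 1e-10
```

The cone of squares of this algebra is the cone of \(2\times 2\) positive semidefinite matrices, the simplest non-trivial symmetric cone beyond the orthant.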
An element \(c\in {\mathcal {J}}\) is said to be idempotent iff \(c\ne 0\) and \(c\circ c=c\). Two elements \(x\) and \(y\) are orthogonal if \(x\circ y=0\). An idempotent \(c\) is primitive if it cannot be written as the sum of two nonzero idempotents. A set of primitive idempotents \(\{{ c}_1, { c}_2, \ldots , { c}_k\}\) is called a Jordan frame iff \({ c}_i\circ { c}_j=0\) for any \(i\ne j\in \{1, 2, \ldots , k\}\) and \(\sum _{i=1}^k{ c}_i={ e}.\) For any \({ x}\in {\mathcal {J}}\), let \(r\) be the smallest positive integer such that \(\{{ e}, { x}, { x}^2, \ldots , { x}^r\}\) is linearly dependent; \(r\) is called the degree of \({ x}\) and is denoted by \(\mathrm{deg}({ x})\). The rank of \({\mathcal {J}}\), denoted by \(\mathrm{rank}({\mathcal {J}})\), is the maximum of \(\mathrm{deg}({ x})\) over all \({ x}\in {\mathcal {J}}\).
Theorem 2.1
(Theorem III.1.2 in [17]) Let \(({\mathcal {J}}, \circ , \langle \cdot ,\cdot \rangle )\) be a Euclidean Jordan algebra with \(rank({\mathcal {J}})=r\). Then, for any \({ x}\in {\mathcal {J}}\), there exists a Jordan frame \(\{{ c}_1, { c}_2, \ldots , { c}_r\}\) and real numbers \(\lambda _1({ x}), \lambda _2({ x}), \ldots , \lambda _r({ x})\) such that \({ x}=\sum _{i=1}^r\lambda _i({ x}){ c}_i.\)
Every \(\lambda _i({ x})\) is called an eigenvalue of \({ x}\). We denote by \(\lambda _{\min }({ x})\) (resp. \(\lambda _{\max }({ x})\)) the minimal (resp. maximal) eigenvalue of \({ x}\). We can then define the square root \({ x}^{\frac{1}{2}}:=\sum _{i=1}^r\sqrt{\lambda _i({ x})}{ c}_i\) whenever all \(\lambda _i\ge 0\); the inverse \({ x}^{-1}:=\sum _{i=1}^r\lambda _i({ x})^{-1}{ c}_i\) whenever all \(\lambda _i\ne 0\); the square \(x^2:=\sum _{i=1}^r\lambda _i({ x})^{2}{ c}_i\); and the trace \(\mathrm{Tr}(x):=\sum _{i=1}^r\lambda _i({ x})\).
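These spectral definitions can be made concrete in the rank-2 algebra of \(2\times 2\) symmetric matrices, where the Jordan frame has a closed form. The following sketch (our own example, not from the paper) builds the decomposition of Theorem 2.1 and then \(x^{\frac{1}{2}}\), \(x^{-1}\) and \(\mathrm{Tr}(x)\) from it:

```python
import math

def spectral(a, b, c):
    """Eigenvalues and Jordan frame of x = [[a, b], [b, c]] (b != 0 assumed)."""
    m, d = (a + c)/2.0, math.hypot((a - c)/2.0, b)
    lam1, lam2 = m + d, m - d
    n = math.hypot(b, lam1 - a)
    u = (b/n, (lam1 - a)/n)                 # unit eigenvector for lam1
    c1 = [[u[0]*u[0], u[0]*u[1]], [u[1]*u[0], u[1]*u[1]]]   # c1 = u u^T
    c2 = [[1.0 - c1[0][0], -c1[0][1]], [-c1[1][0], 1.0 - c1[1][1]]]  # c2 = I - c1
    return (lam1, lam2), (c1, c2)

def combine(f, lams, frame):
    """sum_i f(lam_i) c_i: gives x^{1/2} for f = sqrt, x^{-1} for f = 1/t, etc."""
    return [[f(lams[0])*frame[0][i][j] + f(lams[1])*frame[1][i][j]
             for j in range(2)] for i in range(2)]

X = [[2.0, 1.0], [1.0, 3.0]]                  # positive definite sample element
lams, frame = spectral(X[0][0], X[0][1], X[1][1])
root = combine(math.sqrt, lams, frame)        # x^{1/2}
inv = combine(lambda t: 1.0/t, lams, frame)   # x^{-1}
trace = lams[0] + lams[1]                     # Tr(x) = a + c = 5 here
```

Since `root` shares the Jordan frame of `X`, its Jordan square coincides with its matrix square, so `root @ root` reproduces `X`.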
Since \(\circ \) is a bilinear map, for every \({x}\in {\mathcal {J}}\) there exists a linear operator \(L({ x})\) such that \(x\circ y=L(x)y\) for every \(y\in {\mathcal {J}}\). In particular, \(L({ x}){ e}={ x}\) and \(L({ x}){ x}={ x}^2\). For each \({ x} \in {\mathcal {J}},\) we define
\(P({ x}):=2L({ x})^2-L\left( { x}^2\right) ,\)
where \(L({ x})^2:=L({ x})L({ x})\). The map \(P({ x})\) is called the quadratic representation of \(\mathcal {J}\). For any \({ x}, { y}\in {\mathcal {J}}\), \({ x}\) and \({ y}\) are said to operator commute iff \(L({ x})\) and \(L({ y})\) commute, i.e., \(L({ x})L({ y})=L({ y})L({ x})\). In other words, \({ x}\) and \({ y}\) operator commute iff, for all \({ z}\in {\mathcal {J}}\), \({ x}\circ ({ y}\circ { z})={ y}\circ ({ x}\circ { z})\) (see [4]). For any \({ x}, { y}\in {\mathcal {J}}\), the inner product is defined as \(\langle { x}, { y}\rangle :={\mathrm{Tr}}({ x}\circ { y}),\) and the Frobenius norm of \({ x}\) as \(\Vert { x}\Vert _F:=\sqrt{{\mathrm{Tr}}({ x}^2)}=\sqrt{\sum _{i=1}^r\lambda _i^2({ x})}.\) Observe that \(\Vert e\Vert _F=\sqrt{r}\), since the identity element \(e\) has all \(r\) eigenvalues equal to \(1\).
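In the symmetric-matrix algebra the quadratic representation is known to act as \(P(x)y=xyx\); assuming the standard definition \(P(x)=2L(x)^2-L(x^2)\), the following sketch checks this identity numerically on sample elements:

```python
def mul(A, B):
    """Ordinary 2x2 matrix product."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def L(A, B):
    """L(x) applied to y, i.e. x∘y = (xy + yx)/2."""
    AB, BA = mul(A, B), mul(B, A)
    return [[0.5*(AB[i][j] + BA[i][j]) for j in range(2)] for i in range(2)]

def P(A, B):
    """Quadratic representation: P(x)y = 2 L(x)(L(x)y) - L(x^2)y."""
    t = L(A, L(A, B))       # L(x)^2 y
    u = L(L(A, A), B)       # L(x^2) y  (here x∘x equals the matrix square)
    return [[2.0*t[i][j] - u[i][j] for j in range(2)] for i in range(2)]

x = [[2.0, 1.0], [1.0, 3.0]]
y = [[1.0, -2.0], [-2.0, 5.0]]
xyx = mul(mul(x, y), x)     # the matrix-algebra form of P(x)y
assert all(abs(P(x, y)[i][j] - xyx[i][j]) < 1e-9 for i in range(2) for j in range(2))
```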
We say that two elements \({ x}\) and \({ y}\) in \(\mathcal {J}\) are similar, briefly denoted \({ x}\sim { y}\), if and only if \({ x}\) and \({ y}\) share the same set of eigenvalues. Let \(\mathrm{int}\mathcal {K}\) denote the interior of the symmetric cone \(\mathcal {K}\). Then \({ x}\in {\mathcal {K}}\) if and only if \(\lambda _i({x})\ge 0\), and \({ x}\in \mathrm{int}{\mathcal {K}}\) if and only if \(\lambda _i({x})>0\), for all \(i=1, 2, \ldots , r\). We also say that \({ x}\) is positive semidefinite (positive definite) iff \({ x}\in {\mathcal {K}}~ ({ x}\in \mathrm{int}{\mathcal {K}})\).
Lemma 2.1
(Proposition 21 in [4]) Let \({ x}, { s}, { u}\in \mathrm{int}{\mathcal {K}}\). Then
-
(i)
\(P\left( { x}^{\frac{1}{2}}\right) { s}\sim P\left( { s}^{\frac{1}{2}}\right) { x}\).
-
(ii)
\(P\Big ((P({ u}){ x})^{\frac{1}{2}}\Big )P({ u}^{-1}){ s}\sim P\left( { x}^{\frac{1}{2}}\right) { s}.\)
Lemma 2.2
(Proposition 3.2.4 in [19]) Let \({ x}, { s}\in \mathrm{int}{\mathcal {K}}\), and \(w\) be the scaling point of \({ x}\) and \({ s}\). Then
Lemma 2.3
(Lemma 14 in [4]) Let \(x, s\in \mathcal {J}\). Then
Lemma 2.4
(Lemma 30 in [4]) Let \({ x, s}\in \mathrm{int}{\mathcal {K}}\). Then
Lemma 2.5
(Theorem 4 in [20]) Let \({ x, s}\in \mathrm{int}{\mathcal {K}}\). Then
Lemma 2.6
(Lemma 2.11 in [6]) Let \(t>0\) and \(v\in \mathcal {K}\). Then
Lemma 2.7
(Lemma 4.1 in [6]) Let \(x(\alpha ):=x+\alpha \Delta x, s(\alpha ):=s+\alpha \Delta s\) for \(0\le \alpha \le 1\). Let \(x, s \in \mathrm{int}{\mathcal {K}}\), and \(x(\alpha )\circ s(\alpha )\in \mathrm{int}{\mathcal {K}}\) for \(0\le \alpha \le {\bar{\alpha }}\). Then \(x({\bar{\alpha }}), s({\bar{\alpha }})\in \mathrm{int}{\mathcal {K}}.\)
3 The New Search Directions
Consider the CQSCO problem as
along with its dual problem as
where \(c\) and the rows \(a_i\) of \(A\) lie in \({\mathcal {J}}\), \(b, y\in R^m\), and \(\langle x, s\rangle =\mathrm{Tr}(x\circ s)\) stands for the trace inner product in \(\mathcal {J}\). Moreover, \(Q: \mathcal {J}\longrightarrow \mathcal {J}\) is a symmetric positive semidefinite linear operator, i.e., \(\langle x, Qx\rangle \ge 0\) for all \(x\in \mathcal {J}\). Furthermore, \(Ax=b\) and \(A^Ty+s-Qx=c\) mean that
respectively. Throughout the paper, we assume that (P) and (D) satisfy the interior-point condition (IPC), i.e., there exists \((x^0, y^0, s^0)\) such that
and the matrix \(A\) is of rank \(m\). Under the IPC, finding an optimal solution of (P) and (D) is equivalent to solving the following system
The basic idea of primal–dual IPMs is to replace the third equation in (1), the so-called complementarity condition for (P) and (D), by the parameterized equation \(x\circ s=\mu e\) with \(\mu >0\). The system (1) can be written as
For each \(\mu >0\), the perturbed system (2) has a unique solution \((x(\mu ), y(\mu ), s(\mu ))\), and we call \(x(\mu )\) and \((y(\mu ), s(\mu ))\) the \(\mu \)-centers of problems (P) and (D), respectively. The set of \(\mu \)-centers forms a homotopy path, called the central path. As \(\mu \rightarrow 0\), the limit of the central path exists and, since the limit points satisfy the complementarity condition, yields an optimal solution for (P) and (D) [18].
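For the simplest symmetric cone, \(\mathcal {K}=R^n_+\), the centring condition \(x\circ s=\mu e\) reads \(x_is_i=\mu \) componentwise, and the central path can be computed directly for toy instances. The sketch below uses a hypothetical two-variable LO example of our own (with \(Q=0\)): minimize \(x_1+2x_2\) subject to \(x_1+x_2=1\), \(x\ge 0\). Eliminating \(x\) via \(x_i=\mu /s_i\) with \(s=(1-y, 2-y)\) reduces the \(\mu \)-center to a monotone scalar equation in \(y\), solved here by bisection:

```python
import math

def mu_center(mu):
    """mu-center of: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0 (toy example).
    Primal feasibility mu/(1-y) + mu/(2-y) = 1 is increasing in y < 1."""
    f = lambda y: mu/(1.0 - y) + mu/(2.0 - y) - 1.0
    lo, hi = -1.0e6, 1.0 - 1e-12          # f(lo) < 0 < f(hi)
    for _ in range(200):                   # plain bisection
        mid = 0.5*(lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    y = 0.5*(lo + hi)
    s = (1.0 - y, 2.0 - y)                # dual slacks s = c - A^T y
    x = (mu/s[0], mu/s[1])                # complementarity x_i s_i = mu
    return x, y, s

x, y, s = mu_center(1e-8)                 # close to the limit mu -> 0
```

As \(\mu \rightarrow 0\), these \(\mu \)-centers converge to the optimal vertex \(x^*=(1, 0)\), illustrating the limiting behavior described above.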
Lemma 3.1
(Lemma 28 in [3]) Let \(x, s\), and \(w\) be in some Euclidean Jordan algebra \({\mathcal {J}}, x, s\in \mathrm{int}{\mathcal {K}}\) and \(w\) invertible. Then \({x}\circ {s}=\mu {e}\) iff \(P(w^{-\frac{1}{2}}){x}\circ P(w^{\frac{1}{2}}){s}=\mu {e}.\)
Suppose that the iterate \((x, y, s)\) is a primal–dual feasible solution. By Lemma 3.1, we may write the relaxed complementarity condition \(x\circ s=\mu e\) as \(P(w^{-\frac{1}{2}}){x}\circ P(w^{\frac{1}{2}}){s}=\mu {e}\). Applying Newton's method to the resulting system leads to the linear system
For the choice of
\(w:=P\left( x^{\frac{1}{2}}\right) \left( P\left( x^{\frac{1}{2}}\right) s\right) ^{-\frac{1}{2}},\)
we have \(P\left( w^{-\frac{1}{2}}\right) x=P\left( w^{\frac{1}{2}}\right) s\), which gives the Nesterov–Todd (NT) direction [2, 21]. Define
\(v:=\frac{P\left( w^{-\frac{1}{2}}\right) x}{\sqrt{\mu }}\ \left[ =\frac{P\left( w^{\frac{1}{2}}\right) s}{\sqrt{\mu }}\right] .\qquad (4)\)
Note that \(x\circ s=\mu e\) if and only if \(v=e\) (Proposition 5.7.2 in [19]); hence the centring equation can be written as \(\mu v=\mu e\). Combining the last equation with the system (3), we have
Let us denote
Consequently, the system (5) can be written in the following form
where \({\bar{Q}}:=P(w^{\frac{1}{2}}){Q}P(w^{-\frac{1}{2}}).\) For the analysis of the algorithm, we define a norm-based proximity measure \(\sigma (x, s; \mu )\) as follows:
\(\sigma (x, s; \mu ):=\sigma (v):=\Vert e-v\Vert _F,\qquad (8)\)
where \(v\) is defined as (4). Due to the first two equations of the system (7), we have
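As a numerical sanity check of the scaling point and of \(v\) in the \(2\times 2\) symmetric-matrix algebra, where \(P(u)z=uzu\): assuming \(w=P(x^{\frac{1}{2}})\big (P(x^{\frac{1}{2}})s\big )^{-\frac{1}{2}}\), the defining property \(P(w)s=x\) holds, and \(\Vert v\Vert _F^2=\mathrm{Tr}(x\circ s)/\mu \). The sketch below verifies both on sample data of our own choosing:

```python
import math

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mpow(A, p):
    """A^p for a 2x2 symmetric positive definite matrix, built from its
    spectral decomposition A = l1*c1 + l2*c2 (Theorem 2.1 with r = 2)."""
    a, b, c = A[0][0], A[0][1], A[1][1]
    if abs(b) < 1e-15:                      # already diagonal
        return [[a**p, 0.0], [0.0, c**p]]
    m, d = (a + c)/2.0, math.hypot((a - c)/2.0, b)
    l1, l2 = m + d, m - d
    n = math.hypot(b, l1 - a)
    u = (b/n, (l1 - a)/n)                   # unit eigenvector for l1
    c1 = [[u[0]*u[0], u[0]*u[1]], [u[1]*u[0], u[1]*u[1]]]
    return [[l1**p*c1[i][j] + l2**p*((1.0 if i == j else 0.0) - c1[i][j])
             for j in range(2)] for i in range(2)]

x = [[2.0, 0.5], [0.5, 1.0]]               # sample strictly feasible pair
s = [[1.0, -0.3], [-0.3, 2.0]]
mu = 2.0

xh = mpow(x, 0.5)
w = mul(xh, mul(mpow(mul(xh, mul(s, xh)), -0.5), xh))   # NT scaling point

wsw = mul(w, mul(s, w))                     # P(w)s, which should reproduce x
wmh = mpow(w, -0.5)
v = [[mul(wmh, mul(x, wmh))[i][j]/math.sqrt(mu) for j in range(2)]
     for i in range(2)]                     # v = P(w^{-1/2})x / sqrt(mu)
```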
4 The Corrector–Predictor Algorithm
Here, we propose a corrector–predictor algorithm based on the modified NT-search directions, which uses these directions in both the corrector and predictor steps. First, we define the \(\tau \)-neighborhood of the central path as follows:
\(\mathcal {N}(\tau ):=\left\{ (x, y, s):\ Ax=b,\ A^Ty+s-Qx=c,\ x, s\in \mathrm{int}{\mathcal {K}},\ \sigma (x, s; \mu )\le \tau \right\} .\)
The algorithm starts with \((x, y, s)\) in the \(\tau \)-neighborhood, which certainly holds at \((x^0, y^0, s^0)\) since \(\sigma (x^0, s^0; \mu ^0)=0\). If, for the current iterate \((x, y, s)\), \(r\mu >\epsilon \), then the algorithm performs a corrector and a predictor step. In the corrector step, the algorithm solves the system (7) for the scaled directions \((d_x, \Delta y, d_s)\), that is,
and using (6) for \((\Delta x, \Delta s)\), we obtain
In the predictor (affine-scaling) step, starting at the iterate \((x, y, s)\), the search directions \((\Delta ^px, \Delta ^py, \Delta ^ps)\) are the damped Newton directions, defined by
where
We denote the iterates after a predictor step by
where \(\theta \in ]0, 1[.\) The iterate \((x^p, y^p, s^p)\) will be in the \(\tau \)-neighborhood again. The algorithm repeats until the duality gap is less than the accuracy parameter \(\epsilon \). A formal description of the algorithm is given in Fig. 1.
5 Analysis
In this section, we analyze the algorithm described above. For the affine-scaling step, we give a sufficient condition for strict feasibility and bound its effect on the proximity measure, provided that the proximity measure does not exceed the proximity parameter. For the corrector step, we derive the relation between the proximity parameters. The following lemmas are essential for the analysis of the algorithm.
Lemma 5.1
(Lemma 8 in [22]) Let \(x, s\in {\mathcal {J}}\) with \(\mathrm{Tr}(x\circ s)\ge 0\). Then
Lemma 5.2
(Lemma 9 in [22]) Let \(x, s\in \mathrm{int}{\mathcal {K}}\) and \(\mu >0\). Assume that \(\sigma :=\sigma (v)\). Then, \(1-\sigma \le \lambda _i(v)\le 1+\sigma ,~~\mathrm{for}~i=1, 2, \ldots , r.\)
5.1 The Affine-Scaling Step
The next lemma gives a sufficient condition for yielding strict feasibility after an affine-scaling step.
Lemma 5.3
Let \(x, s\in \mathrm{int}\mathcal {K}\) and \(\mu >0\) such that \(\sigma :=\sigma (x, s; \mu )<1\). Furthermore, let \(0<\theta <1\). Let \(x^p=x+\theta \Delta ^px\) and \(s^p=s+\theta \Delta ^ps\) denote the iterates after an affine-scaling step. Then \(x^p, s^p\in \mathrm{int}\mathcal {K}\) if \(K(\sigma , \theta , r)>0,\) where
Proof
Let us introduce
for \(\alpha \in [0, 1]\). Using (4) and (12), we have
Since \(P(w^{\frac{1}{2}})\) and \(P(w^{-\frac{1}{2}})\) are automorphisms of \(\mathrm{int}{\mathcal {K}}\) (Theorem III.2.1 in [17]), by (13), \(x^p\) and \(s^p\) belong to \(\mathrm{int}{\mathcal {K}}\) if and only if \(v+\theta d^p_x\) and \(v+\theta d^p_s\) belong to \(\mathrm{int}{\mathcal {K}}\). Therefore, using the third equation of (11), we obtain
From the above relation and Lemma 2.3, we obtain
where the last inequality follows since, for each fixed \(0<\theta <1\), the function \(f(\alpha )=\frac{\alpha ^2\theta ^2}{1-\alpha \theta }\) is strictly increasing on \(0\le \alpha \le 1\). From Lemma 5.1, the third equation of (11) and Lemma 5.2, we get
This implies that \(\mathrm{det}\big (v_x^p(\alpha )\circ v_s^p(\alpha )\big )>0\) for \(0\le \alpha \le 1\). From Lemma 2.7 with \({\bar{\alpha }}=1\), we have \(v+\theta d_x^p\in \mathrm{int}{\mathcal {K}}\) and \(v+\theta d_s^p\in \mathrm{int}{\mathcal {K}}\). This proves the lemma.\(\square \)
By following (4), we define
\(v^p:=\frac{P\left( (w^p)^{-\frac{1}{2}}\right) x^p}{\sqrt{\mu ^p}}\ \left[ =\frac{P\left( (w^p)^{\frac{1}{2}}\right) s^p}{\sqrt{\mu ^p}}\right] ,\qquad (17)\)
where \(w^p\) is the scaling point of \(x^p\) and \(s^p\). Using (13) with \(\alpha =1\), (17) and Lemmas 2.1 and 2.2, we have
where \(w^p\) is the scaling point of \(x^p\) and \(s^p\). Using (13) with \(\alpha =1\), (17) and Lemmas 2.1 and 2.2, we have
Using Lemma 2.5 and (16) with \(\alpha =1\), we obtain
In the following lemma, we investigate the effect on the proximity measure of an affine-scaling step followed by an update of the parameter \(\mu \).
Lemma 5.4
Let \(\sigma :=\sigma (x, s; \mu )<1\), \(\mu ^p=(1-\theta )\mu \), where \(0<\theta <1\), \(K(\sigma , \theta , r)>0\) and let \(x^p, s^p\) denote the iterates after an affine-scaling step, i.e., \(x^p=x+\theta \Delta ^px\) and \(s^p=s+\theta \Delta ^ps\). Then
Proof
From Lemma 5.3 we deduce that the affine-scaling step is strictly feasible. Using (18), (19), (14) with \(\alpha =1\) and Lemma 2.4, we obtain
On the other hand, by Lemma 5.2, we have
Finally, we get
which completes the proof.\(\square \)
5.2 The Corrector Step
The following lemma gives a condition for strict feasibility of the full NT-step. Its proof is similar to that of Lemma 11 in [22].
Lemma 5.5
Let \(\sigma (v)\) be defined as (8), and \((x, s)\in \mathrm{int}{\mathcal {K}}\times \mathrm{int}{\mathcal {K}}\). If \(\sigma (v)<\frac{2}{1+\sqrt{1+\sqrt{2}}}\), then the full-NT step for CQSCO is strictly feasible.
The next lemma is devoted to the proximity measure of the iterates obtained by a full NT-step.
Lemma 5.6
Let \(x^+=x+\Delta x\) and \(s^+=s+\Delta s\) and \(\sigma (v)<\frac{2}{1+\sqrt{1+\sqrt{2}}}\). Then
Proof
Let \(v^+:=\frac{P\left( (w^+)^{\frac{1}{2}}\right) s^+}{\sqrt{\mu }}\), where \(w^+\) is the scaling point of \(x^+\) and \(s^+\). Then, similarly to (18) and (19), we have, respectively,
Using the above relations, Lemma 2.6 with \(t=1\) and Lemma 2.4, we have
This completes the proof. \(\square \)
The next lemma gives the effect of the full NT-step on the duality gap. Its proof is similar to that of Lemma 12 in [22].
Lemma 5.7
If \(\sigma (v)<\frac{2}{1+\sqrt{1+\sqrt{2}}}\), then
5.3 The Effect on Duality Gap After a Main Iteration
The following lemma gives an upper bound of the duality gap after a main iteration.
Lemma 5.8
Let \(x, s\in \mathrm{int}\mathcal {K}\), \(\mu >0\) such that \(\sigma :=\sigma (x, s; \mu )<1,\) and \(0<\theta \le 1\). If \(x^p\) and \(s^p\) are the iterates obtained after the affine-scaling step of the algorithm, then
Proof
Letting \(\alpha =1\) in (13) and (14), we obtain
The first inequality follows from the third equation of (11). Since \(x\) and \(s\) are obtained by a full NT-step of the algorithm, by Lemma 5.7, we have
The proof is complete. \(\square \)
5.4 Fixing the Parameter
In the following, we fix the parameters \(\tau \) and \(\theta \) so as to guarantee that, after a main iteration, the proximity measure does not exceed the proximity parameter \(\tau \).
Let \((x, y, s)\) be the iterate at the start of a main iteration with \(x\in \mathrm{int}\mathcal {K}\) and \(s\in \mathrm{int}\mathcal {K}\) such that \(\sigma (x, s; \mu )\le \tau \). After a corrector step, by Lemma 5.6, one has
It can be easily verified that the right-hand side of the above inequality is monotonically increasing with respect to \(\sigma \), which implies that
Following the affine-scaling step and a \(\mu \)-update, by Lemma 5.4, one has
where \(K(\sigma , \theta , r)\) is defined as in Lemma 5.3. It can be easily verified that the right-hand side of (22) is monotonically increasing with respect to \(\sigma \), which means that
To keep \(\sigma ^p\le \tau \), it suffices that
At this stage, for the fixed threshold \(\tau =\frac{5}{11}\) and update parameter \(\theta =\frac{1}{4\sqrt{2r}}\), after some simple calculation, we obtain \(\sigma ^p\le 0.4468<\tau \approx 0.4545.\) Thus, the algorithm is well defined. Moreover, \(K(\sigma , \theta , r)\ge 0.4493,\) which implies, by Lemma 5.3, that the predictor step is strictly feasible.
5.5 Complexity Bound
The following lemma gives an upper bound for the number of iterations produced by our algorithm.
Lemma 5.9
Let \(x^0\) and \(s^0\) be strictly feasible, \(\mu ^0=\frac{\mathrm{Tr}(x^0\circ s^0)}{r}\) and \(\sigma (x^0, s^0; \mu ^0)<\tau \). Moreover, let \(x^k\) and \(s^k\) be the iterates obtained after \(k\) iterations. Then \(\mathrm{Tr}(x^k\circ s^k)\le \epsilon \), for \(k\ge 1+\Big \lceil \frac{1}{\theta }\log \frac{4\mathrm{Tr}(x^0\circ s^0)}{\epsilon }\Big \rceil .\)
Proof
It follows from Lemma 5.8 that
Then the inequality \(\mathrm{Tr}(x^k\circ s^k)\le \epsilon \) holds if \(4(1-\theta )^{k-1}\mathrm{Tr}(x^0\circ s^0)\le \epsilon .\) Taking logarithms and using \(\log (1+t)\le t\) for \(t>-1\) (with \(t=-\theta \)), we observe that the above inequality holds if \(-\theta (k-1)+\log \big (4\mathrm{Tr}(x^0\circ s^0)\big )\le \log \epsilon .\) This implies the result. \(\square \)
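The bound of Lemma 5.9 is elementary to check numerically. With the update parameter \(\theta =\frac{1}{4\sqrt{2r}}\) used later in Theorem 5.1, the sketch below verifies \(4(1-\theta )^{k-1}\mathrm{Tr}(x^0\circ s^0)\le \epsilon \) at the stated \(k\) for a few sample values of \(r\), \(\mathrm{Tr}(x^0\circ s^0)\) and \(\epsilon \):

```python
import math

def iteration_bound(T0, eps, r):
    """k from Lemma 5.9 with the paper's update parameter theta = 1/(4 sqrt(2r));
    T0 stands for Tr(x^0 ∘ s^0)."""
    theta = 1.0/(4.0*math.sqrt(2.0*r))
    k = 1 + math.ceil(math.log(4.0*T0/eps)/theta)
    return k, theta

# the stated k guarantees 4 (1-theta)^{k-1} T0 <= eps, hence Tr(x^k ∘ s^k) <= eps
for r, T0, eps in [(2, 3.0, 1e-6), (25, 50.0, 1e-8), (400, 1.0e3, 1e-10)]:
    k, theta = iteration_bound(T0, eps, r)
    assert 4.0*(1.0 - theta)**(k - 1)*T0 <= eps
```

This also makes the \(O(\sqrt{r}\log (r/\epsilon ))\) growth of the bound visible: doubling \(r\) multiplies \(1/\theta \), and hence \(k\), by roughly \(\sqrt{2}\).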
Theorem 5.1
Let \(\tau =\frac{5}{11}\) and \(\theta =\frac{1}{4\sqrt{2r}}\). Then the algorithm is well defined and requires at most
\(1+\Big \lceil 4\sqrt{2r}\,\log \frac{4\mathrm{Tr}(x^0\circ s^0)}{\epsilon }\Big \rceil \)
iterations. The output is a primal–dual pair \((x, s)\) satisfying \(\mathrm{Tr}(x\circ s)\le \epsilon .\)
Proof
With \(\tau =\frac{5}{11}\) and \(\theta =\frac{1}{4\sqrt{2r}}\), the desired result follows immediately from Lemma 5.9 and the discussion in Sect. 5.4. \(\square \)
6 Conclusions
We presented a corrector–predictor path-following algorithm for convex quadratic optimization over symmetric cones and derived its complexity. The result shows that the proposed algorithm enjoys the best-known iteration bound for small-update interior-point methods. An interesting topic for further research is the extension of the analysis to Cartesian \(P_*(\kappa )\) linear complementarity problems over symmetric cones.
References
Li, L., Toh, K.C.: A polynomial-time inexact interior-point method for convex quadratic symmetric cone programming. J. Math. Ind. 2, 199–212 (2010)
Nesterov, Y.E., Todd, M.J.: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 8(2), 324–364 (1998)
Schmieta, S.H., Alizadeh, F.: Extension of primal-dual interior-point algorithms to symmetric cones. Math. Program. 96(3), 409–438 (2003)
Schmieta, S.H., Alizadeh, F.: Associative and Jordan algebras and polynomial time interior-point algorithms for symmetric cones. Math. Oper. Res. 26, 543–564 (2001)
Darvay, Z.: New interior-point algorithms in linear programming. Adv. Model. Optim. 5(1), 51–92 (2003)
Wang, G.Q., Bai, Y.Q.: A new full Nesterov–Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 154(3), 966–985 (2012)
Bai, Y., Zhang, L.: A full-Newton step interior-point algorithm for symmetric cone convex quadratic optimization. J. Ind. Manag. Optim. 7(4), 891–906 (2011)
Darvay, Z.: A new predictor-corrector algorithm for linear programming. Alkalmazott Matematikai Lapok 22, 135–161 (2005). (in Hungarian)
Darvay, Z.: A predictor-corrector algorithm for linearly constrained convex optimization. Studia Univ. Babeş-Bolyai Inform. 54(2) (2009)
Liu, X., Potra, F.A.: Corrector-predictor methods for sufficient linear complementarity problems in a wide neighborhood of the central path. SIAM J. Optim. 17(3), 871–890 (2006)
Potra, F.A.: Corrector-predictor methods for monotone linear complementarity problems in a wide neighborhood of the central path. Math. Program. 111, 243–272 (2008)
Mizuno, S., Todd, M.J., Ye, Y.: On adaptive-step primal-dual interior point algorithms for linear programming. Math. Oper. Res. 18, 964–981 (1993)
Miao, J.: A quadratically convergent \({\cal O}((1+\kappa )\sqrt{n}L)\)-iteration algorithm for the \(P_*(\kappa )\)-matrix linear complementarity problem. Math. Program. 69, 355–368 (1995)
Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science, vol. 538. Springer, Berlin (1991)
Kheirfam, B.: A predictor-corrector interior-point algorithm for \(P_*(\kappa )\) horizontal linear complementarity problem. Numer. Algorithms (2013). doi:10.1007/s11075-013-9738-3
Gurtuna, F., Petra, C., Potra, F., Shevchenko, O., Vancea, A.: Corrector-predictor methods for sufficient linear complementarity problems. Comput. Optim. Appl. 48, 453–485 (2011)
Faraut, J., Korányi, A.: Analysis on symmetric cones. Oxford Mathematical Monographs. The Clarendon Press Oxford University Press, New York (1994)
Faybusovich, L.: Linear systems in Jordan algebras and primal-dual interior-point algorithms. J. Comput. Appl. Math. 86, 149–175 (1997)
Vieira, M.V.C.: Jordan algebraic approach to symmetric optimization. Ph.D thesis, Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands (2007).
Sturm, J.F.: Similarity and other spectral relations for symmetric cones. Linear Algebra Appl. 312(1–3), 135–154 (2000)
Nesterov, Y.E., Todd, M.J.: Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 22, 1–42 (1997)
Kheirfam, B., Mahdavi-Amiri, N.: A new interior-point algorithm based on modified Nesterov–Todd direction for symmetric cone linear complementarity problem. Optim. Lett. 8(3), 1017–1029 (2014)
Acknowledgments
The author is grateful to two anonymous referees and the Editors for their constructive comments and suggestions to improve the presentation.
Communicated by Florian A. Potra.
Kheirfam, B. A Corrector–Predictor Path-Following Method for Convex Quadratic Symmetric Cone Optimization. J Optim Theory Appl 164, 246–260 (2015). https://doi.org/10.1007/s10957-014-0554-2
Keywords
- Convex quadratic symmetric cone optimization
- Interior-point method
- Corrector–predictor method
- Polynomial complexity