1 Introduction

Linear programming over a cone \(\mathcal {K}\), or simply cone programming, is the problem of minimizing a linear objective function subject to linear constraints over a cone \(\mathcal {K}\), where \(\mathcal {K}\) is a closed, pointed, convex cone with nonempty interior in \(\mathbb {R}^n\). In this paper, we are interested in interior-point methods (IPMs) for cone programming where the cone \(\mathcal {K}\) is a symmetric cone, that is, \(\mathcal {K}\) is self-dual and its automorphism group acts transitively on its interior. Symmetric cones are intimately related to Euclidean Jordan algebras (see [1] and [2]), and these algebras provide the basic toolbox for our analysis.

It is well known that symmetric cone programming (SCP) includes linear programming (LP), semidefinite programming (SDP), and second-order cone programming (SOCP) as special cases. Thus, more and more attention has been focused on optimization problems over symmetric cones. Nesterov and Todd [3–5] provided a theoretical foundation for primal-dual IPMs on a special class of conic programming problems whose associated cones are the so-called self-scaled cones. Güler [2] observed that the self-scaled cones are precisely the symmetric cones, which have been studied extensively in other areas of the mathematical sciences (see, for example, [1]). Faybusovich [6, 7] first analyzed IPMs over symmetric cones using Euclidean Jordan algebraic tools. Schmieta and Alizadeh [8] proved polynomial iteration complexities for variants of the short-, semi-long-, and long-step path-following algorithms based on the commutative class of search directions over symmetric cones. Vieira [9, 10] proposed primal-dual IPMs for SCP based on kernel functions. Recently, Wang et al. [11–13] presented a new class of full Nesterov-Todd (NT) step IPMs for SCP and for convex quadratic optimization over symmetric cones, deriving iteration bounds that match the currently best known bounds for full NT step feasible IPMs. Liu et al. [14] proposed an IPM with a second-order corrector step for SCP and showed its polynomial convergence. Among feasible IPMs, an interesting result was given by Ai and Zhang [15] for the linear complementarity problem (LCP). Their algorithm decomposes the classical Newton direction into two orthogonal ones and proceeds in a new wide neighborhood. They proved that their algorithm stops after at most \(\mathcal {O}(\sqrt{n}L)\) iterations, where n is the number of variables and L is the input data length. This yields the first wide neighborhood path-following algorithm with the same theoretical complexity as a small neighborhood algorithm for monotone LCP. Later, Li and Terlaky [16] generalized the Ai-Zhang algorithm [15] to SDP, and Potra [17] generalized it to sufficient horizontal LCPs.

All the above-mentioned methods are feasible IPMs, which require the starting point to be strictly feasible. However, it is sometimes difficult to obtain such a starting point in practice. Therefore, infeasible IPMs, which do not require the iterates to be feasible for the relevant linear systems but only to lie in the interior of the cone \(\mathcal {K}\), have been the focus of active research. Rangarajan [18] first proposed an infeasible IPM for SCP using the so-called negative infinity wide neighborhood \(N^-_{\infty }\) and proved the iteration complexity bounds \(\mathcal {O}(r^{2}\log \varepsilon ^{-1})\) for the NT search direction and \(\mathcal {O}(r^{2.5}\log \varepsilon ^{-1})\) for the xs and sx search directions. Potra [19] proposed a broad class of infeasible IPMs for linear complementarity problems over symmetric cones and established polynomial complexity and superlinear convergence. Recently, Gu et al. [20] generalized the full-Newton step infeasible IPM for LP of [21] to symmetric cones. Liu and Yang [22–24] proposed some new infeasible IPMs using a new class of wide neighborhoods and proved polynomial iteration complexities based on the commutative class of search directions. Motivated by these works, we present an improved infeasible IPM for SCP. We prove that the complexity bound of the new algorithm is \(\mathcal {O}(r\log \varepsilon ^{-1})\) for the NT direction, and \(\mathcal {O}(r^{1.5}\log \varepsilon ^{-1})\) for the xs and sx directions. The complexity bounds obtained here are the same as those of small neighborhood infeasible interior-point algorithms over symmetric cones.

In Sect. 2, we review the theory of Euclidean Jordan algebras and symmetric cones. In Sect. 3, we first briefly describe primal-dual path-following IPMs for SCP problems and then state the generic framework of our infeasible interior-point algorithm. In Sect. 4, we first establish several technical lemmas and then derive the iteration complexity of the proposed algorithm based on the commutative class of directions. Finally, some conclusions are given in Sect. 5.

2 Euclidean Jordan Algebras and Symmetric Cones

Let \(\mathcal {J}\) be an n-dimensional vector space over the real field \(\mathbb {R}\), along with a bilinear map \(\circ : \mathcal {J}\times \mathcal {J}\longmapsto \mathcal {J}\). Then \((\mathcal {J},\circ )\) is a Jordan algebra if for all \(x,y \in \mathcal {J}\), \(x\circ y=y\circ x\) and \(x\circ (x^2\circ y)=x^2\circ (x\circ y)\), where \(x^2=x\circ x\). A Jordan algebra \(\mathcal {J}\) is called Euclidean if there exists a symmetric positive definite bilinear form \(\mathcal {Q}\) on \(\mathcal {J}\) such that \(\mathcal {Q}(x\circ y,z)=\mathcal {Q}(x,y\circ z)\). An element \(e\in \mathcal {J}\) is an identity element if \(x\circ e=e\circ x=x\) for all \(x\in \mathcal {J}\). The cone of squares of a Euclidean Jordan algebra \(\mathcal {J}\) is the set \(\mathcal {K}:=\{x^2:x\in \mathcal {J}\}\). A cone is symmetric if and only if it is the cone of squares of some Euclidean Jordan algebra (see [1, Theorems III.2.1 and III.3.1]). Since “\(\circ \)” is bilinear, for every \(x\in \mathcal {J}\) there exists a linear operator \(L_x\) such that \(x\circ y=L_xy\) for all \(y\in \mathcal {J}\). For each \(x\in \mathcal {J}\), define \(Q_x:=2L^2_x-L_{x^2}\), which is called the quadratic representation of x; it plays an important role in our subsequent analysis.
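For concreteness, two standard examples may be kept in mind (see [1, 8]); the computations below are routine. On the space \(\mathcal {S}^n\) of real symmetric \(n\times n\) matrices with \(X\circ Y=\frac{1}{2}(XY+YX)\), the cone of squares is the positive semidefinite cone and

$$\begin{aligned} Q_XY=2L^2_XY-L_{X^2}Y=XYX. \end{aligned}$$

On \(\mathbb {R}^n\) with the componentwise product, one has \(L_x=\mathrm{diag}(x)\), \(Q_x=\mathrm{diag}(x^2)\), and the cone of squares is the nonnegative orthant; this is the LP case, which we use below for illustrative sketches.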

For \(x\in \mathcal {J}\), let r be the smallest integer such that the set \(\{e,x,x^2,\cdots ,x^r\}\) is linearly dependent. Then r is called the degree of x and denoted by \(\mathrm{deg}(x)\). The rank of \(\mathcal {J}\), denoted by \(\mathrm{rank}(\mathcal {J})\), is the maximum of \(\mathrm{deg}(x)\) over all members \(x\in \mathcal {J}\). An idempotent c is a nonzero element of \(\mathcal {J}\) such that \(c^2=c\). A complete system of orthogonal idempotents is a set \(\{c_1,\cdots ,c_k\}\) of idempotents, where \(c_i\circ c_j=0\) for all \(i\ne j\), and \(c_1+\cdots +c_k=e\). An idempotent is primitive if it is not the sum of two other idempotents. A complete system of orthogonal primitive idempotents is called a Jordan frame.

Theorem 2.1

[1, Theorem III.1.2]. Let \(\mathcal {J}\) be a Euclidean Jordan algebra with rank r. Then for every \(x\in \mathcal {J}\), there exist a Jordan frame \(\{c_1,\cdots ,c_r\}\) and real numbers \(\lambda _1,\cdots ,\lambda _r\) such that \(x=\lambda _1c_1+\cdots +\lambda _rc_r.\) The numbers \(\lambda _i\) are called the eigenvalues of x.

We define the following: the inverse \(x^{-1}:=\lambda ^{-1}_1c_1+\cdots +\lambda ^{-1}_rc_r\), whenever all \(\lambda _i\ne 0\); the square root \(x^{1/2}:=\lambda ^{1/2}_1c_1+\cdots +\lambda ^{1/2}_rc_r\), whenever all \(\lambda _i\geqslant 0\); the trace \(\mathrm{tr}(x):=\lambda _1+\cdots +\lambda _r\); and the determinant \(\det (x):=\lambda _1\cdots \lambda _r\). Denote the minimum (maximum) eigenvalue of \(x\in \mathcal {J}\) by \(\lambda _{\min }(x)\) (respectively, \(\lambda _{\max }(x)\)). If \(x^{-1}\) is defined, we call x invertible. We call \(x\in \mathcal {J}\) positive semidefinite (positive definite), denoted by \(x\succeq 0\) (\(x\succ 0\)), if all its eigenvalues are nonnegative (positive). It is clear that an element is positive semidefinite (positive definite) if and only if it belongs to (the interior of) the cone of squares.
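As an illustration, consider \(\mathbb {R}^n\) \((n\geqslant 2)\) with the product \(x\circ y=(x^{\mathrm T}y;\,x_0\bar{y}+y_0\bar{x})\), where \(x=(x_0;\bar{x})\) with \(x_0\in \mathbb {R}\) and \(\bar{x}\in \mathbb {R}^{n-1}\); its cone of squares is the second-order cone. This algebra has rank 2: for \(\bar{x}\ne 0\),

$$\begin{aligned} x=\lambda _1c_1+\lambda _2c_2,\quad \lambda _{1,2}=x_0\pm \Vert \bar{x}\Vert ,\quad c_{1,2}=\frac{1}{2}\left( 1;\,\pm \frac{\bar{x}}{\Vert \bar{x}\Vert }\right) , \end{aligned}$$

so that \(\mathrm{tr}(x)=2x_0\) and \(\det (x)=x_0^2-\Vert \bar{x}\Vert ^2\).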

Since \(\mathrm{tr}(x\circ y)\) is a bilinear function, an inner product can be defined as \(\langle x,y\rangle :=\mathrm{tr}(x\circ y)\). Since \(\mathrm{tr}(\cdot )\) is associative [1], the inner product is associative as well, that is, \(\langle x\circ y,z\rangle =\langle x,y\circ z\rangle \). Consequently, \(L_x\) and \(L^{-1}_x\) are symmetric with respect to \(\langle \cdot ,\cdot \rangle \), that is, \(\langle L_xy, z\rangle =\langle y, L_xz\rangle \) and \(\langle L^{-1}_xy, z\rangle =\langle y, L^{-1}_xz\rangle \). From the definition of \(Q_x\), it follows that \(Q_x\) is also symmetric with respect to \(\langle \cdot ,\cdot \rangle \).

For \(x\in \mathcal {J}\) with eigenvalues \(\lambda _i, 1\leqslant i\leqslant r\), the Frobenius norm, the spectral norm, and the one-norm are defined, respectively, as

$$\begin{aligned} \Vert x\Vert _F:=\sqrt{\langle x,x\rangle }=\sqrt{\left( \sum ^r_{i=1}\lambda ^2_i\right) },\quad \Vert x\Vert _2:=\max _i|\lambda _i|,\quad \Vert x\Vert _1:=\sum ^r_{i=1}|\lambda _i|. \end{aligned}$$
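For instance, in the rank-2 algebra above,

$$\begin{aligned} \Vert x\Vert _F=\sqrt{\lambda _1^2+\lambda _2^2}=\sqrt{2x_0^2+2\Vert \bar{x}\Vert ^2},\quad \Vert x\Vert _2=|x_0|+\Vert \bar{x}\Vert , \end{aligned}$$

and, for \(x\in \mathcal {K}\), \(\Vert x\Vert _1=\mathrm{tr}(x)=2x_0\).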

We state some useful propositions as follows:

Lemma 2.2

[20, Lemma 2.15].    If \(x\circ y\in \mathrm{int}\,\mathcal {K}\), then \(\det (x)\ne 0\).

Lemma 2.3

[25, Theorem 3.1].    For \(x,y\in \mathcal {J}\), we have \(\Vert x\circ y\Vert _1\leqslant \Vert x\Vert _F\Vert y\Vert _F\).

Proposition 2.4

[1, Proposition III.2.2].    If \(x,y\in \mathrm{int}\,\mathcal {K}\), then \(Q_xy\in \mathrm{int}\,\mathcal {K}\).

Proposition 2.5

[8, Proposition 21].    Let \(x,y,p\in \mathrm{int}\,\mathcal {K}\) and define \(\tilde{x}:=Q_px\) and \(\tilde{y}:=Q_{p^{-1}}y\). Then

(1) \(Q_{x^{1/2}}y\) and \(Q_{y^{1/2}}x\) have the same spectrum.

(2) \(Q_{x^{1/2}}y\) and \(Q_{\tilde{x}^{1/2}}\tilde{y}\) have the same spectrum.

We say two elements \(x, y\in \mathcal {J}\) operator commute if \(L_xL_y=L_yL_x\). The tool of operator commutativity is very useful in the analysis of algorithms.

Theorem 2.6

[8, Theorem 27]. Let x and y be two elements of Euclidean Jordan algebra \(\mathcal {J}\). Then x and y operator commute if and only if there is a Jordan frame \(c_1,\cdots ,c_r\) such that \(x=\sum ^r_{i=1}\lambda _ic_i\) and \(y=\sum ^r_{i=1}\mu _ic_i\).
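For example, in \(\mathcal {S}^n\) two matrices operator commute precisely when \(XY=YX\), i.e., when they share an orthonormal eigenbasis \(u_1,\cdots ,u_n\); the common Jordan frame in Theorem 2.6 is then

$$\begin{aligned} c_i=u_iu_i^{\mathrm T},\quad X=\sum ^n_{i=1}\lambda _ic_i,\quad Y=\sum ^n_{i=1}\mu _ic_i. \end{aligned}$$

In the LP algebra, every pair of elements operator commutes, with the standard coordinate frame.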

Lemma 2.7

[8, Lemma 30]. Let \(x,y\in \mathrm{int}\,\mathcal {K}\) and define \(w:=Q_{x^{1/2}}y\). Then \(\mathrm{tr}(x\circ y)=\mathrm{tr}(w)\). Moreover, if x and y operator commute, then \(x\circ y=w\).

3 SCP Problems and Algorithm Framework

Let \(\mathcal {J}\) be a Euclidean Jordan algebra with dimension n, rank r, and cone of squares \(\mathcal {K}\). Consider the primal-dual pair of SCP problems

$$\begin{aligned} (\mathrm{P})\quad \min ~\langle c,x\rangle \quad \mathrm{s.t.}~Ax=b,~x\in \mathcal {K}, \end{aligned}$$

and

$$\begin{aligned} (D)\quad \max ~\langle b,y\rangle \quad \mathrm{s.t.}~A^*y+s=c,~s\in \mathcal {K},y\in \mathbb {R}^m, \end{aligned}$$

where \(c\in \mathcal {J}\) and \(b\in \mathbb {R}^m\). Here, A is a linear operator that maps \(\mathcal {J}\) into \(\mathbb {R}^m\), and \(A^*\) is its adjoint operator. Define the feasible set and the strictly feasible set as follows:

$$\begin{aligned}&\mathcal {F}:=\{(x,y,s)\in \mathcal {K}\times \mathbb {R}^m\times \mathcal {K}: Ax=b, A^*y+s=c\},\\&\mathcal {F}^0:=\{(x,y,s)\in \mathrm{int\,}\mathcal {K}\times \mathbb {R}^m\times \mathrm{int\,}\mathcal {K}: Ax=b, A^*y+s=c\}. \end{aligned}$$

In this paper, we assume that A is surjective and \(\mathcal {F}^0\ne \varnothing \). It is shown in [3, 7] that, under these assumptions, the sets of optimal solutions \(\mathcal {P}^*\) and \(\mathcal {D}^*\) are nonempty and bounded and, moreover, \(\langle x^*,s^*\rangle =0\) for \(x^*\in \mathcal {P}^*\) and \((y^*,s^*)\in \mathcal {D}^*\). It is also shown there that, for \(x,s\in \mathcal {K}\), \(\langle x,s\rangle =0\) is equivalent to \(x\circ s=0\). Therefore, \(x^*\) and \((y^*,s^*)\) are optimal solutions if and only if they satisfy the following system:

$$\begin{aligned} Ax= & {} b,\quad x\in \mathcal {K},\nonumber \\ A^*y+s= & {} c,\quad s\in \mathcal {K},y\in \mathbb {R}^m,\nonumber \\ x\circ s= & {} 0, \end{aligned}$$
(3.1)

where the last equality is called the complementarity condition. Replacing \(x\circ s=0\) in (3.1) with the perturbed complementarity condition \(x\circ s=\mu e\) for \(\mu >0\), we obtain the relaxed system

$$\begin{aligned} Ax= & {} b,\quad x\in \mathcal {K},\nonumber \\ A^*y+s= & {} c,\quad s\in \mathcal {K},y\in \mathbb {R}^m,\nonumber \\ x\circ s= & {} \mu e. \end{aligned}$$
(3.2)

Primal-dual path-following interior-point algorithms follow the solutions of the relaxed system (3.2) as \(\mu \) goes to zero. The relaxed system has a unique solution for every \(\mu >0\), and these solutions form the so-called central trajectory (central path). Moreover, the limit of the trajectory as \(\mu \) goes to 0 yields optimal solutions for (P) and (D).

In classical IPMs, the iterates are allowed to move in a wide neighborhood of the central path. The so-called negative infinity neighborhood, which is a wide neighborhood, is defined as

$$\begin{aligned} \mathcal {N}_\infty ^-(1-\gamma ):=\{(x,y,s)\in \mathrm{int\,}\mathcal {K}\times \mathbb {R}^m\times \mathrm{int\,}\mathcal {K}: \lambda _{\min }(Q_{x^{1/2}}s)\geqslant \gamma \mu \}, \end{aligned}$$

where \(\gamma \in (0,1)\) and \(\mu = \langle x, s\rangle /r\) is the normalized duality gap. Ai [26] and Ai and Zhang [15] introduced a new class of wide neighborhoods of the central path. Later, Li and Terlaky [16] extended the algorithm in [15] to SDP problems, and Liu et al. [22, 27] extended the algorithms in [15, 26] to SCP problems. Analogously, Yang et al. [24] defined the wide neighborhood based on the one-norm as

$$\begin{aligned} \mathcal {N}_1(\tau ,\beta ) :=\{(x,y,s)\in \mathrm{int\,}\mathcal {K}\times \mathbb {R}^m\times \mathrm{int\,}\mathcal {K}: \Vert (\tau \mu {e}-Q_{x^{1/2}}s)^+\Vert _1\leqslant \beta \tau \mu \}, \end{aligned}$$

where \(\tau ,\beta \in (0,1)\). This neighborhood is a wide neighborhood since one can verify that

$$\begin{aligned} \mathcal {N}_\infty ^-(1-\tau )\subseteq \mathcal {N}_1(\tau ,\beta )\subseteq \mathcal {N}_\infty ^-(1-(1-\beta )\tau ), \quad \forall ~0<\tau ,\,\beta <1. \end{aligned}$$
(3.3)
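To make the neighborhood test concrete, the following is a minimal numerical sketch of membership in \(\mathcal {N}_1(\tau ,\beta )\) for the LP special case, where \(\mathcal {J}=\mathbb {R}^n\) with the componentwise product, \(Q_{x^{1/2}}s\) reduces to the componentwise product, and \(r=n\); the function name is ours, not from the literature cited.

```python
import numpy as np

def in_N1(x, s, tau, beta):
    """Membership test for the wide neighborhood N_1(tau, beta),
    specialized to the LP algebra (componentwise Jordan product),
    where the eigenvalues of Q_{x^{1/2}} s are simply x_i * s_i."""
    r = len(x)
    if np.any(x <= 0.0) or np.any(s <= 0.0):
        return False                      # must stay in the interior of K
    w = x * s                             # eigenvalues of Q_{x^{1/2}} s
    mu = w.sum() / r                      # normalized duality gap
    plus = np.maximum(tau * mu - w, 0.0)  # the positive part (tau*mu*e - w)^+
    return plus.sum() <= beta * tau * mu  # one-norm condition

# A perfectly centered point (w = mu * e) lies in the neighborhood:
print(in_N1(np.ones(5), np.ones(5), tau=0.25, beta=0.5))  # True
```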

Most classical primal-dual path-following algorithms take Newton steps toward points on the central path defined by system (3.2) for \(\mu >0\), rather than pure Newton steps for the optimality system (3.1) (sometimes known as the affine-scaling direction). Since these steps are biased toward the interior of \(\mathcal {K}\), it is usually possible to take longer steps along them than along the pure Newton steps for (3.1) before violating the positive definiteness condition. Moving from the current point (x, y, s) toward the target point on the central path corresponding to \(\tau \mu \) leads to the linear system

$$\begin{aligned} A\Delta x= & {} b-Ax,\nonumber \\ A^*\Delta y+\Delta s= & {} c-s-A^*y,\nonumber \\ \Delta x\circ s + x\circ \Delta s= & {} \tau \mu {e}-x\circ s, \end{aligned}$$
(3.4)

where \((\Delta x,\Delta y,\Delta s)\in \mathcal {J}\times \mathbb {R}^m\times \mathcal {J}\) is the search direction and \(\tau \in [0,1]\) is the centering parameter.

The following lemma motivates different, but equivalent, ways of forming the perturbed complementarity condition \(x\circ s=\mu e\), thus leading to different Newton systems.

Lemma 3.1

[8, Lemma 28]. Let x, s, and p be in some Euclidean Jordan algebra \(\mathcal {J}\), with \(x,s\in \mathrm{int\,}\mathcal {K}\) and p invertible. Then \(x\circ s=\mu e\) if and only if \(Q_px\circ Q_{p^{-1}}s=\mu e\).

Denote by \(\mathcal {C}(x,s)\) the set of all scaling elements p for which the scaled pair operator commutes, i.e.,

$$\begin{aligned} \mathcal {C}(x,s):=\{p: p\in \mathrm{int\,}\mathcal {K}~\text{ such } \text{ that }~ Q_px ~\text{ and }~ Q_{p^{-1}}s ~\text{ operator } \text{ commute }\}. \end{aligned}$$

This gives a subclass of the Monteiro-Zhang family of search directions, called the commutative class. In particular, choosing \(p=s^{1/2}\) and \(p=x^{-1/2}\), we obtain the xs and sx search directions, respectively. For the choice of

$$\begin{aligned} p=\big [Q_{x^{1/2}}(Q_{x^{1/2}}s)^{-1/2}\big ]^{-1/2}=\big [Q_{s^{-1/2}}(Q_{s^{1/2}}x)^{1/2}\big ]^{-1/2}, \end{aligned}$$
(3.5)

we obtain the NT search direction. In this paper, we restrict the scaling to \(p\in \mathcal {C}(x,s)\). In the following, we write \(\tilde{A}=AQ_{p^{-1}}\), \(\tilde{c}=Q_{p^{-1}}c\), \(\tilde{x}=Q_px\), and \(\tilde{s}=Q_{p^{-1}}s\). With this notation, the Newton system (3.4) becomes

$$\begin{aligned} \tilde{A}\Delta \tilde{x}= & {} b-\tilde{A}\tilde{x},\nonumber \\ \tilde{A}^*\Delta y+\Delta \tilde{s}= & {} \tilde{c}-\tilde{A}^*y-\tilde{s},\nonumber \\ \Delta \tilde{x}\circ \tilde{s}+\tilde{x}\circ \Delta \tilde{s}= & {} \tau \mu {e}-\tilde{x}\circ \tilde{s}. \end{aligned}$$
(3.6)
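For intuition about the scaling (3.5), in the LP case a direct computation gives

$$\begin{aligned} Q_{x^{1/2}}s=x\circ s,\quad p=\big [(x\circ s^{-1})^{1/2}\big ]^{-1/2}=(s\circ x^{-1})^{1/4},\quad \tilde{x}=Q_px=(x\circ s)^{1/2}=Q_{p^{-1}}s=\tilde{s}, \end{aligned}$$

so the NT scaling maps x and s to the same point, and the scaled pair trivially operator commutes.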

In our new algorithm, we decompose the Newton system (3.6) into the following two systems:

$$\begin{aligned} \tilde{A}\Delta \tilde{x}_-= & {} b-\tilde{A}\tilde{x},\nonumber \\ \tilde{A}^*\Delta y_-+\Delta \tilde{s}_-= & {} \tilde{c}-\tilde{A}^*y-\tilde{s},\nonumber \\ \Delta \tilde{x}_-\circ \tilde{s}+\tilde{x}\circ \Delta \tilde{s}_-= & {} (\tau \mu {e}-\tilde{x}\circ \tilde{s})^-, \end{aligned}$$
(3.7)

and

$$\begin{aligned} \tilde{A}\Delta \tilde{x}_+= & {} 0,\nonumber \\ \tilde{A}^*\Delta y_++\Delta \tilde{s}_+= & {} 0,\nonumber \\ \Delta \tilde{x}_+\circ \tilde{s}+\tilde{x}\circ \Delta \tilde{s}_+= & {} (\tau \mu {e}-\tilde{x}\circ \tilde{s})^+. \end{aligned}$$
(3.8)

As pointed out in [15, 16], the negative part \((\tau \mu {e}-\tilde{x}\circ \tilde{s})^-\) is responsible for reducing the duality gap, while the positive part \((\tau \mu {e}-\tilde{x}\circ \tilde{s})^+\) controls the centrality. We therefore treat the two parts separately and equip the two directions with different step sizes. Let \(\alpha :=(\alpha _1,\alpha _2)\in {\mathbb {R}^2_+}\) be the step sizes taken along \((\Delta \tilde{x}_-,\Delta y_-,\Delta \tilde{s}_-)\) and \((\Delta \tilde{x}_+,\Delta y_+,\Delta \tilde{s}_+)\), respectively. The new iterate is

$$\begin{aligned} (\tilde{x}(\alpha ),y(\alpha ),\tilde{s}(\alpha )) :=(\tilde{x},y,\tilde{s})+\alpha _1(\Delta \tilde{x}_-,\Delta y_-,\Delta \tilde{s}_-)+\alpha _2(\Delta \tilde{x}_+,\Delta y_+,\Delta \tilde{s}_+). \end{aligned}$$
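To illustrate the decomposition, the sketch below solves the two systems (3.7) and (3.8) in the LP special case, where one may take \(p=e\) (so \(\tilde{A}=A\), \(\tilde{x}=x\), \(\tilde{s}=s\)) and the last block is diagonal; it uses dense normal equations for clarity and is only a schematic rendering of this step, not the authors' implementation.

```python
import numpy as np

def solve_newton(A, x, s, rp, rd, rc):
    """Solve  A dx = rp,  A^T dy + ds = rd,  s*dx + x*ds = rc
    (LP case, componentwise products) via the normal equations
    A D A^T dy = rhs with D = diag(x/s)."""
    d = x / s
    M = A @ (d[:, None] * A.T)
    rhs = rp + A @ (d * rd - rc / s)
    dy = np.linalg.solve(M, rhs)
    dx = d * (A.T @ dy - rd) + rc / s
    ds = (rc - s * dx) / x
    return dx, dy, ds

def decomposed_directions(A, b, c, x, y, s, tau):
    """Directions of systems (3.7) (negative part, with residuals)
    and (3.8) (positive part, homogeneous)."""
    r = len(x)
    mu = x @ s / r
    t = tau * mu - x * s
    rp, rd = b - A @ x, c - s - A.T @ y
    zero_p, zero_d = np.zeros_like(b), np.zeros_like(c)
    minus = solve_newton(A, x, s, rp, rd, np.minimum(t, 0.0))
    plus = solve_newton(A, x, s, zero_p, zero_d, np.maximum(t, 0.0))
    return minus, plus
```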

We write \(\tilde{w}=Q_{\tilde{x}^{1/2}}\tilde{s}\) and \(w=Q_{x^{1/2}}s\). By part (2) of Proposition 2.5, w and \(\tilde{w}\) have the same eigenvalues. In addition, \(\tilde{\mu }=\langle \tilde{x}, \tilde{s}\rangle /r=\langle Q_px, Q_{p^{-1}}s\rangle /r=\langle x, s\rangle /r=\mu \). Hence the neighborhood \(\mathcal {N}_1(\tau ,\beta )\) is scaling invariant, that is, (x, y, s) is in the neighborhood if and only if \((\tilde{x},y,\tilde{s})\) is.

Having introduced the key elements for the new algorithm, we state the generic framework of our algorithm.

Algorithm 3.2

Input parameters: \(\varepsilon >0\), \(0<\tau ,\beta <1\), and an initial point \((x^0,y^0,s^0)\in \mathcal {N}_1(\tau ,\beta )\). Set \(\mu _0=\langle x^0,s^0\rangle /r\) and \(k:=0\).

Step 1

    If \(\mu _k\leqslant \varepsilon \mu _0\), then stop.

Step 2

    Choose a scaling element \(p\in \mathcal {C}(x^k,s^k)\) and compute \((\tilde{x}^k,\tilde{s}^k)\).

Step 3

    Compute the directions \((\Delta {\tilde{x}^k_-},\Delta y^k_-,\Delta {\tilde{s}^k_-})\) and \((\Delta {\tilde{x}^k_+},\Delta y^k_+,\Delta {\tilde{s}^k_+})\) by solving the scaled Newton systems (3.7) and (3.8), respectively.

Step 4

    Choose step size vector \(\alpha ^k=(\alpha _1^k,\alpha _2^k)>0\), such that the new iterates

    $$\begin{aligned} (\tilde{x}^{k+1},y^{k+1},\tilde{s}^{k+1}):= & {} (\tilde{x}^k,y^k,\tilde{s}^k)+\alpha ^k_1(\Delta \tilde{x}^k_-,\Delta y^k_-,\Delta \tilde{s}^k_-)\\&+\,\alpha ^k_2(\Delta \tilde{x}^k_+,\Delta y^k_+,\Delta \tilde{s}^k_+), \end{aligned}$$

    remain in \(\mathcal {N}_1(\tau ,\beta )\).

Step 5

    Let \(\big ({x}^{k+1},y^{k+1},{s}^{k+1}\big )=(Q_{p^{-1}}\tilde{x}^{k+1},y^{k+1},Q_{p}\tilde{s}^{k+1})\) and \(\mu _{k+1}=\langle x^{k+1},s^{k+1}\rangle /r\). Set \(k:=k+1\) and go to Step 1.

We will specify our choice of \(\alpha ^k\) in the next section and establish the convergence and iteration complexity of Algorithm 3.2 there. Our choice is based on several factors: maintaining centrality, improving the infeasibility, and decreasing the duality gap. First, using (3.7) and (3.8), the following proposition is readily verified.

Proposition 3.3

Let \(\{(\tilde{x}^k,y^k,\tilde{s}^k)\}\) be generated by Algorithm 3.2. Then for \(k\geqslant 0\), one has \(\tilde{A}\tilde{x}^{k+1}-b=\nu ^{k+1}(\tilde{A}\tilde{x}^{0}-b),\) and \(\tilde{A}^*y^{k+1}+\tilde{s}^{k+1}-\tilde{c}=\nu ^{k+1}(\tilde{A}^*y^{0}+\tilde{s}^{0}-\tilde{c}),\) where \(\nu ^0=1\) and

$$\begin{aligned} \nu ^{k+1}=(1-\alpha ^k_1)\nu ^k=\prod ^k_{i=0}\big (1-\alpha ^i_1\big )\in [0,1]. \end{aligned}$$
(3.9)

Then, we have \(\nu ^k=\frac{\Vert \tilde{A}\tilde{x}^{k}-b\Vert _F}{\Vert \tilde{A}\tilde{x}^{0}-b\Vert _F} =\frac{\Vert \tilde{A}^*y^{k}+\tilde{s}^{k}-\tilde{c}\Vert _F}{\Vert \tilde{A}^*y^{0}+\tilde{s}^{0}-\tilde{c}\Vert _F},\) which shows that \(\nu ^k\) represents the relative infeasibility at \((\tilde{x}^k,y^k,\tilde{s}^k)\). Hence, at every iterate, we maintain the condition:

$$\begin{aligned} \langle \tilde{x}^k ,\tilde{s}^k\rangle \geqslant \nu ^k\langle \tilde{x}^0 ,\tilde{s}^0\rangle , \end{aligned}$$
(3.10)

which ensures that the infeasibility approaches zero as the complementarity \(\langle x,s\rangle \) approaches zero.

We now specify a particular starting point for Algorithm 3.2. This choice was first proposed in [28] for LCP and later extended to symmetric cones in [18].

Let \(u^0\) and \((r^0, v^0)\) be the minimum-norm solutions to the linear systems \(Ax=b\) and \(A^*y+s=c\), respectively. That is

$$\begin{aligned} u^0=\mathrm{argmin}\{\Vert u\Vert _F:Au=b\},\quad (r^0, v^0)=\mathrm{argmin}\{\Vert v\Vert _F:A^*r+v=c\}. \end{aligned}$$
(3.11)
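Computationally, the minimum-norm solutions in (3.11) amount to least-squares problems; a small sketch in the LP setting (our notation, assuming A has full row rank):

```python
import numpy as np

def initial_data(A, b, c):
    """u0 = argmin ||u|| s.t. Au = b, via the pseudoinverse;
    (r0, v0) = argmin ||v|| s.t. A^T r + v = c, i.e., r0 is the
    least-squares solution of A^T r ~ c and v0 is its residual."""
    u0 = np.linalg.pinv(A) @ b
    r0, *_ = np.linalg.lstsq(A.T, c, rcond=None)
    v0 = c - A.T @ r0
    # In the LP algebra the spectral norm is the max absolute entry.
    rho0 = max(np.abs(u0).max(), np.abs(v0).max())
    return u0, r0, v0, rho0
```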

We choose \((x^0,y^0,s^0)\) such that

$$\begin{aligned} x^0=s^0=\rho ^0e,\quad \rho ^0\geqslant \max \{\Vert u^0\Vert _2,\Vert v^0\Vert _2\}. \end{aligned}$$
(3.12)

This implies that \(x^0,s^0\in \mathrm{int\,}\mathcal {K}\), \(x^0-u^0\in \mathcal {K}\), and \(s^0-v^0\in \mathcal {K}\), since the eigenvalues of \(\rho ^0e-u^0\) are \(\rho ^0-\lambda _i(u^0)\geqslant \rho ^0-\Vert u^0\Vert _2\geqslant 0\), and similarly for \(s^0-v^0\).

Let

$$\begin{aligned} \rho ^*=\min \{\max (\Vert x^*\Vert _2,\Vert s^*\Vert _2):x^*\in \mathcal {P}^*,(y^*,s^*)\in \mathcal {D}^*\}, \end{aligned}$$
(3.13)

and, in addition, we assume that \(\rho ^0\geqslant \rho ^*/\varPsi \) for some constant \(\varPsi >0\). Note that we can always increase \(\rho ^0\) so that this assumption holds.

For a given sequence of iterates \(\{(x^k,y^k,s^k)\}\) we define

$$\begin{aligned}&\quad \big (u^{k+1},r^{k+1},v^{k+1}\big )\nonumber \\&=\big (x^{k+1},y^{k+1},s^{k+1}\big )-\big (1-\alpha ^k_1\big )\big (x^k-u^k,y^k-r^k,s^k-v^k\big ). \end{aligned}$$
(3.14)

This auxiliary sequence will be used in our complexity analysis and need not actually be computed in Algorithm 3.2. The following lemma gives useful properties of the auxiliary sequence \(\{(u^k,r^k,v^k)\}\).

Lemma 3.4

Let \(\{(x^k,y^k, s^k)\}\) be generated by Algorithm 3.2, \(\{(u^k,r^k,v^k)\}\) be given by (3.14), and \(\{\nu ^k\}\) be given by (3.9). Then for \(k\geqslant 0\),

(1)

    \(Au^k=b\) and \(A^*r^k+v^k=c;\)

(2)

    \(x^k-u^k=\nu ^k\big (x^0-u^0\big )\in \mathcal {K}\) and \(s^k-v^k=\nu ^k\big (s^0-v^0\big )\in \mathcal {K}\).

Proof

The proof follows by induction from direct substitution: (3.14) gives \(x^{k+1}-u^{k+1}=(1-\alpha ^k_1)(x^k-u^k)\) and \(s^{k+1}-v^{k+1}=(1-\alpha ^k_1)(s^k-v^k)\), so part (2) follows from (3.9); part (1) then follows from Proposition 3.3 together with \(Au^0=b\) and \(A^*r^0+v^0=c\).

4 Analysis of Complexity for Algorithm 3.2

In this section, we first give the strategy for choosing the step size \(\alpha ^k\). Then we develop several technical lemmas. At the end of the section, we present our main polynomial convergence result. For simplicity, from now on we suppress the superscript k (except for \(k=0\)) whenever no confusion arises. However, we denote \(\alpha ^k\) by \(\hat{\alpha }\) while using \(\alpha \) as a free variable.

Our choice of \(\hat{\alpha }\) is based on several considerations. We require that the step size vector \(\hat{\alpha }=(\hat{\alpha }_1,\hat{\alpha }_2)\) satisfies the following three conditions:

$$\begin{aligned}&(\tilde{x}(\alpha ),y(\alpha ),\tilde{s}(\alpha ))\in \mathcal {N}_1(\tau ,\beta ), \end{aligned}$$
(4.1)
$$\begin{aligned}&\langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle \geqslant (1-\alpha _1)\nu \langle \tilde{x}^0,\tilde{s}^0\rangle , \end{aligned}$$
(4.2)

and

$$\begin{aligned} \langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle \leqslant (1-\Delta \alpha _1)\langle \tilde{x},\tilde{s}\rangle , \end{aligned}$$
(4.3)

where \(\nu =\nu ^k\) is defined in (3.9), and \(\Delta \in (0,1)\) is a constant independent of r. Condition (4.1) is a centrality condition that prevents the iterates from prematurely getting too close to the boundary of the symmetric cone \(\mathcal {K}\). Condition (4.2), as we saw in (3.10), ensures that the infeasibility approaches zero as the complementarity approaches zero. Condition (4.3) is needed in order to make comparable progress in the complementarity. From this point on, by Algorithm 3.2 we mean the algorithm whose step size \(\hat{\alpha }\) satisfies (4.1), (4.2), and (4.3).

4.1 Technical Lemmas

We will use the notation:

$$\begin{aligned}&(\Delta \tilde{x}(\alpha ),\Delta {y}(\alpha ),\Delta \tilde{s}(\alpha )) =\alpha _1(\Delta \tilde{x}_-,\Delta {y}_-,\Delta \tilde{s}_-)+\alpha _2(\Delta \tilde{x}_+,\Delta {y}_+,\Delta \tilde{s}_+),\\&\chi (\alpha )=\tilde{x}\circ \tilde{s}+\alpha _1(\tau \mu e-\tilde{x}\circ \tilde{s})^-+\alpha _2(\tau \mu e-\tilde{x}\circ \tilde{s})^+. \end{aligned}$$

In addition, let \(\mu (\alpha ):=\langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle /r\). Since \(\tilde{A}\Delta \tilde{x}_+=0\) and \(\Delta \tilde{s}_+=-\tilde{A}^*\Delta y_+\), we have \(\langle \Delta {\tilde{x}_+},\Delta {\tilde{s}_+}\rangle =-\langle \tilde{A}\Delta \tilde{x}_+,\Delta y_+\rangle =0\). It can also be easily verified that \(\tilde{x}(\alpha )\circ \tilde{s}(\alpha )=\chi (\alpha )+\Delta \tilde{x}(\alpha )\circ \Delta \tilde{s}(\alpha )\), and

$$\begin{aligned}&\quad \Delta \tilde{x}(\alpha )\circ \Delta \tilde{s}(\alpha )\nonumber \\&=\alpha _1^2\Delta {\tilde{x}_-}\circ \Delta {\tilde{s}_-} +\alpha _1\alpha _2(\Delta {\tilde{x}_-}\circ \Delta {\tilde{s}_+}+\Delta {\tilde{s}_-}\circ \Delta {\tilde{x}_+}) +\alpha _2^2\Delta {\tilde{x}_+}\circ \Delta {\tilde{s}_+},\quad \end{aligned}$$
(4.4)
$$\begin{aligned}&\quad \langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle \nonumber \\&=\mathrm{tr\,}(\chi (\alpha )) +\alpha _1^2\langle \Delta {\tilde{x}_-},\Delta {\tilde{s}_-}\rangle +\alpha _1\alpha _2(\langle \Delta {\tilde{x}_-},\Delta {\tilde{s}_+}\rangle +\langle \Delta {\tilde{s}_-},\Delta {\tilde{x}_+}\rangle ). \end{aligned}$$
(4.5)

By using \(\mathrm{tr\,}((\tau \mu {e}-\tilde{x}\circ \tilde{s})^-)+\mathrm{tr\,}((\tau \mu {e}-\tilde{x}\circ \tilde{s})^+)=\mathrm{tr\,}(\tau \mu {e}-\tilde{x}\circ \tilde{s})=-(1-\tau )r\mu \), we have

$$\begin{aligned} \mathrm{tr\,}((\tau \mu {e}-\tilde{x}\circ \tilde{s})^-)\leqslant -(1-\tau )r\mu . \end{aligned}$$
(4.6)

When \((\tilde{x},y,\tilde{s})\in \mathcal {N}_1(\tau ,\beta )\), we have

$$\begin{aligned} \mathrm{tr\,}((\tau \mu {e}-\tilde{x}\circ \tilde{s})^+)=\Vert (\tau \mu {e}-\tilde{x}\circ \tilde{s})^+\Vert _1\leqslant \beta \tau \mu . \end{aligned}$$
(4.7)

The following lemma will be used frequently during the analysis.

Lemma 4.1

[8, Lemma 33]. Let \(p,q\in \mathcal {J}\), and let G be a positive definite matrix that is symmetric with respect to the inner product \(\langle \cdot ,\cdot \rangle \). Then

$$\begin{aligned} \Vert p\Vert _F\Vert q\Vert _F\leqslant & {} \sqrt{\mathrm{cond}\,(G)}\Vert G^{-1/2}p\Vert _F\Vert G^{1/2}q\Vert _F\\\leqslant & {} \frac{1}{2}\sqrt{\mathrm{cond}\,(G)}\left( \Vert G^{-1/2}p\Vert ^2_F+\Vert G^{1/2}q\Vert ^2_F\right) , \end{aligned}$$

where \(\mathrm{cond}\,(G)=\lambda _{\max }\,(G)/\lambda _{\min }(G)\).

We note that, for \(x,s\in \mathrm{int\,}\mathcal {K}\) and \(p\in \mathcal {C}(x,s)\), \(G := L^{-1}_{\tilde{s}}L_{\tilde{x}}\) is a positive definite matrix that is symmetric with respect to the inner product \(\langle \cdot ,\cdot \rangle \). Moreover, the mapping defined by \(\Vert (u,v)\Vert _G=(\Vert G^{-1/2}u\Vert _F^2+\Vert G^{1/2}v\Vert _F^2)^{1/2}\), \(u,v\in \mathcal {J}\), is a norm on \(\mathcal {J}\times \mathcal {J}\).

Lemma 4.2

Let \(x,s\in \mathrm{int\,}\mathcal {K}\), \(p\in \mathcal {C}(x,s)\), and \(G = L^{-1}_{\tilde{s}}L_{\tilde{x}}\). If \((\tilde{x},y,\tilde{s})\in \mathcal {N}_1(\tau ,\beta )\) and \(\beta \leqslant 1/2\), then \(\Vert (\Delta \tilde{x}_+,\Delta \tilde{s}_+)\Vert ^2_G\leqslant \beta \tau \mu \).

Proof

Since \(\tilde{x}\) and \(\tilde{s}\) operator commute, there is a Jordan frame \(c_1,\cdots ,c_r\) such that \(\tilde{x}=\sum ^r_{i=1}\lambda _ic_i\) and \(\tilde{s}=\sum ^r_{i=1}\mu _ic_i\). Then, \(\tilde{x}\circ \tilde{s}=\sum ^r_{i=1}\lambda _i\mu _ic_i\) and \(L_{\tilde{x}}L_{\tilde{s}}c_i=\tilde{x}\circ (\tilde{s}\circ c_i)=\lambda _i\mu _ic_i, ~i=1,\cdots ,r,\) which implies

$$\begin{aligned} (L_{\tilde{x}}L_{\tilde{s}})^{-1}c_i=c_i/(\lambda _i\mu _i). \end{aligned}$$
(4.8)

Multiplying the last equation of (3.8) by \((L_{\tilde{x}}L_{\tilde{s}})^{-1/2}\), we obtain

$$\begin{aligned} G^{-1/2}\Delta \tilde{x}_++G^{1/2}\Delta \tilde{s}_+=(L_{\tilde{x}}L_{\tilde{s}})^{-1/2}(\tau \mu e-\tilde{x}\circ \tilde{s})^+. \end{aligned}$$

Taking norm-squared on both sides, we have

$$\begin{aligned}&\Vert G^{-1/2}\Delta \tilde{x}_+\Vert ^2_F+\Vert G^{1/2}\Delta \tilde{s}_+\Vert ^2_F \\= & {} \Vert (L_{\tilde{x}}L_{\tilde{s}})^{-1/2}(\tau \mu e-\tilde{x}\circ \tilde{s})^+\Vert ^2_F\\= & {} \left\langle (L_{\tilde{x}}L_{\tilde{s}})^{-1/2}(\tau \mu e-\tilde{x}\circ \tilde{s})^+,(L_{\tilde{x}}L_{\tilde{s}})^{-1/2}(\tau \mu e-\tilde{x}\circ \tilde{s})^+\right\rangle \\= & {} \left\langle (\tau \mu e-\tilde{x}\circ \tilde{s})^+,(L_{\tilde{x}}L_{\tilde{s}})^{-1}(\tau \mu e-\tilde{x}\circ \tilde{s})^+\right\rangle \\= & {} \left\langle \sum (\tau \mu -\lambda _i\mu _i)^+c_i, (L_{\tilde{x}}L_{\tilde{s}})^{-1}\sum (\tau \mu -\lambda _i\mu _i)^+c_i\right\rangle \\= & {} \left\langle \sum (\tau \mu -\lambda _i\mu _i)^+c_i, \sum (\tau \mu -\lambda _i\mu _i)^+c_i/(\lambda _i\mu _i)\right\rangle \\= & {} \sum \big [(\tau \mu -\lambda _i\mu _i)^+\big ]^2/(\lambda _i\mu _i)\\\leqslant & {} \left[ \sum (\tau \mu -\lambda _i\mu _i)^+\right] ^2\Big /\min _i(\lambda _i\mu _i)\\\leqslant & {} (\beta \tau \mu )^2/\big ((1-\beta )\tau \mu \big )\\\leqslant & {} \beta \tau \mu . \end{aligned}$$

Here, the fifth equality follows from (4.8), the second inequality follows from \(\tilde{w}=\tilde{x}\circ \tilde{s}\) and \((\tilde{x},y,\tilde{s})\in \mathcal {N}_1(\tau ,\beta )\), and the last inequality holds due to \(\beta \leqslant 1/2\).

Lemma 4.3

Let \(G = L^{-1}_{\tilde{s}}L_{\tilde{x}}\). Then \(\Vert (\Delta \tilde{x}_-,\Delta \tilde{s}_-)\Vert _G\leqslant \sqrt{r\mu }+(1+\sqrt{2})\xi ,\) where \(\xi :=\min \{\Vert (\bar{u},\bar{v})\Vert _G:\tilde{A}\bar{u}=b-\tilde{A}\tilde{x}, \tilde{A}^*\bar{r}+\bar{v}=\tilde{c}-\tilde{A}^*y-\tilde{s}\}.\)

Proof

Let \((\bar{u},\bar{r},\bar{v})\in \mathcal {J}\times \mathbb {R}^m\times \mathcal {J}\) satisfy the equations \(\tilde{A}\bar{u}=b-\tilde{A}\tilde{x}\) and \(\tilde{A}^*\bar{r}+\bar{v}=\tilde{c}-\tilde{A}^*y-\tilde{s}\). Then, by system (3.7), we have

$$\begin{aligned} \tilde{A}(\Delta \tilde{x}_--\bar{u})= & {} 0, \\ \tilde{A}^*(\Delta y_--\bar{r})+(\Delta \tilde{s}_--\bar{v})= & {} 0,\\ L_{\tilde{s}}(\Delta \tilde{x}_--\bar{u})+L_{\tilde{x}}(\Delta \tilde{s}_--\bar{v})= & {} (\tau \mu {e}-\tilde{x}\circ \tilde{s})^- -(L_{\tilde{s}}\bar{u}+L_{\tilde{x}}\bar{v}). \end{aligned}$$

Multiplying the last equation by \((L_{\tilde{x}}L_{\tilde{s}})^{-1/2}\), we obtain

$$\begin{aligned}&\quad G^{-1/2}(\Delta \tilde{x}_--\bar{u})+G^{1/2}(\Delta \tilde{s}_--\bar{v})\\&=(L_{\tilde{x}}L_{\tilde{s}})^{-1/2}(\tau \mu e-\tilde{x}\circ \tilde{s})^- -(G^{-1/2}\bar{u}+G^{1/2}\bar{v}). \end{aligned}$$

Therefore

$$\begin{aligned}&\quad \Vert (\Delta \tilde{x}_-,\Delta \tilde{s}_-)\Vert _G\\&\leqslant \Vert (\Delta \tilde{x}_--\bar{u},\Delta \tilde{s}_--\bar{v})\Vert _G +\Vert (\bar{u},\bar{v})\Vert _G\\&=\Vert G^{-1/2}(\Delta \tilde{x}_--\bar{u})+G^{1/2}(\Delta \tilde{s}_--\bar{v})\Vert _F+\Vert (\bar{u},\bar{v})\Vert _G\\&=\Vert (L_{\tilde{x}}L_{\tilde{s}})^{-1/2}(\tau \mu e-\tilde{x}\circ \tilde{s})^- -\big (G^{-1/2}\bar{u}+G^{1/2}\bar{v}\big )\Vert _F+\Vert (\bar{u},\bar{v})\Vert _G\\&\leqslant \Vert (L_{\tilde{x}}L_{\tilde{s}})^{-1/2}(\tau \mu e-\tilde{x}\circ \tilde{s})^-\Vert _F+\Vert G^{-1/2}\bar{u}\Vert _F+\Vert G^{1/2}\bar{v}\Vert _F+\Vert (\bar{u},\bar{v})\Vert _G, \end{aligned}$$

where the first equality holds due to the definition of \(\Vert \cdot \Vert _G\) and \(\langle \Delta \tilde{x}_--\bar{u},\Delta \tilde{s}_--\bar{v}\rangle =0\).

Similar to the proof of Lemma 4.2,

$$\begin{aligned} \Vert (L_{\tilde{x}}L_{\tilde{s}})^{-1/2}(\tau \mu e-\tilde{x}\circ \tilde{s})^-\Vert _F^2 =\sum _{i=1}^r \big [(\tau \mu -\lambda _i\mu _i)^-\big ]^2\Big /(\lambda _i\mu _i)\leqslant \sum _{i=1}^r \lambda _i\mu _i =r\mu . \end{aligned}$$

By the definition of \(\Vert \cdot \Vert _G\), we have \(\Vert G^{-1/2}\bar{u}\Vert _F+\Vert G^{1/2}\bar{v}\Vert _F\leqslant \sqrt{2}\Vert (\bar{u},\bar{v})\Vert _G.\) Hence the required result follows.

Lemma 4.4

Let \((u^0,r^0,v^0)\) and \((x^0,y^0,s^0)\) satisfy (3.11) and (3.12), respectively. Then \(\xi \leqslant (5+4\varPsi )r\sqrt{\mu }/\sqrt{(1-\beta )\tau }\).

Proof

Let \(\tilde{u}=Q_p{u}\) and \(\tilde{v}=Q_{p^{-1}}{v}\). Then, by Lemma 3.4 and Proposition 2.4, we have \(\tilde{A}\tilde{u}=b\), \(\tilde{A}^*{r}+\tilde{v}=\tilde{c}\), \(\tilde{x}-\tilde{u}\in \mathcal {K}\), and \(\tilde{s}-\tilde{v}\in \mathcal {K}\). Let \((\bar{u},\bar{r},\bar{v}):=(\tilde{x}-\tilde{u},y-r,\tilde{s}-\tilde{v})\); then \(\tilde{A}(-\bar{u})=b-\tilde{A}\tilde{x}\) and \(\tilde{A}^*(-\bar{r})+(-\bar{v})=\tilde{c}-\tilde{A}^*y-\tilde{s}\). Hence,

$$\begin{aligned} \xi \leqslant \Vert (-\bar{u},-\bar{v})\Vert _G\leqslant \Vert G^{-1/2}\bar{u}\Vert _F+\Vert G^{1/2}\bar{v}\Vert _F. \end{aligned}$$

Since \(\tilde{x}\) and \(\tilde{s}\) operator commute, \(G=L^{-1}_{\tilde{s}}L_{\tilde{x}}\) and \(Q_{\tilde{x}}\) commute, and we have

$$\begin{aligned} \Vert G^{1/2}\bar{v}\Vert ^2_F=\langle \bar{v},G\bar{v}\rangle =\langle Q^{1/2}_{\tilde{x}}\bar{v},Q^{-1}_{\tilde{x}}GQ^{1/2}_{\tilde{x}}\bar{v}\rangle \leqslant \lambda _{\max }(Q^{-1}_{\tilde{x}}G)\Vert Q^{1/2}_{\tilde{x}}\bar{v}\Vert ^2_F. \end{aligned}$$

Then, by using [18, Lemma 4.1] we have

$$\begin{aligned} \Vert G^{1/2}\bar{v}\Vert ^2_F\leqslant \frac{\langle \tilde{x},\bar{v}\rangle ^2}{\lambda _{\min }(\tilde{w})} =\frac{\langle \tilde{x},\tilde{v}-\tilde{s}\rangle ^2}{\lambda _{\min }(\tilde{w})} \leqslant \frac{\langle \tilde{x},\tilde{v}-\tilde{s}\rangle ^2}{(1-\beta )\tau \mu } =\frac{\langle {x},{v}-{s}\rangle ^2}{(1-\beta )\tau \mu }. \end{aligned}$$

Similarly it can be shown that \(\Vert G^{-1/2}\bar{u}\Vert ^2_F\leqslant \frac{\langle {s},{u}-{x}\rangle ^2}{(1-\beta )\tau \mu }.\) Therefore,

$$\begin{aligned} \xi \leqslant \frac{\langle x,s-v\rangle +\langle x-u,s\rangle }{\sqrt{(1-\beta )\tau \mu }}. \end{aligned}$$
(4.9)

For \(x^*\in \mathcal {P}^*\) and \((y^*,s^*)\in \mathcal {D}^*\), we have \(A(x^*-u)=0\) and \(A^*(y^*-r)+(s^*-v)=0\) by part (1) of Lemma 3.4. Hence,

$$\begin{aligned} 0=&\langle x^*-u,s^*-v\rangle =\langle x^*-x+x-u,s^*-s+s-v\rangle \\ =&\langle x^*,s^*\rangle +\langle x,s\rangle +\langle x^*,s-v\rangle +\langle x-u,s^*\rangle +\langle x-u,s-v\rangle \\&-\langle x^*,s\rangle -\langle x,s^*\rangle -\langle x,s-v\rangle -\langle x-u,s\rangle . \end{aligned}$$

It follows that

$$\begin{aligned} \langle x,s-v\rangle +\langle x-u,s\rangle&=\langle x,s\rangle +\langle x^*,s-v\rangle +\langle x-u,s^*\rangle +\langle x-u,s-v\rangle \nonumber \\&\quad -\langle x^*,s\rangle -\langle x,s^*\rangle \nonumber \\&\leqslant \langle x,s\rangle +\langle x^*,s-v\rangle +\langle x-u,s^*\rangle +\langle x-u,s-v\rangle \nonumber \\&=\langle x,s\rangle +\nu \langle x^*,s^0-v^0\rangle +\nu \langle x^0-u^0,s^*\rangle +\nu ^2\langle x^0-u^0,s^0-v^0\rangle \nonumber \\&\leqslant \left( 1+\frac{\langle x^*,s^0-v^0\rangle +\langle x^0-u^0,s^*\rangle +\langle x^0-u^0,s^0-v^0\rangle }{\langle x^0,s^0\rangle }\right) \langle x,s\rangle , \end{aligned}$$
(4.10)

where the third equation follows from part (2) of Lemma 3.4, and the last inequality follows from (3.10), (3.12), and \(0<\nu <1\). For the initial point chosen as in Sect. 3, it holds that \(\langle x^0,s^0\rangle =r(\rho ^0)^2\) and \(\max \{\Vert x^*\Vert _2, \Vert s^*\Vert _2\}\leqslant \rho ^*\). Using the Cauchy–Schwarz inequality and the fact \(\Vert \cdot \Vert _F\leqslant \sqrt{r}\Vert \cdot \Vert _2\), we have \(\langle p,q\rangle \leqslant \Vert p\Vert _F\Vert q\Vert _F\leqslant r\Vert p\Vert _2\Vert q\Vert _2\). Therefore,

$$\begin{aligned}&\quad \frac{\langle x^*,s^0-v^0\rangle +\langle x^0-u^0,s^*\rangle +\langle x^0-u^0,s^0-v^0\rangle }{\langle x^0,s^0\rangle }\nonumber \\&\leqslant \frac{2r\rho ^*\rho ^0+2r\rho ^*\rho ^0+4r(\rho ^0)^2}{r(\rho ^0)^2}\nonumber \\&=4+4\rho ^*/\rho ^0\nonumber \\&\leqslant 4+4\varPsi . \end{aligned}$$
(4.11)

By substituting (4.11) and (4.10) into (4.9), we obtain the required result.

Lemma 4.5

Let \(G = L^{-1}_{\tilde{s}}L_{\tilde{x}}\). Then \(\Vert (\Delta \tilde{x}_-,\Delta \tilde{s}_-)\Vert ^2_G\leqslant \omega ^2 r^2\mu ,\) where

$$\begin{aligned} \omega =\left( 1+(1+\sqrt{2})(5+4\varPsi )\right) \Big /\sqrt{(1-\beta )\tau }\geqslant 13. \end{aligned}$$

This follows directly by combining Lemma 4.3 with Lemma 4.4, since \(\sqrt{r\mu }+(1+\sqrt{2})\xi \leqslant \big (1+(1+\sqrt{2})(5+4\varPsi )\big )r\sqrt{\mu }\big /\sqrt{(1-\beta )\tau }=\omega r\sqrt{\mu }\).

By Lemmas 4.2 and 4.5, we have the following corollary.

Corollary 4.6

Let \(\beta \leqslant 1/2\), then

(1)

    \(|\langle \Delta \tilde{x}_-, \Delta \tilde{s}_-\rangle |\leqslant \omega ^2 r^2\mu /2\);

(2)

    \(|\langle \Delta \tilde{x}_-, \Delta \tilde{s}_+\rangle + \langle \Delta \tilde{s}_-, \Delta \tilde{x}_+\rangle |\leqslant 2\sqrt{\beta \tau }\omega r\mu \).

Proof

By Lemma 4.5, we have

$$\begin{aligned} |\langle \Delta \tilde{x}_-, \Delta \tilde{s}_-\rangle |= & {} |\langle G^{-1/2}\Delta \tilde{x}_-, G^{1/2}\Delta \tilde{s}_-\rangle |\\\leqslant & {} \Vert G^{-1/2}\Delta \tilde{x}_-\Vert _F\Vert G^{1/2}\Delta \tilde{s}_-\Vert _F\\\leqslant & {} \frac{1}{2}\big (\Vert G^{-1/2}\Delta \tilde{x}_-\Vert _F^2+\Vert G^{1/2}\Delta \tilde{s}_-\Vert _F^2\big )\\\leqslant & {} \frac{1}{2}\omega ^2 r^2\mu . \end{aligned}$$

To show (2), we note that from the definition of \(\Vert \cdot \Vert _G\),

$$\begin{aligned} |\langle \Delta \tilde{x}_-, \Delta \tilde{s}_+\rangle |= & {} |\langle G^{-1/2}\Delta \tilde{x}_-, G^{1/2}\Delta \tilde{s}_+\rangle | \nonumber \\\leqslant & {} \Vert G^{-1/2}\Delta \tilde{x}_-\Vert _F\Vert G^{1/2}\Delta \tilde{s}_+\Vert _F \nonumber \\\leqslant & {} \Vert (\Delta \tilde{x}_-, \Delta \tilde{s}_-)\Vert _G\Vert (\Delta \tilde{x}_+, \Delta \tilde{s}_+)\Vert _G\nonumber \\\leqslant & {} \omega r\sqrt{\mu }\sqrt{\beta \tau \mu } \nonumber \\= & {} \sqrt{\beta \tau }\omega r\mu . \end{aligned}$$
(4.12)

Similarly, we have

$$\begin{aligned} |\langle \Delta \tilde{s}_-, \Delta \tilde{x}_+\rangle |\leqslant \sqrt{\beta \tau }\omega r\mu . \end{aligned}$$

Hence the required result follows.

Corollary 4.7

Let \(G = L^{-1}_{\tilde{s}}L_{\tilde{x}}\). If \(\beta \leqslant 1/2\), then

(1)

    \(\Vert \Delta \tilde{x}_+\Vert _F\Vert \Delta \tilde{s}_+\Vert _F\leqslant \sqrt{\text{ cond }\,(G)}\beta \tau \mu /2;\)

(2)

    \(\Vert \Delta \tilde{x}_-\Vert _F\Vert \Delta \tilde{s}_-\Vert _F\leqslant \sqrt{\text{ cond }\,(G)}\omega ^2 r^2\mu /2;\)

(3)

    \(\Vert \Delta \tilde{x}_-\Vert _F\Vert \Delta \tilde{s}_+\Vert _F + \Vert \Delta \tilde{s}_-\Vert _F\Vert \Delta \tilde{x}_+\Vert _F\leqslant 2\sqrt{\text{ cond }\,(G)}\sqrt{\beta \tau }\omega r\mu ;\)

where \(\text{ cond }(G)=\lambda _{\max }\,(G)/\lambda _{\min }(G)\).

Proof

By Lemma 4.1, (1) and (2) follow from Lemma 4.2 and Lemma 4.5, respectively. From the first inequality in Lemma 4.1 and (4.12), we have

$$\begin{aligned} \Vert \Delta \tilde{x}_-\Vert _F\Vert \Delta \tilde{s}_+\Vert _F\leqslant & {} \sqrt{\mathrm{cond}\,(G)}\Vert G^{-1/2}\Delta \tilde{x}_-\Vert _F\Vert G^{1/2}\Delta \tilde{s}_+\Vert _F\\\leqslant & {} \sqrt{\mathrm{cond}\,(G)}\sqrt{\beta \tau }\omega r\mu . \end{aligned}$$

Similarly,

$$\begin{aligned} \Vert \Delta \tilde{s}_-\Vert _F\Vert \Delta \tilde{x}_+\Vert _F\leqslant \sqrt{\text{ cond }\,(G)}\sqrt{\beta \tau }\omega r\mu . \end{aligned}$$

This proves the corollary.

Lemma 4.8

If \((\tilde{x},y,\tilde{s})\in \mathcal {N}_1(\tau ,\beta )\) and \(\mu (\alpha )\leqslant \mu \), then

$$\begin{aligned} \Vert (\tau \mu (\alpha )e-\chi (\alpha ))^+\Vert _1\leqslant (1-\alpha _2)\beta \tau \mu (\alpha ). \end{aligned}$$

Proof

Let \(\tilde{x}\circ \tilde{s}=\lambda _1c_1+\cdots +\lambda _rc_r,\) where \(\{c_1,\cdots ,c_r\}\) is a Jordan frame and the eigenvalues are ordered so that

$$\begin{aligned} \tau \mu -\lambda _1\leqslant \tau \mu -\lambda _2\leqslant \cdots \leqslant \tau \mu -\lambda _{k}\leqslant 0\leqslant \tau \mu -\lambda _{k+1}\leqslant \cdots \leqslant \tau \mu -\lambda _r. \end{aligned}$$

Then, we have \((\tau \mu e-\tilde{x}\circ \tilde{s})^+=(\tau \mu -\lambda _{k+1})c_{k+1}+\cdots +(\tau \mu -\lambda _r)c_r,\) and

$$\begin{aligned} \chi (\alpha )= & {} \sum ^r_{i=1}\lambda _ic_i+\alpha _1\sum ^k_{i=1}(\tau \mu -\lambda _i)c_i+\alpha _2\sum ^r_{i=k+1}(\tau \mu -\lambda _i)c_i\nonumber \\= & {} \sum ^k_{i=1}((1-\alpha _1)\lambda _i+\alpha _1\tau \mu )c_i+\sum ^r_{i=k+1}((1-\alpha _2)\lambda _i+\alpha _2\tau \mu )c_i, \end{aligned}$$
(4.13)

which implies \(\lambda _i(\chi (\alpha ))\geqslant 0,i=1,\cdots ,r\). Using (4.13), we have

$$\begin{aligned} \Vert (\tau \mu (\alpha )e-\chi (\alpha ))^+\Vert _1=&\sum \left[ \tau \mu (\alpha )-\lambda _i(\chi (\alpha ))\right] ^+\\ \leqslant&\sum \left[ \tau \mu (\alpha )-\frac{\mu (\alpha )}{\mu }\lambda _i(\chi (\alpha ))\right] ^+\\ =\,&\frac{\mu (\alpha )}{\mu }\sum \left[ \tau \mu -\lambda _i(\chi (\alpha ))\right] ^+\\ =\,&(1-\alpha _2)\frac{\mu (\alpha )}{\mu }\sum ^r_{i=k+1}[\tau \mu -\lambda _i]^+\\ \leqslant&(1-\alpha _2)\frac{\mu (\alpha )}{\mu }\beta \tau \mu \\ =\,&(1-\alpha _2)\beta \tau \mu (\alpha ), \end{aligned}$$

where the first inequality holds due to \(0<\mu (\alpha )\leqslant \mu \) and \(\chi (\alpha )\succeq 0\), and the last inequality follows from (4.7).

Lemma 4.9

Let \(\tau \leqslant 1/4, \beta \leqslant 1/2\), and \((\tilde{x},y,\tilde{s})\in \mathcal {N}_1(\tau ,\beta )\). If \(\alpha _1=\alpha _2\sqrt{\beta \tau }/r\) and \(\alpha _2\leqslant \sqrt{\tau }/(\sqrt{\mathrm{cond}\,(G)}\omega ^2)\), then we have \(\Vert \Delta \tilde{x}(\alpha )\circ \Delta \tilde{s}(\alpha )\Vert _1\leqslant \alpha _2\beta \tau \mu (\alpha ).\)

Proof

By (4.4), Lemma 2.3 and Corollary 4.7, we have

$$\begin{aligned} \Vert \Delta \tilde{x}(\alpha )\circ \Delta \tilde{s}(\alpha )\Vert _1 \leqslant \,&\alpha _1^2\Vert \Delta {\tilde{x}_-}\circ \Delta {\tilde{s}_-}\Vert _1 +\,\alpha _1\alpha _2(\Vert \Delta {\tilde{x}_-}\circ \Delta {\tilde{s}_+}\Vert _1\\&+\,\Vert \Delta {\tilde{s}_-}\circ \Delta {\tilde{x}_+}\Vert _1) +\alpha _2^2\Vert \Delta {\tilde{x}_+}\circ \Delta {\tilde{s}_+}\Vert _1\\ \leqslant \,&\alpha _1^2\Vert \Delta {\tilde{x}_-}\Vert _F\Vert \Delta {\tilde{s}_-}\Vert _F +\alpha _1\alpha _2(\Vert \Delta {\tilde{x}_-}\Vert _F\Vert \Delta {\tilde{s}_+}\Vert _F\\&+\Vert \Delta {\tilde{s}_-}\Vert _F\Vert \Delta {\tilde{x}_+}\Vert _F) +\,\alpha _2^2\Vert \Delta {\tilde{x}_+}\Vert _F\Vert \Delta {\tilde{s}_+}\Vert _F\\ \leqslant \,&\sqrt{\mathrm{cond}\,(G)}\mu (\alpha _1^2\omega ^2 r^2/2 +\,2\alpha _1\alpha _2\sqrt{\beta \tau }\omega r +\,\alpha _2^2\beta \tau /2)\\ =\,&\sqrt{\mathrm{cond}\,(G)}\mu (\alpha _2^2\beta \tau \omega ^2/2 +\,2\alpha _2^2\beta \tau \omega +\,\alpha _2^2\beta \tau /2)\\ =\,&\alpha _2^2\sqrt{\mathrm{cond}\,(G)}\omega ^2\beta \tau \mu (1/2+2/\omega +1/2\omega ^2)\\ \leqslant \,&5\alpha _2\beta \tau \mu /13. \end{aligned}$$

Here, the last inequality follows from \(\alpha _2\leqslant \sqrt{\tau }/(\sqrt{\mathrm{cond}\,(G)}\omega ^2), \tau \leqslant 1/4\), and \(\omega \geqslant 13\).

On the other hand, by using \(\alpha _1\leqslant \alpha _2\), we have

$$\begin{aligned} \mathrm{tr\,}(\chi (\alpha ))&=\mathrm{tr\,}(\tilde{x}\circ \tilde{s})+\alpha _1\mathrm{tr\,}(\tau \mu e-\tilde{x}\circ \tilde{s})+(\alpha _2-\alpha _1)\mathrm{tr\,}((\tau \mu e-\tilde{x}\circ \tilde{s})^+)\nonumber \\&\geqslant r\mu -\alpha _1(1-\tau )r\mu \nonumber \\&\geqslant r\mu -\alpha _1r\mu . \end{aligned}$$
(4.14)

Then, using Corollary 4.6, we have

$$\begin{aligned} \langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle&=\mathrm{tr\,}(\chi (\alpha ))+\alpha _1^2\langle \Delta {\tilde{x}_-},\Delta {\tilde{s}_-}\rangle +\alpha _1\alpha _2(\langle \Delta {\tilde{x}_-},\Delta {\tilde{s}_+}\rangle +\langle \Delta {\tilde{s}_-},\Delta {\tilde{x}_+}\rangle )\\&\geqslant r\mu -\alpha _1r\mu -\alpha _1^2\omega ^2 r^2\mu /2-2\alpha _1\alpha _2\sqrt{\beta \tau }\omega r\mu \\&=r\mu -\alpha _2\sqrt{\beta \tau }\mu -\alpha _2^2\beta \tau \omega ^2\mu /2-2\alpha _2^2\beta \tau \omega \mu \\&\geqslant r\mu -\sqrt{\beta }\tau \mu /\omega ^2-\beta \tau ^2\mu /2\omega ^2-2\beta \tau ^2\mu /\omega ^3\\&\geqslant r\mu -\mu /13. \end{aligned}$$

Here, the second inequality follows from \(\alpha _2\leqslant \sqrt{\tau }/(\sqrt{\mathrm{cond}\,(G)}\omega ^2)\), the last inequality follows from the fact that \(\tau \leqslant 1/4, \beta \leqslant 1/2\), and \(\omega \geqslant 13\).

A straightforward calculation then shows that

$$\begin{aligned}&\Vert \Delta \tilde{x}(\alpha )\circ \Delta \tilde{s}(\alpha )\Vert _1-\alpha _2\beta \tau \mu (\alpha )\\ \leqslant&\,5\alpha _2\beta \tau \mu /13-\alpha _2\beta \tau \mu (1-1/(13r))\\ =&\,\alpha _2\beta \tau \mu (5/13-1+1/(13r))\\ \leqslant&\,0. \end{aligned}$$

Lemma 4.10

Let \(\tau \leqslant 1/4, \beta \leqslant 1/2\) and \((\tilde{x},y,\tilde{s})\in \mathcal {N}_1(\tau ,\beta )\). If \(\alpha _1=\alpha _2\sqrt{\beta \tau }/r\) and \(\alpha _2\leqslant \sqrt{\tau }/\omega ^2\), then we have

$$\begin{aligned} \mu (\alpha )\leqslant \left( 1-\frac{3\alpha _1}{16}\right) \mu . \end{aligned}$$

Proof

By using (4.6), (4.7), and Corollary 4.6, we have

$$\begin{aligned} \langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle =\,&\mathrm{tr\,}(\chi (\alpha ))+\alpha _1^2\langle \Delta {\tilde{x}_-},\Delta {\tilde{s}_-}\rangle +\,\alpha _1\alpha _2(\langle \Delta {\tilde{x}_-},\Delta {\tilde{s}_+}\rangle +\langle \Delta {\tilde{s}_-},\Delta {\tilde{x}_+}\rangle )\nonumber \\ \leqslant \,&r\mu -\alpha _1(1-\tau )r\mu +\alpha _2\beta \tau \mu +\,\alpha _1^2\omega ^2 r^2\mu /2+2\alpha _1\alpha _2\sqrt{\beta \tau }\omega r\mu \nonumber \\ =\,&r\mu -\alpha _2(1-\tau )\sqrt{\beta \tau }\mu +\alpha _2\beta \tau \mu +\alpha ^2_2\omega ^2\beta \tau \mu /2+2\alpha ^2_2\omega \beta \tau \mu \nonumber \\ \leqslant \,&r\mu -\alpha _2(1-\tau )\sqrt{\beta \tau }\mu +\alpha _2\beta \tau \mu +\alpha _2\beta \tau ^{3/2}\mu /2+2\alpha _2\beta \tau ^{3/2}\mu /\omega \nonumber \\ =\,&\left[ 1-\alpha _2\sqrt{\beta \tau }/r\left( 1-\tau -\sqrt{\beta \tau }-\sqrt{\beta }\tau /2-2\sqrt{\beta }\tau /\omega \right) \right] r\mu \nonumber \\ \leqslant \,&\left( 1-3\alpha _2\sqrt{\beta \tau }/(16r)\right) r\mu \nonumber \\ =\,&\left( 1-3\alpha _1/16\right) r\mu , \end{aligned}$$

where the last inequality follows from the fact \(\tau \leqslant 1/4,\beta \leqslant 1/2\), and \(\omega \geqslant 13\). Then, by using \(\mu (\alpha )=\langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle /r\), we obtain the required result.

The following lemma gives a sufficient condition which guarantees that all the iterates remain in the neighborhood \(\mathcal {N}_1(\tau ,\beta )\).

Lemma 4.11

Let \(\tau \leqslant 1/4, \beta \leqslant 1/2\) and \((\tilde{x},y,\tilde{s})\in \mathcal {N}_1(\tau ,\beta )\). If \(\alpha _1=\alpha _2\sqrt{\beta \tau }/r\) and \(\alpha _2\leqslant \sqrt{\tau }/(\sqrt{\mathrm{cond}\,(G)}\omega ^2)\), then \((\tilde{x}(\alpha ),y(\alpha ),\tilde{s}(\alpha ))\in \mathcal {N}_1(\tau ,\beta ).\)

Proof

By Lemma 4.10, it holds that \(\mu (\alpha )\leqslant \mu \). Furthermore, by using Lemma 4.8, and Lemma 4.9, we have

$$\begin{aligned}&\quad \,\,\,\Vert (\tau \mu (\alpha )e-\tilde{x}(\alpha )\circ \tilde{s}(\alpha ))^+\Vert _1\\&=\Vert (\tau \mu (\alpha )e-\chi (\alpha )-\Delta \tilde{x}(\alpha )\circ \Delta \tilde{s}(\alpha ))^+\Vert _1\\&\leqslant \Vert (\tau \mu (\alpha )e-\chi (\alpha ))^+\Vert _1+\Vert (-\Delta \tilde{x}(\alpha )\circ \Delta \tilde{s}(\alpha ))^+\Vert _1\\&\leqslant \Vert (\tau \mu (\alpha )e-\chi (\alpha ))^+\Vert _1+\Vert \Delta \tilde{x}(\alpha )\circ \Delta \tilde{s}(\alpha )\Vert _1\\&\leqslant (1-\alpha _2)\beta \tau \mu (\alpha )+\alpha _2\beta \tau \mu (\alpha )\\&=\beta \tau \mu (\alpha ), \end{aligned}$$

where the first inequality follows from [24, Lemma 3.1]. From (3.3), we have \(\lambda _{\min }(\tilde{x}(\alpha )\circ \tilde{s}(\alpha ))\geqslant (1-\beta )\tau \mu (\alpha )>0.\) Thus, by Lemma 2.2, we have \({\det }(\tilde{x}(\alpha ))\ne 0\) and \({\det }(\tilde{s}(\alpha ))\ne 0\). Then, since \(\tilde{x},\tilde{s}\succ 0\), it follows by continuity that \(\tilde{x}(\alpha )\succ 0\) and \(\tilde{s}(\alpha )\succ 0\). Moreover, by [24, Theorem 3.3], we have

$$\begin{aligned} \Vert (\tau \mu (\alpha )e-\tilde{w}(\alpha ))^+\Vert _1\leqslant \Vert (\tau \mu (\alpha )e-\tilde{x}(\alpha )\circ \tilde{s}(\alpha ))^+\Vert _1\leqslant \beta \tau \mu (\alpha ), \end{aligned}$$

where \(\tilde{w}(\alpha )=Q_{\tilde{x}(\alpha )^{1/2}}\tilde{s}(\alpha )\). Consequently, we have \((\tilde{x}(\alpha ),y(\alpha ),\tilde{s}(\alpha ))\in \mathcal {N}_1(\tau ,\beta ).\)

For the feasibility condition (4.2) we have the following lemma:

Lemma 4.12

Let \((\tilde{x},y,\tilde{s})\in \mathcal {N}_1(\tau ,\beta )\). If \(\alpha _1=\alpha _2\sqrt{\beta \tau }/r\) and \(\alpha _2\leqslant \sqrt{\tau }/\omega ^2\), then \(\langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle \geqslant (1-\alpha _1)\nu \langle \tilde{x}^0,\tilde{s}^0\rangle .\)

Proof

By using (4.14) and Corollary 4.6, we have

$$\begin{aligned} \langle \tilde{x}(\alpha ),\tilde{s}(\alpha )\rangle&=\mathrm{tr\,}(\chi (\alpha ))+\alpha _1^2\langle \Delta {\tilde{x}_-},\Delta {\tilde{s}_-}\rangle +\alpha _1\alpha _2(\langle \Delta {\tilde{x}_-},\Delta {\tilde{s}_+}\rangle +\langle \Delta {\tilde{s}_-},\Delta {\tilde{x}_+}\rangle )\\&\geqslant r\mu -\alpha _1(1-\tau )r\mu -\alpha _1^2\omega ^2 r^2\mu /2-2\alpha _1\alpha _2\sqrt{\beta \tau }\omega r\mu \\&=(1-\alpha _1)r\mu +\alpha _1r\mu \left( \tau -\alpha _2\sqrt{\beta \tau }\omega ^2/2-2\alpha _2\sqrt{\beta \tau }\omega \right) \\&\geqslant (1-\alpha _1)r\mu +\alpha _1\tau r\mu \left( 1-\sqrt{\beta }/2-2\sqrt{\beta }/\omega \right) \\&\geqslant (1-\alpha _1)\nu \langle \tilde{x}^0,\tilde{s}^0\rangle , \end{aligned}$$

where in the last inequality we used (3.10) and \(\omega \geqslant 13\).

4.2 Polynomial Convergence

In view of Lemmas 4.10, 4.11, and 4.12, we may choose the step size in the following way. First, set \(\alpha _2=\sqrt{\tau }/(\sqrt{\mathrm{cond}\,(G)}\omega ^2)\). Second, find the greatest \(\alpha _1\in [0,1]\) such that conditions (4.1), (4.2), and (4.3) hold. Lemmas 4.10, 4.11, and 4.12 guarantee that \(\alpha _1\geqslant \alpha _2\sqrt{\beta \tau }/r\). Therefore, if we let \(\tau \leqslant 1/4\), \(\beta \leqslant 1/2\), \(\Delta \leqslant 3/16\), and replace Step 4 in Algorithm 3.2 by the following Step 4', then the algorithm is well defined.

Step 4’ Set \(\alpha ^k_2=\sqrt{\tau }/(\sqrt{\mathrm{cond}\,(G)}\omega ^2)\) and find the largest \(\alpha ^k_1\) on the closed interval \([\alpha ^k_2\sqrt{\beta \tau }/r,1]\), such that (4.1), (4.2), and (4.3) hold. Let

$$\begin{aligned} \big (\tilde{x}^{k+1},y^{k+1},\tilde{s}^{k+1}\big ) :=\big (\tilde{x}^k,y^k,\tilde{s}^k\big )+\alpha ^k_1\big (\Delta \tilde{x}^k_-,\Delta y^k_-,\Delta \tilde{s}^k_-\big )+\alpha ^k_2\big (\Delta \tilde{x}^k_+,\Delta y^k_+,\Delta \tilde{s}^k_+\big ). \end{aligned}$$
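Reusing the LP-case sketches from Sect. 3, one schematic rendering of an iteration with Step 4' is as follows; a simple backtracking search stands in for the exact maximization of \(\alpha ^k_1\), the infeasibility condition (4.2) is omitted for brevity, and \(\omega \) is replaced by its lower bound 13 purely for illustration.

```python
import numpy as np

def ipm_step(A, b, c, x, y, s, tau=0.25, beta=0.5, Delta=3/16):
    """One iteration of Algorithm 3.2 with Step 4' (LP sketch)."""
    r = len(x)
    d = x / s
    kappa = np.sqrt(d.max() / d.min())   # sqrt(cond(G)), G = diag(x/s)
    omega = 13.0                         # lower bound on the constant of Lemma 4.5
    a2 = np.sqrt(tau) / (kappa * omega**2)
    a1_min = a2 * np.sqrt(beta * tau) / r
    (dxm, dym, dsm), (dxp, dyp, dsp) = decomposed_directions(A, b, c, x, y, s, tau)

    def trial(a1):
        return (x + a1*dxm + a2*dxp, y + a1*dym + a2*dyp, s + a1*dsm + a2*dsp)

    a1 = 1.0
    while a1 > a1_min:                   # backtrack toward the guaranteed step
        xn, yn, sn = trial(a1)
        if in_N1(xn, sn, tau, beta) and xn @ sn <= (1 - Delta*a1) * (x @ s):
            return xn, yn, sn
        a1 *= 0.9
    return trial(a1_min)                 # guaranteed by Lemmas 4.10-4.12
```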

The following theorem gives an upper bound on the number of iterations after which Algorithm 3.2 stops with an \(\varepsilon \)-approximate solution.

Theorem 4.13

Suppose that \(\sqrt{\mathrm{cond}\,(G)}\leqslant \kappa <\infty \) for all iterations. Then Algorithm 3.2 terminates in at most \(\mathcal {O}(\kappa r\log \varepsilon ^{-1})\) iterations.

Proof

At each iteration, by using Lemma 4.10 we have

$$\begin{aligned} \mu (\alpha )\leqslant \left( 1-\frac{3\alpha _1}{16}\right) \mu \leqslant \left( 1-\frac{3\alpha _2\sqrt{\beta \tau }}{16r}\right) \mu \leqslant \left( 1-\frac{3\sqrt{\beta }\tau }{16\omega ^2\kappa r}\right) \mu , \end{aligned}$$

from which the statement of the theorem follows: since \(\mu _{k}\leqslant (1-\delta )^k\mu _0\) with \(\delta =3\sqrt{\beta }\tau /(16\omega ^2\kappa r)\), we obtain \(\mu _k\leqslant \varepsilon \mu _0\) after at most \(\big \lceil \delta ^{-1}\log \varepsilon ^{-1}\big \rceil =\mathcal {O}(\kappa r\log \varepsilon ^{-1})\) iterations.

By (3.3) and [8, Lemma 36], we have \(\mathrm{cond}\,(G)=1\) for the NT direction and \(\mathrm{cond}\,(G)\leqslant r/((1-\beta )\tau )\) for the xs and sx directions. Therefore, we obtain the following complexity bounds for Algorithm 3.2.

Theorem 4.14

If the NT search direction is used, the iteration complexity of Algorithm 3.2 is \(\mathcal {O}(r\log \varepsilon ^{-1})\). If the xs or sx search direction is used, the iteration complexity of Algorithm 3.2 is \(\mathcal {O}(r^{1.5}\log \varepsilon ^{-1})\).

5 Conclusions

We have established the complexity bounds of an infeasible-interior-point algorithm based on a new wide neighborhood for SCP. We summarize the obtained complexity results in Table 1, where r is the rank of the associated Euclidean Jordan algebra and \(\varepsilon >0\) is the required precision. For comparison, we also include the complexity bounds of the infeasible IPMs in [18] and [22]. To our knowledge, when the NT search direction is used, the complexity bounds obtained here are so far the best available for infeasible IPMs using a wide neighborhood.

Table 1 Summary of complexity bounds