Abstract
This paper discusses the second-order cone tensor eigenvalue complementarity problem. We reformulate this problem as two constrained polynomial optimization problems. For these two reformulations, Lasserre-type semidefinite relaxation methods are proposed to compute all second-order cone tensor complementarity eigenpairs. The proposed algorithms terminate when there are finitely many second-order cone complementarity eigenvalues. Numerical examples are reported to show the efficiency of the proposed algorithms.
1 Introduction
For positive integers m and \(n_1,n_2,\ldots , n_m\), an mth order and \((n_1,n_2,\ldots , n_m)\)-dimensional real tensor can be viewed as an array in the space \({\mathbb {R}}^{n_1\times n_2\times \cdots \times n_m}\). Such a tensor \({\mathcal {A}}\) can be indexed as
$$\begin{aligned} {\mathcal {A}}=({\mathcal {A}}_{i_1 i_2\ldots i_m}),\quad 1\le i_j\le n_j,\ j\in [m]. \end{aligned}$$
When \(n_1=\cdots =n_m=n\), \({\mathcal {A}}\) is called an mth order n-dimensional square tensor. In this case, the tensor space \({\mathbb {R}}^{n_1\times \cdots \times n_m}\) is denoted as \({\mathtt {T}}^m({\mathbb {R}}^n)\). A tensor in \({\mathtt {T}}^m({\mathbb {R}}^n)\) is said to be symmetric if its entries are invariant under permutations of indices. That is, \({\mathcal {A}}_{i_1\ldots i_m} = {\mathcal {A}}_{j_1\ldots j_m}\) for any permutation \((j_1,\ldots , j_m)\) of the index \((i_1,\ldots , i_m)\). For \({\mathcal {A}}\in {\mathtt {T}}^m({\mathbb {R}}^n)\) and \(x:=(x_1,\ldots , x_n)^T\in \mathbb {R}^n\), we use the notation
$$\begin{aligned} {\mathcal {A}} x^{m-1}:=\left( \sum _{i_2,\ldots ,i_m=1}^{n} {\mathcal {A}}_{i\, i_2\ldots i_m}\, x_{i_2}\cdots x_{i_m}\right) _{i=1}^{n}. \end{aligned}$$
Clearly, \({\mathcal {A}} x^{m-1}\in \mathbb {R}^n\) is an n-dimensional vector.
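For readers who want to experiment, the product \({\mathcal {A}} x^{m-1}\) can be computed by contracting the tensor with x in every mode except the first. The following sketch is our own illustration (the helper name `tensor_apply` is ours, not from the paper):

```python
import numpy as np

def tensor_apply(A, x):
    """Return the vector A x^{m-1}: contract the mth order square
    tensor A with the vector x in every mode except the first."""
    out = np.asarray(A, dtype=float)
    for _ in range(out.ndim - 1):
        out = out @ x  # matmul with a 1-D x contracts the last axis
    return out

# Example with m = 3, n = 2: (A x^2)_i = sum_{j,k} A[i, j, k] x[j] x[k]
A = np.arange(8, dtype=float).reshape(2, 2, 2)
x = np.array([1.0, 2.0])
```

For \(m=2\) this reduces to the ordinary matrix-vector product.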
In this paper, we consider the second-order cone tensor eigenvalue complementarity problem (SOCTEiCP): for two given tensors \({\mathcal {A}}, {{\mathcal {B}}} \in {\mathtt {T}}^m (\mathbb {R}^{n+1} )\), the SOCTEiCP is to find a nonzero vector x and a real number \(\lambda \) such that
$$\begin{aligned} {\mathcal {K}}\ni x \perp \left( {\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1}\right) \in {\mathcal {K}}^*, \end{aligned}$$(1.1)
that is, \(x\in {\mathcal {K}}\), \({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1}\in {\mathcal {K}}^*\) and \(x^T({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0\),
where \({\mathcal {K}}\) is the second-order cone (also called the Lorentz cone or ice-cream cone in the literature) defined by \({\mathcal {K}}:=\{x=(z,t)\in \mathbb {R}^{n+1}: \Vert z\Vert _{2}\le t\}\), and \({\mathcal {K}}^*\) is the dual cone of \({\mathcal {K}}\), defined by \({\mathcal {K}}^*:=\{y \mid \ x^Ty\ge 0, \forall x\in {\mathcal {K}}\}\). For \((\lambda , x)\) satisfying (1.1), \(\lambda \) is called an SOC-complementarity eigenvalue and x an SOC-complementarity eigenvector of \(({\mathcal {A}},{\mathcal { B}})\). Furthermore, we call such \((\lambda ,x)\) an SOC-complementarity eigenpair of \(({\mathcal {A}},{\mathcal {B}})\).
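Since the second-order cone is self-dual (\({\mathcal {K}}^*={\mathcal {K}}\)), the three defining conditions of an SOC-complementarity eigenpair can be verified numerically. The sketch below is our own illustration; the function names are not from the paper:

```python
import numpy as np

def t_apply(T, x):
    """Contract a square tensor T with x in all modes but the first."""
    out = np.asarray(T, dtype=float)
    for _ in range(out.ndim - 1):
        out = out @ x
    return out

def in_soc(x, tol=1e-9):
    """x = (z, t) lies in the second-order cone iff ||z||_2 <= t."""
    return bool(np.linalg.norm(x[:-1]) <= x[-1] + tol)

def is_soc_eigenpair(A, B, lam, x, tol=1e-7):
    """Check the three conditions of (1.1), using K* = K."""
    w = t_apply(A, x) - lam * t_apply(B, x)
    return bool(in_soc(x, tol) and in_soc(w, tol) and abs(x @ w) <= tol)
```

For example, with \(m=2\), \({\mathcal {A}}={\mathcal {B}}=I\), every \(x\in {\mathcal {K}}\setminus \{0\}\) gives an eigenpair with \(\lambda =1\).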
For a general closed and convex cone, problem (1.1) is called the tensor generalized eigenvalue complementarity problem (TGEiCP), introduced and studied in [20]. When the cone is specified as the nonnegative orthant \(\mathbb {R}_{+}^{n}:=\{x\in \mathbb {R}^{n}{:} x \ge 0\}\), the problem is called the tensor eigenvalue complementarity problem (TEiCP) in [9, 20]. For this cone, the pair \((\lambda ,x)\) is called a Pareto-eigenpair of \(({\mathcal {A}},{\mathcal {B}})\). Theoretical and numerical results are presented in [2, 3, 8, 9, 20, 27, 28]. When the cone is a product of second-order cones, that is, \({\mathcal {K}}={\mathcal {K}}^{n_{1}}\times {\mathcal {K}}^{n_{2}} \times \cdots \times {\mathcal {K}}^{n_{r}}\) with each \({\mathcal {K}}^{n_i}\) a second-order cone, problem (1.1) is the second-order cone tensor eigenvalue complementarity problem discussed in [13]. Clearly, the problem considered here is a special case of the one considered in [13].
Numerical methods have been discussed for the second-order cone tensor eigenvalue complementarity problem. A scaling-and-projection algorithm (SPA) was proposed in [20] and applied to solve the SOCTEiCP in [13]; it converges to one SOC-complementarity eigenpair. In general, the extreme SOC-complementarity eigenvalue is the most important one among all SOC-complementarity eigenvalues. For example, the nonnegativity of the smallest SOC-complementarity eigenvalue of the matrix pair (A, I) asserts the SOC-copositivity of the symmetric matrix A, where I is the identity matrix and \(m=2\). However, it is hard to get the extreme SOC-complementarity eigenvalue by the SPA. On the other hand, existing algorithms cannot certify that there are no SOC-complementarity eigenpairs of \(({\mathcal {A}},{\mathcal {B}})\).
In recent years, SDP relaxation methods have become efficient numerical tools for solving tensor optimization problems. Specifically, all tensor eigenvalues can be computed by solving a finite hierarchy of semidefinite relaxations [4, 24] when there are finitely many of them. The tensor complementarity problem and the tensor eigenvalue complementarity problem can be solved efficiently by SDP relaxation methods [8, 9, 29]. Furthermore, the SDP relaxation methods provide numerical certificates when these problems have no solutions. Motivated by this, we consider how to apply SDP relaxation methods to get all SOC-complementarity eigenpairs when there are finitely many of them.
The main contribution of this paper is to compute all real SOC-complementarity eigenpairs when there are finitely many of them. For convenience, we denote by \(bd\,\mathbb {S}\) and \(int\,\mathbb {S}\) the sets of boundary and interior SOC-complementarity eigenpairs, respectively. Both kinds of eigenpairs can be reformulated as feasible solutions of polynomial optimization problems. Based on these reformulations, all SOC-complementarity eigenpairs can be computed by solving a finite hierarchy of semidefinite relaxations.
\(\mathbf{Contributions} \): In this paper, we study the second-order cone tensor eigenvalue complementarity problem. First, we show that the solution set of the SOCTEiCP is the union of the solution sets of two real polynomial systems. Based on this equivalence, an upper bound on the number of SOC-complementarity eigenvalues is established. Then we reformulate the SOCTEiCP as two polynomial optimization problems. By solving a hierarchy of Lasserre-type semidefinite programming (SDP) relaxations, all SOC-complementarity eigenpairs of \(({\mathcal {A}},{\mathcal {B}})\) can be computed when there are finitely many of them.
The paper is organized as follows. Section 2 presents preliminaries on polynomial optimization. Section 3 reformulates the SOCTEiCP as polynomial optimization problems. Section 4 proposes Lasserre-type semidefinite relaxations for computing all boundary SOC-complementarity eigenpairs when there are finitely many of them. Section 5 proposes Lasserre-type semidefinite relaxation methods for computing all interior SOC-complementarity eigenpairs when there are finitely many of them. In Sect. 6, we report numerical results to show the efficiency of the proposed algorithms.
2 Preliminaries
In this paper, \( [n] = \{1,2,\ldots , n\} \). We use lowercase letters, e.g. x, y, to denote vectors. Higher-order tensors are denoted as \( {\mathcal {A}},{\mathcal {B}},{\mathcal {C}},\ldots \). For any \(i\le n\), \(x_{[i]}:=(x_1,\dots , x_i)^T\in \mathbb {R}^i\). \((x_1,x_2)\in \mathbb {R}^{n}\times \mathbb {R}\) is simplified as \((x_1,x_2)\in \mathbb {R}^{n+1}\). For a given tensor \({\mathcal {A}}\in {\mathtt {T}}^m({\mathbb {R}}^n)\) and any \(i\in [n]\), \({\mathcal {A}}_{i}\in {\mathtt {T}}^{m-1}(\mathbb {R}^n)\) and \({\mathcal {A}}_{[i]}\in \mathbb {R}^{i\times n\times \cdots \times n}\) with their entries
$$\begin{aligned} ({\mathcal {A}}_{i})_{i_2\ldots i_m}:={\mathcal {A}}_{i\, i_2\ldots i_m},\qquad ({\mathcal {A}}_{[i]})_{j\, i_2\ldots i_m}:={\mathcal {A}}_{j\, i_2\ldots i_m}\ \ (j\in [i]). \end{aligned}$$
In the following, we review some basics in polynomial optimization. We refer to [17, 18] for surveys in polynomial optimization.
In the space \(\mathbb {R}^n\), the symbol \(\Vert \cdot \Vert _2\) denotes the standard Euclidean norm. Let \({\mathbb {R}}[x]\) be the ring of polynomials with real coefficients and in variables \(x:=(x_1,\dots , x_n)^T\), and let \({\mathbb {R}}[x]_d\) be the set of real polynomials in x whose degrees are at most d.
For a polynomial tuple \(h=(h_1,h_2,\ldots , h_s)\), the ideal generated by h is the set
$$\begin{aligned} I(h):= h_1\cdot {\mathbb {R}}[x]+h_2\cdot {\mathbb {R}}[x]+\cdots +h_s\cdot {\mathbb {R}}[x]. \end{aligned}$$
The kth truncation of I(h) is the set
$$\begin{aligned} I_{2k}(h):= h_1\cdot {\mathbb {R}}[x]_{2k-\deg (h_1)}+\cdots +h_s\cdot {\mathbb {R}}[x]_{2k-\deg (h_s)}. \end{aligned}$$
The complex and real algebraic varieties of h are respectively defined as
$$\begin{aligned} V_{\mathbb {C}}(h):=\{x\in \mathbb {C}^n\mid h_1(x)=\cdots =h_s(x)=0\},\qquad V_{\mathbb {R}}(h):=V_{\mathbb {C}}(h)\cap \mathbb {R}^n. \end{aligned}$$
A polynomial p is said to be a sum of squares (SOS) if there exist \(p_1,p_2,\ldots ,p_r\in {\mathbb {R}}[x]\) such that \(p=p_1^2+p_2^2+\cdots +p_r^2\). The set of all SOS polynomials is denoted as \(\Sigma [x]\). For a given degree m, denote
$$\begin{aligned} \Sigma [x]_m:=\Sigma [x]\cap {\mathbb {R}}[x]_m. \end{aligned}$$
The quadratic module generated by a polynomial tuple \(g=(g_1,\ldots , g_t)\) is the set
$$\begin{aligned} Q(g):=\Sigma [x]+g_1\cdot \Sigma [x]+\cdots +g_t\cdot \Sigma [x]. \end{aligned}$$
The kth truncation of the quadratic module Q(g) is the set
$$\begin{aligned} Q_k(g):=\Sigma [x]_{2k}+g_1\cdot \Sigma [x]_{2k-\deg (g_1)}+\cdots +g_t\cdot \Sigma [x]_{2k-\deg (g_t)}. \end{aligned}$$
Note that if \(g=\emptyset \) is an empty tuple, then \(Q(g) = \Sigma [x]\) and \(Q_k(g) = \Sigma [x]_{2k}\).
Let \({\mathbb {N}}\) be the set of nonnegative integers. We refer to [5, 11, 16, 17, 18, 19] for details about polynomial optimization. For \(x:=(x_1,\dots , x_n)^T\), \(\alpha : = (\alpha _1, \ldots , \alpha _n)\in {\mathbb {N}}^n\) and a degree d, denote
$$\begin{aligned} x^\alpha :=x_1^{\alpha _1}\cdots x_n^{\alpha _n},\quad |\alpha |:=\alpha _1+\cdots +\alpha _n,\quad {\mathbb {N}}_d^n:=\{\alpha \in {\mathbb {N}}^n\mid |\alpha |\le d\}. \end{aligned}$$
Denote by \(\mathbb {R}^{{\mathbb {N}}_d^n}\) the space of all real vectors y that are indexed by \(\alpha \in {\mathbb {N}}_d^n\). For \(y \in \mathbb {R}^{{\mathbb {N}}_d^n}\), we can write it as
$$\begin{aligned} y=(y_\alpha )_{\alpha \in {\mathbb {N}}_d^n}. \end{aligned}$$
For \(f = \sum _{ \alpha \in {\mathbb {N}}_d^n} f_\alpha x^\alpha \in \mathbb {R}[x]_d\) and \(y \in \mathbb {R}^{{\mathbb {N}}_d^n}\), we define the operation
$$\begin{aligned} \langle f, y\rangle :=\sum _{\alpha \in {\mathbb {N}}_d^n} f_\alpha \, y_\alpha . \end{aligned}$$
For an integer \(t \le d\) and \(y \in \mathbb {R}^{{\mathbb {N}}_d^n}\), denote the tth truncation of y as
$$\begin{aligned} y|_t:=(y_\alpha )_{\alpha \in {\mathbb {N}}_t^n}. \end{aligned}$$
Let \(q \in \mathbb {R}[x]\) with \({\text {deg}}(q) \le 2k\). For each \(y \in \mathbb {R}^{{\mathbb {N}}_{2k}^n}\), \(\langle q p^2, y \rangle \) is a quadratic form in vec(p), the coefficient vector of a polynomial p with \({\text {deg}}(qp^2) \le 2k\). Let \(L_q^{(k)}(y)\) be the symmetric matrix such that
$$\begin{aligned} \langle q p^2, y \rangle = \mathrm{vec}(p)^T\, L_q^{(k)}(y)\, \mathrm{vec}(p)\quad \text{ for } \text{ all } p \text{ with } \deg (qp^2)\le 2k. \end{aligned}$$
The matrix \(L_q^{(k)}(y)\) is called the kth localizing matrix of q generated by y. It is linear in y. For instance, when \(n=2\), \(k=2\) and \(q=5x_1+4x_1x_2-6x_2^2\),
$$\begin{aligned} L_q^{(2)}(y)=\begin{pmatrix} 5y_{10}+4y_{11}-6y_{02} &{} 5y_{20}+4y_{21}-6y_{12} &{} 5y_{11}+4y_{12}-6y_{03}\\ 5y_{20}+4y_{21}-6y_{12} &{} 5y_{30}+4y_{31}-6y_{22} &{} 5y_{21}+4y_{22}-6y_{13}\\ 5y_{11}+4y_{12}-6y_{03} &{} 5y_{21}+4y_{22}-6y_{13} &{} 5y_{12}+4y_{13}-6y_{04} \end{pmatrix}. \end{aligned}$$
If \(q= (q_1, \ldots , q_r)\) is a tuple of polynomials, we then define
When \(q=1\) (the constant 1 polynomial), \(L_1^{(k)}(y)\) is called the kth moment matrix generated by y, and we denote
$$\begin{aligned} M_k(y):=L_1^{(k)}(y). \end{aligned}$$
For instance, when \(n=2\) and \(k=2\),
$$\begin{aligned} M_2(y)=\begin{pmatrix} y_{00} &{} y_{10} &{} y_{01} &{} y_{20} &{} y_{11} &{} y_{02}\\ y_{10} &{} y_{20} &{} y_{11} &{} y_{30} &{} y_{21} &{} y_{12}\\ y_{01} &{} y_{11} &{} y_{02} &{} y_{21} &{} y_{12} &{} y_{03}\\ y_{20} &{} y_{30} &{} y_{21} &{} y_{40} &{} y_{31} &{} y_{22}\\ y_{11} &{} y_{21} &{} y_{12} &{} y_{31} &{} y_{22} &{} y_{13}\\ y_{02} &{} y_{12} &{} y_{03} &{} y_{22} &{} y_{13} &{} y_{04} \end{pmatrix}. \end{aligned}$$
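Moment and localizing matrices can be generated programmatically. The sketch below (our own illustration) builds them for the moment vector of a Dirac measure at a point u, i.e. \(y_\alpha = u^\alpha \); the moment matrix is then the rank-one matrix \([u]_k[u]_k^T\) and the localizing matrix of q is \(q(u)\,[u]_k[u]_k^T\). The monomial order used in the code may differ from the one displayed above:

```python
import itertools
import numpy as np

def monomials(n, d):
    """Exponents alpha in N^n with |alpha| <= d, ordered by degree."""
    return [a for t in range(d + 1)
            for a in itertools.product(range(t + 1), repeat=n) if sum(a) == t]

def dirac_loc_matrix(u, k, q=None):
    """Localizing matrix L_q^{(k)}(y) (moment matrix M_k(y) if q is None)
    for the moments y_alpha = u^alpha of the Dirac measure at u.
    q is a dict mapping exponent tuples to coefficients."""
    u = np.asarray(u, dtype=float)
    rows = monomials(len(u), k)
    y = lambda a: float(np.prod(u ** np.array(a)))
    qq = q if q is not None else {(0,) * len(u): 1.0}  # q = 1 gives M_k(y)
    L = np.empty((len(rows), len(rows)))
    for i, a in enumerate(rows):
        for j, b in enumerate(rows):
            L[i, j] = sum(c * y(tuple(g_ + a_ + b_ for g_, a_, b_ in zip(g, a, b)))
                          for g, c in qq.items())
    return L
```

Dirac moments are available at every degree, so the sketch does not worry about the row-degree bound that a general truncated y would impose.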
For a degree d, denote the monomial vector
$$\begin{aligned} [x]_d:=(1,\ x_1,\ldots ,x_n,\ x_1^2,\ x_1x_2,\ldots ,x_n^2,\ \ldots ,\ x_1^d,\ldots ,x_n^d)^T. \end{aligned}$$
3 Reformulation and properties of the SOCTEiCP
For given tensors \({\mathcal {A}},{\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\), \(x\in \mathbb {R}^{n+1}\) is an SOC-complementarity eigenvector of \(({\mathcal {A}},{\mathcal {B}})\) associated with the SOC-complementarity eigenvalue \(\lambda \) if and only if \(\alpha x\) is, for every \(\alpha >0\). Moreover, an SOC-complementarity eigenvector x satisfies \(x\ne 0\), and hence \(x_{n+1}>0\) from \(x\in {\mathcal {K}}\). Consequently, every SOC-complementarity eigenvalue admits an SOC-complementarity eigenvector of the form \((z,1)\in {\mathcal {K}}\) with \(z\in \mathbb {R}^{n}\). It therefore suffices to consider the set of SOC-complementarity eigenpairs of \(({\mathcal {A}},{\mathcal {B}})\) as
$$\begin{aligned} {\mathbb {S}}:=\left\{ (\lambda ,x)\mid (\lambda ,x)\ \text{ satisfies } \text{(1.1) },\ x=(z,1),\ z\in \mathbb {R}^{n}\right\} . \end{aligned}$$
Let
$$\begin{aligned} bd\,{\mathbb {S}}:=\{(\lambda ,x)\in {\mathbb {S}}\mid x\in bd\,{\mathcal {K}}\},\qquad int\,{\mathbb {S}}:=\{(\lambda ,x)\in {\mathbb {S}}\mid x\in int\,{\mathcal {K}}\}, \end{aligned}$$
where \( bd\,{\mathcal {K}}:=\{x=(z,1)\in \mathbb {R}^{n+1}\mid \, z^T z=1\}\) and \( int\,{\mathcal {K}}:=\{x=(z,1)\in \mathbb {R}^{n+1}\mid \, z^T z<1\}\). For convenience, we call \((\lambda ,x)\in bd\,{\mathbb {S}}\,\, (int {\mathbb {S}})\) the boundary (interior) SOC-complementarity eigenpair. The corresponding \(\lambda \) and x are called the boundary (interior) SOC-complementarity eigenvalue and eigenvector, respectively.
In order to compute all the SOC-complementarity eigenpairs by SDP relaxation methods, we reformulate the second-order cone tensor eigenvalue complementarity problem as polynomial optimization problems. Before proceeding, we recall some related definitions and properties, which are necessary for the reformulation.
We first recall the definition of the Jordan product.
Definition 3.1
[10] For any \(x=(x_1,x_2)\in \mathbb {R}^{n+1}\), \(y=(y_1,y_2) \in \mathbb {R}^{n+1}\) with \(x_2,y_2\in \mathbb {R}\), their Jordan product is defined as
$$\begin{aligned} x\circ y:=\left( x_2\, y_1+y_2\, x_1,\ x^T y\right) . \end{aligned}$$
Based on the Jordan product, we have the following result.
Lemma 3.2
[10, Proposition 2.1] For any \(x, ~y\in \mathbb {R}^{n+1}\), we have
$$\begin{aligned} x\in {\mathcal {K}},\ y\in {\mathcal {K}},\ x^Ty=0\ \Longleftrightarrow \ x\in {\mathcal {K}},\ y\in {\mathcal {K}},\ x\circ y=0. \end{aligned}$$
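A quick numerical sanity check of this equivalence (our own illustration; the scalar entry is stored last, as in the convention \(x=(x_1,x_2)\) with \(x_2\in \mathbb {R}\) above):

```python
import numpy as np

def jordan(x, y):
    """Jordan product on R^{n+1} with the scalar entry stored last:
    x o y = (x2 * y1 + y2 * x1, x^T y)."""
    x1, x2 = x[:-1], x[-1]
    y1, y2 = y[:-1], y[-1]
    return np.append(x2 * y1 + y2 * x1, x @ y)

# A boundary point x = (z, 1) with ||z|| = 1 and the cone direction
# y = s * (-z, 1) are orthogonal, and their Jordan product vanishes.
z = np.array([0.6, 0.8])
x = np.append(z, 1.0)
y = 2.0 * np.append(-z, 1.0)
```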
The next lemma, which can be verified via Sect. 1 and Proposition 1.1.3 in [7], describes the interior solution case of the complementarity problem \(CP({\mathcal {K}},F)\), where \({\mathcal {K}}\) is a cone and \(F:{\mathcal {K}} \rightarrow \mathbb {R}^{n}\) is a general mapping. Here \(CP({\mathcal {K}},F)\) is to find a vector x such that \( {\mathcal {K}}\ni x \perp F(x)\in {\mathcal {K}}^*\).
Lemma 3.3
Let \({\mathcal {K}}\) be a cone in \(\mathbb {R}^n\). Suppose that a vector \(x \in int\,{\mathcal {K}}\) solves the \(CP({\mathcal {K}},F)\). Then \(F(x)=0\).
Based on Lemmas 3.2 and 3.3, we have the following result.
Theorem 3.4
Let \({\mathcal {A}},{\mathcal { B}} \in {{\mathtt {T}}}^m ({\mathbb {R}}^{n+1})\) and \(x=(z,1)\in \mathbb {R}^{n+1}\).
-
(1)
\((\lambda ,x)\in int\,{\mathbb {S}}\) if and only if \((\lambda ,z) \in \Omega \), where
$$\begin{aligned} \begin{aligned} \Omega = \left\{ (\lambda ,z)\big |\, {\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0,\quad z^Tz<1 \right\} .\\ \end{aligned} \end{aligned}$$(3.1) -
(2)
\((\lambda ,x)\in bd\,{\mathbb {S}}\) if and only if \((\lambda , z) \in \hat{\Omega }\), where
$$\begin{aligned} \begin{aligned} \hat{\Omega }= \left\{ (\lambda ,z)\Big |\, \begin{array}{ll} &{}({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z = 0,\\ &{}({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\ge 0, \quad z^Tz=1 \end{array} \right\} .\\ \end{aligned} \end{aligned}$$(3.2)
Proof
(1) “\(\Rightarrow \)” \((\lambda ,x)\in int\,{\mathbb {S}}\) means that
$$\begin{aligned} x=(z,1)\in int\,{\mathcal {K}},\quad {\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1}\in {\mathcal {K}}^*,\quad x^T({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0. \end{aligned}$$
Let \(F(x)={\mathcal {A}}x^{m-1} -\lambda {\mathcal {B}}x^{m-1}\). Then \(x\in int\,{\mathcal {K}}\) solves \(CP({\mathcal {K}},F)\). From Lemma 3.3, we have \({\mathcal {A}}x^{m-1} -\lambda {\mathcal {B}}x^{m-1}=0\). Together with \(x\in int\,{\mathcal {K}}\), we have \((\lambda , z)\in \Omega \).
“\(\Leftarrow \)” \( (\lambda , z)\in \Omega \) leads to \(x=(z,1)\in int\,{\mathcal {K}}\), \({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0\in {\mathcal {K}}\) and \(x^T({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0\). Hence \((\lambda , x)\in int\,{\mathbb {S}}\) is asserted.
(2) “\(\Rightarrow \)” From Lemma 3.2 and \((\lambda ,x)\in bd\,{\mathbb {S}}\), it follows that
$$\begin{aligned} x\circ ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0,\qquad {\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}\in {\mathcal {K}}. \end{aligned}$$
\(x\circ ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0\) implies that \(({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z = 0\). From \(x\in bd{{\mathcal {K}}}\), it is clear that \( (\lambda , z)\in \hat{\Omega }\).
“\(\Leftarrow \)” From \( (\lambda , z)\in \hat{\Omega }\), we have
$$\begin{aligned} ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}=-({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\, z,\quad ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\ge 0. \end{aligned}$$
By direct computation,
$$\begin{aligned} \Vert ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}\Vert _2=({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\Vert z\Vert _2=({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}, \end{aligned}$$
and hence
$$\begin{aligned} {\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}\in {\mathcal {K}}={\mathcal {K}}^*. \end{aligned}$$
Together with \(z^T z=1\), it holds
$$\begin{aligned} x^T({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=(1-z^Tz)\,({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}=0. \end{aligned}$$
Hence, we can assert that \((\lambda ,x)\in bd\,{\mathbb {S}}\). \(\square \)
Based on Theorem 3.4, we now discuss an upper bound for the number of SOC-complementarity eigenvalues. Before proceeding, we need the following definition. Let \({\mathcal {A}}\) and \({\mathcal {B}}\) be two mth order tensors in \({\mathtt {T}}^m(\mathbb {C}^{n})\); \({({\mathcal {A}}, {\mathcal {B}})}\) is called a regular tensor pair if \(det(\alpha {\mathcal {A}}-\beta {\mathcal {B}}) \ne 0 \) for some \((\alpha , \beta ) \in \mathbb {C}\times \mathbb {C}\). Conversely, we call \(({\mathcal {A}}, {\mathcal {B}})\) a singular tensor pair if \(det(\alpha {\mathcal {A}}-\beta {\mathcal {B}}) = 0\) for all \((\alpha , \beta )\in \mathbb {C}\times \mathbb {C}\). Here, the determinant of a tensor was first introduced by Qi [25], and was further discussed by Hu et al. [14]. The determinant of an mth order n-dimensional tensor \({\mathcal {A}}\) is the resultant of the homogeneous system \({\mathcal {A}}x^{m-1} = 0\), which is the unique (up to normalization) polynomial in the entries of \({\mathcal {A}}\) such that
$$\begin{aligned} det({\mathcal {A}})=0\ \Longleftrightarrow \ {\mathcal {A}}x^{m-1}=0\ \text{ has } \text{ a } \text{ nonzero } \text{ solution } x\in \mathbb {C}^n. \end{aligned}$$
For convenience of notation, tensors \(\bar{\mathcal {A}}, \bar{\mathcal {B}}\in {\mathtt {T}}^{m+1}(\mathbb {R}^{n+1})\) are defined such that, for \(x=(z,t)\in \mathbb {R}^{n+1}\),
$$\begin{aligned} \bar{\mathcal {A}}x^{m}=x\circ ({\mathcal {A}}x^{m-1}),\qquad \bar{\mathcal {B}}x^{m}=x\circ ({\mathcal {B}}x^{m-1}). \end{aligned}$$(3.3)
Theorem 3.5
-
(1)
Let \({\mathcal {A}}\),\({\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\). Then there are at most \((n+1)(m^n+(m-1)^n)\) SOC-complementarity eigenvalues of \(({\mathcal {A}},{\mathcal {B}})\) if \(({\mathcal {A}},{\mathcal {B}})\) and \((\bar{\mathcal {A}},\bar{\mathcal {B}})\) are regular tensor pairs, respectively.
-
(2)
If \({\mathcal {A}},{\mathcal {B}}\) are generic tensors in \({\mathtt {T}}^m(\mathbb {R}^{n+1})\), then for each SOC-complementarity eigenvalue, there is a unique SOC-complementarity eigenvector (up to scaling).
Proof
(1) For SOC-complementarity eigenpair \((\lambda , x)\) with \(x\in int\,{\mathcal {K}}\), we have \(({\mathcal {A}}-\lambda {\mathcal {B}}) x^{m-1}={\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}} x^{m-1}=0\) from Theorem 3.4. This means that \((\lambda ,x)\) is a generalized tensor eigenpair of \(({\mathcal {A}},{\mathcal {B}})\). From Theorem 2.1 in [6], there are at most \((n+1)(m-1)^n\) eigenvalues counting multiplicity when (\({\mathcal {A}},{\mathcal {B}}\)) is a regular tensor pair.
For an SOC-complementarity eigenpair \((\lambda , x)\) with \(x=(z,t)\in bd\,{\mathcal {K}}\), we have
$$\begin{aligned} x\circ ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0. \end{aligned}$$
Let \(\bar{{\mathcal {A}}}, \bar{{\mathcal {B}}}\in {\mathtt {T}}^{m+1}(\mathbb {R}^{n+1})\) be as in (3.3). Then the above equality can be rewritten as
$$\begin{aligned} \bar{{\mathcal {A}}}x^{m}-\lambda \bar{{\mathcal {B}}}x^{m}=0. \end{aligned}$$
Thus, \((\lambda , x)\) is a generalized tensor eigenpair of \((\bar{\mathcal {A}},\bar{\mathcal {B}})\), and there are at most \((n+1)m^n\) such eigenvalues counting multiplicity from Theorem 2.1 in [6] when \((\bar{\mathcal {A}},\bar{\mathcal {B}})\) is a regular tensor pair.
Combining these two cases, there are at most \((n+1)(m^n+(m-1)^n)\) SOC-complementarity eigenvalues of \(({\mathcal {A}},{\mathcal {B}})\), which establishes (1).
(2) When \({\mathcal {A}},{\mathcal {B}}\) are generic in \({\mathtt {T}}^m(\mathbb {R}^{n+1})\), \(\bar{\mathcal {A}},\bar{\mathcal {B}}\) are also generic in \({\mathtt {T}}^{m+1}(\mathbb {R}^{n+1})\). From Theorem 3.1 in [8], the solution x of \({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0\) or \(\bar{\mathcal {A}}x^m-\lambda \bar{\mathcal {B}}x^m=0\) is unique for each \(\lambda \), up to scaling. Therefore, the SOC-complementarity eigenvector associated with each SOC-complementarity eigenvalue is unique up to scaling. \(\square \)
The existence of solutions to the SOCTEiCP follows from Theorem 2.3 in [13], which is restated as follows.
Lemma 3.6
There exists at least one SOC-complementarity eigenpair when \({\mathcal {B}}\) is strictly SOC-positive, where a tensor \({\mathcal {B}}\) is called strictly SOC-positive if \({\mathcal {B}}x^m>0\) for all nonzero vectors x in the second-order cone.
Strict SOC-positivity can be tested by solving a polynomial optimization problem in a way similar to [23]; we omit the details here.
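A cheap necessary check of strict SOC-positivity (our own sketch; a genuine certificate would require the polynomial optimization mentioned above) is to sample the compact slice \(\{x=(z,1): z^Tz\le 1\}\) of \({\mathcal {K}}\) and evaluate \({\mathcal {B}}x^m\):

```python
import numpy as np

def bxm(B, x):
    """Evaluate the homogeneous form B x^m for a square tensor B."""
    out = np.asarray(B, dtype=float)
    for _ in range(out.ndim):
        out = out @ x
    return float(out)

def sample_soc_positive(B, trials=2000, seed=0):
    """Return False if some sampled nonzero x in K has B x^m <= 0;
    True only means no violation was found among the samples."""
    rng = np.random.default_rng(seed)
    n = np.asarray(B).shape[0] - 1
    for _ in range(trials):
        z = rng.uniform(-1.0, 1.0, n)
        r = np.linalg.norm(z)
        if r > 1.0:
            z /= r  # scale onto the unit sphere, staying in the slice
        if bxm(B, np.append(z, 1.0)) <= 0.0:
            return False
    return True
```

For \(m=2\), the identity matrix passes the check, while \(\mathrm{diag}(1,-1)\) fails it immediately.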
Motivated by Theorem 3.4, we consider the following polynomial optimization problems to compute the SOC-complementarity eigenpairs:
$$\begin{aligned} \min \ \lambda \quad \text{ s.t. }\ {\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0,\ z^Tz\le 1,\ x=(z,1), \end{aligned}$$(3.4)
and
$$\begin{aligned} \min \ \lambda \quad \text{ s.t. }\ (\lambda ,z)\in \hat{\Omega }. \end{aligned}$$(3.5)
Clearly, \(\Omega \) is included in the feasible set of (3.4), and \(\hat{\Omega }\) is the feasible set of (3.5). Moreover, it is easy to check whether a feasible solution of (3.4) lies in \(\Omega \). Motivated by these facts, we consider how to solve (3.4) and (3.5) in the following.
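Following Theorem 3.4, one can check numerically whether a candidate pair \((\lambda ,z)\) lies in \(\Omega \) or \(\hat{\Omega }\) by testing the constraints in (3.1) and (3.2) directly. The sketch below is our own illustration (the tolerance and function names are ours):

```python
import numpy as np

def t_apply(T, x):
    """Contract a square tensor T with x in all modes but the first."""
    out = np.asarray(T, dtype=float)
    for _ in range(out.ndim - 1):
        out = out @ x
    return out

def classify(A, B, lam, z, tol=1e-8):
    """Return 'interior' if (lam, z) is in Omega, 'boundary' if it is
    in Omega-hat, and None otherwise (cf. (3.1) and (3.2))."""
    x = np.append(z, 1.0)
    w = t_apply(A, x) - lam * t_apply(B, x)
    zz = float(z @ z)
    if np.linalg.norm(w) <= tol and zz < 1.0 - tol:
        return 'interior'
    if (abs(zz - 1.0) <= tol and w[-1] >= -tol
            and np.linalg.norm(w[:-1] + w[-1] * z) <= tol):
        return 'boundary'
    return None
```

For \(m=2\), \({\mathcal {A}}=\mathrm{diag}(2,3)\) and \({\mathcal {B}}=I\), the pair \((3,(0,1))\) is interior and \((2.5,(1,1))\) is boundary.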
4 Computation of boundary SOC-complementarity eigenpairs
In this section, we study how to compute all boundary SOC-complementarity eigenpairs. For generic tensors \({\mathcal {A}}\) and \({\mathcal {B}}\), there are finitely many SOC-complementarity eigenvalues by Theorem 3.5. This motivates us to assume that \(\lambda ^2\le M\) for some \(M>0\). When the boundary SOC-complementarity eigenvalues are finite, we order them monotonically as
$$\begin{aligned} \lambda _1<\lambda _2<\cdots <\lambda _N. \end{aligned}$$
Let
$$\begin{aligned} {\mathbb {S}}_i:=\left\{ (\lambda _i,z)\mid (\lambda _i,z)\in \hat{\Omega }\right\} ,\quad i\in [N]. \end{aligned}$$
Then \(bd\,{\mathbb {S}}:=\{(\lambda ,x): x=(z,1), (\lambda ,z)\in {\mathbb {S}}_1\cup {\mathbb {S}}_2\cup \cdots \cup {\mathbb {S}}_N\}\). Therefore, it suffices to compute \({\mathbb {S}}_i\) to get \(bd\,{\mathbb {S}}\).
In the following, we compute all boundary SOC-complementarity eigenpairs sequentially if they are finite.
4.1 Computation of \({\mathbb {S}}_1\)
We first consider the following polynomial optimization problem:
$$\begin{aligned} \min \ \lambda \quad \text{ s.t. }\ (\lambda ,z)\in \hat{\Omega },\ \lambda ^2\le M. \end{aligned}$$
It is clear that \((\lambda ,z)\) is feasible if and only if \((\lambda ,z)\in \hat{\Omega }\). For convenience of notation, let \(g=(g_1,g_2)\), \(h=(h_1,\ldots , h_{n+1})\) with \(g_1=({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\), \(g_2=M-\lambda ^2\), \(h_i=({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{i}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z_{i}\) for \(i\in [n]\) and \(h_{n+1}=z^T z-1\). Then the above problem can be rewritten as follows:
$$\begin{aligned} \lambda _1:=\min \ \lambda \quad \text{ s.t. }\ h(\lambda ,z)=0,\ g(\lambda ,z)\ge 0, \end{aligned}$$(4.1)
which is a polynomial optimization problem of degree \(m+1\) in \((\lambda ,z)\). To solve (4.1), we consider Lasserre-type semidefinite relaxations. For convenience of notation, let \(\bar{e}\) be the coefficient vector of the objective of (4.1). Then Lasserre's hierarchy of semidefinite relaxations for solving (4.1) is
$$\begin{aligned} \nu _{1,k}:=\min \ \langle \bar{e}, y\rangle \quad \text{ s.t. }\ \langle 1, y\rangle =1,\ L_{h}^{(k)}(y)=0,\ L_{g}^{(k)}(y)\succeq 0,\ M_k(y)\succeq 0,\ y\in \mathbb {R}^{{\mathbb {N}}_{2k}^{n+1}}, \end{aligned}$$(4.2)
for the order \(k=k_0, k_0+1,\ldots \), where \(k_0=\lceil \frac{m+2}{2}\rceil .\) In the above, \(\langle 1, y\rangle =1\) means that the first entry of y is one, and the matrices \(L_{h}^{(k)}(y), L_{g}^{(k)}(y), M_{k}(y)\) are defined as in Sect. 2. Its dual problem is
$$\begin{aligned} \bar{\nu }_{1,k}:=\max \ \gamma \quad \text{ s.t. }\ \bar{e}-\gamma \in I_{2k}(h)+Q_{k}(g). \end{aligned}$$(4.3)
It can be shown that for all k,
$$\begin{aligned} \bar{\nu }_{1,k}\le \nu _{1,k}\le \lambda _1, \end{aligned}$$
and the sequences \(\{ \nu _{1,k} \}\) and \(\{ \bar{\nu }_{1,k} \}\) are monotonically increasing, that is,
$$\begin{aligned} \nu _{1,k}\le \nu _{1,k+1},\qquad \bar{\nu }_{1,k}\le \bar{\nu }_{1,k+1}\quad \text{ for } \text{ all } k\ge k_0. \end{aligned}$$
Suppose that \(y^ {1,k}\) is an optimizer of (4.2). If the truncation \(\bar{y} := y^{1,k}|_{2t} \) satisfies the rank condition
$$\begin{aligned} rank\ M_{t-k_0}(\bar{y})=rank\ M_{t}(\bar{y}) \end{aligned}$$(4.4)
for some integer \(t \in [k_0,k]\), then \(\lambda _1=\nu _{1,k}\) and we can get \(r:=rank\ M_t(\bar{y})\) distinct real global optimizers of (4.1) according to [21]. All global optimizers of (4.1) extracted from the optimizers \(y^{1,k}\) of (4.2) constitute \({\mathbb {S}}_1\).
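The rank condition can be tested with an SVD-based numerical rank. The following sketch is our own illustration (the tolerance is a heuristic choice, not from the paper):

```python
import numpy as np

def numerical_rank(M, tol=1e-6):
    """Rank of M by counting singular values above a relative cutoff."""
    s = np.linalg.svd(np.asarray(M, dtype=float), compute_uv=False)
    return int(np.sum(s > tol * max(1.0, s[0])))

def flat_truncation(moment_mats, k0, tol=1e-6):
    """moment_mats[t] holds the moment matrix of the truncation y|_2t
    for t = 0, 1, ..., k.  Return the first t >= k0 with
    rank M_{t-k0} = rank M_t, or None if no such t exists."""
    k = len(moment_mats) - 1
    for t in range(k0, k + 1):
        if numerical_rank(moment_mats[t - k0], tol) == \
           numerical_rank(moment_mats[t], tol):
            return t
    return None
```

When all moment matrices come from a single atom (Dirac measure), every rank is one and the condition holds at the first tested order.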
Theorem 4.1
Let \({\mathcal {A}},{\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\), and \(\hat{\Omega }\) be the feasible set of problem (4.1). Then we have the following properties:
-
(1)
\({\hat{\Omega }}=\emptyset \) if and only if the semidefinite relaxation (4.2) is infeasible for some k.
-
(2)
If \({\hat{\Omega }}\ne \emptyset \), then the feasible set of (4.2) is nonempty and compact. Furthermore,
$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\nu _{1,k}=\lim \limits _{k\rightarrow \infty } \bar{\nu }_{1,k}=\lambda _1. \end{aligned}$$ -
(3)
If there are finitely many boundary SOC-complementarity eigenvalues, then for all big enough k,
$$\begin{aligned} \nu _{1,k}=\bar{\nu }_{1,k}=\lambda _1. \end{aligned}$$Furthermore, if there are finitely many \(x=(z,1)\) such that \((\lambda ,z)\in {\mathbb {S}}_{1}\), then for all big enough k and each minimizer \(y^{1,k}\) of (4.2), rank condition (4.4) is satisfied for some \(t\le k\).
Proof
(1) “\(\Rightarrow \)” Suppose that \({\hat{\Omega }}=\emptyset \). By the Positivstellensatz [1], \(-1\in I(h)+Q(g)\). Hence \(-1\in I_{2k}(h)+Q_{k}(g)\) for all big enough k. This means that problem (4.3) is unbounded above, and hence (4.2) is infeasible for such k by weak duality.
“\(\Leftarrow \)” Suppose that problem (4.2) is infeasible for some k. Then problem (4.1) is infeasible, since \([(\lambda , z)]_{2k}\) would be a feasible solution of (4.2) for any feasible solution \((\lambda , z)\) of (4.1); hence \({\hat{\Omega }}=\emptyset \).
(2) \({\hat{\Omega }}\ne \emptyset \) means that the feasible set of (4.2) is nonempty, since (4.2) is a relaxation of (4.1). By a discussion similar to that of Theorem 3.2 in [23], the feasible set of (4.2) is always compact. Furthermore, the optimal value \(\nu _{1,k}\) is attained at some optimizer \(y^{1,k}\) for all k. Since \(M-\lambda ^2\) belongs to the tuple g and \(z^T z-1\) belongs to the tuple h, Assumption 4.1 in [15] is satisfied by taking \(u(\lambda , z)=M+1-z^T z-\lambda ^2\) there. Therefore, the asymptotic convergence follows from Theorem 4.2 in [15].
(3) Suppose that all boundary SOC-complementarity eigenvalues are listed as \(\lambda _1<\lambda _2<\cdots <\lambda _N\). Let \(s_i\) (\(i\in [N]\)) be univariate real polynomials in \(\lambda \) such that
$$\begin{aligned} s_i(\lambda _j)=\left\{ \begin{array}{ll} 1, &{} i=j,\\ 0, &{} i\ne j, \end{array}\right. \end{aligned}$$
and let \(f_i:=(\lambda _i-\lambda _1)\,s_i(\lambda )^2\). Let \(f:=f_1+f_2+\cdots +f_N\in \Sigma [x]_{2k'}\) for some positive integer \(k'\) and \(\bar{f}:= \bar{e}-\lambda _1-f\). Clearly, \(\bar{f}= 0\) for all \((\lambda ,x)\in bd\,{\mathbb {S}}\). By the Real Nullstellensatz [17, Theorem 2.12], there exist \(q\in Q(g)\) and a positive integer l such that \({\bar{f}}^{2l}+q\in I(h)\). Although the remainder of the proof is similar to that of Theorem 3.1 in [24], we present it here for completeness. For any \(\delta >0\) and \(c>0\), let
$$\begin{aligned} \theta _\delta :=\delta \left( 1+\frac{\bar{f}}{\delta }+c\Big (\frac{\bar{f}}{\delta }\Big )^{2l}\right) ,\qquad \phi _\delta :=-\,c\,\delta ^{1-2l}\,{\bar{f}}^{2l}. \end{aligned}$$
From Lemma 2.1 in [22], there exists \(k_1\) such that
$$\begin{aligned} \theta _\delta \in \Sigma [\lambda ,z]_{2k_1},\qquad \phi _\delta =c\,\delta ^{1-2l}q-c\,\delta ^{1-2l}\big ({\bar{f}}^{2l}+q\big )\in I_{2k_1}(h)+Q_{k_1}(g) \end{aligned}$$
for all \(\delta >0\) and \(c\ge \frac{1}{2l}(1-\frac{1}{2l})^{2l-1}\). This means that \(\bar{e}-(\lambda _1-\delta )=\theta _\delta +\phi _\delta +f\in I_{2k_1}(h)+Q_{k_1}(g)\).
Thus, \(\lambda _1-\delta \) is feasible in (4.3) at the order \(k_1\) for every \(\delta >0\), and hence \(\bar{\nu }_{1,k_1}\ge \lambda _1\) by the arbitrariness of \(\delta \).
On the other hand, \(\bar{\nu }_{1,k}\le \lambda _1\) for all k. Now we can assert that \(\bar{\nu }_{1,k_1}=\lambda _1\) and hence \(\nu _{1,k_1}=\lambda _1\). Since \(\{\nu _{1,k}\}_k\) is monotonically increasing, \(\nu _{1,k}=\lambda _1\) for all \(k\ge k_1\). This means that for all big enough k,
$$\begin{aligned} \nu _{1,k}=\bar{\nu }_{1,k}=\lambda _1. \end{aligned}$$
Furthermore, if (4.1) has finitely many minimizers, the rank condition (4.4) must be satisfied by Theorem 2.6 in [21], which completes the proof. \(\square \)
4.2 Computation of \({\mathbb {S}}_{i+1}\)
We now discuss how to determine whether \({\mathbb {S}}_{i+1}\) \((i = 1,\ldots )\) exists and how to compute it if it does. Suppose that \({\mathbb {S}}_{i}\) is known and there exists \(\epsilon >0\) such that \(0< \epsilon <\lambda _{i+1}-\lambda _i \). Consider the following optimization problem:
$$\begin{aligned} \lambda _{i+1}:=\min \ \lambda \quad \text{ s.t. }\ h(\lambda ,z)=0,\ g(\lambda ,z)\ge 0,\ \lambda \ge \lambda _i+\epsilon . \end{aligned}$$(4.5)
Similarly, Lasserre-type semidefinite relaxations can be applied to solve (4.5). For the order \(k = k_0,k_0+1,\ldots \), the kth-order Lasserre relaxation is
$$\begin{aligned} \nu _{i+1,k}:=\min \ \langle \bar{e}, y\rangle \quad \text{ s.t. }\ \langle 1, y\rangle =1,\ L_{h}^{(k)}(y)=0,\ L_{g}^{(k)}(y)\succeq 0,\ L_{\lambda -\lambda _i-\epsilon }^{(k)}(y)\succeq 0,\ M_k(y)\succeq 0. \end{aligned}$$(4.6)
The dual problem of (4.6) is
$$\begin{aligned} \bar{\nu }_{i+1,k}:=\max \ \gamma \quad \text{ s.t. }\ \bar{e}-\gamma \in I_{2k}(h)+Q_{k}\big ((g,\ \lambda -\lambda _i-\epsilon )\big ). \end{aligned}$$
Similarly, we have
$$\begin{aligned} \bar{\nu }_{i+1,k}\le \nu _{i+1,k}\le \lambda _{i+1}. \end{aligned}$$
Moreover, the sequences \(\{ \nu _{i+1,k} \}\) and \(\{ \bar{\nu }_{i+1,k} \}\) are monotonically increasing in k.
If problem (4.6) is infeasible for some k, then \({\mathbb {S}}_{i+1}\) is empty, which means that \( bd\,\mathbb {S} =\{(\lambda , x)\mid x=(z,1), (\lambda , z)\in {\mathbb {S}}_{1}\cup {\mathbb {S}}_{2} \cup \cdots \cup {\mathbb {S}}_{i}\}\). Otherwise, suppose that \(y^ {i+1, k}\) is an optimizer of (4.6). If for some integer \(t \in [k_0,k]\) the truncation \(\bar{y} := y^{i+1, k}|_{2t} \) satisfies the rank condition (4.4), then \(\nu _{i+1, k}= \lambda _{i+1}\), and the global optimizers of (4.5) returned by all optimizers \(y^ {i+1, k}\) of (4.6) constitute \({\mathbb {S}}_{i+1}\).
In practice, the existence of \(\lambda _{i+1}\) is usually unknown in advance. Even if it exists, its value is typically not available, so we need to determine a value of \(\epsilon \) satisfying \(0<\epsilon <\lambda _{i+1}-\lambda _i\). To this end, we consider the polynomial optimization problem:
$$\begin{aligned} \tau :=\max \ \lambda \quad \text{ s.t. }\ h(\lambda ,z)=0,\ g(\lambda ,z)\ge 0,\ \lambda \le \lambda _i+\epsilon . \end{aligned}$$(4.7)
It can be solved by Lasserre relaxations like (4.2) and (4.3). For numerical reasons, the value \(\epsilon > 0\) cannot be too small. Let \(\epsilon =0.05\) and let \(\tau \) be the optimal value of (4.7). If \(\tau >\lambda _{i}\), we decrease \(\epsilon \) to \(\epsilon /2\) and solve (4.7) again. By repeating this process, we get \(\tau =\lambda _{i}\) ultimately.
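The \(\epsilon \)-halving loop can be sketched as follows. Here `solve_4_7` is a hypothetical stand-in for solving (4.7); in the test oracle it simply reads off the largest eigenvalue not exceeding \(\lambda _i+\epsilon \) from a known list, which is the value an exact relaxation would return:

```python
def shrink_epsilon(solve_4_7, lam_i, eps0=0.05, max_halvings=50):
    """Halve eps until the optimal value tau of (4.7) equals lam_i,
    which certifies that no eigenvalue lies in (lam_i, lam_i + eps]."""
    eps = eps0
    for _ in range(max_halvings):
        tau = solve_4_7(lam_i, eps)
        if abs(tau - lam_i) < 1e-12:
            return eps
        eps /= 2.0
    raise RuntimeError("eps became too small before tau reached lam_i")

def make_oracle(eigenvalues):
    """Hypothetical stand-in for solving (4.7) on a known spectrum."""
    def solve_4_7(lam_i, eps):
        return max(v for v in eigenvalues if v <= lam_i + eps)
    return solve_4_7
```

If the gap \(\lambda _{i+1}-\lambda _i\) is already larger than the initial \(\epsilon _0\), the loop returns after a single solve.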
4.3 The choice of M
Suppose that there exists \(M>0\) such that all feasible solutions \((\lambda , z)\) of (3.5) satisfy \(\lambda ^2 \le M\). Now we discuss how to obtain such an M. To this end, we consider the following problem:
$$\begin{aligned} \min \ F(\lambda ,z)\quad \text{ s.t. }\ h(\lambda ,z)=0,\ \bar{g}(\lambda ,z)\ge 0, \end{aligned}$$(4.8)
where \(F(\lambda , z)\) is a general SOS polynomial in \((\lambda , z)\) and \(\bar{g}=( g_1 , -g_2)\). Consider the following kth-order Lasserre-type semidefinite relaxation:
$$\begin{aligned} \min \ \langle F, y\rangle \quad \text{ s.t. }\ \langle 1, y\rangle =1,\ L_{h}^{(k)}(y)=0,\ L_{\bar{g}}^{(k)}(y)\succeq 0,\ M_k(y)\succeq 0. \end{aligned}$$(4.9)
Clearly, M is big enough if and only if (4.8) is infeasible. Furthermore, problem (4.8) is infeasible if (4.9) is infeasible for some order k. Solve (4.9) with \(M:=M_0\) and \(k:=k_0\). If an optimal solution of (4.8) is obtained by solving (4.9), let \(M:=2M\) and solve (4.9) again. Otherwise, (4.9) is infeasible for some k and the corresponding M is the needed one.
4.4 Algorithm for computing boundary SOC-complementarity eigenpairs
Algorithm 4.2
For given tensors \({\mathcal {A}},{\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\), compute all the boundary SOC-complementarity eigenpairs as follows:
-
Input:
Choose a general SOS polynomial \( F \) and \(M>0\). Let \(k:=k_0=\lceil \frac{m+2}{2}\rceil \), \(\epsilon _0=0.05\), \(i=1\), and \({\mathbb {S}}=\emptyset \).
-
Step 1:
Solve the kth-order semidefinite relaxation (4.9). If it is infeasible, let \(k:=k_0\) and go to Step 2. Otherwise, if an optimizer \(y^{1,k}\) is obtained with the rank condition (4.4), let \(M:=2M\), \(k:=k_0\) and go to Step 1; else let \(k:=k+1\) and go to Step 1;
-
Step 2:
Solve the kth-order semidefinite relaxation (4.2). If it is infeasible, then there are no boundary SOC-complementarity eigenpairs and stop. Otherwise, the optimal value \(\nu _{1,k}\) with an optimizer \(y^{1,k}\) can be computed.
-
Step 3:
If the rank condition (4.4) is satisfied, then \(\lambda _1=\nu _{1,k}\) and all optimal solutions \((\lambda , z)\) can be extracted, which constitute \({\mathbb {S}}_1\). Let \({\mathbb {S}}:={\mathbb {S}}\cup {\mathbb {S}}_1\), \(k:=k_0\), \(\epsilon :=\epsilon _0\) and go to Step 4. Otherwise, let \(k:=k+1\) and go to Step 2.
-
Step 4:
Solve problem (4.7). If \(\tau >\lambda _{i}\), let \(\epsilon :=\epsilon /{2}\) and go to Step 4. If \(\tau =\lambda _{i}\), go to Step 5.
-
Step 5:
Solve the relaxation (4.6). If it is infeasible, then \({\mathbb {S}}_{i+1}=\emptyset \) and stop. Otherwise, compute the optimal value \(\nu _{i+1,k}\) with an optimizer \(y^{i+1,k}\). If (4.4) is satisfied with \(\bar{y}=y^{i+1,k}|_{2t}\) for some \(t\in [k_0,k]\), then \(\nu _{i+1,k}=\lambda _{i+1}\) and optimal solutions \((\lambda , z)\) of (4.5) can be extracted, which constitute \({\mathbb {S}}_{i+1}\). Let \(k:=k_0\), \(\epsilon :=\epsilon _0\), \({\mathbb {S}}:={\mathbb {S}}\cup {\mathbb {S}}_{i+1}\), \(i:=i+1\) and go to Step 4. Otherwise, let \(k:=k+1\) and go to Step 5.
-
Output:
\(bd\,{\mathbb {S}}:=\{(\lambda , x)\mid x=(z,1), (\lambda , z)\in {\mathbb {S}}\}\).
Theorem 4.3
If there are finitely many boundary SOC-complementarity eigenvalues, then Algorithm 4.2 terminates after solving finitely many semidefinite relaxations.
The proof is similar to that of Theorem 4.1 and is omitted here.
5 Computation of interior SOC-complementarity eigenpairs
In this section, we consider how to compute all interior SOC-complementarity eigenpairs. Based on Theorem 3.4, the interior SOC-complementarity eigenpair set can be rewritten as
$$\begin{aligned} int\,{\mathbb {S}}=\left\{ (\lambda ,x)\mid {\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0,\ x=(z,1),\ z^Tz<1\right\} . \end{aligned}$$
From Theorem 3.5, there exists \(M> 0\) such that all interior SOC-complementarity eigenvalues are contained in a ball. Similarly, \(F(\lambda ,z)\) achieves different values at different \((\lambda ,x)\in int\,{\mathbb {S}}\) since \(F(\lambda ,z)\) is generically chosen. When the interior SOC-complementarity eigenvalues are finite, we order them monotonically as
$$\begin{aligned} \bar{\lambda }_1<\bar{\lambda }_2<\cdots <\bar{\lambda }_{\bar{N}}. \end{aligned}$$
For \(i\in [\bar{N}]\), let \(\bar{{\mathbb {S}}}_i:= \{(\bar{\lambda }_i,z)\big |\,{\mathcal {A}}x^{m-1}-\bar{\lambda }_i {\mathcal {B}}x^{m-1}=0,\, z^T z\le 1, \, x=(z,1) \}\) and \(int\,\bar{{\mathbb {S}}}:=\bar{{\mathbb {S}}}_{1}\cup \bar{{\mathbb {S}}}_{2}\cup \cdots \cup \bar{{\mathbb {S}}}_{\bar{N}}\). Then we have
$$\begin{aligned} int\,{\mathbb {S}}=\left\{ (\lambda ,x)\mid x=(z,1),\ (\lambda ,z)\in int\,\bar{{\mathbb {S}}},\ z^Tz<1\right\} . \end{aligned}$$
5.1 Computation of \(\bar{{\mathbb {S}}}_{1}\)
In this subsection, we show how to compute \(\bar{{\mathbb {S}}}_1\). Clearly, \(\bar{\lambda }_1\) is the optimal value of the following polynomial optimization problem:
$$\begin{aligned} \bar{\lambda }_1:=\min \ \lambda \quad \text{ s.t. }\ \bar{h}(\lambda ,z)=0,\ \bar{g}(\lambda ,z)\ge 0, \end{aligned}$$(5.1)
where \(\bar{h}(\lambda ,z)={\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}\) and \(\bar{g}(\lambda ,z)=(1-z^T z, M-\lambda ^2)\).
The kth-order Lasserre’s hierarchy of semidefinite relaxation for (5.1) is
where \(k\ge k_0=\lceil \frac{m+1}{2}\rceil \). The dual optimization problem of (5.2) is
Similarly, we have
and the sequences \(\{\rho _{1,k}\}\) and \(\{\bar{\rho }_{1,k}\}\) are monotonically increasing. If there exists \(t\in [k_0,k]\) such that the rank condition (4.4) is satisfied at \(\bar{y}^{1,k}\), an optimal solution of (5.2), then \(\rho _{1,k}=\bar{\lambda }_1\). Furthermore, the optimal solutions can be extracted, and they belong to \( \bar{{\mathbb {S}}}_1\).
5.2 Computation of \(\bar{{\mathbb {S}}}_{i+1}\)
Suppose that \(\bar{{\mathbb {S}}}_{i}\) is known. We now determine the next interior SOC-complementarity eigenpair and compute it if it exists. Let \(\epsilon >0\) be a small number such that \(\bar{\lambda }_{i+1}>\bar{\lambda }_i+\epsilon \). Consider the following optimization problem:
The kth-order Lasserre’s hierarchy of semidefinite relaxation for (5.4) is
where \(k=k_0, k_0+1,\ldots \). The dual optimization problem of (5.5) is
By solving (5.5) sequentially, all global optimizers \((\lambda ,z)\) of (5.4) can be computed. In a way similar to the computation of \(bd\,{\mathbb {S}}\), we can obtain \(int\,{\mathbb {S}}\). The choices of M and \(\epsilon \) are similar to those in Sect. 4.3, with \(\bar{h}(\lambda , z)\) and \(\bar{g}(\lambda ,z)\) in place of \(h(\lambda ,z)\) and \(g(\lambda ,z)\), so we omit the details here.
5.3 Algorithm for computing interior SOC-complementarity eigenpairs
Algorithm 5.1
For given tensors \({\mathcal {A}}, {\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\), compute all interior SOC-complementarity eigenpairs as follows:
-
Input:
Choose a general polynomial \( F \) and a positive number M. Let \(k:=k_0=\lceil \frac{m+1}{2}\rceil \), \(\epsilon :=\epsilon _0=0.05\), \(i:=1\), and \(int\, \bar{{\mathbb {S}}}:=\emptyset \).
-
Step 1:
Solve (4.9) with \(\bar{h},\bar{g}\) instead of h, g. If it is infeasible, let \(k:=k_0\) and go to Step 2. Otherwise, if an optimizer \(y^{1,k}\) is obtained with the rank condition (4.4) being satisfied, let \(M:=2M\), \(k:=k_0\) and go to Step 1; otherwise, let \(k:=k+1\) and go to Step 1.
-
Step 2:
Solve the kth-order relaxation (5.2). If it is infeasible, then \(int\,\bar{{\mathbb {S}}}=\emptyset \) and stop. Otherwise, compute the optimal value \(\rho _{1,k}\) with an optimizer \(\bar{y}^{1,k}\).
-
Step 3:
If the rank condition (4.4) is satisfied, then \(\bar{\lambda }_1=\rho _{1,k}\) and the optimal solution set \(\bar{{\mathbb {S}}}_1\) of (5.1) can be obtained. Let \(int\,\bar{{\mathbb {S}}}:=int\,\bar{{\mathbb {S}}}\cup \bar{{\mathbb {S}}}_1\), \(k:=k_0\), and go to Step 4. Otherwise, let \(k:=k+1\) and go to Step 2.
-
Step 4:
Solve problem (4.7) with \(\bar{h},\bar{g}\). If \(\tau >\bar{\lambda }_{i}\), let \(\epsilon :=\epsilon /2\) and go to Step 4. If \(\tau =\bar{\lambda }_{i}\), go to Step 5.
-
Step 5:
Solve the relaxation (5.5). If it is infeasible, then \(\bar{{\mathbb {S}}}_{i+1}=\emptyset \) and stop. Otherwise, compute the optimal value \(\rho _{i+1,k}\) with an optimizer \(\bar{y}^{i+1,k}\). If (4.4) is satisfied with \( \hat{y}=\bar{y}^{i+1,k}|_{2t}\) for some \(t\in [k_0,k]\), then \(\bar{\lambda }_{i+1}= \rho _{i+1,k}\) and \(\bar{{\mathbb {S}}}_{i+1}\) can be computed. Let \(int\,\bar{{\mathbb {S}}}:=int\,\bar{{\mathbb {S}}}\cup \bar{{\mathbb {S}}}_{i+1}\), \(k:=k_0\), \(i:=i+1\), \(\epsilon :=\epsilon _0\) and go to Step 4. If no such t exists, let \(k:=k+1\) and go to Step 5.
-
Output:
Let \(int\,{\mathbb {S}}=\{(\lambda , x)\mid x=(z,1),\,(\lambda ,z)\in int\,\bar{{\mathbb {S}}}\}{\setminus } bd\,{\mathbb {S}}\).
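As a rough illustration, the outer loop shared by Algorithms 4.2 and 5.1 can be sketched as a generic driver: increase the relaxation order k until either the relaxation becomes infeasible (no further eigenvalue exists) or the flat-truncation rank condition (4.4) certifies the optimal value. The sketch below abstracts the semidefinite relaxation solver and the rank check as callbacks, since both depend on moment-matrix machinery (in our experiments, GloptiPoly); the function names and the order cap are our illustrative assumptions, not part of the algorithms themselves.

```python
def next_eigenvalue(solve_relaxation, rank_condition_holds, k0, k_max=20):
    """Generic driver for one pass of a Lasserre-type hierarchy.

    solve_relaxation(k)          -> (value, moment_data), or None if infeasible
    rank_condition_holds(md, k)  -> True if flat truncation certifies optimality
    """
    k = k0
    while k <= k_max:
        result = solve_relaxation(k)
        if result is None:
            return None            # relaxation infeasible: eigenvalue set is empty
        value, moment_data = result
        if rank_condition_holds(moment_data, k):
            return value           # certified: this is the next eigenvalue
        k += 1                     # tighten the relaxation and retry
    raise RuntimeError("order cap reached without certification")
```

In the actual algorithms, certification additionally triggers extraction of the optimal solutions (the sets \({\mathbb {S}}_{i+1}\) or \(\bar{{\mathbb {S}}}_{i+1}\)) before the driver is restarted with an updated exclusion constraint.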
The finite convergence of Algorithm 5.1 is presented as follows.
Theorem 5.2
Assume that there are finitely many interior SOC-complementarity eigenvalues. Then Algorithm 5.1 terminates after solving finitely many semidefinite relaxations.
6 Numerical reports
In this section, we present numerical experiments for computing SOC-complementarity eigenvalues. We use the software GloptiPoly 3 [12] and SeDuMi [26] to solve the semidefinite relaxation problems. The experiments are implemented in MATLAB R2015 on a Dell laptop with an Intel Core i5 CPU (1.70 GHz) and 4 GB of RAM. Computational results are displayed with four decimal digits.
In this section, we use a column vector \((\lambda ,x)\) to denote an SOC-complementarity eigenpair. The first entry of \((\lambda , x)\in \mathbb {R}^{n+2}\) is \(\lambda \in \mathbb {R}\), the SOC-complementarity eigenvalue, and the remaining entries constitute the SOC-complementarity eigenvector x. For convenience of notation, we denote the problem studied in [13] as:
which can be rewritten as the SOCTEiCP of this paper with \({\mathcal {A}}=-\hat{\mathcal {B}}\) and \({\mathcal {B}}=-\hat{\mathcal {A}}\).
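The conversion above amounts to swapping and negating the two tensors. A minimal numpy sketch (the function name is ours, and tensors are represented as plain arrays):

```python
import numpy as np

def convert_from_hat(A_hat, B_hat):
    """Rewrite the problem of [13], given by (A_hat, B_hat),
    as the SOCTEiCP of this paper via A = -B_hat, B = -A_hat."""
    return -np.asarray(B_hat), -np.asarray(A_hat)

# Usage: given the tensors of the problem in [13],
# A, B = convert_from_hat(A_hat, B_hat)
# (lambda, x) then solves the SOCTEiCP of this paper with (A, B).
```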
Example 6.1
Consider SOCTEiCP with matrices \(A, B\in {\mathtt {T}}^2(\mathbb {R}^2)\) given by
By direct computation, there are no SOC-complementarity eigenpairs.
Algorithm 4.2 returns a numerical certificate that no boundary SOC-complementarity eigenpairs exist in 0.654 s, and Algorithm 5.1 returns a numerical certificate that no interior SOC-complementarity eigenpairs exist in 0.181 s. However, the matrix \(\hat{B}=-A\) is not strictly SOC-positive, an assumption required by Algorithm SPA, which was proposed in [20] to solve the SOCTEiCP in [13]. Hence Algorithm SPA is not applicable to this example.
Example 6.2
Consider SOCTEiCP with the matrices \(A,B\in {\mathtt {T}}^2(\mathbb {R}^2)\) given by
Applying Algorithm 4.2, the boundary SOC-complementarity eigenpairs \((\lambda , x)=(0.6667,-1.0000,1.0000)^T\) and \((\lambda , x)=(2.0000,1.0000,1.0000)^T\) are computed in 5.313 s.
The unique interior SOC-complementarity eigenpair \((\lambda , x)=(2.0000,-0.3333,1.0000)^T\) is found by Algorithm 5.1 in about 1.489 s.
Example 6.3
Consider SOCTEiCP with the tensors \({\mathcal {A}},{\mathcal { B}}\in {\mathtt {T}}^4(\mathbb {R}^2)\), whose nonzero entries are listed below
By Algorithm 4.2, we get two boundary SOC-complementarity eigenpairs
This computation takes about 2.484 s.
By Algorithm 5.1, there are two interior SOC-complementarity eigenpairs
This computation takes about 2.215 s.
Example 6.4
[20, Example 5.1] This example deals with SOCTEiCP with two 4th-order 2-dimensional tensors \(\mathcal {A,B}\), and all nonzero entries of tensors \({\mathcal {A}}\) and \({\mathcal {B}}\) are listed below
and
Applying Algorithm 4.2, a boundary SOC-complementarity eigenpair \((\lambda ,x)=(0.4783,1.0000,1.0000)^T\) is computed in 1.431 s. On the other hand, an interior SOC-complementarity eigenpair \((\lambda ,x)=(0.4848, 0.3936,1.0000)^T\) is returned by Algorithm 5.1 in 0.924 s.
Example 6.5
Consider SOCTEiCP with the tensors \({\mathcal {A}},{\mathcal { B}}\in {\mathtt {T}}^5(\mathbb {R}^3)\), whose nonzero entries are listed as
By Algorithm 4.2, there are four boundary SOC-complementarity eigenpairs
This computation takes about 9.727 s. However, Algorithm 5.1 certifies in 2.685 s that there are no interior SOC-complementarity eigenpairs.
Example 6.6
Consider Example 4.1 in [13] with two 4th-order 4-dimensional tensors \({\mathcal {A}},{\mathcal { B}}\) with \({\mathcal {K}}={\mathcal {K}}^2\times {\mathcal {K}}^2\). That is, to find solutions of the following system
The nonzero entries of tensors \({\mathcal {A}}\) and \({\mathcal {B}}\) are listed in Tables 1 and 2, respectively.
Applying Algorithm SPA in [13] with 10 randomly generated initial points, we obtain the same SOC-complementarity eigenpair
Note that for any \(\alpha >0\), \(\alpha (0.1221,0.0388,0.5433,0.2699)^{T}\) is an SOC-complementarity eigenvector. Hence, \((1.0000,0.3176,4.4498,2.2109)^T\) is an SOC-complementarity eigenvector, obtained by letting \(\alpha ={1\over 0.1221}\).
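This scale invariance is easy to check numerically: dividing by the first component normalizes the eigenvector so that its first entry equals 1. A short numpy check, starting from the four-digit values printed above (so the result agrees with the normalized eigenvector only up to rounding of the input data):

```python
import numpy as np

# Four-digit eigenvector reported by SPA (rounded input data).
x = np.array([0.1221, 0.0388, 0.5433, 0.2699])

alpha = 1.0 / x[0]        # choose alpha so the first entry becomes 1
x_scaled = alpha * x      # still an SOC-complementarity eigenvector
# x_scaled agrees with (1.0000, 0.3176, 4.4498, 2.2109)
# up to rounding of the four-digit inputs.
```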
To solve this problem, we first divide the SOC-complementarity eigenpairs into boundary, boundary-interior, and interior SOC-complementarity eigenpairs. Here, boundary (interior) SOC-complementarity eigenpairs are those whose eigenvectors lie on the boundary (in the interior) of \({\mathcal {K}}^2\times {\mathcal {K}}^2\), while boundary-interior SOC-complementarity eigenvectors lie in \((bd {{\mathcal {K}}^2}\times int {{\mathcal {K}}^2})\cup (int {{\mathcal {K}}^2}\times bd {{\mathcal {K}}^2})\). To obtain all SOC-complementarity eigenpairs, four algorithms similar to Algorithms 4.2 and 5.1 are used; we omit the details here. By these four algorithms, all SOC-complementarity eigenpairs are listed as:
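For the product cone \({\mathcal {K}}^2\times {\mathcal {K}}^2\), the classification above reduces to testing each block \((z,t)\) for \(\Vert z\Vert _2 = t\) (boundary of \({\mathcal {K}}^2\)) versus \(\Vert z\Vert _2 < t\) (interior). A hedged sketch of this test (function names and the tolerance are our illustrative choices):

```python
import numpy as np

def cone_part(block, tol=1e-8):
    """Classify (z, t) relative to the second-order cone:
    'boundary' if ||z|| == t, 'interior' if ||z|| < t (up to tol)."""
    z, t = block[:-1], block[-1]
    gap = t - np.linalg.norm(z)
    if gap < -tol:
        return 'outside'
    return 'boundary' if gap <= tol else 'interior'

def classify(x):
    """Classify an eigenvector x in K^2 x K^2 (x has 4 entries,
    split as two (z, t) blocks) as boundary, boundary-interior,
    or interior."""
    parts = {cone_part(x[:2]), cone_part(x[2:])}
    if parts == {'boundary'}:
        return 'boundary'
    if parts == {'interior'}:
        return 'interior'
    if parts == {'boundary', 'interior'}:
        return 'boundary-interior'
    return 'outside'
```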
Comparing the two results, our SDP relaxation method finds more SOC-complementarity eigenpairs than SPA.
References
Bochnak, J., Coste, M., Roy, M.-F.: Real Algebraic Geometry. Springer, Berlin (1998)
Chen, Z., Qi, L.: A semismooth Newton method for tensor eigenvalue complementarity problem. Comput. Optim. Appl. 65, 109–126 (2016)
Chen, Z., Yang, Q., Ye, L.: Generalized eigenvalue complementarity problem for tensors. Pac. J. Optim. 13, 527–545 (2017)
Cui, C., Dai, Y., Nie, J.: All real eigenvalues of symmetric tensors. SIAM J. Matrix Anal. Appl. 35, 1582–1601 (2014)
Curto, R., Fialkow, L.: Truncated K-moment problems in several variables. J. Oper. Theory 54, 189–226 (2005)
Ding, W., Wei, Y.: Generalized tensor eigenvalue problems. SIAM J. Matrix Anal. Appl. 36, 1073–1099 (2015)
Facchinei, F., Pang, J.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
Fan, J., Nie, J., Zhao, R.: The maximum tensor complementarity eigenvalues. Optim. Methods Softw. (2018). https://doi.org/10.1080/10556788.2018.1528251
Fan, J., Nie, J., Zhou, A.: Tensor eigenvalue complementarity problems. Math. Program. 170, 507–539 (2018)
Fukushima, M., Luo, Z., Tseng, P.: Smoothing functions for second-order-cone complementarity problems. SIAM J. Optim. 12, 436–460 (2002)
Helton, J., Nie, J.: A semidefinite approach for truncated K-moment problems. Found. Comput. Math. 12, 851–881 (2012)
Henrion, D., Lasserre, J., Löfberg, J.: GloptiPoly 3: moments, optimization and semidefinite programming. Optim. Methods Softw. 24, 761–779 (2009)
Hou, J., Ling, C., He, H.: A class of second-order cone eigenvalue complementarity problems for higher-order tensors. J. Oper. Res. Soc. China 5, 45–64 (2017)
Hu, S., Huang, Z.-H., Ling, C., Qi, L.: On determinants and eigenvalue theory of tensors. J. Symb. Comput. 50, 508–531 (2013)
Lasserre, J.: Global optimization with polynomials and the problem of moments. SIAM J. Optim. 11, 796–817 (2001)
Lasserre, J.: Moments, Positive Polynomials and Their Applications. Imperial College Press, London (2009)
Lasserre, J.: Introduction to Polynomial and Semi-algebraic Optimization. Cambridge University Press, Cambridge (2015)
Laurent, M.: Sums of squares, moment matrices and optimization over polynomials. In: Putinar, M., Sullivant, S. (eds.) Emerging Applications of Algebraic Geometry. IMA Volumes in Mathematics and Its Applications, vol. 149, pp. 157–270. Springer, Berlin (2009)
Laurent, M.: Optimization over polynomials: selected topics. In: Proceedings of the International Congress of Mathematicians, Seoul (2014)
Ling, C., He, H., Qi, L.: On the cone eigenvalue complementarity problem for higher-order tensors. Comput. Optim. Appl. 63, 143–168 (2016)
Nie, J.: Certifying convergence of Lasserre’s hierarchy via flat truncation. Math. Program. 142, 485–510 (2013)
Nie, J.: Polynomial optimization with real varieties. SIAM J. Optim. 23, 1634–1646 (2013)
Nie, J., Yang, Z., Zhang, X.: A complete semidefinite algorithm for detecting copositive matrices and tensors. SIAM J. Optim. 28, 2902–2921 (2018)
Nie, J., Zhang, X.: Real eigenvalues of nonsymmetric tensors. Comput. Optim. Appl. 70, 1–32 (2018)
Qi, L.: Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 40, 1302–1324 (2005)
Sturm, J.: SeDuMi 1.02: a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11–12, 625–653 (1999)
Xu, F., Ling, C.: Some properties on Pareto-eigenvalues of higher-order tensors. Oper. Res. Trans. 19, 34–41 (2015)
Yu, G., Song, Y., Xu, Y., Yu, Z.: Spectral projected gradient methods for generalized tensor eigenvalue complementarity problems. Numer. Algorithms 80, 1181–1201 (2019)
Zhao, X., Fan, J.: A semidefinite method for tensor complementarity problems. Optim. Methods Softw. 34, 758–769 (2019)
Acknowledgements
Xinzhen Zhang was partially supported by the National Natural Science Foundation of China (Grant Nos. 11871369 and 12071343). Guyan Ni was partially supported by the National Natural Science Foundation of China (Grant No. 11871472).
Cheng, L., Zhang, X. & Ni, G. A semidefinite relaxation method for second-order cone tensor eigenvalue complementarity problems. J Glob Optim 79, 715–732 (2021). https://doi.org/10.1007/s10898-020-00954-4