1 Introduction

For positive integers m and \(n_1,n_2,\ldots , n_m\), an mth order and \((n_1,n_2,\ldots , n_m)\)-dimensional real tensor can be viewed as an array in the space \({\mathbb {R}}^{n_1\times n_2\times \cdots \times n_m}\). Such a tensor \({\mathcal {A}}\) can be indexed as

$$\begin{aligned} {\mathcal {A}}=({\mathcal {A}}_{i_1\ldots i_m}), \quad 1\le i_j\le n_j,\quad j=1,\ldots , m. \end{aligned}$$

When \(n_1=\cdots =n_m=n\), \({\mathcal {A}}\) is called an mth order n-dimensional square tensor. In this case, the tensor space \({\mathbb {R}}^{n_1\times \cdots \times n_m}\) is denoted as \({\mathtt {T}}^m({\mathbb {R}}^n)\). A tensor in \({\mathtt {T}}^m({\mathbb {R}}^n)\) is said to be symmetric if its entries are invariant under permutations of indices, that is, \({\mathcal {A}}_{i_1\ldots i_m} = {\mathcal {A}}_{j_1\ldots j_m}\) for any permutation \((j_1,\ldots , j_m)\) of \((i_1,\ldots , i_m)\). For \({\mathcal {A}}\in {\mathtt {T}}^m({\mathbb {R}}^n)\) and \(x:=(x_1,\ldots , x_n)^T\in \mathbb {R}^n\), we use the notation

$$\begin{aligned} \left\{ \begin{array}{ll} {\mathcal {A}}x^m &{} :=\sum \limits _{1\le i_1,\ldots ,i_m\le n} {\mathcal {A}}_{i_1i_2\ldots i_m}x_{i_1}x_{i_2}\ldots x_{i_m},\\ {\mathcal {A}}x^{m-1}&{} : = \Big ( \sum \limits _{1\le i_2,\ldots , i_m\le n} {\mathcal {A}}_{ji_2\ldots i_m}x_{i_2}\ldots x_{i_m} \Big )_{j=1,\ldots ,n}. \end{array}\right. \end{aligned}$$

Clearly, \({\mathcal {A}} x^{m-1}\in \mathbb {R}^n\) is an n-dimensional vector.
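As a concrete illustration (our own sketch, not from the paper), both contractions can be evaluated with NumPy by repeatedly contracting the last index with x; the helper names `tensor_poly` and `tensor_map` are ours:

```python
import numpy as np

# Illustrative sketch: evaluating A x^m and A x^{m-1} for a tensor
# A in T^m(R^n), stored as an m-way NumPy array.
def tensor_poly(A, x):
    """Return A x^m, the value of the homogeneous polynomial."""
    out = A
    while out.ndim > 0:
        out = out @ x          # contract the last index with x
    return float(out)

def tensor_map(A, x):
    """Return A x^{m-1} in R^n: contract all indices except the first."""
    out = A
    while out.ndim > 1:
        out = out @ x
    return out

A = np.random.rand(4, 4, 4)    # m = 3, n = 4
x = np.random.rand(4)
# Consistency check: A x^m = x^T (A x^{m-1}).
assert np.isclose(tensor_poly(A, x), x @ tensor_map(A, x))
```

The final assertion checks the identity \({\mathcal {A}}x^m = x^T({\mathcal {A}}x^{m-1})\), which follows directly from the two definitions above.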

In this paper, we consider the second-order cone tensor eigenvalue complementarity problem (SOCTEiCP): given two tensors \({\mathcal {A}}, {{\mathcal {B}}} \in {\mathtt {T}}^m (\mathbb {R}^{n+1} )\), find a nonzero vector x and a real number \(\lambda \) such that

$$\begin{aligned} {\mathcal {K}}\ni x \perp ( {\mathcal {A }}{{x}}^{m-1} -\lambda \mathcal {B}x^{m-1}) \in \mathcal {K^*}, \end{aligned}$$
(1.1)

where \({\mathcal {K}}\) is the second-order cone (also called the Lorentz cone or ice-cream cone in the literature) defined by \({\mathcal {K}}:=\{x=(z,t)\in \mathbb {R}^{n+1}: \Vert z\Vert _{2}\le t\}\), and \({\mathcal {K}}^*\) is the dual cone of \({\mathcal {K}}\), defined by \({\mathcal {K}}^*:=\{y \mid x^Ty\ge 0, \ \forall x\in {\mathcal {K}}\}\). For \((\lambda , x)\) satisfying (1.1), \(\lambda \) is called an SOC-complementarity eigenvalue and x an SOC-complementarity eigenvector of \(({\mathcal {A}},{\mathcal {B}})\). Furthermore, we call such a pair \((\lambda ,x)\) an SOC-complementarity eigenpair of \(({\mathcal {A}},{\mathcal {B}})\).
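Membership in \({\mathcal {K}}\) is a single norm comparison. The following sketch (our own, with the scalar component stored last as in the definition above) also illustrates the standard fact that the second-order cone is self-dual, i.e. \({\mathcal {K}}^*={\mathcal {K}}\):

```python
import numpy as np

# Hedged sketch: membership in K = {(z, t) : ||z||_2 <= t},
# with the scalar component t stored last.
def in_soc(x, tol=1e-12):
    z, t = x[:-1], x[-1]
    return np.linalg.norm(z) <= t + tol

# K is self-dual: for x, y in K, x^T y >= 0 by Cauchy-Schwarz.
x = np.array([0.3, -0.4, 1.0])   # ||(0.3, -0.4)|| = 0.5 <= 1
y = np.array([0.0,  0.2, 0.5])
assert in_soc(x) and in_soc(y)
assert x @ y >= 0
assert not in_soc(np.array([1.0, 1.0, 1.0]))   # ||(1, 1)|| > 1
```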

For a general closed convex cone, problem (1.1) is called the tensor generalized eigenvalue complementarity problem (TGEiCP), introduced and studied in [20]. When the cone is the nonnegative orthant \(\mathbb {R}_{+}^{n}:=\{x\in \mathbb {R}^{n}{:} x \ge 0\}\), the problem is called the tensor eigenvalue complementarity problem (TEiCP) in [9, 20]. For this cone, the pair \((\lambda ,x)\) is called a Pareto-eigenpair of \(({\mathcal {A}},{\mathcal {B}})\). Theoretical and numerical results are presented in [2, 3, 8, 9, 20, 27, 28]. When the cone is a product of second-order cones, that is, \({\mathcal {K}}={\mathcal {K}}^{n_{1}}\times {\mathcal {K}}^{n_{2}} \times \cdots \times {\mathcal {K}}^{n_{r}}\), where each \({\mathcal {K}}^{n_i}\) is a second-order cone, problem (1.1) becomes the second-order cone tensor eigenvalue complementarity problem discussed in [13]. Clearly, the problem considered here is a special case of the one in [13].

Numerical methods have been discussed for the second-order cone tensor eigenvalue complementarity problem. A scaling-and-projection algorithm (SPA) was proposed in [20] and applied to solve the SOCTEiCP in [13]; it converges to an SOC-complementarity eigenpair. In general, the extreme SOC-complementarity eigenvalues are the most important among all SOC-complementarity eigenvalues. For example, the nonnegativity of the smallest SOC-complementarity eigenvalue of the matrix pair (A, I) certifies the SOC-copositivity of the symmetric matrix A, where I is the identity matrix and \(m=2\). However, it is hard to obtain the extreme SOC-complementarity eigenvalues by the SPA. On the other hand, existing algorithms cannot certify that \(({\mathcal {A}},{\mathcal {B}})\) has no SOC-complementarity eigenpairs.

In recent years, SDP relaxation methods have become efficient numerical tools for solving tensor optimization problems. Specifically, all tensor eigenvalues can be computed by solving a finite hierarchy of semidefinite relaxations [4, 24] when there are finitely many. Tensor complementarity problems and tensor eigenvalue complementarity problems can also be solved efficiently by SDP relaxation methods [8, 9, 29]. Furthermore, SDP relaxation methods provide numerical certificates when the tensor complementarity problem or the tensor eigenvalue complementarity problem has no solutions. Motivated by these facts, we consider how to apply SDP relaxation methods to compute all SOC-complementarity eigenpairs when there are finitely many.

The main contribution of this paper is to compute all real SOC-complementarity eigenpairs when there are finitely many. For convenience, we denote by \(bd\,{\mathbb {S}}\) and \(int\,{\mathbb {S}}\) the sets of boundary and interior SOC-complementarity eigenpairs, respectively. Both kinds of eigenpairs can be reformulated as feasible solutions of polynomial optimization problems. Based on these reformulations, all SOC-complementarity eigenpairs can be computed by solving a finite hierarchy of semidefinite relaxations.

\(\mathbf{Contributions} \): In this paper, we study the second-order cone tensor eigenvalue complementarity problem. First, we show that the solution set of the SOCTEiCP is the union of the solution sets of two real polynomial systems. Based on this equivalence, an upper bound on the number of SOC-complementarity eigenvalues is established. Then we reformulate the SOCTEiCP as two polynomial optimization problems. By solving a hierarchy of Lasserre-type semidefinite programming (SDP) relaxations, all SOC-complementarity eigenpairs of \(({\mathcal {A}},{\mathcal {B}})\) can be computed when there are finitely many.

The paper is organized as follows. Section 2 presents preliminaries on polynomial optimization. Section 3 reformulates the SOCTEiCP as polynomial optimization problems. Section 4 proposes Lasserre-type semidefinite relaxations for computing all boundary SOC-complementarity eigenpairs when there are finitely many. Section 5 proposes Lasserre-type semidefinite relaxations for computing all interior SOC-complementarity eigenpairs when there are finitely many. In Sect. 6, we report numerical results that show the efficiency of the proposed algorithms.

2 Preliminaries

In this paper, \( [n] = \{1,2,\ldots , n\} \). We use lowercase letters, e.g. \(x, y\), to denote vectors. Higher-order tensors are denoted as \( {\mathcal {A}},{\mathcal {B}},{\mathcal {C}},\ldots \). For any \(i\le n\), \(x_{[i]}:=(x_1,\dots , x_i)^T\in \mathbb {R}^i\). A pair \((x_1,x_2)\in \mathbb {R}^{n}\times \mathbb {R}\) is abbreviated as \((x_1,x_2)\in \mathbb {R}^{n+1}\). For a given tensor \({\mathcal {A}}\in {\mathtt {T}}^m({\mathbb {R}}^n)\) and any \(i\in [n]\), the subtensors \({\mathcal {A}}_{i}\in {\mathtt {T}}^{m-1}(\mathbb {R}^n)\) and \({\mathcal {A}}_{[i]}\in \mathbb {R}^{i\times n\times \cdots \times n}\) are defined by their entries

$$\begin{aligned} ({\mathcal {A}}_{i})_{j_2\ldots j_m}={\mathcal {A}}_{ij_2\ldots j_m},\quad ({\mathcal {A}}_{[i]})_{j_1j_2\ldots j_m}={\mathcal {A}}_{j_1j_2\ldots j_m},\quad j_1\le i. \end{aligned}$$

In the following, we review some basics in polynomial optimization. We refer to [17, 18] for surveys in polynomial optimization.

In the space \(\mathbb {R}^n\), the symbol \(\Vert \cdot \Vert _2\) denotes the standard Euclidean norm. Let \({\mathbb {R}}[x]\) be the ring of polynomials with real coefficients and in variables \(x:=(x_1,\dots , x_n)^T\), and let \({\mathbb {R}}[x]_d\) be the set of real polynomials in x whose degrees are at most d.

For a polynomial tuple \(h=(h_1,h_2,\ldots , h_s)\), the ideal generated by h is the set

$$\begin{aligned} I(h):= h_1\cdot {\mathbb {R}}[x]+h_2\cdot {\mathbb {R}}[x]+\cdots +h_s\cdot {\mathbb {R}}[x]. \end{aligned}$$

The kth truncation of I(h) is the set

$$\begin{aligned} I_k(h) := h_1\cdot {\mathbb {R}}[x]_{k-\deg (h_1)} +\cdots +h_s\cdot {\mathbb {R}}[x]_{k-\deg (h_s)}. \end{aligned}$$

The complex and real algebraic varieties of h are respectively defined as

$$\begin{aligned} {\mathcal {V}}_{{\mathbb {C}}}(h):=\{x\in {\mathbb {C}}^n \, \mid \, h(x)=0 \}, \quad {\mathcal {V}}_{\mathbb {R}}(h):={\mathcal {V}}_{{\mathbb {C}}}(h) \cap {\mathbb {R}}^n. \end{aligned}$$

A polynomial p is said to be a sum of squares (SOS) if there exist \(p_1,p_2,\ldots , p_r\in {\mathbb {R}}[x]\) such that \(p=p_1^2+p_2^2+\cdots +p_r^2\). The set of all SOS polynomials is denoted as \(\Sigma [x]\). For a given degree m, denote

$$\begin{aligned} \Sigma [x]_m := \Sigma [x] \cap {\mathbb {R}}[x]_m. \end{aligned}$$

The quadratic module generated by a polynomial tuple \(g=(g_1,\ldots , g_t)\) is the set

$$\begin{aligned} Q(g) := \Sigma [x]+ g_1 \cdot \Sigma [x] +\cdots + g_t \cdot \Sigma [x]. \end{aligned}$$

The kth truncation of the quadratic module Q(g) is the set

$$\begin{aligned} Q_k(g) := \Sigma [x]_{2k} + g_1\cdot \Sigma [x]_{2k-\deg (g_1)}+ \cdots + g_t \cdot \Sigma [x]_{2k-\deg (g_t)}. \end{aligned}$$

Note that if \(g=\emptyset \) is an empty tuple, then \(Q(g) = \Sigma [x]\) and \(Q_k(g) = \Sigma [x]_{2k}\).

Let \({\mathbb {N}}\) be the set of nonnegative integers. We refer to [5, 11, 16,17,18,19] for details about polynomial optimization. For \(x:=(x_1,\dots , x_n)^T\), \(\alpha : = (\alpha _1, \ldots , \alpha _n)\) and a degree d, denote

$$\begin{aligned} x^\alpha := x_1^{\alpha _1} \cdots x_n^{\alpha _n}, \quad |\alpha | := \alpha _1 + \cdots + \alpha _n, \quad {\mathbb {N}}_d^n := \{ \alpha \in {\mathbb {N}}^n: |\alpha | \le d\}. \end{aligned}$$

Denote by \(\mathbb {R}^{{\mathbb {N}}_d^n}\) the space of all real vectors y that are indexed by \(\alpha \in {\mathbb {N}}_d^n\). For \(y \in \mathbb {R}^{{\mathbb {N}}_d^n}\), we can write it as

$$\begin{aligned} y = (y_\alpha ), \quad \alpha \in {\mathbb {N}}_d^n. \end{aligned}$$

For \(f = \sum _{ \alpha \in {\mathbb {N}}_d^n} f_\alpha x^\alpha \in \mathbb {R}[x]_d\) and \(y \in \mathbb {R}^{{\mathbb {N}}_d^n}\), we define the operation

$$\begin{aligned} \langle f, y \rangle \, := \, \sum _{ \alpha \in {\mathbb {N}}_d^n} f_\alpha y_\alpha . \end{aligned}$$
(2.1)

For an integer \(t \le d\) and \(y \in \mathbb {R}^{{\mathbb {N}}_d^n}\), denote the tth truncation of y as

$$\begin{aligned} y|_{t} := (y_\alpha )_{ \alpha \in {\mathbb {N}}_t^n }. \end{aligned}$$
(2.2)

Let \(q \in \mathbb {R}[x]\) with \({\text {deg}}(q) \le 2k\). For each \(y \in \mathbb {R}^{{\mathbb {N}}_{2k}^n}\), \(\langle q p^2, y \rangle \) is a quadratic form in vec(p), the coefficient vector of a polynomial p with \({\text {deg}}(qp^2) \le 2k\). Let \(L_q^{(k)}(y)\) be the symmetric matrix such that

$$\begin{aligned} \langle q p^2, y \rangle = vec(p)^T\Big ( L_q^{(k)}(y) \Big ) vec(p). \end{aligned}$$
(2.3)

The matrix \(L_q^{(k)}(y)\) is called the kth localizing matrix of q generated by y. It is linear in y. For instance, when \(n=2\), \(k=2\) and \(q=5x_1+4x_1x_2-6x_2^2\),

$$\begin{aligned} L_{5x_1+4x_1x_2-6x_2^2}^{(2)}(y) = \left( \begin{array}{rcc} 5y_{10}+4y_{11}-6y_{02} &{} 5y_{20}+4y_{21}-6y_{12} &{} 5y_{11}+4y_{12}-6y_{03} \\ 5y_{20}+4y_{21}-6y_{12} &{} 5y_{30}+4y_{31}-6y_{22} &{} 5y_{21}+4y_{22}-6y_{13}\\ 5y_{11}+4y_{12}-6y_{03} &{} 5y_{21}+4y_{22}-6y_{13} &{} 5y_{12}+4y_{13}-6y_{04} \\ \end{array} \right) . \end{aligned}$$
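The displayed matrix can be reproduced generically: the entry of \(L_q^{(k)}(y)\) indexed by monomial exponents \((a,b)\) is \(\sum _\gamma q_\gamma \, y_{a+b+\gamma }\), where a and b range over exponents of degree at most \(k-\lceil \deg (q)/2\rceil \). A hedged NumPy sketch (our own helper names, not from the paper):

```python
import itertools
import math
import numpy as np

# Monomial exponents of degree <= d in n variables.
def monomials(n, d):
    return [a for a in itertools.product(range(d + 1), repeat=n) if sum(a) <= d]

# Localizing matrix L_q^{(k)}(y): q and y are dicts keyed by exponent tuples.
def localizing_matrix(q, y, n, k):
    dq = max(sum(g) for g in q)
    basis = sorted(monomials(n, k - math.ceil(dq / 2)),
                   key=lambda a: (sum(a), [-t for t in a]))  # 1, x1, x2, ...
    L = np.zeros((len(basis), len(basis)))
    for i, a in enumerate(basis):
        for j, b in enumerate(basis):
            L[i, j] = sum(c * y[tuple(ai + bi + gi for ai, bi, gi in zip(a, b, g))]
                          for g, c in q.items())
    return L

# Reproduce the 3x3 example above: q = 5*x1 + 4*x1*x2 - 6*x2^2, n = 2, k = 2.
q = {(1, 0): 5.0, (1, 1): 4.0, (0, 2): -6.0}
y = {a: np.random.rand() for a in monomials(2, 4)}
L = localizing_matrix(q, y, 2, 2)
assert L.shape == (3, 3)
assert np.allclose(L, L.T)
assert np.isclose(L[0, 0], 5*y[(1, 0)] + 4*y[(1, 1)] - 6*y[(0, 2)])
assert np.isclose(L[1, 2], 5*y[(2, 1)] + 4*y[(2, 2)] - 6*y[(1, 3)])
```

The two final assertions match the (1, 1) and (2, 3) entries of the displayed matrix.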

If \(q= (q_1, \ldots , q_r)\) is a tuple of polynomials, we then define

$$\begin{aligned} L_q^{(k)}(y) := Diag \left( L_{q_1}^{(k)}(y), \ldots , L_{q_r}^{(k)}(y) \right) . \end{aligned}$$
(2.4)

When \(q=1\) (the constant 1 polynomial), \(L_1^{(k)}(y)\) is called the kth moment matrix generated by y, and we denote

$$\begin{aligned} M_k(y) := L_1^{(k)}(y). \end{aligned}$$
(2.5)

For instance, when \(n=2\) and \(k=2\),

$$\begin{aligned} M_2(y)= \left( \begin{array}{cccccccccc} y_{00} &{} y_{10} &{} y_{01} &{} y_{20} &{} y_{11} &{} y_{02} \\ y_{10} &{} y_{20} &{} y_{11} &{} y_{30} &{} y_{21} &{} y_{12} \\ y_{01} &{} y_{11} &{} y_{02} &{} y_{21} &{} y_{12} &{} y_{03} \\ y_{20} &{} y_{30} &{} y_{21} &{} y_{40} &{} y_{31} &{} y_{22} \\ y_{11} &{} y_{21} &{} y_{12} &{} y_{31} &{} y_{22} &{} y_{13} \\ y_{02} &{} y_{12} &{} y_{03} &{} y_{22} &{} y_{13} &{} y_{04} \\ \end{array} \right) . \end{aligned}$$
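The same pattern can be generated programmatically: the entry of \(M_k(y)\) indexed by monomial exponents \((a,b)\) is simply \(y_{a+b}\). The sketch below (our own code, not from the paper) also checks the standard fact that the truncated moment vector of a Dirac measure at a point u, i.e. \(y_\alpha = u^\alpha \), produces a rank-one positive semidefinite moment matrix:

```python
import itertools
import numpy as np

# Moment matrix M_k(y): y is a dict keyed by exponent tuples, entry (a, b) = y_{a+b}.
def moment_matrix(y, n, k):
    basis = sorted((a for a in itertools.product(range(k + 1), repeat=n)
                    if sum(a) <= k),
                   key=lambda a: (sum(a), [-t for t in a]))  # 1, x1, x2, x1^2, ...
    return np.array([[y[tuple(ai + bi for ai, bi in zip(a, b))] for b in basis]
                     for a in basis])

# Moment vector of a Dirac measure at u: y_a = u^a, so M_k(y) = v v^T (rank one).
u = np.array([0.5, -2.0])
y = {a: u[0]**a[0] * u[1]**a[1]
     for a in itertools.product(range(5), repeat=2) if sum(a) <= 4}
M = moment_matrix(y, 2, 2)
assert M.shape == (6, 6)
assert np.allclose(M, M.T)
assert np.linalg.matrix_rank(M, tol=1e-8) == 1
assert np.isclose(M[1, 2], y[(1, 1)])   # row x1, column x2 -> y_{11}, as displayed
```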

For a degree d, denote the monomial vector

$$\begin{aligned}{}[x]_d := \begin{bmatrix} 1, \, x_1, \, \ldots , \, x_n , \, x_1^2 , \, x_1x_2 , \, \ldots , \, x_n^2 , \, \ldots , \, x_1^d , \, \ldots , \, x_n^d \end{bmatrix}^T. \end{aligned}$$
(2.6)

3 Reformulation and properties of the SOCTEiCP

For given tensors \({\mathcal {A}},{\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\), \(x\in \mathbb {R}^{n+1}\) is an SOC-complementarity eigenvector of \(({\mathcal {A}},{\mathcal {B}})\) associated with an SOC-complementarity eigenvalue \(\lambda \) if and only if \(\alpha x\) \((\forall \, \alpha >0)\) is an SOC-complementarity eigenvector of \(({\mathcal {A}},{\mathcal {B}})\) associated with the same \(\lambda \). Moreover, an SOC-complementarity eigenvector satisfies \(x\ne 0\), and hence \(x_{n+1}>0\) since \(x\in {\mathcal {K}}\). Therefore, every SOC-complementarity eigenvalue admits an SOC-complementarity eigenvector of the form \((z,1)\in {\mathcal {K}}\) with \(z\in \mathbb {R}^{n}\), and it suffices to consider the set of SOC-complementarity eigenpairs of \(({\mathcal {A}},{\mathcal {B}})\) given by

$$\begin{aligned} {\mathbb {S}}\,({\mathcal {A}},{\mathcal {B}}):=\{(\lambda ,x): {\mathcal {K}} \ni x\perp ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}} x^{m-1})\in {\mathcal {K}}^*, x=(z,1)\in \mathbb {R}^{n+1} \}. \end{aligned}$$

Let

$$\begin{aligned} bd\,{\mathbb {S}}:= & {} \{(\lambda ,x): x\in bd\,{\mathcal {K}}, ({\lambda ,x})\in {\mathbb {S}}({\mathcal {A}},{\mathcal {B}})\},\\ int\,{\mathbb {S}}:= & {} \{(\lambda ,x): x\in int\,{\mathcal {K}}, ({\lambda ,x})\in {\mathbb {S}}({\mathcal {A}},{\mathcal {B}})\}, \end{aligned}$$

where \( bd\,{\mathcal {K}}:=\{x=(z,1)\in \mathbb {R}^{n+1}\mid \, z^T z=1\}\) and \( int\,{\mathcal {K}}:=\{x=(z,1)\in \mathbb {R}^{n+1}\mid \, z^T z<1\}\). For convenience, we call \((\lambda ,x)\in bd\,{\mathbb {S}}\,\, (int\,{\mathbb {S}})\) a boundary (interior) SOC-complementarity eigenpair. The corresponding \(\lambda \) and x are called a boundary (interior) SOC-complementarity eigenvalue and eigenvector, respectively.

In order to compute all the SOC-complementarity eigenpairs by SDP relaxation methods, we reformulate the second-order cone tensor eigenvalue complementarity problem as polynomial optimization problems. Before proceeding, we recall some related definitions and properties, which are necessary for the reformulation.

We first recall the definition of the Jordan product.

Definition 3.1

[10] For any \(x=(x_1,x_2)\in \mathbb {R}^{n+1}\), \(y=(y_1,y_2) \in \mathbb {R}^{n+1}\) with \(x_2,y_2\in \mathbb {R}\), their Jordan product is defined as

$$\begin{aligned} x\circ y=(x^T y,\,x_2y_1+y_2x_1). \end{aligned}$$

Based on the Jordan product, we have the following result.

Lemma 3.2

[10, Proposition 2.1] For any \(x, ~y\in \mathbb {R}^{n+1}\), we have

$$\begin{aligned} x^T y=0,\, x\in {\mathcal {K}},\, y\in {\mathcal {K}} \Longleftrightarrow x\circ y=0,\, x\in {\mathcal {K}},\, y\in {\mathcal {K}}. \end{aligned}$$
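A quick numerical illustration of one direction of Lemma 3.2 (our own sketch, not from the paper, storing the scalar components \(x_2, y_2\) last as in Definition 3.1):

```python
import numpy as np

# Jordan product of Definition 3.1: for x = (x1, x2), y = (y1, y2) with
# x1, y1 in R^n and x2, y2 in R, x o y = (x^T y, x2*y1 + y2*x1).
def jordan(x, y):
    x1, x2 = x[:-1], x[-1]
    y1, y2 = y[:-1], y[-1]
    return np.concatenate(([x @ y], x2 * y1 + y2 * x1))

def in_soc(x, tol=1e-12):
    return np.linalg.norm(x[:-1]) <= x[-1] + tol

# One direction of Lemma 3.2: x, y in K with x^T y = 0 forces x o y = 0.
x = np.array([3.0, 4.0, 5.0])     # boundary of K: ||(3, 4)|| = 5
y = np.array([-3.0, -4.0, 5.0])   # also in K, with x^T y = 0
assert in_soc(x) and in_soc(y)
assert np.isclose(x @ y, 0.0)
assert np.allclose(jordan(x, y), 0.0)
```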

The next lemma, which can be checked via Sect. 1 and Proposition 1.1.3 in [7], describes the interior-solution case of the complementarity problem \(CP({\mathcal {K}},F)\), where \({\mathcal {K}}\) is a cone and \(F:{\mathcal {K}} \rightarrow \mathbb {R}^{n}\) is a general mapping. Here \(CP({\mathcal {K}},F)\) is to find a vector x such that \( {\mathcal {K}}\ni x \perp F(x)\in {\mathcal {K}}^*\).

Lemma 3.3

Let \({\mathcal {K}}\) be a cone in \(\mathbb {R}^n\). Suppose that a vector \(x \in int\,{\mathcal {K}}\) solves the \(CP({\mathcal {K}},F)\). Then \(F(x)=0\).

Based on Lemmas 3.2 and 3.3, we have the following result.

Theorem 3.4

Let \({\mathcal {A}},{\mathcal { B}} \in {{\mathtt {T}}}^m ({\mathbb {R}}^{n+1})\) and \(x=(z,1)\in \mathbb {R}^{n+1}\).

(1)

    \((\lambda ,x)\in int\,{\mathbb {S}}\) if and only if \((\lambda ,z) \in \Omega \), where

    $$\begin{aligned} \begin{aligned} \Omega = \left\{ (\lambda ,z)\big |\, {\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0,\quad z^Tz<1 \right\} .\\ \end{aligned} \end{aligned}$$
    (3.1)
(2)

    \((\lambda ,x)\in bd\,{\mathbb {S}}\) if and only if \((\lambda , z) \in \hat{\Omega }\), where

    $$\begin{aligned} \begin{aligned} \hat{\Omega }= \left\{ (\lambda ,z)\Big |\, \begin{array}{ll} &{}({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z = 0,\\ &{}({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\ge 0, \quad z^Tz=1 \end{array} \right\} .\\ \end{aligned} \end{aligned}$$
    (3.2)

Proof

(1) “\(\Rightarrow \)” \((\lambda ,x)\in int\,{\mathbb {S}}\) means that

$$\begin{aligned} x\in int\,{\mathcal {K}},\quad {\mathcal {A }}{{x}}^{m-1} -\lambda \mathcal {B}x^{m-1}\in {\mathcal {K}}^*,\quad x^T({\mathcal {A }}{{x}}^{m-1} -\lambda \mathcal {B}x^{m-1})=0. \end{aligned}$$

Let \(F(x)={\mathcal {A }}{{x}}^{m-1} -\lambda \mathcal {B}x^{m-1}\). Then \(x\in int\,{\mathcal {K}}\) solves \(CP({\mathcal {K}},F)\). From Lemma 3.3, we have that \({\mathcal {A }}{{x}}^{m-1} -\lambda \mathcal {B}x^{m-1}=0\). Together with \(x\in int\,{\mathcal {K}}\), we have \((\lambda , z)\in \Omega \).

\(\Leftarrow \)\( (\lambda , z)\in \Omega \) leads to \(x=(z,1)\in int\,{\mathcal {K}}\), \({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0\in {\mathcal {K}}\) and \(x^T({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0\). Hence \((\lambda , x)\in int\,{\mathbb {S}}\) is asserted.

(2) “\(\Rightarrow \)” From Lemma 3.2 and \((\lambda ,x)\in bd\,{\mathbb {S}}\), it follows that

$$\begin{aligned} x\circ ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0,\, x\in bd\,{\mathcal {K}},\,({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})\in {\mathcal {K}}. \end{aligned}$$

The equality \(x\circ ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=0\) implies that \(({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z = 0\). Together with \(({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})\in {\mathcal {K}}\), which gives \(({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\ge 0\), and \(x\in bd\,{\mathcal {K}}\), it follows that \( (\lambda , z)\in \hat{\Omega }\).

“\(\Leftarrow \)” From \( (\lambda , z)\in \hat{\Omega }\), we have

$$\begin{aligned} ({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z=0,\quad ({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\ge 0. \end{aligned}$$

By direct computation,

$$\begin{aligned} \Vert (\mathcal {A}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}\Vert _{2}=(\mathcal {A} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}, \end{aligned}$$

and hence

$$\begin{aligned} (\mathcal {A} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})\in bd\,{\mathcal {K}}\subset {\mathcal {K}}. \end{aligned}$$

Together with \(z^T z=1\), it holds

$$\begin{aligned} \begin{array}{rl}&{}x^T({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})=({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}^T z+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\cdot 1\\ &{}=-(({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)} z^T) z+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)} = 0.\end{array} \end{aligned}$$

Hence, we can assert that \((\lambda ,x)\in bd\,{\mathbb {S}}\). \(\square \)

Based on Theorem 3.4, we now discuss an upper bound on the number of SOC-complementarity eigenvalues. Before proceeding, we need the following definition. Let \({\mathcal {A}}\) and \({\mathcal {B}}\) be two mth order tensors in \({\mathtt {T}}^m(\mathbb {C}^{n})\). The pair \({({\mathcal {A}}, {\mathcal {B}})}\) is called a regular tensor pair if \(det(\alpha {\mathcal {A}}-\beta {\mathcal {B}}) \ne 0 \) for some \((\alpha , \beta ) \in \mathbb {C}\times \mathbb {C}\); conversely, we call \(({\mathcal {A}}, {\mathcal {B}})\) a singular tensor pair if \(det(\alpha {\mathcal {A}}-\beta {\mathcal {B}}) = 0\) for all \((\alpha , \beta )\in \mathbb {C}\times \mathbb {C}\). Here, the determinant of a tensor was first introduced by Qi [25], and was further discussed by Hu et al. [14]. The determinant of an mth order n-dimensional tensor \({\mathcal {A}}\) is the resultant of the homogeneous system \({\mathcal {A}}x^{m-1} = 0\) in x, which is the unique polynomial in the entries of \({\mathcal {A}}\) such that

$$\begin{aligned} det({\mathcal {A}})=0 \Leftrightarrow {\mathcal {A}}x^{m-1} = 0 \quad \hbox {has\,a\,nonzero\,solution}. \end{aligned}$$

For convenience of notation, tensors \(\bar{\mathcal {A}}, \bar{\mathcal {B}}\in {\mathtt {T}}^{m+1}(\mathbb {R}^{n+1})\) are defined as below, with \(x=(z,t)\in \mathbb {R}^{n+1}\):

$$\begin{aligned} \bar{\mathcal {A}}x^{m}= t{\mathcal {A}}_{[n]}x^{m-1}+{\mathcal {A}}_{(n+1)} x^{m-1} z,\quad \bar{\mathcal {B}}x^{m} =t{\mathcal {B}}_{[n]}x^{m-1}+{\mathcal {B}}_{(n+1)} x^{m-1} z. \end{aligned}$$
(3.3)

Theorem 3.5

(1)

    Let \({\mathcal {A}}, {\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\). Then there are at most \((n+1)(m^n+(m-1)^n)\) SOC-complementarity eigenvalues of \(({\mathcal {A}},{\mathcal {B}})\) if \(({\mathcal {A}},{\mathcal {B}})\) and \((\bar{\mathcal {A}},\bar{\mathcal {B}})\) are both regular tensor pairs.

(2)

    If \({\mathcal {A}},{\mathcal {B}}\) are generic tensors in \({\mathtt {T}}^m(\mathbb {R}^{n+1})\), then each SOC-complementarity eigenvalue has a unique SOC-complementarity eigenvector (up to scaling).

Proof

(1) For an SOC-complementarity eigenpair \((\lambda , x)\) with \(x\in int\,{\mathcal {K}}\), we have \(({\mathcal {A}}-\lambda {\mathcal {B}}) x^{m-1}={\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}} x^{m-1}=0\) from Theorem 3.4. This means that \((\lambda ,x)\) is a generalized tensor eigenpair of \(({\mathcal {A}},{\mathcal {B}})\). From Theorem 2.1 in [6], there are at most \((n+1)(m-1)^n\) such eigenvalues, counting multiplicity, when \(({\mathcal {A}},{\mathcal {B}})\) is a regular tensor pair.

For an SOC-complementarity eigenpair \((\lambda , x)\) with \(x=(z,t)\in bd\,{\mathcal {K}}\), we have

$$\begin{aligned} t({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z = 0,\quad z^T z=t^2. \end{aligned}$$

Let \(\bar{{\mathcal {A}}}, \bar{{\mathcal {B}}}\in {\mathtt {T}}^{m+1}(\mathbb {R}^{n+1})\) be as in (3.3). Then the above equality can be rewritten as

$$\begin{aligned} \bar{{\mathcal {A}}} x^m-\lambda \bar{{\mathcal {B}}} x^m=0. \end{aligned}$$

Thus, \((\lambda , x)\) is a generalized tensor eigenpair of \((\bar{\mathcal {A}},\bar{\mathcal {B}})\), and there are at most \((n+1)m^n\) such eigenvalues, counting multiplicity, by Theorem 2.1 in [6] when \((\bar{\mathcal {A}},\bar{\mathcal {B}})\) is a regular tensor pair.

Combining these two cases, there are at most \((n+1)(m^n+(m-1)^n)\) SOC-complementarity eigenvalues of \(({\mathcal {A}},{\mathcal {B}})\), which establishes (1).

(2) When \({\mathcal {A}},{\mathcal {B}}\) are generic in \({\mathtt {T}}^m(\mathbb {R}^{n+1})\), \(\bar{\mathcal {A}},\bar{\mathcal {B}}\) are also generic in \({\mathtt {T}}^{m+1}(\mathbb {R}^{n+1})\). From Theorem 3.1 in [8], for each \(\lambda \), the solution x of \({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0\) or \(\bar{\mathcal {A}}x^m-\lambda \bar{\mathcal {B}}x^m=0\) is unique up to scaling. Therefore, each SOC-complementarity eigenvalue has a unique SOC-complementarity eigenvector. \(\square \)

The existence of solutions to the SOCTEiCP follows from Theorem 2.3 in [13], which is restated as follows.

Lemma 3.6

There exists at least one SOC-complementarity eigenpair when \({\mathcal {B}}\) is strictly SOC-positive, where a tensor \({\mathcal {B}}\) is called strictly SOC-positive if \({\mathcal {B}}x^m>0\) for all nonzero vectors x in the second-order cone.

SOC-positivity can be tested by solving a polynomial optimization problem, in a way similar to [23]; we omit the details here.

Motivated by Theorem 3.4, we consider the following polynomial optimization problems to compute the SOC-complementarity eigenpairs:

$$\begin{aligned} \begin{array}{rl} \min &{}\lambda \\ \mathrm {s.t.} &{}{\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1}= 0,\\ &{}z^Tz\le 1,\, x=(z,1)\in \mathbb {R}^{n+1}, \end{array} \end{aligned}$$
(3.4)

and

$$\begin{aligned} \begin{array}{rl} \min &{}\lambda \\ \mathrm {s.t.} &{}({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z = 0,\\ &{}({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\ge 0,\\ &{}z^Tz=1,\, x=(z,1)\in \mathbb {R}^{n+1}. \end{array} \end{aligned}$$
(3.5)

Clearly, \(\Omega \) is included in the feasible set of (3.4), and \(\hat{\Omega }\) is the feasible set of (3.5). Moreover, it is easy to check whether a feasible solution of (3.4) lies in \(\Omega \). Motivated by these facts, we consider how to solve (3.4) and (3.5) in the following.
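For the matrix case \(m=2\) with \({\mathcal {B}}\) invertible, the interior part of this reformulation can be checked naively by an ordinary generalized eigendecomposition. The sketch below is our own sanity check of Theorem 3.4(1), not the SDP approach of this paper: it computes the eigenpairs of \(B^{-1}A\), normalizes each real eigenvector so its last entry is one, and keeps those with \(z^Tz<1\).

```python
import numpy as np

# Naive check (m = 2, B invertible): interior SOC-complementarity eigenpairs
# satisfy A x = lambda * B x with x = (z, 1) and z^T z < 1.
def interior_pairs(A, B, tol=1e-9):
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    pairs = []
    for lam, v in zip(vals, vecs.T):
        if abs(lam.imag) > tol or abs(v[-1]) < tol:
            continue                      # keep real pairs normalizable to (z, 1)
        x = (v / v[-1]).real
        if x[:-1] @ x[:-1] < 1:           # z^T z < 1, i.e. x in int K
            pairs.append((lam.real, x))
    return pairs

A = np.diag([2.0, 3.0])
B = np.eye(2)
pairs = interior_pairs(A, B)
# Only the eigenpair (3, (0, 1)) lies in int K; the eigenvector for 2 has last entry 0.
assert len(pairs) == 1
assert np.isclose(pairs[0][0], 3.0)
```

This brute-force check works only for \(m=2\); for \(m>2\) the systems are genuinely polynomial, which is why the paper turns to SDP relaxations.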

4 Computation of boundary SOC-complementarity eigenpairs

In this section, we study how to compute all boundary SOC-complementarity eigenpairs. For general tensors \({\mathcal {A}}\) and \({\mathcal {B}}\), there are finitely many SOC-complementarity eigenvalues by Theorem 3.5. This motivates us to assume that \(\lambda ^2\le M\) for some \(M>0\). When there are finitely many boundary SOC-complementarity eigenvalues, we order them increasingly as

$$\begin{aligned} \lambda _1< \lambda _2<\cdots < \lambda _N. \end{aligned}$$

Let

$$\begin{aligned} {\mathbb {S}}_i:=\{(\lambda , z)\in \hat{\Omega }\mid \lambda =\lambda _i\}, \quad i=1,2,\ldots , N. \end{aligned}$$

Then \(bd\,{\mathbb {S}}=\{(\lambda ,x): x=(z,1), (\lambda ,z)\in {\mathbb {S}}_1\cup {\mathbb {S}}_2\cup \cdots \cup {\mathbb {S}}_N\}\). Therefore, it suffices to compute each \({\mathbb {S}}_i\) to obtain \(bd\,{\mathbb {S}}\).

In the following, we compute all boundary SOC-complementarity eigenpairs sequentially, assuming there are finitely many.

4.1 Computation of \({\mathbb {S}}_1\)

We first consider the following polynomial optimization problem:

$$\begin{aligned} \begin{array}{rl} \min &{}\lambda \\ \mathrm {s.t.} &{}({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{[n]}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z = 0,\\ &{}({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\ge 0,\\ &{}z^Tz=1,\,\lambda ^2\le M, \, x=(z,1)\in \mathbb {R}^{n+1}. \end{array} \end{aligned}$$

It is clear that \((\lambda ,z)\) is feasible if and only if \((\lambda ,z)\in \hat{\Omega }\), under the assumption \(\lambda ^2\le M\). For convenience of notation, let \(g=(g_1,g_2)\) and \(h=(h_1,\ldots , h_{n+1})\) with \(g_1=({\mathcal {A}} x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}\), \(g_2=M-\lambda ^2\), \(h_i=({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{i}+({\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1})_{(n+1)}z_{i}\) for \(i\in [n]\), and \(h_{n+1}=z^T z-1\). Then the above problem can be rewritten as follows:

$$\begin{aligned} \begin{array}{rl} \lambda _1:=\min \limits &{}\lambda \\ \mathrm {s.t.} &{}h(\lambda ,z)=0,\\ &{}g(\lambda ,z)\ge 0, \end{array} \end{aligned}$$
(4.1)

which is a polynomial optimization problem of degree \(m+1\) in \((\lambda ,z)\). To solve (4.1), we consider Lasserre-type semidefinite relaxations. For convenience of notation, let \(\bar{e}\) be the coefficient vector of the objective \(\lambda \) in (4.1). Then Lasserre's hierarchy of semidefinite relaxations for solving (4.1) is

$$\begin{aligned} \begin{array}{rl} \nu _{1,k}:=\min &{}\langle \bar{e}, y\rangle \\ \mathrm {s.t.} &{}\langle 1, y\rangle =1, \quad L_{h}^k(y)=0,\\ &{}L_{g}^k(y)\succeq 0, \quad M_{k}(y)\succeq 0, \\ &{}y \in \mathbb {R}^{\mathbb {N}^{n+1}_{2k}}, \\ \end{array} \end{aligned}$$
(4.2)

for orders \(k=k_0, k_0+1,\ldots \), where \(k_0=\lceil \frac{m+2}{2}\rceil \). In the above, \(\langle 1, y\rangle =1\) means that the first entry of y is one, and the matrices \(L_{h}^k(y), L_{g}^k(y), M_{k}(y)\) are defined as in (2.3), (2.4) and (2.5). Its dual problem is

$$\begin{aligned} \begin{array}{rl} \bar{\nu }_{1,k}:=\max &{}\gamma \\ \mathrm {s.t.} &{}\bar{e}-\gamma \in I_{2k}(h)+ Q_{k}(g).\\ \end{array} \end{aligned}$$
(4.3)

It can be shown that for all k

$$\begin{aligned} \bar{\nu }_{1,k} \le \nu _{1,k} \le \lambda _1 \end{aligned}$$

and the sequences \(\{ \nu _{1,k} \}\) and \(\{ \bar{\nu }_{1,k} \}\) are monotonically increasing, that is,

$$\begin{aligned} \nu _{1,k}\le \ \nu _{1,k+1}\le \cdots \le \lambda _1,\quad \bar{\nu }_{1,k}\le \bar{\nu }_{1,k+1}\le \cdots \le \lambda _1. \end{aligned}$$

Suppose that \(y^{1,k}\) is an optimizer of (4.2). If the truncation \(\bar{y} := y^{1,k}|_{2t} \) satisfies the rank condition

$$\begin{aligned} rank \ M_{t-k_0}(\bar{y})=rank \ M_t(\bar{y}) \end{aligned}$$
(4.4)

for some integer \(t \in [k_0,k]\), then \(\lambda _1=\nu _{1,k}\) and we can extract \(r:=rank\ M_t(\bar{y})\) distinct real global optimizers of (4.1) according to [21]. All optimizers of (4.1) extracted in this way constitute \({\mathbb {S}}_1\).
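The rank condition (4.4) itself is a simple numerical check once a truncated moment vector is available. A hedged sketch (our own helper names, not from the paper; the moment vector of a single Dirac measure is used as a toy input, for which all moment matrices have rank one and the condition trivially holds):

```python
import itertools
import numpy as np

# Moment matrix M_k(y) as in (2.5): y is a dict keyed by exponent tuples.
def moment_matrix(y, n, k):
    basis = sorted((a for a in itertools.product(range(k + 1), repeat=n)
                    if sum(a) <= k),
                   key=lambda a: (sum(a), [-t for t in a]))
    return np.array([[y[tuple(ai + bi for ai, bi in zip(a, b))] for b in basis]
                     for a in basis])

# Rank condition (4.4): rank M_{t-k0}(y) == rank M_t(y).
def rank_condition(y, n, t, k0, tol=1e-8):
    r_small = np.linalg.matrix_rank(moment_matrix(y, n, t - k0), tol=tol)
    r_big = np.linalg.matrix_rank(moment_matrix(y, n, t), tol=tol)
    return r_small == r_big

# Toy input: moments of a Dirac measure at u, y_a = u^a.
u = np.array([0.2, -0.7])
y = {a: u[0]**a[0] * u[1]**a[1]
     for a in itertools.product(range(9), repeat=2) if sum(a) <= 8}
assert rank_condition(y, n=2, t=4, k0=2)
```

In practice y comes from the SDP optimizer \(y^{1,k}\), and when the condition holds the r atoms of the representing measure give the r optimizers of (4.1).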

Theorem 4.1

Let \({\mathcal {A}},{\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\), and \(\hat{\Omega }\) be the feasible set of problem (4.1). Then we have the following properties:

(1)

    \({\hat{\Omega }}=\emptyset \) if and only if the semidefinite relaxation (4.2) is infeasible for some k.

(2)

    If \({\hat{\Omega }}\ne \emptyset \), then the feasible set of (4.2) is nonempty and compact. Furthermore,

    $$\begin{aligned} \lim \limits _{k\rightarrow \infty }\nu _{1,k}=\lim \limits _{k\rightarrow \infty } \bar{\nu }_{1,k}=\lambda _1. \end{aligned}$$
(3)

    If there are finitely many boundary SOC-complementarity eigenvalues, then for all big enough k,

    $$\begin{aligned} \nu _{1,k}=\bar{\nu }_{1,k}=\lambda _1. \end{aligned}$$

    Furthermore, if there are finitely many \(x=(z,1)\) such that \((\lambda ,z)\in {\mathbb {S}}_{1}\), then for all big enough k and each minimizer \(y^{1,k}\) of (4.2), the rank condition (4.4) is satisfied for some \(t\le k\).

Proof

(1) “\(\Rightarrow \)” Suppose that \({\hat{\Omega }}=\emptyset \). By the Positivstellensatz [1], \(-1\in I(h)+Q(g)\). Hence \(-1\in I_{2k}(h)+Q_{k}(g)\) for big enough k. This means that problem (4.3) is unbounded above, and hence (4.2) is infeasible for such k by weak duality.

“\(\Leftarrow \)” Suppose that problem (4.2) is infeasible for some k. Then problem (4.1) is infeasible, since \([(\lambda , z)]_{2k}\) would be a feasible solution of (4.2) for any feasible solution \((\lambda , z)\) of (4.1). Hence \({\hat{\Omega }}=\emptyset \).

(2) \({\hat{\Omega }}\ne \emptyset \) implies that the feasible set of (4.2) is nonempty, since (4.2) is a relaxation of (4.1). By a discussion similar to that of Theorem 3.2 in [23], the feasible set of (4.2) is always compact. Furthermore, the optimal value \(\nu _{1,k}\) is attained at some optimizer \(y^{1,k}\) for all k. Since \(M-\lambda ^2\) is a polynomial in the tuple g and \(z^T z-1\) is in h, Assumption 4.1 in [15] is satisfied by taking \(u(\lambda , z)=M+1-z^T z-\lambda ^2\) there. Therefore, the asymptotic convergence follows from Theorem 4.2 in [15].

(3) Suppose that all boundary SOC-complementarity eigenvalues are listed as: \(\lambda _1<\lambda _2<\cdots <\lambda _N\). Let \(s_i\) (\(i\in [N]\)) be univariate real polynomials in \(\lambda \) such that

$$\begin{aligned}f_i(\lambda ,x)=(\lambda _i-\lambda _1)s_i(\lambda ) \quad \mathrm{and}\quad s_i(\lambda _j)=\left\{ \begin{array}{ll}1&{}\quad i= j,\\ 0&{}\quad i\ne j.\end{array}\right. \end{aligned}$$

Let \(f:=f_1+f_2+\cdots +f_N\in \sum [x]_{2k'}\) for some positive integer \(k'\) and \(\bar{f}:= \bar{e}-\lambda _1-f\). Clearly, \(\bar{f}= 0\) for all \((\lambda ,x)\in bd\,{\mathbb {S}}\). By the Real Nullstellensatz [17, Theorem 2.12], there exist \(q\in Q(g)\) and a positive integer l such that \({\bar{f}}^{2l}+q\in I(h)\). Although the remainder of the proof is similar to that of Theorem 3.1 in [24], we present it here for completeness. For any \(\delta >0\) and \(c>0\), let

$$\begin{aligned}\theta _\delta :=-c\delta ^{1-2l}(\bar{f}^{2l}+q), \quad \phi _\delta =\delta \left( 1+{\bar{f}\over \delta }+c\left( \frac{\bar{f}}{\delta }\right) ^{2l}\right) +c\delta ^{1-2l}q. \end{aligned}$$

From Lemma 2.1 in [22], there exists \(k_1\) such that

$$\begin{aligned} \theta _\delta \in I_{2k_1}(h) \quad \hbox {and} \quad \phi _{\delta }\in Q_{k_1}(g) \end{aligned}$$

for all \(\delta >0\) and \(c\ge \frac{1}{2l}(1-\frac{1}{2l})^{2l-1}\). This means that \(\bar{e}-(\lambda _1-\delta )=\theta _\delta +\phi _\delta +f\in I_{2k_1}(h)+Q_{k_1}(g)\).

Thus, for every \(\delta >0\), \(\lambda _1-\delta \) is feasible in (4.3) at order \(k_1\); by the arbitrariness of \(\delta \), \(\bar{\nu }_{1,k_1}\ge \lambda _1\).

On the other hand, \(\bar{\nu }_{1,k}\le \lambda _1\) for all k. Hence \(\bar{\nu }_{1,k_1}=\lambda _1\) and thus \(\nu _{1,k_1}=\lambda _1\). Since \(\{\nu _{1,k}\}_k\) is monotonically increasing and bounded above by \(\lambda _1\), we get \(\nu _{1,k}=\lambda _1\) for all \(k\ge k_1\). This means that for all big enough k,

$$\begin{aligned} \nu _{1,k}=\bar{\nu }_{1,k}=\lambda _1. \end{aligned}$$

Furthermore, if there are finitely many minimizers of (4.1), the rank condition (4.4) must be satisfied by Theorem 2.6 in [21], which completes the proof. \(\square \)

4.2 Computation of \({\mathbb {S}}_{i+1}\)

We now discuss how to determine whether \({\mathbb {S}}_{i+1}\) is nonempty and, if so, how to compute it \((i = 1,2,\ldots )\). Suppose that \({\mathbb {S}}_{i}\) is known and that \(\epsilon >0\) satisfies \(0< \epsilon <\lambda _{i+1}-\lambda _i \). Consider the following optimization problem:

$$\begin{aligned} \begin{array}{rl} \lambda _{i+1}:=\min &{} \lambda \\ \mathrm {s.t.} &{} h (\lambda , z)= 0, \\ &{} g (\lambda , z)\ge 0, \\ &{}\lambda -\lambda _{i}-\epsilon \ge 0. \\ \end{array} \end{aligned}$$
(4.5)

Similarly, Lasserre-type semidefinite relaxations can be applied to solve (4.5). For the order \(k = k_0,k_0+1,\ldots \), the kth-order Lasserre relaxation is

$$\begin{aligned} \begin{array}{rl} \nu _{{i+1}, k}:=\min &{}\langle \bar{e}, y\rangle \\ \mathrm {s.t.} &{}\langle 1, y\rangle =1, \quad L_{h}^{(k)}(y)=0,\\ &{}L_{g}^{(k)}(y)\succeq 0, \quad M_{k}(y)\succeq 0, \\ &{}L_{\lambda -\lambda _{i}-\epsilon }^{(k)}(y)\succeq 0,\quad y \in \mathbb {R}^{\mathbb {N}^{n+1}_{2k}}. \\ \end{array} \end{aligned}$$
(4.6)

The dual problem of (4.6) is

$$\begin{aligned} \begin{array}{rl} \bar{\nu }_{i+1, k}:=\max &{}\gamma \\ \mathrm {s.t.} &{}\bar{e}-\gamma \in I_{2k}(h)+ Q_k(g, \lambda -\lambda _{i}-\epsilon ).\\ \end{array} \end{aligned}$$

Similarly, we have

$$\begin{aligned} \bar{\nu }_{i+1,k} \le \nu _{i+1,k} \le \lambda _{i+1}. \end{aligned}$$

Moreover, the sequences \(\{ \nu _{i+1,k} \}\) and \(\{ \bar{\nu }_{i+1,k} \}\) are monotonically increasing in k.

If problem (4.6) is infeasible for some k, then \({\mathbb {S}}_{i+1}\) is empty, which means that \( bd\,{\mathbb {S}} =\{(\lambda , x)\mid x=(z,1), (\lambda , z)\in {\mathbb {S}}_{1}\cup {\mathbb {S}}_{2} \cup \cdots \cup {\mathbb {S}}_{i}\}\). Otherwise, suppose that \(y^ {i+1, k}\) is an optimizer of (4.6). If for some integer \(t \in [k_0,k]\) the truncation \(\bar{y} := y^{i+1, k}|_{2t} \) satisfies the rank condition (4.4), then \(\nu _{i+1, k}= \lambda _{i+1}\), and all global optimizers of (4.5) can be extracted from the optimizers \(y^{i+1, k}\) of (4.6).
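For illustration, the objects appearing in such relaxations can be made concrete: the variable y is a vector of pseudo-moments indexed by monomials in \((\lambda , z)\), and \(M_k(y)\) is the corresponding moment matrix. The following Python sketch (our illustration, not the implementation used in this paper, which relies on GloptiPoly 3) builds \(M_k(y)\) from the moments of a point mass; such a moment matrix is positive semidefinite with rank one, which is the situation in which a flat-truncation-type rank test such as (4.4) succeeds and optimizers can be extracted.

```python
import itertools
import numpy as np

def monomials(nvars, k):
    """Exponent tuples of total degree <= k, in graded order."""
    return [a for d in range(k + 1)
            for a in itertools.product(range(d + 1), repeat=nvars)
            if sum(a) == d]

def moment_matrix(nvars, k, moment):
    """M_k(y): the entry for basis pair (alpha, beta) is y_{alpha+beta}."""
    basis = monomials(nvars, k)
    return np.array([[moment(tuple(a + b for a, b in zip(al, be)))
                      for be in basis] for al in basis])

# Pseudo-moments of a Dirac measure at a point (lambda0, z0), i.e. n = 1:
point = np.array([0.5, -0.3])
moment = lambda alpha: float(np.prod(point ** np.array(alpha)))

M2 = moment_matrix(2, 2, moment)     # 6 x 6: monomials of degree <= 2 in 2 variables
eigenvalues = np.linalg.eigvalsh(M2)
print(M2.shape, np.linalg.matrix_rank(M2, tol=1e-8), eigenvalues.min() >= -1e-9)
# (6, 6) 1 True
```

A moment matrix of rank one corresponds to a single atom, so the minimizer can be read off directly; higher ranks correspond to several atoms.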

In practice, the existence of \(\lambda _{i+1}\) is usually unknown in advance. Even if it exists, its value is typically not available, so we need to choose \(\epsilon \) satisfying \(0<\epsilon <\lambda _{i+1}-\lambda _i\). To this end, we consider the polynomial optimization problem:

$$\begin{aligned} \begin{array}{rl} \tau := \max &{} \lambda \\ \mathrm {s.t.} &{} h (\lambda , z)= 0, \\ &{} g (\lambda , z)\ge 0, \\ &{}\lambda \le \lambda _i+\epsilon . \\ \end{array} \end{aligned}$$
(4.7)

It can be solved by Lasserre relaxations like (4.2) and (4.3). For numerical reasons, the value \(\epsilon > 0\) cannot be too small. Let \(\tau \) be the optimal value of (4.7), initially with \(\epsilon =0.05\). If \(\tau >\lambda _{i}\), we halve \(\epsilon \) (\(\epsilon :=\epsilon /2\)) and solve (4.7) again. Repeating this process eventually yields \(\tau =\lambda _{i}\), i.e., no eigenvalue lies in \((\lambda _i, \lambda _i+\epsilon ]\).
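The \(\epsilon \)-halving rule can be sketched as a simple loop. In the sketch below, `tau_of` is a hypothetical stand-in for solving (4.7): it returns the largest eigenvalue not exceeding the bound, using a known eigenvalue list that exists only for this illustration.

```python
# Sketch of the epsilon-halving rule of Sect. 4.2; `tau_of` is a hypothetical
# stand-in for the semidefinite computation of (4.7).
def tau_of(eigenvalues, bound):
    """Largest eigenvalue not exceeding `bound` (the role of tau in (4.7))."""
    return max(l for l in eigenvalues if l <= bound)

def next_epsilon(eigenvalues, lam_i, eps0=0.05):
    eps = eps0
    while tau_of(eigenvalues, lam_i + eps) > lam_i:
        eps /= 2.0               # decrease eps and "solve (4.7) again"
    return eps                   # now no eigenvalue lies in (lam_i, lam_i + eps]

eigenvalues = [0.5, 0.53, 2.0]   # assumed eigenvalue list, for illustration only
eps = next_epsilon(eigenvalues, 0.5)
print(eps)                       # 0.025, which satisfies 0 < eps < 0.53 - 0.5
```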

4.3 The choice of M

Suppose that there exists \(M>0\) such that all feasible solutions \((\lambda , z)\) of (3.5) satisfy \(\lambda ^2 \le M\). We now discuss how to obtain such an M. To this end, we consider the following problem:

$$\begin{aligned} \begin{array}{rl} \min &{} F(\lambda , z)\\ \mathrm {s.t.} &{} h(\lambda , z)= 0, \\ &{} \bar{g}(\lambda , z)\ge 0, \end{array} \end{aligned}$$
(4.8)

where \(F(\lambda , z)\) is a general SOS polynomial on \((\lambda , z)\) and \(\bar{g}=( g_1 , -g_2)\). Consider the following kth-order Lasserre-type semidefinite relaxation:

$$\begin{aligned} \begin{array}{rl} \min &{}\langle F, y\rangle \\ \mathrm {s.t.} &{}\langle 1, y\rangle =1, \quad L_{h}^{(k)}(y)=0,\\ &{}L_{\bar{g}}^{(k)}(y)\succeq 0, \quad M_{k}(y)\succeq 0, \quad y \in \mathbb {R}^{\mathbb {N}^{n+1}_{2k}}. \\ \end{array} \end{aligned}$$
(4.9)

Clearly, M is big enough if and only if (4.8) is infeasible. Moreover, (4.8) is infeasible whenever (4.9) is infeasible for some order k. We therefore solve (4.9) with \(M:=M_0\) and \(k:=k_0\). If an optimal solution of (4.8) is obtained from (4.9), we double M (\(M:=2M\)) and solve (4.9) again; otherwise, (4.9) becomes infeasible for some k, and the current M is the required bound.
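The doubling strategy follows the same pattern as the \(\epsilon \)-halving rule. In this sketch the semidefinite feasibility test for (4.8)/(4.9) is replaced by a direct check against an assumed eigenvalue list; only the control logic is illustrated.

```python
# Sketch of the doubling strategy for M from Sect. 4.3; the feasibility test
# for (4.8)/(4.9) is replaced by a direct stand-in check.
def m_big_enough(eigenvalues, M):
    """Stand-in for '(4.8) is infeasible': no eigenvalue satisfies lambda^2 >= M."""
    return all(l * l < M for l in eigenvalues)

def choose_M(eigenvalues, M0=1.0):
    M = M0
    while not m_big_enough(eigenvalues, M):  # (4.8) feasible: M is too small
        M *= 2.0                             # double M and test again
    return M

eigenvalues = [0.5, -3.0, 2.0]               # assumed list, for illustration only
M = choose_M(eigenvalues)
print(M)                                     # 16.0, the first doubling of 1.0 above 9
```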

4.4 Algorithm for computing boundary SOC-complementarity eigenpairs

Algorithm 4.2

For given tensors \({\mathcal {A}},{\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\), compute all the boundary SOC-complementarity eigenpairs as follows:

  1. Input:

Choose a general SOS polynomial \( F \) and \(M>0\). Let \(k:=k_0=\lceil \frac{m+2}{2}\rceil \), \(\epsilon _0=0.05\), \(i=1\), and \({\mathbb {S}}=\emptyset \).

  2. Step 1:

Solve the kth-order semidefinite relaxation (4.9). If it is infeasible, let \(k:=k_0\) and go to Step 2. Otherwise, if an optimizer \(y^{1,k}\) satisfying the rank condition (4.4) is obtained, let \(M:=2M\), \(k:=k_0\) and go to Step 1; else let \(k:=k+1\) and go to Step 1.

  3. Step 2:

Solve the kth-order semidefinite relaxation (4.2). If it is infeasible, then there are no boundary SOC-complementarity eigenpairs; stop. Otherwise, the optimal value \(\nu _{1,k}\) and an optimizer \(y^{1,k}\) can be computed.

  4. Step 3:

    If the rank condition (4.4) is satisfied, then \(\lambda _1=\nu _{1,k}\) and all optimal solutions \((\lambda , z)\) can be extracted, which constitute \({\mathbb {S}}_1\). Let \({\mathbb {S}}:={\mathbb {S}}\cup {\mathbb {S}}_1\), \(k:=k_0\), \(\epsilon :=\epsilon _0\) and go to Step 4. Otherwise, let \(k:=k+1\) and go to Step 2.

  5. Step 4:

Solve problem (4.7). If \(\tau >\lambda _{i}\), let \(\epsilon :=\epsilon /{2}\) and go to Step 4. If \(\tau =\lambda _{i}\), go to Step 5.

  6. Step 5:

Solve the relaxation (4.6). If it is infeasible, then \({\mathbb {S}}_{i+1}=\emptyset \) and stop. Otherwise, compute the optimal value \(\nu _{i+1,k}\) with an optimizer \(y^{i+1,k}\). If (4.4) is satisfied with \(\bar{y}=y^{i+1,k}|_{2t}\) for some \(t\in [k_0,k]\), then \(\nu _{i+1,k}=\lambda _{i+1}\) and the optimal solutions \((\lambda , z)\) of (4.5) can be extracted, which constitute \({\mathbb {S}}_{i+1}\). Let \(k:=k_0\), \(\epsilon :=\epsilon _0\), \({\mathbb {S}}:={\mathbb {S}}\cup {\mathbb {S}}_{i+1}\), \(i:=i+1\) and go to Step 4. Otherwise, let \(k:= k+1\) and go to Step 5.

  7. Output:

    \(bd\,{\mathbb {S}}:=\{(\lambda , x)\mid x=(z,1), (\lambda , z)\in {\mathbb {S}}\}\).

Theorem 4.3

Suppose that there are finitely many boundary SOC-complementarity eigenvalues. Then Algorithm 4.2 terminates after solving finitely many semidefinite relaxations.

The proof is similar to that of Theorem 4.1 and is omitted here.

5 Computation of interior SOC-complementarity eigenpairs

In this section, we consider how to compute all interior SOC-complementarity eigenpairs. Based on Theorem 3.4, the interior SOC-complementarity eigenpair set can be rewritten as

$$\begin{aligned} int\,{\mathbb {S}}=\{(\lambda ,x)\big | {\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}=0,\, z^T z< 1, \, x=(z,1)\}. \end{aligned}$$

From Theorem 3.5, there exists \(M> 0\) such that all interior SOC-complementarity eigenvalues are contained in a ball. Similarly, since \(F(\lambda ,z)\) is generically chosen, it achieves different values at different \((\lambda ,x)\in int\,{\mathbb {S}}\). If the interior SOC-complementarity eigenvalues are finitely many, we order them monotonically as

$$\begin{aligned} \bar{\lambda }_1< \bar{\lambda }_2<\cdots <\bar{\lambda }_{\bar{N}}. \end{aligned}$$

For \(i\in [\bar{N}]\), let \(\bar{{\mathbb {S}}}_i:= \{(\bar{\lambda }_i,z)\big |\,{\mathcal {A}}x^{m-1}-\bar{\lambda }_i {\mathcal {B}}x^{m-1}=0,\, z^T z\le 1, \, x=(z,1) \}\) and \(int\,\bar{{\mathbb {S}}}:=\bar{{\mathbb {S}}}_{1}\cup \bar{{\mathbb {S}}}_{2}\cup \cdots \cup \bar{{\mathbb {S}}}_{\bar{N}}\), then we have

$$\begin{aligned} int\,{\mathbb {S}}=\{(\lambda ,x)\mid x=(z,1), (\lambda , z)\in int\,\bar{{\mathbb {S}}}\}{\setminus } bd\,{\mathbb {S}}. \end{aligned}$$

5.1 Computation of \(\bar{{\mathbb {S}}}_{1}\)

In this subsection, we show how to compute \(\bar{{\mathbb {S}}}_1\). Clearly, \(\bar{\lambda }_1\) is the optimal value of the following polynomial optimization problem:

$$\begin{aligned} \begin{array}{rl} \bar{\lambda }_1:=\min &{} \lambda \\ \mathrm {s.t.}&{} \bar{h}(\lambda ,z)= 0, \\ &{}\bar{g}(\lambda ,z)\ge 0, \end{array} \end{aligned}$$
(5.1)

where \(\bar{h}(\lambda ,z)={\mathcal {A}}x^{m-1}-\lambda {\mathcal {B}}x^{m-1}\) and \(\bar{g}(\lambda ,z)=(1-z^T z, M-\lambda ^2)\).

The kth-order Lasserre’s hierarchy of semidefinite relaxation for (5.1) is

$$\begin{aligned} \begin{array}{rl} \rho _{1,k} := \min &{} \langle \bar{e},y\rangle \\ \mathrm {s.t.} &{} \langle 1,y\rangle = 1, \quad L^{(k)}_{\bar{h}}(y)=0, \\ &{}L^{(k)}_{\bar{g}}(y)\succeq 0, \quad M_k(y)\succeq 0,\quad y\in \mathbb {R}^{ {\mathbb {N}}_{2k}^{n+1} }, \end{array} \end{aligned}$$
(5.2)

where \(k\ge k_0=\lceil \frac{m+1}{2}\rceil \). The dual optimization problem of (5.2) is

$$\begin{aligned} \begin{array}{rl} \bar{\rho }_{1,k} :=\max &{} \gamma \\ \mathrm {s.t.}&{}\bar{e}-\gamma \in I_{2k}(\bar{h})+Q_{k}(\bar{g}). \end{array} \end{aligned}$$
(5.3)

Similarly, we have

$$\begin{aligned} \bar{\rho }_{1,k} \le \rho _{1,k} \le \bar{\lambda }_1, \quad k=k_0,k_0+1,\ldots , \end{aligned}$$

and the sequences \(\{\rho _{1,k}\}\), \(\{\bar{\rho }_{1,k}\}\) are monotonically increasing. If there exists \(t\in [k_0,k]\) such that the rank condition (4.4) is satisfied by the truncation of an optimizer \(\bar{y}^{1,k}\) of (5.2), then \(\rho _{1,k}=\bar{\lambda }_1\). Furthermore, the optimal solutions can be extracted, and they lie in \( \bar{{\mathbb {S}}}_1\).

5.2 Computation of \(\bar{{\mathbb {S}}}_{i+1}\)

Suppose that \(\bar{{\mathbb {S}}}_{i}\) is known. We want to determine whether the next interior SOC-complementarity eigenvalue \(\bar{\lambda }_{i+1}\) exists and, if so, to compute it. Let \(\epsilon >0\) be a small number such that \(\bar{\lambda }_{i+1}>\bar{\lambda }_i+\epsilon \). Consider the following optimization problem:

$$\begin{aligned} \begin{array}{rl} \bar{\lambda }_{i+1}:=\min &{} \lambda \\ \mathrm {s.t.} &{} \bar{h}(\lambda ,z)= 0,\quad \bar{g}(\lambda ,z)\ge 0, \\ &{} \lambda \ge \bar{\lambda }_i+\epsilon . \end{array} \end{aligned}$$
(5.4)

The kth-order Lasserre’s hierarchy of semidefinite relaxation for (5.4) is

$$\begin{aligned} \begin{array}{rl} \rho _{i+1,k} := \min &{} \langle \bar{e} ,y\rangle \\ \mathrm {s.t.} &{} \langle 1,y\rangle = 1, \quad L^{(k)}_{\bar{h}}(y)=0, \\ &{}L^{(k)}_{\bar{g}}(y)\succeq 0, \quad M_k(y)\succeq 0, \\ &{} L^{(k)}_{\lambda -\bar{\lambda }_i-\epsilon }(y) \succeq 0,\quad y\in \mathbb {R}^{ {\mathbb {N}}_{2k}^{n+1} }, \\ \end{array} \end{aligned}$$
(5.5)

where \(k=k_0, k_0+1,\ldots \). The dual optimization problem of (5.5) is

$$\begin{aligned} \begin{array}{rl} \bar{\rho }_{i+1,k} :=\max &{} \gamma \\ \mathrm {s.t.}&{} \bar{e}-\gamma \in I_{2k}(\bar{h})+Q_k{(\bar{g},{\lambda -\bar{\lambda }_i}-\epsilon }). \end{array} \end{aligned}$$
(5.6)

By solving (5.5) sequentially, all global optimizers \((\lambda ,z)\) of (5.4) can be computed. In a similar way to the computation of \(bd\,{\mathbb {S}}\), we can obtain \(int\,{\mathbb {S}}\). The choices of M and \(\epsilon \) are made as in Sect. 4.3, with \(\bar{h}(\lambda , z)\) and \(\bar{g}(\lambda ,z)\) in place of \(h(\lambda ,z)\) and \(g(\lambda ,z)\), so we omit the details here.

5.3 Algorithm for computing interior SOC-complementarity eigenpairs

Algorithm 5.1

For given tensors \({\mathcal {A}}, {\mathcal {B}}\in {\mathtt {T}}^m(\mathbb {R}^{n+1})\), compute all interior SOC-complementarity eigenpairs as follows:

  1. Input:

Choose a general polynomial \( F \) and a positive number M. Let \(k:=k_0=\lceil \frac{m+1}{2}\rceil \), \(\epsilon _0=0.05\), \(i=1\), and \(int\, \bar{{\mathbb {S}}}=\emptyset \).

  2. Step 1:

Solve (4.9) with \(\bar{h},\bar{g}\) in place of \(h, g\). If it is infeasible, let \(k:=k_0\) and go to Step 2. Otherwise, if an optimizer \(y^{1,k}\) satisfying the rank condition (4.4) is obtained, let \(M:=2M\), \(k:=k_0\) and go to Step 1; else, let \(k:=k+1\) and go to Step 1.

  3. Step 2:

Solve the kth-order relaxation (5.2). If it is infeasible, then \(int\,\bar{{\mathbb {S}}}=\emptyset \); stop. Otherwise, the optimal value \(\rho _{1,k}\) and an optimizer \(\bar{y}^{1,k}\) can be computed.

  4. Step 3:

    If rank condition (4.4) is satisfied, then \(\bar{\lambda }_1=\rho _{1,k}\) and optimal solution set \(\bar{{\mathbb {S}}}_1\) of (5.1) can be obtained. Let \(int\,\bar{{\mathbb {S}}}:=int\,\bar{{\mathbb {S}}}\cup \bar{{\mathbb {S}}}_1\), \(k:=k_0\), and go to Step 4. Otherwise, let \(k:=k+1\) and go to Step 2.

  5. Step 4:

Solve problem (4.7) with \(\bar{h},\bar{g}\). If \(\tau >\bar{\lambda }_{i}\), let \(\epsilon :=\epsilon /2\) and go to Step 4. If \(\tau =\bar{\lambda }_{i}\), go to Step 5.

  6. Step 5:

Solve the relaxation (5.5). If it is infeasible, then \(\bar{{\mathbb {S}}}_{i+1}=\emptyset \) and stop. Otherwise, compute the optimal value \(\rho _{i+1,k}\) with an optimizer \(\bar{y}^{i+1,k}\). If (4.4) is satisfied with \( \hat{y}=\bar{y}^{i+1,k}|_{2t}\) for some \(t\in [k_0,k]\), then \(\bar{\lambda }_{i+1}= \rho _{i+1,k}\) and \(\bar{{\mathbb {S}}}_{i+1}\) can be computed. Let \(int\,\bar{{\mathbb {S}}}:=int\,\bar{{\mathbb {S}}}\cup \bar{{\mathbb {S}}}_{i+1}\). Let \(k:=k_0\), \(i:=i+1\), \(\epsilon :=\epsilon _0\) and go to Step 4. If such a t does not exist, let \(k:=k+1\) and go to Step 5.

  7. Output:

    Let \(int\,{\mathbb {S}}=\{(\lambda , x)\mid x=(z,1),\,(\lambda ,z)\in int\,\bar{{\mathbb {S}}}\}{\setminus } bd\,{\mathbb {S}}\).

The finite convergence of Algorithm 5.1 is stated as follows.

Theorem 5.2

Assume that there are finitely many interior SOC-complementarity eigenvalues. Then Algorithm 5.1 terminates after solving finitely many semidefinite relaxations.

6 Numerical reports

In this section, we present numerical experiments for computing SOC-complementarity eigenvalues. We use the software GloptiPoly 3 [12] and SeDuMi [26] to solve the semidefinite relaxation problems. The experiments are implemented in MATLAB R2015 on a Dell laptop with an Intel Core i5 CPU (1.70 GHz) and 4 GB of RAM. Computational results are displayed to four decimal digits.

In this section, we use a column vector \((\lambda ,x)\) to denote an SOC-complementarity eigenpair. The first entry of \((\lambda , x)\in \mathbb {R}^{n+2}\) is the SOC-complementarity eigenvalue \(\lambda \in \mathbb {R}\), and the remaining entries constitute the SOC-complementarity eigenvector x. For notational convenience, we denote the problem studied in [13] as

$$\begin{aligned} {\mathcal {K}} \ni x\perp (\lambda \hat{\mathcal {A}}x^{m-1}-\hat{\mathcal {B}}x^{m-1})\in {\mathcal {K}}^*, \end{aligned}$$

which can be rewritten as problem SOCTEiCP of this paper with \({\mathcal {A}}=-\hat{\mathcal {B}}\) and \({\mathcal {B}}=-\hat{\mathcal {A}}\).

Example 6.1

Consider SOCTEiCP with matrices \(A, B\in {\mathtt {T}}^2(\mathbb {R}^2)\) given by

$$\begin{aligned} A= \left( \begin{array}{cc} 1 &{} 2\\ 1 &{} -2\\ \end{array} \right) ,\quad B=\left( \begin{array}{cc} 1 &{} 1\\ 1 &{} 1\\ \end{array} \right) . \end{aligned}$$

By direct computation, there are no SOC-complementarity eigenpairs.

Algorithm 4.2 provides, in 0.654 s, a numerical certificate that no boundary SOC-complementarity eigenpairs exist. Algorithm 5.1 provides, in 0.181 s, a numerical certificate that no interior SOC-complementarity eigenpairs exist. However, the matrix \(\hat{B}=-A\) is not strictly SOC-positive, whereas Algorithm SPA, proposed in [20] to solve the SOCTEiCP of [13], requires SOC-positivity. Hence Algorithm SPA is not applicable to this example.
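For this small example, the claim can also be illustrated numerically (an illustration, not a certificate): normalizing the eigenvector as \(x=(z,1)\), a grid scan over \((\lambda ,z)\) shows that the total violation of the three conditions in (1.1) stays bounded away from zero. A Python sketch with numpy:

```python
import numpy as np

A = np.array([[1., 2.], [1., -2.]])
B = np.array([[1., 1.], [1., 1.]])

def residual(lam, z):
    """Total violation of (1.1) at x = (z, 1) for the cone K = {(z,t): |z| <= t}."""
    x = np.array([z, 1.0])
    w = A @ x - lam * B @ x
    return (max(abs(z) - 1.0, 0.0)          # x in K
            + max(abs(w[0]) - w[1], 0.0)    # w in K* = K (self-dual)
            + abs(x @ w))                   # complementarity x^T w = 0

best = min(residual(lam, z)
           for lam in np.linspace(-20.0, 20.0, 401)
           for z in np.linspace(-1.0, 1.0, 101))
print(best)   # about 4.0: the violation never comes close to zero
```

Of course, such a scan cannot prove nonexistence; the actual proof is the infeasibility certificate returned by the semidefinite relaxations.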

Example 6.2

Consider SOCTEiCP with the matrices \(A,B\in {\mathtt {T}}^2(\mathbb {R}^2)\) given by

$$\begin{aligned} A= \left( \begin{array}{cc} 1 &{} 1\\ 3 &{} 5\\ \end{array} \right) ,\quad B=\left( \begin{array}{cc} 2 &{} 1\\ 0 &{} 2\\ \end{array} \right) . \end{aligned}$$

Applying Algorithm 4.2, the boundary SOC-complementarity eigenpairs \((\lambda , x)=(0.6667,-1.0000,1.0000)^T\) and \((\lambda , x)=(2.0000,1.0000,1.0000)^T\) are computed in 5.313 s.

The unique interior SOC-complementarity eigenpair \((\lambda , x)=(2.0000,-0.3333,1.0000)^T\) is found by Algorithm 5.1 in about 1.489 s.
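These eigenpairs can be verified directly from definition (1.1) with m = 2. The following numpy check is independent of the SDP computation (we use the exact values 2/3 and −1/3 in place of the four-digit roundings):

```python
import numpy as np

# Matrices of this example.
A = np.array([[1., 1.], [3., 5.]])
B = np.array([[2., 1.], [0., 2.]])

def is_eigenpair(lam, x, tol=1e-9):
    """Check K ∋ x ⟂ (Ax − λBx) ∈ K* for the SOC K = {(z,t): |z| <= t}."""
    x = np.asarray(x, dtype=float)
    w = A @ x - lam * B @ x
    in_K  = abs(x[0]) <= x[1] + tol          # x in K
    in_Ks = abs(w[0]) <= w[1] + tol          # w in K* = K (self-dual)
    comp  = abs(x @ w) <= tol                # complementarity
    return bool(in_K and in_Ks and comp)

# Boundary pairs from Algorithm 4.2 and the interior pair from Algorithm 5.1:
checks = [is_eigenpair(2/3, [-1.0, 1.0]),
          is_eigenpair(2.0, [ 1.0, 1.0]),
          is_eigenpair(2.0, [-1/3, 1.0])]
print(checks)   # [True, True, True]
```

For the interior pair, \(Ax-2Bx\) vanishes identically, consistent with the interior characterization of Sect. 5.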

Example 6.3

Consider SOCTEiCP with the tensors \({\mathcal {A}},{\mathcal { B}}\in {\mathtt {T}}^4(\mathbb {R}^2)\), whose nonzero entries are listed below

$$\begin{aligned} \begin{array}{lllll} {\mathcal {A}}_{1111}=8,\, &{}{\mathcal {A}}_{1221}=6,\, &{}{\mathcal {A}}_{1222}=9,\, &{}{\mathcal {A}}_{2111}=4,\, &{}{\mathcal {A}}_{2112}=2,\\ {\mathcal {A}}_{2222}=6, &{}{\mathcal {B}}_{1111}=1, &{}{\mathcal {B}}_{1112}=1, &{}{\mathcal {B}}_{1222}=3, &{}{\mathcal {B}}_{2222}=2. \end{array} \end{aligned}$$

By Algorithm 4.2, we get two boundary SOC-complementarity eigenpairs

$$\begin{aligned} (\lambda ,x)=(-9.0000,-1.0000, 1.0000)^T \quad \mathrm{and} \quad (\lambda ,x)=(5.0000,1.0000, 1.0000)^T. \end{aligned}$$

This computation takes about 2.484 s.

By Algorithm 5.1, there are two interior SOC-complementarity eigenpairs

$$\begin{aligned} (\lambda ,x)=(3.0000,0.0000, 1.0000)^T\quad \mathrm{and} \quad (\lambda ,x)=( 4.1580 ,0.6941, 1.0000)^T. \end{aligned}$$

This computation takes about 2.215 s.
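The listed eigenpairs can again be checked against definition (1.1), now with a generic tensor–vector contraction over the sparse entry lists above. The sketch below (our own check, independent of Algorithms 4.2 and 5.1) verifies the two boundary pairs and the first interior pair; the second interior pair is rounded to four digits and thus would pass only a looser tolerance:

```python
import numpy as np

def tvec(entries, x, n):
    """(T x^{m-1})_j = sum T_{j i2...im} x_{i2}...x_{im}, from a dict of
    nonzero entries keyed by 1-based index tuples."""
    v = np.zeros(n)
    for idx, val in entries.items():
        v[idx[0] - 1] += val * np.prod([x[i - 1] for i in idx[1:]])
    return v

# Nonzero entries of this example.
A = {(1,1,1,1): 8, (1,2,2,1): 6, (1,2,2,2): 9,
     (2,1,1,1): 4, (2,1,1,2): 2, (2,2,2,2): 6}
B = {(1,1,1,1): 1, (1,1,1,2): 1, (1,2,2,2): 3, (2,2,2,2): 2}

def is_eigenpair(lam, x, tol=1e-6):
    x = np.asarray(x, dtype=float)
    w = tvec(A, x, 2) - lam * tvec(B, x, 2)
    return bool(abs(x[0]) <= x[1] + tol and abs(w[0]) <= w[1] + tol
                and abs(x @ w) <= tol)

pairs = [(-9.0, [-1.0, 1.0]), (5.0, [1.0, 1.0]),   # boundary pairs
         (3.0, [0.0, 1.0])]                         # first interior pair
checks = [is_eigenpair(l, x) for l, x in pairs]
print(checks)   # [True, True, True]
```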

Example 6.4

[20, Example 5.1] This example deals with an SOCTEiCP with two 4th-order 2-dimensional tensors \(\mathcal {A,B}\); all nonzero entries of \({\mathcal {A}}\) and \({\mathcal {B}}\) are listed below

$$\begin{aligned} \begin{array}{rl} &{}{\mathcal {A}}_{1111}=0.8147, \,{\mathcal {A}}_{1211}=0.5164,\, {\mathcal {A}}_{2111}=0.5164,\,{\mathcal {A}}_{2211}=0.9134,\\ &{}{\mathcal {A}}_{1112}=0.4218,\, {\mathcal {A}}_{1212}=0.8540,\, {\mathcal {A}}_{2112}=0.8540,\, {\mathcal {A}}_{2212}=0.9595,\\ &{}{\mathcal {A}}_{1121}=0.4218,\, {\mathcal {A}}_{1221}=0.8540,\, {\mathcal {A}}_{2121}=0.8540,\, {\mathcal {A}}_{2221}=0.9595,\\ &{}{\mathcal {A}}_{1122}=0.6787, \, {\mathcal {A}}_{1222}=0.7504,\, {\mathcal {A}}_{2122}=0.7504,\,{\mathcal {A}}_{2222}=0.3922, \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{rl} &{}{\mathcal {B}}_{1111}=1.6324,\, {\mathcal {B}}_{1211}=1.1880,\, {\mathcal {B}}_{2111}=1.1880,\, {\mathcal {B}}_{2211}=1.5469,\\ &{}{\mathcal {B}}_{1112}=1.6557,\, {\mathcal {B}}_{1212}=1.4424,\, {\mathcal {B}}_{2112}=1.4424,\, {\mathcal {B}}_{2212}=1.9340,\\ &{}{\mathcal {B}}_{1121}=1.6557,\, {\mathcal {B}}_{1221}=1.4424,\, {\mathcal {B}}_{2121}=1.4424,\, {\mathcal {B}}_{2221}=1.9340,\\ &{}{\mathcal {B}}_{1122}=1.6555, \,{\mathcal {B}}_{1222}=1.4386, \,{\mathcal {B}}_{2122}=1.4386,\,{\mathcal {B}}_{2222}=1.0318. \end{array} \end{aligned}$$

Applying Algorithm 4.2, a boundary SOC-complementarity eigenpair \((\lambda ,x)=(0.4783,1.0000,1.0000)^T\) is computed in 1.431 s. On the other hand, an interior SOC-complementarity eigenpair \((\lambda ,x)=(0.4848, 0.3936,1.0000)^T\) is returned by Algorithm 5.1 in 0.924 s.

Example 6.5

Consider SOCTEiCP with the tensors \({\mathcal {A}},{\mathcal { B}}\in {\mathtt {T}}^5(\mathbb {R}^3)\), whose nonzero entries are listed as

$$\begin{aligned} \begin{array}{lllll} &{}{\mathcal {A}}_{11111}=9,\quad &{}{\mathcal {A}}_{13333}=3,\quad &{}{\mathcal {A}}_{21111}=-15,\quad &{}{\mathcal {A}}_{23333}=5,\\ &{}{\mathcal {A}}_{31111}=7,\quad &{}{\mathcal {A}}_{33333}=7,\quad &{}{\mathcal {B}}_{11112}=4,\quad &{}{\mathcal {B}}_{11233}=11,\\ &{}{\mathcal {B}}_{21233}=2,\quad &{}{\mathcal {B}}_{11133}=6,\quad &{}{\mathcal {B}}_{31133}=4,\quad &{}{\mathcal {B}}_{21133}=10. \end{array} \end{aligned}$$

By Algorithm 4.2, there are four boundary SOC-complementarity eigenpairs

$$\begin{aligned} \begin{array}{rll} &{}(0.3206, -0.8552,0.5183,1.0000)^T,\quad &{}(1.0413,0.6999,0.7142,1.0000)^T,\\ &{}(-2.5419,0.5466,-0.8374,1.0000)^T, &{}(-1.0000,-1.0000,0.0000,1.0000)^T. \end{array} \end{aligned}$$

This computation takes about 9.727 s. However, Algorithm 5.1 certifies in 2.685 s that there are no interior SOC-complementarity eigenpairs.
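The same direct check applies with \(x=(z,1)\in \mathbb {R}^{3}\) and the cone \(\Vert z\Vert _2\le t\). Among the listed pairs, \((-1.0000,-1.0000,0.0000,1.0000)^T\) is exactly representable and verifies to machine precision (the others are four-digit roundings); a self-contained sketch:

```python
import numpy as np

def tvec(entries, x, n):
    """(T x^{m-1})_j from a dict of nonzero entries (1-based index tuples)."""
    v = np.zeros(n)
    for idx, val in entries.items():
        v[idx[0] - 1] += val * np.prod([x[i - 1] for i in idx[1:]])
    return v

# Nonzero entries of this example (m = 5, dimension 3).
A = {(1,1,1,1,1): 9, (1,3,3,3,3): 3, (2,1,1,1,1): -15,
     (2,3,3,3,3): 5, (3,1,1,1,1): 7, (3,3,3,3,3): 7}
B = {(1,1,1,1,2): 4, (1,1,2,3,3): 11, (2,1,2,3,3): 2,
     (1,1,1,3,3): 6, (3,1,1,3,3): 4, (2,1,1,3,3): 10}

def is_eigenpair(lam, x, tol=1e-6):
    x = np.asarray(x, dtype=float)
    w = tvec(A, x, 3) - lam * tvec(B, x, 3)
    return bool(np.linalg.norm(x[:2]) <= x[2] + tol       # x in K ⊂ R^3
                and np.linalg.norm(w[:2]) <= w[2] + tol   # w in K* = K
                and abs(x @ w) <= tol)                    # complementarity

ok = is_eigenpair(-1.0, [-1.0, 0.0, 1.0])
print(ok)   # True
```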

Table 1 Nonzero components of the tensor \({\mathcal {A}}\) for Example 6.6
Table 2 Nonzero components of the tensor \( {\mathcal {B}}\) for Example 6.6

Example 6.6

Consider Example 4.1 in [13] with two 4th-order 4-dimensional tensors \({\mathcal {A}},{\mathcal { B}}\) and \({\mathcal {K}}={\mathcal {K}}^2\times {\mathcal {K}}^2\). That is, we seek solutions of the following system

$$\begin{aligned} {\mathcal {K}}\ni x\perp (\lambda {\mathcal {A}}x^{m-1}-{\mathcal {B}}x^{m-1})\in {\mathcal {K}^*}. \end{aligned}$$

The nonzero entries of tensors \({\mathcal {A}}\) and \({\mathcal {B}}\) are listed in Tables 1 and 2, respectively.

Applying Algorithm SPA in [13] with 10 randomly generated initial points, we obtain the same SOC-complementarity eigenpair each time:

$$\begin{aligned} (\lambda ,x )=(0.1613,0.1221,0.0388,0.5433,0.2699)^{T}. \end{aligned}$$

Note that for any \(\alpha >0\), \(\alpha (0.1221,0.0388,0.5433,0.2699)^{T}\) is an SOC-complementarity eigenvector. Hence, \((1.0000,0.3176,4.4498,2.2109)^T\) is an SOC-complementarity eigenvector by letting \(\alpha ={1\over 0.1221}\).

To solve this problem, we first divide the SOC-complementarity eigenpairs into boundary, boundary-interior and interior SOC-complementarity eigenpairs. Here, boundary (interior) SOC-complementarity eigenpairs are those whose eigenvectors lie in the boundary (interior) of \({\mathcal {K}}^2\times {\mathcal {K}}^2\), while boundary-interior SOC-complementarity eigenvectors lie in \((bd {{\mathcal {K}}^2}\times int {{\mathcal {K}}^2})\cup (int {{\mathcal {K}}^2}\times bd {{\mathcal {K}}^2})\). To obtain all SOC-complementarity eigenpairs, four algorithms similar to Algorithms 4.2 and 5.1 are used; we omit the details here. By these four algorithms, all SOC-complementarity eigenpairs are listed as:

$$\begin{aligned} \begin{array}{ll} (0.1375,1.0000,1.0000,0.6240,-0.6240)^T,~&{}(0.2300,1.0000,1.0000,0.5295,0.5295)^T,\\ (0.1613,1.0000,0.3176,4.4498,2.2109)^T,~ &{}(0.2269,1.0000,0.4121,0.4265,0.0089)^T. \end{array} \end{aligned}$$

Comparing the two results, our SDP relaxation method finds more SOC-complementarity eigenpairs than SPA.