1 Introduction

A tensor, as a natural extension of the concept of a matrix, is a multidimensional array, whose order refers to the dimensionality of the array or, equivalently, the number of indices needed to label a component of that array. Mathematically, a real mth order n-dimensional square tensor, denoted by \(\mathcal {A}\), can be expressed as \(\mathcal {A}= (a_{i_1\cdots i_m} )\), where each component \(a_{i_1\cdots i_m} \in \mathbb {R}\) for \(1\leqslant i_1, \cdots , i_m\leqslant n\). For the sake of convenience, we denote by \(\mathcal {T}_{m,n}\) the space of mth order n-dimensional real square tensors. If the entries of \(\mathcal {A}\in \mathcal {T}_{m,n}\) are invariant under any permutation of its indices, we call \(\mathcal {A}\) a symmetric tensor. For a vector \(x: = (x_1, \cdots ,x_n)^{\text {T}}\in \mathbb {R}^n\) and a tensor \(\mathcal {A}= (a_{i_1\cdots i_m})\in \mathcal {T}_{m,n}\), we define \(\mathcal {A}x^{m-1}\) as the n-dimensional vector whose ith component is given by

$$\begin{aligned} (\mathcal {A}x^{m-1})_i=\sum _{i_{2},\cdots ,i_{m}=1}^na_{ii_{2}\cdots i_{m}}x_{i_2}\cdots x_{i_m},~~\hbox { for }~i=1,2,\cdots ,n, \end{aligned}$$

and throughout, let \(\mathcal {A}x^m\) denote the value at x of the associated homogeneous polynomial, defined by

$$\begin{aligned} \mathcal {A}x^{m}=\sum _{i_1,i_2,\cdots ,i_m=1}^na_{i_1i_2\cdots i_m}x_{i_1}x_{i_2}\cdots x_{i_m}. \end{aligned}$$
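Both contractions are straightforward to evaluate numerically. As a hedged illustration (ours, not part of the original presentation), the following numpy sketch computes them for \(m=4\) via einsum; the names A and x are placeholders.

```python
# A hedged numpy sketch (ours): the contractions A x^{m-1} and A x^m for m = 4.
import numpy as np

m, n = 4, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n,) * m)   # a generic tensor in T_{4,3}
x = rng.standard_normal(n)

# (A x^{m-1})_i = sum_{i_2,...,i_m} a_{i i_2 ... i_m} x_{i_2} ... x_{i_m}
Axm1 = np.einsum('ijkl,j,k,l->i', A, x, x, x)

# A x^m = sum_{i_1,...,i_m} a_{i_1 ... i_m} x_{i_1} ... x_{i_m}
Axm = np.einsum('ijkl,i,j,k,l->', A, x, x, x, x)

assert np.isclose(Axm, x @ Axm1)    # A x^m = <x, A x^{m-1}> by definition
```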

Given two tensors \(\mathcal {A},\;\mathcal {B}\in \mathcal {T}_{m,n}\), we say that \((\mathcal {A},\mathcal {B})\) is an identical singular pair, if

$$\begin{aligned} \left\{ x\in \mathbb {R}^n\backslash \{0\}~|~\mathcal {A}x^{m-1}=0,\;\mathcal {B}x^{m-1}=0\right\} \ne \varnothing . \end{aligned}$$

Under the assumption that \((\mathcal {A},\mathcal {B})\) is not an identical singular pair, we call \((x,\lambda )\in (\mathbb {C}^n\backslash \{0\})\times \mathbb {C}\) an eigenvector–eigenvalue pair of \((\mathcal {A}, \mathcal {B})\), if x is a nonzero solution of the following system of n equations:

$$\begin{aligned} (\lambda \mathcal {A}-\mathcal {B})x^{m-1}=0, \end{aligned}$$
(1.1)

where the nonzero vector x satisfying (1.1) is also called an eigenvector of \((\mathcal {A},\mathcal {B})\), and \(\lambda \) is the eigenvalue associated with the eigenvector x of \((\mathcal {A},\mathcal {B})\). The concept of an eigenvector–eigenvalue pair for tensors dates back to the independent work of Lim [1] and Qi [2], and its introduction greatly stimulated the rapid development of the spectral theory of tensors. In 2009, Chang et al. [3] further introduced a unified definition of eigenvector–eigenvalue pairs for general square tensors, thereby making the study of tensors in the direction of complementarity problems more interesting, e.g., see [4–7]. Indeed, the importance of tensors and their eigenvalues/eigenvectors has been highlighted by the concise mathematical framework they provide for formulating and analyzing many real-world problems in areas such as magnetic resonance imaging [8, 9], higher-order Markov chains [10], and best rank-one approximation in data analysis [11]. Accordingly, many nice properties, such as the Perron–Frobenius theorem for eigenvalues/eigenvectors of nonnegative square tensors, have been established, see, e.g., [12, 13].

In the literature, e.g., see [14, 15], complementarity problems have been well developed due to their widespread applications in engineering and economics. As an important special case of complementarity problems, the eigenvalue complementarity problem (EiCP) for matrices has also been studied extensively, see [16–22] for example. Most recently, the EiCP for matrices was generalized to tensors in [5], where the authors called it the tensor generalized eigenvalue complementarity problem (TGEiCP); it has been further studied from both theoretical and numerical perspectives in [23–26]. It is well known that the second-order cone is an important class of cones in applied mathematics, and its wide applicability encourages the study of the specific EiCP on second-order cones for matrices, which is called the SOCEiCP. By utilizing the special structure of second-order cones, some more interesting results have been developed, see, e.g., [27, 28]. To the best of our knowledge, the development of the TGEiCP is still in its infancy. Therefore, a natural question is whether we can also extend the SOCEiCP to tensors and obtain further interesting properties for such a specific TGEiCP.

In this paper, we study the second-order cone tensor eigenvalue complementarity problem (SOCTEiCP), which seeks a nonzero vector \(x\in \mathbb {R}^n\) and a scalar \(\lambda \in \mathbb {R}\) satisfying

$$\begin{aligned} x \in \mathcal {K},~~ w:=\left( \lambda \mathcal {A} -\mathcal {B}\right) x^{m-1}\in \mathcal {K}^* ~~~\hbox {and}~~~\langle x, w\rangle =0, \end{aligned}$$
(1.2)

where \(\mathcal {A}\) and \(\mathcal {B}\) are two real mth order n-dimensional tensors, \(\langle \cdot ,\cdot \rangle \) denotes the standard inner product in real Euclidean space, \(\mathcal {K}^*\) is the dual cone of \(\mathcal {K}\), and here \(\mathcal {K}\) is the second-order cone defined by

$$\begin{aligned} \mathcal {K}:=\mathcal {K}^{n_1}\times \mathcal {K}^{n_2}\times \cdots \times \mathcal {K}^{n_r} \end{aligned}$$
(1.3)

with \( \mathcal {K}^{n_i}:=\left\{ x^i\in \mathbb {R}^{n_i} ~|~ x_{\circ }^i\geqslant \Vert x_{\bullet }^i\Vert \right\} \) for \(i=1,2,\cdots ,r\). Note that the n-dimensional vector x can also be separated into r parts, i.e., \(x:=(x^1, x^2, \cdots , x^r)\in \mathbb {R}^{n_1}\times \mathbb {R}^{n_2}\times \cdots \times \mathbb {R}^{n_r}\) with \(\sum _{i=1}^r{n_i}=n\), and each part \(x^i:=(x_{\circ }^i, x_{\bullet }^i)\in \mathbb {R}\times \mathbb {R}^{n_i-1}\) for \(i=1,\cdots ,r\). It is obvious that each cone \(\mathcal {K}^{n_i}\) is pointed and self-dual, which means that \((\mathcal {K}^{n_i})^*=\mathcal {K}^{n_i}\), where the dual cone \((\mathcal {K}^{n_i})^*\) of \(\mathcal {K}^{n_i}\) is defined by

$$\begin{aligned} (\mathcal {K}^{n_i})^*:=\left\{ y^i\in \mathbb {R}^{n_i}~|~\langle y^i,x^i\rangle \geqslant 0,~\forall ~x^i\in \mathcal {K}^{n_i}\right\} . \end{aligned}$$

As a consequence, we know that \(\mathcal {K}\) is also pointed and self-dual.
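Numerically, membership in \(\mathcal {K}\) reduces to r elementary norm tests, one per block. The following is a hedged sketch (the function name and block encoding are ours):

```python
# A hedged sketch (ours): testing x = (x^1,...,x^r) in K = K^{n_1} x ... x K^{n_r}.
import numpy as np

def in_soc_product(x, blocks, tol=1e-12):
    """True iff every block x^i = (x_o^i, x_*^i) satisfies x_o^i >= ||x_*^i||."""
    off = 0
    for ni in blocks:
        xi = x[off:off + ni]
        if xi[0] + tol < np.linalg.norm(xi[1:]):
            return False
        off += ni
    return True

# e.g., with K = K^2 x K^3, the vector (1,0,1,0,0) lies in K:
print(in_soc_product(np.array([1.0, 0.0, 1.0, 0.0, 0.0]), [2, 3]))  # True
```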

The main contributions of this paper are summarized as follows: First, we show that SOCTEiCP (1.2) is provably equivalent to a variational inequality, thereby establishing the existence of a solution to SOCTEiCP (1.2). One further important benefit is that such a characterization might provide an efficient way to study properties (e.g., sensitivity and stability) of SOCTEiCP (1.2) in the context of variational inequalities. Then, we focus on two special cases of SOCTEiCP (1.2) with symmetric and sub-symmetric tensors, and reformulate both of them as nonlinear programming problems for the purpose of designing numerical algorithms to find some of their eigenvector–eigenvalue pairs. To illustrate the solvability of SOCTEiCP (1.2), we employ the so-called scaling-and-projection algorithm (SPA) [5] to solve SOCTEiCP (1.2) and report some preliminary computational results to verify the reliability of the SPA.

The rest of this paper is organized as follows. In Sect. 2, we show that SOCTEiCP (1.2) is essentially equivalent to a variational inequality problem. In Sect. 3, we consider two special cases of SOCTEiCP (1.2). More concretely, in Sect. 3.1, we are concerned with the symmetric SOCTEiCP, that is, the case where the underlying tensors \(\mathcal {A}\) and \(\mathcal {B}\) are symmetric. Based upon such a symmetry condition, we can gainfully formulate the symmetric SOCTEiCP as a fractional polynomial optimization problem. As a more general case, in Sect. 3.2, we discuss the case where SOCTEiCP (1.2) has two sub-symmetric tensors \(\mathcal {A}\) and \(\mathcal {B}\). Similarly, we give a nonlinear programming formulation for the sub-symmetric SOCTEiCP. In Sect. 4, we report some numerical results to verify the reliability of the SPA proposed in [5]. Finally, we conclude this paper with some remarks in Sect. 5.

Notation Let \(\mathbb {R}^n\) denote the real Euclidean space of column vectors of length n. The superscript \(^{\text {T}}\) represents the transpose. Denote \(\mathbb {R}_+^n:=\{x\in \mathbb {R}^n\;|\;x\geqslant 0\}\). For given \(x\in \mathbb {R}^n\), we also rewrite \(x:=(x_1,x_2,\cdots ,x_n)^{\text {T}}\) in r parts, i.e., \(x:=(x^1, x^2, \cdots , x^r)\in \mathbb {R}^{n_1}\times \mathbb {R}^{n_2}\times \cdots \times \mathbb {R}^{n_r}\), with \(\sum _{i=1}^r{n_i}=n\) and \(x^i:=(x_{\circ }^i, x_{\bullet }^i)\in \mathbb {R}\times \mathbb {R}^{n_i-1}\) for \(i=1,\cdots ,r\). For \(\mathcal {A}\in \mathcal {T}_{m,n}\) and a subset J of the index set \([r]:=\{1,2,\cdots ,r\}\), we denote by \(\mathcal {A}_J\) the principal sub-tensor of \(\mathcal {A}\), which is obtained from the homogeneous polynomial \(\mathcal {A}x^m\) by restricting to all \(x=(x^1,x^2,\cdots ,x^r)\) with \(x^i=0\) for \(i\in [r]\backslash J\). Thus, \(\mathcal {A}_J\) is a tensor of order m and dimension |J|, where \(|J|=\sum _{i\in J}n_i\). Correspondingly, we denote by \(x_J\) the sub-vector of \(x=(x^1, x^2, \cdots , x^r)\) obtained by removing the components \(x^i\) with \(i\in [r]\backslash J\). For given \(\mathcal {C}:=(c_{i_1i_2\cdots i_m})\in \mathcal {T}_{m,n}\) and \(x\in \mathbb {R}^n\), \(\mathcal {C} x^{m-2}\) denotes the \(n\times n\) matrix with the (i, j)th element given by

$$\begin{aligned} \left( \mathcal {C} x^{m-2}\right) _{ij}:=\sum _{i_3,\cdots ,i_m=1}^nc_{iji_3\cdots i_m} x_{i_3}\cdots x_{i_m}. \end{aligned}$$
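As with the contractions in the introduction, \(\mathcal {C}x^{m-2}\) is a direct einsum; a hedged sketch for \(m=4\) (ours):

```python
# A hedged numpy sketch (ours): the matrix C x^{m-2} for m = 4.
import numpy as np

n = 3
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n, n, n))
x = rng.standard_normal(n)

Cxm2 = np.einsum('ijkl,k,l->ij', C, x, x)       # (C x^{m-2})_{ij}
Cxm1 = np.einsum('ijkl,j,k,l->i', C, x, x, x)   # (C x^{m-1})_i
assert np.allclose(Cxm2 @ x, Cxm1)              # (C x^{m-2}) x = C x^{m-1}
```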

2 A Variational Inequality Characterization to SOCTEiCP (1.2)

As a special case of the TGEiCP introduced in [5], SOCTEiCP (1.2) clearly has at least one solution under some mild conditions. In this section, we reformulate SOCTEiCP (1.2) as a variational inequality from a perspective different from that used in [5]. We start this section by recalling some basic definitions and properties of second-order cones and tensors, which will be used in this paper.

For the second-order cone \(\mathcal {K}\) defined by (1.3), it is well known that the complementarity condition on \(\mathcal {K}\) can be decomposed into complementarity conditions on each \(\mathcal {K}^{n_i}\), that is,

$$\begin{aligned} x,y\in \mathcal {K}~\hbox {and}~\langle x,y\rangle =0~~\Leftrightarrow ~~x^i,y^i\in \mathcal {K}^{n_i}~\hbox { and }~\langle x^i,y^i\rangle =0,~~\hbox { for }~i=1,2,\cdots ,r, \end{aligned}$$
(2.1)

where \(x=(x^1,x^2,\cdots ,x^r)\) and \(y=(y^1,y^2,\cdots ,y^r)\in \mathbb {R}^{n_1}\times \mathbb {R}^{n_2}\times \cdots \times \mathbb {R}^{n_r}\). Moreover, for any \(z=(z_{\circ },z_{\bullet })\) and \(w=(w_{\circ },w_{\bullet })\in \mathbb {R}\times \mathbb {R}^{l-1}\), we define the Jordan product between z and w as

$$\begin{aligned} z\circ w:=\left( \langle z,w\rangle ,~w_{\circ }z_{\bullet }+z_{\circ }w_{\bullet }\right) . \end{aligned}$$
(2.2)

With the above definition of Jordan product of vectors, we have the following result from [29].

Proposition 2.1

For any vectors \(z,w\in \mathbb {R}^l\), we have

$$\begin{aligned} z,w\in \mathcal {K}^l~ \hbox { and }~~\langle z,w\rangle =0 ~~~~\Leftrightarrow ~~~~z,w\in \mathcal {K}^l~\hbox { and }~z\circ w =0_l, \end{aligned}$$

where \(0_l\) is a zero vector in \(\mathbb {R}^l\).
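As a quick numerical illustration of Proposition 2.1 (ours, not part of the development), consider a complementary pair on the boundary of \(\mathcal {K}^3\):

```python
# A hedged illustration (ours) of Proposition 2.1 via the Jordan product (2.2).
import numpy as np

def jordan(z, w):
    """Jordan product (2.2): z o w = (<z, w>, w_o z_* + z_o w_*)."""
    return np.concatenate(([z @ w], w[0] * z[1:] + z[0] * w[1:]))

# z, w lie on the boundary of K^3 and satisfy <z, w> = 0 (up to rounding).
z = np.array([np.sqrt(2.0), 1.0, 1.0])
w = np.array([np.sqrt(2.0), -1.0, -1.0])
print(z @ w)         # ~0: a complementary pair in K^3
print(jordan(z, w))  # ~0_3, exactly as Proposition 2.1 asserts
```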

For given tensors \(\mathcal {A}\) and \(\mathcal {B}\in \mathcal {T}_{m,n}\), we define the function \(F:\mathbb {R}^n\rightarrow \mathbb {R}^n\) by

$$\begin{aligned} F(x)=\lambda (x)\mathcal {A}x^{m-1}-\mathcal {B}x^{m-1}, \end{aligned}$$
(2.3)

where

$$\begin{aligned} \lambda (x)=\frac{\mathcal {B}x^m}{\mathcal {A}x^m}, \end{aligned}$$
(2.4)

which is called the generalized Rayleigh quotient related to \(\mathcal {A}\) and \(\mathcal {B}\). Throughout this paper, we assume that \(\mathcal {A}x^m\ne 0\) for any \(x\in \mathcal {K}\backslash \{0\}\), which means that \(\mathcal {A}\) (or \(-\mathcal {A}\)) is strictly \(\mathcal {K}\)-positive, i.e., \(\mathcal {A}x^m > 0\) (or \(-\mathcal {A}x^m > 0\)) for any \(x\in \mathcal {K}\backslash \{0\}\). Under this assumption, it is clear that F defined by (2.3) is well defined and continuous on \(\mathcal {K}_0:=\{x=(x^1,x^2,\cdots ,x^r)\in \mathcal {K}~|~ e^{\text {T}} x=1\}\), where \(e:=(e^1,e^2,\cdots ,e^r)\) with \(e^i=(1,0,\cdots ,0)^{\text {T}} \in \mathbb {R}^{n_i}\). Obviously, \(\mathcal {K}_0\) is exactly a convex compact basis of \(\mathcal {K}\).
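In code, \(\lambda (x)\) and F compose directly from the contractions of Sect. 1. The following hedged numpy sketch (ours) fixes \(m=4\):

```python
# A hedged sketch (ours, m = 4): the generalized Rayleigh quotient (2.4)
# and the map F of (2.3).
import numpy as np

def rayleigh(A, B, x):
    """lambda(x) = B x^m / A x^m; requires A x^m != 0, as assumed above."""
    Axm = np.einsum('ijkl,i,j,k,l->', A, x, x, x, x)
    Bxm = np.einsum('ijkl,i,j,k,l->', B, x, x, x, x)
    return Bxm / Axm

def F(A, B, x):
    """F(x) = lambda(x) A x^{m-1} - B x^{m-1}, cf. (2.3)."""
    Axm1 = np.einsum('ijkl,j,k,l->i', A, x, x, x)
    Bxm1 = np.einsum('ijkl,j,k,l->i', B, x, x, x)
    return rayleigh(A, B, x) * Axm1 - Bxm1
```

Note that \(\langle x, F(x)\rangle =\lambda (x)\mathcal {A}x^m-\mathcal {B}x^m=0\) for every admissible x, a fact used repeatedly in the proof below.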

Consider the following variational inequality problem (VIP), which refers to the task of finding a vector \({\bar{x}}\in \mathcal {K}_0\) such that

$$\begin{aligned} F({\bar{x}})^{\text {T}} (z-{\bar{x}})\geqslant 0,~~~\forall ~z\in \mathcal {K}_0. \end{aligned}$$
(2.5)

In what follows, we denote (2.5) by VIP(\(F,\mathcal {K}_0\)) for simplicity. Since \(\mathcal {K}_0\) is a nonempty convex compact set, we have the following existence result on the solutions of VIP(\(F,\mathcal {K}_0\)) (e.g., see [14]).

Proposition 2.2

VIP(\(F,\mathcal {K}_0\)) has at least one solution.

Now, we establish the equivalence of (1.2) to VIP(\(F,\mathcal {K}_0\)), and show that (1.2) has at least one solution.

Theorem 2.3

If \({\bar{x}}\) is a solution of VIP(\(F,\mathcal {K}_0\)), then \((\bar{x},{\bar{\lambda }})\) is a solution of (1.2), where \({\bar{\lambda }}:=\lambda ({\bar{x}})\) and \(\lambda (x)\) is defined by (2.4).

Proof

The proof is divided into two parts by distinguishing two cases of \({\bar{x}}_{\circ }^i\).

Case I We first consider the case where \({\bar{x}}_{\circ }^i>0\) for \(i=1,2,\cdots ,r\). Since \({\bar{x}}\) is a solution of VIP(\(F,\mathcal {K}_0\)), it immediately follows that \({\bar{x}}\) is a minimizer of the following optimization problem:

$$\begin{aligned} \min _{x} \,\left\{ \,F({\bar{x}})^{\text {T}} x\,\big |\, x\in \mathcal {K}_0\,\right\} , \end{aligned}$$

which can also be rewritten as

$$\begin{aligned} \begin{array}{cl} {\min }&{} F({\bar{x}})^{\text {T}} x\\ \hbox { s.t.}&{}(x_{\circ }^i)^2-\Vert x_{\bullet }^i\Vert ^2\geqslant 0,\\ &{}x_{\circ }^i\geqslant 0,~i=1,\cdots ,r,\\ &{}e^{\text {T}} x=1. \end{array} \end{aligned}$$

Since \({\bar{x}}_{\circ }^i>0\) for every \(i=1,2,\cdots ,r\), the linear independence constraint qualification holds at \({\bar{x}}\), and hence \({\bar{x}}\) satisfies the KKT conditions (see [30, Theorem 4.2.13]). Therefore, there exist Lagrange multipliers \(\bar{\beta }:=({\bar{\beta }}_1,{\bar{\beta }}_2,\cdots ,{\bar{\beta }}_r)^{\text {T}}\), \(\bar{\gamma }:=({\bar{\gamma }}_1,{\bar{\gamma }}_2,\cdots ,{\bar{\gamma }}_r)^{\text {T}} \in \mathbb {R}^r\) and \({\bar{\delta }}\in \mathbb {R}\) such that

$$\begin{aligned} \left\{ \begin{array}{l} F({\bar{x}})=2C{\bar{\beta }}+D{\bar{\gamma }}+{\bar{\delta }} e,\\ {\bar{\beta }}_i\geqslant 0,~({\bar{x}}_{\circ }^i)^2-\Vert {\bar{x}}_{\bullet }^i\Vert ^2\geqslant 0,~{\bar{\beta }}_i\left( ({\bar{x}}_{\circ }^i)^2-\Vert {\bar{x}}_{\bullet }^i\Vert ^2\right) =0,~i=1,\cdots ,r,\\ {\bar{\gamma }}_i\geqslant 0,~ {\bar{x}}_{\circ }^i\geqslant 0,~{\bar{\gamma }}_i {\bar{x}}_{\circ }^i=0,~i=1,\cdots ,r,\\ e^{\text {T}} {\bar{x}}=1, \end{array} \right. \end{aligned}$$
(2.6)

where

$$\begin{aligned} C:=\left[ \begin{array}{cccc} \left[ \begin{array}{c} {\bar{x}}_{\circ }^1\\ -{\bar{x}}_{\bullet }^1 \end{array} \right] &{}\quad 0_{n_1}&{}\quad \cdots &{}\quad 0_{n_1}\\ 0_{n_2}&{}\quad \left[ \begin{array}{c} {\bar{x}}_{\circ }^2\\ -{\bar{x}}_{\bullet }^2 \end{array} \right] &{}\quad \cdots &{}\quad 0_{n_2}\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0_{n_r}&{}\quad 0_{n_r}&{}\quad \cdots &{}\quad \left[ \begin{array}{c} {\bar{x}}_{\circ }^r\\ -{\bar{x}}_{\bullet }^r \end{array} \right] \\ \end{array} \right] \hbox { and }D:=\left[ \begin{array}{cccc} e^1&{}\quad 0_{n_1}&{}\quad \cdots &{}\quad 0_{n_1}\\ 0_{n_2}&{}\quad e^2&{}\quad \cdots &{}\quad 0_{n_2}\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0_{n_r}&{}\quad 0_{n_r}&{}\quad \cdots &{}\quad e^r \end{array}\right] \in \mathbb {R}^{n\times r} \end{aligned}$$

with \(0_{n_i}\) being the zero vector in \(\mathbb {R}^{n_i}\) for \(i=1,\cdots ,r\). By (2.6), we know \({\bar{x}}^{\text {T}} F(\bar{x})={\bar{\delta }}\), which implies, together with the fact that \(\bar{x}^{\text {T}} F({\bar{x}})=0\), that \({\bar{\delta }}=0\). Consequently, from the first expression of (2.6), we get

$$\begin{aligned} F({\bar{x}})=2C{\bar{\beta }}+D{\bar{\gamma }}. \end{aligned}$$
(2.7)

For notational convenience, let us write

$$\begin{aligned} F({\bar{x}})=:{\bar{y}}:=({\bar{y}}^1,{\bar{y}}^2,\cdots ,{\bar{y}}^r) \in \mathbb {R}^{n_1}\times \mathbb {R}^{n_2}\times \cdots \times \mathbb {R}^{n_r} \end{aligned}$$

with \({\bar{y}}^i=({\bar{y}}_{\circ }^i,{\bar{y}}_{\bullet }^i)\in \mathbb {R}\times \mathbb {R}^{n_i-1}\). By (2.7), it is easy to verify that

$$\begin{aligned} {\bar{y}}_{\circ }^i-\Vert {\bar{y}}_{\bullet }^i\Vert =2{\bar{\beta }}_i\left( \bar{x}_{\circ }^i-\Vert \bar{x}_{\bullet }^i\Vert \right) +{\bar{\gamma }}_i\geqslant 0,\;i=1,\cdots ,r, \end{aligned}$$

which means that \({\bar{y}}^i\in \mathcal {K}^{n_i}\) for \(i=1,\cdots ,r\), and hence \({\bar{y}} \in \mathcal {K}~(=\mathcal {K}^*)\). Consequently, we have \({\bar{\lambda }}\mathcal {A}\bar{x}^{m-1}-\mathcal {B}{\bar{x}}^{m-1} \in \mathcal {K}\) as well as \(\bar{x}\in \mathcal {K}\). Since \({\bar{x}}^{\text {T}} F({\bar{x}})=0\), we know \(\bar{x}^{\text {T}}\left( {\bar{\lambda }}\mathcal {A}{\bar{x}}^{m-1}-\mathcal {B}\bar{x}^{m-1}\right) =0\), which implies that \(({\bar{x}},{\bar{\lambda }})\) is a solution of (1.2), since \({\bar{x}}\ne 0\).

Case II Now we consider the case where \({\bar{x}}_{\circ }^j=0\) for some j. It follows from \({\bar{x}}^j\in \mathcal {K}^{n_j}\) that \(\bar{x}^j=0\). By the constraint \(e^{\text {T}} {\bar{x}}=1\), not all blocks of \({\bar{x}}\) vanish, so \(r\geqslant 2\). For simplicity, we assume that there is exactly one such j and that \(j=1\), i.e., \({\bar{x}} = ({\bar{x}}^1,{\bar{u}})\in \mathbb {R}^{n_1}\times \mathbb {R}^{n-n_1}\) with

$$\begin{aligned} {\bar{x}}^1=0,~~{\bar{u}}=({\bar{x}}^2,\cdots ,{\bar{x}}^r)~~\hbox {and}~~\bar{x}^i\ne 0,~i=2,\cdots ,r. \end{aligned}$$

Correspondingly, we have

$$\begin{aligned} F({\bar{x}})=\lambda ({\bar{u}})\left[ \begin{array}{c} \mathcal {A}_{12}{\bar{u}}^{m-1}\\ \mathcal {A}_{22}{\bar{u}}^{m-1} \end{array} \right] -\left[ \begin{array}{c} \mathcal {B}_{12}{\bar{u}}^{m-1}\\ \mathcal {B}_{22}{\bar{u}}^{m-1} \end{array} \right] =\left[ \begin{array}{c} F_1({\bar{u}})\\ F_2({\bar{u}}) \end{array} \right] \quad \hbox {with}\;\;\lambda (\bar{u}):=\frac{\mathcal {B}_{22}{\bar{u}}^{m}}{\mathcal {A}_{22}{\bar{u}}^{m}}. \end{aligned}$$

Here, for a given tensor \(\mathcal {C}:=(c_{i_1i_2\cdots i_m})\in \mathcal {T}_{m,n}\), let \(\mathcal {C}_{12}\) and \(\mathcal {C}_{22}\) be sub-tensors of \(\mathcal {C}\), whose elements are defined by

$$\begin{aligned} (\mathcal {C}_{12})_{i_1i_2\cdots i_m}:=c_{i_1(n_1+i_2)\cdots (n_1+i_m)},~~i_1=1,2,\cdots , n_1;~i_2,\cdots ,i_m=1,2,\cdots , n-n_1 \end{aligned}$$

and

$$\begin{aligned} (\mathcal {C}_{22})_{i_1i_2\cdots i_m}:=c_{(n_1+i_1)(n_1+i_2)\cdots (n_1+i_m)},~~i_1,i_2,\cdots ,i_m=1,2,\cdots , n-n_1, \end{aligned}$$

respectively. Since \({\bar{x}} = (0,{\bar{u}})\) is a solution of VIP(\(F,\mathcal {K}_0\)), it turns out that

$$\begin{aligned} F({\bar{x}})^{\text {T}} (x-{\bar{x}})=F_1({\bar{u}})^{\text {T}} x^1+F_2({\bar{u}})^{\text {T}} (u-{\bar{u}})\geqslant 0 \end{aligned}$$
(2.8)

for any \(x^1\in \mathbb {R}^{n_1}\) and \(u\in \mathbb {R}^{n-n_1}\) satisfying \(x:=(x^1,u)\in \mathcal {K}_0\). Taking \(x^1=0\) in (2.8) immediately leads to

$$\begin{aligned} F_2({\bar{u}})^{\text {T}} (u-{\bar{u}})\geqslant 0, ~~~\forall ~u\in \bar{\mathcal {K}}_0, \end{aligned}$$

where \(\bar{\mathcal {K}}_0:=\{u=(u^2,\cdots ,u^r)\in \mathbb {R}^{n-n_1}~|~u_{\circ }^i\geqslant \Vert u_{\bullet }^i\Vert , u_{\circ }^2+\cdots +u_{\circ }^r=1\}\), which means that \({\bar{u}}\) is a solution of VIP(\(F_2,\bar{\mathcal {K}}_0\)). Since \({\bar{x}}^i\ne 0\) for \(i=2,\cdots ,r\), it follows from the proof of Case I that \((\bar{u},\lambda ({\bar{u}}))\) is a solution to

$$\begin{aligned} u \in \bar{\mathcal {K}},~~ v:=\lambda \mathcal {A}_{22} u^{m-1}-\mathcal {B}_{22}u^{m-1}\in \bar{\mathcal {K}} ~~~\hbox {and}~~~\langle u, v\rangle =0, \end{aligned}$$
(2.9)

where \(\bar{\mathcal {K}}\) is the second-order cone defined by \(\bar{\mathcal {K}}:=\mathcal {K}^{n_2}\times \cdots \times \mathcal {K}^{n_r}\). By (2.9), we have \(F_2({\bar{u}})^{\text {T}} {\bar{u}}=0\). Consequently, by taking \(u=0\) in (2.8), it can be seen that \(F_1({\bar{u}})^{\text {T}} x^1\geqslant 0\) for any \(x^1\in \mathcal {K}^{n_1}\), and hence \(F_1({\bar{u}})\in (\mathcal {K}^{n_1})^*=\mathcal {K}^{n_1}\). Therefore, we conclude that \(({\bar{x}},\lambda ({\bar{x}}))\) is a solution of (1.2). We complete the proof.

As a direct result of Proposition 2.2 and Theorem 2.3, when \(\mathcal {A}\) and \(\mathcal {B}\) are matrices, we easily recover the solution existence result for SOCEiCPs.

3 Nonlinear Programming for SOCTEiCP (1.2) with Special Structure

In this section, we focus on two special cases of SOCTEiCP (1.2) with symmetric and sub-symmetric tensors \(\mathcal {A}\) and \(\mathcal {B}\), and reformulate them as nonlinear programming problems for the purpose of utilizing or designing optimization methods to find solutions of the model.

3.1 The Symmetric SOCTEiCP

When \(\mathcal {A}\) and \(\mathcal {B}\) are symmetric, it is easy to see that the gradient of the generalized Rayleigh quotient \(\lambda (x)\) defined in (2.4) is

$$\begin{aligned} \nabla \lambda (x)=\frac{m}{\mathcal {A}x^m}\left( \mathcal {B}x^{m-1}-\lambda (x)\mathcal {A}x^{m-1}\right) . \end{aligned}$$
(3.1)

Here we should notice that the gradient formula (3.1) of the Rayleigh quotient holds only for the case where \(\mathcal {A}\) and \(\mathcal {B}\) are both symmetric.
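As a sanity check (ours, not in the paper), (3.1) can be verified against central finite differences; the sketch below symmetrizes random tensors so that the symmetry hypothesis holds (\(m=4\), and the shift added to \(\mathcal {A}\) merely keeps \(\mathcal {A}x^4\) away from zero at the chosen test point).

```python
# A hedged finite-difference check (ours) of the gradient formula (3.1), m = 4.
import itertools
import numpy as np

def symmetrize(T):
    """Average over all index permutations to obtain a symmetric tensor."""
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

def lam(A, B, x):                    # the quotient (2.4)
    return (np.einsum('ijkl,i,j,k,l->', B, x, x, x, x) /
            np.einsum('ijkl,i,j,k,l->', A, x, x, x, x))

def grad_lam(A, B, x):               # formula (3.1) with m = 4
    Axm = np.einsum('ijkl,i,j,k,l->', A, x, x, x, x)
    Axm1 = np.einsum('ijkl,j,k,l->i', A, x, x, x)
    Bxm1 = np.einsum('ijkl,j,k,l->i', B, x, x, x)
    return (4.0 / Axm) * (Bxm1 - lam(A, B, x) * Axm1)

rng = np.random.default_rng(1)
n = 3
A = symmetrize(rng.standard_normal((n,) * 4)) + 10.0 * np.ones((n,) * 4)
B = symmetrize(rng.standard_normal((n,) * 4))
x = np.array([1.0, 0.5, 0.25])
h, I = 1e-6, np.eye(n)
g_fd = np.array([(lam(A, B, x + h * I[i]) - lam(A, B, x - h * I[i])) / (2 * h)
                 for i in range(n)])
print(np.max(np.abs(g_fd - grad_lam(A, B, x))))  # small: (3.1) matches finite differences
```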

The following lemma states two fundamental properties of \(\lambda (x)\), which were studied in [31] for matrices. Its proof is elementary and is omitted here.

Lemma 3.1

For all \(x\in \mathbb {R}^n\backslash \{0\}\), the following statements hold:

(i) \(\lambda (\tau x)=\lambda (x)\) for all \(\tau >0\);

(ii) \(x^{\text {T}} \nabla \lambda (x)=0\).

Now, we consider the following fractional programming model:

$$\begin{aligned} \max _{x}\;\left\{ \;\lambda (x):=\frac{\mathcal {B}x^m}{\mathcal {A}x^m}\;\big |\; x\in \mathcal {K}_0\;\right\} , \end{aligned}$$

which, from the definition of \(\mathcal {K}_0\), can also be rewritten as

$$\begin{aligned} \begin{array}{cl} \hbox {max}&{} \lambda (x):=\frac{\mathcal {B}x^m}{\mathcal {A}x^m}\\ \hbox {s.t.}&{}(x_{\circ }^i)^2-\Vert x_{\bullet }^i\Vert ^2\geqslant 0,\\ &{}x_{\circ }^i\geqslant 0,~i=1,\cdots ,r,\\ &{}e^{\text {T}} x=1. \end{array} \end{aligned}$$
(3.2)
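As an illustration only (ours, not the algorithm studied in this paper), a stationary point of (3.2) can be sought with an off-the-shelf nonlinear programming solver; the sketch below uses scipy's SLSQP for \(m=4\), with `blocks` encoding the sizes \(n_i\).

```python
# A hedged sketch (ours): searching for a stationary point of (3.2) with SLSQP.
import numpy as np
from scipy.optimize import minimize

def solve_32(A, B, blocks, x0):
    n = sum(blocks)
    e, cons, off = np.zeros(n), [], 0
    for ni in blocks:
        e[off] = 1.0
        # cone constraints (x_o^i)^2 - ||x_*^i||^2 >= 0 and x_o^i >= 0
        cons.append({'type': 'ineq',
                     'fun': lambda x, o=off, k=ni: x[o]**2 - x[o+1:o+k] @ x[o+1:o+k]})
        cons.append({'type': 'ineq', 'fun': lambda x, o=off: x[o]})
        off += ni
    cons.append({'type': 'eq', 'fun': lambda x: e @ x - 1.0})   # e^T x = 1

    def neg_lambda(x):   # maximize lambda(x) = B x^m / A x^m  (m = 4)
        return -(np.einsum('ijkl,i,j,k,l->', B, x, x, x, x) /
                 np.einsum('ijkl,i,j,k,l->', A, x, x, x, x))

    return minimize(neg_lambda, x0, constraints=cons, method='SLSQP')
```

Such a solver returns only a stationary (KKT) point in general, but that is exactly what is needed below.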

Then, as a result of Theorem 2.3, the following theorem clarifies the relationship between (1.2) and (3.2): solving the symmetric SOCTEiCP reduces to finding a stationary point of (3.2), which is greatly helpful for solving the model under consideration efficiently.

Theorem 3.2

Assume that \(\mathcal {A}\) and \(\mathcal {B}\) are symmetric tensors and \(\mathcal {A}\) is strictly \(\mathcal {K}\)-positive. Let \({\bar{x}}\) be a stationary point of (3.2). Then \(({\bar{x}},{\bar{\lambda }})\) is a solution of (1.2), where \({\bar{\lambda }}:=\lambda (\bar{x})\) and \(\lambda (x)\) is defined by (2.4).

For a given nonempty subset \(J\subset [r]\), we now consider the following second-order cone optimization problem:

$$\begin{aligned} \begin{array}{cl} \hbox {max}&{} \displaystyle \lambda _J(x_J):=\frac{\mathcal {B}_J(x_J)^m}{\mathcal {A}_J(x_J)^m}\\ \hbox {s.t.}&{} \displaystyle \sum _{i\in J}x_{\circ }^i=1,\\ &{}(x_{\circ }^i)^2-\Vert x_{\bullet }^i\Vert ^2\geqslant 0,\;i\in J,\\ &{}x_{\circ }^i\geqslant 0,~i\in J. \end{array} \end{aligned}$$
(3.3)

Theorem 3.3

Assume that \(\mathcal {A}\) and \(\mathcal {B}\) are symmetric tensors and \(\mathcal {A}\) is strictly \(\mathcal {K}\)-positive. Let \(({\bar{x}}, {\bar{\lambda }})\) be a solution of SOCTEiCP (1.2) with the second-order cone \(\mathcal {K}\) defined by (1.3). Then there exists a nonempty subset \(J\subset [r]\) such that \({\bar{x}}_J\) is a stationary point of (3.3).

Proof

By the homogeneity of the complementarity system SOCTEiCP with respect to x, we assume, without loss of generality, that \(\bar{x}\in \mathcal {K}\backslash \left\{ 0\right\} \) satisfies \(e^{\text {T}}\bar{x}=1\). Let us write \({\bar{x}}=({\bar{x}}^1,{\bar{x}}^2,\cdots ,{\bar{x}}^r)\in \mathbb {R}^{n_1}\times \mathbb {R}^{n_2}\times \cdots \times \mathbb {R}^{n_r}\) with \({\bar{x}}^i=({\bar{x}}_{\circ }^i,\bar{x}_{\bullet }^i) \in \mathbb {R}\times \mathbb {R}^{n_i-1}\) for \(i=1,2,\cdots ,r\). For the sake of simplicity, we write

$$\begin{aligned} {\bar{y}}:=\frac{m}{\mathcal {A}\bar{x}^m}\left( {\bar{\lambda }}\mathcal {A}{\bar{x}}^{m-1}-\mathcal {B}\bar{x}^{m-1}\right) . \end{aligned}$$

It is clear that \({\bar{y}}\in \mathcal {K}\), since \(\bar{w}={\bar{\lambda }}\mathcal {A}{\bar{x}}^{m-1}-\mathcal {B}{\bar{x}}^{m-1}\in \mathcal {K}\) and \(\mathcal {A}{\bar{x}}^m>0\). Moreover, since \((\bar{x},{\bar{\lambda }})\) is a solution of SOCTEiCP, we have \(\langle {\bar{x}},{\bar{y}}\rangle =0\). By (2.2) and Proposition 2.1, it holds that

$$\begin{aligned} {{\bar{y}}_{\circ }^i}{{\bar{x}}_{\bullet }^i}+{{\bar{x}}_{\circ }^i}{\bar{y}_{\bullet }^i}=0, ~~~\forall ~i=1,2,\cdots ,r. \end{aligned}$$
(3.4)

Since \({\bar{x}}\in \mathcal {K}\backslash \{0\}\), the subset \(J=\left\{ i\in [r]~|~ {\bar{x}}_{\circ }^i>0\right\} \) of [r] is nonempty. It is clear that \({\bar{x}}^i=0\) for \(i\in [r]\backslash J\), and hence \({\bar{\lambda }}=\mathcal { B}_J({\bar{x}}_J)^m/\mathcal {A}_J(\bar{x}_J)^m\). As in the proof of Theorem 2.3, let \({\bar{\delta }}\), \({\bar{\beta }}\), and \({\bar{\gamma }}\) be the Lagrange multipliers associated with the constraints of (3.3), respectively. Accordingly, we take \({\bar{\delta }}=0\), and for every \(i\in J\), since \({\bar{x}}_{\circ }^i> 0\), we take \({\bar{\gamma }}_i=0\) and

$$\begin{aligned} {\bar{\beta }}_i=\frac{1}{2{\bar{x}}_{\circ }^i}{\bar{y}}_{\circ }^i. \end{aligned}$$
(3.5)

Obviously, \({\bar{\beta }}_i\geqslant 0\) and \({\bar{\gamma }}_i\bar{x}_{\circ }^i=0\) for every \(i\in J\). Moreover, from (3.4) and (3.5), it is not difficult to see that

$$\begin{aligned} -2{\bar{x}}_{\bullet }^i{\bar{\beta }}_i={\bar{y}}_{\bullet }^i, ~~~~\forall ~i\in J. \end{aligned}$$
(3.6)

Combining (3.5) and (3.6) leads to

$$\begin{aligned} {\bar{y}}^i=2\left[ \begin{array}{c} {\bar{x}}_{\circ }^i\\ -{\bar{x}}_{\bullet }^i \end{array} \right] {\bar{\beta }}_i,~~~ \hbox {for}~i\in J. \end{aligned}$$
(3.7)

Moreover, it follows from \(\langle {\bar{x}},{\bar{y}}\rangle =0\) and (2.1) that \(\langle {\bar{x}}^i,{\bar{y}}^i\rangle =0\) for \(i=1,2,\cdots ,r\). Consequently, from (3.7), we immediately obtain

$$\begin{aligned} {\bar{\beta }}_i\left( ({\bar{x}}_{\circ }^i)^2-\Vert \bar{x}_{\bullet }^i\Vert ^2\right) =\frac{1}{2}\langle {\bar{x}}^i,\bar{y}^i\rangle =0,\;\;\forall i\in J. \end{aligned}$$
(3.8)

Finally, using (3.7) and (3.8), together with the fact that \(e^{\text {T}} {\bar{x}}=1\), we have

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{m}{\mathcal {A}_J{({\bar{x}}_J)}^m}\left[ {{\bar{\lambda }}}\mathcal {A}_J{({\bar{x}}_J)}^{m-1}-\mathcal {B}_J{({\bar{x}}_J)}^{m-1}\right] =2\sum _{i\in J}(c_i)_J{{\bar{\beta }}_i}+\sum _{i\in J}(d_i)_J{{\bar{\gamma }}_i}+{\bar{\delta }} e_{J},\\ \displaystyle \sum _{i\in J}{\bar{x}}_{\circ }^i=1,\\ {\bar{\beta }}_i\geqslant 0,~({\bar{x}}_{\circ }^i)^2-\Vert {\bar{x}}_{\bullet }^i\Vert ^2\geqslant 0,~ {\bar{\beta }}_i\left( ({\bar{x}}_{\circ }^i)^2-\Vert {\bar{x}}_{\bullet }^i\Vert ^2\right) =0, ~i\in J,\\ {\bar{\gamma }}_i\geqslant 0, ~{\bar{x}}_{\circ }^i\geqslant 0, ~{\bar{\gamma }}_i{\bar{x}}_{\circ }^i=0,~i\in J,\\ \end{array} \right. \end{aligned}$$
(3.9)

where \(c_i\) and \(d_i\) are the ith columns of C and D, respectively. By (3.9), we conclude that \({\bar{x}}_J\) is a stationary point of (3.3).

As an interesting by-product of Theorem 3.3, we have the following result showing that some special solutions of the symmetric SOCTEiCP are precisely stationary points of (3.2).

Corollary 3.4

Assume that \(\mathcal {A}\) and \(\mathcal {B}\) are symmetric tensors and \(\mathcal {A}\) is strictly \(\mathcal {K}\)-positive. Let \(({\bar{x}}, {\bar{\lambda }})\) be a solution of SOCTEiCP with the second-order cone \(\mathcal {K}\) defined by (1.3). If \({\bar{x}}^i_{\circ }>0\) for \(i=1,2,\cdots , r\), then \({\bar{x}}\) is a stationary point of (3.2).

3.2 The Sub-symmetric SOCTEiCP

For many real-world problems, the symmetry assumption on the two tensors \(\mathcal {A}\) and \(\mathcal {B}\) is often too restrictive. In this section, we consider a slightly more general case, where the underlying tensors \(\mathcal {A}\) and \(\mathcal {B}\) of SOCTEiCP (1.2) are sub-symmetric.

Before our discussion, we first introduce the key concept of sub-symmetry of tensors. Let \(\mathcal {A}\in \mathcal {T}_{m,n}\). We say that \(\mathcal {A}\) is sub-symmetric with respect to the indices \(\{i_2,\cdots ,i_m\}\), if the \((m-1)\)th order n-dimensional tensor \(\mathcal {A}_i:=(a_{ii_2\cdots i_m})_{1\leqslant i_2,\cdots ,i_m\leqslant n}\) is symmetric for every \(i=1,\cdots ,n\). Clearly, a symmetric tensor \(\mathcal {A}\) must be sub-symmetric, but the converse is not true. Throughout this section, we further assume that m is even. Then, following the idea used in [20], we introduce an additional vector \(y=\lambda ^{\frac{1}{m-1}}x\), i.e.,

$$\begin{aligned} y^i=\lambda ^{\frac{1}{m-1}}x^i, ~~~\hbox {for}~i=1,2,\cdots , r \end{aligned}$$

to derive the nonlinear programming formulation of the sub-symmetric SOCTEiCP, where the complementarity requirement of (1.2) is absorbed into the objective function. More concretely, the nonlinear programming model can be expressed as follows:

$$\begin{aligned} \begin{array}{cl} \hbox {min}&{}f(x,y,w,\lambda ):=\Vert y-\lambda ^{\frac{1}{m-1}}x\Vert ^2+(x^{\text {T}} w)^2\\ \hbox {s.t.}&{}w-\mathcal {A}y^{m-1}+\mathcal {B}x^{m-1}=0,\\ &{} (x_{\circ }^i)^2-\Vert x_{\bullet }^i\Vert ^2\geqslant 0,~x_{\circ }^i\geqslant 0,~i=1,\cdots ,r,\\ &{} (w_{\circ }^i)^2-\Vert w_{\bullet }^i\Vert ^2\geqslant 0,~w_{\circ }^i\geqslant 0,~i=1,\cdots ,r,\\ &{}e^{\text {T}} x=1,\\ &{}e^{\text {T}} y=\lambda ^{\frac{1}{m-1}}, \end{array} \end{aligned}$$
(3.10)

where \(x=(x^1,x^2,\cdots ,x^r)\), \(y=(y^1,y^2,\cdots ,y^r)\), and \( w=(w^1,w^2,\cdots ,w^r)\in \mathbb {R}^{n_1}\times \mathbb {R}^{n_2}\times \cdots \times \mathbb {R}^{n_r}\) with \(x^i=(x_{\circ }^i,x_{\bullet }^i)\), \(w^i=(w_{\circ }^i,w_{\bullet }^i)\), and \( y^i=(y_{\circ }^i,y_{\bullet }^i)\in \mathbb {R}\times \mathbb {R}^{n_i-1}\) for \(i=1,\cdots ,r\).
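For concreteness, a hedged evaluation of the objective of (3.10) for \(m=4\) (ours; w is recovered from the first equality constraint, and the real cube root handles a possibly negative \(\lambda \)):

```python
# A hedged helper (ours, m = 4): the objective f of (3.10) at (x, y, lambda).
import numpy as np

def f_310(A, B, x, y, lam):
    """f(x,y,w,lam) = ||y - lam^{1/(m-1)} x||^2 + (x^T w)^2 with m - 1 = 3."""
    w = (np.einsum('ijkl,j,k,l->i', A, y, y, y)
         - np.einsum('ijkl,j,k,l->i', B, x, x, x))   # w = A y^{m-1} - B x^{m-1}
    s = np.sign(lam) * abs(lam) ** (1.0 / 3.0)        # real (m-1)th root of lam
    return np.linalg.norm(y - s * x) ** 2 + (x @ w) ** 2
```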

With the preparation of the nonlinear programming (3.10), we have the following theorem.

Theorem 3.5

The sub-symmetric SOCTEiCP has a solution if and only if the nonlinear programming problem (3.10) has a global minimum with objective value zero.

Proof

Let \(({\bar{x}},{\bar{y}},{\bar{w}},{\bar{\lambda }})\) be a global minimum of (3.10) such that \(f({\bar{x}},{\bar{y}},\bar{w},{\bar{\lambda }})=0\). It is obvious that \({\bar{x}},{\bar{w}}\in \mathcal {K}\) and \({\bar{x}}\ne 0\). Moreover, it follows from \(f({\bar{x}},{\bar{y}},\bar{w},{\bar{\lambda }})=0\) that \({\bar{y}}={\bar{\lambda }}^{\frac{1}{m-1}}{\bar{x}}\) and \({\bar{x}}^{\text {T}} {\bar{w}}=0\). Consequently, it holds that

$$\begin{aligned} {\bar{w}}=\mathcal {A}{\bar{y}}^{m-1}-\mathcal {B}{\bar{x}}^{m-1}=\bar{\lambda }\mathcal {A}{\bar{x}}^{m-1}-\mathcal {B}{\bar{x}}^{m-1}, \end{aligned}$$

which, together with the fact that \({\bar{x}}^{\text {T}} {\bar{w}}=0\), implies that \(({\bar{x}},{\bar{\lambda }})\) is a solution of the sub-symmetric SOCTEiCP.

Conversely, let \(({\bar{x}},{\bar{\lambda }})\) be a solution of the sub-symmetric SOCTEiCP. Denote \(\tilde{x}:={\bar{x}}/(e^{\text {T}} {\bar{x}})\), \(\tilde{y}:={\bar{\lambda }}^{\frac{1}{m-1}}\tilde{x}\), and \(\tilde{w}:=\mathcal {A}\tilde{y}^{m-1}-\mathcal {B}\tilde{x}^{m-1}\). It is easy to verify that \((\tilde{x},\tilde{y},\tilde{w},{\bar{\lambda }})\) is feasible for (3.10) and satisfies \(f(\tilde{x},\tilde{y},\tilde{w},{\bar{\lambda }})=0\); since f is nonnegative, it is a global minimum of (3.10).

Notice that any global minimum of a nonlinear programming problem is a stationary point. Comparatively speaking, computing a stationary point is much easier than finding a global minimum. Therefore, it is important to investigate when a stationary point of the nonlinear programming problem (3.10) is a solution of the sub-symmetric SOCTEiCP, which is addressed in the subsequent theorem.

Theorem 3.6

Assume that \(\mathcal {A,B}\in \mathcal {T}_{m,n}\) are sub-symmetric tensors. Let \(({\bar{x}},{\bar{y}},{\bar{w}},{\bar{\lambda }})\) be a stationary point of (3.10) with \({\bar{\lambda }}\ne 0\). Then \(f({\bar{x}},{\bar{y}},{\bar{w}},{\bar{\lambda }})=0\) if and only if \({\bar{\delta }}={\bar{\eta }}=0\), where \({\bar{\delta }}\) and \({\bar{\eta }}\) are the Lagrange multipliers associated with the constraints \(e^{\text {T}} x=1\) and \(e^{\text {T}} y=\lambda ^{\frac{1}{m-1}}\) in (3.10), respectively.

Proof

Since \(({\bar{x}},{\bar{y}},{\bar{w}},{\bar{\lambda }})\) is a stationary point of (3.10), there exist Lagrange multipliers \({\bar{\alpha }}\in \mathbb {R}^n\), \({\bar{\beta }}\in \mathbb {R}^r\), \({\bar{\gamma }}\in \mathbb {R}^r\), \({\bar{\mu }}\in \mathbb {R}^r\), \({\bar{\theta }}\in \mathbb {R}^r\), \({\bar{\delta }}\in \mathbb {R}\), and \({\bar{\eta }}\in \mathbb {R}\), such that the following KKT conditions for (3.10) hold:

$$\begin{aligned} \left\{ \begin{array}{l} -2{\bar{\lambda }}^{\frac{1}{m-1}}\left( {\bar{y}}-{\bar{\lambda }}^{\frac{1}{m-1}}{\bar{x}}\right) +2\left( {\bar{x}}^{\text {T}} {\bar{w}}\right) {\bar{w}}=(m-1)\left( \mathcal {B}{\bar{x}}^{m-2}\right) ^{\text {T}}{\bar{\alpha }}+2{\widehat{C}}{\bar{\beta }}+{\widehat{E}}{\bar{\gamma }}+{\bar{\delta }} e,\\ 2\left( {\bar{y}}-{\bar{\lambda }}^{\frac{1}{m-1}}{\bar{x}}\right) =-(m-1)\left( \mathcal {A}{\bar{y}}^{m-2}\right) ^{\text {T}}{\bar{\alpha }}+{\bar{\eta }} e,\\ 2\left( {\bar{x}}^{\text {T}} {\bar{w}}\right) {\bar{x}}={\bar{\alpha }}+2{\widehat{D}}{\bar{\mu }}+{\widehat{E}}{\bar{\theta }},\\ -{\frac{1}{m-1}}{\bar{\lambda }}^{{\frac{1}{m-1}}-1}{\bar{x}}^{\text {T}}\left( {\bar{y}}-{\bar{\lambda }}^{\frac{1}{m-1}}{\bar{x}}\right) =-{\frac{1}{m-1}}{\bar{\lambda }}^{{\frac{1}{m-1}}-1}{\bar{\eta }},\\ {\bar{w}}-\mathcal {A}{\bar{y}}^{m-1}+\mathcal {B}{\bar{x}}^{m-1}=0,\\ {\bar{\beta }}_i\geqslant 0,~({\bar{x}}_{\circ }^i)^2-\Vert {\bar{x}}_{\bullet }^i\Vert ^2\geqslant 0,~{\bar{\beta }}_i\left[ ({\bar{x}}_{\circ }^i)^2-\Vert {\bar{x}}_{\bullet }^i\Vert ^2\right] =0,~i=1,\cdots ,r,\\ {\bar{\gamma }}_i\geqslant 0,~{\bar{x}}_{\circ }^i\geqslant 0,~{\bar{x}}_{\circ }^i{\bar{\gamma }}_i=0,~~i=1,\cdots ,r,\\ {\bar{\mu }}_i\geqslant 0,~({\bar{w}}_{\circ }^i)^2-\Vert {\bar{w}}_{\bullet }^i\Vert ^2\geqslant 0,~{\bar{\mu }}_i\left[ ({\bar{w}}_{\circ }^i)^2-\Vert {\bar{w}}_{\bullet }^i\Vert ^2\right] =0,~i=1,\cdots ,r,\\ {\bar{\theta }}_i\geqslant 0,~{\bar{w}}_{\circ }^i\geqslant 0,~{\bar{w}}_{\circ }^i{\bar{\theta }}_i=0, ~i=1,\cdots ,r,\\ e^{\text {T}}{\bar{x}}=1,\\ e^{\text {T}}{\bar{y}}={\bar{\lambda }}^{\frac{1}{m-1}}, \end{array} \right. \end{aligned}$$
(3.11)

where \({\bar{\beta }}_i,{\bar{\gamma }}_i,{\bar{\mu }}_i,{\bar{\theta }}_i\) are the ith components of vectors \({\bar{\beta }},{\bar{\gamma }},{\bar{\mu }},{\bar{\theta }}\in \mathbb {R}^r\), respectively;

$$\begin{aligned} {\widehat{C}}:=\left[ \begin{array}{cccc} \left[ \begin{array}{c} {\bar{x}}_{\circ }^1\\ -{\bar{x}}_{\bullet }^1 \end{array} \right] &{}0_{n_1}&{}\cdots &{}0_{n_1}\\ 0_{n_2}&{}\left[ \begin{array}{c} {\bar{x}}_{\circ }^2\\ -{\bar{x}}_{\bullet }^2 \end{array} \right] &{}\cdots &{}0_{n_2}\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ 0_{n_r}&{}0_{n_r}&{}\cdots &{}\left[ \begin{array}{c} {\bar{x}}_{\circ }^r\\ -{\bar{x}}_{\bullet }^r \end{array} \right] \\ \end{array} \right] ,~{\widehat{D}}:=\left[ \begin{array}{cccc} \left[ \begin{array}{c} {\bar{w}}_{\circ }^1\\ -{\bar{w}}_{\bullet }^1 \end{array} \right] &{}0_{n_1}&{}\cdots &{}0_{n_1}\\ 0_{n_2}&{}\left[ \begin{array}{c} {\bar{w}}_{\circ }^2\\ -{\bar{w}}_{\bullet }^2 \end{array} \right] &{}\cdots &{}0_{n_2}\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ 0_{n_r}&{}0_{n_r}&{}\cdots &{}\left[ \begin{array}{c} {\bar{w}}_{\circ }^r\\ -{\bar{w}}_{\bullet }^r \end{array} \right] \\ \end{array} \right] \in \mathbb {R}^{n\times r} \end{aligned}$$

and \({\widehat{E}}:=D\) is the matrix used in (2.6).

Multiplying the first three expressions in (3.11) by \(\bar{x}^{\text {T}}\), \({\bar{y}}^{\text {T}}\), and \({\bar{w}}^{\text {T}}\), respectively, and using the last six expressions in (3.11), we have

$$\begin{aligned} \left\{ \begin{array}{l} -2{\bar{\lambda }}^{\frac{1}{m-1}}{\bar{x}}^{\text {T}}\left( {\bar{y}}-{\bar{\lambda }}^{\frac{1}{m-1}} {\bar{x}}\right) +2\left( {\bar{x}}^{\text {T}} {\bar{w}}\right) ^2=(m-1)\left( \mathcal {B}{\bar{x}}^{m-1}\right) ^{\text {T}}{\bar{\alpha }}+{\bar{\delta }},\\ 2{\bar{y}}^{\text {T}}\left( {\bar{y}}-{\bar{\lambda }}^{\frac{1}{m-1}}{\bar{x}}\right) =-(m-1) \left( \mathcal {A}{\bar{y}}^{m-1}\right) ^{\text {T}}{\bar{\alpha }}+{\bar{\lambda }}^{\frac{1}{m-1}}{\bar{\eta }},\\ 2\left( {\bar{x}}^{\text {T}} {\bar{w}}\right) ^2={\bar{w}}^{\text {T}}{\bar{\alpha }}, \end{array} \right. \end{aligned}$$

which, together with the fifth expression in (3.11), implies that

$$\begin{aligned} 2(m-1)({\bar{x}}^{\text {T}} {\bar{w}})^2+2f({\bar{x}}, {\bar{y}}, {\bar{w}}, {\bar{\lambda }})={\bar{\delta }}+{\bar{\lambda }}^{\frac{1}{m-1}}{\bar{\eta }}, \end{aligned}$$
(3.12)

where \(m\geqslant 2\).

If \({\bar{\delta }}={\bar{\eta }}=0\), it is clear from (3.12) that \(f({\bar{x}},{\bar{y}},{\bar{w}},{\bar{\lambda }})=0\). Conversely, if \(f(\bar{x},{\bar{y}},{\bar{w}},{\bar{\lambda }})=0\), then it holds that \({\bar{y}}=\bar{\lambda }^{\frac{1}{m-1}}{\bar{x}}\) and \({\bar{x}}^{\text {T}}{\bar{w}}=0\). By the fourth expression in (3.11), it holds that \({\bar{\lambda }}^{\frac{1}{m-1}}{\bar{\eta }}=0\), which implies \({\bar{\eta }}=0\) from the given condition that \({\bar{\lambda }} \ne 0\). Consequently, from \(f({\bar{x}},{\bar{y}},{\bar{w}},{\bar{\lambda }})=0\) and (3.12), we conclude that \({\bar{\delta }}=0\).

4 Numerical Experiments

In [5], the authors introduced the so-called SPA for the TGEiCP. As remarked in that paper, this algorithm is computationally efficient as long as the underlying projection step has a closed-form solution. Thus, in this section, we further report some preliminary results to verify the efficiency of the SPA for solving SOCTEiCPs.

Note that the underlying SOCTEiCP has a more complicated structure than the TGEiCP studied in [5], so it is necessary to summarize some numerical facts about the second-order cone before employing the SPA. For any vector \(z:=(z_{\circ }, z_{\bullet })\in \mathbb {R}\times \mathbb {R}^{l-1}\), it is well known from [32] (see also [29, 33]) that the spectral factorization of z is defined as

$$\begin{aligned} z=\zeta _1{u}_1+\zeta _2{u}_2, \end{aligned}$$
(4.1)

where \(\zeta _i\in \mathbb {R}\) and \({u}_i\in \mathbb {R}^l\) (\(i=1,2\)) are the spectral values and the associated spectral vectors, respectively, given by

$$\begin{aligned} \zeta _i := z_{\circ } + (-1)^i\Vert z_{\bullet }\Vert \end{aligned}$$
(4.2)

and

$$\begin{aligned} {u}_i:=\left\{ \begin{array}{ll} \displaystyle \frac{1}{2}\left( 1,(-1)^i\frac{z_{\bullet }}{\Vert z_{\bullet }\Vert }\right) ,&{}\;\;\hbox {if }z_{\bullet }\ne 0,\\ \displaystyle \frac{1}{2}\left( 1,(-1)^iw\right) ,&{}\;\;\hbox {otherwise},\\ \end{array} \right. \end{aligned}$$
(4.3)

with any vector \(w\in \mathbb {R}^{l-1}\) satisfying \(\Vert w\Vert =1\). Clearly, decomposition (4.1) is unique for the case \(z_{\bullet }\ne 0\). Define the projection of a given vector \(z\in {\mathbb {R}^l}\) onto a convex set \({\varOmega }\) as

$$\begin{aligned} {\varPi }_{{\varOmega }} (z) := \arg \min \;\left\{ \Vert z'-z\Vert \;|\;z'\in {\varOmega }\right\} . \end{aligned}$$

Then, the projection of \(z\in \mathbb {R}^l\) onto the second-order cone \(\mathcal {K}^l\) can be further written explicitly as

$$\begin{aligned} {\varPi }_{\mathcal {K}^l} (z) := \max \left\{ 0,\zeta _1\right\} {u}_1+\max \left\{ 0,\zeta _2\right\} {u}_2, \end{aligned}$$
(4.4)

where \(\zeta _i\) and \({u}_i\) (\(i=1,2\)) are defined by (4.2) and (4.3), respectively. We refer the readers to [29] for the detailed derivation of (4.4).
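The formulas (4.1)–(4.4) translate into a few lines of code. The following hedged sketch (ours) also applies the projection block-wise, which yields \({\varPi }_{\mathcal {K}}\) for the product cone (1.3) and is how the starting points in our experiments below are generated.

```python
# A hedged implementation (ours) of the projection (4.4) onto K^l and,
# block-wise, onto the product cone K of (1.3).
import numpy as np

def project_soc(z):
    """Project z = (z_o, z_*) in R^l onto the second-order cone K^l, cf. (4.4)."""
    z0, zb = z[0], z[1:]
    nzb = np.linalg.norm(zb)
    if nzb <= z0:                 # z already lies in K^l
        return z.copy()
    if nzb <= -z0:                # both spectral values nonpositive: project to 0
        return np.zeros_like(z)
    zeta2 = z0 + nzb              # only the spectral value zeta_2 is positive
    return 0.5 * zeta2 * np.concatenate(([1.0], zb / nzb))

def project_product_soc(z, blocks):
    """Apply project_soc block by block for K = K^{n_1} x ... x K^{n_r}."""
    out, off = np.empty_like(z), 0
    for ni in blocks:
        out[off:off + ni] = project_soc(z[off:off + ni])
        off += ni
    return out
```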

A close look at the SPA (see [5, Algorithm 1]) reveals a notable relaxation factor \(\alpha \) in the projection scheme, which was taken as \(\alpha =1\) in [18]. In fact, such a constant \(\alpha \) plays an important role in enlarging the step size \(s_k\), thereby accelerating the SPA in practice (see the numerical results reported in [5]). Here, we can take \(\alpha \in (1,8)\) empirically to speed up the convergence.

Throughout the experiments, we wrote the code in Matlab R2012b and conducted the numerical experiments on a TOSHIBA notebook with an Intel(R) Core(TM) i7-5600U CPU at 2.60 GHz and 8 GB RAM, running the Windows 7 Home Premium operating system.

Table 1 Nonzero components of the symmetric tensor \({\mathcal A}\) for Example 4.1
Table 2 Nonzero components of the symmetric tensor \({\mathcal B}\) for Example 4.1

In our experiments, we consider two concrete examples in which the underlying tensors \(\mathcal {A}\) and \(\mathcal {B}\) are symmetric and \(\mathcal {A}\) is strictly \(\mathcal {K}\)-positive. We take \(\mathcal {A}\) as a sparse tensor throughout this section so that the strict \(\mathcal {K}\)-positiveness of \(\mathcal {A}\) can be ensured and verified. Since all tensors here are symmetric, we list only their nonzero upper triangular entries.

Example 4.1

We consider two fourth-order 4-dimensional symmetric tensors \({\mathcal A}\) and \({\mathcal B}\), where the second-order cone is specified as \({\mathcal K}:={\mathcal K}^2\times {\mathcal K}^2\). The tensors \({\mathcal A}\) and \({\mathcal B}\) take their components as listed in Tables 1 and 2, respectively.

Example 4.2

This example deals with two fourth-order 6-dimensional symmetric tensors \({\mathcal A}\) and \({\mathcal B}\), where the components of \(\mathcal {B}\) are randomly generated in \((-2,2)\) and then rounded to one decimal digit using the Matlab function 'roundn'. The second-order cone is given by \(\mathcal {K}:=\mathcal {K}^4\times \mathcal {K}^2\), and the tensors \({\mathcal A}\) and \({\mathcal B}\) are specified as listed in Tables 3 and 4, respectively.

Table 3 Nonzero components of the symmetric tensor \({\mathcal A}\) for Example 4.2
Table 4 Nonzero components of the symmetric tensor \({\mathcal B}\) for Example 4.2

Following the suggestion in [5], we use

$$\begin{aligned} \hbox {RelErr}:=\Vert y^{(k)}\Vert :=\left\| {\mathcal B}(x^{(k)})^{m-1}-\lambda _k{\mathcal A}(x^{(k)})^{m-1}\right\| \leqslant \hbox {Tol} \end{aligned}$$
(4.5)

as the termination criterion to attain an approximate numerical solution with a preset tolerance '\(\hbox {Tol}\)'. We test four scenarios of '\(\hbox {Tol}\)' by setting \(\hbox {Tol}:=10^{-3}\), \(5\times 10^{-4}\), \(10^{-4}\), and \(5\times 10^{-5}\). In our experiments, we consider two choices of the starting point \(u^{(0)}\in \mathcal {K}\), each generated in two steps. First, we generate an intermediate vector \(z^{(0)}\): either the vector of ones, i.e., \(z^{(0)}=(1,\cdots ,1)^{\text {T}}\), or a random vector uniformly distributed in (0, 1). Second, to guarantee that the starting point satisfies \(u^{(0)}\in \mathcal {K}\), we project the intermediate vector \(z^{(0)}\) onto the second-order cone \(\mathcal {K}\), i.e., \(u^{(0)}={\varPi }_{\mathcal {K}}(z^{(0)})\). As suggested in [5], we take \(\alpha =5\) throughout the experiments. To support the claim that the SPA is reliable for finding a solution of SOCTEiCPs, we report the number of iterations ('Iter.'), the computing time in seconds ('Time'), the relative error ('RelErr') defined by (4.5), the eigenvalue ('EigValue'), and the associated eigenvector ('EigVector'). The numerical results with respect to the two initial points are listed in Tables 5 and 6, respectively.

Table 5 Computational results for the case where the intermediate vector \(z^{(0)}=(1,\cdots ,1)^{\text {T}}\)
Table 6 Computational results for the case where \(z^{(0)}\) is a random intermediate vector

It can be seen from the data reported in Tables 5 and 6 that the SPA successfully finds eigenvector–eigenvalue pairs of SOCTEiCPs, although the number of iterations increases significantly as the required solution precision increases. These numerical results show that the SPA is a reliable solver for SOCTEiCPs.

5 Conclusions

We study a class of SOCTEiCPs, which generalizes the SOCEiCP for matrices introduced in the recent papers [27, 28]. Although SOCTEiCP (1.2) is a specific case of the TGEiCP, we propose a new and potentially helpful variational inequality reformulation for the problem under consideration. As is well known, the variational inequality is a powerful tool in mathematical analysis; such a characterization might therefore help us analyze further properties of SOCTEiCP (1.2), which is left to our future work. Besides, we consider a special case of SOCTEiCP (1.2) with two symmetric tensors \(\mathcal {A}\) and \(\mathcal {B}\), and present a nonlinear programming reformulation. To go beyond the limitation of the symmetry condition, we discuss a class of slightly more general SOCTEiCPs with sub-symmetric tensors, and similarly show that solving the sub-symmetric SOCTEiCP reduces to finding a stationary point of a nonlinear programming problem. However, our results do not completely remove the bottleneck of the (sub-)symmetry condition, and in the future, we will study more general SOCTEiCPs in the absence of symmetry and sub-symmetry properties.