1 Introduction

An element A of the space \({\mathcal {S}}^{n}\) of real symmetric \(n \times n\) matrices is called copositive if \(x^TAx \ge 0\) for all vectors \(x \in {\mathbb {R}}_+^n\). The set of such matrices forms the copositive cone \(\mathcal {COP}^{n}\). This cone plays an important role in non-convex optimization, as many difficult optimization problems can be reformulated as conic programs over \(\mathcal {COP}^{n}\). For a detailed survey of the applications of this cone see, e.g., [3, 4, 10, 13].

In [8] the local structure of the cone \(\mathcal {COP}^{n}\) around a given copositive matrix A was considered. In particular, the cone of feasible directions at A, the tangent cone at A, and the minimal face of A were computed. These objects admit a description in terms of the minimal zeros of A.

A zero u of a copositive matrix A is a non-zero nonnegative vector such that \(u^TAu = 0\) [1, 6]. The support \({{\,\mathrm{supp}\,}}u\) of a zero \(u = (u_1,\dots ,u_n)^T \in {\mathbb {R}}_+^n\) is the subset of indices \(j \in \{1,\dots ,n\}\) such that \(u_j > 0\). A zero u of A is called minimal if there is no zero v of A such that \({{\,\mathrm{supp}\,}}v \subset {{\,\mathrm{supp}\,}}u\) holds strictly [11]. The minimal zero support set, i.e., the ensemble \({{\,\mathrm{supp}\,}}{{\mathcal {V}}^{A}_{\min }}\) of minimal zero supports of a copositive matrix A, is a characteristic that yields a finite classification of the matrices in \(\mathcal {COP}^{n}\). However, this classification is quite coarse, e.g., the set of matrices \(A' \in \mathcal {COP}^{n}\) which share the minimal zero support set with A may be described by different ensembles of equalities and inequalities around different points.

Here we make a further step in the study of the local properties of \(\mathcal {COP}^{n}\). While the paper [8] focussed on the infinitesimal structure of \(\mathcal {COP}^{n}\) near a given matrix A, in this work we consider finite neighbourhoods of A. To this end we propose a finer characteristic of copositive matrices, still leading to a finite classification, namely their extended minimal zero support set. In addition to the minimal zero supports of the matrix A this object contains the complementary index sets of its minimal zeros. Here the complementary index set \({{\,\mathrm{comp}\,}}{u}\) of a zero u of A is defined as the set of indices j such that \((Au)_j = 0\). We show that the set \(S_{{\mathcal {E}}}\) of matrices \(A' \in \mathcal {COP}^{n}\) which share the extended minimal zero support set \({{\mathcal {E}}}\) with A is an open subset of some explicit algebraic set \(Z_{{\mathcal {E}}} \subset {\mathcal {S}}^{n}\). As such it is described by the same set of polynomial equations at every point, which yields a complete and homogeneous characterization of its local structure.

It should be noted that a similar notion has been defined in the context of evolutionarily stable strategies in [2], see also [5]. Along with the support \(I = {{\,\mathrm{supp}\,}}{x}\) of a strategy x, which is a vector in the standard simplex, the set \(J = \{j \mid (Ax)_j = x^TAx \}\) is considered. It is called the extended support and corresponds to the complementary index set defined in the present paper. For evolutionarily stable strategies we have \(I \subset J\). In both settings the index set J is linked to the first order optimality condition at the corresponding point.

The proposed decomposition of \(\mathcal {COP}^{n}\) into subsets \(S_{{\mathcal {E}}}\) is compatible with the facial structure of the cone. For every face \({{\mathcal {F}}} \subset \mathcal {COP}^{n}\), the interior points of \({{\mathcal {F}}}\) all belong to the same subset \(S_{{\mathcal {E}}}\), and hence each \(S_{{\mathcal {E}}}\) can be represented as a disjoint union of such facial interiors. Moreover, for each subset \(S_{{\mathcal {E}}}\) we construct a coefficient matrix with entries polynomial in the elements of \(A \in S_{{\mathcal {E}}}\) such that the solution space of the homogeneous linear system of equations with this coefficient matrix is the linear hull of the minimal face of A. As a consequence, on each irreducible component of the algebraic set \(Z_{{\mathcal {E}}}\) the minimal faces of \(A \in S_{{\mathcal {E}}}\) have generically the same dimension, with possibly higher dimensions on some algebraic subset.

The main purpose of this contribution is to provide new tools for the study of the facial structure and especially the extreme rays of the copositive cone, which play a crucial role, e.g., in the verification of the exactness of computationally tractable relaxations of the copositive cone.

The remainder of the paper is structured as follows. In Sect. 1.1 we provide some notations and formal definitions. In Sect. 2 we prove our main result (Theorem 1) on the decomposition of the cone \(\mathcal {COP}^{n}\) into relatively open subsets according to the extended minimal zero support set. In Sect. 3 we derive some properties of the subsets and the extended minimal zero support set, in particular related to the facial structure of \(\mathcal {COP}^{n}\). We provide some necessary conditions on \({{\mathcal {E}}}\) for \(S_{{\mathcal {E}}}\) to be non-empty (Lemma 5), and for a subset \(S_{{{\mathcal {E}}}'}\) to intersect the boundary of another subset \(S_{{\mathcal {E}}}\) (Lemma 6). In Sect. 3.1 we provide some examples of subsets \(S_{{\mathcal {E}}}\).

1.1 Notations and definitions

The space of real symmetric matrices of size \(n \times n\) will be denoted by \({\mathcal {S}}^{n}\).

For an index set \(I \subset \{1,\dots ,n\}\), denote by \({\overline{I}}\) its complement \(\{1,\dots ,n\} \setminus I\).

We shall denote vectors by lower-case letters and matrices by upper-case letters. For a matrix A and a vector u of compatible dimension, the i-th element of the matrix-vector product Au will be denoted by \((Au)_i\). Inequalities \(u \ge \mathbf{0}\) on vectors will be meant element-wise, where we denote by \(\mathbf{0} = (0,\dots ,0)^T\) the all-zeros vector. Similarly we denote by \(\mathbf{1} = (1,\dots ,1)^T\) the all-ones vector. Let \(\varDelta = \{ u \in \mathbb R_+^n \mid \mathbf{1}^Tu = 1 \}\) be the standard simplex.

For a subset \(I \subset \{1,\dots ,n\}\) we denote by \(A_{I}\) the principal submatrix of A whose elements have row and column indices in I, i.e. \(A_{I} = (A_{ij})_{i,j \in I}\in {\mathcal {S}}^{|I|}\). For subsets \(I,J \subset \{1,\dots ,n\}\) we denote by \(A_{J \times I}\) the submatrix of A whose elements have row indices in J and column indices in I. Similarly for a vector \(u \in {\mathbb {R}}^n\) we define the sub-vector \(u_{I} = (u_i)_{i \in I}\in {\mathbb {R}}^{|I|}\).

For a nonnegative vector \(u \in {\mathbb {R}}_+^n\) we define its support as \({{\,\mathrm{supp}\,}}{u} = \{i\in \{1,\ldots ,n\}\mid u_i > 0 \}\).

A zero u of a copositive matrix A is called minimal if there exists no zero v of A such that the inclusion \({{\,\mathrm{supp}\,}}{v} \subset {{\,\mathrm{supp}\,}}{u}\) holds strictly. We shall denote the set of minimal zeros of a copositive matrix A by \({\mathcal {V}}^{A}_{\min }\) and the ensemble of supports of the minimal zeros of A by \({{\,\mathrm{supp}\,}}{{\mathcal {V}}^{A}_{\min }}\). To each index set I there exists at most one minimal zero \(u \in \varDelta \) of A with \({{\,\mathrm{supp}\,}}{u} = I\) [11, Lemma 3.5], hence the minimal zero support set \({{\,\mathrm{supp}\,}}{{\mathcal {V}}^{A}_{\min }}\) is in bijective correspondence to the minimal zeros of A which are contained in \(\varDelta \).

We now introduce the extended minimal zero support set of a copositive matrix. Note that if u is a zero of a copositive matrix A, then \(Au \ge \mathbf{0}\) [1, p. 200, lines 11–12].

Definition 1

Let \(A \in \mathcal {COP}^{n}\) and let u be a zero of A. The complementary index set \({{\,\mathrm{comp}\,}}{u}\) of u is the index set \(\{ j \mid (Au)_j = 0 \} = \overline{{{\,\mathrm{supp}\,}}(Au)}\). The extended support \({{\,\mathrm{esupp}\,}}{u}\) of u is the pair \(({{\,\mathrm{supp}\,}}{u},{{\,\mathrm{comp}\,}}{u})\) of index sets. The extended minimal zero support set \({{\,\mathrm{esupp}\,}}{{\mathcal {V}}^{A}_{\min }}\) is the ensemble of extended supports of the minimal zeros of A.

By [7, Lemma 2.5] we have that \({{\,\mathrm{supp}\,}}{u} \subset {{\,\mathrm{comp}\,}}{u}\) for every zero u of a copositive matrix A.
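As a numerical illustration of Definition 1 (a sketch outside the formal development; the use of numpy, the floating-point tolerance `tol`, and the 0-based indexing are assumptions of the illustration), the extended support of a zero can be computed as follows, here for the well-known Horn matrix:

```python
import numpy as np

def extended_support(A, u, tol=1e-9):
    """Return (supp u, comp u) as sorted 0-based index lists.

    supp u collects the indices j with u_j > 0, comp u the indices j with
    (Au)_j = 0; exact comparisons are replaced by the tolerance tol.
    """
    Au = A @ u
    supp = [j for j in range(len(u)) if u[j] > tol]
    comp = [j for j in range(len(u)) if abs(Au[j]) <= tol]
    return supp, comp

# The Horn matrix, a classical copositive matrix with non-trivial zeros.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
u = np.array([0.5, 0.5, 0.0, 0.0, 0.0])     # a minimal zero: u^T H u = 0
supp, comp = extended_support(H, u)         # supp = [0, 1], comp = [0, 1, 2, 4]
```

In agreement with the inclusion \({{\,\mathrm{supp}\,}}{u} \subset {{\,\mathrm{comp}\,}}{u}\) above, the computed support is contained in the computed complementary index set.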

Let \({{\mathcal {E}}} = \{(I_{\alpha },J_{\alpha })\}_{{\alpha } = 1,\dots ,m}\) be a finite collection of pairs of index sets. Define the set

$$\begin{aligned} S_{{\mathcal {E}}} = \{ A \in \mathcal {COP}^{n} \mid {{\,\mathrm{esupp}\,}}{{\mathcal {V}}^{A}_{\min }} = {{\mathcal {E}}} \} \end{aligned}$$

of copositive matrices having extended minimal zero support set \({{\mathcal {E}}}\). Then the whole copositive cone \(\mathcal {COP}^{n}\) decomposes into a disjoint union of a finite number of such subsets \(S_\mathcal{E}\). We shall also associate to \({{\mathcal {E}}}\) the set

$$\begin{aligned} Z_{{\mathcal {E}}} = \{ A \in {\mathcal {S}}^{n} \mid A_{J_{\alpha } \times I_{\alpha }} \text{ is rank deficient for all } {\alpha } = 1,\dots ,m \}. \end{aligned}$$

Clearly \(Z_{{\mathcal {E}}}\) is algebraic, given by the zero locus of a finite number of determinantal polynomials in the elements of A.
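Membership in \(Z_{{\mathcal {E}}}\) can be tested numerically by checking each submatrix for rank deficiency. In the sketch below, the numpy rank computation, the tolerance, and the sample pair of index sets (taken from the Horn matrix, 0-based) are assumptions of the illustration:

```python
import numpy as np

def in_Z(A, E, tol=1e-9):
    """Check membership of A in Z_E for E a list of 0-based pairs (I, J):
    every submatrix A_{J x I} must be rank deficient, i.e. have rank < |I|,
    so that a non-zero right kernel vector exists."""
    for I, J in E:
        if np.linalg.matrix_rank(A[np.ix_(J, I)], tol) >= len(I):
            return False
    return True

# One pair (I, J) from the extended minimal zero support set of the Horn matrix.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
E = [([0, 1], [0, 1, 2, 4])]
inside = in_Z(H, E)           # True: H_{J x I} has rank 1 < 2

# A perturbation destroys the rank deficiency and leaves Z_E.
H2 = H.copy()
H2[0, 1] = H2[1, 0] = -0.9
outside = in_Z(H2, E)         # False
```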

2 Openness of \(S_{{\mathcal {E}}}\) in \(Z_{{\mathcal {E}}}\)

In this section we prove our main result, which states that the sets \(S_{{\mathcal {E}}}\) of matrices sharing the same extended minimal zero support set \({{\mathcal {E}}}\) are open in the relative topology of the algebraic set \(Z_{{\mathcal {E}}}\). We shall need the following statements [11, Corollary 3.4 and Lemma 3.7].

Lemma 1

Let A be copositive and u a zero of A. Then u is a finite sum of minimal zeros of A.

Lemma 2

Let \(A \in \mathcal {COP}^{n}\) and \(I \subset \{1,\dots ,n\}\) be non-empty. Then the following are equivalent.

(a) A has a minimal zero with support I.

(b) The principal submatrix \(A_I\) is positive semi-definite with corank 1, and the generator of \(\ker \,A_I\) can be chosen with all its elements positive.
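Condition (b) can be checked numerically by an eigenvalue decomposition. The following sketch (numpy and the tolerance are assumptions of the illustration) tests whether an index set is the support of a minimal zero, here for the Horn matrix:

```python
import numpy as np

def is_minimal_zero_support(A, I, tol=1e-9):
    """Test condition (b) of Lemma 2: A_I is positive semi-definite with
    corank 1, and the kernel generator can be chosen element-wise positive."""
    AI = A[np.ix_(I, I)]
    w, V = np.linalg.eigh(AI)                 # eigenvalues in ascending order
    if w[0] < -tol:                           # A_I not positive semi-definite
        return False
    if int(np.sum(np.abs(w) <= tol)) != 1:    # corank of A_I must equal 1
        return False
    v = V[:, 0]                               # generator of the kernel of A_I
    if v.sum() < 0:
        v = -v                                # normalize the sign
    return bool(np.all(v > tol))              # all elements must be positive

H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
ok = is_minimal_zero_support(H, [0, 1])   # True: support of a minimal zero
no = is_minimal_zero_support(H, [0, 2])   # False: kernel generator is (1, -1)
```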

We may now proceed to our main results.

Lemma 3

Let \({{\mathcal {E}}} = \{(I_{\alpha },J_{\alpha })\}_{{\alpha } = 1,\dots ,m}\) be a collection of pairs of index sets. Then \(S_{{\mathcal {E}}} \subset Z_{{\mathcal {E}}}\).

Proof

We have to show that whenever a matrix \(A \in \mathcal {COP}^{n}\) has extended minimal zero support set \({{\mathcal {E}}}\), its submatrices \(A_{J_{\alpha } \times I_{\alpha }}\) are rank deficient for all \({\alpha } = 1,\dots ,m\). First note that \(I_{\alpha } \subset J_{\alpha }\), and hence \(A_{J_{\alpha } \times I_{\alpha }}\) is rank deficient if and only if it has a non-zero right kernel vector. Such a kernel vector is readily provided by the sub-vector \(u^{\alpha }_{I_{\alpha }}\), where \(u^{\alpha }\) is a minimal zero of A having support \(I_{\alpha }\). This completes the proof. \(\square \)

Theorem 1

Let \(A \in \mathcal {COP}^{n}\) and let \({{\mathcal {E}}} = \{ (I_{\alpha },J_{\alpha }) \}_{{\alpha } = 1,\dots ,m}\) be the extended minimal zero support set of A. Then there exists a neighbourhood \(U \subset {\mathcal {S}}^{n}\) of A such that \(U \cap S_\mathcal{E} = U \cap Z_{{\mathcal {E}}}\).

Proof

Assume for the sake of contradiction that there exists a sequence \(A_k \in Z_{{\mathcal {E}}} \setminus S_{{\mathcal {E}}}\) of matrices converging to A.

Let \(u^{\alpha } \in \varDelta \) be the minimal zero of A with support \(I_{\alpha }\) and complementary index set \(J_{\alpha }\), \({\alpha } = 1,\dots ,m\). By Lemma 2 the submatrix \(A_{I_{\alpha }}\) is positive semi-definite with corank 1. The 1-dimensional kernel of this submatrix is generated by the element-wise positive sub-vector \(u^{\alpha }_{I_{\alpha }}\).

By definition of \(Z_{{\mathcal {E}}}\) the submatrices \((A_k)_{J_{\alpha } \times I_{\alpha }}\) are rank deficient. Since \(I_{\alpha } \subset J_{\alpha }\), the submatrices \((A_k)_{I_{\alpha }}\) are also rank deficient, i.e., their corank is at least 1. Since the corank is upper semi-continuous, it is at most 1 for all but finitely many k. Without loss of generality we may assume that the corank of \((A_k)_{I_{\alpha }}\) equals 1 for all k, so that \((A_k)_{I_{\alpha }}\) has a 1-dimensional kernel. Since \((A_k)_{I_{\alpha }} \rightarrow A_{I_{\alpha }}\), this kernel tends to the 1-dimensional subspace generated by the sub-vector \(u^{\alpha }_{I_{\alpha }} > \mathbf{0}\). Let us choose vectors \(v^{\alpha }_k\) with support \({{\,\mathrm{supp}\,}}{v^{\alpha }_k} \subset I_{\alpha }\) such that the sub-vectors \((v^{\alpha }_k)_{I_{\alpha }}\) generate the kernel of \((A_k)_{I_{\alpha }}\) and \(v^{\alpha }_k \rightarrow u^{\alpha }\). Without loss of generality we may assume that all sub-vectors \((v^{\alpha }_k)_{I_{\alpha }}\) are element-wise positive and their elements sum to 1. The vectors \(v^{\alpha }_k\) then have the properties

$$\begin{aligned} v^{\alpha }_k \in \varDelta ,\quad {{\,\mathrm{supp}\,}}{v^{\alpha }_k} = I_{\alpha }, \quad (A_kv^{\alpha }_k)_{I_{\alpha }} = \mathbf{0}. \end{aligned}$$
(1)

The submatrix \(A_{I_{\alpha }}\) has \(|I_{\alpha }|-1\) positive and one zero eigenvalue. Since \((A_k)_{I_{\alpha }} \rightarrow A_{I_{\alpha }}\) and the submatrices \((A_k)_{I_{\alpha }}\) have exactly one zero eigenvalue, the other eigenvalues of \((A_k)_{I_{\alpha }}\) must be positive for all k sufficiently large. Hence we may assume without loss of generality that the submatrices \((A_k)_{I_{\alpha }}\) are positive semi-definite for all k. From (1) it follows by Lemma 2 that \((v^{\alpha }_k)_{I_{\alpha }}\) is a minimal zero of the submatrix \((A_k)_{I_{\alpha }}\).

By definition the submatrix \(B_{J_{\alpha } \times I_{\alpha }}\) is rank deficient for every matrix \(B \in Z_{{\mathcal {E}}}\), which by virtue of the inclusion \(I_{\alpha } \subset J_{\alpha }\) implies that it has a non-zero right kernel vector. This kernel vector is also in the kernel of the principal submatrix \(B_{I_{\alpha }}\). However, the kernel of \((A_k)_{I_{\alpha }}\) is 1-dimensional and generated by the sub-vector \((v^{\alpha }_k)_{I_{\alpha }}\). Therefore \((v^{\alpha }_k)_{I_{\alpha }}\) is also in the kernel of \((A_k)_{J_{\alpha } \times I_{\alpha }}\), and \((A_kv^{\alpha }_k)_{J_{\alpha }} = \mathbf{0}\). On the other hand, we have \(A_kv^{\alpha }_k \rightarrow Au^{\alpha }\), and hence we may assume without loss of generality that \({{\,\mathrm{supp}\,}}(Au^{\alpha }) \subset {{\,\mathrm{supp}\,}}(A_kv^{\alpha }_k)\) for all k. Since \({{\,\mathrm{comp}\,}}{u^{\alpha }} = J_{\alpha }\) by assumption, we obtain \(\overline{{{\,\mathrm{supp}\,}}{(A_kv^{\alpha }_k)}} = J_{\alpha }\).

If \(A_k \in \mathcal {COP}^{n}\), then by the preceding \(v^{\alpha }_k\) is a minimal zero of \(A_k\) with extended support \((I_{\alpha },J_{\alpha })\). Here minimality of \(v^{\alpha }_k\) follows from the fact that the kernel of \((A_k)_{I_{\alpha }}\) is 1-dimensional and this submatrix is positive semi-definite. Hence every other zero with support in \(I_{\alpha }\) must be proportional to \(v^{\alpha }_k\).

Let us show that indeed \(A_k \in \mathcal {COP}^{n}\) except for possibly a finite number of indices k. For each k, consider the problem

$$\begin{aligned} \min _{w \in \varDelta }\ \frac{1}{2}w^TA_kw. \end{aligned}$$
(2)
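Problem (2) also underlies a simple numerical heuristic for copositivity: a negative value of the objective at any point of \(\varDelta\) certifies non-copositivity, while the minimum over \(\varDelta\) is nonnegative exactly for copositive matrices. A rough Monte Carlo sketch (the sampling scheme, sample size, and seed are assumptions of the illustration, and sampling may of course miss the true minimizer):

```python
import numpy as np

def simplex_min_estimate(A, samples=5000, seed=0):
    """Estimate min_{w in Delta} (1/2) w^T A w by sampling the simplex.

    Dirichlet(1, ..., 1) samples are uniformly distributed on the standard
    simplex; the vertices (giving the diagonal entries of A) are added
    explicitly. A negative value certifies that A is not copositive."""
    rng = np.random.default_rng(seed)
    W = rng.dirichlet(np.ones(A.shape[0]), size=samples)
    vals = 0.5 * np.einsum('ki,ij,kj->k', W, A, W)
    vals = np.concatenate([vals, 0.5 * np.diag(A)])
    return float(vals.min())

H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
gamma_H = simplex_min_estimate(H)   # no negative value found (H is copositive)

B = H.copy()
B[0, 0] = -1.0                      # the vertex e_1 now violates copositivity
gamma_B = simplex_min_estimate(B)   # at most -0.5
```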

Assume for the sake of contradiction that there exists a sub-sequence of matrices \(A_k\) converging to A, which for brevity will also be denoted by \(A_k\), such that the optimal value \(\gamma _k\) of problem (2) is negative for all k. Let \(w_k^* \in \varDelta \) be the corresponding minimizers. Without loss of generality we may pass to a sub-sequence \(\{w_k^*\}\) which converges to some vector \(u^* \in \varDelta \). Then \(0 > 2\gamma _k = (w_k^*)^TA_kw_k^* \rightarrow (u^*)^TAu^*\), and we must have \((u^*)^TAu^* \le 0\). However, the matrix A is copositive, which implies \((u^*)^TAu^* \ge 0\) and hence \(u^*\) is a zero of A. Define the index sets \(I = {{\,\mathrm{supp}\,}}{u^*}\), \(J = {{\,\mathrm{comp}\,}}{u^*}\), and note that \(I \subset J\). Since \(w^*_k \rightarrow u^*\), we may assume without loss of generality that \(I \subset {{\,\mathrm{supp}\,}}{w^*_k}\) for all k.

By Lemma 1 the zero \(u^*\) can be represented as a sum of minimal zeros of A. Equivalently, \(u^*\) is a convex combination of the minimal zeros \(u^{\alpha }\), and there exist nonnegative numbers \(\eta _{\alpha }\), \(\sum _{{\alpha }=1}^m \eta _{\alpha } = 1\), such that \(u^* = \sum _{{\alpha }=1}^m \eta _{\alpha }u^{\alpha }\). Note that \({{\,\mathrm{supp}\,}}{u^*} = \bigcup _{{\alpha }:\,\eta _{\alpha } > 0}I_{\alpha }\).

We have \(\mathbf{0} = (Au^*)_J = \sum _{{\alpha }=1}^m \eta _{\alpha } (Au^{\alpha })_J\). However, \(Au^{\alpha } \ge \mathbf{0}\) for all \({\alpha }\). Therefore \((Au^{\alpha })_J = \mathbf{0}\) and as a consequence \(J \subset J_{\alpha }\) and thus also \((A_kv^{\alpha }_k)_J = \mathbf{0}\) for all \({\alpha }\) such that \(\eta _{\alpha } > 0\). For each k, define \(v_k = \sum _{{\alpha }=1}^m \eta _{\alpha }v^{\alpha }_k\). By (1) we have \(v_k \in \varDelta \) for all k, and

$$\begin{aligned} {{\,\mathrm{supp}\,}}{v_k} = \bigcup _{{\alpha }:\,\eta _{\alpha }> 0}{{\,\mathrm{supp}\,}}{v^{\alpha }_k} = \bigcup _{{\alpha }:\,\eta _{\alpha } > 0}I_{\alpha } = I. \end{aligned}$$

Moreover, \((A_kv_k)_J = \sum _{{\alpha }=1}^m \eta _{\alpha }(A_kv^{\alpha }_k)_J = \mathbf{0}\), and \(v_k^TA_kv_k = (v_k)_I^T(A_kv_k)_I = 0\).

Let us consider the first order optimality condition at \(w^*_k\). It states that for each k there exists a nonnegative vector \(\lambda _k \in {\mathbb {R}}_+^n\) and a number \(\mu _k\) such that

$$\begin{aligned} \frac{\partial }{\partial w}\left( \frac{1}{2}w^TA_kw - \lambda _k^Tw + \mu _k(1 - \mathbf{1}^Tw) \right) = A_kw - \lambda _k - \mu _k\mathbf{1} = \mathbf{0} \end{aligned}$$

at \(w = w_k^*\), and \(\lambda _k^Tw_k^* = 0\). Multiplying from the left by \((w_k^*)^T\), we obtain

$$\begin{aligned} 0 = (w_k^*)^TA_kw_k^* - (w_k^*)^T\lambda _k - \mu _k\mathbf{1}^Tw_k^* = 2\gamma _k - \mu _k, \end{aligned}$$

and hence

$$\begin{aligned} \lambda _k = A_kw^*_k - \mu _k\mathbf{1} = A_kw^*_k - 2\gamma _k\mathbf{1}. \end{aligned}$$

Passing to the limit on both sides and taking into account \(2\gamma _k \rightarrow (u^*)^TAu^* = 0\), we obtain \(\lim _{k \rightarrow \infty }\lambda _k = Au^*\). Without loss of generality we may hence assume that \({{\,\mathrm{supp}\,}}(Au^*) \subset {{\,\mathrm{supp}\,}}{\lambda _k}\) for all k. It follows that

$$\begin{aligned} {{\,\mathrm{supp}\,}}{w^*_k} \subset \overline{{{\,\mathrm{supp}\,}}{\lambda _k}} \subset \overline{{{\,\mathrm{supp}\,}}(Au^*)} = J. \end{aligned}$$

Let us now introduce a parameter \(\tau \ge 0\) and consider the vector \(w_k(\tau ) = (1-\tau )v_k + \tau w^*_k\). By virtue of \((w^*_k)^TA_kv_k = (w^*_k)_J^T(A_kv_k)_J = 0\) the value of the objective function of problem (2) on this vector equals

$$\begin{aligned} \frac{1}{2}w_k(\tau )^TA_kw_k(\tau )&= \frac{(1-\tau )^2}{2}v_k^TA_kv_k + \tau (1-\tau )(w^*_k)^TA_kv_k + \frac{\tau ^2}{2}(w^*_k)^TA_kw^*_k \\&= \tau ^2\gamma _k. \end{aligned}$$

Recall that \(I = {{\,\mathrm{supp}\,}}{v_k} \subset {{\,\mathrm{supp}\,}}{w^*_k}\) and hence the minimal face of \(w_k^*\) in \(\varDelta \) contains the vector \(v_k\). Since \(w_k^* = w_k(1)\) is in the relative interior of its minimal face, there exists \(\tau > 1\) such that \(w_k(\tau )\) is also in this face and hence in \(\varDelta \). However, \(\tau ^2\gamma _k < \gamma _k\) for such \(\tau \), contradicting that \(\gamma _k\) is the minimum of the objective function over \(\varDelta \).

Thus we may assume that \(A_k \in \mathcal {COP}^{n}\) for all k. It remains to show that \({{\,\mathrm{esupp}\,}}{{\mathcal {V}}^{A_k}_{\min }} = {{\mathcal {E}}}\) for all sufficiently large k. The minimal zeros \(v^{\alpha }_k\) of \(A_k\) ensure that \({{\mathcal {E}}} \subset {{\,\mathrm{esupp}\,}}{{\mathcal {V}}^{A_k}_{\min }}\). Let us show the opposite inclusion.

Suppose for the sake of contradiction that there exists a pair of index sets \(({\hat{I}},{\hat{J}})\) which is not contained in \({{\mathcal {E}}}\) and such that \(A_k\) has a minimal zero \({\hat{u}}_k \in \varDelta \) with extended support \(({\hat{I}},{\hat{J}})\) for sufficiently large k. Without loss of generality assume that \({\hat{u}}_k \rightarrow {\hat{u}} \in \varDelta \). Then \(0 = {\hat{u}}_k^TA_k{\hat{u}}_k \rightarrow {\hat{u}}^TA{\hat{u}}\), and \({\hat{u}}\) must be a zero of A with \({{\,\mathrm{supp}\,}}{{\hat{u}}} \subset {\hat{I}}\). Hence there exists a minimal zero \(u^{\alpha }\) of A such that \(I_{\alpha } \subset {{\,\mathrm{supp}\,}}{{\hat{u}}} \subset {\hat{I}}\). However, every \(A_k\) possesses a minimal zero with support \(I_{\alpha }\), namely \(v^{\alpha }_k\). By the minimality of \({\hat{u}}_k\) we then must have \({\hat{I}} = I_{\alpha }\) and \({\hat{u}}_k = v^{\alpha }_k\), contradicting the assumption \(({\hat{I}},{\hat{J}}) \not \in {{\mathcal {E}}}\).

Thus \(A_k \in S_{{\mathcal {E}}}\) for sufficiently large k, which completes the proof of the theorem. \(\square \)

The theorem implies that in a neighbourhood of any copositive matrix \(A \in \mathcal {COP}^{n}\) with \({{\,\mathrm{esupp}\,}}{{\mathcal {V}}^{A}_{\min }} = {{\mathcal {E}}}\), the structure of the set \(S_{{\mathcal {E}}}\) of copositive matrices sharing the extended minimal zero support set with A is completely described by the polynomial relations determining the algebraic set \(Z_{{\mathcal {E}}}\). We have the following result.

Corollary 1

Let \({{\mathcal {E}}} = \{ (I_{\alpha },J_{\alpha }) \}_{{\alpha } = 1,\dots ,m}\) be an arbitrary collection of pairs of index sets. Then \(S_{{\mathcal {E}}}\) is an open subset in the relative topology of the algebraic set \(Z_{{\mathcal {E}}}\).

Proof

If \(S_{{\mathcal {E}}} = \emptyset \), then the assertion of the corollary is trivial. In the opposite case it follows from Theorem 1. \(\square \)

3 Properties of the subsets \(S_{{\mathcal {E}}}\)

In this section we establish some properties of the extended minimal zero support set and the corresponding subsets \(S_{{\mathcal {E}}}\), in particular in relation to the facial structure of \(\mathcal {COP}^{n}\).

First we consider the action of the automorphism group of \(\mathcal {COP}^{n}\) on the decomposition into subsets \(S_{{\mathcal {E}}}\).

Lemma 4

The decomposition of \(\mathcal {COP}^{n}\) into subsets \(S_{{\mathcal {E}}}\) is invariant under scaling \(A \mapsto DAD\) by positive definite diagonal matrices and equivariant under the action \(A \mapsto PAP^T\) of the symmetric group \(S_n\).

Proof

Let u be a minimal zero of \(A \in \mathcal {COP}^{n}\) with extended support (IJ).

Suppose D is a positive definite diagonal matrix. Then \(D^{-1}u\) is a minimal zero of the diagonally scaled matrix \(DAD \in \mathcal {COP}^{n}\). It is easily seen that the extended support of \(D^{-1}u\) is again (IJ). Hence A and DAD have the same extended minimal zero support set and reside in the same subset \(S_{{\mathcal {E}}}\).

On the other hand, let \(P \in S_n\) be a permutation matrix. Then Pu is a minimal zero of the permuted matrix \(PAP^T\). However, the extended support \(({\tilde{I}},{\tilde{J}})\) of Pu is obtained from (IJ) by element-wise application of the permutation P. Hence the extended minimal zero support set of \(PAP^T\) is obtained from \({{\mathcal {E}}}\) by the element-wise action of P.

This completes the proof. \(\square \)
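The invariance under diagonal scaling can be verified numerically. In the sketch below (numpy, the Horn matrix, and the particular scaling vector are assumptions of the illustration), the zero \(D^{-1}u\) of DAD has the same extended support as the zero u of A:

```python
import numpy as np

def esupp(A, u, tol=1e-9):
    """Extended support (supp u, comp u) of a zero u of A, 0-based."""
    Au = A @ u
    return ([j for j in range(len(u)) if u[j] > tol],
            [j for j in range(len(u)) if abs(Au[j]) <= tol])

H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
u = np.array([0.5, 0.5, 0.0, 0.0, 0.0])    # a minimal zero of H
d = np.array([2.0, 3.0, 0.5, 1.0, 4.0])    # positive diagonal entries of D
DHD = np.diag(d) @ H @ np.diag(d)          # the scaled matrix D H D
v = u / d                                  # the zero D^{-1} u of D H D
same = esupp(H, u) == esupp(DHD, v)        # True: extended supports agree
```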

We have the following simple properties.

Lemma 5

Let \({{\mathcal {E}}} = \{ (I_{\alpha },J_{\alpha }) \}_{{\alpha } = 1,\dots ,m}\) be a collection of pairs of index sets such that \(S_{{\mathcal {E}}} \not = \emptyset \). Then for every \(\alpha ,\beta \) we have \(I_{\alpha } \subset J_{\beta }\) if and only if \(I_{\beta } \subset J_{\alpha }\). Moreover, \(I_{\alpha } \not = \emptyset \) and \(I_{\alpha } \subset J_{\alpha }\) for all \(\alpha \). If \(\alpha \not = \beta \), then \(I_{\alpha } \not \subset I_{\beta }\).

Proof

Let \(A \in S_{{\mathcal {E}}}\) and let \(u^{\alpha }\) be minimal zeros of A with \({{\,\mathrm{supp}\,}}{u^{\alpha }} = I_{\alpha }\), \({\alpha } = 1,\dots ,m\). Since \(u^{\alpha } \not = \mathbf{0}\), we have \(I_{\alpha } \not = \emptyset \) for all \(\alpha \).

We shall show that the inclusion \(I_{\beta } \subset J_{\alpha }\) is equivalent to the relation \((u^{\beta })^TAu^{\alpha } = 0\). Indeed, since \(u^{\beta } \ge \mathbf{0}\) and \(Au^{\alpha } \ge \mathbf{0}\), the relation \((u^{\beta })^TAu^{\alpha } = 0\) is equivalent to \({{\,\mathrm{supp}\,}}{u^{\beta }} \cap {{\,\mathrm{supp}\,}}(Au^{\alpha }) = I_{\beta } \cap \overline{J_{\alpha }} = \emptyset \). This in turn is equivalent to \(I_{\beta } \subset J_{\alpha }\).

By the symmetry of the condition \((u^{\beta })^TAu^{\alpha } = 0\) with respect to an exchange of \(\alpha ,\beta \) we obtain the first claim of the lemma.

The second claim follows from the relation \((u^{\alpha })^TAu^{\alpha } = 0\).

The last assertion holds by the minimality property of the supports \(I_{\alpha }\) of the minimal zeros \(u^{\alpha }\). \(\square \)

If the set \(S_{{\mathcal {E}}}\) is non-empty and not the zero set \(\{{\mathbf{0}}\}\), then it has a boundary, which by Corollary 1 is a subset of \(Z_{{\mathcal {E}}} \setminus S_{{\mathcal {E}}}\). Since \(\mathcal {COP}^{n}\) is closed, this boundary consists of copositive matrices, and hence of elements of other subsets \(S_{{{\mathcal {E}}}'}\) with \({{\mathcal {E}}}' \not = {{\mathcal {E}}}\). The following result describes a relation between the two collections \({{\mathcal {E}}}',{{\mathcal {E}}}\).

Lemma 6

Let \({{\mathcal {E}}} = \{ (I_{\alpha },J_{\alpha }) \}_{{\alpha } = 1,\dots ,m}\) be a collection of pairs of index sets, and let \(A_k \in S_{{\mathcal {E}}}\) be a sequence of matrices tending to some limit \(A \in S_{{{\mathcal {E}}}'}\), \({{\mathcal {E}}}' = \{ (I'_{\alpha },J'_{\alpha }) \}_{{\alpha } = 1,\dots ,m'}\). Then for every \({\alpha } = 1,\dots ,m\) there exists \({\alpha }' \in \{1,\dots ,m'\}\) such that \(I'_{{\alpha }'} \subset I_{\alpha }\), \(J_{\alpha } \subset J'_{{\alpha }'}\). In particular, we have \(Z_{{{\mathcal {E}}}'} \subset Z_{{\mathcal {E}}}\).

Proof

Let \(u_k \in \varDelta \) be the minimal zero of \(A_k\) with extended support \((I_{\alpha },J_{\alpha })\). Assume without loss of generality that \(u_k \rightarrow u \in \varDelta \). We have \(0 = u_k^TA_ku_k \rightarrow u^TAu\), and hence u is a zero of A. Moreover, \({{\,\mathrm{supp}\,}}{u} \subset {{\,\mathrm{supp}\,}}{u_k} = I_{\alpha }\). On the other hand, \(A_ku_k \rightarrow Au\), and hence \({{\,\mathrm{supp}\,}}(Au) \subset {{\,\mathrm{supp}\,}}(A_ku_k) = \overline{J_{\alpha }}\). It follows that \(J_{\alpha } \subset {{\,\mathrm{comp}\,}}{u}\).

By Lemma 1 the zero u of A can be decomposed as a sum of minimal zeros of A, \(u = \sum _{{\alpha }'} u^{{\alpha }'}\) with the extended support of the minimal zero \(u^{{\alpha }'}\) being \((I'_{{\alpha }'},J'_{{\alpha }'}) \in {{\mathcal {E}}}'\). Note also that \(Au = \sum _{{\alpha }'} Au^{{\alpha }'}\). Now both \(u^{{\alpha }'}\) and \(Au^{{\alpha }'}\) are nonnegative vectors, and hence

$$\begin{aligned} I'_{{\alpha }'} = {{\,\mathrm{supp}\,}}{u^{{\alpha }'}} \subset {{\,\mathrm{supp}\,}}{u},\ {{\,\mathrm{supp}\,}}(Au^{{\alpha }'}) \subset {{\,\mathrm{supp}\,}}(Au) \end{aligned}$$

for all \({\alpha }'\) appearing in the sum, the second inclusion being equivalent to \({{\,\mathrm{comp}\,}}{u} \subset J'_{{\alpha }'}\).

The first assertion of the lemma now readily follows.

Now if the submatrix \(B_{J' \times I'}\) of some matrix \(B \in {\mathcal {S}}^{n}\) has a non-zero right kernel vector, then also \(B_{J \times I}\) has a non-zero right kernel vector whenever \(I' \subset I\), \(J \subset J'\). This proves the second assertion. \(\square \)

We now pass to the properties related to the facial structure of \(\mathcal {COP}^{n}\).

Lemma 7

All matrices in the relative interior of a face \({{\mathcal {F}}} \subset \mathcal {COP}^{n}\) belong to the same subset \(S_{{\mathcal {E}}}\). The matrices in the relative boundary of the face \({{\mathcal {F}}}\) do not belong to this subset \(S_{{\mathcal {E}}}\).

Proof

Let A in the relative interior of \({{\mathcal {F}}}\) have extended minimal zero support set \({{\mathcal {E}}} = \{ (I_{\alpha },J_{\alpha }) \}_{{\alpha } = 1,\dots ,m}\), and let \(u^{\alpha }\) be minimal zeros of A with support \(I_{\alpha }\). Note that \({{\mathcal {F}}}\) is the minimal face of A. By [8, Theorem 17] the linear hull of \({{\mathcal {F}}}\) is given by all matrices \(B \in {\mathcal {S}}^{n}\) such that \((Bu^{\alpha })_{J_{\alpha }} = {\mathbf{0}}\) for all \(\alpha = 1,\dots ,m\). Hence this linear hull is a subset of \(Z_{{\mathcal {E}}}\). Thus by virtue of Theorem 1 there exists a neighbourhood U of A such that \(U \cap {{\mathcal {F}}} \subset S_{{\mathcal {E}}}\).

It follows that the extended minimal zero support set is locally constant on the relative interior of the face \({{\mathcal {F}}}\). Since the relative interior is connected, the first assertion of the lemma follows.

On the other hand, suppose for the sake of contradiction that \(A' \in \partial {{\mathcal {F}}} \cap S_{{\mathcal {E}}}\). Then again by Theorem 1 there exists a neighbourhood \(U'\) of \(A'\) in the linear hull of \({{\mathcal {F}}}\) such that \(U' \subset S_{{\mathcal {E}}}\) and hence \(U' \subset \mathcal {COP}^{n}\). This contradicts the assumption \(A' \in \partial {{\mathcal {F}}}\) and proves the second assertion of the lemma. \(\square \)

Corollary 2

Each of the subsets \(S_{{\mathcal {E}}} \subset \mathcal {COP}^{n}\) is a disjoint union of relative interiors of faces of \(\mathcal {COP}^{n}\).

Proof

The corollary follows from Lemma 7 and the fact that relative interiors of different faces do not intersect. \(\square \)

We shall need the following auxiliary result.

Lemma 8

Let M be a \(k \times k\) matrix of rank \(k-1\) and with left kernel vector having a non-zero first element. Denote \(I_i = \{1,\dots ,k\} \setminus \{i\}\), \(i = 1,\dots ,k\). Then the right kernel of M is generated by the vector \(u \in {\mathbb {R}}^k\) with elements \(u_i = (-1)^i\det M_{I_1 \times I_i}\).

Proof

By assumption the first row of M is a linear combination of the other \(k-1\) rows. Since the matrix M is of rank \(k - 1\), the remaining \(k-1\) rows are linearly independent and at least one of the determinants defining the elements of u is non-zero. Let the vector \(v = (v_1,\dots ,v_k)^T\) generate the right kernel of M.

Now replace the elements of the first row of M by independent variables \(x_1,\dots ,x_k\) and let \(f(x) = -\sum _{i=1}^k u_ix_i\) be the determinant of the so-modified matrix. This determinant is zero if and only if the vector \(x = (x_1,\dots ,x_k)\) is a linear combination of the other rows of the matrix. In this case v is a right kernel vector of the matrix, and \(\sum _{i=1}^k v_ix_i = 0\). Thus v must be proportional to u, which completes the proof. \(\square \)

Let now \(A \in \mathcal {COP}^{n}\) be arbitrary and let \({{\mathcal {E}}}\) be the extended minimal zero support set of A. Let \({{\mathcal {F}}}\) be the minimal face of A. As mentioned in the proof of Lemma 7, its linear hull is given by the solution space of a linear homogeneous system of equations with the non-zero coefficients being the positive elements of the minimal zeros of A. By Lemma 8, these elements can be expressed by polynomials in the elements of the matrix A. Moreover, the linear system has the same form for all matrices in \(S_{{\mathcal {E}}}\).
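The linear system describing the linear hull of the minimal face can be set up explicitly. The following sketch (numpy, 0-based indices, and the parametrization of symmetric matrices by their upper triangle are assumptions of the illustration) computes the dimension of the solution space of \((Bu^{\alpha })_{J_{\alpha }} = \mathbf{0}\) for given minimal zeros and complementary index sets:

```python
import numpy as np
from itertools import combinations_with_replacement

def minimal_face_hull_dim(n, zeros_and_comps, tol=1e-9):
    """Dimension of {B in S^n : (B u)_J = 0 for all pairs (u, J)}.

    Symmetric matrices B are parametrized by their upper-triangular entries;
    each pair (u, J) contributes the |J| linear equations (B u)_j = 0, j in J.
    """
    basis = list(combinations_with_replacement(range(n), 2))
    rows = []
    for u, J in zeros_and_comps:
        for j in J:
            row = np.zeros(len(basis))
            for col, (p, q) in enumerate(basis):
                if p == j:                 # B_{j q} contributes u_q
                    row[col] += u[q]
                if q == j and p != q:      # B_{p j} = B_{j p} contributes u_p
                    row[col] += u[p]
            rows.append(row)
    coeff = np.array(rows)
    return len(basis) - np.linalg.matrix_rank(coeff, tol)

# A single zero u = (1, 1, 0)/2 with complementary index set {0, 1} in S^3
# yields two independent equations, so the linear hull has dimension 6 - 2 = 4.
dim = minimal_face_hull_dim(3, [(np.array([0.5, 0.5, 0.0]), [0, 1])])
```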

Therefore the dimension of the minimal face of a matrix \(A \in S_{{\mathcal {E}}}\) is given by the column rank defect of a matrix depending polynomially on A. We obtain the following result.

Lemma 9

Let \({{\mathcal {E}}}\) be a collection of pairs of index sets, and let C be an irreducible component of the algebraic set \(Z_{{\mathcal {E}}}\) such that \(S = C \cap S_{{\mathcal {E}}} \not = \emptyset \). Then the dimension of the minimal face of a matrix \(A \in S\) is constant over S, with the possible exception of an algebraic subset of lower dimension where the dimension of the face is higher.

Proof

The column rank defect of a matrix is determined by which of its minors vanish. In our case these minors are polynomials in the elements of A, and hence each of them either vanishes identically on C or its zero set intersects C in an algebraic subset of lower dimension. On this subset the column rank defect can only increase. \(\square \)

In particular, each component of \(S_{{\mathcal {E}}}\) either contains no extremal matrix, or all matrices in the component are extremal with the possible exception of an algebraic subset of lower dimension. This makes the decomposition proposed in this paper especially well-suited for the study of the extremal matrices of \(\mathcal {COP}^{n}\). Non-extremal matrices of \(\mathcal {COP}^{n}\) which belong to a subset \(S_{{\mathcal {E}}}\) containing extremal matrices may be called quasi-extremal.

3.1 Examples

In this section we provide some explicit examples of subsets \(S_{{\mathcal {E}}}\).

Interior of \(\mathcal {COP}^{n}\): The largest subset for any order n is the subset \(S_{\emptyset }\), which equals the interior of the cone. In this case \(Z_{\emptyset } = {\mathcal {S}}^{n}\). This example shows that the boundary of \(S_{{\mathcal {E}}}\) may be as complicated as the copositive cone itself.

Generic points in \(\partial \mathcal {COP}^{n}\): An open dense subset of the boundary \(\partial \mathcal {COP}^{n}\) must be defined by subsets \(S_{{\mathcal {E}}}\) of dimension \(\frac{n(n+1)}{2}-1\). The corresponding algebraic set \(Z_{{\mathcal {E}}}\) is determined by a single polynomial equation. In this case we must have \({{\mathcal {E}}} = \{(I,I)\}\) for some non-empty index set I, and the corresponding equation amounts to \(\det \,A_I = 0\).

Zero subset: The unique 0-dimensional subset \(S_{{\mathcal {E}}}\) is the point \(\{\mathbf{0}\}\), with \({{\mathcal {E}}} = \{(\{i\},\{1,\dots ,n\})\}_{i = 1,\dots ,n}\).

Orbit of the Horn matrix: A non-trivial example of a subset \(S_{{\mathcal {E}}}\) is the set of matrices

$$\begin{aligned} D\begin{pmatrix} 1 & -1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 \\ 1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & 1 & -1 \\ -1 & 1 & 1 & -1 & 1 \end{pmatrix}D \in \mathcal {COP}^{5},\qquad D = {{\,\mathrm{diag}\,}}(d_1,\dots ,d_5) \succ 0. \end{aligned}$$

In this case the extended minimal zero support set is given by

$$\begin{aligned} {{\mathcal {E}}} =&\{(\{1,2\},\{5,1,2,3\}),(\{2,3\},\{1,2,3,4\}),(\{3,4\},\{2,3,4,5\}),\\&(\{4,5\},\{3,4,5,1\}),(\{5,1\},\{4,5,1,2\})\}. \end{aligned}$$
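For \(D = I\) these pairs can be verified directly; rescaling by a positive diagonal D changes neither the supports nor the complementary index sets. A minimal Python check (the helper name is ours):

```python
# Verify the extended minimal zero support set of the Horn matrix (D = I).
H = [[ 1, -1,  1,  1, -1],
     [-1,  1, -1,  1,  1],
     [ 1, -1,  1, -1,  1],
     [ 1,  1, -1,  1, -1],
     [-1,  1,  1, -1,  1]]

def extended_pair(support):
    # The zero with the given support has all positive components equal here.
    u = [1 if i + 1 in support else 0 for i in range(5)]
    Au = [sum(H[i][j] * u[j] for j in range(5)) for i in range(5)]
    assert sum(u[i] * Au[i] for i in range(5)) == 0   # u^T H u = 0
    return {i + 1 for i in range(5) if Au[i] == 0}    # comp u

for s in [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 1}]:
    print(sorted(s), sorted(extended_pair(s)))
```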

Matrices with circulant zero support set: The previous example can be generalized to arbitrary order n. The corresponding copositive matrices have been studied in [12].

Let \({{\mathcal {E}}} = \{(\overline{\{1,2,3\}},\overline{\{2\}}),\dots ,(\overline{\{n,1,2\}},\overline{\{1\}})\}\), \(n \ge 5\), where the pairs of index sets in \({{\mathcal {E}}}\) are obtained from each other by a circular shift of the indices \(1,\dots ,n\). Then \(S_{{\mathcal {E}}}\) is an algebraic manifold of dimension \(\frac{n(n-3)}{2}\) consisting of extremal exceptional copositive matrices [12, Theorem 6.3].

Let \({{\mathcal {E}}} = \{(\overline{\{1,2\}},\overline{\{1,2\}}),\dots ,(\overline{\{n,1\}},\overline{\{n,1\}})\}\), \(n \ge 5\), where the pairs of index sets in \({{\mathcal {E}}}\) are obtained from each other by a circular shift of the indices \(1,\dots ,n\). Then \(S_{{\mathcal {E}}}\) is an algebraic manifold of dimension \(\frac{n(n-1)}{2}\) consisting of exceptional copositive matrices, which can be extremal only for odd n [12, Theorem 6.1].

Stratification of \(\mathcal {COP}^{2}\): The 3-dimensional cone \(\mathcal {COP}^{2}\) decomposes into the 8 strata listed in Table 1. These strata are depicted in Fig. 1.

Table 1 Decomposition of the cone \(\mathcal {COP}^{2}\)
Fig. 1

Decomposition of \(\mathcal {COP}^{2}\) into subsets \(S_{{\mathcal {E}}}\). The numbers correspond to those in Table 1. The union of subsets 3, 5, 7 is open and dense in the boundary of \(\mathcal {COP}^{2}\). The affine compact section of \(\mathcal {COP}^{2}\) corresponds to the section in [9, Fig. 1]

Let us derive the extended support sets allowed by Lemma 5. For any element \((I,J) \in {{\mathcal {E}}}\), we must have \(I \not = \emptyset \) and \(I \subset J\). This leaves only the possibilities \((\{1\},\{1\})\), \((\{1\},\{1,2\})\), \((\{2\},\{2\})\), \((\{2\},\{1,2\})\), \((\{1,2\},\{1,2\})\). Of these 5 elements, the first can co-exist only with the third, and the second only with the fourth, without violating the conditions of Lemma 5; no other combination of several elements is possible. Thus the 8 extended support sets which correspond to non-empty subsets \(S_{{\mathcal {E}}}\) are exactly those which are allowed by the lemma.
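In the \(2 \times 2\) case the extended minimal zero support set can be computed exactly. The following sketch (function name and case analysis are ours) maps a copositive matrix \(\begin{pmatrix} a & b \\ b & c \end{pmatrix}\) to its extended support set and realizes all 8 strata:

```python
# Extended minimal zero support set of a 2x2 copositive matrix [[a, b], [b, c]].
# A is copositive iff a >= 0, c >= 0 and b >= -sqrt(ac).
def ext_set(a, b, c):
    assert a >= 0 and c >= 0 and (b >= 0 or b * b <= a * c), "not copositive"
    E = set()
    if a == 0:                       # e_1 is a minimal zero; (A e_1) = (0, b)
        E.add(((1,), (1, 2) if b == 0 else (1,)))
    if c == 0:                       # e_2 is a minimal zero; (A e_2) = (b, 0)
        E.add(((2,), (1, 2) if b == 0 else (2,)))
    if a > 0 and c > 0 and b < 0 and b * b == a * c:
        E.add(((1, 2), (1, 2)))      # minimal zero (sqrt(c), sqrt(a))
    return frozenset(E)

print(ext_set(1, 0, 1))    # interior: empty extended support set
print(ext_set(1, -1, 1))   # boundary ray: {({1,2},{1,2})}
print(ext_set(0, 0, 0))    # zero matrix: {({1},{1,2}), ({2},{1,2})}
```

Running `ext_set` over representatives of all cases, e.g. \((a,b,c)\) in \((1,0,1)\), \((0,1,1)\), \((0,0,1)\), \((1,1,0)\), \((1,0,0)\), \((1,-1,1)\), \((0,2,0)\), \((0,0,0)\), produces 8 distinct extended support sets, matching the count derived above.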

Further, for two distinct elements \((I,J)\), \((I',J')\) out of the 5 possible ones listed in the previous paragraph, we have \(I' \subset I\) and \(J \subset J'\) only if the pair of elements is made up of the first and second, or the third and fourth, or the fifth and second, or the fifth and fourth. It follows that the conditions of Lemma 6 are satisfied if the pair \(({{\mathcal {E}}},{{\mathcal {E}}}')\) is given by those pairs of two of the 8 sets listed in Table 1 which have numbers \((1,\star )\), \((\star ,2)\), (3, 4), (3, 8), (5, 4), (5, 6), (7, 6), (7, 8). Here \(\star \) corresponds to an arbitrary number. It can be observed that for exactly those pairs the subset \(S_{{{\mathcal {E}}}'}\) is contained in the boundary of \(S_{{\mathcal {E}}}\).

Increase of facial dimension within \(S_{{\mathcal {E}}}\): Let us give an example of a subset \(S_{{\mathcal {E}}}\) which is made up of facial interiors of different dimension, as mentioned in Lemma 9. Let \(n = 6\) and consider the extended support set

$$\begin{aligned} {{\mathcal {E}}} =&\{(\{1,5\},\{1,2,4,5\}),(\{2,6\},\{1,2,3,6\}),(\{1,2,3\},\{1,2,3,6\}),\\&(\{2,3,4\},\{2,3,4\}),(\{3,4,5\},\{3,4,5\}),(\{4,5,6\},\{4,5,6\})\}. \end{aligned}$$

The corresponding subset \(S_{{\mathcal {E}}} \subset \mathcal {COP}^{6}\) has dimension 11 and consists of matrices A of the form

$$\begin{aligned} D\begin{pmatrix} 1 & -\cos \phi _1 & \cos (\phi _1+\phi _2) & \cos \phi _4 & -1 & \cos \phi _1\\ -\cos \phi _1 & 1 & -\cos \phi _2 & \cos (\phi _2+\phi _3) & \cos \phi _1 & -1\\ \cos (\phi _1+\phi _2) & -\cos \phi _2 & 1 & -\cos \phi _3 & \cos (\phi _3+\phi _4) & \cos \phi _2\\ \cos \phi _4 & \cos (\phi _2+\phi _3) & -\cos \phi _3 & 1 & -\cos \phi _4 & \cos (\phi _4+\phi _5)\\ -1 & \cos \phi _1 & \cos (\phi _3+\phi _4) & -\cos \phi _4 & 1 & -\cos \phi _5\\ \cos \phi _1 & -1 & \cos \phi _2 & \cos (\phi _4+\phi _5) & -\cos \phi _5 & 1 \end{pmatrix}D, \end{aligned}$$

where \(\phi _i>0\), \(i = 1,\dots ,5\), \(\phi _1 < \phi _5\), \(\phi _2+\phi _3+\phi _4+\phi _5 < \pi \), and D is a positive definite diagonal matrix. If \(\phi _1+\phi _5\not =\pi \), then the dimension of the minimal face of A equals 1. For \(\phi _1+\phi _5=\pi \), however, the dimension of the minimal face increases to 2, and the corresponding matrices are quasi-extremal.
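The vanishing of the quadratic form on the listed minimal zero supports can be checked numerically for sample angles with \(D = I\). In the sketch below the explicit sine expressions for the zero components are our reconstruction (following the pattern of the circulant case), not given in the text:

```python
from math import cos, sin

# Sample angles satisfying phi_1 < phi_5 and phi_2 + ... + phi_5 < pi.
p1, p2, p3, p4, p5 = 0.4, 0.5, 0.6, 0.7, 0.8
A = [[1,          -cos(p1),    cos(p1+p2), cos(p4),    -1,          cos(p1)],
     [-cos(p1),    1,         -cos(p2),    cos(p2+p3),  cos(p1),   -1],
     [cos(p1+p2), -cos(p2),    1,         -cos(p3),     cos(p3+p4), cos(p2)],
     [cos(p4),     cos(p2+p3),-cos(p3),    1,          -cos(p4),    cos(p4+p5)],
     [-1,          cos(p1),    cos(p3+p4),-cos(p4),     1,         -cos(p5)],
     [cos(p1),    -1,          cos(p2),    cos(p4+p5), -cos(p5),    1]]

def q(u):  # quadratic form u^T A u
    return sum(u[i] * A[i][j] * u[j] for i in range(6) for j in range(6))

# One candidate zero per listed minimal zero support (zero off the support).
zeros = [
    [1, 0, 0, 0, 1, 0],                         # support {1,5}
    [0, 1, 0, 0, 0, 1],                         # support {2,6}
    [sin(p2), sin(p1+p2), sin(p1), 0, 0, 0],    # support {1,2,3}
    [0, sin(p3), sin(p2+p3), sin(p2), 0, 0],    # support {2,3,4}
    [0, 0, sin(p4), sin(p3+p4), sin(p3), 0],    # support {3,4,5}
    [0, 0, 0, sin(p5), sin(p4+p5), sin(p4)],    # support {4,5,6}
]
print([abs(q(u)) < 1e-12 for u in zeros])   # all True up to rounding
```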